Interactive WebGL Hover Effects

I love WebGL, and in this article I will explain one of the cool effects you can make if you master shaders. The effect I want to recreate is originally from Jesper Landberg’s website. He’s a really cool dude, so make sure to check out his stuff.

So let’s get to business! Let’s start with this simple HTML:

<div class="item">
    <img src="img.jpg" class="js-image" alt="">
    <h2>Some title</h2>
    <p>Lorem ipsum.</p>
</div>
<script src="app.js"></script>

Couldn’t be any easier! Let’s style it a bit to look prettier:

All the animations will happen in a Canvas element. So now we need to add a bit of JavaScript. I’m using Parcel here, as it’s quite simple to get started with. I’ll use Three.js for the WebGL part.

So let’s add some JavaScript and start with a basic Three.js setup from the official documentation:

import * as THREE from "three";

var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth/window.innerHeight, 0.1, 1000 );

var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );


camera.position.z = 5;

var animate = function () {
	requestAnimationFrame( animate );

	cube.rotation.x += 0.01;
	cube.rotation.y += 0.01;

	renderer.render( scene, camera );
};

animate();

Let’s style the Canvas element:

body { margin: 0; }

canvas {
	display: block;
	position: fixed;
	z-index: -1; /* put it in the background */
	left: 0; /* position it to fill the whole screen */
	top: 0; /* position it to fill the whole screen */
}

Once you have all this in place, you can just run it with `parcel index.html`. Now, you won’t see much; it’s an empty 3D scene so far. Let’s leave the HTML for a moment and concentrate on the 3D scene for now.

Let’s create a simple PlaneBufferGeometry object with an image on it. Just like this:

let TEXTURE = new THREE.TextureLoader().load('supaAmazingImage.jpg');
let mesh = new THREE.Mesh(
	new THREE.PlaneBufferGeometry(),
	new THREE.MeshBasicMaterial({map: TEXTURE})
);
scene.add(mesh);

And now we’ll see the following:

Obviously we are not there yet, we need that color trail following our mouse. And of course, we need shaders for that. If you are interested in shaders, you’ve probably come across some tutorials on how to displace images, like displacing on hover or liquid distortion effects.

But we have a problem: we can only use shaders on (and inside) the image from the example above. The effect we’re after, however, is not constrained to the image borders; it’s fluid, covering a bigger area, like the whole screen.

Postprocessing to the rescue

It turns out that the output of the Three.js renderer is just another image. We can make use of that and apply the shader displacement on that output!

Here is the missing part of the code:

// set up post processing
let composer = new EffectComposer(renderer);
let renderPass = new RenderPass(scene, camera);
// rendering our scene with an image
composer.addPass(renderPass);

// our custom shader pass for the whole screen, to displace previous render
let customPass = new ShaderPass({vertexShader,fragmentShader});
// making sure we are rendering it.
customPass.renderToScreen = true;
composer.addPass(customPass);

// actually render scene with our shader pass
composer.render()
// instead of previous
// renderer.render(scene, camera);

There are a bunch of things happening here, but it’s pretty straightforward: you apply your shader to the whole screen.
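The object passed to ShaderPass also needs to declare the uniforms the fragment shader reads (tDiffuse, mouse and mouseVelocity all show up in the shader below). Here is a minimal sketch of what that shader object could look like; the structure and defaults are my assumption, not the demo’s code:

import { ShaderPass } from "three/examples/jsm/postprocessing/ShaderPass.js";

// sketch of the shader object for the custom pass (defaults are assumptions)
let customShader = {
	uniforms: {
		tDiffuse: { value: null },                      // EffectComposer injects the previous render here
		mouse: { value: new THREE.Vector2() },          // mouse position in UV space (0..1)
		mouseVelocity: { value: new THREE.Vector2() }   // how fast the mouse is moving
	},
	vertexShader,
	fragmentShader
};
let customPass = new ShaderPass(customShader);

EffectComposer and RenderPass come from three/examples/jsm/postprocessing in the same way.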

So let’s do that final shader with the effect:

// get small circle around mouse, with distances to it
float c = circle(uv, mouse, 0.0, 0.2);
// get texture 3 times, each time with a different offset, depending on mouse speed:
float r = texture2D(tDiffuse, uv.xy += (mouseVelocity * .5)).x;
float g = texture2D(tDiffuse, uv.xy += (mouseVelocity * .525)).y;
float b = texture2D(tDiffuse, uv.xy += (mouseVelocity * .55)).z;
// combine it all to final output
color = vec4(r, g, b, 1.);
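The circle() helper itself isn’t listed in the article. Judging from how it is called above, a minimal sketch of such a function could look like this (the parameter meanings are my guess; the demo’s exact falloff may differ):

// A guess at circle(): returns 1.0 at "center" and fades to 0.0 over "radius + border".
float circle(vec2 uv, vec2 center, float radius, float border) {
	float dist = distance(uv, center);
	return 1.0 - smoothstep(radius, radius + border, dist);
}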

You can see the result of this in the first demo.

Applying the effect to several images

A screen has its size, and so do images in 3D. So what we need to do now is to calculate some kind of relation of those two.

Just like I did in my previous article, we can make a plane with a width of 1, and fit it exactly to the screen width. So practically, we have WidthOfPlane=ScreenSize.

For our Three.js scene, this means that if we want an image with a width of 100px on the screen, we will make a Three.js object with a width of 100*(WidthOfPlane/ScreenSize). That’s it! With this kind of math we can also set margins and positions easily.
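In code, that conversion can be wrapped in a tiny helper like the following sketch (planeWorldWidth is whatever width fills the screen in your scene; the names are mine, not from the demo):

// sketch: convert a size in CSS pixels to Three.js world units
function pxToWorld(px, planeWorldWidth) {
	return px * (planeWorldWidth / window.innerWidth);
}
// e.g. a 100px wide image becomes a plane with width pxToWorld(100, planeWorldWidth)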

When the page loads, I will loop through all the images, get their dimensions, and add them to my 3D world:

let images = [...document.querySelectorAll('.js-image')];
images.forEach(image=>{
	// and we have the width, height and left, top position of the image now!
	let dimensions = image.getBoundingClientRect();
	// hide the original image
	image.style.visibility = 'hidden';
	// add a 3D object to the scene, matching its HTML sibling's dimensions
	createMesh(dimensions);
})
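The createMesh function itself isn’t shown in the article. A rough sketch could look like the following; in this sketch I also hand it the image element so the texture can be loaded from image.src, and planeWorldWidth is the same reference width as before (both are my assumptions, not the demo’s code):

// rough sketch of createMesh, not the demo's actual implementation
function createMesh(dimensions, image) {
	let { width, height, left, top } = dimensions;
	let factor = planeWorldWidth / window.innerWidth;   // px -> world units
	let mesh = new THREE.Mesh(
		new THREE.PlaneBufferGeometry(width * factor, height * factor),
		new THREE.MeshBasicMaterial({ map: new THREE.TextureLoader().load(image.src) })
	);
	// the scene origin sits at the screen center, so shift left/top accordingly
	mesh.position.x = (left + width / 2 - window.innerWidth / 2) * factor;
	mesh.position.y = (window.innerHeight / 2 - top - height / 2) * factor;
	scene.add(mesh);
}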

Now it’s quite straightforward to make this HTML-3D hybrid.

Another thing that I added here is mouseVelocity. I used it to change the radius of the effect. The faster the mouse moves, the bigger the radius.
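The article doesn’t show how mouseVelocity is measured. One simple way, sketched here, is to compare the current mouse position with the previous one on every mousemove (the variable names are mine):

// sketch: measure how far the mouse moved between two mousemove events
let mouse = new THREE.Vector2();
let prevMouse = new THREE.Vector2();
let mouseVelocity = new THREE.Vector2();

window.addEventListener('mousemove', (e) => {
	mouse.set(e.clientX / window.innerWidth, 1 - e.clientY / window.innerHeight); // UV-style coords
	mouseVelocity.subVectors(mouse, prevMouse);
	prevMouse.copy(mouse);
});

In the render loop you would then copy mouse and mouseVelocity into the shader pass uniforms, and let the velocity ease back towards zero every frame so the trail fades out.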

To make it scrollable, we would just need to move the whole scene by the same amount that the page was scrolled, using that same formula I mentioned before: NumberOfPixels*(WidthOfPlane/ScreenSize).

Sometimes it’s even easier to make WidthOfPlane equal to ScreenSize. That way, you end up with exactly the same numbers in both worlds!

Exploring different effects

With different shaders you can come up with any kind of effect with this approach. So I decided to play a little bit with the parameters.

Instead of separating the image into three color layers, we could simply displace it depending on the distance to the mouse:

vec2 newUV = mix(uv, mouse, circle); 
color = texture2D(tDiffuse,newUV);

And for the last effect I used some randomness, to get a pixelated effect around the mouse cursor.

In this last demo you can switch between effects to see some modifications you can make. With the “zoom” effect, I just use a displacement, but in the last one, I also randomize the pixels, which looks kinda cool to me!
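The pixelated variant isn’t listed either, but the general idea can be sketched like this, assuming a typical rand() hash and the circle value c from before (not the demo’s exact code):

// sketch of a pixelation effect around the cursor
float rand(vec2 co) {
	return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

vec2 blocks = floor(uv * 20.0) / 20.0;                                // snap UVs to a coarse grid
vec2 jitter = (vec2(rand(blocks), rand(blocks + 1.0)) - 0.5) * 0.05;  // small random offset per block
vec2 newUV = mix(uv, blocks + jitter, c);                             // only apply it inside the circle
color = texture2D(tDiffuse, newUV);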

I’d be happy to see your ideas for this animation. What kind of effect would you do with this technique?

Interactive WebGL Hover Effects was written by Yuriy Artyukh and published on Codrops.

How to Unroll Images with Three.js

Do you like to roll up things? Or maybe you prefer rolling them out?

I spent my childhood making crepes. I loved those rolls.

I guess the time has come to unroll all kinds of things, including images. And to unroll as many rolls as possible, I decided to automate this process with a bit of JavaScript and WebGL.

[Image: a rolled crepe]

The setup

I will be using Three.js for this animation, and set up a basic scene with some planes.

Just as in this tutorial by Paul Henschel, we will replace all the images with those PlaneGeometry objects. So, we will simply work with HTML images:

<img src="owl.jpg" class="js-image" />

Once the page loads and a “loaded” class is set to the body, we will hide those images:

.loaded .js-image {
	opacity: 0;
}

Then we’ll get the dimensions of each image and position our 3D planes exactly where the DOM image elements were.

MY_SCENE.add(
	new THREE.Mesh(
		new THREE.PlaneGeometry(640, 480), // the size of the image
		new THREE.MeshBasicMaterial({
			map: new THREE.TextureLoader().load('owl.jpg') // texture I took from the DOM
		})
	)
);

Because rolling things with CSS or SVG is next to impossible, I needed all the power of WebGL, and I moved parts of my page into that world. But I like the idea of progressive enhancement, so we still have the usual images there in the markup, and if for some reason the JavaScript or WebGL isn’t working, we would still see them.

After that, it’s quite easy to sync the scroll of the page with the 3D scene position. For that I used the custom smooth scroll described in this tutorial. You should definitely check it out, as it works regardless of the platform and the input you use to scroll. As you know, that’s usually the biggest pain with custom scrolling libraries.
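The actual syncing is then just a matter of taking the scroll position each frame, converting it from pixels to world units (same formula as in the previous article) and moving the scene. A sketch, with my own variable names:

// sketch: move the whole scene by the scrolled amount, converted to world units
let factor = planeWorldWidth / window.innerWidth;  // px -> world units
scene.position.y = smoothScrollY * factor;         // smoothScrollY comes from the custom smooth scroll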

Let’s rock and roll

So, we have what we want in 3D! Now, with the power of shaders, we can do anything!

Well, anything we are capable of, at least =).

Red parts are WebGL, everything else is HTML

I started from the end. I imagine every animation as some function which takes a number between 0 and 1 (I call it ‘progress’) and returns a visual. So the result of my imaginary RollFunction(0) should be something rolled. And RollFunction(1), should be just the default state of the plane. That’s how I got the last line of my animation:

vec3 finalPosition = mix(RolledPosition, DefaultPosition, progress);

I had DefaultPosition from the start; it’s usually just called ‘position’. So all I needed to do was create RolledPosition and animate my progress!

I figured out a couple of ways to do that. I could have made another object (like an .obj file) in some editor, or even exported the whole animation from Blender or another program.

But I decided to transform DefaultPosition into RolledPosition with a couple of math functions inside my vertex shader. So, imagine we have a flat plane facing the camera; to roll something like that, you could do the following:

RolledPosition.x = RADIUS*cos(position.x);
RolledPosition.y = position.y; // stays the same
RolledPosition.z = RADIUS*sin(position.x);

If you have ever tried to draw a circle yourself, you can easily guess where this is coming from. If not, here is a famous GIF that sheds some light on it:

See how those two functions are essentially parts of a circle.

Of course, this would just make a (not so) perfect tube out of our plane, but by adding a couple of parameters, we can turn it into a real roll:

RADIUS *= 1.0 - position.x; // so the roll gets tighter as we move along the plane
RolledPosition.z = RADIUS*sin(position.x*TWO_PI);
RolledPosition.x = RADIUS*cos(position.x*TWO_PI);
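Putting the pieces together, the whole vertex shader could look roughly like the following sketch; the base radius of 0.1 and the way progress blends back to the flat plane are simplified compared to the demo:

// sketch: roll the plane up, then blend back to the flat plane with "progress"
#define TWO_PI 6.28318530718
uniform float progress;
varying vec2 vUv;

void main() {
	vUv = uv;
	float radius = 0.1 * (1.0 - position.x);   // the roll gets tighter towards one edge
	vec3 rolled = vec3(
		radius * cos(position.x * TWO_PI),
		position.y,
		radius * sin(position.x * TWO_PI)
	);
	vec3 finalPosition = mix(rolled, position, progress);
	gl_Position = projectionMatrix * modelViewMatrix * vec4(finalPosition, 1.0);
}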

And you will get something like this:

This is done with the help of the D3 library, but the idea is the same.

This two-dimensional animation really helped me grasp the idea of rolling things, so I recommend that you dig into the code; it’s quite interesting!

After that step, it was just a matter of time and arithmetic, playing a bit with the progress, until I got this kind of animation for my plane:

There are a number of other steps, like making the angle parametric and adding some subtle shadows to make it a bit more beautiful, but that’s the most important part right here. Sine and cosine functions are often at the core of all the cool things you see on the web! =).

So let me know if you like these rolling effects, and how much time you have spent scrolling back and forth just to see ‘em roll! Have a nice day! =)

How to Unroll Images with Three.js was written by Yuriy Artyukh and published on Codrops.

Creative WebGL Image Transitions

Everybody loves images. They are bright and colorful and we can do fun things with them. Even text sometimes strives to be an image:

     |\_/|                  
     | @ @   Woof! 
     |   <>              _  
     |  _/\------____ ((| |))
     |               `--' |   
 ____|_       ___|   |___.' 
/_/_____/____/_______|

Once you want to show more than one image, you can’t help making a transition between them. Or is it just me?

Jokes aside, image transitions are all over the web. They can be powered by CSS, SVG or WebGL. But of course, the most efficient way to work with graphics in the browser is to use the graphics processor, or GPU. And the best way to do this is with WebGL, specifically with shaders written in GLSL.

Today we want to show you some interesting image transition experiments that reveal the boundless possibilities of WebGL. These effects were inspired by the countless incredible design examples and effects seen on websites like The Avener and Oversize Studio.

Setup

I will be using the Three.js framework for my transitions. It doesn’t really matter which library you use; it could just as well have been the amazing Pixi.js library, or plain (though not so straightforward) native WebGL. I used native WebGL in my previous experiment, so this time I’m going with Three.js. It also seems the most beginner-friendly to me.

Three.js uses concepts like Camera, Scene and Objects. We will create a simple Plane object, add it to the Scene and put it in front of the Camera, so that it is the only thing you can see. There is a built-in template for that kind of object, PlaneBufferGeometry.

To cover the whole screen with a plane you need a little bit of geometry. The Camera has a fov (field of view), and the plane has a size. So with some calculations you can get it to fill your whole screen:

camera.fov = 2*(180/Math.PI)*Math.atan(PlaneSize/(2*CameraDistance));

It looks complicated, but it’s just calculating the angle (the fov) from the distances involved.
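As a concrete sketch, with a 1×1 plane at the origin and the camera pulled back one unit, the setup could look like this (the numbers are picked for illustration):

// sketch: make a 1x1 plane exactly fill the viewport vertically
const planeSize = 1;
const cameraDistance = 1;
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = cameraDistance;
camera.fov = 2 * (180 / Math.PI) * Math.atan(planeSize / (2 * cameraDistance));
camera.updateProjectionMatrix();   // Three.js needs this after changing fov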

That is actually the end of the 3D part, everything else will be happening in 2D.

GLSL

In case you are not yet familiar with this language, I highly advise you to check out the wonderful Book Of Shaders.

So, we have a plane and we have a fragment shader attached to it that calculates each pixel’s color. How do we make a transition? The simplest one done with a shader looks like this:

void main() {
  vec4 image1 = texture2D(texture1,uv);
  vec4 image2 = texture2D(texture2,uv);
  gl_FragColor = mix(image1, image2, progress);
}

Where progress is a number between 0 and 1, indicating the progress of the animation.
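On the JavaScript side, progress usually lives in a uniform on the plane’s ShaderMaterial, and you animate it over time. A minimal sketch, assuming the material is called material:

// sketch: tween the "progress" uniform from 0 to 1 over one second
let start = performance.now();
function tick(now) {
	let t = Math.min((now - start) / 1000, 1);
	material.uniforms.progress.value = t;
	renderer.render(scene, camera);
	if (t < 1) requestAnimationFrame(tick);
}
requestAnimationFrame(tick);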

With that kind of code you will get a simple fade transition between images. But that’s not that cool, right?

Cool transitions

Usually all transitions are based on changing the so-called UVs, or the way the texture is wrapped on the plane. So, for instance, multiplying the UVs scales the image, and adding a number just shifts the image on the plane.

UVs are nothing magical. Think of them as a coordinate system for the pixels on a plane: (0,0) is the bottom-left corner and (1,1) is the top-right.

Let’s start with some basic code:

gl_FragColor = texture2D(texture,uv);

This just shows an image on the screen. Now let’s adjust that code a bit:

gl_FragColor = texture2D(texture,fract(uv + uv));

By taking the fractional part, we make sure that all the values stay between 0 and 1. And if UV was from 0 to 1, doubling it means it will be from 0 to 2, so we should see the fractional part changing from 0 to 1, and from 0 to 1 again!

And that’s what you get: a repeated image. Now let’s try something different: subtracting UV and using the progress for the animation:

gl_FragColor = texture2D(texture, uv - uv * vec2(1.,0) * progress * 0.5);

First, we make sure that we are only changing one axis of UV, by multiplying it with vec2(1.,0). So, when the progress is 0, it should be the default image. Let’s see:

Now we can stretch the image! Let’s combine those two effects into one.

gl_FragColor = texture2D(uTextureOne, uv - fract(uv * vec2(5.,0.)) * progress * 0.1 );

So basically, we do the stretching and repeat it 5 times. We could use any other number as well.

Much better! Next, if we add another image, we get the effect that you can see in demo 7.
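Combined with the mix() from the first example, the whole fragment shader for that transition could be sketched like this (uTextureTwo and the opposite-direction stretch are my additions, not necessarily what the demo does):

// sketch: stretch-and-repeat both textures in opposite directions, then crossfade
void main() {
	vec2 uvOne = uv - fract(uv * vec2(5.0, 0.0)) * progress * 0.1;
	vec2 uvTwo = uv + fract(uv * vec2(5.0, 0.0)) * (1.0 - progress) * 0.1;
	vec4 imageOne = texture2D(uTextureOne, uvOne);
	vec4 imageTwo = texture2D(uTextureTwo, uvTwo);
	gl_FragColor = mix(imageOne, imageTwo, progress);
}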

Cool isn’t it? Just two simple arithmetic operations, and you get an interesting transition effect.

That’s just one way of changing UVs. Check out all the other demos and try to guess the math behind them! Try to come up with your own unique animation and share it with me!


Creative WebGL Image Transitions was written by Yuriy Artyukh and published on Codrops.

Exploding 3D Objects with Three.js

Today we’d like to share an exploding object experiment with you. The effect is inspired by Kubrick Life Website: 3D Motion. No icosahedrons were hurt during these experiments!

The following short walk-through assumes that you are familiar with some WebGL and shaders.


How it’s done

For this effect we need to break apart the respective object and calculate all fragments.

The easiest way to produce natural-looking fragments is to look at how nature does it:

[Image: a giraffe’s patterned coat]

Giraffes have been using those fashionable fragments for millions of years.

This kind of pattern is called a Voronoi diagram (after Georgy Feodosevich Voronoy, mathematician).

[Image: Voronoi diagram by Mike Bostock, made with Voronator]

We are lucky to have algorithms that can create those diagrams programmatically, not only on surfaces, as giraffes do, but also as spatial partitions that break down volumes. We can even partition four-dimensional space, but let’s stop at three dimensions for today’s example. I will leave the four-dimensional explosions as an exercise for the reader 😉

We prepared some models (you could use Blender/Cinema4D for that, or your own Voronoi algorithm):

[Image: a heart model shattered into Voronoi cells]
You can see that this heart is no longer whole. This heart is broken. With Voronoi.

That already looks beautiful by itself, doesn’t it? ❤

On the other hand, that’s a lot of data to load, so I managed to compress it with the glTF file format using Draco 3D data compression.
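Loading such a Draco-compressed glTF in Three.js looks roughly like this (the file name and decoder path are placeholders):

// sketch: load a Draco-compressed glTF model
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";
import { DRACOLoader } from "three/examples/jsm/loaders/DRACOLoader.js";

const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath("/draco/");   // where the Draco decoder files are served from

const loader = new GLTFLoader();
loader.setDRACOLoader(dracoLoader);
loader.load("fragments.glb", (gltf) => {
	scene.add(gltf.scene);               // one mesh per Voronoi fragment
});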

Shader

I decided to use three.js for the rendering, as it has a lot of useful built-in stuff. It’s great if you want reflective materials, and it has some utilities for working with fragments and lighting.

With that many fragments, it’s not very wise to run all the calculations on the CPU, so it’s better to animate them in shaders, i.e. on the GPU. Here’s a really simple vertex shader to tear all those fragments apart:

	vec3 newPosition = rotate(position);   // spin each fragment
	newPosition += direction * progress;   // push it outward along its explosion direction

…where direction is the explosion direction and progress is the animation progress.

We can then use some three.js materials and CubeTexture to color all the surfaces, and that’s basically it!
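A minimal sketch of such a material setup could look like this (the image paths are placeholders, and how the custom vertex shader above gets injected into it is left out here):

// sketch: a reflective material driven by a CubeTexture environment map
const envMap = new THREE.CubeTextureLoader()
	.setPath("/textures/env/")
	.load(["px.jpg", "nx.jpg", "py.jpg", "ny.jpg", "pz.jpg", "nz.jpg"]);

const material = new THREE.MeshStandardMaterial({
	envMap: envMap,
	metalness: 1,
	roughness: 0.1
});
// assign this material to every Voronoi fragment mesh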

During development, I accidentally typed the wrong variable in one of my shaders, and got a pretty interesting result:

[Image: the accidental, glitchy shader output]

So don’t be afraid to make mistakes; you never know what you’ll end up with when you try something new!

I hope you like the demos and the short insight into how it works, and that this story will inspire you to do more cool things, too! Let me know what you think, and what ideas you have!


Exploding 3D Objects with Three.js was written by Yuriy Artyukh and published on Codrops.

How to Create a Fake 3D Image Effect with WebGL

WebGL is becoming quite popular these days as it allows us to create unique interactive graphics for the web. You might have seen the recent text distortion effects using Blotter.js or the animated WebGL lines created with the THREE.MeshLine library. Today you’ll see how to quickly create an interactive “fake” 3D effect for images with plain WebGL.

If you use Facebook, you might have seen the update of 3D photos for the news feed and VR. With special phone cameras that capture the distance between the subject in the foreground and the background, 3D photos bring scenes to life with depth and movement. We can recreate this kind of effect with any photo, some image editing and a little bit of coding.

Usually, these kinds of effects rely on either Three.js or Pixi.js, powerful libraries that come with many useful features and simplifications. Today we won’t use any libraries but will go with the native WebGL API.

So let’s dig in.

Getting started

So, for this effect we’ll go with the native WebGL API. A great place to get started with WebGL is webglfundamentals.org. WebGL is often criticized for its verbosity, and there is a reason for that. The foundation of all fullscreen shader effects (even if they are 2D) is some sort of plane or mesh, a so-called quad, which is stretched over the whole screen. So, speaking of verbosity: while in Three.js we would simply write THREE.PlaneGeometry(1,1) to create a 1×1 plane, here is what we need in plain WebGL:

let vertices = new Float32Array([
	-1, -1,
	 1, -1,
	-1,  1,
	 1,  1,
]);
let buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

Now that we have our plane, we can apply vertex and fragment shaders to it.
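Compiling and wiring those shaders is equally verbose. Roughly, and with all error handling left out, it could look like the following sketch (vertexSource and fragmentSource are assumed to hold the GLSL listed further below):

// sketch: compile both shaders, link them, point the "position" attribute
// at the buffer created above, and draw the quad as a triangle strip
function createShader(gl, type, source) {
	let shader = gl.createShader(type);
	gl.shaderSource(shader, source);
	gl.compileShader(shader);
	return shader;
}

let program = gl.createProgram();
gl.attachShader(program, createShader(gl, gl.VERTEX_SHADER, vertexSource));
gl.attachShader(program, createShader(gl, gl.FRAGMENT_SHADER, fragmentSource));
gl.linkProgram(program);
gl.useProgram(program);

let positionLocation = gl.getAttribLocation(program, "position");
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);

gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);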

Preparing the image

For our effect to work, we need to create a depth map of the image. The main principle behind a depth map is to separate parts of the image depending on their Z position, i.e. how far or close they are, and thus isolate the foreground from the background.

For that, we can open the image in Photoshop and paint gray areas over the original photo in the following way:

[Image: the mountain photo with gray depth areas painted over it]

This image shows some mountains where you can see that the closer the objects are to the camera, the brighter the area is painted in the depth map. Let’s see in the next section why this kind of shading makes sense.

Shaders

The rendering logic is mostly happening in shaders. As described in the MDN web docs:

A shader is a program, written using the OpenGL ES Shading Language (GLSL), that takes information about the vertices that make up a shape and generates the data needed to render the pixels onto the screen: namely, the positions of the pixels and their colors. There are two shader functions run when drawing WebGL content: the vertex shader and the fragment shader.

A great resource to learn more about shaders is The Book Of Shaders.

The vertex shader will not do much; it just passes the vertices through:

attribute vec2 position;

void main() {
	gl_Position = vec4(position, 0, 1);
}

The most interesting part will happen in a fragment shader. Let’s load the two images there:

void main(){
    vec4 depth = texture2D(depthImage, uv);
    gl_FragColor = texture2D(originalImage, uv); // just showing original photo
}

Remember, the depth map image is black and white. For shaders, color is just a number: 1 is white and 0 is pitch black. The uv variable is a two-dimensional coordinate that tells us which pixel of the texture we are sampling. With these two things we can use the depth information to shift the pixels of the original photo a little bit.

Let’s start with a mouse movement:

vec4 depth = texture2D(depthImage, uv);
gl_FragColor = texture2D(originalImage, uv + mouse);

Here is how it looks:

[Image: the photo shifted by the mouse position]

Now let’s add the depth:

vec4 depth = texture2D(depthImage, uv);
gl_FragColor = texture2D(originalImage, uv + mouse*depth.r);

And here we are:

[Image: the photo with the depth-based parallax applied]

Because the texture is black and white, we can just take the red channel (depth.r) and multiply it by the mouse position value on the screen. That means the brighter a pixel is, the more it will move with the mouse. Dark pixels, on the other hand, will just stay in place. It’s so simple, yet it results in such a nice 3D illusion of an image.
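On the JavaScript side, the mouse value is just a uniform that gets updated on mousemove with normalized, scaled-down coordinates. A small sketch (the 0.02 factor is my choice, and the program is assumed to be the one created earlier and currently in use):

// sketch: feed the "mouse" uniform with small normalized offsets
let mouseLocation = gl.getUniformLocation(program, "mouse");

document.addEventListener("mousemove", (e) => {
	let x = (e.clientX / window.innerWidth) * 2 - 1;    // -1 .. 1 across the screen
	let y = (e.clientY / window.innerHeight) * 2 - 1;
	gl.uniform2f(mouseLocation, x * 0.02, -y * 0.02);   // small factor keeps the shift subtle
});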

Of course, shaders are capable of doing all kinds of other crazy things, but I hope you like this small experiment of “faking” a 3D movement. Let me know what you think about it, and I hope to see your creations with this!


How to Create a Fake 3D Image Effect with WebGL was written by Yuriy Artyukh and published on Codrops.