Adding a Persistence Effect to Three.js Scenes

If you have written any WebGL applications in the past, whether with the vanilla API or a helper library such as Three.js, you know the drill: you set up the things you want to render, perhaps add different types of cameras, animations and fancy lighting, and voilà, the results are rendered to the default WebGL framebuffer, which is the device screen.

Framebuffers are a key feature in WebGL when it comes to creating advanced graphical effects such as depth-of-field, bloom, film grain or various types of anti-aliasing. They allow us to “post-process” our scenes, applying different effects on them once rendered.

This article assumes some intermediate knowledge of WebGL with Three.js. The core ideas behind framebuffers have already been covered in-depth in this article here on Codrops. Please make sure to read it first, as the persistence effect we will be achieving directly builds on top of these ideas.

Persistence effect in a nutshell

I call it persistence, though I am not sure that is the proper name for this effect; I simply don't know what else to call it. What is it useful for?

We can use it subtly to blend each previous and current animation frame together or perhaps a bit less subtly to hide bad framerate. Looking at video games like Grand Theft Auto, we can simulate our characters getting drunk. Another thing that comes to mind is rendering the view from the cockpit of a spacecraft when traveling at supersonic speed. Or, since the effect is just so good looking in my opinion, use it for all kinds of audio visualisers, cool website effects and so on.

To achieve it, we first need to create two WebGL framebuffers. Since we will be using threejs for this demo, they will be instances of THREE.WebGLRenderTarget. We will call them Framebuffer 1 and Framebuffer 2 from now on, and they will have the same dimensions as the canvas we are rendering to.

To draw one frame of our animation with persistence, we will need to:

  1. Render the contents of Framebuffer 1 to Framebuffer 2 with a shader that fades to a certain color. For the purposes of our demo, we will use a pure black color with full opacity
  2. Render our threejs scene that holds the actual meshes we want to show on the screen to Framebuffer 2 as well
  3. Render the contents of Framebuffer 2 to the Default WebGL framebuffer (device screen)
  4. Swap Framebuffer 1 with Framebuffer 2

Afterwards, for each new animation frame, we will need to repeat the above steps. Let’s illustrate each step:

Here is a visualisation of our framebuffers. WebGL gives us the Default framebuffer, represented by the device screen, automatically. It’s up to us as developers to manually create Framebuffer 1 and Framebuffer 2. No animation or rendering has happened yet, so the pixel contents of all 3 framebuffers are empty.

Our 2 manually created framebuffers on the left, the default device screen framebuffer on the right. Nothing has been drawn yet.

Step 1: we need to render the contents of Framebuffer 1 to Framebuffer 2 with a shader that fades to a certain color. As said, we will use black, but for illustration purposes I am fading out to a transparent white color with opacity 0.2. As Framebuffer 1 is empty, this will result in an empty Framebuffer 2:

Even though we have rendered Framebuffer 1 to Framebuffer 2, at this point we still have framebuffers with empty pixel contents.

Step 2: we need to render our threejs scene that holds our meshes / cameras / lighting / etc to Framebuffer 2. Please notice that both Step 1 and Step 2 render on top of each other to Framebuffer 2.

Our threejs scene rendered to Framebuffer 2.

Step 3: After we have successfully rendered Step 1 and Step 2 to Framebuffer 2, we need to render Framebuffer 2 itself to the Default framebuffer:

Final result rendered to the device screen.

Step 4: Now we need to swap Framebuffer 1 with Framebuffer 2. We then clear Framebuffer 1 and the Default framebuffer:

Framebuffer 1 and Framebuffer 2 swapped.

Now comes the interesting part, since Framebuffer 1 is no longer empty. Let’s go over each step once again:

Step 1: we need to render the contents of Framebuffer 1 to Framebuffer 2 with a shader that fades to a certain color. Let’s assume a transparent white color with 0.2 opacity.

We have rendered and faded out the pixel contents of Framebuffer 1 to Framebuffer 2 by a factor of 0.2.

Step 2: we need to render our threejs scene to Framebuffer 2. For illustration purposes, let’s assume we have an animation that slowly moves our 3D cube to the right, meaning that now it will be a few pixels to the right:

Once again, we render both Framebuffer 1 and our threejs scene to Framebuffer 2. Notice how the threejs scene is rendered on top of the faded out contents of Framebuffer 1.

Step 3: After we have successfully rendered Step 1 and Step 2 to Framebuffer 2, we need to render Framebuffer 2 itself to the Default framebuffer:

The pixel contents of Framebuffer 2 copied over to the device screen.

Step 4: Now we need to swap Framebuffer 1 with Framebuffer 2. We then clear Framebuffer 1 and the Default framebuffer:

Rinse and repeat. Back to Step 1.

I hope you can see a pattern emerging. If we repeat this process enough times, we will start accumulating each new frame to the previous faded one. Here is how it would look if we repeat enough times:

Postprocessing on our threejs scene

Here is the demo we will build in this article. The result of repeating the process above enough times is clearly visible:

Notice the accumulated trails. At every animation loop, we are repeating steps 1 to 4.

So with this theory out of the way, let’s create this effect with threejs!

Our skeleton app

Let’s write a simple threejs app that will animate a bunch of objects around the screen and use perspective camera to look at them. I will not explain the code for my example here, as it does not really matter what we render, as long as there is some animation present so things move and we can observe the persistence.

I encourage you to disregard my example and draw something else yourself. Even the most basic spinning cube that moves around the screen will be enough. That being said, here is my scene:

See the Pen 1. Skeleton app by Georgi Nikoloff (@gbnikolov) on CodePen.

The important thing to keep in mind here is that this demo renders to the default framebuffer, represented by the device screen, that WebGL automatically gives us. There are no extra framebuffers involved in this demo up to this point.
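If you would rather start from something minimal, a skeleton along these lines is enough (a single moving cube, not my actual demo code); all that matters is that something moves and that the scene is rendered with a perspective camera:

const renderer = new THREE.WebGLRenderer({ antialias: true })
renderer.setSize(innerWidth, innerHeight)
document.body.appendChild(renderer.domElement)

const scene = new THREE.Scene()
const perspectiveCamera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100)
perspectiveCamera.position.z = 5

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
)
scene.add(cube)

function drawFrame (timeElapsed) {
  // Spin the cube and move it from side to side so there is something
  // for the persistence effect to leave trails behind later on
  cube.rotation.y += 0.01
  cube.position.x = Math.sin(timeElapsed * 0.001) * 2
  // Render straight to the default framebuffer (the device screen)
  renderer.render(scene, perspectiveCamera)
  requestAnimationFrame(drawFrame)
}
requestAnimationFrame(drawFrame)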

Achieving the persistence

Let’s add the code needed to achieve actual persistence. We will start by introducing a THREE.OrthographicCamera.

Orthographic camera can be useful for rendering 2D scenes and UI elements, amongst other things.

threejs docs

Remember, framebuffers allow us to render to image buffers in the video card’s memory instead of the device screen. These image buffers are represented by the THREE.Texture class and are automatically created for us when we create our Framebuffer 1 and Framebuffer 2 by instantiating a new THREE.WebGLRenderTarget. In order to display these textures back to the device screen, we need to create two 2D fullscreen quads that span the width and height of our monitor. Since these quads will be 2D, THREE.OrthographicCamera is best suited to display them.

const leftScreenBorder = -innerWidth / 2
const rightScreenBorder = innerWidth / 2
const topScreenBorder = -innerHeight / 2
const bottomScreenBorder = innerHeight / 2
const near = -100
const far = 100
const orthoCamera = new THREE.OrthographicCamera(
  leftScreenBorder,
  rightScreenBorder,
  topScreenBorder,
  bottomScreenBorder,
  near,
  far
)
orthoCamera.position.z = -10
orthoCamera.lookAt(new THREE.Vector3(0, 0, 0))

As a next step, let’s create a fullscreen quad geometry using THREE.PlaneGeometry:

const fullscreenQuadGeometry = new THREE.PlaneGeometry(innerWidth, innerHeight)

Using our newly created 2D quad geometry, let’s create two fullscreen planes. I will call them fadePlane and resultPlane. They will use THREE.ShaderMaterial and THREE.MeshBasicMaterial respectively:

// To achieve the fading out to black, we will use THREE.ShaderMaterial
const fadeMaterial = new THREE.ShaderMaterial({
  // Pass the texture result of our rendering to Framebuffer 1 as uniform variable
  uniforms: {
    inputTexture: { value: null }
  },
  vertexShader: `
    // Declare a varying variable for texture coordinates
    varying vec2 vUv;

    void main () {
      // Set each vertex position according to the
      // orthographic camera position and projection
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  
      // Pass the plane texture coordinates as interpolated varying
      // variable to the fragment shader
      vUv = uv;
    }
  `,
  fragmentShader: `
    // Pass the texture from Framebuffer 1
    uniform sampler2D inputTexture;

    // Consume the interpolated texture coordinates
    varying vec2 vUv;

    void main () {
      // Get pixel color from texture
      vec4 texColor = texture2D(inputTexture, vUv);

      // Our fade-out color
      vec4 fadeColor = vec4(0.0, 0.0, 0.0, 1.0);

      // this step achieves the actual fading out
      // mix texColor into fadeColor by a factor of 0.05
      // you can change the value of the factor and see
      // the result will change accordingly
      gl_FragColor = mix(texColor, fadeColor, 0.05);
    }
  `
})

// Create our fadePlane
const fadePlane = new THREE.Mesh(
  fullscreenQuadGeometry,
  fadeMaterial
)

// create our resultPlane
// Please notice we don't use fancy shader materials for resultPlane
// We will use it simply to copy the contents of fadePlane to the device screen
// So we can just use the .map property of THREE.MeshBasicMaterial
const resultMaterial = new THREE.MeshBasicMaterial({ map: null })
const resultPlane = new THREE.Mesh(
  fullscreenQuadGeometry,
  resultMaterial
)

We will use fadePlane to perform step 1 and step 2 from the list above: rendering the previous frame, represented by Framebuffer 1, faded out towards a black color, and then rendering our original threejs scene on top. Both steps will render to Framebuffer 2 and update its corresponding THREE.Texture.

We will use the resulting texture of Framebuffer 2 as an input to resultPlane. This time, we will render to the device screen. We will essentially copy the contents of Framebuffer 2 to the Default Framebuffer (device screen), thus achieving step 3.

Up next, let’s actually create our Framebuffer 1 and Framebuffer 2. They are represented by THREE.WebGLRenderTarget:

// Create two extra framebuffers manually
// It is important we use let instead of const variables,
// as we will need to swap them as discussed in Step 4!
let framebuffer1 = new THREE.WebGLRenderTarget(innerWidth, innerHeight)
let framebuffer2 = new THREE.WebGLRenderTarget(innerWidth, innerHeight)

// Before we start using these framebuffers by rendering to them,
// let's explicitly clear their pixel contents to #111111
// If we don't do this, our persistence effect will end up wrong,
// due to how accumulation between step 1 and 3 works. 
// The first frame will never fade out when we mix Framebuffer 1 to
// Framebuffer 2 and will be always visible.
// This bug is better observed, rather than explained, so please
// make sure to comment out these lines and see the change for yourself.
renderer.setClearColor(0x111111)
renderer.setRenderTarget(framebuffer1)
renderer.clearColor()
renderer.setRenderTarget(framebuffer2)
renderer.clearColor()

As you might have guessed already, we will achieve step 4 as described above by swapping framebuffer1 and framebuffer2 at the end of each animation frame.

At this point we have everything ready and initialised: our THREE.OrthographicCamera, our two quads that will fade out and copy the contents of our framebuffers to the device screen and, of course, the framebuffers themselves. It should be noted that up until this point we did not change our animation loop code and logic; we just created these new things at the initialisation step of our program. Let's now put them into practice in our rendering loop.

Here is what my function executed on each animation frame looks like right now, taken directly from the CodePen example above:

function drawFrame (timeElapsed) {
  for (let i = 0; i < meshes.length; i++) {
     const mesh = meshes[i]
     // some animation logic that moves each mesh around the screen
     // with different radius, offset and speed
     // ...
  }
  // Render our entire scene to the device screen, represented by
  // the default WebGL framebuffer
  renderer.render(scene, perspectiveCamera)
}

If you have written any threejs code before, this drawFrame method should not be news to you. We apply some animation logic to our meshes and then render them to the device screen by calling renderer.render() on the whole scene with the appropriate camera.

Let’s incorporate our steps 1 to 4 from above and achieve our persistence:

function drawFrame (timeElapsed) {
  // The for loop remains unchanged from the previous snippet
  for (let i = 0; i < meshes.length; i++) {
     // ...
  }

  // By default, threejs clears the pixel color buffer when
  // calling renderer.render()
  // We want to disable it explicitly, since both step 1 and step 2 render
  // to Framebuffer 2 accumulatively
  renderer.autoClearColor = false

  // Set Framebuffer 2 as active WebGL framebuffer to render to
  renderer.setRenderTarget(framebuffer2)

  // Step 1
  // Render the image buffer associated with Framebuffer 1 to Framebuffer 2
  // fading it out to pure black by a factor of 0.05 in the fadeMaterial
  // fragment shader
  fadePlane.material.uniforms.inputTexture.value = framebuffer1.texture
  renderer.render(fadePlane, orthoCamera)

  // Step 2
  // Render our entire scene to Framebuffer 2, on top of the faded out 
  // texture of Framebuffer 1.
  renderer.render(scene, perspectiveCamera)

  // Set the Default Framebuffer (device screen) represented by null as active WebGL framebuffer to render to.
  renderer.setRenderTarget(null)
  
  // Step 3
  // Copy the pixel contents of Framebuffer 2 by passing them as a texture
  // to resultPlane and rendering it to the Default Framebuffer (device screen)
  resultPlane.material.map = framebuffer2.texture
  renderer.render(resultPlane, orthoCamera)

  // Step 4
  // Swap Framebuffer 1 and Framebuffer 2
  const swap = framebuffer1
  framebuffer1 = framebuffer2
  framebuffer2 = swap

  // End of the effect
  // When the next animation frame is executed, the meshes will be animated
  // and the whole process will repeat
}

And with these changes out of the way, here is our updated example using persistence:

See the Pen 2. Persistence by Georgi Nikoloff (@gbnikolov) on CodePen.

Applying texture transformations

Now that we have our effect properly working, we can get more creative and expand on top of it.

You might remember this snippet from the fragment shader code where we faded out the contents of Framebuffer 1 to Framebuffer 2:

void main () {
   // Get pixel color from texture
   vec4 texColor = texture2D(inputTexture, vUv);

   // Our fade-out color
   vec4 fadeColor = vec4(0.0, 0.0, 0.0, 1.0);

   // mix texColor into fadeColor by a factor of 0.05
   gl_FragColor = mix(texColor, fadeColor, 0.05);
}

When we sample from our inputTexture, we can scale our texture coordinates down slightly, by a factor of 0.0075, which effectively zooms into the previous frame a little bit on each pass:

vec4 texColor = texture2D(inputTexture, vUv * 0.9925);

With this transformation applied to our texture coordinates, here is our updated example:

See the Pen 3. Peristence with upscaled texture coords by Georgi Nikoloff (@gbnikolov) on CodePen.

Or how about increasing our fade factor from 0.05 to 0.2?

gl_FragColor = mix(texColor, fadeColor, 0.2);

This will intensify the contribution of fadeColor by a factor of four, thus reducing our persistence effect:

See the Pen 4. Reduced persistence by Georgi Nikoloff (@gbnikolov) on CodePen.

But why stop there? Here is a final demo that provides you with UI controls to tweak the scale, rotation and fade factor parameters in the demo. It uses THREE.Matrix3 and more specifically its setUvTransform method that allows us to express the translation, scale and rotation of our texture coordinates as a 3×3 matrix.
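Before looking at the shader, here is a minimal sketch of the JavaScript side; the parameter values are placeholders, not the ones used in the demo:

// Build the 3x3 texture coordinate matrix
const uvMatrix = new THREE.Matrix3()

// setUvTransform(offsetX, offsetY, repeatX, repeatY, rotation, centerX, centerY)
// Here we scale and rotate slightly around the texture center (0.5, 0.5)
uvMatrix.setUvTransform(0, 0, 1.05, 1.05, 0.01, 0.5, 0.5)

// Expose it to the fade shader as an extra uniform, next to inputTexture
fadeMaterial.uniforms.uvMatrix = { value: uvMatrix }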

We can then consume this 3×3 matrix as another uniform in our vertex shader and apply it to the texture coordinates. Here is the updated vertexShader property of our fadeMaterial:

// pass the texture coordinate matrix as another uniform variable
uniform mat3 uvMatrix;
varying vec2 vUv;
void main () {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  // Since our texture coordinates, represented by uv, are a vector with
  // 2 floats and our matrix holds 9 floats, we need to temporarily
  // add extra dimension to the texture coordinates to make the
  // multiplication possible.
  // In the end, we simply grab the .xy of the final result, thus
  // transforming it back to vec2
  vUv = (uvMatrix * vec3(uv, 1.0)).xy;
}

And here is the result. I also added controls for the different parameters:

See the Pen 5. Parameterised persistence by Georgi Nikoloff (@gbnikolov) on CodePen.

Conclusion

Framebuffers are a powerful tool in WebGL that allows us to greatly enhance our scenes via post-processing and achieve all kinds of cool effects. Some techniques require more than one framebuffer, as we saw, and it is up to us as developers to mix and match them however we need to achieve our desired visuals.


The post Adding a Persistence Effect to Three.js Scenes appeared first on Codrops.

Creating Grid-to-Fullscreen Animations with Three.js

Animations play a big role in how users feel about your website. They convey a lot of the personality and feel of your site. They also help the user navigate new and already known screens with more ease.

In this tutorial we want to look at how to create some interesting grid-to-fullscreen animations on images. The idea is to have a grid of smaller images and when clicking on one, the image enlarges with a special animation to cover the whole screen. We’ll aim for making them accessible, unique and visually appealing. Additionally, we want to show you the steps for making your own.

The building blocks

Before we can start doing all sorts of crazy animations, timing calculations and reality deformation we need to get the basic setup of the effect ready:

  • Initialize Three.js and the plane we’ll use
  • Position and scale the plane so it is similar to the item’s image whenever the user clicks an item
  • Animate the plane so it covers the complete screen

For the sake of not going too crazy with all the effects we can make, we’ll focus on making a flip effect like the one in our first demo.

GridFullscreen_demo1

Initialization

To begin, let's make a basic Three.js setup and add a single 1×1 plane which we'll re-use for the animation of every grid item. Since only one animation can happen at a time, we get better performance by using a single plane for all animations.

This simple change is going to allow us to have any number of HTML items without affecting the performance of the animation.

As a side note, in our approach we decided to only use Three.js for the duration of the animation. This means all the items are good old HTML.

This allows our code to have a natural fallback for browsers that don’t have WebGL support. And it also makes our effect more accessible.

class GridToFullscreenEffect {
	...
	init(){
		... 
		const segments = 128;
		var geometry = new THREE.PlaneBufferGeometry(1, 1, segments, segments);
		// We'll be using the shader material later on ;)
		var material = new THREE.ShaderMaterial({
		  side: THREE.DoubleSide
		});
		this.mesh = new THREE.Mesh(geometry, material);
		this.scene.add(this.mesh);
	}
}

Note: We are skipping over the Three.js initialization since it’s pretty basic.

Setting the the plane geometry’s size to be 1×1 simplifies things a little bit. It removes a some of the math involved with calculating the correct scale. Since 1 scaled by any number is always going to return that same number.

Positioning and resizing

Now, we’ll resize and position the plane to match the item’s image. To do this, we’ll need to get the item’s getBoundingClientRect. Then we need to transform its values from pixels to the camera’s view units. After, we need to transform them from relative to the top left, to relative from the center. Summarized:

  1. Map pixel units to camera’s view units
  2. Make the units relative to the center instead of the top left
  3. Make the position’s origin start on the plane’s center, not on the top left
  4. Scale and position the mesh using these new values
class GridToFullscreenEffect {
...
 onGridImageClick(ev,itemIndex){
	// getBoundingClientRect gives pixel units relative to the top left of the page
	 const rect = ev.target.getBoundingClientRect();
	const viewSize = this.getViewSize();
	
	// 1. Transform pixel units to camera's view units
	const widthViewUnit = (rect.width * viewSize.width) / window.innerWidth;
	const heightViewUnit = (rect.height * viewSize.height) / window.innerHeight;
	let xViewUnit =
	  (rect.left * viewSize.width) / window.innerWidth;
	let yViewUnit =
	  (rect.top * viewSize.height) / window.innerHeight;
	
	// 2. Make units relative to center instead of the top left.
	xViewUnit = xViewUnit - viewSize.width / 2;
	yViewUnit = yViewUnit - viewSize.height / 2;
   

	// 3. Make the origin of the plane's position to be the center instead of top Left.
	let x = xViewUnit + widthViewUnit / 2;
	let y = -yViewUnit - heightViewUnit / 2;

	// 4. Scale and position mesh
	const mesh = this.mesh;
	// Since the geometry's size is 1, the scale is equivalent to the size.
	mesh.scale.x = widthViewUnit;
	mesh.scale.y = heightViewUnit;
	mesh.position.x = x;
	mesh.position.y = y;

	}
 }

As a side note, scaling the mesh instead of scaling the geometry is more performant. Scaling the geometry actually changes its internal data which is slow and expensive, while scaling the mesh happens at rendering. This decision will come into play later on, so keep it in mind.

Now, bind this function to each item’s onclick event. Then our plane resizes to match the item’s image.

It’s a very simple concept, yet quite performant in the long run. Now that our plane is ready to go when clicked, lets make it cover the screen.

Basic animation

First, let's initialize a few uniforms:

  • uProgress – Progress of the animation
  • uMeshScale – Scale of the mesh
  • uMeshPosition – Mesh’s position from the center
  • uViewSize – Size of the camera’s view

We’ll also create the base for our shaders.

class GridToFullscreenEffect {
	constructor(container, items){
		this.uniforms = {
		  uProgress: new THREE.Uniform(0),
		  uMeshScale: new THREE.Uniform(new THREE.Vector2(1, 1)),
		  uMeshPosition: new THREE.Uniform(new THREE.Vector2(0, 0)),
		  uViewSize: new THREE.Uniform(new THREE.Vector2(1, 1)),
		}
	}
	init(){
		... 
		const viewSize = this.getViewSize();
		this.uniforms.uViewSize.x = viewSize.width;
		this.uniforms.uViewSize.y = viewSize.height;
		var material = new THREE.ShaderMaterial({
			uniforms: this.uniforms,
			vertexShader: vertexShader,
			fragmentShader: fragmentShader,
			side: THREE.DoubleSide
		});
		
		...
	}
	...
}
const vertexShader = `
	uniform float uProgress;
	uniform vec2 uMeshScale;
	uniform vec2 uMeshPosition;
	uniform vec2 uViewSize;

	void main(){
		vec3 pos = position.xyz;
		 gl_Position = projectionMatrix * modelViewMatrix * vec4(pos,1.);
	}
`;
const fragmentShader = `
	void main(){
		 gl_FragColor = vec4(vec3(0.2),1.);
	}
`;

We need to update the uMeshScale and uMeshPosition uniforms whenever we click an item.

class GridToFullscreenEffect {
	...
	onGridImageClick(ev,itemIndex){
		...
		// Divide by the scale because in the vertex shader we need the values before scaling
		this.uniforms.uMeshPosition.value.x = x / widthViewUnit;
		this.uniforms.uMeshPosition.value.y = y / heightViewUnit;

		this.uniforms.uMeshScale.value.x = widthViewUnit;
		this.uniforms.uMeshScale.value.y = heightViewUnit;
	}
}

Since we scaled the mesh and not the geometry, on the vertex shader our vertices still represent a 1×1 square in the center of the scene. But it ends up rendered in another position and with a different size because of the mesh. As a consequence of this optimization, we need to use “down-scaled” values in the vertex shader. With that out of the way, let's make the effect happen in our vertex shader:

  1. Calculate the scale needed to match the screen size using our mesh’s scale
  2. Move the vertices by their negative position so they move to the center
  3. Multiply those values by the progress of the effect
...
const vertexShader = `
	uniform float uProgress;
	uniform vec2 uMeshScale;
	uniform vec2 uMeshPosition;
	uniform vec2 uViewSize;

	void main(){
		vec3 pos = position.xyz;
		
		// Scale to page view size/page size
		vec2 scaleToViewSize = uViewSize / uMeshScale - 1.;
		vec2 scale = vec2(
		  1. + scaleToViewSize * uProgress
		);
		pos.xy *= scale;
		
		// Move towards center 
		pos.y += -uMeshPosition.y * uProgress;
		pos.x += -uMeshPosition.x * uProgress;
		
		
		 gl_Position = projectionMatrix * modelViewMatrix * vec4(pos,1.);
	}
`;

Now, when we click an item, we are going to:

  • set our canvas container on top of the items
  • make the HTML item invisible
  • tween uProgress between 0 and 1
class GridToFullscreenEffect {
	...
	constructor(container,items){
		...
		this.itemIndex = -1;
		this.isAnimating = false;
		this.state = "grid";
	}
	toGrid(){
		if (this.state === 'grid' || this.isAnimating) return;
		this.isAnimating = true;
		this.tween = TweenLite.to(
		  this.uniforms.uProgress,1.,
		  {
			value: 0,
			onUpdate: this.render.bind(this),
			onComplete: () => {
			  this.isAnimating = false;
			  this.state = "grid";
			this.container.style.zIndex = "0";
			}
		  }
		);
	}
	toFullscreen(){
		if (this.state === 'fullscreen' || this.isAnimating) return;
		this.isAnimating = true;
		this.container.style.zIndex = "2";
		this.tween = TweenLite.to(
		  this.uniforms.uProgress,1.,
		  {
			value: 1,
			onUpdate: this.render.bind(this),
			onComplete: () => {
			  this.isAnimating = false;
			  this.state = "fullscreen";
			}
		  }
		);
	}

	onGridImageClick(ev,itemIndex){
		...
		this.itemIndex = itemIndex;
		this.toFullscreen();
	}
}

We start the tween whenever we click an item. And there you go, our plane goes back and forth no matter which item we choose.
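The snippets above only show the click trigger for toFullscreen; how the way back is triggered is an assumption here, but one simple option is a click on the fullscreen canvas container:

	// For example, inside the constructor (an assumed trigger, not the demo's exact code):
	this.container.addEventListener("click", () => {
		if (this.state === "fullscreen") this.toGrid();
	});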

Pretty good, but not too impressive yet.

Now that we have the basic building blocks done, we can start making the cool stuff. For starters, let's go ahead and add timing.

Activation and timing

Scaling the whole plane is a little bit boring. So, let's give it some more flavor by making it scale with different patterns: top-to-bottom, left-to-right, topLeft-to-bottomRight.

Let's take a look at how those effects behave and figure out what we need to do:

Grid Effects

By observing the effects for a minute, we can notice that the effect is all about timing. Some parts of the plane start later than others.

What we are going to do is to create an “activation” of the effect. We’ll use that activation to determine which vertices are going to start later than others.

Effects with activations

And let's see how that looks in code:

...
const vertexShader = `
	...
	void main(){
		vec3 pos = position.xyz;
		
		// Activation for left-to-right
		float activation = uv.x;
		
		float latestStart = 0.5;
		float startAt = activation * latestStart;
		float vertexProgress = smoothstep(startAt,1.,uProgress);
	   
		...
	}
`;

We’ll replace uProgress with vertexprogres for any calculations in the vertex shader.

...
const vertexShader = `
	...
	void main(){
		...
		float vertexProgress = smoothstep(startAt,1.,uProgress);
		
		vec2 scaleToViewSize = uViewSize / uMeshScale - 1.;
		vec2 scale = vec2(
		  1. + scaleToViewSize * vertexProgress
		);
		pos.xy *= scale;
		
		// Move towards center 
		pos.y += -uMeshPosition.y * vertexProgress;
		pos.x += -uMeshPosition.x * vertexProgress;
		...
	}
`;

With this little change, our animation is now much more interesting.

Note that the gradients on the demo are there for demonstration purposes. They have nothing to do with the effect itself.

The great thing about these “activation” and “timing” concepts is that they are interchangeable implementations. This allows us to create a ton of variations.
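Since the activation is just a single number per vertex, swapping how it is computed is all it takes to change the pattern. Here are a couple of sketched variations; which edge starts first depends on the plane's UV layout:

...
const vertexShader = `
	...
	void main(){
		...
		// Vertical activation: vertices with a smaller uv.y start first
		float activation = uv.y;

		// Or a diagonal activation, from one corner to the opposite one:
		// float activation = (uv.x + uv.y) / 2.;

		float latestStart = 0.5;
		float startAt = activation * latestStart;
		float vertexProgress = smoothstep(startAt,1.,uProgress);
		...
	}
`;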

With the activation and timing in place, let's make it more interesting with transformations.

Transformations

If you haven’t noticed, we already know how to make a transformation. We successfully scaled and moved the plane forwards and backwards.

We interpolate or move from one state to another using vertexProgress, just like we are already doing with the scale and movement:

...
const vertexShader = `
	...
	void main(){
	...
		// Base state = 1.
		// Target state = scaleToViewSize (computed from uViewSize / uMeshScale)
		// Interpolation value: vertexProgress
		vec2 scale = vec2(
		  1. + scaleToViewSize * vertexProgress
		);

		// Base state = pos
		// Target state = -uMeshPosition
		// Interpolation value: vertexProgress
		pos.y += -uMeshPosition.y * vertexProgress;
		pos.x += -uMeshPosition.x * vertexProgress;
	...
	}
`

Let's apply this same idea to make a flip transformation:

  • Base state: the vertex’s current position
  • Target state: The vertex flipped position
  • Interpolate with: the vertex progress
...
const vertexShader = `
	...
	void main(){
		...
		float vertexProgress = smoothstep(startAt,1.,uProgress);
		// Base state: pos.x
		// Target state: flippedX
		// Interpolation with: vertexProgress 
		float flippedX = -pos.x;
		pos.x = mix(pos.x,flippedX, vertexProgress);
		// Put vertices that are closer to its target in front. 
		pos.z += vertexProgress;
		...
	}
`;

Note that, because this flip sometimes puts vertices on top of each other, we need to bring some of them slightly to the front to make it look correct.

Combining these flips with different activations, these are some of the variations we came up with:

If you pay close attention to the flip you’ll notice it also flips the color/image backwards. To fix this issue we have to flip the UVs along with the position.
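One way to do that, sketched here under the assumption that the plane's texture coordinates are forwarded to the fragment shader as the varying vUv, is to mix the UVs towards their mirrored value with the same vertexProgress:

...
const vertexShader = `
	...
	varying vec2 vUv;
	void main(){
		...
		// Mirror the horizontal texture coordinate along with the position,
		// so the image reads correctly once the plane has flipped over
		float flippedUvX = 1. - uv.x;
		vUv = vec2(mix(uv.x, flippedUvX, vertexProgress), uv.y);
		...
	}
`;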

And there we have it! We’ve not only created an interesting and exciting flip effect, but also made sure that using this structure we can discover all kinds of effects by changing one or more of the pieces.

In fact, we created the effects seen in our demos by exploring different configurations as part of our creative process.

There is so much more to explore! And we would love to see what you can come up with.

Here are the most interesting variations we came up with:

Different timing creation:

GridFullscreen_demo2

Activation based on mouse position, and deformation with noise:

GridFullscreen_demo4

Distance deformation and mouse position activation:

GridFullscreen_demo5

We hope you enjoyed this tutorial and find it helpful!

GitHub link coming soon!

Creating Grid-to-Fullscreen Animations with Three.js was written by Daniel Velasquez and published on Codrops.

Pulling Apart SVGs with Reusable WebGL Components Using React-three-fiber

We will be looking at how to pull apart SVGs in 3D space with Three.js and React, using abstractions that allow us to break the scene graph into reusable components.

React and Three.js, what’s the problem?

My background in the past had more to do with front-end work than design, and React has been my preferred tool for a couple of years now. I like it because it pretty much maps to the way I think. The ideas in my head are puzzle pieces, which in React turn into composable components. It makes prototyping faster, and from a visual/design standpoint, it's even fun, because it allows you to play around without repercussions. If everything is a self-contained lego brick, you can rip it out, place it here or there, and observe the result from different angles and perspectives. Especially for visual coding this can make a difference.

The problems that arise when handling programming tasks in an imperative way are always the same. Once we have created a sufficiently complex dependency graph, things tend to be cobbled together, which makes the whole less flexible. Adding, updating or deleting items in sync with state and other operations can get complex. Orchestrating animations makes it even worse, because now you need to wait for animations to conclude before you continue with other operations, and so on. Without a clear component model it can be a reasonable challenge to keep it all together.

We run into this when working with user interfaces, as well as when creating scenes with Three.js, which can lead to especially unwieldy structures, as it forces us to create a ton of objects that we have to track, mutate and manage. But React can solve that, too.

Think of React as a standard that defines what a component is and how it functions. React needs a so-called “reconciler” to tell it what to do with these components and how to render them into a host. The browser's DOM is a host, hence the react-dom package, which instructs React about the DOM. React Native is another one you may be familiar with, but really there are dozens, reaching into all kinds of platforms, from AR, VR and console shells to, you guessed it, Three.js. The reconciler we will be using in this tutorial is called react-three-fiber; it renders components into a Three.js scene graph. Think of it as a portal into Three.js.

Let’s build!

Setting up the scene

Our portal into Three.js will be react-three-fiber’s “Canvas” component. Everything that goes in there will be cast into Three.js-native objects. The following will create a responsive canvas with some lights in it.

import React from 'react'
import { Canvas } from 'react-three-fiber'

function App() {
  return (
    <Canvas>
      <ambientLight intensity={0.5} />
      <spotLight intensity={0.5} position={[300, 300, 4000]} />
    </Canvas>
  )
}

Converting SVGs into shapes

Our goal is to extract SVG paths, once we have that we can display them in all sorts of interesting ways. We will be using fairly simple sketches for that, they won’t create many layers and the effect will be less pronounced.

example

In order to transform SVGs into shape geometries we use Three.js’s SVGLoader. The following will give us a nested array of objects that contains the shapes and colors. We collect the index, too, which we will be using to offset the z-vector.

// "loader" refers to Three.js's SVGLoader and "flatten" to a helper that
// flattens the nested array one level (e.g. lodash's flatten); both are
// imported elsewhere in the demo
const svgResource = new Promise(resolve =>
  new loader().load(url, shapes =>
    resolve(
      flatten(
        shapes.map((group, index) =>
          group.toShapes(true).map(shape => ({ shape, color: group.color, index }))
        )
      )
    )
  )
)

Next we define a “Shape” component which renders a single shape. Each shape is offset 50 units by its own index.

function Shape({ shape, position, color, opacity, index }) {
  return (
    <mesh position={[0, 0, index * 50]}>
      <meshPhongMaterial attach="material" color={color} />
      <shapeBufferGeometry attach="geometry" args={[shape]} />
    </mesh>
  )
}

All we are missing now is a component that maps over the shapes we have created. Since the resource we have created is a promise, we have to await its resolved state. Once it has loaded, we write it into the local component state and forward each shape to the “Shape” component we have just created.

function Scene() {
  const [shapes, set] = useState([])
  useEffect(() => void svgResource.then(set), [])
  return (
    <group>
      {shapes.map(item => <Shape key={item.shape.uuid} {...item} />)}
    </group>
  )
}

This is it, our canvas shows an offset SVG.
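To put the pieces together, the Scene component simply goes inside the Canvas from the beginning of the article; a minimal composition sketch (camera settings omitted) looks like this:

function App() {
  return (
    <Canvas>
      <ambientLight intensity={0.5} />
      <spotLight intensity={0.5} position={[300, 300, 4000]} />
      <Scene />
    </Canvas>
  )
}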

Adding animations

If you wanted to animate Three.js you would most likely do it manually, using tools like GSAP. And since we want to animate elements that go in and out, you need some system in place to orchestrate it, which is not an easy task to pull off.

Here comes the nice part, we are rendering React components and that opens up a lot of possibilities. We can use pretty much everything that exists in the eco system, including animation and transition tools. In this case we use react-spring.

Really all we need to do is convert our shapes into a transition group. A transition group is something that watches state for changes and helps to retain and transition old state until it can be safely removed. In react-spring's case it is called “useTransition”. It takes the original data (shapes in this case), keys in order to identify changes in the data set, and a couple of lifecycles in which we can define what happens when state is added, removed or changed.

The following takes care of everything. If shapes are added, they will transition into the scene in a trailed motion. If shapes are removed, they will transition out.

const transitions = useTransition(shapes, item => item.shape.uuid, {
  from: { position: [0, 50, -200], opacity: 0 },
  enter: { position: [0, 0, 0], opacity: 1 },
  leave: { position: [0, -50, 10], opacity: 0 },
})

return (
  <group>
    {transitions.map(({ item, key, props }) => <Shape key={key} {...item} {...props} />)}
  </group>
)

useTransition creates an array of objects which contain generated keys, the data items (our shapes) and animated properties. We spread everything over the Shape component. Now we just need to prepare that component to receive animated values and we are done.

react-spring exports a little helper called “animated”, as well as a shortcut called “a”. If you extend any element with it, it will be able to handle these properties. Basically, if you had a div, it becomes a.div; if you had a mesh, it becomes a.mesh.
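As a sketch, and not the exact demo code, here is the Shape component extended to consume those animated values; the import path depends on your react-spring version, and the per-index z-offset from before is omitted for brevity:

import { animated as a } from 'react-spring/three'

function Shape({ shape, position, color, opacity }) {
  return (
    // a.mesh and a.meshPhongMaterial can interpolate the animated
    // position and opacity props on every frame
    <a.mesh position={position}>
      <a.meshPhongMaterial attach="material" color={color} transparent opacity={opacity} />
      <shapeBufferGeometry attach="geometry" args={[shape]} />
    </a.mesh>
  )
}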

I hope you had fun! You will find detailed explanations for everything in the respective docs for react-three-fiber and react-spring. The full code for the original demo can be found here.

Pulling Apart SVGs with Reusable WebGL Components Using React-three-fiber was written by Paul Henschel and published on Codrops.