Adding a Persistence Effect to Three.js Scenes

If you have written any WebGL applications in the past, be it with the vanilla API or a helper library such as Three.js, you know the drill: you set up the things you want to render, perhaps add different types of cameras, animations and fancy lighting, and voilà, the results are rendered to the default WebGL framebuffer, which is the device screen.

Framebuffers are a key feature in WebGL when it comes to creating advanced graphical effects such as depth-of-field, bloom, film grain or various types of anti-aliasing. They allow us to “post-process” our scenes, applying different effects on them once rendered.

This article assumes some intermediate knowledge of WebGL with Three.js. The core ideas behind framebuffers have already been covered in-depth in this article here on Codrops. Please make sure to read it first, as the persistence effect we will be achieving directly builds on top of these ideas.

Persistence effect in a nutshell

I call it persistence, though I am not sure that is the best name for this effect; I am simply not aware of a more established term for it. What is it useful for?

We can use it subtly to blend each previous and current animation frame together or perhaps a bit less subtly to hide bad framerate. Looking at video games like Grand Theft Auto, we can simulate our characters getting drunk. Another thing that comes to mind is rendering the view from the cockpit of a spacecraft when traveling at supersonic speed. Or, since the effect is just so good looking in my opinion, use it for all kinds of audio visualisers, cool website effects and so on.

To achieve it, we first need to create two WebGL framebuffers. Since we will be using threejs for this demo, we will use THREE.WebGLRenderTarget. We will call them Framebuffer 1 and Framebuffer 2 from now on, and they will have the same dimensions as the canvas we are rendering to.

To draw one frame of our animation with persistence, we will need to:

  1. Render the contents of Framebuffer 1 to Framebuffer 2 with a shader that fades to a certain color. For the purposes of our demo, we will use a pure black color with full opacity
  2. Render our threejs scene that holds the actual meshes we want to show on the screen to Framebuffer 2 as well
  3. Render the contents of Framebuffer 2 to the Default WebGL framebuffer (device screen)
  4. Swap Framebuffer 1 with Framebuffer 2

Afterwards, for each new animation frame, we will need to repeat the above steps. Let’s illustrate each step:

Here is a visualisation of our framebuffers. WebGL gives us the Default framebuffer, represented by the device screen, automatically. It’s up to us as developers to manually create Framebuffer 1 and Framebuffer 2. No animation or rendering has happened yet, so the pixel contents of all 3 framebuffers are empty.

Our 2 manually created framebuffers on the left, the default device screen framebuffer on the right. Nothing has been drawn yet.

Step 1: we need to render the contents of Framebuffer 1 to Framebuffer 2 with a shader that fades to a certain color. As said, we will use a black color, but for illustration purposes I am fading out to a transparent white color with opacity 0.2 here. As Framebuffer 1 is empty, this will result in an empty Framebuffer 2:

Even though we have rendered Framebuffer 1 to Framebuffer 2, at this point we still have framebuffers with empty pixel contents.

Step 2: we need to render our threejs scene that holds our meshes / cameras / lighting / etc to Framebuffer 2. Please notice that both Step 1 and Step 2 render on top of each other to Framebuffer 2.

Our threejs scene rendered to Framebuffer 2.

Step 3: After we have successfully rendered Step 1 and Step 2 to Framebuffer 2, we need to render Framebuffer 2 itself to the Default framebuffer:

Final result rendered to the device screen.

Step 4: Now we need to swap Framebuffer 1 with Framebuffer 2. We then clear Framebuffer 1 and the Default framebuffer:

Framebuffer 1 and Framebuffer 2 swapped.

Now comes the interesting part, since Framebuffer 1 is no longer empty. Let’s go over each step once again:

Step 1: we need to render the contents of Framebuffer 1 to Framebuffer 2 with a shader that fades to a certain color. Let’s assume a transparent white color with 0.2 opacity.

We have rendered and faded out the pixel contents of Framebuffer 1 to Framebuffer 2 by a factor of 0.2.

Step 2: we need to render our threejs scene to Framebuffer 2. For illustration purposes, let’s assume we have an animation that slowly moves our 3D cube to the right, meaning that now it will be a few pixels to the right:

Once again, we render both Framebuffer 1 and our threejs scene to Framebuffer 2. Notice how the threejs scene is rendered on top of the faded out contents of Framebuffer 1.

Step 3: After we have successfully rendered Step 1 and Step 2 to Framebuffer 2, we need to render Framebuffer 2 itself to the Default framebuffer:

The pixel contents of Framebuffer 2 copied over to the device screen.

Step 4: Now we need to swap Framebuffer 1 with Framebuffer 2. We then clear Framebuffer 1 and the Default framebuffer:

Rinse and repeat. Back to Step 1.

I hope you can see a pattern emerging. If we repeat this process enough times, we will start accumulating each new frame onto the previous, faded one. Here is how it would look after enough repetitions:

Postprocessing on our threejs scene

Here is the demo we will build in this article. The result of repeating the process enough times is evident:

Notice the accumulated trails. At every animation loop, we are repeating step 1 to 4.

So with this theory out of the way, let’s create this effect with threejs!

Our skeleton app

Let’s write a simple threejs app that animates a bunch of objects around the screen and uses a perspective camera to look at them. I will not explain the code for my example here, as it does not really matter what we render, as long as there is some animation present so things move and we can observe the persistence.

I encourage you to disregard my example and draw something else yourself. Even the most basic spinning cube that moves around the screen will be enough. That being said, here is my scene:

See the Pen 1. Skeleton app by Georgi Nikoloff (@gbnikolov) on CodePen.

The important thing to keep in mind here is that this demo renders to the default framebuffer, represented by the device screen, that WebGL automatically gives us. There are no extra framebuffers involved in this demo up to this point.

Achieving the persistence

Let’s add the code needed to achieve actual persistence. We will start by introducing a THREE.OrthographicCamera.

Orthographic camera can be useful for rendering 2D scenes and UI elements, amongst other things.

threejs docs

Remember, framebuffers allow us to render to image buffers in the video card’s memory instead of the device screen. These image buffers are represented by the THREE.Texture class and are automatically created for us when we create our Framebuffer 1 and Framebuffer 2 by instantiating a new THREE.WebGLRenderTarget. In order to display these textures back to the device screen, we need to create two 2D fullscreen quads that span the width and height of our monitor. Since these quads will be 2D, THREE.OrthographicCamera is best suited to display them.

const leftScreenBorder = -innerWidth / 2
const rightScreenBorder = innerWidth / 2
const topScreenBorder = -innerHeight / 2
const bottomScreenBorder = innerHeight / 2
const near = -100
const far = 100
const orthoCamera = new THREE.OrthographicCamera(
  leftScreenBorder,
  rightScreenBorder,
  topScreenBorder,
  bottomScreenBorder,
  near,
  far
)
orthoCamera.position.z = -10
orthoCamera.lookAt(new THREE.Vector3(0, 0, 0))

As a next step, let’s create a fullscreen quad geometry using THREE.PlaneGeometry:

const fullscreenQuadGeometry = new THREE.PlaneGeometry(innerWidth, innerHeight)

Using our newly created 2D quad geometry, let’s create two fullscreen planes. I will call them fadePlane and resultPlane. They will use THREE.ShaderMaterial and THREE.MeshBasicMaterial respectively:

// To achieve the fading out to black, we will use THREE.ShaderMaterial
const fadeMaterial = new THREE.ShaderMaterial({
  // Pass the texture result of our rendering to Framebuffer 1 as uniform variable
  uniforms: {
    inputTexture: { value: null }
  },
  vertexShader: `
    // Declare a varying variable for texture coordinates
    varying vec2 vUv;

    void main () {
      // Set each vertex position according to the
      // orthographic camera position and projection
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  
      // Pass the plane texture coordinates as interpolated varying
      // variable to the fragment shader
      vUv = uv;
    }
  `,
  fragmentShader: `
    // Pass the texture from Framebuffer 1
    uniform sampler2D inputTexture;

    // Consume the interpolated texture coordinates
    varying vec2 vUv;

    void main () {
      // Get pixel color from texture
      vec4 texColor = texture2D(inputTexture, vUv);

      // Our fade-out color
      vec4 fadeColor = vec4(0.0, 0.0, 0.0, 1.0);

      // this step achieves the actual fading out
      // mix texColor into fadeColor by a factor of 0.05
      // you can change the value of the factor and see
      // the result will change accordingly
      gl_FragColor = mix(texColor, fadeColor, 0.05);
    }
  `
})

// Create our fadePlane
const fadePlane = new THREE.Mesh(
  fullscreenQuadGeometry,
  fadeMaterial
)

// create our resultPlane
// Please notice we don't use fancy shader materials for resultPlane
// We will use it simply to copy the contents of Framebuffer 2 to the device screen
// So we can just use the .map property of THREE.MeshBasicMaterial
const resultMaterial = new THREE.MeshBasicMaterial({ map: null })
const resultPlane = new THREE.Mesh(
  fullscreenQuadGeometry,
  resultMaterial
)

We will use fadePlane to perform step 1 from the list above: rendering the previous frame, represented by Framebuffer 1, faded out towards black. Step 2 then renders our original threejs scene on top of it. Both steps render to Framebuffer 2 and update its corresponding THREE.Texture.

We will use the resulting texture of Framebuffer 2 as an input to resultPlane. This time, we will render to the device screen. We will essentially copy the contents of Framebuffer 2 to the Default Framebuffer (device screen), thus achieving step 3.

Up next, let’s actually create our Framebuffer 1 and Framebuffer 2. They are represented by THREE.WebGLRenderTarget:

// Create two extra framebuffers manually
// It is important we use let instead of const variables,
// as we will need to swap them as discussed in Step 4!
let framebuffer1 = new THREE.WebGLRenderTarget(innerWidth, innerHeight)
let framebuffer2 = new THREE.WebGLRenderTarget(innerWidth, innerHeight)

// Before we start using these framebuffers by rendering to them,
// let's explicitly clear their pixel contents to #111111
// If we don't do this, our persistence effect will end up wrong,
// due to how accumulation between step 1 and 3 works. 
// The first frame will never fade out when we mix Framebuffer 1 into
// Framebuffer 2 and will always be visible.
// This bug is better observed, rather than explained, so please
// make sure to comment out these lines and see the change for yourself.
renderer.setClearColor(0x111111)
renderer.setRenderTarget(framebuffer1)
renderer.clearColor()
renderer.setRenderTarget(framebuffer2)
renderer.clearColor()

As you might have guessed already, we will achieve step 4 as described above by swapping framebuffer1 and framebuffer2 at the end of each animation frame.

At this point we have everything ready and initialised: our THREE.OrthographicCamera, our two quads that will fade out and copy the contents of our framebuffers to the device screen and, of course, the framebuffers themselves. It should be noted that up until this point we have not changed our animation loop code and logic; we have only created these new things at the initialisation step of our program. Let’s now put them to use in our rendering loop.

Here is how the function that is executed on each animation frame looks right now, taken directly from the CodePen example above:

function drawFrame (timeElapsed) {
  for (let i = 0; i < meshes.length; i++) {
     const mesh = meshes[i]
     // some animation logic that moves each mesh around the screen
     // with different radius, offset and speed
     // ...
  }
  // Render our entire scene to the device screen, represented by
  // the default WebGL framebuffer
  renderer.render(scene, perspectiveCamera)
}

If you have written any threejs code before, this drawFrame method should not be news to you. We apply some animation logic to our meshes and then render them to the device screen by calling renderer.render() on the whole scene with the appropriate camera.

Let’s incorporate our steps 1 to 4 from above and achieve our persistence:

function drawFrame (timeElapsed) {
  // The for loop remains unchanged from the previous snippet
  for (let i = 0; i < meshes.length; i++) {
     // ...
  }

  // By default, threejs clears the pixel color buffer when
  // calling renderer.render()
  // We want to disable it explicitly, since both step 1 and step 2 render
  // to Framebuffer 2 accumulatively
  renderer.autoClearColor = false

  // Set Framebuffer 2 as active WebGL framebuffer to render to
  renderer.setRenderTarget(framebuffer2)

  // Step 1
  // Render the image buffer associated with Framebuffer 1 to Framebuffer 2
  // fading it out to pure black by a factor of 0.05 in the fadeMaterial
  // fragment shader
  fadePlane.material.uniforms.inputTexture.value = framebuffer1.texture
  renderer.render(fadePlane, orthoCamera)

  // Step 2
  // Render our entire scene to Framebuffer 2, on top of the faded out 
  // texture of Framebuffer 1.
  renderer.render(scene, perspectiveCamera)

  // Set the Default Framebuffer (device screen) represented by null as active WebGL framebuffer to render to.
  renderer.setRenderTarget(null)
  
  // Step 3
  // Copy the pixel contents of Framebuffer 2 by passing them as a texture
  // to resultPlane and rendering it to the Default Framebuffer (device screen)
  resultPlane.material.map = framebuffer2.texture
  renderer.render(resultPlane, orthoCamera)

  // Step 4
  // Swap Framebuffer 1 and Framebuffer 2
  const swap = framebuffer1
  framebuffer1 = framebuffer2
  framebuffer2 = swap

  // End of the effect
  // When the next animation frame is executed, the meshes will be animated
  // and the whole process will repeat
}

And with these changes out of the way, here is our updated example using persistence:

See the Pen 2. Persistence by Georgi Nikoloff (@gbnikolov) on CodePen.

Applying texture transformations

Now that we have our effect properly working, we can get more creative and expand on top of it.

You might remember this snippet from the fragment shader code where we faded out the contents of Framebuffer 1 to Framebuffer 2:

void main () {
   // Get pixel color from texture
   vec4 texColor = texture2D(inputTexture, vUv);

   // Our fade-out color
   vec4 fadeColor = vec4(0.0, 0.0, 0.0, 1.0);

   // mix texColor into fadeColor by a factor of 0.05
   gl_FragColor = mix(texColor, fadeColor, 0.05);
}

When we sample from our inputTexture, we can scale our texture coordinates by a factor of 0.9925 (shrinking them by 0.0075, which effectively upscales the previous frame slightly on each pass) like so:

vec4 texColor = texture2D(inputTexture, vUv * 0.9925);

With this transformation applied to our texture coordinates, here is our updated example:

See the Pen 3. Persistence with upscaled texture coords by Georgi Nikoloff (@gbnikolov) on CodePen.

Or how about increasing our fade factor from 0.05 to 0.2?

gl_FragColor = mix(texColor, fadeColor, 0.2);

This will intensify the effect of fadeColor by a factor of four, thus decreasing our persistence effect:

See the Pen 4. Reduced persistence by Georgi Nikoloff (@gbnikolov) on CodePen.

But why stop there? Here is a final demo that provides UI controls to tweak the scale, rotation and fade factor parameters. It uses THREE.Matrix3 and more specifically its setUvTransform method, which allows us to express the translation, scale and rotation of our texture coordinates as a 3×3 matrix.

We can then pass this 3×3 matrix as another uniform to our vertex shader and apply it to the texture coordinates. Here is the updated vertexShader property of our fadeMaterial:

// pass the texture coordinate matrix as another uniform variable
uniform mat3 uvMatrix;
varying vec2 vUv;
void main () {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  // Since our texture coordinates, represented by uv, are a vector with
  // 2 floats and our matrix holds 9 floats, we need to temporarily
  // add extra dimension to the texture coordinates to make the
  // multiplication possible.
  // In the end, we simply grab the .xy of the final result, thus
  // transforming it back to vec2
  vUv = (uvMatrix * vec3(uv, 1.0)).xy;
}
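
On the JavaScript side, here is a minimal sketch of how such a matrix could be built and handed to the shader, assuming we add a uvMatrix uniform to fadeMaterial; the values below are placeholders, not the exact ones used in the demo:

// Add the uniform to the fade material (the vertex shader above declares it)
fadeMaterial.uniforms.uvMatrix = { value: new THREE.Matrix3() }

// setUvTransform(offsetX, offsetY, repeatX, repeatY, rotation, centerX, centerY)
// expresses translation, scale and rotation around a center point as a 3x3 matrix
fadeMaterial.uniforms.uvMatrix.value.setUvTransform(
  0,      // horizontal offset
  0,      // vertical offset
  0.9925, // horizontal scale
  0.9925, // vertical scale
  0.0015, // rotation in radians
  0.5,    // center x
  0.5     // center y
)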

And here is the result. I also added controls for the different parameters:

See the Pen 5. Parameterised persistence by Georgi Nikoloff (@gbnikolov) on CodePen.

Conclusion

Framebuffers are a powerful tool in WebGL that allows us to greatly enhance our scenes via post-processing and achieve all kinds of cool effects. Some techniques require more than one framebuffer, as we saw, and it is up to us as developers to mix and match them however we need to achieve our desired visuals.


Creating a Typography Motion Trail Effect with Three.js

Framebuffers are a key feature in WebGL when it comes to creating advanced graphical effects such as depth-of-field, bloom, film grain or various types of anti-aliasing and have already been covered in-depth here on Codrops. They allow us to “post-process” our scenes, applying different effects on them once rendered. But how exactly do they work?

By default, WebGL (and also Three.js and all other libraries built on top of it) render to the default framebuffer, which is the device screen. If you have used Three.js or any other WebGL framework before, you know that you create your mesh with the correct geometry and material, render it, and voilà, it’s visible on your screen.

However, we as developers can create new framebuffers besides the default one and explicitly instruct WebGL to render to them. By doing so, we render our scenes to image buffers in the video card’s memory instead of the device screen. Afterwards, we can treat these image buffers like regular textures and apply filters and effects before eventually rendering them to the device screen.

Here is a video breaking down the post-processing and effects in Metal Gear Solid 5: Phantom Pain that really brings the idea home. Notice how it starts with footage from the actual game rendered to the default framebuffer (device screen) and then breaks down what each framebuffer looks like. All of these framebuffers are composited together on each frame and the result is the final picture you see when playing the game:

So with the theory out of the way, let’s create a cool typography motion trail effect by rendering to a framebuffer!

Our skeleton app

Let’s render some 2D text to the default framebuffer, i.e. device screen, using threejs. Here is our boilerplate:

const LABEL_TEXT = 'ABC'

const clock = new THREE.Clock()
const scene = new THREE.Scene()

// Create a threejs renderer:
// 1. Size it correctly
// 2. Set default background color
// 3. Append it to the page
const renderer = new THREE.WebGLRenderer()
renderer.setClearColor(0x222222)
renderer.setClearAlpha(0)
renderer.setSize(innerWidth, innerHeight)
renderer.setPixelRatio(devicePixelRatio || 1)
document.body.appendChild(renderer.domElement)

// Create an orthographic camera that covers the entire screen
// 1. Position it correctly in the positive Z dimension
// 2. Orient it towards the scene center
const orthoCamera = new THREE.OrthographicCamera(
  -innerWidth / 2,
  innerWidth / 2,
  innerHeight / 2,
  -innerHeight / 2,
  0.1,
  10,
)
orthoCamera.position.set(0, 0, 1)
orthoCamera.lookAt(new THREE.Vector3(0, 0, 0))

// Create a plane geometry that spans either the entire
// viewport height or width depending on which one is bigger
const labelMeshSize = innerWidth > innerHeight ? innerHeight : innerWidth
const labelGeometry = new THREE.PlaneBufferGeometry(
  labelMeshSize,
  labelMeshSize
)

// Programmatically create a texture that will hold the text
let labelTextureCanvas
{
  // Canvas and corresponding context2d to be used for
  // drawing the text
  labelTextureCanvas = document.createElement('canvas')
  const labelTextureCtx = labelTextureCanvas.getContext('2d')

  // Dynamic texture size based on the device capabilities
  const textureSize = Math.min(renderer.capabilities.maxTextureSize, 2048)
  const relativeFontSize = 20
  // Size our text canvas
  labelTextureCanvas.width = textureSize
  labelTextureCanvas.height = textureSize
  labelTextureCtx.textAlign = 'center'
  labelTextureCtx.textBaseline = 'middle'

  // Dynamic font size based on the texture size
  // (based on the device capabilities)
  labelTextureCtx.font = `${relativeFontSize}px Helvetica`
  const textWidth = labelTextureCtx.measureText(LABEL_TEXT).width
  const widthDelta = labelTextureCanvas.width / textWidth
  const fontSize = relativeFontSize * widthDelta
  labelTextureCtx.font = `${fontSize}px Helvetica`
  labelTextureCtx.fillStyle = 'white'
  labelTextureCtx.fillText(LABEL_TEXT, labelTextureCanvas.width / 2, labelTextureCanvas.height / 2)
}
// Create a material with our programmatically created text
// texture as input
const labelMaterial = new THREE.MeshBasicMaterial({
  map: new THREE.CanvasTexture(labelTextureCanvas),
  transparent: true,
})

// Create a plane mesh, add it to the scene
const labelMesh = new THREE.Mesh(labelGeometry, labelMaterial)
scene.add(labelMesh)

// Start our animation render loop
renderer.setAnimationLoop(onAnimLoop)

function onAnimLoop() {
  // On each new frame, render the scene to the default framebuffer 
  // (device screen)
  renderer.render(scene, orthoCamera)
}

This code simply initialises a threejs scene, adds a 2D plane with a text texture to it and renders it to the default framebuffer (device screen). If we execute it with threejs included in our project, we will get this:

See the Pen Step 1: Render to default framebuffer by Georgi Nikoloff (@gbnikolov) on CodePen.

Again, we don’t explicitly specify otherwise, so we are rendering to the default framebuffer (device screen).

Now that we have managed to render our scene to the device screen, let’s add a framebuffer (THREE.WebGLRenderTarget) and render the scene to a texture in the video card memory instead.

Rendering to a framebuffer

Let’s start by creating a new framebuffer when we initialise our app:

const clock = new THREE.Clock()
const scene = new THREE.Scene()

// Create a new framebuffer we will use to render to
// the video card memory
const renderBufferA = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)

// ... rest of application

Now that we have created it, we must explicitly instruct threejs to render to it instead of the default framebuffer, i.e. device screen. We will do this in our program animation loop:

function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)
  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)
}

And here is our result:

See the Pen Step 2: Render to a framebuffer by Georgi Nikoloff (@gbnikolov) on CodePen.

As you can see, we are getting an empty screen, yet our program contains no errors – so what happened? Well, we are no longer rendering to the device screen, but another framebuffer! Our scene is being rendered to a texture in the video card memory, so that’s why we see the empty screen.

In order to display this generated texture containing our scene back to the default framebuffer (device screen), we need to create another 2D plane that will cover the entire screen of our app and pass the texture as material input to it.

First we will create a fullscreen 2D plane that will span the entire device screen:

// ... rest of initialisation step

// Create a second scene that will hold our fullscreen plane
const postFXScene = new THREE.Scene()

// Create a plane geometry that covers the entire screen
const postFXGeometry = new THREE.PlaneBufferGeometry(innerWidth, innerHeight)

// Create a plane material that expects a sampler texture input
// We will pass our generated framebuffer texture to it
const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
  },
  // vertex shader will be in charge of positioning our plane correctly
  vertexShader: `
      varying vec2 v_uv;

      void main () {
        // Set the correct position of each plane vertex
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);

        // Pass in the correct UVs to the fragment shader
        v_uv = uv;
      }
    `,
  fragmentShader: `
      // Declare our texture input as a "sampler" variable
      uniform sampler2D sampler;

      // Consume the correct UVs from the vertex shader to use
      // when displaying the generated texture
      varying vec2 v_uv;

      void main () {
        // Sample the correct color from the generated texture
        vec4 inputColor = texture2D(sampler, v_uv);
        // Set the correct color of each pixel that makes up the plane
        gl_FragColor = inputColor;
      }
    `
})
const postFXMesh = new THREE.Mesh(postFXGeometry, postFXMaterial)
postFXScene.add(postFXMesh)

// ... animation loop code here, same as before

As you can see, we are creating a new scene that will hold our fullscreen plane. After creating it, we need to augment our animation loop to render the generated texture from the previous step to the fullscreen plane on our screen:

function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)

  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)
  
  // 👇
  // Set the device screen as the framebuffer to render to
  // In WebGL, framebuffer "null" corresponds to the default 
  // framebuffer!
  renderer.setRenderTarget(null)

  // 👇
  // Assign the generated texture to the sampler variable used
  // in the postFXMesh that covers the device screen
  postFXMesh.material.uniforms.sampler.value = renderBufferA.texture

  // 👇
  // Render the postFX mesh to the default framebuffer
  renderer.render(postFXScene, orthoCamera)
}

After including these snippets, we can see our scene once again rendered on the screen:

See the Pen Step 3: Display the generated framebuffer on the device screen by Georgi Nikoloff (@gbnikolov) on CodePen.

Let’s recap the necessary steps needed to produce this image on our screen on each render loop:

  1. Create a renderBufferA framebuffer that will allow us to render to a separate texture in the user’s device video memory
  2. Create our “ABC” plane mesh
  3. Render the “ABC” plane mesh to renderBufferA instead of the device screen
  4. Create a separate fullscreen plane mesh that expects a texture as an input to its material
  5. Render the fullscreen plane mesh back to the default framebuffer (device screen) using the generated texture created by rendering the “ABC” mesh to renderBufferA

Achieving the persistence effect by using two framebuffers

We don’t have much use for framebuffers if we are simply displaying them as they are to the device screen, as we do right now. Now that we have our setup ready, let’s actually do some cool post-processing.

First, we actually want to create yet another framebuffer, renderBufferB, and make sure it and renderBufferA are let variables rather than consts. That’s because we will swap them at the end of each render so we can achieve framebuffer ping-ponging.

“Ping-ponging” in WebGL is a technique that alternates the use of a framebuffer as either input or output. It is a neat trick that allows for general-purpose GPU computations and is used in effects such as Gaussian blur, where in order to blur our scene we need to:

  1. Render it to framebuffer A using a 2D plane and apply horizontal blur via the fragment shader
  2. Render the resulting horizontally blurred image from step 1 to framebuffer B and apply vertical blur via the fragment shader
  3. Swap framebuffer A and framebuffer B
  4. Keep repeating steps 1 to 3, incrementally applying blur, until the desired Gaussian blur radius is achieved.
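
As a generic illustration of that pattern (the blurScene and blurMesh names below are placeholders, not part of this article’s demo), a single ping-pong pass could be sketched like this:

// Two framebuffers with swappable roles: one is read from, one is written to
let readBuffer = new THREE.WebGLRenderTarget(innerWidth, innerHeight)
let writeBuffer = new THREE.WebGLRenderTarget(innerWidth, innerHeight)

function pingPongPass () {
  // Feed the previous result into this pass as a texture
  blurMesh.material.uniforms.sampler.value = readBuffer.texture

  // Render the pass into the other framebuffer
  renderer.setRenderTarget(writeBuffer)
  renderer.render(blurScene, orthoCamera)

  // Swap roles so the next pass reads what we just wrote
  const temp = readBuffer
  readBuffer = writeBuffer
  writeBuffer = temp
}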

Here is a small chart illustrating the steps needed to achieve ping-pong:

So with that in mind, we will render the contents of renderBufferA into renderBufferB using the postFXMesh we created and apply some special effect via the fragment shader.

Let’s kick things off by creating our renderBufferB:

let renderBufferA = new THREE.WebGLRenderTarget(
  // ...
)
// Create a second framebuffer
let renderBufferB = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)

Next up, let’s augment our animation loop to actually do the ping-pong technique:

function onAnimLoop() {
  // 👇
  // Do not clear the contents of the canvas on each render
  // In order to achieve our ping-pong effect, we must draw
  // the new frame on top of the previous one!
  renderer.autoClearColor = false

  // 👇
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)

  // 👇
  // Render the postFXScene to renderBufferA.
  // This will contain our ping-pong accumulated texture
  renderer.render(postFXScene, orthoCamera)

  // 👇
  // Render the original scene containing ABC again on top
  renderer.render(scene, orthoCamera)
  
  // Same as before
  // ...
  // ...
  
  // 👇
  // Ping-pong our framebuffers by swapping them
  // at the end of each frame render
  const temp = renderBufferA
  renderBufferA = renderBufferB
  renderBufferB = temp
}

If we are to render our scene again with these updated snippets, we will see no visual difference, even though we do in fact alternate between the two framebuffers to render it. That’s because, as it is right now, we do not apply any special effects in the fragment shader of our postFXMesh.

Let’s change our fragment shader like so:

// Sample the correct color from the generated texture
// 👇
// Notice how we now apply a slight 0.005 offset to our UVs when
// looking up the correct texture color

vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));
// Set the correct color of each pixel that makes up the plane
// 👇
// We fade out the color from the previous step to 97.5% of
// whatever it was before
gl_FragColor = vec4(inputColor * 0.975);

With these changes in place, here is our updated program:

See the Pen Step 4: Create a second framebuffer and ping-pong between them by Georgi Nikoloff (@gbnikolov) on CodePen.

Let’s break down one frame render of our updated example:

  1. We render the renderBufferB result to renderBufferA
  2. We render our “ABC” text to renderBufferA, compositing it on top of the renderBufferB result from step 1 (we do not clear the contents of the canvas on new renders, because we set renderer.autoClearColor = false)
  3. We pass the generated renderBufferA texture to postFXMesh, apply a small offset vec2(0.005) to its UVs when looking up the texture color and fade it out a bit by multiplying the result by 0.975
  4. We render postFXMesh to the device screen
  5. We swap renderBufferA with renderBufferB (ping-ponging)

For each new frame render, we will repeat steps 1 to 5. This way, the previous target framebuffer we rendered to will be used as an input to the current render and so on. You can clearly see this effect visually in the last demo: notice how, as the ping-ponging progresses, more and more offset is applied to the UVs and the previous frames fade out more and more.

Applying simplex noise and mouse interaction

Now that we have implemented and can see the ping-pong technique working correctly, we can get creative and expand on it.

Instead of simply adding an offset in our fragment shader as before:

vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));

Let’s actually use simplex noise for a more interesting visual result. We will also control the direction using our mouse position.

Here is our updated fragment shader:

// Pass in elapsed time since start of our program
uniform float time;

// Pass in normalised mouse position
// (-1 to 1 horizontally and vertically)
uniform vec2 mousePos;

// <Insert snoise function definition from the link above here>

// Calculate different offsets for x and y by using the UVs
// and different time offsets to the snoise method
float a = snoise(vec3(v_uv * 1.0, time * 0.1)) * 0.0032;
float b = snoise(vec3(v_uv * 1.0, time * 0.1 + 100.0)) * 0.0032;

// Add the snoise offset multiplied by the normalised mouse position
// to the UVs
vec4 inputColor = texture2D(sampler, v_uv + vec2(a, b) + mousePos * 0.005);

We also need to specify mousePos and time as inputs to our postFXMesh material shader:

const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
    time: { value: 0 },
    mousePos: { value: new THREE.Vector2(0, 0) }
  },
  // ...
})

Finally, let’s make sure we attach a mousemove event listener to our page and pass the updated normalised mouse coordinates from JavaScript to our GLSL fragment shader:

// ... initialisation step

// Attach mousemove event listener
document.addEventListener('mousemove', onMouseMove)

function onMouseMove (e) {
  // Normalise horizontal mouse pos from -1 to 1
  const x = (e.pageX / innerWidth) * 2 - 1

  // Normalise vertical mouse pos from -1 to 1
  const y = (1 - e.pageY / innerHeight) * 2 - 1

  // Pass normalised mouse coordinates to fragment shader
  postFXMesh.material.uniforms.mousePos.value.set(x, y)
}

// ... animation loop
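
One detail the snippets above do not show is advancing the time uniform we declared on the postFXMesh material. A minimal sketch of this, assuming we reuse the THREE.Clock created at the top of our skeleton app, could look like this:

function onAnimLoop () {
  // Advance the elapsed time so the simplex noise offsets animate over time
  // (clock is the THREE.Clock instance created during initialisation)
  postFXMesh.material.uniforms.time.value = clock.getElapsedTime()

  // ... the rest of the ping-pong render loop from the previous snippets
  // stays exactly the same: render to renderBufferA, render the scene on top,
  // render postFXScene to the device screen and finally swap the two buffers
}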

With these changes in place, here is our final result. Make sure to hover around it (you might have to wait a moment for everything to load):

See the Pen Step 5: Perlin Noise and mouse interaction by Georgi Nikoloff (@gbnikolov) on CodePen.

Conclusion

Framebuffers are a powerful tool in WebGL that allows us to greatly enhance our scenes via post-processing and achieve all kinds of cool effects. Some techniques require more than one framebuffer, as we saw, and it is up to us as developers to mix and match them however we need to achieve our desired visuals.

I encourage you to experiment with the provided examples, try to render more elements, alternate the “ABC” text color between each renderBufferA and renderBufferB swap to achieve different color mixing, and so on.
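
As a rough sketch of that last suggestion (not part of the original demos), we could redraw the canvas-backed texture with a new color and flag it for upload, assuming labelTextureCanvas, labelMaterial and LABEL_TEXT from the skeleton app are still in scope:

// Grab the same 2D context we used when first drawing the text;
// its font, textAlign and textBaseline settings are still in place
const labelCtx = labelTextureCanvas.getContext('2d')
const COLORS = ['#ff0055', '#00ffaa', '#ffffff']
let colorIndex = 0

function recolorLabel () {
  // Wipe the old glyphs and redraw them with the next color in the palette
  labelCtx.clearRect(0, 0, labelTextureCanvas.width, labelTextureCanvas.height)
  labelCtx.fillStyle = COLORS[colorIndex++ % COLORS.length]
  labelCtx.fillText(LABEL_TEXT, labelTextureCanvas.width / 2, labelTextureCanvas.height / 2)

  // Tell threejs that the canvas-backed texture has changed
  labelMaterial.map.needsUpdate = true
}

// Call recolorLabel() right after swapping renderBufferA and renderBufferB
// in the animation loop (or on a timer, if every frame is too fast).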

In the first demo, you can see a specific example of how this typography effect could be used and the second demo is a playground for you to try some different settings (just open the controls in the top right corner).
