Adding a Persistence Effect to Three.js Scenes

If you have written any WebGL applications in the past, be it using the vanilla API or a helper library such as Three.js, you know that you set up the things you want to render, perhaps include different types of cameras, animations and fancy lighting, and voilà, the results are rendered to the default WebGL framebuffer, which is the device screen.

Framebuffers are a key feature in WebGL when it comes to creating advanced graphical effects such as depth-of-field, bloom, film grain or various types of anti-aliasing. They allow us to “post-process” our scenes, applying different effects on them once rendered.

This article assumes some intermediate knowledge of WebGL with Three.js. The core ideas behind framebuffers have already been covered in-depth in this article here on Codrops. Please make sure to read it first, as the persistence effect we will be achieving directly builds on top of these ideas.

Persistence effect in a nutshell

I call it persistence, though I am not really sure that is the best name for this effect; I am simply unaware of the proper term. What is it useful for?

We can use it subtly to blend each previous and current animation frame together or perhaps a bit less subtly to hide bad framerate. Looking at video games like Grand Theft Auto, we can simulate our characters getting drunk. Another thing that comes to mind is rendering the view from the cockpit of a spacecraft when traveling at supersonic speed. Or, since the effect is just so good looking in my opinion, use it for all kinds of audio visualisers, cool website effects and so on.

To achieve it, we first need to create two WebGL framebuffers. Since we will be using threejs for this demo, we will use THREE.WebGLRenderTarget for them. We will call them Framebuffer 1 and Framebuffer 2 from now on, and they will have the same dimensions as the canvas we are rendering to.

To draw one frame of our animation with persistence, we will need to perform the following steps (condensed into a short pseudocode sketch right after the list):

  1. Render the contents of Framebuffer 1 to Framebuffer 2 with a shader that fades to a certain color. For the purposes of our demo, we will use a pure black color with full opacity
  2. Render our threejs scene that holds the actual meshes we want to show on the screen to Framebuffer 2 as well
  3. Render the contents of Framebuffer 2 to the Default WebGL framebuffer (device screen)
  4. Swap Framebuffer 1 with Framebuffer 2
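
In pseudocode terms, one frame of the loop boils down to the sketch below. The helper names are placeholders only; the concrete Three.js implementation is built up step by step later in this article.

// Pseudocode only: the helpers below are placeholders, not real API calls
function renderOneFrameWithPersistence () {
  fadeInto(framebuffer2, framebuffer1)  // Step 1: draw Framebuffer 1 into Framebuffer 2, faded towards our fade color
  renderSceneInto(framebuffer2)         // Step 2: draw the animated meshes on top, also into Framebuffer 2
  copyToScreen(framebuffer2)            // Step 3: draw Framebuffer 2 to the default framebuffer (device screen)
  swapFramebuffers()                    // Step 4: swap Framebuffer 1 and Framebuffer 2 for the next frame
}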

Afterwards, for each new animation frame, we will need to repeat the above steps. Let’s illustrate each step:

Here is a visualisation of our framebuffers. WebGL gives us the Default framebuffer, represented by the device screen, automatically. It’s up to us as developers to manually create Framebuffer 1 and Framebuffer 2. No animation or rendering has happened yet, so the pixel contents of all 3 framebuffers are empty.

Our 2 manually created framebuffers on the left, the default device screen framebuffer on the right. Nothing has been drawn yet.

Step 1: we need to render the contents of Framebuffer 1 to Framebuffer 2 with a shader that fades to a certain color. As said, we will use a black color, but for illustration purposes I am fading out to a transparent white color with opacity 0.2. As Framebuffer 1 is empty, this will result in an empty Framebuffer 2:

Even though we have rendered Framebuffer 1 to Framebuffer 2, at this point we still have framebuffers with empty pixel contents.

Step 2: we need to render our threejs scene that holds our meshes / cameras / lighting / etc to Framebuffer 2. Please notice that both Step 1 and Step 2 render on top of each other to Framebuffer 2.

Our threejs scene rendered to Framebuffer 2.

Step 3: After we have successfully rendered Step 1 and Step 2 to Framebuffer 2, we need to render Framebuffer 2 itself to the Default framebuffer:

Final result rendered to the device screen.

Step 4: Now we need to swap Framebuffer 1 with Framebuffer 2. We then clear the new Framebuffer 2 (the old Framebuffer 1) and the Default framebuffer:

Framebuffer 1 and Framebuffer 2 swapped.

Now comes the interesting part, since Framebuffer 1 is no longer empty. Let’s go over each step once again:

Step 1: we need to render the contents of Framebuffer 1 to Framebuffer 2 with a shader that fades to a certain color. Let’s assume a transparent white color with 0.2 opacity.

We have rendered and faded out the pixel contents of Framebuffer 1 to Framebuffer 2 by a factor of 0.2.

Step 2: we need to render our threejs scene to Framebuffer 2. For illustration purposes, let’s assume we have an animation that slowly moves our 3D cube to the right, meaning that now it will be a few pixels to the right:

Once again, we render both Framebuffer 1 and our threejs scene to Framebuffer 2. Notice how the threejs scene is rendered on top of the faded out contents of Framebuffer 1.

Step 3: After we have successfully rendered Step 1 and Step 2 to Framebuffer 2, we need to render Framebuffer 2 itself to the Default framebuffer:

The pixel contents of Framebuffer 2 copied over to the device screen.

Step 4: Now we need to swap Framebuffer 1 with Framebuffer 2. We then clear the new Framebuffer 2 (the old Framebuffer 1) and the Default framebuffer:

Rinse and repeat. Back to Step 1.

I hope you can see a pattern emerging. If we repeat this process enough times, we will start accumulating each new frame on top of the previous, faded ones. Here is how it would look after enough repetitions:

Postprocessing on our threejs scene

Here is the demo we will build in this article. The result of repeating the process above enough times is evident:

Notice the accumulated trails. On every animation loop, we repeat steps 1 to 4.

So with this theory out of the way, let’s create this effect with threejs!

Our skeleton app

Let’s write a simple threejs app that will animate a bunch of objects around the screen and use a perspective camera to look at them. I will not explain the code for my example here, as it does not really matter what we render, as long as there is some animation present so things move and we can observe the persistence.

I encourage you to disregard my example and draw something else yourself. Even the most basic spinning cube that moves around the screen will be enough. That being said, here is my scene:

See the Pen 1. Skeleton app by Georgi Nikoloff (@gbnikolov) on CodePen.

The important thing to keep in mind here is that this demo renders to the default framebuffer, represented by the device screen, that WebGL automatically gives us. There are no extra framebuffers involved in this demo up to this point.

Achieving the persistence

Let’s add the code needed to achieve actual persistence. We will start by introducing a THREE.OrthographicCamera.

Orthographic camera can be useful for rendering 2D scenes and UI elements, amongst other things.

threejs docs

Remember, framebuffers allow us to render to image buffers in the video card’s memory instead of the device screen. These image buffers are represented by the THREE.Texture class and are automatically created for us when we create our Framebuffer 1 and Framebuffer 2 by instantiating a new THREE.WebGLRenderTarget. In order to display these textures back to the device screen, we need to create two 2D fullscreen quads that span the width and height of our monitor. Since these quads will be 2D, THREE.OrthographicCamera is best suited to display them.

const leftScreenBorder = -innerWidth / 2
const rightScreenBorder = innerWidth / 2
const topScreenBorder = -innerHeight / 2
const bottomScreenBorder = innerHeight / 2
const near = -100
const far = 100
const orthoCamera = new THREE.OrthographicCamera(
  leftScreenBorder,
  rightScreenBorder,
  topScreenBorder,
  bottomScreenBorder,
  near,
  far
)
orthoCamera.position.z = -10
orthoCamera.lookAt(new THREE.Vector3(0, 0, 0))

As a next step, let’s create a fullscreen quad geometry using THREE.PlaneGeometry:

const fullscreenQuadGeometry = new THREE.PlaneGeometry(innerWidth, innerHeight)

Using our newly created 2D quad geometry, let’s create two fullscreen planes. I will call them fadePlane and resultPlane. They will use THREE.ShaderMaterial and THREE.MeshBasicMaterial respectively:

// To achieve the fading out to black, we will use THREE.ShaderMaterial
const fadeMaterial = new THREE.ShaderMaterial({
  // Pass the texture result of our rendering to Framebuffer 1 as uniform variable
  uniforms: {
    inputTexture: { value: null }
  },
  vertexShader: `
    // Declare a varying variable for texture coordinates
    varying vec2 vUv;

    void main () {
      // Set each vertex position according to the
      // orthographic camera position and projection
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  
      // Pass the plane texture coordinates as interpolated varying
      // variable to the fragment shader
      vUv = uv;
    }
  `,
  fragmentShader: `
    // Pass the texture from Framebuffer 1
    uniform sampler2D inputTexture;

    // Consume the interpolated texture coordinates
    varying vec2 vUv;

    void main () {
      // Get pixel color from texture
      vec4 texColor = texture2D(inputTexture, vUv);

      // Our fade-out color
      vec4 fadeColor = vec4(0.0, 0.0, 0.0, 1.0);

      // this step achieves the actual fading out
      // mix texColor into fadeColor by a factor of 0.05
      // you can change the value of the factor and see
      // the result will change accordingly
      gl_FragColor = mix(texColor, fadeColor, 0.05);
    }
  `
})

// Create our fadePlane
const fadePlane = new THREE.Mesh(
  fullscreenQuadGeometry,
  fadeMaterial
)

// create our resultPlane
// Please notice we don't use fancy shader materials for resultPlane
// We will use it simply to copy the contents of Framebuffer 2 to the device screen
// So we can just use the .map property of THREE.MeshBasicMaterial
const resultMaterial = new THREE.MeshBasicMaterial({ map: null })
const resultPlane = new THREE.Mesh(
  fullscreenQuadGeometry,
  resultMaterial
)

We will use fadePlane to perform step 1 and step 2 from the list above (rendering the previous frame represented by Framebuffer 1, fading it out to black color and finally rendering our original threejs scene on top). We will render to Framebuffer 2 and update its corresponding THREE.Texture.

We will use the resulting texture of Framebuffer 2 as an input to resultPlane. This time, we will render to the device screen. We will essentially copy the contents of Framebuffer 2 to the Default Framebuffer (device screen), thus achieving step 3.

Up next, let’s actually create our Framebuffer 1 and Framebuffer 2. They are represented by THREE.WebGLRenderTarget:

// Create two extra framebuffers manually
// It is important we use let instead of const variables,
// as we will need to swap them as discussed in Step 4!
let framebuffer1 = new THREE.WebGLRenderTarget(innerWidth, innerHeight)
let framebuffer2 = new THREE.WebGLRenderTarget(innerWidth, innerHeight)

// Before we start using these framebuffers by rendering to them,
// let's explicitly clear their pixel contents to #111111
// If we don't do this, our persistence effect will end up wrong,
// due to how accumulation between step 1 and 3 works. 
// The first frame will never fade out when we mix Framebuffer 1 to
// Framebuffer 2 and will be always visible.
// This bug is better observed, rather than explained, so please
// make sure to comment out these lines and see the change for yourself.
renderer.setClearColor(0x111111)
renderer.setRenderTarget(framebuffer1)
renderer.clearColor()
renderer.setRenderTarget(framebuffer2)
renderer.clearColor()

As you might have guessed already, we will achieve step 4 as described above by swapping framebuffer1 and framebuffer2 at the end of each animation frame.

At this point we have everything ready and initialised: our THREE.OrthographicCamera, our two quads that will fade out and copy the contents of our framebuffers to the device screen and, of course, the framebuffers themselves. It should be noted that up until this point we did not change our animation loop code and logic; rather, we just created these new objects at the initialisation step of our program. Let’s now put them to use in our rendering loop.

Here is what my function that is executed on each animation frame looks like right now, taken directly from the CodePen example above:

function drawFrame (timeElapsed) {
  for (let i = 0; i < meshes.length; i++) {
     const mesh = meshes[i]
     // some animation logic that moves each mesh around the screen
     // with different radius, offset and speed
     // ...
  }
  // Render our entire scene to the device screen, represented by
  // the default WebGL framebuffer
  renderer.render(scene, perspectiveCamera)
}

If you have written any threejs code before, this drawFrame method should not be any news to you. We apply some animation logic to our meshes and then render them to the device screen by calling renderer.render() on the whole scene with the appropriate camera.

Let’s incorporate our steps 1 to 4 from above and achieve our persistence:

function drawFrame (timeElapsed) {
  // The for loop remains unchanged from the previous snippet
  for (let i = 0; i < meshes.length; i++) {
     // ...
  }

  // By default, threejs clears the pixel color buffer when
  // calling renderer.render()
  // We want to disable it explicitly, since both step 1 and step 2 render
  // to Framebuffer 2 accumulatively
  renderer.autoClearColor = false

  // Set Framebuffer 2 as active WebGL framebuffer to render to
  renderer.setRenderTarget(framebuffer2)

  // Step 1
  // Render the image buffer associated with Framebuffer 1 to Framebuffer 2
  // fading it out to pure black by a factor of 0.05 in the fadeMaterial
  // fragment shader
  fadePlane.material.uniforms.inputTexture.value = framebuffer1.texture
  renderer.render(fadePlane, orthoCamera)

  // Step 2
  // Render our entire scene to Framebuffer 2, on top of the faded out 
  // texture of Framebuffer 1.
  renderer.render(scene, perspectiveCamera)

  // Set the Default Framebuffer (device screen) represented by null as active WebGL framebuffer to render to.
  renderer.setRenderTarget(null)
  
  // Step 3
  // Copy the pixel contents of Framebuffer 2 by passing them as a texture
  // to resultPlane and rendering it to the Default Framebuffer (device screen)
  resultPlane.material.map = framebuffer2.texture
  renderer.render(resultPlane, orthoCamera)

  // Step 4
  // Swap Framebuffer 1 and Framebuffer 2
  const swap = framebuffer1
  framebuffer1 = framebuffer2
  framebuffer2 = swap

  // End of the effect
  // When the next animation frame is executed, the meshes will be animated
  // and the whole process will repeat
}

And with these changes out of the way, here is our updated example using persistence:

See the Pen 2. Persistence by Georgi Nikoloff (@gbnikolov) on CodePen.

Applying texture transformations

Now that we have our effect properly working, we can get more creative and expand on top of it.

You might remember this snippet from the fragment shader code where we faded out the contents of Framebuffer 1 to Framebuffer 2:

void main () {
   // Get pixel color from texture
   vec4 texColor = texture2D(inputTexture, vUv);

   // Our fade-out color
   vec4 fadeColor = vec4(0.0, 0.0, 0.0, 1.0);

   // mix texColor into fadeColor by a factor of 0.05
   gl_FragColor = mix(texColor, fadeColor, 0.05);
}

When we sample from our inputTexture, we can scale our texture coordinates down by a factor of 0.0075 (multiplying them by 0.9925), effectively upscaling the previous frame slightly on each pass, like so:

vec4 texColor = texture2D(inputTexture, vUv * 0.9925);

With this transformation applied to our texture coordinates, here is our updated example:

See the Pen 3. Peristence with upscaled texture coords by Georgi Nikoloff (@gbnikolov) on CodePen.

Or how about increasing our fade factor from 0.05 to 0.2?

gl_FragColor = mix(texColor, fadeColor, 0.2);

This will intensify the effect of fadeColor by a factor of four, thus shortening our persistence trails:

See the Pen 4. Reduced persistence by Georgi Nikoloff (@gbnikolov) on CodePen.

But why stop there? Here is a final demo that provides you with UI controls to tweak the scale, rotation and fade factor parameters. It uses THREE.Matrix3, and more specifically its setUvTransform method, which allows us to express the translation, scale and rotation of our texture coordinates as a 3×3 matrix.

We can then pass this 3×3 matrix as another uniform to our vertex shader and apply it to the texture coordinates. Here is the updated vertexShader property of our fadeMaterial:

// pass the texture coordinate matrix as another uniform variable
uniform mat3 uvMatrix;
varying vec2 vUv;
void main () {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  // Since our texture coordinates, represented by uv, are a vector with
  // 2 floats and our matrix holds 9 floats, we need to temporarily
  // add an extra dimension to the texture coordinates to make the
  // multiplication possible.
  // In the end, we simply grab the .xy of the final result, thus
  // transforming it back to vec2
  vUv = (uvMatrix * vec3(uv, 1.0)).xy;
}
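
The Javascript side of this is not shown above. One possible way to feed the uniform, assuming uvMatrix has been added to fadeMaterial’s uniforms next to inputTexture and using arbitrary example values, looks roughly like this:

// Hypothetical setup: fadeMaterial declares a uvMatrix uniform alongside inputTexture, e.g.
// uniforms: { inputTexture: { value: null }, uvMatrix: { value: new THREE.Matrix3() } }
const uvScale = 0.9925   // example scale factor
const uvRotation = 0.001 // example rotation in radians

// setUvTransform(offsetX, offsetY, repeatX, repeatY, rotation, centerX, centerY)
// scales and rotates the texture coordinates around the texture center (0.5, 0.5)
fadeMaterial.uniforms.uvMatrix.value.setUvTransform(0, 0, uvScale, uvScale, uvRotation, 0.5, 0.5)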

And here is the result. I also added controls for the different parameters:

See the Pen 5. Parameterised persistence by Georgi Nikoloff (@gbnikolov) on CodePen.

Conclusion

Framebuffers are a powerful tool in WebGL that allow us to greatly enhance our scenes via post-processing and achieve all kinds of cool effects. Some techniques require more than one framebuffer, as we saw, and it is up to us as developers to mix and match them however we need to achieve our desired visuals.


Creating a Typography Motion Trail Effect with Three.js

Framebuffers are a key feature in WebGL when it comes to creating advanced graphical effects such as depth-of-field, bloom, film grain or various types of anti-aliasing and have already been covered in-depth here on Codrops. They allow us to “post-process” our scenes, applying different effects on them once rendered. But how exactly do they work?

By default, WebGL (and also Three.js and all other libraries built on top of it) render to the default framebuffer, which is the device screen. If you have used Three.js or any other WebGL framework before, you know that you create your mesh with the correct geometry and material, render it, and voilà, it’s visible on your screen.

However, we as developers can create new framebuffers besides the default one and explicitly instruct WebGL to render to them. By doing so, we render our scenes to image buffers in the video card’s memory instead of the device screen. Afterwards, we can treat these image buffers like regular textures and apply filters and effects before eventually rendering them to the device screen.

Here is a video breaking down the post-processing and effects in Metal Gear Solid 5: Phantom Pain that really brings home the idea. Notice how it starts with footage from the actual game rendered to the default framebuffer (device screen) and then breaks down how each framebuffer looks. All of these framebuffers are composited together on each frame and the result is the final picture you see when playing the game:

So with the theory out of the way, let’s create a cool typography motion trail effect by rendering to a framebuffer!

Our skeleton app

Let’s render some 2D text to the default framebuffer, i.e. device screen, using threejs. Here is our boilerplate:

const LABEL_TEXT = 'ABC'

const clock = new THREE.Clock()
const scene = new THREE.Scene()

// Create a threejs renderer:
// 1. Size it correctly
// 2. Set default background color
// 3. Append it to the page
const renderer = new THREE.WebGLRenderer()
renderer.setClearColor(0x222222)
renderer.setClearAlpha(0)
renderer.setSize(innerWidth, innerHeight)
renderer.setPixelRatio(devicePixelRatio || 1)
document.body.appendChild(renderer.domElement)

// Create an orthographic camera that covers the entire screen
// 1. Position it correctly in the positive Z dimension
// 2. Orient it towards the scene center
const orthoCamera = new THREE.OrthographicCamera(
  -innerWidth / 2,
  innerWidth / 2,
  innerHeight / 2,
  -innerHeight / 2,
  0.1,
  10,
)
orthoCamera.position.set(0, 0, 1)
orthoCamera.lookAt(new THREE.Vector3(0, 0, 0))

// Create a plane geometry that spans either the entire
// viewport height or width depending on which one is bigger
const labelMeshSize = innerWidth > innerHeight ? innerHeight : innerWidth
const labelGeometry = new THREE.PlaneBufferGeometry(
  labelMeshSize,
  labelMeshSize
)

// Programmatically create a texture that will hold the text
let labelTextureCanvas
{
  // Canvas and corresponding context2d to be used for
  // drawing the text
  labelTextureCanvas = document.createElement('canvas')
  const labelTextureCtx = labelTextureCanvas.getContext('2d')

  // Dynamic texture size based on the device capabilities
  const textureSize = Math.min(renderer.capabilities.maxTextureSize, 2048)
  const relativeFontSize = 20
  // Size our text canvas
  labelTextureCanvas.width = textureSize
  labelTextureCanvas.height = textureSize
  labelTextureCtx.textAlign = 'center'
  labelTextureCtx.textBaseline = 'middle'

  // Dynamic font size based on the texture size
  // (based on the device capabilities)
  labelTextureCtx.font = `${relativeFontSize}px Helvetica`
  const textWidth = labelTextureCtx.measureText(LABEL_TEXT).width
  const widthDelta = labelTextureCanvas.width / textWidth
  const fontSize = relativeFontSize * widthDelta
  labelTextureCtx.font = `${fontSize}px Helvetica`
  labelTextureCtx.fillStyle = 'white'
  labelTextureCtx.fillText(LABEL_TEXT, labelTextureCanvas.width / 2, labelTextureCanvas.height / 2)
}
// Create a material with our programmatically created text
// texture as input
const labelMaterial = new THREE.MeshBasicMaterial({
  map: new THREE.CanvasTexture(labelTextureCanvas),
  transparent: true,
})

// Create a plane mesh, add it to the scene
const labelMesh = new THREE.Mesh(labelGeometry, labelMaterial)
scene.add(labelMesh)

// Start our animation render loop
renderer.setAnimationLoop(onAnimLoop)

function onAnimLoop() {
  // On each new frame, render the scene to the default framebuffer 
  // (device screen)
  renderer.render(scene, orthoCamera)
}

This code simply initialises a threejs scene, adds a 2D plane with a text texture to it and renders it to the default framebuffer (device screen). If we execute it with threejs included in our project, we will get this:

See the Pen Step 1: Render to default framebuffer by Georgi Nikoloff (@gbnikolov) on CodePen.

Again, since we don’t explicitly specify otherwise, we are rendering to the default framebuffer (device screen).

Now that we managed to render our scene to the device screen, let’s add a framebuffer (THREE.WebGLRenderTarget) and render our scene to a texture in the video card memory.

Rendering to a framebuffer

Let’s start by creating a new framebuffer when we initialise our app:

const clock = new THREE.Clock()
const scene = new THREE.Scene()

// Create a new framebuffer we will use to render to
// the video card memory
const renderBufferA = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)

// ... rest of application

Now that we have created it, we must explicitly instruct threejs to render to it instead of the default framebuffer, i.e. device screen. We will do this in our program animation loop:

function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)
  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)
}

And here is our result:

See the Pen Step 2: Render to a framebuffer by Georgi Nikoloff (@gbnikolov) on CodePen.

As you can see, we are getting an empty screen, yet our program contains no errors – so what happened? Well, we are no longer rendering to the device screen, but another framebuffer! Our scene is being rendered to a texture in the video card memory, so that’s why we see the empty screen.

In order to display this generated texture containing our scene back to the default framebuffer (device screen), we need to create another 2D plane that will cover the entire screen of our app and pass the texture as material input to it.

First we will create a fullscreen 2D plane that will span the entire device screen:

// ... rest of initialisation step

// Create a second scene that will hold our fullscreen plane
const postFXScene = new THREE.Scene()

// Create a plane geometry that covers the entire screen
const postFXGeometry = new THREE.PlaneBufferGeometry(innerWidth, innerHeight)

// Create a plane material that expects a sampler texture input
// We will pass our generated framebuffer texture to it
const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
  },
  // vertex shader will be in charge of positioning our plane correctly
  vertexShader: `
      varying vec2 v_uv;

      void main () {
        // Set the correct position of each plane vertex
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);

        // Pass in the correct UVs to the fragment shader
        v_uv = uv;
      }
    `,
  fragmentShader: `
      // Declare our texture input as a "sampler" variable
      uniform sampler2D sampler;

      // Consume the correct UVs from the vertex shader to use
      // when displaying the generated texture
      varying vec2 v_uv;

      void main () {
        // Sample the correct color from the generated texture
        vec4 inputColor = texture2D(sampler, v_uv);
        // Set the correct color of each pixel that makes up the plane
        gl_FragColor = inputColor;
      }
    `
})
const postFXMesh = new THREE.Mesh(postFXGeometry, postFXMaterial)
postFXScene.add(postFXMesh)

// ... animation loop code here, same as before

As you can see, we are creating a new scene that will hold our fullscreen plane. After creating it, we need to augment our animation loop to render the generated texture from the previous step to the fullscreen plane on our screen:

function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)

  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)
  
  // 👇
  // Set the device screen as the framebuffer to render to
  // In WebGL, framebuffer "null" corresponds to the default 
  // framebuffer!
  renderer.setRenderTarget(null)

  // 👇
  // Assign the generated texture to the sampler variable used
  // in the postFXMesh that covers the device screen
  postFXMesh.material.uniforms.sampler.value = renderBufferA.texture

  // 👇
  // Render the postFX mesh to the default framebuffer
  renderer.render(postFXScene, orthoCamera)
}

After including these snippets, we can see our scene once again rendered on the screen:

See the Pen Step 3: Display the generated framebuffer on the device screen by Georgi Nikoloff (@gbnikolov) on CodePen.

Let’s recap the necessary steps needed to produce this image on our screen on each render loop:

  1. Create a renderBufferA framebuffer that will allow us to render to a separate texture in the user’s device video memory
  2. Create our “ABC” plane mesh
  3. Render the “ABC” plane mesh to renderBufferA instead of the device screen
  4. Create a separate fullscreen plane mesh that expects a texture as an input to its material
  5. Render the fullscreen plane mesh back to the default framebuffer (device screen) using the generated texture created by rendering the “ABC” mesh to renderBufferA

Achieving the persistence effect by using two framebuffers

We don’t have much use of framebuffers if we are simply displaying them as they are to the device screen, as we do right now. Now that we have our setup ready, let’s actually do some cool post-processing.

First, we want to create yet another framebuffer, renderBufferB, and make sure both it and renderBufferA are let variables, rather than consts. That’s because we will swap them at the end of each render so we can achieve framebuffer ping-ponging.

“Ping-ponging” in WebGL is a technique that alternates the use of a framebuffer as either input or output. It is a neat trick that allows for general purpose GPU computations and is used in effects such as Gaussian blur, where in order to blur our scene we need to:

  1. Render it to framebuffer A using a 2D plane and apply horizontal blur via the fragment shader
  2. Render the resulting horizontally blurred image from step 1 to framebuffer B and apply vertical blur via the fragment shader
  3. Swap framebuffer A and framebuffer B
  4. Keep repeating steps 1 to 3, incrementally applying blur, until the desired Gaussian blur radius is achieved (a rough code sketch of this loop follows the list).
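
Expressed in Three.js terms, that loop might look roughly like the following sketch. The names blurMaterial, blurQuad, readTarget and writeTarget are hypothetical; the blur material is assumed to expose inputTexture (sampler) and direction (vec2) uniforms that implement the horizontal or vertical blur in its fragment shader, and the scene is assumed to have been rendered into readTarget beforehand.

// A rough sketch of ping-ponging two render targets for a two-pass blur
let readTarget = new THREE.WebGLRenderTarget(innerWidth, innerHeight)
let writeTarget = new THREE.WebGLRenderTarget(innerWidth, innerHeight)
const blurIterations = 4 // arbitrary example value

for (let i = 0; i < blurIterations; i++) {
  // Horizontal pass: read from one target, write to the other
  blurMaterial.uniforms.inputTexture.value = readTarget.texture
  blurMaterial.uniforms.direction.value.set(1, 0)
  renderer.setRenderTarget(writeTarget)
  renderer.render(blurQuad, orthoCamera)
  ;[readTarget, writeTarget] = [writeTarget, readTarget] // swap

  // Vertical pass: the previous output is now the input
  blurMaterial.uniforms.inputTexture.value = readTarget.texture
  blurMaterial.uniforms.direction.value.set(0, 1)
  renderer.setRenderTarget(writeTarget)
  renderer.render(blurQuad, orthoCamera)
  ;[readTarget, writeTarget] = [writeTarget, readTarget] // swap again
}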

Here is a small chart illustrating the steps needed to achieve ping-pong:

So with that in mind, we will render the contents of renderBufferA into renderBufferB using the postFXMesh we created and apply some special effect via the fragment shader.

Let’s kick things off by creating our renderBufferB:

let renderBufferA = new THREE.WebGLRenderTarget(
  // ...
)
// Create a second framebuffer
let renderBufferB = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)

Next up, let’s augment our animation loop to actually do the ping-pong technique:

function onAnimLoop() {
  // 👇
  // Do not clear the contents of the canvas on each render
  // In order to achieve our ping-pong effect, we must draw
  // the new frame on top of the previous one!
  renderer.autoClearColor = false

  // 👇
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)

  // 👇
  // Render the postFXScene to renderBufferA.
  // This will contain our ping-pong accumulated texture
  renderer.render(postFXScene, orthoCamera)

  // 👇
  // Render the original scene containing ABC again on top
  renderer.render(scene, orthoCamera)
  
  // Same as before
  // ...
  // ...
  
  // 👇
  // Ping-pong our framebuffers by swapping them
  // at the end of each frame render
  const temp = renderBufferA
  renderBufferA = renderBufferB
  renderBufferB = temp
}

If we are to render our scene again with these updated snippets, we will see no visual difference, even though we do in fact alternate between the two framebuffers to render it. That’s because, as it is right now, we do not apply any special effects in the fragment shader of our postFXMesh.

Let’s change our fragment shader like so:

// Sample the correct color from the generated texture
// 👇
// Notice how we now apply a slight 0.005 offset to our UVs when
// looking up the correct texture color

vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));
// Set the correct color of each pixel that makes up the plane
// 👇
// We fade out the color from the previous step to 97.5% of
// whatever it was before
gl_FragColor = vec4(inputColor * 0.975);

With these changes in place, here is our updated program:

See the Pen Step 4: Create a second framebuffer and ping-pong between them by Georgi Nikoloff (@gbnikolov) on CodePen.

Let’s break down one frame render of our updated example:

  1. We render the renderBufferB result to renderBufferA
  2. We render our “ABC” text to renderBufferA, compositing it on top of the renderBufferB result from step 1 (we do not clear the contents of the canvas on new renders, because we set renderer.autoClearColor = false)
  3. We pass the generated renderBufferA texture to postFXMesh, apply a small offset vec2(0.005) to its UVs when looking up the texture color and fade it out a bit by multiplying the result by 0.975
  4. We render postFXMesh to the device screen
  5. We swap renderBufferA with renderBufferB (ping-ponging)

For each new frame render, we will repeat steps 1 to 5. This way, the framebuffer we previously rendered to will be used as an input to the current render and so on. You can clearly see this effect visually in the last demo: notice how, as the ping-ponging progresses, more and more offset is applied to the UVs and the opacity fades out more and more.

Applying simplex noise and mouse interaction

Now that we have implemented and can see the ping-pong technique working correctly, we can get creative and expand on it.

Instead of simply adding an offset in our fragment shader as before:

vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));

Let’s actually use simplex noise for a more interesting visual result. We will also control the direction using our mouse position.

Here is our updated fragment shader:

// Pass in elapsed time since start of our program
uniform float time;

// Pass in normalised mouse position
// (-1 to 1 horizontally and vertically)
uniform vec2 mousePos;

// <Insert snoise function definition from the link above here>

// Calculate different offsets for x and y by using the UVs
// and different time offsets to the snoise method
float a = snoise(vec3(v_uv * 1.0, time * 0.1)) * 0.0032;
float b = snoise(vec3(v_uv * 1.0, time * 0.1 + 100.0)) * 0.0032;

// Add the snoise offset multiplied by the normalised mouse position
// to the UVs
vec4 inputColor = texture2D(sampler, v_uv + vec2(a, b) + mousePos * 0.005);

We also need to specify mousePos and time as inputs to our postFXMesh material shader:

const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
    time: { value: 0 },
    mousePos: { value: new THREE.Vector2(0, 0) }
  },
  // ...
})

Finally, let’s make sure we attach a mousemove event listener to our page and pass the updated normalised mouse coordinates from Javascript to our GLSL fragment shader:

// ... initialisation step

// Attach mousemove event listener
document.addEventListener('mousemove', onMouseMove)

function onMouseMove (e) {
  // Normalise horizontal mouse pos from -1 to 1
  const x = (e.pageX / innerWidth) * 2 - 1

  // Normalise vertical mouse pos from -1 to 1
  const y = (1 - e.pageY / innerHeight) * 2 - 1

  // Pass normalised mouse coordinates to fragment shader
  postFXMesh.material.uniforms.mousePos.value.set(x, y)
}

// ... animation loop

With these changes in place, here is our final result. Make sure to hover around it (you might have to wait a moment for everything to load):

See the Pen Step 5: Perlin Noise and mouse interaction by Georgi Nikoloff (@gbnikolov) on CodePen.

Conclusion

Framebuffers are a powerful tool in WebGL that allow us to greatly enhance our scenes via post-processing and achieve all kinds of cool effects. Some techniques require more than one framebuffer, as we saw, and it is up to us as developers to mix and match them however we need to achieve our desired visuals.

I encourage you to experiment with the provided examples: try to render more elements, alternate the “ABC” text color between each renderBufferA and renderBufferB swap to achieve different color mixing, etc.
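
As one example of such an experiment, the label color could be alternated by redrawing the canvas texture between frames. Here is a rough sketch that reuses labelTextureCanvas, labelTextureCtx, labelMesh and LABEL_TEXT from the boilerplate above; the color choice and the frame counter are arbitrary:

// Hypothetical experiment: flip the label color on every frame before rendering
let frameCount = 0
function alternateLabelColor () {
  labelTextureCtx.clearRect(0, 0, labelTextureCanvas.width, labelTextureCanvas.height)
  labelTextureCtx.fillStyle = frameCount % 2 === 0 ? 'white' : 'hotpink'
  labelTextureCtx.fillText(LABEL_TEXT, labelTextureCanvas.width / 2, labelTextureCanvas.height / 2)
  // Tell threejs that the underlying canvas has changed so it re-uploads the texture
  labelMesh.material.map.needsUpdate = true
  frameCount++
}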

In the first demo, you can see a specific example of how this typography effect could be used and the second demo is a playground for you to try some different settings (just open the controls in the top right corner).


Drawing Graphics with the CSS Paint API

CSS Paint is an API that allows developers to programmatically generate and draw graphics where CSS expects an image.

It is part of CSS Houdini, an umbrella term for seven new low-level APIs that expose different parts of the CSS engine and allow developers to extend CSS by hooking into the styling and layout process of a browser’s rendering engine.

It enables developers to write code the browser can parse as CSS, thereby creating new CSS features without waiting for them to be implemented natively in browsers.

Today we will explore two particular APIs that are part of the CSS Houdini umbrella:

  1. CSS Paint, which at the time of writing this article, has been fully implemented in Chrome, Opera and Edge and is available in Firefox and Safari via a polyfill.
  2. CSS Properties and Values API, which will allow us to explicitly define our CSS variables, their initial values, what type of values they support and whether these variables can be inherited.

CSS Paint provides us with the ability to render graphics using a PaintWorklet, a stripped down version of the CanvasRenderingContext2D. The major differences are:

  • No support for text rendering
  • No direct pixel access / manipulation

With these two omissions in mind, anything you can draw using canvas2d, you can draw on a regular DOM element using the CSS Paint API. For those of you who have done any graphics using canvas2d, you should be right at home.

Furthermore, we as developers have the ability to pass CSS variables as inputs to our PaintWorklet and control its presentation using custom predefined attributes.

This allows for a high degree of customisation, even by designers who may not necessarily be familiar with Javascript.

You can see more examples here and here. And with that out of the way, let’s get to coding!

Simplest example: two diagonal lines

Let’s create a CSS paintlet that, once loaded, will draw two diagonal lines across the surface of the DOM element we apply it to. The paintlet drawing surface size will adapt to the width and height of the DOM element and we will be able to control the diagonal line thickness by passing in a CSS variable.

Creating our PaintWorklet

In order to load a PaintWorklet, we will need to create it as a separate Javascript file (diagonal-lines.js).

const PAINTLET_NAME = 'diagonal-lines'

class CSSPaintlet {

  // 👉 Define the names of the input CSS variables we will support
  static get inputProperties() {
    return [
      `--${PAINTLET_NAME}-line-width`,
    ]
  }

  // 👉 Define names for input CSS arguments supported in paint()
  // ⚠ This part of the API is still experimental and hidden
  //    behind a flag.
  static get inputArguments () {
    return []
  }

  // 👉 paint() will be executed every time:
  //  - any input property changes
  //  - the DOM element we apply our paintlet to changes its dimensions
  paint(ctx, paintSize, props) {
    // 👉 Obtain the numeric value of our line width that is passed
    //    as a CSS variable
    const lineWidth = Number(props.get(`--${PAINTLET_NAME}-line-width`))

    ctx.lineWidth = lineWidth

    // 🎨 Draw diagonal line #1
    ctx.beginPath()
    ctx.moveTo(0, 0)
    ctx.lineTo(paintSize.width, paintSize.height)
    ctx.stroke()

    // 🎨 Draw diagonal line #2
    ctx.beginPath()
    ctx.moveTo(0, paintSize.height)
    ctx.lineTo(paintSize.width, 0)
    ctx.stroke()
  }
}

// 👉 Register our CSS Paintlet with the correct name
//    so we can reference it from our CSS
registerPaint(PAINTLET_NAME, CSSPaintlet)

We define our CSS paintlet as a standalone class. This class needs only one method to work – paint(), which will draw the graphics on top of the surface we assign our CSS paintlet to. It will be executed upon changing any of the CSS variables our paintlet relies on or when our DOM element changes its dimensions.

The other static method, inputProperties(), is optional. It tells the CSS paintlet exactly which input CSS variables it supports. In our case, that would be --diagonal-lines-line-width. We declare it as an input property and consume it for use in our paint() method. It is important we cast it to a number by wrapping it in Number() to ensure cross-browser support.

There is yet another optional static method supported: inputArguments. It exposes arguments to our paint() method like so:

#myImage {
  background-image: paint(myWorklet, 30px, red, 10deg);
}

However, this part of the CSS paintlet API is still hidden behind a flag and considered experimental. For ease of use and compatibility, we will not be covering it in this article, but I encourage you to read up on it on your own. Instead, we will use CSS variables via the inputProperties() method to control all of the inputs to our paintlet.
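
For completeness, here is a rough sketch of what declaring and reading such arguments might look like; since the feature is experimental, the exact shape may change and the names below are illustrative only.

// Corresponds to e.g. background-image: paint(myWorklet, 30px, red, 10deg);
class MyWorklet {
  static get inputArguments () {
    // Argument syntaxes follow the CSS value grammar
    return ['<length>', '<color>', '<angle>']
  }

  paint (ctx, paintSize, props, args) {
    // Each argument arrives as a CSSStyleValue; toString() gives its textual form
    const size = args[0].toString()  // e.g. "30px"
    const color = args[1].toString() // e.g. "red"
    const angle = args[2].toString() // e.g. "10deg"
    // ... draw something with these values
  }
}

registerPaint('myWorklet', MyWorklet)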

Registering our CSS PaintWorklet

Afterwards we must reference our CSS paintlet and register it on our main page. It is important we conditionally load the awesome css-paint-polyfill package, which will ensure our paintlets work in Firefox and Safari.

It should be noted that alongside our CSS paintlet, we can use the new CSS Properties and Values API, also part of the Houdini umbrella, to explicitly define our CSS variable inputs via CSS.registerProperty(). We can control the following aspects of our CSS variables:

  • Their types & syntax
  • Whether this CSS variable inherits from any parent elements
  • What their initial value is if the user does not specify one

This API is also not supported in Firefox and Safari, but we can still use it in Chromium browsers. This way we will future-proof our demos and browsers that don’t support it will simply ignore it.

;(async function() {
  // ⚠ Handle Firefox and Safari by importing a polyfill for CSS Paint
  if (CSS['paintWorklet'] === undefined) {
    await import('https://unpkg.com/css-paint-polyfill')
  }

  // 👉 Explicitly define our custom CSS variable
  //    This is not supported in Safari and Firefox, so they will
  //    ignore it, but we can optionally use it in browsers that 
  //    support it. 
  //    This way we will future-proof our applications so once Safari
  //    and Firefox support it, they will benefit from these
  //    definitions too.
  //
  //    Make sure that the browser treats it as a number
  //    It does not inherit its value
  //    Its initial value defaults to 1
  if ('registerProperty' in CSS) {
    CSS.registerProperty({
      name: '--diagonal-lines-line-width',
      syntax: '<number>',
      inherits: false,
      initialValue: 1
    })
  }

  // 👉 Include our separate paintlet file
  CSS.paintWorklet.addModule('path/to/our/external/worklet/diagonal-lines.js')
})()

Referencing our paintlet as a CSS background

Once we have included our paintlet as a JS file, using it is dead simple. We select the target DOM element we want to style and apply our paintlet via the paint() CSS function in our stylesheet:

#myElement {
   /* 👉 Reference our CSS paintlet by its registered name */
   background-image: paint(diagonal-lines);

   /* 👉 Pass in the custom CSS variable to be used in our CSS paintlet */
   --diagonal-lines-line-width: 10;

   /* 👉 Remember - the browser treats this as a regular image
      referenced in CSS. We can control its repeat, size, position
      and any other background related property available */
   background-repeat: no-repeat;
   background-size: cover;
   background-position: 50% 50%;

   /* Some more styles to make sure we can see our element on the page */
   border: 1px solid red;
   width: 200px;
   height: 200px;
   margin: 0 auto;
}

And with this code out of the way, here is what we will get:

Remember, we can apply this CSS paintlet as a background to any DOM element with any dimensions. Let’s blow up our DOM element to fullscreen, lower its background-size x and y values and set its background-repeat to repeat. Here is our updated example:

We are using the same CSS paintlet from our previous example, but now we have expanded it to cover the whole demo page.

So now that we covered our base example and saw how to organise our code, let’s write some nicer looking demos!

Particle Connections

See the Pen CSS Worklet Particles by Georgi Nikoloff (@gbnikolov) on CodePen.

This paintlet was inspired by the awesome demo by @nucliweb.

Again, for those of you who have used the canvas2d API to draw graphics in the past, this will be pretty straightforward.

We control how many points we are going to render via the `--dots-connections-count` CSS variable. Once we obtain its numeric value in our paintlet, we create an array with the appropriate size and fill it with objects with random x, y and radius properties.

Then we loop over each item in the array, draw a sphere at its coordinates, find the neighbours that are closer than the minimum distance (controlled via the `--dots-connections-connection-min-dist` CSS variable) and connect them with lines.

We will also control the spheres’ fill color and the lines’ stroke color via the `--dots-connections-fill-color` and `--dots-connections-stroke-color` CSS variables respectively.

Here is the complete worklet code:

const PAINTLET_NAME = 'dots-connections'

class CSSPaintlet {
  // 👉 Define names for input CSS variables we will support
  static get inputProperties() {
    return [
      `--${PAINTLET_NAME}-line-width`,
      `--${PAINTLET_NAME}-stroke-color`,
      `--${PAINTLET_NAME}-fill-color`,
      `--${PAINTLET_NAME}-connection-min-dist`,
      `--${PAINTLET_NAME}-count`,
    ]
  }

  // 👉 Our paint method to be executed when CSS vars change
  paint(ctx, paintSize, props, args) {
    const lineWidth = Number(props.get(`--${PAINTLET_NAME}-line-width`))
    const minDist = Number(props.get(`--${PAINTLET_NAME}-connection-min-dist`))
    const strokeColor = props.get(`--${PAINTLET_NAME}-stroke-color`)
    const fillColor = props.get(`--${PAINTLET_NAME}-fill-color`)
    const numParticles = Number(props.get(`--${PAINTLET_NAME}-count`))
    
    // 👉 Generate particles at random positions
    //    across our DOM element surface
    const particles = new Array(numParticles).fill(null).map(_ => ({
      x: Math.random() * paintSize.width,
      y: Math.random() * paintSize.height,
      radius: 2 + Math.random() * 2,
    }))
    
    // 👉 Assign lineWidth coming from CSS variables and make sure
    //    lineCap and lineJoin are round
    ctx.lineWidth = lineWidth
    ctx.lineJoin = 'round'
    ctx.lineCap = 'round'
    
    // 👉 Loop over the particles with nested loops - O(n^2)
    for (let i = 0; i < numParticles; i++) {
      const particle = particles[i]
      // 👉 Loop second time 
      for (let n = 0; n < numParticles; n++) {
        if (i === n) {
          continue
        }
        const nextParticle = particles[n]
        // 👉 Calculate the distance between the current particle
        //    and the other particle from the inner loop
        const dx = nextParticle.x - particle.x
        const dy = nextParticle.y - particle.y
        const dist = Math.sqrt(dx * dx + dy * dy)
        // 👉 If the dist is smaller then the minDist specified via
        //    CSS variable, then we will connect them with a line
        if (dist < minDist) {
          ctx.strokeStyle = strokeColor
          ctx.beginPath()
          ctx.moveTo(nextParticle.x, nextParticle.y)
          ctx.lineTo(particle.x, particle.y)
          // 👉 Draw the connecting line
          ctx.stroke()
        }
      }
      // Finally draw the particle at the right position
      ctx.fillStyle = fillColor
      ctx.beginPath()
      ctx.arc(particle.x, particle.y, particle.radius, 0, Math.PI * 2)
      ctx.closePath()
      ctx.fill()
    }
    
  }
}

// 👉 Register our CSS paintlet with a unique name
//    so we can reference it from our CSS
registerPaint(PAINTLET_NAME, CSSPaintlet)

Line Loop

Here is our next example. It expects the following CSS variables as inputs to our paintlet:

--loop-line-width
--loop-stroke-color
--loop-sides
--loop-scale
--loop-rotation

We loop around a full circle (PI * 2) and position points along its perimeter based on the --loop-sides CSS variable. For each position, we loop around the full circle again and connect it to all other positions via ctx.lineTo() commands:

const PAINTLET_NAME = 'loop'

class CSSPaintlet {
  // 👉 Define names for input CSS variables we will support
  static get inputProperties() {
    return [
      `--${PAINTLET_NAME}-line-width`,
      `--${PAINTLET_NAME}-stroke-color`,
      `--${PAINTLET_NAME}-sides`,
      `--${PAINTLET_NAME}-scale`,
      `--${PAINTLET_NAME}-rotation`,
    ]
  }
  // 👉 Our paint method to be executed when CSS vars change
  paint(ctx, paintSize, props, args) {
    const lineWidth = Number(props.get(`--${PAINTLET_NAME}-line-width`))
    const strokeColor = props.get(`--${PAINTLET_NAME}-stroke-color`)
    const numSides = Number(props.get(`--${PAINTLET_NAME}-sides`))
    const scale = Number(props.get(`--${PAINTLET_NAME}-scale`))
    const rotation = Number(props.get(`--${PAINTLET_NAME}-rotation`))
    
    const angle = Math.PI * 2 / numSides
    const radius = paintSize.width / 2
    ctx.save()
    ctx.lineWidth = lineWidth
    ctx.lineJoin = 'round'
    ctx.lineCap = 'round'
    ctx.strokeStyle = strokeColor
    ctx.translate(paintSize.width / 2, paintSize.height / 2)
    ctx.rotate(rotation * (Math.PI / 180))
    ctx.scale(scale / 100, scale / 100)
    ctx.moveTo(0, radius)

    // 👉 Loop over the numsides twice in nested loop - O(n^2)
    //    Connect each corner with all other corners
    for (let i = 0; i < numSides; i++) {
      const x = Math.sin(i * angle) * radius
      const y = Math.cos(i * angle) * radius
      for (let n = i; n < numSides; n++) {
        const x2 = Math.sin(n * angle) * radius
        const y2 = Math.cos(n * angle) * radius
        ctx.lineTo(x, y)
        ctx.lineTo(x2, y2);
      }
    }
    ctx.closePath()
    ctx.stroke()
    ctx.restore()
  }   
}

// 👉 Register our CSS paintlet with a unique name
//    so we can reference it from our CSS
registerPaint(PAINTLET_NAME, CSSPaintlet)

Noise Button

Here is our next example. It is inspired by this other awesome CSS Paintlet by Jhey Tompkins. It expects the following CSS variables as inputs to our paintlet:

--grid-size
--grid-color
--grid-noise-scale

The paintlet itself uses perlin noise (code courtesy of joeiddon) to control the opacity of each individual cell.

const PAINTLET_NAME = 'grid'

class CSSPaintlet {
  // 👉 Define names for input CSS variables we will support
  static get inputProperties() {
    return [
      `--${PAINTLET_NAME}-size`,
      `--${PAINTLET_NAME}-color`,
      `--${PAINTLET_NAME}-noise-scale`
    ]
  }

  // 👉 Our paint method to be executed when CSS vars change
  paint(ctx, paintSize, props, args) {
    const gridSize = Number(props.get(`--${PAINTLET_NAME}-size`))
    const color = props.get(`--${PAINTLET_NAME}-color`)
    const noiseScale = Number(props.get(`--${PAINTLET_NAME}-noise-scale`))

    ctx.fillStyle = color
    for (let x = 0; x < paintSize.width; x += gridSize) {
      for (let y = 0; y < paintSize.height; y += gridSize) {
        // 👉 Use perlin noise to determine the cell opacity
        ctx.globalAlpha = mapRange(perlin.get(x * noiseScale, y * noiseScale), -1, 1, 0.5, 1)
        ctx.fillRect(x, y, gridSize, gridSize)
      }
    }
  }
}

// 👉 Register our CSS paintlet with a unique name
//    so we can reference it from our CSS
registerPaint(PAINTLET_NAME, CSSPaintlet)
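
Note that mapRange is not part of the Paint API; it is a small helper that remaps a value from one range to another and would live in the same worklet file, next to the perlin noise code mentioned above. A minimal implementation might look like this:

// Remap a value from the range [inMin, inMax] to the range [outMin, outMax]
function mapRange (value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin)
}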

Curvy dividers

As a last example, let’s do something perhaps a bit more useful. We will programmatically draw dividers to separate the text content of our page:

And as usual, here is the CSS paintlet code:

const PAINTLET_NAME = 'curvy-dividor'

class CSSPaintlet {
  // 👉 Define names for input CSS variables we will support
  static get inputProperties() {
    return [
      `--${PAINTLET_NAME}-points-count`,
      `--${PAINTLET_NAME}-line-width`,
      `--${PAINTLET_NAME}-stroke-color`
    ]
  }
  // 👉 Our paint method to be executed when CSS vars change
  paint(ctx, paintSize, props, args) {
    const pointsCount = Number(props.get(`--${PAINTLET_NAME}-points-count`))
    const lineWidth = Number(props.get(`--${PAINTLET_NAME}-line-width`))
    const strokeColor = props.get(`--${PAINTLET_NAME}-stroke-color`)
    
    const stepX = paintSize.width / pointsCount
    
    ctx.lineWidth = lineWidth
    ctx.lineJoin = 'round'
    ctx.lineCap = 'round'
    
    ctx.strokeStyle = strokeColor
    
    const offsetUpBound = -paintSize.height / 2
    const offsetDownBound = paintSize.height / 2
    
    // 👉 Draw quadratic bezier curves across the horizontal axis
    //    of our dividers:
    ctx.moveTo(-stepX / 2, paintSize.height / 2)
    for (let i = 0; i < pointsCount; i++) {
      const x = (i + 1) * stepX - stepX / 2
      const y = paintSize.height / 2 + (i % 2 === 0 ? offsetDownBound : offsetUpBound)
      const nextx = (i + 2) * stepX - stepX / 2
      const nexty = paintSize.height / 2 + (i % 2 === 0 ? offsetUpBound : offsetDownBound)
      const ctrlx = (x + nextx) / 2
      const ctrly = (y + nexty) / 2
      ctx.quadraticCurveTo(x, y, ctrlx, ctrly)
    }
    ctx.stroke()
  }
}

// 👉 Register our CSS paintlet with a unique name
//    so we can reference it from our CSS
registerPaint(PAINTLET_NAME, CSSPaintlet)

Conclusion

In this article we went through all the key components and methods of the CSS Paint API. It is pretty easy to set up and very useful if we want to draw more advanced graphics that CSS does not support out of the box.

We can easily create a library out of these CSS paintlets and keep reusing them across our projects with minimum setup required.

As a good practice, I encourage you to find cool canvas2d demos and port them to the new CSS Paint API.


Drawing 2D Metaballs with WebGL2

While many people shy away from writing vanilla WebGL and immediately jump to frameworks such as three.js or PixiJS, it is possible to achieve great visuals and complex animation with relatively small amounts of code. Today, I would like to present core WebGL concepts while programming some simple 2D visuals. This article assumes at least some higher-level knowledge of WebGL through a library.

Please note: WebGL2 has been around for years, yet Safari only recently enabled it behind a flag. It is a pretty significant upgrade from WebGL1 and brings tons of new useful features, some of which we will take advantage of in this tutorial.

What are we going to build

From a high level standpoint, to implement our 2D metaballs we need two steps:

  • Draw a bunch of rectangles with radial linear gradient starting from their centers and expanding to their edges. Draw a lot of them and alpha blend them together in a separate framebuffer.
  • Take the resulting image with the blended quads from step #1, scan its pixels one by one and decide the new color of the pixel depending on its opacity. For example, if the pixel has an opacity smaller than 0.5, render it in red. Otherwise render it in yellow and so on.
Rendering multiple 2D quads and turning them to metaballs with post-processing.
Left: Multiple quads rendered with radial gradient, alpha blended and rendered to a texture.
Right: Post-processing on the generated texture and rendering the result to the device screen. Conditional coloring of each pixel based on opacity.

Don’t worry if these terms don’t make a lot of sense just yet – we will go over each of the steps needed in detail. Let’s jump into the code and start building!

Bootstrapping our program

We will start things by

  • Creating a HTMLCanvasElement, sizing it to our device viewport and inserting it into the page DOM
  • Obtaining a WebGL2RenderingContext to use for drawing stuff
  • Setting the correct WebGL viewport and the background color for our scene
  • Starting a requestAnimationFrame loop that will draw our scene as fast as the device allows. The speed is determined by various factors such as the hardware, current CPU / GPU workloads, battery levels, user preferences and so on. For smooth animation we are going to aim for 60FPS.
/* Create our canvas and obtain it's WebGL2RenderingContext */
const canvas = document.createElement('canvas')
const gl = canvas.getContext('webgl2')

/* Handle error somehow if no WebGL2 support */
if (!gl) {
  // ...
}

/* Size our canvas and listen for resize events */
resizeCanvas()
window.addEventListener('resize', resizeCanvas)

/* Append our canvas to the DOM and set its background-color with CSS */
canvas.style.backgroundColor = 'black'
document.body.appendChild(canvas)

/* Issue first frame paint */
requestAnimationFrame(updateFrame)

function updateFrame (timestampMs) {
   /* Set our program viewport to fit the actual size of our monitor, taking devicePixelRatio into account */
   gl.viewport(0, 0, canvas.width, canvas.height)
   /* Set the WebGL background colour to be transparent */
   gl.clearColor(0, 0, 0, 0)
   /* Clear the current canvas pixels */
   gl.clear(gl.COLOR_BUFFER_BIT)

   /* Issue next frame paint */
   requestAnimationFrame(updateFrame)
}

function resizeCanvas () {
   /*
      We need to account for devicePixelRatio when sizing our canvas.
      We will use it to obtain the actual pixel size of our viewport and size our canvas to match it.
      We will then downscale it back to CSS units so it neatly fills our viewport and we benefit from downsampling antialiasing
      We also need to limit it because it can really slow down our program. Modern iPhones have a devicePixelRatio of 3. This means rendering 9x more pixels each frame!

      More info: https://webglfundamentals.org/webgl/lessons/webgl-resizing-the-canvas.html 
   */
   const dpr = devicePixelRatio > 2 ? 2 : devicePixelRatio
   canvas.width = innerWidth * dpr
   canvas.height = innerHeight * dpr
   canvas.style.width = `${innerWidth}px`
   canvas.style.height = `${innerHeight}px`
}

Drawing a quad

The next step is to actually draw a shape. WebGL has a rendering pipeline, which dictates how the object you draw, along with its corresponding geometry and material, ends up on the device screen. WebGL is essentially just a rasterising engine, in the sense that you give it properly formatted data and it produces pixels for you.

The full rendering pipeline is out of scope for this tutorial, but you can read more about it here. Let’s break down what exactly we need for our program:

Defining our geometry and its attributes

Each object we draw in WebGL is represented as a WebGLProgram running on the device GPU. It consists of input variables and a vertex and fragment shader that operate on these variables. The vertex shader’s responsibility is to position our geometry correctly on the device screen and the fragment shader’s responsibility is to control its appearance.

It’s up to us as developers to write our vertex and fragment shaders, compile them on the device GPU and link them into a GLSL program. Once we have successfully done this, we must query this program’s input variable locations that were allocated on the GPU for us, supply correctly formatted data to them, enable them and instruct WebGL how to unpack and use our data.

To render our quad, we need 3 input variables:

  1. a_position will dictate the position of each vertex of our quad geometry. We will pass it as an array of 12 floats, i.e. 2 triangles with 3 points per triangle, each represented by 2 floats (x, y). This variable is an attribute, i.e. it is different for each of the points that make up our geometry.
  2. a_uv will describe the texture offset for each point of our geometry. They too will be described as an array of 12 floats. We will use this data not to texture our quad with an image, but to dynamically create a radial gradient from the quad center. This variable is also an attribute and will also be different for each of our geometry points.
  3. u_projectionMatrix will be an input variable represented as a 32bit float array of 16 items that will dictate how we transform our geometry positions, described in pixel values, to the normalised WebGL coordinate system. This variable is a uniform; unlike the previous two, it will not change for each geometry position.

We can take advantage of Vertex Array Object to store the description of our GLSL program input variables, their locations on the GPU and how should they be unpacked and used.

WebGLVertexArrayObjects or VAOs are 1st class citizens in WebGL2, unlike in WebGL1 where they were hidden behind an optional extension and their support was not guaranteed. They let us type less, execute fewer WebGL bindings and keep our drawing state in a single, easy-to-manage object that is simpler to track. They essentially store the description of our geometry and we can reference them later.
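
In its most bare-bones form, the pattern looks like this (the actual buffer and attribute calls are filled in in the snippets below):

/*
  Create a VAO and "record" our attribute setup into it.
  Everything between bindVertexArray(vao) and bindVertexArray(null)
  is stored in the VAO and can be replayed with a single bind later.
*/
const vao = gl.createVertexArray()

gl.bindVertexArray(vao)
/* bindBuffer(), enableVertexAttribArray() and vertexAttribPointer() calls go here */
gl.bindVertexArray(null)

/* Later, at draw time, a single bind restores all of that state */
gl.bindVertexArray(vao)
/* gl.drawArrays(...) */
gl.bindVertexArray(null)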

We need to write the shaders in GLSL 3.00 ES, which WebGL2 supports. Our vertex shader will be pretty simple:

/*
  Pass in geometry position and tex coord from the CPU
*/
in vec4 a_position;
in vec2 a_uv;

/*
  Pass in global projection matrix for each vertex
*/
uniform mat4 u_projectionMatrix;

/*
  Specify varying variable to be passed to fragment shader
*/
out vec2 v_uv;

void main () {
  /*
   We need to convert our quad points positions from pixels to the normalized WebGL coordinate system
  */
  gl_Position = u_projectionMatrix * a_position;
  v_uv = a_uv;
}

At this point, after we have successfully executed our vertex shader, WebGL will fill in the pixels between the points that make up the geometry on the device screen. The way the space between the points is filled depends on what primitives we are using for drawing – WebGL supports points, lines and triangles.

We as developers do not have control over this step.

After it has rasterised our geometry, it will execute our fragment shader on each generated pixel. The fragment shader’s responsibility is the final appearance of each generated pixel and whether it should even be rendered. Here is our fragment shader:

/*
  Set fragment shader float precision
*/
precision highp float;

/*
  Consume interpolated tex coord varying from vertex shader
*/
in vec2 v_uv;

/*
  Final color represented as a vector of 4 components - r, g, b, a
*/
out vec4 outColor;

void main () {
  /*
    This function will run on each pixel generated by our quad geometry
  */
  /*
    Calculate the distance for each pixel from the center of the quad (0.5, 0.5)
  */
  float dist = distance(v_uv, vec2(0.5)) * 2.0;
  /*
    Invert and clamp our distance from 0.0 to 1.0
  */
  float c = clamp(1.0 - dist, 0.0, 1.0);
  /*
    Use the distance to generate the pixel opacity. We have to explicitly enable alpha blending in WebGL to see the correct result
  */
  outColor = vec4(vec3(1.0), c);
}

Let’s write two utility methods: makeGLShader() to create and compile our GLSL shaders and makeGLProgram() to link them into a GLSL program to be run on the GPU:

/*
  Utility method to create a WebGLShader object and compile it on the device GPU
  https://developer.mozilla.org/en-US/docs/Web/API/WebGLShader
*/
function makeGLShader (shaderType, shaderSource) {
  /* Create a WebGLShader object with correct type */
  const shader = gl.createShader(shaderType)
  /* Attach the shaderSource string to the newly created shader */
  gl.shaderSource(shader, shaderSource)
  /* Compile our newly created shader */
  gl.compileShader(shader)
  const success = gl.getShaderParameter(shader, gl.COMPILE_STATUS)
  /* Return the WebGLShader if compilation was a success */
  if (success) {
    return shader
  }
  /* Otherwise log the error and delete the faulty shader */
  console.error(gl.getShaderInfoLog(shader))
  gl.deleteShader(shader)
}

/*
  Utility method to create a WebGLProgram object
  It will create both a vertex and fragment WebGLShader and link them into a program on the device GPU
  https://developer.mozilla.org/en-US/docs/Web/API/WebGLProgram
*/
function makeGLProgram (vertexShaderSource, fragmentShaderSource) {
  /* Create and compile vertex WebGLShader */
  const vertexShader = makeGLShader(gl.VERTEX_SHADER, vertexShaderSource)
  /* Create and compile fragment WebGLShader */
  const fragmentShader = makeGLShader(gl.FRAGMENT_SHADER, fragmentShaderSource)
  /* Create a WebGLProgram and attach our shaders to it */
  const program = gl.createProgram()
  gl.attachShader(program, vertexShader)
  gl.attachShader(program, fragmentShader)
  /* Link the newly created program on the device GPU */
  gl.linkProgram(program) 
  /* Return the WebGLProgram if linking was successful */
  const success = gl.getProgramParameter(program, gl.LINK_STATUS)
  if (success) {
    return program
  }
  /* Otherwise log errors to the console and delete the faulty WebGLProgram */
  console.error(gl.getProgramInfoLog(program))
  gl.deleteProgram(program)
}

And here is the complete code we need to add to our previous snippet in order to generate our geometry, compile our shaders and link them into a GLSL program:

const canvas = document.createElement('canvas')
/* rest of code */

/* Enable WebGL alpha blending */
gl.enable(gl.BLEND)
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)

/*
  Generate the Vertex Array Object and GLSL program
  we need to render our 2D quad
*/
const {
  quadProgram,
  quadVertexArrayObject,
} = makeQuad(innerWidth / 2, innerHeight / 2)

/* --------------- Utils ----------------- */

function makeQuad (positionX, positionY, width = 50, height = 50, drawType = gl.STATIC_DRAW) {
  /*
    Write our vertex and fragment shader programs as simple JS strings

    !!! Important !!!!
    
    WebGL2 requires GLSL 3.00 ES
    We need to declare this version on the FIRST LINE OF OUR PROGRAM
    Otherwise it would not work!
  */
  const vertexShaderSource = `#version 300 es
    /*
      Pass in geometry position and tex coord from the CPU
    */
    in vec4 a_position;
    in vec2 a_uv;
    
    /*
     Pass in global projection matrix for each vertex
    */
    uniform mat4 u_projectionMatrix;
    
    /*
      Specify varying variable to be passed to fragment shader
    */
    out vec2 v_uv;
    
    void main () {
      gl_Position = u_projectionMatrix * a_position;
      v_uv = a_uv;
    }
  `
  const fragmentShaderSource = `#version 300 es
    /*
      Set fragment shader float precision
    */
    precision highp float;
    
    /*
      Consume interpolated tex coord varying from vertex shader
    */
    in vec2 v_uv;
    
    /*
      Final color represented as a vector of 4 components - r, g, b, a
    */
    out vec4 outColor;
    
    void main () {
      float dist = distance(v_uv, vec2(0.5)) * 2.0;
      float c = clamp(1.0 - dist, 0.0, 1.0);
      outColor = vec4(vec3(1.0), c);
    }
  `
  /*
    Construct a WebGLProgram object out of our shader sources and link it on the GPU
  */
  const quadProgram = makeGLProgram(vertexShaderSource, fragmentShaderSource)
  
  /*
    Create a Vertex Array Object that will store a description of our geometry
    that we can reference later when rendering
  */
  const quadVertexArrayObject = gl.createVertexArray()
  
  /*
    1. Defining geometry positions
    
    Create the geometry points for our quad
        
    V6  _______ V5         V3
       |      /         /|
       |    /         /  |
       |  /         /    |
    V4 |/      V1 /______| V2
     
     We need two triangles to form a single quad
     As you can see, we end up duplicating vertices:
     V5 & V3 and V4 & V1 end up occupying the same position.
     
     There are better ways to prepare our data so we don't end up with
     duplicates, but let's keep it simple for this demo and duplicate them
     
     Unlike regular Javascript arrays, WebGL needs strongly typed data
     That's why we supply our positions as an array of 32 bit floating point numbers
  */
  const vertexArray = new Float32Array([
    /*
      First set of 3 points are for our first triangle
    */
    positionX - width / 2,  positionY + height / 2, // Vertex 1 (X, Y)
    positionX + width / 2,  positionY + height / 2, // Vertex 2 (X, Y)
    positionX + width / 2,  positionY - height / 2, // Vertex 3 (X, Y)
    /*
      Second set of 3 points are for our second triangle
    */
    positionX - width / 2, positionY + height / 2, // Vertex 4 (X, Y)
    positionX + width / 2, positionY - height / 2, // Vertex 5 (X, Y)
    positionX - width / 2, positionY - height / 2  // Vertex 6 (X, Y)
  ])

  /*
    Create a WebGLBuffer that will hold our triangles positions
  */
  const vertexBuffer = gl.createBuffer()
  /*
    Now that we've created a GLSL program on the GPU we need to supply data to it
    We need to supply our 32bit float array to the a_position variable used by the GLSL program
    
    When you link a vertex shader with a fragment shader by calling gl.linkProgram(someProgram)
    WebGL (the driver/GPU/browser) decides on its own which index/location to use for each attribute
    
    Therefore we need to find the location of a_position from our program
  */
  const a_positionLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_position')
  
  /*
    Bind the Vertex Array Object descriptor for this geometry
    Each geometry instruction from now on will be recorded under it
    
    To stop recording after we are done describing our geometry, we need to simply unbind it
  */
  gl.bindVertexArray(quadVertexArrayObject)

  /*
    Bind our WebGLBuffer that describes the geometry positions to the global gl.ARRAY_BUFFER binding point
  */
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
  /*
    Feed our 32bit float array that describes our quad to the vertexBuffer using the
    gl.ARRAY_BUFFER global handle
  */
  gl.bufferData(gl.ARRAY_BUFFER, vertexArray, drawType)
  /*
    We need to explicitly enable the a_position variable on the GPU
  */
  gl.enableVertexAttribArray(a_positionLocationOnGPU)
  /*
    Finally we need to instruct the GPU how to pull the data out of our
    vertexBuffer and feed it into the a_position variable in the GLSL program
  */
  /*
    Tell the attribute how to get data out of vertexBuffer (gl.ARRAY_BUFFER)
  */
  const size = 2           // 2 components per iteration
  const type = gl.FLOAT    // the data is 32bit floats
  const normalize = false  // don't normalize the data
  const stride = 0         // 0 = move forward size * sizeof(type) each iteration to get the next position
  const offset = 0         // start at the beginning of the buffer
  gl.vertexAttribPointer(a_positionLocationOnGPU, size, type, normalize, stride, offset)
  
  /*
    2. Defining geometry UV texCoords
    
    V6  _______ V5         V3
       |      /         /|
       |    /         /  |
       |  /         /    |
    V4 |/      V1 /______| V2
  */
  const uvsArray = new Float32Array([
    0, 0, // V1
    1, 0, // V2
    1, 1, // V3
    0, 0, // V4
    1, 1, // V5
    0, 1  // V6
  ])
  /*
    The rest of the code is exactly like in the vertices step above.
    We need to put our data in a WebGLBuffer, look up the a_uv variable
    in our GLSL program, enable it, supply data to it and instruct
    WebGL how to pull it out:
  */
  const uvsBuffer = gl.createBuffer()
  const a_uvLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_uv')
  gl.bindBuffer(gl.ARRAY_BUFFER, uvsBuffer)
  gl.bufferData(gl.ARRAY_BUFFER, uvsArray, drawType)
  gl.enableVertexAttribArray(a_uvLocationOnGPU)
  gl.vertexAttribPointer(a_uvLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
  
  /*
    Stop recording and unbind the Vertex Array Object descriptor for this geometry
  */
  gl.bindVertexArray(null)
  
  /*
    WebGL has a normalized viewport coordinate system which looks like this:
    
         Device Viewport
       ------- 1.0 ------  
      |         |         |
      |         |         |
    -1.0 --------------- 1.0
      |         |         | 
      |         |         |
       ------ -1.0 -------
       
     However as you can see, we pass the position and size of our quad in actual pixels
     To convert these pixel values to the normalized coordinate system, we will
     use the simplest 2D projection matrix.
     It will be represented as an array of 16 32bit floats
     
     You can read a gentle introduction to 2D matrices here
     https://webglfundamentals.org/webgl/lessons/webgl-2d-matrices.html
  */
  const projectionMatrix = new Float32Array([
    2 / innerWidth, 0, 0, 0,
    0, -2 / innerHeight, 0, 0,
    0, 0, 0, 0,
    -1, 1, 0, 1,
  ])
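  
  /*
    Why these values? Multiplying a pixel position (x, y) by this matrix gives:
      clipX = (2 / innerWidth) * x - 1    -> maps 0..innerWidth  to -1..1
      clipY = (-2 / innerHeight) * y + 1  -> maps 0..innerHeight to 1..-1
    i.e. exactly the pixel to normalized coordinate conversion described above,
    with the Y axis flipped so that pixel Y grows downwards like in the DOM
  */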
  
  /*
    In order to supply uniform data to our quad GLSL program, we first need to enable the GLSL program responsible for rendering our quad
  */
  gl.useProgram(quadProgram)
  /*
    Just like the a_position attribute variable earlier, we also need to look up
    the location of uniform variables in the GLSL program in order to supply them data
  */
  const u_projectionMatrixLocation = gl.getUniformLocation(quadProgram, 'u_projectionMatrix')
  /*
    Supply our projection matrix as a Float32Array of 16 items to the u_projectionMatrix uniform
  */
  gl.uniformMatrix4fv(u_projectionMatrixLocation, false, projectionMatrix)
  /*
    We have set up our uniform variables correctly, stop using the quad program for now
  */
  gl.useProgram(null)

  /*
    Return our GLSL program and the Vertex Array Object descriptor of our geometry
    We will need them to render our quad in our updateFrame method
  */
  return {
    quadProgram,
    quadVertexArrayObject,
  }
}

/* rest of code */
function makeGLShader (shaderType, shaderSource) {}
function makeGLProgram (vertexShaderSource, fragmentShaderSource) {}
function updateFrame (timestampMs) {}

We have successfully created a GLSL program quadProgram, which is running on the GPU, waiting to be used for drawing. We have also obtained a Vertex Array Object quadVertexArrayObject, which describes our geometry and can be referenced when we draw. We can now draw our quad. Let’s augment our updateFrame() method like so:

function updateFrame (timestampMs) {
   /* rest of our code */

  /*
    Bind the Vertex Array Object descriptor of our quad we generated earlier
  */
  gl.bindVertexArray(quadVertexArrayObject)
  /*
    Use our quad GLSL program
  */
  gl.useProgram(quadProgram)
  /*
    Issue a render command to paint our quad triangles
  */
  {
    const drawPrimitive = gl.TRIANGLES
    const vertexArrayOffset = 0
    const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
    gl.drawArrays(drawPrimitive, vertexArrayOffset, numberOfVertices)
  }
  /*
    After a successful render, it is good practice to unbind our
    GLSL program and Vertex Array Object so we keep the WebGL state clean.
    We will bind them again on the next render anyway
  */
  gl.useProgram(null)
  gl.bindVertexArray(null)

  /* Issue next frame paint */
  requestAnimationFrame(updateFrame)
}

And here is our result:

We can use the great SpectorJS Chrome extension to capture our WebGL operations on each frame. We can look at the entire command list with their associated visual states and context information. Here is what it takes to render a single frame with our updateFrame() call:

Draw calls needed to render a single 2D quad in the center of our screen.
A screenshot of all the steps we implemented to render a single quad. (Click to see a larger version)

Some gotchas:

  1. We declare the vertex positions of our triangles in counter-clockwise order. This is important.
  2. We need to explicitly enable blending in WebGL and specify its blend function. For our demo we use gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA), which multiplies the source color by its alpha and the destination color by 1 minus the source alpha.
  3. In our vertex shader you can see we expect the input variable a_position to be a vector with 4 components (vec4), while in Javascript we specify only 2 items per vertex. That’s because the default attribute value is 0, 0, 0, 1. It doesn’t matter that you’re only supplying x and y from your attributes. z defaults to 0 and w defaults to 1.
  4. As you can see, WebGL is a state machine, where you have to constantly bind stuff before you are able to work on it and you always have to make sure you unbind it afterwards. Consider how in the code snippet above we supplied a Float32Array with our positions to the vertexBuffer:
const vertexArray = new Float32Array([/* ... */])
const vertexBuffer = gl.createBuffer()
/* Bind our vertexBuffer to the global binding WebGL bind point gl.ARRAY_BUFFER */
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
/* At this point, gl.ARRAY_BUFFER represents vertexBuffer */
/* Supply data to our vertexBuffer using the gl.ARRAY_BUFFER binding point */
gl.bufferData(gl.ARRAY_BUFFER, vertexArray, gl.STATIC_DRAW)
/* Do a bunch of other stuff with the active gl.ARRAY_BUFFER (vertexBuffer) here */
// ...

/* After you have done your work, unbind it */
gl.bindBuffer(gl.ARRAY_BUFFER, null)

This is the total opposite of regular Javascript, where this same operation could be expressed like this, for example (pseudocode):

const vertexBuffer = gl.createBuffer()
vertexBuffer.addData(vertexArray)
vertexBuffer.setDrawOperation(gl.STATIC_DRAW)
// etc.

Coming from a Javascript background, I initially found WebGL’s state machine way of doing things, with its constant binding and unbinding, really odd. One must exercise good discipline and always make sure to unbind stuff after using it, even in trivial programs like ours! Otherwise you risk things not working and hard-to-track bugs.

Drawing lots of quads

We have successfully rendered a single quad, but in order to make things more interesting and visually appealing, we need to draw more.

As we have already seen, we can easily create new geometries with different positions using our makeQuad() utility helper. We can pass it different positions and sizes and compile each quad into a separate GLSL program to be executed on the GPU. This will work, however it does not scale well:

As we saw in our update loop method updateFrame, to render our quad on each frame we must:

  1. Use the correct GLSL program by calling gl.useProgram()
  2. Bind the correct VAO describing our geometry by calling gl.bindVertexArray()
  3. Issue a draw call with correct primitive type by calling gl.drawArrays()

So 3 WebGL commands in total.

What if we want to render 500 quads? Suddenly we jump to 500×3, or 1500 individual WebGL calls on each frame of our animation. If we want 1000 quads we jump up to 3000 individual calls, without even counting all of the preparation WebGL bindings we have to do before our updateFrame loop starts.

Geometry Instancing is a way to reduce these calls. It works by letting you tell WebGL how many times you want the same thing drawn (the number of instances) with minor variations, such as rotation, scale, position etc. Examples include trees, grass, crowd of people, boxes in a warehouse, etc.

Just like VAOs, instancing is a 1st class citizen in WebGL2 and, unlike in WebGL1, does not require extensions. Let’s augment our code to support geometry instancing and render 1000 quads with random positions.

First of all, we need to decide on how many quads we want rendered and prepare the offset positions for each one as a new array of 32bit floats. Let’s do 1000 quads and position them randomly in our viewport:

/* rest of code */

/* How many quads we want rendered */
const QUADS_COUNT = 1000
/*
  Array to store our quads positions
  We need to lay out our array as a continuous set
  of numbers, where each pair represents the X and Y
  of a single 2D position.
  
  Hence for 1000 quads we need an array of 2000 items
  or 1000 pairs of X and Y
*/
const quadsPositions = new Float32Array(QUADS_COUNT * 2)
for (let i = 0; i < QUADS_COUNT; i++) {
  /*
    Generate a random X and Y position
  */
  const randX = Math.random() * innerWidth
  const randY = Math.random() * innerHeight
  /*
    Set the correct X and Y for each pair in our array
  */
  quadsPositions[i * 2 + 0] = randX
  quadsPositions[i * 2 + 1] = randY
}

/*
  We also need to augment our makeQuad() method
  It no longer expects a single position, rather an array of positions
*/
const {
  quadProgram,
  quadVertexArrayObject,
} = makeQuad(quadsPositions)

/* rest of code */

Instead of a single position, we will now pass an array of positions into our makeQuad() method. Let’s augment this method to receive our offsets array and expose it to our shaders as a new input variable a_offset, which will contain the correct XY offset for a particular instance. To do this, we need to prepare our offsets as a new WebGLBuffer and instruct WebGL how to unpack them, just like we did for a_position and a_uv:

function makeQuad (quadsPositions, width = 70, height = 70, drawType = gl.STATIC_DRAW) {
  /* rest of code */

  /*
    Add offset positions for our individual instances
    They are declared and used in exactly the same way as
    "a_position" and "a_uv" above
  */
  const offsetsBuffer = gl.createBuffer()
  const a_offsetLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_offset')
  gl.bindBuffer(gl.ARRAY_BUFFER, offsetsBuffer)
  gl.bufferData(gl.ARRAY_BUFFER, quadsPositions, drawType)
  gl.enableVertexAttribArray(a_offsetLocationOnGPU)
  gl.vertexAttribPointer(a_offsetLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
  /*
    HOWEVER, we must add an additional WebGL call to set this attribute to only
    change per instance, instead of per vertex like a_position and a_uv above
  */
  const instancesDivisor = 1
  gl.vertexAttribDivisor(a_offsetLocationOnGPU, instancesDivisor)
  
  /*
    Stop recording and unbind the Vertex Array Object descriptor for this geometry
  */
  gl.bindVertexArray(null)

  /* rest of code */
}

We need to augment our original vertexArray responsible for passing data into our a_position GLSL variable. We no longer need to offset it to the desired position like in the first example; the a_offset variable will now take care of this in the vertex shader:

const vertexArray = new Float32Array([
  /*
    First set of 3 points are for our first triangle
  */
 -width / 2,  height / 2, // Vertex 1 (X, Y)
  width / 2,  height / 2, // Vertex 2 (X, Y)
  width / 2, -height / 2, // Vertex 3 (X, Y)
  /*
    Second set of 3 points are for our second triangle
  */
 -width / 2,  height / 2, // Vertex 4 (X, Y)
  width / 2, -height / 2, // Vertex 5 (X, Y)
 -width / 2, -height / 2  // Vertex 6 (X, Y)
])

We also need to augment our vertex shader to consume and use the new a_offset input variable we pass from Javascript:

const vertexShaderSource = `#version 300 es
  /* rest of GLSL code */
  /*
    This input vector will change once per instance
  */
  in vec4 a_offset;

  void main () {
     /* Account for a_offset in the final geometry position */
     vec4 newPosition = a_position + a_offset;
     gl_Position = u_projectionMatrix * newPosition;
  }
  /* rest of GLSL code */
`

As a final step, we need to change the drawArrays call in our updateFrame to drawArraysInstanced to account for instancing. This new method expects the exact same arguments and adds instanceCount as the last one:

function updateFrame (timestampMs) {
   /* rest of code */
   {
     const drawPrimitive = gl.TRIANGLES
     const vertexArrayOffset = 0
     const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
     gl.drawArraysInstanced(drawPrimitive, vertexArrayOffset, numberOfVertices, QUADS_COUNT)
   }
   /* rest of code */
}

And with all these changes, here is our updated example:

Even though we increased the number of rendered objects by 1000x, we are still making 3 WebGL calls on each frame. That’s a pretty great performance win!

Steps needed so our WebGL can draw 1000 of quads via geometry instancing.
All WebGL calls needed to draw our 1000 quads in a single updateFrame() call. Note that the number of needed calls did not increase from the previous example thanks to instancing.

Post Processing with a fullscreen quad

Now that we have our 1000 quads successfully rendering to the device screen on each frame, we can turn them into metaballs. As we established, we need to scan the pixels of the picture we generated in the previous steps and determine the alpha value of each pixel. If it is below a certain threshold, we discard it, otherwise we color it.

To do this, instead of rendering our scene directly to the screen as we do right now, we need to render it to a texture. We will do our post processing on this texture and render the result to the device screen.

Post-Processing is a technique used in graphics that allows you to take a current input texture, and manipulate its pixels to produce a transformed image. This can be used to apply shiny effects like volumetric lighting, or any other filter type effect you’ve seen in applications like Photoshop or Instagram.

Nicolas Garcia Belmonte

The basic technique for creating these effects is pretty straightforward:

  1. A WebGLTexture is created with the same size as the canvas and attached as a color attachment to a WebGLFramebuffer. At the beginning of our updateFrame() method, the framebuffer is set as the render target, and the entire scene is rendered normally to it.
  2. Next, a full-screen quad is rendered to the device screen using the texture generated in step 1 as an input. The shader used during the rendering of the quad is what contains the post-process effect.

Creating a texture and framebuffer to render to

A framebuffer is just a collection of attachments. Attachments are either textures or renderbuffers. Let’s create a WebGLTexture and attach it to a framebuffer as the first color attachment:

/* rest of code */

const renderTexture = makeTexture()
const framebuffer = makeFramebuffer(renderTexture)

function makeTexture (textureWidth = canvas.width, textureHeight = canvas.height) {
  /*
    Create the texture that we will use to render to
  */
  const targetTexture = gl.createTexture()
  /*
    Just like everything else in WebGL up until now, we need to bind it
    so we can configure it. We will unbind it once we are done with it.
  */
  gl.bindTexture(gl.TEXTURE_2D, targetTexture)

  /*
    Define texture settings
  */
  const level = 0
  const internalFormat = gl.RGBA
  const border = 0
  const format = gl.RGBA
  const type = gl.UNSIGNED_BYTE
  /*
    Notice how data is null. That's because we don't have data for this texture just yet
    We just need WebGL to allocate the texture
  */
  const data = null
  gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, textureWidth, textureHeight, border, format, type, data)

  /*
    Set the filtering so we don't need mips
  */
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE)
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE)
  
  /*
    We are done configuring the texture, so unbind it and return it
  */
  gl.bindTexture(gl.TEXTURE_2D, null)

  return targetTexture
}

function makeFramebuffer (texture) {
  /*
    Create and bind the framebuffer
  */
  const fb = gl.createFramebuffer()
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb)
 
  /*
    Attach the texture as the first color attachment
  */
  const attachmentPoint = gl.COLOR_ATTACHMENT0
  const level = 0
  gl.framebufferTexture2D(gl.FRAMEBUFFER, attachmentPoint, gl.TEXTURE_2D, texture, level)

  /*
    We are done setting up the framebuffer, so unbind it and return it
  */
  gl.bindFramebuffer(gl.FRAMEBUFFER, null)

  return fb
}

We have successfully created a texture and attached it as a color attachment to a framebuffer. Now we can render our scene to it. Let’s augment our updateFrame() method:

function updateFrame () {
  gl.viewport(0, 0, canvas.width, canvas.height)
  gl.clearColor(0, 0, 0, 0)
  gl.clear(gl.COLOR_BUFFER_BIT)

  /*
    Bind the framebuffer we created
    From now on until we unbind it, each WebGL draw command will render in it
  */
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
  
  /*
    Set the offscreen framebuffer clear color.
    Its alpha must stay at 0, because our post-processing step later
    decides each pixel's color based on its alpha value
  */
  gl.clearColor(0.2, 0.2, 0.2, 0)
  /* Clear the offscreen framebuffer pixels */
  gl.clear(gl.COLOR_BUFFER_BIT)

  /*
    Code for rendering our instanced quads here
  */

  /*
    We have successfully rendered to the framebuffer at this point
    In order to render to the screen next, we need to unbind it
  */
  gl.bindFramebuffer(gl.FRAMEBUFFER, null)
  
  /* Issue next frame paint */
  requestAnimationFrame(updateFrame)
}

Let’s take a look at our result:

As you can see, we get an empty screen. There are no errors and the program is running just fine – keep in mind however that we are rendering to a separate framebuffer, not the default device screen framebuffer!

Breakdown of our WebGL scene and the steps needed to render it to a separate framebuffer.
Our program produces a black screen, since we are rendering to the offscreen framebuffer

In order to display our offscreen framebuffer back on the screen, we need to render a fullscreen quad and use the framebuffer’s texture as an input.

Creating a fullscreen quad and displaying our texture on it

Let’s create a new quad. We can reuse our makeQuad() method from the snippets above, but we need to augment it to optionally support instancing and to accept the vertex and fragment shader sources as outside arguments. This time we need only one quad, and the shaders we need for it are different.

Take a look at the updated makeQuad() signature:

/* rename our instanced quads program & VAO */
const {
  quadProgram: instancedQuadsProgram,
  quadVertexArrayObject: instancedQuadsVAO,
} = makeQuad({
  instancedOffsets: quadsPositions,
  /*
    We need different set of vertex and fragment shaders
    for the different quads we need to render, so pass them from outside
  */
  vertexShaderSource: instancedQuadVertexShader,
  fragmentShaderSource: instancedQuadFragmentShader,
  /*
    support optional instancing
  */
  isInstanced: true,
})
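
Here is a rough sketch of how the augmented makeQuad() could handle these options. The attribute, uniform and VAO setup inside stays the same as in the earlier snippets, so it is elided here; treat the option names and overall shape as one possible way to structure it, not the exact code of the demo:

function makeQuad ({
  vertexShaderSource,
  fragmentShaderSource,
  instancedOffsets = null,  /* per-instance XY offsets, only used when isInstanced is true */
  isInstanced = false,
  width = 70,
  height = 70,
  drawType = gl.STATIC_DRAW,
}) {
  /* Compile & link whatever shader sources we were given instead of hardcoded ones */
  const quadProgram = makeGLProgram(vertexShaderSource, fragmentShaderSource)

  const quadVertexArrayObject = gl.createVertexArray()
  gl.bindVertexArray(quadVertexArrayObject)

  /* a_position and a_uv buffer / attribute setup, identical to the earlier snippets */
  // ...

  /* Only set up the per-instance a_offset attribute when instancing is requested */
  if (isInstanced) {
    const offsetsBuffer = gl.createBuffer()
    const a_offsetLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_offset')
    gl.bindBuffer(gl.ARRAY_BUFFER, offsetsBuffer)
    gl.bufferData(gl.ARRAY_BUFFER, instancedOffsets, drawType)
    gl.enableVertexAttribArray(a_offsetLocationOnGPU)
    gl.vertexAttribPointer(a_offsetLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
    gl.vertexAttribDivisor(a_offsetLocationOnGPU, 1)
  }

  gl.bindVertexArray(null)

  /* Projection matrix / uniform setup, same as before */
  // ...

  return { quadProgram, quadVertexArrayObject }
}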

Let’s use the same method to create a new fullscreen quad and render it. First, our vertex and fragment shaders:

const fullscreenQuadVertexShader = `#version 300 es
   in vec4 a_position;
   in vec2 a_uv;
   
   uniform mat4 u_projectionMatrix;
   
   out vec2 v_uv;
   
   void main () {
    gl_Position = u_projectionMatrix * a_position;
    v_uv = a_uv;
   }
`
const fullscreenQuadFragmentShader = `#version 300 es
  precision highp float;
  
  /*
    Pass in the texture we rendered to as a uniform
  */
  uniform sampler2D u_texture;
  
  in vec2 v_uv;
  
  out vec4 outputColor;
  
  void main () {
    /*
      Use our interpolated UVs we assigned in Javascript to lookup
      texture color value at each pixel
    */
    vec4 inputColor = texture(u_texture, v_uv);
    
    /*
      0.5 is our alpha threshold we use to decide if
      pixel should be discarded or painted
    */
    float cutoffThreshold = 0.5;
    /*
      "cutoff" will be 0 if pixel is below 0.5 or 1 if above
      
      step() docs - https://thebookofshaders.com/glossary/?search=step
    */
    float cutoff = step(cutoffThreshold, inputColor.a);
    
    /*
      Let's use mix() GLSL method instead of if statement
      if cutoff is 0, we will discard the pixel by using empty color with no alpha
      otherwise, let's use black with alpha of 1
      
      mix() docs - https://thebookofshaders.com/glossary/?search=mix
    */
    vec4 emptyColor = vec4(0.0);
    /* Render base metaballs shapes */
    vec4 borderColor = vec4(1.0, 0.0, 0.0, 1.0);
    outputColor = mix(
      emptyColor,
      borderColor,
      cutoff
    );
    
    /*
      Increase the threshold and calculate a new cutoff, so we can render smaller shapes again, this time in a different color and with a smaller radius
    */
    cutoffThreshold += 0.05;
    cutoff = step(cutoffThreshold, inputColor.a);
    vec4 fillColor = vec4(1.0, 1.0, 0.0, 1.0);
    /*
      Add new smaller metaballs color on top of the old one
    */
    outputColor = mix(
      outputColor,
      fillColor,
      cutoff
    );
  }
`

Let’s use them to create and link a valid GLSL program, just like when we rendered our instances:

const {
  quadProgram: fullscreenQuadProgram,
  quadVertexArrayObject: fullscreenQuadVAO,
} = makeQuad({
  vertexShaderSource: fullscreenQuadVertexShader,
  fragmentShaderSource: fullscreenQuadFragmentShader,
  isInstanced: false,
  width: innerWidth,
  height: innerHeight
})
/*
  Unlike our instanced quads GLSL program, here we need to pass an extra uniform: a "u_texture"!
  Tell the shader to use texture unit 0 for u_texture
*/
gl.useProgram(fullscreenQuadProgram)
const u_textureLocation = gl.getUniformLocation(fullscreenQuadProgram, 'u_texture')
gl.uniform1i(u_textureLocation, 0)
gl.useProgram(null)

Finally we can render the fullscreen quad with the resulting texture as a uniform u_texture. Let’s change our updateFrame() method:

function updateFrame () {
 gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
 /* render instanced quads here */
 gl.bindFramebuffer(gl.FRAMEBUFFER, null)
 
 /*
   Render our fullscreen quad
 */
 gl.bindVertexArray(fullscreenQuadVAO)
 gl.useProgram(fullscreenQuadProgram)
 /*
  Bind the texture we render to as active TEXTURE_2D
 */
 gl.bindTexture(gl.TEXTURE_2D, renderTexture)
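 /*
   Note: u_texture was pointed at texture unit 0 earlier with gl.uniform1i(u_textureLocation, 0).
   Texture unit 0 is the active unit by default, so binding TEXTURE_2D here is enough;
   to be more explicit we could call gl.activeTexture(gl.TEXTURE0) before this bind
 */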
 {
   const drawPrimitive = gl.TRIANGLES
   const vertexArrayOffset = 0
   const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
   gl.drawArrays(drawPrimitive, vertexArrayOffset, numberOfVertices)
 }
 /*
   Just like everything else, unbind our texture once we are done rendering
 */
 gl.bindTexture(gl.TEXTURE_2D, null)
 gl.useProgram(null)
 gl.bindVertexArray(null)
 requestAnimationFrame(updateFrame)
}

And here is our final result (I also added a simple animation to make the effect more apparent):
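
The animation boils down to moving the per-instance offsets around every frame. Here is a minimal sketch of one way to do it, not the exact code of the demo: it assumes we keep quadsPositions around, that makeQuad() also returns the offsetsBuffer it created, and that the buffer was created with gl.DYNAMIC_DRAW since we update it every frame. animateQuads() would then be called at the top of updateFrame():

/*
  Give each quad a small random velocity in pixels per frame
*/
const velocities = new Float32Array(QUADS_COUNT * 2).map(() => (Math.random() - 0.5) * 2)

function animateQuads () {
  for (let i = 0; i < QUADS_COUNT; i++) {
    /* Move each offset by its velocity */
    quadsPositions[i * 2 + 0] += velocities[i * 2 + 0]
    quadsPositions[i * 2 + 1] += velocities[i * 2 + 1]
    /* Wrap around the viewport edges */
    if (quadsPositions[i * 2 + 0] < 0) quadsPositions[i * 2 + 0] = innerWidth
    if (quadsPositions[i * 2 + 0] > innerWidth) quadsPositions[i * 2 + 0] = 0
    if (quadsPositions[i * 2 + 1] < 0) quadsPositions[i * 2 + 1] = innerHeight
    if (quadsPositions[i * 2 + 1] > innerHeight) quadsPositions[i * 2 + 1] = 0
  }
  /* Re-upload the updated offsets to the GPU buffer we created in makeQuad() */
  gl.bindBuffer(gl.ARRAY_BUFFER, offsetsBuffer)
  gl.bufferSubData(gl.ARRAY_BUFFER, 0, quadsPositions)
  gl.bindBuffer(gl.ARRAY_BUFFER, null)
}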

And here is the breakdown of one updateFrame() call:

Breakdown of our WebGL scene and the steps needed to render 1000 quads and post-process them into metaballs.
You can clearly see how we render our 1000 instanced quads to a separate framebuffer in steps 1 to 3. We then draw the resulting texture onto a fullscreen quad, manipulate it in the fragment shader, and render it to the device screen in steps 4 to 7.

Aliasing issues

On my 2016 MacBook Pro with a retina display I can clearly see aliasing issues in our current example. If we increase the quad sizes and blow our animation up to fullscreen, the problem will only become more noticeable.

The issue comes from the fact that we are rendering to an 8-bit gl.UNSIGNED_BYTE texture. If we want to increase the detail, we need to switch to floating point textures (32-bit float gl.RGBA32F or 16-bit float gl.RGBA16F). The catch is that rendering to these textures is not supported on all hardware and is not part of the WebGL2 core spec. It is available through optional extensions, whose presence we need to check for.

The extensions we are interested in for rendering to 32-bit floating point textures are:

  • EXT_color_buffer_float
  • OES_texture_float_linear

If these extensions are present on the user’s device, we can use internalFormat = gl.RGBA32F and textureType = gl.FLOAT when creating our render textures. If they are not present, we can optionally fall back and render to 16-bit floating point textures. The extensions we need in that case are:

  • EXT_color_buffer_half_float
  • OES_texture_half_float_linear

If these extensions are present, we can use internalFormat = gl.RGBA16F and textureType = gl.HALF_FLOAT for our render texture. If not, we will fall back to what we have used up until now – internalFormat = gl.RGBA and textureType = gl.UNSIGNED_BYTE.

Here is our updated makeTexture() method:

function makeTexture (textureWidth = canvas.width, textureHeight = canvas.height) { 
  /*
   Initialize internal format & texture type to default values
  */
  let internalFormat = gl.RGBA
  let type = gl.UNSIGNED_BYTE
  
  /*
    Check if optional extensions are present on device
  */
  const rgba32fSupported = gl.getExtension('EXT_color_buffer_float') && gl.getExtension('OES_texture_float_linear')
  
  if (rgba32fSupported) {
    internalFormat = gl.RGBA32F
    type = gl.FLOAT
  } else {
    /*
      Check if optional fallback extensions are present on device
    */
    const rgba16fSupported = gl.getExtension('EXT_color_buffer_half_float') && gl.getExtension('OES_texture_half_float_linear')
    if (rgba16fSupported) {
      internalFormat = gl.RGBA16F
      type = gl.HALF_FLOAT
    }
  }

  /* rest of code */
  
  /*
    Pass in correct internalFormat and textureType to texImage2D call 
  */
  gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, textureWidth, textureHeight, border, format, type, data)

  /* rest of code */
}

And here is our updated result:

Conclusion

I hope I managed to showcase the core principles behind WebGL2 with this demo. As you can see, the API itself is low-level and requires quite a bit of typing, yet at the same time it is really powerful and lets you draw complex scenes with fine-grained control over the rendering.

Writing production-ready WebGL requires even more typing, checking for optional features / extensions and handling fallbacks when they are missing, so I would advise you to use a framework. At the same time, I believe it is important to understand the key concepts behind the API so you can successfully use higher level libraries like threejs and dig into their internals if needed.

I am a big fan of twgl, which hides away much of the verbosity of the API, while still being really low level with a small footprint. This demo’s code can easily be reduced by more than half by using it.

I encourage you to experiment around with the code after reading this article, plug in different values, change the order of things, add more draw commands and what not. I hope you walk away with a high level understanding of core WebGL2 API and how it all ties together, so you can learn more on your own.
