Replicating the Interweave Shape Animation with Three.js

In this new ALL YOUR HTML coding session you’ll learn how to reconstruct the beautiful shape animation from the website of INTERWEAVE agency with Three.js. We’ll be calculating tangents and bitangents and using Physical materials to make a beautiful object.

Original website: https://interweaveagency.com/

This coding session was streamed live on March 20, 2022.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Replicating the Interweave Shape Animation with Three.js appeared first on Codrops.

Noise Pattern Reconstruction from Monopo Studio

In this new ALL YOUR HTML coding session we’ll be reconstructing the beautiful noise pattern from Monopo Studio’s website using Three.js and GLSL and some postprocessing.

Monopo Studio: https://twitter.com/monopo_en

Developer: https://twitter.com/bramichou

This coding session was streamed live on February 20, 2022.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Noise Pattern Reconstruction from Monopo Studio appeared first on Codrops.

Magical Marbles in Three.js

In April 2019, Harry Alisavakis made a great write-up about the “magical marbles” effect he shared prior on Twitter. Check that out first to get a high level overview of the effect we’re after (while you’re at it, you should see some of his other excellent shader posts).

While his write-up provided a brief summary of the technique, the purpose of this tutorial is to offer a more concrete look at how you could implement code for this in Three.js to run on the web. There are also some tweaks to the technique here and there that try to make things more straightforward.

⚠ This tutorial assumes intermediate familiarity with Three.js and GLSL

Overview

You should read Harry’s post first because he provides helpful visuals, but the gist of it is this:

  • Add fake depth to a material by offsetting the texture look-ups based on camera direction
  • Instead of using the same texture at each iteration, let’s use depth-wise “slices” of a heightmap so that the shape of our volume is more dynamic
  • Add wavy motion by displacing the texture look-ups with scrolling noise

There were a couple parts of this write-up that weren’t totally clear to me, likely due to the difference in features available in Unity vs Three.js. One is the jump from parallax mapping on a plane to a sphere. Another is how to get vertex tangents for the transformation to tangent space. Finally, I wasn’t sure if the noise for the heightmap was evaluated as code inside the shader or pre-rendered. After some experimentation I came to my own conclusions for these, but I encourage you to come up with your own variations of this technique 🙂

Here’s the Pen I’ll be starting from, it sets up a boilerplate Three.js app with an init and tick lifecycle, color management, and an environment map from Poly Haven for lighting.

See the Pen
by Matt (@mattrossman)
on CodePen.

Step 1: A Blank Marble

Marbles are made of glass, and Harry’s marbles definitely showed some specular shine. In order to make a truly beautiful glassy material it would take some pretty complex PBR shader code, which is too much work! Instead, let’s just take one of Three.js’s built-in PBR materials and hook our magical bits into that, like the shader parasite we are.

Enter onBeforeCompile, a callback property of the THREE.Material base class that lets you apply patches to built-in shaders before they get compiled by WebGL. This technique is very hacky and not well explained in the official docs, but a good place to learn more about it is Dusan Bosnjak’s post “Extending three.js materials with GLSL”. The hardest part about it is determining which part of the shaders you need to change exactly. Unfortunately, your best bet is to just read through the source code of the shader you want to modify, find a line or chunk that looks vaguely relevant, and try tweaking stuff until the property you want to modify shows visible changes. I’ve been writing personal notes of what I discover since it’s really hard to keep track of what the different chunks and variables do.
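
One trick that makes the digging easier: dump the shader sources to the console and search through them. A minimal sketch:

material.onBeforeCompile = shader => {
  // Log the template GLSL (still containing its #include chunks)
  // so you can see what there is to patch
  console.log(shader.vertexShader)
  console.log(shader.fragmentShader)
}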

ℹ I recently discovered there’s a much more elegant way to extend the built-in materials using Three’s experimental Node Materials, but that deserves a whole tutorial of its own, so for this guide I’ll stick with the more common onBeforeCompile approach.

For our purposes, MeshStandardMaterial is a good base to start from. It has specular and environment reflections that will make our material look very glassy, plus it gives you the option to add a normal map later on if you want to add scratches onto the surface. The only part we want to change is the base color on which the lighting is applied. Luckily, this is easy to find. The fragment shader for MeshStandardMaterial is defined in meshphysical_frag.glsl.js (it’s a subset of MeshPhysicalMaterial, so they are both defined in the same file). Oftentimes you need to go digging through the shader chunks represented by each of the #include statements you’ll see in the file; however, this is a rare occasion where the variable we want to tweak is in plain sight.

It’s the line right near the top of the main() function that says:

vec4 diffuseColor = vec4( diffuse, opacity );

This line normally reads from the diffuse and opacity uniforms which you set via the .color and .opacity JavaScript properties of the material, and then all the chunks after that do the complicated lighting work. We are going to replace this line with our own assignment to diffuseColor so we can apply whatever pattern we want on the marble’s surface. You can do this using regular JavaScript string methods on the .fragmentShader field of the shader provided to the onBeforeCompile callback.

material.onBeforeCompile = shader => {
  shader.fragmentShader = shader.fragmentShader.replace(/vec4 diffuseColor.*;/, `
    // Assign whatever you want!
    vec4 diffuseColor = vec4(1., 0., 0., 1.);
  `)
}

By the way, the type definition for that mysterious callback argument is available here.

In the following Pen I swapped our geometry for a sphere, lowered the roughness, and filled the diffuseColor with the screen space normals which are available in the standard fragment shader on vNormal. The result looks like a shiny version of MeshNormalMaterial.
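
For reference, the swap might have looked roughly like this (a sketch, not the exact code from the Pen; the 0.5 remap just keeps the normals in a visible color range):

const geometry = new THREE.SphereGeometry(1, 64, 32)
const material = new THREE.MeshStandardMaterial({ roughness: 0.1 })

material.onBeforeCompile = shader => {
  shader.fragmentShader = shader.fragmentShader.replace(/vec4 diffuseColor.*;/, `
    // Visualize the interpolated view-space normal, like MeshNormalMaterial does
    vec4 diffuseColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.);
  `)
}

const mesh = new THREE.Mesh(geometry, material)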

See the Pen
by Matt (@mattrossman)
on CodePen.

Step 2: Fake Volume

Now comes the harder part — using the diffuse color to create the illusion of volume inside our marble. In Harry’s earlier parallax post, he talks about finding the camera direction in tangent space and using this to offset the UV coordinates. There’s a great explanation of how this general principle works for parallax effects on learnopengl.com and in this archived post.

However, converting stuff into tangent space in Three.js can be tricky. To the best of my knowledge, there’s not a built-in utility to help with this like there are for other space transformations, so it takes some legwork to both generate vertex tangents and then assemble a TBN matrix to perform the transformation. On top of that, spheres are not a nice shape for tangents due to the hairy ball theorem (yes, that’s a real thing), and Three’s computeTangents() function was producing discontinuities for me so you basically have to compute tangents manually. Yuck!

Luckily, we don’t really need to use tangent space if we frame this as a 3D raymarching problem. We have a ray pointing from the camera to the surface of our marble, and we want to march this through the sphere volume as well as down the slices of our height map. We just need to know how to convert a point in 3D space into a point on the surface of our sphere so we can perform texture lookups. In theory you could also just plug the 3D position right into your noise function of choice and skip using the texture, but this effect relies on lots of iterations and I’m operating under the assumption that a texture lookup is cheaper than all the number crunching happening in e.g. the 3D simplex noise function (shader gurus, please correct me if I’m wrong). The other benefit of reading from a texture is that it allows us to use a more art-oriented pipeline to craft our heightmaps, so we can make all sorts of interesting volumes without writing new code.

Originally I wrote a function to do this spherical XYZ→UV conversion based on some answers I saw online, but it turns out there’s already a function that does the same thing inside of common.glsl.js called equirectUv. We can reuse that as long as we put our raymarching logic after the #include <common> line in the standard shader.
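
As a sketch, the injection can ride on the same string replacement trick from Step 1, appending our code right after that chunk so equirectUv is already declared when we use it:

material.onBeforeCompile = shader => {
  shader.fragmentShader = shader.fragmentShader.replace('#include <common>', `
    #include <common>

    // custom uniforms and the marchMarble() function from the next section go here
  `)

  // ...plus the diffuseColor replacement from Step 1
}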

Creating our heightmap

For the heightmap, we want a texture that seamlessly projects on the surface of a UV sphere. It’s not hard to find seamless noise textures online, but the problem is that these flat projections of noise will look warped near the poles when applied to a sphere. To solve this, let’s craft our own texture using Blender. One way to do this is to bend a high resolution “Grid” mesh into a sphere using two instances of the “Simple Deform modifier”, plug the resulting “Object” texture coordinates into your procedural shader of choice, and then do an emissive bake with the Cycles renderer. I also added some loop cuts near the poles and a subdivision modifier to prevent any artifacts in the bake.

The resulting bake looks something like this:

Raymarching

Now the moment we’ve been waiting for (or dreading) — raymarching! It’s actually not so bad, the following is an abbreviated version of the code. For now there’s no animation, I’m just taking slices of the heightmap using smoothstep (note the smoothing factor which helps hide the sharp edges between layers), adding them up, and then using this to mix two colors.

uniform sampler2D heightMap;
uniform vec3 colorA;
uniform vec3 colorB;
uniform int iterations;
uniform float depth;
uniform float smoothing;

/**
  * @param rayOrigin - Point on sphere
  * @param rayDir - Normalized ray direction
  * @returns Diffuse RGB color
  */
vec3 marchMarble(vec3 rayOrigin, vec3 rayDir) {
  float perIteration = 1. / float(iterations);
  vec3 deltaRay = rayDir * perIteration * depth;

  // Start at point of intersection and accumulate volume
  vec3 p = rayOrigin;
  float totalVolume = 0.;

  for (int i=0; i<iterations; ++i) {
    // Read heightmap from current spherical direction
    vec2 uv = equirectUv(normalize(p));
    float heightMapVal = texture(heightMap, uv).r;

    // Take a slice of the heightmap
    float height = length(p); // 1 at surface, 0 at core, assuming radius = 1
    float cutoff = 1. - float(i) * perIteration;
    float slice = smoothstep(cutoff, cutoff + smoothing, heightMapVal);

    // Accumulate the volume and advance the ray forward one step
    totalVolume += slice * perIteration;
    p += deltaRay;
  }
  return mix(colorA, colorB, totalVolume);
}

/**
 * We can use this later like:
 *
 * vec4 diffuseColor = vec4(marchMarble(rayOrigin, rayDir), 1.0);
 */

ℹ This logic isn’t really physically accurate — taking slices of the heightmap based on the iteration index assumes that the ray is pointing towards the center of the sphere, but this isn’t true for most of the pixels. As a result, the marble appears to have some heavy refraction. However, I think this actually looks cool and further sells the effect of it being solid glass!

Injecting uniforms

One final note before we see the fruits of our labor — how do we include all these custom uniforms in our modified material? We can’t just tack stuff onto material.uniforms like you would with THREE.ShaderMaterial. The trick is to create your own personal uniforms object and then wire up its contents onto the shader argument inside of onBeforeCompile. For instance:

const myUniforms = {
  foo: { value: 0 }
}

material.onBeforeCompile = shader => {
  shader.uniforms.foo = myUniforms.foo

  // ... (all your other patches)
}

When the shader tries to read its shader.uniforms.foo.value reference, it’s actually reading from your local myUniforms.foo.value, so any change to the values in your uniforms object will automatically be reflected in the shader.

I typically use the JavaScript spread operator to wire up all my uniforms at once:

const myUniforms = {
  // ...(lots of stuff)
}

material.onBeforeCompile = shader => {
  shader.uniforms = { ...shader.uniforms, ...myUniforms }

  // ... (all your other patches)
}

Putting this all together, we get a gassy (and glassy) volume. I’ve added sliders to this Pen so you can play around with the iteration count, smoothing, max depth, and colors.

See the Pen
by Matt (@mattrossman)
on CodePen.

ℹ Technically the ray origin and ray direction should be in local space so the effect doesn’t break when the marble moves. However, I’m skipping this transformation because we’re not moving the marble, so world space and local space are interchangeable. Work smarter not harder!
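
Under that simplification, one way the ray could be derived in the fragment shader looks roughly like this (a sketch, not the exact Pen code): cameraPosition is a uniform Three.js provides in every fragment shader, while vWorldPosition is a varying you would forward yourself from the vertex shader.

vec3 rayDir = normalize(vWorldPosition - cameraPosition); // camera -> surface direction
vec3 rayOrigin = vWorldPosition;                          // intersection point on the (unit) sphere

vec4 diffuseColor = vec4(marchMarble(rayOrigin, rayDir), 1.);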

Step 3: Wavy Motion

Almost done! The final touch is to make this marble come alive by animating the volume. Harry’s waving displacement post explains how he accomplishes this using a 2D displacement texture. However, just like with the heightmap, a flat displacement texture warps near the poles of a sphere. So, we’ll make our own again. You can use the same Blender setup as before, but this time let’s bake a 3D noise texture to the RGB channels:

Then in our marchMarble function, we’ll read from this texture using the same equirectUv function as before, center the values, and then add a scaled version of that vector to the position used for the heightmap texture lookup. To animate the displacement, introduce a time uniform and use that to scroll the displacement texture horizontally. For an even better effect, we’ll sample the displacement map twice (once upright, then upside down so they never perfectly align), scroll them in opposite directions and add them together to produce noise that looks chaotic. This general strategy is often used in water shaders to create waves.

uniform float time;
uniform float strength;

// Lookup displacement texture
vec2 uv = equirectUv(normalize(p));
vec2 scrollX = vec2(time, 0.);
vec2 flipY = vec2(1., -1.);
vec3 displacementA = texture(displacementMap, uv + scrollX).rgb;
vec3 displacementB = texture(displacementMap, uv * flipY - scrollX).rgb;

// Center the noise
displacementA -= 0.5;
displacementB -= 0.5;

// Displace current ray position and lookup heightmap
vec3 displaced = p + strength * (displacementA + displacementB);
uv = equirectUv(normalize(displaced));
float heightMapVal = texture(heightMap, uv).r;

Behold, your magical marble!

See the Pen
by Matt (@mattrossman)
on CodePen.

Extra Credit

Hard part’s over! This formula is a starting point from which there are endless possibilities for improvements and deviations. For instance, what happens if we swap out the noise texture we used earlier for something else like this:

This was created using the “Wave Texture” node in Blender

See the Pen
by Matt (@mattrossman)
on CodePen.

Or how about something recognizable, like this map of the earth?

Try dragging the “displacement” slider and watch how the floating continents dance around!

See the Pen
by Matt (@mattrossman)
on CodePen.

In that example I modified the shader to make the volume look less gaseous by boosting the rate of volume accumulation, breaking the loop once it reached a certain volume threshold, and tinting based on the final number of iterations rather than accumulated volume.
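
Roughly, those tweaks could look like this inside marchMarble (a sketch; the boost factor and the threshold are illustrative, not the demo’s exact values):

float accumulation = 5.; // boost how fast volume builds up per step
float finalIterations = float(iterations);

for (int i = 0; i < iterations; ++i) {
  // ...same heightmap slice lookup as before...
  totalVolume += slice * perIteration * accumulation;

  // Break out as soon as the volume reads as "solid"
  if (totalVolume >= 1.) {
    finalIterations = float(i);
    break;
  }

  p += deltaRay;
}

// Tint by how many steps the ray took instead of by the accumulated volume
return mix(colorA, colorB, finalIterations / float(iterations));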

For my last trick, I’ll point back to Harry’s write-up where he suggests mixing between two HDR colors. This basically means mixing between colors whose RGB values exceed the typical [0, 1] range. If we plug such a color into our shader as-is, it’ll create color artifacts in the pixels where the lighting is blown out. There’s an easy solve for this by wrapping the color in a toneMapping() call as is done in tonemapping_fragment.glsl.js, which “tones down” the color range. I couldn’t find where that function is actually defined, but it works!
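
In practice that just means wrapping the final color, something like this at the end of marchMarble (a sketch, assuming tone mapping is enabled on the renderer so the toneMapping() define exists):

// Squash the HDR mix back into a displayable range
return toneMapping(mix(colorA, colorB, totalVolume));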

I’ve added some color multiplier sliders to this Pen so you can push the colors outside the [0, 1] range and observe how mixing these HDR colors creates pleasant color ramps.

See the Pen
by Matt (@mattrossman)
on CodePen.

Conclusion

Thanks again to Harry for the great learning resources. I had a ton of fun trying to recreate this effect and I learned a lot along the way. Hopefully you learned something too!

Your challenge now is to take these examples and run with them. Change the code, the textures, the colors, and make your very own magical marble. Show me and Harry what you make on Twitter.

Surprise me!

The post Magical Marbles in Three.js appeared first on Codrops.

Rock the Stage with a Smooth WebGL Shader Transformation on Scroll

It’s fascinating which magical effects you can add to a website when you experiment with vertex displacement. Today we’d like to share a method with you that you can use to create your own WebGL shader animation linked to scroll progress. It’s a great way to learn how to bind shader vertices and colors to user interactions and to find the best flow.

We’ll be using Pug, Sass, Three.js and GSAP for our project.

Let’s rock!

The stage

For our flexible scroll stage, we quickly create three sections with Pug. By adding an element to the sections array, it’s easy to expand the stage.

index.pug:

.scroll__stage
  .scroll__content
    - const sections = ['Logma', 'Naos', 'Chara']
      each section, index in sections
        section.section
          .section__title
            h1.section__title-number= index < 9 ? `0${index + 1}` : index + 1

            h2.section__title-text= section

          p.section__paragraph The fireball that we rode was moving – But now we've got a new machine – They got music in the solar system
            br
            a.section__button Discover

The sections are quickly formatted with Sass, along with the mixins we will need later.

index.sass:

%top
  top: 0
  left: 0
  width: 100%

%fixed
  @extend %top

  position: fixed

%absolute
  @extend %top

  position: absolute

*,
*::after,
*::before
  margin: 0
  padding: 0
  box-sizing: border-box

.section
  display: flex
  justify-content: space-evenly
  align-items: center
  width: 100%
  min-height: 100vh
  padding: 8rem
  color: white
  background-color: black

  &:nth-child(even)
    flex-direction: row-reverse
    background: blue

  /* your design */

Now we write our ScrollStage class and set up a scene with Three.js. The camera range of 10 is enough for us here. We already prepare the loop for later instructions.

index.js:

import * as THREE from 'three'

class ScrollStage {
  constructor() {
    this.element = document.querySelector('.scroll__stage')

    this.viewport = {
      width: window.innerWidth,
      height: window.innerHeight,
    }

    this.scene = new THREE.Scene()

    this.renderer = new THREE.WebGLRenderer({ 
      antialias: true, 
      alpha: true 
    })

    this.canvas = this.renderer.domElement

    this.camera = new THREE.PerspectiveCamera( 
      75, 
      this.viewport.width / this.viewport.height, 
      .1, 
      10
    )

    this.clock = new THREE.Clock()

    this.update = this.update.bind(this)

    this.init()
  }

  init() {
    this.addCanvas()
    this.addCamera()
    this.addEventListeners()
    this.onResize()
    this.update()
  }

  /**
   * STAGE
   */
  addCanvas() {
    this.canvas.classList.add('webgl')
    document.body.appendChild(this.canvas)
  }

  addCamera() {
    this.camera.position.set(0, 0, 2.5)
    this.scene.add(this.camera)
  }

  /**
   * EVENTS
   */
  addEventListeners() {
    window.addEventListener('resize', this.onResize.bind(this))
  }

  onResize() {
    this.viewport = {
      width: window.innerWidth,
      height: window.innerHeight
    }

    this.camera.aspect = this.viewport.width / this.viewport.height
    this.camera.updateProjectionMatrix()

    this.renderer.setSize(this.viewport.width, this.viewport.height)
    this.renderer.setPixelRatio(Math.min(window.devicePixelRatio, 1.5))  
  }

  /**
   * LOOP
   */
  update() {
    this.render()

    window.requestAnimationFrame(this.update) 
  }

  /**
   * RENDER
   */
  render() {
    this.renderer.render(this.scene, this.camera)
  }  
}

new ScrollStage()

We disable the pointer events and let the canvas blend.

index.sass:

...

canvas.webgl
  @extend %fixed

  pointer-events: none
  mix-blend-mode: screen

...

The rockstar

We create a mesh, assign an icosahedron geometry and set the blending of its material to additive for loud colors. And – I like the wireframe style. For now, we set the value of all uniforms to 0 (uOpacity to 1).
I usually scale down the mesh for portrait screens. With only one object, we can do it this way. Otherwise you’d better transform camera.position.z.
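
If you go the camera route instead, a minimal sketch could look like this (the 3.5 value is just an example; 2.5 matches addCamera()):

  onResize() {

    ...

    // keep the mesh at scale 1 and pull the camera back on portrait screens instead
    this.camera.position.z = this.viewport.width < this.viewport.height ? 3.5 : 2.5

    ...

  }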

Let’s rotate our sphere slowly.

index.js:

...

import vertexShader from './shaders/vertex.glsl'
import fragmentShader from './shaders/fragment.glsl'

...

  init() {

    ...

    this.addMesh()

    ...
  }

  /**
   * OBJECT
   */
  addMesh() {
    this.geometry = new THREE.IcosahedronGeometry(1, 64)

    this.material = new THREE.ShaderMaterial({
      wireframe: true,
      blending: THREE.AdditiveBlending,
      transparent: true,
      vertexShader,
      fragmentShader,
      uniforms: {
        uFrequency: { value: 0 },
        uAmplitude: { value: 0 },
        uDensity: { value: 0 },
        uStrength: { value: 0 },
        uDeepPurple: { value: 0 },
        uOpacity: { value: 1 }
      }
    })

    this.mesh = new THREE.Mesh(this.geometry, this.material)

    this.scene.add(this.mesh)
  }

  ...

  onResize() {

    ...

    if (this.viewport.width < this.viewport.height) {
      this.mesh.scale.set(.75, .75, .75)
    } else {
      this.mesh.scale.set(1, 1, 1)
    }

    ...

  }

  update() {
    const elapsedTime = this.clock.getElapsedTime()

    this.mesh.rotation.y = elapsedTime * .05

    ...

  }

In the vertex shader (which positions the geometry) and fragment shader (which assigns a color to the pixels) we control the values of the uniforms that we will get from the scroll position. To generate an organic randomness, we make some noise. This shader program runs now on the GPU.

/shaders/vertex.glsl:

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)

uniform float uFrequency;
uniform float uAmplitude;
uniform float uDensity;
uniform float uStrength;

varying float vDistortion;

void main() {  
  float distortion = pnoise(normal * uDensity, vec3(10.)) * uStrength;

  vec3 pos = position + (normal * distortion);
  float angle = sin(uv.y * uFrequency) * uAmplitude;
  pos = rotateY(pos, angle);    

  vDistortion = distortion;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.);
}

/shaders/fragment.glsl:

uniform float uOpacity;
uniform float uDeepPurple;

varying float vDistortion;

vec3 cosPalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
  return a + b * cos(6.28318 * (c * t + d));
}     

void main() {
  float distort = vDistortion * 3.;

  vec3 brightness = vec3(.1, .1, .9);
  vec3 contrast = vec3(.3, .3, .3);
  vec3 oscilation = vec3(.5, .5, .9);
  vec3 phase = vec3(.9, .1, .8);

  vec3 color = cosPalette(distort, brightness, contrast, oscilation, phase);

  gl_FragColor = vec4(color, vDistortion);
  gl_FragColor += vec4(min(uDeepPurple, 1.), 0., .5, min(uOpacity, 1.));
}

If you don’t understand what’s happening here, I recommend this tutorial by Mario Carrillo.

The soundcheck

To find your preferred settings, you can set up a dat.gui for example. I’ll show you another approach here, in which you can combine two (or more) parameters to intuitively find a cool flow of movement. We simply connect the uniform values with the normalized values of the mouse event and log them to the console. As we use this approach only for development, we do not call rAF (requestAnimationFrame).

index.js:

...

import GSAP from 'gsap'

...

  constructor() {

    ...

    this.mouse = {
      x: 0,
      y: 0
    }

    this.settings = {
      // vertex
      uFrequency: {
        start: 0,
        end: 0
      },
      uAmplitude: {
        start: 0,
        end: 0
      },
      uDensity: {
        start: 0,
        end: 0
      },
      uStrength: {
        start: 0,
        end: 0
      },
      // fragment
      uDeepPurple: {  // max 1
        start: 0,
        end: 0
      },
      uOpacity: {  // max 1
        start: 1,
        end: 1
      }
    }

    ...

  }

  addEventListeners() {

    ...

    window.addEventListener('mousemove', this.onMouseMove.bind(this))
  }

  onMouseMove(event) {
    // play with it!
    // enable / disable / change x, y, multiplier …

    this.mouse.x = (event.clientX / this.viewport.width).toFixed(2) * 4
    this.mouse.y = (event.clientY / this.viewport.height).toFixed(2) * 2

    GSAP.to(this.mesh.material.uniforms.uFrequency, { value: this.mouse.x })
    GSAP.to(this.mesh.material.uniforms.uAmplitude, { value: this.mouse.x })
    GSAP.to(this.mesh.material.uniforms.uDensity, { value: this.mouse.y })
    GSAP.to(this.mesh.material.uniforms.uStrength, { value: this.mouse.y })
    // GSAP.to(this.mesh.material.uniforms.uDeepPurple, { value: this.mouse.x })
    // GSAP.to(this.mesh.material.uniforms.uOpacity, { value: this.mouse.y })

    console.info(`X: ${this.mouse.x}  |  Y: ${this.mouse.y}`)
  }

The support act

To create a really fluid mood, we first implement our smooth scroll.

index.sass:

body
  overscroll-behavior: none
  width: 100%
  height: 100vh

  ...

.scroll
  &__stage
    @extend %fixed

    height: 100vh

  &__content
    @extend %absolute

    will-change: transform

SmoothScroll.js:

import GSAP from 'gsap'

export default class {
  constructor({ element, viewport, scroll }) {
    this.element = element
    this.viewport = viewport
    this.scroll = scroll

    this.elements = {
      scrollContent: this.element.querySelector('.scroll__content')
    }
  }

  setSizes() {
    this.scroll.height = this.elements.scrollContent.getBoundingClientRect().height
    this.scroll.limit = this.elements.scrollContent.clientHeight - this.viewport.height

    document.body.style.height = `${this.scroll.height}px`
  }

  update() {
    this.scroll.hard = window.scrollY
    this.scroll.hard = GSAP.utils.clamp(0, this.scroll.limit, this.scroll.hard)
    this.scroll.soft = GSAP.utils.interpolate(this.scroll.soft, this.scroll.hard, this.scroll.ease)

    if (this.scroll.soft < 0.01) {
      this.scroll.soft = 0
    }

    this.elements.scrollContent.style.transform = `translateY(${-this.scroll.soft}px)`
  }    

  onResize() {
    this.viewport = {
      width: window.innerWidth,
      height: window.innerHeight
    }

    this.setSizes()
  }
}

index.js:

...

import SmoothScroll from './SmoothScroll'

...

  constructor() {

    ...

    this.scroll = {
      height: 0,
      limit: 0,
      hard: 0,
      soft: 0,
      ease: 0.05
    }

    this.smoothScroll = new SmoothScroll({ 
      element: this.element, 
      viewport: this.viewport, 
      scroll: this.scroll
    })

    ...

  }

  ...

  onResize() {

    ...

    this.smoothScroll.onResize()

    ...

  }

  update() {

    ...

    this.smoothScroll.update()

    ...

  }

The show

Finally, let’s rock the stage!

Once we have chosen the start and end values, it’s easy to attach them to the scroll position. In this example, we want to drop the purple mesh through the blue section so that it is subsequently soaked in blue itself. We increase the frequency and the strength of our vertex displacement. Let’s first enter these values in our settings and update the mesh material. We normalize scrollY so that we get values from 0 to 1 and make our calculations with them.

To render the shader only while scrolling, we call rAF by the scroll listener. We don’t need the mouse event listener anymore.

To improve performance, we add an overwrite to the GSAP default settings. This way we kill any existing tweens while generating a new one for every frame. A long duration renders the movement extra smooth. Once again we let the object rotate slightly with the scroll movement. We iterate over our settings and GSAP makes the music.

index.js:

  constructor() {

  ...

    this.scroll = {

      ...

      normalized: 0, 
      running: false
    }

    this.settings = {
      // vertex
      uFrequency: {
        start: 0,
        end: 4
      },
      uAmplitude: {
        start: 4,
        end: 4
      },
      uDensity: {
        start: 1,
        end: 1
      },
      uStrength: {
        start: 0,
        end: 1.1
      },
      // fragment
      uDeepPurple: {  // max 1
        start: 1,
        end: 0
      },
      uOpacity: { // max 1
        start: .33,
        end: .66
      }
    }

    GSAP.defaults({
      ease: 'power2',
      duration: 6.6,
      overwrite: true
    })

    this.updateScrollAnimations = this.updateScrollAnimations.bind(this)

    ...

  }

...

  addMesh() {

  ...

    uniforms: {
      uFrequency: { value: this.settings.uFrequency.start },
      uAmplitude: { value: this.settings.uAmplitude.start },
      uDensity: { value: this.settings.uDensity.start },
      uStrength: { value: this.settings.uStrength.start },
      uDeepPurple: { value: this.settings.uDeepPurple.start },
      uOpacity: { value: this.settings.uOpacity.start }
    }
  }

  ...

  addEventListeners() {

    ...

    // window.addEventListener('mousemove', this.onMouseMove.bind(this))  // enable to find your preferred values (console)

    window.addEventListener('scroll', this.onScroll.bind(this))
  }

  ...

  /**
   * SCROLL BASED ANIMATIONS
   */
  onScroll() {
    this.scroll.normalized = (this.scroll.hard / this.scroll.limit).toFixed(1)

    if (!this.scroll.running) {
      window.requestAnimationFrame(this.updateScrollAnimations)

      this.scroll.running = true
    }
  }

  updateScrollAnimations() {
    this.scroll.running = false

    GSAP.to(this.mesh.rotation, {
      x: this.scroll.normalized * Math.PI
    })

    for (const key in this.settings) {
      if (this.settings[key].start !== this.settings[key].end) {
        GSAP.to(this.mesh.material.uniforms[key], {
          value: this.settings[key].start + this.scroll.normalized * (this.settings[key].end - this.settings[key].start)
        })
      }
    }
  }

Thanks for reading this tutorial, hope you like it!
Try it out, go new ways, have fun – dare a stage dive!

The post Rock the Stage with a Smooth WebGL Shader Transformation on Scroll appeared first on Codrops.

Curly Tubes from the Lusion Website with Three.js

In this ALL YOUR HTML coding session we’re going to look at recreating the curly noise tubes with light scattering from the fantastic website of Lusion using Three.js.

This coding session was streamed live on May 16, 2021.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Curly Tubes from the Lusion Website with Three.js appeared first on Codrops.

Noisy Strokes Texture with Three.js and GLSL

In this ALL YOUR HTML coding session you’ll learn how to recreate the amazing noisy strokes texture seen on the website of Leonard, the inventive agency, using Three.js with GLSL. The wonderful effect was originally made by Damien Mortini.

This coding session was streamed live on May 9, 2021.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Noisy Strokes Texture with Three.js and GLSL appeared first on Codrops.

Collective #659

Practical Accessibility

Sara Soueidan is launching a web accessibility course for web designers and developers. Subscribe and be among the first to know when it’s out.

Check it out

Madosel

A family of responsive front-end frameworks that make it easy to design responsive websites.

Check it out

The post Collective #659 appeared first on Codrops.

Texture Ripples and Video Zoom Effect with Three.js and GLSL

In this ALL YOUR HTML coding session we’ll be replicating the ripples and video zoom effect from The Avener by TOOSOON studio using Three.js and GLSL coding.

This coding session was streamed live on April 25, 2021.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Texture Ripples and Video Zoom Effect with Three.js and GLSL appeared first on Codrops.

Creating an Infinite Circular Gallery using WebGL with OGL and GLSL Shaders

In this tutorial we’ll implement an infinite circular gallery using WebGL with OGL based on the website Lions Good News 2020 made by SHIFTBRAIN inc.

Most of the steps of this tutorial can be also reproduced in other WebGL libraries such as Three.js or Babylon.js with the correct adaptations.

With that being said, let’s start coding!

Creating our OGL 3D environment

The first step of any WebGL tutorial is making sure that you’re setting up all the rendering logic required to create a 3D environment.

Usually what’s required is: a camera, a scene and a renderer that is going to output everything into a canvas element. Then inside a requestAnimationFrame loop, you’ll use your camera to render a scene inside the renderer. So here’s our initial snippet:

import { Renderer, Camera, Transform } from 'ogl'
 
export default class App {
  constructor () {
    this.createRenderer()
    this.createCamera()
    this.createScene()
 
    this.onResize()
 
    this.update()
 
    this.addEventListeners()
  }
 
  createRenderer () {
    this.renderer = new Renderer()
 
    this.gl = this.renderer.gl
    this.gl.clearColor(0.79607843137, 0.79215686274, 0.74117647058, 1)
 
    document.body.appendChild(this.gl.canvas)
  }
 
  createCamera () {
    this.camera = new Camera(this.gl)
    this.camera.fov = 45
    this.camera.position.z = 20
  }
 
  createScene () {
    this.scene = new Transform()
  }
 
  /**
   * Events.
   */
  onTouchDown (event) {
      
  }
 
  onTouchMove (event) {
      
  }
 
  onTouchUp (event) {
      
  }
 
  onWheel (event) {
      
  }
 
  /**
   * Resize.
   */
  onResize () {
    this.screen = {
      height: window.innerHeight,
      width: window.innerWidth
    }
 
    this.renderer.setSize(this.screen.width, this.screen.height)
 
    this.camera.perspective({
      aspect: this.gl.canvas.width / this.gl.canvas.height
    })
 
    const fov = this.camera.fov * (Math.PI / 180)
    const height = 2 * Math.tan(fov / 2) * this.camera.position.z
    const width = height * this.camera.aspect
 
    this.viewport = {
      height,
      width
    }
  }
 
  /**
   * Update.
   */
  update () {
    this.renderer.render({
      scene: this.scene,
      camera: this.camera
    })
    
    window.requestAnimationFrame(this.update.bind(this))
  }
 
  /**
   * Listeners.
   */
  addEventListeners () {
    window.addEventListener('resize', this.onResize.bind(this))
 
    window.addEventListener('mousewheel', this.onWheel.bind(this))
    window.addEventListener('wheel', this.onWheel.bind(this))
 
    window.addEventListener('mousedown', this.onTouchDown.bind(this))
    window.addEventListener('mousemove', this.onTouchMove.bind(this))
    window.addEventListener('mouseup', this.onTouchUp.bind(this))
 
    window.addEventListener('touchstart', this.onTouchDown.bind(this))
    window.addEventListener('touchmove', this.onTouchMove.bind(this))
    window.addEventListener('touchend', this.onTouchUp.bind(this))
  }
}
 
new App()

Explaining the App class setup

In our createRenderer method, we’re initializing a renderer with a fixed color background by calling this.gl.clearColor. Then we’re storing our GL context (this.renderer.gl) reference in the this.gl variable and appending our <canvas> (this.gl.canvas) element to our document.body.

In our createCamera method, we’re creating a new Camera() instance and setting some of its attributes: fov and its z position. The FOV is the field of view of your camera, what you’re able to see from it. And the z is the position of your camera in the z axis.

In our createScene method, we’re using the Transform class, that is the representation of a new scene that is going to contain all our planes that represent our images in the WebGL environment.

The onResize method is the most important part of our initial setup. It’s responsible for three different things:

  1. Making sure we’re always resizing the <canvas> element with the correct viewport sizes.
  2. Updating our this.camera perspective by dividing the viewport width by its height.
  3. Storing in the variable this.viewport, the value representations that will help to transform pixels into 3D environment sizes by using the fov from the camera.

The approach of using the camera.fov to transform pixels in 3D environment sizes is an approach used very often in multiple WebGL implementations. Basically what it does is making sure that if we do something like: this.mesh.scale.x = this.viewport.width; it’s going to make our mesh fit the entire screen width, behaving like width: 100%, but in 3D space.

And finally in our update, we’re setting our requestAnimationFrame loop and making sure we keep rendering our scene.

You’ll also notice that we already included the wheel, touchstart, touchmove, touchend, mousedown, mousemove and mouseup events; they will be used to handle user interactions with our application.

Creating a reusable geometry instance

It’s a good practice to keep memory usage low by always reusing the same geometry reference no matter what WebGL library you’re using. To represent all our images, we’re going to use a Plane geometry, so let’s create a new method and store this new geometry inside the this.planeGeometry variable.

import { Renderer, Camera, Transform, Plane } from 'ogl'
 
createGeometry () {
  this.planeGeometry = new Plane(this.gl, {
    heightSegments: 50,
    widthSegments: 100
  })
}

The reason for including heightSegments and widthSegments with these values is to be able to manipulate the vertices in a way that makes the Plane behave like a sheet of paper in the air.

Importing our images using Webpack

Now it’s time to import our images into our application. Since we’re using Webpack in this tutorial, all we need to do to request our images is using import:

import Image1 from 'images/1.jpg'
import Image2 from 'images/2.jpg'
import Image3 from 'images/3.jpg'
import Image4 from 'images/4.jpg'
import Image5 from 'images/5.jpg'
import Image6 from 'images/6.jpg'
import Image7 from 'images/7.jpg'
import Image8 from 'images/8.jpg'
import Image9 from 'images/9.jpg'
import Image10 from 'images/10.jpg'
import Image11 from 'images/11.jpg'
import Image12 from 'images/12.jpg'

Now let’s create our array of images that we want to use in our infinite slider. We’re basically going to list the variables above inside a createMedias method, and use .map to create new instances of the Media class (new Media()), which is going to be our representation of each image of the gallery.

createMedias () {
  this.mediasImages = [
    { image: Image1, text: 'New Synagogue' },
    { image: Image2, text: 'Paro Taktsang' },
    { image: Image3, text: 'Petra' },
    { image: Image4, text: 'Gooderham Building' },
    { image: Image5, text: 'Catherine Palace' },
    { image: Image6, text: 'Sheikh Zayed Mosque' },
    { image: Image7, text: 'Madonna Corona' },
    { image: Image8, text: 'Plaza de Espana' },
    { image: Image9, text: 'Saint Martin' },
    { image: Image10, text: 'Tugela Falls' },
    { image: Image11, text: 'Sintra-Cascais' },
    { image: Image12, text: 'The Prophet\'s Mosque' },
    { image: Image1, text: 'New Synagogue' },
    { image: Image2, text: 'Paro Taktsang' },
    { image: Image3, text: 'Petra' },
    { image: Image4, text: 'Gooderham Building' },
    { image: Image5, text: 'Catherine Palace' },
    { image: Image6, text: 'Sheikh Zayed Mosque' },
    { image: Image7, text: 'Madonna Corona' },
    { image: Image8, text: 'Plaza de Espana' },
    { image: Image9, text: 'Saint Martin' },
    { image: Image10, text: 'Tugela Falls' },
    { image: Image11, text: 'Sintra-Cascais' },
    { image: Image12, text: 'The Prophet\'s Mosque' },
  ]
 
 
  this.medias = this.mediasImages.map(({ image, text }, index) => {
    const media = new Media({
      geometry: this.planeGeometry,
      gl: this.gl,
      image,
      index,
      length: this.mediasImages.length,
      renderer: this.renderer,
      scene: this.scene,
      screen: this.screen,
      text,
      viewport: this.viewport
    })
 
    return media
  })
}

As you’ve probably noticed, we’re passing a bunch of arguments to our Media class, I’ll explain why they’re needed when we start setting up the class in the next section. We’re also duplicating the amount of images to avoid any issues of not having enough images when making our gallery infinite on very wide screens.

It’s important to also include some specific calls in the onResize and update methods for our this.medias array, because we want the images to be responsive:

onResize () {
  if (this.medias) {
    this.medias.forEach(media => media.onResize({
      screen: this.screen,
      viewport: this.viewport
    }))
  }
}

And also do some real-time manipulations inside the requestAnimationFrame:

update () {
  this.medias.forEach(media => media.update(this.scroll, this.direction))
}

Setting up the Media class

Our Media class is going to use Mesh, Program and Texture classes from OGL to create a 3D plane and attribute a texture to it, which in our case is going to be our images.

In our constructor, we’re going to store all variables that we need and that were passed in the new Media() initialization from index.js:

export default class {
  constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
    this.geometry = geometry
    this.gl = gl
    this.image = image
    this.index = index
    this.length = length
    this.renderer = renderer
    this.scene = scene
    this.screen = screen
    this.text = text
    this.viewport = viewport
 
    this.createShader()
    this.createMesh()
 
    this.onResize()
  }
}

Explaining a few of these arguments: basically, the geometry is the geometry we’re going to apply to our Mesh class. The this.gl is our GL context, useful to keep doing WebGL manipulations inside the class. The this.image is the URL of the image. Both this.index and this.length will be used to do position calculations for the mesh. The this.scene is the group to which we’re going to append our mesh. And finally, this.screen and this.viewport are the sizes of the viewport and environment.

Now it’s time to create the shader that is going to be applied to our Mesh in the createShader method, in OGL shaders are created with Program:

createShader () {
  const texture = new Texture(this.gl, {
    generateMipmaps: false
  })
 
  this.program = new Program(this.gl, {
    fragment,
    vertex,
    uniforms: {
      tMap: { value: texture },
      uPlaneSizes: { value: [0, 0] },
      uImageSizes: { value: [0, 0] },
      uViewportSizes: { value: [this.viewport.width, this.viewport.height] }
    },
    transparent: true
  })
 
  const image = new Image()
 
  image.src = this.image
  image.onload = _ => {
    texture.image = image
 
    this.program.uniforms.uImageSizes.value = [image.naturalWidth, image.naturalHeight]
  }
}

In the snippet above, we’re basically creating a new Texture() instance, making sure to use generateMipmaps as false so it preserves the quality of the image. Then creating a new Program() instance, which represents a shader composed of fragment and vertex with some uniforms used to manipulate it.

We’re also creating a new Image() instance to preload the image before applying it to the texture.image. And also updating the this.program.uniforms.uImageSizes.value because it’s going to be used to preserve the aspect ratio of our images.

It’s important to create our fragment and vertex shaders now, so we’re going to create two new files: fragment.glsl and vertex.glsl.

fragment.glsl:

precision highp float;
 
uniform vec2 uImageSizes;
uniform vec2 uPlaneSizes;
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  vec2 ratio = vec2(
    min((uPlaneSizes.x / uPlaneSizes.y) / (uImageSizes.x / uImageSizes.y), 1.0),
    min((uPlaneSizes.y / uPlaneSizes.x) / (uImageSizes.y / uImageSizes.x), 1.0)
  );
 
  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );
 
  gl_FragColor.rgb = texture2D(tMap, uv).rgb;
  gl_FragColor.a = 1.0;
}

vertex.glsl:

precision highp float;
 
attribute vec3 position;
attribute vec2 uv;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
varying vec2 vUv;
 
void main() {
  vUv = uv;
 
  vec3 p = position;
 
  gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
}

And require them in the start of Media.js using Webpack:

import fragment from './fragment.glsl'
import vertex from './vertex.glsl'

Now let’s create our new Mesh() instance in the createMesh method merging together the geometry and shader.

createMesh () {
  this.plane = new Mesh(this.gl, {
    geometry: this.geometry,
    program: this.program
  })
 
  this.plane.setParent(this.scene)
}

The Mesh instance is stored in the this.plane variable to be reused in the onResize and update methods, then appended as a child of the this.scene group.

The only thing we have now on the screen is a simple square with our image:

Let’s now implement the onResize method and make sure we’re rendering rectangles:

onResize ({ screen, viewport } = {}) {
  if (screen) {
    this.screen = screen
  }
 
  if (viewport) {
    this.viewport = viewport
 
    this.plane.program.uniforms.uViewportSizes.value = [this.viewport.width, this.viewport.height]
  }
 
  this.scale = this.screen.height / 1500
 
  this.plane.scale.y = this.viewport.height * (900 * this.scale) / this.screen.height
  this.plane.scale.x = this.viewport.width * (700 * this.scale) / this.screen.width
 
  this.plane.program.uniforms.uPlaneSizes.value = [this.plane.scale.x, this.plane.scale.y]
}

The scale.y and scale.x calls are responsible for scaling our element properly, transforming our previous square into a 700×900 rectangle based on the scale.

And the uViewportSizes and uPlaneSizes uniform value updates make the image display correctly. That’s basically what gives the image the background-size: cover; behavior, but in the WebGL environment.

Now we need to position all the rectangles in the x axis, making sure we have a small gap between them. To achieve that, we’re going to use this.plane.scale.x, this.padding and this.index variables to do the calculation required to move them:

this.padding = 2
 
this.width = this.plane.scale.x + this.padding
this.widthTotal = this.width * this.length
 
this.x = this.width * this.index

And in the update method, we’re going to set the this.plane.position to these variables:

update () {
  this.plane.position.x = this.x
}

Now you’ve set up all the initial code of Media, which results in the following image:

Including infinite scrolling logic

Now it’s time to make it interesting and include scrolling logic on it, so we have at least an infinite gallery in place when the user scrolls through your page. In our index.js, we’ll do the following updates.

First, let’s include a new object called this.scroll in our constructor with all variables that we will manipulate to do the smooth scrolling:

this.scroll = {
  ease: 0.05,
  current: 0,
  target: 0,
  last: 0
}

Now let’s add the touch and wheel events, so when the user interacts with the canvas, they will be able to move stuff:

onTouchDown (event) {
  this.isDown = true
 
  this.scroll.position = this.scroll.current
  this.start = event.touches ? event.touches[0].clientX : event.clientX
}
 
onTouchMove (event) {
  if (!this.isDown) return
 
  const x = event.touches ? event.touches[0].clientX : event.clientX
  const distance = (this.start - x) * 0.01
 
  this.scroll.target = this.scroll.position + distance
}
 
onTouchUp (event) {
  this.isDown = false
}

Then, we’ll include the NormalizeWheel library in onWheel event, so this way we have the same value on all browsers when the user scrolls:

onWheel (event) {
  const normalized = NormalizeWheel(event)
  const speed = normalized.pixelY
 
  this.scroll.target += speed * 0.005
}

In our update method with requestAnimationFrame, we’ll lerp the this.scroll.current with this.scroll.target to make it smooth, then we’ll pass it to all medias:

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll))
  }
 
  this.scroll.last = this.scroll.current
 
  window.requestAnimationFrame(this.update.bind(this))
}

And now we just update our Media file to use the current scroll value to move the Mesh to the new scroll position:

update (scroll) {
  this.plane.position.x = this.x - scroll.current * 0.1
}

This is the current result we have:

As you’ve noticed, it’s not infinite yet, to achieve that, we need to include some extra code. The first step is including the direction of the scroll in the update method from index.js:

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.scroll.current > this.scroll.last) {
    this.direction = 'right'
  } else {
    this.direction = 'left'
  }
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll, this.direction))
  }
 
  this.scroll.last = this.scroll.current
}

Now in the Media class, you need to include a variable called this.extra in the constructor, and do some manipulations on it to sum the total width of the gallery, when the element is outside of the screen.

constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
  this.extra = 0
}

update (scroll, direction) {
  this.plane.position.x = this.x - scroll.current * 0.1 - this.extra
    
  const planeOffset = this.plane.scale.x / 2
  const viewportOffset = this.viewport.width
 
  this.isBefore = this.plane.position.x + planeOffset < -viewportOffset
  this.isAfter = this.plane.position.x - planeOffset > viewportOffset
 
  if (direction === 'right' && this.isBefore) {
    this.extra -= this.widthTotal
 
    this.isBefore = false
    this.isAfter = false
  }
 
  if (direction === 'left' && this.isAfter) {
    this.extra += this.widthTotal
 
    this.isBefore = false
    this.isAfter = false
  }
}

That’s it, now we have the infinite scrolling gallery, pretty cool right?

Including circular rotation

Now it’s time to include the special flavor of the tutorial, which is making the infinite scrolling also have the circular rotation. To achieve it, we’ll use Math.cos to change the this.mesh.position.y according to the element’s position, and a map technique to change the this.mesh.rotation.z based on the element’s position on the x axis.

First, let’s make it rotate in a smooth way based on the position. The map method is basically a way to remap a value from one range to another; for example, map(0.5, 0, 1, -500, 500) returns 0 because 0.5 sits in the middle of [0, 1] and 0 is the middle of [-500, 500]. Basically, the first argument is remapped from the [min1, max1] range to the [min2, max2] range:

export function map (num, min1, max1, min2, max2, round = false) {
  const num1 = (num - min1) / (max1 - min1)
  const num2 = (num1 * (max2 - min2)) + min2
 
  if (round) return Math.round(num2)
 
  return num2
}

Let’s see it in action by including the following line of code in the Media class:

this.plane.rotation.z = map(this.plane.position.x, -this.widthTotal, this.widthTotal, Math.PI, -Math.PI)

And that’s the result we get so far. It’s already pretty cool because you’re able to see the rotation changing based on the plane position:

Now it’s time to make it look circular. Using Math.cos, we just need to do a simple calculation with this.plane.position.x / this.widthTotal; this way we’ll have a cosine that returns a normalized value which we can tweak by multiplying it by how much we want to change the y position of the element:

this.plane.position.y = Math.cos((this.plane.position.x / this.widthTotal) * Math.PI) * 75 - 75

Simple as that, we’re just moving it by 75 in environment space based on the position. This gives us the following result, which is exactly what we wanted to achieve:

Snapping to the closest item

Now let’s include a simple snapping to the closest item when the user stops scrolling. To achieve that, we need to create a new method called onCheck; it’s going to do some calculations when the user releases the scroll:

onCheck () {
  const { width } = this.medias[0]
  const itemIndex = Math.round(Math.abs(this.scroll.target) / width)
  const item = width * itemIndex
 
  if (this.scroll.target < 0) {
    this.scroll.target = -item
  } else {
    this.scroll.target = item
  }
}

The result of the item variable is always the center of one of the elements in the gallery, which snaps the user to the corresponding position.

For wheel events, we need a debounced version of it called onCheckDebounce that we can include in the constructor by including lodash/debounce:

import debounce from 'lodash/debounce'
 
constructor () {
  this.onCheckDebounce = debounce(this.onCheck, 200)
}
 
onWheel (event) {
  this.onCheckDebounce()
}

Now the gallery is always being snapped to the correct entry:

Writing paper shaders

Finally let’s include the most interesting part of our project, which is enhancing the shaders a little bit by taking into account the scroll velocity and distorting the vertices of our meshes.

The first step is to include two new uniforms in our this.program declaration from Media class: uSpeed and uTime.

this.program = new Program(this.gl, {
  fragment,
  vertex,
  uniforms: {
    tMap: { value: texture },
    uPlaneSizes: { value: [0, 0] },
    uImageSizes: { value: [0, 0] },
    uViewportSizes: { value: [this.viewport.width, this.viewport.height] },
    uSpeed: { value: 0 },
    uTime: { value: 0 }
  },
  transparent: true
})

Now let’s write some shader code to make our images bend and distort in a very cool way. In your vertex.glsl file, you should include the new uniforms: uniform float uTime and uniform float uSpeed:

uniform float uTime;
uniform float uSpeed;

Then inside the void main() of your shader, you can now manipulate the vertices in the z axis using these two values plus the position stored variable in p. We’re going to use a sin and cos to bend our vertices like it’s a plane, so all you need to do is including the following line:

p.z = (sin(p.x * 4.0 + uTime) * 1.5 + cos(p.y * 2.0 + uTime) * 1.5);

Also don’t forget to include uTime increment in the update() method from Media:

this.program.uniforms.uTime.value += 0.04

Just this line of code outputs a pretty cool paper effect animation:

Including text in WebGL using MSDF fonts

Now let’s include our text inside the WebGL, to achieve that, we’re going to use msdf-bmfont to generate our files, you can see how to do that in this GitHub repository, but basically it’s installing the npm dependency and running the command below:

msdf-bmfont -f json -m 1024,1024 -d 4 --pot --smart-size freight.otf

After running it, you should now have a .png and .json file in the same directory, these are the files that we’re going to use on our MSDF implementation in OGL.

Now let’s create a new file called Title and start setting up the code of it. First let’s create our class and use import in the shaders and the files:

import AutoBind from 'auto-bind'
import { Color, Geometry, Mesh, Program, Text, Texture } from 'ogl'
 
import fragment from 'shaders/text-fragment.glsl'
import vertex from 'shaders/text-vertex.glsl'
 
import font from 'fonts/freight.json'
import src from 'fonts/freight.png'
 
export default class {
  constructor ({ gl, plane, renderer, text }) {
    AutoBind(this)
 
    this.gl = gl
    this.plane = plane
    this.renderer = renderer
    this.text = text
 
    this.createShader()
    this.createMesh()
  }
}

Now it’s time to start setting up the MSDF implementation code inside the createShader() method. The first thing we’re going to do is create a new Texture() instance and load fonts/freight.png, which is stored in src:

createShader () {
  const texture = new Texture(this.gl, { generateMipmaps: false })
  const textureImage = new Image()
 
  textureImage.src = src
  textureImage.onload = _ => texture.image = textureImage
}

Then we need to start setting up the fragment shader we’re going to use to render the MSDF text, because MSDF can be optimized in WebGL 2.0, we’re going to use this.renderer.isWebgl2 from OGL to check if it’s supported or not and declare different shaders based on it, so we’ll have vertex300, fragment300, vertex100 and fragment100:

createShader () {
  const vertex100 = `${vertex}`
 
  const fragment100 = `
    #extension GL_OES_standard_derivatives : enable
 
    precision highp float;
 
    ${fragment}
  `
 
  const vertex300 = `#version 300 es
 
    #define attribute in
    #define varying out
 
    ${vertex}
  `
 
  const fragment300 = `#version 300 es
 
    precision highp float;
 
    #define varying in
    #define texture2D texture
    #define gl_FragColor FragColor
 
    out vec4 FragColor;
 
    ${fragment}
  `
 
  let fragmentShader = fragment100
  let vertexShader = vertex100
 
  if (this.renderer.isWebgl2) {
    fragmentShader = fragment300
    vertexShader = vertex300
  }
 
  this.program = new Program(this.gl, {
    cullFace: null,
    depthTest: false,
    depthWrite: false,
    transparent: true,
    fragment: fragmentShader,
    vertex: vertexShader,
    uniforms: {
      uColor: { value: new Color('#545050') },
      tMap: { value: texture }
    }
  })
}

As you’ve probably noticed, we’re prepending fragment and vertex with a different setup based on the renderer’s WebGL version. Let’s also create our text-fragment.glsl and text-vertex.glsl files:

uniform vec3 uColor;
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  vec3 color = texture2D(tMap, vUv).rgb;
 
  float signed = max(min(color.r, color.g), min(max(color.r, color.g), color.b)) - 0.5;
  float d = fwidth(signed);
  float alpha = smoothstep(-d, d, signed);
 
  if (alpha < 0.02) discard;
 
  gl_FragColor = vec4(uColor, alpha);
}

attribute vec2 uv;
attribute vec3 position;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
varying vec2 vUv;
 
void main() {
  vUv = uv;
 
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

Finally, let’s create the geometry of our MSDF font implementation in the createMesh() method. For that we’ll use a new Text() instance from OGL and then apply the buffers generated by it to a new Geometry() instance:

createMesh () {
  const text = new Text({
    align: 'center',
    font,
    letterSpacing: -0.05,
    size: 0.08,
    text: this.text,
    wordSpacing: 0,
  })
 
  const geometry = new Geometry(this.gl, {
    position: { size: 3, data: text.buffers.position },
    uv: { size: 2, data: text.buffers.uv },
    id: { size: 1, data: text.buffers.id },
    index: { data: text.buffers.index }
  })
 
  geometry.computeBoundingBox()
 
  this.mesh = new Mesh(this.gl, { geometry, program: this.program })
  this.mesh.position.y = -this.plane.scale.y * 0.5 - 0.085
  this.mesh.setParent(this.plane)
}

Now let’s apply our brand new titles in the Media class. We’re going to create a new method called createTitle() and call it from the constructor:

constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
  this.createTitle()
}

createTitle () {
  this.title = new Title({
    gl: this.gl,
    plane: this.plane,
    renderer: this.renderer,
    text: this.text,
  })
}

Simple as that, we’re just including a new Title() instance inside our Media class. This will output the following result:

One of the best things about rendering text inside WebGL is reducing the amount of work the browser has to do when animating the text to the right position. If you go with the DOM approach, you’ll usually see some performance impact because the browser needs to recalculate DOM sections and check composite layers when translating the text.

For the purposes of this demo, we also included a Number class implementation that is responsible for showing the index of the entry the user is currently seeing. You can check how it’s implemented in the source code, but it’s basically the same implementation as the Title class, with the only difference being that it loads a different font style:

Including background blocks

To finalize the demo, let’s implement some blocks in the background that will move along the x and y axes to enhance the depth effect:

To achieve this effect we’re going to create a new Background class. Inside of it we’ll initialize some new Plane() geometries in new Mesh() instances with random sizes and positions, by changing the scale and position of the meshes inside a for loop:

import { Color, Mesh, Plane, Program } from 'ogl'
 
import fragment from 'shaders/background-fragment.glsl'
import vertex from 'shaders/background-vertex.glsl'
 
import { random } from 'utils/math'
 
export default class {
  constructor ({ gl, scene, viewport }) {
    this.gl = gl
    this.scene = scene
    this.viewport = viewport
 
    const geometry = new Plane(this.gl)
    const program = new Program(this.gl, {
      vertex,
      fragment,
      uniforms: {
        uColor: { value: new Color('#c4c3b6') }
      },
      transparent: true
    })
 
    this.meshes = []
 
    for (let i = 0; i < 50; i++) {
      let mesh = new Mesh(this.gl, {
        geometry,
        program,
      })
 
      const scale = random(0.75, 1)
 
      mesh.scale.x = 1.6 * scale
      mesh.scale.y = 0.9 * scale
 
      mesh.speed = random(0.75, 1)
 
      mesh.xExtra = 0
 
      mesh.x = mesh.position.x = random(-this.viewport.width * 0.5, this.viewport.width * 0.5)
      mesh.y = mesh.position.y = random(-this.viewport.height * 0.5, this.viewport.height * 0.5)
 
      this.meshes.push(mesh)
 
      this.scene.addChild(mesh)
    }
  }
}

After that we just need to apply the endless scrolling logic to them as well, following the same directional validation we have in the Media class:

update (scroll, direction) {
  this.meshes.forEach(mesh => {
    mesh.position.x = mesh.x - scroll.current * mesh.speed - mesh.xExtra
 
    const viewportOffset = this.viewport.width * 0.5
    const widthTotal = this.viewport.width + mesh.scale.x
 
    mesh.isBefore = mesh.position.x < -viewportOffset
    mesh.isAfter = mesh.position.x > viewportOffset
 
    if (direction === 'right' && mesh.isBefore) {
      mesh.xExtra -= widthTotal
 
      mesh.isBefore = false
      mesh.isAfter = false
    }
 
    if (direction === 'left' && mesh.isAfter) {
      mesh.xExtra += widthTotal
 
      mesh.isBefore = false
      mesh.isAfter = false
    }
 
    mesh.position.y += 0.05 * mesh.speed
 
    if (mesh.position.y > this.viewport.height * 0.5 + mesh.scale.y) {
      mesh.position.y -= this.viewport.height + mesh.scale.y
    }
  })
}

Simple as that! Now we have the blocks in the background as well, which finalizes the code of our demo.

I hope this tutorial was useful to you and don’t forget to comment if you have any questions!

The post Creating an Infinite Circular Gallery using WebGL with OGL and GLSL Shaders appeared first on Codrops.

Twisted Colorful Spheres with Three.js

I love blobs and I enjoy looking for interesting ways to change basic geometries with Three.js: bending a plane, twisting a box, or exploring a torus (like in this 10-min video tutorial). So this time, my love for shaping things will be the excuse to see what we can do with a sphere, transforming it using shaders. 

This tutorial will be brief, so we’ll skip the basic render/scene setup and focus on manipulating the sphere’s shape and colors, but if you want to know more about the setup check out these steps.

We’ll go with a more rounded than irregular shape, so the premise is to deform a sphere and use that same distortion to color it.

Vertex displacement

As you’ve probably been thinking, we’ll be using noise to deform the geometry by moving each vertex along the direction of its normal. Think of it as if we were pushing each vertex from the inside out with different strengths. I could elaborate more on this, but I’d rather point you to this article by The Spite aka Jaume Sanchez Elias, who explains it so well! I bet some of you have stumbled upon this article already.

So in code, it looks like this:

varying vec3 vNormal;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  vNormal = normal;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

And now we should see a blobby sphere:

See the Pen Vertex displacement by Mario (@marioecg) on CodePen.

You can experiment and change its values to see how the blob changes. I know we’re going with a more subtle and rounded distortion, but feel free to go crazy with it; there are audio visualizers out there that deform a sphere to the point that you don’t even think it’s based on a sphere.
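If you’re wiring this up yourself rather than starting from the setup linked above, the uniforms also need to be declared on the JavaScript side. Here’s a rough sketch (not the demo’s exact code) using a Three.js ShaderMaterial; it assumes the scene, camera and renderer from the boilerplate, that the GLSL files are bundled as strings (for example via glslify), and the starting uniform values are just placeholders to tweak:

import * as THREE from 'three'

// Assumed: the shaders above are imported as strings and the
// #pragma glslify requires are resolved by a glslify transform
import vertexShader from './vertex.glsl'
import fragmentShader from './fragment.glsl'

const material = new THREE.ShaderMaterial({
  vertexShader,
  fragmentShader,
  uniforms: {
    uTime: { value: 0 },
    uSpeed: { value: 0.2 },
    uNoiseDensity: { value: 1.5 },
    uNoiseStrength: { value: 0.2 },
  },
})

// A fairly dense sphere so the displacement stays smooth
const blob = new THREE.Mesh(new THREE.IcosahedronGeometry(1, 64), material)
scene.add(blob)

// Drive uTime from a clock inside the render loop
const clock = new THREE.Clock()
function tick () {
  material.uniforms.uTime.value = clock.getElapsedTime()
  renderer.render(scene, camera)
  requestAnimationFrame(tick)
}
tick()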

Now, this already looks interesting, but let’s add one more touch to it next.

Noitation

…is just a word I came up with to combine noise with rotation (ba dum tss), but yes! Adding some twirl to the mix makes things more compelling.

If you’ve ever played with Play-Doh as a child, you have surely molded a big chunk of clay into a ball, grabbed it with each hand, and twisted in opposite directions until the clay tore apart. This is kind of what we want to do (except for the breaking part).

To twist the sphere, we are going to generate a sine wave from top to bottom of the sphere. Then, we are going to use this top-bottom wave as a rotation for the current position. Since the values increase/decrease from top to bottom, the rotation is going to oscillate as well, creating a twist:

varying vec3 vNormal;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  // Create a sine wave from top to bottom of the sphere
  // To increase the amount of waves, we'll use uFrequency
  // To make the waves bigger we'll use uAmplitude
  float angle = sin(uv.y * uFrequency + t) * uAmplitude;
  pos = rotateY(pos, angle);    

  vNormal = normal;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

Notice how the waves emerge from the top, it’s soothing. Some of you might find this movement therapeutic, so take some time to appreciate it and think about what we’ve learned so far…

See the Pen Noitation by Mario (@marioecg) on CodePen.

Alright! Now that you’re back let’s get on to the fragment shader.

Colorific

If you take a close look at the previous shaders, you’ll see, almost at the end, that we’ve been passing the normals to the fragment shader. Remember that we want to use the distortion to color the shape, so first let’s create a varying where we pass that distortion to:

varying float vDistort;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  // Create a sine wave from top to bottom of the sphere
  // To increase the amount of waves, we'll use uFrequency
  // To make the waves bigger we'll use uAmplitude
  float angle = sin(uv.y * uFrequency + t) * uAmplitude;
  pos = rotateY(pos, angle);    

  vDistort = distortion; // Train goes to the fragment shader! Tchu tchuuu

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

And use vDistort to color the pixels instead:

varying float vDistort;

uniform float uIntensity;

void main() {
  float distort = vDistort * uIntensity;

  vec3 color = vec3(distort);

  gl_FragColor = vec4(color, 1.0);
}

We should get a kind of twisted, smokey black and white color like so:

See the Pen Colorific by Mario (@marioecg) on CodePen.

With this basis, we’ll take it a step further and use it in conjunction with one of my favorite color functions out there.

Cospalette

Cosine palette is a very useful function to create and control color with code based on the brightness, contrast, oscillation of cosine, and phase of cosine. I encourage you to watch Char Stiles explain this further, which is soooo good. Final s/o to Inigo Quilez who wrote an article about this function some years ago; for those of you who haven’t stumbled upon his genius work, please do. I would love to write more about him, but I’ll save that for a poem.

Let’s use cospalette to input the distortion and see how it looks:

varying vec2 vUv;
varying float vDistort;

uniform float uIntensity;

vec3 cosPalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
  return a + b * cos(6.28318 * (c * t + d));
}   

void main() {
  float distort = vDistort * uIntensity;

  // These values are my fav combination, 
  // they remind me of Zach Lieberman's work.
  // You can find more combos in the examples from IQ:
  // https://iquilezles.org/www/articles/palettes/palettes.htm
  // Experiment with these!
  vec3 brightness = vec3(0.5, 0.5, 0.5);
  vec3 contrast = vec3(0.5, 0.5, 0.5);
  vec3 oscilation = vec3(1.0, 1.0, 1.0);
  vec3 phase = vec3(0.0, 0.1, 0.2);

  // Pass the distortion as input of cospalette
  vec3 color = cosPalette(distort, brightness, contrast, oscilation, phase);

  gl_FragColor = vec4(color, 1.0);
}

¡Liiistoooooo! See how the color palette behaves similarly to the distortion, because we’re using it as input. Swap it for vUv.x or vUv.y to see different results from the palette, or even better, come up with your own input!

See the Pen Cospalette by Mario (@marioecg) on CodePen.

And that’s it! I hope this short tutorial gave you some ideas to apply to anything you’re creating or inspired you to make something. Next time you use noise, stop and think if you can do something extra to make it more interesting and make sure to save Cospalette in your shader toolbelt.

Explore and have fun with this! And don’t forget to share it with me on Twitter. If you got any questions or suggestions, let me know.

I hope you learned something new. Till next time! 

References and Credits

Thanks to all the amazing people that put knowledge out in the world!

The post Twisted Colorful Spheres with Three.js appeared first on Codrops.

Drawing 2D Metaballs with WebGL2

While many people shy away from writing vanilla WebGL and immediately jump to frameworks such as three.js or PixiJS, it is possible to achieve great visuals and complex animation with relatively small amounts of code. Today, I would like to present core WebGL concepts while programming some simple 2D visuals. This article assumes at least some higher-level knowledge of WebGL through a library.

Please note: WebGL2 has been around for years, yet Safari only recently enabled it behind a flag. It is a pretty significant upgrade from WebGL1 and brings tons of new useful features, some of which we will take advantage of in this tutorial.

What are we going to build

From a high level standpoint, to implement our 2D metaballs we need two steps:

  • Draw a bunch of rectangles with radial linear gradient starting from their centers and expanding to their edges. Draw a lot of them and alpha blend them together in a separate framebuffer.
  • Take the resulting image with the blended quads from step #1, scan its pixels one by one and decide the new color of the pixel depending on its opacity. For example, if the pixel has opacity smaller than 0.5, render it in red. Otherwise render it in yellow, and so on.
Rendering multiple 2D quads and turning them to metaballs with post-processing.
Left: Multiple quads rendered with radial gradient, alpha blended and rendered to a texture.
Right: Post-processing on the generated texture and rendering the result to the device screen. Conditional coloring of each pixel based on opacity.

Don’t worry if these terms don’t make a lot of sense just yet – we will go over each of the steps needed in detail. Let’s jump into the code and start building!

Bootstrapping our program

We will start things by

  • Creating a HTMLCanvasElement, sizing it to our device viewport and inserting it into the page DOM
  • Obtaining a WebGL2RenderingContext to use for drawing stuff
  • Setting the correct WebGL viewport and the background color for our scene
  • Starting a requestAnimationFrame loop that will draw our scene as fast as the device allows. The speed is determined by various factors such as the hardware, current CPU / GPU workloads, battery levels, user preferences and so on. For smooth animation we are going to aim for 60FPS.
/* Create our canvas and obtain its WebGL2RenderingContext */
const canvas = document.createElement('canvas')
const gl = canvas.getContext('webgl2')

/* Handle error somehow if no WebGL2 support */
if (!gl) {
  // ...
}

/* Size our canvas and listen for resize events */
resizeCanvas()
window.addEventListener('resize', resizeCanvas)

/* Append our canvas to the DOM and set its background-color with CSS */
canvas.style.backgroundColor = 'black'
document.body.appendChild(canvas)

/* Issue first frame paint */
requestAnimationFrame(updateFrame)

function updateFrame (timestampMs) {
   /* Set our program viewport to fit the actual size of our monitor with devicePixelRatio into account */
   gl.viewport(0, 0, canvas.width, canvas.height)
   /* Set the WebGL background colour to be transparent */
   gl.clearColor(0, 0, 0, 0)
   /* Clear the current canvas pixels */
   gl.clear(gl.COLOR_BUFFER_BIT)

   /* Issue next frame paint */
   requestAnimationFrame(updateFrame)
}

function resizeCanvas () {
   /*
      We need to account for devicePixelRatio when sizing our canvas.
      We will use it to obtain the actual pixel size of our viewport and size our canvas to match it.
      We will then downscale it back to CSS units so it neatly fills our viewport and we benefit from downsampling antialiasing
      We also need to limit it because it can really slow our program. Modern iPhones have devicePixelRatios of 3. This means rendering 9x more pixels each frame!

      More info: https://webglfundamentals.org/webgl/lessons/webgl-resizing-the-canvas.html 
   */
   const dpr = devicePixelRatio > 2 ? 2 : devicePixelRatio
   canvas.width = innerWidth * dpr
   canvas.height = innerHeight * dpr
   canvas.style.width = `${innerWidth}px`
   canvas.style.height = `${innerHeight}px`
}

Drawing a quad

The next step is to actually draw a shape. WebGL has a rendering pipeline, which dictates how the object you draw, with its corresponding geometry and material, ends up on the device screen. WebGL is essentially just a rasterising engine, in the sense that you give it properly formatted data and it produces pixels for you.

The full rendering pipeline is out of the scope for this tutorial, but you can read more about it here. Let’s break down what exactly we need for our program:

Defining our geometry and its attributes

Each object we draw in WebGL is represented as a WebGLProgram running on the device GPU. It consists of input variables and a vertex and fragment shader that operate on these variables. The vertex shader’s responsibility is to position our geometry correctly on the device screen, and the fragment shader’s responsibility is to control its appearance.

It’s up to us as developers to write our vertex and fragment shaders, compile them on the device GPU and link them in a GLSL program. Once we have successfully done this, we must query this program’s input variable locations that were allocated on the GPU for us, supply correctly formatted data to them, enable them and instruct them how to unpack and use our data.

To render our quad, we need 3 input variables:

  1. a_position will dictate the position of each vertex of our quad geometry. We will pass it as an array of 12 floats, i.e. 2 triangles with 3 points per triangle, each represented by 2 floats (x, y). This variable is an attribute, i.e. it is obviously different for each of the points that make up our geometry.
  2. a_uv will describe the texture offset for each point of our geometry. They too will be described as an array of 12 floats. We will use this data not to texture our quad with an image, but to dynamically create a radial linear gradient from the quad center. This variable is also an attribute and will too be different for each of our geometry points.
  3. u_projectionMatrix will be an input variable represented as a 32bit float array of 16 items that will dictate how we transform our geometry positions, described in pixel values, into the normalised WebGL coordinate system. This variable is a uniform; unlike the previous two, it will not change for each geometry position.

We can take advantage of Vertex Array Object to store the description of our GLSL program input variables, their locations on the GPU and how should they be unpacked and used.

WebGLVertexArrayObjects, or VAOs, are 1st class citizens in WebGL2, unlike in WebGL1 where they were hidden behind an optional extension and their support was not guaranteed. They let us type less, execute fewer WebGL bindings and keep our drawing state in a single, easy to manage object that is simpler to track. They essentially store the description of our geometry and we can reference them later.

We need to write the shaders in GLSL 3.00 ES, which WebGL2 supports. Our vertex shader will be pretty simple:

/*
  Pass in geometry position and tex coord from the CPU
*/
in vec4 a_position;
in vec2 a_uv;

/*
  Pass in global projection matrix for each vertex
*/
uniform mat4 u_projectionMatrix;

/*
  Specify varying variable to be passed to fragment shader
*/
out vec2 v_uv;

void main () {
  /*
   We need to convert our quad points positions from pixels to the normalized WebGL coordinate system
  */
  gl_Position = u_projectionMatrix * a_position;
  v_uv = a_uv;
}

At this point, after we have successfully executed our vertex shader, WebGL will fill in the pixels between the points that make up the geometry on the device screen. The way the space between the points is filled depends on what primitives we are using for drawing – WebGL supports points, lines and triangles.

We as developers do not have control over this step.

After it has rasterised our geometry, it will execute our fragment shader on each generated pixel. The fragment shader’s responsibility is the final appearance of each generated pixel and whether it should even be rendered. Here is our fragment shader:

/*
  Set fragment shader float precision
*/
precision highp float;

/*
  Consume interpolated tex coord varying from vertex shader
*/
in vec2 v_uv;

/*
  Final color represented as a vector of 4 components - r, g, b, a
*/
out vec4 outColor;

void main () {
  /*
    This function will run on each pixel generated by our quad geometry
  */
  /*
    Calculate the distance for each pixel from the center of the quad (0.5, 0.5)
  */
  float dist = distance(v_uv, vec2(0.5)) * 2.0;
  /*
    Invert and clamp our distance from 0.0 to 1.0
  */
  float c = clamp(1.0 - dist, 0.0, 1.0);
  /*
    Use the distance to generate the pixel opacity. We have to explicitly enable alpha blending in WebGL to see the correct result
  */
  outColor = vec4(vec3(1.0), c);
}

Let’s write two utility methods: makeGLShader() to create and compile our GLSL shaders and makeGLProgram() to link them into a GLSL program to be run on the GPU:

/*
  Utility method to create a WebGLShader object and compile it on the device GPU
  https://developer.mozilla.org/en-US/docs/Web/API/WebGLShader
*/
function makeGLShader (shaderType, shaderSource) {
  /* Create a WebGLShader object with correct type */
  const shader = gl.createShader(shaderType)
  /* Attach the shaderSource string to the newly created shader */
  gl.shaderSource(shader, shaderSource)
  /* Compile our newly created shader */
  gl.compileShader(shader)
  const success = gl.getShaderParameter(shader, gl.COMPILE_STATUS)
  /* Return the WebGLShader if compilation was a success */
  if (success) {
    return shader
  }
  /* Otherwise log the error and delete the faulty shader */
  console.error(gl.getShaderInfoLog(shader))
  gl.deleteShader(shader)
}

/*
  Utility method to create a WebGLProgram object
  It will create both a vertex and fragment WebGLShader and link them into a program on the device GPU
  https://developer.mozilla.org/en-US/docs/Web/API/WebGLProgram
*/
function makeGLProgram (vertexShaderSource, fragmentShaderSource) {
  /* Create and compile vertex WebGLShader */
  const vertexShader = makeGLShader(gl.VERTEX_SHADER, vertexShaderSource)
  /* Create and compile fragment WebGLShader */
  const fragmentShader = makeGLShader(gl.FRAGMENT_SHADER, fragmentShaderSource)
  /* Create a WebGLProgram and attach our shaders to it */
  const program = gl.createProgram()
  gl.attachShader(program, vertexShader)
  gl.attachShader(program, fragmentShader)
  /* Link the newly created program on the device GPU */
  gl.linkProgram(program) 
  /* Return the WebGLProgram if linking was successful */
  const success = gl.getProgramParameter(program, gl.LINK_STATUS)
  if (success) {
    return program
  }
  /* Otherwise log errors to the console and delete the faulty WebGLProgram */
  console.error(gl.getProgramInfoLog(program))
  gl.deleteProgram(program)
}

And here is the complete code snippet we need to add to our previous code snippet to generate our geometry, compile our shaders and link them into a GLSL program:

const canvas = document.createElement('canvas')
/* rest of code */

/* Enable WebGL alpha blending */
gl.enable(gl.BLEND)
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)

/*
  Generate the Vertex Array Object and GLSL program
  we need to render our 2D quad
*/
const {
  quadProgram,
  quadVertexArrayObject,
} = makeQuad(innerWidth / 2, innerHeight / 2)

/* --------------- Utils ----------------- */

function makeQuad (positionX, positionY, width = 50, height = 50, drawType = gl.STATIC_DRAW) {
  /*
    Write our vertex and fragment shader programs as simple JS strings

    !!! Important !!!!
    
    WebGL2 requires GLSL 3.00 ES
    We need to declare this version on the FIRST LINE OF OUR PROGRAM
    Otherwise it would not work!
  */
  const vertexShaderSource = `#version 300 es
    /*
      Pass in geometry position and tex coord from the CPU
    */
    in vec4 a_position;
    in vec2 a_uv;
    
    /*
     Pass in global projection matrix for each vertex
    */
    uniform mat4 u_projectionMatrix;
    
    /*
      Specify varying variable to be passed to fragment shader
    */
    out vec2 v_uv;
    
    void main () {
      gl_Position = u_projectionMatrix * a_position;
      v_uv = a_uv;
    }
  `
  const fragmentShaderSource = `#version 300 es
    /*
      Set fragment shader float precision
    */
    precision highp float;
    
    /*
      Consume interpolated tex coord varying from vertex shader
    */
    in vec2 v_uv;
    
    /*
      Final color represented as a vector of 4 components - r, g, b, a
    */
    out vec4 outColor;
    
    void main () {
      float dist = distance(v_uv, vec2(0.5)) * 2.0;
      float c = clamp(1.0 - dist, 0.0, 1.0);
      outColor = vec4(vec3(1.0), c);
    }
  `
  /*
    Construct a WebGLProgram object out of our shader sources and link it on the GPU
  */
  const quadProgram = makeGLProgram(vertexShaderSource, fragmentShaderSource)
  
  /*
    Create a Vertex Array Object that will store a description of our geometry
    that we can reference later when rendering
  */
  const quadVertexArrayObject = gl.createVertexArray()
  
  /*
    1. Defining geometry positions
    
    Create the geometry points for our quad
        
    V6  _______ V5         V3
       |      /         /|
       |    /         /  |
       |  /         /    |
    V4 |/      V1 /______| V2
     
     We need two triangles to form a single quad
     As you can see, we end up duplicating vertices:
     V5 & V3 and V4 & V1 end up occupying the same position.
     
     There are better ways to prepare our data so we don't end up with
     duplicates, but let's keep it simple for this demo and duplicate them
     
     Unlike regular Javascript arrays, WebGL needs strongly typed data
     That's why we supply our positions as an array of 32 bit floating point numbers
  */
  const vertexArray = new Float32Array([
    /*
      First set of 3 points are for our first triangle
    */
    positionX - width / 2,  positionY + height / 2, // Vertex 1 (X, Y)
    positionX + width / 2,  positionY + height / 2, // Vertex 2 (X, Y)
    positionX + width / 2,  positionY - height / 2, // Vertex 3 (X, Y)
    /*
      Second set of 3 points are for our second triangle
    */
    positionX - width / 2, positionY + height / 2, // Vertex 4 (X, Y)
    positionX + width / 2, positionY - height / 2, // Vertex 5 (X, Y)
    positionX - width / 2, positionY - height / 2  // Vertex 6 (X, Y)
  ])

  /*
    Create a WebGLBuffer that will hold our triangles positions
  */
  const vertexBuffer = gl.createBuffer()
  /*
    Now that we've created a GLSL program on the GPU we need to supply data to it
    We need to supply our 32bit float array to the a_position variable used by the GLSL program
    
    When you link a vertex shader with a fragment shader by calling gl.linkProgram(someProgram)
    WebGL (the driver/GPU/browser) decide on their own which index/location to use for each attribute
    
    Therefore we need to find the location of a_position from our program
  */
  const a_positionLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_position')
  
  /*
    Bind the Vertex Array Object descriptor for this geometry
    Each geometry instruction from now on will be recorded under it
    
    To stop recording after we are done describing our geometry, we need to simply unbind it
  */
  gl.bindVertexArray(quadVertexArrayObject)

  /*
    Bind the active gl.ARRAY_BUFFER to our WebGLBuffer that describe the geometry positions
  */
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
  /*
    Feed our 32bit float array that describes our quad to the vertexBuffer using the
    gl.ARRAY_BUFFER global handle
  */
  gl.bufferData(gl.ARRAY_BUFFER, vertexArray, drawType)
  /*
    We need to explicitly enable the a_position variable on the GPU
  */
  gl.enableVertexAttribArray(a_positionLocationOnGPU)
  /*
    Finally we need to instruct the GPU how to pull the data out of our
    vertexBuffer and feed it into the a_position variable in the GLSL program
  */
  /*
    Tell the attribute how to get data out of positionBuffer (ARRAY_BUFFER)
  */
  const size = 2           // 2 components per iteration
  const type = gl.FLOAT    // the data is 32bit floats
  const normalize = false  // don't normalize the data
  const stride = 0         // 0 = move forward size * sizeof(type) each iteration to get the next position
  const offset = 0         // start at the beginning of the buffer
  gl.vertexAttribPointer(a_positionLocationOnGPU, size, type, normalize, stride, offset)
  
  /*
    2. Defining geometry UV texCoords
    
    V6  _______ V5         V3
       |      /         /|
       |    /         /  |
       |  /         /    |
    V4 |/      V1 /______| V2
  */
  const uvsArray = new Float32Array([
    0, 0, // V1
    1, 0, // V2
    1, 1, // V3
    0, 0, // V4
    1, 1, // V5
    0, 1  // V6
  ])
  /*
    The rest of the code is exactly like in the vertices step above.
    We need to put our data in a WebGLBuffer, look up the a_uv variable
    in our GLSL program, enable it, supply data to it and instruct
    WebGL how to pull it out:
  */
  const uvsBuffer = gl.createBuffer()
  const a_uvLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_uv')
  gl.bindBuffer(gl.ARRAY_BUFFER, uvsBuffer)
  gl.bufferData(gl.ARRAY_BUFFER, uvsArray, drawType)
  gl.enableVertexAttribArray(a_uvLocationOnGPU)
  gl.vertexAttribPointer(a_uvLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
  
  /*
    Stop recording and unbind the Vertex Array Object descriptor for this geometry
  */
  gl.bindVertexArray(null)
  
  /*
    WebGL has a normalized viewport coordinate system which looks like this:
    
         Device Viewport
       ------- 1.0 ------  
      |         |         |
      |         |         |
    -1.0 --------------- 1.0
      |         |         | 
      |         |         |
       ------ -1.0 -------
       
     However as you can see, we pass the position and size of our quad in actual pixels
     To convert these pixels values to the normalized coordinate system, we will
     use the simplest 2D projection matrix.
     It will be represented as an array of 16 32bit floats
     
     You can read a gentle introduction to 2D matrices here
     https://webglfundamentals.org/webgl/lessons/webgl-2d-matrices.html
  */
  const projectionMatrix = new Float32Array([
    2 / innerWidth, 0, 0, 0,
    0, -2 / innerHeight, 0, 0,
    0, 0, 0, 0,
    -1, 1, 0, 1,
  ])
  
  /*
    In order to supply uniform data to our quad GLSL program, we first need to enable the GLSL program responsible for rendering our quad
  */
  gl.useProgram(quadProgram)
  /*
    Just like the a_position attribute variable earlier, we also need to look up
    the location of uniform variables in the GLSL program in order to supply them data
  */
  const u_projectionMatrixLocation = gl.getUniformLocation(quadProgram, 'u_projectionMatrix')
  /*
    Supply our projection matrix as a Float32Array of 16 items to the u_projectionMatrix uniform
  */
  gl.uniformMatrix4fv(u_projectionMatrixLocation, false, projectionMatrix)
  /*
    We have set up our uniform variables correctly, stop using the quad program for now
  */
  gl.useProgram(null)

  /*
    Return our GLSL program and the Vertex Array Object descriptor of our geometry
    We will need them to render our quad in our updateFrame method
  */
  return {
    quadProgram,
    quadVertexArrayObject,
  }
}

/* rest of code */
function makeGLShader (shaderType, shaderSource) {}
function makeGLProgram (vertexShaderSource, fragmentShaderSource) {}
function updateFrame (timestampMs) {}

We have successfully created a GLSL program quadProgram, which is running on the GPU, waiting to be drawn on the screen. We also have obtained a Vertex Array Object quadVertexArrayObject, which describes our geometry and can be referenced before we draw. We can now draw our quad. Let’s augment our updateFrame() method like so:

function updateFrame (timestampMs) {
   /* rest of our code */

  /*
    Bind the Vertex Array Object descriptor of our quad we generated earlier
  */
  gl.bindVertexArray(quadVertexArrayObject)
  /*
    Use our quad GLSL program
  */
  gl.useProgram(quadProgram)
  /*
    Issue a render command to paint our quad triangles
  */
  {
    const drawPrimitive = gl.TRIANGLES
    const vertexArrayOffset = 0
    const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
    gl.drawArrays(drawPrimitive, vertexArrayOffset, numberOfVertices)
  }
  /*
    After a successful render, it is good practice to unbind our
    GLSL program and Vertex Array Object so we keep WebGL state clean.
    We will bind them again anyway on the next render
  */
  gl.useProgram(null)
  gl.bindVertexArray(null)

  /* Issue next frame paint */
  requestAnimationFrame(updateFrame)
}

And here is our result:

We can use the great SpectorJS Chrome extension to capture our WebGL operations on each frame. We can look at the entire command list with their associated visual states and context information. Here is what it takes to render a single frame with our updateFrame() call:

Draw calls needed to render a single 2D quad on the center of our screen.
A screenshot of all the steps we implemented to render a single quad. (Click to see a larger version)

Some gotchas:

  1. We declare the vertex positions of our triangles in a counter clockwise order. This is important.
  2. We need to explicitly enable blending in WebGL and specify its blend function. For our demo we use gl.ONE_MINUS_SRC_ALPHA as the destination blend factor (it multiplies the already rendered colors by 1 minus the source alpha value).
  3. In our vertex shader you can see we expect the input variable a_position to be a vector with 4 components (vec4), while in Javascript we specify only 2 items per vertex. That’s because the default attribute value is 0, 0, 0, 1. It doesn’t matter that you’re only supplying x and y from your attributes; z defaults to 0 and w defaults to 1.
  4. As you can see, WebGL is a state machine, where you have to constantly bind stuff before you are able to work on it, and you always have to make sure you unbind it afterwards. Consider how in the code snippet above we supplied a Float32Array with our positions to the vertexBuffer:
const vertexArray = new Float32Array([/* ... */])
const vertexBuffer = gl.createBuffer()
/* Bind our vertexBuffer to the global binding WebGL bind point gl.ARRAY_BUFFER */
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
/* At this point, gl.ARRAY_BUFFER represents vertexBuffer */
/* Supply data to our vertexBuffer using the gl.ARRAY_BUFFER binding point */
gl.bufferData(gl.ARRAY_BUFFER, vertexArray, gl.STATIC_DRAW)
/* Do a bunch of other stuff with the active gl.ARRAY_BUFFER (vertexBuffer) here */
// ...

/* After you have done your work, unbind it */
gl.bindBuffer(gl.ARRAY_BUFFER, null)

This is the complete opposite of regular Javascript, where this same operation would be expressed something like this (pseudocode):

const vertexBuffer = gl.createBuffer()
vertexBuffer.addData(vertexArray)
vertexBuffer.setDrawOperation(gl.STATIC_DRAW)
// etc.

Coming from a Javascript background, I initially found WebGL’s state machine way of doing things, with all the constant binding and unbinding, really odd. One must exercise good discipline and always make sure to unbind stuff after using it, even in trivial programs like ours! Otherwise you risk things not working and hard-to-track bugs.

Drawing lots of quads

We have successfully rendered a single quad, but in order to make things more interesting and visually appealing, we need to draw more.

As we saw already, we can easily create new geometries with different positions using our makeQuad() utility helper. We can pass them different positions and sizes and compile each one of them into a separate GLSL program to be executed on the GPU. This will work, however:

As we saw in our update loop method updateFrame, to render our quad on each frame we must:

  1. Use the correct GLSL program by calling gl.useProgram()
  2. Bind the correct VAO describing our geometry by calling gl.bindVertexArray()
  3. Issue a draw call with correct primitive type by calling gl.drawArrays()

So 3 WebGL commands in total.

What if we want to render 500 quads? Suddenly we jump to 500×3, or 1500, individual WebGL calls on each frame of our animation. If we want 1000 quads we jump up to 3000 individual calls, without even counting all of the preparation WebGL bindings we have to do before our updateFrame loop starts.

Geometry Instancing is a way to reduce these calls. It works by letting you tell WebGL how many times you want the same thing drawn (the number of instances) with minor variations, such as rotation, scale, position etc. Examples include trees, grass, crowd of people, boxes in a warehouse, etc.

Just like VAOs, instancing is a 1st class citizen in WebGL2 and does not require extensions, unlike WebGL1. Let’s augment our code to support geometry instancing and render 1000 quads with random positions.

First of all, we need to decide how many quads we want rendered and prepare the offset positions for each one as a new array of 32bit floats. Let’s do 1000 quads and position them randomly in our viewport:

/* rest of code */

/* How many quads we want rendered */
const QUADS_COUNT = 1000
/*
  Array to store our quads positions
  We need to lay out our array as a continuous set
  of numbers, where each pair represents the X and Y
  of a single 2D position.
  
  Hence for 1000 quads we need an array of 2000 items
  or 1000 pairs of X and Y
*/
const quadsPositions = new Float32Array(QUADS_COUNT * 2)
for (let i = 0; i < QUADS_COUNT; i++) {
  /*
    Generate a random X and Y position
  */
  const randX = Math.random() * innerWidth
  const randY = Math.random() * innerHeight
  /*
    Set the correct X and Y for each pair in our array
  */
  quadsPositions[i * 2 + 0] = randX
  quadsPositions[i * 2 + 1] = randY
}

/*
  We also need to augment our makeQuad() method
  It no longer expects a single position, rather an array of positions
*/
const {
  quadProgram,
  quadVertexArrayObject,
} = makeQuad(quadsPositions)

/* rest of code */

Instead of a single position, we will now pass an array of positions into our makeQuad() method. Let’s augment this method to receive our offsets array as a new input variable, a_offset, in our shaders, which will contain the correct XY offset for a particular instance. To do this, we need to prepare our offsets as a new WebGLBuffer and instruct WebGL how to unpack them, just like we did for a_position and a_uv:

function makeQuad (quadsPositions, width = 70, height = 70, drawType = gl.STATIC_DRAW) {
  /* rest of code */

  /*
    Add offset positions for our individual instances
    They are declared and used in exactly the same way as
    "a_position" and "a_uv" above
  */
  const offsetsBuffer = gl.createBuffer()
  const a_offsetLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_offset')
  gl.bindBuffer(gl.ARRAY_BUFFER, offsetsBuffer)
  gl.bufferData(gl.ARRAY_BUFFER, quadsPositions, drawType)
  gl.enableVertexAttribArray(a_offsetLocationOnGPU)
  gl.vertexAttribPointer(a_offsetLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
  /*
    HOWEVER, we must add an additional WebGL call to set this attribute to only
    change per instance, instead of per vertex like a_position and a_uv above
  */
  const instancesDivisor = 1
  gl.vertexAttribDivisor(a_offsetLocationOnGPU, instancesDivisor)
  
  /*
    Stop recording and unbind the Vertex Array Object descriptor for this geometry
  */
  gl.bindVertexArray(null)

  /* rest of code */
}

We need to augment our original vertexArray responsible for passing data into our a_position GLSL variable. We no longer need to offset it to the desired position like in the first example, now the a_offset variable will take care of this in the vertex shader:

const vertexArray = new Float32Array([
  /*
    First set of 3 points are for our first triangle
  */
 -width / 2,  height / 2, // Vertex 1 (X, Y)
  width / 2,  height / 2, // Vertex 2 (X, Y)
  width / 2, -height / 2, // Vertex 3 (X, Y)
  /*
    Second set of 3 points are for our second triangle
  */
 -width / 2,  height / 2, // Vertex 4 (X, Y)
  width / 2, -height / 2, // Vertex 5 (X, Y)
 -width / 2, -height / 2  // Vertex 6 (X, Y)
])

We also need to augment our vertex shader to consume and use the new a_offset input variable we pass from Javascript:

const vertexShaderSource = `#version 300 es
  /* rest of GLSL code */
  /*
    This input vector will change once per instance
  */
  in vec4 a_offset;

  void main () {
     /* Account for a_offset in the final geometry position */
     vec4 newPosition = a_position + a_offset;
     gl_Position = u_projectionMatrix * newPosition;
  }
  /* rest of GLSL code */
`

And as a final step we need to change the drawArrays call in our updateFrame to drawArraysInstanced to account for instancing. This new method expects the exact same arguments and adds instanceCount as the last one:

function updateFrame (timestampMs) {
   /* rest of code */
   {
     const drawPrimitive = gl.TRIANGLES
     const vertexArrayOffset = 0
     const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
     gl.drawArraysInstanced(drawPrimitive, vertexArrayOffset, numberOfVertices, QUADS_COUNT)
   }
   /* rest of code */
}

And with all these changes, here is our updated example:

Even though we increased the amount of rendered objects by 1000x, we are still making 3 WebGL calls on each frame. That’s a pretty great performance win!

Steps needed so our WebGL can draw 1000 of quads via geometry instancing.
All WebGL calls needed to draw our 1000 quads in a single updateFrame()call. Note the amount of needed calls did not increase from the previous example thanks to instancing.

Post Processing with a fullscreen quad

Now that we have our 1000 quads successfully rendering to the device screen on each frame, we can turn them into metaballs. As we established, we need to scan the pixels of the picture we generated in the previous steps and determine the alpha value of each pixel. If it is below a certain threshold, we discard it, otherwise we color it.

To do this, instead of rendering our scene directly to the screen as we do right now, we need to render it to a texture. We will do our post processing on this texture and render the result to the device screen.

Post-Processing is a technique used in graphics that allows you to take a current input texture, and manipulate its pixels to produce a transformed image. This can be used to apply shiny effects like volumetric lighting, or any other filter type effect you’ve seen in applications like Photoshop or Instagram.

Nicolas Garcia Belmonte

The basic technique for creating these effects is pretty straightforward:

  1. A WebGLTexture is created with the same size as the canvas and attached as a color attachment to a WebGLFramebuffer. At the beginning of our updateFrame() method, the framebuffer is set as the render target, and the entire scene is rendered normally to it.
  2. Next, a full-screen quad is rendered to the device screen using the texture generated in step 1 as an input. The shader used during the rendering of the quad is what contains the post-process effect.

Creating a texture and framebuffer to render to

A framebuffer is just a collection of attachments. Attachments are either textures or renderbuffers. Let’s create a WebGLTexture and attach it to a framebuffer as the first color attachment:

/* rest of code */

const renderTexture = makeTexture()
const framebuffer = makeFramebuffer(renderTexture)

function makeTexture (textureWidth = canvas.width, textureHeight = canvas.height) {
  /*
    Create the texture that we will use to render to
  */
  const targetTexture = gl.createTexture()
  /*
    Just like everything else in WebGL up until now, we need to bind it
    so we can configure it. We will unbind it once we are done with it.
  */
  gl.bindTexture(gl.TEXTURE_2D, targetTexture)

  /*
    Define texture settings
  */
  const level = 0
  const internalFormat = gl.RGBA
  const border = 0
  const format = gl.RGBA
  const type = gl.UNSIGNED_BYTE
  /*
    Notice how data is null. That's because we don't have data for this texture just yet
    We just need WebGL to allocate the texture
  */
  const data = null
  gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, textureWidth, textureHeight, border, format, type, data)

  /*
    Set the filtering so we don't need mips
  */
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE)
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE)
  
  /* Unbind the texture now that we are done configuring it */
  gl.bindTexture(gl.TEXTURE_2D, null)

  return targetTexture
}

function makeFramebuffer (texture) {
  /*
    Create and bind the framebuffer
  */
  const fb = gl.createFramebuffer()
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb)
 
  /*
    Attach the texture as the first color attachment
  */
  const attachmentPoint = gl.COLOR_ATTACHMENT0
  const mipLevel = 0
  gl.framebufferTexture2D(gl.FRAMEBUFFER, attachmentPoint, gl.TEXTURE_2D, texture, mipLevel)

  /*
    Unbind the framebuffer once we are done configuring it and
    return it so we can reference it in updateFrame()
  */
  gl.bindFramebuffer(gl.FRAMEBUFFER, null)
  return fb
}

We have successfully created a texture and attached it as color attachment to a framebuffer. Now we can render our scene to it. Let’s augment our updateFrame()method:

function updateFrame () {
  gl.viewport(0, 0, canvas.width, canvas.height)
  gl.clearColor(0, 0, 0, 0)
  gl.clear(gl.COLOR_BUFFER_BIT)

  /*
    Bind the framebuffer we created
    From now on until we unbind it, each WebGL draw command will render in it
  */
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
  
  /* Set the offscreen framebuffer clear color */
  gl.clearColor(0.2, 0.2, 0.2, 1.0)
  /* Clear the offscreen framebuffer pixels */
  gl.clear(gl.COLOR_BUFFER_BIT)

  /*
    Code for rendering our instanced quads here
  */

  /*
    We have successfully rendered to the framebuffer at this point
    In order to render to the screen next, we need to unbind it
  */
  gl.bindFramebuffer(gl.FRAMEBUFFER, null)
  
  /* Issue next frame paint */
  requestAnimationFrame(updateFrame)
}

Let’s take a look at our result:

As you can see, we get an empty screen. There are no errors and the program is running just fine – keep in mind however that we are rendering to a separate framebuffer, not the default device screen framebuffer!

Break down of our WebGL scene and the steps needed to render it to a separate framebuffer.
Our program produces a black screen, since we are rendering to the offscreen framebuffer

In order to display our offscreen framebuffer back on the screen, we need to render a fullscreen quad and use the framebuffer’s texture as an input.

Creating a fullscreen quad and displaying our texture on it

Let’s create a new quad. We can reuse our makeQuad() method from the snippets above, but we need to augment it to optionally support instancing and to accept the vertex and fragment shader sources as arguments. This time we only need one quad, and the shaders we need for it are different.

Take a look at the updated makeQuad() signature:

/* rename our instanced quads program & VAO */
const {
  quadProgram: instancedQuadsProgram,
  quadVertexArrayObject: instancedQuadsVAO,
} = makeQuad({
  instancedOffsets: quadsPositions,
  /*
    We need different set of vertex and fragment shaders
    for the different quads we need to render, so pass them from outside
  */
  vertexShaderSource: instancedQuadVertexShader,
  fragmentShaderSource: instancedQuadFragmentShader,
  /*
    support optional instancing
  */
  isInstanced: true,
})

Let’s use the same method to create a new fullscreen quad and render it. First our vertex and fragment shader:

const fullscreenQuadVertexShader = `#version 300 es
   in vec4 a_position;
   in vec2 a_uv;
   
   uniform mat4 u_projectionMatrix;
   
   out vec2 v_uv;
   
   void main () {
    gl_Position = u_projectionMatrix * a_position;
    v_uv = a_uv;
   }
`
const fullscreenQuadFragmentShader = `#version 300 es
  precision highp float;
  
  /*
    Pass the texture we render to as a uniform
  */
  uniform sampler2D u_texture;
  
  in vec2 v_uv;
  
  out vec4 outputColor;
  
  void main () {
    /*
      Use our interpolated UVs we assigned in Javascript to lookup
      texture color value at each pixel
    */
    vec4 inputColor = texture(u_texture, v_uv);
    
    /*
      0.5 is our alpha threshold we use to decide if
      pixel should be discarded or painted
    */
    float cutoffThreshold = 0.5;
    /*
      "cutoff" will be 0 if pixel is below 0.5 or 1 if above
      
      step() docs - https://thebookofshaders.com/glossary/?search=step
    */
    float cutoff = step(cutoffThreshold, inputColor.a);
    
    /*
      Let's use mix() GLSL method instead of if statement
      if cutoff is 0, we will discard the pixel by using empty color with no alpha
      otherwise, let's use black with alpha of 1
      
      mix() docs - https://thebookofshaders.com/glossary/?search=mix
    */
    vec4 emptyColor = vec4(0.0);
    /* Render base metaballs shapes */
    vec4 borderColor = vec4(1.0, 0.0, 0.0, 1.0);
    outputColor = mix(
      emptyColor,
      borderColor,
      cutoff
    );
    
    /*
      Increase the threshold and calculate a new cutoff, so we can render smaller shapes again, this time in a different color and with a smaller radius
    */
    cutoffThreshold += 0.05;
    cutoff = step(cutoffThreshold, inputColor.a);
    vec4 fillColor = vec4(1.0, 1.0, 0.0, 1.0);
    /*
      Add new smaller metaballs color on top of the old one
    */
    outputColor = mix(
      outputColor,
      fillColor,
      cutoff
    );
  }
`

Let’s use them to create and link a valid GLSL program, just like when we rendered our instances:

const {
  quadProgram: fullscreenQuadProgram,
  quadVertexArrayObject: fullscreenQuadVAO,
} = makeQuad({
  vertexShaderSource: fullscreenQuadVertexShader,
  fragmentShaderSource: fullscreenQuadFragmentShader,
  isInstanced: false,
  width: innerWidth,
  height: innerHeight
})
/*
  Unlike our instances GLSL program, here we need to pass an extra uniform - a "u_texture"!
  Tell the shader to use texture unit 0 for u_texture
*/
gl.useProgram(fullscreenQuadProgram)
const u_textureLocation = gl.getUniformLocation(fullscreenQuadProgram, 'u_texture')
gl.uniform1i(u_textureLocation, 0)
gl.useProgram(null)

Finally we can render the fullscreen quad with the result texture as the u_texture uniform. Let’s change our updateFrame() method:

function updateFrame () {
 gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
 /* render instanced quads here */
 gl.bindFramebuffer(gl.FRAMEBUFFER, null)
 
 /*
   Render our fullscreen quad
 */
 gl.bindVertexArray(fullscreenQuadVAO)
 gl.useProgram(fullscreenQuadProgram)
 /*
  Bind the texture we render to as active TEXTURE_2D
 */
 gl.bindTexture(gl.TEXTURE_2D, renderTexture)
 {
   const drawPrimitive = gl.TRIANGLES
   const vertexArrayOffset = 0
   const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
   gl.drawArrays(drawPrimitive, vertexArrayOffset, numberOfVertices)
 }
 /*
   Just like everything else, unbind our texture once we are done rendering
 */
 gl.bindTexture(gl.TEXTURE_2D, null)
 gl.useProgram(null)
 gl.bindVertexArray(null)
 requestAnimationFrame(updateFrame)
}

And here is our final result (I also added a simple animation to make the effect more apparent):
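The animation itself isn’t covered in the snippets above. One simple way to get a similar motion, sketched here rather than taken from the demo’s source, is to keep a velocity per quad and re-upload the offsets buffer every frame (this assumes quadsPositions and offsetsBuffer are kept in scope, for example by returning offsetsBuffer from makeQuad()):

/* Hypothetical per-quad velocities in pixels per frame */
const quadsVelocities = new Float32Array(QUADS_COUNT * 2)
for (let i = 0; i < QUADS_COUNT; i++) {
  quadsVelocities[i * 2 + 0] = (Math.random() - 0.5) * 2
  quadsVelocities[i * 2 + 1] = (Math.random() - 0.5) * 2
}

function animateQuads () {
  /* Move each quad by its velocity */
  for (let i = 0; i < QUADS_COUNT * 2; i++) {
    quadsPositions[i] += quadsVelocities[i]
  }
  /* Re-upload the updated offsets to the buffer created in makeQuad() */
  gl.bindBuffer(gl.ARRAY_BUFFER, offsetsBuffer)
  gl.bufferData(gl.ARRAY_BUFFER, quadsPositions, gl.DYNAMIC_DRAW)
  gl.bindBuffer(gl.ARRAY_BUFFER, null)
}

Calling animateQuads() at the start of updateFrame(), before the instanced draw call, would make the quads drift around so the metaballs merge and split as they overlap.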

And here is the breakdown of one updateFrame() call:

Breakdown of our WebGL scene and the steps needed to render 1000 quads and post-process them into metaballs.
You can clearly see how we render our 1000 instanced quads in separate framebuffer in steps 1 to 3. We then draw and manipulate the resulting texture to a fullscreen quad that we render in steps 4 to 7.

Aliasing issues

On my 2016 MacBook Pro with retina display I can clearly see aliasing issues in our current example. If we add bigger radiuses and blow the animation up to fullscreen, the problem will only become more noticeable.

The issue comes from the fact that we are rendering to an 8bit gl.UNSIGNED_BYTE texture. If we want to increase the detail, we need to switch to floating point textures (32 bit float gl.RGBA32F or 16 bit float gl.RGBA16F). The catch is that these textures are not supported on all hardware and are not part of WebGL2 core. They are available through optional extensions whose presence we need to check for.

The extensions we are interested in to render to 32bit floating point textures are

  • EXT_color_buffer_float
  • OES_texture_float_linear

If these extensions are present on the user device, we can use internalFormat = gl.RGBA32F and textureType = gl.FLOAT when creating our render textures. If they are not present, we can optionally fallback and render to 16bit floating textures. The extensions we need in that case are:

  • EXT_color_buffer_half_float
  • OES_texture_half_float_linear

If these extensions are present, we can use internalFormat = gl.RGBA16F and textureType = gl.HALF_FLOAT for our render texture. If not, we will fallback to what we have used up until now – internalFormat = gl.RGBA and textureType = gl.UNSIGNED_BYTE.

Here is our updated makeTexture() method:

function makeTexture (textureWidth = canvas.width, textureHeight = canvas.height) { 
  /*
   Initialize internal format & texture type to default values
  */
  let internalFormat = gl.RGBA
  let type = gl.UNSIGNED_BYTE
  
  /*
    Check if optional extensions are present on device
  */
  const rgba32fSupported = gl.getExtension('EXT_color_buffer_float') && gl.getExtension('OES_texture_float_linear')
  
  if (rgba32fSupported) {
    internalFormat = gl.RGBA32F
    type = gl.FLOAT
  } else {
    /*
      Check if optional fallback extensions are present on device
    */
    const rgba16fSupported = gl.getExtension('EXT_color_buffer_half_float') && gl.getExtension('OES_texture_half_float_linear')
    if (rgba16fSupported) {
      internalFormat = gl.RGBA16F
      type = gl.HALF_FLOAT
    }
  }

  /* rest of code */
  
  /*
    Pass in correct internalFormat and textureType to texImage2D call 
  */
  gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, textureWidth, textureHeight, border, format, type, data)

  /* rest of code */
}

And here is our updated result:

Conclusion

I hope I managed to showcase the core principles behind WebGL2 with this demo. As you can see, the API itself is low-level and requires quite a bit of typing, yet at the same time it is really powerful and lets you draw complex scenes with fine-grained control over the rendering.

Writing production-ready WebGL requires even more typing: checking for optional features and extensions and handling the fallbacks when they are missing. So I would advise you to use a framework. At the same time, I believe it is important to understand the key concepts behind the API so you can successfully use higher-level libraries like three.js and dig into their internals if needed.

I am a big fan of twgl, which hides away much of the verbosity of the API while still being really low level with a small footprint. This demo’s code could easily be reduced by more than half by using it.
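To give a feel for how much typing it saves, here is a rough sketch of a minimal twgl render loop. This is not the demo’s code – the shader sources, the fullscreen triangle data and the u_time uniform are placeholders – it just illustrates how much of the create / bind / locate boilerplate collapses into a handful of calls:

import * as twgl from 'twgl.js'

const gl = document.querySelector('canvas').getContext('webgl2')

/* One call compiles, links and introspects both shaders */
const programInfo = twgl.createProgramInfo(gl, [vertexShaderSource, fragmentShaderSource])

/* One call creates and fills the buffers and remembers their layout */
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
  position: { numComponents: 2, data: [-1, -1, 3, -1, -1, 3] } // fullscreen triangle
})

function renderFrame () {
  gl.useProgram(programInfo.program)
  twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo) // binds attributes / VAO
  twgl.setUniforms(programInfo, { u_time: performance.now() / 1000 }) // looks up locations for us
  twgl.drawBufferInfo(gl, bufferInfo) // issues the drawArrays call
  requestAnimationFrame(renderFrame)
}
requestAnimationFrame(renderFrame)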

I encourage you to experiment around with the code after reading this article, plug in different values, change the order of things, add more draw commands and what not. I hope you walk away with a high level understanding of core WebGL2 API and how it all ties together, so you can learn more on your own.

The post Drawing 2D Metaballs with WebGL2 appeared first on Codrops.

Creating an Infinite Auto-Scrolling Gallery using WebGL with OGL and GLSL Shaders

Hello everyone, introducing myself a little bit first: I’m Luis Henrique Bizarro, a Senior Creative Developer at Active Theory based in São Paulo, Brazil. It’s always a pleasure for me to have the opportunity to collaborate with Codrops and help other developers learn new things, so I hope everyone enjoys this tutorial!

In this tutorial I’ll explain how to create an auto-scrolling, infinite image gallery. The image grid is also scrollable by user interaction, making it an interesting design element to showcase work. It’s based on this great animation seen on Oneshot.finance made by Jesper Landberg.

I’ve been using the technique of styling images first with HTML + CSS and then creating an abstraction of these elements inside WebGL using some camera and viewport calculations in multiple websites, so this is the approach we’re going to use in this tutorial.

The good thing about this implementation is that it can be reused across any WebGL library, so if you’re more familiar with Three.js or Babylon.js than OGL, you’ll also be able to achieve the same results with similar code when it comes to shading and scaling the plane meshes.

So let’s get into it!

Implementing our HTML markup

The first step is implementing our HTML markup. We’re going to use <figure> and <img> elements, nothing special here, just the standard:

<div class="demo-1__gallery">
  <figure class="demo-1__gallery__figure">
    <img class="demo-1__gallery__image" src="images/demo-1/1.jpg">
  </figure>

  <!-- Repeating the same markup until 12.jpg. -->
</div>

Setting our CSS styles

The second step is styling our elements using CSS. One of the first things I do on a website is define the font-size of the html element, because I use rem to help with the responsive breakpoints.

This comes in handy if you’re doing creative websites that only require two or three different breakpoints, so I highly recommend starting to use it if you haven’t adopted rem yet.

One thing I’m also using is calc() with the size of the design. In our tutorial we’re going to use 1920 as our main width, scaling our font-size relative to 100vw. This results in a font-size of 10px on a 1920px wide screen, for example:

html {
  font-size: calc(100vw / 1920 * 10);
}

Now let’s style our grid of images. We want to freely place our images across the screen using absolute positioning, so we’re just going to set the height, width and left/top styles across all our demo-1 classes:

.demo-1__gallery {
  height: 295rem;
  position: relative;
  visibility: hidden;
}

.demo-1__gallery__figure {
  position: absolute;
 
  &:nth-child(1) {
    height: 40rem;
    width: 70rem;
  }
 
  &:nth-child(2) {
    height: 50rem;
    left: 85rem;
    top: 30rem;
    width: 40rem;
  }
 
  &:nth-child(3) {
    height: 50rem;
    left: 15rem;
    top: 60rem;
    width: 60rem;
  }
 
  &:nth-child(4) {
    height: 30rem;
    right: 0;
    top: 10rem;
    width: 50rem;
  }
 
  &:nth-child(5) {
    height: 60rem;
    right: 15rem;
    top: 55rem;
    width: 40rem;
  }
 
  &:nth-child(6) {
    height: 75rem;
    left: 5rem;
    top: 120rem;
    width: 57.5rem;
  }
 
  &:nth-child(7) {
    height: 70rem;
    right: 0;
    top: 130rem;
    width: 50rem;
  }
 
  &:nth-child(8) {
    height: 50rem;
    left: 85rem;
    top: 95rem;
    width: 40rem;
  }
 
  &:nth-child(9) {
    height: 65rem;
    left: 75rem;
    top: 155rem;
    width: 50rem;
  }
 
  &:nth-child(10) {
    height: 43rem;
    right: 0;
    top: 215rem;
    width: 30rem;
  }
 
  &:nth-child(11) {
    height: 50rem;
    left: 70rem;
    top: 235rem;
    width: 80rem;
  }
 
  &:nth-child(12) {
    left: 0;
    top: 210rem;
    height: 70rem;
    width: 50rem;
  }
}
 
.demo-1__gallery__image {
  height: 100%;
  left: 0;
  object-fit: cover;
  position: absolute;
  top: 0;
  width: 100%;
}

Note that we’re hiding the visibility of our HTML, because it’s not going to be visible for the users since we’re going to load these images inside the <canvas> element. But below you can find a screenshot of what the result will look like.

Creating our OGL 3D environment

Now it’s time to get started with the WebGL implementation using OGL. First let’s create an App class that is going to be the entry point of our demo and inside of it, let’s also create the initial methods: createRenderer, createCamera, createScene, onResize and our requestAnimationFrame loop with update.

import { Renderer, Camera, Transform } from 'ogl'

class App {
  constructor () {
    this.createRenderer()
    this.createCamera()
    this.createScene()
 
    this.onResize()
 
    this.update()
 
    this.addEventListeners()
  }
 
  createRenderer () {
    this.renderer = new Renderer({
      alpha: true
    })
 
    this.gl = this.renderer.gl
 
    document.body.appendChild(this.gl.canvas)
  }
 
  createCamera () {
    this.camera = new Camera(this.gl)
    this.camera.fov = 45
    this.camera.position.z = 5
  }
 
  createScene () {
    this.scene = new Transform()
  }
 
  /**
   * Wheel.
   */
  onWheel (event) {
 
  }
 
  /**
   * Resize.
   */
  onResize () {
    this.screen = {
      height: window.innerHeight,
      width: window.innerWidth
    }
 
    this.renderer.setSize(this.screen.width, this.screen.height)
 
    this.camera.perspective({
      aspect: this.gl.canvas.width / this.gl.canvas.height
    })
 
    const fov = this.camera.fov * (Math.PI / 180)
    const height = 2 * Math.tan(fov / 2) * this.camera.position.z
    const width = height * this.camera.aspect
 
    this.viewport = {
      height,
      width
    }
  }
 
  /**
   * Update.
   */
  update () {
    this.renderer.render({
      scene: this.scene,
      camera: this.camera
    })
 
    window.requestAnimationFrame(this.update.bind(this))
  }
 
  /**
   * Listeners.
   */
  addEventListeners () {
    window.addEventListener('resize', this.onResize.bind(this))
 
    window.addEventListener('mousewheel', this.onWheel.bind(this))
    window.addEventListener('wheel', this.onWheel.bind(this))
  }
}

new App()

Explaining some part of our App.js file

In our createRenderer method, we’re initializing one renderer with alpha enabled, storing our GL context (this.renderer.gl) reference in the this.gl variable and appending our <canvas> element to our document.body.

In our createCamera method, we’re just creating a new Camera and setting some of its attributes: fov and its z position.

In our createScene method, we’re using the Transform class, that is the representation of a new scene that is going to contain all our planes that represent our images in the WebGL environment.

The onResize method is the most important part of our initial setup. It’s responsible for three different things:

  1. Making sure we’re always resizing the <canvas> element with the correct viewport sizes.
  2. Updating our this.camera perspective by dividing the width by the height of the viewport.
  3. Storing in the this.viewport variable the values that will help us transform pixels into 3D environment sizes, using the fov from the camera.

The approach of using the camera.fov to transform pixels into 3D environment sizes is used very often in WebGL implementations. Basically what it does is make sure that if we do something like this.mesh.scale.x = this.viewport.width, our mesh will fit the entire screen width, behaving like width: 100%, but in 3D space.
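As a small illustration (this is a hypothetical helper, not part of the final code – the Media class we’ll write later applies the same ratio directly in its updateScale method), converting a size in pixels to viewport units boils down to one ratio:

// Hypothetical helper: converts a width in CSS pixels to 3D viewport units.
// `screen` is the window size in pixels and `viewport` the visible 3D size
// at the camera's z position, both calculated in the onResize method above.
function widthToViewport (widthInPixels, screen, viewport) {
  return viewport.width * (widthInPixels / screen.width)
}

// e.g. a 960px wide element on a 1920px wide screen covers half the viewport width:
// mesh.scale.x = widthToViewport(960, this.screen, this.viewport)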

And finally in our update, we’re setting our requestAnimationFrame loop and making sure we keep rendering our scene.

Create our reusable geometry instance

It’s a good practice to keep memory usage low by always reusing the same geometry reference no matter what WebGL library you’re using. To represent all our images, we’re going to use a Plane geometry, so let’s create a new method and store this new geometry inside the this.planeGeometry variable.

import { Renderer, Camera, Transform, Plane } from 'ogl'
 
createGeometry () {
  this.planeGeometry = new Plane(this.gl)
}

Select all images and create a new class for each one

Now it’s time to use document.querySelectorAll to select all our images and create one reusable class that is going to represent them. (We’re going to create a single Media.js file later.)

createMedias () {
  this.mediasElements = document.querySelectorAll('.demo-1__gallery__figure')
  this.medias = Array.from(this.mediasElements).map(element => {
    let media = new Media({
      element,
      geometry: this.planeGeometry,
      gl: this.gl,
      scene: this.scene,
      screen: this.screen,
      viewport: this.viewport
    })
 
    return media
  })
}

As you can see, we’re just selecting all .demo-1__gallery__figure elements, going through them and generating an array of `this.medias` with new instances of Media.

Now it’s important to hook this array into the relevant pieces of our setup code.

Let’s first include all our media inside the method onResize and also call media.onResize for each one of these new instances:

if (this.medias) {
  this.medias.forEach(media => media.onResize({
    screen: this.screen,
    viewport: this.viewport
  }))
}

And inside our update method, we’re going to call media.update() as well:

if (this.medias) {
  this.medias.forEach(media => media.update())
}

Setting up our Media.js file and class

Our Media class is going to use the Mesh, Program and Texture classes from OGL to create a 3D plane and attach a texture to it, which in our case is going to be one of our images.

In our constructor, we’re going to store all variables that we need and that were passed in the new Media() initialization from index.js:

import { Mesh, Program, Texture } from 'ogl'
 
import fragment from 'shaders/fragment.glsl'
import vertex from 'shaders/vertex.glsl'
 
export default class {
  constructor ({ element, geometry, gl, scene, screen, viewport }) {
    this.element = element
    this.image = this.element.querySelector('img')
 
    this.geometry = geometry
    this.gl = gl
    this.scene = scene
    this.screen = screen
    this.viewport = viewport
 
    this.createMesh()
    this.createBounds()
 
    this.onResize()
  }
}

In our createMesh method, we’ll load the image texture using the this.image.src attribute, then create a new Program, which is basically a representation of the material we’re applying to our Mesh. So our method looks like this:

createMesh () {
  const image = new Image()
  const texture = new Texture(this.gl)

  image.src = this.image.src
  image.onload = _ => {
    texture.image = image
  }

  const program = new Program(this.gl, {
    fragment,
    vertex,
    uniforms: {
      tMap: { value: texture },
      uScreenSizes: { value: [0, 0] },
      uImageSizes: { value: [0, 0] }
    },
    transparent: true
  })

  this.plane = new Mesh(this.gl, {
    geometry: this.geometry,
    program
  })

  this.plane.setParent(this.scene)
}

Looks pretty simple, right? After we generate a new Mesh, we’re setting the plane as a child of this.scene, so we’re including our mesh inside our main scene.

As you’ve probably noticed, our Program receives fragment and vertex. These both represent the shaders we’re going to use on our planes. For now, we’re just using simple implementations of both.

In our vertex.glsl file we’re getting the uv and position attributes, and making sure we’re rendering our planes in the right 3D world position.

attribute vec2 uv;
attribute vec3 position;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
varying vec2 vUv;
 
void main() {
  vUv = uv;
 
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

In our fragment.glsl file, we’re receiving a tMap texture, as you can see in the tMap: { value: texture } declaration, and rendering it in our plane geometry:

precision highp float;
 
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  gl_FragColor.rgb = texture2D(tMap, vUv).rgb;
  gl_FragColor.a = 1.0;
}

The createBounds method is important to make sure we’re positioning and scaling our planes according to the positions of the corresponding DOM elements. It’s basically going to call this.element.getBoundingClientRect() to get the position of our element, and then use those values to calculate the 3D values of our plane.

createBounds () {
  this.bounds = this.element.getBoundingClientRect()

  this.updateScale()
  this.updateX()
  this.updateY()
}

updateScale () {
  this.plane.scale.x = this.viewport.width * this.bounds.width / this.screen.width
  this.plane.scale.y = this.viewport.height * this.bounds.height / this.screen.height
}

updateX (x = 0) {
  this.plane.position.x = -(this.viewport.width / 2) + (this.plane.scale.x / 2) + ((this.bounds.left - x) / this.screen.width) * this.viewport.width
}

updateY (y = 0) {
  this.plane.position.y = (this.viewport.height / 2) - (this.plane.scale.y / 2) - ((this.bounds.top - y) / this.screen.height) * this.viewport.height
}

update (y) {
  this.updateScale()
  this.updateX()
  this.updateY(y)
}

As you’ve probably noticed, the calculations for scale.x and scale.y are going to stretch our plane to match the width and height of the <img> elements. And the position.x and position.y calculations take the offset of the element and translate our planes to the correct x and y positions in 3D.

And let’s not forget our onResize method, which is basically going to call createBounds again to refresh our getBoundingClientRect values and make sure we keep our 3D implementation responsive as well.

onResize (sizes) {
  if (sizes) {
    const { screen, viewport } = sizes
 
    if (screen) this.screen = screen
    if (viewport) this.viewport = viewport
  }
 
  this.createBounds()
}

This is the result we’ve got so far.

Implement cover behavior in fragment shaders

As you’ve probably noticed, our images are stretched. This happens because we need to do the proper calculations in the fragment shaders in order to get a behavior like object-fit: cover; or background-size: cover; in WebGL.

My preferred approach is to pass the images’ real sizes and do some ratio calculations inside the fragment shader, so let’s adapt our code to that. In our Program, we’re going to pass two new uniforms called uPlaneSizes and uImageSizes:

const program = new Program(this.gl, {
  fragment,
  vertex,
  uniforms: {
    tMap: { value: texture },
    uPlaneSizes: { value: [0, 0] },
    uImageSizes: { value: [0, 0] }
  },
  transparent: true
})

Now we need to update our fragment.glsl and use these values to calculate our images’ ratios:

precision highp float;
 
uniform vec2 uImageSizes;
uniform vec2 uPlaneSizes;
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  vec2 ratio = vec2(
    min((uPlaneSizes.x / uPlaneSizes.y) / (uImageSizes.x / uImageSizes.y), 1.0),
    min((uPlaneSizes.y / uPlaneSizes.x) / (uImageSizes.y / uImageSizes.x), 1.0)
  );
 
  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );
 
  gl_FragColor.rgb = texture2D(tMap, uv).rgb;
  gl_FragColor.a = 1.0;
}

And then we also need to update our image.onload method to pass naturalWidth and naturalHeight to uImageSizes:

image.onload = _ => {
  program.uniforms.uImageSizes.value = [image.naturalWidth, image.naturalHeight]
  texture.image = image
}

And update createBounds to set the uPlaneSizes uniform:

createBounds () {
  this.bounds = this.element.getBoundingClientRect()
 
  this.updateScale()
  this.updateX()
  this.updateY()
 
  this.plane.program.uniforms.uPlaneSizes.value = [this.plane.scale.x, this.plane.scale.y]
}

That’s it! Now we have properly scaled images.

Implementing smooth scrolling

Before we implement our infinite logic, it’s good to start making scrolling work properly. In our setup code, we have included a onWheel method, which is going to be used to lerp some variables and make our scroll butter smooth.

In our constructor from index.js, let’s create the this.scroll object with these variables:

this.scroll = {
  ease: 0.05,
  current: 0,
  target: 0,
}

Now let’s update our onWheel implementation. When working with wheel events, it’s always important to normalize them because they behave differently depending on the browser. I’ve been using the normalize-wheel library to help with that:

import NormalizeWheel from 'normalize-wheel'

onWheel (event) {
  const normalized = NormalizeWheel(event)
  const speed = normalized.pixelY
 
  this.scroll.target += speed * 0.5
}

Let’s also create our lerp utility function inside the file utils/math.js:

export function lerp (p1, p2, t) {
  return p1 + (p2 - p1) * t
}

And now we just need to lerp from this.scroll.current to this.scroll.target inside the update method, and finally pass it to the media.update() calls:

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll.current))
  }
}

After that we already have a result like this.

Making our smooth scrolling infinite

The approach to making infinite scrolling logic is basically repeating the same grid over and over while the user keeps scrolling your page. Since the user can scroll up or down, you also need to take into consideration the direction of the scroll, so overall the algorithm should work this way:

  • If you’re scrolling down, your elements move up — when your first element isn’t on the screen anymore, you should move it to the end of the list.
  • If you’re scrolling up, your elements move down — when your last element isn’t on the screen anymore, you should move it to the start of the list.

To explain it in a visual way, let’s say we’re scrolling down and the red area is our viewport and the blue elements are not in the viewport anymore.

When we are in this state, we just need to move the blue elements to the end of our gallery grid, which is the entire height of our gallery: 295rem.

Let’s include the logic for it then. First, we need to create a new variable called this.scroll.last to store the last value of our scroll, this is going to be checked to give us up or down strings:

this.scroll = {
  ease: 0.05,
  current: 0,
  target: 0,
  last: 0
}

In our update method, we need to include the following lines of validations and pass this.direction to our this.medias elements.

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.scroll.current > this.scroll.last) {
    this.direction = 'down'
  } else if (this.scroll.current < this.scroll.last) {
    this.direction = 'up'
  }
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll.current, this.direction))
  }
 
  this.renderer.render({
    scene: this.scene,
    camera: this.camera
  })
 
  this.scroll.last = this.scroll.current
 
  window.requestAnimationFrame(this.update.bind(this))
}

Then we need to get the total gallery height and transform it to 3D dimensions, so let’s include a querySelector of .demo-1__gallery and call the createGallery method in our index.js constructor.

createGallery () {
  this.gallery = document.querySelector('.demo-1__gallery')
}

It’s time to do the real calculations using this selector, so in our onResize method, we need to include the following lines:

this.galleryBounds = this.gallery.getBoundingClientRect()
this.galleryHeight = this.viewport.height * this.galleryBounds.height / this.screen.height

The this.galleryHeight variable is now storing the 3D size of the entire grid. Now we need to pass it to both the onResize and new Media() calls:

if (this.medias) {
  this.medias.forEach(media => media.onResize({
    height: this.galleryHeight,
    screen: this.screen,
    viewport: this.viewport
  }))
}
this.medias = Array.from(this.mediasElements).map(element => {
  let media = new Media({
    element,
    geometry: this.planeGeometry,
    gl: this.gl,
    height: this.galleryHeight,
    scene: this.scene,
    screen: this.screen,
    viewport: this.viewport
  })
 
  return media
})

And then inside our Media class, we need to store the height as well in the constructor and also in the onResize methods:

constructor ({ element, geometry, gl, height, scene, screen, viewport }) {
  this.height = height
}
onResize (sizes) {
  if (sizes) {
    const { height, screen, viewport } = sizes
 
    if (height) this.height = height
    if (screen) this.screen = screen
    if (viewport) this.viewport = viewport
  }
}

Now we’re going to include the logic to move our elements based on their viewport position, just like our visual representation of the red and blue rectangles.

Since the idea is to keep summing up a value based on the scroll and element positions, we can achieve this by creating a new variable called this.extra = 0. It’s going to store how much we need to add to (or subtract from) our media’s y position, so let’s include it in our constructor:

constructor ({ element, geometry, gl, height, scene, screen, viewport }) {
    this.extra = 0
}

And let’s reset it when the browser is resized, to keep all values consistent so it doesn’t break when users resize their viewport:

onResize (sizes) {
  this.extra = 0
}

And in our updateY method, we’re going to include it as well:

updateY (y = 0) {
  this.plane.position.y = ((this.viewport.height / 2) - (this.plane.scale.y / 2) - ((this.bounds.top - y) / this.screen.height) * this.viewport.height) - this.extra
}

Finally, the only thing left now is updating the this.extra variable inside our update method, making sure we’re adding or subtracting the this.height depending on the direction.

const planeOffset = this.plane.scale.y / 2
const viewportOffset = this.viewport.height / 2
 
this.isBefore = this.plane.position.y + planeOffset < -viewportOffset
this.isAfter = this.plane.position.y - planeOffset > viewportOffset
 
if (direction === 'up' && this.isBefore) {
  this.extra -= this.height
 
  this.isBefore = false
  this.isAfter = false
}
 
if (direction === 'down' && this.isAfter) {
  this.extra += this.height
 
  this.isBefore = false
  this.isAfter = false
}

Since we’re working in 3D space, we’re dealing with cartesian coordinates, which is why we’re dividing most things by two (e.g. this.viewport.height / 2). That’s also the reason why we needed different logic for the this.isBefore and this.isAfter checks.

Awesome, we’re almost finished with our demo! That’s how it looks now. Pretty cool to have it endless, right?

Including touch events

Let’s also include touch events, so this demo can be more responsive to user interactions! In our addEventListeners method, let’s include some window.addEventListener calls:

window.addEventListener('mousedown', this.onTouchDown.bind(this))
window.addEventListener('mousemove', this.onTouchMove.bind(this))
window.addEventListener('mouseup', this.onTouchUp.bind(this))
 
window.addEventListener('touchstart', this.onTouchDown.bind(this))
window.addEventListener('touchmove', this.onTouchMove.bind(this))
window.addEventListener('touchend', this.onTouchUp.bind(this))

Then we just need to implement simple touch events calculations, including the three methods: onTouchDown, onTouchMove and onTouchUp.

onTouchDown (event) {
  this.isDown = true

  this.scroll.position = this.scroll.current
  this.start = event.touches ? event.touches[0].clientY : event.clientY
}

onTouchMove (event) {
  if (!this.isDown) return

  const y = event.touches ? event.touches[0].clientY : event.clientY
  const distance = (this.start - y) * 2

  this.scroll.target = this.scroll.position + distance
}

onTouchUp (event) {
  this.isDown = false
}

Done! Now we also have touch events support enabled for our gallery.

Implementing direction-aware auto scrolling

Let’s also implement auto scrolling to make our interaction even better. In order to achieve that we just need to create a new variable that will store our speed based on the direction the user is scrolling.

So let’s create a variable called this.speed in our index.js file:

constructor () {
  this.speed = 2
}

This variable is going to be changed by the down and up validations we have in our update loop: if the user is scrolling down, we’re going to keep the speed at 2; if the user is scrolling up, we’re going to replace it with -2. And at the start of each frame, we add this.speed to the this.scroll.target variable:

update () {
  this.scroll.target += this.speed

  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)

  if (this.scroll.current > this.scroll.last) {
    this.direction = 'down'
    this.speed = 2
  } else if (this.scroll.current < this.scroll.last) {
    this.direction = 'up'
    this.speed = -2
  }
}

Implementing distortion shaders

Now let’s make everything even more interesting: it’s time to play a little bit with shaders and distort our planes while the user is scrolling through our page.

First, let’s update our update method from index.js, making sure we expose both current and last scroll values to all our medias, we’re going to do a simple calculation with them.

update () {
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll, this.direction))
  }
}

And now let’s create two new uniforms for our Program: uStrength and uViewportSizes, and pass them in:

const program = new Program(this.gl, {
  fragment,
  vertex,
  uniforms: {
    tMap: { value: texture },
    uPlaneSizes: { value: [0, 0] },
    uImageSizes: { value: [0, 0] },
    uViewportSizes: { value: [this.viewport.width, this.viewport.height] },
    uStrength: { value: 0 }
  },
  transparent: true
})

As you can probably notice, we need to set uViewportSizes in our onResize method as well, since this.viewport changes when we resize. So to keep this.viewport.width and this.viewport.height up to date, we also need to include the following lines of code in onResize:

onResize (sizes) {
  if (sizes) {
    const { height, screen, viewport } = sizes
 
    if (height) this.height = height
    if (screen) this.screen = screen
    if (viewport) {
      this.viewport = viewport
 
      this.plane.program.uniforms.uViewportSizes.value = [this.viewport.width, this.viewport.height]
    }
  }
}

Remember the this.scroll update we’ve made from index.js? Now it’s time to include a small trick to generate a speed value inside our Media.js:

update (y, direction) {
  this.updateY(y.current)
 
  this.plane.program.uniforms.uStrength.value = ((y.current - y.last) / this.screen.width) * 10
}

We’re basically checking the difference between the current and last values, which gives us a kind of "speed" for the scrolling, and dividing it by this.screen.width to keep our effect behaving consistently regardless of the width of the screen.

Finally now it’s time to play a little bit with our vertex shader. We’re going to bend our planes a little bit while the user is scrolling through the page. So let’s update our vertex.glsl file with this new code:

#define PI 3.1415926535897932384626433832795
 
precision highp float;
precision highp int;
 
attribute vec3 position;
attribute vec2 uv;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
uniform float uStrength;
uniform vec2 uViewportSizes;
 
varying vec2 vUv;
 
void main() {
  vec4 newPosition = modelViewMatrix * vec4(position, 1.0);
 
  newPosition.z += sin(newPosition.y / uViewportSizes.y * PI + PI / 2.0) * -uStrength;
 
  vUv = uv;
 
  gl_Position = projectionMatrix * newPosition;
}

That’s it! Now we’re also bending our images, creating a unique type of effect!

Explaining a little bit of the shader logic: basically what the newPosition.z line does is take uViewportSizes.y, which is the height of our viewport, and the current position.y of our plane, divide one by the other, and multiply the result by the PI constant we defined at the very top of our shader file. Then we multiply it by uStrength, the strength of the bending, which is tied to our scrolling values, so the planes bend based on how fast you scroll through the demo.

That’s the final result of our demo! I hope this tutorial was useful to you and don’t forget to comment if you have any questions!

Photography used in the demos by Planete Elevene and Jayson Hinrichsen.

The post Creating an Infinite Auto-Scrolling Gallery using WebGL with OGL and GLSL Shaders appeared first on Codrops.

Coding a 3D Lines Animation with Three.js

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!

In this live stream of ALL YOUR HTML, we will be recreating the cool 3D lines animation seen on the Voice of Racism website by Assembly using Three.jsDepthTexture.

This coding session was streamed live on December 13, 2020.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Coding a 3D Lines Animation with Three.js appeared first on Codrops.

Coding a Simple Raymarching Scene with Three.js

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!

In this live stream of ALL YOUR HTML, we will be coding a Raymarching demo in Three.js from scratch. We’ll be using some cool Matcap textures, and add a wobbly animation to the scene, and all will be done as math functions. The scene is inspired by this demo made by Luigi De Rosa.

This coding session was streamed live on December 6, 2020.

Check out the live demo.

The Art of Code channel: https://www.youtube.com/c/TheArtofCodeIsCool/

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Coding a Simple Raymarching Scene with Three.js appeared first on Codrops.

Building a Svelte Static Website with Smooth Page Transitions

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!

In this live stream of ALL YOUR HTML, you’ll learn how image transitions with GLSL and Three.js work and how to build a static website with Svelte.js that will be using a third party API. Finally, we’ll code some smooth page transitions using GSAP and Three.js.

This coding session was streamed live on November 29, 2020.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Building a Svelte Static Website with Smooth Page Transitions appeared first on Codrops.

Replicating the Icosahedron from Rogier de Boevé’s Website

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!

In this live stream of ALL YOUR HTML, we’ll be replicating the beautiful icosahedron animation from Rogier de Boevé’s website. We’ll be using Three.js and GLSL to make things cool, and also some postprocessing.

This coding session was streamed live on November 22, 2020.

This is what we’ll be learning to code:

Original website: https://rogierdeboeve.com/

Developer’s Twitter: https://twitter.com/rogierdeboeve

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Replicating the Icosahedron from Rogier de Boevé’s Website appeared first on Codrops.

Creating WebGL Effects with CurtainsJS

This article focuses on adding WebGL effects to <img> and <video> elements of an already “completed” web page. While there are a few helpful resources out there on this subject (like these two), I hope to help simplify it by distilling the process into a few steps:

  • Create a web page as you normally would.
  • Render pieces that you want to add WebGL effects to with WebGL.
  • Create (or find) the WebGL effects to use.
  • Add event listeners to connect your page with the WebGL effects.

Specifically, we’ll focus on the connection between regular web pages and WebGL. What are we going to make? How about a draggable image slider with an interactive mouse hover effect!

We won’t cover the core functionality of the slider or go very far into the technical details of WebGL or GLSL shaders. However, there are plenty of comments in the demo code and links to outside resources if you’d like to learn more.

We’re using the latest version of WebGL (WebGL2) and GLSL (GLSL 300) which currently do not work in Safari or in Internet Explorer. So, use Firefox or Chrome to view the demos. If you’re planning to use any of what we’re covering in production, you should load both the GLSL 100 and 300 versions of the shaders and use the GLSL 300 version only if curtains.renderer._isWebGL2 is true. I cover this in the demo above.
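Here’s a rough sketch of that check (a sketch only – vertex300, fragment300, vertex100 and fragment100 are placeholder strings holding the two versions of each shader, and passing the shader sources directly via vertexShader / fragmentShader instead of the script tag IDs is an assumption based on Curtains’ documented options):

// Pick the GLSL 300 shaders only when the renderer actually got a WebGL2 context,
// otherwise fall back to the GLSL 100 versions.
const isWebGL2 = curtains.renderer._isWebGL2;

const params = {
  vertexShader: isWebGL2 ? vertex300 : vertex100,
  fragmentShader: isWebGL2 ? fragment300 : fragment100,
  uniforms: {
    // ...
  }
};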

First, create a web page as you normally would

You know, HTML and CSS and whatnot. In this case, we’re making an image slider but that’s just for demonstration. We’re not going to go full-depth on how to make a slider (Robin has a nice post on that). But here’s what I put together:

  1. Each slide is equal to the full width of the page.
  2. After a slide has been dragged, the slider continues to slide in the direction of the drag and gradually slow down with momentum.
  3. The momentum snaps the slider to the nearest slide at the end point. 
  4. Each slide has an exit animation that’s fired when the drag starts and an enter animation that’s fired when the dragging stops.
  5. When hovering the slider, a hover effect is applied similar to this video.

I’m a huge fan of the GreenSock Animation Platform (GSAP). It’s especially useful for us here because it provides a plugin for dragging, one that enables momentum on drag, and one for splitting text by line. If you’re uncomfortable creating sliders with GSAP, I recommend spending some time getting familiar with the code in the demo above.

Again, this is just for demonstration, but I wanted to at least describe the component a bit. These are the DOM elements that we will keep our WebGL synced with. 

Next, use WebGL to render the pieces that will contain WebGL effects

Now we need to render our images in WebGL. To do that we need to:

  1. Load the image as a texture into a GLSL shader.
  2. Create a WebGL plane for the image and correctly apply the image texture to the plane.
  3. Position the plane where the DOM version of the image is and scale it correctly.

The third step is particularly non-trivial using pure WebGL because we need to track the position of the DOM elements we want to port into the WebGL world while keeping the DOM and WebGL parts in sync during scroll and user interactions.

There’s actually a library that helps us do all of this with ease: CurtainsJS! It’s the only library I’ve found that easily creates WebGL versions of DOM images and videos and syncs them without too many other features (but I’d love to be proven wrong on that point, so please leave a comment if you know of others that do this well).

With Curtains, this is all the JavaScript we need to add:

// Create a new curtains instance
const curtains = new Curtains({ container: "canvas", autoRender: false });
// Use a single rAF for both GSAP and Curtains
function renderScene() {
  curtains.render();
}
gsap.ticker.add(renderScene);
// Params passed to the curtains instance
const params = {
  vertexShaderID: "slider-planes-vs", // The vertex shader we want to use
  fragmentShaderID: "slider-planes-fs", // The fragment shader we want to use
  
 // Include any variables to update the WebGL state here
  uniforms: {
    // ...
  }
};
// Create a curtains plane for each slide
const planeElements = document.querySelectorAll(".slide");
planeElements.forEach((planeEl, i) => {
  const plane = curtains.addPlane(planeEl, params);
  // const plane = new Plane(curtains, planeEl, params); // v7 version
  // If our plane has been successfully created
  if(plane) {
    // onReady is called once our plane is ready and all its texture have been created
    plane.onReady(function() {
      // Add a "loaded" class to display the image container
      plane.htmlElement.closest(".slide").classList.add("loaded");
    });
  }
});

We also need to update our updateProgress function so that it updates our WebGL planes.

function updateProgress() {
  // Update the actual slider
  animation.progress(wrapVal(this.x) / wrapWidth);
  
  // Update the WebGL slider planes
  planes.forEach(plane => plane.updatePosition());
}

We also need to add a very basic vertex and fragment shader to display the texture that we’re loading. We can do that by loading them via <script> tags, like I do in the demo, or by using backticks as I show in the final demo.  
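If you go the backticks route, a bare-bones pass-through pair could look roughly like the sketch below. The attribute and uniform names (aVertexPosition, aTextureCoord, uMVMatrix, uPMatrix, uTextureMatrix0, uSampler0) follow CurtainsJS’ default conventions, and this sketch uses the older GLSL 100 syntax for brevity – the demo itself ships GLSL 300 variants as described above:

// Minimal pass-through shaders written as template literals (a sketch, not the demo code)
const basicVs = `
  precision mediump float;

  attribute vec3 aVertexPosition;
  attribute vec2 aTextureCoord;

  uniform mat4 uMVMatrix;
  uniform mat4 uPMatrix;
  uniform mat4 uTextureMatrix0; // texture matrix provided by Curtains

  varying vec2 vTextureCoord;

  void main() {
    vTextureCoord = (uTextureMatrix0 * vec4(aTextureCoord, 0.0, 1.0)).xy;
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
  }
`;

const basicFs = `
  precision mediump float;

  varying vec2 vTextureCoord;

  uniform sampler2D uSampler0; // the plane's image texture

  void main() {
    gl_FragColor = texture2D(uSampler0, vTextureCoord);
  }
`;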

Again, this article will not go into a lot of detail on the technical aspects of these GLSL shaders. I recommend reading The Book of Shaders and the WebGL topic on Codrops as starting points.

If you don’t know much about shaders, it’s sufficient to say that the vertex shader positions the planes and the fragment shader processes the texture’s pixels. There are also three variable prefixes that I want to point out:

  • ins are passed in from a data buffer. In vertex shaders, they come from the CPU (our program). In fragment shaders, they come from the vertex shader.
  • uniforms are passed in from the CPU (our program).
  • outs are outputs from our shaders. In vertex shaders, they are passed into our fragment shader. In fragment shaders, they are passed to the frame buffer (what is drawn to the screen).

Once we’ve added all of that to our project, we have the same exact thing as before, but our slider is now being displayed via WebGL! Neat.

CurtainsJS easily converts images and videos to WebGL. As far as adding WebGL effects to text goes, there are several different methods, but perhaps the most common is to draw the text to a <canvas> and then use it as a texture in the shader (e.g. 1, 2). It’s possible to render most other HTML using html2canvas (or similar) and use the resulting canvas as a texture in the shader; however, this is not very performant.
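As a quick, hedged sketch of the canvas-text idea (the element, font and sizing here are made up, and plane.loadCanvas() is Curtains’ canvas-loading method – check the docs of the version you’re using for the exact signature):

// Rasterize a text element to a 2D canvas and hand that canvas to a plane as a texture
const textEl = document.querySelector('.my-text'); // hypothetical element
const textCanvas = document.createElement('canvas');
const ctx = textCanvas.getContext('2d');

textCanvas.width = textEl.offsetWidth;
textCanvas.height = textEl.offsetHeight;

ctx.fillStyle = '#ffffff';
ctx.font = '48px sans-serif';
ctx.textBaseline = 'middle';
ctx.fillText(textEl.textContent, 0, textCanvas.height / 2);

// The plane now samples the rasterized text instead of an <img> source
plane.loadCanvas(textCanvas);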

Create (or find) the WebGL effects to use

Now we can add WebGL effects since we have our slider rendering with WebGL. Let’s break down the effects seen in our inspiration video:

  1. The image colors are inverted.
  2. There is a radius around the mouse position that shows the normal color and creates a fisheye effect.
  3. The radius around the mouse animates from 0 when the slider is hovered and animates back to 0 when it is no longer hovered.
  4. The radius doesn’t jump to the mouse’s position but animates there over time.
  5. The entire image translates based on the mouse’s position in reference to the center of the image.

When creating WebGL effects, it’s important to remember that shaders don’t have a memory state that exists between frames. It can do something based on where the mouse is at a given time, but it can’t do something based on where the mouse has been all by itself. That’s why for certain effects, like animating the radius once the mouse has entered the slider or animating the position of the radius over time, we should use a JavaScript variable and pass that value to each frame of the slider. We’ll talk more about that process in the next section.

Once we modify our shaders to invert the color outside of the radius and create the fisheye effect inside of the radius, we’ll get something like the demo below. Again, the point of this article is to focus on the connection between DOM elements and WebGL so I won’t go into detail about the shaders, but I did add comments to them.

But that’s not too exciting yet because the radius is not reacting to our mouse. That’s what we’ll cover in the next section.

I haven’t found a repository with a lot of pre-made WebGL shaders to use for regular websites. There’s ShaderToy and VertexShaderArt (which have some truly amazing shaders!), but neither is aimed at the type of effects that fit on most websites. I’d really like to see someone create a repository of WebGL shaders as a resource for people working on everyday sites. If you know of one, please let me know.

Add event listeners to connect your page with the WebGL effects

Now we can add interactivity to the WebGL portion! We need to pass in some variables (uniforms) to our shaders and affect those variables when the user interacts with our elements. This is the section where I’ll go into the most detail because it’s the core for how we connect JavaScript to our shaders.

First, we need to declare some uniforms in our shaders. We only need the mouse position in our vertex shader:

// The un-transformed mouse position
uniform vec2 uMouse;

We need to declare the radius and resolution in our fragment shader:

uniform float uRadius; // Radius of pixels to warp/invert
uniform vec2 uResolution; // Used in anti-aliasing

Then let’s add some values for these inside of the parameters we pass into our Curtains instance. We were already doing this for uResolution! We need to specify the name of the variable in the shader, its type, and then the starting value:

const params = {
  vertexShaderID: "slider-planes-vs", // The vertex shader we want to use
  fragmentShaderID: "slider-planes-fs", // The fragment shader we want to use
  
  // The variables that we're going to be animating to update our WebGL state
  uniforms: {
    // For the cursor effects
    mouse: { 
      name: "uMouse", // The shader variable name
      type: "2f",     // The type for the variable - https://webglfundamentals.org/webgl/lessons/webgl-shaders-and-glsl.html
      value: mouse    // The initial value to use
    },
    radius: { 
      name: "uRadius",
      type: "1f",
      value: radius.val
    },
    
    // For the antialiasing
    resolution: { 
      name: "uResolution",
      type: "2f", 
      value: [innerWidth, innerHeight] 
    }
  },
};

Now the shader uniforms are connected to our JavaScript! At this point, we need to create some event listeners and animations to affect the values that we’re passing into the shaders. First, let’s set up the animation for the radius and the function to update the value we pass into our shader:

const radius = { val: 0.1 };
const radiusAnim = gsap.from(radius, { 
  val: 0, 
  duration: 0.3, 
  paused: true,
  onUpdate: updateRadius
});
function updateRadius() {
  planes.forEach((plane, i) => {
    plane.uniforms.radius.value = radius.val;
  });
}

If we play the radius animation, then our shader will use the new value each tick.

We also need to update the mouse position when it’s over our slider for both mouse devices and touch screens. There’s a lot of code here, but you can walk through it pretty linearly. Take your time and process what’s happening.

const mouse = new Vec2(0, 0);
function addMouseListeners() {
  if ("ontouchstart" in window) {
    wrapper.addEventListener("touchstart", updateMouse, false);
    wrapper.addEventListener("touchmove", updateMouse, false);
    wrapper.addEventListener("blur", mouseOut, false);
  } else {
    wrapper.addEventListener("mousemove", updateMouse, false);
    wrapper.addEventListener("mouseleave", mouseOut, false);
  }
}


// Update the stored mouse position along with WebGL "mouse"
function updateMouse(e) {
  radiusAnim.play();
  
  if (e.changedTouches && e.changedTouches.length) {
    e.x = e.changedTouches[0].pageX;
    e.y = e.changedTouches[0].pageY;
  }
  if (e.x === undefined) {
    e.x = e.pageX;
    e.y = e.pageY;
  }
  
  mouse.x = e.x;
  mouse.y = e.y;
  
  updateWebGLMouse();
}


// Updates the mouse position for all planes
function updateWebGLMouse(dur) {
  // update the planes mouse position uniforms
  planes.forEach((plane, i) => {
    const webglMousePos = plane.mouseToPlaneCoords(mouse);
    updatePlaneMouse(plane, webglMousePos, dur);
  });
}


// Updates the mouse position for the given plane
function updatePlaneMouse(plane, endPos = new Vec2(0, 0), dur = 0.1) {
  gsap.to(plane.uniforms.mouse.value, {
    x: endPos.x,
    y: endPos.y,
    duration: dur,
    overwrite: true,
  });
}


// When the mouse leaves the slider, animate the WebGL "mouse" to the center of slider
function mouseOut(e) {
  planes.forEach((plane, i) => updatePlaneMouse(plane, new Vec2(0, 0), 1) );
  
  radiusAnim.reverse();
}

We should also modify our existing updateProgress function to keep our WebGL mouse synced.

// Update the slider along with the necessary WebGL variables
function updateProgress() {
  // Update the actual slider
  animation.progress(wrapVal(this.x) / wrapWidth);
  
  // Update the WebGL slider planes
  planes.forEach(plane => plane.updatePosition());
  
  // Update the WebGL "mouse"
  updateWebGLMouse(0);
}

Now we’re cooking with fire! Our slider now meets all of our requirements.

Two additional benefits of using GSAP for your animations is that it provides access to callbacks, like onComplete, and GSAP keeps everything perfectly synced no matter the refresh rate (e.g. this situation).

You take it from here!

This is, of course, just the tip of the iceberg when it comes to what we can do with the slider now that it is in WebGL. For example, common effects like turbulence and displacement can be added to the images in WebGL. The core concept of a displacement effect is to move pixels around based on a gradient lightmap that we use as an input source. We can use this texture (that I pulled from this displacement demo by Jesper Landberg — you should give him a follow) as our source and then plug it into our shader.
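To sketch what that sampling could look like in the fragment shader (the uniform names uDisplacement and uDisplacementPower are made up for illustration – in the real demo they’d be wired up through the Curtains uniforms object just like uMouse and uRadius):

const displacedFs = `
  precision mediump float;

  varying vec2 vTextureCoord;

  uniform sampler2D uSampler0;      // the slide image
  uniform sampler2D uDisplacement;  // the gradient lightmap (hypothetical name)
  uniform float uDisplacementPower; // animated from JS, e.g. tied to drag velocity

  void main() {
    // Read the lightmap and use its red channel to push the texture look-up around
    float shift = texture2D(uDisplacement, vTextureCoord).r;
    vec2 displacedUv = vTextureCoord + vec2(shift - 0.5) * uDisplacementPower;

    gl_FragColor = texture2D(uSampler0, displacedUv);
  }
`;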

To learn more about creating textures like these, see this article, this tweet, and this tool. I am not aware of any existing repositories of images like these, but if you know of one, please let me know.

If we hook up the texture above and animate the displacement power and intensity so that they vary over time and based on our drag velocity, then it will create a nice semi-random, but natural-looking displacement effect:

It’s also worth noting that Curtains has its own React version if that’s how you like to roll.

That’s all I’ve got for now. If you create something using what you’ve learned from this article, I’d love to see it! Connect with me via Twitter.


The post Creating WebGL Effects with CurtainsJS appeared first on CSS-Tricks.
