Magical Marbles in Three.js

In April 2019, Harry Alisavakis made a great write-up about the “magical marbles” effect he had previously shared on Twitter. Check that out first to get a high-level overview of the effect we’re after (while you’re at it, you should see some of his other excellent shader posts).

While his write-up provided a brief summary of the technique, the purpose of this tutorial is to offer a more concrete look at how you could implement it in Three.js to run on the web. There are also some tweaks to the technique here and there that try to make things more straightforward.

⚠ This tutorial assumes intermediate familiarity with Three.js and GLSL

Overview

You should read Harry’s post first because he provides helpful visuals, but the gist of it is this:

  • Add fake depth to a material by offsetting the texture look-ups based on camera direction
  • Instead of using the same texture at each iteration, let’s use depth-wise “slices” of a heightmap so that the shape of our volume is more dynamic
  • Add wavy motion by displacing the texture look-ups with scrolling noise

There were a couple parts of this write-up that weren’t totally clear to me, likely due to the difference in features available in Unity vs Three.js. One is the jump from parallax mapping on a plane to a sphere. Another is how to get vertex tangents for the transformation to tangent space. Finally, I wasn’t sure if the noise for the heightmap was evaluated as code inside the shader or pre-rendered. After some experimentation I came to my own conclusions for these, but I encourage you to come up with your own variations of this technique 🙂

Here’s the Pen I’ll be starting from; it sets up a boilerplate Three.js app with an init and tick lifecycle, color management, and an environment map from Poly Haven for lighting.

See the Pen
by Matt (@mattrossman)
on CodePen.

Step 1: A Blank Marble

Marbles are made of glass, and Harry’s marbles definitely showed some specular shine. In order to make a truly beautiful glassy material it would take some pretty complex PBR shader code, which is too much work! Instead, let’s just take one of Three.js’s built-in PBR materials and hook our magical bits into that, like the shader parasite we are.

Enter onBeforeCompile, a callback property of the THREE.Material base class that lets you apply patches to built-in shaders before they get compiled by WebGL. This technique is very hacky and not well explained in the official docs, but a good place to learn more about it is Dusan Bosnjak’s post “Extending three.js materials with GLSL”. The hardest part about it is determining which part of the shaders you need to change exactly. Unfortunately, your best bet is to just read through the source code of the shader you want to modify, find a line or chunk that looks vaguely relevant, and try tweaking stuff until the property you want to modify shows visible changes. I’ve been writing personal notes of what I discover since it’s really hard to keep track of what the different chunks and variables do.

ℹ I recently discovered there’s a much more elegant way to extend the built-in materials using Three’s experimental Node Materials, but that deserves a whole tutorial of its own, so for this guide I’ll stick with the more common onBeforeCompile approach.

For our purposes, MeshStandardMaterial is a good base to start from. It has specular and environment reflections that will make our material look very glassy, plus it gives you the option to add a normal map later on if you want to add scratches onto the surface. The only part we want to change is the base color on which the lighting is applied. Luckily, this is easy to find. The fragment shader for MeshStandardMaterial is defined in meshphysical_frag.glsl.js (it’s a subset of MeshPhysicalMaterial, so they are both defined in the same file). Oftentimes you need to go digging through the shader chunks represented by each of the #include statements you’ll see in the file; however, this is a rare occasion where the variable we want to tweak is in plain sight.

It’s the line right near the top of the main() function that says:

vec4 diffuseColor = vec4( diffuse, opacity );

This line normally reads from the diffuse and opacity uniforms which you set via the .color and .opacity JavaScript properties of the material, and then all the chunks after that do the complicated lighting work. We are going to replace this line with our own assignment to diffuseColor so we can apply whatever pattern we want on the marble’s surface. You can do this using regular JavaScript string methods on the .fragmentShader field of the shader provided to the onBeforeCompile callback.

material.onBeforeCompile = shader => {
  shader.fragmentShader = shader.fragmentShader.replace(/vec4 diffuseColor.*;/, `
    // Assign whatever you want!
    vec4 diffuseColor = vec4(1., 0., 0., 1.);
  `)
}

By the way, the type definition for that mysterious callback argument is available here.

In the following Pen I swapped our geometry for a sphere, lowered the roughness, and filled the diffuseColor with the view-space normals, which are available in the standard fragment shader as vNormal. The result looks like a shiny version of MeshNormalMaterial.
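
For reference, here’s a minimal sketch of that patch (my own wording of the replacement; vNormal comes from the stock shader):

material.onBeforeCompile = shader => {
  shader.fragmentShader = shader.fragmentShader.replace(/vec4 diffuseColor.*;/, `
    // Remap the interpolated normal from [-1, 1] into [0, 1] and use it as the base color
    vec4 diffuseColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
  `)
}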

See the Pen
by Matt (@mattrossman)
on CodePen.

Step 2: Fake Volume

Now comes the harder part — using the diffuse color to create the illusion of volume inside our marble. In Harry’s earlier parallax post, he talks about finding the camera direction in tangent space and using this to offset the UV coordinates. There’s a great explanation of how this general principle works for parallax effects on learnopengl.com and in this archived post.

However, converting stuff into tangent space in Three.js can be tricky. To the best of my knowledge, there’s not a built-in utility to help with this like there are for other space transformations, so it takes some legwork to both generate vertex tangents and then assemble a TBN matrix to perform the transformation. On top of that, spheres are not a nice shape for tangents due to the hairy ball theorem (yes, that’s a real thing), and Three’s computeTangents() function was producing discontinuities for me so you basically have to compute tangents manually. Yuck!

Luckily, we don’t really need to use tangent space if we frame this as a 3D raymarching problem. We have a ray pointing from the camera to the surface of our marble, and we want to march this through the sphere volume as well as down the slices of our height map. We just need to know how to convert a point in 3D space into a point on the surface of our sphere so we can perform texture lookups. In theory you could also just plug the 3D position right into your noise function of choice and skip using the texture, but this effect relies on lots of iterations and I’m operating under the assumption that a texture lookup is cheaper than all the number crunching happening in e.g. the 3D simplex noise function (shader gurus, please correct me if I’m wrong). The other benefit of reading from a texture is that it allows us to use a more art-oriented pipeline to craft our heightmaps, so we can make all sorts of interesting volumes without writing new code.
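
Concretely, the ray setup inside the patched fragment shader can be as small as this (a sketch: vWorldPos is a varying we’d have to pass along ourselves, cameraPosition is a built-in Three.js uniform, and marchMarble is the raymarching function we’ll write below):

vec3 rayDir = normalize(vWorldPos - cameraPosition); // camera-to-surface direction
vec3 rayOrigin = vWorldPos;                          // start marching at the surface
vec4 diffuseColor = vec4(marchMarble(rayOrigin, rayDir), 1.0);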

Originally I wrote a function to do this spherical XYZ→UV conversion based on some answers I saw online, but it turns out there’s already a function that does the same thing inside of common.glsl.js called equirectUv. We can reuse that as long as we put our raymarching logic after the #include <common> line in the standard shader.
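
For reference, equirectUv takes a normalized direction vector and returns equirectangular UV coordinates; in common.glsl.js it looks roughly like this:

vec2 equirectUv( in vec3 dir ) {
  // dir is assumed to be unit length
  float u = atan( dir.z, dir.x ) * RECIPROCAL_PI2 + 0.5;
  float v = asin( clamp( dir.y, -1.0, 1.0 ) ) * RECIPROCAL_PI + 0.5;
  return vec2( u, v );
}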

Creating our heightmap

For the heightmap, we want a texture that seamlessly projects on the surface of a UV sphere. It’s not hard to find seamless noise textures online, but the problem is that these flat projections of noise will look warped near the poles when applied to a sphere. To solve this, let’s craft our own texture using Blender. One way to do this is to bend a high resolution “Grid” mesh into a sphere using two instances of the “Simple Deform modifier”, plug the resulting “Object” texture coordinates into your procedural shader of choice, and then do an emissive bake with the Cycles renderer. I also added some loop cuts near the poles and a subdivision modifier to prevent any artifacts in the bake.

The resulting bake looks something like this:

Raymarching

Now the moment we’ve been waiting for (or dreading) — raymarching! It’s actually not so bad; the following is an abbreviated version of the code. For now there’s no animation; I’m just taking slices of the heightmap using smoothstep (note the smoothing factor, which helps hide the sharp edges between layers), adding them up, and then using this to mix two colors.

uniform sampler2D heightMap;
uniform vec3 colorA;
uniform vec3 colorB;
uniform int iterations;
uniform float depth;
uniform float smoothing;

/**
 * @param rayOrigin - Point on sphere
 * @param rayDir - Normalized ray direction
 * @returns Diffuse RGB color
 */
vec3 marchMarble(vec3 rayOrigin, vec3 rayDir) {
  float perIteration = 1. / float(iterations);
  vec3 deltaRay = rayDir * perIteration * depth;

  // Start at point of intersection and accumulate volume
  vec3 p = rayOrigin;
  float totalVolume = 0.;

  for (int i=0; i<iterations; ++i) {
    // Read heightmap from current spherical direction
    vec2 uv = equirectUv(normalize(p));
    float heightMapVal = texture(heightMap, uv).r;

    // Take a slice of the heightmap
    float height = length(p); // 1 at surface, 0 at core, assuming radius = 1 (unused below; we slice by iteration index instead, see the note after this snippet)
    float cutoff = 1. - float(i) * perIteration;
    float slice = smoothstep(cutoff, cutoff + smoothing, heightMapVal);

    // Accumulate the volume and advance the ray forward one step
    totalVolume += slice * perIteration;
    p += deltaRay;
  }
  return mix(colorA, colorB, totalVolume);
}

/**
 * We can use this later like:
 *
 * vec4 diffuseColor = vec4(marchMarble(rayOrigin, rayDir), 1.0);
 */

ℹ This logic isn’t really physically accurate — taking slices of the heightmap based on the iteration index assumes that the ray is pointing towards the center of the sphere, but this isn’t true for most of the pixels. As a result, the marble appears to have some heavy refraction. However, I think this actually looks cool and further sells the effect of it being solid glass!

Injecting uniforms

One final note before we see the fruits of our labor — how do we include all these custom uniforms in our modified material? We can’t just stick stuff onto material.uniforms like you would with THREE.ShaderMaterial. The trick is to create your own personal uniforms object and then wire up its contents onto the shader argument inside of onBeforeCompile. For instance:

const myUniforms = {
  foo: { value: 0 }
}

material.onBeforeCompile = shader => {
  shader.uniforms.foo = myUniforms.foo

  // ... (all your other patches)
}

When the renderer reads shader.uniforms.foo.value, it’s actually reading from your local myUniforms.foo.value, so any change to the values in your uniforms object will automatically be reflected in the shader.
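
For example, with the hypothetical foo uniform above:

// e.g. in your tick loop; the shader picks up the new value on the next render
myUniforms.foo.value = clock.getElapsedTime() // clock being a THREE.Clock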

I typically use the JavaScript spread operator to wire up all my uniforms at once:

const myUniforms = {
  // ...(lots of stuff)
}

material.onBeforeCompile = shader => {
  shader.uniforms = { ...shader.uniforms, ...myUniforms }

  // ... (all your other patches)
}

Putting this all together, we get a gassy (and glassy) volume. I’ve added sliders to this Pen so you can play around with the iteration count, smoothing, max depth, and colors.

See the Pen
by Matt (@mattrossman)
on CodePen.

ℹ Technically the ray origin and ray direction should be in local space so the effect doesn’t break when the marble moves. However, I’m skipping this transformation because we’re not moving the marble, so world space and local space are interchangeable. Work smarter not harder!

Step 3: Wavy Motion

Almost done! The final touch is to make this marble come alive by animating the volume. Harry’s waving displacement post explains how he accomplishes this using a 2D displacement texture. However, just like with the heightmap, a flat displacement texture warps near the poles of a sphere. So, we’ll make our own again. You can use the same Blender setup as before, but this time let’s bake a 3D noise texture to the RGB channels:

Then in our marchMarble function, we’ll read from this texture using the same equirectUv function as before, center the values, and then add a scaled version of that vector to the position used for the heightmap texture lookup. To animate the displacement, introduce a time uniform and use that to scroll the displacement texture horizontally. For an even better effect, we’ll sample the displacement map twice (once upright, then upside down so they never perfectly align), scroll them in opposite directions and add them together to produce noise that looks chaotic. This general strategy is often used in water shaders to create waves.

uniform float time;
uniform float strength;
uniform sampler2D displacementMap;

// Lookup displacement texture
vec2 uv = equirectUv(normalize(p));
vec2 scrollX = vec2(time, 0.);
vec2 flipY = vec2(1., -1.);
vec3 displacementA = texture(displacementMap, uv + scrollX).rgb;
vec3 displacementB = texture(displacementMap, uv * flipY - scrollX).rgb;

// Center the noise
displacementA -= 0.5;
displacementB -= 0.5;

// Displace current ray position and lookup heightmap
vec3 displaced = p + strength * (displacementA + displacementB);
uv = equirectUv(normalize(displaced));
float heightMapVal = texture(heightMap, uv).r;

Behold, your magical marble!

See the Pen
by Matt (@mattrossman)
on CodePen.

Extra Credit

Hard part’s over! This formula is a starting point from which there are endless possibilities for improvements and deviations. For instance, what happens if we swap out the noise texture we used earlier for something else like this:

This was created using the “Wave Texture” node in Blender

See the Pen
by Matt (@mattrossman)
on CodePen.

Or how about something recognizable, like this map of the earth?

Try dragging the “displacement” slider and watch how the floating continents dance around!

See the Pen
by Matt (@mattrossman)
on CodePen.

In that example I modified the shader to make the volume look less gaseous by boosting the rate of volume accumulation, breaking the loop once it reached a certain volume threshold, and tinting based on the final number of iterations rather than accumulated volume.
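
Here’s a sketch of what those tweaks could look like as a variant of marchMarble (the boost factor and threshold are illustrative values, not the demo’s exact numbers):

vec3 marchMarbleSolid(vec3 rayOrigin, vec3 rayDir) {
  float perIteration = 1. / float(iterations);
  vec3 deltaRay = rayDir * perIteration * depth;
  vec3 p = rayOrigin;
  float totalVolume = 0.;
  float depthReached = 1.; // fraction of the march completed

  for (int i = 0; i < iterations; ++i) {
    vec2 uv = equirectUv(normalize(p));
    float heightMapVal = texture(heightMap, uv).r;
    float cutoff = 1. - float(i) * perIteration;
    float slice = smoothstep(cutoff, cutoff + smoothing, heightMapVal);

    // Boost the accumulation rate
    totalVolume += slice * perIteration * 4.;

    // Bail once the volume is "solid", remembering how deep we marched
    if (totalVolume >= 1.) {
      depthReached = float(i) * perIteration;
      break;
    }

    p += deltaRay;
  }

  // Tint by iteration depth rather than by accumulated volume
  return mix(colorA, colorB, depthReached);
}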

For my last trick, I’ll point back to Harry’s write-up where he suggests mixing between two HDR colors. This basically means mixing between colors whose RGB values exceed the typical [0, 1] range. If we plug such a color into our shader as-is, it’ll create color artifacts in the pixels where the lighting is blown out. There’s an easy solve for this by wrapping the color in a toneMapping() call as is done in tonemapping_fragment.glsl.js, which “tones down” the color range. I couldn’t find where that function is actually defined, but it works!
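
In our shader that amounts to a one-line change at the end of marchMarble (a sketch; toneMapping is the function Three.js generates from the material’s tone mapping setting):

// Instead of returning the raw HDR mix:
return toneMapping(mix(colorA, colorB, totalVolume));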

I’ve added some color multiplier sliders to this Pen so you can push the colors outside the [0, 1] range and observe how mixing these HDR colors creates pleasant color ramps.

See the Pen
by Matt (@mattrossman)
on CodePen.

Conclusion

Thanks again to Harry for the great learning resources. I had a ton of fun trying to recreate this effect and I learned a lot along the way. Hopefully you learned something too!

Your challenge now is to take these examples and run with them. Change the code, the textures, the colors, and make your very own magical marble. Show me and Harry what you make on Twitter.

Surprise me!


Rock the Stage with a Smooth WebGL Shader Transformation on Scroll

It’s fascinating what magical effects you can add to a website when you experiment with vertex displacement. Today we’d like to share a method with you that you can use to create your own WebGL shader animation linked to scroll progress. It’s a great way to learn how to bind shader vertices and colors to user interactions and to find the best flow.

We’ll be using Pug, Sass, Three.js and GSAP for our project.

Let’s rock!

The stage

For our flexible scroll stage, we quickly create three sections with Pug. By adding an element to the sections array, it’s easy to expand the stage.

index.pug:

.scroll__stage
  .scroll__content
    - const sections = ['Logma', 'Naos', 'Chara']
    each section, index in sections
        section.section
          .section__title
            h1.section__title-number= index < 9 ? `0${index + 1}` : index + 1

            h2.section__title-text= section

          p.section__paragraph The fireball that we rode was moving – But now we've got a new machine – They got music in the solar system
            br
            a.section__button Discover

The sections are quickly formatted with Sass, along with the placeholder selectors we will need later.

index.sass:

%top
  top: 0
  left: 0
  width: 100%

%fixed
  @extend %top

  position: fixed

%absolute
  @extend %top

  position: absolute

*,
*::after,
*::before
  margin: 0
  padding: 0
  box-sizing: border-box

.section
  display: flex
  justify-content: space-evenly
  align-items: center
  width: 100%
  min-height: 100vh
  padding: 8rem
  color: white
  background-color: black

  &:nth-child(even)
    flex-direction: row-reverse
    background: blue

  /* your design */

Now we write our ScrollStage class and set up a scene with Three.js. A camera far plane of 10 is enough for us here. We already prepare the loop for later instructions.

index.js:

import * as THREE from 'three'

class ScrollStage {
  constructor() {
    this.element = document.querySelector('.scroll__stage')

    this.viewport = {
      width: window.innerWidth,
      height: window.innerHeight,
    }

    this.scene = new THREE.Scene()

    this.renderer = new THREE.WebGLRenderer({ 
      antialias: true, 
      alpha: true 
    })

    this.canvas = this.renderer.domElement

    this.camera = new THREE.PerspectiveCamera( 
      75, 
      this.viewport.width / this.viewport.height, 
      .1, 
      10
    )

    this.clock = new THREE.Clock()

    this.update = this.update.bind(this)

    this.init()
  }

  init() {
    this.addCanvas()
    this.addCamera()
    this.addEventListeners()
    this.onResize()
    this.update()
  }

  /**
   * STAGE
   */
  addCanvas() {
    this.canvas.classList.add('webgl')
    document.body.appendChild(this.canvas)
  }

  addCamera() {
    this.camera.position.set(0, 0, 2.5)
    this.scene.add(this.camera)
  }

  /**
   * EVENTS
   */
  addEventListeners() {
    window.addEventListener('resize', this.onResize.bind(this))
  }

  onResize() {
    this.viewport = {
      width: window.innerWidth,
      height: window.innerHeight
    }

    this.camera.aspect = this.viewport.width / this.viewport.height
    this.camera.updateProjectionMatrix()

    this.renderer.setSize(this.viewport.width, this.viewport.height)
    this.renderer.setPixelRatio(Math.min(window.devicePixelRatio, 1.5))  
  }

  /**
   * LOOP
   */
  update() {
    this.render()

    window.requestAnimationFrame(this.update) 
  }

  /**
   * RENDER
   */
  render() {
    this.renderer.render(this.scene, this.camera)
  }  
}

new ScrollStage()

We disable the pointer events and let the canvas blend.

index.sass:

...

canvas.webgl
  @extend %fixed

  pointer-events: none
  mix-blend-mode: screen

...

The rockstar

We create a mesh, assign an icosahedron geometry and set the blending of its material to additive for loud colors. And – I like the wireframe style. For now, we set the value of all uniforms to 0 (uOpacity to 1).
I usually scale down the mesh for portrait screens. With only one object, we can do it this way. Otherwise you’d better transform camera.position.z.

Let’s rotate our sphere slowly.

index.js:

...

import vertexShader from './shaders/vertex.glsl'
import fragmentShader from './shaders/fragment.glsl'

...

  init() {

    ...

    this.addMesh()

    ...
  }

  /**
   * OBJECT
   */
  addMesh() {
    this.geometry = new THREE.IcosahedronGeometry(1, 64)

    this.material = new THREE.ShaderMaterial({
      wireframe: true,
      blending: THREE.AdditiveBlending,
      transparent: true,
      vertexShader,
      fragmentShader,
      uniforms: {
        uFrequency: { value: 0 },
        uAmplitude: { value: 0 },
        uDensity: { value: 0 },
        uStrength: { value: 0 },
        uDeepPurple: { value: 0 },
        uOpacity: { value: 1 }
      }
    })

    this.mesh = new THREE.Mesh(this.geometry, this.material)

    this.scene.add(this.mesh)
  }

  ...

  onResize() {

    ...

    if (this.viewport.width < this.viewport.height) {
      this.mesh.scale.set(.75, .75, .75)
    } else {
      this.mesh.scale.set(1, 1, 1)
    }

    ...

  }

  update() {
    const elapsedTime = this.clock.getElapsedTime()

    this.mesh.rotation.y = elapsedTime * .05

    ...

  }

In the vertex shader (which positions the geometry) and the fragment shader (which assigns a color to its pixels), we use the uniform values that we will later drive from the scroll position. To generate an organic randomness, we make some noise. This shader program now runs on the GPU.

/shaders/vertex.glsl:

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)

uniform float uFrequency;
uniform float uAmplitude;
uniform float uDensity;
uniform float uStrength;

varying float vDistortion;

void main() {  
  float distortion = pnoise(normal * uDensity, vec3(10.)) * uStrength;

  vec3 pos = position + (normal * distortion);
  float angle = sin(uv.y * uFrequency) * uAmplitude;
  pos = rotateY(pos, angle);    

  vDistortion = distortion;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.);
}

/shaders/fragment.glsl:

uniform float uOpacity;
uniform float uDeepPurple;

varying float vDistortion;

vec3 cosPalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
  return a + b * cos(6.28318 * (c * t + d));
}     

void main() {
  float distort = vDistortion * 3.;

  vec3 brightness = vec3(.1, .1, .9);
  vec3 contrast = vec3(.3, .3, .3);
  vec3 oscilation = vec3(.5, .5, .9);
  vec3 phase = vec3(.9, .1, .8);

  vec3 color = cosPalette(distort, brightness, contrast, oscilation, phase);

  gl_FragColor = vec4(color, vDistortion);
  gl_FragColor += vec4(min(uDeepPurple, 1.), 0., .5, min(uOpacity, 1.));
}

If you don’t understand what’s happening here, I recommend this tutorial by Mario Carrillo.

The soundcheck

To find your preferred settings, you can set up a dat.gui, for example. I’ll show you another approach here, in which you can combine two (or more) parameters to intuitively find a cool flow of movement. We simply connect the uniform values to the normalized values of the mouse event and log them to the console. As we use this approach only for development, we do not call rAF (requestAnimationFrame).

index.js:

...

import GSAP from 'gsap'

...

  constructor() {

    ...

    this.mouse = {
      x: 0,
      y: 0
    }

    this.settings = {
      // vertex
      uFrequency: {
        start: 0,
        end: 0
      },
      uAmplitude: {
        start: 0,
        end: 0
      },
      uDensity: {
        start: 0,
        end: 0
      },
      uStrength: {
        start: 0,
        end: 0
      },
      // fragment
      uDeepPurple: {  // max 1
        start: 0,
        end: 0
      },
      uOpacity: {  // max 1
        start: 1,
        end: 1
      }
    }

    ...

  }

  addEventListeners() {

    ...

    window.addEventListener('mousemove', this.onMouseMove.bind(this))
  }

  onMouseMove(event) {
    // play with it!
    // enable / disable / change x, y, multiplier …

    this.mouse.x = (event.clientX / this.viewport.width).toFixed(2) * 4
    this.mouse.y = (event.clientY / this.viewport.height).toFixed(2) * 2

    GSAP.to(this.mesh.material.uniforms.uFrequency, { value: this.mouse.x })
    GSAP.to(this.mesh.material.uniforms.uAmplitude, { value: this.mouse.x })
    GSAP.to(this.mesh.material.uniforms.uDensity, { value: this.mouse.y })
    GSAP.to(this.mesh.material.uniforms.uStrength, { value: this.mouse.y })
    // GSAP.to(this.mesh.material.uniforms.uDeepPurple, { value: this.mouse.x })
    // GSAP.to(this.mesh.material.uniforms.uOpacity, { value: this.mouse.y })

    console.info(`X: ${this.mouse.x}  |  Y: ${this.mouse.y}`)
  }

The support act

To create a really fluid mood, we first implement our smooth scroll.

index.sass:

body
  overscroll-behavior: none
  width: 100%
  height: 100vh

  ...

.scroll
  &__stage
    @extend %fixed

    height: 100vh

  &__content
    @extend %absolute

    will-change: transform

SmoothScroll.js:

import GSAP from 'gsap'

export default class {
  constructor({ element, viewport, scroll }) {
    this.element = element
    this.viewport = viewport
    this.scroll = scroll

    this.elements = {
      scrollContent: this.element.querySelector('.scroll__content')
    }
  }

  setSizes() {
    this.scroll.height = this.elements.scrollContent.getBoundingClientRect().height
    this.scroll.limit = this.elements.scrollContent.clientHeight - this.viewport.height

    document.body.style.height = `${this.scroll.height}px`
  }

  update() {
    this.scroll.hard = window.scrollY
    this.scroll.hard = GSAP.utils.clamp(0, this.scroll.limit, this.scroll.hard)
    this.scroll.soft = GSAP.utils.interpolate(this.scroll.soft, this.scroll.hard, this.scroll.ease)

    if (this.scroll.soft < 0.01) {
      this.scroll.soft = 0
    }

    this.elements.scrollContent.style.transform = `translateY(${-this.scroll.soft}px)`
  }    

  onResize() {
    this.viewport = {
      width: window.innerWidth,
      height: window.innerHeight
    }

    this.setSizes()
  }
}

index.js:

...

import SmoothScroll from './SmoothScroll'

...

  constructor() {

    ...

    this.scroll = {
      height: 0,
      limit: 0,
      hard: 0,
      soft: 0,
      ease: 0.05
    }

    this.smoothScroll = new SmoothScroll({ 
      element: this.element, 
      viewport: this.viewport, 
      scroll: this.scroll
    })

    ...

  }

  ...

  onResize() {

    ...

    this.smoothScroll.onResize()

    ...

  }

  update() {

    ...

    this.smoothScroll.update()

    ...

  }

The show

Finally, let’s rock the stage!

Once we have chosen the start and end values, it’s easy to attach them to the scroll position. In this example, we want to drop the purple mesh through the blue section so that it is subsequently soaked in blue itself. We increase the frequency and the strength of our vertex displacement. Let’s first enter these values in our settings and update the mesh material. We normalize scrollY so that we get values from 0 to 1 and can make our calculations with them.

To render the shader only while scrolling, we call rAF from the scroll listener. We don’t need the mouse event listener anymore.

To improve performance, we add an overwrite to the GSAP default settings. This way we kill any existing tweens while generating a new one for every frame. A long duration renders the movement extra smooth. Once again we let the object rotate slightly with the scroll movement. We iterate over our settings and GSAP makes the music.

index.js:

  constructor() {

  ...

    this.scroll = {

      ...

      normalized: 0, 
      running: false
    }

    this.settings = {
      // vertex
      uFrequency: {
        start: 0,
        end: 4
      },
      uAmplitude: {
        start: 4,
        end: 4
      },
      uDensity: {
        start: 1,
        end: 1
      },
      uStrength: {
        start: 0,
        end: 1.1
      },
      // fragment
      uDeepPurple: {  // max 1
        start: 1,
        end: 0
      },
      uOpacity: { // max 1
        start: .33,
        end: .66
      }
    }

    GSAP.defaults({
      ease: 'power2',
      duration: 6.6,
      overwrite: true
    })

    this.updateScrollAnimations = this.updateScrollAnimations.bind(this)

    ...

  }

...

  addMesh() {

  ...

    uniforms: {
      uFrequency: { value: this.settings.uFrequency.start },
      uAmplitude: { value: this.settings.uAmplitude.start },
      uDensity: { value: this.settings.uDensity.start },
      uStrength: { value: this.settings.uStrength.start },
      uDeepPurple: { value: this.settings.uDeepPurple.start },
      uOpacity: { value: this.settings.uOpacity.start }
    }
  }

  ...

  addEventListeners() {

    ...

    // window.addEventListener('mousemove', this.onMouseMove.bind(this))  // enable to find your preferred values (console)

    window.addEventListener('scroll', this.onScroll.bind(this))
  }

  ...

  /**
   * SCROLL BASED ANIMATIONS
   */
  onScroll() {
    this.scroll.normalized = (this.scroll.hard / this.scroll.limit).toFixed(1)

    if (!this.scroll.running) {
      window.requestAnimationFrame(this.updateScrollAnimations)

      this.scroll.running = true
    }
  }

  updateScrollAnimations() {
    this.scroll.running = false

    GSAP.to(this.mesh.rotation, {
      x: this.scroll.normalized * Math.PI
    })

    for (const key in this.settings) {
      if (this.settings[key].start !== this.settings[key].end) {
        GSAP.to(this.mesh.material.uniforms[key], {
          value: this.settings[key].start + this.scroll.normalized * (this.settings[key].end - this.settings[key].start)
        })
      }
    }
  }

Thanks for reading this tutorial, hope you like it!
Try it out, go new ways, have fun – dare a stage dive!


Tropical Particles Rain Animation with Three.js

People always want more. More of everything. And there is nothing better in the world to fulfil that need than particle systems. Because you can have thousands of particles easily. And that’s what I did for this demo.

Creating a particle system

You start with just one particle. I like to use classes for that, because they perfectly describe the behaviour of particles. It usually looks like this:

class Particle {
  // just setting initial position and speed
  constructor() {}
  // updating velocity according to position and physics
  updateVelocity() {}
  // finally update particle position;
  // sometimes these last two could be merged
  updatePosition() {}
}

And after all of that, you could render them. Sounds easy, right? It is!

Physics

So the particles need to move. For that we need to establish some rules for their movement. The easiest one is: gravity. Implemented in the particle it would look like this:

updateVelocity() {
  this.speed.y = gravity; // some constant
}

updatePosition() {
  // this could have been just one line, but imagine there are some other factors for speed, not just gravity
  this.position.y += this.speed.y;

  // also some code to bring the particle back on screen once it is outside
}

With that you will get this simple rain. I also randomized gravity for each particle. Why? Because I can!

But just rain didn’t seem enough, and I decided to add a little bit of physics to change the speed of the particles. To slow them down or speed them up, depending on the… image. Yup, I took the physics for the particles from the image. My friend @EncharmDre proposed using some oscillations for the particles, and I also added some color palettes here. It looked amazing! Like a real tropical rain.
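
The write-up doesn’t show this part, but one straightforward way to read per-pixel values from the image is a hidden 2D canvas (the names below are illustrative, assuming image is a loaded HTMLImageElement):

// Draw the image into an offscreen canvas once, so we can read its pixels
const canvas = document.createElement('canvas');
canvas.width = image.width;
canvas.height = image.height;
const ctx = canvas.getContext('2d');
ctx.drawImage(image, 0, 0);
const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;

// Brightness in [0, 1] at a pixel coordinate
function brightnessAt(x, y) {
  const i = (Math.floor(y) * canvas.width + Math.floor(x)) * 4;
  return (pixels[i] + pixels[i + 1] + pixels[i + 2]) / (3 * 255);
}

// e.g. inside updateVelocity(): fall slower over bright areas
// this.speed.y = gravity * (1 - 0.5 * brightnessAt(this.x, this.y));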

Still, the physics part is the computational bottleneck for this animation. It is possible to animate millions of particles by using GPGPU techniques and letting the GPU calculate the positions. But 10,000 particles seemed enough for my demo, and I got away with CPU calculations this time. Tens of thousands were also possible because I saved some performance on rendering.

Rendering

There are a lot of ways to render particles: HTML, Canvas2D, WebGL, SVG, you name it. But so far the fastest one is of course WebGL. It is even faster than Canvas2D rendering. I mean, of course Canvas2D works perfectly fine for particles. But with WebGL you could have MORE OF THEM! Which is the point of this whole adventure =).

So, WebGL narrowed down the search for rendering to a couple of frameworks. PIXI.js is actually very nice for rendering 2D particle systems, but because I love three.js, and because it’s always nice to have another dimension as a backup (the more dimensions the better, remember?), I chose three.js for this job.

There is already an object specifically for this job: THREE.Points. I also decided to use shaders (because it’s cool) but you could actually easily avoid that for this demo.

The simplified code looks somewhat like this:

// initialization
const positions = new Float32Array(numberOfParticles * 3);

for (let i = 0; i < numberOfParticles; i++) {
  // x and y come from your own layout logic (e.g. random points on screen);
  // because we are animating them in 2D, z = 0
  positions.set([x, y, 0], i * 3);
  this.particles.push(
    new Particle({
      x,
      y
    })
  );
}
geometry.setAttribute(
  "position",
  new THREE.BufferAttribute(positions, 3)
);

// animation loop
particles.forEach(particle => {
  particle.updateSpeedAndPosition();
});

To make it even cooler, I decided to create trails for the particles. For that I just used the previous rendering frame, and faded it a little bit. And here is what I got:
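
One common way to get such trails in three.js (a sketch of the general idea, not necessarily the exact setup used here) is to skip clearing the color buffer and dim the previous frame with a translucent black fullscreen quad:

renderer.autoClearColor = false; // keep the previous frame's pixels around

// A translucent black quad that slightly dims whatever was rendered before;
// sized to cover the viewport (trivial with an orthographic [-1, 1] camera)
const fadeMaterial = new THREE.MeshBasicMaterial({
  color: 0x000000,
  transparent: true,
  opacity: 0.05,
  depthTest: false
});
const fadePlane = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), fadeMaterial);
fadePlane.renderOrder = -1; // draw the fade before the particles
scene.add(fadePlane);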

Now combining everything, we could achieve this final look:

Final words

I beg you to try this yourself, and experiment with different parameters for particles, or write your own logic for their behavior. It is a lot of fun. And you could be the ruler of thousands or even millions of… particles!

GitHub link coming soon!


Creating an Infinite Circular Gallery using WebGL with OGL and GLSL Shaders

In this tutorial we’ll implement an infinite circular gallery using WebGL with OGL based on the website Lions Good News 2020 made by SHIFTBRAIN inc.

Most of the steps of this tutorial can be also reproduced in other WebGL libraries such as Three.js or Babylon.js with the correct adaptations.

With that being said, let’s start coding!

Creating our OGL 3D environment

The first step of any WebGL tutorial is making sure that you’re setting up all the rendering logic required to create a 3D environment.

Usually what’s required is: a camera, a scene and a renderer that is going to output everything into a canvas element. Then inside a requestAnimationFrame loop, you’ll use your camera to render a scene inside the renderer. So here’s our initial snippet:

import { Renderer, Camera, Transform } from 'ogl'
 
export default class App {
  constructor () {
    this.createRenderer()
    this.createCamera()
    this.createScene()
 
    this.onResize()
 
    this.update()
 
    this.addEventListeners()
  }
 
  createRenderer () {
    this.renderer = new Renderer()
 
    this.gl = this.renderer.gl
    this.gl.clearColor(0.79607843137, 0.79215686274, 0.74117647058, 1)
 
    document.body.appendChild(this.gl.canvas)
  }
 
  createCamera () {
    this.camera = new Camera(this.gl)
    this.camera.fov = 45
    this.camera.position.z = 20
  }
 
  createScene () {
    this.scene = new Transform()
  }
 
  /**
   * Events.
   */
  onTouchDown (event) {
      
  }
 
  onTouchMove (event) {
      
  }
 
  onTouchUp (event) {
      
  }
 
  onWheel (event) {
      
  }
 
  /**
   * Resize.
   */
  onResize () {
    this.screen = {
      height: window.innerHeight,
      width: window.innerWidth
    }
 
    this.renderer.setSize(this.screen.width, this.screen.height)
 
    this.camera.perspective({
      aspect: this.gl.canvas.width / this.gl.canvas.height
    })
 
    const fov = this.camera.fov * (Math.PI / 180)
    const height = 2 * Math.tan(fov / 2) * this.camera.position.z
    const width = height * this.camera.aspect
 
    this.viewport = {
      height,
      width
    }
  }
 
  /**
   * Update.
   */
  update () {
    this.renderer.render({
      scene: this.scene,
      camera: this.camera
    })
    
    window.requestAnimationFrame(this.update.bind(this))
  }
 
  /**
   * Listeners.
   */
  addEventListeners () {
    window.addEventListener('resize', this.onResize.bind(this))
 
    window.addEventListener('mousewheel', this.onWheel.bind(this))
    window.addEventListener('wheel', this.onWheel.bind(this))
 
    window.addEventListener('mousedown', this.onTouchDown.bind(this))
    window.addEventListener('mousemove', this.onTouchMove.bind(this))
    window.addEventListener('mouseup', this.onTouchUp.bind(this))
 
    window.addEventListener('touchstart', this.onTouchDown.bind(this))
    window.addEventListener('touchmove', this.onTouchMove.bind(this))
    window.addEventListener('touchend', this.onTouchUp.bind(this))
  }
}
 
new App()

Explaining the App class setup

In our createRenderer method, we’re initializing a renderer with a fixed color background by calling this.gl.clearColor. Then we’re storing our GL context (this.renderer.gl) reference in the this.gl variable and appending our <canvas> (this.gl.canvas) element to our document.body.

In our createCamera method, we’re creating a new Camera() instance and setting some of its attributes: fov and its z position. The FOV is the field of view of your camera, what you’re able to see from it. And the z is the position of your camera in the z axis.

In our createScene method, we’re using the Transform class, that is the representation of a new scene that is going to contain all our planes that represent our images in the WebGL environment.

The onResize method is the most important part of our initial setup. It’s responsible for three different things:

  1. Making sure we’re always resizing the <canvas> element with the correct viewport sizes.
  2. Updating our this.camera perspective with the viewport’s aspect ratio (width divided by height).
  3. Storing in the variable this.viewport, the value representations that will help to transform pixels into 3D environment sizes by using the fov from the camera.

The approach of using the camera.fov to transform pixels into 3D environment sizes is used very often in multiple WebGL implementations. Basically it makes sure that if we do something like this.mesh.scale.x = this.viewport.width, our mesh will fit the entire screen width, behaving like width: 100%, but in 3D space.

And finally in our update, we’re setting our requestAnimationFrame loop and making sure we keep rendering our scene.

You’ll also notice that we already included the wheel, touchstart, touchmove, touchend, mousedown, mousemove and mouseup events, they will be used to include user interactions with our application.

Creating a reusable geometry instance

It’s a good practice to keep memory usage low by always reusing the same geometry reference no matter what WebGL library you’re using. To represent all our images, we’re going to use a Plane geometry, so let’s create a new method and store this new geometry inside the this.planeGeometry variable.

import { Renderer, Camera, Transform, Plane } from 'ogl'
 
createGeometry () {
  this.planeGeometry = new Plane(this.gl, {
    heightSegments: 50,
    widthSegments: 100
  })
}

The reason for including heightSegments and widthSegments with these values is to be able to manipulate the vertices in a way that makes the Plane behave like a sheet of paper in the air.

Importing our images using Webpack

Now it’s time to import our images into our application. Since we’re using Webpack in this tutorial, all we need to do to request our images is using import:

import Image1 from 'images/1.jpg'
import Image2 from 'images/2.jpg'
import Image3 from 'images/3.jpg'
import Image4 from 'images/4.jpg'
import Image5 from 'images/5.jpg'
import Image6 from 'images/6.jpg'
import Image7 from 'images/7.jpg'
import Image8 from 'images/8.jpg'
import Image9 from 'images/9.jpg'
import Image10 from 'images/10.jpg'
import Image11 from 'images/11.jpg'
import Image12 from 'images/12.jpg'

Now let’s create our array of images that we want to use in our infinite slider. We’re basically going to reference the variables above inside a createMedias method and use .map to create new instances of the Media class (new Media()), which is going to be our representation of each image of the gallery.

createMedias () {
  this.mediasImages = [
    { image: Image1, text: 'New Synagogue' },
    { image: Image2, text: 'Paro Taktsang' },
    { image: Image3, text: 'Petra' },
    { image: Image4, text: 'Gooderham Building' },
    { image: Image5, text: 'Catherine Palace' },
    { image: Image6, text: 'Sheikh Zayed Mosque' },
    { image: Image7, text: 'Madonna Corona' },
    { image: Image8, text: 'Plaza de Espana' },
    { image: Image9, text: 'Saint Martin' },
    { image: Image10, text: 'Tugela Falls' },
    { image: Image11, text: 'Sintra-Cascais' },
    { image: Image12, text: 'The Prophet\'s Mosque' },
    { image: Image1, text: 'New Synagogue' },
    { image: Image2, text: 'Paro Taktsang' },
    { image: Image3, text: 'Petra' },
    { image: Image4, text: 'Gooderham Building' },
    { image: Image5, text: 'Catherine Palace' },
    { image: Image6, text: 'Sheikh Zayed Mosque' },
    { image: Image7, text: 'Madonna Corona' },
    { image: Image8, text: 'Plaza de Espana' },
    { image: Image9, text: 'Saint Martin' },
    { image: Image10, text: 'Tugela Falls' },
    { image: Image11, text: 'Sintra-Cascais' },
    { image: Image12, text: 'The Prophet\'s Mosque' },
  ]
 
 
  this.medias = this.mediasImages.map(({ image, text }, index) => {
    const media = new Media({
      geometry: this.planeGeometry,
      gl: this.gl,
      image,
      index,
      length: this.mediasImages.length,
      scene: this.scene,
      screen: this.screen,
      text,
      viewport: this.viewport
    })
 
    return media
  })
}

As you’ve probably noticed, we’re passing a bunch of arguments to our Media class, I’ll explain why they’re needed when we start setting up the class in the next section. We’re also duplicating the amount of images to avoid any issues of not having enough images when making our gallery infinite on very wide screens.

It’s important to also include some specific calls in the onResize and update methods for our this.medias array, because we want the images to be responsive:

onResize () {
  if (this.medias) {
    this.medias.forEach(media => media.onResize({
      screen: this.screen,
      viewport: this.viewport
    }))
  }
}

And also do some real-time manipulations inside the requestAnimationFrame:

update () {
  this.medias.forEach(media => media.update(this.scroll, this.direction))
}

Setting up the Media class

Our Media class is going to use Mesh, Program and Texture classes from OGL to create a 3D plane and attribute a texture to it, which in our case is going to be our images.

In our constructor, we’re going to store all variables that we need and that were passed in the new Media() initialization from index.js:

export default class {
  constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
    this.geometry = geometry
    this.gl = gl
    this.image = image
    this.index = index
    this.length = length
    this.scene = scene
    this.screen = screen
    this.text = text
    this.viewport = viewport
 
    this.createShader()
    this.createMesh()
 
    this.onResize()
  }
}

Explaining a few of these arguments: the geometry is the geometry we’re going to apply to our Mesh class. The this.gl is our GL context, useful to keep doing WebGL manipulations inside the class. The this.image is the URL of the image. Both this.index and this.length will be used to do position calculations of the mesh. The this.scene is the group to which we’re going to append our mesh. And finally, this.screen and this.viewport are the sizes of the viewport and environment.

Now it’s time to create the shader that is going to be applied to our Mesh in the createShader method, in OGL shaders are created with Program:

createShader () {
  const texture = new Texture(this.gl, {
    generateMipmaps: false
  })
 
  this.program = new Program(this.gl, {
    fragment,
    vertex,
    uniforms: {
      tMap: { value: texture },
      uPlaneSizes: { value: [0, 0] },
      uImageSizes: { value: [0, 0] },
      uViewportSizes: { value: [this.viewport.width, this.viewport.height] }
    },
    transparent: true
  })
 
  const image = new Image()
 
  image.src = this.image
  image.onload = _ => {
    texture.image = image
 
    this.program.uniforms.uImageSizes.value = [image.naturalWidth, image.naturalHeight]
  }
}

In the snippet above, we’re basically creating a new Texture() instance, making sure to use generateMipmaps as false so it preserves the quality of the image. Then creating a new Program() instance, which represents a shader composed of fragment and vertex with some uniforms used to manipulate it.

We’re also creating a new Image() instance to preload the image before applying it to the texture.image. And also updating the this.program.uniforms.uImageSizes.value because it’s going to be used to preserve the aspect ratio of our images.

It’s important to create our fragment and vertex shaders now, so we’re going to create two new files: fragment.glsl and vertex.glsl:

fragment.glsl:

precision highp float;
 
uniform vec2 uImageSizes;
uniform vec2 uPlaneSizes;
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  vec2 ratio = vec2(
    min((uPlaneSizes.x / uPlaneSizes.y) / (uImageSizes.x / uImageSizes.y), 1.0),
    min((uPlaneSizes.y / uPlaneSizes.x) / (uImageSizes.y / uImageSizes.x), 1.0)
  );
 
  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );
 
  gl_FragColor.rgb = texture2D(tMap, uv).rgb;
  gl_FragColor.a = 1.0;
}
vertex.glsl:

precision highp float;
 
attribute vec3 position;
attribute vec2 uv;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
varying vec2 vUv;
 
void main() {
  vUv = uv;
 
  vec3 p = position;
 
  gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
}

And require them in the start of Media.js using Webpack:

import fragment from './fragment.glsl'
import vertex from './vertex.glsl'

Now let’s create our new Mesh() instance in the createMesh method merging together the geometry and shader.

createMesh () {
  this.plane = new Mesh(this.gl, {
    geometry: this.geometry,
    program: this.program
  })
 
  this.plane.setParent(this.scene)
}

The Mesh instance is stored in the this.plane variable to be reused in the onResize and update methods, then appended as a child of the this.scene group.

The only thing we have now on the screen is a simple square with our image:

Let’s now implement the onResize method and make sure we’re rendering rectangles:

onResize ({ screen, viewport } = {}) {
  if (screen) {
    this.screen = screen
  }
 
  if (viewport) {
    this.viewport = viewport
 
    this.plane.program.uniforms.uViewportSizes.value = [this.viewport.width, this.viewport.height]
  }
 
  this.scale = this.screen.height / 1500
 
  this.plane.scale.y = this.viewport.height * (900 * this.scale) / this.screen.height
  this.plane.scale.x = this.viewport.width * (700 * this.scale) / this.screen.width
 
  this.plane.program.uniforms.uPlaneSizes.value = [this.plane.scale.x, this.plane.scale.y]
}

The scale.y and scale.x calls are responsible for scaling our element properly, transforming our previous square into a 700×900 rectangle, based on the scale.

And the uViewportSizes and uPlaneSizes uniform value updates makes the image display correctly. That’s basically what makes the image have the background-size: cover; behavior, but in WebGL environment.

Now we need to position all the rectangles in the x axis, making sure we have a small gap between them. To achieve that, we’re going to use this.plane.scale.x, this.padding and this.index variables to do the calculation required to move them:

this.padding = 2
 
this.width = this.plane.scale.x + this.padding
this.widthTotal = this.width * this.length
 
this.x = this.width * this.index

And in the update method, we’re going to set the this.plane.position to these variables:

update () {
  this.plane.position.x = this.x
}

Now you’ve setup all the initial code of Media, which results in the following image:

Including infinite scrolling logic

Now it’s time to make it interesting and include scrolling logic on it, so we have at least an infinite gallery in place when the user scrolls through your page. In our index.js, we’ll do the following updates.

First, let’s include a new object called this.scroll in our constructor with all variables that we will manipulate to do the smooth scrolling:

this.scroll = {
  ease: 0.05,
  current: 0,
  target: 0,
  last: 0
}

Now let’s add the touch and wheel events, so when users interact with the canvas, they will be able to move stuff:

onTouchDown (event) {
  this.isDown = true
 
  this.scroll.position = this.scroll.current
  this.start = event.touches ? event.touches[0].clientX : event.clientX
}
 
onTouchMove (event) {
  if (!this.isDown) return
 
  const x = event.touches ? event.touches[0].clientX : event.clientX
  const distance = (this.start - x) * 0.01
 
  this.scroll.target = this.scroll.position + distance
}
 
onTouchUp (event) {
  this.isDown = false
}

Then, we’ll include the NormalizeWheel library in the onWheel event, so this way we get the same value in all browsers when the user scrolls:

onWheel (event) {
  const normalized = NormalizeWheel(event)
  const speed = normalized.pixelY
 
  this.scroll.target += speed * 0.005
}

In our update method with requestAnimationFrame, we’ll lerp the this.scroll.current with this.scroll.target to make it smooth, then we’ll pass it to all medias:

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll))
  }
 
  this.scroll.last = this.scroll.current
 
  window.requestAnimationFrame(this.update.bind(this))
}
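
The lerp function used above is a small helper that isn’t shown in the snippet; a minimal implementation (in the same spirit as the map helper that appears later) could be:

export function lerp (p1, p2, t) {
  return p1 + (p2 - p1) * t
}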

And now we just update our Media file to use the current scroll value to move the Mesh to the new scroll position:

update (scroll) {
  this.plane.position.x = this.x - scroll.current * 0.1
}

This is the current result we have:

As you’ve noticed, it’s not infinite yet. To achieve that, we need to include some extra code. The first step is including the direction of the scroll in the update method from index.js:

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.scroll.current > this.scroll.last) {
    this.direction = 'right'
  } else {
    this.direction = 'left'
  }
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll, this.direction))
  }
 
  this.scroll.last = this.scroll.current
}

Now in the Media class, you need to include a variable called this.extra in the constructor, and add or subtract the total width of the gallery from it whenever the element moves outside of the screen.

constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
  this.extra = 0
}

update (scroll, direction) {
  this.plane.position.x = this.x - scroll.current * 0.1 - this.extra
    
  const planeOffset = this.plane.scale.x / 2
  const viewportOffset = this.viewport.width
 
  this.isBefore = this.plane.position.x + planeOffset < -viewportOffset
  this.isAfter = this.plane.position.x - planeOffset > viewportOffset
 
  if (direction === 'right' && this.isBefore) {
    this.extra -= this.widthTotal
 
    this.isBefore = false
    this.isAfter = false
  }
 
  if (direction === 'left' && this.isAfter) {
    this.extra += this.widthTotal
 
    this.isBefore = false
    this.isAfter = false
  }
}

That’s it, now we have the infinite scrolling gallery, pretty cool right?

Including circular rotation

Now it’s time to include the special flavor of the tutorial, which is making the infinite scrolling also have a circular rotation. To achieve it, we’ll use Math.cos to change this.mesh.position.y according to the rotation of the element, and the map technique to change this.mesh.rotation.z based on the element’s position on the x axis.

First, let’s make it rotate in a smooth way based on the position. The map method is basically a way to remap a value from one range into another. For example, map(0.5, 0, 1, -500, 500) returns 0, because 0.5 is the middle of [0, 1] and 0 is the middle of [-500, 500]. Basically, the first three arguments control the output between min2 and max2:

export function map (num, min1, max1, min2, max2, round = false) {
  const num1 = (num - min1) / (max1 - min1)
  const num2 = (num1 * (max2 - min2)) + min2
 
  if (round) return Math.round(num2)
 
  return num2
}

Let’s see it in action by including the following line of code in the Media class:

this.plane.rotation.z = map(this.plane.position.x, -this.widthTotal, this.widthTotal, Math.PI, -Math.PI)

And that’s the result we get so far. It’s already pretty cool because you’re able to see the rotation changing based on the plane position:

Now it’s time to make it look circular. Let’s use Math.cos, we just need to do a simple calculation with this.plane.position.x / this.widthTotal, this way we’ll have a cos that will return a normalized value that we can just tweak multiplying by how much we want to change the y position of the element:

this.plane.position.y = Math.cos((this.plane.position.x / this.widthTotal) * Math.PI) * 75 - 75

Simple as that, we’re just moving it by 75 in environment space based on the position. This gives us the following result, which is exactly what we wanted to achieve:

Snapping to the closest item

Now let’s include a simple snapping to the closest item when the user stops scrolling. To achieve that, we need to create a new method called onCheck, it’s going to do some calculations when the user releases the scrolling:

onCheck () {
  const { width } = this.medias[0]
  const itemIndex = Math.round(Math.abs(this.scroll.target) / width)
  const item = width * itemIndex
 
  if (this.scroll.target < 0) {
    this.scroll.target = -item
  } else {
    this.scroll.target = item
  }
}

The result of the item variable is always the center of one of the elements in the gallery, which snaps the user to the corresponding position.
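
The snippets don’t show where onCheck gets called for touch interactions; a reasonable place (my assumption) is when the drag ends, in onTouchUp:

onTouchUp (event) {
  this.isDown = false

  this.onCheck()
}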

For wheel events, we need a debounced version of it called onCheckDebounce that we can include in the constructor by including lodash/debounce:

import debounce from 'lodash/debounce'
 
constructor () {
  this.onCheckDebounce = debounce(this.onCheck, 200)
}
 
onWheel (event) {
  this.onCheckDebounce()
}

Now the gallery is always being snapped to the correct entry:

Writing paper shaders

Finally let’s include the most interesting part of our project, which is enhancing the shaders a little bit by taking into account the scroll velocity and distorting the vertices of our meshes.

The first step is to include two new uniforms in our this.program declaration from Media class: uSpeed and uTime.

this.program = new Program(this.gl, {
  fragment,
  vertex,
  uniforms: {
    tMap: { value: texture },
    uPlaneSizes: { value: [0, 0] },
    uImageSizes: { value: [0, 0] },
    uViewportSizes: { value: [this.viewport.width, this.viewport.height] },
    uSpeed: { value: 0 },
    uTime: { value: 0 }
  },
  transparent: true
})

Now let’s write some shader code to make our images bend and distort in a very cool way. In your vertex.glsl file, you should include the new uniforms: uniform float uTime and uniform float uSpeed:

uniform float uTime;
uniform float uSpeed;

Then, inside the void main() of your shader, you can manipulate the vertices on the z axis using these two values plus the position stored in the variable p. We’re going to use a sin and cos to bend our vertices like a sheet of paper, so all you need to do is include the following line:

p.z = (sin(p.x * 4.0 + uTime) * 1.5 + cos(p.y * 2.0 + uTime) * 1.5);

Also don’t forget to include uTime increment in the update() method from Media:

this.program.uniforms.uTime.value += 0.04

Just this line of code outputs a pretty cool paper effect animation:

Including text in WebGL using MSDF fonts

Now let’s include our text inside the WebGL, to achieve that, we’re going to use msdf-bmfont to generate our files, you can see how to do that in this GitHub repository, but basically it’s installing the npm dependency and running the command below:

msdf-bmfont -f json -m 1024,1024 -d 4 --pot --smart-size freight.otf

After running it, you should now have a .png and .json file in the same directory, these are the files that we’re going to use on our MSDF implementation in OGL.

Now let’s create a new file called Title and start setting up the code of it. First let’s create our class and use import in the shaders and the files:

import AutoBind from 'auto-bind'
import { Color, Geometry, Mesh, Program, Text, Texture } from 'ogl'
 
import fragment from 'shaders/text-fragment.glsl'
import vertex from 'shaders/text-vertex.glsl'
 
import font from 'fonts/freight.json'
import src from 'fonts/freight.png'
 
export default class {
  constructor ({ gl, plane, renderer, text }) {
    AutoBind(this)
 
    this.gl = gl
    this.plane = plane
    this.renderer = renderer
    this.text = text
 
    this.createShader()
    this.createMesh()
  }
}

Now it’s time to start setting up the MSDF implementation code inside the createShader() method. The first thing we’re going to do is create a new Texture() instance and load the fonts/freight.png file stored in src:

createShader () {
  const texture = new Texture(this.gl, { generateMipmaps: false })
  const textureImage = new Image()
 
  textureImage.src = src
  textureImage.onload = _ => texture.image = textureImage
}

Then we need to set up the shaders we’re going to use to render the MSDF text. Because MSDF can be optimized in WebGL 2.0, we’re going to use this.renderer.isWebgl2 from OGL to check whether it’s supported and declare different shaders based on that, so we’ll have vertex300, fragment300, vertex100 and fragment100:

createShader () {
  const vertex100 = `${vertex}`
 
  const fragment100 = `
    #extension GL_OES_standard_derivatives : enable
 
    precision highp float;
 
    ${fragment}
  `
 
  const vertex300 = `#version 300 es
 
    #define attribute in
    #define varying out
 
    ${vertex}
  `
 
  const fragment300 = `#version 300 es
 
    precision highp float;
 
    #define varying in
    #define texture2D texture
    #define gl_FragColor FragColor
 
    out vec4 FragColor;
 
    ${fragment}
  `
 
  let fragmentShader = fragment100
  let vertexShader = vertex100
 
  if (this.renderer.isWebgl2) {
    fragmentShader = fragment300
    vertexShader = vertex300
  }
 
  this.program = new Program(this.gl, {
    cullFace: null,
    depthTest: false,
    depthWrite: false,
    transparent: true,
    fragment: fragmentShader,
    vertex: vertexShader,
    uniforms: {
      uColor: { value: new Color('#545050') },
      tMap: { value: texture }
    }
  })
}

As you’ve probably noticed, we’re prepending fragment and vertex with a different setup depending on the renderer’s WebGL version. Let’s also create our text-fragment.glsl and text-vertex.glsl files. First, text-fragment.glsl:

uniform vec3 uColor;
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  vec3 color = texture2D(tMap, vUv).rgb;
 
  // The median of the three channels reconstructs the MSDF signed distance
  float signed = max(min(color.r, color.g), min(max(color.r, color.g), color.b)) - 0.5;
  // fwidth gives roughly one pixel of smoothing for crisp edges
  float d = fwidth(signed);
  float alpha = smoothstep(-d, d, signed);
 
  if (alpha < 0.02) discard;
 
  gl_FragColor = vec4(uColor, alpha);
}
And then text-vertex.glsl:

attribute vec2 uv;
attribute vec3 position;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
varying vec2 vUv;
 
void main() {
  vUv = uv;
 
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

Finally, let’s create the geometry of our MSDF font implementation in the createMesh() method. For that we’ll use a new Text() instance from OGL, and then apply the buffers it generates to a new Geometry() instance:

createMesh () {
  const text = new Text({
    align: 'center',
    font,
    letterSpacing: -0.05,
    size: 0.08,
    text: this.text,
    wordSpacing: 0,
  })
 
  const geometry = new Geometry(this.gl, {
    position: { size: 3, data: text.buffers.position },
    uv: { size: 2, data: text.buffers.uv },
    id: { size: 1, data: text.buffers.id },
    index: { data: text.buffers.index }
  })
 
  geometry.computeBoundingBox()
 
  this.mesh = new Mesh(this.gl, { geometry, program: this.program })
  this.mesh.position.y = -this.plane.scale.y * 0.5 - 0.085
  this.mesh.setParent(this.plane)
}

Now let’s apply our brand new titles in the Media class. We’re going to create a new method called createTitle() and call it from the constructor:

constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
  this.createTitle()
}

createTitle () {
  this.title = new Title({
    gl: this.gl,
    plane: this.plane,
    renderer: this.renderer,
    text: this.text,
  })
}

Simple as that: we’re just including a new Title() instance inside our Media class, which will output the following result:

One of the best things about rendering text inside WebGL is reducing the overhead of calculations required by the browser when animating text to the right position. With the DOM approach, you’ll usually see some performance impact because the browser needs to recalculate layout and composite layers while translating the text.

For the purpose of this demo, we also included a new Number() class implementation that is responsible for showing the current index the user is seeing. You can check how it’s implemented in the source code, but it’s basically the same implementation as the Title class, the only difference being that it loads a different font style:

Including background blocks

To finalize the demo, let’s implement some blocks in the background that move along the x and y axes to enhance the depth effect:

To achieve this effect, we’re going to create a new Background class. Inside of it we’ll initialize a shared Plane() geometry and a set of Mesh() instances with random sizes and positions, by changing the scale and position of each mesh inside a for loop:

import { Color, Mesh, Plane, Program } from 'ogl'
 
import fragment from 'shaders/background-fragment.glsl'
import vertex from 'shaders/background-vertex.glsl'
 
import { random } from 'utils/math'
 
export default class {
  constructor ({ gl, scene, viewport }) {
    this.gl = gl
    this.scene = scene
    this.viewport = viewport
 
    const geometry = new Plane(this.gl)
    const program = new Program(this.gl, {
      vertex,
      fragment,
      uniforms: {
        uColor: { value: new Color('#c4c3b6') }
      },
      transparent: true
    })
 
    this.meshes = []
 
    for (let i = 0; i < 50; i++) {
      let mesh = new Mesh(this.gl, {
        geometry,
        program,
      })
 
      const scale = random(0.75, 1)
 
      mesh.scale.x = 1.6 * scale
      mesh.scale.y = 0.9 * scale
 
      mesh.speed = random(0.75, 1)
 
      mesh.xExtra = 0
 
      mesh.x = mesh.position.x = random(-this.viewport.width * 0.5, this.viewport.width * 0.5)
      mesh.y = mesh.position.y = random(-this.viewport.height * 0.5, this.viewport.height * 0.5)
 
      this.meshes.push(mesh)
 
      this.scene.addChild(mesh)
    }
  }
}

Then we just need to apply the endless scrolling logic to them as well, following the same direction validation we have in the Media class:

update (scroll, direction) {
  this.meshes.forEach(mesh => {
    mesh.position.x = mesh.x - scroll.current * mesh.speed - mesh.xExtra
 
    const viewportOffset = this.viewport.width * 0.5
    const widthTotal = this.viewport.width + mesh.scale.x
 
    mesh.isBefore = mesh.position.x < -viewportOffset
    mesh.isAfter = mesh.position.x > viewportOffset
 
    if (direction === 'right' && mesh.isBefore) {
      mesh.xExtra -= widthTotal
 
      mesh.isBefore = false
      mesh.isAfter = false
    }
 
    if (direction === 'left' && mesh.isAfter) {
      mesh.xExtra += widthTotal
 
      mesh.isBefore = false
      mesh.isAfter = false
    }
 
    mesh.position.y += 0.05 * mesh.speed
 
    if (mesh.position.y > this.viewport.height * 0.5 + mesh.scale.y) {
      mesh.position.y -= this.viewport.height + mesh.scale.y
    }
  })
}

Simple as that! Now we have the blocks in the background as well, finalizing the code of our demo.

I hope this tutorial was useful to you and don’t forget to comment if you have any questions!


How to Code the Travelling Particles Animation from “Volt for Drive”

In this ALL YOUR HTML coding session you’ll learn how to recreate Volt for Drive’s glowing particles that endlessly travel through city streets. The website was made by the team of Red Collar. We’ll use Three.js and some shader magic to implement the animation.

This coding session was streamed live on January 31, 2021.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Twisted Colorful Spheres with Three.js

I love blobs and I enjoy looking for interesting ways to change basic geometries with Three.js: bending a plane, twisting a box, or exploring a torus (like in this 10-min video tutorial). So this time, my love for shaping things will be the excuse to see what we can do with a sphere, transforming it using shaders. 

This tutorial will be brief, so we’ll skip the basic render/scene setup and focus on manipulating the sphere’s shape and colors, but if you want to know more about the setup check out these steps.

We’ll go with a more rounded than irregular shape, so the premise is to deform a sphere and use that same distortion to color it.

Vertex displacement

As you’ve probably been thinking, we’ll be using noise to deform the geometry by moving each vertex along the direction of its normal. Think of it as if we were pushing each vertex from the inside out with different strengths. I could elaborate more on this, but I’d rather point you to this article by The Spite, aka Jaume Sanchez Elias, who explains it so well! I bet some of you have stumbled upon it already.

So in code, it looks like this:

varying vec3 vNormal;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  vNormal = normal;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

And now we should see a blobby sphere:

See the Pen Vertex displacement by Mario (@marioecg) on CodePen.

You can experiment and change its values to see how the blob changes. I know we’re going with a more subtle and rounded distortion, but feel free to go crazy with it; there are audio visualizers out there that deform a sphere to the point that you don’t even think it’s based on a sphere.
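If you’re building this outside the Pen, a minimal material setup for these shaders might look like the following sketch. The uniform values are just reasonable starting points rather than the demo’s exact numbers, and it assumes a glslify build step since the shader uses #pragma glslify:

const material = new THREE.ShaderMaterial({
  vertexShader,   // the shader above, compiled through glslify
  fragmentShader, // we'll look at this one shortly
  uniforms: {
    uTime: { value: 0 },
    uSpeed: { value: 0.2 },
    uNoiseDensity: { value: 1.5 },
    uNoiseStrength: { value: 0.2 }
  }
});

// A sphere with plenty of vertices so the displacement looks smooth
const mesh = new THREE.Mesh(new THREE.IcosahedronGeometry(1, 64), material);

// And in the render loop:
// material.uniforms.uTime.value = clock.getElapsedTime();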

Now, this already looks interesting, but let’s add one more touch to it next.

Noitation

…is just a word I came up with to combine noise with rotation (ba dum tss), but yes! Adding some twirl to the mix makes things more compelling.

If you ever played with Play-Doh as a child, you have surely molded a big chunk of clay into a ball, grabbed it with both hands, and twisted in opposite directions until the clay tore apart. This is kind of what we want to do (except for the breaking part).

To twist the sphere, we are going to generate a sine wave from top to bottom of the sphere. Then, we are going to use this top-bottom wave as a rotation for the current position. Since the values increase/decrease from top to bottom, the rotation is going to oscillate as well, creating a twist:

varying vec3 vNormal;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  // Create a sine wave from top to bottom of the sphere
  // To increase the amount of waves, we'll use uFrequency
  // To make the waves bigger we'll use uAmplitude
  float angle = sin(uv.y * uFrequency + t) * uAmplitude;
  pos = rotateY(pos, angle);    

  vNormal = normal;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

Notice how the waves emerge from the top, it’s soothing. Some of you might find this movement therapeutic, so take some time to appreciate it and think about what we’ve learned so far…

See the Pen Noitation by Mario (@marioecg) on CodePen.

Alright! Now that you’re back let’s get on to the fragment shader.

Colorific

If you take a close look at the previous shaders, you’ll see, almost at the end, that we’ve been passing the normals to the fragment shader. But remember that we want to use the distortion to color the shape, so first let’s create a varying to pass that distortion through:

varying float vDistort;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  // Create a sine wave from top to bottom of the sphere
  // To increase the amount of waves, we'll use uFrequency
  // To make the waves bigger we'll use uAmplitude
  float angle = sin(uv.y * uFrequency + t) * uAmplitude;
  pos = rotateY(pos, angle);    

  vDistort = distortion; // Train goes to the fragment shader! Tchu tchuuu

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

And use vDistort to color the pixels instead:

varying float vDistort;

uniform float uIntensity;

void main() {
  float distort = vDistort * uIntensity;

  vec3 color = vec3(distort);

  gl_FragColor = vec4(color, 1.0);
}

We should get a kind of twisted, smokey black and white color like so:

See the Pen Colorific by Mario (@marioecg) on CodePen.

With this basis, we’ll take it a step further and use it in conjunction with one of my favorite color functions out there.

Cospalette

The cosine palette is a very useful function for creating and controlling color with code, based on four parameters: brightness, contrast, and the oscillation and phase of a cosine. I encourage you to watch Char Stiles explain this further, which is soooo good. Final s/o to Inigo Quilez, who wrote an article about this function some years ago; for those of you who haven’t stumbled upon his genius work, please do. I would love to write more about him, but I’ll save that for a poem.

Let’s use cospalette to input the distortion and see how it looks:

varying vec2 vUv;
varying float vDistort;

uniform float uIntensity;

vec3 cosPalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
  return a + b * cos(6.28318 * (c * t + d));
}   

void main() {
  float distort = vDistort * uIntensity;

  // These values are my fav combination, 
  // they remind me of Zach Lieberman's work.
  // You can find more combos in the examples from IQ:
  // https://iquilezles.org/www/articles/palettes/palettes.htm
  // Experiment with these!
  vec3 brightness = vec3(0.5, 0.5, 0.5);
  vec3 contrast = vec3(0.5, 0.5, 0.5);
  vec3 oscillation = vec3(1.0, 1.0, 1.0);
  vec3 phase = vec3(0.0, 0.1, 0.2);

  // Pass the distortion as input of cospalette
  vec3 color = cosPalette(distort, brightness, contrast, oscillation, phase);

  gl_FragColor = vec4(color, 1.0);
}
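To get a feel for the function, here’s what those values evaluate to at one point (my own arithmetic, not from the original post):

// At distort = 0.0, each channel is a + b * cos(6.28318 * d):
//   r = 0.5 + 0.5 * cos(0.0)           = 1.0
//   g = 0.5 + 0.5 * cos(6.28318 * 0.1) ≈ 0.90
//   b = 0.5 + 0.5 * cos(6.28318 * 0.2) ≈ 0.65
// A warm cream tone; as distort grows, each channel cycles at the rate
// set by oscillation, offset by phase.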

¡Liiistoooooo! See how the color palette behaves similarly to the distortion, because we’re using it as input. Swap it for vUv.x or vUv.y to see different results from the palette, or even better, come up with your own input!

See the Pen Cospalette by Mario (@marioecg) on CodePen.

And that’s it! I hope this short tutorial gave you some ideas to apply to anything you’re creating or inspired you to make something. Next time you use noise, stop and think if you can do something extra to make it more interesting and make sure to save Cospalette in your shader toolbelt.

Explore and have fun with this! And don’t forget to share it with me on Twitter. If you got any questions or suggestions, let me know.

I hope you learned something new. Till next time! 

References and Credits

Thanks to all the amazing people that put knowledge out in the world!


Recreating Frontier Development Lab’s Sun in Three.js

In this ALL YOUR HTML coding session you’ll learn how to recreate the sun from Frontier Development Lab’s Sun Challenge, using numerous shaders, lots of layers of noise and CubeCamera.

This coding session was streamed live on January 24, 2021.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Creating an Infinite Auto-Scrolling Gallery using WebGL with OGL and GLSL Shaders

Hello everyone, introducing myself a little bit first, I’m Luis Henrique Bizarro, I’m a Senior Creative Developer at Active Theory based in São Paulo, Brazil. It’s always a pleasure to me having the opportunity to collaborate with Codrops to help other developers learn new things, so I hope everyone enjoys this tutorial!

In this tutorial I’ll explain how to create an auto-scrolling infinite image gallery. The image grid is also scrollable by user interaction, making it an interesting design element to showcase work. It’s based on this great animation seen on Oneshot.finance, made by Jesper Landberg.

I’ve been using the technique of styling images first with HTML + CSS and then creating an abstraction of these elements inside WebGL using some camera and viewport calculations in multiple websites, so this is the approach we’re going to use in this tutorial.

The good thing about this implementation is that it can be reused with any WebGL library, so if you’re more familiar with Three.js or Babylon.js than OGL, you’ll be able to achieve the same results with similar code when it comes to shading and scaling the plane meshes.

So let’s get into it!

Implementing our HTML markup

The first step is implementing our HTML markup. We’re going to use <figure> and <img> elements, nothing special here, just the standard:

<div class="demo-1__gallery">
  <figure class="demo-1__gallery__figure">
    <img class="demo-1__gallery__image" src="images/demo-1/1.jpg">
  </figure>

  <!-- Repeating the same markup until 12.jpg. -->
</div>

Setting our CSS styles

The second step is styling our elements using CSS. One of the first things I do on a website is define the font-size of the html element, because I use rem to help with the responsive breakpoints.

This comes in handy if you’re doing creative websites that only require two or three different breakpoints, so I highly recommend starting to use it if you haven’t adopted rem yet.

I’m also using calc() with the sizes from the design. In our tutorial we’re going to use 1920 as our main design width, scaling the font-size with the screen width (100vw). This results in 10px on a 1920px-wide screen, for example:

html {
  font-size: calc(100vw / 1920 * 10);
}
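To see why this works, here’s the same formula traced on a different screen width (my numbers, not from the demo):

/* On a 1440px-wide screen:
   font-size = 1440 / 1920 * 10 = 7.5px
   so an element sized 40rem measures 40 * 7.5 = 300px,
   scaling the 1920px design down proportionally. */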

Now let’s style our grid of images. We want to freely place our images across the screen using absolute positioning, so we’re just going to set the height, width and left/top styles across all our demo-1 classes:

.demo-1__gallery {
  height: 295rem;
  position: relative;
  visibility: hidden;
}

.demo-1__gallery__figure {
  position: absolute;
 
  &:nth-child(1) {
    height: 40rem;
    width: 70rem;
  }
 
  &:nth-child(2) {
    height: 50rem;
    left: 85rem;
    top: 30rem;
    width: 40rem;
  }
 
  &:nth-child(3) {
    height: 50rem;
    left: 15rem;
    top: 60rem;
    width: 60rem;
  }
 
  &:nth-child(4) {
    height: 30rem;
    right: 0;
    top: 10rem;
    width: 50rem;
  }
 
  &:nth-child(5) {
    height: 60rem;
    right: 15rem;
    top: 55rem;
    width: 40rem;
  }
 
  &:nth-child(6) {
    height: 75rem;
    left: 5rem;
    top: 120rem;
    width: 57.5rem;
  }
 
  &:nth-child(7) {
    height: 70rem;
    right: 0;
    top: 130rem;
    width: 50rem;
  }
 
  &:nth-child(8) {
    height: 50rem;
    left: 85rem;
    top: 95rem;
    width: 40rem;
  }
 
  &:nth-child(9) {
    height: 65rem;
    left: 75rem;
    top: 155rem;
    width: 50rem;
  }
 
  &:nth-child(10) {
    height: 43rem;
    right: 0;
    top: 215rem;
    width: 30rem;
  }
 
  &:nth-child(11) {
    height: 50rem;
    left: 70rem;
    top: 235rem;
    width: 80rem;
  }
 
  &:nth-child(12) {
    left: 0;
    top: 210rem;
    height: 70rem;
    width: 50rem;
  }
}
 
.demo-1__gallery__image {
  height: 100%;
  left: 0;
  object-fit: cover;
  position: absolute;
  top: 0;
  width: 100%;
}

Note that we’re hiding our HTML with visibility: hidden, since users will see these images rendered inside the <canvas> element instead. Below you can find a screenshot of what the result will look like.

Creating our OGL 3D environment

Now it’s time to get started with the WebGL implementation using OGL. First let’s create an App class that is going to be the entry point of our demo and inside of it, let’s also create the initial methods: createRenderer, createCamera, createScene, onResize and our requestAnimationFrame loop with update.

import { Renderer, Camera, Transform } from 'ogl'

class App {
  constructor () {
    this.createRenderer()
    this.createCamera()
    this.createScene()
 
    this.onResize()
 
    this.update()
 
    this.addEventListeners()
  }
 
  createRenderer () {
    this.renderer = new Renderer({
      alpha: true
    })
 
    this.gl = this.renderer.gl
 
    document.body.appendChild(this.gl.canvas)
  }
 
  createCamera () {
    this.camera = new Camera(this.gl)
    this.camera.fov = 45
    this.camera.position.z = 5
  }
 
  createScene () {
    this.scene = new Transform()
  }
 
  /**
   * Wheel.
   */
  onWheel (event) {
 
  }
 
  /**
   * Resize.
   */
  onResize () {
    this.screen = {
      height: window.innerHeight,
      width: window.innerWidth
    }
 
    this.renderer.setSize(this.screen.width, this.screen.height)
 
    this.camera.perspective({
      aspect: this.gl.canvas.width / this.gl.canvas.height
    })
 
    const fov = this.camera.fov * (Math.PI / 180)
    const height = 2 * Math.tan(fov / 2) * this.camera.position.z
    const width = height * this.camera.aspect
 
    this.viewport = {
      height,
      width
    }
  }
 
  /**
   * Update.
   */
  update () {
    this.renderer.render({
      scene: this.scene,
      camera: this.camera
    })
 
    window.requestAnimationFrame(this.update.bind(this))
  }
 
  /**
   * Listeners.
   */
  addEventListeners () {
    window.addEventListener('resize', this.onResize.bind(this))
 
    window.addEventListener('mousewheel', this.onWheel.bind(this))
    window.addEventListener('wheel', this.onWheel.bind(this))
  }
}

new App()

Explaining some parts of our App.js file

In our createRenderer method, we’re initializing one renderer with alpha enabled, storing our GL context (this.renderer.gl) reference in the this.gl variable and appending our <canvas> element to our document.body.

In our createCamera method, we’re just creating a new Camera and setting some of its attributes: fov and its z position.

In our createScene method, we’re using the Transform class, that is the representation of a new scene that is going to contain all our planes that represent our images in the WebGL environment.

The onResize method is the most important part of our initial setup. It’s responsible for three different things:

  1. Making sure we’re always resizing the <canvas> element with the correct viewport sizes.
  2. Updating our this.camera perspective using the viewport’s width-to-height aspect ratio.
  3. Storing in the this.viewport variable the values that help us transform pixel measurements into 3D environment sizes, using the fov from the camera.

The approach of using the camera.fov to transform pixels into 3D environment sizes is used very often in WebGL implementations. Basically, it makes sure that if we do something like this.mesh.scale.x = this.viewport.width, our mesh will fit the entire screen width, behaving like width: 100% but in 3D space.
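Here’s the math traced with the values from createCamera (fov 45, z position 5):

// fov in radians: 45 * (Math.PI / 180) ≈ 0.7854
// height = 2 * Math.tan(0.7854 / 2) * 5 ≈ 4.14 world units
// width  = height * aspect
// So a plane with scale.x = viewport.width spans the whole screen width.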

And finally in our update, we’re setting our requestAnimationFrame loop and making sure we keep rendering our scene.

Create our reusable geometry instance

It’s a good practice to keep memory usage low by always reusing the same geometry reference no matter what WebGL library you’re using. To represent all our images, we’re going to use a Plane geometry, so let’s create a new method and store this new geometry inside the this.planeGeometry variable.

import { Renderer, Camera, Transform, Plane } from 'ogl'
 
createGeometry () {
  this.planeGeometry = new Plane(this.gl)
}

Select all images and create a new class for each one

Now it’s time to use document.querySelectorAll to select all our figures and create one reusable class that is going to represent our images. (We’re going to create a single Media.js file later.)

createMedias () {
  this.mediasElements = document.querySelectorAll('.demo-1__gallery__figure')
  this.medias = Array.from(this.mediasElements).map(element => {
    let media = new Media({
      element,
      geometry: this.planeGeometry,
      gl: this.gl,
      scene: this.scene,
      screen: this.screen,
      viewport: this.viewport
    })
 
    return media
  })
}

As you can see, we’re just selecting all .demo-1__gallery__figure elements, going through them, and generating the this.medias array with new instances of Media.

Now we need to hook this array into the important pieces of our setup code.

Let’s first include all our media inside the method onResize and also call media.onResize for each one of these new instances:

if (this.medias) {
  this.medias.forEach(media => media.onResize({
    screen: this.screen,
    viewport: this.viewport
  }))
}

And inside our update method, we’re going to call media.update() as well:

if (this.medias) {
  this.medias.forEach(media => media.update())
}

Setting up our Media.js file and class

Our Media class is going to use Mesh, Program and Texture classes from OGL to create a 3D plane and attribute a texture to it, which in our case is going to be our images.

In our constructor, we’re going to store all variables that we need and that were passed in the new Media() initialization from index.js:

import { Mesh, Program, Texture } from 'ogl'
 
import fragment from 'shaders/fragment.glsl'
import vertex from 'shaders/vertex.glsl'
 
export default class {
  constructor ({ element, geometry, gl, scene, screen, viewport }) {
    this.element = element
    this.image = this.element.querySelector('img')
 
    this.geometry = geometry
    this.gl = gl
    this.scene = scene
    this.screen = screen
    this.viewport = viewport
 
    this.createMesh()
    this.createBounds()
 
    this.onResize()
  }
}

In our createMesh method, we’ll load the image texture using the this.image.src attribute, then create a new Program, which is basically a representation of the material we’re applying to our Mesh. So our method looks like this:

createMesh () {
  const image = new Image()
  const texture = new Texture(this.gl)

  image.src = this.image.src
  image.onload = _ => {
    texture.image = image
  }

  const program = new Program(this.gl, {
    fragment,
    vertex,
    uniforms: {
      tMap: { value: texture },
      uScreenSizes: { value: [0, 0] },
      uImageSizes: { value: [0, 0] }
    },
    transparent: true
  })

  this.plane = new Mesh(this.gl, {
    geometry: this.geometry,
    program
  })

  this.plane.setParent(this.scene)
}

Looks pretty simple, right? After we generate the new Mesh, we set the plane as a child of this.scene, including our mesh in the main scene.

As you’ve probably noticed, our Program receives fragment and vertex. These both represent the shaders we’re going to use on our planes. For now, we’re just using simple implementations of both.

In our vertex.glsl file we’re getting the uv and position attributes, and making sure we’re rendering our planes in the right 3D world position.

attribute vec2 uv;
attribute vec3 position;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
varying vec2 vUv;
 
void main() {
  vUv = uv;
 
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

In our fragment.glsl file, we’re receiving a tMap texture, as you can see in the tMap: { value: texture } declaration, and rendering it in our plane geometry:

precision highp float;
 
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  gl_FragColor.rgb = texture2D(tMap, vUv).rgb;
  gl_FragColor.a = 1.0;
}

The createBounds method is important to make sure we position and scale the planes according to the correct DOM element positions. It calls this.element.getBoundingClientRect() to get the right position of our plane, and then uses these values to calculate its 3D values.

createBounds () {
  this.bounds = this.element.getBoundingClientRect()

  this.updateScale()
  this.updateX()
  this.updateY()
}

updateScale () {
  this.plane.scale.x = this.viewport.width * this.bounds.width / this.screen.width
  this.plane.scale.y = this.viewport.height * this.bounds.height / this.screen.height
}

updateX (x = 0) {
  this.plane.position.x = -(this.viewport.width / 2) + (this.plane.scale.x / 2) + ((this.bounds.left - x) / this.screen.width) * this.viewport.width
}

updateY (y = 0) {
  this.plane.position.y = (this.viewport.height / 2) - (this.plane.scale.y / 2) - ((this.bounds.top - y) / this.screen.height) * this.viewport.height
}

update (y) {
  this.updateScale()
  this.updateX()
  this.updateY(y)
}

As you’ve probably noticed, the calculations for scale.x and scale.y stretch our plane to the same width and height as the <img> elements. And position.x and position.y take the element’s offset and translate our planes to the correct x and y positions in 3D.
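As a quick sanity check of the position formula (my own trace, using the updateX math above):

// An element with bounds.left = 0 maps to
//   x = -(viewport.width / 2) + (plane.scale.x / 2)
// which puts the plane's left edge exactly at the left edge of the
// 3D viewport, matching left: 0 in CSS. The y formula mirrors this,
// with the sign flipped because the y axis points up in 3D.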

And let’s not forget our onResize method, which is basically going to call createBounds again to refresh our getBoundingClientRect values and make sure we keep our 3D implementation responsive as well.

onResize (sizes) {
  if (sizes) {
    const { screen, viewport } = sizes
 
    if (screen) this.screen = screen
    if (viewport) this.viewport = viewport
  }
 
  this.createBounds()
}

This is the result we’ve got so far.

Implement cover behavior in fragment shaders

As you’ve probably noticed, our images are stretched. It happens because we need to make proper calculations in the fragment shaders in order to have a behavior like object-fit: cover; or background-size: cover; in WebGL.

The approach I like to use is passing the images’ real sizes and doing some ratio calculations inside the fragment shader, so let’s adapt our code to it. In our Program, we’re going to pass two new uniforms called uPlaneSizes and uImageSizes:

const program = new Program(this.gl, {
  fragment,
  vertex,
  uniforms: {
    tMap: { value: texture },
    uPlaneSizes: { value: [0, 0] },
    uImageSizes: { value: [0, 0] }
  },
  transparent: true
})

Now we need to update our fragment.glsl and use these values to calculate our images ratios:

precision highp float;
 
uniform vec2 uImageSizes;
uniform vec2 uPlaneSizes;
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  vec2 ratio = vec2(
    min((uPlaneSizes.x / uPlaneSizes.y) / (uImageSizes.x / uImageSizes.y), 1.0),
    min((uPlaneSizes.y / uPlaneSizes.x) / (uImageSizes.y / uImageSizes.x), 1.0)
  );
 
  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );
 
  gl_FragColor.rgb = texture2D(tMap, uv).rgb;
  gl_FragColor.a = 1.0;
}

And then we also need to update our image.onload method to pass naturalWidth and naturalHeight to uImageSizes:

image.onload = _ => {
  program.uniforms.uImageSizes.value = [image.naturalWidth, image.naturalHeight]
  texture.image = image
}

And update createBounds to set the uPlaneSizes uniform:

createBounds () {
  this.bounds = this.element.getBoundingClientRect()
 
  this.updateScale()
  this.updateX()
  this.updateY()
 
  this.plane.program.uniforms.uPlaneSizes.value = [this.plane.scale.x, this.plane.scale.y]
}

That’s it! Now we have properly scaled images.

Implementing smooth scrolling

Before we implement our infinite logic, it’s good to get scrolling working properly first. In our setup code, we included an onWheel method, which is going to be used to lerp some variables and make our scroll butter-smooth.

In our constructor from index.js, let’s create the this.scroll object with these variables:

this.scroll = {
  ease: 0.05,
  current: 0,
  target: 0,
}

Now let’s update our onWheel implementation. When working with wheel events, it’s always important to normalize them, because they behave differently across browsers. I’ve been using the normalize-wheel library to help with that:

import NormalizeWheel from 'normalize-wheel'

onWheel (event) {
  const normalized = NormalizeWheel(event)
  const speed = normalized.pixelY
 
  this.scroll.target += speed * 0.5
}

Let’s also create our lerp utility function inside the file utils/math.js:

export function lerp (p1, p2, t) {
  return p1 + (p2 - p1) * t
}
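With ease = 0.05, each frame closes 5% of the remaining distance to the target, which is what makes the motion feel smooth:

// lerp(0, 100, 0.05)    // → 5
// lerp(5, 100, 0.05)    // → 9.75
// lerp(9.75, 100, 0.05) // → 14.2625
// The value eases toward 100 without ever overshooting it.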

And now we just need to lerp from the this.scroll.current to the this.scroll.target inside the update method. And finally pass it to the media.update() methods:

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll.current))
  }
}

After that we already have a result like this.

Making our smooth scrolling infinite

The approach for infinite scrolling logic is basically repeating the same grid over and over while the user keeps scrolling the page. Since the user can scroll up or down, you also need to take into consideration which direction is being scrolled, so overall the algorithm should work this way:

  • If you’re scrolling down, your elements move up; when your first element isn’t on the screen anymore, you should move it to the end of the list.
  • If you’re scrolling up, your elements move down; when your last element isn’t on the screen anymore, you should move it to the start of the list.

To explain it in a visual way, let’s say we’re scrolling down and the red area is our viewport and the blue elements are not in the viewport anymore.

When we are in this state, we just need to move the blue elements to the end of our gallery grid, which is the entire height of our gallery: 295rem.

Let’s include the logic for it then. First, we need to create a new variable called this.scroll.last to store the last value of our scroll, this is going to be checked to give us up or down strings:

this.scroll = {
  ease: 0.05,
  current: 0,
  target: 0,
  last: 0
}

In our update method, we need to include the following validation lines and pass this.direction to our this.medias elements.

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.scroll.current > this.scroll.last) {
    this.direction = 'down'
  } else if (this.scroll.current < this.scroll.last) {
    this.direction = 'up'
  }
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll.current, this.direction))
  }
 
  this.renderer.render({
    scene: this.scene,
    camera: this.camera
  })
 
  this.scroll.last = this.scroll.current
 
  window.requestAnimationFrame(this.update.bind(this))
}

Then we need to get the total gallery height and transform it to 3D dimensions, so let’s include a querySelector of .demo-1__gallery and call the createGallery method in our index.js constructor.

createGallery () {
  this.gallery = document.querySelector('.demo-1__gallery')
}

It’s time to do the real calculations using this selector, so in our onResize method, we need to include the following lines:

this.galleryBounds = this.gallery.getBoundingClientRect()
this.galleryHeight = this.viewport.height * this.galleryBounds.height / this.screen.height

The this.galleryHeight variable now stores the 3D size of the entire grid; next we need to pass it to both the onResize and new Media() calls:

if (this.medias) {
  this.medias.forEach(media => media.onResize({
    height: this.galleryHeight,
    screen: this.screen,
    viewport: this.viewport
  }))
}
this.medias = Array.from(this.mediasElements).map(element => {
  let media = new Media({
    element,
    geometry: this.planeGeometry,
    gl: this.gl,
    height: this.galleryHeight,
    scene: this.scene,
    screen: this.screen,
    viewport: this.viewport
  })
 
  return media
})

And then inside our Media class, we need to store the height as well in the constructor and also in the onResize methods:

constructor ({ element, geometry, gl, height, scene, screen, viewport }) {
  this.height = height
}
onResize (sizes) {
  if (sizes) {
    const { height, screen, viewport } = sizes
 
    if (height) this.height = height
    if (screen) this.screen = screen
    if (viewport) this.viewport = viewport
  }
}

Now we’re going to include the logic to move our elements based on their viewport position, just like our visual representation of the red and blue rectangles.

Since the idea is to keep summing up a value based on the scroll and element position, we can achieve this by creating a new variable called this.extra, which is going to store how much we need to add to (or subtract from) our media’s position. So in our constructor, let’s include it:

constructor ({ element, geometry, gl, height, scene, screen, viewport }) {
    this.extra = 0
}

And let’s reset it when the browser is resized, to keep all values consistent so nothing breaks when users resize their viewport:

onResize (sizes) {
  this.extra = 0
}

And in our updateY method, we’re going to include it as well:

updateY (y = 0) {
  this.plane.position.y = ((this.viewport.height / 2) - (this.plane.scale.y / 2) - ((this.bounds.top - y) / this.screen.height) * this.viewport.height) - this.extra
}

Finally, the only thing left is updating the this.extra variable inside our update method, making sure we add or subtract this.height depending on the direction.

const planeOffset = this.plane.scale.y / 2
const viewportOffset = this.viewport.height / 2
 
// The plane is fully out of view once its nearest edge crosses the viewport edge
this.isBefore = this.plane.position.y + planeOffset < -viewportOffset
this.isAfter = this.plane.position.y - planeOffset > viewportOffset
 
if (direction === 'up' && this.isBefore) {
  this.extra -= this.height
 
  this.isBefore = false
  this.isAfter = false
}
 
if (direction === 'down' && this.isAfter) {
  this.extra += this.height
 
  this.isBefore = false
  this.isAfter = false
}

Since we’re working in 3D space, we’re dealing with Cartesian coordinates centered on the origin; that’s why you’ll notice we divide most things by two (e.g. this.viewport.height / 2). It’s also the reason we needed slightly different logic for the this.isBefore and this.isAfter checks.

Awesome, we’re almost done with our demo! This is how it looks now; pretty cool to have it endless, right?

Including touch events

Let’s also include touch events, so this demo can be more responsive to user interactions! In our addEventListeners method, let’s include some window.addEventListener calls:

window.addEventListener('mousedown', this.onTouchDown.bind(this))
window.addEventListener('mousemove', this.onTouchMove.bind(this))
window.addEventListener('mouseup', this.onTouchUp.bind(this))
 
window.addEventListener('touchstart', this.onTouchDown.bind(this))
window.addEventListener('touchmove', this.onTouchMove.bind(this))
window.addEventListener('touchend', this.onTouchUp.bind(this))

Then we just need to implement simple touch events calculations, including the three methods: onTouchDown, onTouchMove and onTouchUp.

onTouchDown (event) {
  this.isDown = true

  this.scroll.position = this.scroll.current
  this.start = event.touches ? event.touches[0].clientY : event.clientY
}

onTouchMove (event) {
  if (!this.isDown) return

  const y = event.touches ? event.touches[0].clientY : event.clientY
  const distance = (this.start - y) * 2

  this.scroll.target = this.scroll.position + distance
}

onTouchUp (event) {
  this.isDown = false
}

Done! Now we also have touch events support enabled for our gallery.

Implementing direction-aware auto scrolling

Let’s also implement auto scrolling to make our interaction even better. In order to achieve that we just need to create a new variable that will store our speed based on the direction the user is scrolling.

So let’s create a variable called this.speed in our index.js file:

constructor () {
  this.speed = 2
}

This variable is going to be changed by the down and up validations we have in our update loop: if the user is scrolling down, we keep the speed at 2; if the user is scrolling up, we replace it with -2. And before that, we add this.speed to the this.scroll.target variable:

update () {
  this.scroll.target += this.speed

  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)

  if (this.scroll.current > this.scroll.last) {
    this.direction = 'down'
    this.speed = 2
  } else if (this.scroll.current < this.scroll.last) {
    this.direction = 'up'
    this.speed = -2
  }
}

Implementing distortion shaders

Now let’s make everything even more interesting: it’s time to play a little bit with shaders and distort our planes while the user scrolls through the page.

First, let’s update our update method in index.js, making sure we expose both the current and last scroll values to all our medias; we’re going to do a simple calculation with them.

update () {
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll, this.direction))
  }
}

And now let’s create two new uniforms for our Program shader, uStrength and uViewportSizes, and pass them in:

const program = new Program(this.gl, {
  fragment,
  vertex,
  uniforms: {
    tMap: { value: texture },
    uPlaneSizes: { value: [0, 0] },
    uImageSizes: { value: [0, 0] },
    uViewportSizes: { value: [this.viewport.width, this.viewport.height] },
    uStrength: { value: 0 }
  },
  transparent: true
})

As you can probably tell, we also need to set uViewportSizes in our onResize method, since this.viewport changes when we resize. To keep this.viewport.width and this.viewport.height up to date, we include the following lines in onResize:

onResize (sizes) {
  if (sizes) {
    const { height, screen, viewport } = sizes
 
    if (height) this.height = height
    if (screen) this.screen = screen
    if (viewport) {
      this.viewport = viewport
 
      this.plane.program.uniforms.uViewportSizes.value = [this.viewport.width, this.viewport.height]
    }
  }
}

Remember the this.scroll update we made in index.js? Now it’s time for a small trick to generate a speed value inside Media.js:

update (y, direction) {
  this.updateY(y.current)
 
  this.plane.program.uniforms.uStrength.value = ((y.current - y.last) / this.screen.width) * 10
}

We’re basically taking the difference between the current and last values, which gives us a kind of scrolling “speed”, and dividing it by this.screen.width to keep the effect behaving consistently regardless of the screen width.
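For example (my own numbers, not from the demo):

// Scrolling 24px in one frame on a 1920px-wide screen gives
//   uStrength = (24 / 1920) * 10 = 0.125
// A 12px step on a 960px-wide screen gives the same 0.125, so the
// bend looks identical regardless of screen size.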

Finally, it’s time to play with our vertex shader. We’re going to bend our planes a little while the user scrolls through the page. So let’s update our vertex.glsl file with this new code:

#define PI 3.1415926535897932384626433832795
 
precision highp float;
precision highp int;
 
attribute vec3 position;
attribute vec2 uv;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
uniform float uStrength;
uniform vec2 uViewportSizes;
 
varying vec2 vUv;
 
void main() {
  vec4 newPosition = modelViewMatrix * vec4(position, 1.0);
 
  newPosition.z += sin(newPosition.y / uViewportSizes.y * PI + PI / 2.0) * -uStrength;
 
  vUv = uv;
 
  gl_Position = projectionMatrix * newPosition;
}

That’s it! Now we’re also bending our images creating an unique type of effect!

Explaining a little of the shader logic: the newPosition.z line takes uViewportSizes.y (the viewport height) and the current y position of the vertex, divides one by the other, and multiplies the result by the PI we defined at the very top of the shader file. Then we multiply by uStrength, the strength of the bending, which is tied to our scrolling values, making the plane bend based on how fast you scroll the demo.

That’s the final result of our demo! I hope this tutorial was useful to you and don’t forget to comment if you have any questions!

Photography used in the demos by Planete Elevene and Jayson Hinrichsen.


WebGL Video Transitions with Curtains.js

We’ve done some crazy polygon transitions, and before that some WebGL Image Transitions. But a lot of people have been asking how to translate those techniques into the video world. And I agree, one cannot underestimate the importance of video on the web. So let’s do this!

For today’s demos we’ll be using curtains.js, a really great WebGL tool for animating images and videos. It was created by Martin Laxenaire and the new version is packed with great features. Go check out the docs if you didn’t know about it yet!

Just press play, and check out this slick video transition:

What library?

Three.js is the industry standard for creating WebGL effects, but we already had a lot of demos using it. And there are a lot of other awesome WebGL libraries out there. All with a different kind of focus and purpose. So let’s dive into one of them!

Curtains.js

This library was made by Martin Laxenaire. It was just recently updated with a lot of cool stuff, and so I decided to use it for this demo.

The focus of this library is (from the homepage):

A lot of very good JavaScript libraries already handle WebGL but with most of them it’s kind of a headache to position your meshes relative to the DOM elements of your web page. Curtains.js was created with just that issue in mind.

Yes, exactly, connecting the DOM with WebGL! That’s pretty straightforward with curtains.js; one of the demos showcased is this:

So the titles are in HTML, and the videos are distorted with WebGL. Isn’t that cool?

And that’s not all! We also have post-processing, FBOs, and full control over all of it with our custom shaders!

Our demos

So, yeah, curtains.js is pretty cool and I really recommend you try it. But let’s get back to our idea of animating videos in WebGL.

Turns out it’s ridiculously easy with curtains.js. Here is the HTML we will be using:

<div id="canvas"></div>
<div class="wrapper">
    <div class="plane">
        <video src="1.mp4" data-sampler="first"></video>
        <video src="2.mp4" data-sampler="second"></video>
    </div>
</div>
<!-- and some custom shaders, because we will be using them -->
<script id="vertexShader">...</script>
<script id="fragmentShader">...</script>

But HTML has never been the exciting part of WebGL demos, so let’s look at the JavaScript you need to get the video into WebGL:

const curtains = new Curtains({
    container: "canvas",
    pixelRatio: Math.min(1.5, window.devicePixelRatio), 
});
const params = {
    vertexShaderID: "vertexShader",
    fragmentShaderID: "fragmentShader",
    uniforms: {
        transition: {
            name: "uTransition",
            type: "1f",
            value: 0,
        },
    },
};

const multiTexturesPlane = new Plane(
    curtains, 
    [...document.getElementsByClassName("plane")], // could be many Planes
    params
);

Well, that’s it! Because the focus of this library is shaders connected to the DOM, we don’t have any Camera or Scene concepts here, and the setup becomes pretty straightforward. That hooked me immediately, and I decided to share it with you by using it in this demo.

Of course this is not the whole code. Because it’s a video, we need some kind of user action to play it. And then we also need some animation for the transition between the videos. But that part is pretty common to all these kinds of effects. In curtains.js, events are used for that:

multiTexturesPlane
.onReady(() => {
  // navigation click events
  // play button
})
.onRender(() => {
  // updating time or anything
});

And with that kind of code you are ready to do any shader transition, like this one:

Can you guess where this transition is from?

All the magic is happening in shaders, and just to show you that images and videos are the same in WebGL, I used one of the effects from my previous demo. If you want to learn shaders, I advise you to read the amazing Book Of Shaders.
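To make that concrete, here’s a deliberately simplified fragment shader sketch of a crossfade between the two videos. It is not the demo’s actual shader: the sampler names follow the data-sampler attributes from the HTML above, uTransition is the uniform declared in params, and I’m assuming the custom vertex shader passes a vTextureCoord varying:

precision mediump float;

varying vec2 vTextureCoord;

// curtains.js creates these samplers from the data-sampler attributes
uniform sampler2D first;
uniform sampler2D second;

// 0.0 shows the first video, 1.0 the second
uniform float uTransition;

void main() {
  vec4 from = texture2D(first, vTextureCoord);
  vec4 to = texture2D(second, vTextureCoord);

  gl_FragColor = mix(from, to, uTransition);
}

Animate uTransition from 0 to 1 (for example on a navigation click, inside onReady) and you get the simplest possible video transition; the fancier ones just replace the mix with more elaborate math.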

I hope you liked this short tutorial on getting videos into WebGL. Go get the source and play with the code!

References and Credits


Recreating the “100 Days of Poetry” Effect with Shader, ScrollTrigger and CSS Grid

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions that he records twice a month plus the demo he creates during the session!

For this episode of ALL YOUR HTML, I decided to replicate the effect seen on 100 DAYS OF POETRY with the technologies I know. So I did a fragment shader for the background effect, then coded a dummy CSS grid, and connected both of them with the power of GSAP’s ScrollTrigger plugin. This is of course a really simplified version of the website, but I hope you will learn something from the process of recreating this.

This coding session was streamed live on Oct 4, 2020.

Check out the live demo.

Original website: https://100daysofpoetry.gallery/

By Keita Yamada https://twitter.com/p5_keita

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Kinetic Typography with Three.js

Kinetic Typography may sound complicated but it’s just the elegant way to say “moving text” and, more specifically, to combine motion with text to create animations.

Imagine text on top of a 3D object; now, can you see it moving along the object’s shape? Nice! That’s exactly what we’ll do in this article: we’ll learn how to move text on a mesh using Three.js and three-bmfont-text.

We’re going to skip a lot of basics, so to get the most from this article we recommend you have some basic knowledge about Three.js, GLSL shaders, and three-bmfont-text.

Basis

The main idea for all these demos is to have a texture with text, use it on a mesh and play with it inside shaders. The simplest way of doing it is to have an image with text and then use it as a texture. But it can be a pain to figure out the correct size to try to display crisp text on the mesh, and later to change whatever text is in the image.

To avoid all these issues, we can generate that texture using code! We create a Render Target (RT) where we can have a scene that has text rendered with three-bmfont-text, and then use it as the texture of a mesh. This way we have more freedom to move, change, or color text. We’ll be taking this route following the next steps:

  1. Set up a RT with the text
  2. Create a mesh and add the RT texture
  3. Change the texture inside the fragment shader

To begin, we’ll run everything after the font file and atlas are loaded and ready to be used with three-bmfont-text. We won’t be going over this since I explained it in one of my previous articles.

The structure goes like this:

init() {
  // Create geometry of packed glyphs
  loadFont(fontFile, (err, font) => {
    this.fontGeometry = createGeometry({
      font,
      text: "ENDLESS"
    });

    // Load texture containing font glyphs
    this.loader = new THREE.TextureLoader();
    this.loader.load(fontAtlas, texture => {
      this.fontMaterial = new THREE.RawShaderMaterial(
        MSDFShader({
          map: texture,
          side: THREE.DoubleSide,
          transparent: true,
          negate: false,
          color: 0xffffff
        })
      );

      // Methods are called here
    });
  });
}

Now take a deep breath, grab your tea or coffee, chill, and let’s get started.

Render Target

A Render Target is a texture you can render to. Think of it as a canvas where you can draw whatever is inside and place it wherever you want. Having this flexibility makes the texture dynamic, so we can later add, change, or remove stuff in it.

Let’s set a RT along with a camera and a scene where we’ll place the text.

createRenderTarget() {
  // Render Target setup
  this.rt = new THREE.WebGLRenderTarget(
    window.innerWidth,
    window.innerHeight
  );

  this.rtCamera = new THREE.PerspectiveCamera(45, 1, 0.1, 1000);
  this.rtCamera.position.z = 2.5;

  this.rtScene = new THREE.Scene();
  this.rtScene.background = new THREE.Color("#000000");
}

Once we have the RT scene, let’s use the font geometry and material previously created to make the text mesh.

createRenderTarget() {
  // Render Target setup
  this.rt = new THREE.WebGLRenderTarget(
    window.innerWidth,
    window.innerHeight
  );

  this.rtCamera = new THREE.PerspectiveCamera(45, 1, 0.1, 1000);
  this.rtCamera.position.z = 2.5;

  this.rtScene = new THREE.Scene();
  this.rtScene.background = new THREE.Color("#000000");

  // Create text with font geometry and material
  this.text = new THREE.Mesh(this.fontGeometry, this.fontMaterial);

  // Adjust text dimensions
  this.text.position.set(-0.965, -0.275, 0);
  this.text.rotation.set(Math.PI, 0, 0);
  this.text.scale.set(0.008, 0.02, 1);

  // Add text to RT scene
  this.rtScene.add(this.text);
 
  this.scene.add(this.text); // Add to main scene
}

Note that for now, we added the text to the main scene to render it on the screen.

Cool! Let’s make it more interesting and “paste” the scene over a shape next.

Mesh and render texture

For simplicity, we’ll first use a BoxGeometry as the shape, together with a ShaderMaterial to pass our custom shaders plus the time and render-texture uniforms.

createMesh() {
  this.geometry = new THREE.BoxGeometry(1, 1, 1);

  this.material = new THREE.ShaderMaterial({
    vertexShader,
    fragmentShader,
    uniforms: {
      uTime: { value: 0 },
      uTexture: { value: this.rt.texture }
    }
  });

  this.mesh = new THREE.Mesh(this.geometry, this.material);

  this.scene.add(this.mesh);
}

The vertex shader won’t be doing anything interesting this time; we’ll skip it and focus on the fragment instead, which is showing the colors of the RT texture. It’s inverted for now (1. - texture) to stand out from the background.

varying vec2 vUv;

uniform sampler2D uTexture;

void main() {
  vec3 texture = texture2D(uTexture, vUv).rgb;

  gl_FragColor = vec4(1. - texture, 1.);
}

Normally, we would just render the main scene directly, but with a RT we have to first render to it before rendering to the screen.

render() {
  ...

  // Draw Render Target
  this.renderer.setRenderTarget(this.rt);
  this.renderer.render(this.rtScene, this.rtCamera);
  this.renderer.setRenderTarget(null);
  this.renderer.render(this.scene, this.camera);
}

And now a box should appear on the screen where each face has the text on it:

Looks alright so far, but what if we could cover more text around the shape?

Repeating the texture

GLSL’s built-in fract function comes in handy for repetition. We’ll multiply the texture coordinates by a scalar and take the fractional part so they wrap between 0 and 1.

varying vec2 vUv;

uniform sampler2D uTexture;

void main() {
  vec2 repeat = vec2(2., 6.); // 2 columns, 6 rows
  vec2 uv = fract(vUv * repeat);

  vec3 texture = texture2D(uTexture, uv).rgb;
  texture *= vec3(uv.x, uv.y, 1.);

  gl_FragColor = vec4(texture, 1.);
}

Notice that we also multiply the texture by the uv components to visualize the modified texture coordinates.

We’re getting there, right? Now the text should flow along the object’s shape; here’s where time comes in. We need to add it to the texture coordinates, in this case to the x component, so the text moves horizontally.

varying vec2 vUv;

uniform sampler2D uTexture;
uniform float uTime;

void main() {
  float time = uTime * 0.75;
  vec2 repeat = vec2(2., 6.);
  vec2 uv = fract(vUv * repeat + vec2(-time, 0.));

  vec3 texture = texture2D(uTexture, uv).rgb;

  gl_FragColor = vec4(texture, 1.);
}

And for a sweet touch, let’s blend the color with the background.
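
The blend itself isn’t shown here; one simple option, assuming a hypothetical uBackground uniform holding the page’s background color, is to mix it in based on the texture value (the RT text is white on black):

// uBackground is a hypothetical uniform set to the page's background color
vec3 color = mix(uBackground, texture, texture.r);

gl_FragColor = vec4(color, 1.);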

This is basically the whole process: RT texture, repetition, and motion. Now that we’ve stared at the mesh for so long, a BoxGeometry gets kind of boring, doesn’t it? Let’s change it in the final bonus chapter.

Changing the geometry

As a kid, I loved playing with and twisting those tangle toys; perhaps that’s why I find satisfaction in knots and twisted shapes. Let this be an excuse to work with a torus knot geometry.

For the sake of rendering smooth text we’ll exaggerate the amount of tubular segments the knot has.

createMesh() {
  this.geometry = new THREE.TorusKnotGeometry(9, 3, 768, 3, 4, 3);

  ...
}

Inside the fragment shader, we can repeat as many columns as we like, but we have to keep the number of rows equal to the number of radial segments, which is 3.

varying vec2 vUv;

uniform sampler2D uTexture;
uniform float uTime;

void main() {
  vec2 repeat = vec2(12., 3.); // 12 columns, 3 rows
  vec2 uv = fract(vUv * repeat);

  vec3 texture = texture2D(uTexture, uv).rgb;
  texture *= vec3(uv.x, uv.y, 1.);

  gl_FragColor = vec4(texture, 1.);
}

And here’s our tangled torus knot:

Before adding time to the texture coordinates, I think we can make a fake shadow to give a better sense of depth. For that we’ll need to pass the position coordinates from the vertex shader using a varying.

varying vec2 vUv;
varying vec3 vPos;

void main() {
  vUv = uv;
  vPos = position;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.);
}

We can now use the z-coordinate and clamp it between 0 and 1, so the regions of the mesh farther from the screen get darker (towards 0), and the closest get lighter (towards 1).

varying vec3 vPos;

void main() {
  float shadow = clamp(vPos.z / 5., 0., 1.);

  gl_FragColor = vec4(vec3(shadow), 1.);
}

See? It sort of looks like white bone:

Now the final step! Multiply the shadow to blend it with the texture, and add time again.

varying vec2 vUv;
varying vec3 vPos;

uniform sampler2D uTexture;
uniform float uTime;

void main() {
  float time = uTime * 0.5;
  vec2 repeat = -vec2(12., 3.);
  vec2 uv = fract(vUv * repeat - vec2(time, 0.));

  vec3 texture = texture2D(uTexture, uv).rgb;

  float shadow = clamp(vPos.z / 5., 0., 1.);

  gl_FragColor = vec4(texture * shadow, 1.);
}

Fresh out of the oven! Look at this sexy torus coming out of the darkness. Internet high five!


We’ve just scratched the surface making repeated tiles of text, but there are many ways to add fun to the mixture. Could you use trigonometry or noise functions? Play with color? Text position? Or even better, do something with the vertex shader. The sky’s the limit! I encourage you to explore this and have fun with it.

Oh! And don’t forget to share it with me on Twitter. If you have any questions or suggestions, let me know.

Hope you learned something new. Till next time!

The post Kinetic Typography with Three.js appeared first on Codrops.

Create a Wave Motion Effect on an Image with Three.js

Waves! Because who does not enjoy the visual comfort an oscillating motion has on the human eye? Well, I do, and for this tutorial I would like to explain how to make waves on a 3D plane with Three.js using simplex noise.

To keep things short, we’ll just focus on the plane effect and not on the smooth scrolling or setup required to synchronize the DOM with WebGL. For these, check out Jesper Landberg’s codepen on smooth scrolling, and Luigi De Rosa’s article on how EPIC mixed WebGL and the DOM for WeCargo.

We’ll assume you have some basic understanding of Three.js and vertex and fragment shaders, so we’ll skip things like how to set up a scene.

Now let’s begin.

Creating the mesh

First, we’ll create a mesh using a PlaneGeometry and a ShaderMaterial.

Since we want to add a texture later, we’ll give our PlaneGeometry the same proportions as our 400×600 image: 0.4 for the width and 0.6 for the height. Having the same proportions makes it so the texture doesn’t stretch.

this.geometry = new THREE.PlaneGeometry(0.4, 0.6, 16, 16);
this.material = new THREE.ShaderMaterial({
  vertexShader,
  fragmentShader,
  uniforms: {
    uTime: { value: 0.0 }
  },
  wireframe: true,
});
this.mesh = new THREE.Mesh(this.geometry, this.material);
this.scene.add(this.mesh);

As a performance side note, if you are rendering more than one element to a scene, make sure to reuse the geometry object.
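
For instance, a single geometry instance can back several meshes; a quick sketch (the materials here are just placeholders):

const geometry = new THREE.PlaneGeometry(0.4, 0.6, 16, 16);

// Two materials, one shared geometry: no duplicate GPU buffers
const materialA = new THREE.MeshBasicMaterial({ color: 0xff0000 });
const materialB = new THREE.MeshBasicMaterial({ color: 0x0000ff });

this.scene.add(new THREE.Mesh(geometry, materialA));
this.scene.add(new THREE.Mesh(geometry, materialB));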

Making some noise

We’re going to displace the plane using 3D simplex noise.

Our simplex noise function, snoise, outputs values between -1 and 1 based on the position values you give it.

Simplex noise is amazing because it’s random and seamless! So it’s really useful to create organic patterns. Which is exactly what we want for our wave effect.
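
snoise isn’t built into GLSL; a common way to pull an implementation in is via glslify and the glsl-noise package (assuming a glslify build step, as used elsewhere in this collection):

#pragma glslify: snoise = require(glsl-noise/simplex/3d)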

Now, inside the vertex shader, we’ll use the plane’s vertex positions to sample the simplex noise, adjusting its frequency and amplitude to control the wave distortion. We scale only the x component by the frequency so the waves run horizontally across the plane, and we add a time uniform so they move. For the displacement to happen forwards and backwards, we add the resulting value to the z component.

varying vec2 vUv;
uniform float uTime;

void main() {
  vUv = uv;

  vec3 pos = position;
  float noiseFreq = 3.5;
  float noiseAmp = 0.15; 
  vec3 noisePos = vec3(pos.x * noiseFreq + uTime, pos.y, pos.z);
  pos.z += snoise(noisePos) * noiseAmp;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.);
}

And that’s really it for the motion! Even though it’s moving, the effect doesn’t quite sell because the inside looks flat.

To give it that extra flare, we need to modify the colors according to the displacement. But before we do that, let’s add a pretty image!

Adding the texture

First, let’s create a new uniform that holds the texture for the ShaderMaterial.

this.material = new THREE.ShaderMaterial({
  vertexShader,
  fragmentShader,
  uniforms: {
    uTime: { value: 0.0 },
    uTexture: { value: new THREE.TextureLoader().load(img) },
  },
});

Second, let’s sample the texture in the fragment shader.

varying vec2 vUv;
uniform sampler2D uTexture;

void main() {
  vec3 texture = texture2D(uTexture, vUv).rgb;
  gl_FragColor = vec4(texture, 1.);
}

It looks solid already. Cool! Let’s go a step further and make it even more interesting by displacing the texture coordinates.

Displacing the texture coordinates

We can use the noise value from the vertex shader, pass it to the fragment shader, and add it to the texture coordinates so the image follows the motion of the displacement.

To share the distortion data from the vertex shader with the fragment shader we’ll use a varying, and for now let’s disable the vertex movement so we can better appreciate the texture displacement.

varying vec2 vUv;
varying float vWave;
uniform float uTime;

void main() {
  vUv = uv;

  vec3 pos = position;
  float noiseFreq = 3.5;
  float noiseAmp = 0.15; 
  vec3 noisePos = vec3(pos.x * noiseFreq + uTime, pos.y, pos.z);
  pos.z += snoise(noisePos) * noiseAmp;
  vWave = pos.z; // Off it goes!

  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.);
}

A new variable inside the fragment shader will be needed to scale down the distortion so it’s not too sharp.

varying vec2 vUv;
varying float vWave;
uniform sampler2D uTexture;

void main() {
  float wave = vWave * 0.2;
  vec3 texture = texture2D(uTexture, vUv + wave).rgb;
  gl_FragColor = vec4(texture, 1.);
}

See how the texture moves in the same way, but in two dimensions? What if, instead of moving all the texture coordinates, we only moved one color channel to give it a more futuristic vibe?

Splitting the color channel

To move one color channel, we’ll need to separate each color vector of the texture, add the wave value to one of their coordinates, and combine them back together. I’ll try it on the blue channel, but you can experiment with any of them.

varying vec2 vUv;
varying float vWave;
uniform sampler2D uTexture;

void main() {
  float wave = vWave * 0.2;
  // Split each texture color vector
  float r = texture2D(uTexture, vUv).r;
  float g = texture2D(uTexture, vUv).g;
  float b = texture2D(uTexture, vUv + wave).b;
  // Put them back together
  vec3 texture = vec3(r, g, b);
  gl_FragColor = vec4(texture, 1.);
}

Now the movement is happening just on the blue channel of the texture. Spooky!

Finally, we can re-enable the vertex distortion to get the final result. Aaaand here it is:

I hope you made it this far! This is just one of countless possibilities you can explore with shaders and Three.js. For example, in the second demo, I took a different approach, making transitions using displacement maps — if you’re interested in that, perhaps we can leave it for another time.

Make sure to tweak the values to get different results, mess with the noise, or use another kind of noise (psst psst, Worley). Create something and have fun with it! Oh, and don’t forget to share it with me on Twitter. If you have any questions or suggestions, please let me know, too.

Hope you learned something new. Until next time!

Create a Wave Motion Effect on an Image with Three.js was written by Mario Carrillo and published on Codrops.

Audio-based Image Distortion Effects with WebGL

We’ve covered in the past how we can read data from audio using the p5.sound library, and how we can use that data to draw things on the canvas with p5.js.

Well, what if instead of drawing a sketch, we used audio to distort an image? Today we want to show you some demos that play around that idea.

We’ve created some experiments using the theme of movie trailers where the background image of the movie poster is being distorted using a sound sample. It kind of adds some drama to an otherwise static image in this case.

Here’s a short video of the beginning of one of the effects:

How it works

We analyze the sound and map ranges of frequencies to uniforms that we pass into our fragment shader. Then, depending on the effect/distortion we have, we can tweak different parameters using the audio frequencies, which constantly change over time.

In our first demo, we create a simple sine wave in our fragment shader, using the bass frequencies of the audio track to control its frequency and the mid frequencies to control its amplitude. Then we add the distortion on both axes (x & y) of our uv and add that distortion to the initial texture coordinates.

It looks like this:


  float wave = sin(uv.y * u_bass + u_time) * u_mid;
  vec2 d = vec2(wave); // could be vec2(wave, 0.0) or vec2(0.0, wave) for distortion only in 1 axis.
  vec4 image = texture2D(u_texture, uv + d);
  gl_FragColor = image;
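
On the JavaScript side, feeding those uniforms from p5.sound could look roughly like this (a sketch; distortionShader is a hypothetical p5.Shader running the fragment above, and the mapped ranges are illustrative):

const fft = new p5.FFT();

function draw() {
  fft.analyze();

  // getEnergy returns the energy (0-255) of a named frequency range
  const bass = fft.getEnergy("bass");
  const mid = fft.getEnergy("mid");

  // remap the energies into ranges that suit the distortion
  distortionShader.setUniform("u_bass", map(bass, 0, 255, 0.0, 20.0));
  distortionShader.setUniform("u_mid", map(mid, 0, 255, 0.0, 0.5));
  distortionShader.setUniform("u_time", millis() / 1000);
}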

The possibilities are endless if you want to play around with that idea; it’s just a matter of what effect you’re after. Make sure the values you map to your uniforms are within a range that can distort your visual, and you can always use some generic uniforms like u_time to put some ‘overdrive’ into your distortion.

Head over to the demos and check out the variations we’ve made.

Hope you’ll have fun with this one and be sure to share any of your own versions!

Audio-based Image Distortion Effects with WebGL was written by Yannis Yannakopoulos and published on Codrops.

How to Create Procedural Clouds Using Three.js Sprites

Today we are going to create an animated cloud using a custom shader material, extending the built-in Sprite material of Three.js.

We’ll assume that you are familiar with React (including Hooks), Three.js and React-Three-Fiber. If not, you might find this article that I wrote as a beginner’s intro to the library helpful as a quick start.

The technique that we’ll explain was used in two recent projects made at Low:

1955-Horsebit: a website for the campaign launch of Gucci’s 1955 Horsebit bag.
Let girls dream: a project to support a gender-equal future

We won’t cover the other elements, like the background, and we also won’t create the most performant cloud because it would get a bit too complex and the purpose of this article is to get you familiar with the technique while keeping it simple.

Note that the use of React is not obligatory here, but I started using React-Three-Fiber for all my demos and projects, so I’ve opted to use it here, too.

We will cover two main points in this article:

  1. How to extend a SpriteMaterial
  2. How to write the shader of the cloud

Extending the sprite material

Since the goal is not to create a volumetric cloud (a real 3D cloud), I decided to extend a Three.js SpriteMaterial. This lets us leverage the fact that a sprite always faces the camera, independently of the camera’s position or orientation. So if you move the cloud or move the camera you’ll always see it, which helps fake the missing 3D volume (check out the debug mode to get the idea).

Note: If you head to the demo and add /?debug=true to the URL it will enable the Orbit Controls which will give you some visual insight into why I decided to use the Sprite material.

There are multiple ways to extend a built-in material of Three.js, and you can find a good explanation in this article by Dusan Bosnjak.

import {ShaderMaterial, UniformsUtils, ShaderLib} from 'three'
import fragment from '~shaders/cloud.frag'
import vertex from '~shaders/cloud.vert'

/**
* We are going to take the uniforms of the Sprite material
* and we'll merge with our uniforms
*/
const myUniforms = useMemo(() => ({
   .......
}), [])

const material = useMemo(() => {
  const mat = new ShaderMaterial({
    uniforms: {...UniformsUtils.clone(ShaderLib.sprite.uniforms), ...myUniforms},
     vertexShader: vertex,
     fragmentShader: fragment,
     transparent: true,
  })
  return mat
}, [])

We need to compose our vertex shader, adding the necessary #include code snippets. If you are interested in how materials are built in Three.js you can have a look at the source code.

uniform float rotation;
uniform vec2 center;
#include <common>
#include <uv_pars_vertex>
#include <fog_pars_vertex>
#include <logdepthbuf_pars_vertex>
#include <clipping_planes_pars_vertex>

varying vec2 vUv;

void main() {
  vUv = uv;

	vec4 mvPosition = modelViewMatrix * vec4( 0.0, 0.0, 0.0, 1.0 );
	vec2 scale;
	scale.x = length( vec3( modelMatrix[ 0 ].x, modelMatrix[ 0 ].y, modelMatrix[ 0 ].z ) );
	scale.y = length( vec3( modelMatrix[ 1 ].x, modelMatrix[ 1 ].y, modelMatrix[ 1 ].z ) );

	vec2 alignedPosition = ( position.xy - ( center - vec2( 0.5 ) ) ) * scale;
	vec2 rotatedPosition;
	rotatedPosition.x = cos( rotation ) * alignedPosition.x - sin( rotation ) * alignedPosition.y;
	rotatedPosition.y = sin( rotation ) * alignedPosition.x + cos( rotation ) * alignedPosition.y;
	mvPosition.xy += rotatedPosition;

	gl_Position = projectionMatrix * mvPosition;

	#include <logdepthbuf_vertex>
	#include <clipping_planes_vertex>
	#include <fog_vertex>
}

In this way we created a custom sprite material. We could achieve the same effect in other ways for sure, but I decided to extend a built-in material because it could be useful later for adding custom logic. It’s now time to dig into the fragment shader.
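
To use it, the material can go straight onto a sprite. A sketch in react-three-fiber (the scale here is arbitrary):

<sprite material={material} scale={[5, 5, 1]} />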

Cloud’s fragment

To create the cloud we need two assets. One is the rough starting shape of the cloud, the other one is the starting point of the texture/pattern.
Keep in mind that both of these textures could be generated directly in the shader, but that costs GPU cycles. That’s fine, but if you can avoid it, it’s good practice to optimize the shader, too.

Using some images instead of creating them with code could save you some computational power.

Sliding textures

First of all, let’s create two sliding textures using the texture image above (uTxtCloudNoise), which we will use later to handle the alpha channel of the output. They help us create a “fake” noise effect when we combine them by adding and multiplying.

vec4 txtNoise1 = texture2D(uTxtCloudNoise, vec2(vUv.x + uTime * 0.0001, vUv.y - uTime * 0.00014));
vec4 txtNoise2 = texture2D(uTxtCloudNoise, vec2(vUv.x - uTime * 0.00002, vUv.y + uTime * 0.000017 + 0.2));

Noise

We now need some GLSL noise: Simplex noise and fractional Brownian motion (FBM), which allow us to morph the shape and create the vaporous border effect.

Let’s first create the Simplex and FBM noise to distort our UV.
We will use the FBM to achieve the effect for the border of the cloud, to make it like smoke, and we will use the Simplex to do the shape morphing of the cloud.

The distorted UV, now called newUv, will be used during the declaration of the txtShape:

#pragma glslify: fbm3d = require(glsl-fractal-brownian-noise/3d)
#pragma glslify: snoise3 = require(glsl-noise/simplex/3d)

// FBM (note the parentheses: remap the noise from [-1, 1] to [0, 1])
float noiseBig = (fbm3d(vec3(vUv, uTime), 4) + 1.0) * 0.5;
newUv += noiseBig * uDisplStrenght1;

// SIMPLEX
float noiseSmall = (snoise3(vec3(newUv, uTime)) + 1.0) * 0.5;
newUv += noiseSmall * uDisplStrenght2;

......
vec4 txtShape = texture2D(uTxtShape, newUv);

And this is what the noise looks like:

Mask & Alpha

To create the mask for the cloud we will use the shape texture (uTxtShape) we saw at the beginning and the result of the sliding textures we mentioned earlier.

The following output is the result of the masking only. The border and the shape effect is fine but the internal pattern/color is not:

Now we calculate the alpha from the sliding textures we made before. We’ll use a levels function, which works more or less like the Photoshop levels tool.
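
The levels implementation itself isn’t shown in this excerpt; a common GLSL version, sketched to match the four-argument call used below, looks like this:

vec3 gammaCorrect(vec3 color, float gamma) {
  return pow(color, vec3(1.0 / gamma));
}

vec3 levelRange(vec3 color, float minInput, float maxInput) {
  return min(max(color - vec3(minInput), vec3(0.0)) / (vec3(maxInput) - vec3(minInput)), vec3(1.0));
}

vec4 levels(vec4 color, float minInput, float gamma, float maxInput) {
  return vec4(gammaCorrect(levelRange(color.rgb, minInput, maxInput), gamma), color.a);
}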

Combining the distorted shape (uTxtShape) and the red channel of the sliding textures gives us the external shape and the internal “cloud pattern”, for a more realistic look and feel:

vec4 txtShape = texture2D(uTxtShape, newUv);

float alpha = levels((txtNoise1 + txtNoise2) * 0.6, 0.2, 0.4, 0.7).r;
alpha *= txtShape.r;

gl_FragColor = vec4(vec3(0.95,0.95,0.95), alpha);

Concatenating everything

It’s time now to wrap everything up to display the final output:

void main() {
  vec2 newUv = vUv;

  // Sliding textures
  vec4 txtNoise1 = texture2D(uTxtCloudNoise, vec2(vUv.x + uTime * 0.0001, vUv.y - uTime * 0.00014)); // noise txt
  vec4 txtNoise2 = texture2D(uTxtCloudNoise, vec2(vUv.x - uTime * 0.00002, vUv.y + uTime * 0.000017 + 0.2)); // noise txt

  // Calculate the FBM and distort the UV (remap noise from [-1, 1] to [0, 1])
  float noiseBig = (fbm3d(vec3(vUv * uFac1, uTime * uTimeFactor1), 4) + 1.0) * 0.5;
  newUv += noiseBig * uDisplStrenght1;

  // Calculate the Simplex noise and distort the UV
  float noiseSmall = (snoise3(vec3(newUv * uFac2, uTime * uTimeFactor2)) + 1.0) * 0.5;

  newUv += noiseSmall * uDisplStrenght2;

  // Create the shape (mask)
  vec4 txtShape = texture2D(uTxtShape, newUv);

  // Alpha
  float alpha = levels((txtNoise1 + txtNoise2) * 0.6, 0.2, 0.4, 0.7).r;
  alpha *= txtShape.r;

  gl_FragColor = vec4(vec3(0.95,0.95,0.95), alpha);
}

Final thoughts

Keep in mind that this is not the most performant way to create a cloud, but it’s a simple one. Using noise functions is expensive, but for the sake of this tutorial it should suffice.

If you have any thoughts, improvements or doubts, please feel free to write to me on Twitter; I’ll be happy to help.

How to Create Procedural Clouds Using Three.js Sprites was written by Robert Borghesi and published on Codrops.

Case Study: Portfolio of Bruno Arizio

Introduction

Bruno Arizio, Designer — @brunoarizio

Since I first became aware of the energy in this community, I felt the urge to be more engaged in this ‘digital avant-garde landscape’ that is being cultivated by the amazing people behind Codrops, Awwwards, CSSDA, The FWA, Webby Awards, etc. That energy has propelled me to set up this new portfolio, which acted as a way of putting my feet into the water and getting used to the temperature.

I see this community being responsible for pushing the limits of what is possible on the web, fostering the right discussions and empowering the role of creative developers and creative designers across the world.

With this in mind, it’s difficult not to think of the great art movements of the past and their role in mediating change. You can easily draw a parallel between this digital community and the Impressionist artists in the last century, or the Bauhaus movement leading our society into modernism a few decades ago. What these periods have in common is that they pushed the boundaries of what is possible, of what is the new standard, through relentless experimentation. The result of that is the world we live in, the products we interact with, and the buildings we inhabit.

The websites that are awarded today are awarded because they innovate in some aspect, and those innovations eventually become a new standard. We can see that in the apps used by millions of people, in consumer websites, and so on. That is the impact that we make.

I’m not saying that a new interaction featured on a portfolio launched last week is going to be in the hands of millions of people across the globe the following week, but constantly pushing these interactions to their limits will scale them and eventually get them adopted as new standards. This is the kind of responsibility that is in our hands.

Open Source

We decided to be transparent and take a step forward in making this entire project open source so people can learn how to make the things we created. We are both interested in supporting the community, so feel free to ask us questions on Twitter or Instagram (@brunoarizio and @lhbzr), we welcome you to do so!

The repository is available on GitHub.

Design Process

With the portfolio, we took a meticulous approach to motion and collaborated to devise deliberate interactions that have a ‘realness’ to them, especially on the main page.

The mix of the bending animation with the distortion effect was central to making the website ‘tactile’. It is meant to feel good when you shuffle through the projects, and since it was published we received a lot of messages from people saying how addictive the navigation is.

A lot of my new ideas come from experimenting with shaders and filters in After Effects, and just after I find what I’m looking for — the ‘soul’ of the project — I start to add the ‘grid layer’ and begin to structure the typography and other elements.

In this project, before jumping to Sketch, I started working with a variety of motion concepts in AE, and that’s when the version with the convection bending came in and we decided to take it forward. So we can pretty much say that the project was born from motion, not from a layout in this matter. After the main idea was solid enough, I took it to Sketch, designed a simple grid and applied the typography.

Collaboration

Working in collaboration with Luis was so productive. This is the second (of many to come) projects we’ve worked on together, and I can safely say that we had a strong connection from start to finish, which was absolutely important for the final result. It wasn’t a case in which the designer creates the layouts and hands them over to a developer, period. This was a nuanced relationship of constant feedback. We collaborated daily from idea to production, and it was fantastic how dev and design shared this keen eye for perfectionism.

From layout to code we were constantly fine-tuning every aspect: from the cursor kinetics to making overhaul layout changes and finding the right tone for the easing curves and the noise mapping on the main page.

When you design a portfolio, especially your own, it feels daunting since you are free to do whatever you want. But the consequence is that this will dictate how people will see your work, and what work you will be doing shortly after. So making the right decisions deliberately and predicting its impact is mandatory for success.

Technical Breakdown

Luis Henrique Bizarro, Creative Developer — @lhbzr

Motion Reference

This was the video of the motion reference that Bruno shared with me when he introduced his ideas for his portfolio. I think one of the most important things when starting a project like this, with the idea of implementing a lot of different animations, is to create a little prototype in After Effects to guide the developer toward similar results in code.

The Tech Stack

The portfolio was developed using:

That’s my favorite stack to work with right now; it gives me a lot of freedom to focus on animations and interactions instead of having to follow guidelines of a specific framework.

In this particular project, most of the code was written from scratch using ECMAScript 2015+ features like Classes, Modules, and Promises to handle the route transitions and other things in the application.

In this case study, we’ll be focusing on the WebGL implementation, since it’s the core animation of the website and the most interesting thing to talk about.

1. How to measure things in Three.js

This specific subject was already covered in other articles of Codrops, but in case you’ve never heard of it before, when you’re working with Three.js, you’ll need to make some calculations in order to have values that represent the correct sizes of the viewport of your browser.

In my last projects, I’ve been using this Gist by Florian Morel, which is basically a calculation that uses your camera field-of-view to return the values for the height and width of the Three.js environment.

// createCamera()
const fov = THREEMath.degToRad(this.camera.fov);
const height = 2 * Math.tan(fov / 2) * this.camera.position.z;
const width = height * this.camera.aspect;
        
this.environment = {
  height,
  width
};

// createPlane()
const { height, width } = this.environment;

this.plane = new PlaneBufferGeometry(width * 0.75, height * 0.75, 100, 50);

I usually store these two variables in the wrapper class of my applications; this way we just need to pass them to the constructors of the other elements that will use them.

In the embed below, you have a very simple implementation of a PlaneBufferGeometry that covers 75% of the height and width of your viewport using this solution.

2. Uploading textures to the GPU and using them in Three.js

To avoid textures being processed at runtime while the user is navigating through the website, I consider it very good practice to upload all images to the GPU as soon as they’re ready. On Bruno’s portfolio, this process happens during the preloading of the website. (Kudos to Fabio Azevedo for introducing me to this concept a long time ago in previous projects.)

Two other good additions, in case you don’t want Three.js to resize and process the images you’re going to use as textures, are disabling mipmaps and changing how the texture is sampled, via the generateMipmaps and minFilter attributes.

this.loader = new TextureLoader();

this.loader.load(image, texture => {
  texture.generateMipmaps = false;
  texture.minFilter = LinearFilter;
  texture.needsUpdate = true;

  this.renderer.initTexture(texture, 0);
});

The .initTexture() method was introduced in recent versions of Three.js in the WebGLRenderer class, so make sure to update to the latest version of the library to be able to use this feature.

But my texture is looking stretched! The default behavior of the map attribute in Three.js’s MeshBasicMaterial is to make your image fit into the PlaneBufferGeometry. This happens because of the way the library handles 3D models. But in order to keep the original aspect ratio of your image, you’ll need to do some calculations as well.

There’s a lot of different solutions out there that don’t use GLSL shaders, but in our case we’ll also need them to implement our animations. So let’s implement the aspect ratio calculations in our fragment shader that will be created for the ShaderMaterial class.

So, all you need to do is pass your Texture to your ShaderMaterial via the uniforms attribute. In the fragment shader, you’ll be able to use all variables passed via the uniforms attribute.

In Three.js Uniform documentation you have a good reference of what happens internally when you pass the values. For example, if you pass a Vector2, you’ll be able to use a vec2 inside your shaders.

We need two vec2 variables to do the aspect ratio calculations: the image resolution and the resolution of the renderer. After passing them to the fragment shader, we just need to implement our calculations.

this.material = new ShaderMaterial({
  uniforms: {
    image: {
      value: texture
    },
    imageResolution: {
      value: new Vector2(texture.image.width, texture.image.height)
    },
    resolution: {
      type: "v2",
      value: new Vector2(window.innerWidth, window.innerHeight)
    }
  },
  fragmentShader: `
    uniform sampler2D image;
    uniform vec2 imageResolution;
    uniform vec2 resolution;

    varying vec2 vUv;

    void main() {
        vec2 ratio = vec2(
          min((resolution.x / resolution.y) / (imageResolution.x / imageResolution.y), 1.0),
          min((resolution.y / resolution.x) / (imageResolution.y / imageResolution.x), 1.0)
        );

        vec2 uv = vec2(
          vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
          vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
        );

        gl_FragColor = vec4(texture2D(image, uv).xyz, 1.0);
    }
  `,
  vertexShader: `
    varying vec2 vUv;

    void main() {
        vUv = uv;

        vec3 newPosition = position;

        gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
    }
  `
});

In this snippet we’re using template strings to represent the code of our shaders only to keep it simple when using CodeSandbox, but I highly recommend using glslify to split your shaders into multiple files to keep your code more organized in a more robust development environment.

We’re all good now with the images! Our images are preserving their original aspect ratio and we also have control over how much space they’ll use in our viewport.

3. How to implement infinite scrolling

Infinite scrolling can be very challenging, but in a Three.js environment the implementation is smoother than it’d be with CSS transforms and HTML elements, because you don’t need to worry about storing the original positions of the elements and calculating their distances to avoid browser repaints.

Overall, a simple logic for the infinite scrolling should follow these two basic rules:

  • If you’re scrolling down, your elements move up — when your first element isn’t on the screen anymore, you should move it to the end of the list.
  • If you’re scrolling up, your elements move down — when your last element isn’t on the screen anymore, you should move it to the start of the list.

Sounds reasonable right? So, first we need to detect in which direction the user is scrolling.

this.position.current += (this.scroll.values.target - this.position.current) * 0.1;

if (this.position.current < this.position.previous) {
  this.direction = "up";
} else if (this.position.current > this.position.previous) {
  this.direction = "down";
} else {
  this.direction = "none";
}

this.position.previous = this.position.current;

The variable this.scroll.values.target defines the scroll position the user wants to reach, while this.position.current represents the current position of the scroll; it eases smoothly toward the target via the * 0.1 multiplication.

After detecting the direction the user is scrolling towards, we store the current position in the this.position.previous variable; this way we’ll also have the right direction value inside the requestAnimationFrame.

Now we need to implement the checking method to make our items have the expected behavior based on the direction of the scroll and their position. In order to do so, you need to implement a method like this one below:

check() {
  const { height } = this.environment;
  const heightTotal = height * this.covers.length;

  if (this.position.current < this.position.previous) {
    this.direction = "up";
  } else if (this.position.current > this.position.previous) {
    this.direction = "down";
  } else {
    this.direction = "none";
  }

  this.projects.forEach(child => {
    child.isAbove = child.position.y > height;
    child.isBelow = child.position.y < -height;

    if (this.direction === "down" && child.isAbove) {
      const position = child.location - heightTotal;

      child.isAbove = false;
      child.isBelow = true;

      child.location = position;
    }

    if (this.direction === "up" && child.isBelow) {
      const position = child.location + heightTotal;

      child.isAbove = true;
      child.isBelow = false;

      child.location = position;
    }

    child.update(this.position.current);
  });
}

Now our logic for the infinite scroll is finally finished! Drag and drop the embed below to see it working.

You can also view the fullscreen demo here.

4. Integrate animations with infinite scrolling

The website motion reference has four different animations happening while the user is scrolling:

  • Movement on the z-axis: the image moves from the back to the front.
  • Bending on the z-axis: the image bends a little bit depending on its position.
  • Image scaling: the image scales slightly when moving out of the screen.
  • Image distortion: the image is distorted when we start scrolling.

My approach to implementing the animations was to divide the element’s position by the viewport height, giving me a percentage between -1 and 1. This way I’m able to map this percentage to other values inside the ShaderMaterial instance.

  • -1 represents the bottom of the viewport.
  • 0 represents the middle of the viewport.
  • 1 represents the top of the viewport.

const percent = this.position.y / this.environment.height;
const percentAbsolute = Math.abs(percent);

The implementation of the z-axis animation is pretty simple, because it can be done directly with JavaScript using this.position.z from Mesh, so the code for this animation looks like this:

this.position.z = map(percentAbsolute, 0, 1, 0, -50);
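
map isn’t a JavaScript built-in; it’s the usual range-remapping helper. A minimal sketch:

// Re-maps value from the range [inMin, inMax] to [outMin, outMax]
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}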

The implementation of the bending animation is slightly more complex: we need to use the vertex shader to bend our PlaneBufferGeometry. I chose distortion as the value that controls this animation inside the shaders. We also pass two other parameters, distortionX and distortionY, which control the amount of distortion on the x and y axes.

this.material.uniforms.distortion.value = map(percentAbsolute, 0, 1, 0, 5);

uniform float distortion;
uniform float distortionX;
uniform float distortionY;

varying vec2 vUv;

void main() {
  vUv = uv;

  vec3 newPosition = position;

  // 50 is the number of x-axis vertices we have in our PlaneBufferGeometry.
  float distanceX = length(position.x) / 50.0;
  float distanceY = length(position.y) / 50.0;

  float distanceXPow = pow(distortionX, distanceX);
  float distanceYPow = pow(distortionY, distanceY);

  newPosition.z -= distortion * max(distanceXPow + distanceYPow, 2.2);

  gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
}

The implementation of image scaling was made with a single function inside the fragment shader:

this.material.uniforms.scale.value = map(percent, 0, 1, 0, 0.5);

vec2 zoom(vec2 uv, float amount) {
  return 0.5 + ((uv - 0.5) * (1.0 - amount));
}

void main() {
  // ...

  uv = zoom(uv, scale);

  // ...
}

The implementation of distortion was made with glsl-noise and a simple calculation displacing the texture on the x and y axis based on user gestures:

onTouchStart() {
  TweenMax.to(this.material.uniforms.displacementY, 0.4, {
    value: 0.1
  });
}

onTouchEnd() {
  TweenMax.killTweensOf(this.material.uniforms.displacementY);

  TweenMax.to(this.material.uniforms.displacementY, 0.4, {
    value: 0
  });
}

#pragma glslify: cnoise = require(glsl-noise/classic/3d)

void main() {
  // ...

  float noise = cnoise(vec3(uv, cos(time * 0.1)) * 10.0 + time * 0.5);

  uv.x += noise * displacementX;
  uv.y += noise * displacementY;

  // ...
}

And here’s the final code of the fragment shader, merging all three animations together.

#pragma glslify: cnoise = require(glsl-noise/classic/3d)

uniform float alpha;
uniform float displacementX;
uniform float displacementY;
uniform sampler2D image;
uniform vec2 imageResolution;
uniform vec2 resolution;
uniform float scale;
uniform float time;

varying vec2 vUv;

vec2 zoom(vec2 uv, float amount) {
  return 0.5 + ((uv - 0.5) * (1.0 - amount));
}

void main() {
  vec2 ratio = vec2(
    min((resolution.x / resolution.y) / (imageResolution.x / imageResolution.y), 1.0),
    min((resolution.y / resolution.x) / (imageResolution.y / imageResolution.x), 1.0)
  );

  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );

  float noise = cnoise(vec3(uv, cos(time * 0.1)) * 10.0 + time * 0.5);

  uv.x += noise * displacementX;
  uv.y += noise * displacementY;

  uv = zoom(uv, scale);

  gl_FragColor = vec4(texture2D(image, uv).xyz, alpha);
}

You can also view the fullscreen demo here.

Photos used in examples of the article were taken by Willian Justen and Azamat Zhanisov.

Conclusion

We hope you liked the Case Study we’ve written together, if you have any questions, feel free to ask us on Twitter or Instagram (@brunoarizio and @lhbzr), we would be very happy to receive your feedback.

Case Study: Portfolio of Bruno Arizio was written by Bruno Arizio and published on Codrops.

Scroll, Refraction and Shader Effects in Three.js and React

In this tutorial I will show you how to take a couple of established techniques (like tying things to the scroll-offset), and cast them into re-usable components. Composition will be our primary focus.

In this tutorial we will:

  • build a declarative scroll rig
  • mix HTML and canvas
  • handle async assets and loading screens via React.Suspense
  • add shader effects and tie them to scroll
  • and as a bonus: add an instanced variant of Jesper Vos’ multiside refraction shader

Setting up

We are using React, hooks, Three.js and react-three-fiber. The latter is a renderer for Three.js which allows us to declare the scene graph by breaking up tasks into self-contained components. However, you still need to know a bit of Three.js. All there is to know about react-three-fiber you can find on the GitHub repo’s readme. Check out the tutorial on alligator.io, which goes into the why and how.

We don’t emulate a scroll bar, which would take away browser semantics. A real scroll-area in front of the canvas with a set height and a listener is all we need.

I decided to divide the content into:

  • virtual content sections
  • and pages, each 100vh long, this defines how long the scroll area is

function App() {
  const scrollArea = useRef()
  const onScroll = e => (state.top.current = e.target.scrollTop)
  useEffect(() => void onScroll({ target: scrollArea.current }), [])
  return (
    <>
      <Canvas orthographic>{/* Contents ... */}</Canvas>
      <div ref={scrollArea} onScroll={onScroll}>
        <div style={{ height: `${state.pages * 100}vh` }} />
      </div>
    </>
  )
}

scrollTop is written into a ref because it will be picked up by the render-loop, which is carrying out the animations. Re-rendering React for frequently changing state doesn’t make sense.

A first-run effect synchronizes the local scrollTop with the actual one, which may not be zero.
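
For context, state here is plain mutable state shared between the DOM handlers and the render-loop. A sketch of its shape, inferred from how the snippets use it (the values are illustrative):

const state = {
  top: { current: 0 }, // scroll offset, written by onScroll, read per-frame
  pages: 3, // how many 100vh pages the scroll area spans
  sections: 3, // number of content sections
  zoom: 75 // camera zoom, used to convert pixels to world units
}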

Building a declarative scroll rig

There are many ways to go about it, but generally it would be nice if we could distribute content across the number of sections in a declarative way while the number of pages defines how long we have to scroll. Each content-block should have:

  • an offset, which is the section index, given 3 sections, 0 means start, 2 means end, 1 means in between
  • a factor, which gets added to the offset position and subtracted using scrollTop, it will control the blocks speed and direction

Blocks should also be nestable, so that sub-blocks know their parents’ offset and can scroll along.

const offsetContext = createContext(0)

function Block({ children, offset, factor, ...props }) {
  const ref = useRef()
  // Fetch parent offset and the height of a single section
  const { offset: parentOffset, sectionHeight } = useBlock()
  offset = offset !== undefined ? offset : parentOffset
  // Runs every frame and lerps the inner block into its place
  useFrame(() => {
    const curY = ref.current.position.y
    const curTop = state.top.current
    ref.current.position.y = lerp(curY, (curTop / state.zoom) * factor, 0.1)
  })
  return (
    <offsetContext.Provider value={offset}>
      <group {...props} position={[0, -sectionHeight * offset * factor, 0]}>
        <group ref={ref}>{children}</group>
      </group>
    </offsetContext.Provider>
  )
}

This is a block-component. Above all, it wraps the offset that it is given into a context provider so that nested blocks and components can read it out. Without an offset it falls back to the parent offset.

It defines two groups. The first is for the target position, which is the height of one section multiplied by the offset and the factor. The second, inner group is animated and cancels out the factor. When the user scrolls to the given section offset, the block will be centered.

We use that along with a custom hook which allows any component to access block-specific data. This is how any component gets to react to scroll.

function useBlock() {
  const { viewport } = useThree()
  const offset = useContext(offsetContext)
  const canvasWidth = viewport.width / zoom
  const canvasHeight = viewport.height / zoom
  const sectionHeight = canvasHeight * ((pages - 1) / (sections - 1))
  // ...
  return { offset, canvasWidth, canvasHeight, sectionHeight }
}

We can now compose and nest blocks conveniently:

<Block offset={2} factor={1.5}>
  <Content>
    <Block factor={-0.5}>
      <SubContent />
    </Block>
  </Content>
</Block>

Anything can read from block-data and react to it (like that spinning cross):

function Cross() {
  const ref = useRef()
  const { viewportHeight } = useBlock()
  useFrame(() => {
    const curTop = state.top.current
    const nextY = (curTop / ((state.pages - 1) * viewportHeight)) * Math.PI
    ref.current.rotation.z = lerp(ref.current.rotation.z, nextY, 0.1)
  })
  return (
    <group ref={ref}>
      {/* cross geometry ... */}
    </group>
  )
}

Mixing HTML and canvas, and dealing with assets

Keeping HTML in sync with the 3D world

We want to keep layout and text-related things in the DOM. However, keeping it in sync is a bit of a bummer in Three.js: messing with createElement and camera calculations is no fun.

In three-fiber all you need is the <Dom /> helper (@beta atm). Throw this into the canvas and add declarative HTML. This is all it takes for it to move along with its parents’ world-matrix.

<group position={[10, 0, 0]}>
  <Dom><h1>hello</h1></Dom>
</group>

Accessibility

If we strictly divide between layout and visuals, supporting a11y is possible. Dom elements can be behind the canvas (via the prepend prop), or in front of it. Make sure to place them in front if you need them to be accessible.

Responsiveness, media-queries, etc.

While the DOM fragments can rely on CSS, their positioning overall relies on the scene graph. Canvas elements on the other hand know nothing of the sort, so making it all work on smaller screens can be a bit of a challenge.

Fortunately, three-fiber has auto-resize inbuilt. Any component requesting size data will be automatically informed of changes.

You get:

  • viewport, the size of the canvas in its own units, must be divided by camera.zoom for orthographic cameras
  • size, the size of the screen in pixels

const { viewport, size } = useThree()

Most of the relevant calculations for margins, maxWidth and so on have been made in useBlock.

Handling async assets and loading screens via React.Suspense

Concerning assets, React’s Suspense allows us to control loading and caching, when components should show up, in what order, fallbacks, and how errors are handled. It makes something like a loading screen or a start-up animation almost too easy.

The following will suspend all contents until each and every component, even nested ones, has its async data ready. Meanwhile it will show a fallback. When everything is there, the <Startup /> component will render along with everything else.

<Suspense fallback={<Fallback />}>
  <AsyncContent />
  <Startup />
</Suspense>

In three-fiber you can suspend a component with the useLoader hook, which takes any Three.js loader, then loads (and caches) assets with it.

function Image() {
  const texture = useLoader(THREE.TextureLoader, "/texture.png")
  // It will only get here if the texture has been loaded
  return (
    <mesh>
      <meshBasicMaterial attach="material" map={texture} />
    </mesh>
  )
}

Adding shader effects and tying them to scroll

The custom shader in this demo is a Frankenstein based on the Three.js MeshBasicMaterial, plus custom scale and shift uniforms that we tie to scroll.

The relevant portion of code in which we feed the shader block-specific scroll data is this one:

material.current.scale =
  lerp(material.current.scale, offsetFactor - top / ((pages - 1) * viewportHeight), 0.1)
material.current.shift =
  lerp(material.current.shift, (top - last) / 150, 0.1)

Adding Diamonds

The technique is explained in full detail in the article Real-time Multiside Refraction in Three Steps by Jesper Vos. I placed Jesper’s code into a re-usable component, so that it can be mounted and unmounted, taking care of all the render logic. I also changed the shader slightly to enable instancing, which now allows us to draw dozens of these onto the screen without hitting a performance snag anytime soon.

The component reads out block-data like everything else. The diamonds are put into place according to the scroll offset by distributing the instanced meshes. Instancing is a relatively new feature in Three.js.

Wrapping up

This tutorial may give you a general idea, but there are many things that are possible beyond the generic parallax; you can tie anything to scroll. Above all, being able to compose and re-use components goes a long way and is so much easier than dealing with a soup of code fragments whose implicit contracts span the codebase.

Scroll, Refraction and Shader Effects in Three.js and React was written by Paul Henschel and published on Codrops.

Real-time Multiside Refraction in Three Steps

When rendering a 3D object you’ll always have to assign it a material to make it visible and to give it a desired appearance, whether it is in some kind of 3D software or in real-time with WebGL.

Many types of materials can be mimicked with out-of-the-box programs in libraries like Three.js, but in this tutorial I will show you how to make objects appear glass-like in three steps using—you guessed it—Three.js.

Step 1: Setup and Front Side Refraction

For this demo I’ll be using a diamond geometry, but you can follow along with a simple box or any other geometry.

Let’s set up our project. We’ll need a renderer, a scene, a perspective camera and our geometry. In order to render our geometry we will need to assign it a material. Creating this material will be the main focus of this tutorial. So go ahead and create a new ShaderMaterial with a basic vertex and fragment shader.

Contrary to what you’d expect, our material will not be transparent, in fact we will sample and distort anything that’s behind our diamond. To do that we will need to render our scene (without the diamond) to a texture. I’m simply rendering a full screen plane with an orthographic camera, but this could just as well be a scene full of other objects. The easiest way to split the background geometry from the diamond in Three.js is to use Layers.

this.orthoCamera = new THREE.OrthographicCamera( width / - 2,width / 2, height / 2, height / - 2, 1, 1000 );
// assign the camera to layer 1 (layer 0 is default)
this.orthoCamera.layers.set(1);

const tex = await loadTexture('texture.jpg');
this.quad = new THREE.Mesh(new THREE.PlaneBufferGeometry(), new THREE.MeshBasicMaterial({map: tex}));

this.quad.scale.set(width, height, 1);
// also move the plane to layer 1
this.quad.layers.set(1);
this.scene.add(this.quad);

Our render loop will look like this:

this.envFbo = new THREE.WebGLRenderTarget(width, height);

this.renderer.autoClear = false;

render() {
	requestAnimationFrame( this.render );

	this.renderer.clear();

	// render background to fbo
	this.renderer.setRenderTarget(this.envFbo);
	this.renderer.render( this.scene, this.orthoCamera );

	// render background to screen
	this.renderer.setRenderTarget(null);
	this.renderer.render( this.scene, this.orthoCamera );
	this.renderer.clearDepth();

	// render geometry to screen
	this.renderer.render( this.scene, this.camera );
};

Alright, time for a little bit of theory now. Transparent materials like glass are visible because they bend light: light travels slower in glass than it does in air, so when a lightwave hits a glass object at an angle, this change in speed causes the wave to change direction. This change in direction is what describes the phenomenon of refraction.

potential refraction of a light ray

To replicate this in code we will need to know the angle between our eye vector and the surface (normal) vector of our diamond in world space. Let’s update our vertex shader to calculate these vectors.

varying vec3 eyeVector;
varying vec3 worldNormal;

void main() {
	vec4 worldPosition = modelMatrix * vec4( position, 1.0);
	eyeVector = normalize(worldPosition.xyz - cameraPosition);
	worldNormal = normalize( modelViewMatrix * vec4(normal, 0.0)).xyz;

	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

In our fragment shader we can now use eyeVector and worldNormal as the first two parameters of glsl’s built-in refract function. The third parameter is the ratio of indices of refraction, meaning the index of refraction (IOR) of our fast medium—air—divided by the IOR of our slow medium—glass. In this case that will be 1.0/1.5, but you can tweak this value to achieve your desired result. For example the IOR of water is 1.33 and diamond has an IOR of 2.42.

uniform sampler2D envMap;
uniform vec2 resolution;
uniform float ior;

varying vec3 worldNormal;
varying vec3 eyeVector;

void main() {
	// get screen coordinates
	vec2 uv = gl_FragCoord.xy / resolution;

	vec3 normal = worldNormal;
	// calculate refraction and add to the screen coordinates
	vec3 refracted = refract(eyeVector, normal, 1.0/ior);
	uv += refracted.xy;

	// sample the background texture
	vec4 tex = texture2D(envMap, uv);

	// note: "output" is a reserved word in GLSL ES, so we call this color
	vec3 color = tex.rgb;
	gl_FragColor = vec4(color, 1.0);
}
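
Since ior is declared as a uniform above, it needs a value on the JavaScript side. A sketch of the material setup (the uniform names are inferred from the shader; the IOR values come from the text):

this.refractionMaterial = new THREE.ShaderMaterial({
  uniforms: {
    envMap: { value: this.envFbo.texture },
    resolution: { value: new THREE.Vector2(width, height) },
    ior: { value: 1.5 } // glass; try 1.33 for water or 2.42 for diamond
  },
  vertexShader,
  fragmentShader
});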

Nice! We successfully wrote a refraction shader. But our diamond is hardly visible… That is partly because we’ve only handled one visual property of glass. Not all light will pass through the material to be refracted; in fact, part of it will be reflected. Let’s see how we can implement that!

Step 2: Reflection and the Fresnel equation

For the sake of simplicity, in this tutorial we are not going to calculate proper reflections but just use a white color for our reflected light. Now, how do we know when to reflect and when to refract? In theory this depends on the refractive index of the material: when the angle between the incident vector and the surface normal is greater than the critical angle, the light wave will be reflected.

In our fragment shader we will use the Fresnel equation to calculate the ratio between reflected and refracted rays. Unfortunately, GLSL does not have this equation built in either, but you can just copy this approximation:

float Fresnel(vec3 eyeVector, vec3 worldNormal) {
	return pow( 1.0 + dot( eyeVector, worldNormal), 3.0 );
}

We can now simply mix the refracted texture color with our white reflection color based on the Fresnel ratio we just calculated.

uniform sampler2D envMap;
uniform vec2 resolution;
uniform float ior;

varying vec3 worldNormal;
varying vec3 eyeVector;

float Fresnel(vec3 eyeVector, vec3 worldNormal) {
	return pow( 1.0 + dot( eyeVector, worldNormal), 3.0 );
}

void main() {
	// get screen coordinates
	vec2 uv = gl_FragCoord.xy / resolution;

	vec3 normal = worldNormal;
	// calculate refraction and add to the screen coordinates
	vec3 refracted = refract(eyeVector, normal, 1.0/ior);
	uv += refracted.xy;

	// sample the background texture
	vec4 tex = texture2D(envMap, uv);

	vec3 color = tex.rgb;

	// calculate the Fresnel ratio
	float f = Fresnel(eyeVector, normal);

	// mix the refraction color and reflection color
	color = mix(color, vec3(1.0), f);

	gl_FragColor = vec4(color, 1.0);
}

That’s already looking a lot better, but there’s still something off about it… Ah right, we can’t see the other side of the transparent object. Let’s fix that!

Step 3: Multiside refraction

With the things we’ve learned so far about reflections and refractions, we can understand that light can bounce back and forth a couple of times inside the object before exiting it.

To achieve a physically correct result we will have to trace each ray, but unfortunately this computation is way too heavy to render in real-time. So instead, I will show you a simple approximation to at least visualize the back faces of our diamond.

We’ll need the world normals of our geometry’s front and back faces in one fragment shader. Since we cannot render both sides at the same time we’ll need to render the back face normals to a texture first.

Let’s make a new ShaderMaterial like we did in step 1, but this time we will render the world normals to gl_FragColor.

varying vec3 worldNormal;

void main() {
	gl_FragColor = vec4(worldNormal, 1.0);
}

Next we’ll update our render loop to include the back face pass.

this.backfaceFbo = new THREE.WebGLRenderTarget(width, height);

...

render() {
	requestAnimationFrame( this.render );

	this.renderer.clear();

	// render background to fbo
	this.renderer.setRenderTarget(this.envFbo);
	this.renderer.render( this.scene, this.orthoCamera );

	// render diamond back faces to fbo
	this.mesh.material = this.backfaceMaterial;
	this.renderer.setRenderTarget(this.backfaceFbo);
	this.renderer.clearDepth();
	this.renderer.render( this.scene, this.camera );

	// render background to screen
	this.renderer.setRenderTarget(null);
	this.renderer.render( this.scene, this.orthoCamera );
	this.renderer.clearDepth();

	// render diamond with refraction material to screen
	this.mesh.material = this.refractionMaterial;
	this.renderer.render( this.scene, this.camera );
};

Now we sample the back face normal texture in our refraction material.

vec3 backfaceNormal = texture2D(backfaceMap, uv).rgb;

And finally we combine the front and back face normals.

float a = 0.33;
vec3 normal = worldNormal * (1.0 - a) - backfaceNormal * a;

In this equation, a is simply a scalar value indicating how much of the back face’s normal should be applied.

We did it! We can see all sides of our diamond, only because of the refractions and reflections we have applied to its material.

Limitations

As I already explained, it is not quite possible to render physically correct transparent materials in real-time with this method. Another problem occurs when rendering multiple glass objects in front of each other: since we only sample the environment once, we won’t be able to see through a chain of objects. And lastly, a screen space refraction like the one I demoed here won’t work very well near the edges of the canvas, since rays may refract to coordinates outside its boundaries, and we didn’t capture that data when rendering the background scene to the render target.

Of course, there are ways to overcome these limitations, but they might not all be great solutions for your real-time rendering in WebGL.

I hope you enjoyed following along with this demo and you have learned something from it. I’m curious to see what you can do with it! Let me know on Twitter. Also don’t hesitate to ask me anything!

Real-time Multiside Refraction in Three Steps was written by Jesper Vos and published on Codrops.