Creating a Distorted Mask Effect on an Image with Babylon.js and GLSL

Nowadays, it’s really hard to navigate the web without running into some wonderful website full of stunning effects that seem like black magic.

Well, many times that “black magic” is in fact WebGL, sometimes mixed with a bit of GLSL. You can find some really nice examples in this Awwwards roundup, but there are many more out there.

Recently, I stumbled upon the Waka Waka website, one of the latest works of Ben Mingo and Aristide Benoist, and the first thing I noticed was the hover effect on the images.

It was obvious that it was WebGL, but my question was: “How did Aristide do that?”

Since I love to deconstruct WebGL stuff, I tried to replicate it, and in the end I succeeded.

In this tutorial I’ll explain how to create an effect really similar to the one in the Waka Waka website using Microsoft’s BabylonJS library and some GLSL.

This is what we’ll do.

The setup

The first thing we have to do is create our scene; it will be very basic and will contain only a plane to which we’ll apply a custom ShaderMaterial.

I won’t cover how to set up a scene in BabylonJS; for that you can check its comprehensive documentation.

Here’s the code that you can copy and paste:

import { Engine } from "@babylonjs/core/Engines/engine";
import { Scene } from "@babylonjs/core/scene";
import { Vector3 } from "@babylonjs/core/Maths/math";
import { ArcRotateCamera } from "@babylonjs/core/Cameras/arcRotateCamera";
import { ShaderMaterial } from "@babylonjs/core/Materials/shaderMaterial";
import { Effect } from "@babylonjs/core/Materials/effect";
import { PlaneBuilder } from "@babylonjs/core/Meshes/Builders/planeBuilder";

class App {
  constructor() {
    this.canvas = null;
    this.engine = null;
    this.scene = null;
  }

  init() {
    this.setup();
    this.addListeners();
  }

  setup() {
    this.canvas = document.querySelector("#app");
    this.engine = new Engine(this.canvas, true, null, true);
    this.scene = new Scene(this.engine);

    // Adding the vertex and fragment shaders to the Babylon's ShaderStore
    Effect.ShadersStore["customVertexShader"] = require("./shader/vertex.glsl");
    Effect.ShadersStore[
      "customFragmentShader"
    ] = require("./shader/fragment.glsl");

    // Creating the shader material using the `custom` shaders we added to the ShaderStore
    const planeMaterial = new ShaderMaterial("PlaneMaterial", this.scene, {
      vertex: "custom",
      fragment: "custom",
      attributes: ["position", "normal", "uv"],
      uniforms: ["worldViewProjection"]
    });
    planeMaterial.backFaceCulling = false;

    // Creating a basic plane and adding the shader material to it
    const plane = PlaneBuilder.CreatePlane(
      "Plane",
      { width: 1, height: 9 / 16 },
      this.scene
    );
    plane.scaling = new Vector3(7, 7, 1);
    plane.material = planeMaterial;

    // Camera
    const camera = new ArcRotateCamera(
      "Camera",
      -Math.PI / 2,
      Math.PI / 2,
      10,
      Vector3.Zero(),
      this.scene
    );

    this.engine.runRenderLoop(() => this.scene.render());
  }

  addListeners() {
    window.addEventListener("resize", () => this.engine.resize());
  }
}

const app = new App();
app.init();

As you can see, it’s not that different from other WebGL libraries like Three.js: it sets up a scene, a camera, and it starts the render loop (otherwise you wouldn’t see anything).

The material of the plane is a ShaderMaterial for which we’ll have to create its respective shader files.

// /src/shader/vertex.glsl

precision highp float;

// Attributes
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;

// Uniforms
uniform mat4 worldViewProjection;

// Varyings
varying vec2 vUV;

void main(void) {
    gl_Position = worldViewProjection * vec4(position, 1.0);
    vUV = uv;
}
// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec3 color = vec3(vUV.x, vUV.y, 0.0);
  gl_FragColor = vec4(color, 1.0);
}

You can forget about the vertex shader since for the purpose of this tutorial we’ll work only on the fragment shader.

Here you can see it live:

Good, we’ve already written 80% of the JavaScript code we need for the purpose of this tutorial.

The logic

GLSL is cool, it allows you to create stunning effects that would be impossible to do with HTML, CSS and JS alone. It’s a completely different world, and if you’ve always done “web” stuff you’ll get confused at the beginning, because when working with GLSL you have to think in a completely different way to achieve any effect.

The logic behind the effect we want to achieve is pretty simple: we have two overlapping images, and the image that overlaps the other one has a mask applied to it.

Simple, but it doesn’t work like SVG masks for instance.
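
In GLSL terms, the whole effect boils down to a weighted blend of the two images. Here’s a conceptual sketch (the actual uniforms and the mask value are built step by step later in this tutorial):

// Conceptual sketch: blend two images by a mask value.
// `mask` is 1.0 where the front image should show, 0.0 elsewhere.
vec3 front = texture2D(u_frontTexture, uv).rgb;
vec3 back = texture2D(u_backTexture, uv).rgb;
vec3 color = front * mask + back * (1.0 - mask);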

Adjusting the fragment shader

Before going any further we need to tweak the fragment shader a little bit.

As for now, it looks like this:

// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec3 color = vec3(vUV.x, vUV.y, 0.0);
  gl_FragColor = vec4(color, 1.0);
}

Here, we’re telling the shader to assign each pixel a color whose channels are determined by the value of the x coordinate for the Red channel and the y coordinate for the Green channel.

But we need to have the origin at the center of the plane, not the bottom-left corner. In order to do so we have to refactor the declaration of uv this way:

// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec2 uv = vUV - 0.5;
  vec3 color = vec3(uv.x, uv.y, 0.0);
  gl_FragColor = vec4(color, 1.0);
}

This simple change will result in the following:

This is because we moved the origin from the bottom-left corner to the center of the plane, so uv’s values now go from -0.5 to 0.5. Since you cannot assign negative values to RGB channels, the Red and Green channels fall back to 0.0 across the whole bottom-left area.

Creating the mask

First, let’s change the color of the plane to complete black:

// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec2 uv = vUV - 0.5;
  vec3 color = vec3(0.0);
  gl_FragColor = vec4(color, 1.0);
}

Now let’s add a rectangle that we will use as the mask for the foreground image.

Add this code outside the main() function:

vec3 Rectangle(in vec2 size, in vec2 st, in vec2 p, in vec3 c) {
  float top = step(1. - (p.y + size.y), 1. - st.y);
  float right = step(1. - (p.x + size.x), 1. - st.x);
  float bottom = step(p.y, st.y);
  float left = step(p.x, st.x);
  return top * right * bottom * left * c;
}

(How to create shapes is beyond the scope of this tutorial. For that, I suggest you read this chapter of “The Book of Shaders”)

The Rectangle() function does exactly what its name says: it creates a rectangle based on the parameters we pass to it.

Then, we redeclare the color using that Rectangle() function:

vec2 maskSize = vec2(0.3, 0.3);

// Note that we're subtracting HALF of the width and height to position the rectangle at the center of the scene
vec2 maskPosition = vec2(-0.15, -0.15);
vec3 maskColor =  vec3(1.0);

color = Rectangle(maskSize, uv, maskPosition, maskColor);

Awesome! We now have our black plane with a beautiful white rectangle at the center.

But, wait! That’s supposed to be a square, not a rectangle; we set its size to 0.3 for both the width and the height!

That’s because of the ratio of our plane, but it can be easily fixed in two simple steps.

First, add this snippet to the JS file:

this.scene.registerBeforeRender(() => {
  plane.material.setFloat("uPlaneRatio", plane.scaling.x / plane.scaling.y);
});

And then, edit the shader by adding this line at the top of the file:

uniform float uPlaneRatio;

…and this line too, right below the line that sets the uv variable

uv.x *= uPlaneRatio;

Short explanation

In the JS file, we’re sending a uPlaneRatio uniform (a float, one of the GLSL data types) to the fragment shader; its value is the ratio between the plane’s width and height.

We made the fragment shader expect that uniform by declaring it at the top of the file; the shader then uses it to adjust the uv.x value.
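
Put together, the top of the fragment shader should now look something like this:

// /src/shader/fragment.glsl

precision highp float;

// The plane's width/height ratio, sent from JavaScript on every frame
uniform float uPlaneRatio;

// Varyings
varying vec2 vUV;

void main() {
  vec2 uv = vUV - 0.5;
  uv.x *= uPlaneRatio;

  // ...rest of main()
}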


Here you can see the final result: a black plane with a white square at the center; nothing too fancy (yet), but it works!

Adding the foreground image

Displaying an image in GLSL is pretty simple. First, edit the JS code and add the following lines:

// Import the `Texture` module from BabylonJS at the top of the file
import { Texture } from '@babylonjs/core/Materials/Textures/texture'
// Add this After initializing both the plane mesh and its material
const frontTexture = new Texture('src/images/lantern.jpg')
plane.material.setTexture("u_frontTexture", frontTexture)

This way, we’re passing the foreground image to the fragment shader as a Texture element.

Now, add the following lines to the fragment shader:

// Put this at the beginning of the file, outside of the `main()` function
uniform sampler2D u_frontTexture;
// Put this at the bottom of the `main()` function, right above `gl_FragColor = ...`
vec3 frontImage = texture2D(u_frontTexture, uv * 0.5 + 0.5).rgb;

A bit of explaining:

We told BabylonJS to pass the texture to the shader with the setTexture() method, and then declared a matching sampler2D uniform named u_frontTexture in the shader so it knows what to expect.

Finally, we created a new variable of type vec3 named frontImage that contains the RGB values of our texture.

By default, texture2D() returns a vec4 (it contains the r, g, b and a values), but we don’t need the alpha channel, so we declare frontImage as a vec3 variable and explicitly take only the .rgb channels.

Please also note that we’ve modified the texture’s UVs by first multiplying them by 0.5 and then adding 0.5. This is because at the beginning of the main() function we remapped the coordinate system to -0.5 → 0.5 (and adjusted uv.x by the plane ratio), so we have to map the coordinates back to the 0 → 1 range that textures expect.


If you now add this to the GLSL code…

color = frontImage;

…you will see our image, rendered by a GLSL shader:

Masking

Always keep in mind that, for shaders, everything is a number (yes, even images), and that 0.0 means completely hidden while 1.0 stands for fully visible.

We can now use the mask we’ve just created to hide the parts of our image where the value of the mask equals 0.0.

With that in mind, it’s pretty easy to apply our mask. The only thing we have to do is multiply the color variable by the value of the mask:

// The mask should be a separate variable, not set as the `color` value
vec3 mask = Rectangle(maskSize, uv, maskPosition, maskColor);

// Some super magic trick
color = frontImage * mask;

Et voilà, we now have a fully functioning mask effect:

Let’s enhance it a bit by making the mask follow a circular path.

In order to do that we must go back to our JS file and add a couple of lines of code.

// Add this to the class constructor
this.time = 0
// This goes inside the `registerBeforeRender` callback
this.time++;
plane.material.setFloat("u_time", this.time);

In the fragment shader, first declare the new uniform at the top of the file:

uniform float u_time;

Then, edit the declaration of maskPosition like this:

vec2 maskPosition = vec2(
  cos(u_time * 0.05) * 0.2 - 0.15,
  sin(u_time * 0.05) * 0.2 - 0.15
);

u_time is simply one of the uniforms that we pass to our shader from the WebGL program.

The only difference from the u_frontTexture uniform is that we increase its value on each render loop and pass the new value to the shader, so that the mask’s position updates over time.

Here’s a live preview of the mask going in a circle:

Adding the background image

In order to add the background image we’ll do the exact opposite of what we did for the foreground image.

Let’s go one step at a time.

First, in the JS class, pass the shader the background image in the same way we did for the foreground image:

const backTexture = new Texture("src/images/lantern-bw.jpg");
plane.material.setTexture("u_backTexture", backTexture);

Then, tell the fragment shader that we’re passing it that u_backTexture and initialize another vec3 variable:

// This goes at the top of the file
uniform sampler2D u_backTexture;

// Add this after `vec3 frontImage = ...`
vec3 backImage = texture2D(u_backTexture, uv * 0.5 + 0.5).rgb;

When you do a quick test by replacing

color = frontImage * mask;

with

color = backImage * mask;

you’ll see the background image.

But for this one, we have to invert the mask to make it behave the opposite way.

Inverting a number is really easy, the formula is:

invertedNumber = 1 - <number>

So, let’s apply the inverted mask to the background image:

backImage *= (1.0 - mask);

Here, we’re applying the same mask we added to the foreground image, but since we inverted it, the effect is the opposite.

Putting it all together

At this point, we can refactor the declaration of the two images by directly applying their masks.

vec3 frontImage = texture2D(u_frontTexture, uv * 0.5 + 0.5).rgb * mask;
vec3 backImage = texture2D(u_backTexture, uv * 0.5 + 0.5).rgb * (1.0 - mask);

We can now display both images by adding backImage to frontImage:

color = backImage + frontImage;

That’s it, here’s a live example of the desired effect:
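
For reference, here is roughly what the whole fragment shader looks like at this stage, assembled from the snippets above (your file may differ slightly in ordering):

// /src/shader/fragment.glsl

precision highp float;

// Uniforms
uniform float uPlaneRatio;
uniform float u_time;
uniform sampler2D u_frontTexture;
uniform sampler2D u_backTexture;

// Varyings
varying vec2 vUV;

vec3 Rectangle(in vec2 size, in vec2 st, in vec2 p, in vec3 c) {
  float top = step(1. - (p.y + size.y), 1. - st.y);
  float right = step(1. - (p.x + size.x), 1. - st.x);
  float bottom = step(p.y, st.y);
  float left = step(p.x, st.x);
  return top * right * bottom * left * c;
}

void main() {
  vec2 uv = vUV - 0.5;
  uv.x *= uPlaneRatio;

  vec2 maskSize = vec2(0.3, 0.3);
  vec2 maskPosition = vec2(
    cos(u_time * 0.05) * 0.2 - 0.15,
    sin(u_time * 0.05) * 0.2 - 0.15
  );
  vec3 maskColor = vec3(1.0);
  vec3 mask = Rectangle(maskSize, uv, maskPosition, maskColor);

  vec3 frontImage = texture2D(u_frontTexture, uv * 0.5 + 0.5).rgb * mask;
  vec3 backImage = texture2D(u_backTexture, uv * 0.5 + 0.5).rgb * (1.0 - mask);

  vec3 color = backImage + frontImage;

  gl_FragColor = vec4(color, 1.0);
}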

Distorting the mask

Cool, huh? But it’s not over yet! Let’s tweak it a bit by distorting the mask.

To do so, we first have to create a new vec2 variable:

vec2 maskUV = vec2(
  uv.x + sin(u_time * 0.03) * sin(uv.y * 5.0) * 0.15,
  uv.y + cos(u_time * 0.03) * cos(uv.x * 10.0) * 0.15
);

Then, replace uv with maskUV in the mask declaration

vec3 mask = Rectangle(maskSize, maskUV, maskPosition, maskColor);

In maskUV, we’re using some math to offset the uv values based on the u_time uniform and the current uv.

Try tweaking those values by yourself to see different effects.

Distorting the foreground image

Let’s now distort the foreground image the same way we did for the mask, but with slightly different values.

Create a new vec2 variable to store the foreground image uvs:

vec2 frontImageUV = vec2(
  (uv.x + sin(u_time * 0.04) * sin(uv.y * 10.) * 0.03),
  (uv.y + sin(u_time * 0.03) * cos(uv.x * 15.) * 0.05)
);

Then, use that frontImageUV instead of the default uv when declaring frontImage:

vec3 frontImage = texture2D(u_frontTexture, frontImageUV * 0.5 + 0.5).rgb * mask;

Voilà! Now both the mask and the image have a distortion effect applied.

Again, try tweaking those numbers to see how the effect changes.

Adding mouse control

What we’ve made so far is really cool, but we could make it even cooler by adding some mouse control like making it fade in/out when the mouse hovers/leaves the plane and making the mask follow the cursor.

Adding fade effects

In order to detect the mouseover/mouseleave events on a mesh and execute some code when those events occur we have to use BabylonJS’s actions.

Let’s start by importing some new modules:

import { ActionManager } from "@babylonjs/core/Actions/actionManager";
import { ExecuteCodeAction } from "@babylonjs/core/Actions/directActions";
import "@babylonjs/core/Culling/ray";

Then add this code after the creation of the plane:

this.plane.actionManager = new ActionManager(this.scene);

this.plane.actionManager.registerAction(
  new ExecuteCodeAction(ActionManager.OnPointerOverTrigger, () =>
    this.onPlaneHover()
  )
);

this.plane.actionManager.registerAction(
  new ExecuteCodeAction(ActionManager.OnPointerOutTrigger, () =>
    this.onPlaneLeave()
  )
);

Here we’re telling the plane’s ActionManager to listen for the PointerOver and PointerOut events and execute the onPlaneHover() and onPlaneLeave() methods, which we’ll add right now:

onPlaneHover() {
  console.log('hover')
}

onPlaneLeave() {
  console.log('leave')
}

Some notes about the code above

Please note that I’ve used this.plane instead of just plane; that’s because we’ll have to access it from within the mousemove event’s callback later, so I’ve refactored the code a bit.

ActionManager allows us to listen to certain events on a target, in this case the plane.

ExecuteCodeAction is a BabylonJS action that we’ll use to execute some arbitrary code.

ActionManager.OnPointerOverTrigger and ActionManager.OnPointerOutTrigger are the two events that we’re listening to on the plane. They behave exactly like the mouseenter and mouseleave events for DOM elements.

To detect hover events in WebGL, we need to “cast a ray” from the position of the mouse to the mesh we’re checking; if that ray, at some point, intersects with the mesh, it means that the mouse is hovering it. This is why we’re importing the @babylonjs/core/Culling/ray module; BabylonJS will take care of the rest.


Now, if you test it by hovering and leaving the mesh, you’ll see that it logs hover and leave.

Now, let’s add the fade effect. For this, I’ll use the GSAP library, the de facto standard for complex and high-performance animations.

First, install it:

yarn add gsap

Then, import it in our class

import gsap from "gsap";

and add this line to the constructor

this.maskVisibility = { value: 0 };

Finally, add this line to the registerBeforeRender() callback function:

this.plane.material.setFloat("u_maskVisibility", this.maskVisibility.value);

This way, we’re sending the shader the current value property of this.maskVisibility as a new uniform called u_maskVisibility.

Refactor the fragment shader this way:

// Add this at the top of the file, like any other uniforms
uniform float u_maskVisibility;

// When declaring `maskColor`, replace `1.0` with the `u_maskVisibility` uniform
vec3 maskColor = vec3(u_maskVisibility);

If you now check the result, you’ll see that the foreground image is not visible anymore; what happened?

Do you remember when I wrote that “for shaders, everything is a number”? That’s the reason! The u_maskVisibility uniform equals 0.0, which means that the mask is invisible.

We can fix it in a few lines of code. Open the JS code and refactor the onPlaneHover() and onPlaneLeave() methods this way:

onPlaneHover() {
  gsap.to(this.maskVisibility, {
    duration: 0.5,
    value: 1
  });
}

onPlaneLeave() {
  gsap.to(this.maskVisibility, {
    duration: 0.5,
    value: 0
  });
}

Now, when you hover or leave the plane, you’ll see that the mask fades in and out!

(And yes, BabylonJS has its own animation engine, but I’m way more confident with GSAP, which is why I opted for it.)

Make the mask follow the mouse cursor

First, add this line to the constructor

this.maskPosition = { x: 0, y: 0 };

and this to the addListeners() method:

window.addEventListener("mousemove", () => {
  const pickResult = this.scene.pick(
    this.scene.pointerX,
    this.scene.pointerY
  );

  if (pickResult.hit) {
    const x = pickResult.pickedPoint.x / this.plane.scaling.x;
    const y = pickResult.pickedPoint.y / this.plane.scaling.y;

    this.maskPosition = { x, y };
  }
});

What the code above does is pretty simple: on every mousemove event it casts a ray with this.scene.pick() and updates the values of this.maskPosition if the ray is intersecting something.

(Since we have only a single mesh we can avoid checking what mesh is being hit by the ray.)
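
If your scene contained more meshes, you could compare the picked mesh against our plane before reacting; a minimal sketch:

if (pickResult.hit && pickResult.pickedMesh === this.plane) {
  // React only when the ray actually hits our plane
  const x = pickResult.pickedPoint.x / this.plane.scaling.x;
  const y = pickResult.pickedPoint.y / this.plane.scaling.y;

  this.maskPosition = { x, y };
}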

Again, on every render loop, we send the mask position to the shader, but this time as a vec2. First, import the Vector2 module together with Vector3

import { Vector2, Vector3 } from "@babylonjs/core/Maths/math";

Add this in the runRenderLoop callback function

this.plane.material.setVector2(
  "u_maskPosition",
  new Vector2(this.maskPosition.x, this.maskPosition.y)
);

Add the u_maskPosition uniform at the top of the fragment shader

uniform vec2 u_maskPosition;

Finally, refactor the maskPosition variable this way

vec2 maskPosition = vec2(
  u_maskPosition.x * uPlaneRatio - 0.15,
  u_maskPosition.y - 0.15
);

Side note: I’ve adjusted the x coordinate using the uPlaneRatio value because at the beginning of the main() function we did the same with the shader’s UVs.

And here you can see the result of your hard work:

Conclusion

As you can see, doing this kind of thing doesn’t involve too much code (~150 lines of JavaScript and ~50 lines of GLSL, including comments and empty lines); the hard part of WebGL is that it’s complex by nature, and it’s a very vast subject, so vast that many times I don’t even know what to search for on Google when I get stuck.

Also, you have to study a lot, way more than with “standard” website development. But in the end, it’s really fun to work with.

In this tutorial, I tried to explain the whole process (and the reasoning behind everything) step by step, just as I’d want someone to explain it to me; if you’ve reached this point of the tutorial, it means I’ve reached my goal.

In any case, thanks!

Credits

The lantern image is by Vladimir Fetodov

Creating a Distorted Mask Effect on an Image with Babylon.js and GLSL was written by Francesco Michelini and published on Codrops.

Making Gooey Image Hover Effects with Three.js

Flash’s grandson, WebGL has become more and more popular over the last few years with libraries like Three.js, PIXI.js or the recent OGL.js. Those are very useful for easily creating a blank board where the only boundaries are your imagination. We see more and more, often subtle, integrations of WebGL in interfaces for hover, scroll or reveal effects. Examples are the gallery of articles on Hello Monday or the effects seen on cobosrl.co.

In this tutorial, we’ll use Three.js to create a special gooey texture that we’ll use to reveal another image when hovering one. Head over to the demo to see the effect in action. For the demo itself, I’ve created a more practical example that shows a vertical scrollable layout with images, where each one has a variation of the effect. You can click on an image and it will expand to a larger version while some other content shows up (just a mock-up). We’ll go over the most interesting parts of the effect, so that you get an understanding of how it works and how to create your own.

I’ll assume that you are comfortable with JavaScript and have some knowledge of Three.js and shader logic. If you’re not, have a look at the Three.js documentation or The Book of Shaders, Three.js Fundamentals or Discover Three.js.

Attention: This tutorial covers many parts; if you prefer, you can skip the HTML/CSS/JavaScript part and go directly to the shaders section.

Now that we are clear, let’s do this!

Create the scene in the DOM

Before we start making some magic, we are first going to mark up the images in the HTML. It will be easier to handle resizing our scene after we’ve set up the initial position and dimensions in HTML/CSS rather than positioning everything in JavaScript. Moreover, the styling should be done only with CSS, not JavaScript. For example, if our image has a ratio of 16:9 on desktop but a 4:3 ratio on mobile, we just want to handle this with CSS. JavaScript will only get the new values and do its stuff.

// index.html

<section class="container">
	<article class="tile">
		<figure class="tile__figure">
			<img data-src="path/to/my/image.jpg" data-hover="path/to/my/hover-image.jpg" class="tile__image" alt="My image" width="400" height="300" />
		</figure>
	</article>
</section>

<canvas id="stage"></canvas>
// style.css

.container {
	display: flex;
	align-items: center;
	justify-content: center;
	width: 100%;
	height: 100vh;
	z-index: 10;
}

.tile {
	width: 35vw;
	flex: 0 0 auto;
}

.tile__image {
	width: 100%;
	height: 100%;
	object-fit: cover;
	object-position: center;
}

canvas {
	position: fixed;
	left: 0;
	top: 0;
	width: 100%;
	height: 100vh;
	z-index: 9;
}

As you can see above, we have created a single image that is centered in the middle of our screen. Did you notice the data-src and data-hover attributes on the image? These will be our reference images and we’ll load both of them later in our script with lazy loading.

Don’t forget the canvas. We’ll stack it below our main section to draw the images in the exact same place as we have placed them before.

Create the scene in JavaScript

Let’s get started with the less-easy-but-ok part! First, we’ll create the scene, the lights, and the renderer.

// Scene.js

import * as THREE from 'three'

export default class Scene {
	constructor() {
		this.container = document.getElementById('stage')

		this.scene = new THREE.Scene()
		this.renderer = new THREE.WebGLRenderer({
			canvas: this.container,
			alpha: true,
	  })

		this.renderer.setSize(window.innerWidth, window.innerHeight)
		this.renderer.setPixelRatio(window.devicePixelRatio)

		this.initLights()
	}

	initLights() {
		const ambientlight = new THREE.AmbientLight(0xffffff, 2)
		this.scene.add(ambientlight)
	}
}

This is a very basic scene. But we need one more essential thing in our scene: the camera. We have a choice between two types of cameras: orthographic or perspective. If we keep our image flat, we can use the first one. But for our rotation effect, we want some perspective as we move the mouse around.

In Three.js (and other WebGL libraries) with a perspective camera, 10 units on our screen are not 10px. So the trick here is to use some math to make 1 unit equal 1 pixel, and to change the perspective to increase or decrease the distortion effect.

// Scene.js

const perspective = 800

constructor() {
	// ...
	this.initCamera()
}

initCamera() {
	const fov = (180 * (2 * Math.atan(window.innerHeight / 2 / perspective))) / Math.PI

	this.camera = new THREE.PerspectiveCamera(fov, window.innerWidth / window.innerHeight, 1, 1000)
	this.camera.position.set(0, 0, perspective)
}

We’ll set the perspective to 800 to have a not-so-strong distortion as we rotate the plane. The more we increase the perspective, the less we’ll perceive the distortion, and vice versa.

The last thing we need to do is to render our scene in each frame.

// Scene.js

constructor() {
	// ...
	this.update()
}

update() {
	requestAnimationFrame(this.update.bind(this))
	
	this.renderer.render(this.scene, this.camera)
}

If your screen is not black, you’re on the right track!

Build the plane with the correct sizes

As we mentioned above, we have to retrieve some additional information from the image in the DOM like its dimension and position on the page.

// Scene.js

import Figure from './Figure'

constructor() {
	// ...
	this.figure = new Figure(this.scene)
}
// Figure.js

export default class Figure {
	constructor(scene) {
		this.$image = document.querySelector('.tile__image')
		this.scene = scene

		this.loader = new THREE.TextureLoader()

		this.image = this.loader.load(this.$image.dataset.src)
		this.hoverImage = this.loader.load(this.$image.dataset.hover)
		this.sizes = new THREE.Vector2(0, 0)
		this.offset = new THREE.Vector2(0, 0)

		this.getSizes()

		this.createMesh()
	}
}

First, we create another class where we pass the scene as a property. We set two new vectors, sizes and offset, in which we’ll store the dimensions and position of our DOM image.

Furthermore, we’ll use a TextureLoader to “load” our images and convert them into a texture. We need to do that as we want to use these pictures in our shaders.

We need to create a method in our class to handle the loading of our images and wait for a callback. We could achieve that with an async function but for this tutorial, let’s keep it simple. Just keep in mind that you’ll probably need to refactor this a bit for your own purposes.

// Figure.js

// ...
	getSizes() {
		const { width, height, top, left } = this.$image.getBoundingClientRect()

		this.sizes.set(width, height)
		this.offset.set(left - window.innerWidth / 2 + width / 2, -top + window.innerHeight / 2 - height / 2)
	}
// ...

We get our image’s information from getBoundingClientRect() and pass these values to our two vectors. The offset is there to calculate the distance between the center of the screen and the object on the page.

// Figure.js

// ...
	createMesh() {
		this.geometry = new THREE.PlaneBufferGeometry(1, 1, 1, 1)
		this.material = new THREE.MeshBasicMaterial({
			map: this.image
		})

		this.mesh = new THREE.Mesh(this.geometry, this.material)

		this.mesh.position.set(this.offset.x, this.offset.y, 0)
		this.mesh.scale.set(this.sizes.x, this.sizes.y, 1)

		this.scene.add(this.mesh)
	}
// ...

After that, we’ll set our values on the plane we’re building. As you can notice, we have created a 1×1 plane with 1 width segment and 1 height segment. As we don’t want to distort the plane, we don’t need a lot of faces or vertices. So let’s keep it simple.

But why scale it while we can set the size directly? Glad you asked.

Because of the resizing part. If we want to change the size of our mesh afterwards, scaling is the only proper way: it’s easy to change the scale of a mesh, but not the dimensions of its geometry.
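
For example, a resize handler (hypothetical, not part of this tutorial’s code) would only need to re-measure the DOM image and update the mesh’s transform:

// Figure.js: hypothetical resize handler
onResize() {
	// Re-read the DOM image's dimensions and position...
	this.getSizes()

	// ...and update the transform; the 1x1 geometry itself never changes
	this.mesh.position.set(this.offset.x, this.offset.y, 0)
	this.mesh.scale.set(this.sizes.x, this.sizes.y, 1)
}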

For the moment, we set a MeshBasicMaterial, just to see if everything is fine.

Get mouse coordinates

Now that we have built our scene with our mesh, we want to get our mouse coordinates and, to keep things easy, we’ll normalize them. Why normalize? Because of the coordinate system in shaders.

(Figure: the normalized coordinate systems of the vertex and fragment shaders)

As you can see in the figure above, we have normalized the values for both of our shaders. So to keep things simple, we’ll prepare our mouse coordinate to match the vertex shader coordinate.

If you’re lost at this point, I recommend you to read the Book of Shaders and the respective part of Three.js Fundamentals. Both have good advice and a lot of examples to help understand what’s going on.

// Figure.js

// ...

this.mouse = new THREE.Vector2(0, 0)
window.addEventListener('mousemove', (ev) => { this.onMouseMove(ev) })

// ...

onMouseMove(event) {
	TweenMax.to(this.mouse, 0.5, {
		x: (event.clientX / window.innerWidth) * 2 - 1,
		y: -(event.clientY / window.innerHeight) * 2 + 1,
	})

	TweenMax.to(this.mesh.rotation, 0.5, {
		x: -this.mouse.y * 0.3,
		y: this.mouse.x * (Math.PI / 6)
	})
}

For the tween parts, I’m going to use TweenMax from GreenSock. This is the best library ever. EVER. And it’s perfect for our purpose. We don’t need to handle the transition between two states, TweenMax will do it for us. Each time we move our mouse, TweenMax will update the position and the rotation smoothly.

One last thing before we continue: we’ll update our material from MeshBasicMaterial to ShaderMaterial and pass some values (uniforms) and shaders.

// Figure.js

// ...

this.uniforms = {
	u_image: { type: 't', value: this.image },
	u_imagehover: { type: 't', value: this.hoverImage },
	u_mouse: { value: this.mouse },
	u_time: { value: 0 },
	u_res: { value: new THREE.Vector2(window.innerWidth, window.innerHeight) }
}

this.material = new THREE.ShaderMaterial({
	uniforms: this.uniforms,
	vertexShader: vertexShader,
	fragmentShader: fragmentShader
})

update() {
	this.uniforms.u_time.value += 0.01
}

We passed our two textures, the mouse position, the size of our screen and a variable called u_time which we will increment each frame.

But keep in mind that it’s not the best way to do that. For example, we only need to increment when we are hovering the figure, not every frame. I’m not going into details, but performance-wise, it’s better to just update our shader only when we need it.
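
As a sketch of that idea (assuming a hypothetical this.isHovering flag, toggled by the figure’s mouseenter/mouseleave events), the update could be gated like this:

// Figure.js: advance the shader clock only while the figure is hovered
update() {
	if (this.isHovering) {
		this.uniforms.u_time.value += 0.01
	}
}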

The logic behind the trick & how to use noise

Still here? Nice! Time for some magic tricks.

I will not explain what noise is and where it comes from. If you’re interested, be sure to read this page from The Book of Shaders. It’s well explained.

Long story short, noise is a function that gives us a value between -1 and 1 based on the values we pass in. It outputs a random-looking, but organic, pattern.

Thanks to noise, we can generate a lot of different shapes, like maps, random patterns, etc.

(Figures: two examples of noise-generated patterns)

Let’s start with a 2D noise result. Just by passing the coordinate of our texture, we’ll have something like a cloud texture.

(Figure: cloud-like 2D noise)

But there are several kinds of noise functions. Let’s use a 3D noise by giving one more parameter like … the time? The noise pattern will evolve and change over time. By changing the frequency and the amplitude, we can give some movement and increase the contrast.

It will be our first base.

(Figure: 3D noise evolving over time)

Second, we’ll create a circle. It’s quite easy to build a simple shape like a circle in the fragment shader. We just take the function from The Book of Shaders: Shapes to create a blurred circle, increase the contrast and voilà!

(Figure: a blurred circle with increased contrast)

Last, we add these two together, play with some variables, cut a “slice” of this and tadaaa:

(Figure: the noise and the circle combined)

We finally mix our textures together based on this result and here we are, easy peasy lemon squeezy!

Let’s dive into the code.

Shaders

We won’t really need the vertex shader here so this is our code:

 // vertexShader.glsl
varying vec2 v_uv;

void main() {
	v_uv = uv;

	gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

ShaderMaterial from Three.js provides some useful default variables when you’re a beginner:

  • position (vec3): the coordinates of each vertex of our mesh
  • uv (vec2): the coordinates of our texture
  • normal (vec3): the normal of each vertex of our mesh

Here we’re just passing the UV coordinates from the vertex shader to the fragment shader.

Create the circle

Let’s use the function from The Book of Shaders to build our circle and add a variable to handle the blurriness of our edges.

Moreover, we’ll add the mouse position to the origin of our circle. This way, the circle will be moving as long as we move our mouse over our image.

// fragmentShader.glsl
uniform vec2 u_mouse;
uniform vec2 u_res;

float circle(in vec2 _st, in float _radius, in float blurriness){
	vec2 dist = _st;
	return 1.-smoothstep(_radius-(_radius*blurriness), _radius+(_radius*blurriness), dot(dist,dist)*4.0);
}

void main() {
	vec2 st = gl_FragCoord.xy / u_res.xy - vec2(1.);
	// tip: use the following formula to keep the good ratio of your coordinates
	st.y *= u_res.y / u_res.x;

	vec2 mouse = u_mouse;
	// tip2: do the same for your mouse
	mouse.y *= u_res.y / u_res.x;
	mouse *= -1.;
	
	vec2 circlePos = st + mouse;
	float c = circle(circlePos, .03, 2.);

	gl_FragColor = vec4(vec3(c), 1.);
}

Make some noooooise

As we saw above, the noise function has several parameters and gives us a smooth cloudy pattern. How could we have that? Glad you asked.

For this part, I’m using glslify and glsl-noise, two npm packages that let us include other GLSL functions. This keeps our shader a little more readable and avoids having a lot of inlined functions that we won’t use after all.

// fragmentShader.glsl
#pragma glslify: snoise2 = require('glsl-noise/simplex/2d')

//...

varying vec2 v_uv;

uniform float u_time;

void main() {
	// ...

	float n = snoise2(vec2(v_uv.x, v_uv.y));

	gl_FragColor = vec4(vec3(n), 1.);
}

(Figure: raw 2D noise)

By changing the amplitude and the frequency of our noise (exactly like the sin/cos functions), we can change the render.

// fragmentShader.glsl

float offx = v_uv.x + sin(v_uv.y + u_time * .1);
float offy = v_uv.y - u_time * 0.1 - cos(u_time * .001) * .01;

float n = snoise2(vec2(offx, offy) * 5.) * 1.;

(Figure: the noise after changing its amplitude and frequency)

But it isn’t evolving through time! It is distorted but that’s it. We want more. So we will use noise3d instead and pass a 3rd parameter: the time.

float n = snoise3(vec3(offx, offy, u_time * .1) * 4.) * .5;
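
Note that snoise3 needs its own import at the top of the fragment shader; with glsl-noise that would be:

#pragma glslify: snoise3 = require('glsl-noise/simplex/3d')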

As you can see, I changed the amplitude and the frequency to have the render I desire.

Alright, let’s add them together!

Merging both textures

By just adding these together, we’ll already see an interesting shape changing through time.

(Figure: the circle and noise added together)

To explain what’s happening, let’s imagine our noise is like a sea floating between -1 and 1. But our screen can’t display negative color or pixels more than 1 (pure white) so we are just seeing the values between 0 and 1.

(Figure: the noise as a sea floating between -1 and 1)

And our circle is like a flan.

(Figure: the circle as a flan)

By adding these two shapes together it will give this very approximative result:

(Figure: the two shapes added together)

Our very white pixels are simply values that fall outside the displayable range (above 1.0).

If we scale down our noise and subtract a small number, the waves will move down until they disappear beneath the surface of the ocean of visible colors.

(Figure: the noise scaled down and shifted below the surface)

float n = snoise3(vec3(offx, offy, u_time * .1) * 4.) - 1.;

Our circle is still there, but not visible enough to be displayed. If we multiply its value, it will be more contrasted.

float c = circle(circlePos, 0.3, 0.3) * 2.5;

(Figure: the circle with increased contrast)

We are almost there! But as you can see, there are still some details missing. And our edges aren’t sharp at all.

To avoid that, we’ll use the built-in smoothstep function.

float finalMask = smoothstep(0.4, 0.5, n + c);

gl_FragColor = vec4(vec3(finalMask), 1.);

Thanks to this function, we’ll cut a slice of our pattern between 0.4 and 0.5, for example. The shorter the space between these values, the sharper the edges.

Finally, we can mix our two textures to use them as a mask.

uniform sampler2D u_image;
uniform sampler2D u_imagehover;

// ...

vec4 image = texture2D(u_image, v_uv);
vec4 hover = texture2D(u_imagehover, v_uv);

vec4 finalImage = mix(image, hover, finalMask);

gl_FragColor = finalImage;

We can change a few variables to have a more gooey effect:

// ...

float c = circle(circlePos, 0.3, 2.) * 2.5;

float n = snoise3(vec3(offx, offy, u_time * .1) * 8.) - 1.;

float finalMask = smoothstep(0.4, 0.5, n + pow(c, 2.));

// ...

And voilà!

Check out the full source here or take a look at the live demo.

Mic drop

Congratulations to those who came this far. I hadn’t planned to explain this much. This isn’t perfect and I might have missed some details, but I hope you’ve enjoyed this tutorial anyway. Don’t hesitate to play with variables, try other noise functions and try to implement other effects using the mouse direction, or play with the scroll!

If you have any questions, let me know in the comments section! I also encourage you to download the demo, it’s a little bit more complex and shows the effects in action with hover and click effects ¯\_(ツ)_/¯


Making Gooey Image Hover Effects with Three.js was written by Arno Di Nunzio and published on Codrops.

Creating a Water-like Distortion Effect with Three.js

In this tutorial we’re going to build a water-like effect with a bit of basic math, a canvas, and postprocessing. No fluid simulation, GPGPU, or any of that complicated stuff. We’re going to draw pretty circles in a canvas, and distort the scene with the result.

We recommend that you get familiar with the basics of Three.js because we’ll omit some of the setup. But don’t worry, most of the tutorial will deal with good old JavaScript and the canvas API. Feel free to chime in if you don’t feel too confident on the Three.js parts.

The effect is divided into two main parts:

  1. Capturing and drawing the ripples to a canvas
  2. Displacing the rendered scene with postprocessing

Let’s start with updating and drawing the ripples since that’s what constitutes the core of the effect.

Making the ripples

The first idea that comes to mind is to use the current mouse position as a uniform and then simply displace the scene and call it a day. But that would mean only having one ripple that always remains at the mouse’s position. We want something more interesting, so we want many independent ripples moving at different positions. For that we’ll need to keep track of each one of them.

We’re going to create a WaterTexture class to manage everything related to the ripples:

  1. Capture every mouse movement as a new ripple in an array.
  2. Draw the ripples to a canvas
  3. Erase the ripples when their lifespan is over
  4. Move the ripples using their initial momentum

For now, let’s begin coding by creating our main App class.

import { WaterTexture } from './WaterTexture';
class App{
    constructor(){
        this.waterTexture = new WaterTexture({ debug: true });
        
        this.tick = this.tick.bind(this);
    	this.init();
    }
    init(){
        this.tick();
    }
    tick(){
        this.waterTexture.update();
        requestAnimationFrame(this.tick);
    }
}
const myApp = new App();

Let’s create our ripple manager WaterTexture with a teeny-tiny 64px canvas.

export class WaterTexture {
  constructor(options) {
    this.size = 64;
    this.radius = this.size * 0.1;
    this.width = this.height = this.size;

    if (options.debug) {
      this.width = window.innerWidth;
      this.height = window.innerHeight;
      this.radius = this.width * 0.05;
    }

    this.initTexture();
    if (options.debug) document.body.append(this.canvas);
  }

  // Initialize our canvas
  initTexture() {
    this.canvas = document.createElement("canvas");
    this.canvas.id = "WaterTexture";
    this.canvas.width = this.width;
    this.canvas.height = this.height;
    this.ctx = this.canvas.getContext("2d");
    this.clear();
  }

  clear() {
    this.ctx.fillStyle = "black";
    this.ctx.fillRect(0, 0, this.canvas.width, this.canvas.height);
  }

  update() {}
}

Note that for development purposes there is a debug option to mount the canvas to the DOM and give it a bigger size. In the end result we won’t be using this option.

Now we can go ahead and start adding some of the logic to make our ripples work:

  1. In constructor(), add:
    1. this.points, an array to keep all our ripples
    2. this.radius, the max radius of a ripple
    3. this.maxAge, the max age of a ripple
  2. In update():
    1. clear the canvas
    2. sing happy birthday to each ripple, and remove those older than this.maxAge
    3. draw each ripple
  3. Create addPoint(), which is going to take a normalized position and add a new point to the array.

export class WaterTexture {
    constructor(){
        this.size = 64;
        this.radius = this.size * 0.1;
        
        this.points = [];
        this.maxAge = 64;
        ...
    }
    ...
    addPoint(point){
		this.points.push({ x: point.x, y: point.y, age: 0 });
    }
	update(){
        this.clear();
        this.points.forEach((point, i) => {
            point.age += 1;
            if(point.age > this.maxAge){
                this.points.splice(i, 1);
            }
        })
        this.points.forEach(point => {
            this.drawPoint(point);
        })
    }
}

Note that addPoint() receives normalized values, from 0 to 1. If the canvas happens to resize, we can use the normalized points to draw using the correct size.
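
As a sketch of that idea (a hypothetical resize method, not part of the tutorial code), only the canvas dimensions would need updating, because the stored points are normalized and converted back to canvas coordinates in drawPoint():

// WaterTexture: hypothetical resize support
resize(width, height) {
  this.width = this.canvas.width = width;
  this.height = this.canvas.height = height;
  this.radius = this.width * 0.05;
  this.clear();
}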

Let’s create drawPoint(point) to start drawing the ripples: Convert the normalized point coordinates into canvas coordinates. Then, draw a happy little circle:

export class WaterTexture {
    ...
    drawPoint(point) {
        // Convert normalized position into canvas coordinates
        let pos = {
            x: point.x * this.width,
            y: point.y * this.height
        }
        const radius = this.radius;
        
        
        this.ctx.beginPath();
        this.ctx.arc(pos.x, pos.y, radius, 0, Math.PI * 2);
        this.ctx.fill();
    }
}

For our ripples to have a strong push at the center and a weak force at the edges, we’ll make our circle a radial gradient, which loses opacity as it moves to the edges.

Radial gradients create a dithering-like effect when a lot of them overlap. It looks stylish, but not as smooth as we want it to look.

To make our ripples smooth, we’ll use the circle’s shadow instead of the circle itself. Shadows give us the gradient-like result without the dithering-like effect. The difference is in the way shadows are painted to the canvas.

Since we only want to see the shadow and not the flat-colored circle, we’ll give the shadow a high offset and move the circle in the opposite direction.

As the ripple gets older, we’ll reduce its opacity until it disappears:

export class WaterTexture {
    ...
    drawPoint(point) {
        ... 
        const ctx = this.ctx;
        // Lower the opacity as it gets older
        let intensity = 1.;
        intensity = 1. - point.age / this.maxAge;
        
        let color = "255,255,255";
        
        let offset = this.width * 5.;
        // 1. Give the shadow a high offset.
        ctx.shadowOffsetX = offset; 
        ctx.shadowOffsetY = offset; 
        ctx.shadowBlur = radius * 1; 
        ctx.shadowColor = `rgba(${color},${0.2 * intensity})`; 

        
        this.ctx.beginPath();
        this.ctx.fillStyle = "rgba(255,0,0,1)";
        // 2. Move the circle to the other direction of the offset
        this.ctx.arc(pos.x - offset, pos.y - offset, radius, 0, Math.PI * 2);
        this.ctx.fill();
    }
}

To introduce interactivity, we’ll add the mousemove event listener to app class and send the normalized mouse position to WaterTexture.

import { WaterTexture } from './WaterTexture';
class App {
	...
	init(){
        window.addEventListener('mousemove', this.onMouseMove.bind(this));
        this.tick();
	}
	onMouseMove(ev){
        const point = {
			x: ev.clientX/ window.innerWidth, 
			y: ev.clientY/ window.innerHeight, 
        }
        this.waterTexture.addPoint(point);
	}
}

Great, now we’ve created a disappearing trail of ripples. Now, let’s give them some momentum!

Momentum

To give momentum to a ripple, we need its direction and force. Whenever we create a new ripple, we’ll compare its position with the last ripple. Then we’ll calculate its unit vector and force.

On every update, we’ll update the ripples’ positions with their unit vector and position. And as they get older we’ll move them slower and slower until they retire or go live on a farm. Whatever happens first.

export class WaterTexture {
	...
    constructor(){
        ...
        this.last = null;
    }
    addPoint(point){
        let force = 0;
        let vx = 0;
        let vy = 0;
        const last = this.last;
        if(last){
            const relativeX = point.x - last.x;
            const relativeY = point.y - last.y;
            // Distance formula
            const distanceSquared = relativeX * relativeX + relativeY * relativeY;
            const distance = Math.sqrt(distanceSquared);
            // Calculate Unit Vector
            vx = relativeX / distance;
            vy = relativeY / distance;
            
            force = Math.min(distanceSquared * 10000,1.);
        }
        
        this.last = {
            x: point.x,
            y: point.y
        }
        this.points.push({ x: point.x, y: point.y, age: 0, force, vx, vy });
    }
	
	update(){
        this.clear();
        let agePart = 1. / this.maxAge;
        this.points.forEach((point,i) => {
            let slowAsOlder = (1.- point.age / this.maxAge)
            let force = point.force * agePart * slowAsOlder;
              point.x += point.vx * force;
              point.y += point.vy * force;
            point.age += 1;
            if(point.age > this.maxAge){
                this.points.splice(i, 1);
            }
        })
        this.points.forEach(point => {
            this.drawPoint(point);
        })
    }
}

Note that instead of using the last ripple in the array, we use a dedicated this.last. This way, our ripples always have a point of reference to calculate their force and unit vector.

Let’s fine-tune the intensity with some easings. Instead of just decreasing until it’s removed, we’ll make it increase at the start and then decrease:

const easeOutSine = (t, b, c, d) => {
  return c * Math.sin((t / d) * (Math.PI / 2)) + b;
};

const easeOutQuad = (t, b, c, d) => {
  t /= d;
  return -c * t * (t - 2) + b;
};

export class WaterTexture {
	drawPoint(point){
	...
	let intensity = 1.;
        if (point.age < this.maxAge * 0.3) {
          intensity = easeOutSine(point.age / (this.maxAge * 0.3), 0, 1, 1);
        } else {
          intensity = easeOutQuad(
            1 - (point.age - this.maxAge * 0.3) / (this.maxAge * 0.7),
            0,
            1,
            1
          );
        }
        intensity *= point.force;
        ...
	}
}

Now we're finished with creating and updating the ripples. It's looking amazing.

But how do we use what we have painted to the canvas to distort our final scene?

Canvas as a texture

Let's use the canvas as a texture, hence the name WaterTexture. We are going to draw our ripples on the canvas, and use it as a texture in a postprocessing shader.

First, let's make a texture using our canvas and refresh/update that texture at the end of every update:

import * as THREE from 'three'
class WaterTexture {
	initTexture(){
		...
		this.texture = new THREE.Texture(this.canvas);
	}
	update(){
        ...
		this.texture.needsUpdate = true;
	}
}

By creating a texture of our canvas, we can sample our canvas like we would with any other texture. But how is this useful to us? Our ripples are just white spots on the canvas.

In the distortion shader, we're going to need the direction and intensity of the distortion for each pixel. If you recall, we already have the direction and force of each ripple. But how do we communicate that to the shader?

Encoding data in the color channels

Instead of thinking of the canvas as a place where we draw happy little clouds, we are going to think about the canvas' color channels as places to store our data and read them later on our vertex shader.

In the Red and Green channels, we'll store the unit vector of the ripple. In the Blue channel, we'll store the intensity of the ripple.

Since RGB channels range from 0 to 255, we need to normalize our data to that range. So, we’ll transform the unit vector range (-1 to 1) and the intensity range (0 to 1) into 0 to 255.

class WaterTexture {
    drawPoint(point){
        ...

        // Insert data into the color channels
        // R, G = Unit vector
        let red = ((point.vx + 1) / 2) * 255;
        let green = ((point.vy + 1) / 2) * 255;
        // B = Intensity
        let blue = intensity * 255;
        let color = `${red}, ${green}, ${blue}`;

        
        let offset = this.width * 5;
        ctx.shadowOffsetX = offset; 
        ctx.shadowOffsetY = offset; 
        ctx.shadowBlur = radius * 1; 
        ctx.shadowColor = `rgba(${color},${0.2 * intensity})`; 

        this.ctx.beginPath();
        this.ctx.fillStyle = "rgba(255,0,0,1)";
        this.ctx.arc(pos.x - offset, pos.y - offset, radius, 0, Math.PI * 2);
        this.ctx.fill();
    }
}

Note: Remember how we painted the canvas black? When our shader reads those pixels, it will apply a distortion of 0, so it only distorts where our ripples are painted.

Look at the pretty color our beautiful data gives the ripples now!

With that, we're finished with the ripples. Next, we'll create our scene and apply the distortion to the result.

Creating a basic Three.js scene

For this effect, it doesn't matter what we render. So, we'll only have a single plane to showcase the effect. But feel free to create an awesome-looking scene and share it with us in the comments!

Since we're done with WaterTexture, don't forget to turn the debug option to false.

import * as THREE from "three";
import { WaterTexture } from './WaterTexture';

class App {
    constructor(){
        this.waterTexture = new WaterTexture({ debug: false });
        
        this.renderer = new THREE.WebGLRenderer({
          antialias: false
        });
        this.renderer.setSize(window.innerWidth, window.innerHeight);
        this.renderer.setPixelRatio(window.devicePixelRatio);
        document.body.append(this.renderer.domElement);
        
        this.camera = new THREE.PerspectiveCamera(
          45,
          window.innerWidth / window.innerHeight,
          0.1,
          10000
        );
        this.camera.position.z = 50;
        
        this.scene = new THREE.Scene();
        
        this.tick = this.tick.bind(this);
        this.onMouseMove = this.onMouseMove.bind(this);
        
        this.init();
    
    }
    addPlane(){
        let geometry = new THREE.PlaneBufferGeometry(5,5,1,1);
        let material = new THREE.MeshNormalMaterial();
        let mesh = new THREE.Mesh(geometry, material);
        
        window.addEventListener("mousemove", this.onMouseMove);
        this.scene.add(mesh);
    }
    init(){
    	this.addPlane(); 
    	this.tick();
    }
    render(){
        this.renderer.render(this.scene, this.camera);
    }
    tick(){
        this.render();
        this.waterTexture.update();
        requestAnimationFrame(this.tick);
    }
}

Applying the distortion to the rendered scene

We are going to use postprocessing to apply the water-like effect to our render.

Postprocessing allows you to add effects or filters after (post) your scene is rendered (processing). Like any kind of image effect or filter you might see on Snapchat or Instagram, there is a lot of cool stuff you can do with postprocessing.

For our case, we'll render our scene normally with a RenderPass, and apply the effect on top of it with a custom EffectPass.

Let's render our scene with postprocessing's EffectComposer instead of the Three.js renderer.

Note that EffectComposer works by going through its passes on each render. It doesn't render anything unless it has a pass for it. We need to add the render of our scene using a RenderPass:

import { EffectComposer, RenderPass } from 'postprocessing'
class App{
    constructor(){
        ...
        this.composer = new EffectComposer(this.renderer);
        this.clock = new THREE.Clock();
        ...
    }
    initComposer(){
        const renderPass = new RenderPass(this.scene, this.camera);
    
        this.composer.addPass(renderPass);
    }
    init(){
    	this.initComposer();
    	...
    }
    render(){
        this.composer.render(this.clock.getDelta());
    }
}

Things should look about the same. But now we start adding custom postprocessing effects.

We are going to create the WaterEffect class that extends postprocessing's Effect. It is going to receive the canvas texture in the constructor and make it a uniform in its fragment shader.

In the fragment shader, we'll distort the UVs inside postprocessing's mainUv function using our canvas texture. Postprocessing will then take these UVs and sample our regular scene, distorted.

Although we'll only use postprocessing's mainUv function, there are a lot of interesting functions you can use. I recommend you check out the wiki for more information!

Since we already have the unit vector and intensity, we only need to multiply them together. But because the texture values are normalized, we first need to convert the unit vector back from the 0-to-1 range into the -1-to-1 range:

import * as THREE from "three";
import { Effect } from "postprocessing";

export class WaterEffect extends Effect {
  constructor(texture) {
    super("WaterEffect", fragment, {
      uniforms: new Map([["uTexture", new THREE.Uniform(texture)]])
    });
  }
}
export default WaterEffect;

const fragment = `
uniform sampler2D uTexture;
#define PI 3.14159265359

void mainUv(inout vec2 uv) {
        vec4 tex = texture2D(uTexture, uv);
		// Convert normalized values into regular unit vector
        float vx = -(tex.r *2. - 1.);
        float vy = -(tex.g *2. - 1.);
		// Normalized intensity works just fine for intensity
        float intensity = tex.b;
        float maxAmplitude = 0.2;
        uv.x += vx * intensity * maxAmplitude;
        uv.y += vy * intensity * maxAmplitude;
    }
`;

We'll then instantiate WaterEffect with our canvas texture and add it as an EffectPass after our RenderPass. Then we'll make sure our composer only renders the last effect to the screen:

import { WaterEffect } from './WaterEffect'
import { EffectPass } from 'postprocessing'
class App{
    ...
	initComposer() {
        const renderPass = new RenderPass(this.scene, this.camera);
        this.waterEffect = new WaterEffect(this.waterTexture.texture);

        const waterPass = new EffectPass(this.camera, this.waterEffect);

        renderPass.renderToScreen = false;
        waterPass.renderToScreen = true;
        this.composer.addPass(renderPass);
        this.composer.addPass(waterPass);
	}
}

And here we have the final result!

An awesome and fun effect to play with!

Conclusion

Through this article, we've created ripples, encoded their data into the color channels and used it in a postprocessing effect to distort our render.

That's a lot of complicated-sounding words! Great work, pat yourself on the back or reach out on Twitter and I'll do it for you 🙂

But there's still a lot more to explore:

  1. Drawing the ripples with a hollow circle
  2. Giving the ripples an actual radial-gradient
  3. Expanding the ripples as they get older
  4. Or using the canvas as a texture technique to create interactive particles as in Bruno's article.

We hope you enjoyed this tutorial and had a fun time making ripples. If you have any questions, don't hesitate to comment below or on Twitter!

Creating a Water-like Distortion Effect with Three.js was written by Daniel Velasquez and published on Codrops.

Crafting Stylised Mouse Trails With OGL

Mmm… swirly goodness…
Hey! Snap out of it! Ok. Now, in this tutorial we’re going to have some fun creating this pretty crazy little effect by dipping our toes into WebGL. The water is warm, I promise!

What’s OGL?

OGL is a WebGL library that is going to save us having to write a bunch of pretty unfriendly WebGL code directly. It has been my baby for the last year or so – I’ve been busy adding a bunch of examples to try and give people a kick-start with practical references.

By design, it’s tightly coupled to WebGL with only a small amount of abstraction – meaning it’s very lightweight. For example, the OGL classes required for this experiment are under 13kb gzipped – and more than half of them are just maths.

You got any more of them mouse trails?

Yes! Moving on, sorry.

Mouse trails (or any sort of trails) are just innately fun to interact with. The little touch of physics that it adds can really make an animation feel incredibly reactive and tangible, to the point where you can be certain that a good percentage of your users’ time will be spent playing with the effect, before consuming any of your site’s content… sorry not sorry.

Of course, they can be achieved using a whole range of techniques. Example time.

Starting with a really clever, recent one done using the DOM (html) and CSS. I especially love the difference-blend style they’ve added to the trail. The crux of the effect is a number of diminishing circles that follow one another, giving the effect of a tapered line.
Developed by Luca Mariotti.

Here’s an effective take on one, made using the 2D Canvas API. This one draws a circle at the head of each line every frame, while the whole scene simultaneously fades away.
By Hakim El Hattab.

And here’s one used as a main game mechanic, made using SVG by yours truly – a few trips-around-the-sun ago… It’s a dynamic bezier curve whose start, middle and end positions are updated every frame.

So each of these previous (viable) options uses a drawing API (CSS, Canvas 2D, SVG) to render its pixels. WebGL, on the other hand, leaves all of that pixel-drawing math in your capable hands. This means two things.

Firstly, it’s more work for you. Yep, it’ll probably take you longer to get something up and running, working out those pesky projection matrices, while keeping an eye on the optimisation of your attribute buffers…

Buuut (important but), it’s less work for the computer – so it’ll run (normally, waaaay) faster. You’re basically cutting out the middle man. It also gives you a crazy amount of flexibility, which I highly appreciate.

The Approach

I’m a fan of visualising problems to understand them (let’s not talk about quaternions…), so let’s break it down.

We will start by making a number of points, made up of [x, y] coordinates, arranged in a lovely little line.

app-dots

Then we’ll use these to generate a geometry, duplicating each point so that there are two vertices at each step along the curve. Why two? Just a sec.

app-line

Now, because our paired points are at the exact same position, our line is infinitely thin when rendered – and hence, invisible… No good. What we need to do is separate those pairs to give the line some width. And, puuuush.

app-mesh

And there we have our line mesh, all visible and line-y…

The part I nimbly skipped over there is also the most complicated – how do we know in which direction to move each vertex?

To solve this, we need to work out the direction between the previous and next points along the line. Then we can rotate it by 90 degrees, and we’re left with the angle we want: the normal.

app-normal
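
If it helps to see that outside of shader-land, here's the same calculation as a tiny JavaScript sketch (the real version will live in the vertex shader we write later):

// Direction from the previous point to the next, rotated by 90 degrees
function normalAt(prev, next) {
    let dx = next[0] - prev[0];
    let dy = next[1] - prev[1];
    const length = Math.hypot(dx, dy) || 1;
    dx /= length; // normalize: dx/dy is now the tangent (unit direction)
    dy /= length;
    return [-dy, dx]; // rotating [x, y] by 90 degrees gives [-y, x] – the normal
}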

Now that we have our normal, we can start getting a bit creative. For example, what if we separated the pairs by differing amounts at each point? Here, if we make the pairs get closer together toward the ends of the line, it will give us this sharp, tapered effect.

app-taper

Now see what fun ideas you can come up with! I’ll wait. A bit more. Ok, stop.

And that’s the most complicated part over. Note: I haven’t gone into the depths of each caveat that drawing lines can present, because, well, I don’t need to. But I couldn’t possibly write about lines without mentioning this extremely informative and digestible article, written by Matt DesLauriers about everything you’d want to know about drawing lines in WebGL.

Code time – setting the scene

To kick us off, let’s set up a basic OGL canvas.

Here’s an OGL CodeSandbox Template project that I’ll be using as a guide. Feel free to fork this for any OGL experiments!

First import the required modules. Normally, I would import from a local copy of OGL for the ability to tree-shake, but to keep the file structure simple on CodeSandbox, here we’re using jsdelivr – which gives us CDN access to the npm deployment.

import {
    Renderer, Camera, Orbit, Transform, Geometry, Vec3, Color, Polyline, 
} from 'https://cdn.jsdelivr.net/npm/ogl@0.0.25/dist/ogl.mjs';

Create the WebGL context, and add the canvas to the DOM.

const renderer = new Renderer({dpr: 2});
const gl = renderer.gl;
document.body.appendChild(gl.canvas);

Create our camera and scene.

const camera = new Camera(gl);
camera.position.z = 3;

const controls = new Orbit(camera);

const scene = new Transform();

And then render the scene in an update loop. Obviously the scene is empty, so this will currently look very black.

function update(t) {
    requestAnimationFrame(update);
    controls.update();

    renderer.render({scene, camera});
}

But now we can do all of the things!

As an input to OGL’s Polyline class, we need to create a bunch of points (xyz coordinates).

Here, the x value goes from -1.5 to 1.5 along the line, while the y value moves in a sine pattern between -0.5 and 0.5.

const count = 100;
const points = [];
for (let i = 0; i < count; i++) {
    const x = (i / (count - 1) - 0.5) * 3;
    const y = Math.sin(i / 10.5) * 0.5;
    const z = 0;

    points.push(new Vec3(x, y, z));
}

Then we pass those points into a new instance of Polyline, along with colour and thickness variables (uniforms). And finally, attach it to the scene.

const polyline = new Polyline(gl, {
    points,
    uniforms: {
        uColor: {value: new Color('#1b1b1b')},
        uThickness: {value: 20},
    },
});

polyline.mesh.setParent(scene);

Here we have that working live. (Click and drag, scroll etc. If you want...)

How about a square? Let's just change those points.

const points = [];
points.push(new Vec3( 0, -1, 0));
points.push(new Vec3(-1, -1, 0));
points.push(new Vec3(-1,  1, 0));
points.push(new Vec3( 1,  1, 0));
points.push(new Vec3( 1, -1, 0));
points.push(new Vec3( 0, -1, 0));

Circle?

const count = 100;
const points = [];
for (let i = 0; i < count; i++) {
    const angle = i / (count - 2) * Math.PI * 2;
    const x = Math.cos(angle);
    const y = Math.sin(angle);
    const z = 0;

    points.push(new Vec3(x, y, z));
}

You may have noticed that when you rotate or zoom the camera, the line always stays the same thickness. You would probably expect the line to get thicker when it's closer to the camera, and also to be paper thin when rotated on its side.

This is because the pair separation we spoke about earlier is happening after the camera's projection is applied - when the vertex values are in what's called 'NDC Space' (Normalized Device Coordinates). Projection matrices can be confusing, but luckily, NDC Space is not.

NDC Space is simply picturing your canvas as a 2D graph, with left to right (X), and bottom to top (Y) going from -1 to 1. No matter how complicated your scene is (geometry, projections, manipulations), each vertex will eventually need to be projected to a -1 to 1 range for X and Y.

code-screen

A more common term you've probably heard is Screen Space, which is very similar, but instead of a -1 to 1 range, it's mapped from 0 to 1.
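
The conversion between the two is just a scale and an offset:

const screenToNdc = (v) => v * 2 - 1;     // 0..1  ->  -1..1
const ndcToScreen = (v) => v * 0.5 + 0.5; // -1..1 ->   0..1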

We generally use cameras to help us convert our 3D coordinates into NDC Space, which is absolutely vital when you need to spin around an object, or view geometry from a specific perspective. But for what we're doing (mouse trails. I haven't forgotten), we don't really need to do any of that! So, in fact, we're going to skip that whole step, throw away the camera, and create our points directly in NDC Space (-1 to 1) from the get-go. This simplifies things, and it also means that we're going to get the opportunity to write a custom shader! Let me show you.

Shaping the line with a custom shader

Firstly, let's create our points in a straight line, with the X going from -0.5 to 0.5 and the Y left at 0. Keeping in mind that the screen goes from -1 to 1, this means we will end up with a horizontal line in the center of the screen, spanning half the width.

const count = 40;
const points = [];
for (let i = 0; i < count; i++) {
    const x = i / (count - 1) - 0.5;
    const y = 0;
    const z = 0;

    points.push(new Vec3(x, y, z));
}

This time when we create our Polyline, we are going to pass in a custom Vertex shader, which will override the default shader found in that class. We also don't need a thickness just yet as we'll be calculating that in the shader.

const polyline = new Polyline(gl, {
    points,
    vertex,
    uniforms: {
        uColor: {value: new Color('#1b1b1b')},
    },
});

Now, there are two shaders in the WebGL pipeline, Vertex and Fragment. To put it simply, the Vertex shader determines where on the screen to draw, and the Fragment shader determines what colour.

We can pass whatever data we want into a Vertex shader; that's entirely up to you. However, it will always be expected to return a position on the viewport that should be rendered (in Clip Space which, for this case, is the same as NDC Space: -1 to 1).

At the start of our Vertex shader, you will find the input data: Attributes and Uniforms. Attributes are per-vertex variables, whereas Uniforms are variables common to all of the vertices. For example, as this shader is run for each vertex passed in, the position Attribute value will change, moving along each point; the uResolution Uniform value, however, will remain the same throughout.

attribute vec3 position;
attribute vec3 next;
attribute vec3 prev;
attribute vec2 uv;
attribute float side;

uniform vec2 uResolution;

At the very end of our Vertex shader, you'll find a function called main that sets the variable gl_Position. These two names are non-debatable! Our WebGL program will automatically look for the main function to run, and the value written to gl_Position determines where the vertex lands on screen.

void main() {
    gl_Position = vec4(position, 1);
}

As our points are already in NDC Space, our shader – made up of just these two sections – is technically correct. However, the only issue (the same one we had in our breakdown) is that the position pairs sit on top of each other, so our line would be invisibly thin.

So instead of passing our position right on through to the output, let's add a function, getPosition, to push each vertex apart and give our line some width.

vec4 getPosition() {
    vec2 aspect = vec2(uResolution.x / uResolution.y, 1);
    vec2 nextScreen = next.xy * aspect;
    vec2 prevScreen = prev.xy * aspect;

    vec2 tangent = normalize(nextScreen - prevScreen);
    vec2 normal = vec2(-tangent.y, tangent.x);
    normal /= aspect;
    normal *= 0.1;
    
    vec4 current = vec4(position, 1);
    current.xy -= normal * side;
    return current;
}

void main() {
    gl_Position = getPosition();
}

Ah, now we can see our line. Mmmm, very modernist.

This new function performs exactly the steps from our approach overview. Let's walk through them.

We determine the direction from the previous to the next point.

vec2 tangent = normalize(nextScreen - prevScreen);

Then rotate it 90 degrees to find the normal.

vec2 normal = vec2(-tangent.y, tangent.x);

Then we push our vertices apart along the normal. The side variable has a value of -1 or 1 for each side of a pair.

current.xy -= normal * side;

"OK OK... but you skipped a few lines".

Indeed. So, the lines that determine and apply the aspect ratio are there to account for the rectangular viewport. Multiplying against the aspect ratio makes our scene square. Then we can perform the rotation without risk of skewing. And after, we divide by the aspect to bring us back to the correct ratio.

And the other line...

normal *= 0.1;

Yes that one... is where we can have some fun. As this manipulates the line's width.

Without this bit of code, our line would cover the entire height of the viewport. Why? See if you can guess...

You see, as the normal is a 'normalised' direction, this means it has a length of 1. As we know, the NDC Space goes from -1 to 1, so if our line is in the middle of the screen, and each side of the line is pushed out by 1, that will cover the entire range of -1 to 1. So multiplying by 0.1 instead only makes our line cover 10% of the viewport.

Now if we were to change this line, to say...

normal *= uv.y * 0.2;

We get this expanding, triangular shape.

width-01

This is because the variable uv.y goes from 0 to 1 along the length of the line. So we can use this to affect the shape in a bunch of different ways.

Like, we can wrap that code in a pow function.

normal *= pow(uv.y, 2.0) * 0.2;

width-02

Hm, how exponentially curvy. No, I want something more edgy.

normal *= abs(fract(uv.y * 2.0) - 0.5) * 0.4;

width-03

Too edgy...

normal *= cos(uv.y * 12.56) * 0.1 + 0.2;

width-04

Too flabby.

normal *= (1.0 - abs(uv.y - 0.5) * 2.0) * 0.2;

width-05

Almost... but a little too diamond-y.

normal *= (1.0 - pow(abs(uv.y - 0.5) * 2.0, 2.0)) * 0.2;

width-06

That's not bad. Let's run with that.

Now that we have our shape, let's deal with the movement.

Adding movement

To start off we just need 20 points, left at the default [0, 0, 0] value.
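
In code, that's as simple as it sounds (a quick sketch; a new Vec3 defaults to [0, 0, 0]):

const count = 20;
const points = [];
for (let i = 0; i < count; i++) points.push(new Vec3());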

Then we need a new Vec3 that will track the mouse input and convert the X and Y values to a -1 to 1 range, with the Y flipped.

const mouse = new Vec3();

function updateMouse(e) {
    mouse.set(
        (e.x / gl.renderer.width) * 2 - 1,
        (e.y / gl.renderer.height) * -2 + 1,
        0
    );
}
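
For this handler to actually fire, it needs to be hooked up to the relevant events – a minimal version, assuming mouse input only (touch support would read the coordinates from e.changedTouches[0] instead):

window.addEventListener('mousemove', updateMouse);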

Then in our update function, we can use this mouse value to move our points.

Every frame, we loop through each of our points. The first point eases toward the mouse value; every other point eases toward the previous point in the line. This creates a trail effect that stretches and shrinks as the user moves the mouse faster or slower.

requestAnimationFrame(update);
function update(t) {
    requestAnimationFrame(update);
    
    for (let i = points.length - 1; i >= 0; i--) {
        if (!i) {
            points[i].lerp(mouse, 0.9);
        } else {
            points[i].lerp(points[i - 1], 0.9);
        }
    }
    polyline.updateGeometry();

    renderer.render({scene});
}

Have a play with it below.

We can make this a bit more fun by replacing the first point's linear easing with a spring.

const spring = 0.06;
const friction = 0.85;
const mouseVelocity = new Vec3();
const tmp = new Vec3();

requestAnimationFrame(update);
function update(t) {
    requestAnimationFrame(update);
    
    for (let i = points.length - 1; i >= 0; i--) {
        if (!i) {
            tmp.copy(mouse).sub(points[i]).multiply(spring);
            mouseVelocity.add(tmp).multiply(friction);
            points[i].add(mouseVelocity);
        } else {
            points[i].lerp(points[i - 1], 0.9);
        }
    }
    polyline.updateGeometry();

    renderer.render({scene});
}

The extra bit of physics just makes it that much more interesting to play with. I can't help but try and make a beautiful curving motion with the mouse...

Finally, one line is never enough. And what's with all of this dark grey?! Give me 5 coloured lines, with randomised spring values, and we'll call it even.
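
Here's a sketch of that idea – the colours are arbitrary picks, and each line gets its own randomised spring and friction values (the update loop would then run the spring logic above once per line):

const lines = ['#e09f7d', '#ef5d60', '#ec4067', '#a01a7d', '#311847'].map((color) => {
    // Each line gets its own set of points...
    const points = [];
    for (let i = 0; i < 20; i++) points.push(new Vec3());

    // ...its own polyline...
    const polyline = new Polyline(gl, {
        points,
        vertex,
        uniforms: {
            uColor: {value: new Color(color)},
        },
    });
    polyline.mesh.setParent(scene);

    // ...and its own randomised physics values
    return {
        points,
        polyline,
        spring: 0.03 + Math.random() * 0.06,
        friction: 0.8 + Math.random() * 0.1,
        velocity: new Vec3(),
        tmp: new Vec3(),
    };
});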

And there we have it!

As we're using random values, every time you refresh, the effect will behave a little differently.

End

Thank you so much for sticking with me. That ended up being a lot more in-depth than I had planned... I implore you to play around with the code, maybe try randomising the number of points in each line... or changing the shape of the curve over time!

If you're new to WebGL, I hope this made the world of buffer attributes and shaders a little less overwhelming - it can be really rewarding to come up with something interesting and unique.

Crafting Stylised Mouse Trails With OGL was written by Nathan Gordon and published on Codrops.

Image Trail Effects

Today we’d like to share a fun mouse interaction effect with you that we found on the VLNC Studio website. The idea is to follow the mouse and show a trail of random images. It’s a kind of brutalist effect and there are various possibilities when it comes to showing and hiding the images. So we compiled a set of demos that explores different animations.

The animations are powered by TweenMax.

Attention: The demos are experimental and use modern CSS properties that might not be supported in older browsers.

The main idea is to show the images quickly so that a trail forms along the movement of the mouse.
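
Stripped down to its core, the logic measures how far the mouse has travelled and reveals the next image once a threshold is passed. A rough sketch of the idea (the images array, the tween values and the threshold here are placeholders, not the demo code):

let lastX = 0;
let lastY = 0;
let imageIndex = 0;
const threshold = 100; // px of mouse travel before the next image shows (arbitrary)

document.addEventListener('mousemove', (e) => {
  // Wait until the mouse has moved far enough from the last shown image
  if (Math.hypot(e.clientX - lastX, e.clientY - lastY) < threshold) return;
  lastX = e.clientX;
  lastY = e.clientY;

  // Cycle through the images, placing each at the pointer and fading it out
  const img = images[imageIndex++ % images.length];
  TweenMax.fromTo(img, 0.4,
    { x: e.clientX, y: e.clientY, opacity: 1 },
    { opacity: 0 }
  );
});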

ImageTrailEffects_01

While there’s different ways to show the images, there’s also lots of room to play with the effects that make them disappear.

ImageTrailEffects_02

Demo 3 shows how we can make the images “drop” when they disappear:

ImageTrailEffects_03

We can also add a bit of a squeeze, too:

ImageTrailEffects_04

The last demo explores setting the size of the image to be fullscreen and restricting the movement to the sides only:

ImageTrailEffects_05

This effect is inspired by Ricky Michiels’ website.

Here’s a short GIF that shows the effect of demo 2 where we scale the images up and fade them out:

ImageTrailEffects.2019-08-07 11_22_55

We hope you enjoy these demos and find them useful.

Image Trail Effects was written by Mary Lou and published on Codrops.

How to Create a Fake 3D Image Effect with WebGL

WebGL is becoming quite popular these days as it allows us to create unique interactive graphics for the web. You might have seen the recent text distortion effects using Blotter.js or the animated WebGL lines created with the THREE.MeshLine library. Today you’ll see how to quickly create an interactive “fake” 3D effect for images with plain WebGL.

If you use Facebook, you might have seen the update of 3D photos for the news feed and VR. With special phone cameras that capture the distance between the subject in the foreground and the background, 3D photos bring scenes to life with depth and movement. We can recreate this kind of effect with any photo, some image editing and a little bit of coding.

Usually, these kinds of effects rely on either Three.js or Pixi.js, powerful libraries that come with many useful features and simplifications. Today we won’t use any libraries but go with the native WebGL API instead.

So let’s dig in.

Getting started

A great place to help you get started with WebGL is webglfundamentals.org. WebGL is usually berated for its verbosity, and there is a reason for that: the foundation of all fullscreen shader effects (even 2D ones) is some sort of plane or mesh – a so-called quad – stretched over the whole screen. Speaking of verbosity: while in three.js we would simply write new THREE.PlaneGeometry(1, 1) to create a 1×1 plane, here is what we need in plain WebGL:

const vertices = new Float32Array([
  -1, -1,
   1, -1,
  -1,  1,
   1,  1,
]);

const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

Now that we have our plane, we can apply vertex and fragment shaders to it.
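
Compiling and linking those shaders is just as verbose. Here’s a minimal helper – a sketch with bare-bones error handling – that compiles the two GLSL sources, links them into a program, and wires the position attribute to the buffer we just created:

function createProgram(gl, vertexSource, fragmentSource) {
  // Compile a single shader stage and surface any compilation errors
  const compile = (type, source) => {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error(gl.getShaderInfoLog(shader));
    }
    return shader;
  };

  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  gl.useProgram(program);

  // Feed the quad's buffer into the shader's `position` attribute
  const position = gl.getAttribLocation(program, 'position');
  gl.enableVertexAttribArray(position);
  gl.vertexAttribPointer(position, 2, gl.FLOAT, false, 0, 0);

  return program;
}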

Preparing the image

For our effect to work, we need to create a depth map of the image. The main principle in building a depth map is to separate parts of the image according to their Z position, i.e. how far or close they are, thereby isolating the foreground from the background.

For that, we can open the image in Photoshop and paint gray areas over the original photo in the following way:

fake3d_01

This image shows some mountains where you can see that the closer the objects are to the camera, the brighter the area is painted in the depth map. Let’s see in the next section why this kind of shading makes sense.

Shaders

The rendering logic is mostly happening in shaders. As described in the MDN web docs:

A shader is a program, written using the OpenGL ES Shading Language (GLSL), that takes information about the vertices that make up a shape and generates the data needed to render the pixels onto the screen: namely, the positions of the pixels and their colors. There are two shader functions run when drawing WebGL content: the vertex shader and the fragment shader.

A great resource to learn more about shaders is The Book Of Shaders.

The vertex shader won’t do much; it just passes the vertices through:

attribute vec2 position;

void main() {
  gl_Position = vec4(position, 0, 1);
}

The most interesting part will happen in a fragment shader. Let’s load the two images there:

// (declarations for the `uv` coordinate are omitted here for brevity)
uniform sampler2D originalImage;
uniform sampler2D depthImage;

void main() {
  vec4 depth = texture2D(depthImage, uv);
  gl_FragColor = texture2D(originalImage, uv); // just showing the original photo for now
}

Remember, the depth map image is black and white. To a shader, color is just a number: 1 is white and 0 is pitch black. The uv variable holds the two-dimensional coordinates of the pixel currently being drawn, i.e. which texel to sample. With these two things we can use the depth information to shift the pixels of the original photo a little bit.

Let’s start with a mouse movement:

vec4 depth = texture2D(depthImage, uv);
gl_FragColor = texture2D(originalImage, uv + mouse);

Here is how it looks:

fake3d_02

Now let’s add the depth:

vec4 depth = texture2D(depthImage, uv);
gl_FragColor = texture2D(originalImage, uv + mouse*depth.r);

And here we are:

fake3d_03

Because the texture is black and white, we can just take the red channel (depth.r) and multiply it by the mouse position on the screen. That means the brighter a pixel is, the more it will move with the mouse; dark pixels will just stay in place. It’s so simple, yet it results in such a nice 3D illusion of an image.
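
To drive that mouse uniform, we can listen for pointer movement, convert it into a small offset and redraw the quad. A rough sketch (the uniform name matches the shader above; the amplitude is an arbitrary choice):

const mouseLocation = gl.getUniformLocation(program, 'mouse');

document.addEventListener('mousemove', (e) => {
  // Map the cursor to a small offset around the image center
  const x = (e.clientX / window.innerWidth - 0.5) * 0.02;
  const y = (e.clientY / window.innerHeight - 0.5) * 0.02;
  gl.uniform2f(mouseLocation, x, -y); // flip Y: screen and texture coordinates disagree
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
});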

Of course, shaders are capable of doing all kinds of other crazy things, but I hope you like this small experiment of “faking” a 3D movement. Let me know what you think about it, and I hope to see your creations with this!

How to Create a Fake 3D Image Effect with WebGL was written by Yuriy Artyukh and published on Codrops.