How to Create Procedural Clouds Using Three.js Sprites

Today we are going to create an animated cloud using a custom shader material, extending the built-in Sprite material of Three.js.

We’ll assume that you are familiar with React (including Hooks), Three.js and React-Three-Fiber. If not, you might find this article that I wrote as a beginner’s intro to the library helpful as a quick start.

The technique that we’ll explain was used in two recent projects made at Low:

1955-Horsebit: a website for the launch campaign of Gucci’s 1955 Horsebit bag.
Let girls dream: a project to support a gender-equal future.

We won’t cover the other elements of the demo, like the background, and we also won’t build the most performant cloud possible, because that would get a bit too complex; the purpose of this article is to get you familiar with the technique while keeping it simple.

Note that using React is not mandatory here, but I started using React-Three-Fiber for all my demos and projects, so I’ve opted to use it here, too.

We will cover two main points in this article:

  1. How to extend a SpriteMaterial
  2. How to write the shader of the cloud

Extending the sprite material

Since the goal is not to create a volumetric cloud (a real 3D cloud), I decided to extend a Three.js SpriteMaterial. We can leverage the fact that a Sprite always faces the camera, independently of the camera’s position or orientation. So whether you move the cloud or the camera, you’ll always see its face, which helps fake the missing 3D volume (check out the debug mode to get the idea).

Note: If you head to the demo and add /?debug=true to the URL it will enable the Orbit Controls which will give you some visual insight into why I decided to use the Sprite material.

There are multiple ways to extend a built-in material of Three.js, and you can find a good explanation in this article by Dusan Bosnjak.

import {ShaderMaterial, UniformsUtils, ShaderLib} from 'three'
import fragment from '~shaders/cloud.frag'
import vertex from '~shaders/cloud.vert'

/**
 * We take the uniforms of the Sprite material
 * and merge them with our own uniforms.
 */
const myUniforms = useMemo(() => ({
   .......
}), [])

const material = useMemo(() => {
  const mat = new ShaderMaterial({
    uniforms: {...UniformsUtils.clone(ShaderLib.sprite.uniforms), ...myUniforms},
    vertexShader: vertex,
    fragmentShader: fragment,
    transparent: true,
  })
  return mat
}, [])
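
If you’re wondering how this material ends up on screen: here is a minimal sketch (not the article’s exact code) of how the component that creates the material could attach it to a sprite and drive a uTime uniform every frame. The uniform name mirrors the fragment shader shown later; the scale and the millisecond conversion are assumptions.

// Hedged sketch: inside the same component that creates `material` above.
// useFrame comes from react-three-fiber; scale and the ms conversion are assumed.
useFrame(({ clock }) => {
  // the fragment shader multiplies uTime by very small factors,
  // so we pass the elapsed time in milliseconds here
  material.uniforms.uTime.value = clock.getElapsedTime() * 1000
})

return <sprite material={material} scale={[10, 10, 1]} />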

We need to compose our vertex shader, adding the necessary #include code snippets. If you are interested in how materials are built in Three.js you can have a look at the source code.

uniform float rotation;
uniform vec2 center;
#include <common>
#include <uv_pars_vertex>
#include <fog_pars_vertex>
#include <logdepthbuf_pars_vertex>
#include <clipping_planes_pars_vertex>

varying vec2 vUv;

void main() {
  vUv = uv;

	vec4 mvPosition = modelViewMatrix * vec4( 0.0, 0.0, 0.0, 1.0 );
	vec2 scale;
	scale.x = length( vec3( modelMatrix[ 0 ].x, modelMatrix[ 0 ].y, modelMatrix[ 0 ].z ) );
	scale.y = length( vec3( modelMatrix[ 1 ].x, modelMatrix[ 1 ].y, modelMatrix[ 1 ].z ) );

	vec2 alignedPosition = ( position.xy - ( center - vec2( 0.5 ) ) ) * scale;
	vec2 rotatedPosition;
	rotatedPosition.x = cos( rotation ) * alignedPosition.x - sin( rotation ) * alignedPosition.y;
	rotatedPosition.y = sin( rotation ) * alignedPosition.x + cos( rotation ) * alignedPosition.y;
	mvPosition.xy += rotatedPosition;

	gl_Position = projectionMatrix * mvPosition;

	#include <logdepthbuf_vertex>
	#include <clipping_planes_vertex>
	#include <fog_vertex>
}

In this way we have created a custom Sprite material. We could achieve the same effect in other ways, but I decided to extend a built-in material because it could be useful in the future for adding custom logic. It’s now time to dig into the fragment shader.

The cloud’s fragment shader

To create the cloud we need two assets: one is the rough starting shape of the cloud, the other is the starting point of the texture/pattern.
Keep in mind that both textures could be generated directly in the shader, but that would cost some extra GPU calculations. That is fine, but if you can avoid it, it’s good practice to optimise the shader, too.

Using some images instead of creating them with code could save you some computational power.

Sliding textures

First of all, let’s create two sliding textures from the noise image above (uTxtCloudNoise); we will use them later to handle the alpha channel of the output. These sliding textures help us create a “fake” noise effect by combining, adding and multiplying them.

vec4 txtNoise1 = texture2D(uTxtCloudNoise, vec2(vUv.x + uTime * 0.0001, vUv.y - uTime * 0.00014));
vec4 txtNoise2 = texture2D(uTxtCloudNoise, vec2(vUv.x - uTime * 0.00002, vUv.y + uTime * 0.000017 + 0.2));

Noise

We now need some GLSL noise: Simplex noise and Fractional Brownian motion (FBM), which allow us to morph the shape and create the vaporous border effect.

Let’s first create the Simplex and FBM noise to distort our UV.
We will use the FBM to achieve the effect for the border of the cloud, to make it look like smoke, and we will use the Simplex noise to morph the shape of the cloud.

The distorted UV, now called newUv, will be used during the declaration of the txtShape:

#pragma glslify: fbm3d = require('glsl-fractal-brownian-noise/3d')
#pragma glslify: snoise3 = require('glsl-noise/simplex/3d')

// FBM
float noiseBig = (fbm3d(vec3(vUv, uTime), 4) + 1.0) * 0.5;
newUv += noiseBig * uDisplStrenght1;

// SIMPLEX
float noiseSmall = (snoise3(vec3(newUv, uTime)) + 1.0) * 0.5;
newUv += noiseSmall * uDisplStrenght2;

......
vec4 txtShape = texture2D(uTxtShape, newUv);

And this is what the noise looks like:

Mask & Alpha

To create the mask for the cloud we will use the shape texture (uTxtShape) we saw at the beginning and the result of the sliding textures we mentioned earlier.

The following output is the result of the masking only. The border and shape effects are fine, but the internal pattern/color is not:

Now we calculate the alpha applied to the sliding textures from before. We’ll use the levels function, taken from here, which works more or less like the Photoshop levels tool.
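
The levels helper itself is only linked, not listed; as a reference, here is a hedged GLSL sketch of a Photoshop-style levels function whose signature, levels(color, inMin, gamma, inMax), is inferred from the call below rather than copied from the linked helper:

// Hedged sketch of a Photoshop-style levels function
// (signature inferred from the call below, not the linked helper).
vec4 levels(vec4 color, float inMin, float gamma, float inMax) {
  // remap the input range to 0..1, then apply a gamma curve
  vec4 remapped = clamp((color - inMin) / (inMax - inMin), 0.0, 1.0);
  return pow(remapped, vec4(1.0 / gamma));
}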

Combining the distorted shape texture (uTxtShape) with the red channel of the sliding textures gives us the external shape and also the internal “cloud pattern”, creating a more realistic look and feel:

vec4 txtShape = texture2D(uTxtShape, newUv);

float alpha = levels((txtNoise1 + txtNoise2) * 0.6, 0.2, 0.4, 0.7).r;
alpha *= txtShape.r;

gl_FragColor = vec4(vec3(0.95,0.95,0.95), alpha);

Putting everything together

It’s time now to wrap everything up to display the final output:

void main() {
  vec2 newUv = vUv;

  // Sliding textures
  vec4 txtNoise1 = texture2D(uTxtCloudNoise, vec2(vUv.x + uTime * 0.0001, vUv.y - uTime * 0.00014)); // noise txt
  vec4 txtNoise2 = texture2D(uTxtCloudNoise, vec2(vUv.x - uTime * 0.00002, vUv.y + uTime * 0.000017 + 0.2)); // noise txt

  // Calculate the FBM and distort the UV
  float noiseBig = (fbm3d(vec3(vUv * uFac1, uTime * uTimeFactor1), 4) + 1.0) * 0.5;
  newUv += noiseBig * uDisplStrenght1;
  
  // Calculate the Simplex and distort the UV
  float noiseSmall = (snoise3(vec3(newUv * uFac2, uTime * uTimeFactor2)) + 1.0) * 0.5;

  newUv += noiseSmall * uDisplStrenght2;

  // Create the shape (mask)
  vec4 txtShape = texture2D(uTxtShape, newUv);

  // Alpha
  float alpha = levels((txtNoise1 + txtNoise2) * 0.6, 0.2, 0.4, 0.7).r;
  alpha *= txtShape.r;

  gl_FragColor = vec4(vec3(0.95,0.95,0.95), alpha);
}

Final thoughts

Keep in mind that this is not the most performant way to create a cloud, but it’s a simple one. Using noise functions is expensive, but for the sake of this tutorial it should suffice.

If you have any thoughts, improvements or doubts, please feel free to write to me on Twitter, I’ll be happy to help.

How to Create Procedural Clouds Using Three.js Sprites was written by Robert Borghesi and published on Codrops.

Case Study: Akaru 2019

In 2019, a new version of our Akaru studio website was released.

After long discussions between developers and designers, we found the creative path we wanted to take for the redesign. The idea was to create a connection between our name, Akaru, and the graphic style. Akaru means “to highlight” in Japanese, so we wanted the site to convey the light spectrum, the iridescence and the reflections that light can have on certain surfaces.

The particularity of the site is the mix of regular content in the DOM/CSS and interactive background content in WebGL. We’ll have a look at how we planned and decided on the visuals of the main effect, and in the second part we’ll share a technical overview and show how the “iridescent oil” effect was coded.

Design of the liquid effect

In the following, we will go through our iteration process between design and implementation and how we decided on the visuals of the interactive liquid/oil effect.

Visual Search

After some in-depth research, we built a mood board drawing on the work of 3D artists and photographers. From it we selected several colors and drew on the details present in the liquid images we considered: the mixture of fluids, the streaks of colors and lights.

Process

We started creating our first texture tests in Photoshop using vector shapes, brushes, distortions and blurs. After several tests we were able to make our first environment test with an interesting graphic rendering. The best method was to first draw the waves and shapes, then paint over them and mix the colors with different blending modes.

Challenges

The main challenge was to “feel” a liquid effect on the textures. It was from this moment that the exchange between designers and developers became essential. To achieve this effect, the developers created a parametric tool where the designers could upload a texture, and then decide on the fluid movements using a Flowmap. From there, we could manage amplitudes, noise speed, scale and a multitude of options.

Implementing the iridescent oil effect

Now we will go through the technical details of how the iridescent oil effect was implemented on every page. We are assuming some basic knowledge of WebGL with Three.js and the GLSL language so we will skip over commonly used code, like scene initialization.

Creating the plane

For this project, we use the OrthographicCamera of Three.js. This camera removes all perspective, so we can create our plane without worrying about its depth.

We will create our plane with a geometry which has the width of the viewport, and we get the height by multiplying the width of the plane by the aspect ratio of our texture:

const PLANE_ASPECT_RATIO = 9 / 16;

const planeWidth = window.innerWidth;
const planeHeight = planeWidth * PLANE_ASPECT_RATIO;

const geometry = new PlaneBufferGeometry(planeWidth, planeHeight);

We can keep the default number of segments since this effect runs entirely in the fragment shader. By doing so, we reduce the number of vertices we have to render, which is always good for performance.
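
For completeness, here is a hedged sketch of an orthographic camera sized so that one world unit maps to one CSS pixel and the plane above spans the full viewport width; the near/far values are assumptions, not the studio’s actual setup:

import { OrthographicCamera } from 'three';

// One world unit = one CSS pixel, so the plane created above
// (planeWidth = window.innerWidth) covers the whole viewport width.
const camera = new OrthographicCamera(
  window.innerWidth / -2,  // left
  window.innerWidth / 2,   // right
  window.innerHeight / 2,  // top
  window.innerHeight / -2, // bottom
  0.1,                     // near (assumed)
  10                       // far (assumed)
);
camera.position.z = 1;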

Then, in the shader we use the UVs to sample our texture:

vec3 color = texture2D(uTexture, vUv).rgb;

gl_FragColor = vec4(color, 1.0);

Oil motion

Now that our texture is rendered on our plane, we need to make it flow.

To create some movement, we sample the texture twice, each time with a different offset:

float phase1 = fract(uTime * uFlowSpeed + 0.5);
float phase2 = fract(uTime * uFlowSpeed + 1.0);

// mirroring phase
phase1 = 1.0 - phase1;
phase2 = 1.0 - phase2;

vec3 color1 = texture2D(
    uTexture,
    uv + vec2(0.5, 0.0) * phase1).rgb;

vec3 color2 = texture2D(
    uTexture,
    uv + vec2(0.5, 0.0) * phase2).rgb;

Then we blend our two textures together:

float flowLerp = abs((0.5 - phase1) / 0.5);
vec3 finalColor = mix(color1, color2, flowLerp);

return finalColor;

But we don’t want our texture to always flow in the same direction; we want some areas to flow up, others to flow to the right, and so on. To achieve this, we used a flow map (or vector map), which looks like this:

Example Flow Map

A flow map is a texture in which every pixel contains a direction represented as a 2D vector (x, y). In this texture, the red component stores the direction along the x axis, while the green component stores the direction along the y axis. Areas where the liquid is stationary are mid red and mid green (you can find those areas at the top of the map). Each axis can flow in either of two directions; for example, on the x axis the liquid can go left or right. To store this information, a red value of 0 makes the texture flow to the left and a red value of 255 makes it flow to the right. In the shader, we implement this logic like this:

vec2 flowDir = texture2D(uFlowMap, uv).rg;
// remap from [0, 1] to [-0.5, 0.5] so that mid red / mid green means no flow
flowDir -= 0.5;

// mirroring phase
phase1 = 1.0 - phase1;
phase2 = 1.0 - phase2;

vec3 color1 = texture2D(
    uTexture,
    uv + flowDir * phase1).rgb;

vec3 color2 = texture2D(
    uTexture,
    uv + flowDir * phase2).rgb;

We painted this map in Photoshop and unfortunately, with every export format (JPEG, PNG, etc.), we always got some weird artefacts. We found out that using PNG resulted in the least “glitchy” exports we could obtain; we guess it comes from Photoshop’s export compression algorithm. These artefacts are invisible to the eye and can only be seen when the image is used as a map. To fix that, we blur the texture twice with glsl-fast-gaussian-blur (once vertically and once horizontally) and blend the two results together:

vec4 horizontalBlur = blur(
    uFlowMap,
    uv,
    uResolution,
    vec2(uFlowMapBlurRadius, 0.0)
  );
vec4 verticalBlur = blur(
    uFlowMap,
    uv,
    uResolution,
    vec2(0.0, uFlowMapBlurRadius)
  );
vec4 texture = mix(horizontalBlur, verticalBlur, 0.5);

As you can see, we used glslify to import GLSL modules hosted on npm; it’s very useful to keep your shader code split up and as simple as possible.

Make it feel more natural

Now that our liquid flows, we can clearly see when the liquid is repeating. But liquid doesn’t flow this way in real life. To create a better illusion of a realistic flow movement, we added some turbulence to distort our textures. 

To create this turbulence we use glsl-noise to compute a 3D noise in which x and y are the UVs, downscaled a bit to create a large distortion, and z is the time elapsed since the first frame; this gives us an animated, seamless noise:

float x = uv.x * uNoiseScaleX;
float y = uv.y * uNoiseScaleY;

float n = cnoise3(vec3(x, y, uTime * uNoiseSpeed));

Then, instead of sampling our flow with the default UVs, we distort them with the noise:

vec2 distortedUv = uv + applyNoise(uv);

vec3 color1 = texture2D(
    uTexture,
    distortedUv + flowDir * phase1).rgb;

...

On top of that, we use a uniform called uNoiseAmplitude to control the noise strength.

To observe how the noise influences the rendering, you can tweak it inside the “Noise” folder in the GUI at the top right of the screen. For example, try to tweak the “amplitude” value:

Adding the mouse trail

To add some user interaction, we wanted to create a trail inside the oil, like a finger pushing the oil, where the finger would be the user’s pointer. This effect consists of three things:

1. Computing the mouse trail

To achieve this we used a Frame Buffer (or FBO). I will not go very deep into what frame buffers are here, but if you want to, you can learn everything about them here.

Basically, it will: 

  1. Draw a circle in the current mouse position
  2. Render this as a texture
  3. Store this texture
  4. Use this texture on the next frame to draw the new mouse position on top of it
  5. Repeat

By doing so, we have a trail drawn by the mouse and everything runs on the GPU! For this kind of simulation, running it on the GPU is way more performant than running it on the CPU.
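
As an illustration of step 4 above, here is a hedged sketch of the classic ping-pong setup with two render targets; the sizes, names and the uPreviousFrame uniform are assumptions rather than the actual Akaru code:

import { WebGLRenderTarget } from 'three';

// Two render targets we alternately read from and write to ("ping-pong").
const rtA = new WebGLRenderTarget(256, 256);
const rtB = new WebGLRenderTarget(256, 256);
let read = rtA;
let write = rtB;

function updateTrail(renderer, trailScene, trailCamera, trailMaterial) {
  // feed last frame's result back in so the existing trail is preserved
  trailMaterial.uniforms.uPreviousFrame.value = read.texture;

  renderer.setRenderTarget(write);
  renderer.render(trailScene, trailCamera);
  renderer.setRenderTarget(null);

  // swap: what we just wrote becomes next frame's input
  [read, write] = [write, read];

  // this is what the oil shader receives as uTrailTexture
  return read.texture;
}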

2. Blending the trail with the flow map

We can use the frame buffer as a texture. It will be a black texture with a white trail painted by the mouse. So we pass our trail texture to our oil shader via a uniform and read it like this:

float trail = texture2D(uTrailTexture, uv).r;

We use only the red component of the texture since it’s a grayscale map, so all channels are equal.

Then, inside our flow function, we use the trail to change the direction of our liquid texture’s flow:

flowDir.x -= trail;
flowDir.y += trail * 0.075;

3. Adding the mouse acceleration

When we move a finger through a liquid, the trail it creates depends on how fast the finger moves. To recreate this feeling we make the radius of the trail depend on the mouse speed: the faster the mouse goes, the bigger the trail gets.

To find the mouse speed we compute the difference between the damped and the current mouse position:

const deltaMouse = clamp(this.mouse.distanceTo(this.smoothedMouse), 0.0, 1.0) * 100;

Then we normalize it and apply an easing to this value with the easing functions provided by TweenMax to avoid creating a linear acceleration.

const normDeltaMouse = norm(deltaMouse, 0.0, this.maxRadius);
const easeDeltaMouse = Circ.easeOut.getRatio(normDeltaMouse);
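
The smoothedMouse used above is the damped position; here is a hedged sketch of how it might be updated every frame (the lerp factor is an assumption):

// Per-frame update: ease the stored position toward the current mouse position.
// this.mouse and this.smoothedMouse are assumed to be THREE.Vector2 instances.
updateMouse() {
  this.smoothedMouse.lerp(this.mouse, 0.1); // 0.1 = damping factor (assumed)
}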

The Tech Stack

Here’s an overview of the technologies we’ve used in our project:

  • three.js for the WebGL part
  • Vue.js for the DOM part; it allows us to wrap the WebGL inside a component which communicates easily with the rest of the UI
  • GSAP is the tweening library we love and use in almost every project, as it is well optimized and performant
  • Nuxt.js to pre-render during deployment and serve all our pages as static files
  • Prismic, a really easy-to-use headless CMS with a nice API for image formatting and a lot of other useful features

Conclusion

We hope you liked this case study. If you have any questions, feel free to ask us on Twitter (@lazyheart and @colinpeyrat); we would be very happy to receive your feedback!

Case Study: Akaru 2019 was written by Jeremy Franzese and published on Codrops.

Creative WebGL Image Transitions

Everybody loves images. They are bright and colorful and we can do fun things with them. Even text sometimes strives to be an image:

     |\_/|                  
     | @ @   Woof! 
     |   <>              _  
     |  _/\------____ ((| |))
     |               `--' |   
 ____|_       ___|   |___.' 
/_/_____/____/_______|

Once you want to show more than one image, you can’t help making a transition between them. Or is it just me?

Jokes aside, image transitions are all over the web. They can be powered by CSS, SVG or WebGL. But of course, the most efficient way to work with graphics in the browser is to use the graphics processor, or GPU. And the best way to do this is with WebGL, specifically with shaders written in GLSL.

Today we want to show you some interesting image transition experiments that reveal the boundless possibilities of WebGL. These effects were inspired by the countless incredible design examples and effects seen on websites like The Avener and Oversize Studio.

Setup

I will be using the Three.js framework for my transitions. It doesn’t really matter which library you use; it could just as well have been the amazing Pixi.js library, or simply (though not as straightforward) native WebGL. I used native WebGL in my previous experiment, so this time I’m going to use Three.js. It also seems the most beginner-friendly option to me.

Three.js uses concepts like Camera, Scene and Objects. We will create a simple Plane object, add it to the Scene and put it in front of the Camera, so that it is the only thing you can see. There is a built-in geometry for that kind of object, PlaneBufferGeometry.

To cover the whole screen with a plane you need a little bit of geometry. The Camera has a fov (field of view), and the plane has a size. So with some calculations you can get it to fill your whole screen:

camera.fov = 2*(180/Math.PI)*Math.atan(PlaneSize/(2*CameraDistance));

It looks complicated, but it’s just calculating the angle (fov) from the known distances.
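
To make the formula concrete, here is a hedged sketch of a full-screen plane set up this way; the distances, the placeholder material and the existing scene are assumptions:

const cameraDistance = 1; // how far the camera sits from the plane (assumed)
const planeSize = 1;      // world-space height of the plane (assumed)

const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = cameraDistance;

// solve for the vertical fov so the plane exactly fills the view
camera.fov = 2 * (180 / Math.PI) * Math.atan(planeSize / (2 * cameraDistance));
camera.updateProjectionMatrix();

// stretch the plane horizontally to match the viewport aspect ratio
const geometry = new THREE.PlaneBufferGeometry(planeSize * camera.aspect, planeSize);
const mesh = new THREE.Mesh(geometry, material); // material = the transition ShaderMaterial
scene.add(mesh);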

That is actually the end of the 3D part, everything else will be happening in 2D.

GLSL

In case you are not yet familiar with this language, I highly advise you to check out the wonderful Book Of Shaders.

So, we have a plane with a fragment shader attached to it that calculates each pixel’s color. How do we make a transition? The simplest one done with a shader looks like this:

void main() {
  vec4 image1 = texture2D(texture1,uv);
  vec4 image2 = texture2D(texture2,uv);
  gl_FragColor = mix(image1, image2, progress);
}

Where progress is a number between 0 and 1, indicating the progress of the animation.
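
On the JavaScript side, here is a hedged sketch of the material behind this fade and of tweening progress; the texture paths, the shader sources and the GSAP call are assumptions, not part of the demo code:

const loader = new THREE.TextureLoader();

const material = new THREE.ShaderMaterial({
  uniforms: {
    texture1: { value: loader.load('img1.jpg') }, // assumed paths
    texture2: { value: loader.load('img2.jpg') },
    progress: { value: 0 },
  },
  vertexShader,   // assumed to be defined elsewhere
  fragmentShader, // the fragment shader shown above
});

// run the transition by tweening progress from 0 to 1, e.g. with GSAP
document.addEventListener('click', () => {
  gsap.to(material.uniforms.progress, { value: 1, duration: 1.5 });
});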

With that kind of code you will get a simple fade transition between images. But that’s not that cool, right?

Cool transitions

Usually all transitions are based on changing the so-called UVs, i.e. the way the texture is wrapped on the plane. For instance, multiplying the UVs scales the image, while adding a number shifts the image on the plane.

UVs are nothing magical. Think of them as a coordinate system for pixels on a plane:

Let’s start with some basic code:

gl_FragColor = texture2D(texture,uv);

This just shows an image on the screen. Now let’s adjust that code a bit:

gl_FragColor = texture2D(texture,fract(uv + uv));

By taking the fractional part, we make sure that all the values stay between 0 and 1. If the UVs went from 0 to 1, doubling them means they go from 0 to 2, so we see the fractional part go from 0 to 1, and then from 0 to 1 again!

And that’s what you get: a repeated image. Now let’s try something different: subtracting UV and using the progress for the animation:

gl_FragColor = texture2D(texture, uv - uv * vec2(1.,0) * progress * 0.5);

First, we make sure that we are only changing one axis of UV, by multiplying it with vec2(1.,0). So, when the progress is 0, it should be the default image. Let’s see:

Now we can stretch the image! Let’s combine those two effects into one.

gl_FragColor = texture2D(uTextureOne, uv - fract(uv * vec2(5.,0.)) * progress * 0.1 );

So basically, we do the stretching and repeat it 5 times. We could use any other number as well.

Much better! Next, if we add another image, we get the effect that you can see in demo 7.

Cool isn’t it? Just two simple arithmetic operations, and you get an interesting transition effect.

That’s just one way of changing UVs. Check out all the other demos, and try to guess the math behind them! Try to come up with your own unique animation and share it with me!

Creative WebGL Image Transitions was written by Yuriy Artyukh and published on Codrops.

Interactive Particles with Three.js

This tutorial is going to demonstrate how to draw a large number of particles with Three.js and an efficient way to make them react to mouse and touch input using shaders and an off-screen texture.

Attention: You will need an intermediate level of experience with Three.js. We will omit some parts of the code for brevity and assume you already know how to set up a Three.js scene and how to import your shaders — in this demo we are using glslify.

Instanced Geometry

The particles are created based on the pixels of an image. Our image’s dimensions are 320×180, or 57,600 pixels.

However, we don’t need to create one geometry for each particle. We can create only a single one and render it 57,600 times with different parameters. This is called geometry instancing. With Three.js we use InstancedBufferGeometry to define the geometry, BufferAttribute for attributes which remain the same for every instance and InstancedBufferAttribute for attributes which can vary between instances (i.e. colour, size).

The geometry of our particles is a simple quad, formed by 4 vertices and 2 triangles.



const geometry = new THREE.InstancedBufferGeometry();

// positions
const positions = new THREE.BufferAttribute(new Float32Array(4 * 3), 3);
positions.setXYZ(0, -0.5, 0.5, 0.0);
positions.setXYZ(1, 0.5, 0.5, 0.0);
positions.setXYZ(2, -0.5, -0.5, 0.0);
positions.setXYZ(3, 0.5, -0.5, 0.0);
geometry.addAttribute('position', positions);

// uvs
const uvs = new THREE.BufferAttribute(new Float32Array(4 * 2), 2);
uvs.setXY(0, 0.0, 0.0);
uvs.setXY(1, 1.0, 0.0);
uvs.setXY(2, 0.0, 1.0);
uvs.setXY(3, 1.0, 1.0);
geometry.addAttribute('uv', uvs);

// index
geometry.setIndex(new THREE.BufferAttribute(new Uint16Array([ 0, 2, 1, 2, 3, 1 ]), 1));

Next, we loop through the pixels of the image and assign our instanced attributes. Since the word position is already taken, we use the word offset to store the position of each instance. The offset will be the x,y of each pixel in the image. We also want to store the particle index and a random angle which will be used later for animation.


const indices = new Uint16Array(this.numPoints);
const offsets = new Float32Array(this.numPoints * 3);
const angles = new Float32Array(this.numPoints);

for (let i = 0; i < this.numPoints; i++) {
	offsets[i * 3 + 0] = i % this.width;
	offsets[i * 3 + 1] = Math.floor(i / this.width);

	indices[i] = i;

	angles[i] = Math.random() * Math.PI;
}

geometry.addAttribute('pindex', new THREE.InstancedBufferAttribute(indices, 1, false));
geometry.addAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3, false));
geometry.addAttribute('angle', new THREE.InstancedBufferAttribute(angles, 1, false));

Particle Material

The material is a RawShaderMaterial with custom shaders particle.vert and particle.frag.

The uniforms are described as follows:

  • uTime: elapsed time, updated every frame
  • uRandom: factor of randomness used to displace the particles in x,y
  • uDepth: maximum oscillation of the particles in z
  • uSize: base size of the particles
  • uTexture: image texture
  • uTextureSize: dimensions of the texture
  • uTouch: touch texture

const uniforms = {
	uTime: { value: 0 },
	uRandom: { value: 1.0 },
	uDepth: { value: 2.0 },
	uSize: { value: 0.0 },
	uTextureSize: { value: new THREE.Vector2(this.width, this.height) },
	uTexture: { value: this.texture },
	uTouch: { value: null }
};

const material = new THREE.RawShaderMaterial({
	uniforms,
	vertexShader: glslify(require('../../../shaders/particle.vert')),
	fragmentShader: glslify(require('../../../shaders/particle.frag')),
	depthTest: false,
	transparent: true
});

A simple vertex shader would output the position of the particles according to their offset attribute directly. To make things more interesting, we displace the particles using randomness and noise. The same goes for the particles’ sizes.


// particle.vert

void main() {
	// displacement
	vec3 displaced = offset;
	// randomise
	displaced.xy += vec2(random(pindex) - 0.5, random(offset.x + pindex) - 0.5) * uRandom;
	float rndz = (random(pindex) + snoise_1_2(vec2(pindex * 0.1, uTime * 0.1)));
	displaced.z += rndz * (random(pindex) * 2.0 * uDepth);

	// particle size
	float psize = (snoise_1_2(vec2(uTime, pindex) * 0.5) + 2.0);
	psize *= max(grey, 0.2);
	psize *= uSize;

	// (...)
}

The fragment shader samples the RGB colour from the original image and converts it to greyscale using the luminosity method (0.21 R + 0.72 G + 0.07 B).

The alpha channel is determined by the linear distance to the centre of the UV, which essentially creates a circle. The border of the circle can be blurred out using smoothstep.


// particle.frag

void main() {
	// pixel color
	vec4 colA = texture2D(uTexture, puv);

	// greyscale
	float grey = colA.r * 0.21 + colA.g * 0.71 + colA.b * 0.07;
	vec4 colB = vec4(grey, grey, grey, 1.0);

	// circle
	float border = 0.3;
	float radius = 0.5;
	float dist = radius - distance(uv, vec2(0.5));
	float t = smoothstep(0.0, border, dist);

	// final color
	color = colB;
	color.a = t;

	// (...)
}

Optimisation

In our demo we set the size of the particles according to their brightness, which means dark particles are almost invisible. This makes room for some optimisation. When looping through the pixels of the image, we can discard the ones which are too dark. This reduces the number of particles and improves performance.


The optimisation starts before we create our InstancedBufferGeometry. We create a temporary canvas, draw the image onto it and call getImageData() to retrieve an array of colours [R, G, B, A, R, G, B … ]. We then define a threshold (hex #22, or decimal 34) and test it against the red channel. The red channel is an arbitrary choice: we could also use green or blue, or even an average of all three channels, but the red channel is the simplest to use.


// discard pixels darker than threshold #22
if (discard) {
	numVisible = 0;
	threshold = 34;

	const img = this.texture.image;
	const canvas = document.createElement('canvas');
	const ctx = canvas.getContext('2d');

	canvas.width = this.width;
	canvas.height = this.height;
	ctx.scale(1, -1); // flip y
	ctx.drawImage(img, 0, 0, this.width, this.height * -1);

	const imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
	originalColors = Float32Array.from(imgData.data);

	for (let i = 0; i < this.numPoints; i++) {
		if (originalColors[i * 4 + 0] > threshold) numVisible++;
	}
}

We also need to update the loop where we define offset, angle and pindex to take the threshold into account.


for (let i = 0, j = 0; i < this.numPoints; i++) {
	if (originalColors[i * 4 + 0] <= threshold) continue;

	offsets[j * 3 + 0] = i % this.width;
	offsets[j * 3 + 1] = Math.floor(i / this.width);

	indices[j] = i;

	angles[j] = Math.random() * Math.PI;

	j++;
}

Interactivity

Considerations

There are many different ways of introducing interaction with the particles. For example, we could give each particle a velocity attribute and update it on every frame based on its proximity to the cursor. This is a classic technique and it works very well, but it might be a bit too heavy if we have to loop through tens of thousands of particles.

A more efficient way would be to do it in the shader. We could pass the cursor’s position as a uniform and displace the particles based on their distance from it. While this would perform a lot faster, the result could be quite dry. The particles would go to a given position, but they wouldn’t ease in or out of it.

Chosen Approach

The technique we chose in our demo was to draw the cursor position onto a texture. The advantage is that we can keep a history of cursor positions and create a trail. We can also apply an easing function to the radius of that trail, making it grow and shrink smoothly. Everything would happen in the shader, running in parallel for all the particles.


In order to get the cursor’s position we use a Raycaster and a simple PlaneBufferGeometry the same size as our main geometry. The plane is invisible, but interactive.

Interactivity in Three.js is a topic on its own. Please see this example for reference.

When there is an intersection between the cursor and the plane, we can use the UV coordinates in the intersection data to retrieve the cursor’s position. The positions are then stored in an array (trail) and drawn onto an off-screen canvas. The canvas is passed as a texture to the shader via the uniform uTouch.
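
Here is a hedged sketch of that flow: reading the intersection UV with the raycaster, pushing it into the trail array and drawing the trail onto an off-screen canvas that backs the uTouch texture. The canvas size, radius and variable names are assumptions, not the demo's exact code.

const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();
const trail = [];

// off-screen canvas that will become the touch texture
const canvas = document.createElement('canvas');
canvas.width = canvas.height = 64;
const ctx = canvas.getContext('2d');
const touchTexture = new THREE.CanvasTexture(canvas);
uniforms.uTouch.value = touchTexture;

window.addEventListener('mousemove', (e) => {
	mouse.x = (e.clientX / window.innerWidth) * 2 - 1;
	mouse.y = -(e.clientY / window.innerHeight) * 2 + 1;
	raycaster.setFromCamera(mouse, camera);

	const hit = raycaster.intersectObject(hitPlane)[0]; // hitPlane = the invisible plane
	if (hit) trail.push(hit.uv.clone());
});

function drawTrail() {
	ctx.fillStyle = 'black';
	ctx.fillRect(0, 0, canvas.width, canvas.height);
	ctx.fillStyle = 'white';

	trail.forEach((point) => {
		const x = point.x * canvas.width;
		const y = (1 - point.y) * canvas.height;
		ctx.beginPath();
		ctx.arc(x, y, 4, 0, Math.PI * 2); // fixed radius here; the demo eases it over time
		ctx.fill();
	});

	touchTexture.needsUpdate = true;
}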

In the vertex shader the particles are displaced based on the brightness of the pixels in the touch texture.


// particle.vert

void main() {
	// (...)

	// touch
	float t = texture2D(uTouch, puv).r;
	displaced.z += t * 20.0 * rndz;
	displaced.x += cos(angle) * t * 20.0 * rndz;
	displaced.y += sin(angle) * t * 20.0 * rndz;

	// (...)
}

Conclusion

Hope you enjoyed the tutorial! If you have any questions don’t hesitate to get in touch.


Interactive Particles with Three.js was written by Bruno Imbrizi and published on Codrops.