How to Add More Fun to a Game: Extending “The Aviator”

If you like cute little games, you will love Karim Maaloul's "The Aviator": as the pilot, you steer your aircraft across a round little ocean world, evading red "enemies" and collecting blue energy tokens to avoid crashing into the water. It runs entirely in the browser, so make sure to quickly play a round to better understand what we are about to do in this tutorial.

By the way, Karim co-founded the Belgian creative agency Epic. His style is unique in its adorableness, and his animation craftsmanship is astonishing, as you can also see in his series of WebGL experiments.

Karim thankfully wrote about the making of the game and open-sourced the code. While it is already a fun little game, there is still a lot of potential to get even more out of it. In this article we will explore some hands-on changes to bring out the most fun, based on the foundation we have here: a small browser game using Three.js.

This tutorial requires some knowledge of JavaScript and Three.js.

What makes a game fun?

While there obviously is no definitive recipe, there are a few key mechanics that will maximize your chances of generating fun. There is a great compilation on gamedesigning.org, so let's see which items already apply:

✅ Great controls
✅ An interesting theme and visual style
🚫 Excellent sound and music
🚫 Captivating worlds
🤔 Fun gameplay
🚫 Solid level design
🚫 An entertaining story & memorable characters
🤔 Good balance of challenge and reward
✅ Something different

We can see there’s lots to do, too much for a single article of course, so we will get to the general game layout, story, characters and balance later. Now we will improve the gameplay and add sounds — let’s go!

Adding weapons

Guns are always fun! Some games, like Space Invaders, consist entirely of shooting. It is a great mechanic that adds visual excitement, cool sound effects and an extra dimension of skill, so we no longer rely only on the up-and-down movement of the aircraft.

Let’s try some simple gun designs:

The “Simple gun” (top) and the “Better gun” (bottom).

These 3D models consist of only 2–3 cylinders of shiny metal material:

const metalMaterial = new THREE.MeshStandardMaterial({
    color: 0x222222,
    flatShading: true,
    roughness: 0.5,
    metalness: 1.0
})

class SimpleGun {
    static createMesh() {
        const BODY_RADIUS = 3
        const BODY_LENGTH = 20

        const full = new THREE.Group()
        const body = new THREE.Mesh(
            new THREE.CylinderGeometry(BODY_RADIUS, BODY_RADIUS, BODY_LENGTH),
            metalMaterial,
        )
        body.rotation.z = Math.PI/2
        full.add(body)

        const barrel = new THREE.Mesh(
            new THREE.CylinderGeometry(BODY_RADIUS/2, BODY_RADIUS/2, BODY_LENGTH),
            metalMaterial,
        )
        barrel.rotation.z = Math.PI/2
        barrel.position.x = BODY_LENGTH
        full.add(barrel)

        return full
    }
}

We will have three guns: the SimpleGun, the DoubleGun (simply two of those stacked) and the BetterGun, which has slightly different proportions and an extra cylinder at the tip.
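
The DoubleGun isn't spelled out in the snippets here; a minimal sketch, assuming we can just stack two SimpleGun meshes (the vertical offsets are made-up values to tune against the airplane model):

class DoubleGun {
    static createMesh() {
        const full = new THREE.Group()
        const top = SimpleGun.createMesh()
        const bottom = SimpleGun.createMesh()
        top.position.y = 4     // hypothetical offset
        bottom.position.y = -4 // hypothetical offset
        full.add(top)
        full.add(bottom)
        return full
    }
}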

Guns mounted to the airplane

Positioning the guns on the plane was done by simply experimenting with the positional x/y/z values.
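
The mounting code itself isn't shown in the article; here is a sketch of what an equipWeapon method (used further below) might look like, assuming each weapon instance keeps its mesh in this.mesh (the shoot method below relies on that too), with placeholder offsets found by exactly that kind of experimenting:

class Airplane {
  equipWeapon(weapon) {
    if (this.weapon) {
      this.mesh.remove(this.weapon.mesh)
    }
    this.weapon = weapon
    // hand-tuned offsets relative to the airplane body
    weapon.mesh.position.set(-5, -2, 0)
    this.mesh.add(weapon.mesh)
  }
}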

The shooting mechanic itself is straightforward:

class SimpleGun {
  downtime() {
    return 0.1
  }

  damage() {
    return 1
  }

  shoot(direction) {
    const BULLET_SPEED = 0.5
    const RECOIL_DISTANCE = 4
    const RECOIL_DURATION = this.downtime() / 1.5

    const position = new THREE.Vector3()
    this.mesh.getWorldPosition(position)
    position.add(new THREE.Vector3(5, 0, 0))
    spawnProjectile(this.damage(), position, direction, BULLET_SPEED, 0.3, 3)

    // Little explosion at exhaust
    spawnParticles(position.clone().add(new THREE.Vector3(2,0,0)), 1, Colors.orange, 0.2)

    // Recoil of gun
    const initialX = this.mesh.position.x
    TweenMax.to(this.mesh.position, {
      duration: RECOIL_DURATION,
      x: initialX - RECOIL_DISTANCE,
      onComplete: () => {
        TweenMax.to(this.mesh.position, {
          duration: RECOIL_DURATION,
          x: initialX,
        })
      },
    })
  }
}

class Airplane {
  shoot() {
    if (!this.weapon) {
      return
    }

    // rate-limit shooting
    const nowTime = new Date().getTime() / 1000
    if (nowTime-this.lastShot < this.weapon.downtime()) {
      return
    }
    this.lastShot = nowTime

    // fire the shot
    let direction = new THREE.Vector3(10, 0, 0)
    direction.applyEuler(this.mesh.rotation)
    this.weapon.shoot(direction)

    // recoil airplane
    const recoilForce = this.weapon.damage()
    TweenMax.to(this.mesh.position, {
      duration: 0.05,
      x: this.mesh.position.x - recoilForce,
    })
  }
}

// in the main loop
if (mouseDown[0] || keysDown['Space']) {
  airplane.shoot()
}
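
The snippets above rely on a spawnProjectile helper and a global allProjectiles list that the article doesn't show. A minimal sketch of what they could look like; the geometry, the reuse of metalMaterial and the sceneManager calls are assumptions:

const allProjectiles = []

class Projectile {
  constructor(damage, position, direction, speed, width, length) {
    this.damage = damage
    this.direction = direction.clone().normalize()
    this.speed = speed
    this.mesh = new THREE.Mesh(
      new THREE.BoxGeometry(length, width, width),
      metalMaterial,
    )
    this.mesh.position.copy(position)
    allProjectiles.push(this)
    sceneManager.add(this) // assuming a counterpart to sceneManager.remove used below
  }

  tick(deltaTime) {
    // advance along the firing direction each frame
    this.mesh.position.addScaledVector(this.direction, this.speed * deltaTime)
  }

  remove() {
    allProjectiles.splice(allProjectiles.indexOf(this), 1)
    sceneManager.remove(this)
  }
}

function spawnProjectile(damage, position, direction, speed, width, length) {
  return new Projectile(damage, position, direction, speed, width, length)
}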

Now for the collision detection with the enemies: we simply check whether the enemy's bounding box intersects each bullet's box:

class Enemy {
	tick(deltaTime) {
		...
		const thisAabb = new THREE.Box3().setFromObject(this.mesh)
		for (const projectile of allProjectiles) {
			const projectileAabb = new THREE.Box3().setFromObject(projectile.mesh)
			if (thisAabb.intersectsBox(projectileAabb)) {
				spawnParticles(projectile.mesh.position.clone(), 5, Colors.brownDark, 1)
				projectile.remove()
				this.hitpoints -= projectile.damage
			}
		}
		if (this.hitpoints <= 0) {
			this.explode()
		}
	}

	explode() {
		spawnParticles(this.mesh.position.clone(), 15, Colors.red, 3)
		sceneManager.remove(this)
	}
}

Et voilà, we can shoot with different weapons and it's super fun!

Changing the energy system to lives and coins

Currently the game features an energy/fuel bar that slowly drains over time and fills up when collecting the blue pills. I feel like this makes sense, but a more conventional system of lives as health, symbolized by hearts, and coins as goodies is clearer to players and will allow for more flexibility in the gameplay.

In the code, the change from blue pills to golden coins is easy: we changed the color and the geometry from THREE.TetrahedronGeometry(5,0) to THREE.CylinderGeometry(4, 4, 1, 10).
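
In code that's a one-line swap:

// before (blue pill): new THREE.TetrahedronGeometry(5, 0)
// after (golden coin): a flat, ten-sided cylinder
const coinGeometry = new THREE.CylinderGeometry(4, 4, 1, 10)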

The new logic is: we start out with three lives, and whenever our airplane crashes into an enemy we lose one. The number of collected coins shows in the interface. The coins don't yet have a real impact on the gameplay, but they are great for the score board and we can easily add some mechanics later: for example, the player could buy accessories for the airplane with their coins, we could keep a lifetime coin counter, or we could design a game mode where the task is to not miss a single coin on the map.
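
A sketch of that logic; lives, coins, gameOver and the interface call are placeholder names, not the game's actual code:

class Airplane {
  crashIntoEnemy() {
    this.lives -= 1
    if (this.lives <= 0) {
      gameOver() // hypothetical end-of-game handler
    }
  }

  collectCoin() {
    this.coins += 1
    ui.showCoinCount(this.coins) // hypothetical interface update
  }
}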

Adding sounds

This is an obvious improvement and conceptually simple — we just need to find fitting, free sound bites and integrate them.

Luckily, on https://freesound.org and https://www.zapsplat.com/ we can search for sound effects and use them freely; just make sure to attribute where required.

Example of a gun shot sound: https://freesound.org/people/LeMudCrab/sounds/163456/.

We load all 24 sound files at the start of the game; to play a sound we then call a simple audioManager.play('shot-soft'). Repetitively playing the same sound can get boring for the ears when shooting for a few seconds or when collecting a few coins in a row, so we make sure to have several different sounds for those cases and randomly select which one to play.
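
The article doesn't show the audio manager itself; a minimal sketch built on plain Audio elements (file paths and sound names are illustrative):

class AudioManager {
  constructor() {
    this.sounds = {}
  }

  load(name, url) {
    this.sounds[name] = new Audio(url)
  }

  play(name) {
    // clone the element so overlapping plays don't cut each other off
    this.sounds[name].cloneNode().play()
  }

  playRandom(names) {
    this.play(names[Math.floor(Math.random() * names.length)])
  }
}

const audioManager = new AudioManager()
audioManager.load('shot-soft', 'audio/shot-soft.mp3')

// e.g. pick one of several coin sounds to avoid repetition
audioManager.playRandom(['coin-1', 'coin-2', 'coin-3'])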

Be aware, though, that browsers require a page interaction (basically a mouse click) before they allow a website to play sound. This prevents websites from annoyingly auto-playing sounds right after loading. We can simply require a click on a "Start" button after page load to work around this.
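
A sketch, assuming a #start-button element and a startGame entry point:

document.querySelector('#start-button').addEventListener('click', () => {
  // this click is the user gesture that unlocks audio playback
  audioManager.play('button-click') // hypothetical menu sound
  startGame()
})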

Adding collectibles

How do we get the weapons or new lives to the player? We will spawn "collectibles" for that: the item (a heart or gun) floating in a bubble that the player can catch.

We already have the spawning logic in the game for coins and enemies, so we can adapt it easily.

class Collectible {
	constructor(mesh, onApply) {
		this.mesh = new THREE.Object3D()
		const bubble = new THREE.Mesh(
			new THREE.SphereGeometry(10, 100, 100),
			new THREE.MeshPhongMaterial({
				color: COLOR_COLLECTIBLE_BUBBLE,
				transparent: true,
				opacity: .4,
				flatShading: true,
			})
		)
		this.mesh.add(bubble)
		this.mesh.add(mesh)
		...
	}


	tick(deltaTime) {
		rotateAroundSea(this, deltaTime, world.collectiblesSpeed)

		// rotate collectible for visual effect
		this.mesh.rotation.y += deltaTime * 0.002 * Math.random()
		this.mesh.rotation.z += deltaTime * 0.002 * Math.random()

		// collision?
		if (utils.collide(airplane.mesh, this.mesh, world.collectibleDistanceTolerance)) {
			this.onApply()
			this.explode()
		}
		// passed-by?
		else if (this.angle > Math.PI) {
			sceneManager.remove(this)
		}
	}


	explode() {
		spawnParticles(this.mesh.position.clone(), 15, COLOR_COLLECTIBLE_BUBBLE, 3)
		sceneManager.remove(this)
		audioManager.play('bubble')

		// animation to make it very obvious that we collected this item
		TweenMax.to(...)
	}
}


function spawnSimpleGunCollectible() {
	const gun = SimpleGun.createMesh()
	gun.scale.set(0.25, 0.25, 0.25)
	gun.position.x = -2

	new Collectible(gun, () => {
		airplane.equipWeapon(new SimpleGun())
	})
}

And that’s it, we have our collectibles:

The only problem is that I couldn't for the life of me create a heart model from the Three.js primitives, so I resorted to a free, low-poly 3D model from the platform cgtrader.

Defining the spawn logic on the map to strike a good balance of challenge and reward requires sensible refinement. After some experimenting, this felt nice: spawn the three weapons after a distance of 550, 1150 and 1750 respectively, and spawn a life a short while after losing one.
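
A sketch of that spawn schedule, assuming a travelled-distance counter in the main loop; spawnDoubleGunCollectible and spawnBetterGunCollectible are hypothetical siblings of the spawner shown above:

const weaponSpawns = [
  { distance: 550, spawn: spawnSimpleGunCollectible },
  { distance: 1150, spawn: spawnDoubleGunCollectible }, // hypothetical
  { distance: 1750, spawn: spawnBetterGunCollectible }, // hypothetical
]

// called from the main loop with the distance travelled so far
function updateWeaponSpawns(travelledDistance) {
  while (weaponSpawns.length > 0 && travelledDistance >= weaponSpawns[0].distance) {
    weaponSpawns.shift().spawn()
  }
}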

Some more polish

  • The ocean’s color gets darker as we progress through the levels
  • Show more prominently when we enter a new level
  • Show an end game screen after 5 levels
  • Adjusted the code for a newer version of the Three.js library
  • Tweaked the color theme

More, more, more fun!

We went from a simple fly-up-and-down gameplay to being able to collect guns and shoot the enemies. The sounds add to the atmosphere, and the coin mechanics set us up for new features later on.

Make sure to play our result here! Collect the weapons, have fun with the guns and try to survive until the end of level 5.

If you are interested in the source code, you can find it here on GitHub.

How to proceed from here? We improved some key mechanics and have a proper basis, but this is not quite a finished, polished game yet.

As a next step we plan to dive deeper into game design theory. We will look at several of the most popular games in the endless runner genre to gain insights into their structure and mechanics and how they keep players engaged. The aim is to learn more about the advanced concepts and build them into The Aviator.

Subway Surfers, the most successful "endless runner" game.

Stay tuned, so long!


Creating a Fluid Distortion Animation with Three.js

In this new ALL YOUR HTML coding session you'll learn how to code a water-like distortion animation, as seen on the PixiJS website, using Three.js. We'll use shaders and a render target to achieve the fluid effect.

Original website: https://pixijs.com/

This coding session was streamed live on April 10, 2022.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Replicating the Interweave Shape Animation with Three.js

In this new ALL YOUR HTML coding session you'll learn how to reconstruct the beautiful shape animation from the website of the INTERWEAVE agency with Three.js. We'll calculate tangents and bitangents and use Physical materials to make a beautiful object.

Original website: https://interweaveagency.com/

This coding session was streamed live on March 20, 2022.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Creating a Risograph Grain Light Effect in Three.js

Recently, I released my brand new portfolio, home to the projects that I have worked on in the past couple of years:

As I was experimenting for the portfolio, I tried to reproduce this kind of effect I had found on the web:

I really like these 2D grain effects applied to 3D elements. It kind of has this cool feeling of clay and rocks, and I decided to try to reproduce it from scratch. I started with a custom light shader, then mixed it with a grain effect, and by playing with some values I got to this final result:

In this tutorial I’d like to share with you what I’ve done to achieve this effect. We’re going to explore two different ways of doing it.

Note that I won’t get into too much detail about Three.js and WebGL for simplicity, so it’s good to have some solid knowledge of JavaScript, Three.js and some notions about shaders before starting with this tutorial. If you’re not very familiar with shaders but with Three.js, then the second way is for you!

Summary

Method 1: Writing our own custom ShaderMaterial (that's the harder path, but you'll learn how light reflection works!)

  • Creating a basic Three.js scene
  • Use ShaderMaterial
  • Create a diffuse light shader
  • Create a grain effect using 2D noise
  • Mix it with light

Method 2: Starting from MeshLambertMaterial shader (Easier but includes unused code from Three.js since we’ll rewrite the Three.js LambertMaterial shader)

  • Copy and paste MeshLambertMaterial
  • Add our custom grain light effect to the fragmentShader
  • Add any Three.js lights

1. Writing our own custom ShaderMaterial

Creating a basic Three.js scene

First we need to set up a basic Three.js scene with a simple sphere in the center:

Here is a Three.js basic scene with a camera, a renderer and a sphere in the middle. You can find the code in this repository in the file src/js/Scene.js, so you can start the tutorial from here.

Use ShaderMaterial

Let’s create a custom shader in Three.js using the ShaderMaterial class. You can pass it uniforms objects, and a vertex and a fragment shader as parameters. The cool thing about this class is that it’s already giving you most of the necessary uniforms and attributes for a basic shader (positions of the vertices, normals for light, ModelViewProjection matrices and more).

First, let’s create a uniform that will contain the default color of our sphere. Here I picked a light blue (#51b1f5) but feel free to pick your favorite color. We’ll use a new THREE.Color() and call our uniform uColor. We’ll replace the material from the previous code l.87:

const material = new THREE.ShaderMaterial({
  uniforms: {
    uColor: { value: new THREE.Color(0x51b1f5) }
  }
});

Then let’s create a simple vertex shader in vertexShader.glsl, a separated file that will display the sphere vertices at the correct position related to the camera.

void main(void) {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

And finally, we write a basic fragment shader, fragmentShader.glsl, in a separate file as well, that will use our uColor vec3 uniform:

uniform vec3 uColor;

void main(void) {
  gl_FragColor = vec4(uColor, 1.);
}

Then, let’s import and link them to our ShaderMaterial.

import vertexShader from './vertexShader.glsl'
import fragmentShader from './fragmentShader.glsl'
...    
const material = new THREE.ShaderMaterial({
  vertexShader: vertexShader,
  fragmentShader: fragmentShader,
  uniforms: {
    uColor: { value: new THREE.Color(0x51b1f5) }
  }
});

Now we should have a nice monochrome sphere:

Create a diffuse light shader

Creating our own custom light shader will allow us to easily manipulate how the light should affect our mesh.

Even if that seems complicated, it's not that much code, and you can find great articles online explaining how light reflection works on a 3D object. I recommend reading webglfundamentals if you would like to learn more about this topic.

Going further, we want to add a light source to our scene to see how the sphere reflects light. Let's add three new uniforms: one for the position of our spotlight, one for the color, and a last one for the intensity of the light. Let's place the spotlight above the object, 5 units in Y and 3 units in Z, and use a white color and an intensity of 0.7 for this example.

 ...
 uLightPos: {
   value: new THREE.Vector3(0, 5, 3) // position of spotlight
 },
 uLightColor: {
   value: new THREE.Color(0xffffff) // default light color
 },
 uLightIntensity: {
   value: 0.7 // light intensity
 },

Now let’s talk about normals. A THREE.SphereGeometry has normals 3D vectors represented by these arrows:

For each surface of the sphere, these red vectors define in which direction the light rays should be reflected. That’s what we’re going to use to calculate the intensity of the light for each pixel.

Let’s add two varyings on the vertex shader:

  • vNormal, the normal vectors of the object relative to the world position (where it is in the global scene).
  • vSurfaceToLight, the direction from each point on the sphere's surface toward the light position.

uniform vec3 uLightPos;

varying vec3 vNormal;
varying vec3 vSurfaceToLight;

void main(void) {
  vNormal = normalize(normalMatrix * normal);

  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  // General calculations needed for diffuse lighting
  // Calculate a vector from the fragment location to the light source
  vec3 surfaceToLightDirection = vec3( modelViewMatrix * vec4(position, 1.0));
  vec3 worldLightPos = vec3( viewMatrix * vec4(uLightPos, 1.0));
  vSurfaceToLight = normalize(worldLightPos - surfaceToLightDirection);
}

Now let’s generate colors based on these light values in the Fragment shader.

We already have the normal values in vNormal. To calculate a basic light reflection on a 3D object we need two light types: ambient and diffuse.

Ambient light is a constant value that gives a global light color to the whole scene. Let's just use our light color in this case.

Diffuse light represents how strongly the light is reflected depending on the object's surface. That means surfaces close to and facing the spotlight should be more lit than surfaces that are far away or facing another direction. There is an amazing math function to calculate this value: the dot() product. The diffuse color is the dot product of vSurfaceToLight and vNormal. In this image you can see that vectors facing the sun are brighter than the others:

Then we need to add the ambient and diffuse lights together and multiply the result by the light intensity. Once we have our light value, let's multiply it by the color of our sphere. Fragment shader:

uniform vec3 uLightColor;
uniform vec3 uColor;
uniform float uLightIntensity;

varying vec3 vNormal;
varying vec3 vSurfaceToLight;

vec3 light_reflection(vec3 lightColor) {
  // AMBIENT is just the light's color
  vec3 ambient = lightColor;

  //- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  // DIFFUSE  calculations
  // Calculate the cosine of the angle between the vertex's normal
  // vector and the vector going to the light.
  vec3 diffuse = lightColor * dot(vSurfaceToLight, vNormal);

  // Combine 
  return (ambient + diffuse);
}

void main(void) {
  vec3 light_value = light_reflection(uLightColor);
  light_value *= uLightIntensity;

  gl_FragColor = vec4(uColor * light_value, 1.);
}

And voilà:

Feel free to click and drag on this sandbox scene to rotate the camera.

Note that if you want to recreate MeshPhongMaterial, you would also need to calculate the specular light. It represents the effect you observe when a ray of light reflects off an object directly into your eyes, but we don't need that precision here.

Create a grain effect using 2D noise

To get a 2D grain effect we'll use a noise function that returns a gray value from 0 to 1 for each pixel of the screen in a "beautiful randomness". There are a lot of functions online for creating simplex noise, Perlin noise and others. Here we'll use glsl-noise for 2D simplex noise, and glslify to import the noise function directly at the beginning of our fragment shader using:

#pragma glslify: snoise2 = require(glsl-noise/simplex/2d)

Thanks to the built-in WebGL value gl_FragCoord.xy we can easily get the UVs (coordinates) of the screen. Then we just have to apply the noise to these coordinates: vec3 colorNoise = vec3(snoise2(uv));. This creates our 2D noise. Then, let's output these noise colors to our gl_FragColor:

#pragma glslify: snoise2 = require(glsl-noise/simplex/2d)
... 
// grain
vec2 uv = gl_FragCoord.xy;

vec3 colorNoise = vec3(snoise2(uv));

gl_FragColor = vec4(colorNoise, 1.0);

What a nice old TV noise effect:

As you can see, when moving the camera the texture feels like it's "stuck" to the screen. That's because we mapped our simplex noise to the screen coordinates to create a 2D-style effect. You can also adjust the size of the noise, like so: uv /= myNoiseScaleVal;
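
For instance, with a uNoiseScale uniform (the name mirrors the one used in method 2 below), a sketch:

uniform float uNoiseScale; // passed from JS, e.g. { value: 0.8 }

// grain
vec2 uv = gl_FragCoord.xy;
uv /= uNoiseScale; // dividing by a larger value gives bigger grain

vec3 colorNoise = vec3(snoise2(uv));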

Mixing it with the light

Now that we have our noise value and our light, let's mix them! The idea is to apply less noise where the light value is stronger (1.0 == white) and more noise where the light value is weaker (0.0 == black). We already have our light value, so let's just multiply the noise value by it:

colorNoise *= light_value.r;

You can see how the light affects the noise now, but the effect doesn't look very strong. We can accentuate it with an exponential function: GLSL (the shader language) has a built-in pow() for that. Here I used an exponent of 5.

colorNoise *= pow(light_value.r, 5.0);

Then, let’s enlighten the noise color effect like so:

vec3 colorNoise = vec3(snoise2(uv) * 0.5 + 0.5);

Too gray, right? Almost there; let's re-add the beautiful color we had at the start. We can say that where the light is strong the color goes to white, and where the light is weak it gets clamped to the initial channel color of the sphere, like this:

gl_FragColor.r = max(colorNoise.r, uColor.r);
gl_FragColor.g = max(colorNoise.g, uColor.g);
gl_FragColor.b = max(colorNoise.b, uColor.b);
gl_FragColor.a = 1.0;

Now that we have this Material ready, we can apply it to any object of the scene:
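
For example, reusing the same material instance on a hypothetical extra mesh (assuming the scene variable from the starter):

// the grain light material works on any geometry
const torusKnot = new THREE.Mesh(
  new THREE.TorusKnotGeometry(0.6, 0.25, 128, 32),
  material
)
scene.add(torusKnot)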

Congrats, you finished the first way of doing this effect!

2. Starting from MeshLambertMaterial shader

This way is simpler since we’ll directly reuse the MeshLambertMaterial from Three.js and apply our grain in the fragment shader. First let’s create a basic scene like in the first method. You can take this repository, and start from the src/js/Scene.js file to follow this second method.

Copy and paste MeshLambertMaterial

In Three.js, all the material shaders can be found here. They are composed of chunks (reusable GLSL code) that are included here and there in the Three.js shaders. We're going to copy the MeshLambertMaterial fragment shader from here and paste it into a new fragment.glsl file.
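
Alternatively, instead of digging through the repository, you can print the built-in shader at runtime and copy it from the console, since THREE.ShaderLib.lambert (which we also use below for the vertex shader) bundles both shaders:

// log the built-in Lambert fragment shader, then paste it into fragment.glsl
console.log(THREE.ShaderLib.lambert.fragmentShader)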

Then, let’s add a new ShaderMaterial that will include this fragmentShader. However, for the vertex, since we’re not changing it, we can just pick it directly from the lib THREE.ShaderLib.lambert.vertexShader.

Finally, we need to merge the Three.js uniforms with ours, using THREE.UniformsUtils.merge(). Like in the first method, we'll use uColor for the sphere color, plus uNoiseCoef to play with the grain effect and uNoiseScale for the grain size.

import fragmentShader from './fragmentShader.glsl'
...

this.uniforms = THREE.UniformsUtils.merge([
  THREE.ShaderLib.lambert.uniforms,
  {
    uColor: {
      value: new THREE.Color(0x51b1f5)
    },
    uNoiseCoef: {
      value: 3.5
    },
    uNoiseScale: {
      value: 0.8
    }
  }
])

const material = new THREE.ShaderMaterial({
  vertexShader: THREE.ShaderLib.lambert.vertexShader,
  fragmentShader: glslify(fragmentShader),
  uniforms: this.uniforms,
  lights: true,
  transparent: true
})

Note that we’re importing the fragmentShader using glslify because we’re going to use the same simplex noise 2D from the first method. Also, the lights parameter needs to be set to true so the materials can reuse the value of all source lights of the scene.

Add our custom grain light effect to the fragmentShader

In our freshly copied fragment shader, we'll need to import the 2D simplex noise using the glslify and glsl-noise libs: #pragma glslify: snoise2 = require(glsl-noise/simplex/2d).

If we look closely at the MeshLambertMaterial fragment shader, we can find an outgoingLight value. It looks very similar to our light_value from the first method, so let's apply the same 2D grain effect to it:

// grain
vec2 uv = gl_FragCoord.xy;
uv /= uNoiseScale;

vec3 colorNoise = vec3(snoise2(uv) * 0.5 + 0.5);
colorNoise *= pow(outgoingLight.r, uNoiseCoef);

Then let’s mix our uColor with the colorNoise. And here is the final fragment shader:

#pragma glslify: snoise2 = require(glsl-noise/simplex/2d)
...
uniform float uNoiseScale;
uniform float uNoiseCoef;
...	
// write this at the very end of the shader
// grain
vec2 uv = gl_FragCoord.xy;
uv /= uNoiseScale;

vec3 colorNoise = vec3(snoise2(uv) * 0.5 + 0.5);
colorNoise *= pow(outgoingLight.r, uNoiseCoef);

gl_FragColor.r = max(colorNoise.r, uColor.r);
gl_FragColor.g = max(colorNoise.g, uColor.g);
gl_FragColor.b = max(colorNoise.b, uColor.b);
gl_FragColor.a = 1.0;

Add any Three.js lights

No light? Let’s add some THREE.SpotLight in the scene in src/js/Scene.js file.

const spotLight = new THREE.SpotLight(0xff0000)
spotLight.position.set(0, 5, 4)
spotLight.intensity = 1.85
this.scene.add(spotLight)

And here you go:

You can also play with the alpha value in the fragment shader like this:

gl_FragColor = vec4(colorNoise, 1. - colorNoise.r);

And that’s it! Hope you enjoyed the tutorial and thank you for reading.


Noise Pattern Reconstruction from Monopo Studio

In this new ALL YOUR HTML coding session we’ll be reconstructing the beautiful noise pattern from Monopo Studio’s website using Three.js and GLSL and some postprocessing.

Monopo Studio: https://twitter.com/monopo_en

Developer: https://twitter.com/bramichou

This coding session was streamed live on February 20, 2022.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Pixel Distortion Effect with Three.js

The creative coder’s dream is to rule pixels on their screen. To arrange them in beautiful patterns and do whatever you want with them. Well, this is exactly what we are going to do with this demo. Let’s distort and rule pixels with the power of our mouse cursor, just like the developers of the amazing Infinite Bad Guy website did!

Setup

The setup is the usual one: we create a fullscreen image that preserves its aspect ratio and has its "background-size: cover" behavior applied through the GLSL shader. In the end, we have a geometry stretched over the whole viewport, and a little shader like this:

vec2 newUV = (vUv - vec2(0.5)) * aspect + vec2(0.5);
gl_FragColor = texture2D(uTexture, newUV);

The whole thing just shows the image, no distortions yet.

The Magnificent Data Texture

I hope by this time you know that any texture in WebGL is basically just numbers corresponding to each pixel’s color.

Three.js has a specific API to create your own textures pixel by pixel. It is called, no surprise, DataTexture. So let’s create another texture for our demo, with random numbers:

const width = 25;  // grid columns (the 25×35 size mentioned below)
const height = 35; // grid rows
const size = width * height;
const data = new Float32Array(3 * size);

for (let i = 0; i < size; i++) {
  const stride = i * 3;
  let r = Math.random() * 255;
  let r1 = Math.random() * 255;

  data[stride] = r;      // red, and also X
  data[stride + 1] = r1; // green, and also Y
  data[stride + 2] = 0;  // blue
}
this.texture = new THREE.DataTexture(data, width, height, THREE.RGBFormat, THREE.FloatType);

This is heavily based on the default example from the documentation. The only difference is that we are using a FloatType texture, so we are not bound to integer numbers only. One interesting thing: the numbers should be between 0 and 255, even though in GLSL they will be in the 0..1 range anyway. Keep that in mind so you use the correct number ranges.

Another interesting idea is that GLSL doesn't really care what the numbers in your data structures mean. They could be both color.rgb and color.xyz. And that's precisely what we will use here: we don't care about the exact color of this texture, we will use it as a distortion in our demo! It's just a nice data structure for GLSL.

But, just to understand better, this is what the texture will look like when you want to preview it:

You see those big rectangles because I picked something like a 25×35 DataTexture size, which is really low-res. It also has colors because I'm using two different random numbers for the XY (Red-Green) values, which results in this look.

So now, we could already use this texture as a distortion in our fragment shader:

vec4 color = texture2D(uTexture, newUV);
vec4 offset = texture2D(uDataTexture, vUv);
// we are distorting the UVs with the data texture values
gl_FragColor = texture2D(uTexture, newUV - 0.02 * offset.rg);

The Mouse and its power

So now, let’s make it dynamic! We will need a couple of things. First, we need the mouse position and speed. And also, the mouse radius, meaning, at what distance would the mouse distort our image.

A short explanation: on each step of the animation, I loop through my grid cells, aka the pixels of the DataTexture, and assign some values based on the mouse position and speed. Then I relax the distortion: if the user stops moving the mouse, the distortion should go back to 0.
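
For the mouse data itself (this.mouse.vX / vY in the loop below), a simple mousemove listener that compares successive positions is enough. A sketch, written as if inside the same class that owns this.mouse:

this.mouse = { x: 0, y: 0, prevX: 0, prevY: 0, vX: 0, vY: 0 };

window.addEventListener('mousemove', (e) => {
  // normalized position in the 0..1 range
  this.mouse.x = e.clientX / window.innerWidth;
  this.mouse.y = e.clientY / window.innerHeight;

  // speed = difference to the previous event's position
  this.mouse.vX = this.mouse.x - this.mouse.prevX;
  this.mouse.vY = this.mouse.y - this.mouse.prevY;

  this.mouse.prevX = this.mouse.x;
  this.mouse.prevY = this.mouse.y;
});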

So now the code looks like this, simplified a bit for better understanding of the concept:

let data = DataTexture.image.data;

// loop through all the pixels of the DataTexture
for (let i = 0; i < rows; i++) {
  for (let j = 0; j < cols; j++) {
    // distance between the mouse and the current DataTexture pixel
    let distance = distanceBetween(this.mouse, [i, j])
    if (distance < maxDistance) {
      let index = 3 * (i + this.size * j); // index of this pixel in the data array
      data[index] = this.mouse.vX;     // mouse speed X
      data[index + 1] = this.mouse.vY; // mouse speed Y
    }
  }
}

// slowly move the whole system towards 0 distortion
for (let i = 0; i < data.length; i += 3) {
  data[i] *= 0.9
  data[i + 1] *= 0.9
}
DataTexture.needsUpdate = true;

A couple of things were added to make it look better, but the concept is here. If you have ever worked with particle systems, this is exactly that concept, except our particles never move; we just change some values of the particles (the distortion inside each big pixel).

Result

I left the settings open in the last demo, so you can play with parameters and come up with your own unique feel of the animation. Let me know what it inspired you to create!


Crafting Scroll Based Animations in Three.js

Having an experience composed of only WebGL is great, but sometimes, you’ll want the experience to be part of a classic website.

The experience can be in the background to add some beauty to the page, but then, you’ll want that experience to integrate properly with the HTML content.

In this tutorial, we will:

  • learn how to use Three.js as a background of a classic HTML page
  • make the camera translate to follow the scroll
  • discover some tricks to make the scrolling more immersive
  • add a cool parallax effect based on the cursor position
  • trigger some animations when arriving at the corresponding sections
See the live version

This tutorial is part of the 39 lessons available in the Three.js Journey course.

Three.js Journey is the ultimate course to learn WebGL with Three.js. Once you've subscribed, you get access to 45 hours of videos, also available as a text version. First, you'll start with the basics, like the reasons to use Three.js and how to set up a simple scene. Then you'll start animating it, creating cool environments, interacting with it, and creating your own models in Blender. To finish, you will learn advanced techniques like physics, shaders, realistic renders, code structuring, baking, etc.

As a member of the Three.js Journey community, you will also get access to a members-only Discord server.

Use the code CODROPS1 for a 20% discount.

Starter

This tutorial is intended for beginners but with some basic knowledge of Three.js.

Installation

For this tutorial, a starter.zip file is provided.

You should see a red cube at the center with “My Portfolio” written on it:

The libraries are loaded as plain <script> to keep things simple and accessible for everyone:

  • Three.js in version 0.136.0
  • GSAP in version 3.9.1

For specific techniques like Three.js controls or texture loading, you are going to need a development server, but we are not going to use those here.

Setup

We already have a basic Three.js setup.

Here’s a quick explaination of what each part of the setup does, but if you want to learn more, everything is explained in the Three.js Journey course:

index.html

<canvas class="webgl"></canvas>

Creates a <canvas> in which we are going to draw the WebGL renders.

<section class="section">
    <h1>My Portfolio</h1>
</section>
<section class="section">
    <h2>My projects</h2>
</section>
<section class="section">
    <h2>Contact me</h2>
</section>

Creates some sections with a simple title in them. You can add whatever you want in these.

<script src="./three.min.js"></script>
<script src="./gsap.min.js"></script>
<script src="./script.js"></script>

Loads the Three.js library, the GSAP library, and to finish, our JavaScript file.

style.css

*
{
    margin: 0;
    padding: 0;
}

Resets any margin or padding.

.webgl
{
    position: fixed;
    top: 0;
    left: 0;
}

Makes the WebGL <canvas> fit the viewport and stay fixed while scrolling.

.section
{
    display: flex;
    align-items: center;
    height: 100vh;
    position: relative;
    font-family: 'Cabin', sans-serif;
    color: #ffeded;
    text-transform: uppercase;
    font-size: 7vmin;
    padding-left: 10%;
    padding-right: 10%;
}

section:nth-child(odd)
{
    justify-content: flex-end;
}

Centers the sections, centers the text vertically, and aligns it to the right for one out of two sections.

script.js

/**
 * Base
 */
// Canvas
const canvas = document.querySelector('canvas.webgl')

// Scene
const scene = new THREE.Scene()

Retrieves the canvas from the HTML and creates a Three.js Scene.

/**
 * Test cube
 */
const cube = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshBasicMaterial({ color: '#ff0000' })
)
scene.add(cube)

Creates the red cube that we can see at the center. We are going to remove it shortly.

/**
 * Sizes
 */
const sizes = {
    width: window.innerWidth,
    height: window.innerHeight
}

window.addEventListener('resize', () =>
{
    // Update sizes
    sizes.width = window.innerWidth
    sizes.height = window.innerHeight

    // Update camera
    camera.aspect = sizes.width / sizes.height
    camera.updateProjectionMatrix()

    // Update renderer
    renderer.setSize(sizes.width, sizes.height)
    renderer.setPixelRatio(Math.min(window.devicePixelRatio, 2))
})

Saves the size of the viewport in a sizes variable, updates that variable when a resize event occurs and updates the camera and renderer at the same time (more about these two right after).

/**
 * Camera
 */
// Base camera
const camera = new THREE.PerspectiveCamera(35, sizes.width / sizes.height, 0.1, 100)
camera.position.z = 6
scene.add(camera)

Creates a PerspectiveCamera and moves it backward on the positive z axis.

/**
 * Renderer
 */
const renderer = new THREE.WebGLRenderer({
    canvas: canvas
})
renderer.setSize(sizes.width, sizes.height)
renderer.setPixelRatio(Math.min(window.devicePixelRatio, 2))

Creates the WebGLRenderer that will render the scene seen from the camera and updates its size and pixel ratio with a maximum of 2 to prevent performance issues.

/**
 * Animate
 */
const clock = new THREE.Clock()

const tick = () =>
{
    const elapsedTime = clock.getElapsedTime()

    // Render
    renderer.render(scene, camera)

    // Call tick again on the next frame
    window.requestAnimationFrame(tick)
}

tick()

Starts a loop with a classic requestAnimationFrame to call the tick function on each frame and animate our experience. In that tick function, we render the scene from the camera on each frame.

The Clock lets us retrieve the elapsed time that we save in the elapsedTime variable for later use.

HTML Scroll

Fix the elastic scroll

In some environments, you might notice that, if you scroll too far, you get a kind of elastic animation when the page goes beyond the limit:

While this is a cool feature, by default, the back of the page is white and doesn’t match our experience.

We want to keep that elastic effect for those who have it, but make the white parts the same color as the renderer.

We could have set the background-color of the page to the same color as the clearColor of the renderer. But instead, we are going to make the clearColor transparent and only set the background-color on the page so that the background color is set at one place only.

To do that, in /script.js, you need to set the alpha property to true on the WebGLRenderer:

const renderer = new THREE.WebGLRenderer({
    canvas: canvas,
    alpha: true
})

By default, the clear alpha value is 0 which is why we didn’t have to set it ourselves. Telling the renderer to handle alpha is enough. But if you want to change that value, you can do it with setClearAlpha:

renderer.setClearAlpha(0)

We can now see the back of the page which is white:

In /style.css, add a background-color to the html in CSS:

html
{
    background: #1e1a20;
}

We get a nice uniform background color and the elastic scroll isn’t an issue anymore:

Objects

We are going to create an object for each section to illustrate each of them.

To keep things simple, we will use Three.js primitives, but you can create whatever you want or even import custom models into the scene.

In /script.js, remove the code for the cube. In its place, create three Meshes using a TorusGeometry, a ConeGeometry and a TorusKnotGeometry:

/**
 * Objects
 */
// Meshes
const mesh1 = new THREE.Mesh(
    new THREE.TorusGeometry(1, 0.4, 16, 60),
    new THREE.MeshBasicMaterial({ color: '#ff0000' })
)
const mesh2 = new THREE.Mesh(
    new THREE.ConeGeometry(1, 2, 32),
    new THREE.MeshBasicMaterial({ color: '#ff0000' })
)
const mesh3 = new THREE.Mesh(
    new THREE.TorusKnotGeometry(0.8, 0.35, 100, 16),
    new THREE.MeshBasicMaterial({ color: '#ff0000' })
)

scene.add(mesh1, mesh2, mesh3)

All the objects should be on top of each other (we will fix that later):

In order to keep things simple, our code will be a bit redundant. But don’t hesitate to use arrays or other code structuring solutions if you have more sections.

Material

Base material

We are going to use the MeshToonMaterial for the objects and are going to create one instance of the material and use it for all three Meshes.

When creating the MeshToonMaterial, use '#ffeded' for the color property and apply it to all 3 Meshes:

// Material
const material = new THREE.MeshToonMaterial({ color: '#ffeded' })

// Meshes
const mesh1 = new THREE.Mesh(
    new THREE.TorusGeometry(1, 0.4, 16, 60),
    material
)
const mesh2 = new THREE.Mesh(
    new THREE.ConeGeometry(1, 2, 32),
    material
)
const mesh3 = new THREE.Mesh(
    new THREE.TorusKnotGeometry(0.8, 0.35, 100, 16),
    material
)

scene.add(mesh1, mesh2, mesh3)

Unfortunately, it seems that the objects are now black:

The reason is that the MeshToonMaterial is one of the Three.js materials that appears only when there is light.

Light

Add one DirectionalLight to the scene:

/**
 * Lights
 */
const directionalLight = new THREE.DirectionalLight('#ffffff', 1)
directionalLight.position.set(1, 1, 0)
scene.add(directionalLight)

You should now see your objects:

Position

By default, in Three.js, the field of view is vertical. This means that if you put one object on the top part of the render and one object on the bottom part of the render and then you resize the window, you’ll notice that the objects stay put at the top and at the bottom.

To illustrate this, temporarily add this code:

mesh1.position.y = 2
mesh1.scale.set(0.5, 0.5, 0.5)

mesh2.visible = false

mesh3.position.y = - 2
mesh3.scale.set(0.5, 0.5, 0.5)

The torus stays at the top and the torus knot stays at the bottom:

When you’re done, remove the code above.

This is good because it means that we only need to make sure that each object is far enough away from the other on the y axis, so that we don’t see them together.

Create an objectsDistance variable and choose a random value like 2:

const objectsDistance = 2

Use that variable to position the meshes on the y axis. The values must be negative so that the objects go down:

mesh1.position.y = - objectsDistance * 0
mesh2.position.y = - objectsDistance * 1
mesh3.position.y = - objectsDistance * 2

Increase the objectsDistance until the objects are far enough apart. A good amount should be 4, but you can go back to change that value later.

const objectsDistance = 4

Now, we can only see the first object:

The two others will be below. We will position them horizontally once we move the camera with the scroll and they appear again.

The objectsDistance variable will come in handy a bit later, which is why we saved the value.

Permanent rotation

To give more life to the experience, we are going to add a permanent rotation to the objects.

First, add the objects to a sectionMeshes array:

const sectionMeshes = [ mesh1, mesh2, mesh3 ]

Then, in the tick function, loop through the sectionMeshes array and apply a slow rotation by using the elapsedTime already available:

const tick = () =>
{
    const elapsedTime = clock.getElapsedTime()

    // Animate meshes
    for(const mesh of sectionMeshes)
    {
        mesh.rotation.x = elapsedTime * 0.1
        mesh.rotation.y = elapsedTime * 0.12
    }

    // ...
}

All the meshes (though we can see only one here) should slowly rotate:

Camera

Scroll

It’s time to make the camera move with the scroll.

First, we need to retrieve the scroll value. This can be done with the window.scrollY property.

Create a scrollY variable and assign it window.scrollY:

/**
 * Scroll
 */
let scrollY = window.scrollY

But then, we need to update that value when the user scrolls. To do that, listen to the 'scroll' event on window:

window.addEventListener('scroll', () =>
{
    scrollY = window.scrollY

    console.log(scrollY)
})

You should see the scroll value in the logs. Remove the console.log.

In the tick function, use scrollY to make the camera move (before doing the render):

const tick = () =>
{
    // ...

    // Animate camera
    camera.position.y = scrollY

    // ...
}

Not quite right yet:

The camera is way too sensitive and going in the wrong direction. We need to work on that value a little.

scrollY is positive when scrolling down, but the camera should go down on the y axis. Let’s invert the value:

camera.position.y = - scrollY

Better, but still too sensitive:

scrollY contains the number of pixels that have been scrolled. If we scroll 1000 pixels (which is not that much), the camera will go down 1000 units in the scene (which is a lot).

Each section has exactly the same size as the viewport. This means that when we scroll the distance of one viewport height, the camera should reach the next object.

To do that, we need to divide scrollY by the height of the viewport which is sizes.height:

camera.position.y = - scrollY / sizes.height

The camera now goes down 1 unit for each section scrolled. But the objects are currently separated by 4 units, which is the objectsDistance variable:

We need to multiply the value by objectsDistance:

camera.position.y = - scrollY / sizes.height * objectsDistance

To put it in a nutshell, if the user scrolls down one section, then the camera will move down to the next object:

Position object horizontally

Now is a good time to position the objects left and right to match the titles:

mesh1.position.x = 2
mesh2.position.x = - 2
mesh3.position.x = 2

Parallax

We call parallax the effect of seeing one object from different observation points. This is done naturally by our eyes and it's how we perceive the depth of things.

To make our experience more immersive, we are going to apply this parallax effect by making the camera move horizontally and vertically according to the mouse movements. It will create a natural interaction, and help the user feel the depth.

Cursor

First, we need to retrieve the cursor position.

To do that, create a cursor object with x and y properties:

/**
 * Cursor
 */
const cursor = {}
cursor.x = 0
cursor.y = 0

Then, listen to the mousemove event on window and update those values:

window.addEventListener('mousemove', (event) =>
{
    cursor.x = event.clientX
    cursor.y = event.clientY

    console.log(cursor)
})

You should get the pixel positions of the cursor in the console:

While we could use those values directly, it’s always better to adapt them to the context.

First, the amplitude depends on the size of the viewport, and users with different screen resolutions will have different results. We can normalize the values (from 0 to 1) by dividing them by the size of the viewport:

window.addEventListener('mousemove', (event) =>
{
    cursor.x = event.clientX / sizes.width
    cursor.y = event.clientY / sizes.height

    console.log(cursor)
})

While this is better already, we can do even more.

We know that the camera will be able to go as much to the left as to the right. This is why, instead of a value going from 0 to 1, it's better to have a value going from -0.5 to 0.5.

To do that, subtract 0.5:

window.addEventListener('mousemove', (event) =>
{
    cursor.x = event.clientX / sizes.width - 0.5
    cursor.y = event.clientY / sizes.height - 0.5

    console.log(cursor)
})

Here is a clean value adapted to the context:

Remove the console.log.

We can now use the cursor values in the tick function. Create a parallaxX and a parallaxY variable and put the cursor.x and cursor.y in them:

const tick = () =>
{
    // ...

    // Animate camera
    camera.position.y = - scrollY / sizes.height * objectsDistance

    const parallaxX = cursor.x
    const parallaxY = cursor.y
    camera.position.x = parallaxX
    camera.position.y = parallaxY

    // ...
}

Unfortunately, we have two issues.

The x and y axes don’t seem synchronized in terms of direction. And, the camera scroll doesn’t work anymore:

Let’s fix the first issue. When we move the cursor to the left, the camera seems to go to the left. Same thing for the right. But when we move the cursor up, the camera seems to move down and the opposite when moving the cursor down.

To fix that weird feeling, invert the cursor.y:

    const parallaxX = cursor.x
    const parallaxY = - cursor.y
    camera.position.x = parallaxX
    camera.position.y = parallaxY

For the second issue, the problem is that we update the camera.position.y twice and the second one will replace the first one.

To fix that, we are going to put the camera in a Group and apply the parallax on the group and not the camera itself.

Right before instantiating the camera, create the Group, add it to the scene and add the camera to the Group:

/**
 * Camera
 */
// Group
const cameraGroup = new THREE.Group()
scene.add(cameraGroup)

// Base camera
const camera = new THREE.PerspectiveCamera(35, sizes.width / sizes.height, 0.1, 100)
camera.position.z = 6
cameraGroup.add(camera)

This shouldn’t change the result, but now, the camera is inside a group.

In the tick function, instead of applying the parallax on the camera, apply it on the cameraGroup:

const tick = () =>
{
    // ...

    // Animate camera
    camera.position.y = - scrollY / sizes.height * objectsDistance

    const parallaxX = cursor.x
    const parallaxY = - cursor.y
    
    cameraGroup.position.x = parallaxX
    cameraGroup.position.y = parallaxY

    // ...
}

The scroll animation and parallax animation are now mixed together nicely:

But we can do even better.

Easing

The parallax animation is a good start, but it feels a bit too mechanical. Having such a linear animation is impossible in real life for a number of reasons: the camera has weight, there is friction with the air and surfaces, muscles can't make such a linear movement, etc. This is why the movement feels a bit wrong. We are going to add some "easing" (also called "smoothing" or "lerping"), and we are going to use a well-known formula.

The idea behind the formula is that, on each frame, instead of moving the camera straight to the target, we are going to move it (let’s say) a 10th closer to the destination. Then, on the next frame, another 10th closer. Then, on the next frame, another 10th closer.

On each frame, the camera will get a little closer to the destination. But the closer it gets, the slower it moves, because it always travels a 10th of the remaining distance toward the target position.

First, we need to change the = to += because we are adding to the actual position:

    cameraGroup.position.x += parallaxX
    cameraGroup.position.y += parallaxY

Then, we need to calculate the distance from the actual position to the destination:

    cameraGroup.position.x += (parallaxX - cameraGroup.position.x)
    cameraGroup.position.y += (parallaxY - cameraGroup.position.y)

Finally, we only want a 10th of that distance:

    cameraGroup.position.x += (parallaxX - cameraGroup.position.x) * 0.1
    cameraGroup.position.y += (parallaxY - cameraGroup.position.y) * 0.1

The animation feels a lot smoother:

But there is still a problem that some of you might have noticed.

If you test the experience on a high-frequency screen, the tick function will be called more often and the camera will move faster toward the target. While this is not a big issue, it's not accurate, and it's preferable to have the same result across devices as much as possible.

To fix that, we need to use the time spent between each frame.

Right after instantiating the Clock, create a previousTime variable:

const clock = new THREE.Clock()
let previousTime = 0

At the beginning of the tick function, right after setting the elapsedTime, calculate the deltaTime by subtracting the previousTime from the elapsedTime:

const tick = () =>
{
    const elapsedTime = clock.getElapsedTime()
    const deltaTime = elapsedTime - previousTime

    // ...
}

And then, update the previousTime to be used on the next frame:

const tick = () =>
{
    const elapsedTime = clock.getElapsedTime()
    const deltaTime = elapsedTime - previousTime
    previousTime = elapsedTime

    console.log(deltaTime)

    // ...
}

You now have the time spent between the current frame and the previous frame in seconds. On high-frequency screens the value will be smaller because less time elapsed between frames.

We can now use that deltaTime on the parallax, but, because the deltaTime is in seconds, the value will be very small (around 0.016 for most common screens running at 60fps). Consequently, the effect will be very slow.

To fix that, we can change 0.1 to something like 5:

    cameraGroup.position.x += (parallaxX - cameraGroup.position.x) * 5 * deltaTime
    cameraGroup.position.y += (parallaxY - cameraGroup.position.y) * 5 * deltaTime

We now have a nice easing that will feel the same across different screen frequencies:

Finally, now that we have the animation set properly, we can lower the amplitude of the effect:

    const parallaxX = cursor.x * 0.5
    const parallaxY = - cursor.y * 0.5

Particles

A good way to make the experience more immersive and to help the user feel the depth is to add particles.

We are going to create very simple square particles and spread them around the scene.

Because we need to position the particles ourselves, we are going to create a custom BufferGeometry.

Create a particlesCount variable and a positions variable using a Float32Array:

/**
 * Particles
 */
// Geometry
const particlesCount = 200
const positions = new Float32Array(particlesCount * 3)

Create a loop and add random coordinates to the positions array:

for(let i = 0; i < particlesCount; i++)
{
    positions[i * 3 + 0] = Math.random()
    positions[i * 3 + 1] = Math.random()
    positions[i * 3 + 2] = Math.random()
}

We will change the positions later, but for now, let’s keep things simple and make sure that our geometry is working.

Instantiate the BufferGeometry and set the position attribute:

const particlesGeometry = new THREE.BufferGeometry()
particlesGeometry.setAttribute('position', new THREE.BufferAttribute(positions, 3))

Create the material using PointsMaterial:

// Material
const particlesMaterial = new THREE.PointsMaterial({
    color: '#ffeded',
    sizeAttenuation: true,
    size: 0.03
})

Create the particles using Points:

// Points
const particles = new THREE.Points(particlesGeometry, particlesMaterial)
scene.add(particles)

You should get a bunch of particles spread around in a cube:

We can now position the particles on the three axes.

For the x (horizontal) and z (depth), we can use random values that can be as much positive as they are negative:

for(let i = 0; i < particlesCount; i++)
{
    positions[i * 3 + 0] = (Math.random() - 0.5) * 10
    positions[i * 3 + 1] = Math.random()
    positions[i * 3 + 2] = (Math.random() - 0.5) * 10
}

For the y (vertical) axis, it's a bit more tricky. We need to make the particles start high enough and then spread far enough down so that we reach the end with the scroll.

To do that, we can use the objectsDistance variable and multiply by the number of objects which is the length of the sectionMeshes array:

for(let i = 0; i < particlesCount; i++)
{
    positions[i * 3 + 0] = (Math.random() - 0.5) * 10
    positions[i * 3 + 1] = objectsDistance * 0.5 - Math.random() * objectsDistance * sectionMeshes.length
    positions[i * 3 + 2] = (Math.random() - 0.5) * 10
}

That’s all for the particles, but you can improve them with random sizes, random alpha. And, we can even animate them.

Triggered rotations

As a final feature and to make the exercise just a bit harder, we are going to make the objects do a little spin when we arrive at the corresponding section in addition to the permanent rotation.

Knowing when to trigger the animation

First, we need a way to know when we reach a section. There are plenty of ways of doing that and we could even use a library, but in our case, we can use the scrollY value and do some math to find the current section.

After creating the scrollY variable, create a currentSection variable and set it to 0:

let scrollY = window.scrollY
let currentSection = 0

In the 'scroll' event callback function, calculate the current section by dividing the scrollY by sizes.height:

window.addEventListener('scroll', () =>
{
    scrollY = window.scrollY

    const newSection = scrollY / sizes.height
    
    console.log(newSection)
})

This works because each section is exactly one height of the viewport.

To get the exact section instead of that float value, we can use Math.round():

window.addEventListener('scroll', () =>
{
    scrollY = window.scrollY

    const newSection = Math.round(scrollY / sizes.height)
    
    console.log(newSection)
})

We can now test if newSection is different from currentSection. If so, that means we changed the section and we can update the currentSection in order to do our animation:

window.addEventListener('scroll', () =>
{
    scrollY = window.scrollY
    const newSection = Math.round(scrollY / sizes.height)

    if(newSection != currentSection)
    {
        currentSection = newSection

        console.log('changed', currentSection)
    }
})

Animating the meshes

We can now animate the meshes and, to do that, we are going to use GSAP.

The GSAP library is already loaded from the HTML file as we did for Three.js.

Then, in the if statement we did earlier, we can do the animation with gsap.to():

window.addEventListener('scroll', () =>
{
    // ...
    
    if(newSection != currentSection)
    {
        // ...

        gsap.to(
            sectionMeshes[currentSection].rotation,
            {
                duration: 1.5,
                ease: 'power2.inOut',
                x: '+=6',
                y: '+=3'
            }
        )
    }
})

While this code is valid, it will unfortunately not work. The reason is that, on each frame, we are already updating the rotation.x and rotation.y of each mesh with the elapsedTime.

To fix that, in the tick function, instead of setting a very specific rotation based on the elapsedTime, we are going to add the deltaTime to the current rotation:

const tick = () =>
{
    // ...

    for(const mesh of sectionMeshes)
    {
        mesh.rotation.x += deltaTime * 0.1
        mesh.rotation.y += deltaTime * 0.12
    }

    // ...
}
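
If your tick function doesn’t already compute deltaTime, a minimal sketch using the Three.js Clock (the previousTime variable is kept outside the function) could look like this:

const clock = new THREE.Clock()
let previousTime = 0

const tick = () =>
{
    const elapsedTime = clock.getElapsedTime()
    const deltaTime = elapsedTime - previousTime
    previousTime = elapsedTime

    // ... (mesh rotations, camera and render as before)
}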

Final code

You can download the final project here https://threejs-journey.com/resources/codrops/threejs-scroll-based-animation/final.zip

Go further

We kept things really simple on purpose, but you can for sure go much further!

  • Add more content to the HTML
  • Animate other properties like the material
  • Animate the HTML texts
  • Improve the particles
  • Add more tweaks to the Debug UI
  • Test other colors
  • Add mobile and touch support
  • Etc.

If you liked this tutorial or want to learn more about WebGL and Three.js, join the Three.js Journey course!

As a reminder, here’s a 20% discount code CODROPS1 for you 😉

Three.js Animation with K-d (Christmas) Tree Algorithm

In this festive ALL YOUR HTML coding session we’ll decompile the animation seen on the website of ONE-OFF using the K-d tree algorithm and Three.js shape creation. We’ll also be using GLSL to create the visuals.

This coding session was streamed live on December 26, 2021.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

Pixelated Distortion Effect with Three.js

In this ALL YOUR HTML stream and coding session we’ll be recreating the interactive pixel distortion effect seen on the website for the music video covers of “Infinite Bad Guy” made as an AI Experiment at Google and YouTube. We’ll be using Three.js and datatexture to achieve the look.

This coding session was streamed live on December 12, 2021.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

Teleportation Transition with Three.js

In this ALL YOUR HTML stream and coding session we’ll be creating a teleportation-like transition with Three.js using some quaternions, and fragment shaders! The original effect comes from Marseille 2021 by La Phase 5.

This coding session was streamed live on December 5, 2021.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

Animated 3D Ribbons with Three.js

In this ALL YOUR HTML stream and coding session we will recreate the interesting looking 3D ribbon effect seen on the website of iad-lab and made by mutoco.ch. We’ll apply some geometrical tricks and use the Three.js API.

This coding session was streamed live on November 28, 2021.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

Ripple Effect on a Texture with Three.js

In this ALL YOUR HTML coding session you will learn how to recreate the interesting ripple effect seen on the homunculus.jp website with Three.js. We’ll have a look at render targets and use a little bit of math.

This coding session was streamed live on November 21, 2021.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

Replicating the Particles Animation from DNA Capital with Three.js

In this ALL YOUR HTML coding session you will learn how to recreate the beautiful particles animation from the website of DNA Capital using Three.js. The website was made by Immersive Garden.

This coding session was streamed live on October 17, 2021.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

Creating 3D Characters in Three.js

Three.js is a JavaScript library for drawing in 3D with WebGL. It enables us to add 3D objects to a scene, and manipulate things like position and lighting. If you’re a developer used to working with the DOM and styling elements with CSS, Three.js and WebGL can seem like a whole new world, and perhaps a little intimidating! This article is for developers who are comfortable with JavaScript but relatively new to Three.js. Our goal is to walk through building something simple but effective with Three.js — a 3D animated figure — to get a handle on the basic principles, and demonstrate that a little knowledge can take you a long way!

Setting the scene

In web development we’re accustomed to styling DOM elements, which we can inspect and debug in our browser developer tools. In WebGL, everything is rendered in a single <canvas> element. Much like a video, everything is simply pixels changing color, so there’s nothing to inspect. If you inspected a webpage rendered entirely with WebGL, all you would see is a <canvas> element. We can use libraries like Three.js to draw on the canvas with JavaScript.

Basic principles

First we’re going to set up the scene. If you’re already comfortable with this you can skip over this part and jump straight to the section where we start creating our 3D character.

We can think of our Three.js scene as a 3D space in which we can place a camera, and an object for it to look at.

Drawing of a transparent cube, with a smaller cube inside, showing the x, y and z axis and center co-ordinates
We can picture our scene as a giant cube, with objects placed at the center. In actual fact, it extends infinitely, but there is a limit to how much we can see.

First of all we need to create the scene. In our HTML we just need a <canvas> element:

<canvas data-canvas></canvas>

Now we can create the scene with a camera, and render it on our canvas in Three.js:

const canvas = document.querySelector('[data-canvas]')

// Create the scene
const scene = new THREE.Scene()

// Create the camera
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000)
scene.add(camera)

// Create the renderer
const renderer = new THREE.WebGLRenderer({ canvas })

// Render the scene
renderer.setSize(window.innerWidth, window.innerHeight)
renderer.render(scene, camera)

For brevity, we won’t go into the precise details of everything we’re doing here. The documentation has much more detail about creating a scene and the various camera attributes. However, the first thing we’ll do is move the position of our camera. By default, anything we add to the scene is going to be placed at co-ordinates (0, 0, 0) — that is, if we imagine the scene itself as a cube, our camera will be placed right in the center. Let’s place our camera a little further out, so that our camera can look at any objects placed in the center of the scene.

Showing the camera looking towards the center of the scene
Moving the camera away from the center allows us to see the objects placed in the center of the scene.

We can do this by setting the z position of the camera:

camera.position.z = 5

We won’t see anything yet, as we haven’t added any objects to the scene. Let’s add a cube to the scene, which will form the basis of our figure.

3D shapes

Objects in Three.js are known as meshes. In order to create a mesh, we need two things: a geometry and a material. Geometries are 3D shapes. Three.js has a selection of geometries to choose from, which can be manipulated in different ways. For the purpose of this tutorial — to see what interesting scenes we can make with just some basic principles — we’re going to limit ourselves to only two geometries: cubes and spheres.

Let’s add a cube to our scene. First we’ll define the geometry and material. Using Three.js BoxGeometry, we pass in parameters for the x, y and z dimensions.

// Create a new BoxGeometry with dimensions 1 x 1 x 1
const geometry = new THREE.BoxGeometry(1, 1, 1)

For the material we’ll choose MeshLambertMaterial, which reacts to light and shade but is more performant than some other materials.

// Create a new material with a white color
const material = new THREE.MeshLambertMaterial({ color: 0xffffff })

Then we create the mesh by combining the geometry and material, and add it to the scene:

const mesh = new THREE.Mesh(geometry, material)
scene.add(mesh)

Unfortunately we still won’t see anything! That’s because the material we’re using depends on light in order to be seen. Let’s add a directional light, which will shine down from above. We’ll pass in two arguments: 0xffffff for the color (white), and the intensity, which we’ll set to 1.

const lightDirectional = new THREE.DirectionalLight(0xffffff, 1)
scene.add(lightDirectional)
Light shining down from above at position 0, 1, 0
By default, the light points down from above

If you’ve followed all the steps so far, you still won’t see anything! That’s because the light is pointing directly down at our cube, so the front face is in shadow. If we move the z position of the light towards the camera and off-center, we should now see our cube.

const lightDirectional = new THREE.DirectionalLight(0xffffff, 1)
scene.add(lightDirectional)

// Move the light source towards us and off-center
lightDirectional.position.x = 5
lightDirectional.position.y = 5
lightDirectional.position.z = 5
Light position at 5, 5, 5
Moving the light gives us a better view

We can alternatively set the position on the x, y and z axis simultaneously by calling set():

lightDirectional.position.set(5, 5, 5)

We’re looking at our cube straight on, so only one face can be seen. If we give it a little bit of rotation, we can see the other faces. To rotate an object, we need to give it a rotation angle in radians. I don’t know about you, but I don’t find radians very easy to visualize, so I prefer to use a JS function to convert from degrees:

const degreesToRadians = (degrees) => {
	return degrees * (Math.PI / 180)
}

mesh.rotation.x = degreesToRadians(30)
mesh.rotation.y = degreesToRadians(30)

We can also add some ambient light (light that comes from all directions) with a color tint, which softens the effect slightly and ensures the face of the cube turned away from the light isn’t completely hidden in shadow:

const lightAmbient = new THREE.AmbientLight(0x9eaeff, 0.2)
scene.add(lightAmbient)

Now that we have our basic scene set up, we can start to create our 3D character. To help you get started I’ve created a boilerplate which includes all the set-up work we’ve just been through, so that you can jump straight to the next part if you wish.

Creating a class

The first thing we’ll do is create a class for our figure. This will make it easy to add any number of figures to our scene by instantiating the class. We’ll give it some default parameters, which we’ll use later on to position our character in the scene.

class Figure {
	constructor(params) {
		this.params = {
			x: 0,
			y: 0,
			z: 0,
			ry: 0,
			...params
		}
	}
}

Groups

In our class constructor, let’s create a Three.js group and add it to our scene. Creating a group allows us to manipulate several geometries as one. We’re going to add the different elements of our figure (head, body, arms, etc.) to this group. Then we can position, scale or rotate the figure anywhere in our scene without having to concern ourselves with positioning those parts individually every time.

class Figure {
	constructor(params) {
		this.params = {
			x: 0,
			y: 0,
			z: 0,
			ry: 0,
			...params
		}
		
		this.group = new THREE.Group()
		scene.add(this.group)
	}
}

Creating the body parts

Next let’s write a function to render the body of our figure. It’ll be much the same as the way we created a cube earlier, except we’ll make it a little taller by increasing the size on the y axis. (While we’re at it, we can remove the lines of code where we created the cube earlier, to start with a clear scene.) We already have the material defined in our codebase, and don’t need to define it within the class itself.

Instead of adding the body to the scene, we instead add it to the group we created.

const material = new THREE.MeshLambertMaterial({ color: 0xffffff })

class Figure {
	constructor(params) {
		this.params = {
			x: 0,
			y: 0,
			z: 0,
			ry: 0,
			...params
		}
		
		this.group = new THREE.Group()
		scene.add(this.group)
	}
	
	createBody() {
		const geometry = new THREE.BoxGeometry(1, 1.5, 1)
		const body = new THREE.Mesh(geometry, material)
		this.group.add(body)
	}
}

We’ll also write a class method to initialize the figure. So far it will call only the createBody() method, but we’ll add others shortly. (This and all subsequent methods will be written inside our class declaration, unless otherwise specified.)

createBody() {
	const geometry = new THREE.BoxGeometry(1, 1.5, 1)
	const body = new THREE.Mesh(geometry, material)
	this.group.add(body)
}
	
init() {
	this.createBody()
}

Adding the figure to the scene

At this point we’ll want to render our figure in our scene, to check that everything’s working. We can do that by instantiating the class.

const figure = new Figure()
figure.init()

Next we’ll write a similar method to create the head of our character. We’ll make this a cube, slightly larger than the width of the body. We’ll also need to adjust the position so it’s just above the body, and call the function in our init() method:

createHead() {
	const geometry = new THREE.BoxGeometry(1.4, 1.4, 1.4)
	const head = new THREE.Mesh(geometry, material)
	this.group.add(head)
	
	// Position it above the body
	head.position.y = 1.65
}

init() {
	this.createBody()
	this.createHead()
}

You should now see the cube of the head rendered above the narrower cuboid of the body.

Adding the arms

Now we’re going to give our character some arms. Here’s where things get slightly more complex. We’ll add another method to our class called createArms(). Again, we’ll define a geometry and a mesh. The arms will be long, thin cuboids, so we’ll pass in our desired dimensions for these.

As we need two arms, we’ll create them in a for loop.

createArms() {
	const geometry = new THREE.BoxGeometry(0.25, 1, 0.25)
	
	for(let i = 0; i < 2; i++) {
		const arm = new THREE.Mesh(geometry, material)
		
		this.group.add(arm)
	}
}

We don’t need to create the geometry in the for loop, as it will be the same for each arm.

Don’t forget to call the function in our init() method:

init() {
	this.createBody()
	this.createHead()
	this.createArms()
}

We’ll also need to position each arm on either side of the body. I find it helpful here to create a variable m (for multiplier). This helps us position the left arm in the opposite direction on the x axis, with minimal code. (We’ll also use it to rotate the arms in a moment.)

createArms() {
	const geometry = new THREE.BoxGeometry(0.25, 1, 0.25)
	
	for(let i = 0; i < 2; i++) {
		const arm = new THREE.Mesh(geometry, material)
		const m = i % 2 === 0 ? 1 : -1
		
		this.group.add(arm)
		
		arm.position.x = m * 0.8
		arm.position.y = 0.1
	}
}

Additionally, we can rotate the arms in our for loop, so they stick out at a more natural angle (as natural as a cube person can be!):

arm.rotation.z = degreesToRadians(30 * m)
Figure with co-ordinate system overlaid
If our figure is placed in the center, the arm on the left will be positioned at the negative equivalent of the x-axis position of the arm on the right

Pivoting

When we rotate the arms you might notice that they rotate from a point of origin in the center. It can be hard to see with a static demo, but try moving the slider in this example.

See the Pen
ThreeJS figure arm pivot example (default pivot from center)
by Michelle Barker (@michellebarker)
on CodePen.

We can see that the arms don’t move naturally, at an angle from the shoulder, but instead the entire arm rotates from the center. In CSS we would simply set the transform-origin. Three.js doesn’t have this option, so we need to do things slightly differently.

Two figures, the leftmost with an arm that pivots from the center, the rightmost with an arm that pivots from the top left
The figure on the right has arms that rotate from the top, for a more natural effect

Our steps are as follows for each arm:

  1. Create a new Three.js group.
  2. Position the group at the “shoulder” of our figure (or the point from which we want to rotate).
  3. Create a new mesh for the arm and position it relative to the group.
  4. Rotate the group (instead of the arm).

Let’s update our createArms() function to follow these steps. First we’ll create the group for each arm, add the arm mesh to the group, and position the group roughly where we want it:

createArms() {
	const geometry = new THREE.BoxGeometry(0.25, 1, 0.25)
	
	for(let i = 0; i < 2; i++) {
		const arm = new THREE.Mesh(geometry, material)
		const m = i % 2 === 0 ? 1 : -1
		
		// Create group for each arm
		const armGroup = new THREE.Group()
		
		// Add the arm to the group
		armGroup.add(arm)
		
		// Add the arm group to the figure
		this.group.add(armGroup)
		
		// Position the arm group
		armGroup.position.x = m * 0.8
		armGroup.position.y = 0.1
	}
}

To assist us with visualizing this, we can add one of Three.js’s built-in helpers to our figure. This creates a wireframe showing the bounding box of an object. It’s useful to help us position the arm, and once we’re done we can remove it.

// Inside the `for` loop:
const box = new THREE.BoxHelper(armGroup, 0xffff00)
this.group.add(box)

To set the transform origin to the top of the arm rather than the center, we then need to move the arm (within the group) downwards by half of its height. Let’s create a variable for height, which we can use when creating the geometry:

createArms() {
	// Set the variable
	const height = 1
	const geometry = new THREE.BoxGeometry(0.25, height, 0.25)
	
	for(let i = 0; i < 2; i++) {
		const armGroup = new THREE.Group()
		const arm = new THREE.Mesh(geometry, material)
		
		const m = i % 2 === 0 ? 1 : -1
		
		armGroup.add(arm)
		this.group.add(armGroup)
		
		// Translate the arm (not the group) downwards by half the height
		arm.position.y = height * -0.5
		
		armGroup.position.x = m * 0.8
		armGroup.position.y = 0.6
		
		// Helper
		const box = new THREE.BoxHelper(armGroup, 0xffff00)
		this.group.add(box)
	}
}

Then we can rotate the arm group.

// In the `for` loop
armGroup.rotation.z = degreesToRadians(30 * m)

In this demo, we can see that the arms are (correctly) being rotated from the top, for a more realistic effect. (The yellow is the bounding box.)

See the Pen
ThreeJS figure arm pivot example (using group)
by Michelle Barker (@michellebarker)
on CodePen.

Eyes

Next we’re going to give our figure some eyes, for which we’ll use the Sphere geometry in Three.js. We’ll need to pass in three parameters: the radius of the sphere, and the number of segments for the width and height respectively (defaults shown here).

const geometry = new THREE.SphereGeometry(1, 32, 16)

As our eyes are going to be quite small, we can probably get away with fewer segments, which is better for performance (fewer calculations needed).

Let’s create a new group for the eyes. This is optional, but it helps keep things neat. If we need to reposition the eyes later on, we only need to reposition the group, rather than both eyes individually. Once again, let’s create the eyes in a for loop and add them to the group. As we want the eyes to be a different color from the body, we can define a new material:

createEyes() {
	const eyes = new THREE.Group()
	const geometry = new THREE.SphereGeometry(0.15, 12, 8)
	
	// Define the eye material
	const material = new THREE.MeshLambertMaterial({ color: 0x44445c })
	
	for(let i = 0; i < 2; i++) {
		const eye = new THREE.Mesh(geometry, material)
		const m = i % 2 === 0 ? 1 : -1
		
		// Add the eye to the group
		eyes.add(eye)
		
		// Position the eye
		eye.position.x = 0.36 * m
	}
}

We could add the eye group directly to the figure. However, if we decide we want to move the head later on, it would be better if the eyes moved with it, rather than being positioned entirely independently! For that, we need to modify our createHead() method to create another group, comprising both the main cube of the head, and the eyes:

createHead() {
	// Create a new group for the head
	this.head = new THREE.Group()
	
	// Create the main cube of the head and add to the group
	const geometry = new THREE.BoxGeometry(1.4, 1.4, 1.4)
	const headMain = new THREE.Mesh(geometry, material)
	this.head.add(headMain)
	
	// Add the head group to the figure
	this.group.add(this.head)
	
	// Position the head group
	this.head.position.y = 1.65
	
	// Add the eyes by calling the function we already made
	this.createEyes()
}

In the createEyes() method we then need to add the eye group to the head group, and position them to our liking. We’ll need to position them forwards on the z axis, so they’re not hidden inside the cube of the head:

// in createEyes()
this.head.add(eyes)

// Move the eyes forwards by half of the head depth - it might be a good idea to create a variable to do this!
eyes.position.z = 0.7

Legs

Lastly, let’s give our figure some legs. We can create these in much the same way as the eyes. As they should be positioned relative to the body, we can create a new group for the body in the same way that we did with the head, then add the legs to it:

createLegs() {
	const legs = new THREE.Group()
	const geometry = new THREE.BoxGeometry(0.25, 0.4, 0.25)
	
	for(let i = 0; i < 2; i++) {
		const leg = new THREE.Mesh(geometry, material)
		const m = i % 2 === 0 ? 1 : -1
		
		legs.add(leg)
		leg.position.x = m * 0.22
	}
	
	// Add the legs to the body group so they move with the body
	this.body.add(legs)
	legs.position.y = -1.15
}

Positioning in the scene

If we go back to our constructor, we can position our figure group according to the parameters:

class Figure {
	constructor(params) {
		this.params = {
			x: 0,
			y: 0,
			z: 0,
			ry: 0,
			...params
		}
		
		this.group = new THREE.Group()
		scene.add(this.group)
		
		this.group.position.x = this.params.x
		this.group.position.y = this.params.y
		this.group.position.z = this.params.z
		this.group.rotation.y = this.params.ry
	}
}

Now, passing in different parameters enables us to position it accordingly. For example, we can give it a bit of rotation, and adjust its x and y position:

const figure = new Figure({ 
	x: -4,
	y: -2,
	ry: degreesToRadians(35)
})
figure.init()

Alternatively, if we want to center the figure within the scene, we can use the Three.js Box3 function, which computes the bounding box of the figure group. This line will center the figure horizontally and vertically:

new THREE.Box3().setFromObject(figure.group).getCenter(figure.group.position).multiplyScalar(-1)

Making it generative

At the moment our figure is all one color, which doesn’t look particularly interesting. We can add a bit more color, and take the extra step of making it generative, so we get a new color combination every time we refresh the page! To do this we’re going to use a function to randomly generate a number between a minimum and a maximum. This is one I’ve borrowed from George Francis, which allows us to specify whether we want an integer or a floating point value (default is an integer).

const random = (min, max, float = false) => {
  const val = Math.random() * (max - min) + min

  if (float) {
    return val
  }

  return Math.floor(val)
}

Let’s define some variables for the head and body in our class constructor. Using the random() function, we’ll generate a value for each one between 0 and 360:

class Figure {
	constructor(params) {
		this.headHue = random(0, 360)
		this.bodyHue = random(0, 360)
	}
}

I like to use HSL when manipulating colors, as it gives us a fine degree of control over the hue, saturation and lightness. We’re going to define the material for the head and body, generating different colors for each by using template literals to pass the random hue values to the hsl color function. Here I’m adjusting the saturation and lightness values, so the body will be a vibrant color (high saturation) while the head will be more muted:

class Figure {
	constructor(params) {
		this.headHue = random(0, 360)
		this.bodyHue = random(0, 360)
		
		this.headMaterial = new THREE.MeshLambertMaterial({ color: `hsl(${this.headHue}, 30%, 50%)` })
		this.bodyMaterial = new THREE.MeshLambertMaterial({ color: `hsl(${this.bodyHue}, 85%, 50%)` })
	}
}

Our generated hues range from 0 to 360, a full cycle of the color wheel. If we want to narrow the range (for a limited color palette), we could select a lower range between the minimum and maximum. For example, a range between 0 and 60 would select hues in the red, orange and yellow end of the spectrum, excluding greens, blues and purples.

We could similarly generate values for the lightness and saturation if we choose to.
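
For instance, a sketch of a warmer, partly generative palette might look like this (the exact ranges are arbitrary choices):

// In the constructor: warm hues only, plus generative saturation and lightness
this.headHue = random(0, 60)
this.headSaturation = random(20, 40)
this.headLightness = random(40, 65)

this.headMaterial = new THREE.MeshLambertMaterial({
	color: `hsl(${this.headHue}, ${this.headSaturation}%, ${this.headLightness}%)`
})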

Now we just need to replace any reference to material with this.headMaterial or this.bodyMaterial to apply our generative colors. I’ve chosen to use the head hue for the head, arms and legs.
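
For example, the head could now be built like this (a sketch of the same createHead() from earlier, just swapping in the new material):

createHead() {
	this.head = new THREE.Group()
	
	const geometry = new THREE.BoxGeometry(1.4, 1.4, 1.4)
	const headMain = new THREE.Mesh(geometry, this.headMaterial)
	this.head.add(headMain)
	
	// ... (rest as before)
}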

See the Pen
ThreeJS figure (generative)
by Michelle Barker (@michellebarker)
on CodePen.

We could use generative parameters for much more than just the colors. In this demo I’m generating random values for the size of the head and body, the length of the arms and legs, and the size and position of the eyes.

See the Pen
ThreeJS figure random generated
by Michelle Barker (@michellebarker)
on CodePen.

Animation

Part of the fun of working with 3D is having our objects move in a three-dimensional space and behave like objects in the real world. We can add a bit of animation to our 3D figure using the Greensock animation library (GSAP).

GSAP is more commonly used to animate elements in the DOM. As we’re not animating DOM elements in this case, it requires a different approach. GSAP doesn’t require an element to animate — it can animate JavaScript objects. As one post in the GSAP forum puts it, GSAP is just “changing numbers really fast”.

We’ll let GSAP do the work of changing the parameters of our figure, then re-render our figure on each frame. To do this, we can use GSAP’s ticker method, which uses requestAnimationFrame. First, let’s animate the ry value (our figure’s rotation on the y axis). We’ll set it to repeat infinitely, and the duration to 20 seconds:

gsap.to(figure.params, {
	ry: degreesToRadians(360),
	repeat: -1,
	duration: 20
})

We won’t see any change just yet, as we aren’t re-rendering our scene. Let’s now trigger a re-render on every frame:

gsap.ticker.add(() => {
	// Update the rotation value
	figure.group.rotation.y = figure.params.ry
	
	// Render the scene
	renderer.setSize(window.innerWidth, window.innerHeight)
	renderer.render(scene, camera)
})

Now we should see the figure rotating on its y axis in the center of the scene. Let’s give him a little bounce action too, by moving him up and down and rotating the arms. First of all we’ll set his starting position on the y axis to be a little further down, so he’s not bouncing off screen. We’ll set yoyo: true on our tween, so that the animation repeats in reverse (so our figure will bounce up and down):

// Set the starting position
gsap.set(figure.params, {
	y: -1.5
})

// Tween the y axis position and arm rotation
gsap.to(figure.params, {
	y: 0,
	armRotation: degreesToRadians(90),
	repeat: -1,
	yoyo: true,
	duration: 0.5
})

As we need to update a few things, let’s create a method called bounce() on our Figure class, which will handle the animation. We can use it to update the values for the rotation and position, then call it within our ticker, to keep things neat:

/* In the Figure class: */
bounce() {
	this.group.rotation.y = this.params.ry
	this.group.position.y = this.params.y
}

/* Outside of the class */
gsap.ticker.add(() => {
	figure.bounce()
	
	// Render the scene
	renderer.setSize(window.innerWidth, window.innerHeight)
	renderer.render(scene, camera)
})

To make the arms move, we need to do a little more work. In our class constructor, let’s define a variable for the arms, which will be an empty array, and give the params a default armRotation value so GSAP has a number to tween from:

class Figure {
	constructor(params) {
		this.params = {
			x: 0,
			y: 0,
			z: 0,
			ry: 0,
			armRotation: 0,
			...params
		}
		
		this.arms = []
	}
}

In our createArms() method, in addition to our code, we’ll push each arm group to the array:

createArms() {
	const height = 0.85
	
	for(let i = 0; i < 2; i++) {
		/* Other code for creating the arms.. */
		
		// Push to the array
		this.arms.push(armGroup)
	}
}

Now we can add the arm rotation to our bounce() method, ensuring we rotate them in opposite directions:

bounce() {
	// Rotate the figure
	this.group.rotation.y = this.params.ry
	
	// Bounce up and down
	this.group.position.y = this.params.y
	
	// Move the arms
	this.arms.forEach((arm, index) => {
		const m = index % 2 === 0 ? 1 : -1
		arm.rotation.z = this.params.armRotation * m
	})
}

Now we should see our little figure bouncing, as if on a trampoline!

See the Pen
ThreeJS figure with GSAP
by Michelle Barker (@michellebarker)
on CodePen.

Wrapping up

There’s much, much more to Three.js, but we’ve seen that it doesn’t take too much to get started building something fun with just the basic building blocks, and sometimes limitation breeds creativity! If you’re interested in exploring further, I recommend the following resources.

Resources

Deconstructing the homunculus.jp Distortion with Three.js

In this ALL YOUR HTML coding session we will be deconstructing the pixel river distortion seen on homunculus.jp with Three.js, and also trying out Theatre.js.

This coding session was streamed live on October 3, 2021.

Check out the live demo.

Try to change values and animate them; use the icon on the top left corner of the website.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

Surface Sampling in Three.js

One day I got lost in the Three.js documentation and I came across something called “MeshSurfaceSampler“. After reading the little information on the page, I opened the provided demo and was blown away!

What exactly does this class do? In short, it’s a tool you attach to a Mesh (any 3D object) then you can call it at any time to get a random point along the surface of your object.

The function works in two steps:

  1. Pick a random face from the geometry
  2. Pick a random point on that face

In this tutorial we will see how you can get started with the MeshSurfaceSampler class and explore some nice effects we can build with it.
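
At its most minimal, using the class boils down to three lines. Here is a quick sketch (myMesh stands for any mesh you have created):

const sampler = new THREE.MeshSurfaceSampler(myMesh).build();
const point = new THREE.Vector3();
sampler.sample(point); // 'point' now holds random coordinates on the surface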

💡 If you are the kind of person who wants to dig right away with the demos, please do! I’ve added comments in each CodePen to help you understand the process.

⚠ This tutorial assumes basic familiarity with Three.js

Creating a scene

The first step in (almost) any WebGL project is to first setup a basic scene with a cube.
In this step I will not go into much detail, you can check the comments in the code if needed.

We are aiming to render a scene with a wireframe cube that spins. This way we know our setup is ready.

⚠ Don’t forget to also load OrbitControls as it is not included in Three.js package.

// Create an empty scene, needed for the renderer
const scene = new THREE.Scene();
// Create a camera and translate it
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(1, 1, 2);

// Create a WebGL renderer and enable the antialias effect
const renderer = new THREE.WebGLRenderer({ antialias: true });
// Define the size and append the <canvas> in our document
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Add OrbitControls to allow the user to move in the scene
const controls = new THREE.OrbitControls(camera, renderer.domElement);

// Create a cube with basic geometry & material
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshBasicMaterial({
  color: 0x66ccff,
  wireframe: true
});
const cube = new THREE.Mesh(geometry, material);
scene.add(cube);

/// Render the scene on each frame
function render () {  
  // Rotate the cube a little on each frame
  cube.rotation.y += 0.01;
  
  renderer.render(scene, camera);
}
renderer.setAnimationLoop(render);

See the Pen by Louis Hoebregts (@Mamboleoo) on CodePen.

Creating a sampler

For this step we will create a new sampler and use it to generate 300 spheres on the surface of our cube.

💡 Note that MeshSurfaceSampler is not built-in with Three.js. You can find it in the official repository, in the ‘examples’ folder.

Once you have added the file in your imported scripts, we can initiate a sampler for our cube.

const sampler = new THREE.MeshSurfaceSampler(cube).build();

This needs to be done only once in our code. If you want to get random coordinates on multiple meshes, you will need to store a new sampler for each object.
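
For example (with purely illustrative names):

// One sampler per mesh we want to sample from
const cubeSampler = new THREE.MeshSurfaceSampler(cube).build();
const torusSampler = new THREE.MeshSurfaceSampler(torus).build();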

Because we will be displaying hundreds of the same geometry, we can use the InstancedMesh class to achieve better performance. Just like a regular Mesh, we define the geometry (SphereGeometry for the demo) and a material (MeshBasicMaterial). Once we have those two, we can pass them to a new InstancedMesh and define how many objects we need (300 in this case).

const sphereGeometry = new THREE.SphereGeometry(0.05, 6, 6);
const sphereMaterial = new THREE.MeshBasicMaterial({
 color: 0xffa0e6
});
const spheres = new THREE.InstancedMesh(sphereGeometry, sphereMaterial, 300);
scene.add(spheres);	

Now that our sampler is ready to be used, we can create a loop to define a random position and scale for each of our spheres.

Before we loop, we need two dummy variables for this step:

  • tempPosition is a 3D Vector that our sampler will update with the random coordinates
  • tempObject is a 3D Object used to define the position and scale of a sphere and generate a matrix from it

Inside the loop, we start by sampling a random point on the surface of our cube and store it into tempPosition.
Those coordinates are then applied to our tempObject.
We also define a random scale for the dummy object so that not every sphere will look the same.
Because we need the Matrix of the dummy object, we ask Three.js to update it.
Finally we add the updated Matrix of the object into our InstancedMesh’s own Matrix at the index of the sphere we want to move.

const tempPosition = new THREE.Vector3();
const tempObject = new THREE.Object3D();
for (let i = 0; i < 300; i++) {
  sampler.sample(tempPosition);
  tempObject.position.set(tempPosition.x, tempPosition.y, tempPosition.z);
  tempObject.scale.setScalar(Math.random() * 0.5 + 0.5);
  tempObject.updateMatrix();
  spheres.setMatrixAt(i, tempObject.matrix);
}	

See the Pen #1 Surface Sampling by Louis Hoebregts (@Mamboleoo) on CodePen.

Amazing isn’t it? With only a few steps we already have a working scene with random meshes along a surface.

Phew, let’s just take a breath before we move to more creative demos ✨

Playing with particles

Because everybody loves particles (I know you do), let’s see how we can generate thousands of them to create the feeling of volume only from tiny dots. For this demo, we will be using a Torus knot instead of a cube.

This demo will work with a very similar logic as for the spheres before:

  • Sample 15000 coordinates and store them in an array
  • Create a geometry from the coordinates and a material for Points
  • Combine the geometry and material into a Points object
  • Add them to the scene

/* Sample the coordinates */
const vertices = [];
const tempPosition = new THREE.Vector3();
for (let i = 0; i < 15000; i ++) {
  sampler.sample(tempPosition);
  vertices.push(tempPosition.x, tempPosition.y, tempPosition.z);
}

/* Create a geometry from the coordinates */
const pointsGeometry = new THREE.BufferGeometry();
pointsGeometry.setAttribute('position', new THREE.Float32BufferAttribute(vertices, 3));

/* Create a material */
const pointsMaterial = new THREE.PointsMaterial({
  color: 0xff61d5,
  size: 0.03
});
/* Create a Points object */
const points = new THREE.Points(pointsGeometry, pointsMaterial);

/* Add the points into the scene */
scene.add(points);		

Here is the result, a 3D Torus knot only made from particles ✨
Try adding more particles or play with another geometry!

See the Pen #3 Surface Sampling by Louis Hoebregts (@Mamboleoo) on CodePen.

💡 If you check the code of the demo, you will notice that I don’t add the torus knot into the scene anymore. MeshSurfaceSampler requires a Mesh, but it doesn’t even have to be rendered in your scene!

Using a 3D Model

So far we have only been playing with native geometries from Three.js. It was a good start but we can take a step further by using our code with a 3D model!

There are many websites that provide free or paid models online. For this demo I will use this elephant from poly.pizza.

See the Pen #4 Surface Sampling by Louis Hoebregts (@Mamboleoo) on CodePen.

#1 Loading the .obj file

Three.js doesn’t have built-in loaders for OBJ models but there are many loaders available on the official repository.

Once the file is loaded, we will update its material with wireframe activated and reduce the opacity so we can see easily through.

/* Create global variables we will need later */
let elephant = null;
let sampler = null;
/* Load the .obj file */
new THREE.OBJLoader().load(
  "path/to/the/model.obj",
  (obj) => {
    /* The loaded .obj is a group, so we pick its first child */
    elephant = obj.children[0];
    /* Update the material of the object */
    elephant.material = new THREE.MeshBasicMaterial({
      wireframe: true,
      color: 0x000000,
      transparent: true,
      opacity: 0.05
    });
    /* Add the elephant in the scene */
    scene.add(obj);
    
    /* Create a surface sampler from the loaded model */
    sampler = new THREE.MeshSurfaceSampler(elephant).build();

    /* Start the rendering loop */ 
    renderer.setAnimationLoop(render);
  }
);	

#2 Setup the Points object

Before sampling points along our elephant we need to setup a Points object to store all our points.

This is very similar to what we did in the previous demo, except that this time we will define a custom color for each point. We are also using a texture of a circle to make our particles rounded instead of the default square.

/* Used to store each particle coordinates & color */
const vertices = [];
const colors = [];
/* The geometry of the points */
const sparklesGeometry = new THREE.BufferGeometry();
/* The material of the points */
const sparklesMaterial = new THREE.PointsMaterial({
  size: 3,
  alphaTest: 0.2,
  map: new THREE.TextureLoader().load("path/to/texture.png"),
  vertexColors: true // Let Three.js knows that each point has a different color
});
/* Create a Points object */
const points = new THREE.Points(sparklesGeometry, sparklesMaterial);
/* Add the points into the scene */
scene.add(points);	

#3 Sample a point on each frame

It is time to generate the particles on our model! But you know what? It works the same way as on a native geometry 😍

Since you already know how to do that, you can check the code below and notice the differences:

  • On each frame, we add a new point
  • Once the point is sampled, we update the position attribute of the geometry
  • We pick a color from an array of colors and add it to the color attribute of the geometry

/* Define the colors we want */
const palette = [new THREE.Color("#FAAD80"), new THREE.Color("#FF6767"), new THREE.Color("#FF3D68"), new THREE.Color("#A73489")];
/* Vector to sample a random point */
const tempPosition = new THREE.Vector3();

function addPoint() {
  /* Sample a new point */
  sampler.sample(tempPosition);
  /* Push the point coordinates */
  vertices.push(tempPosition.x, tempPosition.y, tempPosition.z);
  /* Update the position attribute with the new coordinates */
  sparklesGeometry.setAttribute("position", new THREE.Float32BufferAttribute(vertices, 3)  );
  
  /* Get a random color from the palette */
  const color = palette[Math.floor(Math.random() * palette.length)];
  /* Push the picked color */
  colors.push(color.r, color.g, color.b);
  /* Update the color attribute with the new colors */
  sparklesGeometry.setAttribute("color", new THREE.Float32BufferAttribute(colors, 3));
}

function render(a) {
  /* If there are fewer than 10,000 points, add a new one */
  if (vertices.length < 30000) {
    addPoint();
  }
  renderer.render(scene, camera);
}		

Animate a growing path

A cool effect we can create using the MeshSurfaceSampler class is to create a line that will randomly grow along the surface of our mesh. Here are the steps to generate the effect:

  1. Create an array to store the coordinates of the vertices of the line
  2. Pick a random point on the surface to start and add it to your array
  3. Pick another random point and check its distance from the previous point
    1. If the distance is short enough, go to step 4
    2. If the distance is too far, repeat step 3 until you find a point close enough
  4.  Add the coordinates of the new point in the array
  5. Update the line geometry and render it
  6. Repeat steps 3-5 to make the line grow on each frame

The key here is step 3, where we keep picking random points until we find one close enough to the previous one. Without that check, the line could jump right across the mesh. That might be fine for a simple object (like a sphere or a cube), as all the lines would stay inside the object. But think about our elephant: what if a point on the trunk were connected to one of the back legs? You would end up with lines crossing spaces that should be ‘empty’.

Check the demo below to see the line coming to life!

See the Pen #5 Surface Sampling by Louis Hoebregts (@Mamboleoo) on CodePen.

For this animation, I’m creating a class Path as I find it a cleaner way if we want to create multiple lines. The first step is to set up the constructor of that Path. Similar to what we have done before, each path will require 5 properties:

  1. An array to store the vertices of the line
  2. The final geometry of the line
  3. A material specific for Line objects
  4. A Line object combining the geometry and the material
  5. The previous point Vector

/* Vector to sample the new point */
const tempPosition = new THREE.Vector3();
class Path {
  constructor () {
    /* The array with all the vertices of the line */
    this.vertices = [];
    /* The geometry of the line */
    this.geometry = new THREE.BufferGeometry();
    /* The material of the line */
    this.material = new THREE.LineBasicMaterial({color: 0x14b1ff});
    /* The Line object combining the geometry & the material */
    this.line = new THREE.Line(this.geometry, this.material);
    
    /* Sample the first point of the line */
    sampler.sample(tempPosition);
    /* Store the sampled point so we can use it to calculate the distance */
    this.previousPoint = tempPosition.clone();
  }
}		

The second step is to create a function we can call on each frame to add a new vertex at the end of our line. Within that function we will execute a loop to find the next point for the path.
When that next point is found, we can store it in the vertices array and in the previousPoint variable.
Finally, we need to update the line geometry with the updated vertices array.

class Path {
  constructor () {...}
  update () {
    /* Variable used to exit the while loop when we find a point */
    let pointFound = false;
    /* Loop while we haven't found a point */
    while (!pointFound) {
      /* Sample a random point */
      sampler.sample(tempPosition);
      /* If the new point is less than 30 units from the previous point */
      if (tempPosition.distanceTo(this.previousPoint) < 30) {
        /* Add the new point in the vertices array */
        this.vertices.push(tempPosition.x, tempPosition.y, tempPosition.z);
        /* Store the new point vector */
        this.previousPoint = tempPosition.clone();
        /* Exit the loop */
        pointFound = true;
      }
    }
    /* Update the geometry */
    this.geometry.setAttribute("position", new THREE.Float32BufferAttribute(this.vertices, 3));
  }
}

function render() {
  /* Stop the progression once we have reached 10,000 points */
  if (path.vertices.length < 30000) {
    /* Make the line grow */
    path.update();
  }
  renderer.render(scene, camera);
}		

💡 How close the new point needs to be to the previous one depends on your 3D model. If you have a very small object, that distance could be ‘1’; with the elephant model we are using ‘30’.

Now what?

Now that you know how to use MeshSurfaceSampler with particles and lines, it is your turn to create funky demos with it!
What about animating multiple lines together or starting a line from each leg of the elephant, or even popping particles from each new point of the line. The sky is the limit ⛅

See the Pen #6 Surface Sampling by Louis Hoebregts (@Mamboleoo) on CodePen.

This article does not show all the available features from MeshSurfaceSampler. There is still the weight property that allows you to have more or less chance to have a point on some faces. When we sample a point, we could also use the normal or the color of that point for other creative ideas. This could be part of a future article one day… 😊
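
As a teaser, here is one possible sketch of the weight property in action. It assumes a weights array you have filled with one value per vertex, where higher values make a face more likely to be sampled:

/* Bias the sampler with a custom 'weight' attribute (sketch) */
geometry.setAttribute('weight', new THREE.Float32BufferAttribute(weights, 1));
const weightedSampler = new THREE.MeshSurfaceSampler(mesh)
  .setWeightAttribute('weight')
  .build();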

Until next time, I hope you learned something today and that you can’t wait to use that new knowledge!

If you have questions, let me know on Twitter.

Magical Marbles in Three.js

In April 2019, Harry Alisavakis made a great write-up about the “magical marbles” effect he shared prior on Twitter. Check that out first to get a high level overview of the effect we’re after (while you’re at it, you should see some of his other excellent shader posts).

While his write-up provided a brief summary of the technique, the purpose of this tutorial is to offer a more concrete look at how you could implement code for this in Three.js to run on the web. There’s also some tweaks to the technique here and there that try to make things more straightforward.

⚠ This tutorial assumes intermediate familiarity with Three.js and GLSL

Overview

You should read Harry’s post first because he provides helpful visuals, but the gist of it is this:

  • Add fake depth to a material by offsetting the texture look-ups based on camera direction
  • Instead of using the same texture at each iteration, let’s use depth-wise “slices” of a heightmap so that the shape of our volume is more dynamic
  • Add wavy motion by displacing the texture look-ups with scrolling noise

There were a couple parts of this write-up that weren’t totally clear to me, likely due to the difference in features available in Unity vs Three.js. One is the jump from parallax mapping on a plane to a sphere. Another is how to get vertex tangents for the transformation to tangent space. Finally, I wasn’t sure if the noise for the heightmap was evaluated as code inside the shader or pre-rendered. After some experimentation I came to my own conclusions for these, but I encourage you to come up with your own variations of this technique 🙂

Here’s the Pen I’ll be starting from, it sets up a boilerplate Three.js app with an init and tick lifecycle, color management, and an environment map from Poly Haven for lighting.

See the Pen
by Matt (@mattrossman)
on CodePen.

Step 1: A Blank Marble

Marbles are made of glass, and Harry’s marbles definitely showed some specular shine. In order to make a truly beautiful glassy material it would take some pretty complex PBR shader code, which is too much work! Instead, let’s just take one of Three.js’s built-in PBR materials and hook our magical bits into that, like the shader parasite we are.

Enter onBeforeCompile, a callback property of the THREE.Material base class that lets you apply patches to built-in shaders before they get compiled by WebGL. This technique is very hacky and not well explained in the official docs, but a good place to learn more about it is Dusan Bosnjak’s post “Extending three.js materials with GLSL”. The hardest part about it is determining which part of the shaders you need to change exactly. Unfortunately, your best bet is to just read through the source code of the shader you want to modify, find a line or chunk that looks vaguely relevant, and try tweaking stuff until the property you want to modify shows visible changes. I’ve been writing personal notes of what I discover since it’s really hard to keep track of what the different chunks and variables do.

ℹ I recently discovered there’s a much more elegant way to extend the built-in materials using Three’s experimental Node Materials, but that deserves a whole tutorial of its own, so for this guide I’ll stick with the more common onBeforeCompile approach.

For our purposes, MeshStandardMaterial is a good base to start from. It has specular and environment reflections that will make our material look very glassy, plus it gives you the option to add a normal map later on if you want to add scratches onto the surface. The only part we want to change is the base color on which the lighting is applied. Luckily, this is easy to find. The fragment shader for MeshStandardMaterial is defined in meshphysical_frag.glsl.js (it’s a subset of MeshPhysicalMaterial, so they are both defined in the same file). Oftentimes you need to go digging through the shader chunks represented by each of the #include statements you’ll see in the file, however, this is a rare occasion where the variable we want to tweak is in plain sight.

It’s the line right near the top of the main() function that says:

vec4 diffuseColor = vec4( diffuse, opacity );

This line normally reads from the diffuse and opacity uniforms which you set via the .color and .opacity JavaScript properties of the material, and then all the chunks after that do the complicated lighting work. We are going to replace this line with our own assignment to diffuseColor so we can apply whatever pattern we want on the marble’s surface. You can do this using regular JavaScript string methods on the .fragmentShader field of the shader provided to the onBeforeCompile callback.

material.onBeforeCompile = shader => {
  shader.fragmentShader = shader.fragmentShader.replace(/vec4 diffuseColor.*;/, `
    // Assign whatever you want!
    vec4 diffuseColor = vec4(1., 0., 0., 1.);
  `)
}

By the way, the type definition for that mysterious callback argument is available here.

In the following Pen I swapped our geometry for a sphere, lowered the roughness, and filled the diffuseColor with the screen space normals which are available in the standard fragment shader on vNormal. The result looks like a shiny version of MeshNormalMaterial.

See the Pen
by Matt (@mattrossman)
on CodePen.

Step 2: Fake Volume

Now comes the harder part — using the diffuse color to create the illusion of volume inside our marble. In Harry’s earlier parallax post, he talks about finding the camera direction in tangent space and using this to offset the UV coordinates. There’s a great explanation of how this general principle works for parallax effects on learnopengl.com and in this archived post.

However, converting stuff into tangent space in Three.js can be tricky. To the best of my knowledge, there’s not a built-in utility to help with this like there are for other space transformations, so it takes some legwork to both generate vertex tangents and then assemble a TBN matrix to perform the transformation. On top of that, spheres are not a nice shape for tangents due to the hairy ball theorem (yes, that’s a real thing), and Three’s computeTangents() function was producing discontinuities for me so you basically have to compute tangents manually. Yuck!

Luckily, we don’t really need to use tangent space if we frame this as a 3D raymarching problem. We have a ray pointing from the camera to the surface of our marble, and we want to march this through the sphere volume as well as down the slices of our height map. We just need to know how to convert a point in 3D space into a point on the surface of our sphere so we can perform texture lookups. In theory you could also just plug the 3D position right into your noise function of choice and skip using the texture, but this effect relies on lots of iterations and I’m operating under the assumption that a texture lookup is cheaper than all the number crunching happening in e.g. the 3D simplex noise function (shader gurus, please correct me if I’m wrong). The other benefit of reading from a texture is that it allows us to use a more art-oriented pipeline to craft our heightmaps, so we can make all sorts of interesting volumes without writing new code.

Originally I wrote a function to do this spherical XYZ→UV conversion based on some answers I saw online, but it turns out there’s already a function that does the same thing inside of common.glsl.js called equirectUv. We can reuse that as long as we put our raymarching logic after the #include <common> line in the standard shader.
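
Concretely, the patch could look something like the following sketch, where marchMarbleGLSL is a placeholder string holding the uniforms and the marchMarble function from the next sections:

material.onBeforeCompile = shader => {
  // Inject our GLSL right after <common> so equirectUv() is already defined
  shader.fragmentShader = shader.fragmentShader.replace(
    '#include <common>',
    '#include <common>\n' + marchMarbleGLSL
  )

  // ... (plus the diffuseColor replacement from earlier)
}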

Creating our heightmap

For the heightmap, we want a texture that seamlessly projects on the surface of a UV sphere. It’s not hard to find seamless noise textures online, but the problem is that these flat projections of noise will look warped near the poles when applied to a sphere. To solve this, let’s craft our own texture using Blender. One way to do this is to bend a high resolution “Grid” mesh into a sphere using two instances of the “Simple Deform modifier”, plug the resulting “Object” texture coordinates into your procedural shader of choice, and then do an emissive bake with the Cycles renderer. I also added some loop cuts near the poles and a subdivision modifier to prevent any artifacts in the bake.

The resulting bake looks something like this:

Raymarching

Now the moment we’ve been waiting for (or dreading) — raymarching! It’s actually not so bad, the following is an abbreviated version of the code. For now there’s no animation, I’m just taking slices of the heightmap using smoothstep (note the smoothing factor which helps hide the sharp edges between layers), adding them up, and then using this to mix two colors.

uniform sampler2D heightMap;
uniform vec3 colorA;
uniform vec3 colorB;
uniform float iterations;
uniform float depth;
uniform float smoothing;

/**
  * @param rayOrigin - Point on sphere
  * @param rayDir - Normalized ray direction
  * @returns Diffuse RGB color
  */
vec3 marchMarble(vec3 rayOrigin, vec3 rayDir) {
  float perIteration = 1. / float(iterations);
  vec3 deltaRay = rayDir * perIteration * depth;

  // Start at point of intersection and accumulate volume
  vec3 p = rayOrigin;
  float totalVolume = 0.;

  for (int i = 0; i < int(iterations); ++i) {
    // Read heightmap from current spherical direction
    vec2 uv = equirectUv(normalize(p));
    float heightMapVal = texture(heightMap, uv).r;

    // Take a slice of the heightmap
    float height = length(p); // 1 at surface, 0 at core, assuming radius = 1
    float cutoff = 1. - float(i) * perIteration;
    float slice = smoothstep(cutoff, cutoff + smoothing, heightMapVal);

    // Accumulate the volume and advance the ray forward one step
    totalVolume += slice * perIteration;
    p += deltaRay;
  }
  return mix(colorA, colorB, totalVolume);
}

/**
 * We can use this later like:
 *
 * vec4 diffuseColor = vec4(marchMarble(rayOrigin, rayDir), 1.0);
 */

ℹ This logic isn’t really physically accurate — taking slices of the heightmap based on the iteration index assumes that the ray is pointing towards the center of the sphere, but this isn’t true for most of the pixels. As a result, the marble appears to have some heavy refraction. However, I think this actually looks cool and further sells the effect of it being solid glass!

Injecting uniforms

One final note before we see the fruits of our labor — how do we include all these custom uniforms in our modified material? We can’t just stick stuff onto material.uniforms like we would with THREE.ShaderMaterial. The trick is to create your own personal uniforms object and then wire up its contents onto the shader argument inside of onBeforeCompile. For instance:

const myUniforms = {
  foo: { value: 0 }
}

material.onBeforeCompile = shader => {
  shader.uniforms.foo = myUniforms.foo

  // ... (all your other patches)
}

When the shader tries to read its shader.uniforms.foo.value reference, it’s actually reading from your local myUniforms.foo.value, so any change to the values in your uniforms object will automatically be reflected in the shader.
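
That shared reference is all we need to animate the shader. A minimal sketch, assuming the usual renderer/scene/camera setup:

const clock = new THREE.Clock()

function animate() {
  // The shader reads myUniforms.foo.value through the shared
  // reference, so this update is visible on the very next render
  myUniforms.foo.value = clock.getElapsedTime()

  renderer.render(scene, camera)
  requestAnimationFrame(animate)
}
animate()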

I typically use the JavaScript spread operator to wire up all my uniforms at once:

const myUniforms = {
  // ...(lots of stuff)
}

material.onBeforeCompile = shader => {
  shader.uniforms = { ...shader.uniforms, ...myUniforms }

  // ... (all your other patches)
}

Putting this all together, we get a gassy (and glassy) volume. I’ve added sliders to this Pen so you can play around with the iteration count, smoothing, max depth, and colors.

See the Pen
by Matt (@mattrossman)
on CodePen.

ℹ Technically the ray origin and ray direction should be in local space so the effect doesn’t break when the marble moves. However, I’m skipping this transformation because we’re not moving the marble, so world space and local space are interchangeable. Work smarter not harder!

Step 3: Wavy Motion

Almost done! The final touch is to make this marble come alive by animating the volume. Harry’s waving displacement post explains how he accomplishes this using a 2D displacement texture. However, just like with the heightmap, a flat displacement texture warps near the poles of a sphere. So, we’ll make our own again. You can use the same Blender setup as before, but this time let’s bake a 3D noise texture to the RGB channels:

Then in our marchMarble function, we’ll read from this texture using the same equirectUv function as before, center the values, and then add a scaled version of that vector to the position used for the heightmap texture lookup. To animate the displacement, introduce a time uniform and use that to scroll the displacement texture horizontally. For an even better effect, we’ll sample the displacement map twice (once upright, then upside down so they never perfectly align), scroll them in opposite directions and add them together to produce noise that looks chaotic. This general strategy is often used in water shaders to create waves.

uniform sampler2D displacementMap;
uniform float time;
uniform float strength;

// Lookup displacement texture
vec2 uv = equirectUv(normalize(p));
vec2 scrollX = vec2(time, 0.);
vec2 flipY = vec2(1., -1.);
vec3 displacementA = texture(displacementMap, uv + scrollX).rgb;
vec3 displacementB = texture(displacementMap, uv * flipY - scrollX).rgb;

// Center the noise
displacementA -= 0.5;
displacementB -= 0.5;

// Displace current ray position and lookup heightmap
vec3 displaced = p + strength * (displacementA + displacementB);
uv = equirectUv(normalize(displaced));
float heightMapVal = texture(heightMap, uv).r;
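
On the JavaScript side, the new uniforms are wired up just like the others. One detail worth noting: the scrolled UVs run outside the [0, 1] range, so the displacement texture has to repeat rather than clamp. A sketch, with the file name and scroll speed as my own placeholders:

const displacementMap = new THREE.TextureLoader().load(
  'displacement.png',
  texture => {
    // Let the texture tile so the time-based scroll can wrap around
    texture.wrapS = texture.wrapT = THREE.RepeatWrapping
  }
)

myUniforms.displacementMap = { value: displacementMap }
myUniforms.time = { value: 0 }
myUniforms.strength = { value: 0.03 }

// In the render loop: a slow scroll reads best
myUniforms.time.value = clock.getElapsedTime() * 0.05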

Behold, your magical marble!

See the Pen
by Matt (@mattrossman)
on CodePen.

Extra Credit

Hard part’s over! This formula is a starting point from which there are endless possibilities for improvements and deviations. For instance, what happens if we swap out the noise texture we used earlier for something else like this:

This was created using the “Wave Texture” node in Blender

See the Pen
by Matt (@mattrossman)
on CodePen.

Or how about something recognizable, like this map of the earth?

Try dragging the “displacement” slider and watch how the floating continents dance around!

See the Pen
by Matt (@mattrossman)
on CodePen.

In that example I modified the shader to make the volume look less gaseous by boosting the rate of volume accumulation, breaking the loop once it reached a certain volume threshold, and tinting based on the final number of iterations rather than accumulated volume.
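
I haven’t reproduced that exact shader here, but the gist of those tweaks inside the marching loop would look something like this; the density knob is my own invention:

// Inside the loop from marchMarble:
totalVolume += slice * perIteration * density; // boosted accumulation

if (totalVolume >= 1.) {
  // Tint by how many steps the ray took before filling up,
  // rather than by the accumulated volume
  return mix(colorA, colorB, float(i) / float(iterations));
}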

For my last trick, I’ll point back to Harry’s write-up where he suggests mixing between two HDR colors. This basically means mixing between colors whose RGB values exceed the typical [0, 1] range. If we plug such a color into our shader as-is, it’ll create color artifacts in the pixels where the lighting is blown out. There’s an easy solve for this by wrapping the color in a toneMapping() call as is done in tonemapping_fragment.glsl.js, which “tones down” the color range. I couldn’t find where that function is actually defined, but it works!
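
In practice that just means wrapping the final color mix, roughly like below; this assumes the renderer has tone mapping enabled so that three.js injects the toneMapping() function:

// colorA and colorB may now hold channel values above 1.0
vec3 rawColor = mix(colorA, colorB, totalVolume);

// Compress the HDR result back into a displayable range
return toneMapping(rawColor);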

I’ve added some color multiplier sliders to this Pen so you can push the colors outside the [0, 1] range and observe how mixing these HDR colors creates pleasant color ramps.

See the Pen
by Matt (@mattrossman)
on CodePen.

Conclusion

Thanks again to Harry for the great learning resources. I had a ton of fun trying to recreate this effect and I learned a lot along the way. Hopefully you learned something too!

Your challenge now is to take these examples and run with them. Change the code, the textures, the colors, and make your very own magical marble. Show me and Harry what you make on Twitter.

Surprise me!


Creating a Typography Motion Trail Effect with Three.js

Framebuffers are a key feature in WebGL when it comes to creating advanced graphical effects such as depth-of-field, bloom, film grain or various types of anti-aliasing and have already been covered in-depth here on Codrops. They allow us to “post-process” our scenes, applying different effects on them once rendered. But how exactly do they work?

By default, WebGL (and also Three.js and all other libraries built on top of it) render to the default framebuffer, which is the device screen. If you have used Three.js or any other WebGL framework before, you know that you create your mesh with the correct geometry and material, render it, and voilà, it’s visible on your screen.

However, we as developers can create new framebuffers besides the default one and explicitly instruct WebGL to render to them. By doing so, we render our scenes to image buffers in the video card’s memory instead of the device screen. Afterwards, we can treat these image buffers like regular textures and apply filters and effects before eventually rendering them to the device screen.

Here is a video breaking down the post-processing and effects in Metal Gear Solid 5: Phantom Pain that really brings home the idea. Notice how it starts with footage from the actual game rendered to the default framebuffer (device screen) and then breaks down what each framebuffer looks like. All of these framebuffers are composited together on each frame and the result is the final picture you see when playing the game:

So with the theory out of the way, let’s create a cool typography motion trail effect by rendering to a framebuffer!

Our skeleton app

Let’s render some 2D text to the default framebuffer, i.e. device screen, using threejs. Here is our boilerplate:

const LABEL_TEXT = 'ABC'

const clock = new THREE.Clock()
const scene = new THREE.Scene()

// Create a threejs renderer:
// 1. Size it correctly
// 2. Set default background color
// 3. Append it to the page
const renderer = new THREE.WebGLRenderer()
renderer.setClearColor(0x222222)
renderer.setClearAlpha(0)
renderer.setSize(innerWidth, innerHeight)
renderer.setPixelRatio(devicePixelRatio || 1)
document.body.appendChild(renderer.domElement)

// Create an orthographic camera that covers the entire screen
// 1. Position it correctly in the positive Z dimension
// 2. Orient it towards the scene center
const orthoCamera = new THREE.OrthographicCamera(
  -innerWidth / 2,
  innerWidth / 2,
  innerHeight / 2,
  -innerHeight / 2,
  0.1,
  10,
)
orthoCamera.position.set(0, 0, 1)
orthoCamera.lookAt(new THREE.Vector3(0, 0, 0))

// Create a square plane geometry sized to the smaller of the
// viewport width and height so it always fits on screen
const labelMeshSize = innerWidth > innerHeight ? innerHeight : innerWidth
const labelGeometry = new THREE.PlaneBufferGeometry(
  labelMeshSize,
  labelMeshSize
)

// Programmatically create a texture that will hold the text
let labelTextureCanvas
{
  // Canvas and corresponding context2d to be used for
  // drawing the text
  labelTextureCanvas = document.createElement('canvas')
  const labelTextureCtx = labelTextureCanvas.getContext('2d')

  // Dynamic texture size based on the device capabilities
  const textureSize = Math.min(renderer.capabilities.maxTextureSize, 2048)
  const relativeFontSize = 20
  // Size our text canvas
  labelTextureCanvas.width = textureSize
  labelTextureCanvas.height = textureSize
  labelTextureCtx.textAlign = 'center'
  labelTextureCtx.textBaseline = 'middle'

  // Dynamic font size based on the texture size
  // (based on the device capabilities)
  labelTextureCtx.font = `${relativeFontSize}px Helvetica`
  const textWidth = labelTextureCtx.measureText(LABEL_TEXT).width
  const widthDelta = labelTextureCanvas.width / textWidth
  const fontSize = relativeFontSize * widthDelta
  labelTextureCtx.font = `${fontSize}px Helvetica`
  labelTextureCtx.fillStyle = 'white'
  labelTextureCtx.fillText(LABEL_TEXT, labelTextureCanvas.width / 2, labelTextureCanvas.height / 2)
}
// Create a material with our programmatically created text
// texture as input
const labelMaterial = new THREE.MeshBasicMaterial({
  map: new THREE.CanvasTexture(labelTextureCanvas),
  transparent: true,
})

// Create a plane mesh, add it to the scene
const labelMesh = new THREE.Mesh(labelGeometry, labelMaterial)
scene.add(labelMesh)

// Start our animation render loop
renderer.setAnimationLoop(onAnimLoop)

function onAnimLoop() {
  // On each new frame, render the scene to the default framebuffer 
  // (device screen)
  renderer.render(scene, orthoCamera)
}

This code simply initialises a threejs scene, adds a 2D plane with a text texture to it and renders it to the default framebuffer (device screen). If we execute it with threejs included in our project, we will get this:

See the Pen
Step 1: Render to default framebuffer
by Georgi Nikoloff (@gbnikolov)
on CodePen.

Again, since we don’t explicitly specify otherwise, we are rendering to the default framebuffer (device screen).

Now that we have managed to render our scene to the device screen, let’s add a framebuffer (THREE.WebGLRenderTarget) and render the scene to a texture in the video card’s memory.

Rendering to a framebuffer

Let’s start by creating a new framebuffer when we initialise our app:

const clock = new THREE.Clock()
const scene = new THREE.Scene()

// Create a new framebuffer we will use to render to
// the video card memory
const renderBufferA = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)

// ... rest of application

Now that we have created it, we must explicitly instruct threejs to render to it instead of the default framebuffer, i.e. device screen. We will do this in our program animation loop:

function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)
  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)
}

And here is our result:

See the Pen
Step 2: Render to a framebuffer
by Georgi Nikoloff (@gbnikolov)
on CodePen.

As you can see, we are getting an empty screen, yet our program contains no errors – so what happened? Well, we are no longer rendering to the device screen, but to another framebuffer! Our scene is being rendered to a texture in the video card’s memory, which is why we see an empty screen.

In order to display this generated texture containing our scene back to the default framebuffer (device screen), we need to create another 2D plane that will cover the entire screen of our app and pass the texture as material input to it.

First we will create a fullscreen 2D plane that will span the entire device screen:

// ... rest of initialisation step

// Create a second scene that will hold our fullscreen plane
const postFXScene = new THREE.Scene()

// Create a plane geometry that covers the entire screen
const postFXGeometry = new THREE.PlaneBufferGeometry(innerWidth, innerHeight)

// Create a plane material that expects a sampler texture input
// We will pass our generated framebuffer texture to it
const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
  },
  // vertex shader will be in charge of positioning our plane correctly
  vertexShader: `
      varying vec2 v_uv;

      void main () {
        // Set the correct position of each plane vertex
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);

        // Pass in the correct UVs to the fragment shader
        v_uv = uv;
      }
    `,
  fragmentShader: `
      // Declare our texture input as a "sampler" variable
      uniform sampler2D sampler;

      // Consume the correct UVs from the vertex shader to use
      // when displaying the generated texture
      varying vec2 v_uv;

      void main () {
        // Sample the correct color from the generated texture
        vec4 inputColor = texture2D(sampler, v_uv);
        // Set the correct color of each pixel that makes up the plane
        gl_FragColor = inputColor;
      }
    `
})
const postFXMesh = new THREE.Mesh(postFXGeometry, postFXMaterial)
postFXScene.add(postFXMesh)

// ... animation loop code here, same as before

As you can see, we are creating a new scene that will hold our fullscreen plane. After creating it, we need to augment our animation loop to render the generated texture from the previous step to the fullscreen plane on our screen:

function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)

  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)
  
  // 👇
  // Set the device screen as the framebuffer to render to
  // In WebGL, framebuffer "null" corresponds to the default 
  // framebuffer!
  renderer.setRenderTarget(null)

  // 👇
  // Assign the generated texture to the sampler variable used
  // in the postFXMesh that covers the device screen
  postFXMesh.material.uniforms.sampler.value = renderBufferA.texture

  // 👇
  // Render the postFX mesh to the default framebuffer
  renderer.render(postFXScene, orthoCamera)
}

After including these snippets, we can see our scene once again rendered on the screen:

See the Pen
Step 3: Display the generated framebuffer on the device screen
by Georgi Nikoloff (@gbnikolov)
on CodePen.

Let’s recap the necessary steps needed to produce this image on our screen on each render loop:

  1. Create a renderBufferA framebuffer that will allow us to render to a separate texture in the user’s video card memory
  2. Create our “ABC” plane mesh
  3. Render the “ABC” plane mesh to renderBufferA instead of the device screen
  4. Create a separate fullscreen plane mesh that expects a texture as an input to its material
  5. Render the fullscreen plane mesh back to the default framebuffer (device screen) using the generated texture created by rendering the “ABC” mesh to renderBufferA

Achieving the persistence effect by using two framebuffers

We don’t have much use for framebuffers if we simply display them unchanged on the device screen, as we do right now. Now that we have our setup ready, let’s actually do some cool post-processing.

First, we want to create yet another framebuffer – renderBufferB – and make sure both it and renderBufferA are let variables rather than consts. That’s because we will swap them at the end of each frame render to achieve framebuffer ping-ponging.

“Ping-ponging” in WebGL is a technique that alternates the use of a framebuffer as either input or output. It is a neat trick that allows for general-purpose GPU computations and is used in effects such as Gaussian blur, where in order to blur our scene we need to:

  1. Render the scene to framebuffer A using a 2D plane and apply horizontal blur via the fragment shader
  2. Render the horizontally blurred image from step 1 to framebuffer B and apply vertical blur via the fragment shader
  3. Swap framebuffer A and framebuffer B
  4. Keep repeating steps 1 to 3, incrementally applying blur, until the desired Gaussian blur radius is achieved

Here is a small chart illustrating the steps needed to achieve ping-pong:
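
In code, one frame of that blur dance would look roughly like the following sketch; blurScene, blurMaterial and blurPasses are placeholders, and we assume the scene has already been rendered into readBuffer:

let readBuffer = new THREE.WebGLRenderTarget(innerWidth, innerHeight)
let writeBuffer = new THREE.WebGLRenderTarget(innerWidth, innerHeight)

for (let i = 0; i < blurPasses; i++) {
  // Alternate horizontal and vertical blur on each pass
  blurMaterial.uniforms.direction.value.set(i % 2, (i + 1) % 2)

  // Read from the previous result, write into the other buffer
  blurMaterial.uniforms.sampler.value = readBuffer.texture
  renderer.setRenderTarget(writeBuffer)
  renderer.render(blurScene, orthoCamera)

  // Swap the buffers for the next pass
  const temp = readBuffer
  readBuffer = writeBuffer
  writeBuffer = temp
}

// Finally, draw the accumulated result to the device screen
renderer.setRenderTarget(null)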

So with that in mind, we will render the contents of renderBufferA into renderBufferB using the postFXMesh we created and apply some special effect via the fragment shader.

Let’s kick things off by creating our renderBufferB:

let renderBufferA = new THREE.WebGLRenderTarget(
  // ...
)
// Create a second framebuffer
let renderBufferB = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)

Next up, let’s augment our animation loop to actually do the ping-pong technique:

function onAnimLoop() {
  // 👇
  // Do not clear the contents of the canvas on each render
  // In order to achieve our ping-pong effect, we must draw
  // the new frame on top of the previous one!
  renderer.autoClearColor = false

  // 👇
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)

  // 👇
  // Render the postFXScene to renderBufferA.
  // This will contain our ping-pong accumulated texture
  renderer.render(postFXScene, orthoCamera)

  // 👇
  // Render the original scene containing ABC again on top
  renderer.render(scene, orthoCamera)
  
  // Same as before
  // ...
  // ...
  
  // 👇
  // Ping-pong our framebuffers by swapping them
  // at the end of each frame render
  const temp = renderBufferA
  renderBufferA = renderBufferB
  renderBufferB = temp
}

If we are to render our scene again with these updated snippets, we will see no visual difference, even though we do in fact alternate between the two framebuffers to render it. That’s because, as it is right now, we do not apply any special effects in the fragment shader of our postFXMesh.

Let’s change our fragment shader like so:

// Sample the correct color from the generated texture
// 👇
// Notice how we now apply a slight 0.005 offset to our UVs when
// looking up the correct texture color

vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));
// Set the correct color of each pixel that makes up the plane
// 👇
// We fade out the color from the previous step to 97.5% of
// whatever it was before
gl_FragColor = vec4(inputColor * 0.975);

With these changes in place, here is our updated program:

See the Pen
Step 4: Create a second framebuffer and ping-pong between them
by Georgi Nikoloff (@gbnikolov)
on CodePen.

Let’s break down one frame render of our updated example:

  1. We render the renderBufferB result to renderBufferA
  2. We render our “ABC” text to renderBufferA, compositing it on top of the renderBufferB result from step 1 (we do not clear the contents of the canvas on new renders, because we set renderer.autoClearColor = false)
  3. We pass the generated renderBufferA texture to postFXMesh, apply a small offset vec2(0.005) to its UVs when looking up the texture color and fade it out a bit by multiplying the result by 0.975
  4. We render postFXMesh to the device screen
  5. We swap renderBufferA with renderBufferB (ping-ponging)

For each new frame render, we will repeat steps 1 to 5. This way, the previous frame’s framebuffer is used as an input to the current render and so on. You can clearly see this effect visually in the last demo – notice how, as the ping-ponging progresses, more and more offset is applied to the UVs and the colors fade out more and more.

Applying simplex noise and mouse interaction

Now that we have implemented and can see the ping-pong technique working correctly, we can get creative and expand on it.

Instead of simply adding an offset in our fragment shader as before:

vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));

Let’s actually use simplex noise for a more interesting visual result. We will also control the direction using our mouse position.

Here is our updated fragment shader:

// Pass in elapsed time since start of our program
uniform float time;

// Pass in normalised mouse position
// (-1 to 1 horizontally and vertically)
uniform vec2 mousePos;

// <Insert snoise function definition from the link above here>

// Calculate different offsets for x and y by using the UVs
// and different time offsets to the snoise method
float a = snoise(vec3(v_uv * 1.0, time * 0.1)) * 0.0032;
float b = snoise(vec3(v_uv * 1.0, time * 0.1 + 100.0)) * 0.0032;

// Add the snoise offset multiplied by the normalised mouse position
// to the UVs
vec4 inputColor = texture2D(sampler, v_uv + vec2(a, b) + mousePos * 0.005);

We also need to specify mousePos and time as inputs to our postFXMesh material shader:

const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
    time: { value: 0 },
    mousePos: { value: new THREE.Vector2(0, 0) }
  },
  // ...
})
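
One thing the snippets above don’t show is the time uniform actually ticking. This is where the clock we created during initialisation comes in; a minimal way to drive it inside the animation loop:

function onAnimLoop() {
  // Advance the shader's notion of time once per frame
  postFXMesh.material.uniforms.time.value = clock.getElapsedTime()

  // ... rest of the render loop, same as before
}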

Finally, let’s make sure we attach a mousemove event listener to our page and pass the updated normalised mouse coordinates from JavaScript to our GLSL fragment shader:

// ... initialisation step

// Attach mousemove event listener
document.addEventListener('mousemove', onMouseMove)

function onMouseMove (e) {
  // Normalise horizontal mouse pos from -1 to 1
  const x = (e.pageX / innerWidth) * 2 - 1

  // Normalise vertical mouse pos from -1 to 1
  const y = (1 - e.pageY / innerHeight) * 2 - 1

  // Pass normalised mouse coordinates to fragment shader
  postFXMesh.material.uniforms.mousePos.value.set(x, y)
}

// ... animation loop

With these changes in place, here is our final result. Make sure to hover around it (you might have to wait a moment for everything to load):

See the Pen
Step 5: Perlin Noise and mouse interaction
by Georgi Nikoloff (@gbnikolov)
on CodePen.

Conclusion

Framebuffers are a powerful tool in WebGL that allows us to greatly enhance our scenes via post-processing and achieve all kinds of cool effects. Some techniques require more than one framebuffer, as we saw, and it is up to us as developers to mix and match them however we need to achieve our desired visuals.

I encourage you to experiment with the provided examples, try to render more elements, alternate the “ABC” text color between each renderBufferA and renderBufferB swap to achieve different color mixing, etc.

In the first demo, you can see a specific example of how this typography effect could be used and the second demo is a playground for you to try some different settings (just open the controls in the top right corner).

