How to Create Motion Hover Effects with Image Distortions using Three.js

The reveal hover effect on images has become a very popular pattern in modern websites. It plays an important role in taking the user experience to a higher level. But these kinds of animations usually remain too “flat”. Natural movements with a realistic feel are much more enjoyable for the user. In this tutorial we’re going to build some special interactive reveal effects for images that kick in when a link is hovered. The aim is to add fluid and interesting motion to the effects. We will be exploring three different types of animations. This dynamic experience consists of two parts:

  1. Distortion Image Effect (main effect)
  2. RGB Displacement, Image Trail Effect, Image Stretch (additional effects)

We assume that you are confident with JavaScript and have some basic understanding of Three.js and WebGL.

Getting started

The markup for this effect will include a link element that contains an image (and some other elements that are not of importance for our effect):

<a class="link" href="#">
	<!-- ... -->
	<img src="img/demo1/img1.jpg" alt="Some image" />
</a>

The EffectShell class will group common methods and properties of the three distinct effects we’ll be creating. As a result, each effect will extend EffectShell.

Three.js setup

First of all, we need to create the Three.js scene.

class EffectShell {
 constructor(container = document.body, itemsWrapper = null) {
   this.container = container
   this.itemsWrapper = itemsWrapper
   if (!this.container || !this.itemsWrapper) return
   this.setup()
 }
 
 setup() {
   window.addEventListener('resize', this.onWindowResize.bind(this), false)

   // mouse position in normalized device coordinates, updated in _onMouseMove
   this.mouse = new THREE.Vector2()

   // renderer
   this.renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true })
   this.renderer.setSize(this.viewport.width, this.viewport.height)
   this.renderer.setPixelRatio(window.devicePixelRatio)
   this.container.appendChild(this.renderer.domElement)
 
   // scene
   this.scene = new THREE.Scene()
 
   // camera
   this.camera = new THREE.PerspectiveCamera(
     40,
     this.viewport.aspectRatio,
     0.1,
     100
   )
   this.camera.position.set(0, 0, 3)
 
   // animation loop
   this.renderer.setAnimationLoop(this.render.bind(this))
 }
 
 render() {
   // called every frame
   this.renderer.render(this.scene, this.camera)
 }
 
 get viewport() {
   let width = this.container.clientWidth
   let height = this.container.clientHeight
   let aspectRatio = width / height
   return {
     width,
     height,
     aspectRatio
   }
 }
 
 onWindowResize() {
   this.camera.aspect = this.viewport.aspectRatio
   this.camera.updateProjectionMatrix()
   this.renderer.setSize(this.viewport.width, this.viewport.height)
 }
}

Get items and load textures

In our markup we have links with images inside. The next step is to get each link from the DOM and put them in an array.

class EffectShell {
 ...
 get itemsElements() {
   // convert NodeList to Array
   const items = [...this.itemsWrapper.querySelectorAll('.link')]
 
   //create Array of items including element, image and index
   return items.map((item, index) => ({
     element: item,
     img: item.querySelector('img') || null,
     index: index
   }))
 }
}

Because we will use the images as textures, we have to load them through Three.js’ TextureLoader. Loading is asynchronous, so we shouldn’t initialize the effect before all textures have loaded; otherwise the plane would render fully black. That’s why we use Promises here:

class EffectShell {
 ...
 initEffectShell() {
   let promises = []
 
   this.items = this.itemsElements
 
   const THREEtextureLoader = new THREE.TextureLoader()
   this.items.forEach((item, index) => {
     // create textures
     promises.push(
       this.loadTexture(
         THREEtextureLoader,
         item.img ? item.img.src : null,
         index
       )
     )
   })
 
   return new Promise((resolve, reject) => {
     // resolve textures promises
     Promise.all(promises).then(promises => {
       // all textures are loaded
       promises.forEach((promise, index) => {
         // assign texture to item
         this.items[index].texture = promise.texture
       })
       resolve()
     })
   })
 }
 
 loadTexture(loader, url, index) {
   // https://threejs.org/docs/#api/en/loaders/TextureLoader
   return new Promise((resolve, reject) => {
     if (!url) {
       resolve({ texture: null, index })
       return
     }
     // load a resource
     loader.load(
       // resource URL
       url,
 
       // onLoad callback
       texture => {
         resolve({ texture, index })
       },
 
       // onProgress callback currently not supported
       undefined,
 
       // onError callback
       error => {
         console.error('An error happened.', error)
         reject(error)
       }
     )
   })
 }
}

At this point we get an array of items. Each item contains an Element, Image, Index and Texture. Then, when all textures are loaded we can initialize the effect.

class EffectShell {
 constructor(container = document.body, itemsWrapper = null) {
   this.container = container
   this.itemsWrapper = itemsWrapper
   if (!this.container || !this.itemsWrapper) return
   this.setup()
   this.initEffectShell().then(() => {
     console.log('load finished')
     this.isLoaded = true
   })
 }
 ...
}

Create the plane

Once we have created the scene and loaded the textures, we can create the main effect. We start by creating a plane mesh using PlaneBufferGeometry and ShaderMaterial with three uniforms:

  1. uTexture contains the texture data to display the image on the plane
  2. uOffset provides plane deformation values
  3. uAlpha manages plane opacity

class Effect extends EffectShell {
 constructor(container = document.body, itemsWrapper = null, options = {}) {
   super(container, itemsWrapper)
   if (!this.container || !this.itemsWrapper) return
 
   options.strength = options.strength || 0.25
   this.options = options
 
   this.init()
 }
 
 init() {
   this.position = new THREE.Vector3(0, 0, 0)
   this.scale = new THREE.Vector3(1, 1, 1)
   this.geometry = new THREE.PlaneBufferGeometry(1, 1, 32, 32)
   this.uniforms = {
     uTexture: {
       //texture data
       value: null
     },
     uOffset: {
       //distortion strength
       value: new THREE.Vector2(0.0, 0.0)
     },
     uAlpha: {
       //opacity
       value: 0
     }
 
   }
   this.material = new THREE.ShaderMaterial({
     uniforms: this.uniforms,
     vertexShader: `
       uniform vec2 uOffset;
       varying vec2 vUv;
 
       void main() {
         vUv = uv;
         vec3 newPosition = position;
         gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
       }
     `,
     fragmentShader: `
       uniform sampler2D uTexture;
       uniform float uAlpha;
       varying vec2 vUv;
 
       void main() {
         vec3 color = texture2D(uTexture,vUv).rgb;
         gl_FragColor = vec4(color,1.0);
       }
     `,
     transparent: true
   })
   this.plane = new THREE.Mesh(this.geometry, this.material)
   this.scene.add(this.plane)
 }
}
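
To see the effect on screen, instantiate it with a container and the wrapper that holds the links. A usage sketch; the '.grid' selector is illustrative, use whatever element wraps your links:

const effect = new Effect(
  document.body,                   // container: hosts the WebGL canvas
  document.querySelector('.grid'), // itemsWrapper: holds the <a class="link"> elements
  { strength: 0.25 }
)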

At this point, we have a black square plane in the center of our screen. Not very impressive.

Adding interactions

Creating events

So, let's outline all our possible events and what needs to be done:

  1. when we hover over an item, the plane’s texture takes the item’s texture
  2. when the mouse moves on the container, the plane’s position follows the mouse and its vertices are deformed
  3. when the mouse leaves the container, the plane’s opacity fades to 0
  4. when the mouse hovers a link, if the plane was invisible, its opacity animates to 1

class EffectShell {
 constructor(container = document.body, itemsWrapper = null) {
   this.container = container
   this.itemsWrapper = itemsWrapper
   if (!this.container || !this.itemsWrapper) return
 
   this.setup()
   this.initEffectShell().then(() => {
     console.log('load finished')
     this.isLoaded = true
   })
   this.createEventsListeners()
 }
 ...
 createEventsListeners() {
   this.items.forEach((item, index) => {
     item.element.addEventListener(
       'mouseover',
       this._onMouseOver.bind(this, index),
       false
     )
   })
 
   this.container.addEventListener(
     'mousemove',
     this._onMouseMove.bind(this),
     false
   )
   this.itemsWrapper.addEventListener(
     'mouseleave',
     this._onMouseLeave.bind(this),
     false
   )
 }
 
 _onMouseLeave(event) {
   this.isMouseOver = false
   this.onMouseLeave(event)
 }
 
 _onMouseMove(event) {
   // get normalized mouse position on viewport
   this.mouse.x = (event.clientX / this.viewport.width) * 2 - 1
   this.mouse.y = -(event.clientY / this.viewport.height) * 2 + 1
 
   this.onMouseMove(event)
 }
 
 _onMouseOver(index, event) {
   this.onMouseOver(index, event)
 }
}

Updating the texture

When we created the plane geometry we gave it a width and height of 1, which is why our plane is always square. But we need to scale the plane to fit the image’s dimensions, otherwise the texture will be stretched.

class Effect extends EffectShell {
 ...
 onMouseEnter() {}
 
 onMouseOver(index, e) {
   if (!this.isLoaded) return
   this.onMouseEnter()
   if (this.currentItem && this.currentItem.index === index) return
   this.onTargetChange(index)
 }
 
 onTargetChange(index) {
   // item target changed
   this.currentItem = this.items[index]
   if (!this.currentItem.texture) return
 
   //update texture
   this.uniforms.uTexture.value = this.currentItem.texture
 
   // compute image ratio
   let imageRatio =
     this.currentItem.img.naturalWidth / this.currentItem.img.naturalHeight
 
   // scale plane to fit image dimensions
   this.scale = new THREE.Vector3(imageRatio, 1, 1)
   this.plane.scale.copy(this.scale)
 }
}

Updating the plane position

Here comes the first mathematical part of this tutorial. As we move the mouse over the viewport, the browser gives us its 2D viewport coordinates, but what we need are 3D coordinates in order to move our plane in the scene. So we need to remap the mouse coordinates to the view size of our scene.

First, we need to get the view size of our scene. For this, we can compute the plane's fit-to-screen dimensions by resolving AAS triangles using the camera position and camera FOV. This solution is provided by ayamflow.

class EffectShell {
 ...
 get viewSize() {
   // https://gist.github.com/ayamflow/96a1f554c3f88eef2f9d0024fc42940f
 
   let distance = this.camera.position.z
   let vFov = (this.camera.fov * Math.PI) / 180
   let height = 2 * Math.tan(vFov / 2) * distance
   let width = height * this.viewport.aspectRatio
   return { width, height, vFov }
 }
}

We are going to remap the normalized mouse position to the scene’s view dimensions using a value mapping function.

Number.prototype.map = function(in_min, in_max, out_min, out_max) {
 return ((this - in_min) * (out_max - out_min)) / (in_max - in_min) + out_min
}
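
For example, a normalized mouse x of 0.5 mapped to a hypothetical view width of 4 world units lands at 1:

let worldX = (0.5).map(-1, 1, -4 / 2, 4 / 2) // 1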

Finally, we will add a GSAP-powered animation in order to smooth out our movements.

class Effect extends EffectShell {
 ...
 onMouseMove(event) {
   // project mouse position to world coordinates
   let x = this.mouse.x.map(
     -1,
     1,
     -this.viewSize.width / 2,
     this.viewSize.width / 2
   )
   let y = this.mouse.y.map(
     -1,
     1,
     -this.viewSize.height / 2,
     this.viewSize.height / 2
   )
 
   // update plane position
   this.position = new THREE.Vector3(x, y, 0)
   TweenLite.to(this.plane.position, 1, {
     x: x,
     y: y,
     ease: Power4.easeOut,
     onUpdate: this.onPositionUpdate.bind(this)
   })
 }
}

Fading the opacity

class Effect extends EffectShell {
 ...
 onMouseEnter() {
   if (!this.currentItem || !this.isMouseOver) {
     this.isMouseOver = true
     // show plane
     TweenLite.to(this.uniforms.uAlpha, 0.5, {
       value: 1,
       ease: Power4.easeOut
     })
   }
 }
 
 onMouseLeave(event) {
   TweenLite.to(this.uniforms.uAlpha, 0.5, {
     value: 0,
     ease: Power4.easeOut
   })
 }
}

Once that animation is in place, we have to use uAlpha as the alpha channel inside the fragment shader of the plane’s material.

fragmentShader: `
 uniform sampler2D uTexture;
 uniform float uAlpha;
 varying vec2 vUv;

 void main() {
   vec3 color = texture2D(uTexture,vUv).rgb;
   gl_FragColor = vec4(color,uAlpha);
 }
`,

Adding the curved, velocity-sensitive distortion effect

During the movement animation, we compute the plane’s velocity and use it as uOffset for our distortion effect.

class Effect extends EffectShell {
 ...
 onPositionUpdate() {
   // compute offset
   let offset = this.plane.position
     .clone()
     .sub(this.position) // velocity
     .multiplyScalar(-this.options.strength)
   this.uniforms.uOffset.value = offset
 }
}

Now, in order to make the “curved” distortion we will use the sine function, which draws a single arch between x = 0 and x = PI. The plane’s UVs are mapped between 0 and 1, so by multiplying uv by PI we remap it to the range 0 to PI. We then multiply that by the uOffset value we calculated beforehand, and we get the curved distortion driven by the velocity.

vertexShader: `
 uniform vec2 uOffset;
 varying vec2 vUv;

 #define M_PI 3.1415926535897932384626433832795

 vec3 deformationCurve(vec3 position, vec2 uv, vec2 offset) {
   position.x = position.x + (sin(uv.y * M_PI) * offset.x);
   position.y = position.y + (sin(uv.x * M_PI) * offset.y);
   return position;
 }

 void main() {
   vUv = uv;
   vec3 newPosition = deformationCurve(position, uv, uOffset);
   gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
 }
`,

Additional effects

RGBShift

(This effect is shown in demo 1.)

To do an RGB shift we have to separate the red channel from the other channels and apply the offset to it alone:

fragmentShader: `
 uniform sampler2D uTexture;
 uniform float uAlpha;
 uniform vec2 uOffset;

 varying vec2 vUv;

 vec3 rgbShift(sampler2D textureImage, vec2 uv, vec2 offset) {
   float r = texture2D(textureImage, uv + offset).r;
   vec2 gb = texture2D(textureImage, uv).gb;
   return vec3(r, gb);
 }

 void main() {
   vec3 color = rgbShift(uTexture,vUv,uOffset);
   gl_FragColor = vec4(color,uAlpha);
 }
`,

Stretch

(This effect is shown in demo 3.)

By offsetting the UVs with the uOffset values we can achieve a “stretch effect”. But to keep the texture’s borders from getting overly stretched, we also scale the UVs down:

vertexShader: `
 uniform vec2 uOffset;

 varying vec2 vUv;

 vec3 deformationCurve(vec3 position, vec2 uv, vec2 offset) {
   float M_PI = 3.1415926535897932384626433832795;
   position.x = position.x + (sin(uv.y * M_PI) * offset.x);
   position.y = position.y + (sin(uv.x * M_PI) * offset.y);
   return position;
 }

 void main() {
   vUv = uv + (uOffset * 2.);
   vec3 newPosition = position;
   newPosition = deformationCurve(position,uv,uOffset);
   gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
 }
`,
fragmentShader: `
 uniform sampler2D uTexture;
 uniform float uAlpha;

 varying vec2 vUv;

 // zoom on texture 
 vec2 scaleUV(vec2 uv,float scale) {
   float center = 0.5;
   return ((uv - center) * scale) + center;
 }

 void main() {
   vec3 color = texture2D(uTexture,scaleUV(vUv,0.8)).rgb;
   gl_FragColor = vec4(color,uAlpha);
 }
`,

Trails

(This effect is shown in demo 2.)

To make a trail-like effect, we use several planes that share the same texture but animate their positions with different durations.
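
Note that the effect relies on two extra options, the number of planes (amount) and the base tween duration (duration). A constructor sketch with illustrative default values (the init below then uses these options):

class TrailsEffect extends EffectShell {
 constructor(container = document.body, itemsWrapper = null, options = {}) {
   super(container, itemsWrapper)
   if (!this.container || !this.itemsWrapper) return
   // illustrative defaults; tune amount and duration to taste
   options.strength = options.strength || 0.25
   options.amount = options.amount || 5
   options.duration = options.duration || 0.5
   this.options = options
   this.init()
 }
}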

class TrailsEffect extends EffectShell {
 ...
 init() {
   this.position = new THREE.Vector3(0, 0, 0)
   this.scale = new THREE.Vector3(1, 1, 1)
   this.geometry = new THREE.PlaneBufferGeometry(1, 1, 16, 16)
   //shared uniforms
   this.uniforms = {
     uTime: {
       value: 0
     },
     uTexture: {
       value: null
     },
     uOffset: {
       value: new THREE.Vector2(0.0, 0.0)
     },
     uAlpha: {
       value: 0
     }
   }
   this.material = new THREE.ShaderMaterial({
     uniforms: this.uniforms,
     vertexShader: `
       uniform vec2 uOffset;
 
       varying vec2 vUv;
 
       vec3 deformationCurve(vec3 position, vec2 uv, vec2 offset) {
         float M_PI = 3.1415926535897932384626433832795;
         position.x = position.x + (sin(uv.y * M_PI) * offset.x);
         position.y = position.y + (sin(uv.x * M_PI) * offset.y);
         return position;
       }
 
       void main() {
         vUv = uv;
         vec3 newPosition = position;
         newPosition = deformationCurve(position,uv,uOffset);
         gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
       }
     `,
     fragmentShader: `
       uniform sampler2D uTexture;
       uniform float uAlpha;
       uniform vec2 uOffset;
 
       varying vec2 vUv;
 
       void main() {
         vec3 color = texture2D(uTexture,vUv).rgb;
         gl_FragColor = vec4(color,uAlpha);
       }
     `,
     transparent: true
   })
   this.plane = new THREE.Mesh(this.geometry, this.material)
 
   this.trails = []
   for (let i = 0; i < this.options.amount; i++) {
     let plane = this.plane.clone()
     this.trails.push(plane)
     this.scene.add(plane)
   }
 }
 
 onMouseMove(event) {
   // project mouse position to world coordinates
   let x = this.mouse.x.map(
     -1,
     1,
     -this.viewSize.width / 2,
     this.viewSize.width / 2
   )
   let y = this.mouse.y.map(
     -1,
     1,
     -this.viewSize.height / 2,
     this.viewSize.height / 2
   )
 
   TweenLite.to(this.position, 1, {
     x: x,
     y: y,
     ease: Power4.easeOut,
     onUpdate: () => {
       // compute offset
       let offset = this.position
         .clone()
         .sub(new THREE.Vector3(x, y, 0))
         .multiplyScalar(-this.options.strength)
       this.uniforms.uOffset.value = offset
     }
   })
 
   this.trails.forEach((trail, index) => {
     let duration =
       this.options.duration * this.options.amount -
       this.options.duration * index
     TweenLite.to(trail.position, duration, {
       x: x,
       y: y,
       ease: Power4.easeOut
     })
   })
 }
}

Conclusion

We have tried to make this tutorial as easy as possible to follow, so that it's understandable to those who are not as advanced in Three.js. If there's anything you have not understood, please feel free to comment below.

The main purpose of this tutorial was to show how to create motion-distortion effects on images, but you can play around with the base effect and add something else or try something different. Feel free to make pull requests or open an issue in our GitHub repo.
These effects can also fit very well with texture transitions; it's something you can explore with GL Transitions.

We hope you enjoyed this article and that you’ll play around with these effects to explore new ideas.

References

  • Three.js
  • GSAP
  • Fit Plane to screen
  • Credits

    Art Direction, Photography, Dev (HTML,CSS) – Niccolò Miranda
    Dev (JS, WebGL) – Clément Roche

    How to Create Motion Hover Effects with Image Distortions using Three.js was written by Niccolò Miranda and published on Codrops.

    Case Study: Chang Liu Portfolio V4

    Recently, I rehauled my personal website in 3D using Three.js. In this post, I’ll run through my design process and outline how I achieved some of the effects. Additionally, I will explain how to achieve the wavy distortion effect that I use on a menu.

    Objective

    The goal was to highlight my work in a logical way that was also creative enough to stand as a portfolio piece itself. I started coding the site in 2D, deriving concepts from its previous version. Around that time, however, I was also starting my first Three.js project under UCLA’s Creative Labs while passively admiring 3D projects during my time at Use All Five. So several months later, after I had already finished the bulk of the 2D work, I decided to make the leap to 3D.

    The site in 2D, then the first iteration in 3D

    Challenges

    3D animations were not exactly easy to prototype. Coupled with my own inexperience in 3D programming, the biggest challenge was finding a middle ground between what I wanted and what I was capable of making i.e. being ambitious but also realistic.

    I also discovered that my creative process was very ad-hoc and collage-like; whenever I came across something I fancied, I tried to incorporate that into the website. What resulted was a jumble of different interactions that I needed to somehow unify.

    The last challenge was a matter of wanting to depart from my previous style of design but also to stay minimalistic and clean.

    1. Cohesiveness & Unification

    Vincent Tavano’s portfolio heavily inspired me in the way that it unified a series of very disjointed projects. I applied the same concept by making each project page a unique experience, unified by a common description section. This way, I was able to experiment and add different interactions to each page while maintaining a thematic portfolio.

    Project pages with a common header and varying interactive content

    Another pivotal change was consolidating two components on the homepage. Originally, I had a vertical carousel as well as a vertical menu that both displayed the same links. I decided to cut this redundancy out and combine them into one component that transforms from a carousel to a menu and vice versa.

    2. Contrast & Distortion

    My solution to creating experimental yet minimalistic UI was to utilize contrast and distortion. I was able to keep the clean look of sharp planes but also achieve experimental looks by applying distortion effects on hover. The contrast of sharp, rigid planes to wavy, flowy planes, sans-serif to serif types, straight arrows to circular loading spinners and white text to negative colored text also helped me distinguish this version from the homogeneously designed previous site.

    Rectangular planes on the home and about pages that distort on mouse events to add an experimental feel

    Using blend modes to add contrast in color in an otherwise monochromatic site

    Creating the Wavy Menu Effects

    Now I will go over how I achieved the wavy distortion effect on my planes. For the sake of simplicity, we will use just one plane for the example instead of a carousel of planes. I am also assuming basic knowledge of the Three.js library and GLSL shader language so I will skip over commonly used code like scene initialization.

    1. Measuring 3D Space Dimensions

    To begin with, we need to be comfortable converting between pixels and 3D space dimensions. There is a simple way to calculate the viewport size at a given z-depth for a scene using PerspectiveCamera:

    const getVisibleDimensionsAtZDepth = (depth, camera) => {
      const cameraOffset = camera.position.z;
    
      if (depth < cameraOffset) depth -= cameraOffset;
      else depth += cameraOffset;
    
      const vFOV = (camera.fov * Math.PI) / 180; // vertical fov in radians
    
      // Math.abs to ensure the result is always positive
      const visibleHeight = 2 * Math.tan(vFOV / 2) * Math.abs(depth);
      const visibleWidth = visibleHeight * camera.aspect;
    
      return {
        visibleHeight,
        visibleWidth
      };
    };

    Our scene is a fullscreen canvas so the pixel dimensions would be window.innerWidth × window.innerHeight. We place our plane at z = 0 and the 3D dimensions can be calculated with getVisibleDimensionsAtZDepth(0, camera). From here, we can get the visibleWidthPerPixel (the world-space width covered by one pixel) by calculating visibleWidth / window.innerWidth, and likewise for the height. Now if we wanted to make our plane appear 300 pixels wide in the 3D space, we would initialize its width to 300 × visibleWidthPerPixel.
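
    In code, that conversion might look like this (a sketch, assuming the camera from our scene setup):

    // visible world-space size of the viewport at the plane's depth (z = 0)
    const { visibleWidth, visibleHeight } = getVisibleDimensionsAtZDepth(0, camera);

    // world units covered by a single CSS pixel (height works the same way)
    const visibleWidthPerPixel = visibleWidth / window.innerWidth;

    // a plane that should appear 300 pixels wide on screen
    const planeWidth = 300 * visibleWidthPerPixel;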

    2. Creating the Plane

    For the wavy distortion effects, we need to apply transformations to the plane’s vertices. This means when we initialize the plane, we need to use THREE.ShaderMaterial to allow for shader programs and THREE.PlaneBufferGeometry to subdivide the plane into segments. We will also use the standard THREE.TextureLoader to load an image to map to our plane.

    One more thing to note is preserving the aspect ratio of our image. When you initialize a plane and texture it, the texture will stretch or shrink accordingly depending on the dimensions. To achieve a CSS background-size: cover like effect in 3D, we can pass in a ratio uniform that is calculated like so:

    const ratio = new Vector2(
      Math.min(planeWidth / planeHeight / (textureWidth / textureHeight), 1.0),
      Math.min(planeHeight / planeWidth / (textureHeight / textureWidth), 1.0)
    );

    Then inside the fragment shader we will have:

    uniform sampler2D texture;
    uniform vec2 ratio;
    varying vec2 vUv;
    
    void main(){
      vec2 uv = vec2(
        vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
        vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
      );
    
      gl_FragColor = texture2D(texture, uv);
    }

    I recommend setting a fixed aspect ratio and a dynamic plane width to make the scene responsive. In this example I set planeWidth to half the visibleWidth and then calculate the height by multiplying that by my fixed aspect ratio of 9/16. Also note that when we initialize the PlaneBufferGeometry, we pass whole numbers that are proportional to the plane dimensions as the 3rd and 4th arguments. These specify the horizontal and vertical segments respectively; we want the number to be large enough for the plane to bend smoothly but not so large that it impacts performance – I am using 30 horizontal segments.
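
    Putting this together, the plane setup might look like the following sketch. The texture URL and image dimensions are placeholders, and vertexShader/fragmentShader are assumed to hold the shader sources from this article:

    const planeWidth = visibleWidth / 2;       // half the visible width
    const planeHeight = planeWidth * (9 / 16); // fixed aspect ratio of 9/16

    const texture = new THREE.TextureLoader().load('images/project.jpg'); // placeholder URL
    const textureWidth = 1920;  // assumed pixel dimensions of that image
    const textureHeight = 1080;
    const ratio = new THREE.Vector2(
      Math.min(planeWidth / planeHeight / (textureWidth / textureHeight), 1.0),
      Math.min(planeHeight / planeWidth / (textureHeight / textureWidth), 1.0)
    );

    // 30 segments so the plane can bend smoothly without hurting performance
    const geometry = new THREE.PlaneBufferGeometry(planeWidth, planeHeight, 30, 30);
    const material = new THREE.ShaderMaterial({
      uniforms: {
        texture: { value: texture },
        ratio: { value: ratio }
      },
      vertexShader,  // covered in the next steps
      fragmentShader // the cover-fit shader above
    });
    const plane = new THREE.Mesh(geometry, material);
    scene.add(plane);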

    3. Passing in Other Uniforms

    We have the fragment shader all set up now but there are several more uniforms we will need to pass to the vertex shader:

    1. hover – A float value in the range [0, 1] where 1 means we are hovering over the plane. We will use GSAP to tween the uniform so that we can have a smooth transition into the wavy effect.
    2. intersect – A 2D vector representing the uv coordinates of the texture that we are hovering over. To get this value, we first need to store the user’s mouse position as normalized device coordinates in the range [-1, 1] and then raycast the mouse position against our plane. The Three.js docs on raycasting include all the code we need to set that up.
    3. time – A continuously changing float value that we update every frame in the requestAnimationFrame loop. The wavy animation is just a sine wave, so we need to pass in a dynamic time parameter to make it move. Also, to save on potentially large computations, we wrap the value of this uniform to [0, 1) by setting it like: time = (time + 0.05) % 1 (where 0.05 is an arbitrary increment value).
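
    Declared together, and reusing the texture and ratio uniforms from step 2, the uniforms object might look like this sketch:

    const uniforms = {
      texture: { value: texture },
      ratio: { value: ratio },
      hover: { value: 0.0 },                     // tweened toward 1 with GSAP on hover
      intersect: { value: new THREE.Vector2() }, // uv point under the cursor, from the Raycaster
      time: { value: 0.0 }                       // wrapped to [0, 1) every frame
    };

    // inside the requestAnimationFrame loop:
    uniforms.time.value = (uniforms.time.value + 0.05) % 1;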

    4. Handling Mouse Events

    As linked above, the THREE.js Raycaster docs give us a good outline of how to handle mouse events. We will add an additional function, updateIntersected, in the mousemove event listener with logic to start our wave effect and small micro animations like scaling and translating the plane.

    Again, we are using the GreenSock library to tween values: specifically the TweenMax object, which tweens a single target, and the TimelineMax object, which can chain multiple tweens.

    The Raycaster intersectObject function returns an array of intersections. In our case we have just one plane to check, so as long as the array is non-empty we know we are hovering over our plane. Our logic then has two parts:

    1. If we are hovering over the plane, set the intersect uniform to the uv coordinates we get from the Raycaster and translate the plane in the direction of the mouse (since normalized device coordinates are relative to the center of the screen, it’s very easy to translate the plane by just setting the x and y to our mouse coordinates). Then, if it’s the first time we’re hovering over the plane (we track this using a global variable), tween the hover uniform to 1 and scale the plane up a bit.
    2. If there is no intersection, we reset the uniforms, scale and position of the plane.
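
    A sketch of that logic in code; raycaster and mouse are set up as in the Three.js raycasting docs, and the tween targets and durations are illustrative:

    let isHovered = false; // the global flag tracking the first hover

    function updateIntersected() {
      raycaster.setFromCamera(mouse, camera);
      const intersects = raycaster.intersectObject(plane);

      if (intersects.length > 0) {
        // 1. follow the mouse and feed the uv point under the cursor to the shader
        uniforms.intersect.value.copy(intersects[0].uv);
        TweenMax.to(plane.position, 0.5, { x: mouse.x, y: mouse.y });

        if (!isHovered) {
          isHovered = true;
          // first hover: ease the wave in and scale the plane up a bit
          TweenMax.to(uniforms.hover, 0.5, { value: 1 });
          TweenMax.to(plane.scale, 0.5, { x: 1.05, y: 1.05 });
        }
      } else if (isHovered) {
        // 2. no intersection: reset the uniforms, scale and position
        isHovered = false;
        TweenMax.to(uniforms.hover, 0.5, { value: 0 });
        TweenMax.to(plane.scale, 0.5, { x: 1, y: 1 });
        TweenMax.to(plane.position, 0.5, { x: 0, y: 0 });
      }
    }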

    5. Creating the Wave Effect

    The wave effect consists of two things going on in the shader:

    1. Applying a sine wave to the z coordinates of the plane’s vertices. We can incorporate the classic sine wave function y = A sin(B(x + C)) + D into our own shader like so:

    float _wave = hover * A * sin(B * (position.x + position.y + time));

    A is the wave’s amplitude and B is a speed factor that increases the frequency. By feeding position.x + position.y + time into the sine, we make the wave dependent on the vertex’s x & y coordinates and the constantly changing time uniform, creating a very dynamic effect. We also multiply everything by our hover uniform so that when we tween that value, the wave effect eases in. The final result is a displacement that we apply to our plane’s z position.

    2. Restricting the wave effect to a certain radius around the mouse

    Since we already pass in the mouse location as the intersect uniform, we can calculate whether the mouse is in a given hoverRadius by doing:

    float _dist = length(uv - intersect);
    float _inCircle = 1. - (clamp(_dist, 0., hoverRadius) / hoverRadius);
    float _distort = _inCircle * _wave;
    

    The _inCircle variable ranges from 0 to 1, where 1 means the current pixel is at the center of the mouse. We multiply the wave value by it, so we get a nice tapering of the waviness at the edge of the radius.
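
    Combined in the vertex shader, the whole effect might read like this sketch, where A, B and hoverRadius are constants you would tune:

    uniform float hover;
    uniform float time;
    uniform vec2 intersect;
    varying vec2 vUv;

    const float A = 0.3;           // amplitude (illustrative)
    const float B = 4.0;           // speed factor (illustrative)
    const float hoverRadius = 0.4; // illustrative

    void main() {
      vUv = uv;
      float _wave = hover * A * sin(B * (position.x + position.y + time));
      float _dist = length(uv - intersect);
      float _inCircle = 1. - (clamp(_dist, 0., hoverRadius) / hoverRadius);
      float _distort = _inCircle * _wave;

      vec3 newPosition = position;
      newPosition.z += _distort;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
    }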

    Experiment with different values for amplitude, speed and radius to see how they affect the hover effect.

    Tech Stack

    • React – readable component hierarchy, easy to use but very hard to handle page transitions and page load animations
    • DigitalOcean / Node.js – Linux machine to handle subdomains, rather than using static Github Pages
    • Contentful – very friendly CMS that is API only, comes with image formatting and other neat features
    • GSAP / Three.js – GSAP is state of the art for animations as it comes with so many optimizations for performance; Three.js on the other hand is a 500kb library and if I were to do things differently I would try to just use plain WebGL to save space

    Case Study: Chang Liu Portfolio V4 was written by Chang Liu and published on Codrops.

    How to Create an Interactive 3D Character with Three.js

    Ever had a personal website dedicated to your work and wondered if you should include a photo of yourself in there somewhere? I recently figured I’d go a couple of steps further and added a fully interactive 3D version of myself that watched the user’s cursor as they navigated around the screen. And as if that wasn’t enough, you could even click on me and I’d do stuff. This tutorial shows you how to do the same with a model we chose, named Stacy.

    Here’s the demo (click on Stacy, and move your mouse around the Pen to watch her follow it).

    We’re going to use Three.js, and I’m going to assume you have a handle on JavaScript.

    See the Pen
    Character Tutorial – Final
    by Kyle Wetton (@kylewetton)
    on CodePen.

    The model we use has ten animations loaded into it; at the bottom of this tutorial, I’ll explain how it’s set up. This is done in Blender and the animations are from Adobe’s free animation repo, Mixamo.

    Part 1: HTML and CSS Project Starter

    Let’s get the small amount of HTML and CSS out of the way. This pen has everything you need. Follow along by forking this pen, or copy the HTML and CSS from here into a blank project elsewhere.

    See the Pen
    Character Tutorial – Blank
    by Kyle Wetton (@kylewetton)
    on CodePen.

    Our HTML consists of a loading animation (currently commented out until we need it), a wrapper div and our all-important canvas element. The canvas is what Three.js uses to render our scene, and the CSS sets this at 100% viewport size. We also load in two dependencies at the bottom of our HTML file: Three.js, and GLTFLoader (GLTF is the format that our 3D model is imported as). Both of these dependencies are available as npm modules.

    The CSS also consists of a small amount of centering styling and the rest is just the loading animation; really nothing more to it than that. You can now collapse your HTML and CSS panels; we will delve into them very little for the rest of the tutorial.

    Part 2: Building our Scene

    In my last tutorial, I found myself making you run up and down your file adding variables at the top that needed to be shared in a few different places. This time I’m going to give all of these to you upfront, and I’ll let you know when we use them. I’ve included explanations of what each one is, if you’re curious. So, our project starts like this. In your JavaScript add these variables. Note that because there is a bit at work here that would otherwise be in global scope, we’re wrapping our entire project in a function:

    (function() {
    // Set our main variables
    let scene,  
      renderer,
      camera,
      model,                              // Our character
      neck,                               // Reference to the neck bone in the skeleton
      waist,                               // Reference to the waist bone in the skeleton
      possibleAnims,                      // Animations found in our file
      mixer,                              // THREE.js animations mixer
      idle,                               // Idle, the default state our character returns to
      clock = new THREE.Clock(),          // Used for anims, which run to a clock instead of frame rate 
      currentlyAnimating = false,         // Used to check whether character's neck is being used in another anim
      raycaster = new THREE.Raycaster(),  // Used to detect the click on our character
      loaderAnim = document.getElementById('js-loader');
    
    })(); // Don't add anything below this line

    We’re going to set up Three.js. This consists of a scene, a renderer, a camera, lights, and an update function. The update function runs on every frame.

    Let’s do all this inside an init() function. Under our variables, and inside our function scope, we add our init function:

    init(); 
    
    function init() {
    
    }

    Inside our init function, let’s reference our canvas element and set our background color; I’ve gone for a very light grey for this tutorial. Note that Three.js doesn’t reference colors as a string like “#f1f1f1”, but rather as a hexadecimal integer like 0xf1f1f1.

    const canvas = document.querySelector('#c');
    const backgroundColor = 0xf1f1f1;

    Below that, let’s create a new Scene. Here we set the background color, and we’re also going to add some fog. This isn’t that visible in this tutorial, but if your floor and background color are different, it can come in handy to blur those together.

    // Init the scene
    scene = new THREE.Scene();
    scene.background = new THREE.Color(backgroundColor);
    scene.fog = new THREE.Fog(backgroundColor, 60, 100);

    Next up is the renderer. We create a new renderer and pass an object with the canvas reference and other options; the only constructor option we’re using here is enabling antialiasing. We enable shadowMap so that our character can cast a shadow, and we set the pixel ratio to that of the device so that mobile devices render correctly; the canvas would otherwise display pixelated on high-density screens. Finally, we add our renderer to our document body.

    // Init the renderer
    renderer = new THREE.WebGLRenderer({ canvas, antialias: true });
    renderer.shadowMap.enabled = true;
    renderer.setPixelRatio(window.devicePixelRatio);
    document.body.appendChild(renderer.domElement);

    That covers the first two things that Three.js needs. Next up is a camera. Let’s create a new perspective camera. We set the field of view to 50, the size to that of the window, and near and far clipping planes of 0.1 and 1000. After that, we position the camera 30 units back and 3 units down. This will become more obvious later. All of this can be experimented with, but I recommend using these settings for now.

    // Add a camera
    camera = new THREE.PerspectiveCamera(
      50,
      window.innerWidth / window.innerHeight,
      0.1,
      1000
    );
    camera.position.z = 30 
    camera.position.x = 0;
    camera.position.y = -3;

    Note that scene, renderer and camera are initially referenced at the top of our project.

    Without lights our camera has nothing to display. We’re going to create two lights, a hemisphere light, and a directional light. We then add them to the scene using scene.add(light).

    Let’s add our lights under the camera. I’ll explain a bit more about what we’re doing afterwards:

    // Add lights
    let hemiLight = new THREE.HemisphereLight(0xffffff, 0xffffff, 0.61);
    hemiLight.position.set(0, 50, 0);
    // Add hemisphere light to scene
    scene.add(hemiLight);
    
    let d = 8.25;
    let dirLight = new THREE.DirectionalLight(0xffffff, 0.54);
    dirLight.position.set(-8, 12, 8);
    dirLight.castShadow = true;
    dirLight.shadow.mapSize = new THREE.Vector2(1024, 1024);
    dirLight.shadow.camera.near = 0.1;
    dirLight.shadow.camera.far = 1500;
    dirLight.shadow.camera.left = d * -1;
    dirLight.shadow.camera.right = d;
    dirLight.shadow.camera.top = d;
    dirLight.shadow.camera.bottom = d * -1;
    // Add directional Light to scene
    scene.add(dirLight);

    The hemisphere light is just casting white light, and its intensity is at 0.61. We also set its position 50 units above our center point; feel free to experiment with this later.

    Our directional light needs a position set; the one I’ve chosen feels right, so let’s start with that. We enable the ability to cast a shadow, and set the shadow resolution. The rest of the shadow settings relate to the light’s view of the world; this gets a bit vague to me, but it’s enough to know that the variable d can be adjusted until your shadows aren’t clipping in strange places.

    While we’re here in our init function, lets add our floor:

    // Floor
    let floorGeometry = new THREE.PlaneGeometry(5000, 5000, 1, 1);
    let floorMaterial = new THREE.MeshPhongMaterial({
      color: 0xeeeeee,
      shininess: 0,
    });
    
    let floor = new THREE.Mesh(floorGeometry, floorMaterial);
    floor.rotation.x = -0.5 * Math.PI; // This is 90 degrees by the way
    floor.receiveShadow = true;
    floor.position.y = -11;
    scene.add(floor);

    What we’re doing here is creating a new plane geometry, which is big: it’s 5000 units (for no particular reason at all other than it really ensures our seamless background).

    We then create a material for our scene. This is new. We only have a couple different materials in this tutorial, but it’s enough to know for now that you combine geometry and materials into a mesh, and this mesh is a 3D object in our scene. The mesh we’re making now is a really big, flat plane rotated to be flat on the ground (well, it is the ground). Its color is set to 0xeeeeee which is slightly darker than our background. Why? Because our lights shine on this floor, but our lights don’t affect the background. This is a color I manually tweaked in to give us the seamless scene. Play around with it once we’re done.

    Our floor is a Mesh which combines the Geometry and Material. Read through what we just added; I think you’ll find everything is self-explanatory. We’re moving our floor down 11 units; this will make sense once we load in our character.

    That’s it for our init() function for now.

    One crucial aspect that Three.js relies on is an update function, which runs every frame, similar to how game engines work if you’ve ever dabbled with Unity. This function needs to be placed after our init() function instead of inside it. Inside our update function the renderer renders the scene and camera, and the update is scheduled again. Note that we immediately call the function right after defining it.

    function update() {
      renderer.render(scene, camera);
      requestAnimationFrame(update);
    }
    update();

    Our scene should now turn on. The canvas is rendering a light grey; what we’re actually seeing here is both the background and the floor. You can test this out by changing the floor material’s color to 0xff0000. Remember to change it back though!

    We’re going to load the model in the next part. Before we do though, there is one more thing our scene needs. The canvas as an HTML element will resize just fine the way it is; its height and width are set to 100% in CSS. But the scene needs to be aware of resizes too so that it can keep everything in proportion. Below where we call our update function (not inside it), add this function. Read it carefully if you’d like, but essentially it constantly checks whether our renderer is the same size as our canvas; as soon as it’s not, it returns needResize as a truthy boolean.

    function resizeRendererToDisplaySize(renderer) {
      const canvas = renderer.domElement;
      let width = window.innerWidth;
      let height = window.innerHeight;
      let canvasPixelWidth = canvas.width / window.devicePixelRatio;
      let canvasPixelHeight = canvas.height / window.devicePixelRatio;
    
      const needResize =
        canvasPixelWidth !== width || canvasPixelHeight !== height;
      if (needResize) {
        renderer.setSize(width, height, false);
      }
      return needResize;
    }

    We’re going to use this inside our update function. Find these lines:

    renderer.render(scene, camera);
    requestAnimationFrame(update);

    ABOVE these lines, we’re going to check if we need a resize by calling our function, and update the camera’s aspect ratio to match the new size.

    if (resizeRendererToDisplaySize(renderer)) {
      const canvas = renderer.domElement;
      camera.aspect = canvas.clientWidth / canvas.clientHeight;
      camera.updateProjectionMatrix();
    }

    Our full update function should now look like this:

    function update() {
    
      if (resizeRendererToDisplaySize(renderer)) {
        const canvas = renderer.domElement;
        camera.aspect = canvas.clientWidth / canvas.clientHeight;
        camera.updateProjectionMatrix();
      }
      renderer.render(scene, camera);
      requestAnimationFrame(update);
    }
    
    update();
    
    function resizeRendererToDisplaySize(renderer) { ... }

    Here’s our project in its entirety so far. Next up we’re going to load the model.

    See the Pen
    Character Tutorial – Round 1
    by Kyle Wetton (@kylewetton)
    on CodePen.

    Part 3: Adding the Model

    Our scene is super sparse, but it’s set up and we’ve got our resizing sorted, our lights and camera are working. Let’s add the model.

    Right at the top of our init() function, before we reference our canvas, let’s reference the model file. This is in the glTF format (.glb); Three.js supports a range of 3D model formats, but this is the format it recommends. We’re going to use our GLTFLoader dependency to load this model into our scene.

    const MODEL_PATH = 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/stacy_lightweight.glb';

    Still inside our init() function, below our camera setup, let’s create a new loader:

    var loader = new THREE.GLTFLoader();

    This loader uses a method called load. It takes four arguments: the model path, a function to call once the model is loaded, a function to call during the loading, and a function to catch errors.

    Let’s add this now:

    var loader = new THREE.GLTFLoader();
    
    loader.load(
      MODEL_PATH,
      function(gltf) {
       // A lot is going to happen here
      },
      undefined, // We don't need this function
      function(error) {
        console.error(error);
      }
    );

    Notice the comment “A lot is going to happen here”, this is the function that runs once our model is loaded. Everything going forward is added inside this function unless I mention otherwise.

    The GLTF file itself (passed into the function as the variable gltf) has two parts to it, the scene inside the file (gltf.scene), and the animations (gltf.animations). Let’s reference both of these at the top of this function, and then add the model to the scene:

    model = gltf.scene;
    let fileAnimations = gltf.animations;
    
    scene.add(model);

    Our full loader.load function so far looks like this:

    loader.load(
      MODEL_PATH,
      function(gltf) {
      // A lot is going to happen here
        model = gltf.scene;
        let fileAnimations = gltf.animations;
    
        scene.add(model);
        
      },
      undefined, // We don't need this function
      function(error) {
        console.error(error);
      }
    );

    Note that model is already initialized at the top of our project.

    You should now see a small figure in our scene.

    A couple of things here:

    • Our model is really small. 3D models are like vectors: you can scale them without any loss of definition. Mixamo outputs the model tiny, so we will need to scale it up.
    • You can include textures inside a GLTF model. There are a couple of reasons why I didn’t: the first is that decoupling them allows for smaller file sizes when hosting the assets; the other is to do with color space, which I cover more in the section at the bottom of this tutorial on how to set 3D models up.

    We added our model prematurely, so above scene.add(model), let’s do a couple more things.

    First of all, we’re going to use the model’s traverse method to find all the meshes and enable the ability to cast and receive shadows. Again, this should go above scene.add(model):

    model.traverse(o => {
      if (o.isMesh) {
        o.castShadow = true;
        o.receiveShadow = true;
      }
    });

    Then, we’re going to set the model’s scale to a uniform 7× its initial size. Add this below our traverse method:

    // Set the models initial scale
    model.scale.set(7, 7, 7);

    And finally, let’s move the model down by 11 units so that it’s standing on the floor.

    model.position.y = -11;

    Perfect, we’ve loaded in our model. Let’s now load in the texture and apply it. The model came with a texture, and it has been mapped to this texture in Blender. This process is called UV mapping. Feel free to download the image itself to look at it, and learn more about UV mapping if you’d like to explore the idea of making your own character.

    We referenced the loader earlier; let’s create a new texture and material above this reference:

    let stacy_txt = new THREE.TextureLoader().load('https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/stacy.jpg');
    
    stacy_txt.flipY = false; // we flip the texture so that it's the right way up
    
    const stacy_mtl = new THREE.MeshPhongMaterial({
      map: stacy_txt,
      color: 0xffffff,
      skinning: true
    });
    
    // We've created this earlier
    var loader = new THREE.GLTFLoader();

    Let’s look at this for a second. Our texture can’t just be a URL to an image; it needs to be loaded in as a new texture using TextureLoader. We set this to a variable called stacy_txt.

    We’ve used materials before. One was placed on our floor with the color 0xeeeeee; we’re using a couple of new options here for our model’s material. Firstly, we’re passing the stacy_txt texture to the map property. Secondly, we’re turning skinning on; this is critical for animated models. We reference this material with stacy_mtl.

    Okay, so we’ve got our textured material. Our file’s scene (gltf.scene) only has one object, so, in our traverse method, let’s add one more line under the lines that enabled our object to cast and receive shadows:

    model.traverse(o => {
     if (o.isMesh) {
       o.castShadow = true;
       o.receiveShadow = true;
       o.material = stacy_mtl; // Add this line
     }
    });

    Just like that, our model has become the fully realized character, Stacy.

    She’s a little lifeless though. The next section will deal with animations, but now that you’ve handled geometry and materials, let’s use what we’ve learned to make the scene a little more interesting. Scroll down to where you added your floor; I’ll meet you there.

    Below your floor, as the final lines of your init() function, let’s add a circle accent. This is really a 3D sphere, quite big but far away, that uses a BasicMaterial. The materials we’ve used previously are PhongMaterials, which can be shiny and, most importantly, can receive and cast shadows. A BasicMaterial, however, cannot. So, add this sphere to your scene to create a flat circle that frames Stacy better.

    let geometry = new THREE.SphereGeometry(8, 32, 32);
    let material = new THREE.MeshBasicMaterial({ color: 0x9bffaf }); // 0xf2ce2e 
    let sphere = new THREE.Mesh(geometry, material);
    sphere.position.z = -15;
    sphere.position.y = -2.5;
    sphere.position.x = -0.25;
    scene.add(sphere);

    Change the color to whatever you want!

    Part 4: Animating Stacy

    Before we get started, you may have noticed that Stacy takes a while to load. This can cause confusion because before she loads, all we see is a colored dot in the middle of the page. I mentioned that in our HTML we had a loader that was commented out. Head to the HTML and uncomment this markup.

    <!-- The loading element overlays everything else until the model is loaded, at which point we remove this element from the DOM -->  
    <div class="loading" id="js-loader"><div class="loader"></div></div>

    Then again in our loader function, once the model has been added into the scene with scene.add(model), add this line below it. loaderAnim has already been referenced at the top of our project.

    loaderAnim.remove();

    All we’re doing here is removing the loading animation overlay once Stacy has been added to the scene. Save and refresh; you should see the loader until the page is ready to show Stacy. If the model is cached, the page might load too quickly to see it.

    Anyway, onto animating!

    We’re still in our loader function. We’re going to create a new AnimationMixer; an AnimationMixer is a player for animations on a particular object in the scene. Some of this might look foreign and is potentially outside the scope of this tutorial, but if you’d like to know more, check out the Three.js docs page on the AnimationMixer. You won’t need to know more than what we handle here to complete the tutorial.

    Add this below the line that removes the loader, and pass in our model:

    mixer = new THREE.AnimationMixer(model);

    Note that mixer is referenced at the top of our project.

    Below this line, we’re going to create a new AnimationClip. We’re looking inside our fileAnimations for an animation called ‘idle’. This name was set inside Blender.

    let idleAnim = THREE.AnimationClip.findByName(fileAnimations, 'idle');

    We then use a method in our mixer called clipAction, and pass in our idleAnim. We call this clipAction idle.

    Finally, we tell idle to play:

    idle = mixer.clipAction(idleAnim);
    idle.play();

    It’s not going to play yet though; we need one more thing. The mixer needs to be updated in order for it to run continuously through an animation. To do this, we need to tell it to update inside our update() function. Add this right at the top, above our resizing check:

    if (mixer) {
      mixer.update(clock.getDelta());
    }
    

    The update takes the delta from our clock (a Clock was referenced at the top of our project) and advances the mixer by it. This is so that animations don’t slow down if the frame rate slows down. If you run an animation to a frame rate, it’s tied to the frames to determine how fast or slow it runs; that’s not what you want.

    Stacy should now be happily swaying from side to side! Great job! This is only one of ten animations loaded inside our model file though. Soon we will pick a random animation to play when you click on Stacy, but next up, let’s make our model even more alive by having her head and body point toward our cursor.

    Part 5: Looking at our Cursor

    If you don’t know much about 3D (or even 2D animation in most cases), the way it works is that there is a skeleton (an array of bones) that warps the mesh. These bones’ position, scale and rotation are animated across time to warp and move our mesh in interesting ways. We’re going to hook into Stacy’s skeleton (eek!) and reference her neck bone and her bottom spine bone. We’re then going to rotate these bones depending on where the cursor is relative to the middle of the screen. For us to be able to do this, though, we need to tell our current idle animation to ignore these two bones. Let’s get started.

    Remember that part in our model traverse method where we said if (o.isMesh) { … set shadows … }? In this traverse method you can also check o.isBone. I console logged all the bones and found the neck and spine bones, and their names. If you’re making your own character, you’ll want to do this to find the exact name string of your bone. Have a look (but don’t add this to our project):

    model.traverse(o => {
      if (o.isBone) {
        console.log(o.name);
      }
      if (o.isMesh) {
        o.castShadow = true;
        o.receiveShadow = true;
        o.material = stacy_mtl;
      }
    });

    I got an output of a lot of bones, but the ones I was trying to find were these (this is pasted from my console):

    ...
    ...
    mixamorigSpine
    ...
    mixamorigNeck
    ...
    ...

    So now we know the names of our spine (from here on out referred to as the waist) and our neck.

    In our model traverse, let’s add these bones to our neck and waist variables which have already been referenced at the top of our project.

    model.traverse(o => {
      if (o.isMesh) {
        o.castShadow = true;
        o.receiveShadow = true;
        o.material = stacy_mtl;
      }
      // Reference the neck and waist bones
      if (o.isBone && o.name === 'mixamorigNeck') { 
        neck = o;
      }
      if (o.isBone && o.name === 'mixamorigSpine') { 
        waist = o;
      }
    });

    Now for a little bit more investigative work. We created an AnimationClip called idleAnim which we then sent to our mixer to play. We want to snip the neck and spine tracks out of this animation, or else our idle animation will overwrite any manipulation we try to apply manually to our model.

    So the first thing I did was console log idleAnim. It’s an object with a property called tracks. The value of tracks is an array of 156 values; every 3 values represent the animation of a single bone, namely the position, quaternion (rotation) and scale of that bone. So the first three values are the hips’ position, rotation and scale.

    What I was looking for though was this (pasted from my console):

    3: ad {name: "mixamorigSpine.position", ...
    4: ke {name: "mixamorigSpine.quaternion", ...
    5: ad {name: "mixamorigSpine.scale", ...

    …and this:

    12: ad {name: "mixamorigNeck.position", ...
    13: ke {name: "mixamorigNeck.quaternion", ...
    14: ad {name: "mixamorigNeck.scale", ...

    So inside our animation, I want to splice the tracks array to remove 3,4,5 and 12,13,14.

    However, once I splice 3, 4, 5, the neck tracks shift down from 12, 13, 14 to 9, 10, 11. Something to keep in mind.

    Let’s do this now. Below where we reference idleAnim inside our loader function, add these lines:

    let idleAnim = THREE.AnimationClip.findByName(fileAnimations, 'idle');
    
    // Add these:
    idleAnim.tracks.splice(3, 3);
    idleAnim.tracks.splice(9, 3);
    

    We’re going to do this to all animations later on. This means that regardless of what she’s doing, you still have some control over her waist and neck, letting you modify animations in interesting ways in real time (yes, I did make my character play air guitar, and yes I did spend 3 hours making him head bang with my mouse while the animation ran).
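
    As a sketch of what that could look like (hypothetical code following the same splice reasoning; possibleAnims and mixer were referenced at the top of our project):

    possibleAnims = fileAnimations
      .filter(clip => clip.name !== 'idle')
      .map(clip => {
        clip.tracks.splice(3, 3); // remove the mixamorigSpine tracks
        clip.tracks.splice(9, 3); // remove the mixamorigNeck tracks (indices shifted by the first splice)
        return mixer.clipAction(clip);
      });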

    Right at the bottom of our project, let’s add an event listener, along with a function that returns our mouse position whenever it’s moved.

    document.addEventListener('mousemove', function(e) {
      var mousecoords = getMousePos(e);
    });
    
    function getMousePos(e) {
      return { x: e.clientX, y: e.clientY };
    }

    Below this, we’re going to create a new function called moveJoint. I’ll walk us through everything that these functions do.

    function moveJoint(mouse, joint, degreeLimit) {
      let degrees = getMouseDegrees(mouse.x, mouse.y, degreeLimit);
      joint.rotation.y = THREE.Math.degToRad(degrees.x);
      joint.rotation.x = THREE.Math.degToRad(degrees.y);
    }

The moveJoint function takes three arguments: the current mouse position, the joint we want to move, and the limit (in degrees) that the joint is allowed to rotate. This limit is called degreeLimit; remember it, as I'll talk about it soon.

Inside moveJoint we have a variable called degrees; the degrees come from a function called getMouseDegrees, which returns an object of {x, y}. We then use these degrees to rotate the joint on the x axis and the y axis.

    Before we add getMouseDegrees, I want to explain what it does.

    getMouseDegrees does this: It checks the top half of the screen, the bottom half of the screen, the left half of the screen, and the right half of the screen. It determines where the mouse is on the screen in a percentage between the middle and each edge of the screen.

For instance, if the mouse is halfway between the middle of the screen and the right edge, the function determines that right = 50%; if the mouse is a quarter of the way up from the center, the function determines that up = 25%.

Once the function has these percentages, it returns the corresponding percentage of the degreeLimit.

So the function can determine your mouse is 75% right and 50% up, and return 75% of the degree limit on the x axis and 50% of the degree limit on the y axis. The same goes for left and down.

    Here’s a visual:

    rotation_explanation

    I wanted to explain that because the function looks pretty complicated, and I won’t bore you with each line, but I have commented every step of the way for you to investigate it more if you want.

    Add this function to the bottom of your project:

    function getMouseDegrees(x, y, degreeLimit) {
      let dx = 0,
          dy = 0,
          xdiff,
          xPercentage,
          ydiff,
          yPercentage;
    
      let w = { x: window.innerWidth, y: window.innerHeight };
    
  // Left (Rotates neck left between 0 and -degreeLimit)
  // 1. If cursor is in the left half of screen
  if (x <= w.x / 2) {
    // 2. Get the difference between middle of screen and cursor position
    xdiff = w.x / 2 - x;
    // 3. Find the percentage of that difference (percentage toward edge of screen)
    xPercentage = (xdiff / (w.x / 2)) * 100;
    // 4. Convert that to a percentage of the maximum rotation we allow for the neck
    dx = ((degreeLimit * xPercentage) / 100) * -1;
  }

  // Right (Rotates neck right between 0 and degreeLimit)
  if (x >= w.x / 2) {
    xdiff = x - w.x / 2;
    xPercentage = (xdiff / (w.x / 2)) * 100;
    dx = (degreeLimit * xPercentage) / 100;
  }

  // Up (Rotates neck up between 0 and -degreeLimit)
  if (y <= w.y / 2) {
    ydiff = w.y / 2 - y;
    yPercentage = (ydiff / (w.y / 2)) * 100;
    // Note that I cut degreeLimit in half when she looks up
    dy = (((degreeLimit * 0.5) * yPercentage) / 100) * -1;
  }
      
      // Down (Rotates neck down between 0 and degreeLimit)
      if (y >= w.y / 2) {
        ydiff = y - w.y / 2;
        yPercentage = (ydiff / (w.y / 2)) * 100;
        dy = (degreeLimit * yPercentage) / 100;
      }
      return { x: dx, y: dy };
    }

    Once we have that function, we can now use moveJoint. We’re going to use it for the neck with a 50 degree limit, and for the waist with a 30 degree limit.

    Update our mousemove event listener to include these moveJoints:

document.addEventListener('mousemove', function(e) {
  var mousecoords = getMousePos(e);
  if (neck && waist) {
    moveJoint(mousecoords, neck, 50);
    moveJoint(mousecoords, waist, 30);
  }
});

Just like that, move your mouse around the viewport and Stacy should watch your cursor wherever you go! Notice how the idle animation is still running, but because we snipped out the neck and spine tracks (yuck), we're able to control those independently.

    This may not be the most scientifically accurate way of doing it, but it certainly looks convincing enough to create the effect we’re after. Here’s our progress so far, dig into this pen if you feel you’ve missed something or you’re not getting the same effect.

    See the Pen
    Character Tutorial – Round 2
    by Kyle Wetton (@kylewetton)
    on CodePen.

    Part 6: Tapping into the rest of the animations

    As I mentioned earlier, Stacy actually has 10 animations loaded into the file, and we’ve only used one of them. Let’s head back to our loader function and find this line.

    mixer = new THREE.AnimationMixer(model);

    Below this line, we’re going to get a list of AnimationClips that aren’t idle (we don’t want to randomly select idle as one of the options when we click on Stacy). We do that like so:

    let clips = fileAnimations.filter(val => val.name !== 'idle');

Now below that, we're going to convert all of those clips into Three.js clipActions, the same way we did for idle. We're also going to splice the neck and spine tracks out of each clip, and add all of the resulting clipActions to a variable called possibleAnims, which is already referenced at the top of our project.

possibleAnims = clips.map(val => {
  let clip = THREE.AnimationClip.findByName(clips, val.name);
  clip.tracks.splice(3, 3);
  clip.tracks.splice(9, 3);
  clip = mixer.clipAction(clip);
  return clip;
});

    We now have an array of clipActions we can play when we click Stacy. The trick here though is that we can’t add a simple click event listener on Stacy, as she isn’t part of our DOM. We are instead going to use raycasting, which essentially means shooting a laser beam in a direction and returning the objects that it hit. In this case we’re shooting from our camera in the direction of our cursor.

    Let’s add this above our mousemove event listener:

    // We will add raycasting here
document.addEventListener('mousemove', function(e) {...});

    So paste this function in that spot, and I’ll explain what it does:

    window.addEventListener('click', e => raycast(e));
    window.addEventListener('touchend', e => raycast(e, true));
    
    function raycast(e, touch = false) {
      var mouse = {};
      if (touch) {
        mouse.x = 2 * (e.changedTouches[0].clientX / window.innerWidth) - 1;
        mouse.y = 1 - 2 * (e.changedTouches[0].clientY / window.innerHeight);
      } else {
        mouse.x = 2 * (e.clientX / window.innerWidth) - 1;
        mouse.y = 1 - 2 * (e.clientY / window.innerHeight);
      }
      // update the picking ray with the camera and mouse position
      raycaster.setFromCamera(mouse, camera);
    
      // calculate objects intersecting the picking ray
      var intersects = raycaster.intersectObjects(scene.children, true);
    
      if (intersects[0]) {
        var object = intersects[0].object;
    
        if (object.name === 'stacy') {
    
          if (!currentlyAnimating) {
            currentlyAnimating = true;
            playOnClick();
          }
        }
      }
    }

    We’re adding two event listeners, one for desktop and one for touch screens. We pass the event to the raycast() function but for touch screens, we’re setting the touch argument as true.

    Inside the raycast() function, we have a variable called mouse. Here we set mouse.x and mouse.y to be changedTouches[0] position if touch is true, or just return the mouse position on desktop.

    Next we call setFromCamera on raycaster, which has already been set up as a new Raycaster at the top of our project, ready to use. This line essentially raycasts from the camera to the mouse position. Remember we’re doing this every time we click, so we’re shooting lasers with a mouse at Stacy (brand new sentence?).

    We then get an array of intersected objects; if there are any, we set the first object that was hit to be our object.

We check that the object's name is 'stacy', and if it is, we run a function called playOnClick(). Note that we are also checking that a variable currentlyAnimating is false before we proceed. We toggle this variable on and off so that we can't start a new animation while one is currently running (other than idle). We turn it back to false at the end of our animation. This variable is referenced at the top of our project.

Okay, so playOnClick. Below our raycasting function, add our playOnClick function.

// Get a random animation, and play it
function playOnClick() {
  let anim = Math.floor(Math.random() * possibleAnims.length);
  playModifierAnimation(idle, 0.25, possibleAnims[anim], 0.25);
}

This simply chooses a random index between 0 and the length of our possibleAnims array, then calls another function named playModifierAnimation. This function takes in idle (we're moving from idle), the speed to blend from idle to a new animation (possibleAnims[anim]), and as the last argument the speed to blend from our animation back to idle. Under our playOnClick function, let's add playModifierAnimation and I'll explain what it's doing.

    function playModifierAnimation(from, fSpeed, to, tSpeed) {
      to.setLoop(THREE.LoopOnce);
      to.reset();
      to.play();
      from.crossFadeTo(to, fSpeed, true);
      setTimeout(function() {
        from.enabled = true;
        to.crossFadeTo(from, tSpeed, true);
        currentlyAnimating = false;
      }, to._clip.duration * 1000 - ((tSpeed + fSpeed) * 1000));
    }

The first thing we do is set the to animation (the one that's about to play) to loop only once, then reset and play it. The reset is needed because once an animation has completed its course (perhaps we played it earlier), it has to be reset before it can play again.

    Each clipAction has a method called crossFadeTo, we use it to fade from (idle) to our new animation using our first speed (fSpeed, or from speed).

    At this point our function has faded from idle to our new animation.

We then set a timeout: inside it we re-enable our from animation (idle), cross fade back to idle, then toggle currentlyAnimating back to false (allowing another click on Stacy). The delay for the setTimeout is the animation's length (* 1000, since the duration is in seconds rather than milliseconds) minus the time it takes to fade to and from that animation (also set in seconds, so * 1000 again). This leaves us with a function that fades from idle, plays an animation and, once it's completed, fades back to idle, allowing another click on Stacy.
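To make that arithmetic concrete, here it is with hypothetical numbers:

// Hypothetical: a 4-second clip with 0.25s fades each way
let duration = 4;                  // to._clip.duration, in seconds
let fSpeed = 0.25, tSpeed = 0.25;
let delay = duration * 1000 - (tSpeed + fSpeed) * 1000; // 4000 - 500 = 3500ms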

    Notice that our neck and spine bones aren’t affected, giving us the ability to still control the way those rotate during the animation!

    That concludes this tutorial, here’s the completed project to reference if you got stuck.

    See the Pen
    Character Tutorial – Final
    by Kyle Wetton (@kylewetton)
    on CodePen.

Before I leave you though, if you're interested in the workings of the model and the animations themselves, I'll cover some of the basics in the final part. I'll leave you to research some of the finer aspects, but this should give you plenty of insight.

    Part 7: Creating the model file (optional)

    You’ll require Blender for this part if you follow along. I recommend Blender 2.8, the latest stable build.

Before I get started, remember I mentioned that although you can include texture files inside your GLTF file (the format you export from Blender), I had issues where Stacy's texture was really dark. It had to do with the fact that GLTF expects the texture in sRGB format, and although I tried to convert it in Photoshop, it still wasn't playing ball. You can't guarantee the type of file you're going to get as a texture, so the way I managed to fix this issue was to export my file without textures, and let Three.js add the texture natively. I recommend doing it this way unless your project is super complicated.

Anyway, here's what I started with in Blender: just a standard mesh of a character in a T-pose. Your character most definitely should be in a T-pose, because Mixamo is going to generate the skeleton for us, and it is expecting this.

    blender-1

    You want to export your model in the FBX format.

    blender-2

    You aren’t going to need the current Blender session any more, but more on that soon.

Head to www.mixamo.com. This site has a bunch of free animations that are used for all sorts of things, and it's commonly browsed by indie game developers. This Adobe service goes hand-in-hand with Adobe Fuse, which is essentially character creator software. Mixamo is free to use, but you will need an Adobe account (by free I mean you won't need a Creative Cloud subscription), so create one and sign in.

    The first thing you want to do is upload your character. This is the FBX file that we exported from Blender. Mixamo will automatically bring up the Auto-Rigger feature once your upload is complete.

    mixamo-3

    Follow the instructions to place the markers on the key areas of your model. Once the auto-rigging is complete, you’ll see a panel with your character animating!

    mixamo-4

Mixamo has now created a skeleton for your model; this is the skeleton we hooked into in this tutorial.

    Click next, and then select the animations tab in the top left. Let’s find an idle animation to start with, use the search bar and type ‘idle’. The one we used in this tutorial is called “Happy idle” if you’re interested.

Clicking on any animation will preview it; explore this site to see some crazy other ones. But an important note: this particular project works best with animations where the feet end up where they began, in a position similar to our idle animation. Because we're cross fading these, it looks most natural when one animation's ending pose is similar to the next animation's starting pose, and vice versa.

    mixamo-5

    Once you’re happy with your idle animation, click Download Character. Your format should be FBX and skin should be set to With Skin. Leave the rest as default. Download this file. Keep Mixamo open.

    Back in Blender, import this file into a new, empty session (remove the light, camera and default cube that comes with a new Blender session).

If you hit the play button, you should see the animation run. (If you don't have a timeline in your session, you can toggle the Editor Type on one of your panels; at this point I recommend an intro to Blender's interface if you get stuck.)

    mixamo-6

At this point you want to rename the animation, so change the Editor Type to Dope Sheet and then select Action Editor as the sub section.

    dope-sheet

Click on the drop down next to + New and select the animation that Mixamo includes in this file. At this point you can rename it in the input field; let's call it 'idle'.

    mixamo-6

Now if we exported this file as a GLTF, there would be an animation called idle in gltf.animations. Remember we have both gltf.animations and gltf.scene in our file.
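As a rough sketch of what that gives us on the Three.js side (assuming the exported file is named stacy.glb, and using the same GLTFLoader as earlier in this tutorial):

new THREE.GLTFLoader().load('stacy.glb', gltf => {
  console.log(gltf.animations.map(a => a.name)); // should list 'idle'
  console.log(gltf.scene);                       // the model hierarchy
});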

    Before we export though, we need to rename our character objects appropriately. My setup looks like this.

    Screen Shot 2019-10-05 at 1.43.18 PM

Note that the bottom child, stacy, is the object name referenced in our JavaScript.

Let's not export yet; instead I'll quickly show you how to add a new animation. Head back to Mixamo; I've selected the Shake Fist animation. Download this file too, still keeping the skin. Others would probably point out that you don't need to keep the skin this time, but I found that my skeleton did weird things when I didn't.

    Let’s import it into Blender.

    blender-5

At this point we've got two Stacys: one called Armature, and the one we want to keep, Stacy. We're going to delete the Armature one, but first we want to move its Shake Fist animation over to Stacy. Let's head back to our Dope Sheet > Action Editor.

You'll see we now have a new animation alongside idle; let's select that, then rename it shakefist.

    blender-6

    blender-7

We want to bring up one last Editor Type. Keep your Dope Sheet > Action Editor open, and in another unused panel (or split the screen to create a new one; again, it helps if you've been through an intro to Blender's UI), set the Editor Type to Nonlinear Animation (NLA).

    blender-9

Click on stacy. Then click on the Push Down button next to the idle animation. We've now added idle as an animation, and created a new track to add our shakefist animation to.

Confusingly, you want to click on stacy's name again before we proceed.

    blender-11

To get shakefist onto that new track, head back to our Action Editor and select shakefist from the drop down.

    blender-12

    Finally, we can use the Push Down button next to shakefist in the NLA editor.

    blender-13

    You should be left with this:

    blender-15

    blender-14

We've transferred the animation from Armature to Stacy; we can now delete Armature.

    blender-15

Annoyingly, Armature will drop its child mesh into the scene; delete this too.

    blender-16

    You can now repeat these steps to add new animations (I promise you it gets less confusing and faster the more you do it).

    I’m going to export my file though:

    blender-17

Here's a pen from this tutorial, except it's using our new model! (Disclosure: Stacy's scale was way different this time, so that's been updated in this pen. I've had no success at all scaling models in Blender once Mixamo has added the skeleton; it's much easier to do it in Three.js after the model is loaded.)

    See the Pen
    Character Tutorial – Remix
    by Kyle Wetton (@kylewetton)
    on CodePen.

    The end!

    How to Create an Interactive 3D Character with Three.js was written by Kyle Wetton and published on Codrops.

    Create Text in Three.js with Three-bmfont-text

There are many ways of displaying text inside a Three.js application: drawing text to a canvas and using it as a texture, importing a 3D model of text, creating text geometry, and using bitmap fonts, or BMFonts. This last one has a bunch of helpful properties for rendering text into a scene.

    Text in WebGL opens many possibilities to create amazing things on the web. A great example is Sorry, Not Sorry by awesome folks at Resn or this refraction experiment by Jesper Vos. Let’s use Three.js with three-bmfont-text to create text in 3D and give it a nice look using shaders.

Three-bmfont-text is a tool created by Matt DesLauriers and Jam3 that renders BMFont files in Three.js, allowing you to batch glyphs into a single geometry. It also supports things like word-wrapping, kerning, and msdf. Please watch Zach Tellman's talk on distance fields; he explains them very well.

    With all that said, let’s begin.

    Attention: This tutorial assumes you have some understanding of Three.js, GLSL shaders and glslify, so we’ll skip things like how to set up a scene and import shaders.

    Getting started

Before everything, we need to load a font file to create the geometry of packed bitmap glyphs that three-bmfont-text provides. Then, we load a texture atlas of the font, which is a collection of all its characters inside a single image. After loading is done, we'll pass the geometry and material to a function that will initialize a Three.js setup. To generate these files, check out this repository.

    const createGeometry = require('three-bmfont-text');
    const loadFont = require('load-bmfont');
    
    loadFont('fonts/Lato.fnt', (err, font) => {
      // Create a geometry of packed bitmap glyphs
      const geometry = createGeometry({
        font,
        text: 'OCEAN'
      });
      
      // Load texture containing font glyphs
      const loader = new THREE.TextureLoader();
      loader.load('fonts/Lato.png', (texture) => {
        // Start and animate renderer
        init(geometry, texture);
        animate();
      });
    });

    Creating the text mesh

    It’s time to create the mesh with the msdf shader three-bmfont-text comes with. This module has a default vertex and fragment shader that forms sharp text. We’ll change them later to produce a wavy effect.

    const MSDFShader = require('three-bmfont-text/shaders/msdf');
    
    function init(geometry, texture) {
      // Create material with msdf shader from three-bmfont-text
      const material = new THREE.RawShaderMaterial(MSDFShader({
        map: texture,
        color: 0x000000, // We'll remove it later when defining the fragment shader
        side: THREE.DoubleSide,
        transparent: true,
        negate: false,
      }));
    
      // Create mesh of text       
      const mesh = new THREE.Mesh(geometry, material);
      mesh.position.set(-80, 0, 0); // Move according to text size
      mesh.rotation.set(Math.PI, 0, 0); // Spin to face correctly
      scene.add(mesh);
    }

    And now the text should appear on screen. Cool, right? You can zoom and rotate with the mouse to see how crisp the text is.

    Let’s make it more interesting in the next step.

    GLSL

    Vertex shader

    To oscillate the text, trigonometry is our best friend. We want to make a sinusoidal movement along the Y and Z axis — up and down, inside and outside the screen. A vertex shader fits the bill for this since it handles the position of the vertices of the mesh. But before this, let’s add the shaders to the material and create a time uniform that will fuel them.

    function init(geometry, texture) {
      // Create material with msdf shader from three-bmfont-text
      const material = new THREE.RawShaderMaterial(MSDFShader({
        vertexShader,
        fragmentShader,
        map: texture,
        side: THREE.DoubleSide,
        transparent: true,
        negate: false,
      }));
    
      // Create time uniform from default uniforms object
      material.uniforms.time = { type: 'f', value: 0.0 };
    }
    
    function animate() {
      requestAnimationFrame(animate);
      render();
    }
    
function render() {
  // Update time uniform each frame
  // (assumes `mesh` and a `clock` created with new THREE.Clock() live in the outer scope from setup)
  mesh.material.uniforms.time.value = clock.getElapsedTime();
  mesh.material.uniformsNeedUpdate = true;

  renderer.render(scene, camera);
}

    Then we’ll pass it to the vertex shader:

    // Variable qualifiers that come with the msdf shader
    attribute vec2 uv;
    attribute vec4 position;
    uniform mat4 projectionMatrix;
    uniform mat4 modelViewMatrix;
    varying vec2 vUv;
    // We passed this one
    uniform float time;
    
    void main() {
      vUv = uv;
    
      vec3 p = vec3(position.x, position.y, position.z);
    
      float frequency1 = 0.035;
      float amplitude1 = 20.0;
      float frequency2 = 0.025;
      float amplitude2 = 70.0;
    
      // Oscillate vertices up/down
      p.y += (sin(p.x * frequency1 + time) * 0.5 + 0.5) * amplitude1;
    
      // Oscillate vertices inside/outside
      p.z += (sin(p.x * frequency2 + time) * 0.5 + 0.5) * amplitude2;
    
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }

Frequency and amplitude are properties of a wave: frequency controls how often the wave oscillates and amplitude controls its "height". Because we are using a sine wave to move the vertices, these properties help control the behavior of the wave. I encourage you to tweak the values and observe the different results.

    Okay, so here is the tidal movement:

    Fragment shader

    For the fragment shader, I thought about just interpolating between two shades of blue – a light and a dark one. Simple as that.

The built-in GLSL function mix helps interpolate between two values. We can use it along with a cosine function remapped from the range -1..1 into 0..1, so it goes back and forth between these values and changes the color of the text: a value of 1 will give a dark blue and 0 a light blue, interpolating the colors in between.
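A quick sanity check of that mapping in plain JavaScript (illustration only):

// cos(t) spans -1..1; multiplying by 0.5 and adding 0.5 remaps it into 0..1
[0, Math.PI / 2, Math.PI].map(t => Math.cos(t) * 0.5 + 0.5); // [1, ~0.5, 0]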

    #ifdef GL_OES_standard_derivatives
    #extension GL_OES_standard_derivatives : enable
    #endif
    
    // Variable qualifiers that come with the shader
    precision highp float;
    uniform float opacity;
    uniform vec3 color;
    uniform sampler2D map;
    varying vec2 vUv;
    // We passed this one
    uniform float time;
    
    // HSL to RGB color conversion module
    #pragma glslify: hsl2rgb = require(glsl-hsl2rgb)
    
    float median(float r, float g, float b) {
      return max(min(r, g), min(max(r, g), b));
    }
    
    void main() {
      // This is the code that comes to produce msdf
      vec3 sample = texture2D(map, vUv).rgb;
      float sigDist = median(sample.r, sample.g, sample.b) - 0.5;
      float alpha = clamp(sigDist/fwidth(sigDist) + 0.5, 0.0, 1.0);
    
      // Colors
      vec3 lightBlue = hsl2rgb(202.0 / 360.0, 1.0, 0.5);
      vec3 navyBlue = hsl2rgb(238.0 / 360.0, 0.47, 0.31);
    
      // Goes from 1.0 to 0.0 and vice versa
      float t = cos(time) * 0.5 + 0.5;
    
      // Interpolate from light to navy blue
      vec3 newColor = mix(lightBlue, navyBlue, t);
    
      gl_FragColor = vec4(newColor, alpha * opacity);
      if (gl_FragColor.a < 0.0001) discard;
    }

    And here it is! The final result:

    Other examples

    There is plenty of stuff one can do with three-bmfont-text. You can make words fall:

    Enter and leave:

    Distortion:

    Water blend:

    Or mess with noise:

I encourage you to explore more and create something that gets you excited, and please share it with me via Twitter or email. You can reach me there, too, if you have any questions, or comment below.

    Hope you learned something new. Cheers!


    Create Text in Three.js with Three-bmfont-text was written by Mario Carrillo and published on Codrops.

    Creating a Water-like Distortion Effect with Three.js

    In this tutorial we’re going to build a water-like effect with a bit of basic math, a canvas, and postprocessing. No fluid simulation, GPGPU, or any of that complicated stuff. We’re going to draw pretty circles in a canvas, and distort the scene with the result.

We recommend that you get familiar with the basics of Three.js because we'll omit some of the setup. But don't worry: most of the tutorial deals with good old JavaScript and the canvas API, so feel free to follow along even if you don't feel too confident with the Three.js parts.

    The effect is divided into two main parts:

    1. Capturing and drawing the ripples to a canvas
    2. Displacing the rendered scene with postprocessing

    Let’s start with updating and drawing the ripples since that’s what constitutes the core of the effect.

    Making the ripples

    The first idea that comes to mind is to use the current mouse position as a uniform and then simply displace the scene and call it a day. But that would mean only having one ripple that always remains at the mouse’s position. We want something more interesting, so we want many independent ripples moving at different positions. For that we’ll need to keep track of each one of them.

    We’re going to create a WaterTexture class to manage everything related to the ripples:

    1. Capture every mouse movement as a new ripple in an array.
    2. Draw the ripples to a canvas
    3. Erase the ripples when their lifespan is over
    4. Move the ripples using their initial momentum

    For now, let’s begin coding by creating our main App class.

    import { WaterTexture } from './WaterTexture';
class App {
    constructor() {
        this.waterTexture = new WaterTexture({ debug: true });

        this.tick = this.tick.bind(this);
        this.init();
    }
    init() {
        this.tick();
    }
    tick() {
        this.waterTexture.update();
        requestAnimationFrame(this.tick);
    }
}
const myApp = new App();

    Let’s create our ripple manager WaterTexture with a teeny-tiny 64px canvas.

export class WaterTexture {
  constructor(options) {
    this.size = 64;
    this.radius = this.size * 0.1;
    this.width = this.height = this.size;
    if (options.debug) {
      this.width = window.innerWidth;
      this.height = window.innerHeight;
      this.radius = this.width * 0.05;
    }

    this.initTexture();
    if (options.debug) document.body.append(this.canvas);
  }
  // Initialize our canvas
  initTexture() {
    this.canvas = document.createElement("canvas");
    this.canvas.id = "WaterTexture";
    this.canvas.width = this.width;
    this.canvas.height = this.height;
    this.ctx = this.canvas.getContext("2d");
    this.clear();
  }
  clear() {
    this.ctx.fillStyle = "black";
    this.ctx.fillRect(0, 0, this.canvas.width, this.canvas.height);
  }
  update() {}
}

    Note that for development purposes there is a debug option to mount the canvas to the DOM and give it a bigger size. In the end result we won’t be using this option.

    Now we can go ahead and start adding some of the logic to make our ripples work:

1. In constructor(), add:
  1. a this.points array to keep all our ripples
  2. this.radius for the max radius of a ripple
  3. this.maxAge for the max age of a ripple
2. In update():
  1. clear the canvas
  2. sing happy birthday to each ripple, and remove those older than this.maxAge
  3. draw each ripple
3. Create addPoint(), which is going to take a normalized position and add a new point to the array:
export class WaterTexture {
    constructor() {
        this.size = 64;
        this.radius = this.size * 0.1;

        this.points = [];
        this.maxAge = 64;
        ...
    }
    ...
    addPoint(point) {
        this.points.push({ x: point.x, y: point.y, age: 0 });
    }
    update() {
        this.clear();
        this.points.forEach((point, i) => {
            point.age += 1;
            if (point.age > this.maxAge) {
                this.points.splice(i, 1);
            }
        })
        this.points.forEach(point => {
            this.drawPoint(point);
        })
    }
}

Note that addPoint() receives normalized values, from 0 to 1. If the canvas happens to resize, we can use the normalized points to draw at the correct size; a point at (0.5, 0.5), for example, always lands in the middle of the canvas.

    Let’s create drawPoint(point) to start drawing the ripples: Convert the normalized point coordinates into canvas coordinates. Then, draw a happy little circle:

export class WaterTexture {
    ...
    drawPoint(point) {
        // Convert normalized position into canvas coordinates
        let pos = {
            x: point.x * this.width,
            y: point.y * this.height
        }
        const radius = this.radius;

        // Fill with white for now so the ripple is visible on the black canvas
        this.ctx.fillStyle = "white";
        this.ctx.beginPath();
        this.ctx.arc(pos.x, pos.y, radius, 0, Math.PI * 2);
        this.ctx.fill();
    }
}

For our ripples to have a strong push at the center and a weak force at the edges, we'll fill our circle with a radial gradient, which becomes more transparent toward the edges.

    Radial Gradients create a dithering-like effect when a lot of them overlap. It looks stylish but not as smooth as what we want it to look like.

    To make our ripples smooth, we’ll use the circle’s shadow instead of using the circle itself. Shadows give us the gradient-like result without the dithering-like effect. The difference is in the way shadows are painted to the canvas.

    Since we only want to see the shadow and not the flat-colored circle, we’ll give the shadow a high offset. And we’ll move the circle in the opposite direction.

As the ripple gets older, we'll reduce its opacity until it disappears:

export class WaterTexture {
    ...
    drawPoint(point) {
        ...
        const ctx = this.ctx;
        // Lower the opacity as it gets older
        let intensity = 1. - point.age / this.maxAge;

        let color = "255,255,255";

        let offset = this.width * 5.;
        // 1. Give the shadow a high offset.
        ctx.shadowOffsetX = offset;
        ctx.shadowOffsetY = offset;
        ctx.shadowBlur = radius * 1;
        ctx.shadowColor = `rgba(${color},${0.2 * intensity})`;

        this.ctx.beginPath();
        this.ctx.fillStyle = "rgba(255,0,0,1)";
        // 2. Move the circle in the opposite direction of the offset
        this.ctx.arc(pos.x - offset, pos.y - offset, radius, 0, Math.PI * 2);
        this.ctx.fill();
    }
}

To introduce interactivity, we'll add a mousemove event listener to the App class and send the normalized mouse position to WaterTexture:

import { WaterTexture } from './WaterTexture';

class App {
    ...
    init() {
        window.addEventListener('mousemove', this.onMouseMove.bind(this));
        this.tick();
    }
    onMouseMove(ev) {
        const point = {
            x: ev.clientX / window.innerWidth,
            y: ev.clientY / window.innerHeight,
        }
        this.waterTexture.addPoint(point);
    }
}

    Great, now we’ve created a disappearing trail of ripples. Now, let’s give them some momentum!

    Momentum

    To give momentum to a ripple, we need its direction and force. Whenever we create a new ripple, we’ll compare its position with the last ripple. Then we’ll calculate its unit vector and force.

On every update, we'll move each ripple along its unit vector, scaled by its force. And as they get older we'll move them slower and slower, until they retire or go live on a farm. Whichever happens first.

export class WaterTexture {
    ...
    constructor() {
        ...
        this.last = null;
    }
    addPoint(point) {
        let force = 0;
        let vx = 0;
        let vy = 0;
        const last = this.last;
        if (last) {
            const relativeX = point.x - last.x;
            const relativeY = point.y - last.y;
            // Distance formula
            const distanceSquared = relativeX * relativeX + relativeY * relativeY;
            const distance = Math.sqrt(distanceSquared);
            // Calculate Unit Vector
            vx = relativeX / distance;
            vy = relativeY / distance;

            force = Math.min(distanceSquared * 10000, 1.);
        }

        this.last = {
            x: point.x,
            y: point.y
        }
        this.points.push({ x: point.x, y: point.y, age: 0, force, vx, vy });
    }

    update() {
        this.clear();
        let agePart = 1. / this.maxAge;
        this.points.forEach((point, i) => {
            let slowAsOlder = (1. - point.age / this.maxAge);
            let force = point.force * agePart * slowAsOlder;
            point.x += point.vx * force;
            point.y += point.vy * force;
            point.age += 1;
            if (point.age > this.maxAge) {
                this.points.splice(i, 1);
            }
        })
        this.points.forEach(point => {
            this.drawPoint(point);
        })
    }
}

    Note that instead of using the last ripple in the array, we use a dedicated this.last. This way, our ripples always have a point of reference to calculate their force and unit vector.

    Let’s fine-tune the intensity with some easings. Instead of just decreasing until it’s removed, we’ll make it increase at the start and then decrease:

const easeOutSine = (t, b, c, d) => {
  return c * Math.sin((t / d) * (Math.PI / 2)) + b;
};

const easeOutQuad = (t, b, c, d) => {
  t /= d;
  return -c * t * (t - 2) + b;
};

export class WaterTexture {
    ...
    drawPoint(point) {
        ...
        let intensity = 1.;
        if (point.age < this.maxAge * 0.3) {
            intensity = easeOutSine(point.age / (this.maxAge * 0.3), 0, 1, 1);
        } else {
            intensity = easeOutQuad(
                1 - (point.age - this.maxAge * 0.3) / (this.maxAge * 0.7),
                0,
                1,
                1
            );
        }
        intensity *= point.force;
        ...
    }
}

    Now we're finished with creating and updating the ripples. It's looking amazing.

    But how do we use what we have painted to the canvas to distort our final scene?

    Canvas as a texture

    Let's use the canvas as a texture, hence the name WaterTexture. We are going to draw our ripples on the canvas, and use it as a texture in a postprocessing shader.

    First, let's make a texture using our canvas and refresh/update that texture at the end of every update:

    import * as THREE from 'three'
export class WaterTexture {
    initTexture() {
        ...
        this.texture = new THREE.Texture(this.canvas);
    }
    update() {
        ...
        this.texture.needsUpdate = true;
    }
}

    By creating a texture of our canvas, we can sample our canvas like we would with any other texture. But how is this useful to us? Our ripples are just white spots on the canvas.

    In the distortion shader, we're going to need the direction and intensity of the distortion for each pixel. If you recall, we already have the direction and force of each ripple. But how do we communicate that to the shader?

    Encoding data in the color channels

    Instead of thinking of the canvas as a place where we draw happy little clouds, we are going to think about the canvas' color channels as places to store our data and read them later on our vertex shader.

    In the Red and Green channels, we'll store the unit vector of the ripple. In the Blue channel, we'll store the intensity of the ripple.

Since RGB channels range from 0 to 255, we need to normalize our data into that range. So, we'll transform the unit vector range (-1 to 1) and the intensity range (0 to 1) into 0 to 255.

export class WaterTexture {
    drawPoint(point) {
        ...

        // Insert data into the color channels
        // RG = Unit vector
        let red = ((point.vx + 1) / 2) * 255;
        let green = ((point.vy + 1) / 2) * 255;
        // B = Intensity
        let blue = intensity * 255;
        let color = `${red}, ${green}, ${blue}`;

        let offset = this.width * 5;
        ctx.shadowOffsetX = offset;
        ctx.shadowOffsetY = offset;
        ctx.shadowBlur = radius * 1;
        ctx.shadowColor = `rgba(${color},${0.2 * intensity})`;

        this.ctx.beginPath();
        this.ctx.fillStyle = "rgba(255,0,0,1)";
        this.ctx.arc(pos.x - offset, pos.y - offset, radius, 0, Math.PI * 2);
        this.ctx.fill();
    }
}

Note: Remember how we painted the canvas black? When our shader reads such a pixel, it's going to apply a distortion of 0, only distorting where our ripples are painted.
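To see why, here's the decode the shader will later do, worked through in plain JavaScript for a black pixel (illustration only):

// A black pixel samples as (r, g, b) = (0, 0, 0) in normalized 0..1 values
let r = 0, g = 0, b = 0;
let vx = -(r * 2 - 1); // 1
let vy = -(g * 2 - 1); // 1
let intensity = b;     // 0
// displacement = direction * intensity = 0, so unpainted (black) pixels don't distort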

    Look at the pretty color our beautiful data gives the ripples now!

    With that, we're finished with the ripples. Next, we'll create our scene and apply the distortion to the result.

    Creating a basic Three.js scene

    For this effect, it doesn't matter what we render. So, we'll only have a single plane to showcase the effect. But feel free to create an awesome-looking scene and share it with us in the comments!

    Since we're done with WaterTexture, don't forget to turn the debug option to false.

import * as THREE from "three";
import { WaterTexture } from './WaterTexture';

class App {
    constructor() {
        this.waterTexture = new WaterTexture({ debug: false });

        this.renderer = new THREE.WebGLRenderer({
            antialias: false
        });
        this.renderer.setSize(window.innerWidth, window.innerHeight);
        this.renderer.setPixelRatio(window.devicePixelRatio);
        document.body.append(this.renderer.domElement);

        // Create the scene used by addPlane() and render()
        this.scene = new THREE.Scene();

        this.camera = new THREE.PerspectiveCamera(
            45,
            window.innerWidth / window.innerHeight,
            0.1,
            10000
        );
        this.camera.position.z = 50;

        this.tick = this.tick.bind(this);
        this.onMouseMove = this.onMouseMove.bind(this);

        this.init();
    }
    addPlane() {
        let geometry = new THREE.PlaneBufferGeometry(5, 5, 1, 1);
        let material = new THREE.MeshNormalMaterial();
        let mesh = new THREE.Mesh(geometry, material);

        window.addEventListener("mousemove", this.onMouseMove);
        this.scene.add(mesh);
    }
    init() {
        this.addPlane();
        this.tick();
    }
    render() {
        this.renderer.render(this.scene, this.camera);
    }
    tick() {
        this.render();
        this.waterTexture.update();
        requestAnimationFrame(this.tick);
    }
}

    Applying the distortion to the rendered scene

    We are going to use postprocessing to apply the water-like effect to our render.

Postprocessing allows you to add effects or filters after (post) your scene is rendered (processing). Like the image effects or filters you might see on Snapchat or Instagram, there is a lot of cool stuff you can do with postprocessing.

    For our case, we'll render our scene normally with a RenderPass, and apply the effect on top of it with a custom EffectPass.

    Let's render our scene with postprocessing's EffectComposer instead of the Three.js renderer.

    Note that EffectComposer works by going through its passes on each render. It doesn't render anything unless it has a pass for it. We need to add the render of our scene using a RenderPass:

    import { EffectComposer, RenderPass } from 'postprocessing'
    class App{
        constructor(){
            ...
    		this.composer = new EffectComposer(this.renderer);
             this.clock = new THREE.Clock();
            ...
        }
        initComposer(){
            const renderPass = new RenderPass(this.scene, this.camera);
        
            this.composer.addPass(renderPass);
        }
        init(){
        	this.initComposer();
        	...
        }
        render(){
            this.composer.render(this.clock.getDelta());
        }
    }

    Things should look about the same. But now we start adding custom postprocessing effects.

    We are going to create the WaterEffect class that extends postprocessing's Effect. It is going to receive the canvas texture in the constructor and make it a uniform in its fragment shader.

In the fragment shader, we'll distort the UVs inside postprocessing's mainUv function using our canvas texture. Postprocessing is then going to take these UVs and sample our regular scene distorted.

    Although we'll only use postprocessing's mainUv function, there are a lot of interesting functions you can use. I recommend you check out the wiki for more information!

Since we already have the unit vector and intensity, we only need to multiply them together. But because the texture values are normalized, we first need to convert our unit vector from the 0 to 1 range back into the -1 to 1 range:

    import * as THREE from "three";
    import { Effect } from "postprocessing";
    
    export class WaterEffect extends Effect {
      constructor(texture) {
        super("WaterEffect", fragment, {
          uniforms: new Map([["uTexture", new THREE.Uniform(texture)]])
        });
      }
    }
    export default WaterEffect;
    
const fragment = `
uniform sampler2D uTexture;
#define PI 3.14159265359

void mainUv(inout vec2 uv) {
    vec4 tex = texture2D(uTexture, uv);
    // Convert normalized values back into a regular unit vector
    float vx = -(tex.r * 2. - 1.);
    float vy = -(tex.g * 2. - 1.);
    // Normalized intensity works just fine for intensity
    float intensity = tex.b;
    float maxAmplitude = 0.2;
    uv.x += vx * intensity * maxAmplitude;
    uv.y += vy * intensity * maxAmplitude;
}
`;

    We'll then instantiate WaterEffect with our canvas texture and add it as an EffectPass after our RenderPass. Then we'll make sure our composer only renders the last effect to the screen:

    import { WaterEffect } from './WaterEffect'
    import { EffectPass } from 'postprocessing'
    class App{
        ...
    	initComposer() {
            const renderPass = new RenderPass(this.scene, this.camera);
        this.waterEffect = new WaterEffect(this.waterTexture.texture);
    
            const waterPass = new EffectPass(this.camera, this.waterEffect);
    
            renderPass.renderToScreen = false;
            waterPass.renderToScreen = true;
            this.composer.addPass(renderPass);
            this.composer.addPass(waterPass);
    	}
    }

    And here we have the final result!

    An awesome and fun effect to play with!

    Conclusion

    Through this article, we've created ripples, encoded their data into the color channels and used it in a postprocessing effect to distort our render.

    That's a lot of complicated-sounding words! Great work, pat yourself on the back or reach out on Twitter and I'll do it for you 🙂

    But there's still a lot more to explore:

    1. Drawing the ripples with a hollow circle
    2. Giving the ripples an actual radial-gradient
    3. Expanding the ripples as they get older
    4. Or using the canvas as a texture technique to create interactive particles as in Bruno's article.

    We hope you enjoyed this tutorial and had a fun time making ripples. If you have any questions, don't hesitate to comment below or on Twitter!

    Creating a Water-like Distortion Effect with Three.js was written by Daniel Velasquez and published on Codrops.

    Simulating Depth of Field with Particles using the Blurry Library

    Blurry is a set of scripts that allow you to easily visualize simple geometrical shapes with a bokeh/depth of field effect of an out-of-focus camera. It uses Three.js internally to make it easy to develop the shaders and the WebGL programs required to run it.

    The bokeh effect is generated by using millions of particles to draw the primitives supported by the library. These particles are then accumulated in a texture and randomly displaced in a circle depending on how far away they are from the focal plane.

    These are some of the scenes I’ve recently created using Blurry:

    blurry examples

    Since the library itself is very simple and you don’t need to know more than three functions to get started, I’ve decided to write this walk-through of a scene made with Blurry. It will teach you how to use various tricks to create geometrical shapes often found in the works of generative artists. This will also hopefully show you how simple tools can produce interesting and complex looking results.

    In this little introduction to Blurry we’ll try to recreate the following scene, by using various techniques borrowed from the world of generative art:

    targetscene

    Starting out

    You can download the repo here and serve index.html from a local server to render the scene that is currently coded inside libs/createScene.js. You can rotate, zoom and pan around the scene as with any Three.js project using OrbitControls.js.

    There are also some additional key-bindings to change various parameters of the renderer, such as the focal length, exposure, bokeh strength and more. These are visible at the bottom left of the screen.

    All the magic happens inside libs/createScene.js, where you can implement the two functions required to render something with Blurry. All the snippets defined in this article will end up inside createScene.js.

    The most important function we’ll need to implement to recreate the scene shown at the beginning of the article is createScene(), which will be called by the other scripts just before the renderer pushes the primitives to the GPU for the actual rendering of the scene.

    The other function we’ll define is setGlobals(), which is used to define the parameters of the shaders that will render our scene, such as the strength of the bokeh effect, the exposure, background color, etc.

    Let’s head over to createScene.js, remove everything that’s already coded in there, and define setGlobals() as:

    function setGlobals() {
        pointsPerFrame = 50000;
    
        cameraPosition = new THREE.Vector3(0, 0, 115);
        cameraFocalDistance = 100;
    
        minimumLineSize = 0.005;
    
        bokehStrength = 0.02;
        focalPowerFunction = 1;
        exposure = 0.009;
        distanceAttenuation = 0.002;
    
        useBokehTexture = true;
        bokehTexturePath = "assets/bokeh/pentagon2.png";
    
        backgroundColor[0] *= 0.8;
        backgroundColor[1] *= 0.8;
        backgroundColor[2] *= 0.8;
    }

    There’s an explanation for each of these parameters in the Readme of the GitHub repo. The important info at the moment is that the camera will start positioned at (x: 0, y: 0, z: 115) and the cameraFocalDistance (the distance from the camera where our primitives will be in focus) will be set at 100, meaning that every point 100 units away from the camera will be in focus.

    Another variable to consider is pointsPerFrame, which is used internally to assign a set number of points to all the primitives to render in a single frame. If you find that your GPU is struggling with 50000, lower that value.

    Before we start implementing createScene(), let’s first define some initial global variables that will be useful later:

    let rand, nrand;
    let vec3 = function(x,y,z) { return new THREE.Vector3(x,y,z) };

    I’ll explain the usage of each of these variables as we move along; vec3() is just a simple shortcut to create Three.js vectors without having to type THREE.Vector3(…) each time.

    Let’s now define createScene():

    function createScene() {
        Utils.setRandomSeed("3926153465010");
    
        rand = function() { return Utils.rand(); };
        nrand = function() { return rand() * 2 - 1; };
    }

Very often I find the need to "replay" the exact sequence of randomly generated numbers I had in a bugged scene. If I had to rely on the standard Math.random() function, each page refresh would give me different random numbers, which is why I've included a seeded random number generator in the project. Utils.setRandomSeed(…) takes a string as a parameter and uses it as the seed for the random numbers that Utils.rand() will generate; that seeded generator is used in place of Math.random() (though you can still use that if you want).

    The functions rand & nrand will be used to generate random values in the interval [0 … 1] for rand, and [-1 … +1] for nrand.
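For instance, here's a quick (purely illustrative) use of both helpers:

// A random point on the XY plane between -10 and +10
let p = vec3(nrand() * 10, nrand() * 10, 0);
// ...and a random brightness between 0 and 5, e.g. for a color component
let brightness = rand() * 5;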

    Let’s draw some lines

    At the moment you can only draw two simple primitives in Blurry: lines and quads. We’ll focus on lines in this article. Here’s the code that generates 10 consecutive straight lines:

    function createScene() {
        Utils.setRandomSeed("3926153465010");
    
        rand = function() { return Utils.rand(); };
        nrand = function() { return rand() * 2 - 1; };
    
        for(let i = 0; i < 10; i++) {
            lines.push(
                new Line({
                    v1: vec3(i, 0, 15),
                    v2: vec3(i, 10, 15),
                    
                    c1: vec3(5, 5, 5),
                    c2: vec3(5, 5, 5),
                })
            );
        }
    }

    lines is simply a global array used to store the lines to render. Every line we .push() into the array will be rendered.

    v1 and v2 are the two vertices of the line. c1 and c2 are the colors associated to each vertex as an RGB triplet. Note that Blurry is not restricted to the [0…1] range for each component of the RGB color. In this case using 5 for each component will give us a white line.

    If you did everything correctly up until now, you’ll see 10 straight lines in the screen as soon as you launch index.html from a local server.

    Here’s the code we have so far.

    Since we’re not here to just draw straight lines, we’ll now make more interesting shapes with the help of these two new functions:

    function createScene() {
        Utils.setRandomSeed("3926153465010");
        
        rand = function() { return Utils.rand(); };
        nrand = function() { return rand() * 2 - 1; };
        
        computeWeb();
        computeSparkles();
    }
    
    function computeWeb() { }
    function computeSparkles() { }

    Let’s start by defining computeWeb() as:

    function computeWeb() {
        // how many curved lines to draw
        let r2 = 17;
        // how many "straight pieces" to assign to each of these curved lines
        let r1 = 35;
        for(let j = 0; j < r2; j++) {
            for(let i = 0; i < r1; i++) {
            // defining the spherical coordinates of the two vertices of the line we're drawing
                let phi1 = j / r2 * Math.PI * 2;
                let theta1 = i / r1 * Math.PI - Math.PI * 0.5;
    
                let phi2 = j / r2 * Math.PI * 2;
                let theta2 = (i+1) / r1 * Math.PI - Math.PI * 0.5;
    
                // converting spherical coordinates to cartesian
                let x1 = Math.sin(phi1) * Math.cos(theta1);
                let y1 = Math.sin(theta1);
                let z1 = Math.cos(phi1) * Math.cos(theta1);
    
                let x2 = Math.sin(phi2) * Math.cos(theta2);
                let y2 = Math.sin(theta2);
                let z2 = Math.cos(phi2) * Math.cos(theta2);
    
                lines.push(
                    new Line({
                        v1: vec3(x1,y1,z1).multiplyScalar(15),
                        v2: vec3(x2,y2,z2).multiplyScalar(15),
                        c1: vec3(5,5,5),
                        c2: vec3(5,5,5),
                    })
                );
            }
        }
    }

The goal here is to create a bunch of vertical lines that follow the shape of a sphere. Since we can't make curved lines, we'll break each line along this sphere into tiny straight pieces. (x1,y1,z1) and (x2,y2,z2) will be the endpoints of the line we'll draw in each iteration of the loop. r2 decides how many vertical lines we'll draw on the surface of the sphere, whereas r1 is the number of tiny straight pieces we'll use for each one of those curved lines.

    The phi and theta variables represent the spherical coordinates of both points, which are then converted to Cartesian coordinates before pushing the new line into the lines array.

    Each time the outer loop (j) is entered, phi1 and phi2 will decide at which angle the vertical line will start (for the moment, they’ll hold the same exact value). Every iteration inside the inner loop (i) will construct the tiny pieces creating the vertical line, by slightly incrementing the theta angle at each iteration.

    After the conversion, the resulting Cartesian coordinates will be multiplied by 15 world units with .multiplyScalar(15), thus the curved lines that we’re drawing are placed on the surface of a sphere which has a radius of exactly 15.
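A quick sanity check (illustrative only, not part of the scene): since (x1, y1, z1) comes from sines and cosines of the spherical angles, it's a unit vector, so scaling lands it exactly on that sphere.

let v = vec3(x1, y1, z1);
console.log(v.length());                    // 1 (unit vector)
console.log(v.multiplyScalar(15).length()); // 15, the sphere's radius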

    To make things a bit more interesting, let’s twist these vertical lines a bit with this simple change:

    let phi1 = (j + i * 0.075) / r2 * Math.PI * 2;
    ...
    let phi2 = (j + (i+1) * 0.075) / r2 * Math.PI * 2;

    If we twist the phi angles a bit as we move up the line while we’re constructing it, we’ll end up with:

    t1

    And as a last change, let’s swap the z-axis of both points with the y-axis:

    ...
    lines.push(
        new Line({
            v1: vec3(x1,z1,y1).multiplyScalar(15),
            v2: vec3(x2,z2,y2).multiplyScalar(15),
            c1: vec3(5,5,5),
            c2: vec3(5,5,5),
        })
    );
    ...

    Here’s the full source of createScene.js up to this point.

    Segment-Plane intersections

Now the fun part begins. To recreate this type of intersection between the lines we just drew

    t2

    …we’ll need to play a bit with ray-plane intersections. Here’s an overview of what we’ll do:

    Given the lines we made in our 3D scene, we’re going to create an infinite plane with a random direction and we’ll intersect this plane with all the lines we have in the scene. Then we’ll pick one of these lines intersecting the plane (chosen at random) and we’ll find the closest line to it that is also intersected by the plane.

    Let’s use a figure to make the example a bit easier to digest:

    [image: segments intersecting a plane, with contact points x and y]

    Let’s assume all the segments in the picture are the lines of our scene that intersected the random plane. The red line was chosen randomly out of all the intersected lines. Every line intersects the plane at a specific point in 3D space. Let’s call “x” the point of contact of the red line with the random plane.

    The next step is to find the closest point to “x”, from all the other contact points of the other lines that were intersected by the plane. In the figure the green point “y” is the closest.

    As soon as we have these two points “x” and “y”, we’ll simply create another line connecting them.

    If we run this process several times (creating a random plane, intersecting our lines, finding the closest point, making a new line) we’ll end up with the result we want. To make it possible, let’s define findIntersectingEdges() as:

    function findIntersectingEdges(center, dir) {
    
        let contactPoints = [];
        for(let line of lines) {
            let ires = intersectsPlane(
                center, dir,
                line.v1, line.v2
            );
    
            if(ires === false) continue;
    
            contactPoints.push(ires);
        }
    
        if(contactPoints.length < 2) return;
    }

    The two parameters of findIntersectingEdges() are the center of the 3D plane and the direction the plane is facing. contactPoints will store all the points of intersection between the lines of our scene and the plane, while intersectsPlane() tells us whether a given line crosses the plane. If the returned value ires isn't false, it holds the point of intersection, and we push it into the contactPoints array.

    intersectsPlane() is defined as:

    function intersectsPlane(planePoint, planeNormal, linePoint, linePoint2) {
    
        let lineDirection = new THREE.Vector3(linePoint2.x - linePoint.x, linePoint2.y - linePoint.y, linePoint2.z - linePoint.z);
        let lineLength = lineDirection.length();
        lineDirection.normalize();
    
        if (planeNormal.dot(lineDirection) === 0) {
            return false;
        }
    
        let t = (planeNormal.dot(planePoint) - planeNormal.dot(linePoint)) / planeNormal.dot(lineDirection);
        if (t > lineLength) return false;
        if (t < 0) return false;
    
        let px = linePoint.x + lineDirection.x * t;
        let py = linePoint.y + lineDirection.y * t;
        let pz = linePoint.z + lineDirection.z * t;
        
        let planeSize = Infinity;
        // with planeSize set to Infinity this check never rejects a point;
        // lower the value to confine intersections near the plane's center
        if(vec3(planePoint.x - px, planePoint.y - py, planePoint.z - pz).length() > planeSize) return false;
    
        return vec3(px, py, pz);
    }

    I won’t go over the details of how this function works, if you want to know more check the original version of the function here.

    Let’s now go to step 2: Picking a random contact point (we’ll call it randCp) and finding its closest neighbor contact point. Append this snippet at the end of findIntersectingEdges():

    function findIntersectingEdges(center, dir) {
        ...
        ...
    
        let randCpIndex = Math.floor(rand() * contactPoints.length);
        let randCp = contactPoints[randCpIndex];
    
        // let's search the closest contact point from randCp
        let minl = Infinity;
        let minI = -1;
        
        // iterate all contact points
        for(let i = 0; i < contactPoints.length; i++) {
            // skip randCp otherwise the closest contact point to randCp will end up being... randCp!
            if(i === randCpIndex) continue;
    
            let cp2 = contactPoints[i];
    
            // 3d point in space of randCp
            let v1 = vec3(randCp.x, randCp.y, randCp.z);
            // 3d point in space of the contact point we're testing for proximity
            let v2 = vec3(cp2.x, cp2.y, cp2.z);
    
            let sv = vec3(v2.x - v1.x, v2.y - v1.y, v2.z - v1.z);
            // "l" holds the euclidean distance between the two contact points
            let l = sv.length();
    
            // if "l" is smaller than the minimum distance we've registered so far, store this contact point's index as minI
            if(l < minl) {
                minl = l;
                minI = i;
            }
        }
    
        let cp1 = contactPoints[randCpIndex];
        let cp2 = contactPoints[minI];
    
        // let's create a new line out of these two contact points
        lines.push(
            new Line({
                v1: vec3(cp1.x, cp1.y, cp1.z),
                v2: vec3(cp2.x, cp2.y, cp2.z),
                c1: vec3(2,2,2),
                c2: vec3(2,2,2),
            })
        );
    }

    Now that we have our routine to test intersections against a 3D plane, let's use it repeatedly on the lines we already made on the surface of the sphere. Append the following code at the end of computeWeb():

    function computeWeb() {
    
        ...
        ...
    
        // intersect many 3d planes against all the lines we made so far
        for(let i = 0; i < 4500; i++) {
            let x0 = nrand() * 15;
            let y0 = nrand() * 15;
            let z0 = nrand() * 15;
            
            // dir will be a random direction on the unit sphere
            let dir = vec3(nrand(), nrand(), nrand()).normalize();
            findIntersectingEdges(vec3(x0, y0, z0), dir);
        }
    }

    If you followed along, you should get this result:

    [image: the sphere of lines with the new connecting edges]

    Click here to see the source up to this point.

    Adding sparkles

    We’re almost done! To make the depth of field effect more prominent we’re going to fill the scene with little sparkles. So, it’s now time to define the last function we were missing:

    function computeSparkles() {
        for(let i = 0; i < 5500; i++) {
            let v0 = vec3(nrand(), nrand(), nrand()).normalize().multiplyScalar(18 + rand() * 65);
    
            let c = 1.325 * (0.3 + rand() * 0.7);
            let s = 0.125;
    
            if(rand() > 0.9) {
                c *= 4;
            }
    
            lines.push(new Line({
                v1: vec3(v0.x - s, v0.y, v0.z),
                v2: vec3(v0.x + s, v0.y, v0.z),
    
                c1: vec3(c, c, c),
                c2: vec3(c, c, c),
            }));
    
            lines.push(new Line({
                v1: vec3(v0.x, v0.y - s, v0.z),
                v2: vec3(v0.x, v0.y + s, v0.z),
        
                c1: vec3(c, c, c),
                c2: vec3(c, c, c),
            }));
        }
    }

    Let’s start by explaining this line:

    let v0 = vec3(nrand(), nrand(), nrand()).normalize().multiplyScalar(18 + rand() * 65);

    Here we’re creating a 3D vector with three random values between -1 and +1. Then, by doing .normalize() we’re making it a “unit vector”, which is a vector whose length is exactly 1.

    If you drew many points using this method (choosing three random components between -1 and +1 and then normalizing the vector), you'd notice that all of them end up on the surface of a sphere (which has a radius of exactly one).
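
    You can check this with the same vec3 and nrand helpers used above (a throwaway console check, not part of the scene code):

    let p = vec3(nrand(), nrand(), nrand()).normalize();
    console.log(p.length()); // always 1 (up to floating point error), so p sits on the unit sphere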

    Since the sphere we're drawing in computeWeb() has a radius of exactly 15 units, we want to make sure that none of our sparkles end up inside it.

    We can guarantee that every point is far enough from the sphere by scaling each normalized vector with .multiplyScalar(18 + rand() * 65): the base value 18 is bigger than the sphere's radius, and + rand() * 65 adds some randomness to the distance.

    let c = 1.325 * (0.3 + rand() * 0.7);

    c is a multiplier for the color intensity of the sparkle we're computing. At a minimum it will be 1.325 * 0.3, which is roughly 0.4; if rand() ends up at the highest possible value, c approaches 1.325 * 1 = 1.325.

    The line if(rand() > 0.9) c *= 4; can be read as "roughly one sparkle in ten gets a color intensity four times bigger than the others".

    The two calls to lines.push() draw a horizontal line and a vertical line, each of length 2 * s and centered on v0. All the sparkles are in fact little "plus signs".

    And here’s what we have up to this point:

    [image: the scene with sparkles added]

    … and the code for createScene.js

    Adding lights and colors

    The final step in our small journey with Blurry is to change the color of our lines to match the colors of the finished scene.

    Before we do so, I'll give a very simplistic explanation of the algebraic operation called the "dot product". Given two unit vectors in 3D space, the dot product measures how "similar" the directions they point in are.

    Two parallel unit vectors will have a dot product of 1 while orthogonal unit vectors will instead have a dot product of 0. Opposite unit vectors will result in a dot product of -1.

    Take this picture as a reference for the value of the dot product depending on the two input unit vectors:

    [image: dot product values for different angles between two unit vectors]
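
    To make those numbers concrete, here's a quick sanity check using the same vec3 helper as the scene code (illustration only):

    const up = vec3(0, 1, 0);
    console.log(up.dot(vec3(0, 1, 0)));  // 1, parallel
    console.log(up.dot(vec3(1, 0, 0)));  // 0, orthogonal
    console.log(up.dot(vec3(0, -1, 0))); // -1, opposite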

    We can use this operation to calculate “how close” two directions are to each other, and we’ll use it to fake diffuse lighting and create the effect that two light sources are lighting up the scene.

    Here’s a drawing which will hopefully make it easier to understand what we’ll do:

    [image: diagram of the surface normal (red), the light direction (violet) and its negation (green)]

    The red and white dot on the surface of the sphere has the red unit vector direction associated with it. Now let's imagine that the violet vectors represent light emitted from a directional light source, and that the green vector is the negation of the violet vector. If we take the dot product between the red and the green vector, we get an estimate of how much the two vectors point in the same direction: the bigger the value, the more light that point receives. The intuition behind this process is to imagine each point of our lines as a very small plane; if that little plane faces the light source, it absorbs and reflects more of its light.

    Remember though that the dot product can also return negative values. We'll guard against that by clamping the result, making sure the value we use never drops below a small positive minimum.
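
    Before wiring this into the scene, here's the idea in isolation, a quick sketch using the first light direction we're about to define (the numbers are approximate):

    let lightDir = vec3(1, 1, 0.2).normalize(); // roughly (0.70, 0.70, 0.14)
    let normal = vec3(0, 1, 0);                 // a point at the very top of the sphere
    let diffuse = Math.max(lightDir.dot(normal), 0.1);
    // diffuse is roughly 0.70: this point faces the light quite directly, so it's well lit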

    Let’s now code what we said so far with words and define two new global variables just before the definition of createScene():

    let lightDir0 = vec3(1, 1, 0.2).normalize();
    let lightDir1 = vec3(-1, 1, 0.2).normalize();

    You can think of both variables as "green vectors" like the one in the picture above, each pointing toward a different directional light source.

    We’ll also create a normal1 variable which will be used as our “red vector” in the picture above and calculate the dot products between normal1 and the two light directions we just added. Each light direction will have a color associated to it. After we calculate with the dot products how much light is reflected from both light directions, we’ll just sum the two colors together (we’ll sum the RGB triplets) and use that as the new color of the line we’ll create.

    Let's finally append a new snippet to the end of computeWeb() which will change the color of the lines we computed in the previous steps:

    function computeWeb() {
        ...
    
        // recolor edges
        for(let line of lines) {
            let v1 = line.v1;
            
            // this will be used as the "red vector" of the previous example
            let normal1 = v1.clone().normalize();
            
            // lets calculate how much light normal1
            // will get from the "lightDir0" light direction (the white light)
            // we need Math.max( ... , 0.1) to make sure the dot product doesn't get lower than
            // 0.1, this will ensure each point is at least partially lit by a light source and
            // doesn't end up being completely black
            let diffuse0 = Math.max(lightDir0.dot(normal1), 0.1);
            // lets calculate how much light normal1
            // will get from the "lightDir1" light direction (the reddish light)
            let diffuse1 = Math.max(lightDir1.dot(normal1), 0.1);
            
            let firstColor = [diffuse0, diffuse0, diffuse0];
            let secondColor = [2 * diffuse1, 0.2 * diffuse1, 0];
            
            // the two colors will represent how much light is received from both light directions,
            // so we'll need to sum them together to create the effect that our scene is being lit by two light sources
            let r1 = firstColor[0] + secondColor[0];
            let g1 = firstColor[1] + secondColor[1];
            let b1 = firstColor[2] + secondColor[2];
            
            let r2 = firstColor[0] + secondColor[0];
            let g2 = firstColor[1] + secondColor[1];
            let b2 = firstColor[2] + secondColor[2];
            
            line.c1 = vec3(r1, g1, b1);
            line.c2 = vec3(r2, g2, b2);
        }
    }

    Keep in mind that what we're doing here is a very, very simple way to recreate diffuse lighting, and it's incorrect for several reasons. The most obvious one: we only consider the first vertex of each line and assign its light contribution to both vertices, even though the second vertex might be far away from the first, and would therefore have a different normal vector and consequently a different light contribution. But we'll live with this simplification for the purpose of this article.

    Let’s also update the lines created with computeSparkles() to reflect these changes as well:

    function computeSparkles() {
        for(let i = 0; i < 5500; i++) {
            let v0 = vec3(nrand(), nrand(), nrand()).normalize().multiplyScalar(18 + rand() * 65);
    
            let c = 1.325 * (0.3 + rand() * 0.7);
            let s = 0.125;
    
            if(rand() > 0.9) {
                c *= 4;
            }
    
            let normal1 = v0.clone().normalize();
    
            let diffuse0 = Math.max(lightDir0.dot(normal1), 0.1);
            let diffuse1 = Math.max(lightDir1.dot(normal1), 0.1);
    
            let r = diffuse0 + 2 * diffuse1;
            let g = diffuse0 + 0.2 * diffuse1;
            let b = diffuse0;
    
            lines.push(new Line({
                v1: vec3(v0.x - s, v0.y, v0.z),
                v2: vec3(v0.x + s, v0.y, v0.z),
    
                c1: vec3(r * c, g * c, b * c),
                c2: vec3(r * c, g * c, b * c),
            }));
    
            lines.push(new Line({
                v1: vec3(v0.x, v0.y - s, v0.z),
                v2: vec3(v0.x, v0.y + s, v0.z),
        
                c1: vec3(r * c, g * c, b * c),
                c2: vec3(r * c, g * c, b * c),
            }));
        }
    }

    And that’s it!

    The scene you’ll end up seeing will be very similar to the one we wanted to recreate at the beginning of the article. The only difference will be that I’m calculating the light contribution for both computeWeb() and computeSparkles() as:

    let diffuse0 = Math.max(lightDir0.dot(normal1) * 3, 0.15);
    let diffuse1 = Math.max(lightDir1.dot(normal1) * 2, 0.2 );

    Check the full source here or take a look at the live demo.

    Final words

    If you made it this far, you now know how this very simple library works, and hopefully you've picked up a few tricks for your future generative art projects!

    This little project only used lines as primitives, but you can also use textured quads, motion blur, and a custom shader pass that I’ve used recently to recreate volumetric light shafts. Look through the examples in libs/scenes/ if you’re curious to see those features in action.

    If you have any questions about the library, or if you'd like to suggest a feature or change, feel free to open an issue in the GitHub repo. I'd love to hear your suggestions!

    Simulating Depth of Field with Particles using the Blurry Library was written by Domenico Bruzzese and published on Codrops.

    How to Build a Color Customizer App for a 3D Model with Three.js





    In this tutorial you’ll learn how to create a customizer app that lets you change the colors of a 3D model of a chair using Three.js.

    [image: the finished 3D chair color customizer]

    See the demo in action: 3D Model Color Customizer App with Three.js

    A quick introduction

    This tool was inspired by the Vans shoe customizer and is built with the amazing JavaScript 3D library Three.js.

    For this tutorial, I’ll assume you are comfortable with JavaScript, HTML and CSS.

    I’m going to do something a little bit different here in the interest of actually teaching you, and not making you copy/paste parts that aren’t all that relevant to this tutorial, we’re going to start with all of the CSS in place. The CSS really is just for the dressing around the app, it focusses on the UI only. That being said, each time we paste some HTML, I’ll explain quickly what the CSS does. Let’s get started.

    Part 1: The 3D model

    If you want to skip this part entirely, feel free to do so, but it may pay to read it just so you have a deeper understanding of how everything works.

    This isn’t a 3D modelling tutorial, but I will explain how the model is set up in Blender, and if you’d like to create something of your own, change a free model you found somewhere online, or instruct someone you’re commissioning. Here’s some information about how our chairs 3D model is authored.

    The 3D model for this tutorial is hosted and included within the JavaScript, so don’t worry about downloading or having to do any of this unless you’d like to look further into using Blender, and learning how to create your own model.

    Scale

    The scale is set to approximately what it would be in the real world; I don’t know if this is important, but it feels like the right thing to do, so why not?

    [image: the chair model in Blender, scaled to real-world size]

    Layering and naming conventions

    This part is important: each element of the object you want to customize independently needs to be its own object in the 3D scene, and each item needs to have a unique name. Here we have back, base, cushions, legs and supports. Note that if you have, say, three items all called supports, Blender will name them supports, supports.001 and supports.002. That doesn't matter, because in our JavaScript we'll use includes("supports") to find every object whose name contains the string supports.
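
    As a quick illustration of why the numbered suffixes don't matter (just for the console, not part of the app):

    // all of Blender's auto-numbered variants still match the base name
    ['supports', 'supports.001', 'supports.002'].forEach((name) => {
      console.log(name.includes('supports')); // true, true, true
    });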

    [image: the separately named objects in Blender's outliner]

    Placement

    The model should be placed at the world origin, ideally with its feet on the floor. It should ideally be facing the right way, but this can easily be rotated via JavaScript, no harm, no foul.

    Setting up for export

    Before exporting, you want to use Blender's Smart UV unwrap option. Without going into too much detail, this makes textures keep their aspect ratio intact as they wrap around the different shapes of your model, without stretching in weird ways (I'd advise reading up on this option only if you're making your own model).

    You also want to be sure to select all of your objects and apply your transformations. For instance, if you changed the scale or transformed the model in any way, applying tells Blender that this is the new 100% scale, instead of it still being, say, 32.445% scale because you scaled it down a bit.

    File Format

    Apparently Three.js supports a bunch of 3D object file formats, but the one it recommends is glTF (.glb). Blender supports this format as an export option, so no worries there.

    Part 2: Setting up our environment

    Go ahead and fork this pen, or start your own one and copy the CSS from this pen. This is a blank pen with just the CSS we’re going to be using in this tutorial.

    See the Pen
    3D Chair Customizer Tutorial – Blank
    by Kyle Wetton (@kylewetton)
    on CodePen.

    If you don’t choose to fork this, grab the HTML as well; it has the responsive meta tags and Google fonts included.

    We’re going to use three dependencies for this tutorial. I’ve included comments above each that describe what they do. Copy these into your HTML, right at the bottom:

    <!-- The main Three.js file -->
    <script src='https://cdnjs.cloudflare.com/ajax/libs/three.js/108/three.min.js'></script>
    
    <!-- This brings in the ability to load custom 3D objects in the .gltf file format. Blender can export to this format out of the box -->
    <script src='https://cdn.jsdelivr.net/gh/mrdoob/Three.js@r92/examples/js/loaders/GLTFLoader.js'></script>
    
    <!-- This is a simple-to-use extension for Three.js that activates all the rotating, dragging and zooming controls we need for both mouse and touch; there isn't a clear CDN for this that I can find -->
    <script src='https://threejs.org/examples/js/controls/OrbitControls.js'></script>
    

    Let’s include the canvas element. The entire 3D experience gets rendered into this element, all other HTML will be UI around this. Place the canvas at the bottom of your HTML, above your dependencies.

    <!-- The canvas element is used to draw the 3D scene -->
    <canvas id="c"></canvas>

    Now, we’re going to create a new Scene for Three.js. In your JavaScript, lets make a reference to this scene like so:

    // Init the scene
    const scene = new THREE.Scene();

    Below this, we’re going to reference our canvas element

    const canvas = document.querySelector('#c');

    Three.js requires a few things to run, and we will get to all of them. The first was the scene; the second is a renderer. Let's add this below our canvas reference. It creates a new WebGLRenderer; we pass our canvas to it and opt in for antialiasing, which gives our 3D model smoother edges.

    // Init the renderer
    const renderer = new THREE.WebGLRenderer({canvas, antialias: true});

    And now we’re going to append the renderer to the document body

    document.body.appendChild(renderer.domElement);

    The CSS for the canvas element is just stretching it to 100% height and width of the body, so your entire page has now turned black, because the entire canvas is now black!

    Our scene is black, we’re on the right track here.

    The next thing Three.js needs is an update loop: basically, a function that runs on every frame draw, which is really important to the way our app will work. We've called our update function animate(). Let's add it below everything else in our JavaScript.

    function animate() {
      renderer.render(scene, camera);
      requestAnimationFrame(animate);
    }
    
    animate();

    Note that we’re referencing a camera here, but we haven’t set one up yet. Let’s add one now.

    At the top of your JavaScript, we'll add a variable called cameraFar. When we add our camera to the scene, it's going to be added at position 0,0,0, which is where our chair is sitting! So cameraFar is the variable that tells our camera how far off this mark to move, so that we can see our chair.

    var cameraFar = 5;

    Now, above our function animate() {…}, let's add a camera.

    // Add a camera
    var camera = new THREE.PerspectiveCamera( 50, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.z = cameraFar;
    camera.position.x = 0;

    This is a perspective camera with a field of view of 50, sized to the whole window/canvas, and with some default clipping planes. The planes determine how near or far something can be from the camera before it stops being rendered. It's not something we need to pay much attention to in our app.

    Our scene is still black, let’s set a background color.

    At the top, above our scene reference, add a background color variable called BACKGROUND_COLOR.

    const BACKGROUND_COLOR = 0xf1f1f1;

    Notice how we used 0x instead of # in our hex? These are hexadecimal numbers, and the only thing you need to remember is that this isn't a string the way you'd handle a standard #hex value in JavaScript. It's an integer, and it starts with 0x.
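
    If it helps, here's a quick console illustration of that equivalence (not part of the app):

    console.log(0xf1f1f1 === 15856113);               // true: it's just an integer
    console.log(0xf1f1f1 === parseInt('f1f1f1', 16)); // true: the same value parsed from a string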

    Below our scene reference, let's update the scene's background color and add some fog of the same color off in the distance; this will help hide the edges of the floor once we add it in.

    const BACKGROUND_COLOR = 0xf1f1f1;
    
    // Init the scene
    const scene = new THREE.Scene();
    
    // Set background
    scene.background = new THREE.Color(BACKGROUND_COLOR );
    scene.fog = new THREE.Fog(BACKGROUND_COLOR, 20, 100);
    

    Now it’s an empty world. It’s hard to tell that though, because there’s nothing in there, nothing casting shadows. We have a blank scene. Now it’s time to load in our model.

    Part 3: Loading the model

    We’re going to add the function that loads in models, this is provided by our second dependency we added in our HTML.

    Before we do that though, let's reference the model; we'll be using this variable quite a bit. Add this at the top of your JavaScript, above BACKGROUND_COLOR. Let's also add a path to the model. I've hosted it for us; it's about 1Mb in size.

    var theModel;
    const MODEL_PATH =  "https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/chair.glb";
    

    Now we can create a new loader and use its load method. This sets theModel to our 3D model's entire scene. We're also going to set the size for this app; the right size seems to be about twice as big as the model loads in. Thirdly, we're going to offset the y position by -1 to bring it down a little bit, and finally we're going to add the model to the scene.

    The first parameter is the model’s filepath, the second is a function that runs once the resource is loaded, the third is undefined for now but can be used for a second function that runs while the resource is loading, and the final parameter handles errors.

    Add this below our camera.

    // Init the object loader
    var loader = new THREE.GLTFLoader();
    
    loader.load(MODEL_PATH, function(gltf) {
      theModel = gltf.scene;
    
    // Set the models initial scale   
      theModel.scale.set(2,2,2);
    
      // Offset the y position a bit
      theModel.position.y = -1;
    
      // Add the model to the scene
      scene.add(theModel);
    
    }, undefined, function(error) {
      console.error(error)
    });

    At this point you should be seeing a stretched, black, pixelated chair. As awful as it looks, this is right so far. So don’t worry!

    [image: the model loaded: stretched, black and pixelated]

    Along with a camera, we need lights. The background isn't affected by lights, but if we added a floor right now, it would also be black (dark). There are a number of lights available in Three.js, and a number of options to tweak on all of them. We're going to add two: a hemisphere light and a directional light. The settings, including position and intensity, are already sorted for our app. This is something to play around with if you ever adopt these methods in your own app, but for now, let's use the ones I've included. Add these lights below your loader.

    // Add lights
    var hemiLight = new THREE.HemisphereLight( 0xffffff, 0xffffff, 0.61 );
        hemiLight.position.set( 0, 50, 0 );
    // Add hemisphere light to scene   
    scene.add( hemiLight );
    
    var dirLight = new THREE.DirectionalLight( 0xffffff, 0.54 );
        dirLight.position.set( -8, 12, 8 );
        dirLight.castShadow = true;
        dirLight.shadow.mapSize = new THREE.Vector2(1024, 1024);
    // Add directional Light to scene    
        scene.add( dirLight );

    Your chair looks marginally better! Before we continue, here’s our JavaScript so far:

    var cameraFar = 5;
    var theModel;
    
    const MODEL_PATH =  "https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/chair.glb";
    
    const BACKGROUND_COLOR = 0xf1f1f1;
    // Init the scene
    const scene = new THREE.Scene();
    // Set background
    scene.background = new THREE.Color(BACKGROUND_COLOR );
    scene.fog = new THREE.Fog(BACKGROUND_COLOR, 20, 100);
    
    const canvas = document.querySelector('#c');
    
    // Init the renderer
    const renderer = new THREE.WebGLRenderer({canvas, antialias: true});
    
    document.body.appendChild(renderer.domElement);
    
    // Add a camera
    var camera = new THREE.PerspectiveCamera( 50, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.z = cameraFar;
    camera.position.x = 0;
    
    // Init the object loader
    var loader = new THREE.GLTFLoader();
    
    loader.load(MODEL_PATH, function(gltf) {
      theModel = gltf.scene;
    
    // Set the models initial scale   
      theModel.scale.set(2,2,2);
    
      // Offset the y position a bit
      theModel.position.y = -1;
    
      // Add the model to the scene
      scene.add(theModel);
    
    }, undefined, function(error) {
      console.error(error)
    });
    
    // Add lights
    var hemiLight = new THREE.HemisphereLight( 0xffffff, 0xffffff, 0.61 );
        hemiLight.position.set( 0, 50, 0 );
    // Add hemisphere light to scene   
    scene.add( hemiLight );
    
    var dirLight = new THREE.DirectionalLight( 0xffffff, 0.54 );
        dirLight.position.set( -8, 12, 8 );
        dirLight.castShadow = true;
        dirLight.shadow.mapSize = new THREE.Vector2(1024, 1024);
    // Add directional Light to scene    
        scene.add( dirLight );
    
    function animate() {
      renderer.render(scene, camera);
      requestAnimationFrame(animate);
    }
    
    animate();
    

    Here’s what we should be looking at right now:

    [image: the chair with lights added]

    Let’s fix the pixelation and the stretching. Three.js needs to update the canvas size when it shifts, and it needs to set its internal resolution not only to the dimensions of the canvas, but also the device pixel ratio of the screen (which is much higher on phones).

    Let's head to the bottom of our JavaScript, below where we call animate(), and add this function. It listens to both the canvas size and the window size, and returns a boolean that's true when the canvas needs resizing. We'll use that inside the animate function to decide whether the scene needs re-rendering at a new size. The function also takes the device pixel ratio into account, to be sure the canvas stays sharp on mobile phones too.

    Add this function at the bottom of your JavaScript.

    function resizeRendererToDisplaySize(renderer) {
      const canvas = renderer.domElement;
      var width = window.innerWidth;
      var height = window.innerHeight;
      var canvasPixelWidth = canvas.width / window.devicePixelRatio;
      var canvasPixelHeight = canvas.height / window.devicePixelRatio;
    
      const needResize = canvasPixelWidth !== width || canvasPixelHeight !== height;
      if (needResize) {
        
        renderer.setSize(width, height, false);
      }
      return needResize;
    }
    

    Now update your animate function to look like this:

    function animate() {
      renderer.render(scene, camera);
      requestAnimationFrame(animate);
      
      if (resizeRendererToDisplaySize(renderer)) {
        const canvas = renderer.domElement;
        camera.aspect = canvas.clientWidth / canvas.clientHeight;
        camera.updateProjectionMatrix();
      }
    }
    

    Instantly, our chair is looking so much better!

    [image: the chair rendered crisply at the correct aspect ratio]

    I need to mention a couple things before we continue:

    • The chair is backwards; this is my bad. We're going to simply rotate the model on its Y axis.
    • The supports are black, but the rest is white? This is because the model came in with some material information I had set up in Blender. That doesn't matter, because we're going to add a function that lets us define materials in our app and apply them to different areas of the chair when the model loads. So if you have a wood texture and a denim texture (spoiler: we will), we'll be able to set these on load without the user having to choose them. The materials on the chair right now don't matter all that much.

    Humour me quickly: head to the loader function, and remember where we set the scale to (2,2,2)? Let's add this under it:

    // Set the models initial scale   
      theModel.scale.set(2,2,2);
    
      theModel.rotation.y = Math.PI;
    

    Yeah, much better, sorry about that. One more thing: Three.js doesn't work in degrees as far as I know (?); everyone appears to use radians via Math.PI. Math.PI equals 180 degrees, so if you want something angled at 45 degrees, you'd use Math.PI / 4.
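
    If you'd rather think in degrees, a tiny helper like this does the trick (a sketch, not part of the demo code; degToRad is our own name):

    // convert degrees to the radians Three.js expects
    const degToRad = (degrees) => degrees * Math.PI / 180;
    theModel.rotation.y = degToRad(180); // identical to Math.PI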

    [image: the chair rotated to face the camera]

    Okay, we’re getting there! We need a floor though, without a floor there can’t really be any shadows right?

    Let's add a floor. What we're doing here is creating a new plane (a two-dimensional shape, or a three-dimensional shape with no height).

    Add this below our lights…

    // Floor
    var floorGeometry = new THREE.PlaneGeometry(5000, 5000, 1, 1);
    var floorMaterial = new THREE.MeshPhongMaterial({
      color: 0xff0000,
      shininess: 0
    });
    
    var floor = new THREE.Mesh(floorGeometry, floorMaterial);
    floor.rotation.x = -0.5 * Math.PI;
    floor.receiveShadow = true;
    floor.position.y = -1;
    scene.add(floor);
    

    Let’s take a look at whats happening here.

    First, we made a geometry. We won't need to make another geometry in Three.js in this tutorial, but you can make all sorts.

    Secondly, notice how we also made a new MeshPhongMaterial and set a couple of options: its color and its shininess. Check out some of Three.js' other materials later on. Phong is great because you can adjust its reflectiveness and specular highlights. There is also MeshStandardMaterial, which supports more advanced texture aspects such as metallic and ambient occlusion maps, and MeshBasicMaterial, which doesn't support shadows. We'll stick to Phong materials in this tutorial.
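
    If you're curious, here's roughly what those alternatives look like; we won't use either in this tutorial, and the option values below are just placeholders:

    // supports physically-based options like metalness and roughness
    const standardMtl = new THREE.MeshStandardMaterial({ color: 0xf1f1f1, metalness: 0.5, roughness: 0.8 });
    // flat color: ignores lights entirely, so no shading and no shadows
    const basicMtl = new THREE.MeshBasicMaterial({ color: 0xf1f1f1 });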

    We created a variable called floor and merged the geometry and material into a Mesh.

    We set the floor’s rotation to be flat, opted in for the ability to receive shadows, moved it down the same way we moved the chair down, and then added it to the scene.

    We should now be looking at this:

    [image: the chair on the red floor plane]

    We'll leave it red for now. But where are the shadows? There are a couple of things we need to do for that. First, under our const renderer, let's include a couple of options:

    // Init the renderer
    const renderer = new THREE.WebGLRenderer({canvas, antialias: true});
    
    renderer.shadowMap.enabled = true;
    renderer.setPixelRatio(window.devicePixelRatio); 
    

    We’ve set the pixel ratio to whatever the device’s pixel ratio is, not relevant to shadows, but while we’re there, let’s do that. We’ve also enabled shadowMap, but there are still no shadows? That’s because the materials we have on our chair are the ones brought in from Blender, and we want to author some of them in our app.

    Our loader function includes the ability to traverse the 3D model. For each object in our 3D model (legs, cushions, etc.), we're going to enable the option to cast shadows and to receive shadows. This traverse method will be used again later on.

    Add this snippet below theModel = gltf.scene;

      theModel.traverse((o) => {
         if (o.isMesh) {
           o.castShadow = true;
           o.receiveShadow = true;
         }
       });
    

    It arguably looks worse than it did before, but at least there's a shadow on the floor! This is because our model still has the materials brought in from Blender. We're going to replace all of them with a basic, white PhongMaterial.

    Let's create another PhongMaterial and add it above our loader function:

    // Initial material
    const INITIAL_MTL = new THREE.MeshPhongMaterial( { color: 0xf1f1f1, shininess: 10 } );
    

    This is a great starting material: it's a slight off-white, and it's only a little bit shiny. Cool!

    We could just add this to the whole chair and be done with it, but some objects may need a specific color or texture on load, and we can't blanket everything with the same base color. The way we'll handle this is to add this array of objects under our initial material:

    // Initial material
    const INITIAL_MTL = new THREE.MeshPhongMaterial( { color: 0xf1f1f1, shininess: 10 } );
    
    const INITIAL_MAP = [
      {childID: "back", mtl: INITIAL_MTL},
      {childID: "base", mtl: INITIAL_MTL},
      {childID: "cushions", mtl: INITIAL_MTL},
      {childID: "legs", mtl: INITIAL_MTL},
      {childID: "supports", mtl: INITIAL_MTL},
    ];

    We’re going to traverse through our 3D model again and use the childID to find different parts of the chair, and apply the material to it (set in the mtl property). These childID’s match the names we gave each object in Blender, if you read that section, consider yourself informed!

    Below our loader function, let's add a function that takes the model, the part of the object (type) and the material, and sets the material. We're also going to add a new property called nameID to the part so that we can reference it later.

    // Function - Add the textures to the models
    function initColor(parent, type, mtl) {
      parent.traverse((o) => {
       if (o.isMesh) {
         if (o.name.includes(type)) {
              o.material = mtl;
              o.nameID = type; // Set a new property to identify this object
           }
       }
     });
    }
    

    Now, inside our loader function, just before we add our model to the scene (scene.add(theModel);), let's run that function for each object in our INITIAL_MAP array:

      // Set initial textures
      for (let object of INITIAL_MAP) {
        initColor(theModel, object.childID, object.mtl);
      }
    

    Finally, head back to our floor and change the color from red (0xff0000) to a light grey (0xeeeeee).

    // Floor
    var floorGeometry = new THREE.PlaneGeometry(5000, 5000, 1, 1);
    var floorMaterial = new THREE.MeshPhongMaterial({
      color: 0xeeeeee, // <------- Here
      shininess: 0
    });
    

    It’s worth mentioning here that 0xeeeeee is different to our background color. I manually dialed this in until the floor with the lights shining on it matched the lighter background color. We’re now looking at this:

    See the Pen
    3D Chair Customizer Tutorial – Part 1
    by Kyle Wetton (@kylewetton)
    on CodePen.

    Congratulations, we’ve got this far! If you got stuck anywhere, fork this pen or investigate it until you find the issue.

    Part 4: Adding controls

    For real though, this is a very small part, and it's super easy thanks to our third dependency, OrbitControls.js.

    Above our animate function, let's add our controls:

    // Add controls
    var controls = new THREE.OrbitControls( camera, renderer.domElement );
    controls.maxPolarAngle = Math.PI / 2;
    controls.minPolarAngle = Math.PI / 3;
    controls.enableDamping = true;
    controls.enablePan = false;
    controls.dampingFactor = 0.1;
    controls.autoRotate = false; // Toggle this if you'd like the chair to automatically rotate
    controls.autoRotateSpeed = 0.2; // 30
    

    Inside the animate function, at the top, add:

      controls.update();
    

    So our controls variable is a new OrbitControls instance. We've set a few options that you can change here if you'd like, including the range in which the user is allowed to rotate around the chair (above and below). We've disabled panning to keep the chair centered, enabled damping to give the movement weight, and included the auto-rotate ability in case you'd like to use it; it's currently set to false.

    Try clicking and dragging your chair; you should be able to explore the model with full mouse and touch functionality!

    See the Pen
    Scrollable
    by Kyle Wetton (@kylewetton)
    on CodePen.

    Part 5: Changing colors

    Our app currently doesn't do anything, so this next part will focus on changing our colors. We're going to add a bit more HTML; afterwards, I'll explain what the CSS is doing.

    Add this below your canvas element:

    <div class="controls">
    <!-- This tray will be filled with colors via JS, and the ability to slide this panel will be added in with a lightweight slider script (no dependency used for this) -->
     <div id="js-tray" class="tray">
         <div id="js-tray-slide" class="tray__slide"></div>
     </div>
    </div>
    

    Basically, the .controls DIV is stuck to the bottom of the screen, and the .tray is set to 100% of the body's width, but its child, .tray__slide, will fill with swatches and can be as wide as it needs to be. We'll add the ability to slide this child to explore colors as one of the final steps of this tutorial.

    Let’s start by adding in a couple colors. At the top of our JavaScript, lets add an array of five objects, each with a color property.

    const colors = [
    {
        color: '66533C'
    },
    {
        color: '173A2F'
    },
    {
        color: '153944'
    },
    {
        color: '27548D'
    },
    {
        color: '438AAC'
    }  
    ]

    Note that these have neither # nor 0x to represent the hex. That's because we'll use the same strings in two ways: prefixed with # for the CSS swatch backgrounds, and prefixed with 0x for the Three.js materials. Also, each entry is an object because we'll be able to add other properties to a color, like shininess, or even a texture image (spoiler: we will, and we will).
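
    To make that concrete, here's a small sketch of how the same hex string gets used in both worlds (swatchElement is a hypothetical element; the real code comes later):

    // CSS wants a string prefixed with #
    swatchElement.style.background = '#' + colors[0].color;
    // Three.js wants an integer, so we parse the 0x-prefixed form
    const materialColor = parseInt('0x' + colors[0].color);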

    Let's make swatches out of these colors!

    First, let’s reference our tray slider at the top of our JavaScript:

    const TRAY = document.getElementById('js-tray-slide');
    

    Right at the bottom of our JavaScript, let's add a new function called buildColors and immediately call it:

    // Function - Build Colors
    function buildColors(colors) {
      for (let [i, color] of colors.entries()) {
        let swatch = document.createElement('div');
        swatch.classList.add('tray__swatch');
    
          swatch.style.background = "#" + color.color;
    
        swatch.setAttribute('data-key', i);
        TRAY.append(swatch);
      }
    }
    
    buildColors(colors);
    

    [image: the row of color swatches in the tray]

    We’re now creating swatches out of our colors array! Note that we set the data-key attribute to the swatch, we’re going to use this to look up our color and make them into materials.

    Below our new buildColors function, let’s add an event handler to our swatches:

    // Swatches
    const swatches = document.querySelectorAll(".tray__swatch");
    
    for (const swatch of swatches) {
      swatch.addEventListener('click', selectSwatch);
    }
    

    Our click handler calls a function named selectSwatch. This function builds a new PhongMaterial out of the color and calls another function that traverses through our 3D model, finds the part it's meant to change, and updates it!

    Below the event handlers we just added, add the selectSwatch function:

    function selectSwatch(e) {
         let color = colors[parseInt(e.target.dataset.key)];
         let new_mtl;
    
          new_mtl = new THREE.MeshPhongMaterial({
              color: parseInt('0x' + color.color),
              shininess: color.shininess ? color.shininess : 10
              
            });
        
        setMaterial(theModel, 'legs', new_mtl);
    }
    

    This function looks up our color by its data-key attribute and creates a new material out of it.

    This won't work yet; we need to add the setMaterial function (see the final line of the function we just added).

    Take note of this line: setMaterial(theModel, 'legs', new_mtl);. Currently we're just passing 'legs' to this function; soon we'll add the ability to change out the different sections we want to update. But first, let's add the setMaterial function.

    Below this function, add the setMaterial function:

    function setMaterial(parent, type, mtl) {
      parent.traverse((o) => {
       if (o.isMesh && o.nameID != null) {
         if (o.nameID == type) {
              o.material = mtl;
           }
       }
     });
    }
    

    This function is similar to our initColor function, but with a few differences. It checks for the nameID we added in initColor, and if it matches the type parameter, it applies the material.

    Our swatches can now create a new material and change the color of the legs. Give it a go! Here's everything we have so far in a pen; investigate it if you're lost.

    See the Pen
    Swatches change the legs color!
    by Kyle Wetton (@kylewetton)
    on CodePen.

    Part 6: Selecting the parts to change

    We can now change the color of the legs, which is awesome, but let's add the ability to select which part our swatch should apply its material to. Include this HTML just below the opening body tag; I'll explain the CSS below.

    <!-- These toggle the different parts of the chair that can be edited; note that data-option is the key that links to the name of the part in the 3D file -->
    <div class="options">
        <div class="option --is-active" data-option="legs">
            <img src="https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/legs.svg" alt=""/>
        </div>
        <div class="option" data-option="cushions">
            <img src="https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/cushions.svg" alt=""/>
        </div>
        <div class="option" data-option="base">
            <img src="https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/base.svg" alt=""/>
        </div>
        <div class="option" data-option="supports">
            <img src="https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/supports.svg" alt=""/>
        </div>
        <div class="option" data-option="back">
            <img src="https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/back.svg" alt=""/>
        </div>
    </div>
    
    

    This is just a collection of buttons with a custom icon in each. The .options DIV is stuck to the side of the screen via CSS (and shifts a bit with media queries). Each .option DIV is a white square that gets a red border when an --is-active class is added to it. Each one also includes a data-option attribute that matches our nameID, so we can identify it. Lastly, the image elements have the CSS property pointer-events: none, so that the click event stays on the parent even if you click the image.

    [image: the option buttons stacked on the side of the screen]

    Let's add another variable at the top of our JavaScript called activeOption, and by default set it to 'legs':

    var activeOption = 'legs';

    Now head back to our selectSwatch function and update the hard-coded 'legs' parameter to activeOption:

    setMaterial(theModel, activeOption, new_mtl);

    Now all we need to do is create an event handler that changes activeOption when an option is clicked!

    Let’s add this above our const swatches and selectSwatch function.

    // Select Option
    const options = document.querySelectorAll(".option");
    
    for (const option of options) {
      option.addEventListener('click',selectOption);
    }
    
    function selectOption(e) {
      let option = e.target;
      activeOption = e.target.dataset.option;
      for (const otherOption of options) {
        otherOption.classList.remove('--is-active');
      }
      option.classList.add('--is-active');
    }
    

    We’ve added the selectOption function, which sets the activeOption to our event targets data-option value, and toggles the –is-active class. Thats it!

    Try it out

    See the Pen
    Changing options
    by Kyle Wetton (@kylewetton)
    on CodePen.

    But why stop here? An object could look like anything; it can't all be the same material. A chair with no wood or fabric? Let's expand our color selection a little bit. Update your colors array to this:

    const colors = [
    {
        texture: 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/wood.jpg',
        size: [2,2,2],
        shininess: 60
    },
    {
        texture: 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/denim.jpg',
        size: [3, 3, 3],
        shininess: 0
    },
    {
        color: '66533C'
    },
    {
        color: '173A2F'
    },
    {
        color: '153944'
    },
    {
        color: '27548D'
    },
    {
        color: '438AAC'
    }  
    ]

    The top two entries are now textures: we've got wood and denim. We also have two new properties, size and shininess. Size controls how often the pattern repeats, so the larger the number, the denser the pattern, or more simply put, the more it repeats.

    There are two functions we need to update to add this ability. First, let's head to the buildColors function and update it to this:

    // Function - Build Colors
    
    function buildColors(colors) {
      for (let [i, color] of colors.entries()) {
        let swatch = document.createElement('div');
        swatch.classList.add('tray__swatch');
        
        if (color.texture)
        {
          swatch.style.backgroundImage = "url(" + color.texture + ")";   
        } else
        {
          swatch.style.background = "#" + color.color;
        }
    
        swatch.setAttribute('data-key', i);
        TRAY.append(swatch);
      }
    }
    

    Now it checks whether the entry is a texture; if it is, it sets the swatch's background to that texture. Neat!

    [image: the swatch tray with wood and denim texture swatches]
    Notice the gap between the 5th and 6th swatch? The final batch of colors, which I'll provide, is grouped into schemes of 5 colors each, so each scheme gets that small divider. This is set in the CSS and will make more sense in the final product.

    The second function we’re going to update is the selectSwatch function. Update it to this:

    function selectSwatch(e) {
         let color = colors[parseInt(e.target.dataset.key)];
         let new_mtl;
    
        if (color.texture) {
          
          let txt = new THREE.TextureLoader().load(color.texture);
          
          // texture.repeat is a THREE.Vector2, so only the first two size values are actually used here
          txt.repeat.set( color.size[0], color.size[1], color.size[2]);
          txt.wrapS = THREE.RepeatWrapping;
          txt.wrapT = THREE.RepeatWrapping;
          
          new_mtl = new THREE.MeshPhongMaterial( {
            map: txt,
            shininess: color.shininess ? color.shininess : 10
          });    
        } 
        else
        {
          new_mtl = new THREE.MeshPhongMaterial({
              color: parseInt('0x' + color.color),
              shininess: color.shininess ? color.shininess : 10
              
            });
        }
        
        setMaterial(theModel, activeOption, new_mtl);
    }
    

    To explain what’s going on here, this function will now check if it’s a texture. If it is, it’s going to create a new texture using the Three.js TextureLoader method, it’s going to set the texture repeat using our size values, and set the wrapping of it (this wrapping option seems to work best, I’ve tried the others, so lets go with it), then its going to set the PhongMaterials map property to the texture, and finally use the shininess value.

    If it’s not a texture, it uses our older method. Note that you can set a shininess property to any of our original colors!

    [image: the chair with a texture applied]

    Important: if your textures just remain black when you try to add them, check your console. Are you getting cross-domain CORS errors? This is a CodePen bug, and I've done my best to try to fix it. These assets are hosted directly on CodePen via a Pro feature, so it's unfortunate to have to battle with this. Apparently the best bet is to not visit those image URLs directly; otherwise, I recommend signing up to Cloudinary and using their free tier. You may have better luck pointing your textures there.

    Here’s a pen with the textures working on my end at least:

    See the Pen
    Texture support
    by Kyle Wetton (@kylewetton)
    on CodePen.

    Part 7: Finishing touches

    I’ve had projects get run passed clients with a big button that is begging to be pressed, positively glistening with temptation to even just hover over it, and them and their co-workers (Dave from accounts) come back with feedback about how they didn’t know there was anything to be pressed (screw you, Dave).

    So let’s add some calls to action. First, let’s chuck in a patch of HTML above the canvas element:

    <!-- Just a quick notice to the user that it can be interacted with -->
    <span class="drag-notice" id="js-drag-notice">Drag to rotate 360&#176;</span>
    

    The CSS places this call-to-action above the chair; it's a nice big button that instructs the user to drag to rotate the chair. It just stays there though? We will get to that.

    Let’s spin the chair once it’s loaded first, then, once the spin is done, let’s hide that call-to-action.

    First, let's add a loaded variable to the top of our JavaScript and set it to false:

    var loaded = false;
    

    Right at the bottom of your JavaScript, add this function

    // Function - Opening rotate
    let initRotate = 0;
    
    function initialRotation() {
      initRotate++;
    if (initRotate <= 120) {
        theModel.rotation.y += Math.PI / 60;
      } else {
        loaded = true;
      }
    }
    

    This simply rotates the model 360 degrees over the span of 120 frames (around 2 seconds at 60fps): each frame adds Math.PI / 60 radians, and 120 * (Math.PI / 60) equals 2 * Math.PI, a full turn. We're going to run this in the animate function for those 120 frames; once it's done, it sets loaded to true. Here's how animate will look in its entirety with the new code at the end:

    function animate() {
    
      controls.update();
      renderer.render(scene, camera);
      requestAnimationFrame(animate);
      
      if (resizeRendererToDisplaySize(renderer)) {
        const canvas = renderer.domElement;
        camera.aspect = canvas.clientWidth / canvas.clientHeight;
        camera.updateProjectionMatrix();
      }
      
      if (theModel != null && loaded == false) {
        initialRotation();
      }
    }
    
    animate();
    

    We check that theModel isn't null and that loaded is false, then run that function for 120 frames, at which point it switches loaded to true and our animate function ignores it.

    You should now have a nice spinning chair. When that chair stops is a great time to remove our call-to-action.

    In the CSS, there’s a class that can be added to that call-to-action that will hide it with an animation, this animation has a delay of 3 seconds, so let’s add that class at the same time the rotation starts.

    At the top of your JavaScript we will reference it:

    const DRAG_NOTICE = document.getElementById('js-drag-notice');
    

    and update your animate function like so

    if (theModel != null && loaded == false) {
        initialRotation();
        DRAG_NOTICE.classList.add('start');
      }
    

    Great! Okay, here’s some more colors, update your color array, I’ve give a lightweight sliding function below it:

    const colors = [
    {
        texture: 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/wood_.jpg',
        size: [2,2,2],
        shininess: 60
    },
    {
        texture: 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/fabric_.jpg',
        size: [4, 4, 4],
        shininess: 0
    },
    {
        texture: 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/pattern_.jpg',
        size: [8, 8, 8],
        shininess: 10
    },
    {
        texture: 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/denim_.jpg',
        size: [3, 3, 3],
        shininess: 0
    },
    {
        texture: 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1376484/quilt_.jpg',
        size: [6, 6, 6],
        shininess: 0
    },
    {
        color: '131417'  
    },
    {
        color: '374047'  
    },
    {
        color: '5f6e78'  
    },
    {
        color: '7f8a93'  
    },
    {
        color: '97a1a7'  
    },
    {
        color: 'acb4b9'  
    },
    {
        color: 'DF9998',
    },
    {
        color: '7C6862'
    },
    {
        color: 'A3AB84'
    },
    {
        color: 'D6CCB1'
    },
    {
        color: 'F8D5C4'
    },
    {
        color: 'A3AE99'
    },
    {
        color: 'EFF2F2'
    },
    {
        color: 'B0C5C1'
    },
    {
        color: '8B8C8C'
    },
    {
        color: '565F59'
    },
    {
        color: 'CB304A'
    },
    {
        color: 'FED7C8'
    },
    {
        color: 'C7BDBD'
    },
    {
        color: '3DCBBE'
    },
    {
        color: '264B4F'
    },
    {
        color: '389389'
    },
    {
        color: '85BEAE'
    },
    {
        color: 'F2DABA'
    },
    {
        color: 'F2A97F'
    },
    {
        color: 'D85F52'
    },
    {
        color: 'D92E37'
    },
    {
        color: 'FC9736'
    },
    {
        color: 'F7BD69'
    },
    {
        color: 'A4D09C'
    },
    {
        color: '4C8A67'
    },
    {
        color: '25608A'
    },
    {
        color: '75C8C6'
    },
    {
        color: 'F5E4B7'
    },
    {
        color: 'E69041'
    },
    {
        color: 'E56013'
    },
    {
        color: '11101D'
    },
    {
        color: '630609'
    },
    {
        color: 'C9240E'
    },
    {
        color: 'EC4B17'
    },
    {
        color: '281A1C'
    },
    {
        color: '4F556F'
    },
    {
        color: '64739B'
    },
    {
        color: 'CDBAC7'
    },
    {
        color: '946F43'
    },
    {
        color: '66533C'
    },
    {
        color: '173A2F'
    },
    {
        color: '153944'
    },
    {
        color: '27548D'
    },
    {
        color: '438AAC'
    }
    ]
    

Awesome! These hang off the page, though. Right at the bottom of your JavaScript, add this function; it will allow you to drag the swatches panel with mouse and touch. In the interest of staying on topic, I won’t delve too much into how it works.

    var slider = document.getElementById('js-tray'), sliderItems = document.getElementById('js-tray-slide'), difference;
    
    function slide(wrapper, items) {
      var posX1 = 0,
          posX2 = 0,
          posInitial,
          threshold = 20,
          posFinal,
          slides = items.getElementsByClassName('tray__swatch');
      
      // Mouse events
      items.onmousedown = dragStart;
      
      // Touch events
      items.addEventListener('touchstart', dragStart);
      items.addEventListener('touchend', dragEnd);
      items.addEventListener('touchmove', dragAction);
    
    
      function dragStart (e) {
        e = e || window.event;
         posInitial = items.offsetLeft;
         difference = sliderItems.offsetWidth - slider.offsetWidth;
         difference = difference * -1;
        
        if (e.type == 'touchstart') {
          posX1 = e.touches[0].clientX;
        } else {
          posX1 = e.clientX;
          document.onmouseup = dragEnd;
          document.onmousemove = dragAction;
        }
      }
    
      function dragAction (e) {
        e = e || window.event;
        
        if (e.type == 'touchmove') {
          posX2 = posX1 - e.touches[0].clientX;
          posX1 = e.touches[0].clientX;
        } else {
          posX2 = posX1 - e.clientX;
          posX1 = e.clientX;
        }
        
        if (items.offsetLeft - posX2 <= 0 && items.offsetLeft - posX2 >= difference) {
            items.style.left = (items.offsetLeft - posX2) + "px";
        }
      }
      
function dragEnd (e) {
    posFinal = items.offsetLeft;

    // If the drag passed the threshold in either direction, keep the new
    // position; otherwise snap the tray back to where the drag started.
    if (posFinal - posInitial < -threshold) {
      // dragged far enough left: keep the new position
    } else if (posFinal - posInitial > threshold) {
      // dragged far enough right: keep the new position
    } else {
      items.style.left = (posInitial) + "px";
    }

    document.onmouseup = null;
    document.onmousemove = null;
  }
    
    }
    
    slide(slider, sliderItems);
    

Now head to your CSS and, under .tray__slider, uncomment this small animation:

    /*   transform: translateX(-50%);
      animation: wheelin 1s 2s ease-in-out forwards; */
    

Okay, let’s finish it off with the final two touches, and we’re done!

    Let’s update our .controls div to include this extra call-to-action:

    <div class="controls">
    <div class="info">
        <div class="info__message">
            <p><strong>&nbsp;Grab&nbsp;</strong> to rotate chair. <strong>&nbsp;Scroll&nbsp;</strong> to zoom. <strong>&nbsp;Drag&nbsp;</strong> swatches to view more.</p>
        </div>
    </div>
    
    <!-- This tray will be filled with colors via JS, and the ability to slide this panel will be added in with a lightweight slider script (no dependency used for this) -->
     <div id="js-tray" class="tray">
         <div id="js-tray-slide" class="tray__slide"></div>
     </div>
    </div>
    

    Note that we have a new info section that includes some instructions on how to control the app.

    Finally, let’s add a loading overlay so that our app is clean while everything loads, and we will remove it once the model is loaded.

Add this to the top of our HTML, just below the opening body tag.

    <!-- The loading element overlays all else until the model is loaded, at which point we remove this element from the DOM -->  
    <div class="loading" id="js-loader"><div class="loader"></div></div>
    

Here’s the thing about our loader: for it to show up first, we’re going to add its CSS to the head tag instead of including it in the stylesheet. So simply add this CSS just above the closing head tag.

    
    
    <style>
    .loading {
      position: fixed;
      z-index: 50;
      width: 100%;
      height: 100%;
      top: 0; left: 0;
      background: #f1f1f1;
      display: flex;
      justify-content: center;
      align-items: center;
    }
    
    .loader{
      -webkit-perspective: 120px;
      -moz-perspective: 120px;
      -ms-perspective: 120px;
      perspective: 120px;
      width: 100px;
      height: 100px;
    }
    
    .loader:before{
      content: "";
      position: absolute;
      left: 25px;
      top: 25px;
      width: 50px;
      height: 50px;
      background-color: #ff0000;
      animation: flip 1s infinite;
    }
    
    @keyframes flip {
      0% {
        transform: rotate(0);
      }
    
      50% {
        transform: rotateY(180deg);
      }
    
      100% {
        transform: rotateY(180deg)  rotateX(180deg);
      }
    }
    </style>

    Almost there! Let’s remove it once the model is loaded.

At the top of our JavaScript, let’s reference it:

    const LOADER = document.getElementById('js-loader');
    

Then in our loader function, after scene.add(theModel), include this line:

      // Remove the loader
      LOADER.remove();
    

Now our app loads behind this div, polishing it off.


    And that’s it! Here’s the completed pen for reference.

See the Pen 3D Chair Customizer Tutorial – Part 4 by Kyle Wetton (@kylewetton) on CodePen.

    You can also check out the demo hosted here on Codrops.

    Thank you for sticking with me!

    This is a big tutorial. If you feel I made a mistake somewhere, please let me know in the comments, and thanks again for following with me as we create this absolute unit.

    How to Build a Color Customizer App for a 3D Model with Three.js was written by Kyle Wetton and published on Codrops.

    How to Create a Webcam Audio Visualizer with Three.js

    In this tutorial you’ll learn how to create an interesting looking audio visualizer that also takes input from the web camera. The result is a creative visualizer with a depth distortion effect. Although the final result looks complex, the Three.js code that powers it is straightforward and easy to understand.

    So let’s get started.

    Processing flow

    The processing flow of our script is going to be the following:

    1. Create a vertex from every pixel of the image we get from the web camera input
    2. Use the image data from the web camera and apply the magnitude value of the sound frequency to the Z coordinate of each particle
    3. Draw
4. Repeat steps 2 and 3 (a rough sketch of this loop follows below)
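Put together, that flow could look like the rough sketch below (updateParticles is a hypothetical helper standing in for the update logic; the real code for every step follows later in this tutorial):

const draw = () => {
    // Step 1/2: get the pixels from the web camera
    const imageData = getImageDataFromVideo();

    // Step 2: get the magnitude of the sound frequencies
    const averageFreq = analyser.getAverageFrequency();

    // Step 2: apply both to the Z coordinate of the particles
    updateParticles(imageData, averageFreq);  // hypothetical helper

    // Step 3: draw
    renderer.render(scene, camera);

    // Step 4: repeat
    requestAnimationFrame(draw);
};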

    Now, let’s have a look at how we can get and use the data from the web camera.

    Web camera

    First of all, let’s see how to access the web camera and get an image from it.

    Camera access

For camera access in the browser, we can use navigator.mediaDevices.getUserMedia().

    <video id="video" autoplay style="display: none;"></video>
    video = document.getElementById("video");
    
    const option = {
        video: true,
        audio: false
    };
    
    // Get image from camera
navigator.mediaDevices.getUserMedia(option)
    .then((stream) => {
        video.srcObject = stream;  // Load as source of video tag
        video.addEventListener("loadeddata", () => {
            // ready
        });
    })
    .catch((error) => {
        console.log(error);
    });

    Draw camera image to canvas

    After camera access succeeded, we’ll get the image from the camera and draw it on the canvas.

    const getImageDataFromVideo = () => {
        const w = video.videoWidth;
        const h = video.videoHeight;
        
        canvas.width = w;
        canvas.height = h;
        
        // Reverse image like a mirror
        ctx.translate(w, 0);
        ctx.scale(-1, 1);
    
        // Draw to canvas
    ctx.drawImage(video, 0, 0);
    
        // Get image as array
        return ctx.getImageData(0, 0, w, h);
    };

    About acquired imageData

ctx.getImageData() returns an ImageData object whose data property is an array with the RGBA values in order:

    [0]  // R
    [1]  // G
    [2]  // B
    [3]  // A
    
    [4]  // R
    [5]  // G
    [6]  // B
    [7]  // A...

    And this is how you can access the color information of every pixel.

for (let i = 0, len = imageData.data.length / 4; i < len; i++) {
    const index = i * 4;  // Get the index of "R" so that we access one set of RGBA per iteration (0, 4, 8, 12...)
    const r = imageData.data[index];
    const g = imageData.data[index + 1];
    const b = imageData.data[index + 2];
    const a = imageData.data[index + 3];
}

    Accessing image pixels

    We are going to calculate the X and Y coordinates so that the image can be placed in the center.

    const imageData = getImageDataFromVideo();
    for (let y = 0, height = imageData.height; y < height; y += 1) {
        for (let x = 0, width = imageData.width; x < width; x += 1) {
            const vX = x - imageData.width / 2;  // Shift in X direction since origin is center of screen
            const vY = -y + imageData.height / 2;  // Shift in Y direction in the same way (you need -y)
        }
    }

    Create particles from image pixels

    For creating a particle, we can use THREE.Geometry() and THREE.PointsMaterial().

    Each pixel is added to the geometry as a vertex.

    const geometry = new THREE.Geometry();
geometry.morphAttributes = {};  // workaround: avoids an error when using THREE.Geometry with THREE.Points
    const material = new THREE.PointsMaterial({
        size: 1,
        color: 0xff0000,
        sizeAttenuation: false
    });
    
    const imageData = getImageDataFromVideo();
    for (let y = 0, height = imageData.height; y < height; y += 1) {
        for (let x = 0, width = imageData.width; x < width; x += 1) {
            const vertex = new THREE.Vector3(
                x - imageData.width / 2,
                -y + imageData.height / 2,
                0
            );
            geometry.vertices.push(vertex);
        }
    }
    particles = new THREE.Points(geometry, material);
    scene.add(particles);

    Draw

In the drawing stage, we get the image data from the camera, calculate a grayscale value for each pixel, and use it to update the particles.

    By calling this process on every frame, the screen visual is updated just like a video.

    const imageData = getImageDataFromVideo();
    for (let i = 0, length = particles.geometry.vertices.length; i < length; i++) {
        const particle = particles.geometry.vertices[i];
        let index = i * 4;
    
        // Take an average of RGB and make it a gray value.
        let gray = (imageData.data[index] + imageData.data[index + 1] + imageData.data[index + 2]) / 3;
    
        let threshold = 200;
        if (gray < threshold) {
            // Apply the value to Z coordinate if the value of the target pixel is less than threshold.
            particle.z = gray * 50;
        } else {
    // If the value is greater than the threshold, push the particle far away.
            particle.z = 10000;
        }
    }
    particles.geometry.verticesNeedUpdate = true;

    Audio

    In this section, let’s have a look at how the audio is processed.

    Loading of the audio file and playback

    For audio loading, we can use THREE.AudioLoader().

    const audioListener = new THREE.AudioListener();
    audio = new THREE.Audio(audioListener);
    
    const audioLoader = new THREE.AudioLoader();
    // Load audio file inside asset folder
    audioLoader.load('asset/audio.mp3', (buffer) => {
        audio.setBuffer(buffer);
        audio.setLoop(true);
        audio.play();  // Start playback
    });

For getting the average frequency, analyser.getAverageFrequency() comes in handy.

    By applying this value to the Z coordinate of our particles, the depth effect of the visualizer is created.

    Getting the audio frequency

    And this is how we get the audio frequency:

    // About fftSize https://developer.mozilla.org/en-US/docs/Web/API/AnalyserNode/fftSize
    analyser = new THREE.AudioAnalyser(audio, fftSize);
    
    // analyser.getFrequencyData() returns array of half size of fftSize.
    // ex. if fftSize = 2048, array size will be 1024.
    // data includes magnitude of low ~ high frequency.
    const data = analyser.getFrequencyData();
    
    for (let i = 0, len = data.length; i < len; i++) {
        // access to magnitude of each frequency with data[i].
    }

    Combining web camera input and audio

Finally, let’s see how the drawing process works using both the camera image and the audio data.

    Manipulate the image by reacting to the audio

    By combining the techniques we’ve seen so far, we can now draw an image of the web camera with particles and manipulate the visual using audio data.

    const draw = () => {
        // Audio
        const data = analyser.getFrequencyData();
        let averageFreq = analyser.getAverageFrequency();
    
        // Video
    const imageData = getImageDataFromVideo();
        for (let i = 0, length = particles.geometry.vertices.length; i < length; i++) {
            const particle = particles.geometry.vertices[i];
        
            let index = i * 4;
            let gray = (imageData.data[index] + imageData.data[index + 1] + imageData.data[index + 2]) / 3;
            let threshold = 200;
            if (gray < threshold) {
                // Apply gray value of every pixels of web camera image and average value of frequency to Z coordinate of particle.
                particle.z = gray * (averageFreq / 255);
            } else {
                particle.z = 10000;
            }
        }
        particles.geometry.verticesNeedUpdate = true;  // Necessary to update
    
        renderer.render(scene, camera);
    
        requestAnimationFrame(draw);
    };

And that’s all. Not that complicated, was it? Now you know how to create your own audio visualizer using web camera and audio input.

    We’ve used THREE.Geometry and THREE.PointsMaterial here but you can take it further and use Shaders. Demo 2 shows an example of that.

We hope you enjoyed this tutorial and feel inspired to create something with it.

    How to Create a Webcam Audio Visualizer with Three.js was written by Ryota Takemoto and published on Codrops.

    A Configurator for Creating Custom WebGL Distortion Effects

    In one of our previous tutorials we showed you how to create thumbnail to fullscreen WebGL distortion animations. Today we would like to invite you to build your own personalized effects by using the configurator we’ve created.

    We’ll briefly go over some main concepts so you can make full use of the configurator. If you’d like to understand the main idea behind the work, and why the animations behave the way they do in more depth, we highly recommend you to read the main tutorial Creating Grid-to-Fullscreen Animations with Three.js.

    Basics of the configurator

    The configurator allows you to modify all the details of the effect, making it possible to create unique animations. Even though you don’t have to be a programmer to create your own effect, understanding the options available will give you more insight into what you can achieve with it.

    To see your personalized effect in action, either click on the image or drag the Progress bar. The Duration option sets the time of the whole animation.

    Under Easings you can control the “rate of change” of your animation. For example:

    • Power1.easeOut: Start really fast but end slowly
    • Power1.easeInOut: Start and end slowly, but go really fast in the middle of the animation
    • Bounce: Bounce around like a basketball

    The simplest easings to play around with are Power0-4 with ease-out. If you would like to know the difference between each easing, check out this ease visualizer.

    Note that the configurator automatically saves your progress for later use. Feel free to close the page and come back to it later.

    Timing, Activation and Transformation

Timing, Activation and Transformation are concepts that come from our previous tutorial. Each one of them has its own list of types, and each type has its own set of options for you to explore.

    You can explore them by changing the types, and expanding the respective options tab. When you swap one type for another, your previous set of options is saved in case you want to go back to it.


    Timing

The timing function maps the activation into actual progress for each vertex. Without timing, the activation doesn’t get applied and all the vertices move at the same rate; set the timing type to none to see this for yourself.

    • SameEnd: The vertices have different start times, but they all end at the same time. Or vice versa.
    • sections: Move by sections, wait for the previous section to finish before starting.

    The same activation with a different timing will result in a very different result.
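As a rough illustration of the idea (this is not the configurator’s actual code), a SameEnd-style timing could map a vertex’s activation into its own progress like this, with latestStart deciding how late the last vertex may begin:

// Sketch: map activation (0..1) plus global progress into per-vertex progress.
// All vertices end together at progress = 1, but start at different times.
function sameEndTiming(globalProgress, activation, latestStart) {
    const start = activation * latestStart;  // when this vertex begins
    const duration = 1 - start;              // it still has to end at 1
    return Math.min(Math.max((globalProgress - start) / duration, 0), 1);
}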

    Activation

    The activation determines how the plane is going to move to full screen:

    • side: From left to right.
    • corners: From top-left to bottom-right
    • radial: From the position of the mouse
    • And others.

    For a visual representation of the current activation, toggle debug activation and start the animation to see it in action.

    Transformation

    Transform the plane into a different shape or position over the course of the animation:

    • Flip: Flip the plane on the X axis
• simplex: Move the vertices with noise while transitioning
    • wavy: Make the plane wavy while transitioning
    • And more

Some effects use a seed for their inner workings. You can set the initial seed and determine when it gets randomized.

Note that although these three concepts allow for a large number of possible effects, some options won’t work well together.

    Sharing your effect

    To share the effect you can simply copy and share the URL.

    We would love to see what you come up with. Please share your effect in the comments or tag us on Twitter using @anemolito and @codrops.

    Adding your effect to your site

    Now that you made your custom effect, it is time to add it to your site. Let’s see how to do that, step by step.

    First, download the code and copy some of the required files over:

• Three.js: js/three.min.js
• TweenLite: js/TweenLite.min.js
• ImagesLoaded: js/imagesloaded.pkgd.min.js (for preloading the images)
• The effect’s code: js/GridToFullscreenEffect.js
• TweenLite’s CSSPlugin: js/CSSPlugin.min.js (optional)
• TweenLite’s EasePack: js/EasePack.min.js (optional; if you use the extra easings)

    Include these in your HTML file and make sure to add js/GridToFullscreenEffect.js last.

    Now let’s add the HTML structure for the effect to work. We need two elements:

• div#app: Where our canvas is going to be
• div#itemsWrapper: Where our HTML images are going to be

<body>
    <div id="app"></div>
    <div id="itemsWrapper"></div>
</body>

    Note: You can use any IDs or classes you want as long as you use them when instantiating the effect.

    Inside #itemsWrapper we are going to have the HTML items for our effect.

    Our HTML items inside #itemsWrapper can have almost any structure. The only requirement is that it has two image elements as the first two children of the item.

    The first element is for the small-scale image and the second element is the large-scale image.

    Aside from that, you can have any caption or description you may want to add at the bottom. Take a look at how we did ours in our previous post:

    <div id="app"></div>
    <div id="itemsWrapper">
        <figure class="grid__item">
            <img class="grid__item-img" src="img/1.jpg" alt="An image" />
            <img class="grid__item-img grid__item-img--large" src="img/1_large.jpg" />
            <figcaption class="grid__item-caption">
                <h2 class="grid__item-title">Our Item Title</h2>
                <p class="grid__item-text">
                    Our Item Description
                </p>
            </figcaption>
        </figure>
        ...
    </div>

You may add as many items as you want. If you add enough items to make your container scrollable, make sure to pass your container in the options so the effect can account for its scroll.

    With our HTML items in place, let’s get the effect up and running.

    We’ll instantiate GridToFullscreenEffect, add our custom options, and initialize it.

    <script>
      const transitionEffect = new GridToFullscreenEffect(
            document.getElementById("app"),
            document.getElementById("itemsWrapper"),
          {
              "duration":1.8,
              "timing":{"type":"sameEnd","props":{"latestStart":0.5,"reverse":true}},
              "activation":{"type":"snake","props":{"rows":4}},
              "transformation":{"type":"flipX"},
              "easings":{"toFullscreen":Quint.easeOut,"toGrid":Quint.easeOut}
          }
      );
      transitionEffect.init();
    </script>
    

    Our effect is now mounted and working. But clicking on an item makes the image disappear and we end up with a black square.

The effect doesn’t take care of loading the images. Instead, it requires you to hand them over whenever they load. This might seem a bit inconvenient, but it allows you to load your images in the way that’s most suitable for your application.

You could preload all the images upfront, or you could only load the images that are on screen and load the other ones when needed. It’s up to you how you want to do it.

    We decided to preload all the images using imagesLoaded like this:

    imagesLoaded(document.querySelectorAll("img"), instance => {
        document.body.classList.remove("loading");
    
        // Make Images sets for creating the textures.
        let images = [];
        for (var i = 0, imageSet = {}; i < instance.elements.length; i++) {
            let image = {
                element: instance.elements[i],
                image: instance.images[i].isLoaded ? instance.images[i].img : null
            };
            if (i % 2 === 0) {
                imageSet = {};
                imageSet.small = image;
            }
    
            if (i % 2 === 1) {
                imageSet.large = image;
                images.push(imageSet);
            }
        }
    transitionEffect.createTextures(images);
    });
    

    With that last piece of code, our effect is running and it shows the correct images. If you are having troubles with adding it to your site, let us know!

    Our Creations

While working on this configurator, we managed to create some interesting results of our own. Here is an example; you can attach the parameters to the URL or use the settings:

    Preset 1

    ?duration=1.75&toFull=Cubic.easeInOut&toGrid=Cubic.easeInOut&timing=sections%2Csections%2C4&transformation=simplex&activation=side%2Ctop%2Ctrue%2Cbottom%2Ctrue
    {"timing":{"type":"sections","props":{"sections":4}},"activation":{"type":"side","props":{"top":true,"bottom":true}},"transformation":{"type":"simplex"},"duration":1.75,"easings":{"toFullscreen":"Cubic.easeInOut","toGrid":"Cubic.easeInOut"}}


    Check out all the demos to explore more presets!

    We hope you enjoy the configurator and find it useful for creating some unique animations!

    A Configurator for Creating Custom WebGL Distortion Effects was written by Daniel Velasquez and published on Codrops.

    How to Create a Sticky Image Effect with Three.js

    If you recently browsed Awwwards or FWA you might have stumbled upon Ultranoir’s website. An all-round beautifully crafted website, with some amazing WebGL effects. One of which is a sticky effect for images in their project showcase. This tutorial is going to show how to recreate this special effect.

    The same kind of effect can be seen on the amazing website of MakeReign.

    Understanding the effect

After playing with the effect a couple of times, we can make a very simple observation about the “stick”.

    In either direction of the effect, the center always reaches its destination first, and the corners last. They go at the same speed, but start at different times.

    With this simple observation we can extrapolate some of the things we need to do:

    1. Differentiate between the unsticky part of the image which is going to move normally and the sticky part of the image which is going to start with an offset. In this case, the corners are sticky and the center is unsticky.
    2. Sync the movements
      1. Move the unsticky part to the destination while not moving the sticky part.
      2. When the unsticky part reaches its destination, start moving the sticky part

    Getting started

    For this recreation we’ll be using three.js, and Popmotion’s Springs. But you can implement the same concepts using other libraries.

    We’ll define a plane geometry with its height as the view height, and its width as 1.5 of the view width.

    const camera = new THREE.PerspectiveCamera(45, 1, 0.1, 10000);
    const fovInRadians = (camera.fov * Math.PI) / 180;
    // Camera aspect ratio is 1. The view width and height are equal.
    const viewSize = Math.abs(camera.position.z * Math.tan(fovInRadians / 2) * 2);
const geometry = new THREE.PlaneBufferGeometry(viewSize * 1.5, viewSize, 60, 60);

    Then we’ll define a shader material with a few uniforms we are going to use later on:

• u_progress: Elapsed progress of the complete effect.
• u_direction: Direction to which u_progress is moving.
• u_offset: Largest z displacement

const material = new THREE.ShaderMaterial({
	uniforms: {
		// Progress of the effect
		u_progress: { type: "f", value: 0 },
		// In which direction is the effect going
		u_direction: { type: "f", value: 1 },
		// Largest z displacement (the exact value is a tuneable choice)
		u_offset: { type: "f", value: 8 },
		u_waveIntensity: { type: "f", value: 0 }
	},
    	vertexShader: vertex,
    	fragmentShader: fragment,
    	side: THREE.DoubleSide
    });

    We are going to focus on the vertex shader since the effect mostly happens in there. If you have an interest in learning about the things that happen in the fragment shader, check out the GitHub repo.

    Into the stick

To find which parts are going to be sticky we are going to use a normalized distance from the center. Lower values mean less stickiness, and higher values mean more. Since the corners are the farthest away from the center, they end up being the most sticky.

    Since our effect is happening in both directions, we are going to have it stick both ways. We have two separate variables:

    1. One that will stick to the front. Used when the effect is moving away from the screen.
2. And a second one that will stick to the back. Used when the effect is moving towards the viewer.

    uniform float u_progress;
    uniform float u_direction;
    uniform float u_offset;
    uniform float u_time;
    void main(){
    	vec3 pos = position.xyz;
	float distance = length(uv.xy - 0.5);
	float maxDistance = length(vec2(0.5, 0.5));
	float normalizedDistance = distance / maxDistance;
	// Stick to the front
	float stickOutEffect = normalizedDistance;
	// Stick to the back
	float stickInEffect = -normalizedDistance;
	float stickEffect = mix(stickOutEffect, stickInEffect, u_direction);
    	pos.z += stickEffect * u_offset;
    	gl_Position =
    	projectionMatrix *
    	modelViewMatrix *
    	vec4(pos, 1.0);
    }

    Depending on the direction, we are going to determine which parts are not going to move as much. Until we want them to stop being sticky and move normally.

    The Animation

    For the animation we have a few options to choose from:

1. Tween and timelines: Definitely the easiest option. But we would have to reverse the animation if it ever gets interrupted, which would look awkward.
    2. Springs and vertex-magic: A little bit more convoluted. But springs are made so they feel more fluid when interrupted or have their direction changed.

    In our demo we are going to use Popmotion’s Springs. But tweens are also a valid option and ultranoir’s website actually uses them.

    Note: When the progress is either 0 or 1, the direction will be instant since it doesn’t need to transform.

    function onMouseDown(){
    	...
    	const directionSpring = spring({
    		from: this.progress === 0 ? 0 : this.direction,
    		to: 0,
    		mass: 1,
    		stiffness: 800,
    		damping: 2000
    	});
    	const progressSpring = spring({
    		from: this.progress,
    		to: 1,
    		mass: 5,
    		stiffness: 350,
    		damping: 500
    	});
    	parallel(directionSpring, progressSpring).start((values)=>{
    		// update uniforms
    	})
    	...
    }
    
    function onMouseUp(){
    	...
    	const directionSpring = spring({
    		from: this.progress === 1 ? 1 : this.direction,
    		to: 1,
    		mass: 1,
    		stiffness: 800,
    		damping: 2000
    	});
    	const progressSpring = spring({
    		from: this.progress,
    		to: 0,
    		mass: 4,
    		stiffness: 400,
    		damping: 70,
    		restDelta: 0.0001
    	});
    	parallel(directionSpring, progressSpring).start((values)=>{
    		// update uniforms
    	})
    	...
    }

    And we are going to sequence the movements by moving through a wave using u_progress.

    This wave is going to start at 0, reach 1 in the middle, and come back down to 0 in the end. Making it so the stick grows in the beginning and decreases in the end.

    void main(){
    	...
    	float waveIn = u_progress*(1. / stick);
    	float waveOut = -( u_progress - 1.) * (1./(1.-stick) );
    	float stickProgress = min(waveIn, waveOut);
    	pos.z += stickEffect * u_offset * stickProgress;
    	gl_Position =
    	projectionMatrix *
    	modelViewMatrix *
    	vec4(pos, 1.0);
    }

    Now, the last step is to move the plane back or forward as the stick is growing.

Since the stick growth starts at different values depending on the direction, we’ll also move and start the plane offset depending on the direction.

    void main(){
    	...
    	float offsetIn = clamp(waveIn,0.,1.);
	// Invert waveOut to get the slope moving upwards to the right and move it 1 to the left
    	float offsetOut = clamp(1.-waveOut,0.,1.);
    	float offsetProgress = mix(offsetIn,offsetOut,u_direction);
    	pos.z += stickEffect * u_offset * stickProgress - u_offset * offsetProgress;
    	gl_Position =
    	projectionMatrix *
    	modelViewMatrix *
    	vec4(pos, 1.0);
    }

    And here is the final result:

    Conclusion

Simple effects like this one can make our experience look and feel great. But they only become amazing when complemented with other amazing details and effects. In this tutorial we’ve covered the core of the effect seen on ultranoir’s website, and we hope that it gave you some insight into the workings of such an animation. If you’d like to dive deeper into the complete demo, please feel free to explore the code.

    We hope you enjoyed this tutorial, feel free to share your thoughts and questions in the comments!

    How to Create a Sticky Image Effect with Three.js was written by Daniel Velasquez and published on Codrops.

    Exploding 3D Objects with Three.js

    Today we’d like to share an exploding object experiment with you. The effect is inspired by Kubrick Life Website: 3D Motion. No icosahedrons were hurt during these experiments!

    The following short walk-through assumes that you are familiar with some WebGL and shaders.

    The demo is kindly sponsored by Airtable: Build MVPs faster than ever before. If you would like to sponsor one of our demos, find out more here.

    How it’s done

    For this effect we need to break apart the respective object and calculate all fragments.

The easiest way to produce natural-looking fragments is to look at how nature does them:


    Giraffes have been using those fashionable fragments for millions of years.

This kind of pattern is called a Voronoi diagram (named after the mathematician Georgy Feodosevich Voronoy).

Image by Mike Bostock, done with Voronator

    We are lucky to have algorithms that can create those diagrams programmatically. Not only on surfaces, as giraffes do, but also as spatial ones that break down volumes. We can even partition four dimensional space. But let’s stop at three dimensions for today’s example. I will leave the four dimensional explosions as an exercise for the reader 😉
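To make the core rule concrete, here is a minimal JavaScript sketch (purely illustrative, not the code behind the models): a point belongs to the Voronoi cell of its nearest seed.

// Illustration of the Voronoi rule: each point belongs to the cell of its
// nearest seed. Production tools use much faster algorithms than this scan.
function nearestSeedIndex(point, seeds) {
    let best = 0;
    let bestDist = Infinity;
    seeds.forEach((seed, i) => {
        const dx = point.x - seed.x;
        const dy = point.y - seed.y;
        const dist = dx * dx + dy * dy;  // squared distance is enough to compare
        if (dist < bestDist) {
            bestDist = dist;
            best = i;
        }
    });
    return best;  // index of the cell that contains the point
}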

    We prepared some models (you could use Blender/Cinema4D for that, or your own Voronoi algorithm):

You can see that this heart is no longer whole. This heart is broken. With Voronoi.

    That looks already beautiful by itself, doesn’t it? ❤

    On the other hand, that’s a lot of data to load, so I managed to compress it with the glTF file format using Draco 3D data compression.
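Loading such a model could look roughly like the sketch below, assuming the GLTFLoader and DRACOLoader from the three.js examples (file paths are placeholders, and the exact decoder setup varies between three.js versions):

// Sketch: loading a Draco-compressed glTF model (paths are assumptions)
const dracoLoader = new THREE.DRACOLoader();
dracoLoader.setDecoderPath('js/libs/draco/');  // where the decoder files live

const gltfLoader = new THREE.GLTFLoader();
gltfLoader.setDRACOLoader(dracoLoader);

gltfLoader.load('models/fragments.glb', (gltf) => {
    scene.add(gltf.scene);  // the pre-fractured model with all its fragments
});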

    Shader

I decided to use three.js for the rendering, as it has a lot of useful built-in stuff. It’s great if you want reflecting materials, and it has some utilities for working with fragments and lighting.

    With too many fragments it is not very wise to put all calculations on the CPU, so it’s better to animate that in the shaders, i.e. on the GPU. There’s a really simple vertex shader to tear all those fragments apart:

	position = rotate(position);
	position += direction * progress;
    

    …where direction is the explosion direction and progress is the animation progress.
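How direction is obtained isn’t shown here; one plausible way (an assumption for illustration) is to derive it from each fragment’s centroid, so the pieces fly outward from the middle of the object:

// Sketch: derive an explosion direction per fragment from its centroid
// (assumes each fragment is its own mesh with a ShaderMaterial that
// exposes a "direction" uniform).
fragments.forEach((mesh) => {
    mesh.geometry.computeBoundingBox();
    const centroid = new THREE.Vector3();
    mesh.geometry.boundingBox.getCenter(centroid);
    mesh.material.uniforms.direction.value = centroid.normalize();
});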

    We can then use some three.js materials and CubeTexture to color all the surfaces, and that’s basically it!

During development, I accidentally typed the wrong variable in one of my shaders, and got a pretty interesting result:


    So, don’t be afraid to make mistakes, you never know what you end up with when you try something new!

    I hope you like the demos and the short insight into how it works, and that this story will inspire you to do more cool things, too! Let me know what you think, and what ideas you have!


    Exploding 3D Objects with Three.js was written by Yuriy Artyukh and published on Codrops.

    How to Create Smooth WebGL Transitions on Scroll using Phenomenon

    This tutorial is going to demonstrate how to create a smooth WebGL transition on scroll using Phenomenon (based on three.js).

    Attention: This tutorial assumes you already have some understanding of how three.js works.
If you are not familiar with it, I highly recommend checking out the official documentation and examples.

    Let’s get started

    Interactive elements on websites can enhance the user experience a lot. In the demo, a mix of WebGL and regular UI elements will transition based on the scroll position.

    The following libraries are used for this demo:

    • Three.js: Provides the structure for everything in the WebGL environment.
    • THREE.Phenomenon: Makes it easy to create an instanced mesh that can be transitioned smoothly.
    • updateOnScroll: Observes scroll position changes based on percentage values.

    About Phenomenon

    Phenomenon is a small wrapper around three.js built for high-performance WebGL experiences. It started out as a no-dependency library that I created to learn more about WebGL and was later made compatible with the powerful features of three.js.

    With Phenomenon it’s possible to transition thousands of objects in 3D space in a smooth way. This is done by combining all the separate objects as one. The objects will share the same logic but can move or scale or look different based on unique attributes. To make the experience as smooth as possible it’s important to make it run almost entirely on the GPU. This technique will be explained further below.

    Animate an instance

    To create the animated instances in the demo there are a few steps we need to go through.

    Provide base properties

    Define what Geometry to multiply:

    const geometry = new THREE.IcosahedronGeometry(1, 0);

Define how many objects we want to combine:

    const multiplier = 200;

    Define what Material it should have:

    const material = new THREE.MeshPhongMaterial({
      color: '#448aff',
      emissive: '#448aff',
      specular: '#efefef',
      shininess: 20,
      flatShading: true,
    });

Here we only define the behavior for a single instance. To add to this experience you can add more objects, lights and shadows, or even post-processing. Have a look at the Three.js documentation for more information.

    Build the transition

    The transition of the instance is a little more complex as we will write a vertex shader that will later be combined with our base properties. For this example, we’ll start by moving the objects from point A to point B.

    We can define these points through attributes which are stored directly on the GPU (for every object) and can be accessed from our program. In Phenomenon these attributes are defined with a name so we can use it in our shader and a data function which can provide a unique value for every object.

The code below will randomly define a start and end position between -10 and 10 for every instance.

function r(v) {
  return -v + Math.random() * v * 2;
}

const attributes = [
  {
    name: 'aPositionStart',
    data: () => [r(10), r(10), r(10)],
    size: 3,
  },
  {
    name: 'aPositionEnd',
    data: () => [r(10), r(10), r(10)],
    size: 3,
  },
];

    After all of the objects have a unique start and end position we’ll need a progress value to transition between them. This variable is a uniform that is updated at any time from our main script (for example based on scroll or time).

    const uniforms = {
      progress: {
        value: 0,
      },
    };

    Once this is in place the only thing left for us is writing the vertex shader, which isn’t our familiar Javascript syntax, but instead GLSL. We’ll keep it simple for the example to explain the concept, but if you’re interested you can check out the more complex source.

    In the vertex shader a `gl_Position` should be set that will define where every point in our 3d space is located. Based on these positions we can also move, rotate, ease or scale every object separately.

    Below we define our two position attributes, the progress uniform, and the required main function. In this case, we mix the positions together with the progress which will let the instances move around.

    attribute vec3 aPositionStart;
    attribute vec3 aPositionEnd;
    uniform float progress;
    
void main(){
  vec3 displaced = position + mix(aPositionStart, aPositionEnd, progress);
  gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
}

    The position value is defined in the core of three.js and is based on the selected geometry.

    Transition on scroll

    When we put all the above code together and give each instance a slight offset (so they move after each other) we can start updating our progress based on the scroll position.

    With the updateOnScroll library we can easily observe the scroll position based on percentage values. In the example below a value of 0 to 1 is returned between 0% and 50% of the total scroll height. By setting the progress uniform to that value our interaction will be connected to the transition in WebGL!

    const phenomenon = new THREE.Phenomenon({ ... });
    
    updateOnScroll(0, 0.5, progress => {
     phenomenon.uniforms.progress.value = progress;
    });

In the demo, every instance (and the UI elements in between) has its own scroll handler (based on a single listener).
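That wiring could look something like this sketch (names and ranges are assumptions; the demo’s actual structure may differ):

// Sketch: give every instance its own slice of the page scroll
instances.forEach((phenomenon, i) => {
    const start = i * 0.15;  // stagger where each range begins
    updateOnScroll(start, start + 0.3, progress => {
        phenomenon.uniforms.progress.value = progress;
    });
});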

    Next steps

    With all of the above combined our experience is off to a great start, but there’s a lot more we can do:

    • Add color, lights and custom material
• Add more types of user interaction
    • Transition with easing
    • Transition scale or rotation
    • Transition based on noise

    Have a look at the WebGL Wonderland collection for multiple experiments showcasing the possibilities!


    Conclusion

    Learning about WebGL has been an interesting journey for me and I hope this tutorial has inspired you!

    Feel free to ask questions or share experiments that you’ve created, thank you for reading! 🙂

    How to Create Smooth WebGL Transitions on Scroll using Phenomenon was written by Colin van Eenige and published on Codrops.

    Buildings Wave Animation with Three.js

    This tutorial is going to demonstrate how to build a wave animation effect for a grid of building models using three.js and TweenMax (GSAP).

Attention: This tutorial assumes you already have some understanding of how three.js works.
If you are not familiar with it, I highly recommend checking out the official documentation and examples.

    Inspiration

Source: Baran Kahyaoglu

    Core Concept

The idea is to create a grid of random buildings that reveal based on their distance to the camera. The motion we are trying to get is like a wave passing through, with the farthest elements fading out in the fog.


    We also modify the scale of each building in order to create some visual randomness.

    Getting started

    First we have to create the markup for our demo. It’s a very simple boilerplate since all the code will be running inside a canvas element:

    <html lang="en">
      <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <meta http-equiv="X-UA-Compatible" content="ie=edge">
        <meta name="target" content="all">
        <meta http-equiv="cleartype" content="on">
        <meta name="apple-mobile-web-app-capable" content="yes">
        <meta name="mobile-web-app-capable" content="yes">
        <title>Buildings Wave</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/100/three.min.js"></script>
    <script src="https://threejs.org/examples/js/loaders/OBJLoader.js"></script>
    <script src="https://threejs.org/examples/js/controls/OrbitControls.js"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/gsap/1.20.3/TweenMax.min.js"></script>
      </head>
      <body>
      </body>
    </html>
    

    Basic CSS Styles

    html, body {
      margin: 0;
      padding: 0;
      background-color: #fff;
      color: #fff;
      box-sizing: border-box;
      overflow: hidden;
    }
    
    canvas {
      width: 100%;
      height: 100%;
    }
    

    Initial setup of the 3D world

We create a method called init inside our main class; the methods we define next will be called from it.

    
    init() {
      this.group = new THREE.Object3D();
      this.gridSize = 40;
      this.buildings = [];
      this.fogConfig = {
        color: '#fff',
        near: 1,
        far: 138
      };
    }
    

    Creating our 3D scene

    createScene() {
      this.scene = new THREE.Scene();
    
      this.renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
      this.renderer.setSize(window.innerWidth, window.innerHeight);
    
      this.renderer.shadowMap.enabled = true;
      this.renderer.shadowMap.type = THREE.PCFSoftShadowMap;
    
      document.body.appendChild(this.renderer.domElement);
    
      // this is the line that will give us the nice foggy effect on the scene
      this.scene.fog = new THREE.Fog(this.fogConfig.color, this.fogConfig.near, this.fogConfig.far);
    }
    

    Camera

Let’s add a camera to the scene:

    createCamera() {
      const width = window.innerWidth;
      const height = window.innerHeight;
    
      this.camera = new THREE.PerspectiveCamera(20, width / height, 1, 1000);
    
      // set the distance our camera will have from the grid
      // this will give us a nice frontal view with a little perspective
      this.camera.position.set(3, 16, 111);
    
      this.scene.add(this.camera);
    }
    

    Ground

Now we need to add a shape to serve as the scene’s ground:

    addFloor() {
      const width = 200;
      const height = 200;
      const planeGeometry = new THREE.PlaneGeometry(width, height);
    
      // all materials can be changed according to your taste and needs
      const planeMaterial = new THREE.MeshStandardMaterial({
        color: '#fff',
        metalness: 0,
        emissive: '#000000',
        roughness: 0,
      });
    
      const plane = new THREE.Mesh(planeGeometry, planeMaterial);
    
      planeGeometry.rotateX(- Math.PI / 2);
    
      plane.position.y = 0;
    
      this.scene.add(plane);
    }
    

    Load 3D models

    Before we can build the grid, we have to load our models.


    loadModels(path, onLoadComplete) {
      const loader = new THREE.OBJLoader();
    
      loader.load(path, onLoadComplete);
    }
    
    onLoadModelsComplete(model) {
      // our buildings.obj file contains many models
      // so we have to traverse them to do some initial setup
    
      this.models = [...model.children].map((model) => {
        // since we don't control how the model was exported
        // we need to scale them down because they are very big
    
        // scale model down
        const scale = .01;
        model.scale.set(scale, scale, scale);
    
        // position it under the ground
        model.position.set(0, -14, 0);
    
    // allow them to cast and receive shadows
        model.receiveShadow = true;
        model.castShadow = true;
    
        return model;
      });
    
  // our list of models is now set up
    }
    

    Ambient Light

    addAmbientLight() {
      const ambientLight = new THREE.AmbientLight('#fff');
    
      this.scene.add(ambientLight);
    }
    

    Grid Setup

    Now we are going to place those models in a grid layout.


    createGrid() {
      // define general bounding box of the model
      const boxSize = 3;
    
      // define the min and max values we want to scale
      const max = .009;
      const min = .001;
    
      const meshParams = {
        color: '#fff',
        metalness: .58,
        emissive: '#000000',
        roughness: .18,
      };
    
      // create our material outside the loop so it performs better
      const material = new THREE.MeshPhysicalMaterial(meshParams);
    
      for (let i = 0; i < this.gridSize; i++) {
        for (let j = 0; j < this.gridSize; j++) {
    
      // for every iteration we pull out a random model from our models list and clone it
      const building = this.getRandomBuilding().clone();
    
          building.material = material;
    
          building.scale.y = Math.random() * (max - min + .01);
    
          building.position.x = (i * boxSize);
          building.position.z = (j * boxSize);
    
          // add each model inside a group object so we can move them easily
          this.group.add(building);
    
          // store a reference inside a list so we can reuse it later on
          this.buildings.push(building);
        }
      }
    
      this.scene.add(this.group);
    
      // center our group of models in the scene
      this.group.position.set(-this.gridSize - 10, 1, -this.gridSize - 10);
    }
    

    Spot Light

    We also add a SpotLight to the scene for a nice light effect.


    addSpotLight() {
      const light = { color: '#fff', x: 100, y: 150, z: 100 };
      const spotLight = new THREE.SpotLight(light.color, 1);
    
      spotLight.position.set(light.x, light.y, light.z);
      spotLight.castShadow = true;
    
      this.scene.add(spotLight);
    }
    

    Point Lights

    Let's add some point lights.


    addPointLight(params) {
      // sample params
      // {
      //   color: '#00ff00',
      //   intensity: 4,
      //   position: {
      //     x: 18,
      //     y: 22,
      //     z: -9,
      //   }
      // };
    
      const pointLight = new THREE.PointLight(params.color, params.intensity);
    
      pointLight.position.set(params.position.x, params.position.y, params.position.z);
    
      this.scene.add(pointLight);
    }
    

    Sort Models

    Before we animate the models into the scene, we want to sort them according to their z distance to the camera.

    sortBuildingsByDistance() {
      this.buildings.sort((a, b) => {
        if (a.position.z > b.position.z) {
          return 1;
        }
    
        if (a.position.z < b.position.z) {
          return -1;
        }
    
        return 0;
      }).reverse();
    }
    

    Animate Models

    This is the function where we go through our buildings list and animate them. We define the duration and the delay of the animation based on their position in the list.

    showBuildings() {
      this.sortBuildingsByDistance();
    
      this.buildings.map((building, index) => {
        TweenMax.to(building.position, .3 + (index / 350), { y: 1, ease: Power3.easeOut, delay: index / 350 });
      });
    }
    

Here is how a variation with camera rotation looks:
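That variation isn’t shown in code here, but a simple way to approximate it (the radius and speed values are assumptions) is to orbit the camera around the grid on every frame:

// Sketch: orbit the camera around the grid each frame
let angle = 0;
const radius = 111;  // roughly the original camera distance

function orbitCamera(camera, scene) {
  angle += 0.002;
  camera.position.x = Math.sin(angle) * radius;
  camera.position.z = Math.cos(angle) * radius;
  camera.lookAt(scene.position);
}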


    Buildings Wave Animation with Three.js was written by Ion D. Filho and published on Codrops.

    Interactive Particles with Three.js

    This tutorial is going to demonstrate how to draw a large number of particles with Three.js and an efficient way to make them react to mouse and touch input using shaders and an off-screen texture.

    Attention: You will need an intermediate level of experience with Three.js. We will omit some parts of the code for brevity and assume you already know how to set up a Three.js scene and how to import your shaders — in this demo we are using glslify.

    Instanced Geometry

    The particles are created based on the pixels of an image. Our image’s dimensions are 320×180, or 57,600 pixels.

    However, we don’t need to create one geometry for each particle. We can create only a single one and render it 57,600 times with different parameters. This is called geometry instancing. With Three.js we use InstancedBufferGeometry to define the geometry, BufferAttribute for attributes which remain the same for every instance and InstancedBufferAttribute for attributes which can vary between instances (i.e. colour, size).

    The geometry of our particles is a simple quad, formed by 4 vertices and 2 triangles.


    
    const geometry = new THREE.InstancedBufferGeometry();
    
    // positions
    const positions = new THREE.BufferAttribute(new Float32Array(4 * 3), 3);
    positions.setXYZ(0, -0.5, 0.5, 0.0);
    positions.setXYZ(1, 0.5, 0.5, 0.0);
    positions.setXYZ(2, -0.5, -0.5, 0.0);
    positions.setXYZ(3, 0.5, -0.5, 0.0);
    geometry.addAttribute('position', positions);
    
    // uvs
    const uvs = new THREE.BufferAttribute(new Float32Array(4 * 2), 2);
    uvs.setXYZ(0, 0.0, 0.0);
    uvs.setXYZ(1, 1.0, 0.0);
    uvs.setXYZ(2, 0.0, 1.0);
    uvs.setXYZ(3, 1.0, 1.0);
    geometry.addAttribute('uv', uvs);
    
    // index
    geometry.setIndex(new THREE.BufferAttribute(new Uint16Array([ 0, 2, 1, 2, 3, 1 ]), 1));
    

    Next, we loop through the pixels of the image and assign our instanced attributes. Since the word position is already taken, we use the word offset to store the position of each instance. The offset will be the x,y of each pixel in the image. We also want to store the particle index and a random angle which will be used later for animation.

    
    const indices = new Uint16Array(this.numPoints);
    const offsets = new Float32Array(this.numPoints * 3);
    const angles = new Float32Array(this.numPoints);
    
    for (let i = 0; i < this.numPoints; i++) {
    	offsets[i * 3 + 0] = i % this.width;
    	offsets[i * 3 + 1] = Math.floor(i / this.width);
    
    	indices[i] = i;
    
    	angles[i] = Math.random() * Math.PI;
    }
    
    geometry.addAttribute('pindex', new THREE.InstancedBufferAttribute(indices, 1, false));
    geometry.addAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3, false));
    geometry.addAttribute('angle', new THREE.InstancedBufferAttribute(angles, 1, false));
    

    Particle Material

    The material is a RawShaderMaterial with custom shaders particle.vert and particle.frag.

    The uniforms are described as follows:

    • uTime: elapsed time, updated every frame
    • uRandom: factor of randomness used to displace the particles in x,y
    • uDepth: maximum oscillation of the particles in z
    • uSize: base size of the particles
    • uTexture: image texture
    • uTextureSize: dimensions of the texture
    • uTouch: touch texture
    
    const uniforms = {
    	uTime: { value: 0 },
    	uRandom: { value: 1.0 },
    	uDepth: { value: 2.0 },
    	uSize: { value: 0.0 },
    	uTextureSize: { value: new THREE.Vector2(this.width, this.height) },
    	uTexture: { value: this.texture },
    	uTouch: { value: null }
    };
    
    const material = new THREE.RawShaderMaterial({
    	uniforms,
    	vertexShader: glslify(require('../../../shaders/particle.vert')),
    	fragmentShader: glslify(require('../../../shaders/particle.frag')),
    	depthTest: false,
    	transparent: true
    });
    

    A simple vertex shader would output the position of the particles according to their offset attribute directly. To make things more interesting, we displace the particles using random and noise. And the same goes for particles’ sizes.

    
    // particle.vert
    
    void main() {
    	// displacement
    	vec3 displaced = offset;
    	// randomise
    	displaced.xy += vec2(random(pindex) - 0.5, random(offset.x + pindex) - 0.5) * uRandom;
    	float rndz = (random(pindex) + snoise_1_2(vec2(pindex * 0.1, uTime * 0.1)));
    	displaced.z += rndz * (random(pindex) * 2.0 * uDepth);
    
    	// particle size
    	float psize = (snoise_1_2(vec2(uTime, pindex) * 0.5) + 2.0);
    	psize *= max(grey, 0.2);
    	psize *= uSize;
    
    	// (...)
    }
    

    The fragment shader samples the RGB colour from the original image and converts it to greyscale using the luminosity method (0.21 R + 0.72 G + 0.07 B).

    The alpha channel is determined by the linear distance to the centre of the UV, which essentially creates a circle. The border of the circle can be blurred out using smoothstep.

    
    // particle.frag
    
    void main() {
    	// pixel color
    	vec4 colA = texture2D(uTexture, puv);
    
    	// greyscale
	float grey = colA.r * 0.21 + colA.g * 0.72 + colA.b * 0.07;
    	vec4 colB = vec4(grey, grey, grey, 1.0);
    
    	// circle
    	float border = 0.3;
    	float radius = 0.5;
    	float dist = radius - distance(uv, vec2(0.5));
    	float t = smoothstep(0.0, border, dist);
    
    	// final color
    	color = colB;
    	color.a = t;
    
    	// (...)
    }
    

    Optimisation

    In our demo we set the size of the particles according to their brightness, which means dark particles are almost invisible. This makes room for some optimisation. When looping through the pixels of the image, we can discard the ones which are too dark. This reduces the number of particles and improves performance.


    The optimisation starts before we create our InstancedBufferGeometry. We create a temporary canvas, draw the image onto it and call getImageData() to retrieve an array of colours [R, G, B, A, R, G, B … ]. We then define a threshold — hex #22 or decimal 34 — and test it against the red channel. The red channel is an arbitrary choice, we could also use green or blue, or even an average of all three channels, but the red channel is simple to use.

    
    // discard pixels darker than threshold #22
    if (discard) {
    	numVisible = 0;
    	threshold = 34;
    
    	const img = this.texture.image;
    	const canvas = document.createElement('canvas');
    	const ctx = canvas.getContext('2d');
    
    	canvas.width = this.width;
    	canvas.height = this.height;
    	ctx.scale(1, -1); // flip y
    	ctx.drawImage(img, 0, 0, this.width, this.height * -1);
    
    	const imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
    	originalColors = Float32Array.from(imgData.data);
    
    	for (let i = 0; i < this.numPoints; i++) {
    		if (originalColors[i * 4 + 0] > threshold) numVisible++;
    	}
    }
    

    We also need to update the loop where we define offset, angle and pindex to take the threshold into account.

    
    for (let i = 0, j = 0; i < this.numPoints; i++) {
    	if (originalColors[i * 4 + 0] <= threshold) continue;
    
    	offsets[j * 3 + 0] = i % this.width;
    	offsets[j * 3 + 1] = Math.floor(i / this.width);
    
    	indices[j] = i;
    
    	angles[j] = Math.random() * Math.PI;
    
    	j++;
    }
    

    Interactivity

    Considerations

    There are many different ways of introducing interaction with the particles. For example, we could give each particle a velocity attribute and update it on every frame based on its proximity to the cursor. This is a classic technique and it works very well, but it might be a bit too heavy if we have to loop through tens of thousands of particles.

    A more efficient way would be to do it in the shader. We could pass the cursor’s position as a uniform and displace the particles based on their distance from it. While this would perform a lot faster, the result could be quite dry. The particles would go to a given position, but they wouldn’t ease in or out of it.
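For illustration only, here is a minimal sketch of that uniform-based approach (this is not the demo’s code): uCursor is a hypothetical uniform holding the cursor position in the same space as the offset attribute, and the 100.0 radius and 20.0 strength are arbitrary.

// sketch: displace particles away from a cursor uniform
uniform vec2 uCursor;

void main() {
	vec3 displaced = offset;
	// strength fades from 1.0 at the cursor to 0.0 at the assumed radius
	float dist = distance(offset.xy, uCursor);
	float strength = 1.0 - smoothstep(0.0, 100.0, dist);
	displaced.xy += (offset.xy - uCursor) / max(dist, 1.0) * strength * 20.0;
	// (...)
}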

    Chosen Approach

    The technique we chose in our demo was to draw the cursor position onto a texture. The advantage is that we can keep a history of cursor positions and create a trail. We can also apply an easing function to the radius of that trail, making it grow and shrink smoothly. Everything would happen in the shader, running in parallel for all the particles.


In order to get the cursor’s position we use a Raycaster and a simple PlaneBufferGeometry of the same size as our main geometry. The plane is invisible, but interactive.

    Interactivity in Three.js is a topic on its own. Please see this example for reference.

    When there is an intersection between the cursor and the plane, we can use the UV coordinates in the intersection data to retrieve the cursor’s position. The positions are then stored in an array (trail) and drawn onto an off-screen canvas. The canvas is passed as a texture to the shader via the uniform uTouch.
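A minimal sketch of that flow, with illustrative names and values (the demo wraps this logic in its own touch-texture class, and material is assumed to be the particle material from above):

// off-screen canvas used as the touch texture
const canvas = document.createElement('canvas');
canvas.width = canvas.height = 64;
const ctx = canvas.getContext('2d');
const touchTexture = new THREE.Texture(canvas);
const trail = []; // recent cursor positions in UV space

// called when the raycaster hits the invisible plane
function onTouch(uv) {
	trail.push({ x: uv.x, y: uv.y, age: 0 });
}

// called every frame before rendering
function drawTouchTexture() {
	ctx.fillStyle = 'black';
	ctx.fillRect(0, 0, canvas.width, canvas.height);

	for (let i = trail.length - 1; i >= 0; i--) {
		const point = trail[i];
		point.age++;
		if (point.age > 60) {
			trail.splice(i, 1);
			continue;
		}
		// ease the radius and opacity out over the point's lifetime
		const life = 1 - point.age / 60;
		ctx.beginPath();
		ctx.fillStyle = 'rgba(255, 255, 255, ' + life + ')';
		ctx.arc(point.x * canvas.width, (1 - point.y) * canvas.height, 6 * life, 0, Math.PI * 2);
		ctx.fill();
	}

	touchTexture.needsUpdate = true;
	material.uniforms.uTouch.value = touchTexture;
}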

    In the vertex shader the particles are displaced based on the brightness of the pixels in the touch texture.

    
    // particle.vert
    
    void main() {
    	// (...)
    
    	// touch
    	float t = texture2D(uTouch, puv).r;
    	displaced.z += t * 20.0 * rndz;
    	displaced.x += cos(angle) * t * 20.0 * rndz;
    	displaced.y += sin(angle) * t * 20.0 * rndz;
    
    	// (...)
    }
    

    Conclusion

Hope you enjoyed the tutorial! If you have any questions, don’t hesitate to get in touch.


    Interactive Particles with Three.js was written by Bruno Imbrizi and published on Codrops.

    Animated Mesh Lines

    Two years ago, I started playing with lines in WebGL using THREE.MeshLine, a library made by Jaume Sanchez Elias for Three.js.

This library tackles the problem that you cannot control the width of your lines with classic lines in Three.js. A MeshLine builds a billboarded strip of triangles to create a custom geometry, instead of using the native WebGL GL_LINES primitive, which does not support a width parameter.

These lines, shaped like ribbons, have a really interesting graphic style. They also have fewer vertices than the TubeGeometry usually used to create thick lines.

    Animate a MeshLine

The only thing missing is the ability to animate lines without having to rebuild the geometry on every frame.
Based on what had already been started, and on how SVG line animation works, I added three new parameters to MeshLineMaterial to visualize an animated dashed line directly through the shader:

• DashRatio: the ratio between the visible and invisible parts of each dash (~0: more visible, ~1: less visible)
• DashArray: the length of a dash and its space (0 == no dash)
• DashOffset: the location where the first dash begins

As with an SVG path, these parameters allow you to animate the entire traced line if they are handled correctly.

    Here is a complete example of how to create and animate a MeshLine:

    
      // Build an array of points
      const segmentLength = 1;
      const nbrOfPoints = 10;
      const points = [];
      for (let i = 0; i < nbrOfPoints; i++) {
        points.push(i * segmentLength, 0, 0);
      }
    
      // Build the geometry
      const line = new MeshLine();
      line.setGeometry(points);
      const geometry = line.geometry;
    
      // Build the material with good parameters to animate it.
      const material = new MeshLineMaterial({
        lineWidth: 0.1,
        color: new Color('#ff0000'),
    dashArray: 2,     // always has to be double the length of the line
        dashOffset: 0,    // start the dash at zero
        dashRatio: 0.75,  // visible length range min: 0.99, max: 0.5
      });
    
      // Build the Mesh
      const lineMesh = new Mesh(geometry, material);
      lineMesh.position.x = -4.5;
    
  // ! Assuming you have your own WebGL engine to add meshes to the scene and update them.
      webgl.add(lineMesh);
    
      // ! Call each frame
      function update() {
    // Stop animating once the dash has moved past the end of the line.
        if (lineMesh.material.uniforms.dashOffset.value < -2) return;
    
        // Decrement the dashOffset value to animate the path with the dash.
        lineMesh.material.uniforms.dashOffset.value -= 0.01;
      }
    


    Create your own line style

    Now that you know how to animate lines, I will show you some tips on how to customize the shape of your lines.

    Use SplineCurve or CatmullRomCurve3

These classes smooth a roughly positioned array of points. They are perfect for building curved, fluid lines while keeping control over them (length, orientation, turbulence…).

    For instance, let’s add some turbulences to our previous array of points:

    
      const segmentLength = 1;
      const nbrOfPoints = 10;
      const points = [];
      const turbulence = 0.5;
      for (let i = 0; i < nbrOfPoints; i++) {
    // ! We have to wrap the points in a THREE.Vector3 this time
        points.push(new Vector3(
          i * segmentLength,
          (Math.random() * (turbulence * 2)) - turbulence,
          (Math.random() * (turbulence * 2)) - turbulence,
        ));
      }
    

Then, use one of these classes to smooth your array of points before creating the geometry:

    
      // 2D spline
      // const linePoints = new Geometry().setFromPoints(new SplineCurve(points).getPoints(50));
    
      // 3D spline
      const linePoints = new Geometry().setFromPoints(new CatmullRomCurve3(points).getPoints(50));
    
      const line = new MeshLine();
      line.setGeometry(linePoints);
      const geometry = line.geometry;
    

And just like that, you have created your smooth, curved line!


Note that SplineCurve only smooths in 2D (the x and y axes), while CatmullRomCurve3 takes all three axes into account.

I recommend using SplineCurve anyway. It is faster to compute and is often enough to create the desired curved effect.

For instance, my Confetti and Energy demos are made only with the SplineCurve method.


    Use Raycasting

Another technique, taken from a THREE.MeshLine example, is to use a Raycaster to scan a Mesh already present in the scene.

This way, you can create lines that follow the shape of an object:

    
      const radius = 4;
      const yMax = -4;
      const points = [];
      const origin = new Vector3();
      const direction = new Vector3();
      const raycaster = new Raycaster();
    
      let y = 0;
      let angle = 0;
      // Start the scan
  while (y > yMax) {
        // Update the orientation and the position of the raycaster
        y -= 0.1;
        angle += 0.2;
        origin.set(radius * Math.cos(angle), y, radius * Math.sin(angle));
        direction.set(-origin.x, 0, -origin.z);
        direction.normalize();
        raycaster.set(origin, direction);
    
    // Save the raycasted coordinates.
    // ! Assuming the ray crosses the object in the scene each time
        const intersect = raycaster.intersectObject(objectToRaycast, true);
        if (intersect.length) {
          points.push(
            intersect[0].point.x,
            intersect[0].point.y,
            intersect[0].point.z,
          );
        }
      }
    

This method is employed in the Boreal Sky demo. Here I used part of a sphere as the geometry to create the objectToRaycast mesh.


Now you have enough tools to play with and animate MeshLines. Many of these methods are inspired by the library’s examples. Feel free to explore them and share your own experiments and methods for creating your own lines!

    References and Credits

    Animated Mesh Lines was written by Jérémie Boulay and published on Codrops.

    Interactive Animated Landscape

Today we are going to explore a playful animated landscape with a psychedelic look. The idea is to show how experimenting with art and design through a generative process can lead to interesting interactive visuals, which can be used in a variety of mediums like web, print, illustration, VJing, installations, games and many others. We made three variants of the landscape to show you how small changes in parameters can change a lot in the visuals.

    The demos are made with three.js and the animations and colors are controlled in a custom GLSL shader. For the letter animations we are using TweenMax.

    The cool thing about doing this with WebGL is that it’s widely supported and with GLSL shaders we can animate thousands, even millions of vertices at 60 FPS on the major desktop and mobile web browsers.

    If you’re not familiar with three.js and GLSL shaders, you can start by creating a scene and reading this introduction to Shaders.

Let’s go through the main build-up of the demo.

    Breaking down the demo

    1. Creating terrain with a plane

Let’s make a basic three.js scene, place a plane with a good amount of vertices, rotate it 90 degrees on the x-axis, and lift the camera a little bit:
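A minimal sketch of this setup, assuming an existing scene and camera (the sizes, segment counts and camera values are illustrative):

// a subdivided plane laid flat, seen from slightly above
const geometry = new THREE.PlaneBufferGeometry(20, 20, 256, 256);
const material = new THREE.ShaderMaterial(); // shaders are bound in the next step
const terrain = new THREE.Mesh(geometry, material);
terrain.rotation.x = -Math.PI / 2; // rotate 90 degrees on the x-axis
scene.add(terrain);

camera.position.y = 1.5; // lift the camera a little bit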


Create custom vertex and fragment shaders and bind them to a ShaderMaterial. The objective is to displace the vertices upwards in the vertex shader with Perlin noise, multiplied by a height value:

    // pseudo-code for noise implementation
    
    vec3 coord = vec3(uv, 1.0)*10.0;
float noise = 1.0 + pnoise( vec3( coord.x, coord.y + time, coord.z ));
    
    float height = h * noise;
    
    // we apply height to z because the plane is rotated on x-axis
    vec4 pos = vec4( position.x, position.y, height, 1.0 );
    
    // output the final position
    gl_Position = projectionMatrix * modelViewMatrix * pos;
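On the JavaScript side, binding the shaders and their uniforms to a ShaderMaterial could look like this. This is a sketch: the uniform names match the pseudo-code above, and reading the shader sources from script tags is just one option.

const uniforms = {
	time: { value: 0 },   // drives the noise animation
	h: { value: 2.0 },    // maximum height of the terrain
	c: { value: 0.0 },    // center of the road
	w: { value: 1.0 },    // width of the road
};

const material = new THREE.ShaderMaterial({
	uniforms,
	vertexShader: document.getElementById('vertexShader').textContent,
	fragmentShader: document.getElementById('fragmentShader').textContent,
});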


2. Creating a road with some math

Now we’ll use a little bit of math. We’ll implement the formula below, where x is the vertex x-coordinate, h is the maximum height of the terrain, c is the center of the road and w is the width of the road:

height(x) = h * |cos(x + c)|^w

Playing with those variables, we can get different results: h scales the overall height of the terrain, c shifts the center of the road, and w changes its width.


Now, applied in the vertex shader code and multiplied by the previously calculated noise, it looks as follows:

    // pseudo-code for formula implementation
    float height = h * pow( abs( cos( uv.x + c ) ), w ) * noise;
    
    // we apply height to z because the plane is rotated on x-axis
    vec4 pos = vec4( position.x, position.y, height, 1.0 );
    
    // output the final position
    gl_Position = projectionMatrix * modelViewMatrix * pos;


To make a curved road, we use uv.y as an angle and take its sine to oscillate the center along the y-axis (the plane is rotated on the x-axis, remember?).
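In the vertex shader this amounts to a small change; a sketch, where the 2.0 frequency and 0.5 amplitude are arbitrary:

// pseudo-code: oscillate the road's center, using uv.y as the angle
float curvedC = c + sin( uv.y * 2.0 + time ) * 0.5;
float height = h * pow( abs( cos( uv.x + curvedC ) ), w ) * noise;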


    3. Adding color layers

Let’s colorize the terrain with a nice trick. First, create a color palette image: a small texture with the colors laid out as horizontal bands.


Then we’ll use it as a lookup texture in the fragment shader, using the height calculated in the vertex shader as the texture’s uv.y coordinate:

// pseudo-code for getting the color
// remap the height into the 0..1 range to use it as the v coordinate
vec2 coord = vec2( 0.0, height / maxHeight );
vec4 color = texture2D( palleteTexture, coord );

gl_FragColor = color;


    4. Having fun adding interactivity

Now that we’ve done the heaviest part, it’s easy to use the mouse, touch or whatever input you want to control the formula’s variables and get interesting forms of interactivity:

    // JS pseudo-code in the render loop for uniforms manipulation with mouse
    terrain.material.uniforms.c.value = (mouseX / window.innerWidth - 0.5) * 0.1;
    terrain.material.uniforms.w.value = (mouseY / window.innerHeight - 0.5) * 4;
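The mouseX and mouseY values above can come from a simple pointer listener, for example:

// track the pointer; the render loop reads these to update the uniforms
let mouseX = 0;
let mouseY = 0;
window.addEventListener('mousemove', (event) => {
	mouseX = event.clientX;
	mouseY = event.clientY;
});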

    5. Final touches

Let’s adjust the camera position, add a nice color palette, fog and a sky background, and we are done!
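For the fog and the sky background, Three.js has built-in helpers; a sketch, with illustrative colors and distances:

// fade the far edge of the terrain into the background color
scene.fog = new THREE.Fog(0x110033, 5, 20);
renderer.setClearColor(0x110033); // a solid sky color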


    We hope you enjoy this walk-through and find the experiment inspirational!

    References and Credits

    Interactive Animated Landscape was written by André Mattos and published on Codrops.