Recreating a Dave Whyte Animation in React-Three-Fiber

There’s a slew of artists and creative coders on social media who regularly post satisfying, hypnotic looping animations. One example is Dave Whyte, also known as @beesandbombs on Twitter. In this tutorial I’ll explain how to recreate one of his more popular recent animations, which I’ve dubbed “Breathing Dots”. Here’s the original animation:

The Tools

Dave says he uses Processing for his animations, but I’ll be using react-three-fiber (R3F) which is a React renderer for Three.js. Why am I using a 3D library for a 2D animation? Well, R3F provides a powerful declarative syntax for WebGL graphics and grants you access to useful Three.js features such as post-processing effects. It lets you do a lot with few lines of code, all while being highly modular and re-usable. You can use whatever tool you like, but I find the combined ecosystems of React and Three.js make R3F a robust tool for general purpose graphics.

I use an adapted Codesandbox template running Create React App to bootstrap my R3F projects; you can fork it to get a project running in a few seconds. I will assume some familiarity with React, Three.js and R3F for the rest of the tutorial. If you’re totally new, you might want to start here.

Step 1: Observations

First things first, we need to take a close look at what’s going on in the source material. When I look at the GIF, I see a field of little white dots. They’re spread out evenly, but the pattern looks more random than a grid. The dots are moving in a rhythmic pulse, getting pulled towards the center and then flung outwards in a gentle shockwave. The shockwave has the shape of an octagon. The dots aren’t in constant motion, rather they seem to pause at each end of the cycle. The dots in motion look really smooth, almost like they’re melting. We need to zoom in to really understand what’s going on here. Here’s a close up of the corners during the contraction phase:

Interesting! The moving dots are split into red, green, and blue parts. The red part points in the direction of motion, while the blue part points away from the motion. The faster the dot is moving, the farther these three parts are spread out. As the colored parts overlap, they combine into a solid white color. Now that we understand exactly what we want to produce, let’s start coding.

Step 2: Making Some Dots

If you’re using the Codesandbox template I provided, you can strip down the main App.js to just an empty scene with a black background:

import React from 'react'
import { Canvas } from 'react-three-fiber'

export default function App() {
  return (
    <Canvas>
      <color attach="background" args={['black']} />
    </Canvas>
  )
}

Our First Dot

Let’s create a component for our dots, starting with just a single white circle mesh composed of a CircleBufferGeometry and a MeshBasicMaterial:

function Dots() {
  return (
    <mesh>
      <circleBufferGeometry />
      <meshBasicMaterial />
    </mesh>
  )
}

Add the <Dots /> component inside the canvas, and you should see a white octagon appear onscreen. Our first dot! Since it’ll be tiny, it doesn’t matter that it’s not very round.

But wait a second… Using a color picker, you’ll notice that it’s not pure white! This is because R3F sets up color management by default which is great if you’re working with glTF models, but not if you need raw colors. We can disable the default behavior by setting colorManagement={false} on our canvas.
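
On the canvas we set up earlier, that’s a one-prop change:

<Canvas colorManagement={false}>
  <color attach="background" args={['black']} />
  <Dots />
</Canvas>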

More Dots

We need approximately 10,000 dots to fully fill the screen throughout the animation. A naive approach to creating a field of dots would be to simply render our dot mesh a few thousand times. However, you’ll quickly notice that this destroys performance. Rendering 10,000 of these chunky dots brings my gaming rig down to a measly 5 FPS. The problem is that each dot mesh incurs its own draw call, which means the CPU needs to send 10,000 (largely redundant) instructions to the GPU every frame.

The solution is to use instanced rendering, which means the CPU can tell the GPU about the dot shape, material, and the locations of all 10,000 instances in a single draw call. Three.js offers a helpful InstancedMesh class to facilitate instanced rendering of a mesh. According to the docs it accepts a geometry, material, and integer count as constructor arguments. Let’s convert our regular old mesh into an <instancedMesh>, starting with just one instance. We can leave the geometry and material slots as null since the child elements will fill them, so we only need to specify the count.

function Dots() {
  return (
    <instancedMesh args={[null, null, 1]}>
      <circleBufferGeometry />
      <meshBasicMaterial />
    </instancedMesh>
  )
}

Hey, where did it go? The dot disappeared because of how InstancedMesh is initialized. Internally, the .instanceMatrix stores the transformation matrix of each instance, but it’s initialized with all zeros which squashes our mesh into the abyss. Instead, we should start with an identity matrix to get a neutral transformation. Let’s get a reference to our InstancedMesh and apply the identity matrix to the first instance inside of useLayoutEffect so that it’s properly positioned before anything is painted to the screen.

function Dots() {
  const ref = useRef()
  useLayoutEffect(() => {
    // THREE.Matrix4 defaults to an identity matrix
    const transform = new THREE.Matrix4()

    // Apply the transform to the instance at index 0
    ref.current.setMatrixAt(0, transform)
  }, [])
  return (
    <instancedMesh ref={ref} args={[null, null, 1]}>
      <circleBufferGeometry />
      <meshBasicMaterial />
    </instancedMesh>
  )
}

Great, now we have our dot back. Time to crank it up to 10,000. We’ll increase the instance count and set the transform of each instance along a centered 100 x 100 grid.

for (let i = 0; i < 10000; ++i) {
  const x = (i % 100) - 50
  const y = Math.floor(i / 100) - 50
  transform.setPosition(x, y, 0)
  ref.current.setMatrixAt(i, transform)
}

We should also decrease the circle radius to 0.15 to better fit the grid proportions. We don’t want any perspective distortion on our grid, so we should set the orthographic prop on the canvas. Lastly, we’ll lower the default camera’s zoom to 20 to fit more dots on screen.
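
All three tweaks are just props on the elements we already have (a sketch):

<Canvas orthographic camera={{ zoom: 20 }} colorManagement={false}>
  <color attach="background" args={['black']} />
  <Dots />
</Canvas>

// ...and inside Dots, the smaller radius:
<circleBufferGeometry args={[0.15]} />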

The result should look like this:

Although you can’t see the difference yet, it’s now running at a silky smooth 60 FPS 😀

Adding Some Noise

There’s a variety of ways to distribute points on a surface beyond a simple grid. “Poisson disc sampling” and “centroidal Voronoi tessellation” are some mathematical approaches that generate slightly more natural distributions. That’s a little too involved for this demo, so let’s just approximate a natural distribution by turning our square grid into hexagons and adding in small random offsets to each point. The positioning logic now looks like this:

// Place in a grid
let x = (i % 100) - 50
let y = Math.floor(i / 100) - 50

// Offset every other column (hexagonal pattern)
y += (i % 2) * 0.5

// Add some noise
x += Math.random() * 0.3
y += Math.random() * 0.3

Step 3: Creating Motion

Sine waves are the heart of cyclical motion. By feeding the clock time into a sine function, we get a value that oscillates between -1 and 1. To get the effect of expansion and contraction, we want to oscillate each point’s distance from the center. Another way of thinking about this is that we want to dynamically scale each point’s initial position vector. Since we should avoid unnecessary computations in the render loop, let’s cache our initial position vectors in useMemo for re-use. We’re also going to need that Matrix4 in the loop, so let’s cache that as well. Finally, we don’t want to overwrite our initial dot positions, so let’s cache a spare Vector3 for use during calculations.

const { vec, transform, positions } = useMemo(() => {
  const vec = new THREE.Vector3()
  const transform = new THREE.Matrix4()
  const positions = [...Array(10000)].map((_, i) => {
    const position = new THREE.Vector3()
    position.x = (i % 100) - 50
    position.y = Math.floor(i / 100) - 50
    position.y += (i % 2) * 0.5
    position.x += Math.random() * 0.3
    position.y += Math.random() * 0.3
    return position
  })
  return { vec, transform, positions }
}, [])

For simplicity let’s scrap the useLayoutEffect call and configure all the matrix updates in a useFrame loop. Remember that in R3F, the useFrame callback receives the same arguments as useThree including the Three.js clock, so we can access a dynamic time through clock.elapsedTime. We’ll add some simple motion by copying each instance position into our scratch vector, scaling it by some factor of the sine wave, and then copying that to the matrix. As mentioned in the docs, we need to set .needsUpdate to true on the instanced mesh’s .instanceMatrix in the loop so that Three.js knows to keep updating the positions.

useFrame(({ clock }) => {
  const scale = 1 + Math.sin(clock.elapsedTime) * 0.3
  for (let i = 0; i < 10000; ++i) {
    vec.copy(positions[i]).multiplyScalar(scale)
    transform.setPosition(vec)
    ref.current.setMatrixAt(i, transform)
  }
  ref.current.instanceMatrix.needsUpdate = true
})

Rounded square waves

The raw sine wave follows a perfectly round, circular motion. However, as we observed earlier:

The dots aren’t in constant motion, rather they seem to pause at each end of the cycle.

This calls for a different, more boxy looking wave with longer plateaus and shorter transitions. A search through the digital signal processing StackExchange produces this post with the equation for a rounded square wave. I’ve visualized the equation here and animated the delta parameter; watch how it goes from smooth to boxy:
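
For reference, the wave can be written as follows, with amplitude a, frequency f, and roundness parameter δ. As δ approaches 0 the arctangent saturates and the wave becomes square; larger values keep it close to a plain sine:

y(t) = (2a / π) · arctan(sin(2π f t) / δ)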

The equation translates to this JavaScript function:

const roundedSquareWave = (t, delta, a, f) => {
  return ((2 * a) / Math.PI) * Math.atan(Math.sin(2 * Math.PI * t * f) / delta)
}

Swapping out our Math.sin call for the new wave function with a delta of 0.1 makes the motion more snappy, with time to rest in between:

Ripples

How do we use this wave to make the dots move at different speeds and create ripples? If we change the input to the wave based on the dot’s distance from the center, then each ring of dots will be at a different phase causing the surface to stretch and squeeze like an actual wave. We’ll use the initial distances on every frame, so let’s cache and return the array of distances in our useMemo callback:

const distances = positions.map(pos => pos.length())

Then, in the useFrame callback we subtract a factor of the distance from the t (time) variable that gets plugged into the wave. That looks like this:
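
(A sketch; the divisor and wave parameters here are illustrative, tune them to match the GIF.)

useFrame(({ clock }) => {
  for (let i = 0; i < 10000; ++i) {
    // Subtract a factor of the distance so each ring lags behind the previous one
    const t = clock.elapsedTime - distances[i] / 25
    const wave = roundedSquareWave(t, 0.1, 1, 1 / 3.8)
    vec.copy(positions[i]).multiplyScalar(wave * 0.3 + 1)
    transform.setPosition(vec)
    ref.current.setMatrixAt(i, transform)
  }
  ref.current.instanceMatrix.needsUpdate = true
})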

That already looks pretty cool!

The Octagon

Our ripple is perfectly circular, how can we make it look more octagonal like the original? One way to approximate this effect is by combining a sine or cosine wave with our distance function at an appropriate frequency (8 times per revolution). Watch how changing the strength of this wave changes the shape of the region:

A strength of 0.5 is a pretty good balance between looking like an octagon and not looking too wavy. That change can happen in our initial distance calculations:

const right = new THREE.Vector3(1, 0, 0)
const distances = positions.map((pos) => (
  pos.length() + Math.cos(pos.angleTo(right) * 8) * 0.5
))

It’ll take some additional tweaks to really see the effect of this. There are a few places where we can focus our adjustments:

  • Influence of point distance on wave phase
  • Influence of point distance on wave roundness
  • Frequency of the wave
  • Amplitude of the wave

It takes a bit of educated trial and error to make it match the original GIF, but after fiddling with the wave parameters and multipliers you can eventually get something like this:

When previewing in full screen, the octagonal shape is now pretty clear.

Step 4: Post-processing

We have something that mimics the overall motion of the GIF, but the dots in motion don’t have the same color shifting effect that we observed earlier. As a reminder:

The moving dots are split into red, green, and blue parts. The red part points in the direction of motion, while the blue part points away from the motion. The faster the dot is moving, the farther these three parts are spread out. As the colored parts overlap, they combine into a solid white color.

We can achieve this effect using the post-processing EffectComposer built into Three.js, which we can conveniently tack onto the scene without any changes to the code we’ve already written. If you’re new to post-processing like me, I highly recommend reading this intro guide from threejsfundamentals. In short, the composer lets you toss image data back and forth between two “render targets” (glorified image textures), applying shaders and other operations in between. Each step of the pipeline is called a “pass”. Typically the first pass performs the initial scene render, then there are some passes to add effects, and by default the final pass writes the resulting image to the screen.

An example: motion blur

Here’s a JSFiddle from Maxime R that demonstrates a naive motion blur effect with the EffectComposer. This effect makes use of a third render target in order to preserve a blend of previous frames. I’ve drawn out a diagram to track how image data moves through the pipeline (read from the top down):

Diagram depicting the flow of data through the four passes of a simple motion blur effect. The process is explained below.

First, the scene is rendered as usual and written to rt1 with a RenderPass. Most passes will automatically switch the read and write buffers (render targets), so our next pass will read what we just rendered in rt1 and write to rt2. In this case we use a ShaderPass configured with a BlendShader to blend the contents of rt1 with whatever is stored in our third render target (empty at first, but it eventually accumulates a blend of previous frames). This blend is written to rt2 and another swap automatically occurs. Next, we use a SavePass to save the blend we just created in rt2 back to our third render target. The SavePass is a little unique in that it doesn’t swap the read and write buffers, which makes sense since it doesn’t actually change the image data. Finally, that same blend in rt2 (which is still the read buffer) gets read into another ShaderPass set to a CopyShader, which simply copies its input into the output. Since it’s the last pass on the stack, it automatically gets renderToScreen=true which means that its output is what you’ll see on screen.
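
Condensed into code, that pass stack might look like this. This is a sketch in plain Three.js, assuming renderer, scene, camera, width, and height already exist; the mixRatio value is illustrative:

import * as THREE from 'three'
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer'
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass'
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass'
import { SavePass } from 'three/examples/jsm/postprocessing/SavePass'
import { BlendShader } from 'three/examples/jsm/shaders/BlendShader'
import { CopyShader } from 'three/examples/jsm/shaders/CopyShader'

// rt1 and rt2 live inside the composer; this third target holds the running blend
const blendTarget = new THREE.WebGLRenderTarget(width, height)

const composer = new EffectComposer(renderer)
composer.addPass(new RenderPass(scene, camera))

// Blend the fresh frame (read buffer, bound to tDiffuse1) with the saved history
const blendPass = new ShaderPass(BlendShader, 'tDiffuse1')
blendPass.uniforms['tDiffuse2'].value = blendTarget.texture
blendPass.uniforms['mixRatio'].value = 0.8
composer.addPass(blendPass)

// Save the blend we just produced back into the third target (no buffer swap here)
composer.addPass(new SavePass(blendTarget))

// Copy the result to the screen; the last pass renders to screen by default
composer.addPass(new ShaderPass(CopyShader))

// In the animation loop, call composer.render() instead of renderer.render()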

Working with post-processing requires some mental gymnastics, but hopefully this makes some sense of how different components like ShaderPass, SavePass, and CopyShader work together to apply effects and preserve data between frames.

RGB Delay Effect

A simple RGB color shifting effect involves turning our single white dot into three colored dots that get farther apart the faster they move. Rather than trying to compute velocities for all the dots and passing them to the post-processing stack, we can cheat by overlaying previous frames:

A red, green, and blue dot overlaid like a Venn diagram, depicting three consecutive frames.

This turns out to be a very similar problem to the motion blur, since it requires us to use additional render targets to store data from previous frames. We actually need two extra render targets this time, one to store the image from frame n-1 and another for frame n-2. I’ll call these render targets delay1 and delay2.

Here’s a diagram of the RGB delay effect:

Diagram depicting the flow of data through the four passes of the RGB color delay effect. Key aspects of the process are explained below.
A circle containing a value X represents the individual frame for delay X.

The trick is to manually disable needsSwap on the ShaderPass that blends the colors together, so that the subsequent SavePass re-reads the buffer that holds the current frame rather than the colored composite. Similarly, by manually enabling needsSwap on the SavePass we ensure that we read from the colored composite on the final ShaderPass for the end result. The other tricky part is that since we’re placing the current frame’s contents in the delay2 buffer (so as not to lose the contents of delay1 for the next frame), we need to swap these buffers each frame. It’s easiest to do this outside of the EffectComposer by swapping the references to these render targets on the ShaderPass and SavePass within the render loop.

Implementation

This is all very abstract, so let’s see what this means in practice. In a new file (Effects.js), start by importing the necessary passes and shaders, then extending the classes so that R3F can access them declaratively.

import { useThree, useFrame, extend } from 'react-three-fiber'
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer'
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass'
import { SavePass } from 'three/examples/jsm/postprocessing/SavePass'
import { CopyShader } from 'three/examples/jsm/shaders/CopyShader'
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass'

extend({ EffectComposer, ShaderPass, SavePass, RenderPass })

We’ll put our effects inside a new component. Here is what a basic effect looks like in R3F:

function Effects() {
  const composer = useRef()
  const { scene, gl, size, camera } = useThree()
  useEffect(() => void composer.current.setSize(size.width, size.height), [size])
  useFrame(() => {
    composer.current.render()
  }, 1)
  return (
    <effectComposer ref={composer} args={[gl]}>
      <renderPass attachArray="passes" scene={scene} camera={camera} />
    </effectComposer>
  )
}

All this does is render the scene to the canvas. Let’s start adding in the pieces from our diagram. We’ll need a shader that takes in three textures and blends their red, green, and blue channels respectively. The vertexShader of a post-processing shader always looks the same, so we only really need to focus on the fragmentShader. Here’s what the complete shader looks like:

const triColorMix = {
  uniforms: {
    tDiffuse1: { value: null },
    tDiffuse2: { value: null },
    tDiffuse3: { value: null }
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    uniform sampler2D tDiffuse1;
    uniform sampler2D tDiffuse2;
    uniform sampler2D tDiffuse3;
    
    void main() {
      vec4 del0 = texture2D(tDiffuse1, vUv);
      vec4 del1 = texture2D(tDiffuse2, vUv);
      vec4 del2 = texture2D(tDiffuse3, vUv);
      float alpha = min(min(del0.a, del1.a), del2.a);
      gl_FragColor = vec4(del0.r, del1.g, del2.b, alpha);
    }
  `
}

With the shader ready to roll, we’ll memoize our helper render targets and set up some additional refs to hold constants and references to our other passes.

const savePass = useRef()
const blendPass = useRef()
const swap = useRef(false) // Whether to swap the delay buffers
const { rtA, rtB } = useMemo(() => {
  const rtA = new THREE.WebGLRenderTarget(size.width, size.height)
  const rtB = new THREE.WebGLRenderTarget(size.width, size.height)
  return { rtA, rtB }
}, [size])

Next, we’ll flesh out the effect stack with the other passes specified in the diagram above and attach our refs:

return (
  <effectComposer ref={composer} args={[gl]}>
    <renderPass attachArray="passes" scene={scene} camera={camera} />
    <shaderPass attachArray="passes" ref={blendPass} args={[triColorMix, 'tDiffuse1']} needsSwap={false} />
    <savePass attachArray="passes" ref={savePass} needsSwap={true} />
    <shaderPass attachArray="passes" args={[CopyShader]} />
  </effectComposer>
)

By stating args={[triColorMix, 'tDiffuse1']} on the blend pass, we indicate that the composer’s read buffer should be passed as the tDiffuse1 uniform in our custom shader. The behavior of these passes is unfortunately not documented, so you sometimes need to poke through the source files to figure this stuff out.

Finally, we’ll need to modify the render loop to swap between our spare render targets and plug them in as the remaining 2 uniforms:

useFrame(() => {
  // Swap render targets and update dependencies
  let delay1 = swap.current ? rtB : rtA
  let delay2 = swap.current ? rtA : rtB
  savePass.current.renderTarget = delay2
  blendPass.current.uniforms['tDiffuse2'].value = delay1.texture
  blendPass.current.uniforms['tDiffuse3'].value = delay2.texture
  swap.current = !swap.current
  composer.current.render()
}, 1)

All the pieces for our RGB delay effect are in place. Here’s a demo of the end result on a simpler scene with one white dot moving back and forth:

Putting it all together

As you’ll notice in the previous sandbox, we can make the effect take hold by simply plopping the <Effects /> component inside the canvas. After doing this, we can make it look even better by adding an anti-aliasing pass to the effect composer.

import { FXAAShader } from 'three/examples/jsm/shaders/FXAAShader'

...
  const pixelRatio = gl.getPixelRatio()
  return (
    <effectComposer ref={composer} args={[gl]}>
      <renderPass attachArray="passes" scene={scene} camera={camera} />
      <shaderPass attachArray="passes" ref={blendPass} args={[triColorMix, 'tDiffuse1']} needsSwap={false} />
      <savePass attachArray="passes" ref={savePass} needsSwap={true} />
      <shaderPass
        attachArray="passes"
        args={[FXAAShader]}
        uniforms-resolution-value-x={1 / (size.width * pixelRatio)}
        uniforms-resolution-value-y={1 / (size.height * pixelRatio)}
      />
      <shaderPass attachArray="passes" args={[CopyShader]} />
    </effectComposer>
  )
}

And here’s our finished demo!

(Bonus) Interactivity

While outside the scope of this tutorial, I’ve added an interactive demo variant which responds to mouse clicks and cursor position. This variant uses react-spring v9 to smoothly reposition the focus point of the dots. Check it out in the “Demo 2” page of the demo linked at the top of this page, and play around with the source code to see if you can add other forms of interactivity.

Step 5: Sharing Your Work

I highly recommend publicly sharing the things you create. It’s a great way to track your progress, share your learning with others, and get feedback. I wouldn’t be writing this tutorial if I hadn’t shared my work! For perfect loops you can use the use-capture hook to automate your recording. If you’re sharing to Twitter, consider converting to a GIF to avoid compression artifacts. Here’s a thread from @arc4g explaining how they create smooth 50 FPS GIFs for Twitter.

I hope you learned something about Three.js or react-three-fiber from this tutorial. Many of the animations I see online follow a similar formula of repeated shapes moving in some mathematical rhythm, so the principles here extend beyond just rippling dots. If this inspired you to create something cool, tag me in it so I can see!

The post Recreating a Dave Whyte Animation in React-Three-Fiber appeared first on Codrops.

Creating Mirrors in React-Three-Fiber and Three.js

This tutorial is inspired by Claudio Guglieri’s new personal website, which features a collection of playful 3D scenes. What we’ll do today is explore the “Don’t” scene that is composed of rotating mirrors:

We’ll be using Three.js with react-three-fiber v5, drei and use-cannon, and we’ll assume that you have some basic knowledge of how to set up a scene and work with Three.js.

Since real-time reflections would be extremely performance-heavy, we’ll employ a few neat tricks!

All these libraries are part of Poimandres, a collection of libraries for creative coding. Follow Poimandres on Twitter to get the latest updates:

Drawing sharp text in 3D Space

To make our text look as sharp as possible, we use drei’s Text component, which is a wrapper around Troika Three Text. This library allows us to draw any webfont using signed distance fields and antialiasing:

import { Text } from '@react-three/drei'

function Title() {
  return <Text material-toneMapped={false}>My Title</Text>
}

The `material-toneMapped={false}` prop tells Three.js to exclude our material from tone mapping. Since react-three-fiber v5 uses sRGB color management by default, our text would otherwise be more grey than white.

Mirrors

The mirrors are simple Box objects positioned in 3D space, with their positions loaded from a JSON file. We use `useResource` to store references to the materials and re-use them across the individual Mirror components, meaning we only instantiate each material once.

To make the mirrors pop out of the black backdrop, we added a thin film effect by David Lenaerts.

import { useState } from 'react'
import { useResource } from 'react-three-fiber'
// ThinFilmFresnelMap is David Lenaerts' thin film texture class;
// mirrorsData holds the mirror positions loaded from the JSON file

function Mirrors({ envMap }) {
  const sideMaterial = useResource();
  const reflectionMaterial = useResource();
  const [thinFilmFresnelMap] = useState(new ThinFilmFresnelMap());

  return (
    <>
      <meshLambertMaterial ref={sideMaterial} map={thinFilmFresnelMap} color={0xaaaaaa} />
      <meshLambertMaterial ref={reflectionMaterial} map={thinFilmFresnelMap} envMap={envMap} />

      {mirrorsData.mirrors.map((mirror, index) => (
        <Mirror
          key={`mirror-${index}`}
          {...mirror}
          sideMaterial={sideMaterial.current}
          reflectionMaterial={reflectionMaterial.current}
        />
      ))}
    </>
  );
}

For the individual mirrors, we assigned a material to each face by setting the material prop to an array with six values (one material for each of the six faces of the Box geometry):

function Mirror({ sideMaterial, reflectionMaterial, args, ...props }) {
  const ref = useRef()

  useFrame(() => {
    ref.current.rotation.y += 0.001
    ref.current.rotation.z += 0.01
  })
  
  return (
    <Box {...props} 
      ref={ref} 
      args={args}
      material={[
        sideMaterial,
        sideMaterial,
        sideMaterial,
        sideMaterial,
        reflectionMaterial,
        reflectionMaterial
      ]}
    />
  )
}

The mirrors are rotated each frame on the y and z axes to create interesting movements in the reflected image.

Reflections

As you may have noticed, we are using an envMap property on our mirror materials. The envMap is used to show reflections on metallic objects. But how can we create one for our scene?

Enter CubeCamera, a Three.js object that creates six perspective cameras and makes a cube texture out of them:

// 1. we create a CubeRenderTarget
const [renderTarget] = useState(new THREE.WebGLCubeRenderTarget(1024))

// 2. we get a reference to our cubeCamera
const cubeCamera = useRef()
  
// 3. we update the camera each frame
useFrame(({ gl, scene }) => {
  cubeCamera.current.update(gl, scene)
})

return (
   <cubeCamera 
     layers={[11]} 
     name="cubeCamera" 
     ref={cubeCamera} 
     position={[0, 0, 0]} 
     // notice how the renderTarget is passed as a constructor argument of the cubeCamera object
     args={[0.1, 100, renderTarget]} 
  />
)

In this basic example, we set up a cubeCamera that brings the sky’s reflections onto our physical material.
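
To consume it, hand the render target’s texture to a reflective material as its envMap. A minimal sketch; the sphere and material settings are illustrative:

<mesh>
  <sphereBufferGeometry args={[2, 32, 32]} />
  <meshStandardMaterial envMap={renderTarget.texture} metalness={1} roughness={0} />
</mesh>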

Right now, our scene doesn’t contain much else besides the mirrors, so we use a magic trick to create interesting reflections:

function TitleCopies({ layers }) {
  const vertices = useMemo(() => {
    const y = new THREE.IcosahedronGeometry(8)
    return y.vertices
  }, [])

  return (
    <group name="titleCopies">
      {vertices.map((vertex, i) => (
        <Title key={i} name={'titleCopy-' + i} position={vertex} layers={layers} />
      ))}
    </group>
  )
}

We create an IcosahedronGeometry (20 faces) and use its vertices to create copies of our title, so that our cubeCamera has something to look at. To make sure the text is always visible, we also make it rotate to look at the center of the scene, where our camera is positioned.
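
That rotation can be as simple as a lookAt in the frame loop. A hypothetical sketch inside the Title component, assuming the text mesh is bound to ref:

useFrame(() => {
  // keep the text facing the scene center, where the main camera sits
  ref.current.lookAt(0, 0, 0)
})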

Since we don’t want the fake text copies to be visible in the main scene, but only in the reflections, we use the layers system of Three.js. 

By assigning layer 11 to our cubeCamera, only objects that share the same layer will be visible to it. This is what our cubeCamera is going to see (and thus what we are going to get on the mirrors).
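
In plain Three.js terms, the layers mechanism boils down to this generic sketch (not the demo’s exact code):

const copy = new THREE.Mesh(geometry, material)
copy.layers.set(11)       // the copy exists only on layer 11

const camera = new THREE.PerspectiveCamera()
camera.layers.enable(11)  // this camera sees layer 11 on top of the default layer 0
// the main camera never enables layer 11, so the copies stay hidden from it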

Fun fact: Claudio was kind enough to show us that he also used the same technique to make the reflections more interesting.

Finishing touches

To finish it up, we added a simple mouse interaction that really helps sell the reflections on the mirrors. We wrapped our whole scene in a <group> and animated it using the mouse position:

import { useFrame, useThree } from "react-three-fiber";

function Scene() {
  const group = useRef();
  const rotationEuler = new THREE.Euler(0, 0, 0);
  const rotationQuaternion = new THREE.Quaternion(); // identity by default
  const { viewport } = useThree();

  useFrame(({ mouse }) => {
    const x = (mouse.x * viewport.width) / 100;
    const y = (mouse.y * viewport.height) / 100;

    rotationEuler.set(y, x, 0);
    rotationQuaternion.setFromEuler(rotationEuler);

    group.current.quaternion.slerp(rotationQuaternion, 0.1);
  });

  return <group ref={group}>...</group>;
}

We create the Euler and Quaternion objects outside of the useFrame loop, since creating new objects on every frame would hurt performance. To make the rotation smooth, we first set the rotation angles from the mouse x and y, then slerp (which sounds funny but actually means spherical linear interpolation) the group’s quaternion towards our new quaternion.

Bonus Points: Cannon!

Our second variation on this theme involves some simple physics simulation using use-cannon, another library in the react-three-fiber ecosystem.

For this scene, we set up a wall of cubes that use the same material setup as our mirrors:

import { useBox } from '@react-three/cannon'

function Mirror({ envMap, fresnel, ...props }) {
  const [ref, api] = useBox(() => props)
  
  return (
    <Box ref={ref} args={props.args} 
      onClick={() => api.applyImpulse([0, 0, -50], [0, 0, 0])} 
      receiveShadow castShadow material={[...]} 
    />
  )
}

The useBox hook from use-cannon creates a physical box that is then bound to the Box mesh using the given ref, meaning that any change in position of the physical box will also be applied to our mesh.

We also added two physical planes, one for the floor and one for the back wall. Then we only render the floor with a ShadowMaterial:

import { usePlane } from '@react-three/cannon'

function PhysicalWalls(props) {
  // ground
  usePlane(() => ({ ...props }))

  // back wall
  usePlane(() => ({ position: [0, 0, -20] }))

  return (
    <Plane args={[1000, 1000]} {...props} receiveShadow>
      <shadowMaterial transparent opacity={0.2} />
    </Plane>
  )
}

To make everything magically work, we wrap it in the <Physics> provider:

import { Physics } from '@react-three/cannon'

<Physics gravity={[0, -10, 0]}>
  <Mirrors envMap={renderTarget.texture} />
  <PhysicalWalls rotation={[-Math.PI/2, 0, 0]} position={[0, -2, 0]}/>
</Physics>

Here is a simplified version of the physical scene we used:

And here we go with some DESTRUCTION:

And just so you know… Panna, Olga and Pedro are the names of Gianmarco’s bunny (Panna) and Marco’s cats (Olga and Pedro) 🙂

The post Creating Mirrors in React-Three-Fiber and Three.js appeared first on Codrops.

Scroll, Refraction and Shader Effects in Three.js and React

In this tutorial I will show you how to take a couple of established techniques (like tying things to the scroll-offset), and cast them into re-usable components. Composition will be our primary focus.

In this tutorial we will:

  • build a declarative scroll rig
  • mix HTML and canvas
  • handle async assets and loading screens via React.Suspense
  • add shader effects and tie them to scroll
  • and as a bonus: add an instanced variant of Jesper Vos’ multiside refraction shader

Setting up

We are using React, hooks, Three.js and react-three-fiber. The latter is a renderer for Three.js which allows us to declare the scene graph by breaking up tasks into self-contained components. However, you still need to know a bit of Three.js. Everything there is to know about react-three-fiber can be found in the GitHub repo’s readme. Check out the tutorial on alligator.io, which goes into the why and how.

We don’t emulate a scroll bar, which would take away browser semantics. A real scroll-area in front of the canvas with a set height and a listener is all we need.

I decided to divide the content into:

  • virtual content sections
  • and pages, each 100vh long; these define how long the scroll area is

function App() {
  const scrollArea = useRef()
  const onScroll = e => (state.top.current = e.target.scrollTop)
  useEffect(() => void onScroll({ target: scrollArea.current }), [])
  return (
    <>
      <Canvas orthographic>{/* Contents ... */}</Canvas>
      <div ref={scrollArea} onScroll={onScroll}>
        <div style={{ height: `${state.pages * 100}vh` }} />
      </div>
    </>
  )
}

scrollTop is written into a reference because it will be picked up by the render-loop, which carries out the animations. Re-rendering for such frequently changing state doesn’t make sense.

A first-run effect synchronizes the local scrollTop with the actual one, which may not be zero.
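
The snippets in this section read from a tiny shared state module, something like this sketch (the exact numbers are illustrative):

// state.js: shared scroll state, written to by the scroll listener
const state = {
  sections: 3,          // number of content sections
  pages: 3,             // scroll length, in multiples of 100vh
  zoom: 75,             // zoom factor of the orthographic camera
  top: { current: 0 }   // latest scrollTop, updated outside of React
}
export default state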

Building a declarative scroll rig

There are many ways to go about it, but generally it would be nice if we could distribute content across the number of sections in a declarative way while the number of pages defines how long we have to scroll. Each content-block should have:

  • an offset, which is the section index, given 3 sections, 0 means start, 2 means end, 1 means in between
  • a factor, which gets added to the offset position and subtracted using scrollTop; it controls the block’s speed and direction

Blocks should also be nestable, so that sub-blocks know their parents’ offset and can scroll along.

const offsetContext = createContext(0)

function Block({ children, offset, factor, ...props }) {
  const ref = useRef()
  // Fetch parent offset and the height of a single section
  const { offset: parentOffset, sectionHeight } = useBlock()
  offset = offset !== undefined ? offset : parentOffset
  // Runs every frame and lerps the inner block into its place
  useFrame(() => {
    const curY = ref.current.position.y
    const curTop = state.top.current
    ref.current.position.y = lerp(curY, (curTop / state.zoom) * factor, 0.1)
  })
  return (
    <offsetContext.Provider value={offset}>
      <group {...props} position={[0, -sectionHeight * offset * factor, 0]}>
        <group ref={ref}>{children}</group>
      </group>
    </offsetContext.Provider>
  )
}

This is a block-component. Above all, it wraps the offset that it is given into a context provider so that nested blocks and components can read it out. Without an offset it falls back to the parent offset.

It defines two groups. The first is for the target position, which is the height of one section multiplied by the offset and the factor. The second, inner group is animated and cancels out the factor. When the user scrolls to the given section offset, the block will be centered.

We use that along with a custom hook which allows any component to access block-specific data. This is how any component gets to react to scroll.

function useBlock() {
  // zoom, pages and sections come from the module-level state object
  const { zoom, pages, sections } = state
  const { viewport } = useThree()
  const offset = useContext(offsetContext)
  const canvasWidth = viewport.width / zoom
  const canvasHeight = viewport.height / zoom
  const sectionHeight = canvasHeight * ((pages - 1) / (sections - 1))
  // ...
  return { offset, canvasWidth, canvasHeight, sectionHeight }
}

We can now compose and nest blocks conveniently:

<Block offset={2} factor={1.5}>
  <Content>
    <Block factor={-0.5}>
      <SubContent />
    </Block>
  </Content>
</Block>

Anything can read from block-data and react to it (like that spinning cross):

function Cross() {
  const ref = useRef()
  const { viewportHeight } = useBlock()
  useFrame(() => {
    const curTop = state.top.current
    const nextY = (curTop / ((state.pages - 1) * viewportHeight)) * Math.PI
    ref.current.rotation.z = lerp(ref.current.rotation.z, nextY, 0.1)
  })
  return (
    <group ref={ref}>
      {/* ... cross geometry ... */}
    </group>
  )
}

Mixing HTML and canvas, and dealing with assets

Keeping HTML in sync with the 3D world

We want to keep layout and text-related things in the DOM. However, keeping it in sync is a bit of a bummer in Three.js; messing with createElement and camera calculations is no fun.

In three-fiber all you need is the <Dom /> helper (@beta atm). Throw this into the canvas and add declarative HTML. This is all it takes for it to move along with its parents’ world-matrix.

<group position={[10, 0, 0]}>
  <Dom><h1>hello</h1></Dom>
</group>

Accessibility

If we strictly divide between layout and visuals, supporting a11y is possible. Dom elements can be behind the canvas (via the prepend prop), or in front of it. Make sure to place them in front if you need them to be accessible.

Responsiveness, media-queries, etc.

While the DOM fragments can rely on CSS, their positioning overall relies on the scene graph. Canvas elements on the other hand know nothing of the sort, so making it all work on smaller screens can be a bit of a challenge.

Fortunately, three-fiber has auto-resize built in. Any component requesting size data will be automatically informed of changes.

You get:

  • viewport, the size of the canvas in its own units, must be divided by camera.zoom for orthographic cameras
  • size, the size of the screen in pixels

const { viewport, size } = useThree()

Most of the relevant calculations for margins, maxWidth and so on have been made in useBlock.
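
Inside useBlock those look something like this; the multipliers are illustrative rather than the demo’s exact values:

// Derived from the canvas size computed above
const margin = canvasWidth * 0.08          // outer margin per side
const contentMaxWidth = canvasWidth * 0.6  // cap for text and content blocks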

Handling async assets and loading screens via React.Suspense

Concerning assets, React’s Suspense allows us to control loading and caching: when components should show up, in what order, fallbacks, and how errors are handled. It makes something like a loading screen, or a start-up animation, almost too easy.

The following will suspend all contents until each and every component, even nested ones, has its async data ready. Meanwhile it will show a fallback. When everything is there, the <Startup /> component will render along with everything else.

<Suspense fallback={<Fallback />}>
  <AsyncContent />
  <Startup />
</Suspense>

In three-fiber you can suspend a component with the useLoader hook, which takes any Three.js loader, then loads (and caches) assets with it.

function Image() {
  const texture = useLoader(THREE.TextureLoader, "/texture.png")
  // It will only get here if the texture has been loaded
  return (
    <mesh>
      <meshBasicMaterial attach="material" map={texture} />
    </mesh>
  )
}
Adding shader effects and tying them to scroll

The custom shader in this demo is a Frankenstein based on the Three.js MeshBasicMaterial, plus a few custom shader chunks for the scroll-driven effects.

The relevant portion of code in which we feed the shader block-specific scroll data is this one:

material.current.scale =
  lerp(material.current.scale, offsetFactor - top / ((pages - 1) * viewportHeight), 0.1)
material.current.shift =
  lerp(material.current.shift, (top - last) / 150, 0.1)

Adding Diamonds

The technique is explained in full detail in the article Real-time Multiside Refraction in Three Steps by Jesper Vos. I placed Jesper’s code into a re-usable component, so that it can be mounted and unmounted, taking care of all the render logic. I also changed the shader slightly to enable instancing, which now allows us to draw dozens of these onto the screen without hitting a performance snag anytime soon.

The component reads out block-data like everything else. The diamonds are put into place according to the scroll offset by distributing the instanced meshes; instanced rendering like this is a relatively new feature in Three.js.

Wrapping up

This tutorial may give you a general idea, but there are many things that are possible beyond the generic parallax; you can tie anything to scroll. Above all, being able to compose and re-use components goes a long way and is so much easier than dealing with a soup of code fragments whose implicit contracts span the codebase.

Scroll, Refraction and Shader Effects in Three.js and React was written by Paul Henschel and published on Codrops.