Replicating the Light Effect from MIDWAM with Three.js and Postprocessing

In this new ALL YOUR HTML coding session we’ll dive into replicating the magical light effects seen on Midwam’s website using Three.js and postprocessing.

Original: https://midwam.com/en

Developed by: https://immersive-g.com/

This coding session was streamed live on January 9, 2022.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

How to Code an On-Scroll Folding 3D Cardboard Box Animation with Three.js and GSAP

Today we’ll walk through the creation of a 3D packaging box that folds and unfolds on scroll. We’ll be using Three.js and GSAP for this.

We won’t use any textures or shaders to set it up. Instead, we’ll discover some ways to manipulate the Three.js BufferGeometry.

This is what we will be creating:

Scroll-driven animation

We’ll be using GSAP ScrollTrigger, a handy plugin for scroll-driven animations. It’s a great tool with good documentation and an active community, so I’ll only touch on the basics here.
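
Before any of the snippets below will run, the plugin has to be registered once. A minimal sketch, assuming an ES-module setup (with plain script tags, the gsap and ScrollTrigger globals are available instead):

import { gsap } from 'gsap';
import { ScrollTrigger } from 'gsap/ScrollTrigger';

// register the plugin once, before creating any timelines that use scrollTrigger
gsap.registerPlugin(ScrollTrigger);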

Let’s set up a minimal example. The HTML page contains:

  1. a full-screen <canvas> element with some styles that will make it cover the browser window
  2. a <div class="page"> element behind the <canvas>. The .page element has a larger height than the window so we have a scrollable element to track.

On the <canvas> we render a 3D scene with a box element that rotates on scroll.

To rotate the box, we use a GSAP timeline, which gives us an intuitive way to describe the transition of the box.rotation.x property.

gsap.timeline({})
    .to(box.rotation, {
        duration: 1, // <- takes 1 second to complete
        x: .5 * Math.PI,
        ease: 'power1.out'
    }, 0) // <- starts at zero second (immediately)

The x value of box.rotation changes from 0 (or any other value that was set before defining the timeline) to 90 degrees. The transition starts immediately. It has a duration of one second and power1.out easing, so the rotation slows down at the end.

Once we add the scrollTrigger to the timeline, we start tracking the scroll position of the .page element (see the properties trigger, start and end). Setting the scrub property to true not only makes the transition start on scroll but actually binds the transition progress to the scroll progress.

gsap.timeline({
    scrollTrigger: {
        trigger: '.page',
        start: '0% 0%',
        end: '100% 100%',
        scrub: true,
        markers: true // to debug start and end properties
    },
})
    .to(box.rotation, {
        duration: 1,
        x: .5 * Math.PI,
        ease: 'power1.out'
    }, 0)

Now box.rotation.x is calculated as a function of the scroll progress, not as a function of time. But the easing and timing parameters still matter. The power1.out easing still makes the rotation slower at the end (check out the ease visualiser tool and try other options to see the difference). Start and duration values don’t mean seconds anymore, but they still define the sequence of the transitions within the timeline.

For example, in the following timeline the last transition is finished at 2.3 + 0.7 = 3.

gsap.timeline({
    scrollTrigger: {
        // ... 
    },
})
    .to(box.rotation, {
        duration: 1,
        x: .5 * Math.PI,
        ease: 'power1.out'
    }, 0)
    .to(box.rotation, {
        duration: 0.5,
        x: 0,
        ease: 'power2.inOut'
    }, 1)
    .to(box.rotation, {
        duration: 0.7, // <- duration of the last transition
        x: - Math.PI,
        ease: 'none'
    }, 2.3) // <- start of the last transition

We take the total duration of the animation as 3. Considering that, the first rotation starts once the scroll starts and takes ⅓ of the page height to complete. The second rotation starts without any delay and ends right in the middle of the scroll (1.5 of 3). The last rotation starts after a delay and ends when we scroll to the end of the page. That’s how we can construct the sequences of transitions bound to the scroll.

To get further with this tutorial, we don’t need more than some basic understanding of GSAP timing and easing. Let me just mention a few tips about the usage of GSAP ScrollTrigger, specifically for a Three.js scene.

Tip #1: Separating 3D scene and scroll animation

I found it useful to introduce an additional variable params = { angle: 0 } to hold animated parameters. Instead of directly changing rotation.x in the timeline, we animate the properties of the “proxy” object, and then use it for the 3D scene (see the updateSceneOnScroll() function under tip #2). This way, we keep scroll-related stuff separate from 3D code. Plus, it makes it easier to use the same animated parameter for multiple 3D transforms; more about that a bit further on.

Tip #2: Render scene only when needed

Maybe the most common way to render a Three.js scene is calling the render function within the window.requestAnimationFrame() loop. It’s good to remember that we don’t need it if the scene is static except for the GSAP animation. Instead, the line renderer.render(scene, camera) can simply be added to the onUpdate callback so the scene is redrawn only when needed, during the transition.

// No need to render the scene all the time
// function animate() {
//     requestAnimationFrame(animate);
//     // update objects(s) transforms here
//     renderer.render(scene, camera);
// }

let params = { angle: 0 }; // <- "proxy" object

// Three.js functions
function updateSceneOnScroll() {
    box.rotation.x = params.angle;
    renderer.render(scene, camera);
}

// GSAP functions
function createScrollAnimation() {
    gsap.timeline({
        scrollTrigger: {
            // ... 
            onUpdate: updateSceneOnScroll
        },
    })
        .to(params, {
            duration: 1,
            angle: .5 * Math.PI,
            ease: 'power1.out'
        })
}

Tip #3: Three.js methods to use with onUpdate callback

Various properties of Three.js objects (.quaternion, .position, .scale, etc.) can be animated with GSAP in the same way as we did for rotation. But not all Three.js methods will work.

Some of them assign a value to the property (.setRotationFromAxisAngle(), .setRotationFromQuaternion(), .applyMatrix4(), etc.), which works perfectly for GSAP timelines.

But other methods add the value to the property. For example, .rotateX(.1) would increase the rotation by 0.1 radians every time it’s called. So if box.rotateX(params.angle) is placed in the onUpdate callback, the angle value will be added to the box rotation every frame and the 3D box will go a bit crazy on scroll. The same goes for .rotateOnAxis, .translateX, .translateY and other similar methods – they work for animations in the window.requestAnimationFrame() loop but not for today’s GSAP setup.
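
To illustrate, here is a minimal sketch of the safe and unsafe variants inside the onUpdate callback, reusing the params proxy from Tip #1:

const axis = new THREE.Vector3(1, 0, 0);

function updateSceneOnScroll() {
    // OK: assigns the rotation, so repeated calls are harmless
    box.setRotationFromAxisAngle(axis, params.angle);

    // Not OK here: adds the angle on top of the current rotation
    // on every single update while we scroll
    // box.rotateX(params.angle);

    renderer.render(scene, camera);
}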

View the minimal scroll sandbox here.

Note: This Three.js scene and other demos below contain some additional elements like axes lines and titles. They have no effect on the scroll animation and can easily be excluded from the code. Feel free to remove the addAxesAndOrbitControls() function, everything related to axisTitles and orbits, and the <div> with the ui-controls class to get a truly minimal setup.

Now that we know how to rotate the 3D object on scroll, let’s see how to create the package box.

Box structure

The box is composed of 4 x 3 = 12 meshes:

We want to control the position and rotation of those meshes to define the following:

  • unfolded state
  • folded state 
  • closed state

For starters, let’s say our box doesn’t have flaps so all we have is two width-sides and two length-sides. The Three.js scene with 4 planes would look like this:

let box = {
    params: {
        width: 27,
        length: 80,
        depth: 45
    },
    els: {
        group: new THREE.Group(),
        backHalf: {
            width: new THREE.Mesh(),
            length: new THREE.Mesh(),
        },
        frontHalf: {
            width: new THREE.Mesh(),
            length: new THREE.Mesh(),
        }
    }
};

scene.add(box.els.group);
setGeometryHierarchy();
createBoxElements();

function setGeometryHierarchy() {
    // for now, the box is a group with 4 child meshes
    box.els.group.add(box.els.frontHalf.width, box.els.frontHalf.length, box.els.backHalf.width, box.els.backHalf.length);
}

function createBoxElements() {
    for (let halfIdx = 0; halfIdx < 2; halfIdx++) {
        for (let sideIdx = 0; sideIdx < 2; sideIdx++) {

            const half = halfIdx ? 'frontHalf' : 'backHalf';
            const side = sideIdx ? 'width' : 'length';

            const sideWidth = side === 'width' ? box.params.width : box.params.length;
            box.els[half][side].geometry = new THREE.PlaneGeometry(
                sideWidth,
                box.params.depth
            );
        }
    }
}

All 4 sides are centered at the point (0, 0, 0) by default and lie in the XY-plane:

Folding animation

To define the unfolded state, it’s sufficient to:

  • move panels along X-axis aside from center so they don’t overlap

Transforming it to the folded state means

  • rotating width-sides to 90 deg around Y-axis
  • moving length-sides to the opposite directions along Z-axis 
  • moving length-sides along X-axis to keep the box centered

Aside from box.params.width, box.params.length and box.params.depth, the only parameter needed to define these states is the opening angle. So the box.animated.openingAngle parameter is added, to be animated on scroll from 0 to 90 degrees.

let box = {
    params: {
        // ...
    },
    els: {
        // ...
    },
    animated: {
        openingAngle: 0
    }
};

function createFoldingAnimation() {
    gsap.timeline({
        scrollTrigger: {
            trigger: '.page',
            start: '0% 0%',
            end: '100% 100%',
            scrub: true,
        },
        onUpdate: updatePanelsTransform
    })
        .to(box.animated, {
            duration: 1,
            openingAngle: .5 * Math.PI,
            ease: 'power1.inOut'
        })
}

Using box.animated.openingAngle, the position and rotation of the sides can be calculated:

function updatePanelsTransform() {

    // place width-sides aside of length-sides (not animated)
    box.els.frontHalf.width.position.x = .5 * box.params.length;
    box.els.backHalf.width.position.x = -.5 * box.params.length;

    // rotate width-sides from 0 to 90 deg 
    box.els.frontHalf.width.rotation.y = box.animated.openingAngle;
    box.els.backHalf.width.rotation.y = box.animated.openingAngle;

    // move length-sides to keep the closed box centered
    const cos = Math.cos(box.animated.openingAngle); // animates from 1 to 0
    box.els.frontHalf.length.position.x = -.5 * cos * box.params.width;
    box.els.backHalf.length.position.x = .5 * cos * box.params.width;

    // move length-sides to define box inner space
    const sin = Math.sin(box.animated.openingAngle); // animates from 0 to 1
    box.els.frontHalf.length.position.z = .5 * sin * box.params.width;
    box.els.backHalf.length.position.z = -.5 * sin * box.params.width;
}
View the sandbox here.

Nice! Let’s think about the flaps. We want them to move together with the sides and then to rotate around their own edge to close the box.

To move the flaps together with the sides we simply add them as the children of the side meshes. This way, flaps inherit all the transforms we apply to the sides. An additional position.y transition will place them on top or bottom of the side panel.

let box = {
    params: {
        // ...
    },
    els: {
        group: new THREE.Group(),
        backHalf: {
            width: {
                top: new THREE.Mesh(),
                side: new THREE.Mesh(),
                bottom: new THREE.Mesh(),
            },
            length: {
                top: new THREE.Mesh(),
                side: new THREE.Mesh(),
                bottom: new THREE.Mesh(),
            },
        },
        frontHalf: {
            width: {
                top: new THREE.Mesh(),
                side: new THREE.Mesh(),
                bottom: new THREE.Mesh(),
            },
            length: {
                top: new THREE.Mesh(),
                side: new THREE.Mesh(),
                bottom: new THREE.Mesh(),
            },
        }
    },
    animated: {
        openingAngle: .02 * Math.PI
    }
};

scene.add(box.els.group);
setGeometryHierarchy();
createBoxElements();

function setGeometryHierarchy() {
    // as before
    box.els.group.add(box.els.frontHalf.width.side, box.els.frontHalf.length.side, box.els.backHalf.width.side, box.els.backHalf.length.side);

    // add flaps
    box.els.frontHalf.width.side.add(box.els.frontHalf.width.top, box.els.frontHalf.width.bottom);
    box.els.frontHalf.length.side.add(box.els.frontHalf.length.top, box.els.frontHalf.length.bottom);
    box.els.backHalf.width.side.add(box.els.backHalf.width.top, box.els.backHalf.width.bottom);
    box.els.backHalf.length.side.add(box.els.backHalf.length.top, box.els.backHalf.length.bottom);
}

function createBoxElements() {
    for (let halfIdx = 0; halfIdx < 2; halfIdx++) {
        for (let sideIdx = 0; sideIdx < 2; sideIdx++) {

            // ...

            const flapWidth = sideWidth - 2 * box.params.flapGap;
            const flapHeight = .5 * box.params.width - .75 * box.params.flapGap;

            // ...

            const flapPlaneGeometry = new THREE.PlaneGeometry(
                flapWidth,
                flapHeight
            );
            box.els[half][side].top.geometry = flapPlaneGeometry;
            box.els[half][side].bottom.geometry = flapPlaneGeometry;
            box.els[half][side].top.position.y = .5 * box.params.depth + .5 * flapHeight;
            box.els[half][side].bottom.position.y = -.5 * box.params.depth -.5 * flapHeight;
        }
    }
}

The rotation of the flaps is a bit more tricky.

Changing the pivot point of Three.js mesh

Let’s get back to the first example with a Three.js object rotating around the X axis.

There are many ways to set the rotation of a 3D object: Euler angles, quaternions, the lookAt() function, transform matrices and so on. Regardless of the way the angle and axis of rotation are set, the pivot point (transform origin) will be at the center of the mesh.

Say we animate rotation.x for the 4 boxes that are placed around the scene:

const boxGeometry = new THREE.BoxGeometry(boxSize[0], boxSize[1], boxSize[2]);
const boxMesh = new THREE.Mesh(boxGeometry, boxMaterial);

const numberOfBoxes = 4;
for (let i = 0; i < numberOfBoxes; i++) {
    boxes[i] = boxMesh.clone();
    boxes[i].position.x = (i - .5 * numberOfBoxes) * (boxSize[0] + 2);
    scene.add(boxes[i]);
}
boxes[1].position.y = .5 * boxSize[1];
boxes[2].rotation.y = .5 * Math.PI;
boxes[3].position.y = - boxSize[1];
See the sandbox here.

For them to rotate around the bottom edge, we need to move the pivot point to -.5 x box size. There are a couple of ways to do this:

  • wrap mesh with additional Object3D
  • transform geometry of mesh
  • assign pivot point with additional transform matrix
  • probably some other tricks

If you’re curious why Three.js doesn’t provide origin positioning as a native method, check out this discussion.

Option #1: Wrapping mesh with additional Object3D

For the first option, we add the original box mesh as a child of a new Object3D. We treat the parent object as the box, so we apply the transforms (rotation.x) to it, exactly as before. But we also translate the mesh up by half of its size. The mesh moves up in local space but the origin of the parent object stays at the same point.

const boxGeometry = new THREE.BoxGeometry(boxSize[0], boxSize[1], boxSize[2]);
const boxMesh = new THREE.Mesh(boxGeometry, boxMaterial);

const numberOfBoxes = 4;
for (let i = 0; i < numberOfBoxes; i++) {
    boxes[i] = new THREE.Object3D();
    const mesh = boxMesh.clone();
    mesh.position.y = .5 * boxSize[1];
    boxes[i].add(mesh);

    boxes[i].position.x = (i - .5 * numberOfBoxes) * (boxSize[0] + 2);
    scene.add(boxes[i]);
}
boxes[1].position.y = .5 * boxSize[1];
boxes[2].rotation.y = .5 * Math.PI;
boxes[3].position.y = - boxSize[1];
See the sandbox here.

Option #2: Translating the geometry of Mesh

With the second option, we move the geometry of the mesh itself up. In Three.js, we can apply a transform not only to objects but also to their geometry.

const boxGeometry = new THREE.BoxGeometry(boxSize[0], boxSize[1], boxSize[2]);
boxGeometry.translate(0, .5 * boxSize[1], 0);
const boxMesh = new THREE.Mesh(boxGeometry, boxMaterial);

const numberOfBoxes = 4;
for (let i = 0; i < numberOfBoxes; i++) {
    boxes[i] = boxMesh.clone();
    boxes[i].position.x = (i - .5 * numberOfBoxes) * (boxSize[0] + 2);
    scene.add(boxes[i]);
}
boxes[1].position.y = .5 * boxSize[1];
boxes[2].rotation.y = .5 * Math.PI;
boxes[3].position.y = - boxSize[1];
See the sandbox here.

The idea and result are the same: we move the mesh up by ½ of its height while the origin point stays at the same coordinates. That’s why the rotation.x transform makes the box rotate around its bottom side.

Option #3: Assign pivot point with additional transform matrix

I find this way less suitable for today’s project, but the idea behind it is pretty simple. We take both the pivot point position and the desired transform as matrices. Instead of simply applying the desired transform to the box, we apply the inverted pivot point position first, then perform rotation.x as if the box were centered, and then apply the pivot point position.

object.matrix = inverse(pivot.matrix) * someTransformationMatrix * pivot.matrix

You can find a nice implementation of this method here.
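
A minimal sketch of the idea with Three.js matrices, assuming we want the pivot at the bottom edge of a centered box and an angle variable holding the target rotation:

const pivot = new THREE.Vector3(0, -.5 * boxSize[1], 0);

const pivotMatrix = new THREE.Matrix4().makeTranslation(pivot.x, pivot.y, pivot.z);
const pivotMatrixInverse = pivotMatrix.clone().invert();
const rotationMatrix = new THREE.Matrix4().makeRotationX(angle);

// move the pivot to the origin, rotate, move it back
box.matrixAutoUpdate = false; // we manage the matrix manually
box.matrix.copy(pivotMatrix).multiply(rotationMatrix).multiply(pivotMatrixInverse);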

I’m using geometry translation (option #2) to move the origin of the flaps. Before getting back to the box, let’s see what we can achieve if the very same rotating boxes are added to the scene in hierarchical order and placed one on top of another.

const boxGeometry = new THREE.BoxGeometry(boxSize[0], boxSize[1], boxSize[2]);
boxGeometry.translate(0, .5 * boxSize[1], 0);
const boxMesh = new THREE.Mesh(boxGeometry, boxMaterial);

const numberOfBoxes = 4;
for (let i = 0; i < numberOfBoxes; i++) {
    boxes[i] = boxMesh.clone();
    if (i === 0) {
        scene.add(boxes[i]);
    } else {
        boxes[i - 1].add(boxes[i]);
        boxes[i].position.y = boxSize[1];
    }
}

We still animate rotation.x of each box from 0 to 90 degrees, so the first mesh rotates to 90 degrees, the second one does the same 90 degrees plus its own 90 degrees rotation, the third does 90+90+90 degrees, etc.

See the sandbox here.

A very easy and quite useful trick.

Animating the flaps

Back to the flaps. Flaps are made from translated geometry and added to the scene as children of the side meshes. We set their position.y property once and animate their rotation.x property on scroll.

function setGeometryHierarchy() {
    box.els.group.add(box.els.frontHalf.width.side, box.els.frontHalf.length.side, box.els.backHalf.width.side, box.els.backHalf.length.side);
    box.els.frontHalf.width.side.add(box.els.frontHalf.width.top, box.els.frontHalf.width.bottom);
    box.els.frontHalf.length.side.add(box.els.frontHalf.length.top, box.els.frontHalf.length.bottom);
    box.els.backHalf.width.side.add(box.els.backHalf.width.top, box.els.backHalf.width.bottom);
    box.els.backHalf.length.side.add(box.els.backHalf.length.top, box.els.backHalf.length.bottom);
}

function createBoxElements() {
    for (let halfIdx = 0; halfIdx < 2; halfIdx++) {
        for (let sideIdx = 0; sideIdx < 2; sideIdx++) {

            // ...

            const topGeometry = flapPlaneGeometry.clone();
            topGeometry.translate(0, .5 * flapHeight, 0);

            const bottomGeometry = flapPlaneGeometry.clone();
            bottomGeometry.translate(0, -.5 * flapHeight, 0);

            box.els[half][side].top.position.y = .5 * box.params.depth;
            box.els[half][side].bottom.position.y = -.5 * box.params.depth;
        }
    }
}

The animation of each flap has individual timing and easing within the gsap.timeline, so we store the flap angles separately.

let box = {
    // ...
    animated: {
        openingAngle: .02 * Math.PI,
        flapAngles: {
            backHalf: {
                width: {
                    top: 0,
                    bottom: 0
                },
                length: {
                    top: 0,
                    bottom: 0
                },
            },
            frontHalf: {
                width: {
                    top: 0,
                    bottom: 0
                },
                length: {
                    top: 0,
                    bottom: 0
                },
            }
        }
    }
}

function createFoldingAnimation() {
    gsap.timeline({
        scrollTrigger: {
            // ...
        },
        onUpdate: updatePanelsTransform
    })
        .to(box.animated, {
            duration: 1,
            openingAngle: .5 * Math.PI,
            ease: 'power1.inOut'
        })
        .to([ box.animated.flapAngles.backHalf.width, box.animated.flapAngles.frontHalf.width ], {
            duration: .6,
            bottom: .6 * Math.PI,
            ease: 'back.in(3)'
        }, .9)
        .to(box.animated.flapAngles.backHalf.length, {
            duration: .7,
            bottom: .5 * Math.PI,
            ease: 'back.in(2)'
        }, 1.1)
        .to(box.animated.flapAngles.frontHalf.length, {
            duration: .8,
            bottom: .49 * Math.PI,
            ease: 'back.in(3)'
        }, 1.4)
        .to([box.animated.flapAngles.backHalf.width, box.animated.flapAngles.frontHalf.width], {
            duration: .6,
            top: .6 * Math.PI,
            ease: 'back.in(3)'
        }, 1.4)
        .to(box.animated.flapAngles.backHalf.length, {
            duration: .7,
            top: .5 * Math.PI,
            ease: 'back.in(3)'
        }, 1.7)
        .to(box.animated.flapAngles.frontHalf.length, {
            duration: .9,
            top: .49 * Math.PI,
            ease: 'back.in(4)'
        }, 1.8)
}

function updatePanelsTransform() {

    // ... folding / unfolding

    box.els.frontHalf.width.top.rotation.x = -box.animated.flapAngles.frontHalf.width.top;
    box.els.frontHalf.length.top.rotation.x = -box.animated.flapAngles.frontHalf.length.top;
    box.els.frontHalf.width.bottom.rotation.x = box.animated.flapAngles.frontHalf.width.bottom;
    box.els.frontHalf.length.bottom.rotation.x = box.animated.flapAngles.frontHalf.length.bottom;

    box.els.backHalf.width.top.rotation.x = box.animated.flapAngles.backHalf.width.top;
    box.els.backHalf.length.top.rotation.x = box.animated.flapAngles.backHalf.length.top;
    box.els.backHalf.width.bottom.rotation.x = -box.animated.flapAngles.backHalf.width.bottom;
    box.els.backHalf.length.bottom.rotation.x = -box.animated.flapAngles.backHalf.length.bottom;
}
See the sandbox here.

With all this, we finish the animation part! Let’s now work on the look of our box.

Lights and colors 

This part is as simple as replacing multi-color wireframes with a single color MeshStandardMaterial and adding a few lights.

const ambientLight = new THREE.AmbientLight(0xffffff, .5);
scene.add(ambientLight);
lightHolder = new THREE.Group();
const topLight = new THREE.PointLight(0xffffff, .5);
topLight.position.set(-30, 300, 0);
lightHolder.add(topLight);
const sideLight = new THREE.PointLight(0xffffff, .7);
sideLight.position.set(50, 0, 150);
lightHolder.add(sideLight);
scene.add(lightHolder);

const material = new THREE.MeshStandardMaterial({
    color: new THREE.Color(0x9C8D7B),
    side: THREE.DoubleSide
});
box.els.group.traverse(c => {
    if (c.isMesh) c.material = material;
});

Tip: Object rotation effect with OrbitControls

OrbitControls makes the camera orbit around the central point (left preview). To demonstrate a 3D object, it’s better to give users the feeling that they rotate the object itself, not the camera around it (right preview). To do so, we keep the position of the lights static relative to the camera.

It can be done by wrapping the lights in an additional lightHolder object. The pivot point of the parent object is (0, 0, 0). We also know that the camera rotates around (0, 0, 0). This means we can simply apply the camera’s rotation to the lightHolder to keep the lights static relative to the camera.

function render() {
    // ...
    lightHolder.quaternion.copy(camera.quaternion);
    renderer.render(scene, camera);
}
See the sandbox here.

Layered panels

So far, our sides and flaps were done as simple PlaneGeometry. Let’s replace it with a “real” corrugated cardboard material: two covers and a fluted layer between them.


The first step is replacing a single plane with 3 planes merged into one. To do so, we place 3 clones of PlaneGeometry one behind another and translate the front and back layers along the Z axis by half of the total cardboard thickness.

There are many ways to move the layers, starting from the geometry.translate(0, 0, .5 * thickness) method we used to change the pivot point. But considering the other transforms we’re about to apply to the cardboard geometry, it’s better to go through the geometry.attributes.position array and add the offset to the z-coordinates directly:

const baseGeometry = new THREE.PlaneGeometry(
    params.width,
    params.height,
);

const geometriesToMerge = [
    getLayerGeometry(- .5 * params.thickness),
    getLayerGeometry(0),
    getLayerGeometry(.5 * params.thickness)
];

function getLayerGeometry(offset) {
    const layerGeometry = baseGeometry.clone();
    const positionAttr = layerGeometry.attributes.position;
    for (let i = 0; i < positionAttr.count; i++) {
        const x = positionAttr.getX(i);
        const y = positionAttr.getY(i);
        const z = positionAttr.getZ(i) + offset;
        positionAttr.setXYZ(i, x, y, z);
    }
    return layerGeometry;
}

For merging the geometries we use the mergeBufferGeometries method. It’s pretty straightforward; just don’t forget to import the BufferGeometryUtils module into your project.
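
A sketch of the merge call (note that in newer Three.js versions the function is named mergeGeometries instead of mergeBufferGeometries; material here stands for whatever material the panel uses):

import * as BufferGeometryUtils from 'three/examples/jsm/utils/BufferGeometryUtils.js';

const cardboardGeometry = BufferGeometryUtils.mergeBufferGeometries(geometriesToMerge);
const panelMesh = new THREE.Mesh(cardboardGeometry, material);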

See the sandbox here.

Wavy flute

To turn the middle layer into the flute, we apply a sine wave to the plane. In fact, it’s the same z-coordinate offset, just calculated as a sine function of the x-attribute instead of a constant value.

function getLayerGeometry() {
    const baseGeometry = new THREE.PlaneGeometry(
        params.width,
        params.height,
        params.widthSegments,
        1
    );

    const offset = (v) => .5 * params.thickness * Math.sin(params.fluteFreq * v);
    const layerGeometry = baseGeometry.clone();
    const positionAttr = layerGeometry.attributes.position;
    for (let i = 0; i < positionAttr.count; i++) {
        const x = positionAttr.getX(i);
        const y = positionAttr.getY(i);
        const z = positionAttr.getZ(i) + offset(x);
        positionAttr.setXYZ(i, x, y, z);
    }
    layerGeometry.computeVertexNormals();

    return layerGeometry;
}

The z-offset is not the only change we need here. By default, PlaneGeometry is constructed from two triangles. As it has only one width segment and one height segment, there are only corner vertices. To apply the sine(x) wave, we need enough vertices along the x axis – enough resolution, you could say.

Also, don’t forget to update the normals after changing the geometry. It doesn’t happen automatically.

See the sandbox here.

I apply a wave with an amplitude equal to the cardboard thickness to the middle layer, and the same wave with a small amplitude to the front and back layers, just to give some texture to the box.
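
One way to express this is to let getLayerGeometry() take the offset as a function, so each layer defines both its base position and its wave amplitude. A sketch of the idea; the .01 amplitude for the covers is an assumption for illustration:

const geometriesToMerge = [
    // front and back covers: constant offset plus a subtle wave
    getLayerGeometry(x => -.5 * params.thickness + .01 * Math.sin(params.fluteFreq * x)),
    getLayerGeometry(x => .5 * params.thickness + .01 * Math.sin(params.fluteFreq * x)),
    // fluted middle layer: full-amplitude wave
    getLayerGeometry(x => .5 * params.thickness * Math.sin(params.fluteFreq * x))
];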

The surfaces and cuts look pretty cool. But we don’t want to see the wavy layer on the folding lines. At the same time, I want those lines to be visible before the folding happens:

To achieve this, we can “press” the cardboard on the selected edges of each panel.

We can do so by applying another modifier to the z-coordinate. This time it’s a power function of the x or y attribute (depending on the side we’re “pressing”). 

function getLayerGeometry() {
    const baseGeometry = new THREE.PlaneGeometry(
        params.width,
        params.height,
        params.widthSegments,
        params.heightSegments // to apply folding we need sufficient number of segments on each side
    );

    const offset = (v) => .5 * params.thickness * Math.sin(params.fluteFreq * v);
    const layerGeometry = baseGeometry.clone();
    const positionAttr = layerGeometry.attributes.position;
    for (let i = 0; i < positionAttr.count; i++) {
        const x = positionAttr.getX(i);
        const y = positionAttr.getY(i);
        let z = positionAttr.getZ(i) + offset(x); // add wave
        z = applyFolds(x, y, z); // add folds
        positionAttr.setXYZ(i, x, y, z);
    }
    layerGeometry.computeVertexNormals();

    return layerGeometry;
}

function applyFolds(x, y, z) {
    const folds = [ params.topFold, params.rightFold, params.bottomFold, params.leftFold ];
    const size = [ params.width, params.height ];
    let modifier = (c, size) => (1. - Math.pow(c / (.5 * size), params.foldingPow));

    // top edge: Z -> 0 when y -> plane height,
    // bottom edge: Z -> 0 when y -> 0,
    // right edge: Z -> 0 when x -> plane width,
    // left edge: Z -> 0 when x -> 0

    if ((x > 0 && folds[1]) || (x < 0 && folds[3])) {
        z *= modifier(x, size[0]);
    }
    if ((y > 0 && folds[0]) || (y < 0 && folds[2])) {
        z *= modifier(y, size[1]);
    }
    return z;
}
See the sandbox here.

The folding modifier is applied to all 4 edges of the box sides, to the bottom edges of the top flaps, and to the top edges of bottom flaps.

With this the box itself is finished.

There is room for optimization, and for some extra features, of course. For example, we can easily remove the fluted layer from the side panels as it’s never visible anyway. Let me also quickly describe how to add zooming buttons and a side image to our gorgeous box.

Zooming

The default behaviour of OrbitControls is zooming the scene on scroll. This means it conflicts with our scroll-driven animation, so we set the orbit.enableZoom property to false.

We can still zoom the scene by changing the camera.zoom property. We can use the same GSAP animation as before; just note that animating the camera’s property doesn’t automatically update the camera’s projection. According to the documentation, updateProjectionMatrix() must be called after any change of the camera parameters, so we have to call it on every frame of the transition:

// ...
// changing the zoomLevel variable with buttons

gsap.to(camera, {
    duration: .2,
    zoom: zoomLevel,
    onUpdate: () => {
        camera.updateProjectionMatrix();
    }
})

Side image

The image, or even a clickable link, can be added to the box side. It can be done with an additional plane mesh with a texture on it. It just needs to move together with the selected side of the box:

function updatePanelsTransform() {

   // ...

   // for copyright mesh to be placed on the front length side of the box
   copyright.position.copy(box.els.frontHalf.length.side.position);
   copyright.position.x += .5 * box.params.length - .5 * box.params.copyrightSize[0];
   copyright.position.y -= .5 * (box.params.depth - box.params.copyrightSize[1]);
   copyright.position.z += box.params.thickness;
}

As for the texture, we can import an image/video file, or use a canvas element created programmatically. In the final demo I use a canvas with a transparent background and two lines of underlined text. Turning the canvas into a Three.js texture lets me map it onto the plane:

function createCopyright() {
    
    // create canvas
    
    const canvas = document.createElement('canvas');
    canvas.width = box.params.copyrightSize[0] * 10;
    canvas.height = box.params.copyrightSize[1] * 10;
    const planeGeometry = new THREE.PlaneGeometry(box.params.copyrightSize[0], box.params.copyrightSize[1]);

    const ctx = canvas.getContext('2d');
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = '#000000';
    ctx.font = '22px sans-serif';
    ctx.textAlign = 'end';
    ctx.fillText('ksenia-k.com', canvas.width - 30, 30);
    ctx.fillText('codepen.io/ksenia-k', canvas.width - 30, 70);

    ctx.lineWidth = 2;
    ctx.beginPath();
    ctx.moveTo(canvas.width - 160, 35);
    ctx.lineTo(canvas.width - 30, 35);
    ctx.stroke();
    ctx.beginPath();
    ctx.moveTo(canvas.width - 228, 77);
    ctx.lineTo(canvas.width - 30, 77);
    ctx.stroke();

    // create texture

    const texture = new THREE.CanvasTexture(canvas);

    // create mesh mapped with texture

    copyright = new THREE.Mesh(planeGeometry, new THREE.MeshBasicMaterial({
        map: texture,
        transparent: true,
        opacity: .5
    }));
    scene.add(copyright);
}

To make the text lines clickable, we do the following:

  • use a Raycaster and the mousemove event to track the intersection between the cursor ray and the plane, and change the cursor appearance when the mesh is hovered
  • if a click happened while the mesh is hovered, check the uv coordinate of intersection
  • if the uv coordinate is in the top half of the mesh (uv.y > .5) we open the first link; if the uv coordinate is below .5, we go to the second link

The raycaster code is available in the full demo.
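
For reference, here is a minimal sketch of that click logic (the event wiring is an assumption; the full demo also handles hover state and cursor changes):

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

window.addEventListener('click', (e) => {
    // normalized device coordinates (-1 to 1)
    pointer.x = (e.clientX / window.innerWidth) * 2 - 1;
    pointer.y = -(e.clientY / window.innerHeight) * 2 + 1;
    raycaster.setFromCamera(pointer, camera);

    const [hit] = raycaster.intersectObject(copyright);
    if (hit) {
        // top half of the plane -> first link, bottom half -> second
        const url = hit.uv.y > .5
            ? 'https://ksenia-k.com'
            : 'https://codepen.io/ksenia-k';
        window.open(url, '_blank');
    }
});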


Thank you for scrolling this far!
Hope this tutorial can be useful for your Three.js projects ♡

Sketchy Pencil Effect with Three.js Post-Processing

In this tutorial, you’ll learn how to create a sketchy, pencil effect using Three.js post-processing. We’ll go through the steps for creating a custom post-processing render pass, implementing edge detection in WebGL, re-rendering the normal buffer to a render target, and tweaking the end result with generated and imported textures.

This is what the end result looks like, so let’s get started!

Post-processing in Three.js

Post-processing in Three.js is a way of applying effects to your rendered scene after it has been drawn. In addition to all the out-of-the-box post-processing effects provided by Three.js, it is also possible to add your own filters by creating custom render passes.

A custom render pass is essentially a function that takes in an image of the scene and returns a new image, with the desired effect applied. You can think of these render passes like layer effects in Photoshop—each applies a new filter based on the previous effect output. The resulting image is a combination of all the different effects.

Enabling post-processing in Three.js

To add post-processing to our scene, we need to set up our scene rendering to use an EffectComposer in addition to the WebGLRenderer. The effect composer stacks the post-processing effects one on top of the other, in the order in which they’re passed. If we want our rendered scene to be passed to the next effect, we need to add the RenderPass post-processing pass first.

Then, inside the tick function which starts our render loop, we call composer.render() instead of renderer.render(scene, camera).

const renderer = new THREE.WebGLRenderer()
// ... settings for the renderer are available in the Codesandbox below

const composer = new EffectComposer(renderer)
const renderPass = new RenderPass(scene, camera)

composer.addPass(renderPass)

function tick() {
	requestAnimationFrame(tick)
	composer.render()
}

tick()

There are two ways of creating a custom post-processing effect:

  1. Creating a custom shader and passing it to a ShaderPass instance, or
  2. Creating a custom render pass by extending the Pass class.

Because we want our post-processing effect to get more information than just uniforms and attributes, we will be creating a custom render pass.

Creating a custom render pass

While there isn’t much documentation currently available on how to write your own custom post-processing pass in Three.js, there are plenty of examples to learn from already inside the library. A custom pass inherits from the Pass class and has three methods: setSize, render, and dispose. As you may have guessed, we’ll mostly be focusing on the render method.

First we’ll start by creating our own PencilLinesPass that extends the Pass class and will later implement our own rendering logic.

import { Pass, FullScreenQuad } from 'three/examples/jsm/postprocessing/Pass'
import * as THREE from 'three'

export class PencilLinesPass extends Pass {
	constructor() {
		super()
	}

	render(
		renderer: THREE.WebGLRenderer,
		writeBuffer: THREE.WebGLRenderTarget,
		readBuffer: THREE.WebGLRenderTarget
	) {
		if (this.renderToScreen) {
			renderer.setRenderTarget(null)
		} else {
			renderer.setRenderTarget(writeBuffer)
			if (this.clear) renderer.clear()
		}
	}
}

As you can see, the render method takes in a WebGLRenderer and two WebGLRenderTargets, one for the write buffer and the other for the read buffer. In Three.js, render targets are basically textures to which we can render the scene, and they serve to send data between passes. The read buffer receives data from the previous render pass – in our case, the default RenderPass. The write buffer sends data out to the next render pass.

If renderToScreen is true, that means we want to send our buffer to the screen instead of to a render target. The renderer’s render target is set to null, so it defaults to the on-screen canvas.

At this point, we’re not actually rendering anything, not even the data coming in through the readBuffer. In order to get things rendered, we’ll need to create a FullScreenQuad and a shader material that will take care of rendering. The shader material gets rendered to the FullScreenQuad.

To test that everything is set up correctly, we can use the built-in CopyShader that will display whatever image we put into it – in this case, the readBuffer’s texture.

import { Pass, FullScreenQuad } from 'three/examples/jsm/postprocessing/Pass'
import { CopyShader } from 'three/examples/jsm/shaders/CopyShader'
import * as THREE from 'three'

export class PencilLinesPass extends Pass {
	fsQuad: FullScreenQuad
	material: THREE.ShaderMaterial

	constructor() {
		super()

		this.material = new THREE.ShaderMaterial(CopyShader)
		this.fsQuad = new FullScreenQuad(this.material)
	}

	dispose() {
		this.material.dispose()
		this.fsQuad.dispose()
	}

	render(
		renderer: THREE.WebGLRenderer,
		writeBuffer: THREE.WebGLRenderTarget,
		readBuffer: THREE.WebGLRenderTarget
	) {
		this.material.uniforms['tDiffuse'].value = readBuffer.texture

		if (this.renderToScreen) {
			renderer.setRenderTarget(null)
			this.fsQuad.render(renderer)
		} else {
			renderer.setRenderTarget(writeBuffer)
			if (this.clear) renderer.clear()
			this.fsQuad.render(renderer)
		}
	}
}

Note: we’re passing the uniform tDiffuse to the shader material. The CopyShader already has this uniform built in, and it represents the image to be displayed on the screen. If you’re writing your own ShaderPass, this uniform will be passed to your shader automatically.

All that’s left is to hook the custom render pass into the scene by adding it to the EffectComposer—after the RenderPass of course!

const renderPass = new RenderPass(scene, camera)
const pencilLinesPass = new PencilLinesPass()

composer.addPass(renderPass)
composer.addPass(pencilLinesPass)

View the Codesandbox example

Scene with a custom render pass and the CopyShader

Now that we have everything set up, we can actually get started with creating our special effect!

Sobel operator for creating outlines

We need to be able to tell the computer to detect lines based on our input image, in this case the rendered scene. The kind of edge detection we’ll be using is called the Sobel operator, and it only consists of a few steps.

The Sobel operator does edge detection by looking at the gradient of a small section of the image—essentially how sharp the transition from one value to another is. The image is broken down into smaller “kernels”, or 3px by 3px squares where the central pixel is the one currently being processed. The image below shows what this might look like: the red square in the center represents the current pixel being evaluated and the rest of the squares are its neighbors.

3px by 3px kernel

The weighted value for each neighbor is then calculated by taking the pixel’s value (brightness) and multiplying it by a weight based on its position relative to the pixel being evaluated. This is done twice, with weights biasing the gradient horizontally and vertically. The two results are combined into a gradient magnitude, and if it passes a certain threshold, we consider the pixel to represent an edge.

The horizontal and vertical gradients for the Sobel operator
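
For reference (the same kernels appear in the shader code below, stored in column-major order), the horizontal and vertical Sobel kernels and the combined gradient are:

$$ G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}, \qquad G = \sqrt{G_x^2 + G_y^2} $$

(The shader below actually skips the square root and works with the squared magnitude.)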

While the implementation of the Sobel operator follows the image representations above almost directly, it still takes time to grasp. Thankfully we don’t have to implement our own, as Three.js already provides us with working code in the SobelOperatorShader. We’ll copy this code over into our shader material.

Implementing the Sobel operator

Instead of the CopyShader, we’ll now need to add our own ShaderMaterial so that we have control over the vertex and fragment shaders, as well as the uniforms sent to those shaders.

// PencilLinesMaterial.ts
export class PencilLinesMaterial extends THREE.ShaderMaterial {
	constructor() {
		super({
			uniforms: {
				// we'll keep the naming convention here since the CopyShader
				// also used a tDiffuse texture for the currently rendered scene.
				tDiffuse: { value: null },
				// we'll pass in the canvas size here later
				uResolution: {
					value: new THREE.Vector2(1, 1)
				}
			},
			fragmentShader, // to be imported from another file
			vertexShader // to be imported from another file
		})
	}
}

We’ll get to the fragment and vertex shaders soon, but first we need to use our new shader material in the scene. We do this by swapping out the CopyShader. Don’t forget to pass the resolution – the canvas size – as the shader’s uniform. While outside the scope of this tutorial, it’s also important to update this uniform when the canvas resizes.

// PencilLinesPass.ts
export class PencilLinesPass extends Pass {
	fsQuad: FullScreenQuad
	material: PencilLinesMaterial

	constructor({ width, height }: { width: number; height: number }) {
		super()
		
		// change the material from the CopyShader to our new PencilLinesMaterial
		this.material = new PencilLinesMaterial() 
		this.fsQuad = new FullScreenQuad(this.material)

		// set the uResolution uniform with the current canvas's width and height
		this.material.uniforms.uResolution.value = new THREE.Vector2(width, height)
	}
}

Next, we can start on the vertex and fragment shaders.

The vertex shader doesn’t do much except set the gl_Position value and pass the uv attribute to the fragment shader. Because we’re rendering our image to a FullScreenQuad, the uv information corresponds to the position on the screen of any given fragment.

// vertex shader
varying vec2 vUv;

void main() {

    vUv = uv;

    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}

The fragment shader is a fair bit more complicated, so let’s break it down line by line. First we want to implement the Sobel operator using the implementation already provided by Three.js. The only difference is that we want to control how we calculate the value at each pixel, since we’ll be introducing line detection of the normal buffer as well.

float combinedSobelValue() {
    // kernel definition (in glsl matrices are filled in column-major order)
    const mat3 Gx = mat3(-1, -2, -1, 0, 0, 0, 1, 2, 1);// x direction kernel
    const mat3 Gy = mat3(-1, 0, 1, -2, 0, 2, -1, 0, 1);// y direction kernel

    // fetch the 3x3 neighbourhood of a fragment

    // first column
    float tx0y0 = getValue(-1, -1);
    float tx0y1 = getValue(-1, 0);
    float tx0y2 = getValue(-1, 1);

    // second column
    float tx1y0 = getValue(0, -1);
    float tx1y1 = getValue(0, 0);
    float tx1y2 = getValue(0, 1);

    // third column
    float tx2y0 = getValue(1, -1);
    float tx2y1 = getValue(1, 0);
    float tx2y2 = getValue(1, 1);

    // gradient value in x direction
    float valueGx = Gx[0][0] * tx0y0 + Gx[1][0] * tx1y0 + Gx[2][0] * tx2y0 +
    Gx[0][1] * tx0y1 + Gx[1][1] * tx1y1 + Gx[2][1] * tx2y1 +
    Gx[0][2] * tx0y2 + Gx[1][2] * tx1y2 + Gx[2][2] * tx2y2;

    // gradient value in y direction
    float valueGy = Gy[0][0] * tx0y0 + Gy[1][0] * tx1y0 + Gy[2][0] * tx2y0 +
    Gy[0][1] * tx0y1 + Gy[1][1] * tx1y1 + Gy[2][1] * tx2y1 +
    Gy[0][2] * tx0y2 + Gy[1][2] * tx1y2 + Gy[2][2] * tx2y2;

    // squared magnitude of the total gradient (the square root is skipped here)
    float G = (valueGx * valueGx) + (valueGy * valueGy);
    return clamp(G, 0.0, 1.0);
}

To the getValue function we pass the offset from the current pixel, identifying which pixel in the kernel we are looking at to get that value. For the time being, only the value of the diffuse buffer is evaluated; we’ll add the normal buffer in the next step.

float valueAtPoint(sampler2D image, vec2 coord, vec2 texel, vec2 point) {
    vec3 luma = vec3(0.299, 0.587, 0.114);

    return dot(texture2D(image, coord + texel * point).xyz, luma);
}

float diffuseValue(int x, int y) {
    return valueAtPoint(tDiffuse, vUv, vec2(1.0 / uResolution.x, 1.0 / uResolution.y), vec2(x, y)) * 0.6;
}

float getValue(int x, int y) {
    return diffuseValue(x, y);
}

The valueAtPoint function takes any texture (diffuse or normal) and returns the grayscale value at a specified point. The luma vector is used to calculate the brightness of a color, hence turning the color into grayscale. The implementation comes from glsl-luma.

Because the getValue function only takes the diffuse buffer into account, any edge in the scene will be detected, including edges created by both cast shadows and core shadows. This also means that edges we would intuitively expect, such as the outlines of objects, could get missed if they blend in too well with their surroundings. To capture those missing edges, we’ll add edge detection from the normal buffer next.

Finally, we call the Sobel operator in the main function like this:

void main() {
    float sobelValue = combinedSobelValue();
    sobelValue = smoothstep(0.01, 0.03, sobelValue);

    vec4 lineColor = vec4(0.32, 0.12, 0.2, 1.0);

    if (sobelValue > 0.1) {
        gl_FragColor = lineColor;
    } else {
        gl_FragColor = vec4(1.0);
    }
}

View the Codesandbox example

Rendered scene with edge detection using the Sobel operator

Creating a normal buffer rendering

For proper outlines, the Sobel operator is often applied to the normals of the scene and the depth buffer, so the outlines of objects are captured, but not lines within the object. Omar Shehata describes such a method in his excellent How to render outlines in WebGL tutorial. For the purposes of a sketchy pencil effect, we don’t need complete edge detection, but we do want to use normals for more complete edges and for sketchy shading effects later.

Since the normal is a vector that represents the direction of an object’s surface at each point, it often gets represented with a color to get an image with all the normal data from the scene. This image is the “normal buffer.”

In order to create a normal buffer, first we need a new render target in the PencilLinesPass constructor. We also need to create a single MeshNormalMaterial on the class, since we’ll be using this to override the scene’s default materials when rendering the normal buffer.

const normalBuffer = new THREE.WebGLRenderTarget(width, height)

normalBuffer.texture.format = THREE.RGBAFormat
normalBuffer.texture.type = THREE.HalfFloatType
normalBuffer.texture.minFilter = THREE.NearestFilter
normalBuffer.texture.magFilter = THREE.NearestFilter
normalBuffer.texture.generateMipmaps = false
normalBuffer.stencilBuffer = false
this.normalBuffer = normalBuffer

this.normalMaterial = new THREE.MeshNormalMaterial()

In order to render the scene inside the pass, the render pass actually needs a reference to the scene and to the camera. We’ll need to send those through the constructor of the render pass as well.

// PencilLinesPass.ts constructor
constructor({ ..., scene, camera}: { ...; scene: THREE.Scene; camera: THREE.Camera }) {
	super()
	this.scene = scene
	this.camera = camera
    ...
}

Inside the pass’s render method, we want to re-render the scene with the normal material overriding the default materials. We set the renderTarget to the normalBuffer and render the scene with the WebGLRenderer as we normally would. The only difference is that instead of rendering to the screen with the scene’s default materials, the renderer renders to our render target with the normal material. Then we pass the normalBuffer.texture to the shader material.

renderer.setRenderTarget(this.normalBuffer)
const overrideMaterialValue = this.scene.overrideMaterial

this.scene.overrideMaterial = this.normalMaterial
renderer.render(this.scene, this.camera)
this.scene.overrideMaterial = overrideMaterialValue

this.material.uniforms.uNormals.value = this.normalBuffer.texture
this.material.uniforms.tDiffuse.value = readBuffer.texture
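
For this to work, the material also needs the new uniform declared. A sketch of the updated constructor, extending the PencilLinesMaterial from before:

// PencilLinesMaterial.ts
export class PencilLinesMaterial extends THREE.ShaderMaterial {
	constructor() {
		super({
			uniforms: {
				tDiffuse: { value: null },
				// the normal buffer rendered inside the pass
				uNormals: { value: null },
				uResolution: {
					value: new THREE.Vector2(1, 1)
				}
			},
			fragmentShader,
			vertexShader
		})
	}
}

Later, the cloud texture used for line distortion will need a uTexture uniform added in the same way.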

If at this point you were to set the gl_FragColor to the value of the normal buffer with texture2D(uNormals, vUv); this would be the result:

Normal buffer of the current scene

Instead, inside the custom material’s fragment shader, we want to modify the getValue function to include the Sobel operator value from the normal buffer as well.

float normalValue(int x, int y) {
    return valueAtPoint(uNormals, vUv, vec2(1.0 / uResolution.x, 1.0 / uResolution.y), vec2(x, y)) * 0.3;
}

float getValue(int x, int y) {
    return diffuseValue(x, y) + normalValue(x, y);
}

The result will look similar to the previous step, but we will be able to add additional noise and sketchiness with this normal data in the next step.

View the Codesandbox example

Sobel operator applied to the diffuse and normal buffers

Adding generated and textured noise for shading and squiggles

There are two ways to bring noise into the post-processing effect at this point:

  1. By procedurally generating the noise within the shader, or
  2. By using an existing image with noise and applying it as a texture.

Both provide different levels of flexibility and control. For the noise function, I’ve gone with Inigo Quilez’s gradient noise implementation, since it provides nice uniformity in the noise when applied to the “shading” effect.

This noise function is called when getting the value of the Sobel operator and is specifically applied to the normal value, so the getValue function in the fragment shader changes like so:

float getValue(int x, int y) {
    float noiseValue = noise(gl_FragCoord.xy);
    noiseValue = noiseValue * 2.0 - 1.0;
    noiseValue *= 10.0;

    return diffuseValue(x, y) + normalValue(x, y) * noiseValue;
}

The result is a textured pencil line and stippled effect on object curves where the normal vector values change. Notice that flat objects, like the plane, do not get these effects, since they don’t have any variation in normal values.

The next and final step of this effect is to add distortion to the lines. For this I used a texture file created in Photoshop using the Render Clouds effect.

Generated clouds texture created in Photoshop

The cloud texture is passed to the shader through a uniform, the same way the diffuse and normal buffers are. Once the shader has access to the texture, we can sample it for each fragment and use the value to offset the location we read from in the buffer. Essentially, we get the squiggly line effect by distorting the image we’re reading from, not by drawing to a different place. Because the texture’s noise is smooth, the lines don’t come out jagged and irregular.

float normalValue(int x, int y) {
    float cutoff = 50.0;
    float offset = 0.5 / cutoff;
    float noiseValue = clamp(texture(uTexture, vUv).r, 0.0, cutoff) / cutoff - offset;

    return valueAtPoint(uNormals, vUv + noiseValue, vec2(1.0 / uResolution.x, 1.0 / uResolution.y), vec2(x, y)) * 0.3;
}

You can also play around with how the effect is applied to each buffer individually. This can lead to lines being offset from one another, giving an even better hand-drawn effect.

View the Codesandbox example

Final effect including normal buffer based “shading” and line distortion

Conclusion

There are many techniques to create hand-drawn or sketchy effects in 3D; this tutorial covers just a few. From here, there are multiple ways to go forward. You could adjust the line thickness by modulating the threshold of what is considered an edge based on a noise texture. You could also apply the Sobel operator to the depth buffer, ignoring the diffuse buffer entirely, to get outlined objects without outlining shadows. You could add generated noise based on lighting information in the scene instead of the normals of the objects. The possibilities are endless, and I hope this tutorial has inspired you to pursue them!

Conway’s Game Of Life – Cellular Automata and Renderbuffers in Three.js

Simple rules can produce structured, complex systems. And beautiful images often follow. This is the core idea behind the Game of Life, a cellular automaton devised by British mathematician John Horton Conway in 1970. Often called just ‘Life’, it’s probably one of the most popular and well known examples of cellular automata. There are many examples and tutorials on the web that go over implementing it, like this one by Daniel Shiffman.

But in many of these examples this computation runs on the CPU, limiting the possible complexity and amount of cells in the system. So this article will go over implementing the Game of Life in WebGL which allows GPU-accelerated computations (= way more complex and detailed images). Writing WebGL on its own can be very painful so it’s going to be implemented using Three.js, a WebGL graphics library. This is going to require some advanced rendering techniques, so some basic familiarity with Three.js and GLSL would be helpful in order to follow along.

Cellular Automata

Conway’s Game of Life is what’s called a cellular automaton, and it makes sense to consider a more abstract view of what that means. This relates to automata theory in theoretical computer science, but really it’s just about creating some simple rules. A cellular automaton is a model of a system that consists of automata, called cells, that are interlinked via some simple logic which allows modelling complex behaviour. A cellular automaton has the following characteristics:

  • Cells live on a grid which can be 1D or higher-dimensional (in our Game of Life it’s a 2D grid of pixels)
  • Each cell has only one current state. Our example only has two possibilities: 0 or 1 / dead or alive
  • Each cell has a neighbourhood, a list of adjacent cells

The basic working principle of a cellular automaton usually involves the following steps:

  • An initial (global) state is selected by assigning a state for each cell.
  • A new generation is created, according to some fixed rule that determines the new state of each cell in terms of:
    • The current state of the cell
    • The states of cells in its neighbourhood

The state of a cell together with its neighbourhood determines the state in the next generation.

As already mentioned, the Game of Life is based on a 2D grid. In its initial state there are cells which are either alive or dead. We generate the next generation of cells according to only four rules:

  • Any live cell with fewer than two live neighbours dies, as if by underpopulation.
  • Any live cell with two or three live neighbours lives on to the next generation.
  • Any live cell with more than three live neighbours dies, as if by overpopulation.
  • Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

Conway’s Game of Life uses a Moore neighbourhood, which is composed of the current cell and the eight cells that surround it, so those are the ones we’ll be looking at in this example. There are many variations and possibilities to this, and Life is actually Turing complete, but this post is about implementing it in WebGL with Three.js, so we will stick to a rather basic version – but feel free to research more.
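
The four rules boil down to a tiny state function. A plain-JavaScript sketch of the rule logic (the WebGL version later applies the same logic per pixel in a fragment shader):

// alive: 0 or 1, liveNeighbours: 0..8 (Moore neighbourhood)
function nextState(alive, liveNeighbours) {
	if (alive === 1) {
		// survives with two or three live neighbours, dies otherwise
		return liveNeighbours === 2 || liveNeighbours === 3 ? 1 : 0;
	}
	// a dead cell becomes alive with exactly three live neighbours
	return liveNeighbours === 3 ? 1 : 0;
}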

Three.js

Now with most of the theory out of the way, we can finally start implementing the Game of Life.

Three.js is a pretty high-level WebGL library, but it lets you decide how deep you want to go. So it provides a lot of options to control the way scenes are structured and rendered and allows users to get close to the WebGL API by writing custom shaders in GLSL and passing Buffer Attributes.

In the Game of Life each cell needs information about its neighbourhood. But in WebGL all fragments are processed simultaneously by the GPU, so when a fragment shader is in the midst of processing one pixel, there’s no way it can directly access information about any other fragments. But there’s a workaround. In a fragment shader, if we pass a texture, we can easily query the neighbouring pixels in the texture as long as we know its width and height. This idea allows all kinds of post-processing effects to be applied to scenes.

We’ll start with the initial state of the system. In order to get any interesting results, we need non-uniform starting conditions. In this example we’ll place cells randomly on the screen, so we’ll render a regular noise texture for the first frame. Of course we could initialise with another type of noise, but this is the easiest way to get started.

/**
 * Sizes
 */
const sizes = {
	width: window.innerWidth,
	height: window.innerHeight
};

/**
 * Scenes
 */
//Scene will be rendered to the screen
const scene = new THREE.Scene();

/**
 * Textures
 */
//The generated noise texture
const dataTexture = createDataTexture();

/**
 * Meshes
 */
// Geometry
const geometry = new THREE.PlaneGeometry(2, 2);

//Screen resolution
const resolution = new THREE.Vector3(sizes.width, sizes.height, window.devicePixelRatio);

//Screen Material
const quadMaterial = new THREE.ShaderMaterial({
	uniforms: {
		uTexture: { value: dataTexture },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	fragmentShader: document.getElementById('fragmentShader').textContent
});

// Meshes
const mesh = new THREE.Mesh(geometry, quadMaterial);
scene.add(mesh);

/**
 * Animate
 */

const tick = () => {
    //The texture will get rendered to the default framebuffer
	renderer.render(scene, camera);

	// Call tick again on the next frame
	window.requestAnimationFrame(tick);
};

tick();

This code simply initialises a Three.js scene and adds a 2D plane that fills the screen (the snippet doesn’t show all the basic boilerplate code). The plane is supplied with a ShaderMaterial that, for now, does nothing but display a texture in its fragment shader. In this code we generate a texture using a DataTexture. It would be possible to load an image as a texture too; in that case we would need to keep track of the exact texture size. Since the scene will take up the entire screen, creating a texture with the viewport dimensions seems like the simpler solution for this tutorial. Currently the scene will be rendered to the default framebuffer (the device screen).
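The createDataTexture() helper isn’t shown above. A minimal sketch of it could look like this, filling a THREE.DataTexture with random black or white pixels at viewport resolution (the actual implementation in the demo may differ):

function createDataTexture() {
	// One RGBA float quadruple per pixel
	const size = sizes.width * sizes.height;
	const data = new Float32Array(size * 4);

	for (let i = 0; i < size; i++) {
		const stride = i * 4;
		// Roughly half the cells start out alive
		const value = Math.random() < 0.5 ? 1.0 : 0.0;
		data[stride] = value;   // R – the channel the shader will read
		data[stride + 1] = 0.0; // G
		data[stride + 2] = 0.0; // B
		data[stride + 3] = 1.0; // A
	}

	const texture = new THREE.DataTexture(
		data, sizes.width, sizes.height, THREE.RGBAFormat, THREE.FloatType
	);
	texture.needsUpdate = true;
	return texture;
}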

See the Pen feTurbulence: baseFrequency by Jason Andrew (@jasonandrewth) on CodePen.

Framebuffers

When writing a WebGL application, whether using the vanilla API or a higher level library like Three.js, after setting up the scene the results are rendered to the default WebGL framebuffer, which is the device screen (as done above).

But there’s also the option to create framebuffers that render off-screen, to image buffers on the GPU’s memory. Those can then be used just like a regular texture for whatever purpose. This idea is used in WebGL when it comes to creating advanced post-processing effects such as depth-of-field, bloom, etc. by applying different effects on the scene once rendered. In Three.js we can do that by using THREE.WebGLRenderTarget. We’ll call our framebuffer renderBufferA.

/**
 * Scenes
 */
//Scene will be rendered to the screen
const scene = new THREE.Scene();
//Create a second scene that will be rendered to the off-screen buffer
const bufferScene = new THREE.Scene();

/**
 * Render Buffers
 */
// Create a new framebuffer we will use to render to
// the GPU memory
let renderBufferA = new THREE.WebGLRenderTarget(sizes.width, sizes.height, {
	// Nearest filtering avoids interpolation between cells and the float type retains precision.
	minFilter: THREE.NearestFilter,
	magFilter: THREE.NearestFilter,
	format: THREE.RGBAFormat,
	type: THREE.FloatType,
	stencilBuffer: false
});

//Screen Material
const quadMaterial = new THREE.ShaderMaterial({
	uniforms: {
        //Now the screen material won't get a texture initially
        //The idea is that this texture will be rendered off-screen
		uTexture: { value: null },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	fragmentShader: document.getElementById('fragmentShader').textContent
});

//off-screen Framebuffer will receive a new ShaderMaterial
// Buffer Material
const bufferMaterial = new THREE.ShaderMaterial({
	uniforms: {
		uTexture: { value: dataTexture },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	//For now this fragment shader does the same as the one used above
	fragmentShader: document.getElementById('fragmentShaderBuffer').textContent
});

/**
 * Animate
 */

const tick = () => {
	// Explicitly set renderBufferA as the framebuffer to render to
	//the output of this rendering pass will be stored in the texture associated with renderBufferA
	renderer.setRenderTarget(renderBufferA);
	// This will render the off-screen texture
	renderer.render(bufferScene, camera);

	mesh.material.uniforms.uTexture.value = renderBufferA.texture;
	//This will set the default framebuffer (i.e. the screen) back to being the output
	renderer.setRenderTarget(null);
	//Render to screen
	renderer.render(scene, camera);

	// Call tick again on the next frame
	window.requestAnimationFrame(tick);
};

tick();

Now there’s nothing to be seen because, while the scene is rendered, it’s rendered to an off-screen buffer.

See the Pen feTurbulence: baseFrequency by Jason Andrew (@jasonandrewth) on CodePen.

We’ll need to access it as a texture in the animation loop to render the generated texture from the previous step to the fullscreen plane on our screen.

//In the animation loop before rendering to the screen
mesh.material.uniforms.uTexture.value = renderBufferA.texture;

And that’s all it takes to get back the noise, except now it’s rendered off-screen and the output of that render is used as a texture in the framebuffer that renders on to the screen.

See the Pen feTurbulence: baseFrequency by Jason Andrew (@jasonandrewth) on CodePen.

Ping-Pong 🏓

Now that there’s data rendered to a texture, the shaders can be used to perform general computation using the texture data. Within GLSL, textures are read-only, and we can’t write directly to our input textures, we can only “sample” them. Using the off-screen framebuffer, however, we can use the output of the shader itself to write to a texture. Then, if we can chain together multiple rendering passes, the output of one rendering pass becomes the input for the next pass. So we create two off-screen buffers. This technique is called ping pong buffering. We create a kind of simple ring buffer, where after every frame we swap the off-screen buffer that is being read from with the off-screen buffer that is being written to. We can then use the off-screen buffer that was just written to, and display that to the screen. This lets us perform iterative computation on the GPU, which is useful for all kinds of effects.

To achieve it in THREE.js, first we need to create a second framebuffer. We will call it renderBufferB. Then the ping-pong technique is actually performed in the animation loop.

//Add another framebuffer
let renderBufferB = new THREE.WebGLRenderTarget(
    sizes.width,
    sizes.height,
    {
        minFilter: THREE.NearestFilter,
        magFilter: THREE.NearestFilter,
        format: THREE.RGBAFormat,
        type: THREE.FloatType,
        stencilBuffer: false
    }
);

//At the end of each animation loop:

// Ping-pong the framebuffers by swapping them
// at the end of each frame render.
// Prepare for the next cycle by swapping renderBufferA and renderBufferB
// so that the previous frame's *output* becomes the next frame's *input*
const temp = renderBufferA;
renderBufferA = renderBufferB;
renderBufferB = temp;
//output becomes input
bufferMaterial.uniforms.uTexture.value = renderBufferB.texture;

Now that the render buffers are swapped every frame, the result will look the same, but you can verify the swap by logging the textures that get passed to the on-screen plane each frame, for example. Here’s a more in-depth look at ping pong buffers in WebGL.

See the Pen feTurbulence: baseFrequency by Jason Andrew (@jasonandrewth) on CodePen.

Game Of Life

From here it’s about implementing the actual Game of Life. Since the rules are so simple, the resulting code isn’t very complicated either, and there are many good resources that go through coding it up, so I’ll only go over the key ideas. All the logic for this will happen in the fragment shader that gets rendered off-screen, which will provide the texture for the next frame.

As described earlier, we want to access neighbouring fragments (or pixels) via the texture that’s passed in. This is achieved with a nested for loop in the GetNeighbours function. We skip our current cell and check the 8 surrounding pixels by sampling the texture at an offset. Then we check whether the pixel’s r value is above 0.5, which means it’s alive, and if so, increment the count of live neighbours.

//GLSL in fragment shader
precision mediump float;
//The input texture
uniform sampler2D uTexture;
//Screen resolution
uniform vec3 uResolution;

// uv coordinates passed from vertex shader
varying vec2 vUvs;

float GetNeighbours(vec2 p) {
    float count = 0.0;

    for(float y = -1.0; y <= 1.0; y++) {
        for(float x = -1.0; x <= 1.0; x++) {

            if(x == 0.0 && y == 0.0)
                continue;

            // Scale the offset down
            vec2 offset = vec2(x, y) / uResolution.xy;
            // Apply offset and sample texture
            vec4 lookup = texture2D(uTexture, p + offset);
             // Accumulate the result
            count += lookup.r > 0.5 ? 1.0 : 0.0;
        }
    }

    return count;
}

Based on this count we can set the rules. (Note how we can use the standard UV coordinates here because the Texture we created in the beginning fills the screen. If we had initialised with an image texture of arbitrary dimensions, we’d need to scale coordinates according to its exact pixel size to get a value between 0.0 and 1.0)

//In the main function
vec3 color = vec3(0.0);

float neighbors = GetNeighbours(vUvs);

bool alive = texture2D(uTexture, vUvs).x > 0.5;

if(alive && (neighbors == 2.0 || neighbors == 3.0)) {
    //Any live cell with two or three live neighbours lives on to the next generation.
    color = vec3(1.0, 0.0, 0.0);
} else if(!alive && neighbors == 3.0) {
    //Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
    color = vec3(1.0, 0.0, 0.0);
}

//In all other cases the cell remains dead or dies, so color stays at 0
gl_FragColor = vec4(color, 1.0);

And that’s basically it, a working Game of Life using only GPU shaders, written in Three.js. The texture will get sampled every frame via the ping pong buffers, which creates the next generation in our cellular automaton, so no additional variable tracking the time or frames needs to be passed for it to animate.

See the Pen feTurbulence: baseFrequency by Jason Andrew (@jasonandrewth) on CodePen.

In summary, we first went over the basic ideas behind cellular automata, a very powerful model of computation used to generate complex behaviour. Then we implemented one in Three.js using framebuffers and ping pong buffering. There are now near endless possibilities for taking it further; try adding different rules or mouse interaction, for example.

3D Typing Effects with Three.js

In this tutorial we’ll explore various animated WebGL text typing effects. We will mostly be using Three.js, but not the whole tutorial relies on features specific to this library.
But who doesn’t love Three.js though ❤

This tutorial is aimed at developers who are familiar with the basic concepts of WebGL.

The main idea is to create a JavaScript template that takes a keyboard input and draws the text on the screen in some fancy way. The effects we will build today are all about composing a text shape with a big number of repeating objects. We will cover the following steps:

  • Sampling text on Canvas (generating 2D coordinates)
  • Setting up the scene and placing the Canvas element
  • Generating particles in 3D space
  • Turning particles to an instanced mesh
  • Replacing a static string with some user input
  • Basic animation
  • Typing-related animation
  • Generating the visuals: clouds, bubbles, flowers, eyeballs

Text sampling

In the following we will fill a text shape with some particles.

First, let’s think about what a 3D text shape is. In general, a text mesh is nothing but a 2D shape being extruded. So we don’t need to sample the 3rd coordinate – we can just use X/Y coordinates with Z being randomly generated within the text depth (although we’re not about to use the Z coordinate much today).

One of the ways to generate 2D coordinates inside the shape is with Canvas sampling. So let’s create a <canvas> element, apply some font-related styles to it and make sure the size of <canvas> is big enough for the text to fit (extra space is okay).

// Settings
const fontName = 'Verdana';
const textureFontSize = 100;

// String to show
let string = 'Some text' + '\n' + 'to sample' + '\n' + 'with Canvas';

// Create canvas to sample the text
const textCanvas = document.createElement('canvas');
const textCtx = textCanvas.getContext('2d');
document.body.appendChild(textCanvas);

// ---------------------------------------------------------------

sampleCoordinates();

// ---------------------------------------------------------------

function sampleCoordinates() {

    // Parse text
    const lines = string.split(`\n`);
    const linesMaxLength = [...lines].sort((a, b) => b.length - a.length)[0].length;
    const wTexture = textureFontSize * .7 * linesMaxLength;
    const hTexture = lines.length * textureFontSize;

    // ...
}

With the Canvas API you can set all the font styling pretty much like in CSS. Custom fonts can be used as well, but I’m using good old Verdana today.

Once the style is set, we draw the text (or any other graphics!) on the <canvas>…

function sampleCoordinates() {

    // Parse text
    // ...

    // Draw text
    const linesNumber = lines.length;
    textCanvas.width = wTexture;
    textCanvas.height = hTexture;
    textCtx.font = '100 ' + textureFontSize + 'px ' + fontName;
    textCtx.fillStyle = '#2a9d8f';
    textCtx.clearRect(0, 0, textCanvas.width, textCanvas.height);
    for (let i = 0; i < linesNumber; i++) {
        textCtx.fillText(lines[i], 0, (i + .8) * hTexture / linesNumber);
    }

    // ...
}

… to be able to get imageData from it.

The ImageData object contains a one-dimensional array with RGBA data for every pixel. Knowing the size of the canvas, we can go through the array and check if the given X/Y coordinate matches the color of text or the color of the background.

Since our canvas doesn’t have anything but colored text on the unset (transparent black) background, we can check any of the four RGBA bytes against a condition as simple as “bigger than zero”.

function sampleCoordinates() {
    // Parse text
    // ...
    // Draw text
    // ...
    // Sample coordinates
    textureCoordinates = [];
    const samplingStep = 4;
    if (wTexture > 0) {
        const imageData = textCtx.getImageData(0, 0, textCanvas.width, textCanvas.height);
        for (let i = 0; i < textCanvas.height; i += samplingStep) {
            for (let j = 0; j < textCanvas.width; j += samplingStep) {
                // Checking if R-channel is not zero since the background RGBA is (0,0,0,0)
                if (imageData.data[(j + i * textCanvas.width) * 4] > 0) {
                    textureCoordinates.push({
                        x: j,
                        y: i
                    })
                }
            }
        }
    }
}

There are lots of things you can do with the sampling function: change the sampling step, add some randomness, apply an outline stroke to the text, and more. Below we’ll keep using only the simplest sampling. To check the result, we can add a second <canvas> and draw a dot for each of the sampled textureCoordinates.
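That debug drawing could be as simple as this, assuming a second canvas of the same size with a 2D context called debugCtx (the name is just for illustration):

// Draw one small dot per sampled coordinate on a same-sized debug canvas
debugCtx.clearRect(0, 0, textCanvas.width, textCanvas.height);
debugCtx.fillStyle = '#2a9d8f';
textureCoordinates.forEach(c => {
    debugCtx.fillRect(c.x, c.y, 2, 2);
});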

It works 🙂

The Three.js scene

Let’s set up a basic Three.js scene and place a Plane object on it. We can use the text sampling Canvas from the previous step as a color map for the Plane.
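Here is a minimal sketch of such a plane, assuming the usual scene, camera and renderer boilerplate is in place; THREE.CanvasTexture takes care of uploading the canvas contents to the GPU:

// Use the text canvas as a color map on a plane
const canvasTexture = new THREE.CanvasTexture(textCanvas);
const planeGeometry = new THREE.PlaneGeometry(10, 10);
const planeMaterial = new THREE.MeshBasicMaterial({ map: canvasTexture });
const plane = new THREE.Mesh(planeGeometry, planeMaterial);
scene.add(plane);

// Whenever the canvas is redrawn, flag the texture for re-upload
canvasTexture.needsUpdate = true;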

Generating the particles

We can generate 3D coordinates with the very same sampling function. X/Y are gathered from the Canvas and for the Z coordinate we can take a random number.

The easiest way to visualize this set of coordinates would be a particle system known as THREE.Points.

function createParticles() {
    const geometry = new THREE.BufferGeometry();
    const material = new THREE.PointsMaterial({
        color: 0xff0000,
        size: 2
    });
    const vertices = [];
    for (let i = 0; i < textureCoordinates.length; i ++) {
        vertices.push(textureCoordinates[i].x, textureCoordinates[i].y, 5 * Math.random());
    }
    geometry.setAttribute('position', new THREE.Float32BufferAttribute(vertices, 3));
    const particles = new THREE.Points(geometry, material);
    scene.add(particles);
}

Somehow it works ¯\_(ツ)_/¯

Obviously, we need to flip the Y coordinate for each particle and center the whole text.

To do both, we need to know the bounding box of our text. There are various ways to measure the box using the canvas API or Three.js functions. But as a temporary solution, we just take max X and Y coordinates as width and height of the text.

function refreshText() {
    sampleCoordinates();
    
    // Gather width and height of the bounding box
    const maxX = textureCoordinates.map(v => v.x).sort((a, b) => (b - a))[0];
    const maxY = textureCoordinates.map(v => v.y).sort((a, b) => (b - a))[0];
    stringBox.wScene = maxX;
    stringBox.hScene = maxY;

    createParticles();
}

For each point, the Y coordinate becomes boxTotalHeight - Y.

Shifting the whole particle system by half-width and half-height of the box solves the centering issue.

function createParticles() {
    
    // ...
    for (let i = 0; i < textureCoordinates.length; i ++) {
       // Turning Y coordinate to stringBox.hScene - Y
       vertices.push(textureCoordinates[i].x, stringBox.hScene - textureCoordinates[i].y, 5 * Math.random());
    }
    // ...
    
    // Centralizing the text
    particles.position.x = -.5 * stringBox.wScene;
    particles.position.y = -.5 * stringBox.hScene;
}

Until now, we were using pixel coordinates gathered from the text canvas directly on the 3D scene. But let’s say we need the 3D text to have a height equal to 10 units. If we set 10 as the font size, the canvas resolution would be too low to do proper sampling. To avoid that (and to be more flexible with the particle density), we can add an additional scaling factor: the value we multiply the canvas coordinates by before using them in 3D space.

// Settings
// ...
const textureFontSize = 30;
const fontScaleFactor = .3;

// ...

function refreshText() {

    // ...

    textureCoordinates = textureCoordinates.map(c => {
        return { x: c.x * fontScaleFactor, y: c.y * fontScaleFactor }
    });
    
    // ...
}

At this point, we can also remove the Plane object. We keep using the canvas to draw the text and sample coordinates but we don’t need to turn it to a texture and put it on the scene.

Switching to instanced mesh

Of course there are many cool things we can do with THREE.Points but our next step is turning the particles into THREE.InstancedMesh.

The main limitation of THREE.Points is the particle size. THREE.PointsMaterial is based on WebGL’s gl_PointSize, which can be rendered with a maximum pixel size of around 50 to 100, depending on your video card. So even if we need our particles to be as simple as planes, we sometimes can’t use THREE.Points due to this limitation. You may think of THREE.Sprite as an alternative, but (surprisingly) an instanced mesh gives us much better performance with a big (10k+) number of particles.

Plus, if we want to use 3D shapes as particles, THREE.InstancedMesh is the only choice.

There is a well-known approach to work with THREE.InstancedMesh:

  1. Create an instanced mesh with a known number of instances. In our case, the number of instances is the length of our coordinates array.
function createInstancedMesh() {
    instancedMesh = new THREE.InstancedMesh(particleGeometry, particleMaterial, textureCoordinates.length);
    scene.add(instancedMesh);

    // centralize it in the same way as before
    instancedMesh.position.x = -.5 * stringBox.wScene;
    instancedMesh.position.y = -.5 * stringBox.hScene;
}
  2. Add the geometry and material to be used for each instance. I use a doughnut shape known as THREE.TorusGeometry and THREE.MeshNormalMaterial.
function init() {
    // Create scene and text canvas
    // ...

    // Instanced geometry and material
    particleGeometry = new THREE.TorusGeometry(.1, .05, 16, 50);
    particleMaterial = new THREE.MeshNormalMaterial({ });

    // ...
}
  3. Create a dummy object that helps us generate a 4×4 transform matrix for each particle. It doesn’t need to be a part of the scene.
function init() {
    // Create scene, text canvas, instanced geometry and material
    // ...

    dummy = new THREE.Object3D();
}
  4. Apply the transform matrix to each instance with the .setMatrixAt method
function updateParticlesMatrices() {
    let idx = 0;
    textureCoordinates.forEach(p => {

        // we apply sampled coordinates like before + some random rotation
        dummy.rotation.set(2 * Math.random(), 2 * Math.random(), 2 * Math.random());
        dummy.position.set(p.x, stringBox.hScene - p.y, Math.random());

        dummy.updateMatrix();
        instancedMesh.setMatrixAt(idx, dummy.matrix);

        idx ++;
    })
    instancedMesh.instanceMatrix.needsUpdate = true;
}

Listening to the keyboard

So far, the string value was hard-coded. We want it to be dynamic and contain the user input.

There are many ways to listen to the keyboard: working directly with keyup/keydown events, using the HTML input element as a proxy, etc. I ended up with a <div> element that has a contenteditable attribute set. Compared to an <input> or a <textarea>, it’s more painful to parse the multi-line string from an editable <div>. But it’s much easier to get accurate pixel values for the cursor position and the text bounding box.

I won’t go too much into details here. The main idea is to keep the editable <div> focused all the time so that we keep track of whatever the user types there.

<div id="text-input" contenteditable="true" onblur="this.focus()" autofocus></div>

Using the keyup event we parse the string and get the width and height of stringBox from the contenteditable <div>, and then refresh the instanced mesh.

document.addEventListener('keyup', () => {
    handleInput();
    refreshText();
});

While parsing, we replace the inner tags with new lines (this part is specific for <div contenteditable>), and do a few things for usability like disabling empty new lines above and below the text.

Please note that <div contenteditable> and text canvas should have the same CSS properties (font, font size, etc). With the same styles applied, the text is rendered in the very same way on both elements. With that in place, we can take the pixel values from <div contenteditable> (text width, height, cursor position) and use them for the canvas.

const textInputEl = document.querySelector('#text-input');
textInputEl.style.fontSize = textureFontSize + 'px';
textInputEl.style.font = '100 ' + textureFontSize + 'px ' + fontName;
textInputEl.style.lineHeight = 1.1 * textureFontSize + 'px'; 
// ...
function handleInput() {
    if (isNewLine(textInputEl.firstChild)) {
        textInputEl.firstChild.remove();
    }
    if (isNewLine(textInputEl.lastChild)) {
        if (isNewLine(textInputEl.lastChild.previousSibling)) {
            textInputEl.lastChild.remove();
        }
    }
    string = textInputEl.innerHTML
        .replaceAll("<p>", "\n")
        .replaceAll("</p>", "")
        .replaceAll("<div>", "\n")
        .replaceAll("</div>", "")
        .replaceAll("<br>", "")
        .replaceAll("<br/>", "")
        .replaceAll("&nbsp;", " ");
    stringBox.wTexture = textInputEl.clientWidth;
    stringBox.wScene = stringBox.wTexture * fontScaleFactor;
    stringBox.hTexture = textInputEl.clientHeight;
    stringBox.hScene = stringBox.hTexture * fontScaleFactor;
    function isNewLine(el) {
        if (el) {
            if (el.tagName) {
                if (el.tagName.toUpperCase() === 'DIV' || el.tagName.toUpperCase() === 'P') {
                    if (el.innerHTML === '<br>' || el.innerHTML === '</br>') {
                        return true;
                    }
                }
            }
        }
        return false
    }
}

Once we have the string and the stringBox, we update the instanced mesh.

function refreshText() {
    sampleCoordinates();
    textureCoordinates = textureCoordinates.map(c => {
        return { x: c.x * fontScaleFactor, y: c.y * fontScaleFactor }
    });
    // This part can be removed as we take text size from editable <div>
    // const sortedX = textureCoordinates.map(v => v.x).sort((a, b) => (b - a))[0];
    // const sortedY = textureCoordinates.map(v => v.y).sort((a, b) => (b - a))[0];
    // stringBox.wScene = sortedX;
    // stringBox.hScene = sortedY;
    recreateInstancedMesh();
    updateParticlesMatrices();
}

Coordinate sampling works the same as before, with one difference: we can now create the canvas with the exact text size, with no extra space to sample.

function sampleCoordinates() {
    const lines = string.split(`\n`);
    // This part can be removed as we take text size from editable <div>
    // const linesMaxLength = [...lines].sort((a, b) => b.length - a.length)[0].length;
    // stringBox.wTexture = textureFontSize * .7 * linesMaxLength;
    // stringBox.hTexture = lines.length * textureFontSize;
    textCanvas.width = stringBox.wTexture;
    textCanvas.height = stringBox.hTexture;
    // ...
}

We can’t increase the number of instances of an existing mesh, so the mesh has to be recreated every time the text is updated. The text centering and the instance transforms, though, are done exactly as before.

// function createInstancedMesh() {
function recreateInstancedMesh() {

    // Now we need to remove the old Mesh and create a new one every refreshText() call
    scene.remove(instancedMesh);
    instancedMesh = new THREE.InstancedMesh(particleGeometry, particleMaterial, textureCoordinates.length);

    // ...
}

function updateParticlesMatrices() {

    // same as before
    //...

}

Since our text is dynamic and it can get pretty long, let’s make sure the instanced mesh fits the screen:

function refreshText() {

    // ...

    makeTextFitScreen();
}

function makeTextFitScreen() {
    const fov = camera.fov * (Math.PI / 180);
    const fovH = 2 * Math.atan(Math.tan(fov / 2) * camera.aspect);
    const dx = Math.abs(.55 * stringBox.wScene / Math.tan(.5 * fovH));
    const dy = Math.abs(.55 * stringBox.hScene / Math.tan(.5 * fov));
    const factor = Math.max(dx, dy) / camera.position.length();
    if (factor > 1) {
        camera.position.x *= factor;
        camera.position.y *= factor;
        camera.position.z *= factor;
    }
}

One more thing to add is a caret (text cursor). It can be a simple 3D box with a size matching the font size.

function init() {
    // ...
    const cursorGeometry = new THREE.BoxGeometry(.3, 4.5, .03);
    cursorGeometry.translate(.5, -2.7, 0)
    const cursorMaterial = new THREE.MeshNormalMaterial({
        transparent: true,
    });
    cursorMesh = new THREE.Mesh(cursorGeometry, cursorMaterial);
    scene.add(cursorMesh);
}

We gather the position of the caret from our editable <div> in pixels and multiply it by fontScaleFactor, like we do with the bounding box width and height.

function handleInput() {

    // ...
    
    stringBox.caretPosScene = getCaretCoordinates().map(c => c * fontScaleFactor);

    function getCaretCoordinates() {
        const range = window.getSelection().getRangeAt(0);
        const needsToWorkAroundNewlineBug = (range.startContainer.nodeName.toLowerCase() === 'div' && range.startOffset === 0);
        if (needsToWorkAroundNewlineBug) {
            return [
                range.startContainer.offsetLeft,
                range.startContainer.offsetTop
            ]
        } else {
            const rects = range.getClientRects();
            if (rects[0]) {
                return [rects[0].left, rects[0].top]
            } else {
                // since getClientRects() gets buggy in FF
                document.execCommand('selectAll', false, null);
                return [
                    0, 0
                ]
            }
        }
    }
}

The cursor just needs the same centering as our instanced mesh has, and voilà, the 3D caret position matches the caret position in the input div.

function refreshText() {
    // ...
    
    updateCursorPosition();
}

function updateCursorPosition() {
    cursorMesh.position.x = -.5 * stringBox.wScene + stringBox.caretPosScene[0];
    cursorMesh.position.y = .5 * stringBox.hScene - stringBox.caretPosScene[1];
}

The only thing left is to make the cursor blink when the page (and hence the input element) is focused. The roundPulse function generates the rounded pulse between 0 and 1 from THREE.Clock.getElapsedTime(). We need to update the cursor opacity all the time, so the updateCursorOpacity call goes to the main render loop.

function render() {
    // ...

    updateCursorOpacity();
    
    // ...
}

let roundPulse = (t) => Math.sign(Math.sin(t * Math.PI)) * Math.pow(Math.sin((t % 1) * 3.14), .2);

function updateCursorOpacity() {
    if (document.hasFocus() && document.activeElement === textInputEl) {
        cursorMesh.material.opacity = roundPulse(2 * clock.getElapsedTime());
    } else {
        cursorMesh.material.opacity = 0;
    }
}

Basic animation

Instead of setting the instances transform just on the text update, we can also animate this transform.

To do this, we add an additional array of Particle objects to store the parameters for each instance. We still need the textureCoordinates array to store the 2D coordinates in pixels, but now we remap them to the particles array. And obviously, the particles transform update should happen in the main render loop now.

// ...
let textureCoordinates = [];
let particles = [];

function refreshText() {
    
    // ...

    // textureCoordinates are only pixel coordinates, particles is array of data objects
    particles = textureCoordinates.map(c => 
        new Particle([c.x * fontScaleFactor, c.y * fontScaleFactor])
    );

    // We call it in the render() loop now
    // updateParticlesMatrices();

    // ...
}

Each Particle object contains a list of properties and a grow() function that updates some of those properties.

For starters, we define position, rotation and scale. Position would be static for each particle, scale would increase from zero to one when the particle is created, and rotation would be animated all the time.

function Particle([x, y]) {
    this.x = x;
    this.y = y;
    this.z = 0;
    this.rotationX = Math.random() * 2 * Math.PI;
    this.rotationY = Math.random() * 2 * Math.PI;
    this.rotationZ = Math.random() * 2 * Math.PI;
    this.scale = 0;
    this.deltaRotation = .2 * (Math.random() - .5);
    this.deltaScale = .01 + .2 * Math.random();
    this.grow = function () {
        this.rotationX += this.deltaRotation;
        this.rotationY += this.deltaRotation;
        this.rotationZ += this.deltaRotation;
        if (this.scale < 1) {
            this.scale += this.deltaScale;
        }
    }
}
// ...
function updateParticlesMatrices() {
    let idx = 0;
    // textureCoordinates.forEach(p => {
    particles.forEach(p => {
        // update the particles data
        p.grow();
        // dummy.rotation.set(2 * Math.random(), 2 * Math.random(), 2 * Math.random());
        dummy.rotation.set(p.rotationX, p.rotationY, p.rotationZ);
        dummy.scale.set(p.scale, p.scale, p.scale);
        dummy.position.set(p.x, stringBox.hScene - p.y, p.z);
        dummy.updateMatrix();
        instancedMesh.setMatrixAt(idx, dummy.matrix);
        idx ++;
    })
    instancedMesh.instanceMatrix.needsUpdate = true;
}

Typing animation

We already have a nice template by now. But every time the text is updated, we recreate all the instances for all the symbols, which resets all the properties and animations of all the particles.

Instead, we need to keep the properties and animations for “old” particles. To do so, we need to know if each particle should be recreated or not.

In other words, for each sampled coordinate we need to check if a Particle already exists or not. If we find a Particle object with the same X/Y coordinates, we keep it along with all its properties. If there is no existing Particle for the sampled coordinate, we call new Particle() like we did before.

We evolve the sampling function so that we don’t only gather the X/Y values and refill the textureCoordinates array, but also do the following:

  1. Turn the one-dimensional imageData array into a two-dimensional imageMask array
  2. Go through the existing textureCoordinates array and compare its elements to the imageMask. If a coordinate still exists, add an old property to it, otherwise add a toDelete property.
  3. Handle all the sampled coordinates that were not found in textureCoordinates as new coordinates that have X and Y values and both old and toDelete properties set to false

It would make sense to simply delete old coordinates that were not found in the new imageMask. But we use a special toDelete property instead to play a fade-out animation for deleted particles first, and actually delete the Particle data only in the next step.

function sampleCoordinates() {
    // Draw text
    // ...
    // Sample coordinates
    if (stringBox.wTexture > 0) {
        // Image data to 2d array
        const imageData = textCtx.getImageData(0, 0, textCanvas.width, textCanvas.height);
        const imageMask = Array.from(Array(textCanvas.height), () => new Array(textCanvas.width));
        for (let i = 0; i < textCanvas.height; i++) {
            for (let j = 0; j < textCanvas.width; j++) {
                imageMask[i][j] = imageData.data[(j + i * textCanvas.width) * 4] > 0;
            }
        }
        if (textureCoordinates.length !== 0) {
            // Clean up: delete coordinates and particles which disappeared on the prev step
            // We need to keep same indexes for coordinates and particles to reuse old particles properly
            textureCoordinates = textureCoordinates.filter(c => !c.toDelete);
            particles = particles.filter(c => !c.toDelete);
            // Go through existing coordinates (old to keep, toDelete for fade-out animation)
            textureCoordinates.forEach(c => {
                if (imageMask[c.y]) {
                    if (imageMask[c.y][c.x]) {
                        c.old = true;
                        if (!c.toDelete) {
                            imageMask[c.y][c.x] = false;
                        }
                    } else {
                        c.toDelete = true;
                    }
                } else {
                    c.toDelete = true;
                }
            });
        }
        // Add new coordinates
        for (let i = 0; i < textCanvas.height; i++) {
            for (let j = 0; j < textCanvas.width; j++) {
                if (imageMask[i][j]) {
                    textureCoordinates.push({
                        x: j,
                        y: i,
                        old: false,
                        toDelete: false
                    })
                }
            }
        }
    } else {
        textureCoordinates = [];
    }
}

With old and toDelete properties, mapping texture coordinates to the particles becomes conditional:

function refreshText() {
    
    // ...

    // particles = textureCoordinates.map(c => 
    //     new Particle([c.x * fontScaleFactor, c.y * fontScaleFactor])
    // );
    particles = textureCoordinates.map((c, cIdx) => {
        const x = c.x * fontScaleFactor;
        const y = c.y * fontScaleFactor;
        let p = (c.old && particles[cIdx]) ? particles[cIdx] : new Particle([x, y]);
        if (c.toDelete) {
            p.toDelete = true;
            p.scale = 1;
        }
        return p;
    });

    // ...

}

The grow() call doesn’t only increase the size of the particle when it’s created. It also decreases the size if the particle is meant to be deleted.

function Particle([x, y]) {
    // ...
    
    this.toDelete = false;
    
    this.grow = function () {
        // ...
        if (this.scale < 1) {
            this.scale += this.deltaScale;
        }
        if (this.toDelete) {
            this.scale -= this.deltaScale;
            if (this.scale <= 0) {
                this.scale = 0;
            }
        }
    }
}

The template is now ready, and we can use it to create various effects with only small changes.

Bubbles effect 🫧

See the Pen Bubble Typer Three.js – Demo #2 by Ksenia Kondrashova (@ksenia-k) on CodePen.

Here is the full list of changes I made to make these bubbles based on the template:

  1. Change TorusGeometry to IcosahedronGeometry so each instance is a sphere
  2. Replace MeshNormalMaterial with ShaderMaterial. You can check out the GLSL code in the sandbox above, but the shader essentially does this:
    • mixes white with a randomized gradient (taken from the normal vector) and uses the result as the sphere color
    • applies transparency so that, seen from the camera position, the outline of the sphere is less transparent than its middle
  3. Adjust the textureFontSize and fontScaleFactor values to change the density of the particles
  4. Evolve the Particle object so that
    • the bubble position is a bit randomized compared to the sampled coordinates
    • the maximum size of the bubble is defined by a randomized maxScale property
    • there is no rotation
    • the bubble size is randomized, as the scale limit is the maxScale property, not 1
    • the bubble grows all the time, bursts, and then grows again. So the scale increase happens not only when the Particle is created but all the time; once the scale reaches the maxScale value, we reset the scale to zero (see the sketch after this list)
    • some bubbles get an isFlying property so they move up from their initial position
  5. Change the color of the page background and cursor
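Here is a rough sketch of how the evolved Particle could implement the grow-burst-grow cycle and the isFlying behaviour (the exact values, and any property names beyond maxScale and isFlying, are assumptions rather than the demo’s code):

function Particle([x, y]) {
    // Slightly randomized position compared to the sampled coordinates
    this.x = x + .2 * (Math.random() - .5);
    this.y = y + .2 * (Math.random() - .5);
    this.scale = 0;
    this.maxScale = .5 + Math.random();          // randomized size limit
    this.deltaScale = .01 + .05 * Math.random();
    this.isFlying = Math.random() < .1;          // some bubbles float away

    this.grow = function () {
        this.scale += this.deltaScale;
        if (this.scale > this.maxScale) {
            this.scale = 0;                      // the bubble bursts and regrows
        }
        if (this.isFlying) {
            this.y -= .1;                        // decreasing y moves the bubble up on screen
        }
    }
}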

Clouds effect ☁

You don’t need to do much to get clouds either:

  1. Use PlaneGeometry for instance shape
  2. Use MeshBasicMaterial and apply the following image as an alpha map
  3. Adjust textureFontSize and fontScaleFactor to change the density of the particles
  4. Evolve the Particle object so that
    • particle position is a bit randomized compared to the sampled coordinates
    • size of the particle is defined by randomized maxScale property
    • only rotation around Z axis is needed
    • particle size (scale) is pulsating all the time
  5. Additional transform dummy.quaternion.copy(camera.quaternion) should be applied for each instance. This way the particle is always facing towards the camera; rotate the cloudy text to see the result 🙂
  6. Change color of page background and cursor

See the Pen Clouds Typer Three.js – Demo #1 by Ksenia Kondrashova (@ksenia-k) on CodePen.

Flowers effect 🌸

Flowers are actually quite similar to clouds. The main difference is that there are two instanced meshes and two materials: one mapped with a flower texture, the other with a leaf texture.

Also, all the particles must have a new color property. We apply colors to the instanced mesh with the setColorAt method every time we recreate the meshes.
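In code, the color assignment could look roughly like this, assuming each Particle stores a color property:

// When recreating the instanced mesh, assign a per-instance color
const color = new THREE.Color();
particles.forEach((p, idx) => {
    instancedMesh.setColorAt(idx, color.set(p.color));
});
instancedMesh.instanceColor.needsUpdate = true;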

With a few small changes like particles density, scaling speed, rotation speed, and the color of the background and cursor, we have this:

See the Pen Flower Typer Three.js – Demo #3 by Ksenia Kondrashova (@ksenia-k) on CodePen.

Eyes effect 👀

We can go further and load a glb model and use it as an instance! I took this nice-looking eye from turbosquid.com.

Instead of applying a random rotation, we can make the eyeballs follow the mouse position! To do so, we need an additional transparent plane in front of the instanced mesh, a THREE.Raycaster and a mouse position tracker. We listen to the mousemove event, cast a ray from the mouse to the plane, and make the dummy object look at the intersection point.
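A rough sketch of that setup could look like this; facePlane stands for the assumed invisible plane, updateEyes would be called from the render loop, and the instanced mesh’s centering offset is ignored for brevity:

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

window.addEventListener('mousemove', (e) => {
    // Normalized device coordinates (-1 to +1)
    pointer.x = (e.clientX / window.innerWidth) * 2 - 1;
    pointer.y = -(e.clientY / window.innerHeight) * 2 + 1;
});

function updateEyes() {
    raycaster.setFromCamera(pointer, camera);
    const intersects = raycaster.intersectObject(facePlane);
    if (intersects.length) {
        const target = intersects[0].point;
        let idx = 0;
        particles.forEach(p => {
            dummy.position.set(p.x, stringBox.hScene - p.y, p.z);
            dummy.lookAt(target); // each eyeball looks at the intersection point
            dummy.updateMatrix();
            instancedMesh.setMatrixAt(idx, dummy.matrix);
            idx++;
        });
        instancedMesh.instanceMatrix.needsUpdate = true;
    }
}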

Don’t forget to add some lights to see the imported model. And since we have lights, let’s make the instanced mesh cast a shadow onto the plane behind the text.

Together with some other small changes like sampling density, grow() function parameters, cursor and background style, we get this:

See the Pen Eyes Typer Three.js – Demo #4 by Ksenia Kondrashova (@ksenia-k) on CodePen.

And that’s it! I hope this tutorial was interesting and that it gave you some inspiration. Feel free to use this template to create more fun things!

How to Map Texture to a 3D Face with Three.js

In this new ALL YOUR HTML coding session you’ll learn how to wrap a texture on a shape. The shape will be a 3D face and we’ll use Three.js.

Original website: https://www.zikd.space

This coding session was streamed live on July 3, 2022.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post How to Map Texture to a 3D Face with Three.js appeared first on Codrops.

Volumetric Light Rays with Three.js

In this new ALL YOUR HTML coding session we’ll be decompiling volumetric light rays and recreating them with fragment shaders and Three.js.

Original website: https://moonfall.oblio.io/desktop/

Agency: https://oblio.io/

This coding session was streamed live on June 19, 2022.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Volumetric Light Rays with Three.js appeared first on Codrops.

Creating a Particles Galaxy with Three.js

In this new ALL YOUR HTML coding session we will look into recreating the particles galaxy from Viverse using Three.js.

Original: https://www.viverse.com/

Made by: https://ohzi.io/

This coding session was streamed live on June 19, 2022.

Check out the live demo.

Image credit goes to https://www.instagram.com/tre.zen/

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Creating a Particles Galaxy with Three.js appeared first on Codrops.

Coding an Infinite Slider using Texture Recursion with Three.js

In this new ALL YOUR HTML coding session we’ll be recreating the infinite image slider seen on https://tismes.com/ made by Lecamus Jocelyn using texture recursion in Three.js.

This coding session was streamed live on May 29, 2022.

Image credit goes to https://www.instagram.com/tre.zen/

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Coding an Infinite Slider using Texture Recursion with Three.js appeared first on Codrops.

Creating an Infinite Distorted Slider with PixiJS and Bézier Curves

In this new ALL YOUR HTML coding session we’ll be replicating the amazing fluid slider from Yuto Takahashi’s website, using PixiJS, Bézier curves and little bit of GLSL.

This coding session was streamed live on May 23, 2022.

Image credit goes to https://www.instagram.com/tre.zen/

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Creating an Infinite Distorted Slider with PixiJS and Bézier Curves appeared first on Codrops.

Case Study: Windland — An Immersive Three.js Experience

In this article we’ll look at the creation of a mini-city full of post effects and micro-interactions using Three.js.

Visit Windland

See how you can create your own using this free boilerplate.

Introduction

I am an unconditional game lover. I’ve always dreamed of creating an interactive mini-city, using saturated colors, similar to SimCity and the like. The challenge was that I had neither enough 3D knowledge nor a suitable library.

At the end of 2021, I finally decided to fulfill an old desire and I took Bruno Simon’s course – Three.js Journey. I’m a designer who likes to program. I ended up discovering myself as a creative developer because of this course, where I was able to use part of my dormant knowledge of ActionScript 2.0, from the late Macromedia Flash.

The entire project was created in approximately 2 weeks, between shifts at my work at Neotix. It felt amazing and I loved doing it, so I decided to share some interesting information about it, so that I could help in some way those who are at the beginning of this journey.

Creative challenge

It was crucial to use various post-processing effects present in games to give this city a decent level of realism without affecting performance. The artistic path I chose to follow was to use a mix of realistic lights with low-poly models.

Image showing a 3D model of the city and the Three.js experience next to it.

Performance

Part of what I wanted with this project was to apply techniques that would perform well on different devices, especially mobile. It needed to work on as many devices as possible, with an acceptable frame rate (at least 30 fps). I also wanted the experience to load as quickly as possible, with a file size smaller than 2MB.

In order to accomplish this, I had to use a series of techniques that I will describe below.

Image showing the wireframe of the model within Blender, where it all started.

Creating the 3D model in Blender

I used Blender to make the city model. I imported part of the buildings from free templates on the Internet. I modified a few of them to better match the setting. To make the terrain, I used Blender’s sculpt mode, creating valleys and peaks that look beautiful with light and shadow.

Image showing the 3D model in Blender in wireframe mode.

Each model was optimized with the triangle count in mind when exporting. I chose the GLB format because Draco compression is incredibly effective – sometimes resulting in files 7x smaller. In addition, all project resources are compressed with gzip on the server for an even smaller transfer.
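For reference, wiring Draco decompression into Three.js’s GLTFLoader looks roughly like this (the model path is just an example):

import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

const dracoLoader = new DRACOLoader();
// Path to the Draco decoder files (copied from three/examples/jsm/libs/draco/)
dracoLoader.setDecoderPath('/draco/');

const gltfLoader = new GLTFLoader();
gltfLoader.setDRACOLoader(dracoLoader);

gltfLoader.load('models/city.glb', (gltf) => {
	scene.add(gltf.scene);
});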

Creating natural light

Lighting in games is fascinating – the way shadow interacts with the terrain in order to create a scene that’s pleasing to the eye while not being held back by reality. I used Blender’s global lighting system, with a “world” node, using “Nishita” ambient lighting. This allows for very natural lighting, with ambient settings that quickly give a pleasing result.

Image showing the lighting of the sun in different positions, superimposed with Blender’s Nishita sun parameters.

Distributing trees in Blender with geometry nodes

Trees play an important role, as they help create a cast shadow that gives the terrain a touch of realism. I used Blender’s GeometryNodes to distribute the trees in the model and create a variation in size, shape, and rotation. I also used the material selector, to choose the regions that would have more or fewer trees, painting the density with the material selection.

Image showing the Blender interface with the geometry nodes created to distribute the trees in the scene.

Bake lighting for export

In order for the experience to work and perform well in Three.js, it’s important that the scene loads with the lighting baked into textures. I created a single 2048×2048 texture for the floor, containing all of the shadows. The process of baking shadows is covered in several tutorials on the internet. The end result is impressive and has no impact on performance.

Image showing the 3D model in Blender with the baked lighting texture.

Export to Three.js and the tree performance issue

After finishing the bake and connecting the texture to the color node in the ground mesh, I exported all of the meshes to GLTF format. The entire model, using DRACO COMPRESSION, is 1.2MB. However, we have a problem with the trees: they cannot be exported all at once, as it would take too long for the GPU to finish the process.

I created the trees using the MeshSurfaceSampler from Three.js, which serves exactly this purpose. You can use a model and distribute it on a surface, creating variations of the same model, but making modifications to each of them. This way, the performance is incredible, even with a very large number of variations.

You can see an example of this in the official Three.js documentation.
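A condensed sketch of the approach, with terrainMesh, treeGeometry and treeMaterial standing in for the real assets:

import { MeshSurfaceSampler } from 'three/examples/jsm/math/MeshSurfaceSampler.js';

// Distribute tree instances over the terrain surface
const sampler = new MeshSurfaceSampler(terrainMesh).build();
const treeCount = 1000;
const trees = new THREE.InstancedMesh(treeGeometry, treeMaterial, treeCount);

const position = new THREE.Vector3();
const dummy = new THREE.Object3D();

for (let i = 0; i < treeCount; i++) {
	sampler.sample(position);                         // random point on the surface
	dummy.position.copy(position);
	dummy.rotation.y = Math.random() * Math.PI * 2;   // random rotation
	dummy.scale.setScalar(0.8 + 0.4 * Math.random()); // size variation
	dummy.updateMatrix();
	trees.setMatrixAt(i, dummy.matrix);
}
scene.add(trees);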

Image showing the MeshSurfaceSampler code inside the VSCODE and the Three.js scene image with the projected trees.

Loading everything in Three.js

Using my boilerplate (see more about this at the end of the article), which I created to simplify things, I loaded the exported model. Afterwards, I spent a lot of time adjusting the light coloring, intensities, and other small details that make all the difference. A render that looks good in 3D software does not always look great in the Three.js experience using the default parameters.

Image showing the evolution of the model within the Three.js and the color changes as a before and after.

It is essential to use the DAT.GUI to be able to visually adjust the parameters. It is impossible to get the colors and intensities right by guessing the numbers.

Image showing the Three.js DAT.GUI highlighted

Using VertexShader for the animation of the trees

One thing that brings reality to the scene is the smooth animation of the trees. Doing this is possible by exporting the animation directly from Blender, but performance would be greatly impacted – especially given the large number of trees.

The best approach in these cases is to animate using VertexShader, using GPU processing directly on the positioning of vertices in the 3D world. With that, the performance is very good and the animations are beautiful.
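A common way to do this – not necessarily the exact shader used in Windland – is to displace the vertices with a time-based sine wave in a custom vertex shader (for instanced trees you would additionally multiply by instanceMatrix):

// Sway the tree vertices on the GPU (a simplified sketch)
const treeMaterial = new THREE.ShaderMaterial({
	uniforms: {
		uTime: { value: 0 }
	},
	vertexShader: `
		uniform float uTime;
		void main() {
			vec3 pos = position;
			// Sway more towards the top of the tree, using uv.y as a weight
			pos.x += sin(uTime + position.y) * 0.1 * uv.y;
			gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
		}
	`,
	fragmentShader: `
		void main() {
			gl_FragColor = vec4(0.2, 0.5, 0.2, 1.0);
		}
	`
});

// In the render loop, advance the time uniform
treeMaterial.uniforms.uTime.value = clock.getElapsedTime();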

VSCODE screenshot showing part of the vertex shader responsible for animating the trees along with an image of the animated trees inside the Three.js

Animating the birds and other elements of the experience

The other animated elements of the experience, such as the helicopter, car, and wind turbines, were animated by changing the rotation of the model pieces directly in the render loop. It’s a very simple way to animate.
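For example, something along these lines (the object names are hypothetical):

// In the render loop: spin the rotating parts by a frame-scaled amount
const delta = clock.getDelta();
helicopterRotor.rotation.y += 8 * delta;
windTurbines.forEach(turbine => {
	turbine.rotation.z += 2 * delta;
});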

The birds were animated differently. I wanted them to have wing movement and a sense of grouping. So, I animated the whole group inside Blender and exported the animation along with the glTF file. I used the Animation Mixer to animate the wings while changing the group’s position. The result is quite convincing and very lightweight (only 200kb).
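The Animation Mixer setup is only a few lines; here birdsGltf stands for the loaded glTF asset:

// Play the baked wing animation from the glTF file
const mixer = new THREE.AnimationMixer(birdsGltf.scene);
mixer.clipAction(birdsGltf.animations[0]).play();

// In the render loop, advance the mixer by the frame delta
mixer.update(clock.getDelta());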

Image showing the birds and their animation timeline inside Blender, and the same birds inside Three.js.

Lights, shadows, and the night mode + apocalyptic

As the shadows are baked inside the imported GLB file, I was able to gain some performance by not having to use a dynamically generated shadow map inside Three.js.

I played around with the lighting effects, creating a night mode and an apocalyptic mode. It’s a lot of fun to have that kind of creative freedom without having to modify the template. The possibilities are endless.

The apocalyptic mode is an easter egg, accessible to anyone who knows how to activate it :).

Image showing night mode and apocalyptic mode.

Post Processing with Effect Composer

I’ve always loved the depth-of-field effect in games, but I thought it would be very difficult to use something like that in a Three.js experience. Thanks to the latest updates to the library, it’s much easier.

Using EffectComposer, I was able to use the BokehPass effect in day mode, which generates a dynamic depth-of-field effect based on the distance from the camera. For night mode, I use UnrealBloomPass, which makes the lights super exposed, ideal for this type of situation.

I change the effects between night and day mode for performance reasons – using the insertPass() and removePass() methods.
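A sketch of that switching logic, with illustrative pass parameters:

import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { BokehPass } from 'three/examples/jsm/postprocessing/BokehPass.js';
import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass.js';

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));

const bokehPass = new BokehPass(scene, camera, { focus: 20, aperture: 0.001, maxblur: 0.01 });
const bloomPass = new UnrealBloomPass(new THREE.Vector2(window.innerWidth, window.innerHeight), 1.2, 0.4, 0.85);

function setNightMode(enabled) {
	if (enabled) {
		composer.removePass(bokehPass);
		composer.insertPass(bloomPass, 1); // right after the render pass
	} else {
		composer.removePass(bloomPass);
		composer.insertPass(bokehPass, 1);
	}
}

// In the render loop, render through the composer instead of renderer.render()
composer.render();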

Clicking and Selecting a Building

A lot of people asked me how to make buildings clickable UI items. This was done using Three.js’ Raycaster, which casts an invisible ray from the camera through the mouse position and detects intersections with the scene. With this, I can detect when a building has been selected and – based on its name – trigger an event.

Image showing the source code inside the VSCODE responsible for making the object selection raycaster and the camera animation.

The animations that happen when clicking on a building were done using TWEEN.JS, by animating the camera from its initial position to the position of the clicked building. That way, I can place multiple buildings and have an animation generated automatically.
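Combined, the picking and the camera animation could look roughly like this (the buildings array and the camera offsets are assumptions):

import * as TWEEN from '@tweenjs/tween.js';

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

window.addEventListener('click', (e) => {
	pointer.x = (e.clientX / window.innerWidth) * 2 - 1;
	pointer.y = -(e.clientY / window.innerHeight) * 2 + 1;
	raycaster.setFromCamera(pointer, camera);

	const hits = raycaster.intersectObjects(buildings, true);
	if (hits.length) {
		const building = hits[0].object;
		// Fly the camera towards a position derived from the clicked building
		new TWEEN.Tween(camera.position)
			.to({
				x: building.position.x,
				y: building.position.y + 5,
				z: building.position.z + 10
			}, 1200)
			.easing(TWEEN.Easing.Quadratic.InOut)
			.start();
	}
});

// In the render loop
TWEEN.update();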

Responsive tweaks: Also working on mobile

Part of the work also involved tweaking the experience parameters to work well on mobile devices. Not just the responsive adjustments to the HTML and CSS, but also to the camera parameters, animation duration, and several other details.

Image showing Windland running in mobile mode with responsiveness.

Dynamic quality: Adjusting performance dynamically with the power of the user device

Despite all the optimizations, some devices still cannot run all the effects, especially the post-processing ones. So I created a script that measures the FPS at the beginning of the experience (during the loading process). That way, when the experience starts, Three.js knows whether or not to activate certain effects, to save on processing and ensure that the performance stays within what is possible for that device.
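A simple version of such a measurement, with illustrative thresholds and a hypothetical startExperience() entry point:

// Measure the average FPS over a short window during loading
function measureFPS(durationMs = 2000) {
	return new Promise(resolve => {
		let frames = 0;
		const start = performance.now();
		const count = (now) => {
			frames++;
			if (now - start < durationMs) {
				requestAnimationFrame(count);
			} else {
				resolve(frames / ((now - start) / 1000));
			}
		};
		requestAnimationFrame(count);
	});
}

measureFPS().then(fps => {
	// Thresholds are illustrative, not the exact values used by Windland
	startExperience({
		enablePostProcessing: fps >= 50,
		enableDynamicShadows: fps >= 30
	});
});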

Image showing code inside the VSCODE responsible for detecting the FPS.

Also working on smartwatches

As a proof of concept, I wanted to demonstrate that experiences built with Three.js are not necessarily heavy to process and can run even on a smartwatch. During this process, I found that the number of vertices in the model is what would most impact performance on these devices. So, I created an “ultra-low poly” mode of the model to be used on mobile devices. Ready! Nothing else in the code needed to be changed.

Image showing Windland running on a smartwatch.

Three.js Boilerplate: Create your own Windland

To make it easier to create projects in Three.js, I’ve created an easy-to-use, well-documented, and free starter project. So, using this boilerplate, you will be able to create a scene in Three.js in just a few lines and import a model. The boilerplate also has instructions on how to export the model and some other information that can help you create your city.

Find the repository on GitHub: https://github.com/ektogamat/threejs-andy-bolierplate

Watch the video on how to use it: https://www.youtube.com/watch?v=qM6Ih_cC6Gc


Interesting facts

Some people ask: what is the advantage of making a project using Three.js instead of just using a rendered image on a website? Well, a rendered image of this scene at 1920×1080 is approximately 2.8MB in size, with reasonable compression. This whole scene in Three.js, with all its interactivity, animations, interface and everything you see, is only 1.8MB.

Windland was awarded an Honorable Mention at Awwwards on March 31, 2022. It transformed from a test project into a Three.js use case for complex scenes that mimic the look and feel of games.

Three.js is increasingly my favorite library. You can find more videos on my YouTube channel or on my official Twitter.

Thanks for your time 🙂

The post Case Study: Windland — An Immersive Three.js Experience appeared first on Codrops.