Rock the Stage with a Smooth WebGL Shader Transformation on Scroll

It’s fascinating what magical effects you can add to a website when you experiment with vertex displacement. Today we’d like to share a method you can use to create your own WebGL shader animation linked to scroll progress. It’s a great way to learn how to bind shader vertices and colors to user interactions and to find the best flow.

We’ll be using Pug, Sass, Three.js and GSAP for our project.

Let’s rock!

The stage

For our flexible scroll stage, we quickly create three sections with Pug. By adding an element to the sections array, it’s easy to expand the stage.

index.pug:

.scroll__stage
  .scroll__content
    - const sections = ['Logma', 'Naos', 'Chara']
      each section, index in sections
        section.section
          .section__title
            h1.section__title-number= index < 9 ? `0${index + 1}` : index + 1

            h2.section__title-text= section

          p.section__paragraph The fireball that we rode was moving – But now we've got a new machine – They got music in the solar system
            br
            a.section__button Discover
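
The ternary in the title zero-pads single-digit section numbers. The same logic in plain JavaScript (just an illustration of the template expression, not extra project code):

```javascript
// Zero-pad section numbers 1–9, matching the ternary in the Pug template.
const sections = ['Logma', 'Naos', 'Chara'];

const numbers = sections.map((section, index) =>
  index < 9 ? `0${index + 1}` : `${index + 1}`
);

console.log(numbers); // → [ '01', '02', '03' ]
```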

The sections are quickly formatted with Sass; here we also define the placeholder selectors we will need later.

index.sass:

%top
  top: 0
  left: 0
  width: 100%

%fixed
  @extend %top

  position: fixed

%absolute
  @extend %top

  position: absolute

*,
*::after,
*::before
  margin: 0
  padding: 0
  box-sizing: border-box

.section
  display: flex
  justify-content: space-evenly
  align-items: center
  width: 100%
  min-height: 100vh
  padding: 8rem
  color: white
  background-color: black

  &:nth-child(even)
    flex-direction: row-reverse
    background: blue

  /* your design */

Now we write our ScrollStage class and set up a scene with Three.js. A camera far plane of 10 is enough for us here. We already prepare the update loop for later instructions.

index.js:

import * as THREE from 'three'

class ScrollStage {
  constructor() {
    this.element = document.querySelector('.scroll__stage')

    this.viewport = {
      width: window.innerWidth,
      height: window.innerHeight,
    }

    this.scene = new THREE.Scene()

    this.renderer = new THREE.WebGLRenderer({ 
      antialias: true, 
      alpha: true 
    })

    this.canvas = this.renderer.domElement

    this.camera = new THREE.PerspectiveCamera( 
      75, 
      this.viewport.width / this.viewport.height, 
      .1, 
      10
    )

    this.clock = new THREE.Clock()

    this.update = this.update.bind(this)

    this.init()
  }

  init() {
    this.addCanvas()
    this.addCamera()
    this.addEventListeners()
    this.onResize()
    this.update()
  }

  /**
   * STAGE
   */
  addCanvas() {
    this.canvas.classList.add('webgl')
    document.body.appendChild(this.canvas)
  }

  addCamera() {
    this.camera.position.set(0, 0, 2.5)
    this.scene.add(this.camera)
  }

  /**
   * EVENTS
   */
  addEventListeners() {
    window.addEventListener('resize', this.onResize.bind(this))
  }

  onResize() {
    this.viewport = {
      width: window.innerWidth,
      height: window.innerHeight
    }

    this.camera.aspect = this.viewport.width / this.viewport.height
    this.camera.updateProjectionMatrix()

    this.renderer.setSize(this.viewport.width, this.viewport.height)
    this.renderer.setPixelRatio(Math.min(window.devicePixelRatio, 1.5))  
  }

  /**
   * LOOP
   */
  update() {
    this.render()

    window.requestAnimationFrame(this.update) 
  }

  /**
   * RENDER
   */
  render() {
    this.renderer.render(this.scene, this.camera)
  }  
}

new ScrollStage()

We disable the pointer events and let the canvas blend.

index.sass:

...

canvas.webgl
  @extend %fixed

  pointer-events: none
  mix-blend-mode: screen

...

The rockstar

We create a mesh, assign an icosahedron geometry and set the blending of its material to additive for loud colors. And – I like the wireframe style. For now, we set the value of all uniforms to 0 (except uOpacity, which we set to 1).
I usually scale down the mesh for portrait screens. With only one object, we can do it this way. Otherwise, you’d better adjust camera.position.z.

Let’s rotate our sphere slowly.

index.js:

...

import vertexShader from './shaders/vertex.glsl'
import fragmentShader from './shaders/fragment.glsl'

...

  init() {

    ...

    this.addMesh()

    ...
  }

  /**
   * OBJECT
   */
  addMesh() {
    this.geometry = new THREE.IcosahedronGeometry(1, 64)

    this.material = new THREE.ShaderMaterial({
      wireframe: true,
      blending: THREE.AdditiveBlending,
      transparent: true,
      vertexShader,
      fragmentShader,
      uniforms: {
        uFrequency: { value: 0 },
        uAmplitude: { value: 0 },
        uDensity: { value: 0 },
        uStrength: { value: 0 },
        uDeepPurple: { value: 0 },
        uOpacity: { value: 1 }
      }
    })

    this.mesh = new THREE.Mesh(this.geometry, this.material)

    this.scene.add(this.mesh)
  }

  ...

  onResize() {

    ...

    if (this.viewport.width < this.viewport.height) {
      this.mesh.scale.set(.75, .75, .75)
    } else {
      this.mesh.scale.set(1, 1, 1)
    }

    ...

  }

  update() {
    const elapsedTime = this.clock.getElapsedTime()

    this.mesh.rotation.y = elapsedTime * .05

    ...

  }

In the vertex shader (which positions the geometry) and the fragment shader (which assigns a color to its pixels), we control the values of the uniforms that we will later derive from the scroll position. To generate an organic randomness, we make some noise. This shader program now runs on the GPU.

/shaders/vertex.glsl:

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)

uniform float uFrequency;
uniform float uAmplitude;
uniform float uDensity;
uniform float uStrength;

varying float vDistortion;

void main() {  
  float distortion = pnoise(normal * uDensity, vec3(10.)) * uStrength;

  vec3 pos = position + (normal * distortion);
  float angle = sin(uv.y * uFrequency) * uAmplitude;
  pos = rotateY(pos, angle);    

  vDistortion = distortion;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.);
}

/shaders/fragment.glsl:

uniform float uOpacity;
uniform float uDeepPurple;

varying float vDistortion;

vec3 cosPalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
  return a + b * cos(6.28318 * (c * t + d));
}     

void main() {
  float distort = vDistortion * 3.;

  vec3 brightness = vec3(.1, .1, .9);
  vec3 contrast = vec3(.3, .3, .3);
  vec3 oscillation = vec3(.5, .5, .9);
  vec3 phase = vec3(.9, .1, .8);

  vec3 color = cosPalette(distort, brightness, contrast, oscillation, phase);

  gl_FragColor = vec4(color, vDistortion);
  gl_FragColor += vec4(min(uDeepPurple, 1.), 0., .5, min(uOpacity, 1.));
}
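
To get a feeling for what cosPalette produces before wiring it into the shader, you can port it to plain JavaScript and log a few samples. This helper is only a sketch for experimenting on the side, not part of the tutorial code:

```javascript
// Plain JavaScript port of the GLSL cosPalette, for previewing
// palette values outside the shader.
function cosPalette(t, a, b, c, d) {
  return a.map((_, i) => a[i] + b[i] * Math.cos(6.28318 * (c[i] * t + d[i])));
}

// The values from the fragment shader above:
const brightness = [0.1, 0.1, 0.9];
const contrast = [0.3, 0.3, 0.3];
const oscillation = [0.5, 0.5, 0.9];
const phase = [0.9, 0.1, 0.8];

// Sample the palette at a given distortion value, e.g. t = 0:
console.log(cosPalette(0, brightness, contrast, oscillation, phase));
```

Sweep t from 0 to 1 to see the color ramp your distortion will travel through.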

If you don’t understand what’s happening here, I recommend this tutorial by Mario Carrillo.

The soundcheck

To find your preferred settings, you could set up a dat.GUI, for example. I’ll show you another approach here, in which you can combine two (or more) parameters to intuitively find a cool flow of movement. We simply connect the uniform values to the normalized values of the mouse event and log them to the console. As we use this approach only during development, we don’t call requestAnimationFrame here.

index.js:

...

import GSAP from 'gsap'

...

  constructor() {

    ...

    this.mouse = {
      x: 0,
      y: 0
    }

    this.settings = {
      // vertex
      uFrequency: {
        start: 0,
        end: 0
      },
      uAmplitude: {
        start: 0,
        end: 0
      },
      uDensity: {
        start: 0,
        end: 0
      },
      uStrength: {
        start: 0,
        end: 0
      },
      // fragment
      uDeepPurple: {  // max 1
        start: 0,
        end: 0
      },
      uOpacity: {  // max 1
        start: 1,
        end: 1
      }
    }

    ...

  }

  addEventListeners() {

    ...

    window.addEventListener('mousemove', this.onMouseMove.bind(this))
  }

  onMouseMove(event) {
    // play with it!
    // enable / disable / change x, y, multiplier …

    this.mouse.x = (event.clientX / this.viewport.width).toFixed(2) * 4
    this.mouse.y = (event.clientY / this.viewport.height).toFixed(2) * 2

    GSAP.to(this.mesh.material.uniforms.uFrequency, { value: this.mouse.x })
    GSAP.to(this.mesh.material.uniforms.uAmplitude, { value: this.mouse.x })
    GSAP.to(this.mesh.material.uniforms.uDensity, { value: this.mouse.y })
    GSAP.to(this.mesh.material.uniforms.uStrength, { value: this.mouse.y })
    // GSAP.to(this.mesh.material.uniforms.uDeepPurple, { value: this.mouse.x })
    // GSAP.to(this.mesh.material.uniforms.uOpacity, { value: this.mouse.y })

    console.info(`X: ${this.mouse.x}  |  Y: ${this.mouse.y}`)
  }

The support act

To create a really fluid mood, we first implement our smooth scroll.

index.sass:

body
  overscroll-behavior: none
  width: 100%
  height: 100vh

  ...

.scroll
  &__stage
    @extend %fixed

    height: 100vh

  &__content
    @extend %absolute

    will-change: transform

SmoothScroll.js:

import GSAP from 'gsap'

export default class {
  constructor({ element, viewport, scroll }) {
    this.element = element
    this.viewport = viewport
    this.scroll = scroll

    this.elements = {
      scrollContent: this.element.querySelector('.scroll__content')
    }
  }

  setSizes() {
    this.scroll.height = this.elements.scrollContent.getBoundingClientRect().height
    this.scroll.limit = this.elements.scrollContent.clientHeight - this.viewport.height

    document.body.style.height = `${this.scroll.height}px`
  }

  update() {
    this.scroll.hard = window.scrollY
    this.scroll.hard = GSAP.utils.clamp(0, this.scroll.limit, this.scroll.hard)
    this.scroll.soft = GSAP.utils.interpolate(this.scroll.soft, this.scroll.hard, this.scroll.ease)

    if (this.scroll.soft < 0.01) {
      this.scroll.soft = 0
    }

    this.elements.scrollContent.style.transform = `translateY(${-this.scroll.soft}px)`
  }    

  onResize() {
    this.viewport = {
      width: window.innerWidth,
      height: window.innerHeight
    }

    this.setSizes()
  }
}
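
The easing in update() relies on GSAP.utils.interpolate, which for two numbers is a plain linear interpolation. A minimal stand-in (an assumption about the internals, but good enough to reason with) shows why the soft value glides toward the hard value instead of jumping:

```javascript
// Linear interpolation: move `current` a fraction `ease` towards `target`.
const lerp = (current, target, ease) => current + (target - current) * ease;

// Each frame the remaining distance shrinks by a factor of (1 - ease),
// so with ease = 0.05 the scroll approaches its target smoothly.
let soft = 0;
const hard = 1000;

for (let frame = 0; frame < 3; frame++) {
  soft = lerp(soft, hard, 0.05);
}

console.log(soft); // → 142.625
```

The smaller the ease value, the longer the scroll takes to catch up, and the softer the motion feels.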

index.js:

...

import SmoothScroll from './SmoothScroll'

...

  constructor() {

    ...

    this.scroll = {
      height: 0,
      limit: 0,
      hard: 0,
      soft: 0,
      ease: 0.05
    }

    this.smoothScroll = new SmoothScroll({ 
      element: this.element, 
      viewport: this.viewport, 
      scroll: this.scroll
    })

    ...

  }

  ...

  onResize() {

    ...

    this.smoothScroll.onResize()

    ...

  }

  update() {

    ...

    this.smoothScroll.update()

    ...

  }

The show

Finally, let’s rock the stage!

Once we have chosen the start and end values, it’s easy to attach them to the scroll position. In this example, we want to drop the purple mesh through the blue section so that it is subsequently soaked in blue itself. We increase the frequency and the strength of our vertex displacement. Let’s first enter these values in our settings and update the mesh material. We normalize scrollY to get values from 0 to 1 and base our calculations on them.

To render the shader only while scrolling, we call requestAnimationFrame from the scroll listener. We don’t need the mouse event listener anymore.

To improve performance, we add overwrite: true to the GSAP defaults. This way, any existing tween of a target is killed when a new one is generated on every frame. A long duration renders the movement extra smooth. Once again we let the object rotate slightly with the scroll movement. We iterate over our settings and GSAP makes the music.

index.js:

  constructor() {

  ...

    this.scroll = {

      ...

      normalized: 0, 
      running: false
    }

    this.settings = {
      // vertex
      uFrequency: {
        start: 0,
        end: 4
      },
      uAmplitude: {
        start: 4,
        end: 4
      },
      uDensity: {
        start: 1,
        end: 1
      },
      uStrength: {
        start: 0,
        end: 1.1
      },
      // fragment
      uDeepPurple: {  // max 1
        start: 1,
        end: 0
      },
      uOpacity: { // max 1
        start: .33,
        end: .66
      }
    }

    GSAP.defaults({
      ease: 'power2',
      duration: 6.6,
      overwrite: true
    })

    this.updateScrollAnimations = this.updateScrollAnimations.bind(this)

    ...

  }

...

  addMesh() {

  ...

    uniforms: {
      uFrequency: { value: this.settings.uFrequency.start },
      uAmplitude: { value: this.settings.uAmplitude.start },
      uDensity: { value: this.settings.uDensity.start },
      uStrength: { value: this.settings.uStrength.start },
      uDeepPurple: { value: this.settings.uDeepPurple.start },
      uOpacity: { value: this.settings.uOpacity.start }
    }
  }

  ...

  addEventListeners() {

    ...

    // window.addEventListener('mousemove', this.onMouseMove.bind(this))  // enable to find your preferred values (console)

    window.addEventListener('scroll', this.onScroll.bind(this))
  }

  ...

  /**
   * SCROLL BASED ANIMATIONS
   */
  onScroll() {
    this.scroll.normalized = (this.scroll.hard / this.scroll.limit).toFixed(1)

    if (!this.scroll.running) {
      window.requestAnimationFrame(this.updateScrollAnimations)

      this.scroll.running = true
    }
  }

  updateScrollAnimations() {
    this.scroll.running = false

    GSAP.to(this.mesh.rotation, {
      x: this.scroll.normalized * Math.PI
    })

    for (const key in this.settings) {
      if (this.settings[key].start !== this.settings[key].end) {
        GSAP.to(this.mesh.material.uniforms[key], {
          value: this.settings[key].start + this.scroll.normalized * (this.settings[key].end - this.settings[key].start)
        })
      }
    }
  }
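
The loop above tweens each uniform toward a linear mapping of the scroll progress onto the start/end range defined in settings. Isolated as a plain function (a sketch for clarity, not extra tutorial code), the mapping looks like this:

```javascript
// Map normalized scroll progress p ∈ [0, 1] onto a uniform's range.
function uniformValue(p, { start, end }) {
  return start + p * (end - start);
}

// With uStrength going from 0 to 1.1, halfway down the page we get:
console.log(uniformValue(0.5, { start: 0, end: 1.1 })); // → 0.55
```

Uniforms whose start and end values are equal are skipped in the loop, since they never change.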

Thanks for reading this tutorial, hope you like it!
Try it out, go new ways, have fun – dare a stage dive!

The post Rock the Stage with a Smooth WebGL Shader Transformation on Scroll appeared first on Codrops.

Curly Tubes from the Lusion Website with Three.js

In this ALL YOUR HTML coding session we’re going to look at recreating the curly noise tubes with light scattering from the fantastic website of Lusion using Three.js.

This coding session was streamed live on May 16, 2021.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Noisy Strokes Texture with Three.js and GLSL

In this ALL YOUR HTML coding session you’ll learn how to recreate the amazing noisy strokes texture seen on the website of Leonard, the inventive agency, using Three.js with GLSL. The wonderful effect was originally made by Damien Mortini.

This coding session was streamed live on May 9, 2021.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Texture Ripples and Video Zoom Effect with Three.js and GLSL

In this ALL YOUR HTML coding session we’ll be replicating the ripples and video zoom effect from The Avener by TOOSOON studio using Three.js and GLSL coding.

This coding session was streamed live on April 25, 2021.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Tropical Particles Rain Animation with Three.js

People always want more. More of everything. And there is nothing better in the world to fulfil that need than particle systems. Because you can have thousands of particles easily. And that’s what I did for this demo.

Creating a particle system

You start with just one particle. I like to use classes for that, because a class perfectly describes the behaviour of a particle. It usually looks like this:

class Particle{
	// just setting initial position and speed
	constructor(){}
	// updating velocity according to position and physics
	updateVelocity(){}
	// finally update particle position, 
	// sometimes these last two could be merged
	updatePosition(){}
}

And after all of that, you could render them. Sounds easy, right? It is!

Physics

So the particles need to move. For that we need to establish some rules for their movement. The easiest one is: gravity. Implemented in the particle it would look like this:

updateVelocity() {
	this.speed.y = gravity; // some constant
}

updatePosition() {
	// this could have been just one line, but imagine there are
	// some other factors for speed, not just gravity
	this.position.y += this.speed.y;

	// also some code to bring the particle back on screen once it is outside
}

With that you will get this simple rain. I also randomized gravity for each particle. Why? Because I can!
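
Filling in the skeleton, a complete, framework-free particle could look like this. The names, the screen height, and the wrap-around rule are my assumptions for illustration, not the demo's exact code:

```javascript
// Minimal rain particle: constant downward speed, wraps back to the
// top of the screen when it falls past the bottom edge.
class Particle {
  constructor({ x, y, screenHeight }) {
    this.x = x;
    this.y = y;
    this.screenHeight = screenHeight;
    this.speed = { y: 0 };
    // Randomized gravity gives each drop its own pace.
    this.gravity = 1 + Math.random() * 2;
  }

  updateVelocity() {
    this.speed.y = this.gravity;
  }

  updatePosition() {
    this.y += this.speed.y;

    // Bring the particle back to the top once it leaves the screen.
    if (this.y > this.screenHeight) this.y = 0;
  }
}

const drop = new Particle({ x: 10, y: 0, screenHeight: 100 });
drop.updateVelocity();
drop.updatePosition();
console.log(drop.y > 0); // → true
```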

But just rain didn’t seem enough, so I decided to add a little bit of physics to change the speed of the particles: to slow them down or speed them up, depending on the… image. Yup, I took the physics for the particles from the image. My friend @EncharmDre proposed using some oscillations for the particles, and I also added some color palettes. It looked amazing! Like a real tropical rain.

Still, the physics part is the computational bottleneck for this animation. It would be possible to animate millions of particles with GPGPU techniques, using the GPU to calculate the positions. But 10,000 particles seemed enough for my demo, and I got away with CPU calculations this time. Tens of thousands were also possible because I saved some performance on rendering.

Rendering

There are a lot of ways to render particles: HTML, Canvas2D, WebGL, SVG, you name it. But so far the fastest one is of course WebGL. It is even faster than Canvas2D rendering. I mean, of course Canvas2D works perfectly fine for particles. But with WebGL you could have MORE OF THEM! Which is the point of this whole adventure =).

So, WebGL narrowed down the search for rendering to a couple of frameworks. PIXI.js is actually very nice for rendering 2D particle systems, but because I love Three.js, and because it’s always nice to have another dimension as a backup (the more dimensions the better, remember?), I chose Three.js for this job.

There is already an object specifically for this job: THREE.Points. I also decided to use shaders (because it’s cool) but you could actually easily avoid that for this demo.

The simplified code looks somewhat like this:

// initialization
const positions = new Float32Array(numberOfParticles * 3);
const particles = [];

for (let i = 0; i < numberOfParticles; i++) {
  // x and y come from your random distribution across the screen;
  // because we are animating them in 2D, z = 0
  positions.set([x, y, 0], i * 3);
  particles.push(
    new Particle({
      x,
      y
    })
  );
}
geometry.setAttribute(
  "position",
  new THREE.BufferAttribute(positions, 3)
);

// animation loop
particles.forEach(particle => {
  particle.updateVelocity();
  particle.updatePosition();
});
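
One step the snippet leaves implicit: after updating the particles on the CPU, their positions have to be copied back into the buffer attribute every frame so THREE.Points actually moves. Stripped of Three.js, the copy loop amounts to this (the attribute is simulated with a plain Float32Array; with Three.js you would also set needsUpdate = true on the attribute):

```javascript
// Simulate the position buffer of THREE.Points: [x, y, z] per particle.
const particles = [
  { x: 1, y: 2 },
  { x: 3, y: 4 },
];
const positions = new Float32Array(particles.length * 3);

// Each frame, copy the CPU-side particle state into the buffer.
function syncPositions() {
  particles.forEach((particle, i) => {
    positions[i * 3 + 0] = particle.x;
    positions[i * 3 + 1] = particle.y;
    positions[i * 3 + 2] = 0; // 2D animation, z stays 0
  });
  // In Three.js: geometry.attributes.position.needsUpdate = true
}

syncPositions();
console.log(positions[4]); // → 4
```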

To make it even cooler, I decided to create trails for the particles. For that I just used the previous rendering frame, and faded it a little bit. And here is what I got:

Now combining everything, we could achieve this final look:

Final words

I beg you to try this yourself, and experiment with different parameters for particles, or write your own logic for their behavior. It is a lot of fun. And you could be the ruler of thousands or even millions of… particles!

GitHub link coming soon!


Rotating Loading Animation of 3D Shapes with Three.js

All things we see around are 3D shapes. Some of them are nice looking, some of them you’d like to touch and squeeze. But because it’s still quarantine time, and I don’t have many things around, I decided to create some virtual things to twist around. 🙂 Here’s what I ended up with:

How to do that?

To render everything I used Three.js, just like Mario Carrillo in his amazing demo (go check it out!). Three.js is the best and most popular library for working with WebGL at the moment. And if you are just starting your 3D path, it is definitely the easiest way to get something working.

If you just rotate the shape, it’s nothing special; it’s just rotating.

Interesting things start to happen when you rotate different parts of the shape at different speeds. Let’s say I want the bottom part of the object to rotate first, and then the rest; let’s see what’s going to happen.

To do something like that, you would need to use a Vertex shader. I used a little bit of math and rotated parts of the object depending on the Y or Z coordinate. The simplified code looks like this:

vec3 newposition = rotate(oldPosition, angle*Y_COORDINATE);

So with a coordinate change, the rotation changes as well. The rotate function itself involves a lot of sines, cosines and matrices: all the math you need to rotate a point around some axis. To give it a more natural feel, I also changed the easing function, so it feels like elastic rubber.
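
On the CPU side, rotating a point around the Y axis boils down to the classic sine/cosine pair. A small JavaScript version (an illustration of the math, not the shader's rotate function itself) looks like:

```javascript
// Rotate a point [x, y, z] around the Y axis by `angle` radians.
function rotateY([x, y, z], angle) {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  return [c * x + s * z, y, -s * x + c * z];
}

// A quarter turn moves [1, 0, 0] (approximately) onto [0, 0, -1].
console.log(rotateY([1, 0, 0], Math.PI / 2));
```

In the shader, the same rotation is simply evaluated per vertex, with the angle scaled by the vertex's coordinate.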

Visuals, MatCap

To create a nice look for the objects, we could have used lights, shadows and complicated materials. But because there’s not much in the scene, we get away with the simplest kind of material: a MatCap. Instead of making complicated lighting calculations, you just use a simple image as a reference for lighting and color. Here is how it looks:

And here is how the torus shape looks with this texture applied to it:

Amazing, isn’t it? And so simple.

So by combining MatCaps and this rotation method, we can create a lot of cool shape animations that look great and have this rubber feeling.

Go and create your own shape with this technique and make sure to share it with us! I hope this simple technique will inspire you to create your own animation! Have a good day!


How to Code the Travelling Particles Animation from “Volt for Drive”

In this ALL YOUR HTML coding session you’ll learn how to recreate Volt for Drive’s glowing particles that endlessly travel through city streets. The website was made by the team of Red Collar. We’ll use Three.js and some shader magic to implement the animation.

This coding session was streamed live on January 31, 2021.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Twisted Colorful Spheres with Three.js

I love blobs and I enjoy looking for interesting ways to change basic geometries with Three.js: bending a plane, twisting a box, or exploring a torus (like in this 10-min video tutorial). So this time, my love for shaping things will be the excuse to see what we can do with a sphere, transforming it using shaders. 

This tutorial will be brief, so we’ll skip the basic render/scene setup and focus on manipulating the sphere’s shape and colors, but if you want to know more about the setup check out these steps.

We’ll go with a more rounded than irregular shape, so the premise is to deform a sphere and use that same distortion to color it.

Vertex displacement

As you’ve probably been thinking, we’ll be using noise to deform the geometry by moving each vertex along the direction of its normal. Think of it as if we were pushing each vertex from the inside out with different strengths. I could elaborate more on this, but I’d rather point you to this article by The Spite aka Jaume Sanchez Elias; he explains it so well! I bet some of you have stumbled upon this article already.

So in code, it looks like this:

varying vec3 vNormal;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  vNormal = normal;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

And now we should see a blobby sphere:

See the Pen Vertex displacement by Mario (@marioecg) on CodePen.

You can experiment and change its values to see how the blob changes. I know we’re going with a more subtle and rounded distortion, but feel free to go crazy with it; there are audio visualizers out there that deform a sphere to the point that you don’t even think it’s based on a sphere.

Now, this already looks interesting, but let’s add one more touch to it next.

Noitation

…is just a word I came up with to combine noise with rotation (ba dum tss), but yes! Adding some twirl to the mix makes things more compelling.

If you’ve ever played with Play-Doh as a child, you have surely molded a big chunk of clay into a ball, grabbed it with both hands, and twisted in opposite directions until the clay tore apart. This is kind of what we want to do (except for the breaking part).

To twist the sphere, we are going to generate a sine wave from top to bottom of the sphere. Then, we are going to use this top-bottom wave as a rotation for the current position. Since the values increase/decrease from top to bottom, the rotation is going to oscillate as well, creating a twist:

varying vec3 vNormal;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  // Create a sine wave from top to bottom of the sphere
  // To increase the amount of waves, we'll use uFrequency
  // To make the waves bigger we'll use uAmplitude
  float angle = sin(uv.y * uFrequency + t) * uAmplitude;
  pos = rotateY(pos, angle);    

  vNormal = normal;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

Notice how the waves emerge from the top; it’s soothing. Some of you might find this movement therapeutic, so take some time to appreciate it and think about what we’ve learned so far…

See the Pen Noitation by Mario (@marioecg) on CodePen.

Alright! Now that you’re back let’s get on to the fragment shader.

Colorific

If you take a close look at the shaders before, you see, almost at the end, that we’ve been passing the normals to the fragment shader. Remember that we want to use the distortion to color the shape, so first let’s create a varying where we pass that distortion to:

varying float vDistort;

uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;

#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)

void main() {
  float t = uTime * uSpeed;
  // You can also use classic perlin noise or simplex noise,
  // I'm using its periodic variant out of curiosity
  float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;

  // Disturb each vertex along the direction of its normal
  vec3 pos = position + (normal * distortion);

  // Create a sine wave from top to bottom of the sphere
  // To increase the amount of waves, we'll use uFrequency
  // To make the waves bigger we'll use uAmplitude
  float angle = sin(uv.y * uFrequency + t) * uAmplitude;
  pos = rotateY(pos, angle);    

  vDistort = distortion; // Train goes to the fragment shader! Tchu tchuuu

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}

And use vDistort to color the pixels instead:

varying float vDistort;

uniform float uIntensity;

void main() {
  float distort = vDistort * uIntensity;

  vec3 color = vec3(distort);

  gl_FragColor = vec4(color, 1.0);
}

We should get a kind of twisted, smokey black and white color like so:

See the Pen Colorific by Mario (@marioecg) on CodePen.

With this basis, we’ll take it a step further and use it in conjunction with one of my favorite color functions out there.

Cospalette

Cosine palette is a very useful function to create and control color with code, based on the brightness, contrast, oscillation and phase of a cosine. I encourage you to watch Char Stiles explain this further, which is soooo good. A final shout-out to Inigo Quilez, who wrote an article about this function some years ago; for those of you who haven’t stumbled upon his genius work, please do. I would love to write more about him, but I’ll save that for a poem.

Let’s use cospalette to input the distortion and see how it looks:

varying vec2 vUv;
varying float vDistort;

uniform float uIntensity;

vec3 cosPalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
  return a + b * cos(6.28318 * (c * t + d));
}   

void main() {
  float distort = vDistort * uIntensity;

  // These values are my fav combination, 
  // they remind me of Zach Lieberman's work.
  // You can find more combos in the examples from IQ:
  // https://iquilezles.org/www/articles/palettes/palettes.htm
  // Experiment with these!
  vec3 brightness = vec3(0.5, 0.5, 0.5);
  vec3 contrast = vec3(0.5, 0.5, 0.5);
  vec3 oscillation = vec3(1.0, 1.0, 1.0);
  vec3 phase = vec3(0.0, 0.1, 0.2);

  // Pass the distortion as input of cospalette
  vec3 color = cosPalette(distort, brightness, contrast, oscillation, phase);

  gl_FragColor = vec4(color, 1.0);
}

¡Liiistoooooo! See how the color palette behaves similarly to the distortion, because we’re using it as input. Swap it for vUv.x or vUv.y to see different results of the palette, or even better, come up with your own input!

See the Pen Cospalette by Mario (@marioecg) on CodePen.

And that’s it! I hope this short tutorial gave you some ideas to apply to anything you’re creating or inspired you to make something. Next time you use noise, stop and think if you can do something extra to make it more interesting and make sure to save Cospalette in your shader toolbelt.

Explore and have fun with this! And don’t forget to share it with me on Twitter. If you got any questions or suggestions, let me know.

I hope you learned something new. Till next time! 

References and Credits

Thanks to all the amazing people that put knowledge out in the world!


Recreating Frontier Development Lab’s Sun in Three.js

In this ALL YOUR HTML coding session you’ll learn how to recreate the sun from Frontier Development Lab’s Sun Challenge, using numerous shaders, lots of layers of noise and CubeCamera.

This coding session was streamed live on January 24, 2021.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Infinite Hyperbolic Helicoid Animation with Three.js

In this ALL YOUR HTML coding session, you’ll learn how to create a beautiful shape with parametric functions, a hyperbolic helicoid, inspired by J. Miguel Medina’s artwork. We’ll be upgrading MeshPhysicalMaterial with some custom shader code, making an infinite animation.

This coding session was streamed live on January 10, 2021.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Recreating a Dave Whyte Animation in React-Three-Fiber

There’s a slew of artists and creative coders on social media who regularly post satisfying, hypnotic looping animations. One example is Dave Whyte, also known as @beesandbombs on Twitter. In this tutorial I’ll explain how to recreate one of his more popular recent animations, which I’ve dubbed “Breathing Dots”. Here’s the original animation:

The Tools

Dave says he uses Processing for his animations, but I’ll be using react-three-fiber (R3F), which is a React renderer for Three.js. Why am I using a 3D library for a 2D animation? Well, R3F provides a powerful declarative syntax for WebGL graphics and grants you access to useful Three.js features such as post-processing effects. It lets you do a lot with few lines of code, all while being highly modular and reusable. You can use whatever tool you like, but I find the combined ecosystems of React and Three.js make R3F a robust tool for general-purpose graphics.

I use an adapted CodeSandbox template running Create React App to bootstrap my R3F projects; you can fork it by clicking the button above to get a project running in a few seconds. I will assume some familiarity with React, Three.js and R3F for the rest of the tutorial. If you’re totally new, you might want to start here.

Step 1: Observations

First things first, we need to take a close look at what’s going on in the source material. When I look at the GIF, I see a field of little white dots. They’re spread out evenly, but the pattern looks more random than a grid. The dots are moving in a rhythmic pulse, getting pulled towards the center and then flung outwards in a gentle shockwave. The shockwave has the shape of an octagon. The dots aren’t in constant motion; rather, they seem to pause at each end of the cycle. The dots in motion look really smooth, almost like they’re melting. We need to zoom in to really understand what’s going on here. Here’s a close up of the corners during the contraction phase:

Interesting! The moving dots are split into red, green, and blue parts. The red part points in the direction of motion, while the blue part points away from the motion. The faster the dot is moving, the farther these three parts are spread out. As the colored parts overlap, they combine into a solid white color. Now that we understand what exactly we want to produce, let’s start coding.

Step 2: Making Some Dots

If you’re using the CodeSandbox template I provided, you can strip down the main App.js to just an empty scene with a black background:

import React from 'react'
import { Canvas } from 'react-three-fiber'

export default function App() {
  return (
    <Canvas>
      <color attach="background" args={['black']} />
    </Canvas>
  )
}

Our First Dot

Let’s create a component for our dots, starting with just a single white circle mesh composed of a CircleBufferGeometry and MeshBasicMaterial:

function Dots() {
  return (
    <mesh>
      <circleBufferGeometry />
      <meshBasicMaterial />
    </mesh>
  )
}

Add the <Dots /> component inside the canvas, and you should see a white octagon appear onscreen. Our first dot! Since it’ll be tiny, it doesn’t matter that it’s not very round.

But wait a second… Using a color picker, you’ll notice that it’s not pure white! This is because R3F sets up color management by default which is great if you’re working with glTF models, but not if you need raw colors. We can disable the default behavior by setting colorManagement={false} on our canvas.

More Dots

We need approximately 10,000 dots to fully fill the screen throughout the animation. A naive approach to creating a field of dots would be to simply render our dot mesh a few thousand times. However, you’ll quickly notice that this destroys performance. Rendering 10,000 of these chunky dots brings my gaming rig down to a measly 5 FPS. The problem is that each dot mesh incurs its own draw call, which means the CPU needs to send 10,000 (largely redundant) instructions to the GPU every frame.

The solution is to use instanced rendering, which means the CPU can tell the GPU about the dot shape, material, and the locations of all 10,000 instances in a single draw call. Three.js offers a helpful InstancedMesh class to facilitate instanced rendering of a mesh. According to the docs, it accepts a geometry, material, and integer count as constructor arguments. Let’s convert our regular old mesh into an <instancedMesh>, starting with just one instance. We can leave the geometry and material slots as null since the child elements will fill them, so we only need to specify the count.

function Dots() {
  return (
    <instancedMesh args={[null, null, 1]}>
      <circleBufferGeometry />
      <meshBasicMaterial />
    </instancedMesh>
  )
}

Hey, where did it go? The dot disappeared because of how InstancedMesh is initialized. Internally, the .instanceMatrix stores the transformation matrix of each instance, but it’s initialized with all zeros which squashes our mesh into the abyss. Instead, we should start with an identity matrix to get a neutral transformation. Let’s get a reference to our InstancedMesh and apply the identity matrix to the first instance inside of useLayoutEffect so that it’s properly positioned before anything is painted to the screen.

function Dots() {
  const ref = useRef()
  useLayoutEffect(() => {
    // THREE.Matrix4 defaults to an identity matrix
    const transform = new THREE.Matrix4()

    // Apply the transform to the instance at index 0
    ref.current.setMatrixAt(0, transform)
  }, [])
  return (
    <instancedMesh ref={ref} args={[null, null, 1]}>
      <circleBufferGeometry />
      <meshBasicMaterial />
    </instancedMesh>
  )
}

Great, now we have our dot back. Time to crank it up to 10,000. We’ll increase the instance count and set the transform of each instance along a centered 100 x 100 grid.

for (let i = 0; i < 10000; ++i) {
  const x = (i % 100) - 50
  const y = Math.floor(i / 100) - 50
  transform.setPosition(x, y, 0)
  ref.current.setMatrixAt(i, transform)
}

We should also decrease the circle radius to 0.15 to better fit the grid proportions. We don’t want any perspective distortion on our grid, so we should set the orthographic prop on the canvas. Lastly, we’ll lower the default camera’s zoom to 20 to fit more dots on screen.
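As a sketch, those three adjustments might look like this in JSX (the prop names come from react-three-fiber; the radius and zoom values are the ones mentioned above):

```javascript
// Sketch of the canvas setup with the tweaks described above:
// orthographic projection, a lower zoom, and raw (unmanaged) colors.
export default function App() {
  return (
    <Canvas orthographic colorManagement={false} camera={{ zoom: 20 }}>
      <color attach="background" args={['black']} />
      <Dots />
    </Canvas>
  )
}
```

And inside Dots, the smaller radius goes on the geometry as its first constructor argument: `<circleBufferGeometry args={[0.15]} />`.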

The result should look like this:

Although you can’t tell the difference yet, it’s now running at a silky smooth 60 FPS 😀

Adding Some Noise

There’s a variety of ways to distribute points on a surface beyond a simple grid. “Poisson disc sampling” and “centroidal Voronoi tessellation” are some mathematical approaches that generate slightly more natural distributions. That’s a little too involved for this demo, so let’s just approximate a natural distribution by turning our square grid into hexagons and adding in small random offsets to each point. The positioning logic now looks like this:

// Place in a grid
let x = (i % 100) - 50
let y = Math.floor(i / 100) - 50

// Offset every other column (hexagonal pattern)
y += (i % 2) * 0.5

// Add some noise
x += Math.random() * 0.3
y += Math.random() * 0.3

Step 3: Creating Motion

Sine waves are the heart of cyclical motion. By feeding the clock time into a sine function, we get a value that oscillates between -1 and 1. To get the effect of expansion and contraction, we want to oscillate each point’s distance from the center. Another way of thinking about this is that we want to dynamically scale each point’s initial position vector. Since we should avoid unnecessary computations in the render loop, let’s cache our initial position vectors in useMemo for reuse. We’re also going to need that Matrix4 in the loop, so let’s cache that as well. Finally, we don’t want to overwrite our initial dot positions, so let’s cache a spare Vector3 for use during calculations.

const { vec, transform, positions } = useMemo(() => {
  const vec = new THREE.Vector3()
  const transform = new THREE.Matrix4()
  const positions = [...Array(10000)].map((_, i) => {
    const position = new THREE.Vector3()
    position.x = (i % 100) - 50
    position.y = Math.floor(i / 100) - 50
    position.y += (i % 2) * 0.5
    position.x += Math.random() * 0.3
    position.y += Math.random() * 0.3
    return position
  })
  return { vec, transform, positions }
}, [])

For simplicity, let’s scrap the useLayoutEffect call and configure all the matrix updates in a useFrame loop. Remember that in R3F, the useFrame callback receives the same state object that useThree returns, including the Three.js clock, so we can access a dynamic time through clock.elapsedTime. We’ll add some simple motion by copying each instance position into our scratch vector, scaling it by some factor of the sine wave, and then copying that to the matrix. As mentioned in the docs, we need to set .needsUpdate to true on the instanced mesh’s .instanceMatrix in the loop so that Three.js knows to keep updating the positions.

useFrame(({ clock }) => {
  const scale = 1 + Math.sin(clock.elapsedTime) * 0.3
  for (let i = 0; i < 10000; ++i) {
    vec.copy(positions[i]).multiplyScalar(scale)
    transform.setPosition(vec)
    ref.current.setMatrixAt(i, transform)
  }
  ref.current.instanceMatrix.needsUpdate = true
})

Rounded square waves

The raw sine wave follows a perfectly round, circular motion. However, as we observed earlier:

The dots aren’t in constant motion; rather, they seem to pause at each end of the cycle.

This calls for a different, more boxy looking wave with longer plateaus and shorter transitions. A search through the digital signal processing Stack Exchange turns up this post with the equation for a rounded square wave. I’ve visualized the equation here and animated the delta parameter; watch how it goes from smooth to boxy:

The equation translates to this JavaScript function:

const roundedSquareWave = (t, delta, a, f) => {
  return ((2 * a) / Math.PI) * Math.atan(Math.sin(2 * Math.PI * t * f) / delta)
}

Swapping out our Math.sin call for the new wave function with a delta of 0.1 makes the motion more snappy, with time to rest in between:
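To see the plateau concretely, you can evaluate the wave at a couple of points. The helper from above is repeated here so the snippet runs on its own; a delta of 0.1 matches the text, while a = 1 and f = 1 are purely illustrative:

```javascript
// Rounded square wave: oscillates between -a and a at frequency f,
// spending most of each period near the extremes when delta is small.
const roundedSquareWave = (t, delta, a, f) => {
  return ((2 * a) / Math.PI) * Math.atan(Math.sin(2 * Math.PI * t * f) / delta)
}

// At t = 0 the wave crosses zero, just like a sine...
console.log(roundedSquareWave(0, 0.1, 1, 1))   // 0
// ...but only a tenth of a period later it has nearly flattened out at the
// amplitude, while a plain sine would still be down around 0.59.
console.log(roundedSquareWave(0.1, 0.1, 1, 1)) // ≈ 0.89
```

Smaller delta values push those transition points even closer together, which is exactly the snappy pause-and-go motion we observed.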

Ripples

How do we use this wave to make the dots move at different speeds and create ripples? If we change the input to the wave based on the dot’s distance from the center, then each ring of dots will be at a different phase causing the surface to stretch and squeeze like an actual wave. We’ll use the initial distances on every frame, so let’s cache and return the array of distances in our useMemo callback:

const distances = positions.map(pos => pos.length())

Then, in the useFrame callback we subtract a factor of the distance from the t (time) variable that gets plugged into the wave. That looks like this:
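Here is that idea as a sketch, written as a pure helper so the phase logic is easy to see (the divisor 25 and the wave parameters are illustrative values, not necessarily the demo’s exact numbers):

```javascript
// Repeated from above so this snippet is self-contained.
const roundedSquareWave = (t, delta, a, f) =>
  ((2 * a) / Math.PI) * Math.atan(Math.sin(2 * Math.PI * t * f) / delta)

// Per-dot scale factor: subtracting a factor of the distance from the
// time shifts each ring's phase, so the pulse travels outward as a ripple.
const dotScale = (elapsedTime, dist) =>
  1 + roundedSquareWave(elapsedTime - dist / 25, 0.1, 1, 1 / 4) * 0.3

// Inside useFrame, this replaces the single shared `scale`:
//   vec.copy(positions[i]).multiplyScalar(dotScale(clock.elapsedTime, distances[i]))
//   transform.setPosition(vec)
//   ref.current.setMatrixAt(i, transform)
```

At any instant, dots at different distances sit at different points of the wave, which is what stretches and squeezes the surface.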

That already looks pretty cool!

The Octagon

Our ripple is perfectly circular; how can we make it look more octagonal, like the original? One way to approximate this effect is by combining a sine or cosine wave with our distance function at an appropriate frequency (8 times per revolution). Watch how changing the strength of this wave changes the shape of the region:

A strength of 0.5 is a pretty good balance between looking like an octagon and not looking too wavy. That change can happen in our initial distance calculations:

const right = new THREE.Vector3(1, 0, 0)
const distances = positions.map((pos) => (
  pos.length() + Math.cos(pos.angleTo(right) * 8) * 0.5
))
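To convince yourself that the cosine term really produces eight lobes, you can check the modified distance with plain math. This sketch uses Math.atan2 in place of THREE’s angleTo, which gives the same result here since cosine is an even function:

```javascript
// Distance field with an eight-lobed (roughly octagonal) modulation.
const octDistance = (x, y) => {
  const len = Math.hypot(x, y)
  const angle = Math.atan2(y, x) // signed angle to +x; cos(8θ) ignores the sign
  return len + Math.cos(angle * 8) * 0.5
}

// On the unit circle, every multiple of 45° sits on a lobe peak...
console.log(octDistance(1, 0)) // 1.5
// ...while the midpoints between them dip to the trough.
console.log(octDistance(Math.cos(Math.PI / 8), Math.sin(Math.PI / 8))) // ≈ 0.5
```

Because the modulation repeats every 45°, the ripple’s wavefront bulges toward eight directions, which reads as an octagon once the strength is tuned.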

It’ll take some additional tweaks to really see the effect of this. There are a few places that we can focus our adjustments on:

  • Influence of point distance on wave phase
  • Influence of point distance on wave roundness
  • Frequency of the wave
  • Amplitude of the wave

It’s a bit of educated trial and error to make it match the original GIF, but after fiddling with the wave parameters and multipliers, you can eventually get something like this:

When previewing in full screen, the octagonal shape is now pretty clear.

Step 4: Post-processing

We have something that mimics the overall motion of the GIF, but the dots in motion don’t have the same color shifting effect that we observed earlier. As a reminder:

The moving dots are split into red, green, and blue parts. The red part points in the direction of motion, while the blue part points away from the motion. The faster the dot is moving, the farther these three parts are spread out. As the colored parts overlap, they combine into a solid white color.

We can achieve this effect using the post-processing EffectComposer built into Three.js, which we can conveniently tack onto the scene without any changes to the code we’ve already written. If you’re new to post-processing like me, I highly recommend reading this intro guide from threejsfundamentals. In short, the composer lets you toss image data back and forth between two “render targets” (glorified image textures), applying shaders and other operations in between. Each step of the pipeline is called a “pass”. Typically the first pass performs the initial scene render, then there are some passes to add effects, and by default the final pass writes the resulting image to the screen.

An example: motion blur

Here’s a JSFiddle from Maxime R that demonstrates a naive motion blur effect with the EffectComposer. This effect makes use of a third render target in order to preserve a blend of previous frames. I’ve drawn out a diagram to track how image data moves through the pipeline (read from the top down):

Diagram depicting the flow of data through four passes of a simple motion blur effect. The process is explained below.

First, the scene is rendered as usual and written to rt1 with a RenderPass. Most passes will automatically switch the read and write buffers (render targets), so our next pass will read what we just rendered in rt1 and write to rt2. In this case we use a ShaderPass configured with a BlendShader to blend the contents of rt1 with whatever is stored in our third render target (empty at first, but it eventually accumulates a blend of previous frames). This blend is written to rt2 and another swap automatically occurs. Next, we use a SavePass to save the blend we just created in rt2 back to our third render target. The SavePass is a little unique in that it doesn’t swap the read and write buffers, which makes sense since it doesn’t actually change the image data. Finally, that same blend in rt2 (which is still the read buffer) gets read into another ShaderPass set to a CopyShader, which simply copies its input into the output. Since it’s the last pass on the stack, it automatically gets renderToScreen=true which means that its output is what you’ll see on screen.

Working with post-processing requires some mental gymnastics, but hopefully this makes some sense of how different components like ShaderPass, SavePass, and CopyShader work together to apply effects and preserve data between frames.

RGB Delay Effect

A simple RGB color shifting effect involves turning our single white dot into three colored dots that get farther apart the faster they move. Rather than trying to compute velocities for all the dots and passing them to the post-processing stack, we can cheat by overlaying previous frames:

A red, green, and blue dot overlaid like a Venn diagram, depicting three consecutive frames.

This turns out to be a very similar problem to the motion blur, since it requires us to use additional render targets to store data from previous frames. We actually need two extra render targets this time: one to store the image from frame n-1 and another for frame n-2. I’ll call these render targets delay1 and delay2.

Here’s a diagram of the RGB delay effect:

Diagram depicting the flow of data through four passes of an RGB color delay effect. Key aspects of the process are explained below.
A circle containing a value X represents the individual frame for delay X.

The trick is to manually disable needsSwap on the ShaderPass that blends the colors together, so that the SavePass that follows re-reads the buffer that holds the current frame rather than the colored composite. Similarly, by manually enabling needsSwap on the SavePass we ensure that we read from the colored composite on the final ShaderPass for the end result. The other tricky part is that since we’re placing the current frame’s contents in the delay2 buffer (so as not to lose the contents of delay1 for the next frame), we need to swap these buffers each frame. It’s easiest to do this outside of the EffectComposer by swapping the references to these render targets on the ShaderPass and SavePass within the render loop.

Implementation

This is all very abstract, so let’s see what this means in practice. In a new file (Effects.js), start by importing the necessary passes and shaders, then extending the classes so that R3F can access them declaratively.

import { useThree, useFrame, extend } from 'react-three-fiber'
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer'
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass'
import { SavePass } from 'three/examples/jsm/postprocessing/SavePass'
import { CopyShader } from 'three/examples/jsm/shaders/CopyShader'
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass'

extend({ EffectComposer, ShaderPass, SavePass, RenderPass })

We’ll put our effects inside a new component. Here is what a basic effect looks like in R3F:

function Effects() {
  const composer = useRef()
  const { scene, gl, size, camera } = useThree()
  useEffect(() => void composer.current.setSize(size.width, size.height), [size])
  useFrame(() => {
    composer.current.render()
  }, 1)
  return (
    <effectComposer ref={composer} args={[gl]}>
      <renderPass attachArray="passes" scene={scene} camera={camera} />
    </effectComposer>
  )
}

All that this does is render the scene to the canvas. Let’s start adding in the pieces from our diagram. We’ll need a shader that takes in three textures and blends their red, green, and blue channels, respectively. The vertexShader of a post-processing shader always looks the same, so we only really need to focus on the fragmentShader. Here’s what the complete shader looks like:

const triColorMix = {
  uniforms: {
    tDiffuse1: { value: null },
    tDiffuse2: { value: null },
    tDiffuse3: { value: null }
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1);
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    uniform sampler2D tDiffuse1;
    uniform sampler2D tDiffuse2;
    uniform sampler2D tDiffuse3;
    
    void main() {
      vec4 del0 = texture2D(tDiffuse1, vUv);
      vec4 del1 = texture2D(tDiffuse2, vUv);
      vec4 del2 = texture2D(tDiffuse3, vUv);
      float alpha = min(min(del0.a, del1.a), del2.a);
      gl_FragColor = vec4(del0.r, del1.g, del2.b, alpha);
    }
  `
}

With the shader ready to roll, we’ll then memo-ize our helper render targets and set up some additional refs to hold constants and references to our other passes.

const savePass = useRef()
const blendPass = useRef()
const swap = useRef(false) // Whether to swap the delay buffers
const { rtA, rtB } = useMemo(() => {
  const rtA = new THREE.WebGLRenderTarget(size.width, size.height)
  const rtB = new THREE.WebGLRenderTarget(size.width, size.height)
  return { rtA, rtB }
}, [size])

Next, we’ll flesh out the effect stack with the other passes specified in the diagram above and attach our refs:

return (
  <effectComposer ref={composer} args={[gl]}>
    <renderPass attachArray="passes" scene={scene} camera={camera} />
    <shaderPass attachArray="passes" ref={blendPass} args={[triColorMix, 'tDiffuse1']} needsSwap={false} />
    <savePass attachArray="passes" ref={savePass} needsSwap={true} />
    <shaderPass attachArray="passes" args={[CopyShader]} />
  </effectComposer>
)

By stating args={[triColorMix, 'tDiffuse1']} on the blend pass, we indicate that the composer’s read buffer should be passed as the tDiffuse1 uniform in our custom shader. The behavior of these passes is unfortunately not documented, so you sometimes need to poke through the source files to figure this stuff out.

Finally, we’ll need to modify the render loop to swap between our spare render targets and plug them in as the remaining two uniforms:

useFrame(() => {
  // Swap render targets and update dependencies
  let delay1 = swap.current ? rtB : rtA
  let delay2 = swap.current ? rtA : rtB
  savePass.current.renderTarget = delay2
  blendPass.current.uniforms['tDiffuse2'].value = delay1.texture
  blendPass.current.uniforms['tDiffuse3'].value = delay2.texture
  swap.current = !swap.current
  composer.current.render()
}, 1)

All the pieces for our RGB delay effect are in place. Here’s a demo of the end result on a simpler scene with one white dot moving back and forth:

Putting it all together

As you’ll notice in the previous sandbox, we can make the effect take hold by simply plopping the <Effects /> component inside the canvas. After doing this, we can make it look even better by adding an anti-aliasing pass to the effect composer.

import { FXAAShader } from 'three/examples/jsm/shaders/FXAAShader'

...
  const pixelRatio = gl.getPixelRatio()
  return (
    <effectComposer ref={composer} args={[gl]}>
      <renderPass attachArray="passes" scene={scene} camera={camera} />
      <shaderPass attachArray="passes" ref={blendPass} args={[triColorMix, 'tDiffuse1']} needsSwap={false} />
      <savePass attachArray="passes" ref={savePass} needsSwap={true} />
      <shaderPass
        attachArray="passes"
        args={[FXAAShader]}
        uniforms-resolution-value-x={1 / (size.width * pixelRatio)}
        uniforms-resolution-value-y={1 / (size.height * pixelRatio)}
      />
      <shaderPass attachArray="passes" args={[CopyShader]} />
    </effectComposer>
  )
}

And here’s our finished demo!

(Bonus) Interactivity

While outside the scope of this tutorial, I’ve added an interactive demo variant which responds to mouse clicks and cursor position. This variant uses react-spring v9 to smoothly reposition the focus point of the dots. Check it out in the “Demo 2” page of the demo linked at the top of this page, and play around with the source code to see if you can add other forms of interactivity.

Step 5: Sharing Your Work

I highly recommend publicly sharing the things you create. It’s a great way to track your progress, share your learning with others, and get feedback. I wouldn’t be writing this tutorial if I hadn’t shared my work! For perfect loops you can use the use-capture hook to automate your recording. If you’re sharing to Twitter, consider converting to a GIF to avoid compression artifacts. Here’s a thread from @arc4g explaining how they create smooth 50 FPS GIFs for Twitter.

I hope you learned something about Three.js or react-three-fiber from this tutorial. Many of the animations I see online follow a similar formula of repeated shapes moving in some mathematical rhythm, so the principles here extend beyond just rippling dots. If this inspired you to create something cool, tag me in it so I can see!

The post Recreating a Dave Whyte Animation in React-Three-Fiber appeared first on Codrops.

Coding a 3D Lines Animation with Three.js

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!

In this live stream of ALL YOUR HTML, we will be recreating the cool 3D lines animation seen on the Voice of Racism website by Assembly, using Three.js’ DepthTexture.

This coding session was streamed live on December 13, 2020.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Coding a 3D Lines Animation with Three.js appeared first on Codrops.

Coding a Simple Raymarching Scene with Three.js

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!

In this live stream of ALL YOUR HTML, we will be coding a raymarching demo in Three.js from scratch. We’ll be using some cool matcap textures, adding a wobbly animation to the scene, and doing it all with math functions. The scene is inspired by this demo made by Luigi De Rosa.

This coding session was streamed live on December 6, 2020.

Check out the live demo.

The Art of Code channel: https://www.youtube.com/c/TheArtofCodeIsCool/

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Coding a Simple Raymarching Scene with Three.js appeared first on Codrops.

Building a Svelte Static Website with Smooth Page Transitions

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!

In this live stream of ALL YOUR HTML, you’ll learn how image transitions with GLSL and Three.js work and how to build a static website with Svelte.js that will be using a third party API. Finally, we’ll code some smooth page transitions using GSAP and Three.js.

This coding session was streamed live on November 29, 2020.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Building a Svelte Static Website with Smooth Page Transitions appeared first on Codrops.

Replicating the Icosahedron from Rogier de Boevé’s Website

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!

In this live stream of ALL YOUR HTML, we’ll be replicating the beautiful icosahedron animation from Rogier de Boevé’s website. We’ll be using Three.js and GLSL to make things cool, and also some postprocessing.

This coding session was streamed live on November 22, 2020.

This is what we’ll be learning to code:

Original website: https://rogierdeboeve.com/

Developer’s Twitter: https://twitter.com/rogierdeboeve

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Replicating the Icosahedron from Rogier de Boevé’s Website appeared first on Codrops.