Case Study: Anatole Touvron’s Portfolio

Like many developers, I decided to redo my portfolio in the summer of 2021. I wanted to do everything from scratch, including the design and development, because I liked the idea of:

  • doing the whole project on my own
  • working on something different than coding
  • being 100% responsible for the website
  • being free to do whatever I want

I knew that I wanted to do something simple but efficient. I’ve tried many times to make another portfolio, but I always felt stuck at some point and wanted to wipe everything clean by redoing the design or the entire development. I talked to many developers about this, and I think we all know this feeling of getting tired of a project because of how long it is taking to finish. For the first time, I managed to avoid this feeling, simply by designing the entire project in one afternoon and developing it in two weeks.

Before going further, I want to point out that prior to this I studied @lhbizarro’s course on how to create an immersive website from scratch, and I would not have been able to create my portfolio this fast without it. It is a true wealth of knowledge that I now use on a regular basis.

Going fast

One of the best tips I can give to any developer who wants to avoid getting tired of their portfolio is to go as fast as possible. If you know that you get tired of a project easily, I highly suggest doing it when you have the time and then rushing it like crazy. This goes for the coding as well as the design part.

Get inspired by websites that you like, find colours that match your vibe and fonts that suit you, use your favourite design tool and don’t overthink it.

Things can always get better but if you don’t stick to what you’re doing you will never be able to finish it.

I’m really proud and happy about my portfolio, but I know that it isn’t that creative or crazy compared to stuff you can see online. I set the bar at a level I knew I could reach, and I think this is a good way of making sure you’ll end up with a new website that makes you progress and feel good about yourself.

Text animations

For the different text animations I chose to do something I’ve been doing for a long time: animating lines for paragraphs, and letters or words for titles, with the help of the Intersection Observer API.

Using it is the most efficient and optimal way to do such animations, as it only triggers once in the browser and allows you to have many spans translating without making your user’s machine take off like a rocket.

You have to know that the more elements there are to animate, the harder it will be to keep the experience smooth on every device, which is why I used the smallest possible number of spans for my animations.

To split my text I always use GSAP’s SplitText because it spoon-feeds you the least fun part of the work. It’s really easy to use and never disappoints.

How to animate

There are multiple ways of animating your divs or spans, and I’ve decided to show you two of them:

  • fully with JavaScript
  • CSS and JavaScript-based

Controlling an animation entirely with JavaScript gives you code that’s easier to read and understand, because everything is in the same file. But it can be a bit less optimal, because you’ll be relying on JavaScript animation libraries.

Open the following example to see the code:
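If you can’t check the example right away, here’s a minimal sketch of the JavaScript-only approach, assuming GSAP and its SplitText plugin are available (the .paragraph selector and the animation values are placeholders, not the exact code from the demo):

import gsap from 'gsap'
import { SplitText } from 'gsap/SplitText'

gsap.registerPlugin(SplitText)

const observer = new IntersectionObserver((entries, observer) => {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return

    // Split the paragraph into line wrappers and slide each line up.
    const { lines } = new SplitText(entry.target, { type: 'lines' })

    gsap.from(lines, {
      yPercent: 100,
      autoAlpha: 0,
      duration: 1,
      ease: 'power2.out',
      stagger: 0.1
    })

    // The animation only needs to run once, so stop observing the element.
    observer.unobserve(entry.target)
  })
})

document.querySelectorAll('.paragraph').forEach(element => observer.observe(element))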

With CSS, your code can get a bit out of hand if you’re not organised, as there are a lot more classes to take care of. It’s up to you which approach you prefer, but I feel it’s important to talk about both options, because each has its advantages and disadvantages depending on your needs.

The next example shows how to do it with CSS + JS (open the Codesandbox):
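If the Codesandbox isn’t at hand, the idea boils down to toggling a class from JavaScript and letting a CSS transition do the actual animation. A rough sketch, with made-up class names:

// In the stylesheet, each .line starts shifted down and transitions back up:
//
//   .line { transform: translateY(100%); transition: transform 0.8s ease; }
//   .line.is-visible { transform: translateY(0); }

const observer = new IntersectionObserver((entries, observer) => {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return

    // Adding the class kicks off the CSS transition.
    entry.target.classList.add('is-visible')

    observer.unobserve(entry.target)
  })
})

document.querySelectorAll('.line').forEach(element => observer.observe(element))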

Infinite slider

I’d like to show you the HTML and CSS part of the infinite slider on my portfolio. If you’d like to understand the WebGL part, I can recommend the following tutorial: Creating an Infinite Circular Gallery using WebGL with OGL and GLSL Shaders

The main idea of an infinite slider is to duplicate the content so that you can never see the end of it.

You can see the example here (open the Codesandbox for a better scroll experience):

Basically, you wrap everything in a big wrapper that contains the doubled content, and once you’ve translated the equivalent of 50% of that wrapper, you put everything back to its initial position. It’s an illusion, really, and it will make more sense when you look at this schematic drawing:

At this very moment the big wrapper snaps back to its initial position to mimic the effect of an infinite slider.
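In JavaScript, that snap could look like the following sketch, where .slider__wrapper is a hypothetical element containing the doubled content and the speed is an arbitrary value:

const wrapper = document.querySelector('.slider__wrapper')

let x = 0

function update () {
  // Move the wrapper a little to the left on every frame.
  x -= 1

  // Once we've translated half of the doubled content, snap back to the start.
  // Both states look identical, so the jump is invisible.
  if (Math.abs(x) >= wrapper.scrollWidth / 2) {
    x = 0
  }

  wrapper.style.transform = `translateX(${x}px)`

  window.requestAnimationFrame(update)
}

update()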

WebGL

Although I won’t explain the WebGL part because Luis already explains it in his tutorial, I do want to talk about the post-processing part with OGL because I love this process and it provides a very powerful way of adding interactions to your website.

To do the post-processing, all you need is the following:

this.post = new Post(this.gl);
this.pass = this.post.addPass({
  fragment,
  uniforms: {
    uResolution: {
      value: new Vec2(1, window.innerHeight / window.innerWidth)
    },
    uMouse: {
      value: new Vec2()
    },
    uVelo: {
      value: 0
    },
    uAmount: {
      value: 0
    }
  }
});

We pass the mouse position, the velocity and the time to the uniforms. This allows us to create a grain effect and an RGB distortion based on the mouse position. You can find many shaders like this online, and I find it really cool to have; it’s not that hard to implement either.
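These uniforms then need to be fed on every frame. The demo’s exact update code isn’t shown here, so here’s a minimal sketch of what it could look like, assuming you track a normalized mouse position and a lerped scroll velocity elsewhere (this.mouse and this.velocity are hypothetical names):

update () {
  // Values tracked elsewhere in the class: a normalized mouse
  // position and a lerped scroll velocity.
  this.pass.uniforms.uMouse.value.set(this.mouse.x, this.mouse.y)
  this.pass.uniforms.uVelo.value = this.velocity

  // Time-like value driving the grain.
  this.pass.uniforms.uAmount.value += 0.01

  // Render the scene through the post-processing pass.
  this.post.render({ scene: this.scene, camera: this.camera })
}

And here’s the fragment shader that consumes these uniforms: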

// Disc-shaped mask with a soft border, centered on the mouse position.
float circle(vec2 uv, vec2 disc_center, float disc_radius, float border_size) {
  uv -= disc_center;
  uv *= uResolution;
  float dist = sqrt(dot(uv, uv));
  return smoothstep(disc_radius + border_size, disc_radius - border_size, dist);
}

// Cheap pseudo-random value, used for the grain.
float random(vec2 p) {
  vec2 K1 = vec2(
    23.14069263277926,
    2.665144142690225
  );
  return fract(cos(dot(p, K1)) * 12345.6789);
}

void main() {
  vec2 newUV = vUv;

  // Mask around the mouse position.
  float c = circle(newUV, uMouse, 0.0, 0.6);

  // Sample each channel with a slightly different offset,
  // scaled by the velocity, to get the RGB distortion.
  float r = texture2D(tMap, newUV.xy += c * (uVelo * .9)).x;
  float g = texture2D(tMap, newUV.xy += c * (uVelo * .925)).y;
  float b = texture2D(tMap, newUV.xy += c * (uVelo * .95)).z;

  vec4 newColor = vec4(r, g, b, 1.);

  // Add the grain on top.
  newUV.y *= random(vec2(newUV.y, uAmount));
  newColor.rgb += random(newUV) * 0.10;

  gl_FragColor = newColor;
}

You can find the complete implementation here:

Conclusion

I hope you liked this case study and learned a few tips and tricks! If you have any questions, you can hit me up @anatoletouvron 🙂


Creating an Infinite Circular Gallery using WebGL with OGL and GLSL Shaders

In this tutorial we’ll implement an infinite circular gallery using WebGL with OGL based on the website Lions Good News 2020 made by SHIFTBRAIN inc.

Most of the steps of this tutorial can be also reproduced in other WebGL libraries such as Three.js or Babylon.js with the correct adaptations.

With that being said, let’s start coding!

Creating our OGL 3D environment

The first step of any WebGL tutorial is making sure that you’re setting up all the rendering logic required to create a 3D environment.

Usually what’s required is: a camera, a scene and a renderer that is going to output everything into a canvas element. Then inside a requestAnimationFrame loop, you’ll use your camera to render a scene inside the renderer. So here’s our initial snippet:

import { Renderer, Camera, Transform } from 'ogl'
 
export default class App {
  constructor () {
    this.createRenderer()
    this.createCamera()
    this.createScene()
 
    this.onResize()
 
    this.update()
 
    this.addEventListeners()
  }
 
  createRenderer () {
    this.renderer = new Renderer()
 
    this.gl = this.renderer.gl
    this.gl.clearColor(0.79607843137, 0.79215686274, 0.74117647058, 1)
 
    document.body.appendChild(this.gl.canvas)
  }
 
  createCamera () {
    this.camera = new Camera(this.gl)
    this.camera.fov = 45
    this.camera.position.z = 20
  }
 
  createScene () {
    this.scene = new Transform()
  }
 
  /**
   * Events.
   */
  onTouchDown (event) {
      
  }
 
  onTouchMove (event) {
      
  }
 
  onTouchUp (event) {
      
  }
 
  onWheel (event) {
      
  }
 
  /**
   * Resize.
   */
  onResize () {
    this.screen = {
      height: window.innerHeight,
      width: window.innerWidth
    }
 
    this.renderer.setSize(this.screen.width, this.screen.height)
 
    this.camera.perspective({
      aspect: this.gl.canvas.width / this.gl.canvas.height
    })
 
    const fov = this.camera.fov * (Math.PI / 180)
    const height = 2 * Math.tan(fov / 2) * this.camera.position.z
    const width = height * this.camera.aspect
 
    this.viewport = {
      height,
      width
    }
  }
 
  /**
   * Update.
   */
  update () {
    this.renderer.render({
      scene: this.scene,
      camera: this.camera
    })
    
    window.requestAnimationFrame(this.update.bind(this))
  }
 
  /**
   * Listeners.
   */
  addEventListeners () {
    window.addEventListener('resize', this.onResize.bind(this))
 
    window.addEventListener('mousewheel', this.onWheel.bind(this))
    window.addEventListener('wheel', this.onWheel.bind(this))
 
    window.addEventListener('mousedown', this.onTouchDown.bind(this))
    window.addEventListener('mousemove', this.onTouchMove.bind(this))
    window.addEventListener('mouseup', this.onTouchUp.bind(this))
 
    window.addEventListener('touchstart', this.onTouchDown.bind(this))
    window.addEventListener('touchmove', this.onTouchMove.bind(this))
    window.addEventListener('touchend', this.onTouchUp.bind(this))
  }
}
 
new App()

Explaining the App class setup

In our createRenderer method, we’re initializing a renderer with a fixed color background by calling this.gl.clearColor. Then we’re storing our GL context (this.renderer.gl) reference in the this.gl variable and appending our <canvas> (this.gl.canvas) element to our document.body.

In our createCamera method, we’re creating a new Camera() instance and setting some of its attributes: fov and its z position. The FOV is the field of view of your camera, what you’re able to see from it. And the z is the position of your camera in the z axis.

In our createScene method, we’re using the Transform class, that is the representation of a new scene that is going to contain all our planes that represent our images in the WebGL environment.

The onResize method is the most important part of our initial setup. It’s responsible for three different things:

  1. Making sure we’re always resizing the <canvas> element with the correct viewport sizes.
  2. Updating our this.camera perspective by dividing the width by the height of the viewport.
  3. Storing in the this.viewport variable the values that will help to transform pixels into 3D environment sizes, using the fov from the camera.

The approach of using the camera.fov to transform pixels into 3D environment sizes is used very often in WebGL implementations. Basically, what it does is ensure that if we do something like this.mesh.scale.x = this.viewport.width;, our mesh will fit the entire screen width, behaving like width: 100%, but in 3D space.

And finally in our update, we’re setting our requestAnimationFrame loop and making sure we keep rendering our scene.

You’ll also notice that we already included the wheel, touchstart, touchmove, touchend, mousedown, mousemove and mouseup events; they will be used to handle user interactions with our application.

Creating a reusable geometry instance

It’s a good practice to keep memory usage low by always reusing the same geometry reference no matter what WebGL library you’re using. To represent all our images, we’re going to use a Plane geometry, so let’s create a new method and store this new geometry inside the this.planeGeometry variable.

import { Renderer, Camera, Transform, Plane } from 'ogl'
 
createGeometry () {
  this.planeGeometry = new Plane(this.gl, {
    heightSegments: 50,
    widthSegments: 100
  })
}

The reason for including heightSegments and widthSegments with these values is to be able to manipulate the vertices in a way that makes the Plane behave like a sheet of paper in the air.

Importing our images using Webpack

Now it’s time to import our images into our application. Since we’re using Webpack in this tutorial, all we need to do to request our images is use import:

import Image1 from 'images/1.jpg'
import Image2 from 'images/2.jpg'
import Image3 from 'images/3.jpg'
import Image4 from 'images/4.jpg'
import Image5 from 'images/5.jpg'
import Image6 from 'images/6.jpg'
import Image7 from 'images/7.jpg'
import Image8 from 'images/8.jpg'
import Image9 from 'images/9.jpg'
import Image10 from 'images/10.jpg'
import Image11 from 'images/11.jpg'
import Image12 from 'images/12.jpg'

Now let’s create the array of images that we want to use in our infinite slider. We’re basically going to reference the variables above inside a createMedias method and use .map to create new instances of the Media class (new Media()), which is going to be our representation of each image of the gallery.

createMedias () {
  this.mediasImages = [
    { image: Image1, text: 'New Synagogue' },
    { image: Image2, text: 'Paro Taktsang' },
    { image: Image3, text: 'Petra' },
    { image: Image4, text: 'Gooderham Building' },
    { image: Image5, text: 'Catherine Palace' },
    { image: Image6, text: 'Sheikh Zayed Mosque' },
    { image: Image7, text: 'Madonna Corona' },
    { image: Image8, text: 'Plaza de Espana' },
    { image: Image9, text: 'Saint Martin' },
    { image: Image10, text: 'Tugela Falls' },
    { image: Image11, text: 'Sintra-Cascais' },
    { image: Image12, text: 'The Prophet\'s Mosque' },
    { image: Image1, text: 'New Synagogue' },
    { image: Image2, text: 'Paro Taktsang' },
    { image: Image3, text: 'Petra' },
    { image: Image4, text: 'Gooderham Building' },
    { image: Image5, text: 'Catherine Palace' },
    { image: Image6, text: 'Sheikh Zayed Mosque' },
    { image: Image7, text: 'Madonna Corona' },
    { image: Image8, text: 'Plaza de Espana' },
    { image: Image9, text: 'Saint Martin' },
    { image: Image10, text: 'Tugela Falls' },
    { image: Image11, text: 'Sintra-Cascais' },
    { image: Image12, text: 'The Prophet\'s Mosque' },
  ]
 
 
  this.medias = this.mediasImages.map(({ image, text }, index) => {
    const media = new Media({
      geometry: this.planeGeometry,
      gl: this.gl,
      image,
      index,
      length: this.mediasImages.length,
      scene: this.scene,
      screen: this.screen,
      text,
      viewport: this.viewport
    })
 
    return media
  })
}

As you’ve probably noticed, we’re passing a bunch of arguments to our Media class; I’ll explain why they’re needed when we start setting up the class in the next section. We’re also duplicating the images to avoid running out of them when making the gallery infinite on very wide screens.

It’s important to also include some specific calls in the onResize and update methods for our this.medias array, because we want the images to be responsive:

onResize () {
  if (this.medias) {
    this.medias.forEach(media => media.onResize({
      screen: this.screen,
      viewport: this.viewport
    }))
  }
}

And also do some real-time manipulations inside the requestAnimationFrame:

update () {
  this.medias.forEach(media => media.update(this.scroll, this.direction))
}

Setting up the Media class

Our Media class is going to use Mesh, Program and Texture classes from OGL to create a 3D plane and attribute a texture to it, which in our case is going to be our images.

In our constructor, we’re going to store all variables that we need and that were passed in the new Media() initialization from index.js:

export default class {
  constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
    this.geometry = geometry
    this.gl = gl
    this.image = image
    this.index = index
    this.length = length
    this.scene = scene
    this.screen = screen
    this.text = text
    this.viewport = viewport
 
    this.createShader()
    this.createMesh()
 
    this.onResize()
  }
}

Explaining a few of these arguments: the geometry is the geometry we’re going to apply to our Mesh class. The this.gl is our GL context, useful for doing WebGL manipulations inside the class. The this.image is the URL of the image. Both this.index and this.length will be used for the position calculations of the mesh. The this.scene is the group to which we’re going to append our mesh. And finally, this.screen and this.viewport are the sizes of the viewport and the environment.

Now it’s time to create the shader that is going to be applied to our Mesh in the createShader method. In OGL, shaders are created with Program:

createShader () {
  const texture = new Texture(this.gl, {
    generateMipmaps: false
  })
 
  this.program = new Program(this.gl, {
    fragment,
    vertex,
    uniforms: {
      tMap: { value: texture },
      uPlaneSizes: { value: [0, 0] },
      uImageSizes: { value: [0, 0] },
      uViewportSizes: { value: [this.viewport.width, this.viewport.height] }
    },
    transparent: true
  })
 
  const image = new Image()
 
  image.src = this.image
  image.onload = _ => {
    texture.image = image
 
    this.program.uniforms.uImageSizes.value = [image.naturalWidth, image.naturalHeight]
  }
}

In the snippet above, we’re basically creating a new Texture() instance, making sure to set generateMipmaps to false so it preserves the quality of the image. Then we’re creating a new Program() instance, which represents a shader composed of fragment and vertex with some uniforms used to manipulate it.

We’re also creating a new Image() instance to preload the image before applying it to texture.image, and updating this.program.uniforms.uImageSizes.value, because it’s going to be used to preserve the aspect ratio of our images.

It’s important to create our fragment and vertex shaders now, so we’re going to create two new files: fragment.glsl and vertex.glsl:

precision highp float;
 
uniform vec2 uImageSizes;
uniform vec2 uPlaneSizes;
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  vec2 ratio = vec2(
    min((uPlaneSizes.x / uPlaneSizes.y) / (uImageSizes.x / uImageSizes.y), 1.0),
    min((uPlaneSizes.y / uPlaneSizes.x) / (uImageSizes.y / uImageSizes.x), 1.0)
  );
 
  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );
 
  gl_FragColor.rgb = texture2D(tMap, uv).rgb;
  gl_FragColor.a = 1.0;
}
And the vertex.glsl file:

precision highp float;
 
attribute vec3 position;
attribute vec2 uv;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
varying vec2 vUv;
 
void main() {
  vUv = uv;
 
  vec3 p = position;
 
  gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
}

And require them at the start of Media.js using Webpack:

import fragment from './fragment.glsl'
import vertex from './vertex.glsl'

Now let’s create our new Mesh() instance in the createMesh method, merging together the geometry and the shader.

createMesh () {
  this.plane = new Mesh(this.gl, {
    geometry: this.geometry,
    program: this.program
  })
 
  this.plane.setParent(this.scene)
}

The Mesh instance is stored in the this.plane variable to be reused in the onResize and update methods, then appended as a child of the this.scene group.

The only thing we have now on the screen is a simple square with our image:

Let’s now implement the onResize method and make sure we’re rendering rectangles:

onResize ({ screen, viewport } = {}) {
  if (screen) {
    this.screen = screen
  }
 
  if (viewport) {
    this.viewport = viewport
 
    this.plane.program.uniforms.uViewportSizes.value = [this.viewport.width, this.viewport.height]
  }
 
  this.scale = this.screen.height / 1500
 
  this.plane.scale.y = this.viewport.height * (900 * this.scale) / this.screen.height
  this.plane.scale.x = this.viewport.width * (700 * this.scale) / this.screen.width
 
  this.plane.program.uniforms.uPlaneSizes.value = [this.plane.scale.x, this.plane.scale.y]
}

The scale.y and scale.x calls are responsible for scaling our element properly, transforming our previous square into a 700×900 rectangle based on the scale.

And the uViewportSizes and uPlaneSizes uniform updates make the image display correctly. That’s basically what gives the image the background-size: cover; behavior, but in the WebGL environment.

Now we need to position all the rectangles in the x axis, making sure we have a small gap between them. To achieve that, we’re going to use this.plane.scale.x, this.padding and this.index variables to do the calculation required to move them:

this.padding = 2
 
this.width = this.plane.scale.x + this.padding
this.widthTotal = this.width * this.length
 
this.x = this.width * this.index

And in the update method, we’re going to set the this.plane.position to these variables:

update () {
  this.plane.position.x = this.x
}

Now you’ve set up all the initial code of Media, which results in the following image:

Including infinite scrolling logic

Now it’s time to make it interesting and include the scrolling logic, so we have an infinite gallery in place when the user scrolls through the page. In our index.js, we’ll do the following updates.

First, let’s include a new object called this.scroll in our constructor with all variables that we will manipulate to do the smooth scrolling:

this.scroll = {
  ease: 0.05,
  current: 0,
  target: 0,
  last: 0
}

Now let’s add the touch and wheel events, so when users interact with the canvas, they’ll be able to move things around:

onTouchDown (event) {
  this.isDown = true
 
  this.scroll.position = this.scroll.current
  this.start = event.touches ? event.touches[0].clientX : event.clientX
}
 
onTouchMove (event) {
  if (!this.isDown) return
 
  const x = event.touches ? event.touches[0].clientX : event.clientX
  const distance = (this.start - x) * 0.01
 
  this.scroll.target = this.scroll.position + distance
}
 
onTouchUp (event) {
  this.isDown = false
}

Then we’ll include the NormalizeWheel library in the onWheel event, so we get the same values across all browsers when the user scrolls:

onWheel (event) {
  const normalized = NormalizeWheel(event)
  const speed = normalized.pixelY
 
  this.scroll.target += speed * 0.005
}

In our update method with requestAnimationFrame, we’ll lerp this.scroll.current towards this.scroll.target to make it smooth, then we’ll pass it to all medias:

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll))
  }
 
  this.scroll.last = this.scroll.current
 
  window.requestAnimationFrame(this.update.bind(this))
}

And now we just update our Media file to use the current scroll value to move the Mesh to the new scroll position:

update (scroll) {
  this.plane.position.x = this.x - scroll.current * 0.1
}

This is the current result we have:

As you’ve noticed, it’s not infinite yet. To achieve that, we need to include some extra code. The first step is including the direction of the scroll in the update method of index.js:

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.scroll.current > this.scroll.last) {
    this.direction = 'right'
  } else {
    this.direction = 'left'
  }
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll, this.direction))
  }
 
  this.scroll.last = this.scroll.current
}

Now in the Media class, you need to include a variable called this.extra in the constructor, and add or subtract the total width of the gallery from it whenever the element moves outside of the screen.

constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
  this.extra = 0
}

update (scroll, direction) {
  this.plane.position.x = this.x - scroll.current * 0.1 - this.extra
    
  const planeOffset = this.plane.scale.x / 2
  const viewportOffset = this.viewport.width
 
  this.isBefore = this.plane.position.x + planeOffset < -viewportOffset
  this.isAfter = this.plane.position.x - planeOffset > viewportOffset
 
  if (direction === 'right' && this.isBefore) {
    this.extra -= this.widthTotal
 
    this.isBefore = false
    this.isAfter = false
  }
 
  if (direction === 'left' && this.isAfter) {
    this.extra += this.widthTotal
 
    this.isBefore = false
    this.isAfter = false
  }
}

That’s it, now we have the infinite scrolling gallery. Pretty cool, right?

Including circular rotation

Now it’s time to include the special flavor of this tutorial: making the infinite scrolling also have a circular rotation. To achieve it, we’ll use Math.cos to change this.mesh.position.y according to the rotation of the element, and a map function to change this.mesh.rotation.z based on the element’s position on the x axis.

First, let’s make it rotate in a smooth way based on the position. The map method is basically a way to translate a value from one range into another. Let’s say, for example, you use map(0.5, 0, 1, -500, 500); it’s going to return 0, because 0.5 is the middle of the 0 to 1 input range, and 0 is the middle of the -500 to 500 output range. Basically, the first three arguments control the output between min2 and max2:

export function map (num, min1, max1, min2, max2, round = false) {
  const num1 = (num - min1) / (max1 - min1)
  const num2 = (num1 * (max2 - min2)) + min2
 
  if (round) return Math.round(num2)
 
  return num2
}

Let’s see it in action by including the following line of code in the Media class:

this.plane.rotation.z = map(this.plane.position.x, -this.widthTotal, this.widthTotal, Math.PI, -Math.PI)

And that’s the result we get so far. It’s already pretty cool because you’re able to see the rotation changing based on the plane position:

Now it’s time to make it look circular. Let’s use Math.cos: we just need a simple calculation with this.plane.position.x / this.widthTotal. This way, the cos will return a normalized value that we can tweak by multiplying it by how much we want to change the y position of the element:

this.plane.position.y = Math.cos((this.plane.position.x / this.widthTotal) * Math.PI) * 75 - 75

Simple as that, we’re just moving it by 75 in environment space based on the position. This gives us the following result, which is exactly what we wanted to achieve:

Snapping to the closest item

Now let’s include a simple snap to the closest item when the user stops scrolling. To achieve that, we need to create a new method called onCheck that does some calculations when the user releases the scroll:

onCheck () {
  const { width } = this.medias[0]
  const itemIndex = Math.round(Math.abs(this.scroll.target) / width)
  const item = width * itemIndex
 
  if (this.scroll.target < 0) {
    this.scroll.target = -item
  } else {
    this.scroll.target = item
  }
}

The result of the item variable is always the center of one of the elements in the gallery, which snaps the user to the corresponding position. For example, if width is 2 and the user stops at scroll.target = -3.2, then itemIndex is Math.round(3.2 / 2) = 2 and the scroll snaps to -4.

For wheel events, we need a debounced version of onCheck, called onCheckDebounce, which we can set up in the constructor using lodash/debounce:

import debounce from 'lodash/debounce'
 
constructor () {
  this.onCheckDebounce = debounce(this.onCheck, 200)
}
 
onWheel (event) {
  this.onCheckDebounce()
}

Now the gallery is always being snapped to the correct entry:

Writing paper shaders

Finally, let’s include the most interesting part of our project: enhancing the shaders a little bit by taking the scroll velocity into account and distorting the vertices of our meshes.

The first step is to include two new uniforms in our this.program declaration from Media class: uSpeed and uTime.

this.program = new Program(this.gl, {
  fragment,
  vertex,
  uniforms: {
    tMap: { value: texture },
    uPlaneSizes: { value: [0, 0] },
    uImageSizes: { value: [0, 0] },
    uViewportSizes: { value: [this.viewport.width, this.viewport.height] },
    uSpeed: { value: 0 },
    uTime: { value: 0 }
  },
  transparent: true
})

Now let’s write some shader code to make our images bend and distort in a very cool way. In your vertex.glsl file, you should include the new uniforms: uniform float uTime and uniform float uSpeed:

uniform float uTime;
uniform float uSpeed;

Then, inside the void main() of your shader, you can manipulate the vertices on the z axis using these two values plus the position stored in the variable p. We’re going to use a sin and a cos to bend our vertices like a sheet of paper, so all you need to do is include the following line:

p.z = (sin(p.x * 4.0 + uTime) * 1.5 + cos(p.y * 2.0 + uTime) * 1.5);

Also, don’t forget to increment uTime in the update() method of Media:

this.program.uniforms.uTime.value += 0.04

Just this line of code outputs a pretty cool paper effect animation:

Including text in WebGL using MSDF fonts

Now let’s include our text inside WebGL. To achieve that, we’re going to use msdf-bmfont to generate our files. You can see how to do that in this GitHub repository, but basically it’s installing the npm dependency and running the command below:

msdf-bmfont -f json -m 1024,1024 -d 4 --pot --smart-size freight.otf

After running it, you should now have a .png and a .json file in the same directory. These are the files we’re going to use in our MSDF implementation in OGL.

Now let’s create a new file called Title and start setting it up. First, let’s create our class and import the shaders and the font files:

import AutoBind from 'auto-bind'
import { Color, Geometry, Mesh, Program, Text, Texture } from 'ogl'
 
import fragment from 'shaders/text-fragment.glsl'
import vertex from 'shaders/text-vertex.glsl'
 
import font from 'fonts/freight.json'
import src from 'fonts/freight.png'
 
export default class {
  constructor ({ gl, plane, renderer, text }) {
    AutoBind(this)
 
    this.gl = gl
    this.plane = plane
    this.renderer = renderer
    this.text = text
 
    this.createShader()
    this.createMesh()
  }
}

Now it’s time to start setting up the MSDF implementation code inside the createShader() method. The first thing we’re going to do is create a new Texture() instance and load the fonts/freight.png file stored in src:

createShader () {
  const texture = new Texture(this.gl, { generateMipmaps: false })
  const textureImage = new Image()
 
  textureImage.src = src
  textureImage.onload = _ => texture.image = textureImage
}

Then we need to set up the fragment shader we’re going to use to render the MSDF text. Because MSDF can be optimized in WebGL 2.0, we’re going to use this.renderer.isWebgl2 from OGL to check whether it’s supported or not and declare different shaders based on that, so we’ll have vertex300, fragment300, vertex100 and fragment100:

createShader () {
  const vertex100 = `${vertex}`
 
  const fragment100 = `
    #extension GL_OES_standard_derivatives : enable
 
    precision highp float;
 
    ${fragment}
  `
 
  const vertex300 = `#version 300 es
 
    #define attribute in
    #define varying out
 
    ${vertex}
  `
 
  const fragment300 = `#version 300 es
 
    precision highp float;
 
    #define varying in
    #define texture2D texture
    #define gl_FragColor FragColor
 
    out vec4 FragColor;
 
    ${fragment}
  `
 
  let fragmentShader = fragment100
  let vertexShader = vertex100
 
  if (this.renderer.isWebgl2) {
    fragmentShader = fragment300
    vertexShader = vertex300
  }
 
  this.program = new Program(this.gl, {
    cullFace: null,
    depthTest: false,
    depthWrite: false,
    transparent: true,
    fragment: fragmentShader,
    vertex: vertexShader,
    uniforms: {
      uColor: { value: new Color('#545050') },
      tMap: { value: texture }
    }
  })
}

As you’ve probably noticed, we’re prepending fragment and vertex with a different setup based on the renderer’s WebGL version. Let’s also create our text-fragment.glsl and text-vertex.glsl files:

uniform vec3 uColor;
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  vec3 color = texture2D(tMap, vUv).rgb;
 
  float signed = max(min(color.r, color.g), min(max(color.r, color.g), color.b)) - 0.5;
  float d = fwidth(signed);
  float alpha = smoothstep(-d, d, signed);
 
  if (alpha < 0.02) discard;
 
  gl_FragColor = vec4(uColor, alpha);
}
And the text-vertex.glsl file:

attribute vec2 uv;
attribute vec3 position;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
varying vec2 vUv;
 
void main() {
  vUv = uv;
 
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

Finally, let’s create the geometry of our MSDF font implementation in the createMesh() method. For that, we’ll use a new Text() instance from OGL and then apply the buffers generated from it to a new Geometry() instance:

createMesh () {
  const text = new Text({
    align: 'center',
    font,
    letterSpacing: -0.05,
    size: 0.08,
    text: this.text,
    wordSpacing: 0,
  })
 
  const geometry = new Geometry(this.gl, {
    position: { size: 3, data: text.buffers.position },
    uv: { size: 2, data: text.buffers.uv },
    id: { size: 1, data: text.buffers.id },
    index: { data: text.buffers.index }
  })
 
  geometry.computeBoundingBox()
 
  this.mesh = new Mesh(this.gl, { geometry, program: this.program })
  this.mesh.position.y = -this.plane.scale.y * 0.5 - 0.085
  this.mesh.setParent(this.plane)
}

Now let’s apply our brand new titles in the Media class. We’re going to create a new method called createTitle() and call it in the constructor:

constructor ({ geometry, gl, image, index, length, renderer, scene, screen, text, viewport }) {
  this.createTitle()
}

createTitle () {
  this.title = new Title({
    gl: this.gl,
    plane: this.plane,
    renderer: this.renderer,
    text: this.text,
  })
}

Simple as that, we’re just including a new Title() instance inside our Media class. This will output the following result:

One of the best things about rendering text inside WebGL is reducing the overhead of the calculations required by the browser when animating text to the right position. With the DOM approach, you’ll usually see a bit of a performance impact, because browsers need to recalculate DOM sections when translating the text and check composite layers.

For the purpose of this demo, we also included a new Number() class implementation that is responsible for showing the current index the user is seeing. You can check how it’s implemented in the source code, but it’s basically the same implementation as the Title class, with the only difference being that it loads a different font style:

Including background blocks

To finalize the demo, let’s implement some blocks in the background that will move on the x and y axis to enhance the depth effect:

To achieve this effect, we’re going to create a new Background class. Inside of it, we’ll initialize some new Plane() geometries in new Mesh() instances with random sizes and positions, changing the scale and position of the meshes inside a for loop:

import { Color, Mesh, Plane, Program } from 'ogl'
 
import fragment from 'shaders/background-fragment.glsl'
import vertex from 'shaders/background-vertex.glsl'
 
import { random } from 'utils/math'
 
export default class {
  constructor ({ gl, scene, viewport }) {
    this.gl = gl
    this.scene = scene
    this.viewport = viewport
 
    const geometry = new Plane(this.gl)
    const program = new Program(this.gl, {
      vertex,
      fragment,
      uniforms: {
        uColor: { value: new Color('#c4c3b6') }
      },
      transparent: true
    })
 
    this.meshes = []
 
    for (let i = 0; i < 50; i++) {
      let mesh = new Mesh(this.gl, {
        geometry,
        program,
      })
 
      const scale = random(0.75, 1)
 
      mesh.scale.x = 1.6 * scale
      mesh.scale.y = 0.9 * scale
 
      mesh.speed = random(0.75, 1)
 
      mesh.xExtra = 0
 
      mesh.x = mesh.position.x = random(-this.viewport.width * 0.5, this.viewport.width * 0.5)
      mesh.y = mesh.position.y = random(-this.viewport.height * 0.5, this.viewport.height * 0.5)
 
      this.meshes.push(mesh)
 
      this.scene.addChild(mesh)
    }
  }
}

Then after that we just need to apply endless scrolling logic on them as well, following the same directional validation we have in the Media class:

update (scroll, direction) {
  this.meshes.forEach(mesh => {
    mesh.position.x = mesh.x - scroll.current * mesh.speed - mesh.xExtra
 
    const viewportOffset = this.viewport.width * 0.5
    const widthTotal = this.viewport.width + mesh.scale.x
 
    mesh.isBefore = mesh.position.x < -viewportOffset
    mesh.isAfter = mesh.position.x > viewportOffset
 
    if (direction === 'right' && mesh.isBefore) {
      mesh.xExtra -= widthTotal
 
      mesh.isBefore = false
      mesh.isAfter = false
    }
 
    if (direction === 'left' && mesh.isAfter) {
      mesh.xExtra += widthTotal
 
      mesh.isBefore = false
      mesh.isAfter = false
    }
 
    mesh.position.y += 0.05 * mesh.speed
 
    if (mesh.position.y > this.viewport.height * 0.5 + mesh.scale.y) {
      mesh.position.y -= this.viewport.height + mesh.scale.y
    }
  })
}

Simple as that! Now we have the blocks in the background as well, finalizing the code of our demo!

I hope this tutorial was useful to you, and don’t forget to comment if you have any questions!


Creating an Infinite Auto-Scrolling Gallery using WebGL with OGL and GLSL Shaders

Hello everyone, introducing myself a little bit first, I’m Luis Henrique Bizarro, I’m a Senior Creative Developer at Active Theory based in São Paulo, Brazil. It’s always a pleasure to me having the opportunity to collaborate with Codrops to help other developers learn new things, so I hope everyone enjoys this tutorial!

In this tutorial I’ll explain how to create an auto-scrolling infinite image gallery. The image grid is also scrollable by user interaction, making it an interesting design element to showcase works. It’s based on this great animation seen on Oneshot.finance made by Jesper Landberg.

I’ve been using the technique of styling images first with HTML + CSS and then creating an abstraction of these elements inside WebGL, using some camera and viewport calculations, on multiple websites, so this is the approach we’re going to use in this tutorial.

The good thing about this implementation is that it can be reused across any WebGL library, so if you’re more familiar with Three.js or Babylon.js than OGL, you’ll also be able to achieve the same results using similar code when it comes to shading and scaling the plane meshes.

So let’s get into it!

Implementing our HTML markup

The first step is implementing our HTML markup. We’re going to use <figure> and <img> elements, nothing special here, just the standard:

<div class="demo-1__gallery">
  <figure class="demo-1__gallery__figure">
    <img class="demo-1__gallery__image" src="images/demo-1/1.jpg">
  </figure>

  <!-- Repeating the same markup until 12.jpg. -->
</div>

Setting our CSS styles

The second step is styling our elements using CSS. One of the first things I do on a website is define the font-size of the html element, because I use rem to help with the responsive breakpoints.

This comes in handy if you’re doing creative websites that only require two or three different breakpoints, so I highly recommend starting using it if you haven’t adopted rem yet.

One thing I’m also using is calc() with the size of the designs. In this tutorial we’re going to use 1920 as our main width, scaling the font-size relative to 100vw. This results in a font-size of 10px on a 1920px wide screen and 5px on a 960px wide screen, for example:

html {
  font-size: calc(100vw / 1920 * 10);
}

Now let’s style our grid of images. We want to freely place our images across the screen using absolute positioning, so we’re just going to set the height, width and left/top styles across all our demo-1 classes:

.demo-1__gallery {
  height: 295rem;
  position: relative;
  visibility: hidden;
}

.demo-1__gallery__figure {
  position: absolute;
 
  &:nth-child(1) {
    height: 40rem;
    width: 70rem;
  }
 
  &:nth-child(2) {
    height: 50rem;
    left: 85rem;
    top: 30rem;
    width: 40rem;
  }
 
  &:nth-child(3) {
    height: 50rem;
    left: 15rem;
    top: 60rem;
    width: 60rem;
  }
 
  &:nth-child(4) {
    height: 30rem;
    right: 0;
    top: 10rem;
    width: 50rem;
  }
 
  &:nth-child(5) {
    height: 60rem;
    right: 15rem;
    top: 55rem;
    width: 40rem;
  }
 
  &:nth-child(6) {
    height: 75rem;
    left: 5rem;
    top: 120rem;
    width: 57.5rem;
  }
 
  &:nth-child(7) {
    height: 70rem;
    right: 0;
    top: 130rem;
    width: 50rem;
  }
 
  &:nth-child(8) {
    height: 50rem;
    left: 85rem;
    top: 95rem;
    width: 40rem;
  }
 
  &:nth-child(9) {
    height: 65rem;
    left: 75rem;
    top: 155rem;
    width: 50rem;
  }
 
  &:nth-child(10) {
    height: 43rem;
    right: 0;
    top: 215rem;
    width: 30rem;
  }
 
  &:nth-child(11) {
    height: 50rem;
    left: 70rem;
    top: 235rem;
    width: 80rem;
  }
 
  &:nth-child(12) {
    left: 0;
    top: 210rem;
    height: 70rem;
    width: 50rem;
  }
}
 
.demo-1__gallery__image {
  height: 100%;
  left: 0;
  object-fit: cover;
  position: absolute;
  top: 0;
  width: 100%;
}

Note that we’re hiding the visibility of our HTML, because it’s not going to be visible to users since we’re going to load these images inside the <canvas> element. But below you can find a screenshot of what the result will look like.

Creating our OGL 3D environment

Now it’s time to get started with the WebGL implementation using OGL. First let’s create an App class that is going to be the entry point of our demo and inside of it, let’s also create the initial methods: createRenderer, createCamera, createScene, onResize and our requestAnimationFrame loop with update.

import { Renderer, Camera, Transform } from 'ogl'

class App {
  constructor () {
    this.createRenderer()
    this.createCamera()
    this.createScene()
 
    this.onResize()
 
    this.update()
 
    this.addEventListeners()
  }
 
  createRenderer () {
    this.renderer = new Renderer({
      alpha: true
    })
 
    this.gl = this.renderer.gl
 
    document.body.appendChild(this.gl.canvas)
  }
 
  createCamera () {
    this.camera = new Camera(this.gl)
    this.camera.fov = 45
    this.camera.position.z = 5
  }
 
  createScene () {
    this.scene = new Transform()
  }
 
  /**
   * Wheel.
   */
  onWheel (event) {
 
  }
 
  /**
   * Resize.
   */
  onResize () {
    this.screen = {
      height: window.innerHeight,
      width: window.innerWidth
    }
 
    this.renderer.setSize(this.screen.width, this.screen.height)
 
    this.camera.perspective({
      aspect: this.gl.canvas.width / this.gl.canvas.height
    })
 
    const fov = this.camera.fov * (Math.PI / 180)
    const height = 2 * Math.tan(fov / 2) * this.camera.position.z
    const width = height * this.camera.aspect
 
    this.viewport = {
      height,
      width
    }
  }
 
  /**
   * Update.
   */
  update () {
    this.renderer.render({
      scene: this.scene,
      camera: this.camera
    })
 
    window.requestAnimationFrame(this.update.bind(this))
  }
 
  /**
   * Listeners.
   */
  addEventListeners () {
    window.addEventListener('resize', this.onResize.bind(this))
 
    window.addEventListener('mousewheel', this.onWheel.bind(this))
    window.addEventListener('wheel', this.onWheel.bind(this))
  }
}

new App()

Explaining some parts of our App.js file

In our createRenderer method, we’re initializing one renderer with alpha enabled, storing our GL context (this.renderer.gl) reference in the this.gl variable and appending our <canvas> element to our document.body.

In our createCamera method, we’re just creating a new Camera and setting some of its attributes: fov and its z position.

In our createScene method, we’re using the Transform class, that is the representation of a new scene that is going to contain all our planes that represent our images in the WebGL environment.

The onResize method is the most important part of our initial setup. It’s responsible for three different things:

  1. Making sure we’re always resizing the <canvas> element with the correct viewport sizes.
  2. Updating our this.camera perspective by dividing the width by the height of the viewport.
  3. Storing in the this.viewport variable the values that will help to transform pixels into 3D environment sizes, using the fov from the camera.

The approach of using the camera.fov to transform pixels into 3D environment sizes is used very often in WebGL implementations. Basically, what it does is ensure that if we do something like this.mesh.scale.x = this.viewport.width;, our mesh will fit the entire screen width, behaving like width: 100%, but in 3D space.

And finally in our update, we’re setting our requestAnimationFrame loop and making sure we keep rendering our scene.

Create our reusable geometry instance

It’s a good practice to keep memory usage low by always reusing the same geometry reference no matter what WebGL library you’re using. To represent all our images, we’re going to use a Plane geometry, so let’s create a new method and store this new geometry inside the this.planeGeometry variable.

import { Renderer, Camera, Transform, Plane } from 'ogl'
 
createGeometry () {
  this.planeGeometry = new Plane(this.gl)
}

Select all images and create a new class for each one

Now it’s time to use document.querySelector to select all our images and create one reusable class that is going to represent our images. (We’re going to create a single Media.js file later.)

createMedias () {
  this.mediasElements = document.querySelectorAll('.demo-1__gallery__figure')
  this.medias = Array.from(this.mediasElements).map(element => {
    let media = new Media({
      element,
      geometry: this.planeGeometry,
      gl: this.gl,
      scene: this.scene,
      screen: this.screen,
      viewport: this.viewport
    })
 
    return media
  })
}

As you can see, we’re just selecting all .demo-1__gallery__figure elements, going through them and generating an array of new Media instances, stored in this.medias.

Now it’s important to start attaching this array in important pieces of our setup code.

Let’s first include all our media inside the method onResize and also call media.onResize for each one of these new instances:

if (this.medias) {
  this.medias.forEach(media => media.onResize({
    screen: this.screen,
    viewport: this.viewport
  }))
}

And inside our update method, we’re going to call media.update() as well:

if (this.medias) {
  this.medias.forEach(media => media.update())
}

Setting up our Media.js file and class

Our Media class is going to use Mesh, Program and Texture classes from OGL to create a 3D plane and attribute a texture to it, which in our case is going to be our images.

In our constructor, we’re going to store all variables that we need and that were passed in the new Media() initialization from index.js:

import { Mesh, Program, Texture } from 'ogl'
 
import fragment from 'shaders/fragment.glsl'
import vertex from 'shaders/vertex.glsl'
 
export default class {
  constructor ({ element, geometry, gl, scene, screen, viewport }) {
    this.element = element
    this.image = this.element.querySelector('img')
 
    this.geometry = geometry
    this.gl = gl
    this.scene = scene
    this.screen = screen
    this.viewport = viewport
 
    this.createMesh()
    this.createBounds()
 
    this.onResize()
  }
}

In our createMesh method, we’ll load the image texture using the this.image.src attribute, then create a new Program, which is basically a representation of the material we’re applying to our Mesh. So our method looks like this:

createMesh () {
  const image = new Image()
  const texture = new Texture(this.gl)

  image.src = this.image.src
  image.onload = _ => {
    texture.image = image
  }

  const program = new Program(this.gl, {
    fragment,
    vertex,
    uniforms: {
      tMap: { value: texture },
      uScreenSizes: { value: [0, 0] },
      uImageSizes: { value: [0, 0] }
    },
    transparent: true
  })

  this.plane = new Mesh(this.gl, {
    geometry: this.geometry,
    program
  })

  this.plane.setParent(this.scene)
}

Looks pretty simple, right? After we generate a new Mesh, we’re setting the plane as a child of this.scene, so we’re including our mesh inside our main scene.

As you’ve probably noticed, our Program receives fragment and vertex. These both represent the shaders we’re going to use on our planes. For now, we’re just using simple implementations of both.

In our vertex.glsl file we’re getting the uv and position attributes, and making sure we’re rendering our planes in the right 3D world position.

attribute vec2 uv;
attribute vec3 position;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
varying vec2 vUv;
 
void main() {
  vUv = uv;
 
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

In our fragment.glsl file, we’re receiving a tMap texture, as you can see in the tMap: { value: texture } declaration, and rendering it in our plane geometry:

precision highp float;
 
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  gl_FragColor.rgb = texture2D(tMap, vUv).rgb;
  gl_FragColor.a = 1.0;
}

The createBounds method is important to make sure we’re positioning and scaling the planes at the correct DOM element positions. It basically calls this.element.getBoundingClientRect() to get the right position of our planes, and then uses these values to calculate the 3D values of our plane.

createBounds () {
  this.bounds = this.element.getBoundingClientRect()

  this.updateScale()
  this.updateX()
  this.updateY()
}

updateScale () {
  this.plane.scale.x = this.viewport.width * this.bounds.width / this.screen.width
  this.plane.scale.y = this.viewport.height * this.bounds.height / this.screen.height
}

updateX (x = 0) {
  this.plane.position.x = -(this.viewport.width / 2) + (this.plane.scale.x / 2) + ((this.bounds.left - x) / this.screen.width) * this.viewport.width
}

updateY (y = 0) {
  this.plane.position.y = (this.viewport.height / 2) - (this.plane.scale.y / 2) - ((this.bounds.top - y) / this.screen.height) * this.viewport.height
}

update (y) {
  this.updateScale()
  this.updateX()
  this.updateY(y)
}

As you’ve probably noticed, the calculations for scale.x and scale.y are going to stretch our plane to make it the same width and height as the <img> elements. And position.x and position.y take the offset of the element and translate our planes to the correct x and y positions in 3D.

And let’s not forget our onResize method, which basically calls createBounds again to refresh our getBoundingClientRect values and make sure we keep our 3D implementation responsive as well.

onResize (sizes) {
  if (sizes) {
    const { screen, viewport } = sizes
 
    if (screen) this.screen = screen
    if (viewport) this.viewport = viewport
  }
 
  this.createBounds()
}

This is the result we’ve got so far.

Implement cover behavior in fragment shaders

As you’ve probably noticed, our images are stretched. This happens because we need to make the proper calculations in the fragment shader in order to get a behavior like object-fit: cover; or background-size: cover; in WebGL.

I like to pass the images’ real sizes and do some ratio calculations inside the fragment shader, so let’s adapt our code to this approach. In our Program, we’re going to pass two new uniforms called uPlaneSizes and uImageSizes:

const program = new Program(this.gl, {
  fragment,
  vertex,
  uniforms: {
    tMap: { value: texture },
    uPlaneSizes: { value: [0, 0] },
    uImageSizes: { value: [0, 0] }
  },
  transparent: true
})

Now we need to update our fragment.glsl and use these values to calculate our image ratios:

precision highp float;
 
uniform vec2 uImageSizes;
uniform vec2 uPlaneSizes;
uniform sampler2D tMap;
 
varying vec2 vUv;
 
void main() {
  vec2 ratio = vec2(
    min((uPlaneSizes.x / uPlaneSizes.y) / (uImageSizes.x / uImageSizes.y), 1.0),
    min((uPlaneSizes.y / uPlaneSizes.x) / (uImageSizes.y / uImageSizes.x), 1.0)
  );
 
  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );
 
  gl_FragColor.rgb = texture2D(tMap, uv).rgb;
  gl_FragColor.a = 1.0;
}

And we also need to update our image.onload handler to pass naturalWidth and naturalHeight to uImageSizes:

image.onload = _ => {
  program.uniforms.uImageSizes.value = [image.naturalWidth, image.naturalHeight]
  texture.image = image
}

And update createBounds to set the uPlaneSizes uniform:

createBounds () {
  this.bounds = this.element.getBoundingClientRect()
 
  this.updateScale()
  this.updateX()
  this.updateY()
 
  this.plane.program.uniforms.uPlaneSizes.value = [this.plane.scale.x, this.plane.scale.y]
}

That’s it! Now we have properly scaled images.

Implementing smooth scrolling

Before we implement the infinite logic, it’s good to make scrolling work properly first. In our setup code, we included an onWheel method, which is going to be used to lerp some variables and make our scroll butter smooth.

In our constructor from index.js, let’s create the this.scroll object with these variables:

this.scroll = {
  ease: 0.05,
  current: 0,
  target: 0,
}

Now let’s update our onWheel implementation. When working with wheel events, it’s always important to normalize them, because they behave differently depending on the browser. I’ve been using the normalize-wheel library to help with this:

import NormalizeWheel from 'normalize-wheel'

onWheel (event) {
  const normalized = NormalizeWheel(event)
  const speed = normalized.pixelY
 
  this.scroll.target += speed * 0.5
}

Let’s also create our lerp utility function inside the file utils/math.js:

export function lerp (p1, p2, t) {
  return p1 + (p2 - p1) * t
}

And now we just need to lerp from this.scroll.current to this.scroll.target inside the update method, and finally pass the value to the media.update() methods:

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll.current))
  }
}

After that we already have a result like this.

Making our smooth scrolling infinite

The approach to making infinite scrolling logic is basically repeating the same grid over and over while the user keeps scrolling your page. Since the user can scroll up or down, you also need to take into consideration which direction is being scrolled, so overall the algorithm should work this way:

  • If you’re scrolling down, your elements move up; when your first element isn’t on the screen anymore, you should move it to the end of the list.
  • If you’re scrolling up, your elements move down; when your last element isn’t on the screen anymore, you should move it to the start of the list.

To explain it in a visual way, let’s say we’re scrolling down: the red area is our viewport and the blue elements are not in the viewport anymore.

When we are in this state, we just need to move the blue elements to the end of our gallery grid, moving them by the entire height of the gallery: 295rem.

Let’s include the logic for it then. First, we need to create a new variable called this.scroll.last to store the last value of our scroll; it is going to be checked to give us the up or down direction:

this.scroll = {
  ease: 0.05,
  current: 0,
  target: 0,
  last: 0
}

In our update method, we need to include the following validations and pass this.direction to our this.medias elements:

update () {
  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)
 
  if (this.scroll.current > this.scroll.last) {
    this.direction = 'down'
  } else if (this.scroll.current < this.scroll.last) {
    this.direction = 'up'
  }
 
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll.current, this.direction))
  }
 
  this.renderer.render({
    scene: this.scene,
    camera: this.camera
  })
 
  this.scroll.last = this.scroll.current
 
  window.requestAnimationFrame(this.update.bind(this))
}

Then we need to get the total gallery height and transform it into 3D dimensions. Let’s include a querySelector for .demo-1__gallery and call the createGallery method in our index.js constructor:

createGallery () {
  this.gallery = document.querySelector('.demo-1__gallery')
}
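For reference, the call could sit alongside the rest of our setup, something like this sketch (the surrounding calls are whatever your constructor already contains):

constructor () {
  // ...renderer, camera, scene and scroll setup...

  this.createGallery()

  // ...media creation, event listeners and the update loop...
}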

It’s time to do the real calculations using this selector, so in our onResize method, we need to include the following lines:

this.galleryBounds = this.gallery.getBoundingClientRect()
this.galleryHeight = this.viewport.height * this.galleryBounds.height / this.screen.height
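As a quick sanity check: if the gallery element is 2950px tall, the screen is 1000px tall and the 3D viewport is 10 units high, this.galleryHeight ends up as 10 * 2950 / 1000 = 29.5 units.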

The this.galleryHeight variable is now storing the 3D size of the entire grid. Now we need to pass it to both the onResize and the new Media() calls:

if (this.medias) {
  this.medias.forEach(media => media.onResize({
    height: this.galleryHeight,
    screen: this.screen,
    viewport: this.viewport
  }))
}
this.medias = Array.from(this.mediasElements).map(element => {
  let media = new Media({
    element,
    geometry: this.planeGeometry,
    gl: this.gl,
    height: this.galleryHeight,
    scene: this.scene,
    screen: this.screen,
    viewport: this.viewport
  })
 
  return media
})

And then inside our Media class, we need to store the height in the constructor and in the onResize method as well:

constructor ({ element, geometry, gl, height, scene, screen, viewport }) {
  this.height = height
}
onResize (sizes) {
  if (sizes) {
    const { height, screen, viewport } = sizes
 
    if (height) this.height = height
    if (screen) this.screen = screen
    if (viewport) this.viewport = viewport
  }
}

Now we’re going to include the logic to move our elements based on their viewport position, just like our visual representation of the red and blue rectangles.

If the idea is to keep adding a value based on the scroll and element position, we can achieve this by creating a new variable called this.extra = 0. It’s going to store how much we need to add to (or subtract from) our media’s position, so let’s include it in our constructor:

constructor ({ element, geometry, gl, height, scene, screen, viewport }) {
    this.extra = 0
}

And let’s reset it when the browser is resized, to keep all values consistent so the layout doesn’t break when users resize their viewport:

onResize (sizes) {
  this.extra = 0
}

And in our updateY method, we’re going to include it as well:

updateY (y = 0) {
  this.plane.position.y = ((this.viewport.height / 2) - (this.plane.scale.y / 2) - ((this.bounds.top - y) / this.screen.height) * this.viewport.height) - this.extra
}

Finally, the only thing left now is updating the this.extra variable inside our update method, making sure we’re adding or subtracting the this.height depending on the direction.

update (y, direction) {
  this.updateY(y)

  const planeOffset = this.plane.scale.y / 2
  const viewportOffset = this.viewport.height / 2

  // the plane is fully below the visible area...
  this.isBefore = this.plane.position.y + planeOffset < -viewportOffset

  // ...or fully above it
  this.isAfter = this.plane.position.y - planeOffset > viewportOffset

  if (direction === 'up' && this.isBefore) {
    this.extra -= this.height

    this.isBefore = false
    this.isAfter = false
  }

  if (direction === 'down' && this.isAfter) {
    this.extra += this.height

    this.isBefore = false
    this.isAfter = false
  }
}

Since we’re working in 3D space, we’re dealing with Cartesian coordinates where the origin sits at the center of the viewport, which is why you’ll notice we divide most values by two (e.g. this.viewport.height / 2). That’s also why the this.isBefore and this.isAfter checks need different signs: with a viewport 10 units high, for example, the visible y range runs from -5 to 5, so a plane with a scale.y of 2 is fully below the view once this.plane.position.y + 1 < -5.

Awesome, we’re almost finished with our demo! That’s how it looks now. Pretty cool to have it endless, right?

Including touch events

Let’s also include touch events, so this demo can be more responsive to user interactions! In our addEventListeners method, let’s include some window.addEventListener calls:

window.addEventListener('mousedown', this.onTouchDown.bind(this))
window.addEventListener('mousemove', this.onTouchMove.bind(this))
window.addEventListener('mouseup', this.onTouchUp.bind(this))
 
window.addEventListener('touchstart', this.onTouchDown.bind(this))
window.addEventListener('touchmove', this.onTouchMove.bind(this))
window.addEventListener('touchend', this.onTouchUp.bind(this))

Then we just need to implement some simple touch event calculations with three methods: onTouchDown, onTouchMove and onTouchUp.

onTouchDown (event) {
  this.isDown = true

  this.scroll.position = this.scroll.current
  this.start = event.touches ? event.touches[0].clientY : event.clientY
}

onTouchMove (event) {
  if (!this.isDown) return

  const y = event.touches ? event.touches[0].clientY : event.clientY
  const distance = (this.start - y) * 2

  this.scroll.target = this.scroll.position + distance
}

onTouchUp (event) {
  this.isDown = false
}

Done! Now we also have touch events support enabled for our gallery.

Implementing direction-aware auto scrolling

Let’s also implement auto scrolling to make our interaction even better. In order to achieve that we just need to create a new variable that will store our speed based on the direction the user is scrolling.

So let’s create a variable called this.speed in our index.js file:

constructor () {
  this.speed = 2
}

This variable is going to be changed by the down and up validations we have in our update loop: if the user is scrolling down, we keep the speed at 2; if the user is scrolling up, we replace it with -2. And before the lerp, we add this.speed to the this.scroll.target variable:

update () {
  this.scroll.target += this.speed

  this.scroll.current = lerp(this.scroll.current, this.scroll.target, this.scroll.ease)

  if (this.scroll.current > this.scroll.last) {
    this.direction = 'down'
    this.speed = 2
  } else if (this.scroll.current < this.scroll.last) {
    this.direction = 'up'
    this.speed = -2
  }
}

Implementing distortion shaders

Now let’s make everything even more interesting. It’s time to play a little bit with shaders and distort our planes while the user is scrolling through the page.

First, let’s update the update method from index.js, making sure we expose both the current and last scroll values to all our medias; we’re going to do a simple calculation with them.

update () {
  if (this.medias) {
    this.medias.forEach(media => media.update(this.scroll, this.direction))
  }
}

And now let’s create two new uniforms for our Program shader, uStrength and uViewportSizes, and pass them:

const program = new Program(this.gl, {
  fragment,
  vertex,
  uniforms: {
    tMap: { value: texture },
    uPlaneSizes: { value: [0, 0] },
    uImageSizes: { value: [0, 0] },
    uViewportSizes: { value: [this.viewport.width, this.viewport.height] },
    uStrength: { value: 0 }
  },
  transparent: true
})

As you can probably notice, we need to set uViewportSizes in our onResize method as well, since this.viewport changes on resize. To keep this.viewport.width and this.viewport.height up to date, include the following lines in onResize:

onResize (sizes) {
  if (sizes) {
    const { height, screen, viewport } = sizes
 
    if (height) this.height = height
    if (screen) this.screen = screen
    if (viewport) {
      this.viewport = viewport
 
this.plane.program.uniforms.uViewportSizes.value = [this.viewport.width, this.viewport.height]
    }
  }
}

Remember the this.scroll update we made in index.js? Now it’s time to use a small trick to generate a speed value inside Media.js:

update (y, direction) {
  this.updateY(y.current)
 
  this.plane.program.uniforms.uStrength.value = ((y.current - y.last) / this.screen.width) * 10
}

We’re basically taking the difference between the current and last values, which gives us a kind of scrolling “speed”, and dividing it by this.screen.width to keep the effect behaving consistently regardless of the width of the screen. For example, a 192px scroll delta on a 1920px-wide screen yields a uStrength of 1.

Finally, it’s time to play a little bit with our vertex shader. We’re going to bend our planes while the user is scrolling through the page. So let’s update our vertex.glsl file with this new code:

#define PI 3.1415926535897932384626433832795
 
precision highp float;
precision highp int;
 
attribute vec3 position;
attribute vec2 uv;
 
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
 
uniform float uStrength;
uniform vec2 uViewportSizes;
 
varying vec2 vUv;
 
void main() {
  vec4 newPosition = modelViewMatrix * vec4(position, 1.0);
 
  newPosition.z += sin(newPosition.y / uViewportSizes.y * PI + PI / 2.0) * -uStrength;
 
  vUv = uv;
 
  gl_Position = projectionMatrix * newPosition;
}

That’s it! Now we’re also bending our images, creating a unique type of effect!

Explaining a little bit of the shader logic: the newPosition.z line takes the plane’s current position.y, divides it by uViewportSizes.y (the height of the viewport) and multiplies the result by the PI we defined at the very top of the shader file. Adding PI / 2.0 shifts the sine wave so it peaks at the vertical center of the screen and falls to zero at the top and bottom edges. We then multiply by uStrength, the strength of the bending, which is tied to our scrolling values, making the plane bend based on how fast you scroll the demo.

That’s the final result of our demo! I hope this tutorial was useful to you and don’t forget to comment if you have any questions!

Photography used in the demos by Planete Elevene and Jayson Hinrichsen.

The post Creating an Infinite Auto-Scrolling Gallery using WebGL with OGL and GLSL Shaders appeared first on Codrops.

Mouse Flowmap Deformation with OGL

Following Nathan’s article on how to create mouse trails using his minimal WebGL framework OGL, we’d now like to show some demos based on his mouse flowmap example. The concept of this effect is to deform an image based on the mouse position and velocity.

In this article, we’re not going to explain how to initialise and use OGL. Nathan’s article, Crafting Stylised Mouse Trails With OGL, covers all of this, so we’ll focus instead on the creation of this effect and assume you already have your OGL setup ready with a renderer and a plane with its texture.

Getting started

In order to deform our image we need to use one of the many extras provided by OGL:

Flowmap: import { Flowmap } from "./src/Extras.js";
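To see it in context, driving the flowmap each frame looks roughly like this (a sketch based on Nathan’s example; mouse and velocity are Vec2s you’d update from pointer events, and mesh is the plane from your setup):

const flowmap = new Flowmap(gl);

// expose the flowmap's render target texture to the plane's shader as tFlow
program.uniforms.tFlow = flowmap.uniform;

requestAnimationFrame(update);
function update() {
    requestAnimationFrame(update);

    // feed in the normalised mouse position and its velocity,
    // then let the flowmap render its internal texture before drawing the scene
    flowmap.mouse.copy(mouse);
    flowmap.velocity.copy(velocity);
    flowmap.update();

    renderer.render({ scene: mesh });
}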

Once everything is set up according to this example you should be seeing this:

In your shader you can alternate between the two textures to understand the concept behind this effect.

// R and G values are velocity in the x and y direction
// B value is the velocity length
vec3 flow = texture2D(tFlow, vUv).rgb;
// Use flow to adjust the uv lookup of a texture
vec2 uv = gl_FragCoord.xy / 600.0;
uv += flow.xy * 0.05;
vec3 tex = texture2D(tWater, uv).rgb;
gl_FragColor = vec4(tex.rgb, 1.0);

If you replace tex with flow in the last line, here’s the output:

gl_FragColor = vec4(flow.rgb, 1.0);

Here’s a mix of the two textures to visualize the concept:

Now that you know how to create and use this effect you can play with the different parameters of the flowmap while instantiating it.

Example:

const flowmap = new Flowmap(gl, { falloff: 0.2, dissipation: 0.9 });

Parameters available:

  • size (default: 128): size of the render targets
  • falloff (default: 0.3): size of the stamp, as a percentage of the size
  • alpha (default: 1): opacity of the stamp
  • dissipation (default: 0.98): how quickly the stamp fades; the closer to 1, the slower

Mouse Flowmap Deformation with OGL was written by Robin Delaporte and published on Codrops.

Crafting Stylised Mouse Trails With OGL

Mmm… swirly goodness…
Hey! Snap out of it! Ok. Now, in this tutorial we’re going to have some fun creating this pretty crazy little effect by dipping our toes into WebGL. The water is warm, I promise!

What’s OGL?

OGL is a WebGL library that is going to save us having to write a bunch of pretty unfriendly WebGL code directly. It has been my baby for the last year or so – I’ve been busy adding a bunch of examples to try and give people a kick-start with practical references.

By design, it’s tightly coupled to WebGL with only a small amount of abstraction – meaning it’s very lightweight. For example, the OGL classes required for this experiment are under 13kb gzipped – and more than half of them are just maths.

You got any more of them mouse trails?

Yes! Moving on, sorry.

Mouse trails (or any sort of trails) are just innately fun to interact with. The little touch of physics that it adds can really make an animation feel incredibly reactive and tangible, to the point where you can be certain that a good percentage of your users’ time will be spent playing with the effect, before consuming any of your site’s content… sorry not sorry.

Of course, they can be achieved using a whole range of techniques. Example time.

Starting with a really clever, recent one done using the DOM (html) and CSS. I especially love the difference-blend style they’ve added to the trail. The crux of the effect is a number of diminishing circles that follow one another, giving the effect of a tapered line.
Developed by Luca Mariotti.

Here’s an effective take on one, made using the 2D Canvas API. This one draws a circle at the head of each line every frame, while simultaneously the whole scene slowly disappears.
By Hakim El Hattab.

And here’s one used as a main game mechanic, made using SVG by yours truly – a few trips-around-the-sun ago… It’s a dynamic bezier curve whose start, middle and end positions are updated every frame.

So each of these previous (viable) options uses a drawing API (CSS, Canvas 2D, SVG) to render those pixels. WebGL, on the other hand, leaves all of that pixel-drawing math in your capable hands. This means two things.

Firstly, it’s more work for you. Yep, it’ll probably take you longer to get something up and running, working out those pesky projection matrices, while keeping an eye on the optimisation of your attribute buffers…

Buuut (important but), it’s less work for the computer – so it’ll run (normally, waaaay) faster. You’re basically cutting out the middle man. It also gives you a crazy amount of flexibility, which I highly appreciate.

The Approach

I’m a fan of visualising problems to understand them (let’s not talk about quaternions…), so let’s break it down.

We will start by making a number of points, made up of [x, y] coordinates, arranged in a lovely little line.

[Figure: a row of points arranged in a line]

Then we’ll use these to generate a geometry, duplicating each point so that there are two vertices at each step along the curve. Why two? Just a sec.

[Figure: each point duplicated into a pair of vertices]

Now, because our paired points are at the exact same position, our line is infinitely thin when rendered – and hence, invisible… No good. What we need to do is separate those pairs to give the line some width. And, puuuush.

[Figure: the pairs pushed apart into a visible line mesh]

And there we have our line mesh, all visible and line-y…

The part I nimbly skipped over there is also the most complicated – how do we know in which direction to move each vertex?

To solve this, we need to work out the direction between the previous and next points along the line. Then we can rotate it by 90 degrees, and we’re left with the angle we want: the normal.

[Figure: the normal, found by rotating the direction between neighbouring points by 90 degrees]
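To make that concrete, here’s the same idea as a tiny JavaScript sketch (a hypothetical helper operating on plain {x, y} points):

// direction from the previous point to the next, rotated 90 degrees
function getNormal(prev, next) {
    const tx = next.x - prev.x;
    const ty = next.y - prev.y;
    const length = Math.hypot(tx, ty) || 1;

    // (-ty, tx) is the normalised tangent rotated a quarter turn
    return { x: -ty / length, y: tx / length };
}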

Now that we have our normal, we can start getting a bit creative. For example, what if we separated the pairs by differing amounts at each point? Here, if we make the pairs get closer together toward the ends of the line, it will give us this sharp, tapered effect.

[Figure: the tapered line]

Now see what fun ideas you can come up with! I’ll wait. A bit more. Ok, stop.

And that’s the most complicated part over. Note: I haven’t gone into the depths of each caveat that drawing lines can present, because, well, I don’t need to. But I couldn’t possibly write about lines without mentioning this extremely informative and digestible article, written by Matt DesLauriers about everything you’d want to know about drawing lines in WebGL.

Code time – setting the scene

To kick us off, let’s set up a basic OGL canvas.

Here’s an OGL CodeSandbox Template project that I’ll be using as a guide. Feel free to fork this for any OGL experiments!

First import the required modules. Normally, I would import from a local copy of OGL for the ability to tree-shake, but to keep the file structure empty on CodeSandbox, here we’re using jsdelivr – which gives us CDN access to the npm deployment.

import {
    Renderer, Camera, Orbit, Transform, Geometry, Vec3, Color, Polyline, 
} from 'https://cdn.jsdelivr.net/npm/ogl@0.0.25/dist/ogl.mjs';

Create the WebGL context, and add the canvas to the DOM.

const renderer = new Renderer({dpr: 2});
const gl = renderer.gl;
document.body.appendChild(gl.canvas);

Create our camera and scene.

const camera = new Camera(gl);
camera.position.z = 3;

const controls = new Orbit(camera);

const scene = new Transform();

And then render the scene in an update loop. Obviously the scene is empty, so this will currently look very black.

requestAnimationFrame(update);
function update(t) {
    requestAnimationFrame(update);
    controls.update();

    renderer.render({scene, camera});
}

But now we can do all of the things!

As an input to OGL’s Polyline class, we need to create a bunch of points (xyz coordinates).

Here, the x value goes from -1.5 to 1.5 along the line, while the y value moves in a sine pattern between -0.5 and 0.5.

const count = 100;
const points = [];
for (let i = 0; i < count; i++) {
    const x = (i / (count - 1) - 0.5) * 3;
    const y = Math.sin(i / 10.5) * 0.5;
    const z = 0;

    points.push(new Vec3(x, y, z));
}

Then we pass those points into a new instance of Polyline, along with colour and thickness variables (uniforms). And finally, attach it to the scene.

const polyline = new Polyline(gl, {
    points,
    uniforms: {
        uColor: {value: new Color('#1b1b1b')},
        uThickness: {value: 20},
    },
});

polyline.mesh.setParent(scene);

Here we have that working live. (Click and drag, scroll etc. If you want...)

How about a square? Let's just change those points.

const points = [];
points.push(new Vec3( 0, -1, 0));
points.push(new Vec3(-1, -1, 0));
points.push(new Vec3(-1,  1, 0));
points.push(new Vec3( 1,  1, 0));
points.push(new Vec3( 1, -1, 0));
points.push(new Vec3( 0, -1, 0));

Circle?

const count = 100;
const points = [];
for (let i = 0; i < count; i++) {
    const angle = i / (count - 2) * Math.PI * 2;
    const x = Math.cos(angle);
    const y = Math.sin(angle);
    const z = 0;

    points.push(new Vec3(x, y, z));
}

You may have noticed that when you rotate or zoom the camera, the line always stays the same thickness. You would probably expect the line to get thicker when it's closer to the camera, and paper thin when rotated on its side.

This is because the pair separation we spoke about earlier is happening after the camera's projection is applied - when the vertex values are in what's called 'NDC Space' (Normalized Device Coordinates). Projection matrices can be confusing, but luckily, NDC Space is not.

NDC Space is simply picturing your canvas as a 2D graph, with left to right (X), and bottom to top (Y) going from -1 to 1. No matter how complicated your scene is (geometry, projections, manipulations), each vertex will eventually need to be projected to a -1 to 1 range for X and Y.

[Figure: NDC Space, a 2D graph with X and Y running from -1 to 1]

A more common term you've probably heard is Screen Space, which is very similar, but instead of a -1 to 1 range, it's mapped from 0 to 1.
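As a quick illustration, converting a pixel coordinate into each space might look like this (hypothetical helpers, assuming pixel Y grows downwards):

// pixels -> NDC Space: both axes run from -1 to 1, with Y flipped
const toNDC = (x, y, width, height) => [(x / width) * 2 - 1, (y / height) * -2 + 1];

// pixels -> Screen Space: both axes run from 0 to 1
const toScreen = (x, y, width, height) => [x / width, 1 - y / height];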

We generally use cameras to help us convert our 3D coordinates into NDC Space, which is absolutely vital when you need to spin around an object, or view geometry from a specific perspective. But for what we're doing (mouse trails. I haven't forgotten), we don't really need to do any of that! So, in fact, we're going to skip that whole step, throw away the camera, and create our points directly in NDC Space (-1 to 1) from the get-go. This simplifies things, and it also means that we're going to get the opportunity to write a custom shader! Let me show you.

Shaping the line with a custom shader

Firstly, let's create our points in a straight line, with the X going from -0.5 to 0.5 and the Y left at 0. Keeping in mind that the screen goes from -1 to 1, this means we will end up with a horizontal line in the center of the screen, spanning half the width.

const count = 40;
const points = [];
for (let i = 0; i < count; i++) {
    const x = i / (count - 1) - 0.5;
    const y = 0;
    const z = 0;

    points.push(new Vec3(x, y, z));
}

This time when we create our Polyline, we are going to pass in a custom Vertex shader, which will override the default shader found in that class. We also don't need a thickness just yet as we'll be calculating that in the shader.

const polyline = new Polyline(gl, {
    points,
    vertex,
    uniforms: {
        uColor: {value: new Color('#1b1b1b')},
    },
});

Now, there are two shaders in the WebGL pipeline, Vertex and Fragment. To put it simply, the Vertex shader determines where on the screen to draw, and the Fragment shader determines what colour.

We can pass whatever data we want into a Vertex shader; that's entirely up to us. However, it will always be expected to return a position on the viewport that should be rendered (in Clip Space, which, for this case, is the same as NDC Space; -1 to 1).

At the start of our Vertex shader, you will find the input data: Attributes and Uniforms. Attributes are per-vertex variables, whereas Uniforms are common variables for all of the vertices. For example, as this shader is run for each vertex passed in, the position Attribute value will change, moving along each point; the uResolution Uniform value, however, will remain the same throughout.

attribute vec3 position;
attribute vec3 next;
attribute vec3 prev;
attribute vec2 uv;
attribute float side;

uniform vec2 uResolution;

At the very end of our Vertex shader, you'll find a function called main that defines the variable gl_Position. These two names are non-debatable! Our WebGL program will automatically look for the main function to run, and then it will pass the gl_Position variable on to the Fragment shader.

void main() {
    gl_Position = vec4(position, 1);
}

As our points are already in NDC Space, our shader - made up of just these two sections - is technically correct. However, the only issue (the same as we had in our breakdown) is that the position pairs are on top of each other, so our line would be invisibly thin.

So instead of passing our position right on through to the output, let's add a function, getPosition, to push each vertex apart and give our line some width.

vec4 getPosition() {
    vec2 aspect = vec2(uResolution.x / uResolution.y, 1);
    vec2 nextScreen = next.xy * aspect;
    vec2 prevScreen = prev.xy * aspect;

    vec2 tangent = normalize(nextScreen - prevScreen);
    vec2 normal = vec2(-tangent.y, tangent.x);
    normal /= aspect;
    normal *= 0.1;
    
    vec4 current = vec4(position, 1);
    current.xy -= normal * side;
    return current;
}

void main() {
    gl_Position = getPosition();
}

Ah, now we can see our line. Mmmm, very modernist.

This new function is doing the exact steps in our approach overview. See here.

We determine the direction from the previous to the next point.

vec2 tangent = normalize(nextScreen - prevScreen);

Then rotate it 90 degrees to find the normal.

vec2 normal = vec2(-tangent.y, tangent.x);

Then we push our vertices apart along the normal. The side variable has a value of -1 or 1 for each side of a pair.

current.xy -= normal * side;

"OK OK... but you skipped a few lines".

Indeed. So, the lines that determine and apply the aspect ratio are there to account for the rectangular viewport. Multiplying against the aspect ratio makes our scene square. Then we can perform the rotation without risk of skewing. And after, we divide by the aspect to bring us back to the correct ratio.

And the other line...

normal *= 0.1;

Yes, that one... is where we can have some fun, as this manipulates the line's width.

Without this bit of code, our line would cover the entire height of the viewport. Why? See if you can guess...

You see, as the normal is a 'normalised' direction, this means it has a length of 1. As we know, the NDC Space goes from -1 to 1, so if our line is in the middle of the screen, and each side of the line is pushed out by 1, that will cover the entire range of -1 to 1. So multiplying by 0.1 instead only makes our line cover 10% of the viewport.

Now if we were to change this line, to say...

normal *= uv.y * 0.2;

We get this expanding, triangular shape.

[Figure: the expanding, triangular shape]

This is because the variable uv.y goes from 0 to 1 along the length of the line. So we can use this to affect the shape in a bunch of different ways.

Like, we can wrap that code in a pow function.

normal *= pow(uv.y, 2.0) * 0.2;

[Figure: the exponential curve]

Hm, how exponentially curvy. No, I want something more edgy.

normal *= abs(fract(uv.y * 2.0) - 0.5) * 0.4;

[Figure: the zig-zag, edgy shape]

Too edgy...

normal *= cos(uv.y * 12.56) * 0.1 + 0.2;

[Figure: the wavy, bulging shape]

Too flabby.

normal *= (1.0 - abs(uv.y - 0.5) * 2.0) * 0.2;

[Figure: the diamond shape]

Almost... but a little too diamond-y.

normal *= (1.0 - pow(abs(uv.y - 0.5) * 2.0, 2.0)) * 0.2;

[Figure: the smooth, rounded taper]

That's not bad. Let's run with that.

So now we have our shape, let's deal with the movement.

Adding movement

To start off we just need 20 points, left at the default [0, 0, 0] value.

Then we need a new Vec3 that will track the mouse input, and convert the X and Y values to a -1 to 1 range, with the Y flipped.

const mouse = new Vec3();

// keep the value up to date as the pointer moves (a touchmove listener could reuse this too)
window.addEventListener('mousemove', updateMouse);

function updateMouse(e) {
    mouse.set(
        (e.x / gl.renderer.width) * 2 - 1,
        (e.y / gl.renderer.height) * -2 + 1,
        0
    );
}

Then in our update function, we can use this mouse value to move our points.

Every frame, we loop through each of our points. For the first point, we ease it to the mouse value. For every other point, we ease it to the previous point in the line. This creates a trail effect that grows and shrinks as the user moves the mouse faster and slower.

requestAnimationFrame(update);
function update(t) {
    requestAnimationFrame(update);
    
    for (let i = points.length - 1; i >= 0; i--) {
        if (!i) {
            points[i].lerp(mouse, 0.9);
        } else {
            points[i].lerp(points[i - 1], 0.9);
        }
    }
    polyline.updateGeometry();

    renderer.render({scene});
}

Have a play with it below.

We can make this a bit more fun by replacing the first point's linear easing with a spring.

const spring = 0.06;
const friction = 0.85;
const mouseVelocity = new Vec3();
const tmp = new Vec3();

requestAnimationFrame(update);
function update(t) {
    requestAnimationFrame(update);
    
    for (let i = points.length - 1; i >= 0; i--) {
        if (!i) {
            tmp.copy(mouse).sub(points[i]).multiply(spring);
            mouseVelocity.add(tmp).multiply(friction);
            points[i].add(mouseVelocity);
        } else {
            points[i].lerp(points[i - 1], 0.9);
        }
    }
    polyline.updateGeometry();

    renderer.render({scene});
}

The extra bit of physics just makes it that much more interesting to play with. I can't help but try and make a beautiful curving motion with the mouse...

Finally, one line is never enough. And what's with all of this dark grey?! Give me 5 coloured lines, with randomised spring values, and we'll call it even.
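Something along these lines does the trick (a sketch with made-up colours; each line gets its own points, randomised spring value and velocity, updated exactly like the single line above):

const colors = ['#e09f7d', '#ef5d60', '#ec4067', '#a01a7d', '#311847'];

const lines = colors.map(color => {
    const points = [];
    for (let i = 0; i < 20; i++) points.push(new Vec3());

    const polyline = new Polyline(gl, {
        points,
        vertex,
        uniforms: {
            uColor: {value: new Color(color)},
        },
    });
    polyline.mesh.setParent(scene);

    return {
        points,
        polyline,
        spring: 0.03 + Math.random() * 0.05, // randomised spring per line
        friction: 0.85,
        mouseVelocity: new Vec3(),
        tmp: new Vec3(),
    };
});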

And there we have it!

As we're using random values, every time you refresh, the effect will behave a little differently.

End

Thank you so much for sticking with me. That ended up being a lot more in-depth than I had planned... I implore you to play around with the code, maybe try randomising the number of points in each line... or changing the shape of the curve over time!

If you're new to WebGL, I hope this made the world of buffer attributes and shaders a little less overwhelming - it can be really rewarding to come up with something interesting and unique.

Crafting Stylised Mouse Trails With OGL was written by Nathan Gordon and published on Codrops.