In April 2019, Harry Alisavakis made a great write-up about the “magical marbles” effect he shared prior on Twitter. Check that out first to get a high level overview of the effect we’re after (while you’re at it, you should see some of his other excellent shader posts).
While his write-up provided a brief summary of the technique, the purpose of this tutorial is to offer a more concrete look at how you could implement code for this in Three.js to run on the web. There are also a few tweaks to the technique here and there that aim to make things more straightforward.
This tutorial assumes intermediate familiarity with Three.js and GLSL.
Overview
You should read Harry’s post first because he provides helpful visuals, but the gist of it is this:
Add fake depth to a material by offsetting the texture look-ups based on camera direction
Instead of using the same texture at each iteration, let’s use depth-wise “slices” of a heightmap so that the shape of our volume is more dynamic
Add wavy motion by displacing the texture look-ups with scrolling noise
There were a couple of parts of this write-up that weren’t totally clear to me, likely due to the difference in features available in Unity vs Three.js. One is the jump from parallax mapping on a plane to a sphere. Another is how to get vertex tangents for the transformation to tangent space. Finally, I wasn’t sure if the noise for the heightmap was evaluated as code inside the shader or pre-rendered. After some experimentation I came to my own conclusions for these, but I encourage you to come up with your own variations of this technique.
Here’s the Pen I’ll be starting from, it sets up a boilerplate Three.js app with an init and tick lifecycle, color management, and an environment map from Poly Haven for lighting.
Marbles are made of glass, and Harry’s marbles definitely showed some specular shine. Making a truly beautiful glassy material would take some pretty complex PBR shader code, which is too much work! Instead, let’s just take one of Three.js’s built-in PBR materials and hook our magical bits into that, like the shader parasite we are.
Enter onBeforeCompile, a callback property of the THREE.Material base class that lets you apply patches to built-in shaders before they get compiled by WebGL. This technique is very hacky and not well explained in the official docs, but a good place to learn more about it is Dusan Bosnjak’s post “Extending three.js materials with GLSL”. The hardest part about it is determining which part of the shaders you need to change exactly. Unfortunately, your best bet is to just read through the source code of the shader you want to modify, find a line or chunk that looks vaguely relevant, and try tweaking stuff until the property you want to modify shows visible changes. I’ve been writing personal notes of what I discover since it’s really hard to keep track of what the different chunks and variables do.
I recently discovered there’s a much more elegant way to extend the built-in materials using Three’s experimental Node Materials, but that deserves a whole tutorial of its own, so for this guide I’ll stick with the more common onBeforeCompile approach.
For our purposes, MeshStandardMaterial is a good base to start from. It has specular and environment reflections that will make our material look very glassy, plus it gives you the option to add a normal map later on if you want to add scratches to the surface. The only part we want to change is the base color on which the lighting is applied. Luckily, this is easy to find. The fragment shader for MeshStandardMaterial is defined in meshphysical_frag.glsl.js (it’s a subset of MeshPhysicalMaterial, so they are both defined in the same file). Oftentimes you need to go digging through the shader chunks represented by each of the #include statements you’ll see in the file; however, this is a rare occasion where the variable we want to tweak is in plain sight.
It’s the line right near the top of the main() function that says:
vec4 diffuseColor = vec4( diffuse, opacity );
This line normally reads from the diffuse and opacity uniforms which you set via the .color and .opacity JavaScript properties of the material, and then all the chunks after that do the complicated lighting work. We are going to replace this line with our own assignment to diffuseColor so we can apply whatever pattern we want on the marble’s surface. You can do this using regular JavaScript string methods on the .fragmentShader field of the shader provided to the onBeforeCompile callback.
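A minimal sketch of that patch (the replacement GLSL here is just a placeholder pattern; the search string must match the source line exactly, including the spaces inside the parentheses):

```javascript
// Swap the stock diffuseColor assignment in a built-in shader for our
// own GLSL. This mutates the shader object Three passes to
// onBeforeCompile before the program is compiled.
function patchDiffuseColor (shader, replacementGLSL) {
  shader.fragmentShader = shader.fragmentShader.replace(
    'vec4 diffuseColor = vec4( diffuse, opacity );',
    replacementGLSL
  )
  return shader
}

// Usage inside your material setup, e.g. to show view-space normals:
// material.onBeforeCompile = shader =>
//   patchDiffuseColor(shader, 'vec4 diffuseColor = vec4( vNormal * 0.5 + 0.5, opacity );')
```

If the search string doesn’t match (for example after a Three.js version bump changes the chunk), the replace silently does nothing, so it’s worth eyeballing the patched shader.fragmentShader while developing.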
By the way, the type definition for that mysterious callback argument is available here.
In the following Pen I swapped our geometry for a sphere, lowered the roughness, and filled the diffuseColor with the screen space normals which are available in the standard fragment shader on vNormal. The result looks like a shiny version of MeshNormalMaterial.
Now comes the harder part — using the diffuse color to create the illusion of volume inside our marble. In Harry’s earlier parallax post, he talks about finding the camera direction in tangent space and using this to offset the UV coordinates. There’s a great explanation of how this general principle works for parallax effects on learnopengl.com and in this archived post.
However, converting stuff into tangent space in Three.js can be tricky. To the best of my knowledge, there’s not a built-in utility to help with this like there are for other space transformations, so it takes some legwork to both generate vertex tangents and then assemble a TBN matrix to perform the transformation. On top of that, spheres are not a nice shape for tangents due to the hairy ball theorem (yes, that’s a real thing), and Three’s computeTangents() function was producing discontinuities for me so you basically have to compute tangents manually. Yuck!
Luckily, we don’t really need to use tangent space if we frame this as a 3D raymarching problem. We have a ray pointing from the camera to the surface of our marble, and we want to march this through the sphere volume as well as down the slices of our height map. We just need to know how to convert a point in 3D space into a point on the surface of our sphere so we can perform texture lookups. In theory you could also just plug the 3D position right into your noise function of choice and skip using the texture, but this effect relies on lots of iterations and I’m operating under the assumption that a texture lookup is cheaper than all the number crunching happening in e.g. the 3D simplex noise function (shader gurus, please correct me if I’m wrong). The other benefit of reading from a texture is that it allows us to use a more art-oriented pipeline to craft our heightmaps, so we can make all sorts of interesting volumes without writing new code.
Originally I wrote a function to do this spherical XYZ→UV conversion based on some answers I saw online, but it turns out there’s already a function that does the same thing inside of common.glsl.js called equirectUv. We can reuse that as long as we put our raymarching logic after the #include <common> line in the standard shader.
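For intuition, here is a CPU-side port of equirectUv as I read it in common.glsl.js (note that GLSL’s two-argument atan is JavaScript’s Math.atan2):

```javascript
// Map a normalized 3D direction onto equirectangular UV coordinates,
// mirroring Three.js's equirectUv() in common.glsl.js.
function equirectUv ([x, y, z]) {
  const u = Math.atan2(z, x) / (2 * Math.PI) + 0.5
  const v = Math.asin(Math.min(1, Math.max(-1, y))) / Math.PI + 0.5
  return [u, v]
}
```

A ray pointing down +X lands at the center of the texture, [0.5, 0.5], and the poles (y = ±1) map to v = 1 and v = 0.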
Creating our heightmap
For the heightmap, we want a texture that seamlessly projects on the surface of a UV sphere. It’s not hard to find seamless noise textures online, but the problem is that these flat projections of noise will look warped near the poles when applied to a sphere. To solve this, let’s craft our own texture using Blender. One way to do this is to bend a high resolution “Grid” mesh into a sphere using two instances of the “Simple Deform modifier”, plug the resulting “Object” texture coordinates into your procedural shader of choice, and then do an emissive bake with the Cycles renderer. I also added some loop cuts near the poles and a subdivision modifier to prevent any artifacts in the bake.
The resulting bake looks something like this:
Raymarching
Now the moment we’ve been waiting for (or dreading) — raymarching! It’s actually not so bad; the following is an abbreviated version of the code. For now there’s no animation, I’m just taking slices of the heightmap using smoothstep (note the smoothing factor, which helps hide the sharp edges between layers), adding them up, and then using the total to mix two colors.
uniform sampler2D heightMap;
uniform vec3 colorA;
uniform vec3 colorB;
uniform int iterations;
uniform float depth;
uniform float smoothing;

/**
 * @param rayOrigin - Point on sphere
 * @param rayDir - Normalized ray direction
 * @returns Diffuse RGB color
 */
vec3 marchMarble(vec3 rayOrigin, vec3 rayDir) {
  float perIteration = 1. / float(iterations);
  vec3 deltaRay = rayDir * perIteration * depth;

  // Start at the point of intersection and accumulate volume
  vec3 p = rayOrigin;
  float totalVolume = 0.;

  for (int i = 0; i < iterations; ++i) {
    // Read the heightmap from the current spherical direction
    vec2 uv = equirectUv(normalize(p));
    float heightMapVal = texture(heightMap, uv).r;

    // Take a slice of the heightmap
    float cutoff = 1. - float(i) * perIteration;
    float slice = smoothstep(cutoff, cutoff + smoothing, heightMapVal);

    // Accumulate the volume and advance the ray forward one step
    totalVolume += slice * perIteration;
    p += deltaRay;
  }

  return mix(colorA, colorB, totalVolume);
}
/**
 * We can use this later like:
 *
 * vec4 diffuseColor = vec4(marchMarble(rayOrigin, rayDir), 1.0);
 */
This logic isn’t really physically accurate — taking slices of the heightmap based on the iteration index assumes that the ray is pointing towards the center of the sphere, but this isn’t true for most of the pixels. As a result, the marble appears to have some heavy refraction. However, I think this actually looks cool and further sells the effect of it being solid glass!
Injecting uniforms
One final note before we see the fruits of our labor — how do we include all these custom uniforms in our modified material? We can’t just stick stuff onto material.uniforms like you would with THREE.ShaderMaterial. The trick is to create your own personal uniforms object and then wire up its contents onto the shader argument inside of onBeforeCompile. For instance:
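Wiring a single uniform by hand looks like this (the "time" uniform name is just an example):

```javascript
// One local uniforms object lives alongside the material. Grafting its
// entries onto the compiled shader means both sides share the same
// { value } containers.
const myUniforms = {
  time: { value: 0 }
}

function applyMyUniforms (shader) {
  shader.uniforms.time = myUniforms.time
}

// material.onBeforeCompile = applyMyUniforms
```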
When the shader tries to read its shader.uniforms.foo.value reference, it’s actually reading from your local myUniforms.foo.value, so any change to the values in your uniforms object will automatically be reflected in the shader.
I typically use the JavaScript spread operator to wire up all my uniforms at once:
const myUniforms = {
// ...(lots of stuff)
}
material.onBeforeCompile = shader => {
shader.uniforms = { ...shader.uniforms, ...myUniforms }
// ... (all your other patches)
}
Putting this all together, we get a gassy (and glassy) volume. I’ve added sliders to this Pen so you can play around with the iteration count, smoothing, max depth, and colors.
Technically the ray origin and ray direction should be in local space so the effect doesn’t break when the marble moves. However, I’m skipping this transformation because we’re not moving the marble, so world space and local space are interchangeable. Work smarter not harder!
Step 3: Wavy Motion
Almost done! The final touch is to make this marble come alive by animating the volume. Harry’s waving displacement post explains how he accomplishes this using a 2D displacement texture. However, just like with the heightmap, a flat displacement texture warps near the poles of a sphere. So, we’ll make our own again. You can use the same Blender setup as before, but this time let’s bake a 3D noise texture to the RGB channels:
Then in our marchMarble function, we’ll read from this texture using the same equirectUv function as before, center the values, and then add a scaled version of that vector to the position used for the heightmap texture lookup. To animate the displacement, introduce a time uniform and use that to scroll the displacement texture horizontally. For an even better effect, we’ll sample the displacement map twice (once upright, then upside down so they never perfectly align), scroll them in opposite directions and add them together to produce noise that looks chaotic. This general strategy is often used in water shaders to create waves.
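On the CPU side, the coordinate math for those two samples looks roughly like this (the scroll speed and the choice to flip vertically are my assumptions, not the demo’s exact values):

```javascript
// Compute the two displacement-map UV pairs: one scrolled one way, the
// other flipped upside down and scrolled the opposite way, so the two
// reads never line up and their sum looks chaotic.
function displacementUvs ([u, v], time, speed = 0.05) {
  const wrap = x => x - Math.floor(x) // same as GLSL fract()
  const uvA = [wrap(u + time * speed), v]
  const uvB = [wrap(u - time * speed), 1 - v] // flipped + reversed scroll
  return [uvA, uvB]
}
```

In the shader you’d read the displacement texture at both pairs, center each sample around zero, and add the results before offsetting the heightmap lookup.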
Hard part’s over! This formula is a starting point from which there are endless possibilities for improvements and deviations. For instance, what happens if we swap out the noise texture we used earlier for something else like this:
In that example I modified the shader to make the volume look less gaseous by boosting the rate of volume accumulation, breaking the loop once it reached a certain volume threshold, and tinting based on the final number of iterations rather than accumulated volume.
For my last trick, I’ll point back to Harry’s write-up where he suggests mixing between two HDR colors. This basically means mixing between colors whose RGB values exceed the typical [0, 1] range. If we plug such a color into our shader as-is, it’ll create color artifacts in the pixels where the lighting is blown out. There’s an easy fix: wrap the color in a toneMapping() call, as is done in tonemapping_fragment.glsl.js, which “tones down” the color range. You won’t find that function defined in any shader chunk — WebGLRenderer generates it at compile time, aliasing it to whichever operator you’ve chosen via renderer.toneMapping (the operators themselves live in tonemapping_pars_fragment.glsl.js).
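Which operator actually runs depends on your renderer settings; Reinhard is the simplest one to reason about, and a CPU-side sketch shows why it tames HDR values:

```javascript
// Reinhard tone mapping: maps any non-negative channel value into
// [0, 1), so HDR colors compress smoothly instead of clipping.
function reinhardToneMapping ([r, g, b], exposure = 1) {
  const tone = c => (c * exposure) / (1 + c * exposure)
  return [tone(r), tone(g), tone(b)]
}

// An HDR red channel of 4.0 tones down to 0.8 instead of clipping at 1.0.
```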
I’ve added some color multiplier sliders to this Pen so you can push the colors outside the [0, 1] range and observe how mixing these HDR colors creates pleasant color ramps.
Thanks again to Harry for the great learning resources. I had a ton of fun trying to recreate this effect and I learned a lot along the way. Hopefully you learned something too!
Your challenge now is to take these examples and run with them. Change the code, the textures, the colors, and make your very own magical marble. Show me and Harry what you make on Twitter.
It’s fascinating what magical effects you can add to a website when you experiment with vertex displacement. Today we’d like to share a method you can use to create your own WebGL shader animation linked to scroll progress. It’s a great way to learn how to bind shader vertices and colors to user interactions and to find the best flow.
For our flexible scroll stage, we quickly create three sections with Pug. By adding an element to the sections array, it’s easy to expand the stage.
index.pug:
.scroll__stage
.scroll__content
- const sections = ['Logma', 'Naos', 'Chara']
each section, index in sections
section.section
.section__title
h1.section__title-number= index < 9 ? `0${index + 1}` : index + 1
h2.section__title-text= section
p.section__paragraph The fireball that we rode was moving – But now we've got a new machine – They got music in the solar system
br
a.section__button Discover
The sections are quickly formatted with Sass, along with the mixins we will need later.
Now we write our ScrollStage class and set up a scene with Three.js. The camera range of 10 is enough for us here. We already prepare the loop for later instructions.
We create a mesh, assign an icosahedron geometry and set the blending of its material to additive for loud colors. And – I like the wireframe style. For now, we set the value of all uniforms to 0 (uOpacity to 1). I usually scale down the mesh for portrait screens. With only one object, we can do it this way; otherwise you’d be better off transforming camera.position.z.
In the vertex shader (which positions the geometry) and fragment shader (which assigns a color to the pixels) we control the values of the uniforms that we will get from the scroll position. To generate an organic randomness, we make some noise. This shader program runs now on the GPU.
To find your preferred settings, you can set up a dat.gui, for example. I’ll show you another approach here, in which you can combine two (or more) parameters to intuitively find a cool flow of movement. We simply connect the uniform values to the normalized values of the mouse event and log them to the console. As we use this approach only for development, we do not call rAF (requestAnimationFrame).
Once we have chosen the start and end values, it’s easy to attach them to the scroll position. In this example, we want to drop the purple mesh through the blue section so that it is subsequently soaked in blue itself. We increase the frequency and the strength of our vertex displacement. Let’s first enter these values in our settings and update the mesh material. We normalize scrollY so that we get values from 0 to 1 and can make our calculations with them.
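That normalization plus the start/end interpolation can be sketched as follows (names are mine, not from the demo’s source):

```javascript
// Normalize scrollY to [0, 1] over the scrollable height of the page.
function scrollProgress (scrollY, pageHeight, viewportHeight) {
  const max = pageHeight - viewportHeight
  return Math.min(1, Math.max(0, scrollY / max))
}

// Interpolate a uniform between its chosen start and end values.
function lerp (start, end, t) {
  return start + (end - start) * t
}

// e.g. uFrequency.value = lerp(startFrequency, endFrequency,
//                              scrollProgress(window.scrollY, pageHeight, innerHeight))
```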
To render the shader only while scrolling, we call rAF by the scroll listener. We don’t need the mouse event listener anymore.
To improve performance, we add an overwrite to the GSAP default settings. This way we kill any existing tweens while generating a new one for every frame. A long duration renders the movement extra smooth. Once again we let the object rotate slightly with the scroll movement. We iterate over our settings and GSAP makes the music.
In this tutorial we’ll implement an infinite circular gallery using WebGL with OGL based on the website Lions Good News 2020 made by SHIFTBRAIN inc.
Most of the steps of this tutorial can be also reproduced in other WebGL libraries such as Three.js or Babylon.js with the correct adaptations.
With that being said, let’s start coding!
Creating our OGL 3D environment
The first step of any WebGL tutorial is making sure that you’re setting up all the rendering logic required to create a 3D environment.
Usually what’s required is: a camera, a scene and a renderer that is going to output everything into a canvas element. Then inside a requestAnimationFrame loop, you’ll use your camera to render a scene inside the renderer. So here’s our initial snippet:
In our createRenderer method, we’re initializing a renderer with a fixed color background by calling this.gl.clearColor. Then we’re storing our GL context (this.renderer.gl) reference in the this.gl variable and appending our <canvas> (this.gl.canvas) element to our document.body.
In our createCamera method, we’re creating a new Camera() instance and setting some of its attributes: fov and its z position. The FOV is the field of view of your camera, what you’re able to see from it. And the z is the position of your camera in the z axis.
In our createScene method, we’re using the Transform class, that is the representation of a new scene that is going to contain all our planes that represent our images in the WebGL environment.
The onResize method is the most important part of our initial setup. It’s responsible for three different things:
Making sure we’re always resizing the <canvas> element with the correct viewport sizes.
Updating our this.camera perspective by dividing the viewport’s width by its height to get the aspect ratio.
Storing in the variable this.viewport, the value representations that will help to transform pixels into 3D environment sizes by using the fov from the camera.
The approach of using the camera.fov to transform pixels in 3D environment sizes is an approach used very often in multiple WebGL implementations. Basically what it does is making sure that if we do something like: this.mesh.scale.x = this.viewport.width; it’s going to make our mesh fit the entire screen width, behaving like width: 100%, but in 3D space.
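That fov-based conversion can be sketched as a pure function (a sketch of the common formula; the variable names are mine):

```javascript
// Convert a perspective camera's fov (in degrees) and z distance into
// scene units that exactly span the screen, so that
// mesh.scale.x = viewport.width behaves like width: 100%.
function calculateViewport (fovDegrees, cameraZ, screenWidth, screenHeight) {
  const fov = (fovDegrees * Math.PI) / 180
  const height = 2 * Math.tan(fov / 2) * cameraZ
  const width = height * (screenWidth / screenHeight)
  return { width, height }
}
```

With fov = 45 and the camera at z = 20, a full-width plane on a square screen comes out to roughly 16.57 units wide.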
And finally in our update, we’re setting our requestAnimationFrame loop and making sure we keep rendering our scene.
You’ll also notice that we already included the wheel, touchstart, touchmove, touchend, mousedown, mousemove and mouseup events, they will be used to include user interactions with our application.
Creating a reusable geometry instance
It’s a good practice to keep memory usage low by always reusing the same geometry reference no matter what WebGL library you’re using. To represent all our images, we’re going to use a Plane geometry, so let’s create a new method and store this new geometry inside the this.planeGeometry variable.
The reason for including heightSegments and widthSegments with these values is being able to manipulate vertices in a way to make the Plane behave like a paper in the air.
Importing our images using Webpack
Now it’s time to import our images into our application. Since we’re using Webpack in this tutorial, all we need to do to request our images is using import:
import Image1 from 'images/1.jpg'
import Image2 from 'images/2.jpg'
import Image3 from 'images/3.jpg'
import Image4 from 'images/4.jpg'
import Image5 from 'images/5.jpg'
import Image6 from 'images/6.jpg'
import Image7 from 'images/7.jpg'
import Image8 from 'images/8.jpg'
import Image9 from 'images/9.jpg'
import Image10 from 'images/10.jpg'
import Image11 from 'images/11.jpg'
import Image12 from 'images/12.jpg'
Now let’s create our array of images that we want to use in our infinite slider, so we’re basically going to call the variables above inside a createMedia method, and use .map to create new instances of the Media class (new Media()), which is going to be our representation of each image of the gallery.
As you’ve probably noticed, we’re passing a bunch of arguments to our Media class; I’ll explain why they’re needed when we start setting up the class in the next section. We’re also duplicating the amount of images to avoid any issues of not having enough images when making our gallery infinite on very wide screens.
It’s important to also include some specific calls in the onResize and update methods for our this.medias array, because we want the images to be responsive:
Our Media class is going to use Mesh, Program and Texture classes from OGL to create a 3D plane and attribute a texture to it, which in our case is going to be our images.
In our constructor, we’re going to store all variables that we need and that were passed in the new Media() initialization from index.js:
Explaining a few of these arguments, basically the geometry is the geometry we’re going to apply to our Mesh class. The this.gl is our GL context, useful to keep doing WebGL manipulations inside the class. The this.image is the URL of the image. Both of the this.index and this.length will be used to do positions calculations of the mesh. The this.scene is the group which we’re going to append our mesh to. And finally this.screen and this.viewport are the sizes of the viewport and environment.
Now it’s time to create the shader that is going to be applied to our Mesh in the createShader method, in OGL shaders are created with Program:
In the snippet above, we’re basically creating a new Texture() instance, making sure to use generateMipmaps as false so it preserves the quality of the image. Then creating a new Program() instance, which represents a shader composed of fragment and vertex with some uniforms used to manipulate it.
We’re also creating a new Image() instance to preload the image before applying it to the texture.image. And also updating the this.program.uniforms.uImageSizes.value because it’s going to be used to preserve the aspect ratio of our images.
It’s important to create our fragment and vertex shaders now, so we’re going to create two new files: fragment.glsl and vertex.glsl:
The Mesh instance is stored in the this.plane variable to be reused in the onResize and update methods, then appended as a child of the this.scene group.
The only thing we have now on the screen is a simple square with our image:
Let’s now implement the onResize method and make sure we’re rendering rectangles:
The scale.y and scale.x calls are responsible for scaling our element properly, transforming our previous square into a rectangle of 700×900 sizes based on the scale.
And the uViewportSizes and uPlaneSizes uniform value updates make the image display correctly. That’s basically what gives the image the background-size: cover; behavior, but in a WebGL environment.
Now we need to position all the rectangles in the x axis, making sure we have a small gap between them. To achieve that, we’re going to use this.plane.scale.x, this.padding and this.index variables to do the calculation required to move them:
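That calculation can be sketched like this (a guess at the exact formula; the demo’s source may differ slightly):

```javascript
// Each plane's width plus the gap gives one slot's size; multiplying
// by the index lays the rectangles out side by side on the x axis.
function mediaX (planeScaleX, padding, index) {
  const width = planeScaleX + padding
  return width * index
}
```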
And in the update method, we’re going to set the this.plane.position to these variables:
update () {
this.plane.position.x = this.x
}
Now you’ve set up all the initial code of Media, which results in the following image:
Including infinite scrolling logic
Now it’s time to make it interesting and include scrolling logic on it, so we have at least an infinite gallery in place when the user scrolls through your page. In our index.js, we’ll do the following updates.
First, let’s include a new object called this.scroll in our constructor with all variables that we will manipulate to do the smooth scrolling:
In our update method with requestAnimationFrame, we’ll lerp the this.scroll.current with this.scroll.target to make it smooth, then we’ll pass it to all medias:
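The lerp itself is just a few lines (a standard implementation, not code copied from the demo):

```javascript
// Classic smooth-scroll step: ease current toward target each frame.
// Smaller ease values make the scrolling feel heavier and smoother.
function lerp (current, target, ease) {
  return current + (target - current) * ease
}

// Called once per frame, current converges on target:
// current = lerp(current, 100, 0.1) // 0, then 10, then 19, then 27.1, ...
```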
As you’ve noticed, it’s not infinite yet; to achieve that, we need to include some extra code. The first step is including the direction of the scroll in the update method from index.js:
Now in the Media class, you need to include a variable called this.extra in the constructor, and do some manipulations on it to sum the total width of the gallery, when the element is outside of the screen.
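The idea behind this.extra can be sketched as a pure function (a simplification of what the class does; the real code also tracks the scroll direction so it only wraps items that are moving away):

```javascript
// When a plane has fully left one edge of the viewport, jump it to the
// other end of the gallery by accumulating the total width into `extra`.
function wrapExtra (x, planeWidth, viewportWidth, widthTotal, extra) {
  const offLeft = x + planeWidth / 2 < -viewportWidth / 2
  const offRight = x - planeWidth / 2 > viewportWidth / 2
  if (offLeft) return extra + widthTotal   // recycle to the right end
  if (offRight) return extra - widthTotal  // recycle to the left end
  return extra
}
```

In the class, this.extra is then added to the x computed from the scroll position, so the plane reappears seamlessly on the opposite side.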
That’s it, now we have the infinite scrolling gallery, pretty cool right?
Including circular rotation
Now it’s time to include the special flavor of the tutorial, which is making the infinite scrolling also have the circular rotation. To achieve it, we’ll use Math.cos to change this.mesh.position.y according to the element’s position, and a mapping technique to change this.mesh.rotation.z based on that same position.
First, let’s make it rotate in a smooth way based on the position. The map method is basically a way to re-map a value from one range onto another; say, for example, you use map(0.5, 0, 1, -500, 500); it’s going to return 0 because 0.5 is the midpoint of [0, 1], so you get the midpoint of [-500, 500]. Basically, the first argument is taken relative to the [min1, max1] range and projected onto the [min2, max2] range:
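For reference, here is a minimal map implementation consistent with that example (whether the demo’s utility clamps its input is my assumption):

```javascript
// Re-map `value` from the [min1, max1] range onto [min2, max2],
// clamping so the output never leaves the target range.
function map (value, min1, max1, min2, max2) {
  const t = (value - min1) / (max1 - min1)
  const clamped = Math.min(1, Math.max(0, t))
  return min2 + (max2 - min2) * clamped
}
```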
And that’s the result we get so far. It’s already pretty cool because you’re able to see the rotation changing based on the plane position:
Now it’s time to make it look circular. Let’s use Math.cos, we just need to do a simple calculation with this.plane.position.x / this.widthTotal, this way we’ll have a cos that will return a normalized value that we can just tweak multiplying by how much we want to change the y position of the element:
Simple as that, we’re just moving it by 75 in environment space based in the position, this gives us the following result, which is exactly what we wanted to achieve:
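A sketch of that y calculation (the 75 amplitude comes from the text above; the downward offset is my assumption so the arc bends below the center line):

```javascript
// Follow a cosine of the plane's normalized x position: planes at the
// center of the gallery sit at the top of the arc, planes at the edges
// drop away, producing the circular layout.
function circularY (x, widthTotal) {
  return Math.cos((x / widthTotal) * Math.PI) * 75 - 75
}

// A plane at the center (x = 0) sits at the top of the arc (y = 0).
```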
Snapping to the closest item
Now let’s include a simple snapping to the closest item when the user stops scrolling. To achieve that, we need to create a new method called onCheck, it’s going to do some calculations when the user releases the scrolling:
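The snapping math boils down to rounding the scroll target to the nearest multiple of one item’s slot width (a sketch; the names are mine):

```javascript
// Snap the scroll target to the nearest item slot so a plane always
// ends up centered when scrolling stops. `width` is one slot's size
// (plane width plus padding).
function snapTarget (target, width) {
  const index = Math.round(Math.abs(target) / width)
  const snapped = width * index
  return target < 0 ? -snapped : snapped
}
```

In onCheck you’d assign this back to this.scroll.target and let the lerp in update glide the gallery into place.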
Now the gallery is always being snapped to the correct entry:
Writing paper shaders
Finally let’s include the most interesting part of our project, which is enhancing the shaders a little bit by taking into account the scroll velocity and distorting the vertices of our meshes.
The first step is to include two new uniforms in our this.program declaration from Media class: uSpeed and uTime.
Now let’s write some shader code to make our images bend and distort in a very cool way. In your vertex.glsl file, you should include the new uniforms: uniform float uTime and uniform float uSpeed:
uniform float uTime;
uniform float uSpeed;
Then inside the void main() of your shader, you can now manipulate the vertices in the z axis using these two values plus the position stored in the variable p. We’re going to use a sin and a cos to bend our vertices like a sheet of paper, so all you need to do is include the following line:
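The exact line from the demo isn’t reproduced here, but it has this general shape (every constant below is a guess to tweak, not the demo’s value):

```glsl
// Bend the plane along z with a sin across x and a cos across y,
// scaled by the scroll speed so the paper flattens out at rest.
vec3 p = position;
p.z = (sin(p.x * 4.0 + uTime) * 1.5 + cos(p.y * 2.0 + uTime) * 1.5) * uSpeed;

gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
```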
Also don’t forget to include uTime increment in the update() method from Media:
this.program.uniforms.uTime.value += 0.04
Just this line of code outputs a pretty cool paper effect animation:
Including text in WebGL using MSDF fonts
Now let’s include our text inside the WebGL, to achieve that, we’re going to use msdf-bmfont to generate our files, you can see how to do that in this GitHub repository, but basically it’s installing the npm dependency and running the command below:
After running it, you should now have a .png and .json file in the same directory, these are the files that we’re going to use on our MSDF implementation in OGL.
Now let’s create a new file called Title and start setting up the code of it. First let’s create our class and use import in the shaders and the files:
import AutoBind from 'auto-bind'
import { Color, Geometry, Mesh, Program, Text, Texture } from 'ogl'
import fragment from 'shaders/text-fragment.glsl'
import vertex from 'shaders/text-vertex.glsl'
import font from 'fonts/freight.json'
import src from 'fonts/freight.png'
export default class {
constructor ({ gl, plane, renderer, text }) {
AutoBind(this)
this.gl = gl
this.plane = plane
this.renderer = renderer
this.text = text
this.createShader()
this.createMesh()
}
}
Now it’s time to start setting up MSDF implementation code inside the createShader() method. The first thing we’re going to do is create a new Texture() instance and load the fonts/freight.png one stored in src:
Then we need to start setting up the fragment shader we’re going to use to render the MSDF text, because MSDF can be optimized in WebGL 2.0, we’re going to use this.renderer.isWebgl2 from OGL to check if it’s supported or not and declare different shaders based on it, so we’ll have vertex300, fragment300, vertex100 and fragment100:
createShader () {
const vertex100 = `${vertex}`
const fragment100 = `
#extension GL_OES_standard_derivatives : enable
precision highp float;
${fragment}
`
const vertex300 = `#version 300 es
#define attribute in
#define varying out
${vertex}
`
const fragment300 = `#version 300 es
precision highp float;
#define varying in
#define texture2D texture
#define gl_FragColor FragColor
out vec4 FragColor;
${fragment}
`
let fragmentShader = fragment100
let vertexShader = vertex100
if (this.renderer.isWebgl2) {
fragmentShader = fragment300
vertexShader = vertex300
}
this.program = new Program(this.gl, {
cullFace: null,
depthTest: false,
depthWrite: false,
transparent: true,
fragment: fragmentShader,
vertex: vertexShader,
uniforms: {
uColor: { value: new Color('#545050') },
tMap: { value: texture }
}
})
}
As you’ve probably noticed, we’re prepending fragment and vertex with different setup based on the renderer WebGL version, let’s create also our text-fragment.glsl and text-vertex.glsl files:
Finally, let’s create the geometry of our MSDF font implementation in the createMesh() method. For that we’ll use the new Text() instance from OGL, then apply the buffers it generates to the new Geometry() instance:
Simple as that: we just include a new Title() instance inside our Media class, which will output the following result:
One of the best things about rendering text inside WebGL is reducing the overhead of calculations the browser has to do when animating text into position. If you go with the DOM approach, you’ll usually see a small performance impact, because browsers need to recalculate DOM sections and check composite layers while translating the text.
For the purpose of this demo, we also included a new Number() class implementation that is responsible for showing the index the user is currently seeing. You can check how it’s implemented in the source code; it’s basically the same implementation as the Title class, with the only difference being that it loads a different font style:
Including background blocks
To finalize the demo, let’s implement some blocks in the background that move along the x and y axes to enhance its depth effect:
To achieve this effect we’re going to create a new Background class. Inside of it we’ll initialize some new Plane() geometries wrapped in new Mesh() instances, giving them random sizes and positions by changing the scale and position of each mesh inside a for loop:
import { Color, Mesh, Plane, Program } from 'ogl'
import fragment from 'shaders/background-fragment.glsl'
import vertex from 'shaders/background-vertex.glsl'
import { random } from 'utils/math'
export default class {
constructor ({ gl, scene, viewport }) {
this.gl = gl
this.scene = scene
this.viewport = viewport
const geometry = new Plane(this.gl)
const program = new Program(this.gl, {
vertex,
fragment,
uniforms: {
uColor: { value: new Color('#c4c3b6') }
},
transparent: true
})
this.meshes = []
for (let i = 0; i < 50; i++) {
let mesh = new Mesh(this.gl, {
geometry,
program,
})
const scale = random(0.75, 1)
mesh.scale.x = 1.6 * scale
mesh.scale.y = 0.9 * scale
mesh.speed = random(0.75, 1)
mesh.xExtra = 0
mesh.x = mesh.position.x = random(-this.viewport.width * 0.5, this.viewport.width * 0.5)
mesh.y = mesh.position.y = random(-this.viewport.height * 0.5, this.viewport.height * 0.5)
this.meshes.push(mesh)
this.scene.addChild(mesh)
}
}
}
After that we just need to apply the same endless scrolling logic to them, following the directional validation we already have in the Media class:
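That wrapping can be sketched as a small pure function. Note this is a simplified sketch: the `direction` argument and the exact thresholds are assumptions modeled on how the demo's Media class validates scroll direction, not a verbatim excerpt from the source.

```javascript
// Hypothetical sketch of the endless-scrolling wrap applied to each
// background mesh. When a mesh moves fully past one viewport edge,
// it jumps to the opposite side so scrolling never runs out of blocks.
function wrapX (x, scaleX, viewportWidth, direction) {
  const halfViewport = viewportWidth / 2
  const halfMesh = scaleX / 2
  if (direction === 'left' && x + halfMesh < -halfViewport) {
    // Fully off the left edge: reappear on the right
    return x + viewportWidth + scaleX
  }
  if (direction === 'right' && x - halfMesh > halfViewport) {
    // Fully off the right edge: reappear on the left
    return x - viewportWidth - scaleX
  }
  return x
}
```

In the real class, something like this would run for every mesh in `this.meshes` on each frame, writing the result back to `mesh.position.x`.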
I love blobs and I enjoy looking for interesting ways to change basic geometries with Three.js: bending a plane, twisting a box, or exploring a torus (like in this 10-min video tutorial). So this time, my love for shaping things will be the excuse to see what we can do with a sphere, transforming it using shaders.
This tutorial will be brief, so we’ll skip the basic render/scene setup and focus on manipulating the sphere’s shape and colors, but if you want to know more about the setup check out these steps.
We’ll go with a more rounded than irregular shape, so the premise is to deform a sphere and use that same distortion to color it.
Vertex displacement
As you’ve probably been thinking, we’ll be using noise to deform the geometry by moving each vertex along the direction of its normal. Think of it as if we were pushing each vertex from the inside out with different strengths. I could elaborate more on this, but I’d rather point you to this article by The Spite (aka Jaume Sanchez Elias), who explains it so well! I bet some of you have stumbled upon it already.
So in code, it looks like this:
varying vec3 vNormal;
uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
void main() {
float t = uTime * uSpeed;
// You can also use classic perlin noise or simplex noise,
// I'm using its periodic variant out of curiosity
float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;
// Disturb each vertex along the direction of its normal
vec3 pos = position + (normal * distortion);
vNormal = normal;
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
You can experiment and change its values to see how the blob changes. I know we’re going with a more subtle and rounded distortion, but feel free to go crazy with it; there are audio visualizers out there that deform a sphere to the point that you don’t even think it’s based on a sphere.
Now, this already looks interesting, but let’s add one more touch to it next.
Noitation
…is just a word I came up with to combine noise with rotation (ba dum tss), but yes! Adding some twirl to the mix makes things more compelling.
If you ever played with Play-Doh as a child, you surely molded a big chunk of clay into a ball, grabbed it with both hands, and twisted in opposite directions until the clay tore apart. This is kind of what we want to do (except for the breaking part).
To twist the sphere, we’re going to generate a sine wave that runs from the top to the bottom of the sphere. Then we’ll use that wave as a rotation angle for the current position. Since the value increases and decreases from top to bottom, the rotation oscillates as well, creating a twist:
varying vec3 vNormal;
uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;
#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)
void main() {
float t = uTime * uSpeed;
// You can also use classic perlin noise or simplex noise,
// I'm using its periodic variant out of curiosity
float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;
// Disturb each vertex along the direction of its normal
vec3 pos = position + (normal * distortion);
// Create a sine wave from top to bottom of the sphere
// To increase the amount of waves, we'll use uFrequency
// To make the waves bigger we'll use uAmplitude
float angle = sin(uv.y * uFrequency + t) * uAmplitude;
pos = rotateY(pos, angle);
vNormal = normal;
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
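If you are wondering what rotateY actually does, here is a plain JavaScript port of the helper (a sketch of a standard Y-axis rotation; glsl-rotate implements the same matrix in GLSL):

```javascript
// Rotate a 3D point around the Y axis by `angle` radians, mirroring
// what the glsl-rotate rotateY helper does in the vertex shader:
// x and z swing around while y stays untouched.
function rotateY ([x, y, z], angle) {
  const c = Math.cos(angle)
  const s = Math.sin(angle)
  return [c * x + s * z, y, -s * x + c * z]
}
```

Because the angle we feed it comes from `sin(uv.y * uFrequency + t)`, vertices at different heights rotate by different amounts, which is exactly what produces the twist.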
Notice how the waves emerge from the top; it’s soothing. Some of you might find this movement therapeutic, so take some time to appreciate it and think about what we’ve learned so far…
Alright! Now that you’re back let’s get on to the fragment shader.
Colorific
If you take a close look at the shaders above, you’ll see that, almost at the end, we’ve been passing the normals to the fragment shader. Remember that we want to use the distortion to color the shape, so first let’s create a varying to pass that distortion through:
varying float vDistort;
uniform float uTime;
uniform float uSpeed;
uniform float uNoiseDensity;
uniform float uNoiseStrength;
uniform float uFrequency;
uniform float uAmplitude;
#pragma glslify: pnoise = require(glsl-noise/periodic/3d)
#pragma glslify: rotateY = require(glsl-rotate/rotateY)
void main() {
float t = uTime * uSpeed;
// You can also use classic perlin noise or simplex noise,
// I'm using its periodic variant out of curiosity
float distortion = pnoise((normal + t), vec3(10.0) * uNoiseDensity) * uNoiseStrength;
// Disturb each vertex along the direction of its normal
vec3 pos = position + (normal * distortion);
// Create a sine wave from top to bottom of the sphere
// To increase the amount of waves, we'll use uFrequency
// To make the waves bigger we'll use uAmplitude
float angle = sin(uv.y * uFrequency + t) * uAmplitude;
pos = rotateY(pos, angle);
vDistort = distortion; // Train goes to the fragment shader! Tchu tchuuu
gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
With this basis, we’ll take it a step further and use it in conjunction with one of my favorite color functions out there.
Cospalette
The cosine palette is a very useful function for creating and controlling color with code, based on the brightness, contrast, and the oscillation and phase of a cosine. I encourage you to watch Char Stiles explain this further, which is soooo good. A final shout-out to Inigo Quilez, who wrote an article about this function some years ago; for those of you who haven’t stumbled upon his genius work, please do. I would love to write more about him, but I’ll save that for a poem.
Let’s use cospalette to input the distortion and see how it looks:
varying vec2 vUv;
varying float vDistort;
uniform float uIntensity;
vec3 cosPalette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
return a + b * cos(6.28318 * (c * t + d));
}
void main() {
float distort = vDistort * uIntensity;
// These values are my fav combination,
// they remind me of Zach Lieberman's work.
// You can find more combos in the examples from IQ:
// https://iquilezles.org/www/articles/palettes/palettes.htm
// Experiment with these!
vec3 brightness = vec3(0.5, 0.5, 0.5);
vec3 contrast = vec3(0.5, 0.5, 0.5);
vec3 oscillation = vec3(1.0, 1.0, 1.0);
vec3 phase = vec3(0.0, 0.1, 0.2);
// Pass the distortion as the input of cospalette
vec3 color = cosPalette(distort, brightness, contrast, oscillation, phase);
gl_FragColor = vec4(color, 1.0);
}
¡Liiistoooooo! See how the color palette behaves similarly to the distortion, because we’re using it as input. Swap it for vUv.x or vUv.y to see different results from the palette, or even better, come up with your own input!
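A handy trick for finding palette combos is porting cosPalette to JavaScript so you can preview values outside the shader. This is an illustrative CPU-side port of the same formula:

```javascript
// JavaScript port of the cosPalette GLSL function:
// color = a + b * cos(2π * (c * t + d)), evaluated per channel.
function cosPalette (t, a, b, c, d) {
  return a.map((_, i) => a[i] + b[i] * Math.cos(2 * Math.PI * (c[i] * t + d[i])))
}

// Same values as the fragment shader above, sampled at t = 0
const color = cosPalette(
  0,
  [0.5, 0.5, 0.5], // brightness
  [0.5, 0.5, 0.5], // contrast
  [1.0, 1.0, 1.0], // oscillation
  [0.0, 0.1, 0.2]  // phase
)
```

Log a few samples across t from 0 to 1 and you can eyeball a whole palette without recompiling a shader.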
And that’s it! I hope this short tutorial gave you some ideas to apply to anything you’re creating or inspired you to make something. Next time you use noise, stop and think if you can do something extra to make it more interesting and make sure to save Cospalette in your shader toolbelt.
Explore and have fun with this! And don’t forget to share it with me on Twitter. If you got any questions or suggestions, let me know.
I hope you learned something new. Till next time!
References and Credits
Thanks to all the amazing people that put knowledge out in the world!
While many people shy away from writing vanilla WebGL and immediately jump to frameworks such as three.js or PixiJS, it is possible to achieve great visuals and complex animation with relatively small amounts of code. Today, I would like to present core WebGL concepts while programming some simple 2D visuals. This article assumes at least some higher-level knowledge of WebGL through a library.
Please note: WebGL2 has been around for years, yet Safari only recently enabled it behind a flag. It is a pretty significant upgrade from WebGL1 and brings tons of new useful features, some of which we will take advantage of in this tutorial.
What are we going to build
From a high level standpoint, to implement our 2D metaballs we need two steps:
Draw a bunch of rectangles with a radial gradient starting from their centers and expanding to their edges. Draw a lot of them and alpha blend them together in a separate framebuffer.
Take the resulting image with the blended quads from step #1, scan its pixels one by one and decide the new color of each pixel depending on its opacity. For example, if the pixel has an opacity smaller than 0.5, render it in red; otherwise render it in yellow, and so on.
Don’t worry if these terms don’t make a lot of sense just yet – we will go over each of the steps needed in detail. Let’s jump into the code and start building!
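Step #2 is essentially a threshold on the blended opacity. Here is the idea as an illustrative JavaScript sketch (the real decision happens in a fragment shader later on, and the red/yellow colors are just the example from the description above):

```javascript
// Decide a pixel's final color from its blended alpha, as in step #2:
// below the threshold the pixel becomes red, above it yellow.
function thresholdColor (alpha, threshold = 0.5) {
  return alpha < threshold
    ? [1, 0, 0] // red
    : [1, 1, 0] // yellow
}
```

Running this over every pixel of the blended framebuffer is what turns a blurry cloud of gradients into crisp metaball outlines.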
Bootstrapping our program
We will start things by
Creating an HTMLCanvasElement, sizing it to our device viewport and inserting it into the page DOM
Setting the correct WebGL viewport and the background color for our scene
Starting a requestAnimationFrame loop that will draw our scene as fast as the device allows. The speed is determined by various factors such as the hardware, current CPU / GPU workloads, battery levels, user preferences and so on. For smooth animation we are going to aim for 60FPS.
/* Create our canvas and obtain its WebGL2RenderingContext */
const canvas = document.createElement('canvas')
const gl = canvas.getContext('webgl2')
/* Handle error somehow if no WebGL2 support */
if (!gl) {
// ...
}
/* Size our canvas and listen for resize events */
resizeCanvas()
window.addEventListener('resize', resizeCanvas)
/* Append our canvas to the DOM and set its background-color with CSS */
canvas.style.backgroundColor = 'black'
document.body.appendChild(canvas)
/* Issue first frame paint */
requestAnimationFrame(updateFrame)
function updateFrame (timestampMs) {
/* Set our program viewport to fit the actual size of our monitor with devicePixelRatio into account */
gl.viewport(0, 0, canvas.width, canvas.height)
/* Set the WebGL background colour to be transparent */
gl.clearColor(0, 0, 0, 0)
/* Clear the current canvas pixels */
gl.clear(gl.COLOR_BUFFER_BIT)
/* Issue next frame paint */
requestAnimationFrame(updateFrame)
}
function resizeCanvas () {
/*
We need to account for devicePixelRatio when sizing our canvas.
We will use it to obtain the actual pixel size of our viewport and size our canvas to match it.
We will then downscale it back to CSS units so it neatly fills our viewport and we benefit from downsampling antialiasing
We also need to limit it because it can really slow down our program. Modern iPhones have devicePixelRatios of 3. This means rendering 9x more pixels each frame!
More info: https://webglfundamentals.org/webgl/lessons/webgl-resizing-the-canvas.html
*/
const dpr = devicePixelRatio > 2 ? 2 : devicePixelRatio
canvas.width = innerWidth * dpr
canvas.height = innerHeight * dpr
canvas.style.width = `${innerWidth}px`
canvas.style.height = `${innerHeight}px`
}
Drawing a quad
The next step is to actually draw a shape. WebGL has a rendering pipeline, which dictates how the object you draw, with its corresponding geometry and material, ends up on the device screen. WebGL is essentially just a rasterising engine, in the sense that you give it properly formatted data and it produces pixels for you.
The full rendering pipeline is out of the scope for this tutorial, but you can read more about it here. Let’s break down what exactly we need for our program:
Defining our geometry and its attributes
Each object we draw in WebGL is represented as a WebGLProgram running on the device GPU. It consists of input variables and a vertex and fragment shader that operate on those variables. The vertex shader’s responsibility is to position our geometry correctly on the device screen, while the fragment shader’s responsibility is to control its appearance.
It’s up to us as developers to write our vertex and fragment shaders, compile them on the device GPU and link them into a GLSL program. Once we have successfully done this, we must query the locations of this program’s input variables that were allocated on the GPU for us, supply correctly formatted data to them, enable them and instruct them how to unpack and use our data.
To render our quad, we need 3 input variables:
a_position will dictate the position of each vertex of our quad geometry. We will pass it as an array of 12 floats, i.e. 2 triangles with 3 points per triangle, each represented by 2 floats (x, y). This variable is an attribute, i.e. it is obviously different for each of the points that make up our geometry.
a_uv will describe the texture offset for each point of our geometry. These too will be described as an array of 12 floats. We will use this data not to texture our quad with an image, but to dynamically create a radial gradient from the quad center. This variable is also an attribute and will too be different for each of our geometry points.
u_projectionMatrix will be an input variable represented as a 32bit float array of 16 items that dictates how we transform our geometry positions described in pixel values to the normalised WebGL coordinate system. This variable is a uniform; unlike the previous two, it will not change for each geometry position.
We can take advantage of a Vertex Array Object to store the description of our GLSL program’s input variables, their locations on the GPU, and how they should be unpacked and used.
WebGLVertexArrayObjects, or VAOs, are first-class citizens in WebGL2, unlike in WebGL1 where they were hidden behind an optional extension and their support was not guaranteed. They let us type less, execute fewer WebGL bindings and keep our drawing state in a single, easy to manage object that is simpler to track. They essentially store the description of our geometry, and we can reference them later.
We need to write the shaders in GLSL 3.00 ES, which WebGL2 supports. Our vertex shader will be pretty simple:
/*
Pass in geometry position and tex coord from the CPU
*/
in vec4 a_position;
in vec2 a_uv;
/*
Pass in global projection matrix for each vertex
*/
uniform mat4 u_projectionMatrix;
/*
Specify varying variable to be passed to fragment shader
*/
out vec2 v_uv;
void main () {
/*
We need to convert our quad points positions from pixels to the normalized WebGL coordinate system
*/
gl_Position = u_projectionMatrix * a_position;
v_uv = a_uv;
}
At this point, after we have successfully executed our vertex shader, WebGL will fill in the pixels between the points that make up the geometry on the device screen. The way the space between the points is filled depends on which primitives we are using for drawing; WebGL supports points, lines and triangles.
We as developers do not have control over this step.
After it has rasterised our geometry, it will execute our fragment shader on each generated pixel. The fragment shader is responsible for the final appearance of each generated pixel and whether it should even be rendered. Here is our fragment shader:
/*
Set fragment shader float precision
*/
precision highp float;
/*
Consume interpolated tex coord varying from vertex shader
*/
in vec2 v_uv;
/*
Final color represented as a vector of 4 components - r, g, b, a
*/
out vec4 outColor;
void main () {
/*
This function will run on each pixel generated by our quad geometry
*/
/*
Calculate the distance for each pixel from the center of the quad (0.5, 0.5)
*/
float dist = distance(v_uv, vec2(0.5)) * 2.0;
/*
Invert and clamp our distance from 0.0 to 1.0
*/
float c = clamp(1.0 - dist, 0.0, 1.0);
/*
Use the distance to generate the pixel opacity. We have to explicitly enable alpha blending in WebGL to see the correct result
*/
outColor = vec4(vec3(1.0), c);
}
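Since this falloff is plain math, you can sanity-check it on the CPU. This is an illustrative JavaScript port of the shader's per-pixel computation:

```javascript
// Reproduce the fragment shader's radial falloff in JavaScript:
// distance from the quad center (0.5, 0.5) in UV space, scaled by 2
// so alpha is 1.0 at the center and fades to 0.0 at the edges.
function gradientAlpha (u, v) {
  const dist = Math.hypot(u - 0.5, v - 0.5) * 2
  // Invert and clamp to the 0..1 range, like clamp() in GLSL
  return Math.min(Math.max(1 - dist, 0), 1)
}
```

The center of the quad is fully opaque, the midpoints of the edges are fully transparent, and everything in between fades linearly, which is exactly the soft blob we need for blending.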
Let’s write two utility methods: makeGLShader() to create and compile our GLSL shaders, and makeGLProgram() to link them into a GLSL program to be run on the GPU:
/*
Utility method to create a WebGLShader object and compile it on the device GPU
https://developer.mozilla.org/en-US/docs/Web/API/WebGLShader
*/
function makeGLShader (shaderType, shaderSource) {
/* Create a WebGLShader object with correct type */
const shader = gl.createShader(shaderType)
/* Attach the shaderSource string to the newly created shader */
gl.shaderSource(shader, shaderSource)
/* Compile our newly created shader */
gl.compileShader(shader)
const success = gl.getShaderParameter(shader, gl.COMPILE_STATUS)
/* Return the WebGLShader if compilation was a success */
if (success) {
return shader
}
/* Otherwise log the error and delete the faulty shader */
console.error(gl.getShaderInfoLog(shader))
gl.deleteShader(shader)
}
/*
Utility method to create a WebGLProgram object
It will create both a vertex and fragment WebGLShader and link them into a program on the device GPU
https://developer.mozilla.org/en-US/docs/Web/API/WebGLProgram
*/
function makeGLProgram (vertexShaderSource, fragmentShaderSource) {
/* Create and compile vertex WebGLShader */
const vertexShader = makeGLShader(gl.VERTEX_SHADER, vertexShaderSource)
/* Create and compile fragment WebGLShader */
const fragmentShader = makeGLShader(gl.FRAGMENT_SHADER, fragmentShaderSource)
/* Create a WebGLProgram and attach our shaders to it */
const program = gl.createProgram()
gl.attachShader(program, vertexShader)
gl.attachShader(program, fragmentShader)
/* Link the newly created program on the device GPU */
gl.linkProgram(program)
/* Return the WebGLProgram if linking was successful */
const success = gl.getProgramParameter(program, gl.LINK_STATUS)
if (success) {
return program
}
/* Otherwise log errors to the console and delete the faulty WebGLProgram */
console.error(gl.getProgramInfoLog(program))
gl.deleteProgram(program)
}
And here is the complete code snippet we need to add to our previous code snippet to generate our geometry, compile our shaders and link them into a GLSL program:
const canvas = document.createElement('canvas')
/* rest of code */
/* Enable WebGL alpha blending */
gl.enable(gl.BLEND)
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
/*
Generate the Vertex Array Object and GLSL program
we need to render our 2D quad
*/
const {
quadProgram,
quadVertexArrayObject,
} = makeQuad(innerWidth / 2, innerHeight / 2)
/* --------------- Utils ----------------- */
function makeQuad (positionX, positionY, width = 50, height = 50, drawType = gl.STATIC_DRAW) {
/*
Write our vertex and fragment shader programs as simple JS strings
!!! Important !!!!
WebGL2 requires GLSL 3.00 ES
We need to declare this version on the FIRST LINE OF OUR PROGRAM
Otherwise it would not work!
*/
const vertexShaderSource = `#version 300 es
/*
Pass in geometry position and tex coord from the CPU
*/
in vec4 a_position;
in vec2 a_uv;
/*
Pass in global projection matrix for each vertex
*/
uniform mat4 u_projectionMatrix;
/*
Specify varying variable to be passed to fragment shader
*/
out vec2 v_uv;
void main () {
gl_Position = u_projectionMatrix * a_position;
v_uv = a_uv;
}
`
const fragmentShaderSource = `#version 300 es
/*
Set fragment shader float precision
*/
precision highp float;
/*
Consume interpolated tex coord varying from vertex shader
*/
in vec2 v_uv;
/*
Final color represented as a vector of 4 components - r, g, b, a
*/
out vec4 outColor;
void main () {
float dist = distance(v_uv, vec2(0.5)) * 2.0;
float c = clamp(1.0 - dist, 0.0, 1.0);
outColor = vec4(vec3(1.0), c);
}
`
/*
Construct a WebGLProgram object out of our shader sources and link it on the GPU
*/
const quadProgram = makeGLProgram(vertexShaderSource, fragmentShaderSource)
/*
Create a Vertex Array Object that will store a description of our geometry
that we can reference later when rendering
*/
const quadVertexArrayObject = gl.createVertexArray()
/*
1. Defining geometry positions
Create the geometry points for our quad
V6 _______ V5 V3
| / /|
| / / |
| / / |
V4 |/ V1 /______| V2
We need two triangles to form a single quad
As you can see, we end up duplicating vertices:
V5 & V3 and V4 & V1 end up occupying the same position.
There are better ways to prepare our data so we don't end up with
duplicates, but let's keep it simple for this demo and duplicate them
Unlike regular Javascript arrays, WebGL needs strongly typed data
That's why we supply our positions as an array of 32 bit floating point numbers
*/
const vertexArray = new Float32Array([
/*
First set of 3 points are for our first triangle
*/
positionX - width / 2, positionY + height / 2, // Vertex 1 (X, Y)
positionX + width / 2, positionY + height / 2, // Vertex 2 (X, Y)
positionX + width / 2, positionY - height / 2, // Vertex 3 (X, Y)
/*
Second set of 3 points are for our second triangle
*/
positionX - width / 2, positionY + height / 2, // Vertex 4 (X, Y)
positionX + width / 2, positionY - height / 2, // Vertex 5 (X, Y)
positionX - width / 2, positionY - height / 2 // Vertex 6 (X, Y)
])
/*
Create a WebGLBuffer that will hold our triangles positions
*/
const vertexBuffer = gl.createBuffer()
/*
Now that we've created a GLSL program on the GPU we need to supply data to it
We need to supply our 32bit float array to the a_position variable used by the GLSL program
When you link a vertex shader with a fragment shader by calling gl.linkProgram(someProgram)
WebGL (the driver/GPU/browser) decides on its own which index/location to use for each attribute
Therefore we need to find the location of a_position from our program
*/
const a_positionLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_position')
/*
Bind the Vertex Array Object descriptor for this geometry
Each geometry instruction from now on will be recorded under it
To stop recording after we are done describing our geometry, we need to simply unbind it
*/
gl.bindVertexArray(quadVertexArrayObject)
/*
Bind the active gl.ARRAY_BUFFER to our WebGLBuffer that describe the geometry positions
*/
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
/*
Feed our 32bit float array that describes our quad to the vertexBuffer using the
gl.ARRAY_BUFFER global handle
*/
gl.bufferData(gl.ARRAY_BUFFER, vertexArray, drawType)
/*
We need to explicitly enable the a_position variable on the GPU
*/
gl.enableVertexAttribArray(a_positionLocationOnGPU)
/*
Finally we need to instruct the GPU how to pull the data out of our
vertexBuffer and feed it into the a_position variable in the GLSL program
*/
/*
Tell the attribute how to get data out of vertexBuffer (ARRAY_BUFFER)
*/
const size = 2 // 2 components per iteration
const type = gl.FLOAT // the data is 32bit floats
const normalize = false // don't normalize the data
const stride = 0 // 0 = move forward size * sizeof(type) each iteration to get the next position
const offset = 0 // start at the beginning of the buffer
gl.vertexAttribPointer(a_positionLocationOnGPU, size, type, normalize, stride, offset)
/*
2. Defining geometry UV texCoords
V6 _______ V5 V3
| / /|
| / / |
| / / |
V4 |/ V1 /______| V2
*/
const uvsArray = new Float32Array([
0, 0, // V1
1, 0, // V2
1, 1, // V3
0, 0, // V4
1, 1, // V5
0, 1 // V6
])
/*
The rest of the code is exactly like in the vertices step above.
We need to put our data in a WebGLBuffer, look up the a_uv variable
in our GLSL program, enable it, supply data to it and instruct
WebGL how to pull it out:
*/
const uvsBuffer = gl.createBuffer()
const a_uvLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_uv')
gl.bindBuffer(gl.ARRAY_BUFFER, uvsBuffer)
gl.bufferData(gl.ARRAY_BUFFER, uvsArray, drawType)
gl.enableVertexAttribArray(a_uvLocationOnGPU)
gl.vertexAttribPointer(a_uvLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
/*
Stop recording and unbind the Vertex Array Object descriptor for this geometry
*/
gl.bindVertexArray(null)
/*
WebGL has a normalized viewport coordinate system which looks like this:
Device Viewport
------- 1.0 ------
| | |
| | |
-1.0 --------------- 1.0
| | |
| | |
------ -1.0 -------
However as you can see, we pass the position and size of our quad in actual pixels
To convert these pixels values to the normalized coordinate system, we will
use the simplest 2D projection matrix.
It will be represented as an array of 16 32bit floats
You can read a gentle introduction to 2D matrices here
https://webglfundamentals.org/webgl/lessons/webgl-2d-matrices.html
*/
const projectionMatrix = new Float32Array([
2 / innerWidth, 0, 0, 0,
0, -2 / innerHeight, 0, 0,
0, 0, 0, 0,
-1, 1, 0, 1,
])
/*
In order to supply uniform data to our quad GLSL program, we first need to enable the GLSL program responsible for rendering our quad
*/
gl.useProgram(quadProgram)
/*
Just like the a_position attribute variable earlier, we also need to look up
the location of uniform variables in the GLSL program in order to supply them data
*/
const u_projectionMatrixLocation = gl.getUniformLocation(quadProgram, 'u_projectionMatrix')
/*
Supply our projection matrix as a Float32Array of 16 items to the u_projectionMatrix uniform
*/
gl.uniformMatrix4fv(u_projectionMatrixLocation, false, projectionMatrix)
/*
We have set up our uniform variables correctly, stop using the quad program for now
*/
gl.useProgram(null)
/*
Return our GLSL program and the Vertex Array Object descriptor of our geometry
We will need them to render our quad in our updateFrame method
*/
return {
quadProgram,
quadVertexArrayObject,
}
}
/* rest of code */
function makeGLShader (shaderType, shaderSource) {}
function makeGLProgram (vertexShaderSource, fragmentShaderSource) {}
function updateFrame (timestampMs) {}
We have successfully created a GLSL program quadProgram, which is running on the GPU, waiting to be drawn on the screen. We also have obtained a Vertex Array Object quadVertexArrayObject, which describes our geometry and can be referenced before we draw. We can now draw our quad. Let’s augment our updateFrame() method like so:
function updateFrame (timestampMs) {
/* rest of our code */
/*
Bind the Vertex Array Object descriptor of our quad we generated earlier
*/
gl.bindVertexArray(quadVertexArrayObject)
/*
Use our quad GLSL program
*/
gl.useProgram(quadProgram)
/*
Issue a render command to paint our quad triangles
*/
{
const drawPrimitive = gl.TRIANGLES
const vertexArrayOffset = 0
const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
gl.drawArrays(drawPrimitive, vertexArrayOffset, numberOfVertices)
}
/*
After a successful render, it is good practice to unbind our
GLSL program and Vertex Array Object so we keep WebGL state clean.
We will bind them again anyway on the next render
*/
gl.useProgram(null)
gl.bindVertexArray(null)
/* Issue next frame paint */
requestAnimationFrame(updateFrame)
}
And here is our result:
We can use the great SpectorJS Chrome extension to capture our WebGL operations on each frame. We can look at the entire command list with their associated visual states and context information. Here is what it takes to render a single frame with our updateFrame() call:
Some gotchas:
We declare the vertex positions of our triangles in counter-clockwise order. This is important.
We need to explicitly enable blending in WebGL and specify its blend operation. For our demo we use gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA), which multiplies the incoming color by its alpha and the existing color by 1 minus the source alpha.
In our vertex shader you can see we expect the input variable a_position to be a vector with 4 components (vec4), while in Javascript we specify only 2 items per vertex. That’s because the default attribute value is 0, 0, 0, 1. It doesn’t matter that we’re only supplying x and y from our attributes: z defaults to 0 and w defaults to 1.
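Conceptually, the GPU pads the missing components with those defaults. A tiny JavaScript sketch of the idea (expandToVec4 is a made-up illustrative helper, not a WebGL API):

```javascript
// Sketch of how a vec4 attribute gets filled when the buffer only
// supplies 2 components per vertex: z defaults to 0 and w to 1.
function expandToVec4 ([x = 0, y = 0, z = 0, w = 1]) {
  return [x, y, z, w]
}
```

This is why we can get away with a 2-component position buffer while the shader happily multiplies a full vec4 by the projection matrix.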
As you can see, WebGL is a state machine: you have to constantly bind things before you are able to work on them, and you always have to make sure you unbind them afterwards. Consider how in the code snippet above we supplied a Float32Array with our positions to the vertexBuffer:
const vertexArray = new Float32Array([/* ... */])
const vertexBuffer = gl.createBuffer()
/* Bind our vertexBuffer to the global binding WebGL bind point gl.ARRAY_BUFFER */
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
/* At this point, gl.ARRAY_BUFFER represents vertexBuffer */
/* Supply data to our vertexBuffer using the gl.ARRAY_BUFFER binding point */
gl.bufferData(gl.ARRAY_BUFFER, vertexArray, gl.STATIC_DRAW)
/* Do a bunch of other stuff with the active gl.ARRAY_BUFFER (vertexBuffer) here */
// ...
/* After you have done your work, unbind it */
gl.bindBuffer(gl.ARRAY_BUFFER, null)
This is the total opposite of Javascript, where the same operation might be expressed like this, for example (pseudocode):
const vertexBuffer = gl.createBuffer()
vertexBuffer.addData(vertexArray)
vertexBuffer.setDrawOperation(gl.STATIC_DRAW)
// etc.
Coming from a Javascript background, I initially found WebGL’s state machine way of doing things, with its constant binding and unbinding, really odd. One must exercise good discipline and always make sure to unbind things after using them, even in trivial programs like ours! Otherwise you risk things not working and hard-to-track bugs.
Drawing lots of quads
We have successfully rendered a single quad, but in order to make things more interesting and visually appealing, we need to draw more.
As we have already seen, we can easily create new geometries with different positions using our makeQuad() utility helper. We can pass them different positions and sizes and compile each one of them into a separate GLSL program to be executed on the GPU. This will work, however:
As we saw in our update loop method updateFrame, to render our quad on each frame we must:
Use the correct GLSL program by calling gl.useProgram()
Bind the correct VAO describing our geometry by calling gl.bindVertexArray()
Issue a draw call with correct primitive type by calling gl.drawArrays()
So 3 WebGL commands in total.
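Condensed into a helper, those three per-frame calls look like this (a sketch; program and vao stand for the GLSL program and Vertex Array Object we prepared earlier, and we pass gl in explicitly):

```javascript
/*
Sketch: the three WebGL calls needed to draw one quad each frame.
"program" and "vao" are the GLSL program and Vertex Array Object
prepared during setup.
*/
function drawQuad (gl, program, vao) {
  gl.useProgram(program)            // 1. select the GLSL program
  gl.bindVertexArray(vao)           // 2. bind the geometry descriptor
  gl.drawArrays(gl.TRIANGLES, 0, 6) // 3. issue the draw call (6 vertices = 1 quad)
}
```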
What if we want to render 500 quads? Suddenly we jump to 500×3, or 1500 individual WebGL calls on each frame of our animation. With 1000 quads we jump up to 3000 individual calls, without even counting all of the preparation bindings we have to do before our updateFrame loop starts.
Geometry Instancing is a way to reduce these calls. It works by letting you tell WebGL how many times you want the same thing drawn (the number of instances), with minor variations such as rotation, scale, position, etc. Examples include trees, grass, crowds of people, boxes in a warehouse, and so on.
Just like VAOs, instancing is a first-class citizen in WebGL2 and, unlike in WebGL1, does not require extensions. Let’s augment our code to support geometry instancing and render 1000 quads at random positions.
First of all, we need to decide how many quads we want rendered and prepare the offset positions for each one as a new array of 32-bit floats. Let’s do 1000 quads and position them randomly in our viewport:
/* rest of code */
/* How many quads we want rendered */
const QUADS_COUNT = 1000
/*
Array to store our quads positions
We need to lay out our array as a continuous set
of numbers, where each pair represents the X and Y
of a single 2D position.
Hence for 1000 quads we need an array of 2000 items,
or 1000 pairs of X and Y
*/
const quadsPositions = new Float32Array(QUADS_COUNT * 2)
for (let i = 0; i < QUADS_COUNT; i++) {
/*
Generate a random X and Y position
*/
const randX = Math.random() * innerWidth
const randY = Math.random() * innerHeight
/*
Set the correct X and Y for each pair in our array
*/
quadsPositions[i * 2 + 0] = randX
quadsPositions[i * 2 + 1] = randY
}
/*
We also need to augment our makeQuad() method
It no longer expects a single position, rather an array of positions
*/
const {
quadProgram,
quadVertexArrayObject,
} = makeQuad(quadsPositions)
/* rest of code */
Instead of a single position, we will now pass an array of positions into our makeQuad() method. Let’s augment this method to expose our offsets array as a new input variable a_offset to our shaders, which will contain the correct XY offset for a particular instance. To do this, we need to prepare our offsets as a new WebGLBuffer and instruct WebGL how to unpack them, just like we did for a_position and a_uv.
function makeQuad (quadsPositions, width = 70, height = 70, drawType = gl.STATIC_DRAW) {
/* rest of code */
/*
Add offset positions for our individual instances
They are declared and used in exactly the same way as
"a_position" and "a_uv" above
*/
const offsetsBuffer = gl.createBuffer()
const a_offsetLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_offset')
gl.bindBuffer(gl.ARRAY_BUFFER, offsetsBuffer)
gl.bufferData(gl.ARRAY_BUFFER, quadsPositions, drawType)
gl.enableVertexAttribArray(a_offsetLocationOnGPU)
gl.vertexAttribPointer(a_offsetLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
/*
HOWEVER, we must add an additional WebGL call to set this attribute to only
change per instance, instead of per vertex like a_position and a_uv above
*/
const instancesDivisor = 1
gl.vertexAttribDivisor(a_offsetLocationOnGPU, instancesDivisor)
/*
Stop recording and unbind the Vertex Array Object descriptor for this geometry
*/
gl.bindVertexArray(null)
/* rest of code */
}
We need to augment our original vertexArray responsible for passing data into our a_position GLSL variable. We no longer need to offset it to the desired position like in the first example, now the a_offset variable will take care of this in the vertex shader:
const vertexArray = new Float32Array([
/*
First set of 3 points are for our first triangle
*/
-width / 2, height / 2, // Vertex 1 (X, Y)
width / 2, height / 2, // Vertex 2 (X, Y)
width / 2, -height / 2, // Vertex 3 (X, Y)
/*
Second set of 3 points are for our second triangle
*/
-width / 2, height / 2, // Vertex 4 (X, Y)
width / 2, -height / 2, // Vertex 5 (X, Y)
-width / 2, -height / 2 // Vertex 6 (X, Y)
])
We also need to augment our vertex shader to consume and use the new a_offset input variable we pass from Javascript:
const vertexShaderSource = `#version 300 es
/* rest of GLSL code */
/*
This input vector will change once per instance
*/
in vec4 a_offset;
void main () {
/* Account for a_offset in the final geometry position */
vec4 newPosition = a_position + a_offset;
gl_Position = u_projectionMatrix * newPosition;
}
/* rest of GLSL code */
`
And as a final step, we need to change the drawArrays call in our updateFrame to drawArraysInstanced to account for instancing. This new method expects the exact same arguments and adds instanceCount as the last one:
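Here is a sketch of the updated draw code (assuming the QUADS_COUNT and setup names from above):

```javascript
/*
Sketch: the updated draw call in updateFrame().
drawArraysInstanced takes the same arguments as drawArrays,
plus the number of instances as the last one.
*/
const QUADS_COUNT = 1000
function drawQuadInstances (gl, program, vao) {
  gl.useProgram(program)
  gl.bindVertexArray(vao)
  const drawPrimitive = gl.TRIANGLES
  const vertexArrayOffset = 0
  const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
  gl.drawArraysInstanced(drawPrimitive, vertexArrayOffset, numberOfVertices, QUADS_COUNT)
}
```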
And with all these changes, here is our updated example:
Even though we increased the number of rendered objects by 1000×, we are still making only 3 WebGL calls on each frame. That’s a pretty great performance win!
Post Processing with a fullscreen quad
Now that we have our 1000 quads successfully rendering to the device screen on each frame, we can turn them into metaballs. As we established, we need to scan the pixels of the picture we generated in the previous steps and determine the alpha value of each pixel. If it is below a certain threshold, we discard it, otherwise we color it.
To do this, instead of rendering our scene directly to the screen as we do right now, we need to render it to a texture. We will do our post processing on this texture and render the result to the device screen.
Post-Processing is a technique used in graphics that allows you to take a current input texture, and manipulate its pixels to produce a transformed image. This can be used to apply shiny effects like volumetric lighting, or any other filter type effect you’ve seen in applications like Photoshop or Instagram.
The basic technique for creating these effects is pretty straightforward:
A WebGLTexture is created with the same size as the canvas and attached as a color attachment to a WebGLFramebuffer. At the beginning of our updateFrame() method, the framebuffer is set as the render target, and the entire scene is rendered normally to it.
Next, a full-screen quad is rendered to the device screen using the texture generated in step 1 as an input. The shader used during the rendering of the quad is what contains the post-process effect.
/* rest of code */
const renderTexture = makeTexture()
const framebuffer = makeFramebuffer(renderTexture)
function makeTexture (textureWidth = canvas.width, textureHeight = canvas.height) {
/*
Create the texture that we will use to render to
*/
const targetTexture = gl.createTexture()
/*
Just like everything else in WebGL up until now, we need to bind it
so we can configure it. We will unbind it once we are done with it.
*/
gl.bindTexture(gl.TEXTURE_2D, targetTexture)
/*
Define texture settings
*/
const level = 0
const internalFormat = gl.RGBA
const border = 0
const format = gl.RGBA
const type = gl.UNSIGNED_BYTE
/*
Notice how data is null. That's because we don't have data for this texture just yet
We just need WebGL to allocate the texture
*/
const data = null
gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, textureWidth, textureHeight, border, format, type, data)
/*
Set the filtering so we don't need mips
*/
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE)
/* Unbind the texture now that we are done configuring it */
gl.bindTexture(gl.TEXTURE_2D, null)
return targetTexture
}
function makeFramebuffer (texture) {
/*
Create and bind the framebuffer
*/
const fb = gl.createFramebuffer()
gl.bindFramebuffer(gl.FRAMEBUFFER, fb)
/*
Attach the texture as the first color attachment
*/
const attachmentPoint = gl.COLOR_ATTACHMENT0
const level = 0
gl.framebufferTexture2D(gl.FRAMEBUFFER, attachmentPoint, gl.TEXTURE_2D, texture, level)
/*
Unbind the framebuffer and return it, so we can bind it again in updateFrame()
*/
gl.bindFramebuffer(gl.FRAMEBUFFER, null)
return fb
}
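As a sanity check (not in the original demo, but good practice), we can verify the framebuffer is complete while it is still bound, before rendering to it:

```javascript
/*
Sketch: verify the currently bound framebuffer is usable.
gl.checkFramebufferStatus returns gl.FRAMEBUFFER_COMPLETE when the
attached texture is a valid render target; anything else means we
cannot render to this framebuffer on the current device.
*/
function isFramebufferComplete (gl) {
  return gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE
}
```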
We have successfully created a texture and attached it as a color attachment to a framebuffer. Now we can render our scene to it. Let’s augment our updateFrame() method:
function updateFrame () {
gl.viewport(0, 0, canvas.width, canvas.height)
gl.clearColor(0, 0, 0, 0)
gl.clear(gl.COLOR_BUFFER_BIT)
/*
Bind the framebuffer we created
From now on until we unbind it, each WebGL draw command will render in it
*/
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
/* Set the offscreen framebuffer background color to a dark gray */
gl.clearColor(0.2, 0.2, 0.2, 1.0)
/* Clear the offscreen framebuffer pixels */
gl.clear(gl.COLOR_BUFFER_BIT)
/*
Code for rendering our instanced quads here
*/
/*
We have successfully rendered to the framebuffer at this point
In order to render to the screen next, we need to unbind it
*/
gl.bindFramebuffer(gl.FRAMEBUFFER, null)
/* Issue next frame paint */
requestAnimationFrame(updateFrame)
}
Let’s take a look at our result:
As you can see, we get an empty screen. There are no errors and the program is running just fine – keep in mind however that we are rendering to a separate framebuffer, not the default device screen framebuffer!
In order to display our offscreen framebuffer back on the screen, we need to render a fullscreen quad and use the framebuffer’s texture as an input.
Creating a fullscreen quad and displaying our texture on it
Let’s create a new quad. We can reuse our makeQuad() method from the snippets above, but we need to augment it to optionally support instancing and to accept the vertex and fragment shader sources as arguments. This time we need only one quad, and the shaders it requires are different.
Take a look at the updated makeQuad() signature:
/* rename our instanced quads program & VAO */
const {
quadProgram: instancedQuadsProgram,
quadVertexArrayObject: instancedQuadsVAO,
} = makeQuad({
instancedOffsets: quadsPositions,
/*
We need different set of vertex and fragment shaders
for the different quads we need to render, so pass them from outside
*/
vertexShaderSource: instancedQuadVertexShader,
fragmentShaderSource: instancedQuadFragmentShader,
/*
support optional instancing
*/
isInstanced: true,
})
Let’s use the same method to create a new fullscreen quad and render it. First our vertex and fragment shader:
const fullscreenQuadVertexShader = `#version 300 es
in vec4 a_position;
in vec2 a_uv;
uniform mat4 u_projectionMatrix;
out vec2 v_uv;
void main () {
gl_Position = u_projectionMatrix * a_position;
v_uv = a_uv;
}
`
const fullscreenQuadFragmentShader = `#version 300 es
precision highp float;
/*
The texture we render to, passed in as a uniform
*/
uniform sampler2D u_texture;
in vec2 v_uv;
out vec4 outputColor;
void main () {
/*
Use our interpolated UVs we assigned in Javascript to lookup
texture color value at each pixel
*/
vec4 inputColor = texture(u_texture, v_uv);
/*
0.5 is our alpha threshold we use to decide if
pixel should be discarded or painted
*/
float cutoffThreshold = 0.5;
/*
"cutoff" will be 0 if pixel is below 0.5 or 1 if above
step() docs - https://thebookofshaders.com/glossary/?search=step
*/
float cutoff = step(cutoffThreshold, inputColor.a);
/*
Let's use mix() GLSL method instead of if statement
if cutoff is 0, we will discard the pixel by using empty color with no alpha
otherwise, let's use black with alpha of 1
mix() docs - https://thebookofshaders.com/glossary/?search=mix
*/
vec4 emptyColor = vec4(0.0);
/* Render base metaballs shapes */
vec4 borderColor = vec4(1.0, 0.0, 0.0, 1.0);
outputColor = mix(
emptyColor,
borderColor,
cutoff
);
/*
Increase the threshold and calculate a new cutoff, so we can render smaller shapes again, this time in a different color and with a smaller radius
*/
cutoffThreshold += 0.05;
cutoff = step(cutoffThreshold, inputColor.a);
vec4 fillColor = vec4(1.0, 1.0, 0.0, 1.0);
/*
Add new smaller metaballs color on top of the old one
*/
outputColor = mix(
outputColor,
fillColor,
cutoff
);
}
`
Let’s use them to create and link a valid GLSL program, just like when we rendered our instances:
const {
quadProgram: fullscreenQuadProgram,
quadVertexArrayObject: fullscreenQuadVAO,
} = makeQuad({
vertexShaderSource: fullscreenQuadVertexShader,
fragmentShaderSource: fullscreenQuadFragmentShader,
isInstanced: false,
width: innerWidth,
height: innerHeight
})
/*
Unlike our instances GLSL program, here we need to pass an extra uniform - a "u_texture"!
Tell the shader to use texture unit 0 for u_texture
*/
gl.useProgram(fullscreenQuadProgram)
const u_textureLocation = gl.getUniformLocation(fullscreenQuadProgram, 'u_texture')
gl.uniform1i(u_textureLocation, 0)
gl.useProgram(null)
Finally, we can render the fullscreen quad with the result texture as a uniform u_texture. Let’s change our updateFrame() method:
function updateFrame () {
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
/* render instanced quads here */
gl.bindFramebuffer(gl.FRAMEBUFFER, null)
/*
Render our fullscreen quad
*/
gl.bindVertexArray(fullscreenQuadVAO)
gl.useProgram(fullscreenQuadProgram)
/*
Bind the texture we render to as active TEXTURE_2D
*/
gl.bindTexture(gl.TEXTURE_2D, renderTexture)
{
const drawPrimitive = gl.TRIANGLES
const vertexArrayOffset = 0
const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
gl.drawArrays(drawPrimitive, vertexArrayOffset, numberOfVertices)
}
/*
Just like everything else, unbind our texture once we are done rendering
*/
gl.bindTexture(gl.TEXTURE_2D, null)
gl.useProgram(null)
gl.bindVertexArray(null)
requestAnimationFrame(updateFrame)
}
And here is our final result (I also added a simple animation to make the effect more apparent):
And here is the breakdown of one updateFrame() call:
Aliasing issues
On my 2016 MacBook Pro with a retina display I can clearly see aliasing issues in our current example. If we use bigger radii and blow the animation up to fullscreen, the problem only becomes more noticeable.
The issue comes from the fact that we are rendering to an 8-bit gl.UNSIGNED_BYTE texture. If we want to increase the detail, we need to switch to floating point textures (32-bit float gl.RGBA32F or 16-bit float gl.RGBA16F). The catch is that rendering to these textures is not supported on all hardware and is not part of the WebGL2 core. It is available through optional extensions, whose presence we need to check for.
The extensions we are interested in for rendering to 32-bit floating point textures are:
EXT_color_buffer_float
OES_texture_float_linear
If these extensions are present on the user’s device, we can use internalFormat = gl.RGBA32F and textureType = gl.FLOAT when creating our render textures. If they are not present, we can fall back to rendering to 16-bit floating point textures. The extensions we need in that case are:
EXT_color_buffer_half_float
OES_texture_half_float_linear
If these extensions are present, we can use internalFormat = gl.RGBA16F and textureType = gl.HALF_FLOAT for our render texture. If not, we fall back to what we have used up until now: internalFormat = gl.RGBA and textureType = gl.UNSIGNED_BYTE.
Here is our updated makeTexture() method:
function makeTexture (textureWidth = canvas.width, textureHeight = canvas.height) {
/*
Initialize internal format & texture type to default values
*/
let internalFormat = gl.RGBA
let type = gl.UNSIGNED_BYTE
/*
Check if optional extensions are present on device
*/
const rgba32fSupported = gl.getExtension('EXT_color_buffer_float') && gl.getExtension('OES_texture_float_linear')
if (rgba32fSupported) {
internalFormat = gl.RGBA32F
type = gl.FLOAT
} else {
/*
Check if optional fallback extensions are present on device
*/
const rgba16fSupported = gl.getExtension('EXT_color_buffer_half_float') && gl.getExtension('OES_texture_half_float_linear')
if (rgba16fSupported) {
internalFormat = gl.RGBA16F
type = gl.HALF_FLOAT
}
}
/* rest of code */
/*
Pass in correct internalFormat and textureType to texImage2D call
*/
gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, textureWidth, textureHeight, border, format, type, data)
/* rest of code */
}
And here is our updated result:
Conclusion
I hope I managed to showcase the core principles behind WebGL2 with this demo. As you can see, the API itself is low-level and requires quite a bit of typing, yet at the same time it is really powerful and lets you draw complex scenes with fine-grained control over the rendering.
Writing production ready WebGL requires even more typing, checking for optional features / extensions and handling missing extensions and fallbacks, so I would advise you to use a framework. At the same time, I believe it is important to understand the key concepts behind the API so you can successfully use higher level libraries like threejs and dig into their internals if needed.
I am a big fan of twgl, which hides away much of the verbosity of the API while still being really low-level with a small footprint. This demo’s code could easily be reduced by more than half by using it.
I encourage you to experiment around with the code after reading this article, plug in different values, change the order of things, add more draw commands and what not. I hope you walk away with a high level understanding of core WebGL2 API and how it all ties together, so you can learn more on your own.
Hello everyone, let me introduce myself a little bit first: I’m Luis Henrique Bizarro, a Senior Creative Developer at Active Theory based in São Paulo, Brazil. It’s always a pleasure to have the opportunity to collaborate with Codrops and help other developers learn new things, so I hope everyone enjoys this tutorial!
In this tutorial I’ll explain how to create an auto-scrolling infinite image gallery. The image grid is also scrollable by user interaction, making it an interesting design element for showcasing work. It’s based on this great animation seen on Oneshot.finance, made by Jesper Landberg.
I’ve been using the technique of styling images first with HTML + CSS and then creating an abstraction of these elements inside WebGL using some camera and viewport calculations in multiple websites, so this is the approach we’re going to use in this tutorial.
The good thing about this implementation is that it can be reused across any WebGL library, so if you’re more familiar with Three.js or Babylon.js than OGL, you’ll also be able to achieve the same results using a similar code, when it’s about shading and scaling the plane meshes.
So let’s get into it!
Implementing our HTML markup
The first step is implementing our HTML markup. We’re going to use <figure> and <img> elements, nothing special here, just the standard:
<div class="demo-1__gallery">
<figure class="demo-1__gallery__figure">
<img class="demo-1__gallery__image" src="images/demo-1/1.jpg">
</figure>
<!-- Repeating the same markup until 12.jpg. -->
</div>
Setting our CSS styles
The second step is styling our elements using CSS. One of the first things I do in a website is defining the font-size of the html element because I use rem to help with the responsive breakpoints.
This comes in handy if you’re doing creative websites that only require two or three different breakpoints, so I highly recommend starting to use rem if you haven’t adopted it yet.
One thing I’m also using is calc() with the size of the designs. In our tutorial we’re going to use 1920 as our main width, scaling our font-size relative to the screen width via 100vw. At a 1920px wide screen, for example, this results in 10px:
html {
font-size: calc(100vw / 1920 * 10);
}
Now let’s style our grid of images. We want to freely place our images across the screen using absolute positioning, so we’re just going to set the height, width and left/top styles across all our demo-1 classes:
Note that we’re hiding the visibility of our HTML, because it won’t be visible to the users: we’re going to load these images inside the <canvas> element instead. Below you can find a screenshot of what the result will look like.
Creating our OGL 3D environment
Now it’s time to get started with the WebGL implementation using OGL. First let’s create an App class that is going to be the entry point of our demo and inside of it, let’s also create the initial methods: createRenderer, createCamera, createScene, onResize and our requestAnimationFrame loop with update.
In our createRenderer method, we’re initializing one renderer with alpha enabled, storing our GL context (this.renderer.gl) reference in the this.gl variable and appending our <canvas> element to our document.body.
In our createCamera method, we’re just creating a new Camera and setting some of its attributes: fov and its z position.
In our createScene method, we’re using the Transform class, that is the representation of a new scene that is going to contain all our planes that represent our images in the WebGL environment.
The onResize method is the most important part of our initial setup. It’s responsible for three different things:
Making sure we’re always resizing the <canvas> element with the correct viewport sizes.
Updating our this.camera perspective dividing the width and height of the viewport.
Storing in the variable this.viewport, the value representations that will help to transform pixels into 3D environment sizes by using the fov from the camera.
The approach of using the camera.fov to transform pixels into 3D environment sizes is used very often in WebGL implementations. Basically, it makes sure that if we do something like this.mesh.scale.x = this.viewport.width; our mesh will fit the entire screen width, behaving like width: 100%, but in 3D space.
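The fov-based conversion can be sketched like this (the function name and parameters are illustrative; the formula is the standard frustum height at a given camera distance):

```javascript
/*
Sketch: convert the camera's vertical field of view (in degrees)
and z distance into the width/height of the visible area in 3D
world units. Scaling a mesh to viewport.width then behaves like
width: 100% in CSS, but in 3D space.
*/
function calculateViewport (fovDegrees, cameraZ, screenWidth, screenHeight) {
  const fovRadians = (fovDegrees * Math.PI) / 180
  const height = 2 * Math.tan(fovRadians / 2) * cameraZ
  const width = height * (screenWidth / screenHeight)
  return { width, height }
}
```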
And finally in our update, we’re setting our requestAnimationFrame loop and making sure we keep rendering our scene.
Create our reusable geometry instance
It’s a good practice to keep memory usage low by always reusing the same geometry reference no matter what WebGL library you’re using. To represent all our images, we’re going to use a Plane geometry, so let’s create a new method and store this new geometry inside the this.planeGeometry variable.
import { Renderer, Camera, Transform, Plane } from 'ogl'
createGeometry () {
this.planeGeometry = new Plane(this.gl)
}
Select all images and create a new class for each one
Now it’s time to use document.querySelector to select all our images and create one reusable class that is going to represent our images. (We’re going to create a single Media.js file later.)
createMedias () {
this.mediasElements = document.querySelectorAll('.demo-1__gallery__figure')
this.medias = Array.from(this.mediasElements).map(element => {
let media = new Media({
element,
geometry: this.planeGeometry,
gl: this.gl,
scene: this.scene,
screen: this.screen,
viewport: this.viewport
})
return media
})
}
As you can see, we’re just selecting all .demo-1__gallery__figure elements, going through them and generating an array this.medias of new Media instances.
Now it’s important to start attaching this array in important pieces of our setup code.
Let’s first include all our media inside the method onResize and also call media.onResize for each one of these new instances:
And inside our update method, we’re going to call media.update() as well:
if (this.medias) {
this.medias.forEach(media => media.update())
}
Setting up our Media.js file and class
Our Media class is going to use Mesh, Program and Texture classes from OGL to create a 3D plane and attribute a texture to it, which in our case is going to be our images.
In our constructor, we’re going to store all variables that we need and that were passed in the new Media() initialization from index.js:
import { Mesh, Program, Texture } from 'ogl'
import fragment from 'shaders/fragment.glsl'
import vertex from 'shaders/vertex.glsl'
export default class {
constructor ({ element, geometry, gl, scene, screen, viewport }) {
this.element = element
this.image = this.element.querySelector('img')
this.geometry = geometry
this.gl = gl
this.scene = scene
this.screen = screen
this.viewport = viewport
this.createMesh()
this.createBounds()
this.onResize()
}
}
In our createMesh method, we’ll load the image texture using the this.image.src attribute, then create a new Program, which is basically a representation of the material we’re applying to our Mesh. So our method looks like this:
Looks pretty simple, right? After we generate a new Mesh, we’re setting the plane as children of this.scene, so we’re including our mesh inside our main scene.
As you’ve probably noticed, our Program receives fragment and vertex. These both represent the shaders we’re going to use on our planes. For now, we’re just using simple implementations of both.
In our vertex.glsl file we’re getting the uv and position attributes, and making sure we’re rendering our planes in the right 3D world position.
In our fragment.glsl file, we’re receiving a tMap texture, as you can see in the tMap: { value: texture } declaration, and rendering it in our plane geometry:
The createBounds method is important to make sure we’re positioning and scaling our planes in the correct DOM elements positions, so it’s basically going to call for this.element.getBoundingClientRect() to get the right position of our planes, and then after that using these values to calculate the 3D values of our plane.
As you’ve probably noticed, the calculations for scale.x and scale.y are going to stretch our plane to match the width and height of the <img> elements. And position.x and position.y take the offset from the element and translate our planes to the correct x and y positions in 3D.
And let’s not forget our onResize method, which is basically going to call createBounds again to refresh our getBoundingClientRect values and make sure we keep our 3D implementation responsive as well.
onResize (sizes) {
if (sizes) {
const { screen, viewport } = sizes
if (screen) this.screen = screen
if (viewport) this.viewport = viewport
}
this.createBounds()
}
This is the result we’ve got so far.
Implement cover behavior in fragment shaders
As you’ve probably noticed, our images are stretched. It happens because we need to make proper calculations in the fragment shaders in order to have a behavior like object-fit: cover; or background-size: cover; in WebGL.
I like to use an approach to pass the images’ real sizes and do some ratio calculations inside the fragment shader, so let’s adapt our code to this approach. So in our Program, we’re going to pass two new uniforms called uPlaneSizes and uImageSizes:
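In the demo this correction happens in the fragment shader using uPlaneSizes and uImageSizes; the underlying ratio math, sketched in plain JavaScript, looks like this:

```javascript
/*
Sketch: "object-fit: cover" scale factors for texture lookups.
We compare the plane's aspect ratio to the image's; the axis that
would otherwise show empty space gets a factor below 1, so the
texture overflows and is cropped, like background-size: cover.
*/
function coverScale (planeWidth, planeHeight, imageWidth, imageHeight) {
  const planeRatio = planeWidth / planeHeight
  const imageRatio = imageWidth / imageHeight
  if (planeRatio > imageRatio) {
    /* plane is wider than the image: fill width, crop top/bottom */
    return { x: 1, y: imageRatio / planeRatio }
  }
  /* plane is taller than the image: fill height, crop left/right */
  return { x: planeRatio / imageRatio, y: 1 }
}
```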
Before we implement our infinite logic, it’s good to start making scrolling work properly. In our setup code, we have included a onWheel method, which is going to be used to lerp some variables and make our scroll butter smooth.
In our constructor from index.js, let’s create the this.scroll object with these variables:
Now let’s update our onWheel implementation. When working with wheel events, it’s always important to normalize them, because they behave differently across browsers. I’ve been using the normalize-wheel library to help with this:
Let’s also create our lerp utility function inside the file utils/math.js:
export function lerp (p1, p2, t) {
return p1 + (p2 - p1) * t
}
And now we just need to lerp from the this.scroll.current to the this.scroll.target inside the update method. And finally pass it to the media.update() methods:
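A sketch of that update step (the 0.1 ease factor is an assumption; tweak it for snappier or smoother scrolling):

```javascript
/*
Sketch: easing the scroll each frame. lerp moves "current" a
fraction of the way toward "target", producing smooth deceleration.
*/
function lerp (p1, p2, t) {
  return p1 + (p2 - p1) * t
}

function updateScroll (scroll, ease = 0.1) {
  scroll.current = lerp(scroll.current, scroll.target, ease)
  return scroll
}
```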
The approach for infinite scrolling logic is basically to repeat the same grid over and over while the user keeps scrolling the page. Since the user can scroll up or down, you also need to take into consideration which direction is being scrolled, so overall the algorithm works this way:
If you’re scrolling down, your elements move up; when your first element isn’t on the screen anymore, you should move it to the end of the list.
If you’re scrolling up, your elements move down; when your last element isn’t on the screen anymore, you should move it to the start of the list.
To explain it in a visual way, let’s say we’re scrolling down and the red area is our viewport and the blue elements are not in the viewport anymore.
When we are in this state, we just need to move the blue elements to the end of our gallery grid, which is the entire height of our gallery: 295rem.
Let’s include the logic for it then. First, we need to create a new variable called this.scroll.last to store the last value of our scroll, this is going to be checked to give us up or down strings:
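The direction check itself can be sketched like this (the 'up' and 'down' strings match the ones described for our update loop):

```javascript
/*
Sketch: derive the scroll direction by comparing the current eased
value with the value it had on the previous frame, then remember
the current value for the next comparison.
*/
function getScrollDirection (scroll) {
  const direction = scroll.current > scroll.last ? 'down' : 'up'
  scroll.last = scroll.current
  return direction
}
```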
Then we need to get the total gallery height and transform it to 3D dimensions, so let’s include a querySelector of .demo-1__gallery and call the createGallery method in our index.js constructor.
onResize (sizes) {
if (sizes) {
const { height, screen, viewport } = sizes
if (height) this.height = height
if (screen) this.screen = screen
if (viewport) this.viewport = viewport
}
}
Now we’re going to include the logic to move our elements based on their viewport position, just like our visual representation of the red and blue rectangles.
If the idea is to keep summing up a value based on the scroll and element position, we can achieve this by creating a new variable called this.extra = 0. It will store how much we need to add to (or subtract from) our media position, so let’s include it in our constructor:
Finally, the only thing left now is updating the this.extra variable inside our update method, making sure we’re adding or subtracting the this.height depending on the direction.
Since we’re working in 3D space, we’re dealing with cartesian coordinates, which is why you’ll notice we’re dividing most things by two (e.g. this.viewport.height / 2). That’s also the reason we needed different logic for the this.isBefore and this.isAfter checks.
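To make the idea concrete, here is a hedged sketch of the wrap-around check for a single plane; the names and sign conventions are illustrative, not the exact demo code:

```javascript
/*
Sketch: infinite-scroll wrap for a single plane.
"y" is the plane's center and "scaleY" its height, both in 3D
units; "viewportHeight" is the visible height and "galleryHeight"
the total height of one grid repetition. When a plane has fully
left the viewport, we shift it by one gallery height via "extra".
Sign conventions (+y up) are an assumption for this sketch.
*/
function wrapMedia (media, direction, viewportHeight, galleryHeight) {
  const planeOffset = media.scaleY / 2
  const viewportOffset = viewportHeight / 2
  const isBefore = media.y + planeOffset < -viewportOffset // below the viewport
  const isAfter = media.y - planeOffset > viewportOffset   // above the viewport
  if (direction === 'up' && isBefore) {
    media.extra -= galleryHeight
    media.y += galleryHeight
  }
  if (direction === 'down' && isAfter) {
    media.extra += galleryHeight
    media.y -= galleryHeight
  }
  return media
}
```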
Awesome, we’re almost finished with our demo! That’s how it looks now; pretty cool to have it endless, right?
Including touch events
Let’s also include touch events, so this demo can be more responsive to user interactions! In our addEventListeners method, let’s include some window.addEventListener calls:
Done! Now we also have touch events support enabled for our gallery.
Implementing direction-aware auto scrolling
Let’s also implement auto scrolling to make our interaction even better. In order to achieve that we just need to create a new variable that will store our speed based on the direction the user is scrolling.
So let’s create a variable called this.speed in our index.js file:
constructor () {
this.speed = 2
}
This variable is going to be changed by the down and up checks we have in our update loop: if the user is scrolling down, we keep the speed at 2; if the user is scrolling up, we replace it with -2. On every frame we then add this.speed to the this.scroll.target variable:
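Sketched out (the ±2 values follow the tutorial; everything else is illustrative):

```javascript
/*
Sketch: direction-aware auto scrolling. Each frame we nudge the
scroll target by a constant speed whose sign follows the last
direction the user scrolled in.
*/
function autoScroll (scroll, direction, speed = 2) {
  scroll.target += direction === 'up' ? -speed : speed
  return scroll
}
```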
Now let’s make everything even more interesting, it’s time to play a little bit with shaders and distort our planes while the user is scrolling through our page.
First, let’s update our update method from index.js, making sure we expose both current and last scroll values to all our medias, we’re going to do a simple calculation with them.
As you can probably notice, we’re going to need to set uViewportSizes in our onResize method as well, since this.viewport changes when we resize, so to keep this.viewport.width and this.viewport.height up to date, we also need to include the following lines of code in onResize:
onResize (sizes) {
if (sizes) {
const { height, screen, viewport } = sizes
if (height) this.height = height
if (screen) this.screen = screen
if (viewport) {
this.viewport = viewport
this.plane.program.uniforms.uViewportSizes.value = [this.viewport.width, this.viewport.height]
}
}
}
Remember the this.scroll update we’ve made from index.js? Now it’s time to include a small trick to generate a speed value inside our Media.js:
We’re basically checking the difference between the current and last values, which returns us some kind of “speed” of the scrolling, and dividing it by the this.screen.width, to keep our effect value behaving correctly independently of the width of our screen.
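As a plain-function sketch of that calculation (the extra multiplier is an assumption; tune it to taste):

```javascript
/*
Sketch: turn the frame-to-frame scroll delta into a small,
screen-independent strength value for the bending uniform.
Dividing by the screen width keeps the effect consistent across
narrow and wide displays.
*/
function scrollStrength (current, last, screenWidth, multiplier = 5) {
  return ((current - last) / screenWidth) * multiplier
}
```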
Finally now it’s time to play a little bit with our vertex shader. We’re going to bend our planes a little bit while the user is scrolling through the page. So let’s update our vertex.glsl file with this new code:
That’s it! Now we’re also bending our images, creating a unique effect!
Explaining the shader logic a little: the newPosition.z line takes uViewportSizes.y (the viewport height) and the plane’s current position.y, divides one by the other, and multiplies the result by the PI constant defined at the very top of our shader file. We then multiply by uStrength, the strength of the bending, which is tied to our scroll values, so the planes bend more the faster you scroll the demo.
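As a plain-JS sketch of that math (the use of sin() over the normalized y position is an assumption here; the demo's vertex.glsl may shape the curve differently):

```javascript
// position.y / uViewportSizes.y maps the vertex into a 0..1 range, and
// multiplying by PI makes the z offset peak at the vertical center,
// producing the curved "bend". uStrength scales it with scroll speed.
const PI = Math.PI

function bendZ(positionY, viewportHeight, strength) {
  return Math.sin((positionY / viewportHeight) * PI) * strength
}
```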
That’s the final result of our demo! I hope this tutorial was useful to you and don’t forget to comment if you have any questions!
Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!
In this live stream of ALL YOUR HTML, we will be coding a Raymarching demo in Three.js from scratch. We’ll be using some cool Matcap textures, and add a wobbly animation to the scene, and all will be done as math functions. The scene is inspired by this demo made by Luigi De Rosa.
This coding session was streamed live on December 6, 2020.
In this live stream of ALL YOUR HTML, you’ll learn how image transitions with GLSL and Three.js work and how to build a static website with Svelte.js that will be using a third party API. Finally, we’ll code some smooth page transitions using GSAP and Three.js.
This coding session was streamed live on November 29, 2020.
In this live stream of ALL YOUR HTML, we’ll be replicating the beautiful icosahedron animation from Rogier de Boevé’s website. We’ll be using Three.js and GLSL to make things cool, and also some postprocessing.
This coding session was streamed live on November 22, 2020.
This article focuses on adding WebGL effects to <img> and <video> elements of an already “completed” web page. While there are a few helpful resources out there on this subject (like these two), I hope to simplify it by distilling the process into a few steps:
Create a web page as you normally would.
Render pieces that you want to add WebGL effects to with WebGL.
Create (or find) the WebGL effects to use.
Add event listeners to connect your page with the WebGL effects.
Specifically, we’ll focus on the connection between regular web pages and WebGL. What are we going to make? How about a draggable image slider with an interactive mouse hover!
We won’t cover the core functionality of the slider or go very far into the technical details of WebGL or GLSL shaders. However, there are plenty of comments in the demo code and links to outside resources if you’d like to learn more.
We’re using the latest version of WebGL (WebGL2) and GLSL (GLSL 300) which currently do not work in Safari or in Internet Explorer. So, use Firefox or Chrome to view the demos. If you’re planning to use any of what we’re covering in production, you should load both the GLSL 100 and 300 versions of the shaders and use the GLSL 300 version only if curtains.renderer._isWebGL2 is true. I cover this in the demo above.
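A hedged sketch of that fallback check (the “-100” shader IDs are hypothetical names for GLSL 100 variants; curtains.renderer._isWebGL2 is the CurtainsJS-internal flag referenced above):

```javascript
// Pick GLSL 300 shaders when WebGL2 is available, otherwise fall back
// to GLSL 100 versions loaded alongside them.
function pickShaderIDs(isWebGL2) {
  return isWebGL2
    ? { vertexShaderID: "slider-planes-vs", fragmentShaderID: "slider-planes-fs" }
    : { vertexShaderID: "slider-planes-vs-100", fragmentShaderID: "slider-planes-fs-100" }
}
```

You could then spread the result into the params object, e.g. `{ ...pickShaderIDs(curtains.renderer._isWebGL2), uniforms: { /* ... */ } }`.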
First, create a web page as you normally would
You know, HTML and CSS and whatnot. In this case, we’re making an image slider but that’s just for demonstration. We’re not going to go full-depth on how to make a slider (Robin has a nice post on that). But here’s what I put together:
Each slide is equal to the full width of the page.
After a slide has been dragged, the slider continues to slide in the direction of the drag and gradually slows down with momentum.
The momentum snaps the slider to the nearest slide at the end point.
Each slide has an exit animation that’s fired when the drag starts and an enter animation that’s fired when the dragging stops.
When hovering the slider, a hover effect is applied similar to this video.
Again, this is just for demonstration, but I wanted to at least describe the component a bit. These are the DOM elements that we will keep our WebGL synced with.
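Of those behaviors, the momentum snap is the most mechanical. Since each slide spans the full page width (per the first point), snapping reduces to rounding the momentum end point to the nearest multiple of the slide width; a sketch:

```javascript
// Snap the momentum end position to the nearest slide boundary.
function snapToSlide(endX, slideWidth) {
  return Math.round(endX / slideWidth) * slideWidth
}
```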
Next, use WebGL to render the pieces that will contain WebGL effects
Now we need to render our images in WebGL. To do that we need to:
Load the image as a texture into a GLSL shader.
Create a WebGL plane for the image and correctly apply the image texture to the plane.
Position the plane where the DOM version of the image is and scale it correctly.
The third step is particularly non-trivial using pure WebGL because we need to track the position of the DOM elements we want to port into the WebGL world while keeping the DOM and WebGL parts in sync during scroll and user interactions.
There’s actually a library that helps us do all of this with ease: CurtainsJS! It’s the only library I’ve found that easily creates WebGL versions of DOM images and videos and syncs them without too many other features (but I’d love to be proven wrong on that point, so please leave a comment if you know of others that do this well).
With Curtains, this is all the JavaScript we need to add:
// Create a new curtains instance
const curtains = new Curtains({ container: "canvas", autoRender: false });
// Use a single rAF for both GSAP and Curtains
function renderScene() {
curtains.render();
}
gsap.ticker.add(renderScene);
// Params passed to the curtains instance
const params = {
vertexShaderID: "slider-planes-vs", // The vertex shader we want to use
fragmentShaderID: "slider-planes-fs", // The fragment shader we want to use
// Include any variables to update the WebGL state here
uniforms: {
// ...
}
};
// Create a curtains plane for each slide
const planeElements = document.querySelectorAll(".slide");
planeElements.forEach((planeEl, i) => {
const plane = curtains.addPlane(planeEl, params);
// const plane = new Plane(curtains, planeEl, params); // v7 version
// If our plane has been successfully created
if(plane) {
// onReady is called once our plane is ready and all its textures have been created
plane.onReady(function() {
// Add a "loaded" class to display the image container
plane.htmlElement.closest(".slide").classList.add("loaded");
});
}
});
We also need to update our updateProgress function so that it updates our WebGL planes.
function updateProgress() {
// Update the actual slider
animation.progress(wrapVal(this.x) / wrapWidth);
// Update the WebGL slider planes
planes.forEach(plane => plane.updatePosition());
}
We also need to add a very basic vertex and fragment shader to display the texture that we’re loading. We can do that by loading them via <script> tags, like I do in the demo, or by using backticks as I show in the final demo.
Again, this article will not go into a lot of detail on the technical aspects of these GLSL shaders. I recommend reading The Book of Shaders and the WebGL topic on Codrops as starting points.
If you don’t know much about shaders, it’s sufficient to say that the vertex shader positions the planes and the fragment shader processes the texture’s pixels. There are also three variable prefixes that I want to point out:
ins are passed in from a data buffer. In vertex shaders, they come from the CPU (our program). In fragment shaders, they come from the vertex shader.
uniforms are passed in from the CPU (our program).
outs are outputs from our shaders. In vertex shaders, they are passed into our fragment shader. In fragment shaders, they are passed to the frame buffer (what is drawn to the screen).
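To make the three prefixes concrete, here is an illustrative GLSL 300 pair (not the demo’s actual shaders; the two halves would live in separate script tags, each starting with its own #version directive):

```glsl
#version 300 es
// ---- vertex shader (illustrative) ----
in vec3 aVertexPosition;  // "in": read from a CPU-side vertex buffer
in vec2 aTextureCoord;
uniform mat4 uMVMatrix;   // "uniform": set from JavaScript
out vec2 vTextureCoord;   // "out": interpolated, then read by the fragment shader

void main() {
  vTextureCoord = aTextureCoord;
  gl_Position = uMVMatrix * vec4(aVertexPosition, 1.0);
}

// ---- fragment shader (illustrative, separate file in practice) ----
// #version 300 es
// precision mediump float;
// in vec2 vTextureCoord;     // "in": the vertex shader's "out"
// uniform sampler2D uSampler;
// out vec4 fragColor;        // "out": written to the frame buffer
//
// void main() {
//   fragColor = texture(uSampler, vTextureCoord);
// }
```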
Once we’ve added all of that to our project, we have the exact same thing as before, but our slider is now being displayed via WebGL! Neat.
CurtainsJS easily converts images and videos to WebGL. As for adding WebGL effects to text, there are several methods, but perhaps the most common is to draw the text to a <canvas> and then use that canvas as a texture in the shader (e.g. 1, 2). It’s possible to capture most other HTML using html2canvas (or similar) and use the resulting canvas as a texture in the shader; however, this is not very performant.
Create (or find) the WebGL effects to use
Now we can add WebGL effects since we have our slider rendering with WebGL. Let’s break down the effects seen in our inspiration video:
The image colors are inverted.
There is a radius around the mouse position that shows the normal color and creates a fisheye effect.
The radius around the mouse animates from 0 when the slider is hovered and animates back to 0 when it is no longer hovered.
The radius doesn’t jump to the mouse’s position but animates there over time.
The entire image translates based on the mouse’s position in reference to the center of the image.
When creating WebGL effects, it’s important to remember that shaders have no memory between frames. A shader can act on where the mouse is at a given moment, but it can’t, by itself, act on where the mouse has been. That’s why, for certain effects (like animating the radius once the mouse has entered the slider, or easing the radius toward the mouse position over time), we use a JavaScript variable and pass its value to the shader on each frame. We’ll talk more about that process in the next section.
Once we modify our shaders to invert the color outside of the radius and create the fisheye effect inside of the radius, we’ll get something like the demo below. Again, the point of this article is to focus on the connection between DOM elements and WebGL so I won’t go into detail about the shaders, but I did add comments to them.
But that’s not too exciting yet because the radius is not reacting to our mouse. That’s what we’ll cover in the next section.
I haven’t found a repository with a lot of pre-made WebGL shaders to use for regular websites. There’s ShaderToy and VertexShaderArt (which have some truly amazing shaders!), but neither is aimed at the type of effects that fit on most websites. I’d really like to see someone create a repository of WebGL shaders as a resource for people working on everyday sites. If you know of one, please let me know.
Add event listeners to connect your page with the WebGL effects
Now we can add interactivity to the WebGL portion! We need to pass in some variables (uniforms) to our shaders and affect those variables when the user interacts with our elements. This is the section where I’ll go into the most detail because it’s the core for how we connect JavaScript to our shaders.
First, we need to declare some uniforms in our shaders. We only need the mouse position in our vertex shader:
// The un-transformed mouse position
uniform vec2 uMouse;
We need to declare the radius and resolution in our fragment shader:
uniform float uRadius; // Radius of pixels to warp/invert
uniform vec2 uResolution; // Used in anti-aliasing
Then let’s add some values for these inside of the parameters we pass into our Curtains instance. We were already doing this for uResolution! We need to specify the name of the variable in the shader, its type, and its starting value:
const params = {
vertexShaderID: "slider-planes-vs", // The vertex shader we want to use
fragmentShaderID: "slider-planes-fs", // The fragment shader we want to use
// The variables that we're going to be animating to update our WebGL state
uniforms: {
// For the cursor effects
mouse: {
name: "uMouse", // The shader variable name
type: "2f", // The type for the variable - https://webglfundamentals.org/webgl/lessons/webgl-shaders-and-glsl.html
value: mouse // The initial value to use
},
radius: {
name: "uRadius",
type: "1f",
value: radius.val
},
// For the antialiasing
resolution: {
name: "uResolution",
type: "2f",
value: [innerWidth, innerHeight]
}
},
};
Now the shader uniforms are connected to our JavaScript! At this point, we need to create some event listeners and animations to affect the values that we’re passing into the shaders. First, let’s set up the animation for the radius and the function to update the value we pass into our shader:
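A hypothetical reconstruction of that setup (the article’s demo embeds the actual code): a plain object that GSAP tweens, plus an updater that copies the value into every plane’s “radius” uniform each tick.

```javascript
// The tweened value lives in a plain JS object...
const radius = { val: 0 }

// ...and this updater pushes it into each plane's "radius" uniform so the
// fragment shader sees the new value on the next render.
function updateRadius(planes) {
  planes.forEach(plane => {
    plane.uniforms.radius.value = radius.val
  })
}

// In the demo this would be driven by something like:
// const radiusAnim = gsap.to(radius, {
//   val: 0.11, duration: 0.75, paused: true,
//   onUpdate: () => updateRadius(planes)
// })
```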
If we play the radius animation, then our shader will use the new value each tick.
We also need to update the mouse position when it’s over our slider for both mouse devices and touch screens. There’s a lot of code here, but you can walk through it pretty linearly. Take your time and process what’s happening.
const mouse = new Vec2(0, 0);
function addMouseListeners() {
if ("ontouchstart" in window) {
wrapper.addEventListener("touchstart", updateMouse, false);
wrapper.addEventListener("touchmove", updateMouse, false);
wrapper.addEventListener("blur", mouseOut, false);
} else {
wrapper.addEventListener("mousemove", updateMouse, false);
wrapper.addEventListener("mouseleave", mouseOut, false);
}
}
// Update the stored mouse position along with WebGL "mouse"
function updateMouse(e) {
radiusAnim.play();
if (e.changedTouches && e.changedTouches.length) {
e.x = e.changedTouches[0].pageX;
e.y = e.changedTouches[0].pageY;
}
if (e.x === undefined) {
e.x = e.pageX;
e.y = e.pageY;
}
mouse.x = e.x;
mouse.y = e.y;
updateWebGLMouse();
}
// Updates the mouse position for all planes
function updateWebGLMouse(dur) {
// update the planes mouse position uniforms
planes.forEach((plane, i) => {
const webglMousePos = plane.mouseToPlaneCoords(mouse);
updatePlaneMouse(plane, webglMousePos, dur);
});
}
// Updates the mouse position for the given plane
function updatePlaneMouse(plane, endPos = new Vec2(0, 0), dur = 0.1) {
gsap.to(plane.uniforms.mouse.value, {
x: endPos.x,
y: endPos.y,
duration: dur,
overwrite: true,
});
}
// When the mouse leaves the slider, animate the WebGL "mouse" to the center of slider
function mouseOut(e) {
planes.forEach((plane, i) => updatePlaneMouse(plane, new Vec2(0, 0), 1) );
radiusAnim.reverse();
}
We should also modify our existing updateProgress function to keep our WebGL mouse synced.
// Update the slider along with the necessary WebGL variables
function updateProgress() {
// Update the actual slider
animation.progress(wrapVal(this.x) / wrapWidth);
// Update the WebGL slider planes
planes.forEach(plane => plane.updatePosition());
// Update the WebGL "mouse"
updateWebGLMouse(0);
}
Now we’re cooking with fire! Our slider meets all of our requirements.
Two additional benefits of using GSAP for your animations are that it provides access to callbacks, like onComplete, and it keeps everything perfectly synced no matter the refresh rate (e.g. this situation).
You take it from here!
This is, of course, just the tip of the iceberg when it comes to what we can do with the slider now that it is in WebGL. For example, common effects like turbulence and displacement can be added to the images in WebGL. The core concept of a displacement effect is to move pixels around based on a gradient lightmap that we use as an input source. We can use this texture (that I pulled from this displacement demo by Jesper Landberg — you should give him a follow) as our source and then plug it into our shader.
To learn more about creating textures like these, see this article, this tweet, and this tool. I am not aware of any existing repositories of images like these, but if you know of one please, let me know.
If we hook up the texture above and animate the displacement power and intensity so that they vary over time and based on our drag velocity, then it will create a nice semi-random, but natural-looking displacement effect:
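The core displacement lookup can be sketched in plain JS (the 0.5 centering and the x-only shift are assumptions; the demo’s shader may displace both axes):

```javascript
// Shift a texture coordinate by the displacement map's brightness.
// Brightness 0.5 means "no shift"; darker/lighter pixels pull the lookup
// left/right, scaled by the animated power.
function displaceUV(u, v, dispBrightness, power) {
  return [u + (dispBrightness - 0.5) * power, v]
}
```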
Some time ago I wrote about a new way to implement runtime polymorphism that is based not on virtual functions but on std::visit and std::variant. Please have a look at this new blog post, where I experiment with this approach on my home project; the experiment is more practical than artificial examples. See the advantages, disadvantages, and practical code issues.