3D Folding Layout Technique for HTML Elements

Today we're going to take a look at a cool little technique for bending and folding HTML elements. This technique is not new by any means; it has been explored in previous works, and one great example is Romain's portfolio. It can be used to create interesting and diverse layouts, but please keep in mind that this is very experimental.

To start the article I’m going to come clean: this effect is all smoke and mirrors. HTML elements can’t actually bend, sorry if that breaks your heart.

The illusion is created by lining up multiple elements to give the impression that they are a single piece, then rotating the elements at the edges so that the "single piece" appears to bend. Let's see how that looks in code.

Creating the great fold illusion!

To begin, we’ll add a container with perspective so that we see the rotations happening in 3D. We’ll also create children “folds” with fixed dimensions and overflow hidden. The top and bottom folds are going to be placed absolutely on their respective sides of the middle fold.

Giving the folds fixed dimensions is not necessary; you can even give each fold different sizes if you are up to the challenge! But having fixed dimensions simplifies a lot of the alignment math.

The overflow: hidden is necessary; it's what makes the effect work, because it's what makes the folds look like a single unit even when they have different rotations.

<div class="wrapper-3d">
	<div class="fold fold-top"></div>
	<div class="fold fold-center" id="center-fold"></div>
	<div class="fold fold-bottom"></div>	
</div>
.wrapper-3d {
  position: relative;
  /* Based on the screen width so the perspective doesn't break on small sizes */
  perspective: 20vw;
  transform-style: preserve-3d;
}

.fold {
  overflow: hidden;
  width: 100vw;
  height: 80vh;
}

.fold-bottom {
  background: #dadada;
  position: absolute;
  transform-origin: top center;
  right: 0;
  left: 0;
  top: 100%;
}

.fold-top {
  background: #dadada;
  position: absolute;
  transform-origin: bottom center;
  left: 0;
  right: 0;
  bottom: 100%;
}

Note: In this case, we're using the bottom and top properties to position our extra folds. If you wanted to add more than two, you would need to stack transforms. You could, for example, use an SCSS function that generates the code for all the folds to be in place (a rough JavaScript version of the idea is sketched below).
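
Here's that idea sketched in JavaScript rather than SCSS; the .fold-extra class name and the translate-only stacking are assumptions for illustration.

// Hypothetical: position any number of extra folds below the center fold.
// Each fold sits one full fold-height lower: 100%, 200%, 300%, and so on.
document.querySelectorAll('.fold-extra').forEach((fold, i) => {
  // Rotations would compound the same way, appended onto this transform.
  fold.style.transform = `translateY(${(i + 1) * 100}%)`;
});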

Now let's add a little bit of content inside the folds and see how that looks. We'll insert it inside a new .fold-content division. Each fold needs its own copy of the same content for the effect to be seamless.

For now, the content is going to be a bunch of squares and headers. But you can add any HTML elements.

<div class="wrapper-3d">
	<div class="fold fold-top">
		<div class="fold-content">
			<div class="square green"></div>
			<h1>This is my content</h1>
			<div class="square blue"></div>
			<h1>This is my content</h1>
			<div class="square red"></div>
		</div>
	</div>
	<div class="fold fold-center" id="center-fold">
		<div class="fold-content" id="center-content">
			<div class="square green"></div>
			<h1>This is my content</h1>
			<div class="square blue"></div>
			<h1>This is my content</h1>
			<div class="square red"></div>
		</div>
	</div>
	<div class="fold fold-bottom">
		<div class="fold-content">
			<div class="square green"></div>
			<h1>This is my content</h1>
			<div class="square blue"></div>
			<h1>This is my content</h1>
			<div class="square red"></div>
		</div>
	</div>
</div>
.square {
	width: 100%;
	padding-bottom: 75%;
}

.green {
	background-color: lightgreen;
}

.blue {
	background-color: lightblue;
}

.red {
	background-color: lightcoral;
}

Right now the content is out of place because each fold has its content starting at the top; that's just how HTML works. We want the folds to read as a single aligned unit, so we'll add an extra .fold-align between the content and the fold.

Each fold is going to have its unique alignment. We’ll position their content to start at the top of the middle fold.

<div class="wrapper-3d">
    <div class="fold fold-top">
        <div class="fold-align">
            <div class="fold-content">
                <!-- Content -->
            </div>
        </div>
    </div>
    <div class="fold fold-center" id="center-fold">
        <div class="fold-align">
            <div class="fold-content" id="center-content">
                <!-- Content -->
            </div>
        </div>
    </div>
    <div class="fold fold-bottom">
        <div class="fold-align">
            <div class="fold-content">
                <!-- Content -->
            </div>
        </div>
    </div>
</div>
.fold-align {
	width: 100%;
	height: 100%;
}

.fold-bottom .fold-align {
	transform: translateY(-100%);
}

.fold-top .fold-align {
	transform: translateY(100%);
}

Now we only need to rotate the top and bottom folds from the respective origin and we’re done with creating the effect!

.fold-bottom {
	transform-origin: top center;
	transform: rotateX(120deg);
}

.fold-top {
	transform-origin: bottom center;
	transform: rotateX(-120deg);
}

Scrolling the folds

Because our folds have overflow: hidden there isn’t a default way to scroll through them. Not to mention that they also need to scroll in sync. So, we need to manage that ourselves!

To make our scroll simple to manage, we’ll take advantage of the regular scroll wheel.

First, we’ll set the body’s height to how big we want the scroll to be. And then we’ll sync our elements to the scroll created by the browser. The height of the body is going to be the screen’s height plus the content overflowing the center fold. This will guarantee that we are only able to scroll if the content overflows its fold height.

let centerContent = document.getElementById('center-content');
let centerFold = document.getElementById('center-fold');

let overflowHeight = centerContent.clientHeight - centerFold.clientHeight;

document.body.style.height = overflowHeight + window.innerHeight + "px";

After we create the scroll, we’ll update the position of the folds’ content to make them scroll with the page.

let foldsContent = Array.from(document.getElementsByClassName('fold-content'))
let tick = () => {
    let scroll = -(
        document.documentElement.scrollTop || document.body.scrollTop
    );
    foldsContent.forEach((content) => {
        content.style.transform = `translateY(${scroll}px)`;
    })
    requestAnimationFrame(tick);
}
tick();

And that's it! To make it more enjoyable, we'll remove the background color of the folds and add some lerp to make the scrolling experience smoother!
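
As a rough sketch of that last step, instead of applying the scroll position directly we can ease the applied value towards the real scroll position every frame (the 0.1 factor is an arbitrary smoothing amount):

let currentScroll = 0;

let smoothTick = () => {
    let target = -(
        document.documentElement.scrollTop || document.body.scrollTop
    );
    // Move 10% of the remaining distance each frame (a simple lerp).
    currentScroll += (target - currentScroll) * 0.1;
    foldsContent.forEach((content) => {
        content.style.transform = `translateY(${currentScroll}px)`;
    });
    requestAnimationFrame(smoothTick);
};
smoothTick();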

Conclusion

In this short tutorial, we went over the basic illusion of folding HTML elements. But there's so much more we can do with this! Each of the demos uses different variations (and styles) of the basic technique we just learned!

With one variation you can use non-fixed size elements. With another variation, you can animate them while sticking some folds to the sides of the screen.

Each demo variation has its benefits and caveats. I encourage you to dig into the code and see how the small changes between demos allow for different results!

Also, it's good to note that in some browsers this technique shows tiny line gaps between the folds. We minimized this by scaling up the parent and down-scaling the child elements. It's not a perfect solution, but it did the trick in most cases! If you know how to remove the gaps for good, let us know!

If you have any questions or want to share something, let us know in the comments or reach out to me on Twitter @anemolito!

3D Folding Layout Technique for HTML Elements was written by Daniel Velasquez and published on Codrops.

Techniques for Rendering Text with WebGL

As is the rule in WebGL, anything that seems like it should be simple is actually quite complicated. Drawing lines, debugging shaders, text rendering… they are all damn hard to do well in WebGL.

Isn't that weird? WebGL doesn't have a built-in function for rendering text, even though text seems like the most basic of functionalities. When it comes down to actually rendering it, things get complicated. How do you account for the immense number of glyphs for every language? How do you work with fixed-width or proportional-width fonts? What do you do when text needs to be rendered top-to-bottom, left-to-right, or right-to-left? Mathematical equations, diagrams, sheet music?

Suddenly it starts to make sense why text rendering has no place in a low-level graphics API like WebGL. Text rendering is a complex problem with a lot of nuances. If we want to render text, we need to get creative. Fortunately, a lot of smart folks already came up with a wide range of techniques for all our WebGL text needs.

We'll look at some of those techniques in this article, including how to generate the assets they need and how to use them with ThreeJS, a JavaScript 3D library that includes a WebGL renderer. As a bonus, each technique comes with a demo showcasing use cases.


A quick note on text outside of WebGL

Although this article is all about text inside WebGL, the first thing you should consider is whether you can get away with using HTML text or canvas overlaid on top of your WebGL canvas. As an overlay, the text can't be occluded by the 3D geometry, but you get styling and accessibility out of the box. That's all you need in a lot of cases.
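
For reference, the overlay approach can be as simple as absolutely positioning a DOM element on top of the WebGL canvas. A minimal sketch (the element id and styles are placeholders):

const canvas = document.querySelector('#webgl-canvas');
const label = document.createElement('div');
label.textContent = 'Hello WebGL';
Object.assign(label.style, {
  position: 'absolute',
  top: '16px',
  left: '16px',
  color: 'white',
  pointerEvents: 'none', // let mouse events reach the canvas below
});
canvas.parentElement.appendChild(label);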

Font geometries

One of the common ways to render text is to build the glyphs with a series of triangles, much like a regular model. After all, rendering points, lines and triangles is a strength of WebGL.

When creating a string, each glyph is made by reading the triangulated points of the font file. From there, you could extrude the glyph to make it 3D, or scale the glyphs via matrix operations.

Regular font representation (left) and font geometry (right)

Font geometry works best for a small amount of text. That’s because each glyph contains many triangles, which causes drawing to become problematic.

Rendering this exact paragraph you are reading right now with font geometry creates 185,084 triangles and 555,252 vertices. This is just 259 letters. Write the whole article using a font geometry and your computer fan might as well become an airplane turbine.

Although the number of triangles varies by the precision of the triangulation and the typeface in use, rendering lots of text will probably always be a bottleneck when working with font geometry.

How to create a font geometry from a font file

If it were as easy as choosing the font you want and rendering the text, I wouldn't be writing this article. Regular font formats define glyphs with Bezier curves, and drawing those directly in WebGL is extremely expensive on the CPU and complicated to do. If we want to render text, we need to create triangles (triangulation) out of the Bezier curves.

I've found a few triangulation methods, but by no means are any of them perfect and they may not work with every font. But at least they'll get you started for triangulating your own typefaces.

Method 1: Automatic and easy

If you are using ThreeJS, you pass your typeface through FaceType.js to read the parametric curves from your font file and put them into a .json file. The font geometry features in ThreeJS take care of triangulating the points for you:

const geometry = new THREE.TextGeometry("Hello There", {font: font, size: 80})

Alternatively, if you are not using ThreeJS and don't need real-time triangulation, you can save yourself the pain of a manual process by using ThreeJS to triangulate the font for you. Then you can extract the vertices and indices from the geometry and load them in your WebGL application of choice, as sketched below.
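
Here's a sketch of that extraction, assuming the era-appropriate ThreeJS API (BufferGeometry.fromGeometry was removed in later releases) and a font already loaded with THREE.FontLoader:

// Triangulate once with ThreeJS...
const geometry = new THREE.TextGeometry("Hello There", { font: font, size: 80 });

// ...then pull out the raw triangulated data as a typed array.
const buffer = new THREE.BufferGeometry().fromGeometry(geometry);
const vertices = buffer.getAttribute('position').array; // x, y, z triplets
console.log(vertices.length / 3, 'vertices ready to export');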

Method 2: Manual and painful

The manual option for triangulating a font file is extremely complicated and convoluted, at least initially. It would require a whole article just to explain it in detail. That said, we'll quickly go over the steps of a basic implementation I grabbed from StackOverflow.

See the Pen "Triangulating Fonts" by Daniel Velasquez (@Anemolo) on CodePen.

The implementation basically breaks down like this:

  1. Add OpenType.js and Earcut.js to your project.
  2. Get Bezier curves from your .ttf font file using OpenType.js.
  3. Convert Bezier curves into closed shapes and sort them by descending area.
  4. Determine the indices for the holes by figuring out which shapes are inside other shapes.
  5. Send all of the points to Earcut with the hole indices as a second parameter.
  6. Use Earcut's result as the indices for your geometry.
  7. Breathe out.

Yeah, it's a lot. And this implementation may not work for all typefaces. It'll get you started nonetheless.
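
As a starting point, here's a heavily simplified sketch of steps 2 through 6. It samples curve commands at their endpoints only and naively treats every contour after the first as a hole, so it only handles single-glyph shapes like "O"; real strings need the area sorting and containment tests described above.

const opentype = require('opentype.js');
const earcut = require('earcut');

opentype.load('fonts/OpenSans-Regular.ttf', (err, font) => {
  if (err) throw err;
  const path = font.getPath('O', 0, 0, 72);

  // 2-3. Flatten the Bezier commands into closed contours.
  const contours = [];
  let current = [];
  path.commands.forEach((cmd) => {
    if (cmd.type === 'M') current = [];
    if (cmd.type === 'Z') { contours.push(current); return; }
    current.push([cmd.x, cmd.y]); // curves sampled at their endpoints only
  });

  // 4. Mark every contour after the first as a hole.
  const vertices = [];
  const holeIndices = [];
  contours.forEach((contour, i) => {
    if (i > 0) holeIndices.push(vertices.length / 2);
    contour.forEach(([x, y]) => vertices.push(x, y));
  });

  // 5-6. Earcut returns the triangle indices for our geometry.
  const indices = earcut(vertices, holeIndices, 2);
  console.log(indices.length / 3, 'triangles');
});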

Using text geometries in ThreeJS

Thankfully, ThreeJS supports text geometries out of the box. Give it a .json of your favorite font's Bezier curves and ThreeJS takes care of triangulating the vertices for you in runtime.

var font;
var text = "Hello World";
var loader = new THREE.FontLoader();
loader.load('fonts/helvetiker_regular.typeface.json', function (helvetiker) {
  font = helvetiker;
  var geometry = new THREE.TextGeometry(text, {
    font: font,
    size: 80,
    height: 5,
  });
});

Advantages

  • It’s easily extruded to create 3D strings.
  • Scaling is made easier with matrix operations.
  • It provides great quality depending on the amount of triangles used.

Disadvantages

  • This doesn't scale well with large amounts of text due to the high triangle count. Since each character is defined by a lot of triangles, even rendering something as brief as "Hello World" results in 7,396 triangles and 22,188 vertices.
  • This doesn't lend itself to common text effects.
  • Anti-aliasing depends on your post-processing aliasing or your browser default.
  • Scaling things too big might show the triangles.

Demo: Fading Letters

In the following demo, I took advantage of how easy it is to create 3D text using font geometries. Inside a vertex shader, the extrusion is increased and decreased as time goes on. Pair that with fog and standard material and you get these ghostly letters coming in and out of the void.

Notice how, even with a low number of letters, the triangle count is already in the tens of thousands!

Text (and canvas) textures

Making text textures is probably the simplest and oldest way to draw text in WebGL. Open up Photoshop or some other raster graphics editor, draw an image with some text on it, then render these textures onto a quad and you are done!

Alternatively, you could use the canvas to create the textures on demand at runtime. You’re able to render the canvas as a texture onto a quad as well.

Aside from being the least complicated technique of the bunch, text and canvas textures have the benefit of needing only one quad per texture, or per given piece of text. If you really wanted to, you could write the entire Encyclopaedia Britannica on a single texture. That way, you only have to render a single quad: six vertices and two faces. Of course, you would do it at a scale, but the idea remains: you can batch multiple glyphs onto the same quad. Both text and canvas textures have issues with scaling, particularly when working with lots of text.

For text textures, the user has to download all the textures that make up the text, then keep them in memory. For canvas textures, the user doesn't have to download anything — but now the user's computer has to do all the rasterizing at runtime, and you need to keep track of where every word is located in the canvas. Plus, updating a big canvas can be really slow.

How to create and use a text texture

Text textures don't have anything fancy going on for them. Open up your favorite raster graphics editor, draw some text on the canvas and export it as an image. Then you can load it as a texture, and map it on a plane:

// Load the image as a texture (path hypothetical)
let texture = new THREE.TextureLoader().load('./text-texture.png');
const geometry = new THREE.PlaneBufferGeometry();
const material = new THREE.MeshBasicMaterial({ map: texture });
this.scene.add(new THREE.Mesh(geometry, material));

If your WebGL app has a lot of text, downloading a huge sprite sheet of text might not be ideal, especially for users on slow connections. To avoid the download time, you can rasterize things on demand using an offscreen canvas then sample that canvas as a texture.

Keep in mind this trades download time for runtime performance, since rasterizing lots of text takes more than a moment.

function createTextCanvas(text, parameters = {}){
    
    const canvas = document.createElement("canvas");
    const ctx = canvas.getContext("2d");
    
    // Prepare the font to be able to measure
    let fontSize = parameters.fontSize || 56;
    ctx.font = `${fontSize}px monospace`;
    
    const textMetrics = ctx.measureText(text);
    
    let width = textMetrics.width;
    let height = fontSize;
    
    // Resize canvas to match text size 
    canvas.width = width;
    canvas.height = height;
    canvas.style.width = width + "px";
    canvas.style.height = height + "px";
    
    // Re-apply font since canvas is resized.
    ctx.font = `${fontSize}px monospace`;
    ctx.textAlign = parameters.align || "center" ;
    ctx.textBaseline = parameters.baseline || "middle";
    
    // Make the canvas transparent for simplicity
    ctx.fillStyle = "transparent";
    ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    
    ctx.fillStyle = parameters.color || "white";
    ctx.fillText(text, width / 2, height / 2);
    
    return canvas;
}

let texture = new THREE.Texture(createTextCanvas("This is text"));

Now you can use the texture on a plane, like the previous snippet. Or you could create a sprite instead.

As an alternative, you could use more efficient libraries to create textures or sprites, like three-text2d or three-spritetext. And if you want multi-line text, you should check out this amazing tutorial.

Advantages

  • This provides great 1-to-1 quality with static text.
  • There’s a low vertex/face count. Each string can use as little as six vertices and two faces.
  • It’s easy to implement texture on a quad.
  • It’s fairly trivial to add effects, like borders and glows, using canvas or a graphics editor.
  • Canvas makes it easy to create multi-line text.

Disadvantages

  • Looks blurry if scaled, rotated or transformed after rasterizing.
  • On non-retina displays, text looks crunchy.
  • You have to rasterize all the strings used. A lot of strings means a lot of data to download.
  • On-demand rasterizing with canvas can be slow if you keep constantly updating the canvas.

Demo: Canvas texture

Canvas textures work well with a limited amount of text that doesn’t change often. So I built a simple wall of text with the quads re-using the same texture.

Bitmap fonts

Both font geometries and text textures experience the same problems handling lots of text. Having one million vertices per piece of text is super inefficient, and creating one texture per piece of text doesn't really scale.

Bitmap fonts solve this issue by rasterizing all unique glyphs into a single texture, called a texture atlas. This means you can assemble any given string at runtime by creating a quad for each glyph and sampling the section of the texture atlas.

This means users only have to download and use a single texture for all of the text. It also means you only need to render as little as one quad per glyph:

A visual of bitmap font sampling

Rendering this whole article would take approximately 117,272 vertices and 58,636 triangles. That's 3.1 times more efficient than font geometry rendering just a single paragraph. That's a huge improvement!

Because bitmap fonts rasterize the glyph into a texture, they suffer from the same problem as regular images. Zoom in or scale up and you start seeing a pixelated and blurry mess. If you want text at a different size, you should ship a secondary bitmap with the glyphs at that specific size. Or you could use a Signed Distance Field (SDF), which we'll cover in the next section.

How to create bitmap fonts

There are a lot of tools to generate bitmaps. Here are some of the more relevant options out there:

  • Angelcode's bmfont - This is by the creators of the bitmap format.
  • Hiero - This is a Java open-source tool. It's very similar to Angelcode's bmfont, but it allows you to add text effects.
  • Glyphs Designer - This is a paid MacOS app.
  • ShoeBox - This is a tool for dealing with sprites, including bitmap fonts.

We'll use Angelcode's bmfont for our example because I think it's the easiest one to get started with. If it lacks the functionality you are looking for, take a look at the other tools listed above.

When you open the app, you'll see a screen full of letters that you can select to use. The nice thing about this is that you are able to grab just the glyphs you need instead of shipping symbols you're never going to use.

The app’s sidebar allows you to choose and select groups of glyphs.

The BmFont application

Ready to export? Go to Options → Save bitmap as. And done!

But we’re getting a little ahead of ourselves. Before exporting, there are a few important settings you should check.

Export and Font Option settings
  • Font settings: This lets you choose the font and size you want to use. The most important item here is "Match char height." By default, the app's "size" option uses pixels instead of points, so you'll see a substantial difference between your graphics editor's font size and the font size that is generated. Select the "Match char height" option if you want your glyphs to make sense.
  • Export settings: For the export, make sure the texture size is a power of two (e.g. 16x16, 32x32, 64x64, etc.). Then you are able to take advantage of "Linear Mipmap linear" filtering, if needed.

At the bottom of the settings, you'll see the "file format" section. Choosing either option here is fine as long as you can read the file and create the glyphs.

If you are looking for the smallest file size, here's an ultra non-scientific test: I created a bitmap of all lowercase and uppercase Latin characters and compared each option. For font descriptors, the most efficient format is Binary.

Font Descriptor Format    File Size
Binary                    3 KB
Raw Text                  11 KB
XML                       12 KB

Texture Format            File Size
PNG                       7 KB
Targa                     64 KB
DirectDraw Surface        65 KB

For the texture, PNG gives the smallest file size.

Of course, it's a little more complicated than just file sizes. To get a better idea of which option to use, you should look into parsing time and run-time performance. If you would like to know the pros and cons of each format, check out this discussion.

How to use bitmap fonts

Creating bitmap font geometry is a bit more involved than just using a texture because we have to construct the string ourselves. Each glyph has its own height and width, and samples a different section of the texture. We have to create a quad for each glyph in our string so we can give each one the correct UVs to sample its glyph.
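
To make that concrete, here's a hedged sketch of the per-glyph math, using the field names from the BMFont character spec (x, y, width, height, xoffset, yoffset, xadvance) and ignoring baseline and kerning details:

function glyphQuad(glyph, cursorX, atlasWidth, atlasHeight) {
  // Quad corners, offset from the text cursor.
  const x = cursorX + glyph.xoffset;
  const y = glyph.yoffset;

  // UVs: the glyph's rectangle inside the atlas, normalized to 0..1.
  const u0 = glyph.x / atlasWidth;
  const u1 = (glyph.x + glyph.width) / atlasWidth;
  const v0 = 1 - (glyph.y + glyph.height) / atlasHeight; // flip Y for WebGL
  const v1 = 1 - glyph.y / atlasHeight;

  return {
    positions: [x, y, x + glyph.width, y, x + glyph.width, y + glyph.height, x, y + glyph.height],
    uvs: [u0, v1, u1, v1, u1, v0, u0, v0],
    nextCursorX: cursorX + glyph.xadvance, // where the next glyph starts
  };
}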

You can use three-bmfont-text in ThreeJS to create strings using bitmaps, SDFs and MSDFs. It takes care of multi-line text, and batching all glyphs onto a single geometry. Note that it needs to be installed in a project from npm.

var createGeometry = require('three-bmfont-text')
var loadFont = require('load-bmfont')

loadFont('fonts/Arial.fnt', function(err, font) {
  // create a geometry of packed bitmap glyphs, 
  // word wrapped to 300px and right-aligned
  var geometry = createGeometry({
    font: font,
    text: "My Text"
  })
    
  var textureLoader = new THREE.TextureLoader();
  textureLoader.load('fonts/Arial.png', function (texture) {
    // we can use a simple ThreeJS material
    var material = new THREE.MeshBasicMaterial({
      map: texture,
      transparent: true,
      color: 0xaaffff
    })

    // now do something with our mesh!
    var mesh = new THREE.Mesh(geometry, material)
  })
})

Depending on whether your text is drawn as full black or full white, use the invert option.

Advantages

  • It’s fast and simple to render.
  • It’s a 1:1 ratio and resolution independent.
  • It can render any string, given the glyphs.
  • It’s good for lots of text that needs to change often.
  • It works extremely well with a limited number of glyphs.
  • It includes support for things like kerning, line height and word-wrapping at run-time.

Disadvantages

  • It only accepts a limited set of characters and styles.
  • It requires pre-rasterizing glyphs and extra bin packing for optimal usage.
  • It's blurry and pixelated at large scales, and also when rotated or transformed.
  • It needs one quad per rendered glyph (versus one per string for text textures).

Interactive Demo: The Shining Credits

Raster bitmap fonts work great for movie credits because we only need a few sizes and styles. The drawback is that the text isn’t great with responsive designs because it’ll look blurry and pixelated at larger sizes.

For the mouse effect, I’m making calculations by mapping the mouse position to the size of the view then calculating the distance from the mouse to the text position. I’m also rotating the text when it hits specific points on the z-axis and y-axis.

Signed distance fields

Much like bitmap fonts, signed distance field (SDF) fonts are also a texture atlas. Unique glyphs are batched into a single "texture atlas" that can create any string at runtime.

But instead of storing the rasterized glyph on the texture the way bitmap fonts do, the glyph's SDF is generated and stored instead, which allows a high-resolution shape to be rendered from a low-resolution image.

Like polygonal meshes (font geometries), SDFs represent shapes. Each pixel on an SDF stores the distance to the closest surface. The sign indicates whether the pixel is inside or outside the shape: if the sign is negative, the pixel is inside; if it's positive, the pixel is outside. This video illustrates the concept nicely.

SDFs are also commonly used for raytracing and volumetric rendering.
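
As a tiny illustration of the storage idea, here's a sketch that fills a grid with the SDF of a circle; each cell holds its distance to the circle's edge, negative inside and positive outside:

function circleSDF(width, height, cx, cy, radius) {
  const field = new Float32Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Signed distance from this pixel to the circle's edge.
      field[y * width + x] = Math.hypot(x - cx, y - cy) - radius;
    }
  }
  return field;
}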

Because an SDF stores distance in each pixel, the raw result looks like a blurry version of the original shape. To output the hard shape, you alpha test it at 0.5, which is the border of the glyph. Take a look at how the SDF of the letter "A" compares to its regular raster image:

Raster text beside of a raw and an alpha tested SDF

As I mentioned earlier, the big benefit of SDFs is being able to render high resolution shapes from low resolution SDF. This means you can create a 16pt font SDF and scale the text up to 100pt or more without losing much crispness.

SDFs are good at scaling because you can almost perfectly reconstruct the distance with bilinear interpolation, which is a fancy way of saying we can get values between two points. In this case, bilinear interpolating between two pixels on a regular bitmap font gives us the in-between color, resulting in a linear blur.

On an SDF, bilinear interpolating between two pixels provides the in-between distance to the nearest edge. Since these two pixel distances are similar to begin with, the resulting value doesn't lose much information about the glyph. This also means the bigger the SDF, the more accurate it is and the less information is lost.
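
Written out as a toy example, the difference is easy to see:

const lerp = (a, b, t) => a + (b - a) * t;

// Two neighboring SDF pixels, 2px and 4px from the nearest edge:
lerp(2, 4, 0.5); // -> 3, still a meaningful distance to the edge

// Two neighboring bitmap pixels, black (0) and white (1):
lerp(0, 1, 0.5); // -> 0.5, a gray pixel, i.e. blur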

However, this process comes with a caveat. If the rate of change between pixels is not linear, as happens at sharp corners, bilinear interpolation gives an inaccurate value, resulting in chipped or rounded corners when scaling an SDF much higher than its original size.

SDF rounded corners

Aside from bumping up the texture size, the only real solution is to use multi-channel SDFs, which is what we'll cover in the next section.

If you want to take a deeper dive into the science behind SDFs, check out Chris Green's Master's thesis (PDF) on the topic.

Advantages

  • They maintain crispness, even when rotated, scaled or transformed.
  • They are ideal for dynamic text.
  • They provide good quality to the size ratio. A single texture can be used to render tiny and huge font sizes without losing much quality.
  • They have a low vertex count of only four vertices per glyph.
  • Antialiasing is inexpensive as is making borders, drop shadows and all kinds of effects with alpha testing.
  • They’re smaller than MSDFs (which we’ll see in a bit).

Disadvantages

  • They can result in rounded or chipped corners when the texture is sampled beyond its resolution. (Again, we'll see how MSDFs can prevent that.)
  • They’re ineffective at tiny font sizes.
  • They can only be used with monochrome glyphs.

Multi-channel signed distance fields

Multi-channel signed distance field (MSDF) fonts are a bit of a mouthful and a fairly recent variation on SDFs, capable of producing near-perfect sharp corners by using all three color channels. They do look quite mind-blowing at first, but don't let that put you off: they are easier to use than they appear.

A multi-channel signed distance field file can look a little spooky at first.

Using all three color channels does result in a heavier image, but that’s what gives MSDFs a far better quality-to-space ratio than regular SDFs. The following image shows the difference between an SDF and an MSDF for a font that has been scaled up to 50px.

The SDF font results in rounded corners, even at 1x zoom, while the MSDF font retains sharp edges, even at 5x zoom.

Like a regular SDF, an MSDF stores the distance to the nearest edge, but it changes the color channels whenever it finds a sharp corner. We get the shape by drawing where two or more color channels agree. There's a bit more technique involved; check out the README for this MSDF generator for a more thorough explanation.

Advantages

  • They have a higher quality-to-space ratio than SDFs and are often the better choice.
  • They maintain sharp corners when scaled.

Disadvantages

  • They may contain small artifacts, but those can be avoided by bumping up the texture size of the glyph.
  • They require filtering the median of the three values at runtime, which is a bit expensive.
  • They are only compatible with monochrome glyphs.

How to create MSDF fonts

The quickest way to create an MSDF font is to use the msdf-bmfont-web tool. It has most of the relevant customization options and it gets the job done in seconds, all in the browser. Alternatively, there are a number of Google Fonts that have already been converted into MSDF by the folks at A-Frame.

If you are also looking to generate SDFs, or your typeface requires some special tweaking because of problematic glyphs, the msdf-bmfont-xml CLI gives you a wide range of options without making things overly confusing. Let's take a look at how you would use it.

First, you'll need to install it globally from npm:

npm install msdf-bmfont-xml -g

Next, give it a .ttf font file with your options:

msdf-bmfont ./Open-Sans-Black.ttf --output-type json --font-size 76 --texture-size 512,512

Those options are worth digging into a bit. While msdf-bmfont-xml provides a lot of options to fine-tune your font, there are really just a few options you'll probably need to correctly generate an MSDF:

  • -t <msdf or sdf> or --field-type <msdf or sdf>: msdf-bmfont-xml generates MSDF glyph atlases by default. If you want to generate an SDF instead, you need to specify it by using -t sdf.
  • -f <xml or json> or --output-type <xml or json>: msdf-bmfont-xml generates an XML font file that you would have to parse to JSON at runtime. You can avoid this parsing step by exporting JSON straight away.
  • -s, --font-size <fontSize>: Some artifacts and imperfections might show up if the font size is super small. Bumping up the font size will get rid of them most of the time. This example shows a small imperfection in the letter "M."
  • -m <w,h> or --texture-size <w,h>: If all your characters don't fit in the same texture, a second texture is created to fit them in. Unless you are trying to take advantage of a multi-page glyph atlas, I recommend increasing the texture size so that it fits over all of the characters on one texture to avoid extra work.

There are other tools that help generate MSDF and SDF fonts:

  • msdf-bmfont-web: A web tool to create MSDFs (but not SDFs) quickly and easily
  • msdf-bmfont: A Node tool using Cairo and node-canvas
  • msdfgen: The original command line tool that all other MSDF tools are based on
  • Hiero: A tool for generating both bitmaps and SDF fonts

How to use SDF and MSDF fonts

Because SDF and MSDF fonts are also glyph atlases, we can use three-bmfont-text like we did for bitmap fonts. The only difference is that we have to get the glyph out of the distance field with a fragment shader.

Here's how that works for SDF fonts. Since our distance field has a value greater than 0.5 inside our glyph and less than 0.5 outside, we need to alpha test each pixel in a fragment shader to make sure we only render pixels with a value of at least 0.5, rendering just the inside of the glyphs.

const fragmentShader = `

  uniform vec3 color;
  uniform sampler2D map;
  varying vec2 vUv;
  
  void main(){
    vec4 texColor = texture2D(map, vUv);
    // Only render the inside of the glyph.
    float alpha = step(0.5, texColor.a);

    gl_FragColor = vec4(color, alpha);
    if (gl_FragColor.a < 0.0001) discard;
  }
`;

const vertexShader = `
  varying vec2 vUv;
  void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    vUv = uv;
  }
`;

let material = new THREE.ShaderMaterial({
  fragmentShader, vertexShader,
  uniforms: {
    map: new THREE.Uniform(glyphs),
    color: new THREE.Uniform(new THREE.Color(0xff0000))
  }
})

Similarly, we can import the SDF shader from three-bmfont-text, which comes with antialiasing out of the box, and use it directly on a RawShaderMaterial:

let SDFShader = require('three-bmfont-text/shaders/sdf');
let material = new THREE.RawShaderMaterial(SDFShader({
  map: texture,
  transparent: true,
  color: 0x000000
}));

MSDF fonts are a little different. They recreate sharp corners from the intersections of two color channels: two or more channels have to agree on a pixel for it to count. Before doing any alpha testing, we need to take the median of the three color channels to see where they agree:

const fragmentShader = `

  uniform vec3 color;
  uniform sampler2D map;
  varying vec2 vUv;

  float median(float r, float g, float b) {
    return max(min(r, g), min(max(r, g), b));
  }
  
  void main(){
    vec4 texColor = texture2D(map, vUv);
    // Only render the inside of the glyph.
    float sigDist = median(texColor.r, texColor.g, texColor.b) - 0.5;
    float alpha = step(0.0, sigDist);
    gl_FragColor = vec4(color, alpha);
    if (gl_FragColor.a < 0.0001) discard;
  }
`;
const vertexShader = `
  varying vec2 vUv;
  void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    vUv = uv;
  }
`;

let material = new THREE.ShaderMaterial({
  fragmentShader, vertexShader,
  uniforms: {
    map: new THREE.Uniform(glyphs),
    color: new THREE.Uniform(new THREE.Color(0xff0000))
  }
})

Again, we can also import from three-bmfont-text using its MSDFShader which also comes with antialiasing out of the box. Then we can use it directly on a RawShaderMaterial:

let MSDFShader = require('three-bmfont-text/shaders/msdf');
let material = new THREE.RawShaderMaterial(MSDFShader({
  map: texture,
  transparent: true,
  color: 0x000000
}));

Demo: Star Wars intro

The Star Wars crawl intro is a good example where MSDF and SDF fonts work well because the effect needs the text at multiple sizes. We can use a single MSDF and the text always looks sharp! Although, sadly, three-bmfont-text doesn't support justified text yet. Applying left justification would make for a more balanced presentation.

For the light saber effect, I'm raycasting onto an invisible plane the size of the text, drawing onto a canvas that's the same size, and sampling that canvas by mapping the scene position to the texture coordinates.

Bonus tip: Generating 3D text with height maps

Aside from font geometries, all the techniques we’ve covered generate strings or glyphs on a single quad. If you want to build 3D geometries out of a flat texture, your best choice is to use a height map.

A height map is a technique where the geometry's height is bumped up using a texture. It's normally used to generate mountains in games, but it turns out to be useful for rendering text as well.

The only caveat is that you’ll need a lot of faces for the text to look smooth.
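
Here's a hedged sketch of the idea using ThreeJS's built-in displacement mapping, reusing the createTextCanvas helper from the canvas texture section; the segment counts and displacement scale are placeholder values, and scene is assumed to exist:

// White glyph pixels get pushed up; the transparent background stays flat.
const texture = new THREE.Texture(createTextCanvas("3D"));
texture.needsUpdate = true;

// Lots of segments so the displaced text looks smooth.
const geometry = new THREE.PlaneBufferGeometry(4, 1, 256, 64);
const material = new THREE.MeshStandardMaterial({
  map: texture,
  displacementMap: texture,
  displacementScale: 0.25, // how far the glyphs are extruded
});
scene.add(new THREE.Mesh(geometry, material));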

Further reading

Different situations call for different techniques. Nothing we saw here is a silver bullet and they all have their advantages and disadvantages.

There are a lot of tools and libraries out there to help make the most of WebGL text, most of which actually originate from outside WebGL. If you want to keep learning, I highly recommend you go beyond WebGL and check out some of these links:

High-speed Light Trails in Three.js

Sometimes I tactically check Pinterest for inspiration and creative exploration. Although one could also call it chronic procrastinating, I always find captivating ideas for new WebGL projects. That’s the way I started my last water distortion effect.

Today’s tutorial is inspired by this alternative Akira poster. It has this beautiful traffic time lapse with infinite lights fading into the distance:

Akira

Based on this creative effect, I decided to re-create the poster vibe but make it real-time, infinite and also customizable. All in the comfort of your browser!

Through this article, we’ll use Three.js and learn how to:

  1. instantiate geometries to create thousands (up to millions) of lights
  2. make the lights move in an infinite loop
  3. create frame rate independent animations to keep them consistent on all devices
  4. and finally, create modular distortions to ease the creation of new distortions or changes to existing ones

It’s going to be an intermediate tutorial, and we’re going to skip over the basic Three.js setup. This tutorial assumes that you are familiar with the basics of Three.js.

Preparing the road and camera

To begin we’ll create a new Road class to encapsulate all the logic for our plane. It’s going to be a basic PlaneBufferGeometry with its height being the road’s length.

We want this plane to be flat on the ground and going further away. But Three.js creates a vertical plane at the center of the scene, so we'll rotate it on the x-axis to make it flat on the ground.

We'll also move it by half its length on the z-axis to position the start of the plane at the center of the scene.

We’re moving it on the z-axis because position translation happens after the rotation. While we set the plane’s length on the y-axis, after the rotation, the length is on the z-axis.

export class Road {
  constructor(webgl, options) {
    this.webgl = webgl;
    this.options = options;
  }
  init() {
    const options = this.options;
    const geometry = new THREE.PlaneBufferGeometry(
      options.width,
      options.length,
      20,
      200
    );
    const material = new THREE.ShaderMaterial({ 
       	fragmentShader, 
        vertexShader,
        uniforms: {
           uColor:  new THREE.Uniform(new THREE.Color(0x101012)) 
        }
    });
    const mesh = new THREE.Mesh(geometry, material);

    mesh.rotation.x = -Math.PI / 2;
    mesh.position.z = -options.length / 2;

    this.webgl.scene.add(mesh);
  }
}
const fragmentShader = `
    uniform vec3 uColor;
	void main(){
        gl_FragColor = vec4(uColor,1.);
    }
`;
const vertexShader = `
	void main(){
        vec3 transformed = position.xyz;
        gl_Position = projectionMatrix * modelViewMatrix * vec4(transformed.xyz, 1.);
	}
`

After rotating our plane, you’ll notice that it disappeared. It’s exactly lined up with the camera. We’ll have to move the camera a bit up the y-axis for a better shot of the plane.

We’ll also instantiate and initiate our plane and move it on the z-axis a bit to avoid any issues when we add the distortion later on:

class App extends WebGLApp { // assumed boilerplate base class, hence super()
	constructor(container, options){
		super(container);
		
        this.camera.position.z = -4;
        this.camera.position.y = 7;
        this.camera.position.x = 0;
        
        this.road = new Road(this, options);
	}
	init(){
        this.road.init();
        this.tick();
	}
}

If something is not working or looking right, zooming out the camera in the z-axis can help bring things into perspective.

Creating the lights

For the lights, we’ll create a CarLights class with a single tube geometry. We’ll use this single tube geometry as a base for all other lights.

All our tubes are going to have different lengths and radii. So, we’ll set the original tube’s length and radius to 1. Then, in the tube’s vertex shader, we’ll multiply the original length/radius by the desired values, resulting in the tube getting its final length and radius.

Three.js makes TubeGeometries using a Curve. To give it that length of 1, we'll create the tube with a LineCurve3 with its endpoint at -1 on the z-axis.

import * as THREE from "three";
export class CarLights {
  constructor(webgl, options) {
    this.webgl = webgl;
    this.options = options;
  }
  init() {
      const options = this.options;
    let curve = new THREE.LineCurve3(
      new THREE.Vector3(0, 0, 0),
      new THREE.Vector3(0, 0, -1)
    );
    let baseGeometry = new THREE.TubeBufferGeometry(curve, 25, 1, 8, false);
    let material = new THREE.MeshBasicMaterial({ color: 0x545454 });
    let mesh = new THREE.Mesh(baseGeometry, material);
	
      this.mesh = mesh;
    this.webgl.scene.add(mesh);
  }
}

Instantiating the lights

Although some lights are longer or thicker than others, they all share the same geometry. Instead of creating a bunch of meshes for each light, and causing lots of draw calls, we can take advantage of instantiation.

Instantiation is the equivalent of telling WebGL “Hey buddy, render this SAME geometry X amount of times”. This process allows you to reduce the amount of draw calls to 1.

Although it’s the same result, rendering X objects, the process is very different. Let’s compare it with buying 50 chocolates at a store:

A draw call is the equivalent of going to the store, buying only one chocolate and then coming back. Then we repeat the process for all 50 chocolates. Paying for the chocolate (rendering) at the store is pretty fast, but going to the store and coming back (draw calls) takes a little bit of time. The more draw calls, the more trips to the store, the more time.

With instantiation, we’re going to the store and buying all 50 chocolates and coming back. You still have to go and come back from the store (draw call) one time. But you saved up those 49 extra trips.

A fun experiment to test this even further: Try to delete 50 different files from your computer, then try to delete just one file of equivalent size to all 50 combined. You’ll notice that even though it’s the same combined file size, the 50 files take more time to be deleted than the single file of equivalent size 😉

Coming back to the code: to instantiate we’ll copy our tubeGeometry over to an InstancedBufferGeometry. Then we’ll tell it how many instances we’ll need. In our case, it’s going to be a number multiplied by 2 because we want two lights per “car”.

Next we’ll have to use that instanced geometry to create our mesh.

class CarLights {
    ...
	init(){
        ...
        let baseGeometry = new THREE.TubeBufferGeometry(curve, 25, 1, 8, false);
        let instanced = new THREE.InstancedBufferGeometry().copy(baseGeometry);
        instanced.maxInstancedCount = options.nPairs * 2;
        ...
        // Use "Instanced" instead of "geometry"
        var mesh = new THREE.Mesh(instanced, material);
    }
}

Although it looks the same, Three.js is now rendering 100 tubes in the same position. To move them to their respective positions we'll use an InstancedBufferAttribute.

While a regular BufferAttribute describes the base shape (its position, uvs, and normals), an InstancedBufferAttribute describes each instance of the base shape. In our case, each instance is going to have a different offset aOffset and a different radius/length aMetrics.

At render time, each instance passes through the vertex shader, and WebGL gives us the values corresponding to that instance. Then we can position the instances using those values.

We’ll loop over all the light pairs and calculate their XYZ position:

  1. For the X-axis, we'll calculate the center of its lane using the width of the car, how separated the lights are, and a random offset.
  2. For its Y-axis, we’ll push it up by its radius to make sure it’s on top of the road.
  3. Finally, we’ll give it a random Z-offset based on the length of the road, putting some lights further away than others.

At the end of the loop, we’ll add the offset twice. Once per each light, with only the x-offset as a difference.

class CarLights {
    ...
    init(){
        ...
        let aOffset = [];

        let sectionWidth = options.roadWidth / options.roadSections;

        for (let i = 0; i < options.nPairs; i++) {
          let radius = 1.;
          // 1a. Get its lane index
          // Instead of random, keep lights per lane consistent
          let section = i % 3;

          // 1b. Get its lane's centered position
          let sectionX =
            section * sectionWidth - options.roadWidth / 2 + sectionWidth / 2;
          let carWidth = 0.5 * sectionWidth;
          let offsetX = 0.5 * Math.random();

          let offsetY = radius * 1.3;

          // 1c. Give it a random z-offset based on the road's length
          let offsetZ = Math.random() * options.length;

          aOffset.push(sectionX - carWidth / 2 + offsetX);
          aOffset.push(offsetY);
          aOffset.push(-offsetZ);

          aOffset.push(sectionX + carWidth / 2 + offsetX);
          aOffset.push(offsetY);
          aOffset.push(-offsetZ);
        }
        // Add the offset to the instanced geometry.
        instanced.addAttribute(
          "aOffset",
          new THREE.InstancedBufferAttribute(new Float32Array(aOffset), 3, false)
        );
        ...
    }
}

Now that we've added our aOffset attribute, let's go ahead and use it in the vertex shader like a regular BufferAttribute.

We'll replace our MeshBasicMaterial with a ShaderMaterial and create a vertex shader where we'll add aOffset to the position:

class CarLights {
	init(){
		...
		const material = new THREE.ShaderMaterial({
			fragmentShader,
			vertexShader,
			uniforms: {
				uColor: new THREE.Uniform(new THREE.Color(0xfafafa))
			}
		})
		...
	}
}
const fragmentShader = `
uniform vec3 uColor;
  void main() {
      vec3 color = vec3(uColor);
      gl_FragColor = vec4(color,1.);
  }
`;

const vertexShader = `
attribute vec3 aOffset;
  void main() {
    vec3 transformed = position.xyz;

    // Keep them separated to make the next step easier!
    transformed.z = transformed.z + aOffset.z;
    transformed.xy += aOffset.xy;

    vec4 mvPosition = modelViewMatrix * vec4(transformed, 1.);
    gl_Position = projectionMatrix * mvPosition;
  }
`;


Demo: https://codesandbox.io/s/infinite-lights-02-road-and-lights-coznb

Depending on where you look at the tubes from, you'll notice that they might look odd. By default, Three.js materials don't render the back side of faces (side: THREE.FrontSide).

While we could fix it by changing it to side: THREE.DoubleSide to render all sides, our tubes are going to be small and fast enough that you won't be able to notice the back faces aren't rendered. We can keep it like that for the sake of performance.

Giving tubes a different length and radius

Creating our tube with a length and radius of 1 was crucial for this section to work. Now we can set the radius and length of each instance just by multiplying in the vertex shader: 1 * desiredRadius = desiredRadius.

Let's use the same loop to create a new InstancedBufferAttribute called aMetrics. We'll store the length and radius of each instance here.

Remember that we push to the array twice, once for each light in the pair.

class CarLights {
	...
	init(){
	...
	let aMetrics = [];
	for (let i = 0; i < options.nPairs; i++) {
      // We give it a minimum value to make sure the lights aren't too thin or short.
      // Give it some randomness but keep it over 0.1
      let radius = Math.random() * 0.1 + 0.1;
      // Give it some randomness but keep it over length * 0.02
      let length =
        Math.random() * options.length * 0.08 + options.length * 0.02;

      aMetrics.push(radius);
      aMetrics.push(length);

      aMetrics.push(radius);
      aMetrics.push(length);
    }
    instanced.addAttribute(
      "aMetrics",
      new THREE.InstancedBufferAttribute(new Float32Array(aMetrics), 2, false)
    );
    ...
	}
}

Note that in the shader below we multiply the position by aMetrics before adding any aOffset. This expands the tubes from their center, and then moves them to their position.

...
const vertexShader = `
attribute vec3 aOffset;
attribute vec2 aMetrics;
  void main() {
    vec3 transformed = position.xyz;

    float radius = aMetrics.r;
    float len = aMetrics.g;

    // 1. Set the radius and length
    transformed.xy *= radius;
    transformed.z *= len;

    // 2. Then move the tubes
    transformed.z = transformed.z + aOffset.z;
    transformed.xy += aOffset.xy;

    vec4 mvPosition = modelViewMatrix * vec4(transformed, 1.);
    gl_Position = projectionMatrix * mvPosition;
  }
`;

Positioning the lights

We want to have two roads of lights coming from different directions. Let's create a second CarLights instance and move each one to its respective position. To center them both, we'll move them by half the middle island's width and half the road's width.

We'll also give each light its color, and modify the material to use that instead:

class App {
    constructor(){
        this.leftLights  = new CarLights(this, options, 0xff102a);
        this.rightLights = new CarLights(this, options, 0xfafafa);
    }
	init(){
		...
		
        this.leftLights.init();
        this.leftLights.mesh.position.setX(
           -options.roadWidth / 2 - options.islandWidth / 2
        );
        this.rightLights.init();
        this.rightLights.mesh.position.setX(
           options.roadWidth / 2 + options.islandWidth / 2
        );

	}
}
class CarLights {
	constructor(webgl, options, color){
		this.color = color;
		...
	}
        init(){
            ...
            const material = new THREE.ShaderMaterial({
                fragmentShader, 
                vertexShader,
                    uniforms: {
                        uColor: new THREE.Uniform(new THREE.Color(this.color))
                    }
            })
            ...
        }
}

Looking great! We can already start seeing how the project is coming together!

Moving and looping the lights

Because we created the tube's curve on the z-axis, moving the lights is only a matter of adding and subtracting from the z-axis. We'll use the elapsed time uTime because time is always moving and it's pretty consistent.

Let's begin with adding a uTime uniform and an update method. Then our App class can update the time on both our CarLights. And finally, we'll add time to the z-axis on the vertex shader:

class CarLights {
	init(){
		...
		const material = new THREE.ShaderMaterial({
			fragmentShader, vertexShader,
            	uniforms: {
                    uColor: new THREE.Uniform(new THREE.Color(this.color)),
                    uTime: new THREE.Uniform(0),
                }
		})
		...
	}
        update(t){
            this.mesh.material.uniforms.uTime.value = t;
        }
}
const vertexShader = `
attribute vec3 aOffset;
attribute vec2 aMetrics;
uniform float uTime;
  void main() {
    vec3 transformed = position.xyz;

    float radius = aMetrics.r;
    float len = aMetrics.g;
    transformed.xy *= radius; 
    transformed.z *= len;

            // 1. Add time and its position to make it move
            float zOffset = uTime + aOffset.z;
		
            // 2. Then place them in the correct position
            transformed.z += zOffset;

    transformed.xy += aOffset.xy;
	
    vec4 mvPosition = modelViewMatrix * vec4(transformed,1.);
    gl_Position = projectionMatrix * mvPosition;
	}
`;
class App {
  ...
  update(delta) {
    let time = this.clock.elapsedTime;
    this.leftLights.update(time);
    this.rightLights.update(time);
  }
}

It moves ultra-slow, but it moves!

Let's create a new uniform uSpeed and multiply it with uTime to make the animation go faster. Because each road has to go to a different side we'll also add it to the CarLights constructor to make it customizable.

class CarLights {
  constructor(webgl, options, color, speed) {
    ...
    this.speed = speed;
  }
	init(){
		...
		const material = new THREE.ShaderMaterial({
			fragmentShader, vertexShader,
            	uniforms: {
                    uColor: new THREE.Uniform(new THREE.Color(this.color)),
                    uTime: new THREE.Uniform(0),
                    uSpeed: new THREE.Uniform(this.speed)
                }
		})
		...
	}
    ...
}
const vertexShader = `
attribute vec3 aOffset;
attribute vec2 aMetrics;
uniform float uTime;
uniform float uSpeed;
  void main() {
    vec3 transformed = position.xyz;

    // 1. Set the radius and length
    float radius = aMetrics.r;
    float len = aMetrics.g;
    transformed.xy *= radius; 
    transformed.z *= len;

    // 2. Add time and its position to make it move
    float zOffset = uTime * uSpeed + aOffset.z;

    // 3. Then place them in the correct position
    transformed.z += zOffset;

    transformed.xy += aOffset.xy;
	
    vec4 mvPosition = modelViewMatrix * vec4(transformed,1.);
    gl_Position = projectionMatrix * mvPosition;
}
`;

Now that it's fast, let's make it loop.

We'll use the modulo operator mod to find the remainder of z-offset zOffset divided by the total road length uTravelLength. Getting only the remainder makes zOffset loop whenever it goes over uTravelLength.

Then, we'll subtract that from the z-axis and also add the length len to make it loop outside of the camera's view. And that's looping tubes!

Let's go ahead and add the uTravelLength uniform to our material:

class CarLights {
	init(){
		...
		const material = new THREE.ShaderMaterial({
			fragmentShader, vertexShader,
            	uniforms: {
                    uColor: new THREE.Uniform(new THREE.Color(this.color)),
                    uTime: new THREE.Uniform(0),
                    uSpeed: new THREE.Uniform(this.speed),
                    uTravelLength: new THREE.Uniform(options.length),
                }
		})
		...
	}
}

And let's modify the vertex shaders zOffset to make it loop:

const vertexShader = `
attribute vec3 aOffset;
attribute vec2 aMetrics;
uniform float uTime;
uniform float uSpeed;
uniform float uTravelLength;
  void main() {
    vec3 transformed = position.xyz;
    
    float radius = aMetrics.r;
    float len = aMetrics.g;
    transformed.xy *= radius; 
    transformed.z *= len;

        float zOffset = uTime * uSpeed + aOffset.z;
        // 1. Mod by uTravelLength to make it loop whenever it goes over
        // 2. Add len to make it loop a little bit later
        zOffset = len - mod(zOffset, uTravelLength);

   // Keep them separated to make the next step easier!
   transformed.z = transformed.z + zOffset;
   transformed.xy += aOffset.xy;

   vec4 mvPosition = modelViewMatrix * vec4(transformed,1.);
   gl_Position = projectionMatrix * mvPosition;
}
`;

If you have a hawk's eye for faulty code, you'll notice the loop isn't perfect. Behind the camera, the tubes go beyond the road's limits (push the camera back to see it in action). But for our use case, it does the job; imperfect details outside of the camera's view don't matter.

Going faster and beyond

When holding left click we want our scene to go Speed Racer mode. Faster, and with a wider camera view.

Because the tube's speed is based on time, we'll add an extra offset to time whenever the left click is down. To make this transition extra smooth, we'll use linear interpolation (lerp) for the speedUp variable.

Note: We keep the timeOffset separate from the actual clock's time. Mutating the clock's time is never a good idea.

function lerp(current, target, speed = 0.1, limit = 0.001) {
  let change = (target - current) * speed;
  if (Math.abs(change) < limit) {
    change = target - current;
  }
  return change;
}

class App {
	constructor(){
		...
		this.speedUpTarget = 0.;
		this.speedUp = 0;
		this.timeOffset = 0;
		this.onMouseDown = this.onMouseDown.bind(this);
		this.onMouseUp = this.onMouseUp.bind(this);
	}
	init(){
		...
        this.container.addEventListener("mousedown", this.onMouseDown);
        this.container.addEventListener("mouseup", this.onMouseUp);
        this.container.addEventListener("mouseout", this.onMouseUp);
	}
  onMouseDown(ev) {
    this.speedUpTarget = 0.1;
  }
  onMouseUp(ev) {
    this.speedUpTarget = 0;
  }
  update(delta){
    // Frame-dependent
    this.speedUp += lerp(
      this.speedUp,
      this.speedUpTarget,
      // 10% each frame
      0.1,
      0.00001
    );
    // Also frame-dependent
    this.timeOffset += this.speedUp;

    let time = this.clock.elapsedTime + this.timeOffset;
    ...
  }
}

This is a totally functional and valid animation for our super speed mode; after all, it works. But it'll work differently depending on your Frames Per Second (FPS).

Frame rate independent speed up

The issue with the code above is that every frame we are adding a flat amount to the speed. This animation's speed depends on the frame rate.

It means if your frame rate suddenly becomes lower, or your frame rate was low to begin with, the animation is going to become slower as well. And if your frame rate is higher, the animation is going to speed up.

The result is animations that run faster or slower depending on how many frames per second your computer can achieve. A frame rate dependent animation that takes 2 seconds at 30fps takes only 1 second at 60fps.

Our goal is to animate things using real-time. For all computers, the animations should always take X amount of seconds.

Looking back at our code, we have two animations that are frame rate dependent:

  • the speedUp's linear interpolation by 0.1 each frame
  • adding speedUp to timeOffset each frame

Adding speedUp to timeOffset is a linear process; it only depends on the speedUp variable. So, we can make it frame rate independent by multiplying it by how many seconds have passed since the last frame (delta).

This one-line change makes the addition happen at a rate of this.speedUp per second. You might need to bump up the speed, since the change now spreads the addition over a whole second.

class App {
	update(delta){
		...
		this.timeOffset += this.speedUp * delta;
		...
	}
}

Making the speedUp linear interpolation frame rate independent requires a little bit more math.

In the previous case, adding this.speedUp was a linear process, only dependent on the speedUp value. To make it frame rate independent we used another linear process: multiplying it by delta.

In the case of linear interpolation (lerp), we are trying to move towards the target 10% of the difference each time. This is not a linear process but an exponential process. To make it frame rate independent, we need another exponential process that involves delta.

We'll use the functions found in this article about making lerp frame rate independent.

Instead of moving towards the target 10% each frame, we'll move towards it using an exponential function of the time delta instead.

let coefficient = 0.1;
// An exponential decay based on delta: 1 - 2^(-coefficient * delta)
let lerpT = 1 - Math.pow(2, -coefficient * delta);
this.speedUp += lerp(
  this.speedUp,
  this.speedUpTarget,
  lerpT,
  0.00001
);

This modification completely changes how our coefficient works. Now, a coefficient of 1.0 moves halfway to the target each second.

If we want to use our old coefficient of 0.1, which we know already works fine at 60fps, we can convert it into the new system like this:

let coefficient = -60*Math.log2(1 - 0.1);
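To sanity-check the conversion, here's a quick sketch, assuming the 1 - 2^(-coefficient * delta) form from the snippet above. At 60fps, the converted coefficient should land right back on the old 10% per frame:

const oldCoefficient = 0.1;
const coefficient = -60 * Math.log2(1 - oldCoefficient);
const delta = 1 / 60; // one frame at 60fps

const lerpT = 1 - Math.pow(2, -coefficient * delta);
console.log(lerpT); // ~0.1, the old per-frame amount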

Plot twist: math is actually hard. Although there are some great links out there explaining how all the math makes sense, some of it still flies over my head. If you know more about the theory of why all of this works, feel free to reach out or type it in the comments. I would love to have a chat!

Repeat the process for the camera's Field Of View camera.fov and we also get a frame rate independent animation for the fov. We'll reuse the same lerpT to make it easier.

class App {
	constructor(){
		...
        this.fovTarget = 90;
        ...
	}
  onMouseDown(ev) {
    this.fovTarget = 140;
    ...
  }
  onMouseUp(ev) {
    this.fovTarget = 90;
     ...
  }
  update(delta){
      ...
    let fovChange = lerp(this.camera.fov, this.fovTarget, lerpT );
    if (fovChange !== 0) {
      this.camera.fov += fovChange * delta * 6.;
      this.camera.updateProjectionMatrix();
    }
    ...
    
  }
}

Note: Don't forget to update its transformation matrix after you are done with the changes or it won't update in the GPU.

Modularized distortion

The distortion of each object happens on the vertex shader. And as you can see, all objects share the same distortion. But GLSL doesn't have a module system unless you add something like glslify. If you want to reuse and swap pieces of GLSL code, you have to create that system yourself with JavaScript.

Alternatively, if you have only one or two shaders that need distortion, you can always hard-code the distortion GLSL into each mesh's shader. Then, update each one every time you make a change to the distortion. But try to keep more than two shaders in sync and you'll start going insane quickly.

In my case, I chose to keep my sanity and create my own little system. This way I could create multiple distortions and play around with the values for the different demos.

Each distortion is an object with three main properties:

  1. distortion_uniforms: The uniforms this distortion is going to need. Each mesh takes care of adding these into their material.
  2. distortion_chunk: The GLSL code that exposes the getDistortion function to the shaders that implement it. getDistortion receives a normalized value progress indicating how far along the road the point is. It returns the distortion of that specific position.
  3. (Optional) getJS: The GLSL code ported to JavaScript. This is useful for creating JS interactions that follow the curve, like the camera rotating to face the road as we move along. We'll sketch one after the code below.
const distortion_uniforms = {
  uDistortionX: new THREE.Uniform(new THREE.Vector2(80, 3)),
  uDistortionY: new THREE.Uniform(new THREE.Vector2(-40, 2.5))
};

const distortion_vertex = `
#define PI 3.14159265358979
  uniform vec2 uDistortionX;
  uniform vec2 uDistortionY;

  float nsin(float val){
    return sin(val) * 0.5 + 0.5;
  }
  vec3 getDistortion(float progress){
    progress = clamp(progress, 0., 1.);
    float xAmp = uDistortionX.r;
    float xFreq = uDistortionX.g;
    float yAmp = uDistortionY.r;
    float yFreq = uDistortionY.g;
    return vec3(
      xAmp * nsin(progress * PI * xFreq - PI / 2.),
      yAmp * nsin(progress * PI * yFreq - PI / 2.),
      0.
    );
  }
`;

const myCustomDistortion = {
    uniforms: distortion_uniforms,
    getDistortion: distortion_vertex,
}
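As promised, here is a sketch of what the optional getJS property could look like: a hand-ported JavaScript version of the same GLSL math. The Vector3 return type and the hard-coded amplitudes/frequencies mirroring the uniforms are assumptions for illustration:

// Hypothetical JS port of the GLSL getDistortion above.
const nsin = val => Math.sin(val) * 0.5 + 0.5;

myCustomDistortion.getJS = progress => {
  progress = Math.min(Math.max(progress, 0), 1);
  const xAmp = 80, xFreq = 3;    // mirrors uDistortionX
  const yAmp = -40, yFreq = 2.5; // mirrors uDistortionY
  return new THREE.Vector3(
    xAmp * nsin(progress * Math.PI * xFreq - Math.PI / 2),
    yAmp * nsin(progress * Math.PI * yFreq - Math.PI / 2),
    0
  );
};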

Then, you pass the distortion object as a property in the options given when instantiating the main App class like so:

const myApp = new App(
	container, 
	{
        ... // Bunch of other options
		distortion: myCustomDistortion,
        ...
    }
)
...

From here each object can take the distortion from the options and use it as it needs.

Both the CarLights and Road classes are going to add distortion.uniforms to their material and modify their shader using Three.js' onBeforeCompile:

const material = new THREE.ShaderMaterial({
	...
	uniforms: Object.assign(
		{...}, // The original uniforms of this object
		options.uniforms
	)
})

material.onBeforeCompile = shader => {
  shader.vertexShader = shader.vertexShader.replace(
    "#include <getDistortion_vertex>",
    options.distortion.getDistortion
  );
};

Before Three.js sends our shaders to WebGL, it checks its custom GLSL to inject any ShaderChunks your shader needs. onBeforeCompile is a function that runs before Three.js compiles your shader into valid GLSL code, making it easy to extend any built-in materials.

In our case, we'll use onBeforeCompile to inject our distortion's code, simply to avoid the hassle of injecting it another way.

As it stands now, we aren't injecting any code. We first need to add #include <getDistortion_vertex> to our shaders.

In our CarLights vertex shader, we need to map the z-position to its distortion progress. And we'll add the distortion after all the other math, right at the end:

// Car Lights Vertex shader
const vertexShader = `
attribute vec3 aOffset;
attribute vec2 aMetrics;
uniform float uTime;
uniform float uSpeed;
uniform float uTravelLength;
#include <getDistortion_vertex>
  void main() {
    ...

    // Map z-position to progress: a range of 0 to 1.
    float progress = abs(transformed.z / uTravelLength);
    transformed.xyz += getDistortion(progress);

    vec4 mvPosition = modelViewMatrix * vec4(transformed, 1.);
    gl_Position = projectionMatrix * mvPosition;
  }
`;

In our Road class, the plane looks flat and pointing towards negative z because we rotated it, but this mesh rotation happens after the vertex shader. In the eyes of our shader, our plane is still vertical along the y-axis and placed at the center of the scene.

To get the correct distortion, we need to map the y-axis to progress. First, we'll un-center it by adding uTravelLength / 2., and then we'll normalize it by dividing by uTravelLength.

Also, instead of adding the y-distortion to the y-axis, we'll add it to the z-axis instead. Remember, in the vertex shader, the rotation hasn't happened yet.

// Road Vertex shader
const vertexShader = `
uniform float uTravelLength;
#include <getDistortion_vertex>
	void main(){
		vec3 transformed = position.xyz;

		// Normalize progress to a range of 0 to 1
		float progress = (transformed.y + uTravelLength / 2.) / uTravelLength;
		vec3 distortion = getDistortion(progress);
		transformed.x += distortion.x;
		// The z-axis becomes the y-axis after the mesh rotation.
		transformed.z += distortion.y;
		gl_Position = projectionMatrix * modelViewMatrix * vec4(transformed.xyz, 1.);
	}
`;

And there you have the final result for this tutorial!

Finishing touches

There are a few ways you can expand and better sell the effect of an infinite road in the middle of the night. Like creating more interesting curves and fading the objects into the background with some fog effect to make the lights seem like they are glowing.
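For example, the fog idea can start as small as Three.js' built-in fog. A sketch, assuming your App keeps the scene as this.scene and uses a black background like ours:

// Fade distant objects towards the (black) background.
// The near and far distances are made-up values to tweak per scene.
this.scene.fog = new THREE.Fog(0x000000, 10, 200);

Keep in mind that custom ShaderMaterials only react to fog if you set fog: true on the material and include Three.js' fog shader chunks.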

Final Thoughts

I find that re-creating things from outside of the web and simply doing some creative coding opens me up to a wider range of interesting ideas.

In this tutorial, we learned how to instantiate geometries, create frame rate independent animations, and modularize distortions. And we brought it all together to re-create and put some motion into this awesome poster!

Hopefully, you've also liked working through this tutorial! Let me know what you think in the comments and feel free to reach out to me!

High-speed Light Trails in Three.js was written by Daniel Velasquez and published on Codrops.

Creating a Water-like Distortion Effect with Three.js

In this tutorial we’re going to build a water-like effect with a bit of basic math, a canvas, and postprocessing. No fluid simulation, GPGPU, or any of that complicated stuff. We’re going to draw pretty circles in a canvas, and distort the scene with the result.

We recommend that you get familiar with the basics of Three.js because we’ll omit some of the setup. But don’t worry, most of the tutorial will deal with good old JavaScript and the canvas API. Feel free to chime in if you don’t feel too confident on the Three.js parts.

The effect is divided into two main parts:

  1. Capturing and drawing the ripples to a canvas
  2. Displacing the rendered scene with postprocessing

Let’s start with updating and drawing the ripples since that’s what constitutes the core of the effect.

Making the ripples

The first idea that comes to mind is to use the current mouse position as a uniform and then simply displace the scene and call it a day. But that would mean only having one ripple that always remains at the mouse’s position. We want something more interesting, so we want many independent ripples moving at different positions. For that we’ll need to keep track of each one of them.

We’re going to create a WaterTexture class to manage everything related to the ripples:

  1. Capture every mouse movement as a new ripple in an array.
  2. Draw the ripples to a canvas
  3. Erase the ripples when their lifespan is over
  4. Move the ripples using their initial momentum

For now, let’s begin coding by creating our main App class.

import { WaterTexture } from './WaterTexture';
class App{
    constructor(){
        this.waterTexture = new WaterTexture({ debug: true });
        
        this.tick = this.tick.bind(this);
    	this.init();
    }
    init(){
        this.tick();
    }
    tick(){
        this.waterTexture.update();
        requestAnimationFrame(this.tick);
    }
}
const myApp = new App();

Let’s create our ripple manager WaterTexture with a teeny-tiny 64px canvas.

export class WaterTexture {
  constructor(options) {
    this.size = 64;
    this.radius = this.size * 0.1;
    this.width = this.height = this.size;
    if (options.debug) {
      this.width = window.innerWidth;
      this.height = window.innerHeight;
      this.radius = this.width * 0.05;
    }

    this.initTexture();
    if (options.debug) document.body.append(this.canvas);
  }
    // Initialize our canvas
  initTexture() {
    this.canvas = document.createElement("canvas");
    this.canvas.id = "WaterTexture";
    this.canvas.width = this.width;
    this.canvas.height = this.height;
    this.ctx = this.canvas.getContext("2d");
    this.clear();
  }
  clear() {
    this.ctx.fillStyle = "black";
    this.ctx.fillRect(0, 0, this.canvas.width, this.canvas.height);
  }
  update(){}
}

Note that for development purposes there is a debug option to mount the canvas to the DOM and give it a bigger size. In the end result we won’t be using this option.

Now we can go ahead and start adding some of the logic to make our ripples work:

  1. On constructor() add
    1. this.points array to keep all our ripples
    2. this.radius for the max-radius of a ripple
    3. this.maxAge for the max-age of a ripple
  2. On update(),

    1. clear the canvas
    2. sing happy birthday to each ripple, and remove those older than this.maxAge
    3. draw each ripple
  3. Create addPoint(), which is going to take a normalized position and add a new point to the array.
class WaterTexture {
    constructor(){
        this.size = 64;
        this.radius = this.size * 0.1;

        this.points = [];
        this.maxAge = 64;
        ...
    }
    ...
    addPoint(point){
        this.points.push({ x: point.x, y: point.y, age: 0 });
    }
    update(){
        this.clear();
        this.points.forEach((point, i) => {
            point.age += 1;
            if(point.age > this.maxAge){
                this.points.splice(i, 1);
            }
        })
        this.points.forEach(point => {
            this.drawPoint(point);
        })
    }
}

Note that addPoint() receives normalized values, from 0 to 1. If the canvas happens to resize, we can use the normalized points to draw at the correct size.

Let’s create drawPoint(point) to start drawing the ripples: Convert the normalized point coordinates into canvas coordinates. Then, draw a happy little circle:

class WaterTexture {
    ...
    drawPoint(point) {
        // Convert the normalized position into canvas coordinates
        let pos = {
            x: point.x * this.width,
            y: point.y * this.height
        }
        const radius = this.radius;

        this.ctx.beginPath();
        this.ctx.arc(pos.x, pos.y, radius, 0, Math.PI * 2);
        this.ctx.fill();
    }
}

For our ripples to have a strong push at the center and a weak force at the edges, we'll make our circle a radial gradient, which loses opacity as it moves to the edges.

Radial Gradients create a dithering-like effect when a lot of them overlap. It looks stylish but not as smooth as what we want it to look like.

To make our ripples smooth, we’ll use the circle’s shadow instead of using the circle itself. Shadows give us the gradient-like result without the dithering-like effect. The difference is in the way shadows are painted to the canvas.

Since we only want to see the shadow and not the flat-colored circle, we’ll give the shadow a high offset. And we’ll move the circle in the opposite direction.

As the ripple gets older, we'll reduce its opacity until it disappears:

export class WaterTexture {
    ...
    drawPoint(point) {
        ... 
        const ctx = this.ctx;
        // Lower the opacity as it gets older
        let intensity = 1.;
        intensity = 1. - point.age / this.maxAge;
        
        let color = "255,255,255";
        
        let offset = this.width * 5.;
        // 1. Give the shadow a high offset.
        ctx.shadowOffsetX = offset; 
        ctx.shadowOffsetY = offset; 
        ctx.shadowBlur = radius * 1; 
        ctx.shadowColor = `rgba(${color},${0.2 * intensity})`; 

        
        this.ctx.beginPath();
        this.ctx.fillStyle = "rgba(255,0,0,1)";
        // 2. Move the circle to the other direction of the offset
        this.ctx.arc(pos.x - offset, pos.y - offset, radius, 0, Math.PI * 2);
        this.ctx.fill();
    }
}

To introduce interactivity, we'll add a mousemove event listener to the App class and send the normalized mouse position to WaterTexture.

import { WaterTexture } from './WaterTexture';
class App {
	...
	init(){
        window.addEventListener('mousemove', this.onMouseMove.bind(this));
        this.tick();
	}
	onMouseMove(ev){
        const point = {
			x: ev.clientX/ window.innerWidth, 
			y: ev.clientY/ window.innerHeight, 
        }
        this.waterTexture.addPoint(point);
	}
}

Great, now we’ve created a disappearing trail of ripples. Now, let’s give them some momentum!

Momentum

To give momentum to a ripple, we need its direction and force. Whenever we create a new ripple, we’ll compare its position with the last ripple. Then we’ll calculate its unit vector and force.

On every update, we’ll update the ripples’ positions with their unit vector and position. And as they get older we’ll move them slower and slower until they retire or go live on a farm. Whatever happens first.

export class WaterTexture {
	...
    constructor(){
        ...
        this.last = null;
    }
    addPoint(point){
        let force = 0;
        let vx = 0;
        let vy = 0;
        const last = this.last;
        if(last){
            const relativeX = point.x - last.x;
            const relativeY = point.y - last.y;
            // Distance formula
            const distanceSquared = relativeX * relativeX + relativeY * relativeY;
            const distance = Math.sqrt(distanceSquared);
            // Calculate Unit Vector
            vx = relativeX / distance;
            vy = relativeY / distance;
            
            force = Math.min(distanceSquared * 10000,1.);
        }
        
        this.last = {
            x: point.x,
            y: point.y
        }
        this.points.push({ x: point.x, y: point.y, age: 0, force, vx, vy });
    }
	
	update(){
        this.clear();
        let agePart = 1. / this.maxAge;
        this.points.forEach((point,i) => {
            let slowAsOlder = (1. - point.age / this.maxAge);
            let force = point.force * agePart * slowAsOlder;
            point.x += point.vx * force;
            point.y += point.vy * force;
            point.age += 1;
            if(point.age > this.maxAge){
                this.points.splice(i, 1);
            }
        })
        this.points.forEach(point => {
            this.drawPoint(point);
        })
    }
}

Note that instead of using the last ripple in the array, we use a dedicated this.last. This way, our ripples always have a point of reference to calculate their force and unit vector.

Let’s fine-tune the intensity with some easings. Instead of just decreasing until it’s removed, we’ll make it increase at the start and then decrease:

const easeOutSine = (t, b, c, d) => {
  return c * Math.sin((t / d) * (Math.PI / 2)) + b;
};

const easeOutQuad = (t, b, c, d) => {
  t /= d;
  return -c * t * (t - 2) + b;
};

export class WaterTexture {
	drawPoint(point){
		...
		let intensity = 1.;
        if (point.age < this.maxAge * 0.3) {
          intensity = easeOutSine(point.age / (this.maxAge * 0.3), 0, 1, 1);
        } else {
          intensity = easeOutQuad(
            1 - (point.age - this.maxAge * 0.3) / (this.maxAge * 0.7),
            0,
            1,
            1
          );
        }
        intensity *= point.force;
        ...
	}
}

Now we're finished with creating and updating the ripples. It's looking amazing.

But how do we use what we have painted to the canvas to distort our final scene?

Canvas as a texture

Let's use the canvas as a texture, hence the name WaterTexture. We are going to draw our ripples on the canvas, and use it as a texture in a postprocessing shader.

First, let's make a texture using our canvas and refresh/update that texture at the end of every update:

import * as THREE from 'three'
class WaterTexture {
	initTexture(){
		...
		this.texture = new THREE.Texture(this.canvas);
	}
	update(){
        ...
		this.texture.needsUpdate = true;
	}
}

By creating a texture of our canvas, we can sample our canvas like we would with any other texture. But how is this useful to us? Our ripples are just white spots on the canvas.

In the distortion shader, we're going to need the direction and intensity of the distortion for each pixel. If you recall, we already have the direction and force of each ripple. But how do we communicate that to the shader?

Encoding data in the color channels

Instead of thinking of the canvas as a place where we draw happy little clouds, we are going to think about the canvas' color channels as places to store our data and read them later on our vertex shader.

In the Red and Green channels, we'll store the unit vector of the ripple. In the Blue channel, we'll store the intensity of the ripple.

Since RGB channels range from 0 to 255, we need to map our data into that range. So, we'll transform the unit vector range (-1 to 1) and the intensity range (0 to 1) into 0 to 255:

class WaterTexture {
    drawPoint(point){
		...

		// Insert the data into the color channels
        // RG = Unit vector
        let red = ((point.vx + 1) / 2) * 255;
        let green = ((point.vy + 1) / 2) * 255;
        // B = Intensity
        let blue = intensity * 255;
        let color = `${red}, ${green}, ${blue}`;

        
        let offset = this.size * 5;
        ctx.shadowOffsetX = offset; 
        ctx.shadowOffsetY = offset; 
        ctx.shadowBlur = radius * 1; 
        ctx.shadowColor = `rgba(${color},${0.2 * intensity})`; 

        this.ctx.beginPath();
        this.ctx.fillStyle = "rgba(255,0,0,1)";
        this.ctx.arc(pos.x - offset, pos.y - offset, radius, 0, Math.PI * 2);
        this.ctx.fill();
    }
}

Note: Remember how we painted the canvas black? When our shader reads that pixel, it's going to apply a distortion of 0, only distorting where our ripples are painting.

Look at the pretty color our beautiful data gives the ripples now!

With that, we're finished with the ripples. Next, we'll create our scene and apply the distortion to the result.

Creating a basic Three.js scene

For this effect, it doesn't matter what we render. So, we'll only have a single plane to showcase the effect. But feel free to create an awesome-looking scene and share it with us in the comments!

Since we're done with WaterTexture, don't forget to turn the debug option to false.

import * as THREE from "three";
import { WaterTexture } from './WaterTexture';

class App {
    constructor(){
        this.waterTexture = new WaterTexture({ debug: false });
        
        this.renderer = new THREE.WebGLRenderer({
          antialias: false
        });
        this.renderer.setSize(window.innerWidth, window.innerHeight);
        this.renderer.setPixelRatio(window.devicePixelRatio);
        document.body.append(this.renderer.domElement);
        
        this.camera = new THREE.PerspectiveCamera(
          45,
          window.innerWidth / window.innerHeight,
          0.1,
          10000
        );
        this.camera.position.z = 50;
        
        this.scene = new THREE.Scene();
        
        this.tick = this.tick.bind(this);
        this.onMouseMove = this.onMouseMove.bind(this);
        
        this.init();
    
    }
    addPlane(){
        let geometry = new THREE.PlaneBufferGeometry(5,5,1,1);
        let material = new THREE.MeshNormalMaterial();
        let mesh = new THREE.Mesh(geometry, material);
        
        window.addEventListener("mousemove", this.onMouseMove);
        this.scene.add(mesh);
    }
    init(){
    	this.addPlane(); 
    	this.tick();
    }
    render(){
        this.renderer.render(this.scene, this.camera);
    }
    tick(){
        this.render();
        this.waterTexture.update();
        requestAnimationFrame(this.tick);
    }
}

Applying the distortion to the rendered scene

We are going to use postprocessing to apply the water-like effect to our render.

Postprocessing allows you to add effects or filters after (post) your scene is rendered (processing). Like any kind of image effect or filter you might see on Snapchat or Instagram, there is a lot of cool stuff you can do with postprocessing.

For our case, we'll render our scene normally with a RenderPass, and apply the effect on top of it with a custom EffectPass.

Let's render our scene with postprocessing's EffectComposer instead of the Three.js renderer.

Note that EffectComposer works by going through its passes on each render. It doesn't render anything unless it has a pass for it. We need to add the render of our scene using a RenderPass:

import { EffectComposer, RenderPass } from 'postprocessing'
class App{
    constructor(){
        ...
		this.composer = new EffectComposer(this.renderer);
         this.clock = new THREE.Clock();
        ...
    }
    initComposer(){
        const renderPass = new RenderPass(this.scene, this.camera);
    
        this.composer.addPass(renderPass);
    }
    init(){
    	this.initComposer();
    	...
    }
    render(){
        this.composer.render(this.clock.getDelta());
    }
}

Things should look about the same. But now we can start adding custom postprocessing effects.

We are going to create the WaterEffect class that extends postprocessing's Effect. It is going to receive the canvas texture in the constructor and make it a uniform in its fragment shader.

In the fragment shader, we'll distort the UVs inside postprocessing's mainUv function using our canvas texture. Postprocessing is then going to take these UVs and sample our regular scene in a distorted way.

Although we'll only use postprocessing's mainUv function, there are a lot of interesting functions you can use. I recommend you check out the wiki for more information!

Since we already have the unit vector and intensity, we only need to multiply them together. But since the texture values are normalized, we need to convert our unit vector from a range of 0 to 1 back into a range of -1 to 1:

import * as THREE from "three";
import { Effect } from "postprocessing";

export class WaterEffect extends Effect {
  constructor(texture) {
    super("WaterEffect", fragment, {
      uniforms: new Map([["uTexture", new THREE.Uniform(texture)]])
    });
  }
}
export default WaterEffect;

const fragment = `
uniform sampler2D uTexture;
#define PI 3.14159265359

void mainUv(inout vec2 uv) {
        vec4 tex = texture2D(uTexture, uv);
		// Convert normalized values into regular unit vector
        float vx = -(tex.r *2. - 1.);
        float vy = -(tex.g *2. - 1.);
		// Normalized intensity works just fine for intensity
        float intensity = tex.b;
        float maxAmplitude = 0.2;
        uv.x += vx * intensity * maxAmplitude;
        uv.y += vy * intensity * maxAmplitude;
    }
`;

We'll then instantiate WaterEffect with our canvas texture and add it as an EffectPass after our RenderPass. Then we'll make sure our composer only renders the last effect to the screen:

import { WaterEffect } from './WaterEffect'
import { EffectPass } from 'postprocessing'
class App{
    ...
	initComposer() {
        const renderPass = new RenderPass(this.scene, this.camera);
        this.waterEffect = new WaterEffect(this.waterTexture.texture);

        const waterPass = new EffectPass(this.camera, this.waterEffect);

        renderPass.renderToScreen = false;
        waterPass.renderToScreen = true;
        this.composer.addPass(renderPass);
        this.composer.addPass(waterPass);
	}
}

And here we have the final result!

An awesome and fun effect to play with!

Conclusion

Through this article, we've created ripples, encoded their data into the color channels and used it in a postprocessing effect to distort our render.

That's a lot of complicated-sounding words! Great work, pat yourself on the back or reach out on Twitter and I'll do it for you 🙂

But there's still a lot more to explore:

  1. Drawing the ripples with a hollow circle
  2. Giving the ripples an actual radial-gradient
  3. Expanding the ripples as they get older
  4. Or using the canvas as a texture technique to create interactive particles as in Bruno's article.
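For instance, the third idea could start from a tiny change inside drawPoint(): scale the radius by the ripple's age. A sketch, reusing the easeOutQuad from before:

// Inside WaterTexture's drawPoint(): grow the ripple as it ages.
// Replaces the plain `const radius = this.radius;` line.
const growth = easeOutQuad(point.age / this.maxAge, 0, 1, 1);
const radius = this.radius * (1 + growth); // up to 2x the base radius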

We hope you enjoyed this tutorial and had a fun time making ripples. If you have any questions, don't hesitate to comment below or on Twitter!

Creating a Water-like Distortion Effect with Three.js was written by Daniel Velasquez and published on Codrops.

A Configurator for Creating Custom WebGL Distortion Effects

In one of our previous tutorials we showed you how to create thumbnail to fullscreen WebGL distortion animations. Today we would like to invite you to build your own personalized effects by using the configurator we’ve created.

We’ll briefly go over some main concepts so you can make full use of the configurator. If you’d like to understand the main idea behind the work, and why the animations behave the way they do in more depth, we highly recommend you to read the main tutorial Creating Grid-to-Fullscreen Animations with Three.js.

Basics of the configurator

The configurator allows you to modify all the details of the effect, making it possible to create unique animations. Even though you don’t have to be a programmer to create your own effect, understanding the options available will give you more insight into what you can achieve with it.

To see your personalized effect in action, either click on the image or drag the Progress bar. The Duration option sets the time of the whole animation.

Under Easings you can control the “rate of change” of your animation. For example:

  • Power1.easeOut: Start really fast but end slowly
  • Power1.easeInOut: Start and end slowly, but go really fast in the middle of the animation
  • Bounce: Bounce around like a basketball

The simplest easings to play around with are Power0-4 with ease-out. If you would like to know the difference between each easing, check out this ease visualizer.

Note that the configurator automatically saves your progress for later use. Feel free to close the page and come back to it later.

Timing, Activation and Transformation

Timing, Activation and Transformation are concepts that come from our previous tutorial. Each one of them has its own list of types, and each type has its own set of options for you to explore.

You can explore them by changing the types, and expanding the respective options tab. When you swap one type for another, your previous set of options is saved in case you want to go back to it.

configurator

Timing

The timing function maps the activation into actual progress for each vertex. Without timing, the activation doesn’t get applied and all the vertices move at the same rate. Set timing type to none to see it in action.

  • SameEnd: The vertices have different start times, but they all end at the same time. Or vice versa.
  • sections: Move by sections, wait for the previous section to finish before starting.

The same activation with a different timing will produce a very different result.

Activation

The activation determines how the plane is going to move to full screen:

  • side: From left to right.
  • corners: From top-left to bottom-right
  • radial: From the position of the mouse
  • And others.

For a visual representation of the current activation, toggle debug activation and start the animation to see it in action.

Transformation

Transform the plane into a different shape or position over the course of the animation:

  • Flip: Flip the plane on the X axis
  • simplex: Move the vertices with noise while transitioning
  • wavy: Make the plane wavy while transitioning
  • And more

Some effects use a seed for their inner workings. You can set the initial seed and determine when this seed is going to be randomized.

Note that although these three concepts allow for a large amount of possible effects, some options won’t work quite well together.

Sharing your effect

To share the effect you can simply copy and share the URL.

We would love to see what you come up with. Please share your effect in the comments or tag us on Twitter using @anemolito and @codrops.

Adding your effect to your site

Now that you made your custom effect, it is time to add it to your site. Let’s see how to do that, step by step.

First, download the code and copy some of the required files over:

  • Three.js: js/three.min.js
  • TweenLite: js/TweenLite.min.js
  • ImagesLoaded: js/imagesloaded.pkgd.min.js (for preloading the images)
  • The effect’s code: js/GridToFullscreenEffect.js
  • TweenLite’s CSSPlugin: js/CSSPlugin.min.js (optional)
  • TweenLite’s EasePack: js/EasePack.min.js (optional; if you use the extra easings)

Include these in your HTML file and make sure to add js/GridToFullscreenEffect.js last.
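As a sketch, the includes could look like this, assuming you copied the files into a js/ folder:

<script src="js/three.min.js"></script>
<script src="js/TweenLite.min.js"></script>
<script src="js/CSSPlugin.min.js"></script>
<script src="js/EasePack.min.js"></script>
<script src="js/imagesloaded.pkgd.min.js"></script>
<!-- The effect's code goes last -->
<script src="js/GridToFullscreenEffect.js"></script>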

Now let’s add the HTML structure for the effect to work. We need two elements:

  • div#app: Where our canvas is going to be
  • div#itemsWrapper: Where our HTML images are going to be
<body>
    <div id="app"></div>
    <div id="itemsWrapper"></div>    
</body>

Note: You can use any IDs or classes you want as long as you use them when instantiating the effect.

Inside #itemsWrapper we are going to have the HTML items for our effect.

Our HTML items inside #itemsWrapper can have almost any structure. The only requirement is that it has two image elements as the first two children of the item.

The first element is for the small-scale image and the second element is the large-scale image.

Aside from that, you can have any caption or description you may want to add at the bottom. Take a look at how we did ours in our previous post:

<div id="app"></div>
<div id="itemsWrapper">
    <figure class="grid__item">
        <img class="grid__item-img" src="img/1.jpg" alt="An image" />
        <img class="grid__item-img grid__item-img--large" src="img/1_large.jpg" />
        <figcaption class="grid__item-caption">
            <h2 class="grid__item-title">Our Item Title</h2>
            <p class="grid__item-text">
                Our Item Description
            </p>
        </figcaption>
    </figure>
    ...
</div>

You may add as many items as you want. If you add enough items to make your container scrollable, make sure to pass your container in the options so the effect can account for its scroll.

With our HTML items in place, let’s get the effect up and running.

We’ll instantiate GridToFullscreenEffect, add our custom options, and initialize it.

<script>
  const transitionEffect = new GridToFullscreenEffect(
        document.getElementById("app"),
        document.getElementById("itemsWrapper"),
      {
          "duration":1.8,
          "timing":{"type":"sameEnd","props":{"latestStart":0.5,"reverse":true}},
          "activation":{"type":"snake","props":{"rows":4}},
          "transformation":{"type":"flipX"},
          "easings":{"toFullscreen":Quint.easeOut,"toGrid":Quint.easeOut}
      }
  );
  transitionEffect.init();
</script>

Our effect is now mounted and working. But clicking on an item makes the image disappear and we end up with a black square.

The effect doesn’t take care of loading the images. Instead, it requires you to give them to the effect whenever they load. This might seem a bit inconvenient, but it allows you to load your images the way it’s most suitable for your application.

You could preload all the images upfront, or you could only load the images that are on screen, and load the other ones when needed. It’s up to how you want to do that.

We decided to preload all the images using imagesLoaded like this:

imagesLoaded(document.querySelectorAll("img"), instance => {
    document.body.classList.remove("loading");

    // Make Images sets for creating the textures.
    let images = [];
    for (var i = 0, imageSet = {}; i < instance.elements.length; i++) {
        let image = {
            element: instance.elements[i],
            image: instance.images[i].isLoaded ? instance.images[i].img : null
        };
        if (i % 2 === 0) {
            imageSet = {};
            imageSet.small = image;
        }

        if (i % 2 === 1) {
            imageSet.large = image;
            images.push(imageSet);
        }
    }
    transitionEffect.createTextures(images);
});

With that last piece of code, our effect is running and it shows the correct images. If you are having troubles with adding it to your site, let us know!

Our Creations

While working on this configurator, we managed to create some interesting results of our own. Here is an example; you can attach the parameters to the URL or use the settings object:

Preset 1

?duration=1.75&toFull=Cubic.easeInOut&toGrid=Cubic.easeInOut&timing=sections%2Csections%2C4&transformation=simplex&activation=side%2Ctop%2Ctrue%2Cbottom%2Ctrue
{"timing":{"type":"sections","props":{"sections":4}},"activation":{"type":"side","props":{"top":true,"bottom":true}},"transformation":{"type":"simplex"},"duration":1.75,"easings":{"toFullscreen":"Cubic.easeInOut","toGrid":"Cubic.easeInOut"}}


Check out all the demos to explore more presets!

We hope you enjoy the configurator and find it useful for creating some unique animations!

A Configurator for Creating Custom WebGL Distortion Effects was written by Daniel Velasquez and published on Codrops.

Creating Grid-to-Fullscreen Animations with Three.js

Animations play a big role in how users feel about your website. They convey a lot of the personality and feel of your site. They also help the user navigate new and already known screens with more ease.

In this tutorial we want to look at how to create some interesting grid-to-fullscreen animations on images. The idea is to have a grid of smaller images and when clicking on one, the image enlarges with a special animation to cover the whole screen. We’ll aim for making them accessible, unique and visually appealing. Additionally, we want to show you the steps for making your own.

The building blocks

Before we can start doing all sorts of crazy animations, timing calculations and reality deformation we need to get the basic setup of the effect ready:

  • Initialize Three.js and the plane we’ll use
  • Position and scale the plane so it is similar to the item’s image whenever the user clicks an item
  • Animate the plane so it covers the complete screen

For the sake of not going too crazy with all the effects we can make, we’ll focus on making a flip effect like the one in our first demo.

GridFullscreen_demo1

Initialization

To begin, let’s make a basic Three.js setup and add a single 1×1 plane which we’ll re-use for the animation of every grid item. Since only one animation can happen at a time, we get better performance by using only one plane for all animations.

This simple change is going to allow us to have any number of HTML items without affecting the performance of the animation.

As a side note, in our approach we decided to only use Three.js for the time of the animation. This means all the items are good old HTML.

This allows our code to have a natural fallback for browsers that don’t have WebGL support. And it also makes our effect more accessible.

class GridToFullscreenEffect {
	...
	init(){
		... 
		const segments = 128;
		var geometry = new THREE.PlaneBufferGeometry(1, 1, segments, segments);
		// We'll be using the shader material later on ;)
		var material = new THREE.ShaderMaterial({
		  side: THREE.DoubleSide
		});
		this.mesh = new THREE.Mesh(geometry, material);
		this.scene.add(this.mesh);
	}
}

Note: We are skipping over the Three.js initialization since it’s pretty basic.

Setting the plane geometry’s size to 1×1 simplifies things a little bit. It removes some of the math involved with calculating the correct scale, since 1 scaled by any number is always that same number.

Positioning and resizing

Now, we’ll resize and position the plane to match the item’s image. To do this, we’ll need to get the item’s getBoundingClientRect. Then we need to transform its values from pixels to the camera’s view units. After that, we need to transform them from being relative to the top left to being relative to the center. Summarized:

  1. Map pixel units to camera’s view units
  2. Make the units relative to the center instead of the top left
  3. Make the position’s origin start on the plane’s center, not on the top left
  4. Scale and position the mesh using these new values
class GridToFullscreenEffect {
...
 onGridImageClick(ev,itemIndex){
	// getBoundingClientRect gives pixel units relative to the top left of the page
	 const rect = ev.target.getBoundingClientRect();
	const viewSize = this.getViewSize();
	
	// 1. Transform pixel units to camera's view units
	const widthViewUnit = (rect.width * viewSize.width) / window.innerWidth;
	const heightViewUnit = (rect.height * viewSize.height) / window.innerHeight;
	let xViewUnit =
	  (rect.left * viewSize.width) / window.innerWidth;
	let yViewUnit =
	  (rect.top * viewSize.height) / window.innerHeight;
	
	// 2. Make units relative to center instead of the top left.
	xViewUnit = xViewUnit - viewSize.width / 2;
	yViewUnit = yViewUnit - viewSize.height / 2;
   

	// 3. Make the origin of the plane's position the center instead of the top left.
	let x = xViewUnit + widthViewUnit / 2;
	let y = -yViewUnit - heightViewUnit / 2;

	// 4. Scale and position mesh
	const mesh = this.mesh;
	// Since the geometry's size is 1. The scale is equivalent to the size.
	mesh.scale.x = widthViewUnit;
	mesh.scale.y = heightViewUnit;
	mesh.position.x = x;
	mesh.position.y = y;

	}
 }
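The snippet above leans on getViewSize(), which we don’t show here. A minimal sketch of it, assuming a perspective camera sitting on the z-axis and looking at the origin:

class GridToFullscreenEffect {
	...
	getViewSize() {
		// Height of the camera's view at z = 0, from the vertical field of view
		const fovInRadians = (this.camera.fov * Math.PI) / 180;
		const height = Math.abs(
			this.camera.position.z * Math.tan(fovInRadians / 2) * 2
		);
		return { width: height * this.camera.aspect, height };
	}
}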

As a side note, scaling the mesh instead of scaling the geometry is more performant. Scaling the geometry actually changes its internal data which is slow and expensive, while scaling the mesh happens at rendering. This decision will come into play later on, so keep it in mind.

Now, bind this function to each item’s onclick event. Then our plane resizes to match the item’s image.
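Hooking that up can be as simple as this sketch, assuming the constructor stored the grid’s HTML elements as this.items:

// Bind each grid item's click to the effect.
this.items.forEach((item, index) => {
	item.addEventListener("click", ev => this.onGridImageClick(ev, index));
});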

It’s a very simple concept, yet quite performant in the long run. Now that our plane is ready to go when clicked, let’s make it cover the screen.

Basic animation

First, let’s initialize a few uniforms:

  • uProgress – Progress of the animation
  • uMeshScale – Scale of the mesh
  • uMeshPosition – Mesh’s position from the center
  • uViewSize – Size of the camera’s view

We’ll also create the base for our shaders.

class GridToFullscreenEffect {
	constructor(container, items){
		this.uniforms = {
		  uProgress: new THREE.Uniform(0),
		  uMeshScale: new THREE.Uniform(new THREE.Vector2(1, 1)),
		  uMeshPosition: new THREE.Uniform(new THREE.Vector2(0, 0)),
		  uViewSize: new THREE.Uniform(new THREE.Vector2(1, 1)),
		}
	}
	init(){
		... 
		const viewSize = this.getViewSize();
		this.uniforms.uViewSize.x = viewSize.width;
		this.uniforms.uViewSize.y = viewSize.height;
		var material = new THREE.ShaderMaterial({
			uniforms: this.uniforms,
			vertexShader: vertexShader,
			fragmentShader: fragmentShader,
			side: THREE.DoubleSide
		});
		
		...
	}
	...
}
const vertexShader = `
	uniform float uProgress;
	uniform vec2 uMeshScale;
	uniform vec2 uMeshPosition;
	uniform vec2 uViewSize;

	void main(){
		vec3 pos = position.xyz;
		 gl_Position = projectionMatrix * modelViewMatrix * vec4(pos,1.);
	}
`;
const fragmentShader = `
	void main(){
		 gl_FragColor = vec4(vec3(0.2),1.);
	}
`;

We need to update the uMeshScale and uMeshPosition uniforms whenever we click an item.

class GridToFullscreenEffect {
	...
	onGridImageClick(ev,itemIndex){
		...
		// Divide by scale because in the vertex shader we need values before the scale
		this.uniforms.uMeshPosition.value.x = x / widthViewUnit;
		this.uniforms.uMeshPosition.value.y = y / heightViewUnit;

		this.uniforms.uMeshScale.value.x = widthViewUnit;
		this.uniforms.uMeshScale.value.y = heightViewUnit;
	}
}

Since we scaled the mesh and not the geometry, on the vertex shader our vertices still represent a 1×1 square in the center of the scene. But it ends up rendered in another position and with a different size because of the mesh. As a consequence of this optimization, we need to use “down-scaled” values in the vertex shader. With that out of the way, let’s make the effect happen in our vertex shader:

  1. Calculate the scale needed to match the screen size using our mesh’s scale
  2. Move the vertices by their negative position so they move to the center
  3. Multiply those values by the progress of the effect
...
const vertexShader = `
	uniform float uProgress;
	uniform vec2 uMeshScale;
	uniform vec2 uMeshPosition;
	uniform vec2 uViewSize;

	void main(){
		vec3 pos = position.xyz;

		// Scale to the camera's view size
		vec2 scaleToViewSize = uViewSize / uMeshScale - 1.;
		vec2 scale = vec2(
		  1. + scaleToViewSize * uProgress
		);
		pos.xy *= scale;

		// Move towards center
		pos.y += -uMeshPosition.y * uProgress;
		pos.x += -uMeshPosition.x * uProgress;

		gl_Position = projectionMatrix * modelViewMatrix * vec4(pos,1.);
	}
`;

Now, when we click an item, we are going to:

  • set our canvas container on top of the items
  • make the HTML item invisible
  • tween uProgress between 0 and 1
class GridToFullscreenEffect {
	...
	constructor(container,items){
		...
		this.itemIndex = -1;
		this.isAnimating = false;
		this.state = "grid";
	}
	toGrid(){
		if (this.state === 'grid' || this.isAnimating) return;
		this.isAnimating = true;
		this.tween = TweenLite.to(
		  this.uniforms.uProgress,1.,
		  {
			value: 0,
			onUpdate: this.render.bind(this),
			onComplete: () => {
			  this.isAnimating = false;
			  this.state = "grid";
			this.container.style.zIndex = "0";
			}
		  }
		);
	}
	toFullscreen(){
	if (this.state === 'fullscreen' || this.isAnimating) return;
		this.isAnimating = true;
		this.container.style.zIndex = "2";
		this.tween = TweenLite.to(
		  this.uniforms.uProgress,1.,
		  {
			value: 1,
			onUpdate: this.render.bind(this),
			onComplete: () => {
			  this.isAnimating = false;
			  this.state = "fullscreen";
			}
		  }
		);
	}

	onGridImageClick(ev,itemIndex){
		...
		this.itemIndex = itemIndex;
		this.toFullscreen();
	}
}

We start the tween whenever we click an item. And there you go, our plane goes back and forth no matter which item we choose.

Pretty good, but not too impressive yet.

Now that we have the basic building blocks done, we can start making the cool stuff. For starters, let’s go ahead and add timing.

Activation and timing

Scaling the whole plane is a little bit boring. So, let’s give it some more flavor by making it scale with different patterns: top-to-bottom, left-to-right, topLeft-to-bottomRight.

Let’s take a look at how those effects behave and figure out what we need to do:

Grid Effects

By observing the effects for a minute, we can notice that the effect is all about timing. Some parts of the plane start later than others.

What we are going to do is to create an “activation” of the effect. We’ll use that activation to determine which vertices are going to start later than others.

Effects with activations

And let’s see how that looks in code:

...
const vertexShader = `
	...
	void main(){
		vec3 pos = position.xyz;
		
		// Activation for left-to-right
		float activation = uv.x;
		
		float latestStart = 0.5;
		float startAt = activation * latestStart;
		float vertexProgress = smoothstep(startAt,1.,uProgress);
	   
		...
	}
`;

We’ll replace uProgress with vertexProgress for any calculations in the vertex shader.

...
const vertexShader = `
	...
	void main(){
		...
		float vertexProgress = smoothstep(startAt,1.,uProgress);
		
		vec2 scaleToViewSize = uViewSize / uMeshScale - 1.;
		vec2 scale = vec2(
		  1. + scaleToViewSize * vertexProgress
		);
		pos.xy *= scale;
		
		// Move towards center 
		pos.y += -uMeshPosition.y * vertexProgress;
		pos.x += -uMeshPosition.x * vertexProgress;
		...
	}
`;

With this little change, our animation is now much more interesting.

Note that the gradients on the demo are there for demonstration purposes. They have nothing to do with the effect itself.

The great thing about these “activation” and “timing” concepts is that they are interchangeable implementations. This allows us to create a ton of variations.
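For example, swapping the left-to-right activation for a radial one is a one-line change. A sketch; uv is the plane’s built-in UV attribute, so vec2(0.5) is its center:

const vertexShader = `
	...
	void main(){
		...
		// Radial activation: vertices closer to the center start first
		float activation = distance(uv, vec2(0.5)) * 2.;

		float latestStart = 0.5;
		float startAt = activation * latestStart;
		float vertexProgress = smoothstep(startAt, 1., uProgress);
		...
	}
`;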

With the activation and timing in place, let’s make it more interesting with transformations.

Transformations

If you haven’t noticed, we already know how to make a transformation. We successfully scaled and moved the plane forwards and backwards.

We interpolate or move from one state to another using vertexProgress. Just like we are doing in the scale and movement:

...
const vertexShader = `
	...
	void main(){
	...
		// Base state = 1.
		// Target state = scaleToViewSize
		// Interpolation value: vertexProgress
		scale = vec2(
		  1. + scaleToViewSize * vertexProgress
		);

		// Base state = pos
		// Target state = -uMeshPosition
		// Interpolation value: vertexProgress
		pos.y += -uMeshPosition.y * vertexProgress;
		pos.x += -uMeshPosition.x * vertexProgress;
	...
	}
`

Let’s apply this same idea to make a flip transformation:

  • Base state: the vertex’s current position
  • Target state: The vertex flipped position
  • Interpolate with: the vertex progress
...
const vertexShader = `
	...
	void main(){
		...
		float vertexProgress = smoothstep(startAt,1.,uProgress);
		// Base state: pos.x
		// Target state: flippedX
		// Interpolation with: vertexProgress 
		float flippedX = -pos.x;
		pos.x = mix(pos.x,flippedX, vertexProgress);
		// Put vertices that are closer to its target in front. 
		pos.z += vertexProgress;
		...
	}
`;

Note that, because this flip sometimes puts vertices on top of each other, we need to bring some of them slightly to the front to make it look correct.

Combining these flips with different activations, these are some of the variations we came up with:

If you pay close attention to the flip you’ll notice it also flips the color/image backwards. To fix this issue we have to flip the UVs along with the position.
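That fix could look something like the following sketch, assuming we pass the UVs to the fragment shader through a vUv varying:

const vertexShader = `
	...
	varying vec2 vUv;
	void main(){
		...
		// Flip the UVs along with the position so the image isn't mirrored
		float flippedUvX = 1. - uv.x;
		vUv = vec2(mix(uv.x, flippedUvX, vertexProgress), uv.y);
		...
	}
`;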

And there we have it! We’ve not only created an interesting and exciting flip effect, but also made sure that using this structure we can discover all kinds of effects by changing one or more of the pieces.

In fact, we created the effects seen in our demos using the configurations as part of our creative process.

There is so much more to explore! And we would love to see what you can come up with.

Here are the most interesting variations we came up with:

Different timing creation:

GridFullscreen_demo2

Activation based on mouse position, and deformation with noise:

GridFullscreen_demo4

Distance deformation and mouse position activation:

GridFullscreen_demo5

We hope you enjoyed this tutorial and find it helpful!

GitHub link coming soon!

Creating Grid-to-Fullscreen Animations with Three.js was written by Daniel Velasquez and published on Codrops.