Trigonometry in CSS and JavaScript: Beyond Triangles

In the previous article we looked at how to clip an equilateral triangle with trigonometry, but what about some even more interesting geometric shapes?

This article is the 3rd part in a series on Trigonometry in CSS and JavaScript:

  1. Introduction to Trigonometry
  2. Getting Creative with Trigonometric Functions
  3. Beyond Triangles (this article)

Plotting regular polygons

A regular polygon is a polygon with all equal sides and all equal angles. An equilateral triangle is one, as are a square, a regular pentagon, hexagon, decagon, and any number of other shapes that meet the criteria. We can use trigonometry to plot the points of a regular polygon by visualizing each set of coordinates as points of a triangle.

Polar coordinates

If we visualize a circle on an x/y axis, draw a line from the center to any point on the outer edge, then connect that point to the horizontal axis, we get a triangle.

A circle centrally positioned on an axis, with a line drawn along the radius to form a triangle

If we repeatedly rotated the line at equal intervals six times around the circle, we could plot the points of a hexagon.

A hexagon, made by drawing lines along the radius of the circle

But how do we get the x and y coordinates for each point? These are known as Cartesian coordinates, whereas polar coordinates tell us the distance and angle from a particular point. Essentially, the radius of the circle and the angle of the line. Drawing a line from the center to the edge gives us a triangle whose hypotenuse is equal to the circle’s radius.

Showing the triangle made by drawing a line from one of the vertices, with the hypotenuse equal to the radius, and the angle as 2pi divided by 6

We can get the angle in degrees by dividing 360 by the number of vertices our polygon has, or in radians by dividing 2π radians by the same number. For a hexagon with a radius of 100, the polar coordinates of the uppermost point of the triangle in the diagram would be written (100, 1.0472rad) (r, θ).
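To make the arithmetic concrete, here’s the angle step for a hexagon (the variable names are just for illustration):

const numberOfPoints = 6

/* 360 / 6 = 60 degrees between each vertex */
const angleStepDegrees = 360 / numberOfPoints

/* (Math.PI * 2) / 6 ≈ 1.0472 radians – the θ in our polar coordinates (r, θ) */
const angleStepRadians = (Math.PI * 2) / numberOfPoints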

An infinite number of points would enable us to plot a circle.

Polar to Cartesian coordinates

We need to plot the points of our polygon as Cartesian coordinates – their position on the x and y axis.

As we know the radius and the angle, we need to calculate the adjacent side length for the x position, and the opposite side length for the y position.

Showing the triangle superimposed on the hexagon, and the equations needed to calculate the opposite and adjacent sides.

Therefore we need cosine for the former and sine for the latter:

adjacent = cos(angle) * hypotenuse
opposite = sin(angle) * hypotenuse

We can write a JS function that returns an array of coordinates:

const plotPoints = (radius, numberOfPoints) => {

	/* step used to place each point at equal distances */
	const angleStep = (Math.PI * 2) / numberOfPoints

	const points = []

	for (let i = 1; i <= numberOfPoints; i++) {
		/* x & y coordinates of the current point */
		const x = Math.cos(i * angleStep) * radius
		const y = Math.sin(i * angleStep) * radius

		/* push the point to the points array */
		points.push({ x, y })
	}
	
	return points
}

We could then convert each array item into a string with the x and y coordinates in pixels, then use the join() method to combine them into a single string for use in a clip path:

const polygonCoordinates = plotPoints(100, 6).map(({ x, y }) => {
		return `${x}px ${y}px`
	}).join(',')

shape.style.clipPath = `polygon(${polygonCoordinates})`
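For a hexagon with a radius of 100, the resulting string looks something like this (values rounded for readability):

console.log(polygonCoordinates)
/* "50px 86.6px,-50px 86.6px,-100px 0px,-50px -86.6px,50px -86.6px,100px 0px" */

Notice that some of the coordinates are negative – this explains the clipping issue described below.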

See the Pen Clip-path polygon by Michelle Barker (@michellebarker) on CodePen.

This clips a polygon, but you’ll notice we can only see one quarter of it. The clip path is positioned in the top left corner, with the center of the polygon in the corner. This is because, for some points, converting the polar coordinates to Cartesian coordinates results in negative values (as in the example output above). The area we’re clipping is outside of the element’s bounding box.

To position the clip path centrally, we need to add half of the width and height respectively to our calculations:

const xPosition = shape.clientWidth / 2
const yPosition = shape.clientHeight / 2

const x = xPosition + Math.cos(i * angleStep) * radius
const y = yPosition + Math.sin(i * angleStep) * radius

Let’s modify our function:

const plotPoints = (radius, numberOfPoints) => {
	const xPosition = shape.clientWidth / 2
	const yPosition = shape.clientHeight / 2
	const angleStep = (Math.PI * 2) / numberOfPoints
	const points = []

	for (let i = 1; i <= numberOfPoints; i++) {
		const x = xPosition + Math.cos(i * angleStep) * radius
		const y = yPosition + Math.sin(i * angleStep) * radius

		points.push({ x, y })
	}
	
	return points
}

Our clip path is now positioned in the center.

See the Pen Clip-path polygon by Michelle Barker (@michellebarker) on CodePen.

Star polygons

The types of polygons we’ve plotted so far are known as convex polygons. We can also plot star polygons by modifying our code in the plotPoints() function ever so slightly. For every other point, we could change the radius value to be 50% of the original value:

/* Set every other point’s radius to be 50% */
const radiusAtPoint = i % 2 === 0 ? radius * 0.5 : radius
		
/* x & y coordinates of the current point */
const x = xPosition + Math.cos(i * angleStep) * radiusAtPoint
const y = yPosition + Math.sin(i * angleStep) * radiusAtPoint
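For reference, here’s how those lines fit into the full function – a sketch with the inner radius pulled out as a parameter (the innerRadiusFactor name is my own, for illustration):

const plotStarPoints = (radius, numberOfPoints, innerRadiusFactor = 0.5) => {
	const xPosition = shape.clientWidth / 2
	const yPosition = shape.clientHeight / 2
	const angleStep = (Math.PI * 2) / numberOfPoints
	const points = []

	for (let i = 1; i <= numberOfPoints; i++) {
		/* Set every other point’s radius to be a fraction of the full radius */
		const radiusAtPoint = i % 2 === 0 ? radius * innerRadiusFactor : radius

		const x = xPosition + Math.cos(i * angleStep) * radiusAtPoint
		const y = yPosition + Math.sin(i * angleStep) * radiusAtPoint

		points.push({ x, y })
	}

	return points
}

Note that a five-pointed star needs 10 points in total – five outer and five inner.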

See the Pen Clip-path star polygon by Michelle Barker (@michellebarker) on CodePen.

Here’s an interactive example. Try adjusting the values for the number of points and the inner radius to see the different shapes that can be made.

See the Pen Clip-path adjustable polygon by Michelle Barker (@michellebarker) on CodePen.

Drawing with the Canvas API

So far we’ve plotted values to use in CSS, but trigonometry has plenty of applications beyond that. For instance, we can plot points in exactly the same way to draw on a <canvas> with JavaScript. Here we use the same plotPoints() function as before to create an array of polygon points, then we draw a line from one point to the next:

const canvas = document.getElementById('canvas')
const ctx = canvas.getContext('2d')

const draw = () => {
	/* Create the array of points (e.g. a hexagon with a radius of 100) */
	const points = plotPoints(100, 6)
	
	/* Move to starting position and plot the path */
	ctx.beginPath()
	ctx.moveTo(points[0].x, points[0].y)
	
	points.forEach(({ x, y }) => {
		ctx.lineTo(x, y)
	})
	
	ctx.closePath()
	
	/* Draw the line */
	ctx.stroke()
}

See the Pen Canvas polygon (simple) by Michelle Barker (@michellebarker) on CodePen.

Spirals

We don’t even have to stick with polygons. With some small tweaks to our code, we can even create spiral patterns. We need to change two things here: First of all, a spiral requires multiple rotations around the point, not just one. To get the angle for each step, we can multiply pi by 10 (for example), instead of two, and divide that by the number of points. That will result in five rotations of the spiral (as 10π divided by 2π is five).

const angleStep = (Math.PI * 10) / numberOfPoints

Secondly, instead of an equal radius for every point, we’ll need to increase this with every step. We can multiply it by a number of our choosing to determine how far apart the lines of our spiral are rendered:

const multiplier = 2
const radius = i * multiplier
const x = xPosition + Math.cos(i * angleStep) * radius
const y = yPosition + Math.sin(i * angleStep) * radius

Putting it all together, our adjusted function to plot the points is as follows:

const plotPoints = (numberOfPoints) => {
	const angleStep = (Math.PI * 10) / numberOfPoints
	const xPosition = canvas.width / 2
	const yPosition = canvas.height / 2

	const points = []

	for (let i = 1; i <= numberOfPoints; i++) {
		const radius = i * 2 // increase the radius with each step to create the spiral
		const x = xPosition + Math.cos(i * angleStep) * radius
		const y = yPosition + Math.sin(i * angleStep) * radius

		points.push({ x, y })
	}
	
	return points
}

See the Pen Canvas spiral – simple by Michelle Barker (@michellebarker) on CodePen.

At the moment the lines of our spiral are at an equal distance from each other, but we could increase the radius non-linearly to get a more pleasing spiral. By using the Math.pow() function, we can raise the step index to a power, so the radius grows by a larger amount with each iteration. Using the golden ratio as the exponent, for example:

const radius = Math.pow(i, 1.618)
const x = xPosition + Math.cos(i * angleStep) * radius
const y = yPosition + Math.sin(i * angleStep) * radius

See the Pen Canvas spiral by Michelle Barker (@michellebarker) on CodePen.

Animation

We could also rotate the spiral using requestAnimationFrame. We’ll set a rotation variable to 0, then on every frame increment or decrement it by a small amount. In this case I’m decrementing the rotation, to rotate the spiral anti-clockwise:

let rotation = 0

const draw = () => {
	const { width, height } = canvas
	
	/* Create points */
	const points = plotPoints(400, rotation)
	
	/* Clear canvas and redraw */
	ctx.clearRect(0, 0, width, height)
	ctx.fillStyle = '#ffffff'
	ctx.fillRect(0, 0, width, height)
	
	/* Move to beginning position */
	ctx.beginPath()
	ctx.moveTo(points[0].x, points[0].y)
	
	/* Plot lines */
	points.forEach((point, i) => {
		ctx.lineTo(point.x, point.y)
	})
	
	/* Draw the stroke */
	ctx.strokeStyle = '#000000'
	ctx.stroke()
	
	/* Decrement the rotation */
	rotation -= 0.01
	
	window.requestAnimationFrame(draw)
}

draw()

We’ll also need to modify our plotPoints() function to take the rotation value as an argument. We’ll use this to increment the x and y position of each point on every frame:

const x = xPosition + Math.cos(i * angleStep + rotation) * radius
const y = yPosition + Math.sin(i * angleStep + rotation) * radius

This is how our plotPoints() function looks now:

const plotPoints = (numberOfPoints, rotation) => {
	/* 6 rotations of the spiral (12π) divided by the number of points */
	const angleStep = (Math.PI * 12) / numberOfPoints 
	
	/* Center the spiral */
	const xPosition = canvas.width / 2
	const yPosition = canvas.height / 2

	const points = []

	for (let i = 1; i <= numberOfPoints; i++) {
		const r = Math.pow(i, 1.3)
		const x = xPosition + Math.cos(i * angleStep + rotation) * r
		const y = yPosition + Math.sin(i * angleStep + rotation) * r

		points.push({ x, y, r })
	}
	
	return points
}

See the Pen Canvas spiral by Michelle Barker (@michellebarker) on CodePen.

Wrapping up

I hope this series of articles has given you a few ideas for how to get creative with trigonometry and code. I’ll leave you with one more creative example to delve into, using the spiral method detailed above. Instead of plotting points from an array, I’m drawing circles at a new position on each iteration (using requestAnimationFrame).
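The core idea might look something like this sketch – on each frame we advance along the spiral path and draw a small circle, with the constants here chosen for illustration rather than taken from the demo:

let i = 0

const drawCircle = () => {
	/* Advance along the spiral: both the angle and the radius grow with i */
	const angle = i * 0.1
	const radius = Math.pow(i, 1.3)
	const x = canvas.width / 2 + Math.cos(angle) * radius
	const y = canvas.height / 2 + Math.sin(angle) * radius

	/* Draw a small circle at the current point on the spiral */
	ctx.beginPath()
	ctx.arc(x, y, 4, 0, Math.PI * 2)
	ctx.fill()

	i++
	window.requestAnimationFrame(drawCircle)
}

drawCircle()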

See the Pen Canvas spiral IIII by Michelle Barker (@michellebarker) on CodePen.

Special thanks to George Francis and Liam Egan, whose wonderful creative work inspired me to delve deeper into this topic!


7 Creative Ways to Use Geometry in Web Design

Everyone had one of those subjects in school where they thought, “Why am I studying this? I’m never going to use it again.” 

Geometry, with its measuring of diameters and angles, might not have seemed very useful or exciting at the time. However, there’s actually a lot you can do with geometry on a website that will make your content easier to find as well as more engaging.

Today, we’re going to look at some creative ways to use geometry in web design. We’ll give you some examples of BeTheme pre-built sites and live websites that demonstrate how to do this.

1. Slice your sections on a diagonal so visitors naturally “fall” downwards

Many websites today are built with rectangular blocks, one on top of the other. However, by cutting key sections of your site at a diagonal as opposed to the expected horizontal line, visitors may feel a stronger inclination to keep moving downwards on a page. 

Part of this is due to the downward slope, though the “peekaboo” tease of what comes next certainly helps, too. 

Stripe is one such example of a website that uses this diagonal divider in its design: 

The dividing line allows visitors to get a sneak peek of what’s to come in the next section, building anticipation as they scroll further and further.

This is something you can easily build with BeData, one of BeTheme’s pre-built sites: 

There are a number of instances on this site where the dividing line between two sections sits at a 45-degree angle, as in the example above. It’s a great demonstration of how to take a unique approach to the diagonal divider.

2. Use shapes that simplify decision-making

It’s not uncommon for consumers to feel decision fatigue when presented with yet another choice to make online. While it’s great to have so many options available, it can be quite exhausting always having to research and compare products or services. 

When you build a website that sells something, why not take some of that pressure off of your visitors? 

You can use recognizable geometric shapes to point prospective buyers to: 

  • Plans that offer the greatest value
  • Products that are the most popular with other shoppers
  • Options that have special properties (like eco-friendliness)

Sephora, for instance, uses a number of circular seals to draw visitors’ attention to certain items:

The green circles indicate that they’re clean products while the ones in red are award winners. 

Your shapes don’t even need to contain text the way Sephora’s do. You can go the route of BeComputerShop, for instance, and use stars to point out where the top sellers are: 

The shape you use and how you design it all depends on what the site sells and what shoppers most commonly look for when choosing one option over the others.

3. Use shapes that open up a window to your brand

Consumers want transparency from brands today. While the words on your site help with this, the images you show can also help. 

Rather than go the usual route of adding images on top of your website’s background, why not design them to look like you’ve created a porthole or window into the brand’s world? 

You can use geometric shapes to carve out these “windows” and let your visitors see something or someone that will help them better connect to the brand on a personal level. 

Web3 shows us one way of handling this: 

Part of the homepage has been carved out to make room for this polygon. Within it is a short video of the agency at work.

BeInternet gives us another way to approach this technique: 

The circular hole contains an image as opposed to a video, but it has a similar effect as the Web3 example. Another difference is that this image doesn’t have a filter laid over it, which creates a more open and transparent feel to what the visitors are seeing.

4. Use images containing geometric figures to direct people’s attention

You can do more than just pair geometric figures with your text and imagery. You can select images that contain their own geometric figures and lines. 

Visitors might not overtly notice the geometry and that’s okay. Things like long vertical lines and arrows will subconsciously direct their attention on the page. 

Here’s an example from the Hyatt Place Hotel in Delaware: 

The alignment of the photo was definitely intentional. If we use the F-pattern that users’ eyes typically follow, they’ll start at the left side of the screen, stop to read the text in the middle, and then glance over to the right where the boardwalk then leads them downwards. 

BeFarm is a pre-built site that does something similar: 

The rows within the image will draw people’s eyes down to the product content below.

5. Make your content feel more alive by using different planes

When designing websites for activities and experiences, a flat design probably won’t properly convey what you’re selling. 

By placing key objects on different planes and giving the design a 3-dimensional feel, the site — and the experience it sells — will feel more exciting and adventurous, which, in turn, will increase engagement. 

Just be careful about overdoing it with this technique. It’s best to focus on one element instead of trying to make the whole banner or site 3D. For example, this is Ryan’s Island Cafe:

This wooden signpost is an item that’s commonly found on beaches, so it was a good choice to make it stand out as it helps recreate that beachy atmosphere online. 

BeFunFair offers another way to approach this: 

In this case, it’s the letters “JOY” that help visitors see the three-dimensional plane. The illustration of the fair rides behind, in front of, and going through the letters creatively demonstrates this.

6. Direct visitors with lines and arrows created from shapes

One way to encourage visitors to keep moving down a page is to use lines and arrows. We’ve already seen how something like a diagonal divider can do this.

But simply drawing lines and arrows seems a little too easy, doesn’t it? If you want to mix it up a little bit, consider combining geometric figures to form directional lines and arrowheads. 

HURU’s product pages, for instance, are full of these types of graphics: 

In this example, a bunch of triangles come together to form an arrowhead that points to one of the backpack’s main features. 

You might also follow BePhotography’s lead and use a series of circles or plus-signs to form lines: 

These directional cues effortlessly take visitors from one piece of content to another.

7. Use the psychology of shapes to inspire action

This last one has less to do with using a shape as a directional cue and more to do with using the psychology of a shape to convince a visitor to take action.

For instance, here’s what the most common geometric figures mean: 

  • Square – Traditionalism and balance
  • Circle – Harmony, infinity, and protection
  • Triangle – Stability and energy
  • Rhombus – Contemporariness and excitement
  • Hexagon – Unity and strength

Choose the right shape and meaning and you can more easily persuade people to take the next step forward on your site.

Built by Buffalo, for instance, introduces people to its website with this beautiful array of hexagons:

This sends the message to people that this is a trustworthy design agency that can build a strong website on their behalf. 

BePrint, on the other hand, uses a Venn diagram-like set of circles behind various pieces of content on its homepage: 

It’s a subtle design, but it’s a good choice. For one, the design is part of the logo, so it helps reinforce branding. But there are also the psychological undertones of harmony that it sends to people interested in high-quality, professional printing services.

What creative ways will you use geometry in web design?

Geometry in web design is about more than placing colorful shapes on top of white space. If you consider the purpose of these figures, lines, and planes when designing with them, your site can play a more active role in driving visitors to the point of conversion.

As you’ve seen here today, many of BeTheme’s pre-built sites have these creative uses integrated into their designs. So, if you’re interested in taking advantage of this trend, this theme is a good place to start.


Drawing 2D Metaballs with WebGL2

While many people shy away from writing vanilla WebGL and immediately jump to frameworks such as three.js or PixiJS, it is possible to achieve great visuals and complex animation with relatively small amounts of code. Today, I would like to present core WebGL concepts while programming some simple 2D visuals. This article assumes at least some higher-level knowledge of WebGL through a library.

Please note: WebGL2 has been around for years, yet Safari only recently enabled it behind a flag. It is a pretty significant upgrade from WebGL1 and brings tons of new useful features, some of which we will take advantage of in this tutorial.

What are we going to build

From a high level standpoint, to implement our 2D metaballs we need two steps:

  • Draw a bunch of rectangles with a radial gradient starting from their centers and expanding to their edges. Draw a lot of them and alpha blend them together in a separate framebuffer.
  • Take the resulting image with the blended quads from step #1, scan its pixels one by one and decide the new color of each pixel depending on its opacity. For example – if the pixel has an opacity smaller than 0.5, render it in red; otherwise render it in yellow, and so on.

Rendering multiple 2D quads and turning them to metaballs with post-processing.
Left: Multiple quads rendered with radial gradient, alpha blended and rendered to a texture.
Right: Post-processing on the generated texture and rendering the result to the device screen. Conditional coloring of each pixel based on opacity.

Don’t worry if these terms don’t make a lot of sense just yet – we will go over each of the steps needed in detail. Let’s jump into the code and start building!

Bootstrapping our program

We will start things by

  • Creating an HTMLCanvasElement, sizing it to our device viewport and inserting it into the page DOM
  • Obtaining a WebGL2RenderingContext to use for drawing stuff
  • Setting the correct WebGL viewport and the background color for our scene
  • Starting a requestAnimationFrame loop that will draw our scene as fast as the device allows. The speed is determined by various factors such as the hardware, current CPU / GPU workloads, battery levels, user preferences and so on. For smooth animation we are going to aim for 60FPS.

/* Create our canvas and obtain its WebGL2RenderingContext */
const canvas = document.createElement('canvas')
const gl = canvas.getContext('webgl2')

/* Handle error somehow if no WebGL2 support */
if (!gl) {
  // ...
}

/* Size our canvas and listen for resize events */
resizeCanvas()
window.addEventListener('resize', resizeCanvas)

/* Append our canvas to the DOM and set its background-color with CSS */
canvas.style.backgroundColor = 'black'
document.body.appendChild(canvas)

/* Issue first frame paint */
requestAnimationFrame(updateFrame)

function updateFrame (timestampMs) {
   /* Set our program viewport to fit the actual size of our monitor with devicePixelRatio into account */
   gl.viewport(0, 0, canvas.width, canvas.height)
   /* Set the WebGL background color to be transparent */
   gl.clearColor(0, 0, 0, 0)
   /* Clear the current canvas pixels */
   gl.clear(gl.COLOR_BUFFER_BIT)

   /* Issue next frame paint */
   requestAnimationFrame(updateFrame)
}

function resizeCanvas () {
   /*
      We need to account for devicePixelRatio when sizing our canvas.
      We will use it to obtain the actual pixel size of our viewport and size our canvas to match it.
      We will then downscale it back to CSS units so it neatly fills our viewport and we benefit from downsampling antialiasing
      We also need to limit it because it can really slow our program. Modern iPhones have devicePixelRatios of 3. This means rendering 9x more pixels each frame!

      More info: https://webglfundamentals.org/webgl/lessons/webgl-resizing-the-canvas.html 
   */
   const dpr = devicePixelRatio > 2 ? 2 : devicePixelRatio
   canvas.width = innerWidth * dpr
   canvas.height = innerHeight * dpr
   canvas.style.width = `${innerWidth}px`
   canvas.style.height = `${innerHeight}px`
}

Drawing a quad

The next step is to actually draw a shape. WebGL has a rendering pipeline, which dictates how the object you draw, with its corresponding geometry and material, ends up on the device screen. WebGL is essentially just a rasterising engine, in the sense that you give it properly formatted data and it produces pixels for you.

The full rendering pipeline is out of the scope for this tutorial, but you can read more about it here. Let’s break down what exactly we need for our program:

Defining our geometry and its attributes

Each object we draw in WebGL is represented as a WebGLProgram running on the device GPU. It consists of input variables and a vertex and fragment shader that operate on these variables. The vertex shader’s responsibility is to position our geometry correctly on the device screen, and the fragment shader’s responsibility is to control its appearance.

It’s up to us as developers to write our vertex and fragment shaders, compile them on the device GPU and link them in a GLSL program. Once we have successfully done this, we must query this program’s input variable locations that were allocated on the GPU for us, supply correctly formatted data to them, enable them and instruct them how to unpack and use our data.

To render our quad, we need 3 input variables:

  1. a_position will dictate the position of each vertex of our quad geometry. We will pass it as an array of 12 floats, i.e. 2 triangles with 3 points per triangle, each represented by 2 floats (x, y). This variable is an attribute, i.e. it is obviously different for each of the points that make up our geometry.
  2. a_uv will describe the texture offset for each point of our geometry. They too will be described as an array of 12 floats. We will use this data not to texture our quad with an image, but to dynamically create a radial gradient from the quad center. This variable is also an attribute and will also be different for each of our geometry points.
  3. u_projectionMatrix will be an input variable represented as a 32bit float array of 16 items that will dictate how we transform our geometry positions, described in pixel values, to the normalised WebGL coordinate system. This variable is a uniform; unlike the previous two, it will not change for each geometry position.

We can take advantage of a Vertex Array Object to store the description of our GLSL program input variables, their locations on the GPU and how they should be unpacked and used.

WebGLVertexArrayObjects or VAOs are 1st class citizens in WebGL2, unlike in WebGL1 where they were hidden behind an optional extension and their support was not guaranteed. They let us type less, execute fewer WebGL bindings and keep our drawing state in a single, easy-to-manage object that is simpler to track. They essentially store the description of our geometry and we can reference them later.
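In code, the record/replay pattern looks like this (a minimal sketch of the calls we will use later):

const vao = gl.createVertexArray()

/* Start recording attribute state into the VAO */
gl.bindVertexArray(vao)
/* ... buffer bindings, enableVertexAttribArray and vertexAttribPointer calls ... */
gl.bindVertexArray(null)

/* Later, at render time, replay all of the recorded state with a single call */
gl.bindVertexArray(vao)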

We need to write the shaders in GLSL 3.00 ES, which WebGL2 supports. Our vertex shader will be pretty simple:

/*
  Pass in geometry position and tex coord from the CPU
*/
in vec4 a_position;
in vec2 a_uv;

/*
  Pass in global projection matrix for each vertex
*/
uniform mat4 u_projectionMatrix;

/*
  Specify varying variable to be passed to fragment shader
*/
out vec2 v_uv;

void main () {
  /*
   We need to convert our quad points positions from pixels to the normalized WebGL coordinate system
  */
  gl_Position = u_projectionMatrix * a_position;
  v_uv = a_uv;
}

At this point, after we have successfully executed our vertex shader, WebGL will fill in the pixels between the points that make up the geometry on the device screen. The way the space between the points is filled depends on what primitives we are using for drawing – WebGL supports points, lines and triangles.

We as developers do not have control over this step.

After it has rasterised our geometry, it will execute our fragment shader on each generated pixel. The fragment shader’s responsibility is to determine the final appearance of each generated pixel and whether it should even be rendered. Here is our fragment shader:

/*
  Set fragment shader float precision
*/
precision highp float;

/*
  Consume interpolated tex coord varying from vertex shader
*/
in vec2 v_uv;

/*
  Final color represented as a vector of 4 components - r, g, b, a
*/
out vec4 outColor;

void main () {
  /*
    This function will run on each pixel generated by our quad geometry
  */
  /*
    Calculate the distance for each pixel from the center of the quad (0.5, 0.5)
  */
  float dist = distance(v_uv, vec2(0.5)) * 2.0;
  /*
    Invert and clamp our distance from 0.0 to 1.0
  */
  float c = clamp(1.0 - dist, 0.0, 1.0);
  /*
    Use the distance to generate the pixel opacity. We have to explicitly enable alpha blending in WebGL to see the correct result
  */
  outColor = vec4(vec3(1.0), c);
}

Let’s write two utility methods: makeGLShader() to create and compile our GLSL shaders and makeGLProgram() to link them into a GLSL program to be run on the GPU:

/*
  Utility method to create a WebGLShader object and compile it on the device GPU
  https://developer.mozilla.org/en-US/docs/Web/API/WebGLShader
*/
function makeGLShader (shaderType, shaderSource) {
  /* Create a WebGLShader object with correct type */
  const shader = gl.createShader(shaderType)
  /* Attach the shaderSource string to the newly created shader */
  gl.shaderSource(shader, shaderSource)
  /* Compile our newly created shader */
  gl.compileShader(shader)
  const success = gl.getShaderParameter(shader, gl.COMPILE_STATUS)
  /* Return the WebGLShader if compilation was a success */
  if (success) {
    return shader
  }
  /* Otherwise log the error and delete the faulty shader */
  console.error(gl.getShaderInfoLog(shader))
  gl.deleteShader(shader)
}

/*
  Utility method to create a WebGLProgram object
  It will create both a vertex and fragment WebGLShader and link them into a program on the device GPU
  https://developer.mozilla.org/en-US/docs/Web/API/WebGLProgram
*/
function makeGLProgram (vertexShaderSource, fragmentShaderSource) {
  /* Create and compile vertex WebGLShader */
  const vertexShader = makeGLShader(gl.VERTEX_SHADER, vertexShaderSource)
  /* Create and compile fragment WebGLShader */
  const fragmentShader = makeGLShader(gl.FRAGMENT_SHADER, fragmentShaderSource)
  /* Create a WebGLProgram and attach our shaders to it */
  const program = gl.createProgram()
  gl.attachShader(program, vertexShader)
  gl.attachShader(program, fragmentShader)
  /* Link the newly created program on the device GPU */
  gl.linkProgram(program) 
  /* Return the WebGLProgram if linking was successful */
  const success = gl.getProgramParameter(program, gl.LINK_STATUS)
  if (success) {
    return program
  }
  /* Otherwise log errors to the console and delete the faulty WebGLProgram */
  console.error(gl.getProgramInfoLog(program))
  gl.deleteProgram(program)
}

And here is the complete code snippet we need to add to our previous code snippet to generate our geometry, compile our shaders and link them into a GLSL program:

const canvas = document.createElement('canvas')
/* rest of code */

/* Enable WebGL alpha blending */
gl.enable(gl.BLEND)
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)

/*
  Generate the Vertex Array Object and GLSL program
  we need to render our 2D quad
*/
const {
  quadProgram,
  quadVertexArrayObject,
} = makeQuad(innerWidth / 2, innerHeight / 2)

/* --------------- Utils ----------------- */

function makeQuad (positionX, positionY, width = 50, height = 50, drawType = gl.STATIC_DRAW) {
  /*
    Write our vertex and fragment shader programs as simple JS strings

    !!! Important !!!!
    
    WebGL2 requires GLSL 3.00 ES
    We need to declare this version on the FIRST LINE OF OUR PROGRAM
    Otherwise it would not work!
  */
  const vertexShaderSource = `#version 300 es
    /*
      Pass in geometry position and tex coord from the CPU
    */
    in vec4 a_position;
    in vec2 a_uv;
    
    /*
     Pass in global projection matrix for each vertex
    */
    uniform mat4 u_projectionMatrix;
    
    /*
      Specify varying variable to be passed to fragment shader
    */
    out vec2 v_uv;
    
    void main () {
      gl_Position = u_projectionMatrix * a_position;
      v_uv = a_uv;
    }
  `
  const fragmentShaderSource = `#version 300 es
    /*
      Set fragment shader float precision
    */
    precision highp float;
    
    /*
      Consume interpolated tex coord varying from vertex shader
    */
    in vec2 v_uv;
    
    /*
      Final color represented as a vector of 4 components - r, g, b, a
    */
    out vec4 outColor;
    
    void main () {
      float dist = distance(v_uv, vec2(0.5)) * 2.0;
      float c = clamp(1.0 - dist, 0.0, 1.0);
      outColor = vec4(vec3(1.0), c);
    }
  `
  /*
    Construct a WebGLProgram object out of our shader sources and link it on the GPU
  */
  const quadProgram = makeGLProgram(vertexShaderSource, fragmentShaderSource)
  
  /*
    Create a Vertex Array Object that will store a description of our geometry
    that we can reference later when rendering
  */
  const quadVertexArrayObject = gl.createVertexArray()
  
  /*
    1. Defining geometry positions
    
    Create the geometry points for our quad
        
    V6  _______ V5         V3
       |      /         /|
       |    /         /  |
       |  /         /    |
    V4 |/      V1 /______| V2
     
     We need two triangles to form a single quad
     As you can see, we end up duplicating vertices:
     V5 & V3 and V4 & V1 end up occupying the same position.
     
     There are better ways to prepare our data so we don't end up with
     duplicates, but let's keep it simple for this demo and duplicate them
     
     Unlike regular Javascript arrays, WebGL needs strongly typed data
     That's why we supply our positions as an array of 32 bit floating point numbers
  */
  const vertexArray = new Float32Array([
    /*
      First set of 3 points are for our first triangle
    */
    positionX - width / 2,  positionY + height / 2, // Vertex 1 (X, Y)
    positionX + width / 2,  positionY + height / 2, // Vertex 2 (X, Y)
    positionX + width / 2,  positionY - height / 2, // Vertex 3 (X, Y)
    /*
      Second set of 3 points are for our second triangle
    */
    positionX - width / 2, positionY + height / 2, // Vertex 4 (X, Y)
    positionX + width / 2, positionY - height / 2, // Vertex 5 (X, Y)
    positionX - width / 2, positionY - height / 2  // Vertex 6 (X, Y)
  ])

  /*
    Create a WebGLBuffer that will hold our triangles positions
  */
  const vertexBuffer = gl.createBuffer()
  /*
    Now that we've created a GLSL program on the GPU we need to supply data to it
    We need to supply our 32bit float array to the a_position variable used by the GLSL program
    
    When you link a vertex shader with a fragment shader by calling gl.linkProgram(someProgram)
    WebGL (the driver/GPU/browser) decide on their own which index/location to use for each attribute
    
    Therefore we need to find the location of a_position from our program
  */
  const a_positionLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_position')
  
  /*
    Bind the Vertex Array Object descriptor for this geometry
    Each geometry instruction from now on will be recorded under it
    
    To stop recording after we are done describing our geometry, we need to simply unbind it
  */
  gl.bindVertexArray(quadVertexArrayObject)

  /*
    Bind the active gl.ARRAY_BUFFER to our WebGLBuffer that describes the geometry positions
  */
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
  /*
    Feed our 32bit float array that describes our quad to the vertexBuffer using the
    gl.ARRAY_BUFFER global handle
  */
  gl.bufferData(gl.ARRAY_BUFFER, vertexArray, drawType)
  /*
    We need to explicitly enable the a_position variable on the GPU
  */
  gl.enableVertexAttribArray(a_positionLocationOnGPU)
  /*
    Finally we need to instruct the GPU how to pull the data out of our
    vertexBuffer and feed it into the a_position variable in the GLSL program
  */
  /*
    Tell the attribute how to get data out of vertexBuffer (ARRAY_BUFFER)
  */
  const size = 2           // 2 components per iteration
  const type = gl.FLOAT    // the data is 32bit floats
  const normalize = false  // don't normalize the data
  const stride = 0         // 0 = move forward size * sizeof(type) each iteration to get the next position
  const offset = 0         // start at the beginning of the buffer
  gl.vertexAttribPointer(a_positionLocationOnGPU, size, type, normalize, stride, offset)
  
  /*
    2. Defining geometry UV texCoords
    
    V6  _______ V5         V3
       |      /         /|
       |    /         /  |
       |  /         /    |
    V4 |/      V1 /______| V2
  */
  const uvsArray = new Float32Array([
    0, 0, // V1
    1, 0, // V2
    1, 1, // V3
    0, 0, // V4
    1, 1, // V5
    0, 1  // V6
  ])
  /*
    The rest of the code is exactly like in the vertices step above.
    We need to put our data in a WebGLBuffer, look up the a_uv variable
    in our GLSL program, enable it, supply data to it and instruct
    WebGL how to pull it out:
  */
  const uvsBuffer = gl.createBuffer()
  const a_uvLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_uv')
  gl.bindBuffer(gl.ARRAY_BUFFER, uvsBuffer)
  gl.bufferData(gl.ARRAY_BUFFER, uvsArray, drawType)
  gl.enableVertexAttribArray(a_uvLocationOnGPU)
  gl.vertexAttribPointer(a_uvLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
  
  /*
    Stop recording and unbind the Vertex Array Object descriptor for this geometry
  */
  gl.bindVertexArray(null)
  
  /*
    WebGL has a normalized viewport coordinate system which looks like this:
    
         Device Viewport
       ------- 1.0 ------  
      |         |         |
      |         |         |
    -1.0 --------------- 1.0
      |         |         | 
      |         |         |
       ------ -1.0 -------
       
     However as you can see, we pass the position and size of our quad in actual pixels
     To convert these pixels values to the normalized coordinate system, we will
     use the simplest 2D projection matrix.
     It will be represented as an array of 16 32bit floats
     
     You can read a gentle introduction to 2D matrices here
     https://webglfundamentals.org/webgl/lessons/webgl-2d-matrices.html
  */
  const projectionMatrix = new Float32Array([
    2 / innerWidth, 0, 0, 0,
    0, -2 / innerHeight, 0, 0,
    0, 0, 0, 0,
    -1, 1, 0, 1,
  ])
  
  /*
    In order to supply uniform data to our quad GLSL program, we first need to enable the GLSL program responsible for rendering our quad
  */
  gl.useProgram(quadProgram)
  /*
    Just like the a_position attribute variable earlier, we also need to look up
    the location of uniform variables in the GLSL program in order to supply them data
  */
  const u_projectionMatrixLocation = gl.getUniformLocation(quadProgram, 'u_projectionMatrix')
  /*
    Supply our projection matrix as a Float32Array of 16 items to the u_projectionMatrix uniform
  */
  gl.uniformMatrix4fv(u_projectionMatrixLocation, false, projectionMatrix)
  /*
    We have set up our uniform variables correctly, stop using the quad program for now
  */
  gl.useProgram(null)

  /*
    Return our GLSL program and the Vertex Array Object descriptor of our geometry
    We will need them to render our quad in our updateFrame method
  */
  return {
    quadProgram,
    quadVertexArrayObject,
  }
}

/* rest of code */
function makeGLShader (shaderType, shaderSource) {}
function makeGLProgram (vertexShaderSource, fragmentShaderSource) {}
function updateFrame (timestampMs) {}

We have successfully created a GLSL program quadProgram, which is running on the GPU, waiting to be drawn on the screen. We also have obtained a Vertex Array Object quadVertexArrayObject, which describes our geometry and can be referenced before we draw. We can now draw our quad. Let’s augment our updateFrame() method like so:

function updateFrame (timestampMs) {
   /* rest of our code */

  /*
    Bind the Vertex Array Object descriptor of our quad we generated earlier
  */
  gl.bindVertexArray(quadVertexArrayObject)
  /*
    Use our quad GLSL program
  */
  gl.useProgram(quadProgram)
  /*
    Issue a render command to paint our quad triangles
  */
  {
    const drawPrimitive = gl.TRIANGLES
    const vertexArrayOffset = 0
    const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
    gl.drawArrays(drawPrimitive, vertexArrayOffset, numberOfVertices)
  }
  /*
    After a successful render, it is good practice to unbind our
    GLSL program and Vertex Array Object so we keep WebGL state clean.
    We will bind them again anyway on the next render
  */
  gl.useProgram(null)
  gl.bindVertexArray(null)

  /* Issue next frame paint */
  requestAnimationFrame(updateFrame)
}

And here is our result:

We can use the great SpectorJS Chrome extension to capture our WebGL operations on each frame. We can look at the entire command list with their associated visual states and context information. Here is what it takes to render a single frame with our updateFrame() call:

Draw calls needed to render a single 2D quad on the center of our screen.
A screenshot of all the steps we implemented to render a single quad. (Click to see a larger version)

Some gotchas:

  1. We declare the vertex positions of our triangles in counter-clockwise order. This is important.
  2. We need to explicitly enable blending in WebGL and specify its blend operation. For our demo we will use gl.ONE_MINUS_SRC_ALPHA as the destination blend factor (it multiplies the destination colors by 1 minus the source alpha value).
  3. In our vertex shader you can see we expect the input variable a_position to be a vector with 4 components (vec4), while in JavaScript we specify only 2 items per vertex. That’s because the default attribute value is 0, 0, 0, 1. It doesn’t matter that you’re only supplying x and y from your attributes. z defaults to 0 and w defaults to 1.
  4. As you can see, WebGL is a state machine, where you have to constantly bind stuff before you are able to work on it and you always have to make sure you unbind it afterwards. Consider how in the code snippet above we supplied a Float32Array with our positions to the vertexBuffer:
const vertexArray = new Float32Array([/* ... */])
const vertexBuffer = gl.createBuffer()
/* Bind our vertexBuffer to the global binding WebGL bind point gl.ARRAY_BUFFER */
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
/* At this point, gl.ARRAY_BUFFER represents vertexBuffer */
/* Supply data to our vertexBuffer using the gl.ARRAY_BUFFER binding point */
gl.bufferData(gl.ARRAY_BUFFER, vertexArray, gl.STATIC_DRAW)
/* Do a bunch of other stuff with the active gl.ARRAY_BUFFER (vertexBuffer) here */
// ...

/* After you have done your work, unbind it */
gl.bindBuffer(gl.ARRAY_BUFFER, null)

This is the total opposite of JavaScript, where the same operation might be expressed like this, for example (pseudocode):

const vertexBuffer = gl.createBuffer()
vertexBuffer.addData(vertexArray)
vertexBuffer.setDrawOperation(gl.STATIC_DRAW)
// etc.

Coming from a JavaScript background, I initially found WebGL’s state machine way of doing things, constantly binding and unbinding, really odd. One must exercise good discipline and always make sure to unbind stuff after using it, even in trivial programs like ours! Otherwise you risk things not working and hard-to-track bugs.

Drawing lots of quads

We have successfully rendered a single quad, but in order to make things more interesting and visually appealing, we need to draw more.

As we saw already, we can easily create new geometries with different positions using our makeQuad() utility helper. We could pass each one a different position and size and compile each of them into a separate GLSL program to be executed on the GPU. This would work, however:

As we saw in our update loop method updateFrame, to render our quad on each frame we must:

  1. Use the correct GLSL program by calling gl.useProgram()
  2. Bind the correct VAO describing our geometry by calling gl.bindVertexArray()
  3. Issue a draw call with correct primitive type by calling gl.drawArrays()

So 3 WebGL commands in total.

What if we want to render 500 quads? Suddenly we jump to 500×3 or 1500 individual WebGL calls on each frame of our animation. If we want 1000 quads we jump up to 3000 individual calls, without even counting all of the preparation WebGL bindings we have to do before our updateFrame loop starts.

Geometry Instancing is a way to reduce these calls. It works by letting you tell WebGL how many times you want the same thing drawn (the number of instances) with minor variations, such as rotation, scale, position etc. Examples include trees, grass, crowds of people, boxes in a warehouse, etc.

Just like VAOs, instancing is a 1st class citizen in WebGL2 and does not require extensions, unlike WebGL1. Let’s augment our code to support geometry instancing and render 1000 quads with random positions.

First of all, we need to decide how many quads we want rendered and prepare the offset positions for each one as a new array of 32bit floats. Let’s do 1000 quads and position them randomly in our viewport:

/* rest of code */

/* How many quads we want rendered */
const QUADS_COUNT = 1000
/*
  Array to store our quads positions
  We need to layout our array as a continuous set
  of numbers, where each pair represents the X and Y
  of a single 2D position.
  
  Hence for 1000 quads we need an array of 2000 items
  or 1000 pairs of X and Y
*/
const quadsPositions = new Float32Array(QUADS_COUNT * 2)
for (let i = 0; i < QUADS_COUNT; i++) {
  /*
    Generate a random X and Y position
  */
  const randX = Math.random() * innerWidth
  const randY = Math.random() * innerHeight
  /*
    Set the correct X and Y for each pair in our array
  */
  quadsPositions[i * 2 + 0] = randX
  quadsPositions[i * 2 + 1] = randY
}

/*
  We also need to augment our makeQuad() method
  It no longer expects a single position, rather an array of positions
*/
const {
  quadProgram,
  quadVertexArrayObject,
} = makeQuad(quadsPositions)

/* rest of code */

Instead of a single position, we will now pass an array of positions into our makeQuad() method. Let’s augment this method to receive our offsets array as a new input variable a_offset for our shaders, which will contain the correct XY offset for a particular instance. To do this, we need to prepare our offsets as a new WebGLBuffer and instruct WebGL how to unpack them, just like we did for a_position and a_uv:

function makeQuad (quadsPositions, width = 70, height = 70, drawType = gl.STATIC_DRAW) {
  /* rest of code */

  /*
    Add offset positions for our individual instances
    They are declared and used in exactly the same way as
    "a_position" and "a_uv" above
  */
  const offsetsBuffer = gl.createBuffer()
  const a_offsetLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_offset')
  gl.bindBuffer(gl.ARRAY_BUFFER, offsetsBuffer)
  gl.bufferData(gl.ARRAY_BUFFER, quadsPositions, drawType)
  gl.enableVertexAttribArray(a_offsetLocationOnGPU)
  gl.vertexAttribPointer(a_offsetLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
  /*
    HOWEVER, we must add an additional WebGL call to set this attribute to only
    change per instance, instead of per vertex like a_position and a_uv above
  */
  const instancesDivisor = 1
  gl.vertexAttribDivisor(a_offsetLocationOnGPU, instancesDivisor)
  
  /*
    Stop recording and unbind the Vertex Array Object descriptor for this geometry
  */
  gl.bindVertexArray(null)

  /* rest of code */
}

We need to augment our original vertexArray responsible for passing data into our a_position GLSL variable. We no longer need to offset it to the desired position like in the first example; the a_offset variable will now take care of this in the vertex shader:

const vertexArray = new Float32Array([
  /*
    First set of 3 points are for our first triangle
  */
 -width / 2,  height / 2, // Vertex 1 (X, Y)
  width / 2,  height / 2, // Vertex 2 (X, Y)
  width / 2, -height / 2, // Vertex 3 (X, Y)
  /*
    Second set of 3 points are for our second triangle
  */
 -width / 2,  height / 2, // Vertex 4 (X, Y)
  width / 2, -height / 2, // Vertex 5 (X, Y)
 -width / 2, -height / 2  // Vertex 6 (X, Y)
])

We also need to augment our vertex shader to consume and use the new a_offset input variable we pass from Javascript:

const vertexShaderSource = `#version 300 es
  /* rest of GLSL code */
  /*
    This input vector will change once per instance
  */
  in vec4 a_offset;

  void main () {
     /* Account for a_offset in the final geometry position */
     vec4 newPosition = a_position + a_offset;
     gl_Position = u_projectionMatrix * newPosition;
  }
  /* rest of GLSL code */
`

And as a final step we need to change our drawArrays call in updateFrame to drawArraysInstanced to account for instancing. This new method expects the exact same arguments and adds instanceCount as the last one:

function updateFrame (timestampMs) {
   /* rest of code */
   {
     const drawPrimitive = gl.TRIANGLES
     const vertexArrayOffset = 0
     const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
     gl.drawArraysInstanced(drawPrimitive, vertexArrayOffset, numberOfVertices, QUADS_COUNT)
   }
   /* rest of code */
}

And with all these changes, here is our updated example:

Even though we increased the amount of rendered objects by 1000x, we are still making 3 WebGL calls on each frame. That’s a pretty great performance win!

Steps needed so our WebGL can draw 1000 of quads via geometry instancing.
All WebGL calls needed to draw our 1000 quads in a single updateFrame() call. Note the number of needed calls did not increase from the previous example, thanks to instancing.

Post Processing with a fullscreen quad

Now that we have our 1000 quads successfully rendering to the device screen on each frame, we can turn them into metaballs. As we established, we need to scan the pixels of the picture we generated in the previous steps and determine the alpha value of each pixel. If it is below a certain threshold, we discard it, otherwise we color it.

To do this, instead of rendering our scene directly to the screen as we do right now, we need to render it to a texture. We will do our post processing on this texture and render the result to the device screen.

Post-Processing is a technique used in graphics that allows you to take a current input texture, and manipulate its pixels to produce a transformed image. This can be used to apply shiny effects like volumetric lighting, or any other filter type effect you’ve seen in applications like Photoshop or Instagram.

Nicolas Garcia Belmonte

The basic technique for creating these effects is pretty straightforward:

  1. A WebGLTexture is created with the same size as the canvas and attached as a color attachment to a WebGLFramebuffer. At the beginning of our updateFrame() method, the framebuffer is set as the render target, and the entire scene is rendered normally to it.
  2. Next, a full-screen quad is rendered to the device screen using the texture generated in step 1 as an input. The shader used during the rendering of the quad is what contains the post-process effect.

Creating a texture and framebuffer to render to

A framebuffer is just a collection of attachments. Attachments are either textures or renderbuffers. Let’s create a WebGLTexture and attach it to a framebuffer as the first color attachment:

/* rest of code */

const renderTexture = makeTexture()
const framebuffer = makeFramebuffer(renderTexture)

function makeTexture (textureWidth = canvas.width, textureHeight = canvas.height) {
  /*
    Create the texture that we will use to render to
  */
  const targetTexture = gl.createTexture()
  /*
    Just like everything else in WebGL up until now, we need to bind it
    so we can configure it. We will unbind it once we are done with it.
  */
  gl.bindTexture(gl.TEXTURE_2D, targetTexture)

  /*
    Define texture settings
  */
  const level = 0
  const internalFormat = gl.RGBA
  const border = 0
  const format = gl.RGBA
  const type = gl.UNSIGNED_BYTE
  /*
    Notice how data is null. That's because we don't have data for this texture just yet
    We just need WebGL to allocate the texture
  */
  const data = null
  gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, textureWidth, textureHeight, border, format, type, data)

  /*
    Set the filtering so we don't need mips
  */
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE)
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE)
  
  /* We are done configuring the texture, unbind it */
  gl.bindTexture(gl.TEXTURE_2D, null)

  return targetTexture
}

function makeFramebuffer (texture) {
  /*
    Create and bind the framebuffer
  */
  const fb = gl.createFramebuffer()
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb)
 
  /*
    Attach the texture as the first color attachment
  */
  const level = 0
  const attachmentPoint = gl.COLOR_ATTACHMENT0
  gl.framebufferTexture2D(gl.FRAMEBUFFER, attachmentPoint, gl.TEXTURE_2D, texture, level)

  /* Unbind the framebuffer once it is set up, and return it */
  gl.bindFramebuffer(gl.FRAMEBUFFER, null)

  return fb
}

We have successfully created a texture and attached it as a color attachment to a framebuffer. Now we can render our scene to it. Let’s augment our updateFrame() method:

function updateFrame () {
  gl.viewport(0, 0, canvas.width, canvas.height)
  gl.clearColor(0, 0, 0, 0)
  gl.clear(gl.COLOR_BUFFER_BIT)

  /*
    Bind the framebuffer we created
    From now on until we unbind it, each WebGL draw command will render in it
  */
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
  
  /* Set the offscreen framebuffer background color */
  gl.clearColor(0.2, 0.2, 0.2, 1.0)
  /* Clear the offscreen framebuffer pixels */
  gl.clear(gl.COLOR_BUFFER_BIT)

  /*
    Code for rendering our instanced quads here
  */

  /*
    We have successfully rendered to the framebuffer at this point
    In order to render to the screen next, we need to unbind it
  */
  gl.bindFramebuffer(gl.FRAMEBUFFER, null)
  
  /* Issue next frame paint */
  requestAnimationFrame(updateFrame)
}

Let’s take a look at our result:

As you can see, we get an empty screen. There are no errors and the program is running just fine – keep in mind however that we are rendering to a separate framebuffer, not the default device screen framebuffer!

Breakdown of our WebGL scene and the steps needed to render it to a separate framebuffer.
Our program produces a black screen, since we are rendering to the offscreen framebuffer.

In order to display our offscreen framebuffer back on the screen, we need to render a fullscreen quad and use the framebuffer’s texture as an input.

Creating a fullscreen quad and displaying our texture on it

Let’s create a new quad. We can reuse our makeQuad() method from the above snippets, but we need to augment it to optionally support instancing and to accept the vertex and fragment shader sources as outside arguments. This time we need only one quad, and the shaders we need for it are different.

Take a look at the updated makeQuad() signature:

/* rename our instanced quads program & VAO */
const {
  quadProgram: instancedQuadsProgram,
  quadVertexArrayObject: instancedQuadsVAO,
} = makeQuad({
  instancedOffsets: quadsPositions,
  /*
    We need different set of vertex and fragment shaders
    for the different quads we need to render, so pass them from outside
  */
  vertexShaderSource: instancedQuadVertexShader,
  fragmentShaderSource: instancedQuadFragmentShader,
  /*
    support optional instancing
  */
  isInstanced: true,
})
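The augmented makeQuad() might be organized roughly like this – a sketch, assuming the a_position, a_uv and u_projectionMatrix setup stays exactly as in the earlier version:

function makeQuad ({
  vertexShaderSource,
  fragmentShaderSource,
  isInstanced = false,
  instancedOffsets = null,
  width = 70,
  height = 70,
  drawType = gl.STATIC_DRAW,
}) {
  const quadProgram = makeGLProgram(vertexShaderSource, fragmentShaderSource)
  const quadVertexArrayObject = gl.createVertexArray()

  gl.bindVertexArray(quadVertexArrayObject)

  /* ... same a_position and a_uv setup as before ... */

  if (isInstanced) {
    /* Only the instanced quads need the per-instance a_offset attribute */
    const offsetsBuffer = gl.createBuffer()
    const a_offsetLocationOnGPU = gl.getAttribLocation(quadProgram, 'a_offset')
    gl.bindBuffer(gl.ARRAY_BUFFER, offsetsBuffer)
    gl.bufferData(gl.ARRAY_BUFFER, instancedOffsets, drawType)
    gl.enableVertexAttribArray(a_offsetLocationOnGPU)
    gl.vertexAttribPointer(a_offsetLocationOnGPU, 2, gl.FLOAT, false, 0, 0)
    gl.vertexAttribDivisor(a_offsetLocationOnGPU, 1)
  }

  gl.bindVertexArray(null)

  /* ... same u_projectionMatrix setup as before ... */

  return { quadProgram, quadVertexArrayObject }
}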

Let’s use the same method to create a new fullscreen quad and render it. First, our vertex and fragment shaders:

const fullscreenQuadVertexShader = `#version 300 es
   in vec4 a_position;
   in vec2 a_uv;
   
   uniform mat4 u_projectionMatrix;
   
   out vec2 v_uv;
   
   void main () {
    gl_Position = u_projectionMatrix * a_position;
    v_uv = a_uv;
   }
`
const fullscreenQuadFragmentShader = `#version 300 es
  precision highp float;
  
  /*
    Pass our texture we render to as an uniform
  */
  uniform sampler2D u_texture;
  
  in vec2 v_uv;
  
  out vec4 outputColor;
  
  void main () {
    /*
      Use our interpolated UVs we assigned in Javascript to lookup
      texture color value at each pixel
    */
    vec4 inputColor = texture(u_texture, v_uv);
    
    /*
      0.5 is our alpha threshold we use to decide if
      pixel should be discarded or painted
    */
    float cutoffThreshold = 0.5;
    /*
      "cutoff" will be 0 if pixel is below 0.5 or 1 if above
      
      step() docs - https://thebookofshaders.com/glossary/?search=step
    */
    float cutoff = step(cutoffThreshold, inputColor.a);
    
    /*
      Let's use mix() GLSL method instead of if statement
      if cutoff is 0, we will discard the pixel by using empty color with no alpha
      otherwise, let's use black with alpha of 1
      
      mix() docs - https://thebookofshaders.com/glossary/?search=mix
    */
    vec4 emptyColor = vec4(0.0);
    /* Render base metaballs shapes */
    vec4 borderColor = vec4(1.0, 0.0, 0.0, 1.0);
    outputColor = mix(
      emptyColor,
      borderColor,
      cutoff
    );
    
    /*
      Increase the threshold and calculate a new cutoff, so we can render smaller shapes again, this time in a different color and with a smaller radius
    */
    cutoffThreshold += 0.05;
    cutoff = step(cutoffThreshold, inputColor.a);
    vec4 fillColor = vec4(1.0, 1.0, 0.0, 1.0);
    /*
      Add new smaller metaballs color on top of the old one
    */
    outputColor = mix(
      outputColor,
      fillColor,
      cutoff
    );
  }
`

Let’s use them to create and link a valid GLSL program, just like when we rendered our instances:

const {
  quadProgram: fullscreenQuadProgram,
  quadVertexArrayObject: fullscreenQuadVAO,
} = makeQuad({
  vertexShaderSource: fullscreenQuadVertexShader,
  fragmentShaderSource: fullscreenQuadFragmentShader,
  isInstanced: false,
  width: innerWidth,
  height: innerHeight
})
/*
  Unlike our instanced quads GLSL program, here we need to pass an extra uniform - a "u_texture"!
  Tell the shader to use texture unit 0 for u_texture
*/
gl.useProgram(fullscreenQuadProgram)
const u_textureLocation = gl.getUniformLocation(fullscreenQuadProgram, 'u_texture')
gl.uniform1i(u_textureLocation, 0)
gl.useProgram(null)

Finally we can render the fullscreen quad with the result texture as a uniform (u_texture). Let’s change our updateFrame() method:

function updateFrame () {
 gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
 /* render instanced quads here */
 gl.bindFramebuffer(gl.FRAMEBUFFER, null)
 
 /*
   Render our fullscreen quad
 */
 gl.bindVertexArray(fullscreenQuadVAO)
 gl.useProgram(fullscreenQuadProgram)
 /*
  Bind the texture we rendered to as the active TEXTURE_2D.
  Texture unit 0 is active by default, which matches the 0 we passed
  to gl.uniform1i() earlier; otherwise we would need to call
  gl.activeTexture(gl.TEXTURE0) first
 */
 gl.bindTexture(gl.TEXTURE_2D, renderTexture)
 {
   const drawPrimitive = gl.TRIANGLES
   const vertexArrayOffset = 0
   const numberOfVertices = 6 // 6 vertices = 2 triangles = 1 quad
   gl.drawArrays(drawPrimitive, vertexArrayOffset, numberOfVertices)
 }
 /*
   Just like everything else, unbind our texture once we are done rendering
 */
 gl.bindTexture(gl.TEXTURE_2D, null)
 gl.useProgram(null)
 gl.bindVertexArray(null)
 requestAnimationFrame(updateFrame)
}

And here is our final result (I also added a simple animation to make the effect more apparent):

And here is the breakdown of one updateFrame() call:

Breakdown of our WebGL scene and the steps needed to render 1000 quads and post-process them to metaballs.
You can clearly see how we render our 1000 instanced quads to a separate framebuffer in steps 1 to 3. We then draw the resulting texture onto a fullscreen quad, manipulate it in the fragment shader, and render it to the screen in steps 4 to 7.

Aliasing issues

On my 2016 MacBook Pro with retina display I can clearly see aliasing issues in our current example. If we use bigger radii and blow the animation up to fullscreen, the problem will only become more noticeable.

The issue comes from the fact that we are rendering to an 8-bit gl.UNSIGNED_BYTE texture. If we want to increase the detail, we need to switch to floating point textures (32-bit float gl.RGBA32F or 16-bit float gl.RGBA16F). The catch is that rendering to these textures is not supported on all hardware and is not part of the WebGL2 core. It is exposed through optional extensions, whose presence we need to check for.

The extensions we need in order to render to 32-bit floating point textures are:

  • EXT_color_buffer_float
  • OES_texture_float_linear

If these extensions are present on the user’s device, we can use internalFormat = gl.RGBA32F and textureType = gl.FLOAT when creating our render textures. If they are not present, we can fall back to 16-bit floating point textures. The extensions we need in that case are:

  • EXT_color_buffer_half_float
  • OES_texture_half_float_linear

If these extensions are present, we can use internalFormat = gl.RGBA16F and textureType = gl.HALF_FLOAT for our render texture. If not, we will fall back to what we have used up until now – internalFormat = gl.RGBA and textureType = gl.UNSIGNED_BYTE.

Here is our updated makeTexture() method:

function makeTexture (textureWidth = canvas.width, textureHeight = canvas.height) { 
  /*
   Initialize internal format & texture type to default values
  */
  let internalFormat = gl.RGBA
  let type = gl.UNSIGNED_BYTE
  
  /*
    Check if optional extensions are present on device
  */
  const rgba32fSupported = gl.getExtension('EXT_color_buffer_float') && gl.getExtension('OES_texture_float_linear')
  
  if (rgba32fSupported) {
    internalFormat = gl.RGBA32F
    type = gl.FLOAT
  } else {
    /*
      Check if optional fallback extensions are present on device
    */
    const rgba16fSupported = gl.getExtension('EXT_color_buffer_half_float') && gl.getExtension('OES_texture_half_float_linear')
    if (rgba16fSupported) {
      internalFormat = gl.RGBA16F
      type = gl.HALF_FLOAT
    }
  }

  /* rest of code */
  
  /*
    Pass in correct internalFormat and textureType to texImage2D call 
  */
  gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, textureWidth, textureHeight, border, format, type, data)

  /* rest of code */
}

And here is our updated result:

Conclusion

I hope I managed to showcase the core principles behind WebGL2 with this demo. As you can see, the API itself is low-level and requires quite a bit of typing, yet at the same time it is really powerful and lets you draw complex scenes with fine-grained control over the rendering.

Writing production-ready WebGL requires even more typing, checking for optional features and extensions, and handling fallbacks when they are missing, so I would advise you to use a framework. At the same time, I believe it is important to understand the key concepts behind the API so you can successfully use higher level libraries like threejs and dig into their internals if needed.

I am a big fan of twgl, which hides away much of the verbosity of the API, while still being really low-level with a small footprint. This demo’s code can easily be reduced by more than half by using it.

I encourage you to experiment with the code after reading this article: plug in different values, change the order of things, add more draw commands, and so on. I hope you walk away with a high-level understanding of the core WebGL2 API and how it all ties together, so you can learn more on your own.


Coding a 3D Lines Animation with Three.js

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!

In this live stream of ALL YOUR HTML, we will be recreating the cool 3D lines animation seen on the Voice of Racism website by Assembly, using Three.js’ DepthTexture.

This coding session was streamed live on December 13, 2020.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Coding a Simple Raymarching Scene with Three.js

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!

In this live stream of ALL YOUR HTML, we will be coding a Raymarching demo in Three.js from scratch. We’ll be using some cool Matcap textures and adding a wobbly animation to the scene, and it will all be done with math functions. The scene is inspired by this demo made by Luigi De Rosa.

This coding session was streamed live on December 6, 2020.

Check out the live demo.

The Art of Code channel: https://www.youtube.com/c/TheArtofCodeIsCool/

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Replicating the Icosahedron from Rogier de Boevé’s Website

Editor’s note: We want to share more of the web dev and design community directly here on Codrops, so we’re very happy to start featuring Yuriy’s newest live coding sessions!

In this live stream of ALL YOUR HTML, we’ll be replicating the beautiful icosahedron animation from Rogier de Boevé’s website. We’ll be using Three.js and GLSL to make things cool, and also some postprocessing.

This coding session was streamed live on November 22, 2020.

This is what we’ll be learning to code:

Original website: https://rogierdeboeve.com/

Developer’s Twitter: https://twitter.com/rogierdeboeve

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…


Simulating Depth of Field with Particles using the Blurry Library

Blurry is a set of scripts that allow you to easily visualize simple geometrical shapes with the bokeh/depth-of-field effect of an out-of-focus camera. It uses Three.js internally to make it easy to develop the shaders and the WebGL programs required to run it.

The bokeh effect is generated by using millions of particles to draw the primitives supported by the library. These particles are then accumulated in a texture and randomly displaced in a circle depending on how far away they are from the focal plane.

These are some of the scenes I’ve recently created using Blurry:

[Image: example scenes created with Blurry]

Since the library itself is very simple and you don’t need to know more than three functions to get started, I’ve decided to write this walk-through of a scene made with Blurry. It will teach you how to use various tricks to create geometrical shapes often found in the works of generative artists. This will also hopefully show you how simple tools can produce interesting and complex looking results.

In this little introduction to Blurry we’ll try to recreate the following scene, by using various techniques borrowed from the world of generative art:

[Image: the target scene we’ll recreate]

Starting out

You can download the repo here and serve index.html from a local server to render the scene that is currently coded inside libs/createScene.js. You can rotate, zoom and pan around the scene as with any Three.js project using OrbitControls.js.

There are also some additional key-bindings to change various parameters of the renderer, such as the focal length, exposure, bokeh strength and more. These are visible at the bottom left of the screen.

All the magic happens inside libs/createScene.js, where you can implement the two functions required to render something with Blurry. All the snippets defined in this article will end up inside createScene.js.

The most important function we’ll need to implement to recreate the scene shown at the beginning of the article is createScene(), which will be called by the other scripts just before the renderer pushes the primitives to the GPU for the actual rendering of the scene.

The other function we’ll define is setGlobals(), which is used to define the parameters of the shaders that will render our scene, such as the strength of the bokeh effect, the exposure, background color, etc.

Let’s head over to createScene.js, remove everything that’s already coded in there, and define setGlobals() as:

function setGlobals() {
    pointsPerFrame = 50000;

    cameraPosition = new THREE.Vector3(0, 0, 115);
    cameraFocalDistance = 100;

    minimumLineSize = 0.005;

    bokehStrength = 0.02;
    focalPowerFunction = 1;
    exposure = 0.009;
    distanceAttenuation = 0.002;

    useBokehTexture = true;
    bokehTexturePath = "assets/bokeh/pentagon2.png";

    backgroundColor[0] *= 0.8;
    backgroundColor[1] *= 0.8;
    backgroundColor[2] *= 0.8;
}

There’s an explanation for each of these parameters in the Readme of the GitHub repo. The important info at the moment is that the camera will start positioned at (x: 0, y: 0, z: 115) and the cameraFocalDistance (the distance from the camera where our primitives will be in focus) will be set at 100, meaning that every point 100 units away from the camera will be in focus.

Another variable to consider is pointsPerFrame, which is used internally to assign a set number of points to all the primitives to render in a single frame. If you find that your GPU is struggling with 50000, lower that value.

Before we start implementing createScene(), let’s first define some initial global variables that will be useful later:

let rand, nrand;
let vec3 = function(x,y,z) { return new THREE.Vector3(x,y,z) };

I’ll explain the usage of each of these variables as we move along; vec3() is just a simple shortcut to create Three.js vectors without having to type THREE.Vector3(…) each time.

Let’s now define createScene():

function createScene() {
    Utils.setRandomSeed("3926153465010");

    rand = function() { return Utils.rand(); };
    nrand = function() { return rand() * 2 - 1; };
}

Very often I find the need to “repeat” the sequence of randomly generated numbers I had in a bugged scene. If I had to rely on the standard Math.random() function, each page-refresh would give me different random numbers, which is why I’ve included a seeded random number generator in the project. Utils.setRandomSeed(…) will take a string as a parameter and use that as the seed of the random numbers that will be generated by Utils.rand(), the seeded generator that is used in place of Math.random() (though you can still use that if you want).

The functions rand & nrand will be used to generate random values in the interval [0 … 1] for rand, and [-1 … +1] for nrand.

Let’s draw some lines

At the moment you can only draw two simple primitives in Blurry: lines and quads. We’ll focus on lines in this article. Here’s the code that generates 10 consecutive straight lines:

function createScene() {
    Utils.setRandomSeed("3926153465010");

    rand = function() { return Utils.rand(); };
    nrand = function() { return rand() * 2 - 1; };

    for(let i = 0; i < 10; i++) {
        lines.push(
            new Line({
                v1: vec3(i, 0, 15),
                v2: vec3(i, 10, 15),
                
                c1: vec3(5, 5, 5),
                c2: vec3(5, 5, 5),
            })
        );
    }
}

lines is simply a global array used to store the lines to render. Every line we .push() into the array will be rendered.

v1 and v2 are the two vertices of the line. c1 and c2 are the colors associated with each vertex, as an RGB triplet. Note that Blurry is not restricted to the [0…1] range for each component of the RGB color. In this case, using 5 for each component will give us a white line.

If you did everything correctly up until now, you’ll see 10 straight lines on the screen as soon as you launch index.html from a local server.

Here’s the code we have so far.

Since we’re not here to just draw straight lines, we’ll now make more interesting shapes with the help of these two new functions:

function createScene() {
    Utils.setRandomSeed("3926153465010");
    
    rand = function() { return Utils.rand(); };
    nrand = function() { return rand() * 2 - 1; };
    
    computeWeb();
    computeSparkles();
}

function computeWeb() { }
function computeSparkles() { }

Let’s start by defining computeWeb() as:

function computeWeb() {
    // how many curved lines to draw
    let r2 = 17;
    // how many "straight pieces" to assign to each of these curved lines
    let r1 = 35;
    for(let j = 0; j < r2; j++) {
        for(let i = 0; i < r1; i++) {
            // defining the spherical coordinates of the two vertices of the line we're drawing
            let phi1 = j / r2 * Math.PI * 2;
            let theta1 = i / r1 * Math.PI - Math.PI * 0.5;

            let phi2 = j / r2 * Math.PI * 2;
            let theta2 = (i+1) / r1 * Math.PI - Math.PI * 0.5;

            // converting spherical coordinates to cartesian
            let x1 = Math.sin(phi1) * Math.cos(theta1);
            let y1 = Math.sin(theta1);
            let z1 = Math.cos(phi1) * Math.cos(theta1);

            let x2 = Math.sin(phi2) * Math.cos(theta2);
            let y2 = Math.sin(theta2);
            let z2 = Math.cos(phi2) * Math.cos(theta2);

            lines.push(
                new Line({
                    v1: vec3(x1,y1,z1).multiplyScalar(15),
                    v2: vec3(x2,y2,z2).multiplyScalar(15),
                    c1: vec3(5,5,5),
                    c2: vec3(5,5,5),
                })
            );
        }
    }
}

The goal here is to create a bunch of vertical lines that follow the shape of a sphere. Since we can’t make curved lines, we’ll break each line along this sphere into tiny straight pieces. (x1,y1,z1) and (x2,y2,z2) will be the endpoints of the line we’ll draw in each iteration of the loop. r2 is used to decide how many vertical lines we’ll be drawing on the surface of the sphere, whereas r1 is the number of tiny straight pieces that we’re going to use for each one of the curved lines we’ll draw.

The phi and theta variables represent the spherical coordinates of both points, which are then converted to Cartesian coordinates before pushing the new line into the lines array.

Each time the outer loop (j) is entered, phi1 and phi2 will decide at which angle the vertical line will start (for the moment, they’ll hold the same exact value). Every iteration inside the inner loop (i) will construct the tiny pieces creating the vertical line, by slightly incrementing the theta angle at each iteration.

After the conversion, the resulting Cartesian coordinates will be multiplied by 15 world units with .multiplyScalar(15), thus the curved lines that we’re drawing are placed on the surface of a sphere which has a radius of exactly 15.
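
To make the conversion easier to reuse, we could factor it into a small helper (a hypothetical refactor, not part of the article’s code):

// hypothetical helper capturing the conversion used above:
// phi rotates around the vertical axis, theta moves from pole to pole
function sphericalToCartesian(phi, theta, radius) {
    return vec3(
        Math.sin(phi) * Math.cos(theta) * radius, // x
        Math.sin(theta) * radius,                 // y
        Math.cos(phi) * Math.cos(theta) * radius  // z
    );
}

// the two endpoints of one "straight piece" would then be:
// let p1 = sphericalToCartesian(phi1, theta1, 15);
// let p2 = sphericalToCartesian(phi2, theta2, 15);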

To make things a bit more interesting, let’s twist these vertical lines a bit with this simple change:

let phi1 = (j + i * 0.075) / r2 * Math.PI * 2;
...
let phi2 = (j + (i+1) * 0.075) / r2 * Math.PI * 2;

If we twist the phi angles a bit as we move up the line while we’re constructing it, we’ll end up with:

[Image: the sphere of lines with the twist applied]

And as a last change, let’s swap the z-axis of both points with the y-axis:

...
lines.push(
    new Line({
        v1: vec3(x1,z1,y1).multiplyScalar(15),
        v2: vec3(x2,z2,y2).multiplyScalar(15),
        c1: vec3(5,5,5),
        c2: vec3(5,5,5),
    })
);
...

Here’s the full source of createScene.js up to this point.

Segment-Plane intersections

Now the fun part begins. To recreate this type of intersection between the lines we just made

[Image: the web of intersecting lines we want to recreate]

…we’ll need to play a bit with ray-plane intersections. Here’s an overview of what we’ll do:

Given the lines we made in our 3D scene, we’re going to create an infinite plane with a random direction and we’ll intersect this plane with all the lines we have in the scene. Then we’ll pick one of these lines intersecting the plane (chosen at random) and we’ll find the closest line to it that is also intersected by the plane.

Let’s use a figure to make the example a bit easier to digest:

[Figure: segments intersected by the plane, with the randomly chosen red segment’s contact point x and its closest neighbor y in green]

Let’s assume all the segments in the picture are the lines of our scene that intersected the random plane. The red line was chosen randomly out of all the intersected lines. Every line intersects the plane at a specific point in 3D space. Let’s call “x” the point of contact of the red line with the random plane.

The next step is to find the closest point to “x”, from all the other contact points of the other lines that were intersected by the plane. In the figure the green point “y” is the closest.

As soon as we have these two points “x” and “y”, we’ll simply create another line connecting them.

If we run this process several times (creating a random plane, intersecting our lines, finding the closest point, making a new line) we’ll end up with the result we want. To make it possible, let’s define findIntersectingEdges() as:

function findIntersectingEdges(center, dir) {

    let contactPoints = [];
    for(line of lines) {
        let ires = intersectsPlane(
            center, dir,
            line.v1, line.v2
        );

        if(ires === false) continue;

        contactPoints.push(ires);
    }

    if(contactPoints.length < 2) return;
}

The two parameters of findIntersectingEdges() are the center of the 3D plane and the direction the plane is facing. contactPoints will store all the points of intersection between the lines of our scene and the plane, while intersectsPlane() will tell us whether a given line intersects a plane: it returns false if there’s no intersection, or the point of contact if there is one. If the returned value ires isn’t false, we’ll save it in the contactPoints array.

intersectsPlane() is defined as:

function intersectsPlane(planePoint, planeNormal, linePoint, linePoint2) {

    /* direction and length of the segment we are testing */
    let lineDirection = new THREE.Vector3(linePoint2.x - linePoint.x, linePoint2.y - linePoint.y, linePoint2.z - linePoint.z);
    let lineLength = lineDirection.length();
    lineDirection.normalize();

    /* if the segment is parallel to the plane there is no intersection */
    if (planeNormal.dot(lineDirection) === 0) {
        return false;
    }

    /* distance along the segment at which it crosses the plane */
    let t = (planeNormal.dot(planePoint) - planeNormal.dot(linePoint)) / planeNormal.dot(lineDirection);
    /* the crossing point falls outside the segment's two endpoints */
    if (t > lineLength) return false;
    if (t < 0) return false;

    /* the point of contact in 3D space */
    let px = linePoint.x + lineDirection.x * t;
    let py = linePoint.y + lineDirection.y * t;
    let pz = linePoint.z + lineDirection.z * t;
    
    /* with planeSize set to Infinity the plane is unbounded;
       a finite value would restrict intersections to a disc of that radius */
    let planeSize = Infinity;
    if(vec3(planePoint.x - px, planePoint.y - py, planePoint.z - pz).length() > planeSize) return false;

    return vec3(px, py, pz);
}

I won’t go over the details of how this function works; if you want to know more, check the original version of the function here.

Let’s now go to step 2: Picking a random contact point (we’ll call it randCp) and finding its closest neighbor contact point. Append this snippet at the end of findIntersectingEdges():

function findIntersectingEdges(center, dir) {
    ...
    ...

    let randCpIndex = Math.floor(rand() * contactPoints.length);
    let randCp = contactPoints[randCpIndex];

    // let's search the closest contact point from randCp
    let minl = Infinity;
    let minI = -1;
    
    // iterate all contact points
    for(let i = 0; i < contactPoints.length; i++) {
        // skip randCp otherwise the closest contact point to randCp will end up being... randCp!
        if(i === randCpIndex) continue;

        let cp2 = contactPoints[i];

        // 3d point in space of randCp
        let v1 = vec3(randCp.x, randCp.y, randCp.z);
        // 3d point in space of the contact point we're testing for proximity
        let v2 = vec3(cp2.x, cp2.y, cp2.z);

        let sv = vec3(v2.x - v1.x, v2.y - v1.y, v2.z - v1.z);
        // "l" holds the euclidean distance between the two contact points
        let l = sv.length();

        // if "l" is smaller than the minimum distance we've registered so far, store this contact point's index as minI
        if(l < minl) {
            minl = l;
            minI = i;
        }
    }

    let cp1 = contactPoints[randCpIndex];
    let cp2 = contactPoints[minI];

    // let's create a new line out of these two contact points
    lines.push(
        new Line({
            v1: vec3(cp1.x, cp1.y, cp1.z),
            v2: vec3(cp2.x, cp2.y, cp2.z),
            c1: vec3(2,2,2),
            c2: vec3(2,2,2),
        })
    );
}

Now that we have our routine to test intersections against a 3D plane, let’s use it repeatedly against the lines that we already made on the surface of the sphere. Append the following code at the end of computeWeb():

function computeWeb() {

    ...
    ...

    // intersect many 3d planes against all the lines we made so far
    for(let i = 0; i < 4500; i++) {
        let x0 = nrand() * 15;
        let y0 = nrand() * 15;
        let z0 = nrand() * 15;
        
        // dir will be a random direction in the unit sphere
        let dir = vec3(nrand(), nrand(), nrand()).normalize();
        findIntersectingEdges(vec3(x0, y0, z0), dir);
    }
}

If you followed along, you should get this result:

[Image: the sphere web after adding the intersection lines]

Click here to see the source up to this point.

Adding sparkles

We’re almost done! To make the depth of field effect more prominent we’re going to fill the scene with little sparkles. So, it’s now time to define the last function we were missing:

function computeSparkles() {
    for(let i = 0; i < 5500; i++) {
        let v0 = vec3(nrand(), nrand(), nrand()).normalize().multiplyScalar(18 + rand() * 65);

        let c = 1.325 * (0.3 + rand() * 0.7);
        let s = 0.125;

        if(rand() > 0.9) {
            c *= 4;
        }

        lines.push(new Line({
            v1: vec3(v0.x - s, v0.y, v0.z),
            v2: vec3(v0.x + s, v0.y, v0.z),

            c1: vec3(c, c, c),
            c2: vec3(c, c, c),
        }));

        lines.push(new Line({
            v1: vec3(v0.x, v0.y - s, v0.z),
            v2: vec3(v0.x, v0.y + s, v0.z),
    
            c1: vec3(c, c, c),
            c2: vec3(c, c, c),
        }));
    }
}

Let’s start by explaining this line:

let v0 = vec3(nrand(), nrand(), nrand()).normalize().multiplyScalar(18 + rand() * 65);

Here we’re creating a 3D vector with three random values between -1 and +1. Then, by doing .normalize() we’re making it a “unit vector”, which is a vector whose length is exactly 1.

If you drew many points by using this method (choosing three random components between [-1, +1] and then normalizing the vector) you’d notice that all the points you draw end up on the surface of a sphere (which has a radius of exactly one).

Since the sphere we’re drawing in computeWeb() has a radius of exactly 15 units, we want to make sure that none of our sparkles end up inside it.

We can make sure that all points are far enough from the sphere by multiplying each unit vector by a scalar bigger than the sphere’s radius, .multiplyScalar(18 + rand() * 65), where rand() * 65 adds some randomness to the distance.
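
As a quick sanity check of those two steps (using the rand, nrand and vec3 helpers we already defined):

let p = vec3(nrand(), nrand(), nrand()).normalize();
console.log(p.length()); // ~1, a point on the unit sphere

p.multiplyScalar(18 + rand() * 65);
console.log(p.length()); // somewhere in [18, 83), always outside the radius-15 sphere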

let c = 1.325 * (0.3 + rand() * 0.7);

c is a multiplier for the color intensity of the sparkle we’re computing. At a minimum it will be 1.325 * 0.3; if rand() ends up near its highest possible value, c will approach 1.325 * 1.

The line if(rand() > 0.9) c *= 4; can be read as “every 10 sparkles, make one whose color intensity is four times bigger than the others”.

The two calls to lines.push() draw a horizontal line and a vertical line, each centered at v0 and extending s units in both directions. All the sparkles are in fact little “plus signs”.

And here’s what we have up to this point:

[Image: the scene with the sparkles added]

… and the code for createScene.js

Adding lights and colors

The final step to our small journey with Blurry is to change the color of our lines to match the colors of the finished scene.

Before we do so, I’ll give a very simplistic explanation of the algebraic operation called “dot product”. If we plot two unit vectors in 3D space, we can measure how “similar” the directions they’re pointing in are.

Two parallel unit vectors will have a dot product of 1, while orthogonal unit vectors will instead have a dot product of 0. Opposite unit vectors will result in a dot product of -1.

Take this picture as a reference for the value of the dot product depending on the two input unit vectors:

[Figure: dot product values for different pairs of unit vectors]
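
In code, with Three.js vectors, those three cases look like this:

const up    = new THREE.Vector3(0, 1, 0);
const right = new THREE.Vector3(1, 0, 0);
const down  = new THREE.Vector3(0, -1, 0);

console.log(up.dot(up));    // 1  -> parallel
console.log(up.dot(right)); // 0  -> orthogonal
console.log(up.dot(down));  // -1 -> opposite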

We can use this operation to calculate “how close” two directions are to each other, and we’ll use it to fake diffuse lighting and create the effect that two light sources are lighting up the scene.

Here’s a drawing which will hopefully make it easier to understand what we’ll do:

[Figure: a point on the sphere with its normal in red, the incoming light direction in violet, and its negation in green]

The red and white dot on the surface of the sphere has the red unit vector direction associated with it. Now let’s imagine that the violet vectors represent light emitted from a directional light source, and the green vector is the opposite vector of the violet vector (in algebraic terms the green vector is the negation of the violet vector). If we take the dot product between the red and the green vector, we’ll get an estimate of how much the two vectors point to the same direction. The bigger the value is, the bigger the amount of light received at that point will be. The intuitive reasoning behind this process is essentially to imagine each of the points in our lines as if they were very small planes. If these little planes are facing toward the light source, they’ll absorb and reflect more light from it.

Remember though that the dot operation can also return negative values. We’ll handle that by clamping the result, making sure the value we use never drops below a small minimum.

Let’s now turn what we’ve just described into code and define two new global variables just before the definition of createScene():

let lightDir0 = vec3(1, 1, 0.2).normalize();
let lightDir1 = vec3(-1, 1, 0.2).normalize();

You can think about both variables as two green vectors in the picture above, pointing to two different directional light sources.

We’ll also create a normal1 variable, which will serve as our “red vector” from the picture above, and calculate the dot products between normal1 and the two light directions we just added. Each light direction will have a color associated with it. Once the dot products tell us how much light is reflected from each direction, we’ll simply sum the two colors together (adding the RGB triplets) and use the result as the new color of the line.

Let’s finally append a new snippet to the end of computeWeb() which will change the color of the lines we computed in the previous steps:

function computeWeb() {
    ...

    // recolor edges
    for(line of lines) {
        let v1 = line.v1;
        
        // these will be used as the "red vectors" of the previous example
        let normal1 = v1.clone().normalize();
        
        // let's calculate how much light normal1
        // will get from the "lightDir0" light direction (the white light)
        // we need Math.max( ... , 0.1) to make sure the dot product doesn't get lower than
        // 0.1, this will ensure each point is at least partially lit by a light source and
        // doesn't end up being completely black
        let diffuse0 = Math.max(lightDir0.dot(normal1), 0.1);
        // let's calculate how much light normal1
        // will get from the "lightDir1" light direction (the reddish light)
        let diffuse1 = Math.max(lightDir1.dot(normal1), 0.1);
        
        let firstColor = [diffuse0, diffuse0, diffuse0];
        let secondColor = [2 * diffuse1, 0.2 * diffuse1, 0];
        
        // the two colors represent how much light is received from both light directions,
        // so we sum them together to create the effect that our scene is lit by two light sources
        let r1 = firstColor[0] + secondColor[0];
        let g1 = firstColor[1] + secondColor[1];
        let b1 = firstColor[2] + secondColor[2];
        
        let r2 = firstColor[0] + secondColor[0];
        let g2 = firstColor[1] + secondColor[1];
        let b2 = firstColor[2] + secondColor[2];
        
        line.c1 = vec3(r1, g1, b1);
        line.c2 = vec3(r2, g2, b2);
    }
}

Keep in mind that what we’re doing is a very, very simple way to recreate diffuse lighting, and it’s incorrect for several reasons. Most obviously, we only consider the first vertex of each line and assign its calculated light contribution to both vertices, even though the second vertex might be far away from the first, and would therefore have a different normal vector and a different light contribution. But we’ll live with this simplification for the purpose of this article.
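
For completeness, here is a sketch of what the per-vertex variant could look like (same light directions and color weights as above; this is a hypothetical refinement, not what the demo ships):

// hypothetical per-vertex variant: each endpoint gets its own
// normal and therefore its own light contribution
for (const line of lines) {
    for (const [vertex, colorKey] of [[line.v1, 'c1'], [line.v2, 'c2']]) {
        const normal = vertex.clone().normalize();
        const diffuse0 = Math.max(lightDir0.dot(normal), 0.1);
        const diffuse1 = Math.max(lightDir1.dot(normal), 0.1);
        // white light + reddish light, same weights as before
        line[colorKey] = vec3(
            diffuse0 + 2 * diffuse1,
            diffuse0 + 0.2 * diffuse1,
            diffuse0
        );
    }
}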

Let’s also update the lines created with computeSparkles() to reflect these changes as well:

function computeSparkles() {
    for(let i = 0; i < 5500; i++) {
        let v0 = vec3(nrand(), nrand(), nrand()).normalize().multiplyScalar(18 + rand() * 65);

        let c = 1.325 * (0.3 + rand() * 0.7);
        let s = 0.125;

        if(rand() > 0.9) {
            c *= 4;
        }

        let normal1 = v0.clone().normalize();

        let diffuse0 = Math.max(lightDir0.dot(normal1), 0.1);
        let diffuse1 = Math.max(lightDir1.dot(normal1), 0.1);

        let r = diffuse0 + 2 * diffuse1;
        let g = diffuse0 + 0.2 * diffuse1;
        let b = diffuse0;

        lines.push(new Line({
            v1: vec3(v0.x - s, v0.y, v0.z),
            v2: vec3(v0.x + s, v0.y, v0.z),

            c1: vec3(r * c, g * c, b * c),
            c2: vec3(r * c, g * c, b * c),
        }));

        lines.push(new Line({
            v1: vec3(v0.x, v0.y - s, v0.z),
            v2: vec3(v0.x, v0.y + s, v0.z),
    
            c1: vec3(r * c, g * c, b * c),
            c2: vec3(r * c, g * c, b * c),
        }));
    }
}

And that’s it!

The scene you’ll end up seeing will be very similar to the one we wanted to recreate at the beginning of the article. The only difference will be that I’m calculating the light contribution for both computeWeb() and computeSparkles() as:

let diffuse0 = Math.max(lightDir0.dot(normal1) * 3, 0.15);
let diffuse1 = Math.max(lightDir1.dot(normal1) * 2, 0.2 );

Check the full source here or take a look at the live demo.

Final words

If you made it this far, you’ll now know how this very simple library works, and hopefully you’ve learned a few tricks for your future generative art projects!

This little project only used lines as primitives, but you can also use textured quads, motion blur, and a custom shader pass that I’ve used recently to recreate volumetric light shafts. Look through the examples in libs/scenes/ if you’re curious to see those features in action.

If you have any questions about the library, or if you’d like to suggest a feature or change, feel free to open an issue in the GitHub repo. I’d love to hear your suggestions!

Simulating Depth of Field with Particles using the Blurry Library was written by Domenico Bruzzese and published on Codrops.