Qi Theme

Qi Theme is a free WordPress theme created by Qode Interactive – an award-winning studio. This theme perfectly combines top speed and performance with a beautiful design.

It comes with 100 demos, allowing you to easily set up any type of website, whether it’s an online store, an artist’s portfolio, or a simple blog. If you can think it up, this theme will help you build it – it will even grant you free access to premium stock photos.

Qi Theme is fully supported by video tutorials as well as an extensive knowledge base, so you’ll always have a place to look for help.

Texture Ripples and Video Zoom Effect with Three.js and GLSL

In this ALL YOUR HTML coding session we’ll be replicating the ripples and video zoom effect from The Avener by TOOSOON studio using Three.js and GLSL coding.

This coding session was streamed live on April 25, 2021.

Check out the live demo.

Support: https://www.patreon.com/allyourhtml

Setup: https://gist.github.com/akella/a19954…

The post Texture Ripples and Video Zoom Effect with Three.js and GLSL appeared first on Codrops.

Spark SEO

Spark SEO offers search engine optimisation services to small and medium local businesses. We are a remote team of absolute subject matter experts, and we are ready to help your local business grow.

The post Spark SEO appeared first on WeLoveWP.

Playing with Texture Projection in Three.js

Texture projection is a way of mapping a texture onto a 3D object so that it looks like it was projected from a single point. Think of the Batman symbol projected onto the clouds: the clouds are our object and the Batman symbol is our texture. It’s used in games, visual effects, and other parts of the creative world. Here is a talk by Yi-Wen Lin which contains some other cool examples.

Looks neat, huh? Let’s achieve this in Three.js!

Minimum viable example

First, let’s set up our scene. The setup code is the same in every Three.js project, so I won’t go into details here. You can go to the official guide and get familiar with it if you haven’t done that before. I personally use some utils from threejs-modern-app, so I don’t need to worry about the boilerplate code.

First, we need a camera to project the texture from.

const camera = new THREE.PerspectiveCamera(45, 1, 0.01, 3)
camera.position.set(-1, 1.2, 1.5)
camera.lookAt(0, 0, 0)

Then, we need our object on which we will project the texture. To do projection mapping, we will write some custom shader code, so let’s create a new ShaderMaterial:

// create the mesh with the projected material
const geometry = new THREE.BoxGeometry(1, 1, 1)
const material = new THREE.ShaderMaterial({
  uniforms: {
    texture: { value: assets.get(textureKey) },
  },
  vertexShader: '',
  fragmentShader: '',
})
const box = new THREE.Mesh(geometry, material)

However, since we may need to use our projected material multiple times, we can put it in a component by itself and use it like this:

class ProjectedMaterial extends THREE.ShaderMaterial {
  constructor({ camera, texture }) {
    // ...
  }
}

const material = new ProjectedMaterial({
  camera,
  texture: assets.get(textureKey),
})

Let’s write some shaders!

In the shader code we’ll basically sample the texture as if it was projected from the camera. Unfortunately, this involves some matrix multiplication. But don’t be scared! I’ll explain it in a simple, easy to understand way. If you want to dive deeper into the subject, here is a really good article about it.

In the vertex shader, we have to treat each vertex as if it’s being viewed from the projection camera, so we just use the projection camera’s projectionMatrix and viewMatrix instead of the ones from the scene camera. We pass this transformed position into the fragment shader using a varying variable.

vTexCoords = projectionMatrixCamera * viewMatrixCamera * modelMatrix * vec4(position, 1.0);
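
Putting it together, a minimal version of the vertex shader might look like this (a sketch: the projection camera’s matrices are assumed to be passed in as the uniforms named in the line above, while modelMatrix, projectionMatrix, modelViewMatrix and position are built-ins provided by Three.js):

uniform mat4 viewMatrixCamera;
uniform mat4 projectionMatrixCamera;

varying vec4 vTexCoords;

void main() {
  // position of the vertex as seen from the projection camera
  vTexCoords = projectionMatrixCamera * viewMatrixCamera * modelMatrix * vec4(position, 1.0);

  // regular transform for the scene camera
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}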

In the fragment shader, we have to transform this position from clip space into normalized device coordinates. We do this by dividing the vector by its .w component (the perspective divide). The GLSL built-in function texture2DProj (or the newer textureProj) also does this internally.

In the same line, we also transform from clip space range, which is [-1, 1], to the uv lookup range, which is [0, 1]. We use this variable to later sample from the texture.

vec2 uv = (vTexCoords.xy / vTexCoords.w) * 0.5 + 0.5;
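
A matching fragment shader can then be sketched like this (a WebGL1-style sketch, using the texture uniform from the material above):

uniform sampler2D texture;

varying vec4 vTexCoords;

void main() {
  // perspective divide + remap from [-1, 1] to [0, 1]
  vec2 uv = (vTexCoords.xy / vTexCoords.w) * 0.5 + 0.5;
  gl_FragColor = texture2D(texture, uv);
}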

And here’s the result:

Notice that we wrote some code to project the texture only onto the faces of the cube that are facing the camera. By default, every face gets the texture projected onto it, so we check whether a face is actually facing the camera by looking at the dot product of its normal and the camera direction. This technique is really common in lighting; here is an article if you want to read more about this topic.

// this makes sure we don't render the texture also on the back of the object
vec3 projectorDirection = normalize(projPosition - vWorldPosition.xyz);
float dotProduct = dot(vNormal, projectorDirection);
if (dotProduct < 0.0) {
  outColor = vec4(color, 1.0);
}

First part down, we now want to make it look like the texture is actually stuck on the object.

We do this by saving the object’s transform at the beginning, and then using that saved transform, instead of the updated one, in the projection calculations. This way, if the object moves afterwards, the projection doesn’t change.

We can store the object’s initial model matrix in the uniform savedModelMatrix, so our calculations become:

vTexCoords = projectionMatrixCamera * viewMatrixCamera * savedModelMatrix * vec4(position, 1.0);

We can expose a project() function which sets the savedModelMatrix with the object’s current modelMatrix.

export function project(mesh) {
  // make sure the matrix is updated
  mesh.updateMatrixWorld()

  // we save the object model matrix so it's projected relative
  // to that position, like a snapshot
  mesh.material.uniforms.savedModelMatrix.value.copy(mesh.matrixWorld)
}
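
As a quick usage sketch: call project() while the mesh is in the desired pose, then move it; the projection stays put (box and its material are the ones created earlier):

project(box)

// moving the mesh afterwards doesn't change the projection
box.rotation.y = Math.PI / 4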

And here is our final result:

That’s it! Now the cube looks like it has a texture slapped onto it! This can scale up and work with any kind of 3D model, so let’s make a more interesting example.

More appealing example

For the previous example we created a new camera to project from, but what if we used the same camera that renders the scene to do the projection? This way we would see exactly the 2D image! That’s because the point of projection coincides with the view point.

Also, let’s try projecting onto multiple objects:

That looks interesting! However, as you can see from the example, the image looks kind of warped. This is because the texture is stretched to fill the camera frustum. But what if we wanted to retain the image’s original proportions and dimensions?

Also, we didn’t take lighting into consideration at all. There needs to be some code in the fragment shader that determines how the surface is lit by the lights we put in the scene.

Furthermore, what if we wanted to project onto a much bigger number of objects? The performance would quickly drop. That’s where GPU instancing comes to the rescue! Instancing moves the heavy work onto the GPU, and Three.js recently implemented an easy-to-use API for it. The only requirement is that all of the instanced objects share the same geometry and material. Luckily, this is our case! All of the objects have the same geometry and material; the only difference is the savedModelMatrix, since each object had a different position when it was projected on. But we can pass that to each instance like in this Three.js example.

Things start to get complicated, but don’t worry! I already coded this stuff and put it in a library, so it’s easier to use and you don’t have to rewrite the same things each time. It’s called three-projected-material; go check it out if you’re interested in how I overcame the remaining challenges.

We’re gonna use the library from this point on.

Useful example

Now that we can project onto and animate a lot of objects, let’s try making something actually useful out of it.

For example, let’s try integrating this into a slideshow, where the images are projected onto a ton of 3D objects, and then the objects are animated in an interesting way.

For the first example, the inspiration comes from Refik Anadol. He does some pretty rad stuff. However, we can’t do full-fledged simulations with velocities and forces like he does; we need control over the objects’ movement, and we need them to arrive in the right place at the right time.

We achieve this by putting the objects on trails: we define a path each object has to follow, and we animate the object along that path. Here is a Stack Overflow answer that explains how to do it.
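
The core idea can be sketched like this (points is assumed to be an array of THREE.Vector3 waypoints, and duration is a hypothetical animation length in seconds):

// build a smooth curve through the trail's waypoints
const curve = new THREE.CatmullRomCurve3(points)

function update(elapsedSeconds) {
  // normalized progress along the path, clamped to [0, 1]
  const t = Math.min(elapsedSeconds / duration, 1)
  mesh.position.copy(curve.getPointAt(t))
}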

Tip: you can see the trails in a debug view by putting ?debug at the end of the URL of each demo.

To do the projection, we

  1. Move the elements to the middle point
  2. Do the texture projection calling project()
  3. Put the elements back to the start

This happens synchronously, so the user won’t see anything.
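
In code, that dance might look something like this (center and startPoint are hypothetical Vector3s: the covering position and the first point of the trail):

// 1. move the element to the middle point
mesh.position.copy(center)

// 2. take the projection snapshot (project() updates the world matrix itself)
project(mesh)

// 3. put the element back at the start of its trail
mesh.position.copy(startPoint)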

Now we have the freedom to model these paths any way we want!

But first, we have to make sure that at the middle point the elements cover the image’s area properly. To do this I used the Poisson disk sampling algorithm, which distributes points on a surface more evenly than positioning them randomly.

this.points = poissonSampling([this.width, this.height], 7.73, 9.66) // inner radius and outer radius

// here is what this.points looks like,
// the z component is 0 for every one of them
// [
//   [
//     2.4135735314978937, --> x
//     0.18438944023363374 --> y
//   ],
//   [
//     2.4783704056100464,
//     0.24572635574719284
//   ],
//   ...

Now let’s take a look at how the paths are generated in the first demo. In this demo, there is heavy use of Perlin noise (or rather its open source counterpart, OpenSimplex noise). Notice also the mapRange() function (map() in Processing), which maps a number from one interval to another. Another library that does this is d3-scale, with its d3.scaleLinear(). Some easing functions are also used.
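
For reference, a function like mapRange can be sketched in a few lines (a hypothetical implementation matching the description above):

// linearly map value from the range [inMin, inMax] to [outMin, outMax]
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin)
}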

const segments = 51 // must be odd so we have the middle frame
const halfIndex = (segments - 1) / 2
for (let i = 0; i < segments; i++) {
  const offsetX = mapRange(i, 0, segments - 1, startX, endX)

  const noiseAmount = mapRangeTriple(i, 0, halfIndex, segments - 1, 1, 0, 1)
  const frequency = 0.25
  const noiseAmplitude = 0.6
  const noiseY = noise(offsetX * frequency) * noiseAmplitude * eases.quartOut(noiseAmount)
  const scaleY = mapRange(eases.quartIn(1 - noiseAmount), 0, 1, 0.2, 1)

  const offsetZ = mapRangeTriple(i, 0, halfIndex, segments - 1, startZ, 0, endZ)

  // offsetX goes from left to right
  // scaleY shrinks the y before and after the center
  // noiseY is some perlin noise on the y axis
  // offsetZ makes them enter from behind a little bit
  points.push(new THREE.Vector3(x + offsetX, y * scaleY + noiseY, z + offsetZ))
}

Another thing we can work on is the delay with which each element arrives. We also use Perlin noise here, which makes it look like they arrive in “clusters”.

const frequency = 0.5
const delay = (noise(x * frequency, y * frequency) * 0.5 + 0.5) * delayFactor

We also use Perlin noise in the waving effect, which modifies each point of the curve to give it a “flag waving” look.

const { frequency, speed, amplitude } = this.webgl.controls.turbulence
const z = noise(x * frequency - time * speed, y * frequency) * amplitude
point.z = targetPoint.z + z

For the mouse interaction, we check if a point of the path is closer to the mouse than a certain radius. If so, we calculate a vector from the mouse point to the path point, and we move the path point a little bit along that vector’s direction. We use the lerp() function for this, which interpolates between two values by a specified percentage; for example, 0.2 means 20% of the way.

// displace the curve points
if (point.distanceTo(this.mousePoint) < displacement) {
  const direction = point.clone().sub(this.mousePoint)
  const displacementAmount = displacement - direction.length()
  direction.setLength(displacementAmount)
  direction.add(point)

  point.lerp(direction, 0.2) // magic number
}

// and move them back to their original position
if (point.distanceTo(targetPoint) > 0.01) {
  point.lerp(targetPoint, 0.27) // magic number
}

The remaining code handles the slideshow-style animation; go check out the source code if you’re interested!

In the other two demos I used some different functions to shape the paths the elements move on, but overall the code is pretty similar.

Final words

I hope this article was easy to understand and simple enough to give you some insight into texture projection techniques. Make sure to check out the code on GitHub and download it! I made sure to write it in an easy-to-understand manner, with plenty of comments.

Let me know if something is still unclear and feel free to reach out to me on Twitter @marco_fugaro!

Hope this was fun to read and that you learned something along the way! Cheers!

Playing with Texture Projection in Three.js was written by Marco Fugaro and published on Codrops.

SVG Filter Effects: Creating Texture with <feTurbulence>

feTurbulence is one of the most powerful SVG filter primitives. The specification defines this primitive as follows:

This filter primitive creates an image using the Perlin turbulence function. It allows the synthesis of artificial textures like clouds or marble. […]
The resulting image will fill the entire filter primitive subregion for this filter primitive.

In other words, the feTurbulence filter primitive generates and renders Perlin noise. This kind of noise is useful for simulating several natural phenomena like clouds, fire and smoke, and for generating complex textures like marble or granite. And like feFlood, feTurbulence fills the filter region with new content.

In this article, we’re going to go over how we can create noise with feTurbulence and how that noise can be used to distort images and text, much like we did with the feDisplacementMap texture in the previous article. Then, we’re going to see how the generated noise can be used in combination with SVG lighting effects to create a simple rough paper texture.

But first, let’s get an overview of feTurbulence and its attributes and see how each one affects the generated noise.

Creating Turbulence and Fractal Noise with feTurbulence

When I set out to write this series, I made the decision to avoid the gnarly technical details behind filter primitives as much as possible. This is why we won’t get into the technical details behind the functions used to generate Perlin noise.

After reading up on the function underlying noise generation, I found that it didn’t help me at all when I put the primitive into experimentation. After all, we are working with a random noise generator here. So, most of the time, you’ll find that generating a texture is a matter of experimenting and tweaking until you get the desired result. With time, it gets a little easier to predict what a texture might look like.

I’ve found that playing with feTurbulence and tweaking its attributes visually was the best way to learn about them and has helped me understand what each of the attributes does. So, we will be taking a visual approach to understanding feTurbulence, with a few interactive demos to help.

Now, feTurbulence generates noise using the Perlin Turbulence function. It has 5 main attributes that control the function and therefore the visual result of that function:

  • type
  • baseFrequency
  • numOctaves
  • seed
  • stitchTiles

We’ll go over how each of these attributes affects the visual result without going into the technical details of the function. You’ll find that, most of the time, you’ll only need to worry about three of these attributes: type, baseFrequency and numOctaves.

baseFrequency

In order to generate noise, only the baseFrequency attribute is required. The baseFrequency affects the size (or scale) and the grain of the generated noise.
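
A minimal sketch of a noise-filled rectangle with nothing but a baseFrequency set (the id and sizes here are arbitrary):

<svg viewBox="0 0 250 250" width="250" height="250">
    <filter id="noise">
        <feTurbulence baseFrequency="0.05" />
    </filter>
    <rect width="250" height="250" filter="url(#noise)" />
</svg>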

baseFrequency’s effect is best understood when it is visualized and animated. That’s why I created the following live demo. Using the slider, you can change the value of the base frequency used and see how it affects the generated noise in real-time. You’ll notice that as you increase or decrease the value of the baseFrequency attribute, the generated pattern remains intact as it becomes smaller or larger, respectively, and looks like it’s zooming in and out of its origin at the top left corner.

See the Pen feTurbluence: baseFrequency by Sara Soueidan (@SaraSoueidan) on CodePen.

Lower baseFrequency values (such as 0.001) generate larger patterns, while higher values (0.5+) produce smaller patterns. Values start from 0 (no frequency means no pattern) and go up; negative values are not allowed. As Michael Mullany mentions, “values in the 0.02 to 0.2 range are useful starting points for most textures.”

Note that the generated noise does not have a background color. This means that if you remove the white background color on the SVG, you’ll be able to see the page body’s dark background through the noise.

The baseFrequency attribute also accepts two values. When you provide two values, the first one will be used for the base frequency on the x-axis and the second one will correspond to the y-axis. By providing two different values, you can generate vertical or horizontal noise that can be used to achieve some fantastic effects, as we’re going to see in a following section.
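
For example, something like the following sketch would generate horizontally-stretched noise, since the x frequency is much smaller than the y frequency (the values here are illustrative):

<feTurbulence baseFrequency="0.01 0.1" />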

Play with the values of the baseFrequency again in this live demo and notice how it changes along the X and Y axes as you give it different values. The demo starts with a nice horizontal noise. The 0.01 x-baseFrequency value is relatively small, which makes the horizontal pattern larger (like it’s stretched out). If you decrease it further (to 0.001, for example), you’ll see the horizontal pattern become more like lines. Try it.

See the Pen feTurbluence: x & y baseFrequency by Sara Soueidan (@SaraSoueidan) on CodePen.

type

As its name suggests, the type attribute is used to specify the type of noise generated by feTurbulence. There are two types available:

  • turbulence, which is the default value, and
  • fractalNoise.

fractalNoise generates a more cloudy and smooth pattern, making it a suitable base for gas-like textures such as clouds. turbulence generates more lines that simulate ripples, making it suitable as a base for liquid textures.

turbulence type noise on the left, and fractalNoise type on the right.

Change the value of the type attribute in the following demo to see how the generated pattern changes:

See the Pen feTurbluence: stitchTiles by Sara Soueidan (@SaraSoueidan) on CodePen.

numOctaves

numOctaves is short for the “number of octaves”, which represents the level of detail in the noise.

In music, an octave is the difference in pitch between two notes where one has twice the frequency of the other. So the higher the octaves, the higher the frequency. In feTurbulence, the higher the number of octaves, the more detail you can see in the noise it generates. By default, the generated noise has one octave, which means that the default value for the numOctaves attribute is 1.

Drag the slider in the following demo to see the effect of increasing the number of octaves on the generated texture:

See the Pen feTurbluence: numOctaves by Sara Soueidan (@SaraSoueidan) on CodePen.

You’ll notice that starting from numOctaves="5" the effect of adding more octaves becomes practically unnoticeable.

seed

The seed, as defined in the specification, is “the starting number for the pseudo random number generator”. In other words, it provides a different starting number for the random function used to generate our random noise.

Visually, you’ll see that it affects where and how the “ripple lines” are generated. It is also better understood when you see how it affects the noise generated in two adjacent rectangles.

When the same seed is used for the two adjacent rectangles, the function used to generate the noise across the two rectangles is continuous, and this will be reflected visually by the continuity of the “ripple lines” across the edges of the two rectangles.
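
The setup for such a comparison can be roughly sketched like this, with the same seed used in two filters applied to two adjacent rectangles (ids and sizes are arbitrary):

<filter id="noise-left">
    <feTurbulence baseFrequency="0.05" seed="3" />
</filter>
<filter id="noise-right">
    <feTurbulence baseFrequency="0.05" seed="3" />
</filter>

<rect x="0" y="0" width="100" height="100" filter="url(#noise-left)" />
<rect x="100" y="0" width="100" height="100" filter="url(#noise-right)" />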

The continuity of the function generating the random noise can be seen along the edges of the two rectangles using the same seed value.

Play with the value of the seed attribute in the following demo, see how it affects the generated noise, and notice how the noise is continuous across the edges of the two rectangles that are using the same seed value.

See the Pen feTurbluence: seed by Sara Soueidan (@SaraSoueidan) on CodePen.

stitchTiles

stitchTiles can be used to create a stitching effect between “tiles” of noise. The effect of this attribute is very similar to that of the seed, meaning that it is most evident when you have two adjacent areas (or “tiles”) of noise.

As the specification mentions, sometimes the result of the noise generation will show clear discontinuities at the tile borders. You can tell the browser to try to smooth the results out so that the two tiles appear to be “stitched” together. (I really like how the attribute and its effect are compared to stitching.)

By default, no attempt is made to achieve smooth transitions at the border of tiles which contain a turbulence function because the default value for stitchTiles is noStitch. If you want to create that stitching effect, you can change the value to stitch.
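
In markup, that’s just a matter of flipping the attribute on the turbulence primitive, sketched as:

<feTurbulence baseFrequency="0.05" stitchTiles="stitch" />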

In order to compare the result of stitchTiles to that of seed, I have applied the same seed value to the noise generated in the two rectangles in the following demo. You can already see that the noise appears to be continuous between the two. Switch the stitchTiles option “on” (by changing its value to stitch) to see how the noise smooths out across the edges.

See the Pen feTurbluence: stitchTiles by Sara Soueidan (@SaraSoueidan) on CodePen.

As I mentioned earlier, the only three attributes you’ll most likely be using are type, baseFrequency and numOctaves. So we’ll be focusing on these three moving forward.

Using feTurbulence-Generated Noise to Distort Content

This is where the fun starts. And this is where we start putting the generated noise to use. After all, just filling the filter region with the noise has no use in and of itself.

In the previous article we used feDisplacementMap to conform a piece of text to the texture in an external image. And we mentioned that feDisplacementMap uses the color information in one image to distort another. The image that is used as a displacement map can be any image. This means that it can be an external image or an image generated within SVG, such as a gradient image or a pattern… or a noise texture.

In other words, the noise we generate with feTurbulence can also be used to distort content when it is combined with feDisplacementMap. In the following demo, we use the output of feTurbulence to displace the image with feDisplacementMap. I’m using a horizontal noise pattern by providing two different values for the baseFrequency attribute, similar to what we did earlier.

<svg viewBox="0 0 180 100">
    <filter id="noise" x="0%" y="0%" width="100%" height="100%">
        <feTurbulence baseFrequency="0.01 0.4" result="NOISE" numOctaves="2" />
        <feDisplacementMap in="SourceGraphic" in2="NOISE" scale="20" xChannelSelector="R" yChannelSelector="R"></feDisplacementMap>
    </filter>

    <image xlink:href="..." x="0" y="0" width="100%" height="100%" filter="url(#noise)"></image>
</svg>

See the Pen feTurbluence as a displacementMap by Sara Soueidan (@SaraSoueidan) on CodePen.

The intensity by which the turbulence distorts the image is specified in the scale attribute on feDisplacementMap. I’ve used a large value so that the effect looks more dramatic.

Now, going from this simple application, we can open a lot more possibilities when we combine the facts that:

  • SVG filters can be applied to HTML content, and
  • the values of baseFrequency are numbers and can thus be animated.

A little less than a couple of years ago, Adrien Denat wrote an article right here on Codrops in which he experimented with a similar effect applied to HTML buttons. We’re going to break down and recreate the following button click effect:

We’re going to start by creating the noise texture for the final state, the state where the button is fully distorted. Once we’ve got that, we’re going to animate the button from its initial state to that distorted state and back on click.

Our aim here is to distort the button horizontally, so we will be using and tweaking the horizontal noise from the previous demo a little bit. Its distortion effect on the image is a little too strong, so I’m going to dial it down first by changing the baseFrequency value from 0.01 0.4 to 0 0.2:

<filter id='noise' x='0%' y='0%' width='100%' height='100%'>
        <feTurbulence type="turbulence" baseFrequency="0 0.2" result="NOISE" numOctaves="2" />
        <feDisplacementMap in="SourceGraphic" in2="NOISE" scale="30" xChannelSelector="R" yChannelSelector="R"></feDisplacementMap>
</filter>

The effect gets a little better, but the button is still distorted more than we’d like it to be.

We want the distortion to be less dramatic. A useful tip to keep in mind is that we can dial the effect of the noise down instantly by switching the type of noise from the default turbulence to the smoother fractalNoise. As soon as we do that, we can see that the distortion effect has also been “smoothed” down:
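
The filter for this smoothed, distorted state would then look like this (the same filter as above, with only the type changed):

<filter id='noise' x='0%' y='0%' width='100%' height='100%'>
        <feTurbulence type="fractalNoise" baseFrequency="0 0.2" result="NOISE" numOctaves="2" />
        <feDisplacementMap in="SourceGraphic" in2="NOISE" scale="30" xChannelSelector="R" yChannelSelector="R"></feDisplacementMap>
</filter>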

This looks much better.

Now that we’ve got a distortion effect we’re happy with, we will start our demo with a filter that, initially, does practically nothing:

<filter id='noise' x='0%' y='0%' width='100%' height='100%'>
        <feTurbulence type="fractalNoise" baseFrequency="0 0.000001" result="NOISE" numOctaves="2" />
        <feDisplacementMap in="SourceGraphic" in2="NOISE" scale="30" xChannelSelector="R" yChannelSelector="R"></feDisplacementMap>
</filter>

We’re going to apply that filter to our button in CSS:

button {
    -webkit-filter: url(#noise);
            filter: url(#noise);
}

At this point, the button still looks undistorted.

Next, we’re going to use (a slightly modified version of) Adrien’s code which uses GSAP to animate the value inside feTurbulence’s baseFrequency to 0 0.2 and back on click:

var bt = document.querySelectorAll('.button')[0];
var turbVal = { val: 0.000001 };
var turb = document.querySelectorAll('#noise feTurbulence')[0];

var btTl = new TimelineLite({
  paused: true,
  onUpdate: function() {
    turb.setAttribute('baseFrequency', '0 ' + turbVal.val);
  }
});

btTl.to(turbVal, 0.2, { val: 0.2 })
    .to(turbVal, 0.2, { val: 0.000001 });

bt.addEventListener('click', function() {
  btTl.restart();
});

And that’s all there is to it, really. You can play with the live demo here:

See the Pen feTurbluence on BUTTONs by Sara Soueidan (@SaraSoueidan) on CodePen.

The demo works in Chrome and Firefox at the time of writing of this article. It is buggy in the current version of Safari, but the issue is resolved in the next version, as the demo works perfectly in Safari Tech Preview. It doesn’t work in MS Edge; however, the button there simply isn’t distorted at all, which means that the lack of support does not affect the usability of the button. This is great because you can still use this effect as an enhancement: if the effect isn’t supported, the button will simply look and behave like a normal, effect-less button.

Adrien’s article includes quite a few more button distortion effects that use the same principles we’ve just covered, and they are definitely worth checking out and breaking down. There are one or two nice tricks to learn from each.

Squiggly Text using feTurbulence

One of my favorite examples of feTurbulence in action is Lucas Bebber’s Squiggly Text effect. In his demo, Lucas is using multiple feTurbulence functions:

<svg xmlns="http://www.w3.org/2000/svg" version="1.1">
    <defs>
        <filter id="squiggly-0">
            <feTurbulence id="turbulence" baseFrequency="0.02" numOctaves="3" result="noise" seed="0" />
            <feDisplacementMap id="displacement" in="SourceGraphic" in2="noise" scale="6" />
        </filter>
        <filter id="squiggly-1">
            <feTurbulence id="turbulence" baseFrequency="0.02" numOctaves="3" result="noise" seed="1" />
            <feDisplacementMap in="SourceGraphic" in2="noise" scale="8" />
        </filter>

        <filter id="squiggly-2">
            <feTurbulence id="turbulence" baseFrequency="0.02" numOctaves="3" result="noise" seed="2" />
            <feDisplacementMap in="SourceGraphic" in2="noise" scale="6" />
        </filter>
        <filter id="squiggly-3">
            <feTurbulence id="turbulence" baseFrequency="0.02" numOctaves="3" result="noise" seed="3" />
            <feDisplacementMap in="SourceGraphic" in2="noise" scale="8" />
        </filter>

        <filter id="squiggly-4">
            <feTurbulence id="turbulence" baseFrequency="0.02" numOctaves="3" result="noise" seed="4" />
            <feDisplacementMap in="SourceGraphic" in2="noise" scale="6" />
        </filter>
    </defs>
</svg>

..and applying them via CSS to a piece of HTML text using CSS animations, animating from one to another:

@keyframes squiggly-anim {
  0% {
    -webkit-filter: url("#squiggly-0");
            filter: url("#squiggly-0");
  }
  25% {
    -webkit-filter: url("#squiggly-1");
            filter: url("#squiggly-1");
  }
  50% {
    -webkit-filter: url("#squiggly-2");
            filter: url("#squiggly-2");
  }
  75% {
    -webkit-filter: url("#squiggly-3");
            filter: url("#squiggly-3");
  }
  100% {
    -webkit-filter: url("#squiggly-4");
            filter: url("#squiggly-4");
  }
}

..thus creating the squiggly effect.

Once again, the text used is real text, which means that it is searchable, selectable, accessible and editable (using the contenteditable attribute). Check out the live demo, but beware that it is resource-intensive, so you may want to avoid opening the CodePen on mobile.

An animated screenshot of Lucas’s squiggly text demo.


So, some useful takeaways from this section are:

  • The noise generated using feTurbulence can be used to distort both SVG and HTML content.
  • The value of baseFrequency can be animated.
  • You can dial the amount of distortion down by tweaking the values in baseFrequency and by smoothing the noise out with the fractalNoise type.
  • Even though you can animate SVG filters in general, it’s usually recommended to not overdo it because they can be quite resource-intensive. Try to keep the animations limited to smaller areas; the larger the animated area, the more resource-consuming it will be.

feTurbulence is rarely—if ever—useful when used alone. It is pretty much always used by (an)other filter primitive(s) to achieve particular effects. In this section, we used it as a displacement map in feDisplacementMap. Let’s see what more we can do with it.

Simulating Natural Texture with feTurbulence

Another useful way feTurbulence-generated noise can be used is to simulate natural texture. If you’ve ever used the noise generation plugins in After Effects, you may have already come across this functionality and examples of doing so.

Examples of textures created in After Effects using the Fractal Noise plug-in. (Source)

feTurbulence generates noise (random values) across each of the R, G, B, and A components. You can tweak the values for each of these components to get different variations of the noise. In order to simulate a texture, we usually need to do exactly that: tweak the R/G/B/A components (canceling out components, saturating others, etc.) to get our desired result. Other times, all we need to do is shed some light on it. Literally.

In this section, we’re going to break down a rough paper texture effect created by Michael Mullany. In order to create this texture, we will need to shine a light on a noise texture generated by feTurbulence using SVG’s lighting sources.

Lighting Sources in SVG

SVG conveniently provides a few primitives that can be used to shine a light on objects or images.

There are two filter primitives that are used to specify the type of light you want:

  • feDiffuseLighting which indicates indirect light from an outside source, and is best used for sunlight effects, and
  • feSpecularLighting which specifies secondary light that bounces off reflective surfaces.

Both primitives shine a light on an object or image by using the alpha channel of that image as a bump map. Transparent values remain flat, while opaque values rise to form peaks that are illuminated more prominently.

In other words, a light source filter uses an input’s alpha channel to provide depth information: higher opacity areas are raised toward the viewer and lower opacity areas recede away from the viewer. This means that the alpha value of a pixel in the input is used as the height of that pixel in the z-dimension, and the filter uses that height to calculate a virtual surface, which will reflect a particular amount of light from the light source. (This is pretty powerful stuff!)

Both types of light accept an attribute called surfaceScale which is practically a z-index multiplier. If you increase this value, the “slopes” in the surface texture become steeper.

“Because feTurbulence generates an alpha channel full of noisy values from 0 to 1, it produces a nice variable Z terrain that creates highlights when we shine our light on it.” —Michael Mullany

After deciding on the type of light you need, you’ll want to choose a light source.

There are three kinds of light sources in SVG:

  1. feDistantLight: this represents a distant light source which is arbitrarily far away, and so is specified in terms of its angle from the target. This is the most appropriate way to represent sunlight.
  2. fePointLight: this represents a point light that emanates from a specific point that is represented as a three-dimensional x/y/z coordinate. This is similar to a light source inside a room or within a scene.
  3. feSpotLight: this represents a spotlight, which behaves much like a point light, but its beam can be narrowed to a cone, and the light can pivot to other targets.

Each of these three light sources comes with its own attributes that are used to customize the light it generates by specifying the location of the source in the 3D-space. The attributes are outside the scope of this article, but you can learn more about them in the specification.

To create and apply a lighting effect, you nest the light source inside the light type. So you start by choosing the type of light you want, then pick the source you want it to emanate from, and finally specify the color of your light. The lighting-color property defines the color of the light source for feDiffuseLighting and feSpecularLighting.

With the basics of lighting sources covered, we’ll now get to our example.

For the rough paper texture, we’ll be using sun-like light. This means that we will use a white diffuse lighting that emanates from a distant source. Translated to code, our light looks like this:

<feDiffuseLighting lighting-color="white" surfaceScale="2" in=".." result="..">
    <feDistantLight azimuth="45" elevation="60" />
</feDiffuseLighting>

The azimuth and elevation attributes determine the position of the source of light in 3D space. There’s an article by Rafael Pons that is absolutely fantastic at explaining these two concepts in a simple, easy-to-understand manner, along with beautiful and friendly illustrations to assist with his explanation. I highly recommend checking it out.

Now that we have a light set up, we want to generate our noise that we want to shine this light on. We’ll break the demo down into steps to learn how it’s made.

We gotta start somewhere, so we’ll start by generating a random, basic noise as a base for our texture:

<feTurbulence baseFrequency='0.04' result='noise' />

Our noise looks like this:

Next, we’ll shine our light onto it and then take it from there:

<feTurbulence baseFrequency='0.04' result='noise' />

<feDiffuseLighting in='noise' lighting-color='white' surfaceScale='2'>
      <feDistantLight azimuth='45' elevation='60' />
</feDiffuseLighting>

Shining the light on our noise gives us the following texture:

This isn’t the texture result we’re after just yet. The first thing we notice here is the presence of a lot of sharp lines in the texture. We want to get rid of these because a paper surface does not have sharp lines in it. We need to smooth these lines out. We can do that by changing the type of the generated noise to fractalNoise:

<feTurbulence type="fractalNoise" baseFrequency='0.04' result='noise' />

<feDiffuseLighting in='noise' lighting-color='white' surfaceScale='2'>
      <feDistantLight azimuth='45' elevation='60' />
</feDiffuseLighting>

This removes all those sharp lined edges from our texture:

We’re now one step closer to our rough paper texture.

The above texture isn’t rough enough yet; it lacks the tiny surface detail real paper has. Increasing the amount of tiny detail in it should make it look rougher. To do that, we will increase the value of numOctaves. We’ll find that around 5 is a great place to get the level of roughness we need:

<feTurbulence type="fractalNoise" baseFrequency='0.04' numOctaves="5" result='noise' />

<feDiffuseLighting in='noise' lighting-color='white' surfaceScale='2'>
      <feDistantLight azimuth='45' elevation='60' />
</feDiffuseLighting>

And our paper texture now looks like this:

Excellent!

You can play with the live demo here:

See the Pen Rough Paper Texture with SVG Filters by Sara Soueidan (@SaraSoueidan) on CodePen.

The demo works across all major browsers, including MS Edge.

If you want, you can tweak the effect a little further by playing with the source and distance of the light. For example, decreasing the elevation of the light source from 60 to 40 should increase the contrast between the small hills in the texture.
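
In code, only the light source line changes:

<feDistantLight azimuth='45' elevation='40' />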

I highly recommend playing with the values of the attributes of the light source and the noise and seeing how they impact the resulting texture.

Final Words

feTurbulence is one of SVG’s most interesting and powerful operations. Combined with other primitives and animated, it is capable of generating some really interesting and appealing effects, textures, and interactions.

I strongly believe that feTurbulence is one of those filters you learn best by experimenting with it and by breaking other people’s code down. I still find myself guessing what a texture will look like a lot of the time. And since there’s so much a single texture can do when used by other primitives, there’s an almost countless set of possible effects that you can make with it. I highly encourage you to check out other people’s work and break it down to learn more.

Yoksel has been experimenting with SVG filters on CodePen since my SVG Filters talk came out a few months ago, so you can find quite a few effects to break down and learn from on her CodePen profile.

An animated screenshot of one of Yoksel‘s feTurbulence codepen demos.
One of Yoksel’s latest SVG filter experiments leveraging feTurbulence: SVG Filters are 💕

I hope that this article has inspired you and opened a new door in your imagination to see what you can do with SVG Filters. In the last article in this series, I’ll be sharing some further resources and tools to help you move forward with SVG filters and to start making your own experiments. Stay tuned.

SVG Filter Effects: Creating Texture with <feTurbulence> was written by Sara Soueidan and published on Codrops.