Case Study: Sophie Studio

With a gaming interface as its core concept, Sophie Studio's website design showcases its values, purpose, and expertise in a fun and engaging way.

Case Study: Studiogusto

Get a glimpse of the creative and innovative techniques used by Studiogusto, a dynamic agency, in designing their new website to better reflect their values and showcase their expertise.

Case Study: Crosswire

Take a look behind the scenes of the creation of a unique website for Crosswire that used a 3D environment made with WebGL to simplify their complex service offering.

Case Study: Windland — An Immersive Three.js Experience

In this article we’ll look at the creation of a mini-city full of post effects and micro-interactions using Three.js.

Visit Windland

See how you can create your own using this free boilerplate.

Introduction

I am an unconditional game lover. I’ve always dreamed of creating an interactive mini-city with saturated colors, similar to SimCity and the like. The challenge was that I had neither enough 3D knowledge nor a suitable library.

At the end of 2021, I finally decided to fulfill an old desire and I took Bruno Simon’s course – Three.js Journey. I’m a designer who likes to program. I ended up discovering myself as a creative developer because of this course, where I was able to use part of my dormant knowledge of ActionScript 2.0, from the late Macromedia Flash.

The entire project was created in approximately two weeks, between shifts at my work at Neotix. It felt amazing and I loved doing it, so I decided to share some interesting details about the process, hoping to help those who are at the beginning of this journey.

Creative challenge

It was crucial to use various post-processing effects present in games to give this city a decent level of realism without affecting performance. The artistic path I chose to follow was to use a mix of realistic lights with low-poly models.

Image showing a 3D model of the city and the Three.js experience next to it.

Performance

Part of what I wanted with this project was to apply techniques that would perform well on different devices, especially mobile. It needed to work on as many devices as possible, with an acceptable frame rate (at least 30 fps). I also wanted the experience to load as quickly as possible, with a file size smaller than 2MB.

In order to accomplish this, I had to use a series of techniques that I will describe below.

Image showing the wireframe of the model within Blender, where it all started.

Creating the 3D model in Blender

I used Blender to make the city model. I imported some of the buildings from free models found on the Internet and modified a few of them to better match the setting. To make the terrain, I used Blender’s sculpt mode, creating valleys and peaks that look beautiful with light and shadow.

Image showing the 3D model in Blender in wireframe mode.

Each model was optimized for triangle count when exporting. I chose the GLB format because Draco compression is remarkably effective – sometimes reducing the file size by 7x. In addition, all project resources are compressed on the server with gzip for an even smaller transfer.
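For reference, loading a Draco-compressed GLB in Three.js looks roughly like this minimal sketch (the model path and the decoder path are illustrative):

import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

// The DRACOLoader decodes the compressed geometry before GLTFLoader parses the scene.
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath('/draco/'); // folder containing the Draco decoder files

const gltfLoader = new GLTFLoader();
gltfLoader.setDRACOLoader(dracoLoader);

gltfLoader.load('/models/city.glb', (gltf) => {
  scene.add(gltf.scene);
});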

Creating natural light

Lighting in games is fascinating – the way shadow interacts with the terrain in order to create a scene that’s pleasing to the eye while not being held back by reality. I used Blender’s global lighting system, with a “world” node, using “Nishita” ambient lighting. This allows for very natural lighting, with ambient settings that quickly give a pleasing result.

Image showing the lighting of the sun in different positions, superimposed on Blender’s Nishita sun parameter.

Distributing trees in Blender with geometry nodes

Trees play an important role, as they help create a cast shadow that gives the terrain a touch of realism. I used Blender’s Geometry Nodes to distribute the trees over the model and to vary their size, shape, and rotation. I also used the material selector to choose the regions that would have more or fewer trees, painting the density with the material selection.

Image showing the Blender interface with the Geometry Nodes setup created to distribute the trees in the scene.

Bake lighting for export

In order for the experience to function and perform well in Three.js, it’s important that the scene loads with the lighting baked into textures. I created a single 2048×2048 texture for the floor, containing all of the shadows. The process of baking shadows is covered in several tutorials on the internet. The end result is impressive and has no impact on performance.

Image showing the 3D model in Blender with the baked lighting texture.

Export to Three.js and the tree performance issue

After finishing the bake and connecting the texture to the color node of the ground mesh, I exported all of the meshes to glTF format. The entire model, using Draco compression, is 1.2MB. However, we had a problem with the trees: they couldn’t simply be exported along with everything else, as it would take the GPU too long to process them all.

I created the trees using the MeshSurfaceSampler from Three.js, which serves exactly this purpose: you take a model and distribute it over a surface, creating variations of the same model with modifications to each instance. The performance is incredible, even with a very large number of variations.

You can see an example of this in the official Three.js documentation.

Image showing the MeshSurfaceSampler code inside VS Code and the Three.js scene with the projected trees.
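Here is a minimal sketch of the technique, assuming a terrain mesh and a single tree mesh are already loaded (all names are illustrative, not the project’s actual code):

import * as THREE from 'three';
import { MeshSurfaceSampler } from 'three/examples/jsm/math/MeshSurfaceSampler.js';

// Sample random points on the terrain surface.
const sampler = new MeshSurfaceSampler(terrainMesh).build();

// A single InstancedMesh renders all the trees in one draw call.
const count = 2000;
const trees = new THREE.InstancedMesh(treeMesh.geometry, treeMesh.material, count);

const position = new THREE.Vector3();
const dummy = new THREE.Object3D();

for (let i = 0; i < count; i++) {
  sampler.sample(position);
  dummy.position.copy(position);
  dummy.rotation.y = Math.random() * Math.PI * 2;   // random rotation
  dummy.scale.setScalar(0.8 + Math.random() * 0.4); // size variation
  dummy.updateMatrix();
  trees.setMatrixAt(i, dummy.matrix);
}

scene.add(trees);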

Loading everything in Three.js

Using my boilerplate (see more about this at the end of the article), which I created to simplify things, I loaded the exported model. Afterwards, I spent a lot of time adjusting the light coloring, intensities, and other small details that make all the difference. A render that looks great in 3D software is not always great for the experience in Three.js when using the default parameters.

Image showing the evolution of the model within the Three.js and the color changes as a before and after.

It is essential to use dat.GUI to adjust the parameters visually. It is impossible to get the colors and intensities right by guessing the numbers.

Image showing the dat.GUI panel highlighted in the Three.js experience
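For instance, wiring a light up to a couple of dat.GUI controls looks like this (the light object and parameter ranges are just examples):

import * as dat from 'dat.gui';

const gui = new dat.GUI();
const params = { sunColor: '#ffddaa' };

// Tweak the intensity and color live instead of guessing numbers.
gui.add(sunLight, 'intensity', 0, 10).step(0.01).name('sun intensity');
gui.addColor(params, 'sunColor').onChange((value) => sunLight.color.set(value));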

Using VertexShader for the animation of the trees

One thing that adds realism to the scene is the smooth animation of the trees. It would be possible to export such an animation directly from Blender, but performance would be greatly impacted – especially given the large number of trees.

The best approach in these cases is to animate with a vertex shader, processing the vertex positions directly on the GPU. With that, the performance is very good and the animations are beautiful.

VS Code screenshot showing part of the vertex shader responsible for animating the trees, along with an image of the animated trees inside Three.js
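A stripped-down vertex shader for this kind of wind sway could look like the following sketch (the uniform names and constants are assumptions, not the project’s actual shader):

uniform float uTime;
uniform float uWindStrength;

void main() {
  vec3 pos = position;

  // Sway sideways over time; vary the phase per vertex so trees don't move in sync.
  float sway = sin(uTime * 2.0 + position.x * 0.5 + position.z * 0.5) * uWindStrength;

  // Scale the sway by height so the base stays anchored and the top moves the most.
  pos.x += sway * smoothstep(0.0, 1.0, uv.y);

  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}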

Animating the birds and other elements of the experience

The other animated elements of the experience, such as the helicopter, car, and wind turbines, were animated by changing the rotation of the model pieces directly in the render loop. It’s a very simple way to animate.
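As a sketch, that approach is just a few lines inside the render loop (the object names are illustrative):

const clock = new THREE.Clock();

function tick() {
  const elapsed = clock.getElapsedTime();

  // Constant rotations driven directly by the elapsed time.
  turbineBlades.rotation.z = elapsed * 2.0;
  helicopterRotor.rotation.y = elapsed * 12.0;

  renderer.render(scene, camera);
  requestAnimationFrame(tick);
}

tick();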

The birds were animated differently. I wanted them to have wing movement and a sense of grouping. So, I animated the whole group inside Blender and exported the animation along with the glTF file. I used the AnimationMixer to play the wing animation while changing the group’s position. The result is quite convincing and very lightweight (only 200kb).

Image showing the birds and their animation timeline inside Blender, and the same birds inside Three.js.
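Playing a baked glTF clip with the AnimationMixer boils down to something like this sketch (assuming the bird animation is the first clip in the file):

// Create a mixer bound to the imported scene and play the baked clip.
const mixer = new THREE.AnimationMixer(gltf.scene);
const action = mixer.clipAction(gltf.animations[0]);
action.play();

// The mixer must be advanced every frame with the time delta.
const clock = new THREE.Clock();

function tick() {
  mixer.update(clock.getDelta());
  renderer.render(scene, camera);
  requestAnimationFrame(tick);
}

tick();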

Lights, shadows, and the night mode + apocalyptic

As the shadows are baked inside the imported GLB file, I was able to gain some performance by not having to use a dynamically generated shadow map inside Three.js.

I played around with the lighting effects, creating a night mode and an apocalyptic mode. It’s a lot of fun to have that kind of creative freedom without having to modify the model. The possibilities are endless.

The apocalyptic mode is an easter egg, accessible to anyone who knows how to activate it :).

Image showing night mode and apocalyptic mode.

Post Processing with Effect Composer

I’ve always loved the depth-of-field effect in games, but I thought it would be very difficult to use something like that in a Three.js experience. Thanks to the latest updates to the library, it’s much easier.

Using EffectComposer, I was able to use the BokehPass effect in day mode, which generates a dynamic depth-of-field effect based on the distance from the camera. For night mode, I use UnrealBloomPass, which makes the lights overexposed, ideal for this type of situation.

I change the effects between night and day mode for performance reasons – using the insertPass() and removePass() methods.
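A hedged sketch of that setup, with placeholder pass parameters rather than the production values:

import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { BokehPass } from 'three/examples/jsm/postprocessing/BokehPass.js';
import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass.js';

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));

const bokehPass = new BokehPass(scene, camera, { focus: 60, aperture: 0.0005, maxblur: 0.01 });
const bloomPass = new UnrealBloomPass(
  new THREE.Vector2(window.innerWidth, window.innerHeight), 1.5, 0.4, 0.85
);

// Only one of the two heavy passes is active at a time.
function setNightMode(enabled) {
  if (enabled) {
    composer.removePass(bokehPass);
    composer.insertPass(bloomPass, 1);
  } else {
    composer.removePass(bloomPass);
    composer.insertPass(bokehPass, 1);
  }
}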

Clicking and Selecting a Building

A lot of people asked me how to make buildings clickable UI items. This was done using Three.js’ Raycaster, which casts an invisible ray from the camera through the mouse position and detects intersections with objects in the scene. With this, I can detect when a building has been selected and – based on its name – trigger an event.

Image showing the source code inside VS Code responsible for the object-selection raycaster and the camera animation.
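The core of such picking logic looks roughly like this (the buildings array and the selectBuilding handler are illustrative):

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

window.addEventListener('click', (event) => {
  // Convert the mouse position to normalized device coordinates (-1 to +1).
  pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
  pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;

  // Cast a ray from the camera through the pointer and test the buildings.
  raycaster.setFromCamera(pointer, camera);
  const hits = raycaster.intersectObjects(buildings, true);

  if (hits.length > 0) {
    selectBuilding(hits[0].object.name); // trigger an event based on the mesh name
  }
});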

The animations that happen when clicking on a building were done using Tween.js, by animating the camera from its initial position to the position of the clicked building. That way, I can place multiple buildings and have an animation generated automatically.
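Conceptually, it’s a single tween of the camera position toward an offset of the clicked building’s position; a sketch, with the offset and duration as assumptions:

import * as TWEEN from '@tweenjs/tween.js';

function flyToBuilding(building) {
  // Aim for a point slightly above and behind the building.
  const target = building.position.clone().add(new THREE.Vector3(0, 10, 25));

  new TWEEN.Tween(camera.position)
    .to({ x: target.x, y: target.y, z: target.z }, 1200)
    .easing(TWEEN.Easing.Cubic.InOut)
    .onUpdate(() => camera.lookAt(building.position))
    .start();
}

// The tween engine must be stepped inside the render loop: TWEEN.update();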

Responsive tweaks: Also working on mobile

Part of the work also involved tweaking the experience parameters to work well on mobile devices. Not just the responsive adjustments to the HTML and CSS, but also to the camera parameters, animation duration, and several other details.

Image showing Windland running in mobile mode with responsiveness.

Dynamic quality: Adjusting performance according to the power of the user’s device

Despite all the optimizations, some devices still cannot run all the effects, especially the post-processing ones. So I created a script that measures the FPS at the beginning of the experience (during the loading process). That way, when the experience starts, Three.js knows whether or not to activate certain effects to save on processing and keep the performance within what is possible for that device.

Image showing the code inside VS Code responsible for detecting the FPS.
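The idea can be sketched like this (the sampling window, the 45 fps threshold and the initExperience entry point are assumptions):

// Count frames for a fixed window and resolve with the measured FPS.
function measureFPS(durationMs = 1000) {
  return new Promise((resolve) => {
    let frames = 0;
    const start = performance.now();

    function count(now) {
      frames += 1;
      if (now - start < durationMs) {
        requestAnimationFrame(count);
      } else {
        resolve((frames * 1000) / (now - start));
      }
    }

    requestAnimationFrame(count);
  });
}

// During loading: decide which effects the device can afford.
measureFPS().then((fps) => {
  initExperience({ enablePostProcessing: fps >= 45 }); // illustrative setup entry point
});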

Also working on smartwatches

As a proof of concept, I wanted to demonstrate that experiences built with Three.js are not necessarily heavy to process and can run even on a smartwatch. During this process, I found that the number of vertices in the model is what impacts performance most on these devices. So, I created an “ultra-low poly” mode of the model to be used on them. And that was it: nothing else in the code needed to be changed.

Image showing Windland running on a smartwatch

Three.js Boilerplate: Create your own Windland

To make it easier to create projects in Three.js, I’ve created an easy-to-use, well-documented, and free starter project. Using this boilerplate, you will be able to create a scene in Three.js and import a model in just a few lines. The boilerplate also has instructions on how to export the model and some other information that can help you create your own city.

Find the repository on GitHub: https://github.com/ektogamat/threejs-andy-bolierplate

Watch the video on how to use it: https://www.youtube.com/watch?v=qM6Ih_cC6Gc


Interesting facts

Some people ask: what is the advantage of building a project with Three.js instead of just using a rendered image on the website? Well, a rendered image of this scene at 1920×1080 is approximately 2.8MB in size, with reasonable compression. The whole scene in Three.js, with all its interactivity, animations, and interface, is only 1.8MB.

Windland was awarded an Honorable Mention at Awwwards on March 31, 2022. It transformed itself from a test project into a Three.js use case for complex scenes that mimic the look and feel of games.

Three.js is increasingly my favorite library. You can find more videos on my YouTube channel or on my official Twitter.

Thanks for your time 🙂

The post Case Study: Windland — An Immersive Three.js Experience appeared first on Codrops.

Lessons to Learn from Etsy’s DevOps Strategy

Every organization adopting DevOps has stories to tell to the world. Some of them turned out to be success stories, while others are more like lessons to learn. While it's true that Etsy is one of those organizations that benefited a lot from their DevOps adoption, they also learned a few lessons from their mistakes along the way. Today, we will be talking about those lessons in detail. But first, let’s try to understand why Etsy became interested in DevOps in the first place.

Why Did Etsy Adopt DevOps?

Back in 2005, Etsy’s engineering teams were siloed into developers, operations teams, and database admins. Although the team was relatively small — close to 35 employees — they faced many team collaboration challenges. This barrier was hindering Etsy’s progress as an organization. 

Case Study: Anatole Touvron’s Portfolio

Like many developers, I’ve decided to redo my portfolio in the summer of 2021. I wanted to do everything from scratch, including the design and development, because I liked the idea of:

  • doing the whole project on my own
  • working on something different than coding
  • being 100% responsible for the website
  • being free to do whatever I want

I knew that I wanted to do something simple but efficient. I’ve tried many times to do another portfolio, but I’ve always felt stuck at some point and wanted to wipe everything clean by remaking the design or restarting the entire development. I talked to many developers about this, and I think we all know this feeling of getting tired of a project because of how long it is taking to finish. For the first time, I managed to avoid this feeling, simply by designing the entire project in one afternoon and developing it in two weeks.

Without going further, I want to point out that prior to this I studied @lhbizarro’s course on how to create an immersive website from scratch, and I would not have been able to create my portfolio this fast without it. It is a true wealth of knowledge that I now use on a regular basis.

Going fast

One of the best tips I can give any developer who wants to avoid getting tired of their own portfolio is to go as fast as possible. If you know that you tire easily of a project, I highly suggest doing it when you have the time and then rushing it like crazy. This goes for the coding as well as the design part.

Get inspired by websites that you like, find colours that match your vibe, fonts that you like, use your favourite design tool and don’t overthink it.

Things can always get better but if you don’t stick to what you’re doing you will never be able to finish it.

I’m really proud and happy about my portfolio but I know that it isn’t that creative and that crazy compared to stuff that you can see online. I’ve set the bar at a level that I knew I could reach and I think that this is a good way of being sure that you’ll have a new website that will make you progress and feel good about yourself.

Text animations

For the different text animations I chose to do something that I’ve been doing for a long time: animating lines for paragraphs, and letters or words for titles, with the help of the Intersection Observer API.

Using it is the most efficient and optimal way to do such animations, as it only triggers when needed in the browser and allows you to have many spans translating without making your user’s machine take off like a rocket.

You have to know that, the more elements there are to animate, the harder it will be to have a smooth experience on every device, which is why I used the least amount of spans to do my animations.

To split my text I always use GSAP’s SplitText because it spoon-feeds you the least fun part of the work. It’s really easy to use and never disappoints.
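As an illustration, splitting a title and staggering its words could look like this sketch:

import { gsap } from 'gsap';
import { SplitText } from 'gsap/SplitText';

gsap.registerPlugin(SplitText);

// SplitText wraps each line and word of the element in its own span.
const split = new SplitText('.title', { type: 'lines,words' });

gsap.from(split.words, {
  yPercent: 100,
  opacity: 0,
  stagger: 0.04,
  ease: 'power3.out',
});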

How to animate

There are multiple ways of animating your divs or spans, and I’ve decided to show you two of them:

  • fully with JavaScript
  • CSS and JavaScript-based

Controlling an animation entirely with JavaScript gives you code that’s easier to read and understand, because everything is in the same file. But it can be a bit less optimal, because you’ll be relying on JavaScript animation libraries.

Open the following example to see the code:
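The gist of the JavaScript-only approach is a sketch like this (the selector and tween values are illustrative):

import { gsap } from 'gsap';

const observer = new IntersectionObserver(
  (entries) => {
    entries.forEach((entry) => {
      if (!entry.isIntersecting) return;

      // Animate the pre-split spans of this block, then stop watching it.
      gsap.to(entry.target.querySelectorAll('span'), {
        y: 0,
        opacity: 1,
        stagger: 0.05,
        ease: 'power3.out',
      });
      observer.unobserve(entry.target);
    });
  },
  { threshold: 0.5 }
);

document.querySelectorAll('[data-animate]').forEach((el) => observer.observe(el));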

With CSS, your code can get a bit out of hand if you are not organised. There are a lot more classes to take care of. It’s up to you which one you prefer; I feel it’s important to talk about both options because each has its advantages and disadvantages depending on your needs.

The next example shows how to do it with CSS + JS (open the Codesandbox):
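In the CSS-based variant, JavaScript only toggles a class and the transition itself lives in the stylesheet; roughly:

// JS: add a class when the element enters the viewport.
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      entry.target.classList.add('is-visible');
      observer.unobserve(entry.target);
    }
  });
});

document.querySelectorAll('[data-animate]').forEach((el) => observer.observe(el));

/* CSS (illustrative):
[data-animate] span { transform: translateY(100%); transition: transform 0.6s ease; }
[data-animate].is-visible span { transform: translateY(0); } */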

Infinite slider

I’d like to show you the HTML and CSS part of the infinite slider on my portfolio. If you’d like to understand the WebGL part, I can recommend the following tutorial: Creating an Infinite Circular Gallery using WebGL with OGL and GLSL Shaders

The main idea of an infinite slider is to duplicate the actual content so that you can never see the end of it.

You can see the example here (open the Codesandbox for a better scroll experience):

Basically, you wrap everything in a big div that contains the doubled content, and when you’ve translated the equivalent of 50% of the big wrapper, you put everything back to its initial position. It’s really an illusion, and it will make more sense when you look at this schematic drawing:

At this very moment the big wrapper snaps back to its initial position to mimic the effect of an infinite slider.
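A bare-bones version of that snap-back logic could look like this (the markup and the speed are illustrative):

// HTML sketch: <div class="wrapper"><div class="content">…</div><div class="content">…</div></div>
const wrapper = document.querySelector('.wrapper');
let x = 0;

function tick() {
  x -= 1.5; // scroll speed in px per frame

  // The content is doubled, so after 50% of the wrapper we can snap back unnoticed.
  if (Math.abs(x) >= wrapper.scrollWidth / 2) x = 0;

  wrapper.style.transform = `translateX(${x}px)`;
  requestAnimationFrame(tick);
}

requestAnimationFrame(tick);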

WebGL

Although I won’t explain the WebGL part because Luis already explains it in his tutorial, I do want to talk about the post-processing part with OGL because I love this process and it provides a very powerful way of adding interactions to your website.

To do the post-processing, all you need is the following:

// Post and Vec2 come from the OGL library: import { Post, Vec2 } from 'ogl';
this.post = new Post(this.gl);
this.pass = this.post.addPass({
  fragment,
  uniforms: {
    uResolution: {
      value: new Vec2(1, window.innerHeight / window.innerWidth)
    },
    uMouse: {
      value: new Vec2()
    },
    uVelo: {
      value: 0
    },
    uAmount: {
      value: 0
    }
  }
});

We pass the mouse position, the velocity and the time to the uniforms. This allows us to create a grain effect and an RGB distortion based on the mouse position. You can find many shaders like this, and I find it really cool to have; it’s not that hard to implement.

float circle(vec2 uv, vec2 disc_center, float disc_radius, float border_size) {
  uv -= disc_center;
  uv*= uResolution;
  float dist = sqrt(dot(uv, uv));
  return smoothstep(disc_radius+border_size, disc_radius-border_size, dist);
}

float random( vec2 p )
{
  vec2 K1 = vec2(
    23.14069263277926,
    2.665144142690225
  );
  return fract( cos( dot(p,K1) ) * 12345.6789 );
}

void main() {
  vec2 newUV = vUv;

  float c = circle(newUV, uMouse, 0.0, 0.6);

  float r = texture2D(tMap, newUV.xy += c * (uVelo * .9)).x;
  float g = texture2D(tMap, newUV.xy += c * (uVelo * .925)).y;
  float b = texture2D(tMap, newUV.xy += c * (uVelo * .95)).z;

  vec4 newColor = vec4(r, g, b, 1.);

  newUV.y *= random(vec2(newUV.y, uAmount));
  newColor.rgb += random(newUV)* 0.10;

  gl_FragColor = newColor;
}

You can find the complete implementation here:

Conclusion

I hope that you liked this case study and learned a few tips and tricks! If you have any questions, you can hit me up @anatoletouvron 🙂

The post Case Study: Anatole Touvron’s Portfolio appeared first on Codrops.

Case Study: A Unique Website for Basement Grotesque

Basement Grotesque is one of basement studio’s first self-initiated projects. We wanted to create a typeface tailored to the studio’s visual language. It became the perfect excuse to show it off with a website that could suit its inherent bold and striking style.

The Beginnings

Our design team had just redesigned the studio’s identity. For the wordmark, instead of using an existing font, we decided to draw each of the characters in an effort to capture the distinct voice and style of the studio. Once we had drawn those final seven letters (and the period sign), it seemed only natural to expand them to a full-fledged bespoke font. That was the beginning of Basement Grotesque.

From then on, our task was fairly clear: we had to create a strong typeface, suitable for large headings and titling, and it had to be bold. Inspired by the specimens of grotesque typefaces of the early 19th century, and combining them with the renewed brutalist aesthetics of the contemporary visual landscape, Basement Grotesque would become our first venture into the daunting world of type design. We weren’t just designing and releasing a new font; we were on the journey of learning a new craft by doing.

Coincidentally, the studio had established quarterly hackathons to devote specific time and space to develop self-initiated projects where the whole team would get together, work on and ship anything we wanted to explore and test that wouldn’t normally be possible with client work. After some brainstorming, the first project chosen for our first hackathon was the site to launch and share our first typeface. Again, it just felt like a natural decision.

“This font is really the result of a creative itch we wanted to scratch a long time ago. We took on the challenge and now we have a growing typeface, open and available to anyone who wants to use it” — Andres Briganti, Head of Design

The Process

Once the goal was set, the challenge was to finish at least one weight of the typeface and then make a website to promote it and its future additions. We had to do something captivating so that it would motivate our community to use it and share it.

The font quickly got to a stage where it was advanced enough to be released. Part of the appeal of the project was the idea to create something that was an ongoing effort and a work in progress. New weights and widths, and even revisions and updates would become available and open to the public as time goes by.

Starting with the Black Weight (800), each character was carefully designed to convey the studio’s visual style, and some of the features of the current version are:

  • 413 unique glyphs
  • 14 discretionary ligatures
  • Bold, our voice and tone, of course
  • Old style and lining figures
  • Language support for Western and Central European languages

Create, Develop and Show Off

With the font in hand, we now had to build a website to present it. It had to be interactive enough to show the typeface’s features, but also feel solid and performant.

The preferred tech stack at basement is Next.js with TypeScript. Having those as basics, we then chose GSAP for animations, Stitches for CSS-in-JS, and Locomotive Scroll for better scrolling experiences — among other libraries. We started from a template repository, next-typescript, for faster development.

“We worked on a tight deadline, and the team really delivered. Each of us worked on a specific section, and then we arranged everything together to get to the final result.” Julian Benegas, Head of Development.

The repository was made open-source, so feel free to fork and play around with it: https://github.com/basementstudio/basement-grotesque.

Interesting Code Challenges

1) In the “Try this beauty” section, we have an interactive preview, where you can adjust the font size, leading, and tracking, to see how it looks. We had to create a ResizableTextarea component to solve some spacing issues. The syncHeights function synced the height of a placeholder div with the actual textarea node. A bit hacky!

function syncHeights(textarea: HTMLTextAreaElement, div: HTMLDivElement) {
  // Mirror the textarea's content into the hidden div, preserving line breaks.
  const value = textarea.value.replace(/\n/g, '<br>')
  div.innerHTML = value + '<br style="line-height: 3px;">'
  // Briefly show the div to measure it, then copy its height to the textarea.
  div.style.display = 'block'
  textarea.style.height = div.offsetHeight + 'px'
  div.style.display = 'none'
}

Full source here.

2) On the footer, we used the 2d physics library p2.js to get that cool “falling letters” animation. The code is a bit long, but please, feel free to follow the source from here.

3) We integrated with Twitter’s API to get and render tweets hashtagged #basementgrotesque on the website. On the download flow, we suggested that users tweet something to show some love 🖤

const handleDownload = useCallback(() => {
  const encoded = {
    text: encodeURIComponent(
      'I’m downloading the #basementgrotesque typeface, the boldest font I’ll ever use on my side-projects. Thank you guys @basementstudio 🏴 Get it now: https://grotesque.basement.studio/'
    )
  }
  window.open(
    `https://twitter.com/intent/tweet?text=${encoded.text}`,
    '_blank'
  )
  download(
    encodeURI(location.origin + '/BasementGrotesque-Black_v1.202.zip')
  )
}, [])

Full source here.

The Results

See it for yourself: https://grotesque.basement.studio/.

The code is open-source: https://github.com/basementstudio/basement-grotesque.

More About Us

basement.studio is a small team of like-minded individuals whose purpose is to create visually compelling brands and websites, focusing not only on their looks but also on their performance. As we like to say: we make cool shit that performs.

Our award-winning team of designers and developers are hungry for even more interesting projects to build.

Do you have any projects in mind? Don’t be shy: https://basement.studio/contact.

The post Case Study: A Unique Website for Basement Grotesque appeared first on Codrops.

Efficient Model Training in the Cloud with Kubernetes, TensorFlow, and Alluxio

Alibaba Cloud Container Service Team Case Study

This article presents the collaboration of Alibaba, Alluxio, and Nanjing University in tackling the problem of Deep Learning model training in the cloud. Various performance bottlenecks are analyzed with detailed optimizations of each component in the architecture. Our goal was to reduce the cost and complexity of data access for Deep Learning training in a hybrid environment, which resulted in an over 40% reduction in training time and cost.

1. New Trends in AI: Kubernetes-Based Deep Learning in The Cloud

Background

Artificial neural networks are trained with increasingly massive amounts of data, driving innovative solutions to improve data processing. Distributed Deep Learning (DL) model training can take advantage of multiple technologies, such as:

HarmonyOS QR Code Generation

Introduction

A QR code (abbreviated from Quick Response code) is a type of matrix barcode (or two-dimensional barcode).

HarmonyOS provides code generation to return the byte stream of a quick response (QR) code image based on a given string and QR code image size. The caller can use the QR code byte stream to generate a QR code image.

Why We Chose a Scale-Out Database as a MySQL Alternative

Chehaoduo is an online trading platform for both new cars and personal used cars. Founded in 2015, it is now one of the largest auto trading platforms in China, valued at $9 billion in its series D round of funding last year.

In the early stages of Chehaoduo, to quickly adapt to our application development, we chose MySQL as our major database. However, as our business evolved, we were greatly troubled by the complexity of MySQL sharding and schema changes. In the face of this dilemma, we found an alternative to MySQL: TiDB, an open-source, MySQL-compatible database that scales out to hold massive data.

Case Study: itsnotviolent.com

SOS Violence Conjugale is an organization that helps victims of violence with the goal of improving their safety. Their objective with It’s not violent was to implement an awareness campaign educating the target audience on the signs of violence through text messaging, a medium where this subject can be addressed a bit more subtly. Our proposal was therefore to provide an interactive conversation experience in which the user would be placed in a victim’s situation and would have to reply to text messages by choosing from the suggested responses. The aim is to denounce the various facets of violence through direct experience.

Concept

Domestic violence being a difficult topic, we decided to use a lighter approach. Firstly, to bring the user to participate in a playful manner, but also to make the message even more striking.

The illustrative universe echoes that of Emojis, with magnified emotions and expressions, which fits well with the theme of text message conversations. A somewhat candy-colour palette was used, in contrast with the dark backgrounds used to emphasize the gravity of the subject.

Typographical choices were also made to amplify this contrast: a bold and more “in your face” font, and one that is a little more playful and naive. Editorial effort was put into the navigation by the use of “stickers”, creating intrigue through the titles without revealing the main subject of the conversation.

Creating the stickers

Design

Personifying the objects on the stickers was deliberately done in such a way as to render the illustrations genderless and representative of the widest possible audience. The stickers’ animation gives a playful aspect and adds a narrative side, in addition to encouraging participation. The stickers were created in Illustrator and then animated in After Effects. The Bodymovin plugin was used to export the animations for the web with Lottie.

Implementation

We used the Lottie library, which allowed us to take animations made with After Effects and use them on the site. On the home page, the stickers are in “pause” mode by default, and when we hover over one of them, that sticker goes into “play” mode.

To make the hover state more visible, the title appears in the background in large type in a scrolling strip, while on a conversation page the stickers are in a constant loop.

Initialization of the Lottie instance:

this.animation = lottie.loadAnimation({
    container: this.el,
    animType: 'svg',
    loop: true,
    autoplay: false,
    animationData: data
})

If we’re not in autoplay mode (like on the home page), we bind the hover listeners:

if (!this.autoplay) {
    this.mouseenterBind = this.enter.bind(this)
    this.el.addEventListener('mouseenter', this.mouseenterBind)

    this.mouseleaveBind = this.leave.bind(this)
    this.el.addEventListener('mouseleave', this.mouseleaveBind)
}

On “mouseenter” and on “mouseleave”, we call a different function:

enter() {
    this.animation.play()
}

leave() {
    this.animation.pause()
}

Creating the conversations

Design

On desktop, we chose to place the conversation in the context of a mobile phone. Across a few exchanges, a choice of three replies is always offered to the user. The response from “the abuser” varies according to the user’s chosen reply. The choice of using a very large cursor also emphasizes that the user must choose from the suggested answers.

When a conversation is finished, an information page is displayed on the theme of violence exploited through that conversation. Finally, we find the list of conversations, presented as stickers, as an invitation to continue the experience.

Implementation

We used Vue.js to manage the dialogues: it made handling the components easier, since the content changes and has to respond to the user’s choices.

All the storylines are contained in a JSON file that includes the victim’s reply choices (a short version that appears in the choice list and a detailed version that appears in the conversation), and, depending on the reply selected, we return the abuser’s answer associated with it.

Example of JSON structure for a conversation:

"dialogues": [
{
    "choices": [
    {
        "author": "victim",
        "short": "It's not lame...",
        "content": "It's not lame, it's important to me..."
    },
    {
        "author": "victim",
        "short": "You're not OK with it?",
        "content": "You're not OK with it?"
    },
    {
        "author": "victim",
        "short": "I don't understand your reaction.",
        "content": "Um, I don't understand why you're reacting like this..."
    }],
    "replies": [
    {
        "author": "aggressor",
        "content": "Naturally, you're the important one!"
    },
    {
        "author": "aggressor",
        "content": "Wow! So perceptive!! ?"
    },
    {
        "author": "aggressor",
        "content": "And that's the problem!"
    }],
    "outro": [
    {
        "author": "aggressor",
        "content": "No biggie, I'll just come with you."
    }]
}]

When the conversation is finished, an overlay is launched with a question for the user, and then displays the results and some statistics on the subject. A fun, animated GIF transition is then played, depending on the chosen answer.

Challenges

Giving a realistic feel to the dialogues was one of the main challenges. We had to do several tests to give the impression of a real SMS conversation. We decided to display the messages with a delay, as if the abuser was writing. The longer the message, the longer it takes to receive it. We had to set a time limit, however, so that the user experience was not excessively long.

Message delays:

/**
 * Set the duration according to the message length. A longer message takes more time to be sent.
 *
 * @param {string} message
 * @return {int} duration in ms
 */
messageDuration(message) {
    if (message.length < 100)
        return (message.length * 20)
    else
        return 1600
}

Tech Stack

In order to fully support the accompanying media campaign, we decided to keep the back-end as simple as possible, which is why we used flat JSON files for the dialogue instead of storing it in a database, for example. In fact, we ended up rendering static HTML from our PHP templates to ensure there is absolutely no back-end processing necessary.

Although we usually use our Charcoal Content Management System, which is equipped with an aggressive caching system for those kinds of websites, you just can’t beat static files 🙂 With the help of CloudFlare acting as a CDN, we handled 100,000 visits in the first 2 weeks without a hiccup.

Front-end frameworks

Conclusion

Itsnotviolent.com was the kind of project where every star aligned for us. A great cause, a wonderful client and a showcase of creativity through a clever experience. The will and courage to take a lighter, more playful and interactive route to discuss the theme of violence has been so far rewarded with the high popularity of the campaign. It has been embraced by organizations in Canada and is attracting more interest outside the country.

Case Study: itsnotviolent.com was written by Marie-Christine Dion and published on Codrops.

Case Study: Akaru 2019

In 2019, a new version of our Akaru studio website was released.

After long discussions between developers and designers, we found the creative path we wanted to take for the redesign. The idea was to create a connection between our name, Akaru, and the graphic style. Akaru means “to highlight” in Japanese, and we wanted the site to transmit the light spectrum, the iridescence and the reflections that light can have on certain surfaces.

The particularity of the site is the mix of regular content in the DOM/CSS and interactive background content in WebGL. We’ll have a look at how we planned and decided on the visuals of the main effect, and in the second part we’ll share a technical overview and show how the “iridescent oil” effect was coded.

Design of the liquid effect

In the following, we will go through our iteration process between design and implementation and how we decided on the visuals of the interactive liquid/oil effect.

Visual Search

After some in-depth research, we built an inspirational mood board inspired by 3D artists and photographers. We have therefore selected several colors, and used all the details present in the images of liquids we considered: the mixture of fluids, streaks of colors and lights.

Process

We started to create our first texture tests in Photoshop using vector shapes, brushes, distortions, and blurs. After several tests we were able to make our first environmental test with an interesting graphic rendering. The best method was to first draw the waves and shapes, then paint over and mix the colors with the different fusion modes.

Challenges

The main challenge was to “feel” a liquid effect on the textures. It was from this moment that the exchange between designers and developers became essential. To achieve this effect, the developers created a parametric tool where the designers could upload a texture, and then decide on the fluid movements using a Flowmap. From there, we could manage amplitudes, noise speed, scale and a multitude of options.

Implementing the iridescent oil effect

Now we will go through the technical details of how the iridescent oil effect was implemented on every page. We are assuming some basic knowledge of WebGL with Three.js and the GLSL language so we will skip over commonly used code, like scene initialization.

Creating the plane

For this project, we use the OrthographicCamera of Three.js. This camera removes all perspective so we can create our plane without caring about the depth of it.

We will create our plane with a geometry which has the width of the viewport, and we get the height by multiplying the width of the plane by the aspect ratio of our texture:

const PLANE_ASPECT_RATIO = 9 / 16;

const planeWidth = window.innerWidth;
const planeHeight = planeWidth * PLANE_ASPECT_RATIO;

const geometry = new PlaneBufferGeometry(planeWidth, planeHeight);

We can keep the default number of segments, since this effect runs in the fragment shader. By doing so, we reduce the number of vertices we have to render, which is always good for performance.

Then, in the shader we use the UVs to sample our texture:

vec3 color = texture2D(uTexture, vUv).rgb;

gl_FragColor = vec4(color, 1.0);

Oil motion

Now that our texture is rendered on our plane, we need to make it flow.

To create some movement, we sample the texture twice, each time with a different offset:

float phase1 = fract(uTime * uFlowSpeed + 0.5);
float phase2 = fract(uTime * uFlowSpeed + 1.0);

// mirroring phase
phase1 = 1.0 - phase1;
phase2 = 1.0 - phase2;

vec3 color1 = texture2D(
    uTexture,
    uv + vec2(0.5, 0.0) * phase1).rgb;

vec3 color2 = texture2D(
    uTexture,
    uv + vec2(0.5, 0.0) * phase2).rgb;

Then we blend our two textures together:

float flowLerp = abs((0.5 - phase1) / 0.5);
vec3 finalColor = mix(color1, color2, flowLerp);

return finalColor;

But we don’t want our texture to always flow in the same direction; we want some areas to flow up, others to flow to the right, and so on. To achieve this, we used a flow map (also called a vector map), which looks like this:

Example Flow Map

A flow map is a texture in which every pixel contains a direction represented as a 2D vector with x and y components. In this texture, the red channel stores the direction on the x axis, while the green channel stores the direction on the y axis. Areas where the liquid is stationary are mid red and mid green (you can find those areas at the top of the map). The direction along each axis can go two ways; for example, on the x axis the liquid could go to the left or to the right. To store this information, a red value of 0 will make the texture flow to the left and a red value of 255 will make it flow to the right. In the shader, we implement this logic like this:

vec2 flowDir = texture2D(uFlowMap, uv).rg;
// make mid red and mid green the "stationary flow" values
flowDir -= 0.5;

// mirroring phase
phase1 = 1.0 - phase1;
phase2 = 1.0 - phase2;

vec3 color1 = texture2D(
    uTexture,
    uv + flowDir * phase1).rgb;

vec3 color2 = texture2D(
    uTexture,
    uv + flowDir * phase2).rgb;

We painted this map in Photoshop and, unfortunately, with every export format (JPEG, PNG, etc.), we always got some weird artefacts. We found that PNG produced the least “glitchy” exports; we suspect the artefacts come from Photoshop’s export compression algorithm. They are invisible to the eye and only show up when the image is used as a map. To fix this, we blurred the texture twice with glsl-fast-gaussian-blur (once vertically and once horizontally) and blended the two results together:

vec4 horizontalBlur = blur(
    uFlowMap,
    uv,
    uResolution,
    vec2(uFlowMapBlurRadius, 0.0)
  );
vec4 verticalBlur = blur(
    uFlowMap,
    uv,
    uResolution,
    vec2(0.0, uFlowMapBlurRadius)
  );
vec4 texture = mix(horizontalBlur, verticalBlur, 0.5);

As you can see, we used glslify to import GLSL modules hosted on npm; it’s very useful to keep your shader code split up and as simple as possible.

Make it feel more natural

Now that our liquid flows, we can clearly see when the liquid is repeating. But liquid doesn’t flow this way in real life. To create a better illusion of a realistic flow movement, we added some turbulence to distort our textures. 

To create this turbulence we use glsl-noise to compute a 3D noise in which x and y are the UVs, downscaled a bit to create a large distortion, and z is the time elapsed since the first frame. This creates an animated, seamless noise:

float x = uv.x * uNoiseScaleX;
float y = uv.y * uNoiseScaleY;

float n = cnoise3(vec3(x, y, uTime * uNoiseSpeed));

Then, instead of sampling our flow with the default UV, we distort them with the noise:

vec2 distortedUv = uv + applyNoise(uv);

vec3 color1 = texture2D(
    uTexture,
    distortedUv + flowDir * phase1).rgb;

...

On top of that, we use a uniform called uNoiseAmplitude to control the noise strength.

To observe how the noise influences the rendering, you can tweak it inside the “Noise” folder in the GUI at the top right of the screen. For example, try to tweak the “amplitude” value:

Adding the mouse trail

To add some user interaction, we wanted to create a trail inside the oil, like a finger pushing the oil, where the finger would be the user’s pointer. This effect consists of three things:

1. Computing the mouse trail

To achieve this we used a Frame Buffer Object (FBO). I will not go very deep into what frame buffers are here, but if you want to, you can learn everything about them here

Basically, it will: 

  1. Draw a circle in the current mouse position
  2. Render this as a texture
  3. Store this texture
  4. Use this texture the next frame to draw on top of it the new mouse position
  5. Repeat

By doing so, we have a trail drawn by the mouse, and everything runs on the GPU! For this kind of simulation, running on the GPU is way more performant than running on the CPU.
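In Three.js terms, this ping-pong setup can be sketched like so (all names are illustrative):

// Two render targets: read the previous frame's trail, draw the new one.
let read = new THREE.WebGLRenderTarget(512, 512);
let write = new THREE.WebGLRenderTarget(512, 512);

function updateTrail() {
  trailMaterial.uniforms.uPrevTrail.value = read.texture; // last frame's trail (step 4)
  trailMaterial.uniforms.uMouse.value = mouse;            // current mouse position (step 1)

  renderer.setRenderTarget(write);                        // render the trail to a texture (steps 2-3)
  renderer.render(trailScene, trailCamera);
  renderer.setRenderTarget(null);

  [read, write] = [write, read];                          // swap targets and repeat (step 5)
}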

2. Blending the trail with the flow map

We can use the frame buffer as a texture. It is a black texture with a white trail painted by the mouse. So we pass our trail texture via a uniform to our oil shader, and we can sample it like this:

float trail = texture2D(uTrailTexture, uv).r;

We use only the red component of our texture, since it’s a grayscale map and all channels are equal.

Then, inside our flow function, we use the trail to change the direction in which our liquid texture flows:

flowDir.x -= trail;
flowDir.y += trail * 0.075;

3. Adding the mouse acceleration

When we move a finger through a liquid, the trail it creates depends on the speed at which the finger moves. To recreate this feeling, we make the radius of the trail depend on the mouse speed: the faster the mouse goes, the bigger the trail will be.

To find the mouse speed, we compute the distance between the smoothed and the current mouse position:

const deltaMouse = clamp(this.mouse.distanceTo(this.smoothedMouse), 0.0, 1.0) * 100;

Then we normalize it and apply an easing to this value with the easing functions provided by TweenMax to avoid creating a linear acceleration.

const normDeltaMouse = norm(deltaMouse, 0.0, this.maxRadius);
const easeDeltaMouse = Circ.easeOut.getRatio(normDeltaMouse);

The Tech Stack

Here’s an overview of the technologies we’ve used in our project:

  • three.js for the WebGL part
  • Vue.js for the DOM part; it allows us to wrap the WebGL inside a component which communicates easily with the rest of the UI
  • GSAP is the tweening library we love and use in almost every project as it is well optimized and performant
  • Nuxt.js to pre-render during deployment and serve all our pages as static files
  • Prismic, a really easy to use headless CMS with a nice API for image formatting and a lot of other useful features

Conclusion

We hope you liked this Case Study, if you have any questions, feel free to ask us on Twitter (@lazyheart and @colinpeyrat), we would be very happy to receive your feedback!

Case Study: Akaru 2019 was written by Jeremy Franzese and published on Codrops.

Case Study: Portfolio of Bruno Arizio

Introduction

Bruno Arizio, Designer — @brunoarizio

Since I first became aware of the energy in this community, I felt the urge to be more engaged in this ‘digital avant-garde landscape’ that is being cultivated by the amazing people behind Codrops, Awwwards, CSSDA, The FWA, Webby Awards, etc. That energy has propelled me to set up this new portfolio, which acted as a way of putting my feet into the water and getting used to the temperature.

I see this community being responsible for pushing the limits of what is possible on the web, fostering the right discussions and empowering the role of creative developers and creative designers across the world.

With this in mind, it’s difficult not to think of the great art movements of the past and their role in mediating change. You can easily draw a parallel between this digital community and the Impressionist artists of the last century, or the Bauhaus movement leading our society into modernism a few decades ago. What these periods have in common is that they pushed the boundaries of what is possible, of what the new standard is, through relentless experimentation. The result of that is the world we live in, the products we interact with, and the buildings we inhabit.

The websites that are awarded today are so because they innovate in some aspect, and those innovations eventually become a new standard. We can see that in the apps used by millions of people, in consumer websites, and so on. That is the impact that we make.

I’m not saying that a new interaction featured on a portfolio launched last week is going to be in the hands of millions of people across the globe the following week, but constantly pushing these interactions to their limits will scale them and eventually make them adopted as new standards. This is the kind of responsibility that is in our hands.

Open Source

We decided to be transparent and take a step forward in making this entire project open source, so people can learn how to make the things we created. We are both interested in supporting the community, so feel free to ask us questions on Twitter or Instagram (@brunoarizio and @lhbzr); we welcome you to do so!

The repository is available on GitHub.

Design Process

With the portfolio, we took a meticulous approach to motion and collaborated to devise deliberate interactions that have a ‘realness’ to them, especially on the main page.

The mix of the bending animation with the distortion effect was central to making the website ‘tactile’. It is meant to feel good when you shuffle through the projects, and since it was published we received a lot of messages from people saying how addictive the navigation is.

A lot of my new ideas come from experimenting with shaders and filters in After Effects, and just after I find what I’m looking for — the ‘soul’ of the project — I start to add the ‘grid layer’ and begin to structure the typography and other elements.

In this project, before jumping into Sketch, I started working with a variety of motion concepts in AE, and that’s when the version with the convection bending came in and we decided to take it forward. So we can pretty much say that the project was born from motion, not from a layout. After the main idea was solid enough, I took it to Sketch, designed a simple grid and applied the typography.

Collaboration

Working in collaboration with Luis was so productive. This is the second (of many to come) project we’ve worked on together, and I can safely say that we had a strong connection from start to finish, which was absolutely important for the final result. It wasn’t a case in which the designer creates the layouts, hands them over to a developer, and that’s it. This was a nuanced relationship of constant feedback. We collaborated daily from idea to production, and it was fantastic how dev and design shared this keen eye for perfectionism.

From layout to code we were constantly fine-tuning every aspect: from the cursor kinetics to making overall layout changes and finding the right tone for the easing curves and the noise mapping on the main page.

When you design a portfolio, especially your own, it feels daunting, since you are free to do whatever you want. But the consequence is that this will dictate how people see your work, and what work you will be doing shortly after. So making the right decisions deliberately and predicting their impact is mandatory for success.

Technical Breakdown

Luis Henrique Bizarro, Creative Developer — @lhbzr

Motion Reference

This was the video of the motion reference that Bruno shared with me when he introduced his ideas for his portfolio. I think one of the most important things when starting a project like this, with the idea of implementing a lot of different animations, is to create a little prototype in After Effects to guide the developer toward achieving similar results using code.

The Tech Stack

The portfolio was developed using:

That’s my favorite stack to work with right now; it gives me a lot of freedom to focus on animations and interactions instead of having to follow guidelines of a specific framework.

In this particular project, most of the code was written from scratch using ECMAScript 2015+ features like Classes, Modules, and Promises to handle the route transitions and other things in the application.

In this case study, we’ll be focusing on the WebGL implementation, since it’s the core animation of the website and the most interesting thing to talk about.

1. How to measure things in Three.js

This specific subject has already been covered in other Codrops articles, but in case you’ve never heard of it before: when you’re working with Three.js, you’ll need to make some calculations in order to have values that represent the correct size of your browser’s viewport.

In my last projects, I’ve been using this Gist by Florian Morel, which is basically a calculation that uses your camera field-of-view to return the values for the height and width of the Three.js environment.

// createCamera()
// THREEMath refers to Three.js' math utilities (THREE.Math, now MathUtils).
const fov = THREEMath.degToRad(this.camera.fov);
const height = 2 * Math.tan(fov / 2) * this.camera.position.z;
const width = height * this.camera.aspect;
        
this.environment = {
  height,
  width
};

// createPlane()
const { height, width } = this.environment;

this.plane = new PlaneBufferGeometry(width * 0.75, height * 0.75, 100, 50);

I usually store these two variables in the wrapper class of my applications; this way we just need to pass them to the constructors of the other elements that will use them.

In the embed below, you have a very simple implementation of a PlaneBufferGeometry that covers 75% of the height and width of your viewport using this solution.

2. Uploading textures to the GPU and using them in Three.js

In order to avoid textures being processed at runtime while the user is navigating through the website, I consider it very good practice to upload all images to the GPU as soon as they’re ready. On Bruno’s portfolio, this process happens during the preloading of the website. (Kudos to Fabio Azevedo for introducing me to this concept a long time ago in previous projects.)

Two other good additions, in case you don’t want Three.js to resize and process the images you’re going to use as textures, are disabling mipmaps and changing how the texture is sampled, via the generateMipmaps and minFilter attributes.

this.loader = new TextureLoader();

this.loader.load(image, texture => {
  texture.generateMipmaps = false;
  texture.minFilter = LinearFilter;
  texture.needsUpdate = true;

  this.renderer.initTexture(texture, 0);
});

The .initTexture() method was introduced in recent versions of Three.js in the WebGLRenderer class, so make sure to update to the latest version of the library to be able to use this feature.

But my texture is looking stretched! The default behavior of the map attribute of Three.js’ MeshBasicMaterial is to make your image fit into the PlaneBufferGeometry. This happens because of the way the library handles 3D models. But in order to keep the original aspect ratio of your image, you’ll need to do some calculations as well.

There’s a lot of different solutions out there that don’t use GLSL shaders, but in our case we’ll also need them to implement our animations. So let’s implement the aspect ratio calculations in our fragment shader that will be created for the ShaderMaterial class.

So, all you need to do is pass your Texture to your ShaderMaterial via the uniforms attribute. In the fragment shader, you’ll be able to use all variables passed via the uniforms attribute.

In Three.js Uniform documentation you have a good reference of what happens internally when you pass the values. For example, if you pass a Vector2, you’ll be able to use a vec2 inside your shaders.

We need two vec2 variables to do the aspect ratio calculations: the image resolution and the resolution of the renderer. After passing them to the fragment shader, we just need to implement our calculations.

this.material = new ShaderMaterial({
  uniforms: {
    image: {
      value: texture
    },
    imageResolution: {
      value: new Vector2(texture.image.width, texture.image.height)
    },
    resolution: {
      type: "v2",
      value: new Vector2(window.innerWidth, window.innerHeight)
    }
  },
  fragmentShader: `
    uniform sampler2D image;
    uniform vec2 imageResolution;
    uniform vec2 resolution;

    varying vec2 vUv;

    void main() {
        vec2 ratio = vec2(
          min((resolution.x / resolution.y) / (imageResolution.x / imageResolution.y), 1.0),
          min((resolution.y / resolution.x) / (imageResolution.y / imageResolution.x), 1.0)
        );

        vec2 uv = vec2(
          vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
          vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
        );

        gl_FragColor = vec4(texture2D(image, uv).xyz, 1.0);
    }
  `,
  vertexShader: `
    varying vec2 vUv;

    void main() {
        vUv = uv;

        vec3 newPosition = position;

        gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
    }
  `
});

In this snippet we’re using template strings to represent the code of our shaders only to keep it simple when using CodeSandbox, but I highly recommend using glslify to split your shaders into multiple files to keep your code more organized in a more robust development environment.

We’re all good now with the images! Our images are preserving their original aspect ratio and we also have control over how much space they’ll use in our viewport.

3. How to implement infinite scrolling

Infinite scrolling can be very challenging, but in a Three.js environment the implementation is smoother than it would be with CSS transforms and HTML elements, because you don’t need to worry about storing the original positions of the elements and calculating their distances to avoid browser repaints.

Overall, a simple logic for the infinite scrolling should follow these two basic rules:

  • If you’re scrolling down, your elements move up — when your first element isn’t on the screen anymore, you should move it to the end of the list.
  • If you’re scrolling up, your elements move down — when your last element isn’t on the screen anymore, you should move it to the start of the list.

Sounds reasonable, right? So, first we need to detect the direction in which the user is scrolling.

this.position.current += (this.scroll.values.target - this.position.current) * 0.1;

if (this.position.current < this.position.previous) {
  this.direction = "up";
} else if (this.position.current > this.position.previous) {
  this.direction = "down";
} else {
  this.direction = "none";
}

this.position.previous = this.position.current;

The variable this.scroll.values.target is responsible for defining the scroll position the user wants to reach. The variable this.position.current represents the current position of the scroll; it moves smoothly toward the target value thanks to the * 0.1 multiplication.

After detecting the direction the user is scrolling towards, we just store the current position to the this.position.previous variable, this way we’ll also have the right direction value inside the requestAnimationFrame.

Now we need a checking method that gives our items the expected behavior based on the scroll direction and their position. It looks like the one below:

check() {
  const { height } = this.environment;
  const heightTotal = height * this.covers.length;

  if (this.position.current < this.position.previous) {
    this.direction = "up";
  } else if (this.position.current > this.position.previous) {
    this.direction = "down";
  } else {
    this.direction = "none";
  }

  this.projects.forEach(child => {
    child.isAbove = child.position.y > height;
    child.isBelow = child.position.y < -height;

    if (this.direction === "down" && child.isAbove) {
      const position = child.location - heightTotal;

      child.isAbove = false;
      child.isBelow = true;

      child.location = position;
    }

    if (this.direction === "up" && child.isBelow) {
      const position = child.location + heightTotal;

      child.isAbove = true;
      child.isBelow = false;

      child.location = position;
    }

    child.update(this.position.current);
  });
}

Now our logic for the infinite scroll is finally finished! Drag and drop the embed below to see it working.

You can also view the fullscreen demo here.

4. Integrate animations with infinite scrolling

The website motion reference has four different animations happening while the user is scrolling:

  • Movement on the z-axis: the image moves from the back to the front.
  • Bending on the z-axis: the image bends a little bit depending on its position.
  • Image scaling: the image scales slightly when moving out of the screen.
  • Image distortion: the image is distorted when we start scrolling.

My approach to implementing the animations was to use a calculation of the element position divided by the viewport height, giving me a percentage number between -1 and 1. This way I’ll be able to map this percentage into other values inside the ShaderMaterial instance.

  • -1 represents the bottom of the viewport.
  • 0 represents the middle of the viewport.
  • 1 represents the top of the viewport.

const percent = this.position.y / this.environment.height;
const percentAbsolute = Math.abs(percent);

The implementation of the z-axis animation is pretty simple, because it can be done directly with JavaScript using this.position.z from Mesh, so the code for this animation looks like this:

this.position.z = map(percentAbsolute, 0, 1, 0, -50);

The implementation of the bending animation is slightly more complex: we need to use the vertex shader to bend our PlaneBufferGeometry. I’ve chosen distortion as the value that controls this animation inside the shaders. We also pass two other parameters, distortionX and distortionY, which control the amount of distortion on the x and y axes.

this.material.uniforms.distortion.value = map(percentAbsolute, 0, 1, 0, 5);

uniform float distortion;
uniform float distortionX;
uniform float distortionY;

varying vec2 vUv;

void main() {
  vUv = uv;

  vec3 newPosition = position;

  // 50 is the number of x-axis vertices we have in our PlaneBufferGeometry.
  float distanceX = length(position.x) / 50.0;
  float distanceY = length(position.y) / 50.0;

  float distanceXPow = pow(distortionX, distanceX);
  float distanceYPow = pow(distortionY, distanceY);

  newPosition.z -= distortion * max(distanceXPow + distanceYPow, 2.2);

  gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
}

The implementation of image scaling was made with a single function inside the fragment shader:

this.material.uniforms.scale.value = map(percent, 0, 1, 0, 0.5);

vec2 zoom(vec2 uv, float amount) {
  return 0.5 + ((uv - 0.5) * (1.0 - amount));
}

void main() {
  // ...

  uv = zoom(uv, scale);

  // ...
}

The implementation of distortion was made with glsl-noise and a simple calculation displacing the texture on the x and y axis based on user gestures:

onTouchStart() {
  TweenMax.to(this.material.uniforms.displacementY, 0.4, {
    value: 0.1
  });
}

onTouchEnd() {
  TweenMax.killTweensOf(this.material.uniforms.displacementY);

  TweenMax.to(this.material.uniforms.displacementY, 0.4, {
    value: 0
  });
}

#pragma glslify: cnoise = require(glsl-noise/classic/3d)

void main() {
  // ...

  float noise = cnoise(vec3(uv, cos(time * 0.1)) * 10.0 + time * 0.5);

  uv.x += noise * displacementX;
  uv.y += noise * displacementY;

  // ...
}

And here is the final code of the fragment shader, merging all three animations together.

#pragma glslify: cnoise = require(glsl-noise/classic/3d)

uniform float alpha;
uniform float displacementX;
uniform float displacementY;
uniform sampler2D image;
uniform vec2 imageResolution;
uniform vec2 resolution;
uniform float scale;
uniform float time;

varying vec2 vUv;

vec2 zoom(vec2 uv, float amount) {
  return 0.5 + ((uv - 0.5) * (1.0 - amount));
}

void main() {
  vec2 ratio = vec2(
    min((resolution.x / resolution.y) / (imageResolution.x / imageResolution.y), 1.0),
    min((resolution.y / resolution.x) / (imageResolution.y / imageResolution.x), 1.0)
  );

  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );

  float noise = cnoise(vec3(uv, cos(time * 0.1)) * 10.0 + time * 0.5);

  uv.x += noise * displacementX;
  uv.y += noise * displacementY;

  uv = zoom(uv, scale);

  gl_FragColor = vec4(texture2D(image, uv).xyz, alpha);
}

You can also view the fullscreen demo here.

Photos used in examples of the article were taken by Willian Justen and Azamat Zhanisov.

Conclusion

We hope you liked the Case Study we’ve written together, if you have any questions, feel free to ask us on Twitter or Instagram (@brunoarizio and @lhbzr), we would be very happy to receive your feedback.

Case Study: Portfolio of Bruno Arizio was written by Bruno Arizio and published on Codrops.