Safeguarding the IoT Landscape With Data Masking Techniques

As businesses aim to provide personalized experiences to their customers, they are increasingly integrating connected IoT devices into their operations. However, as the IoT ecosystem expands, protecting data from malicious individuals who may try to access and misuse personal information becomes essential. According to MarketsandMarkets forecasts, the global IoT security market will grow from USD 20.9 billion in 2023 to USD 59.2 billion by 2028, at a compound annual growth rate (CAGR) of 23.1% during the forecast period.

One of the key strategies for safeguarding data in this complex ecosystem is data masking. This article examines how data masking shapes the IoT landscape and the role it plays in protecting Personally Identifiable Information (PII), preserving data utility, and mitigating cybersecurity risks.
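To make the idea concrete, here is a minimal masking sketch in Python showing three common moves (pseudonymize, redact, generalize) applied to an IoT telemetry record; the field names and salt are hypothetical, not taken from any particular product:

import hashlib

SALT = "rotate-me"  # hypothetical; in practice, keep salts in a secrets store

def mask_record(record: dict) -> dict:
    masked = dict(record)
    # Pseudonymize: a salted hash keeps records joinable without exposing identity.
    masked["user_id"] = hashlib.sha256((SALT + record["user_id"]).encode()).hexdigest()[:12]
    # Redact: free-text PII is dropped outright.
    masked["owner_name"] = "***"
    # Generalize: coarsen coordinates so they keep analytic value without pinpointing a home.
    masked["latitude"] = round(record["latitude"], 1)
    masked["longitude"] = round(record["longitude"], 1)
    return masked

reading = {"user_id": "u-81723", "owner_name": "Jane Doe",
           "latitude": 45.81543, "longitude": 15.98102, "temperature_c": 21.4}
print(mask_record(reading))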

Yes! OpenTelemetry Is a Critical Part of Securing Your Systems

OpenTelemetry (OTel) is an open-source standard for instrumenting distributed systems and collecting and exporting their telemetry data. As a framework widely adopted by SRE and security teams, OTel is more than just one nice-to-have tool among many; it is critical.

In this post, we’ll explore the role that OTel plays in system security. We’ll look at how telemetry data is used to secure systems along with how OTel securely handles telemetry data. Then, we’ll consider concrete practices — basic and advanced — that you can adopt as you use OTel in your organization.
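As a quick taste of what instrumenting with OTel looks like, here is a minimal Python sketch that emits a trace span annotated with security-relevant attributes. It assumes the opentelemetry-api and opentelemetry-sdk packages are installed; the service and attribute names are hypothetical:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer that prints finished spans to stdout.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("auth-service")  # hypothetical service name

# Each unit of work becomes a span; attributes like these are what a
# security team can later query for anomalies such as failed logins.
with tracer.start_as_current_span("user-login") as span:
    span.set_attribute("login.outcome", "failure")        # hypothetical attribute
    span.set_attribute("login.source_ip", "203.0.113.7")  # documentation IP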

Three Stages of the Product Development Process

A product can be anything; a product manager's role and responsibilities change across different industries. In this post, I will dispel some myths about the Product Manager role and share a bird's-eye view of the product development process, along with some frameworks that may be useful for remembering the overall process.

Product Manager Role

Product Managers are not managers of anybody, except perhaps school interns who aspire to become product managers themselves. The PM acts as a central node in the product development process and is ultimately responsible for the product's success. The role brings all the viewpoints together and is designed with no direct reports so that the engineering and design teams can build an open-communication relationship and freely express their ideas and concerns.

How To Download and Install Maven

The Apache Software Foundation created Maven, a popular open-source build tool, to build, publish, and deploy multiple projects simultaneously for improved project management. The tool also provides a lifecycle framework for building and documenting projects.

Maven is written in Java and can also be used to build projects in C#, Scala, Ruby, and other languages. Based on the Project Object Model (POM), the tool has made Java developers' lives simpler by generating reports, verifying builds, and automating test setups.

Python IDE Gotcha

I came across this item while programming something that should have been very simple. VLC Media Player will save the last viewed position in a media file (assuming you have the resume option enabled). It maintains two lists: file names and offset times. The file names are encoded such that certain characters are replaced with "%" strings (%20 for a blank, for example). I wrote a short script to remove the resume entries for files that were no longer on my computer.

The Python urllib has two handy functions, quote and unquote, for encoding and decoding these strings within URLs. In my code I had

import urllib
# ...
file = urllib.parse.unquote(file_list[i])

I developed the code using IdleX (an IDLE variant). Once the code was debugged, I attached it to a hotkey. Much to my surprise, running the code with the hotkey did nothing. Stripping it down to the bare essentials gave me

import urllib

print(urllib.parse.unquote('my%20file.txt'))

Running it from within IdleX resulted in

======================= RESTART: D:\temp\test.py =======================
my file.txt

but running it from a cmd shell gave me

Traceback (most recent call last):
  File "D:\temp\test.py", line 3, in <module>
    print(urllib.parse.unquote('my%20file.txt'))
AttributeError: module 'urllib' has no attribute 'parse'

From the IdleX shell, typing dir(urllib) gives

>>> dir(urllib)
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'parse']

and running the same from Python invoked in a cmd shell gives

>>> dir(urllib)
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']

The IdleX environment adds parse. I found the same addition when I edited and ran the code in the Visual Studio Code IDE.

The solution (if you can call it that) was to code it as

from urllib import parse

print(parse.unquote('my%20file.txt'))

I have found no published explanation for this behaviour, but the likely culprit is Python's import semantics: import urllib imports only the package itself, not its submodules, and urllib.parse becomes available as an attribute only after something in the running process imports it. IDE shells such as IdleX (and VS Code's runner) evidently import urllib.parse for their own purposes, which is why dir(urllib) lists parse there but not in a plain interpreter. That is also why an IDE can provide an environment that does not accurately reflect a production environment.
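A minimal sketch that demonstrates this mechanism (run it from a plain python interpreter rather than an IDE shell):

import sys
import urllib

# Importing the package alone does not import its submodules.
print(hasattr(urllib, "parse"))       # False
print("urllib.parse" in sys.modules)  # False

import urllib.parse  # explicitly import the submodule

# The import binds parse as an attribute of the urllib package.
print(hasattr(urllib, "parse"))       # True
print(urllib.parse.unquote("my%20file.txt"))  # my file.txt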

Chris’ Corner: Hot New Web Features

Yuri Mikhin and Travis Turner over at Evil Martians implore us: Don’t wait, let’s use the browser Contact Picker API now. As I write, it’s only available essentially on Chrome for Android. But the spec exists and iOS also has an experimental flag for it. I’m an iOS guy so I flipped it on. I actually didn’t even know you could do that! (Settings > Safari > Advanced > Experimental Features).

You can see the contact picker in there and other cool stuff like text-wrap.

Now if you’re in a browser that supports it…

The select method will show users a modal UI to select contacts, and it will then return a Promise. If the promise resolves, it will return an array (even if only one contact was selected) filled with ContactInfo interfaces.

I’m sure you can imagine it. You tap a button or something, and it launches a contact picker. You select a contact from your device’s contacts. It returns data from those contacts, like names, emails, and phone numbers.

Not every app needs it, but I imagine a lot could make use of it (particularly in a progressive enhancement style). Does your app have any kind of invite or sharing UI? You could use it there. I’m thinking of something like Figma’s share modal:

I’m just on my own to write in email addresses in there. If this were Google Docs, well, they have the distinct advantage that they already have a contact list for you, thanks to the likelihood that you also use Gmail and keep some form of your contacts there. But very few of us are Google. The Contact Picker API levels that playing field!

I gave the Evil Martians demo a spin, and it works great.

Weirdest part? No “search” ability in the contact picker popup.

If only there were some kind of easy-to-use web app that made it really easy to play with new APIs like this, get a feel for them, and save them for later reference. Some kind of playpen for code.


I think I was bitching about text-wrap: balance; the other day. Just like me, to be given such a glorious new property that helps make headlines look better across the web and to find something not to like about it. The balance value takes multi-line text and makes all those lines as roughly even as it can. I feel like that looks pretty good for headlines, generally. The kicker is that “balancing” isn’t always what people are looking to achieve; what they really want is just to avoid an awkward single-word orphan wrapping down onto the next line.

Adam Argyle said to me: have you seen text-wrap: pretty;?

  1. No, I have not.
  2. Awww, pretty is a great keyword value in CSS.

I googled it and found Amit Merchant’s quick coverage. Then I set about making a demo and trying it out (only works in Chrome Canary until 117 ships to stable).

See what I mean about balance above? There is just far too much space left over above when all I really wanted to do was prevent the single-word orphan. Now pretty can prevent that.

That’s so generically useful I might be tempted to do something like…

p, li, dt, dd, blockquote, .no-orphan {
  text-wrap: pretty;
}

… in a “reset” stylesheet.


People reflecting on formative moments in their lives usually make for a good story. It’s especially relatable when they say something like: “… and that’s how I became a nerd.” That’s what happened when Alexander Miller’s dad gave him some paper:

When I was a kid, my dad gave me a piece of paper with a grid printed on it. It consisted of larger squares than standard graph paper, about an inch in size. It was basically a blank chessboard.

GRID WORLD

I didn’t become an incredible code artist like Alexander, but I can still relate. My first really “successful” programs were grid-based. First, a Conway’s Game of Life thing (that I’m still a little obsessed with) and then a Battleship clone (like Alexander’s father). These were absolutely formative moments for me.


Do you know one of the major JavaScript frameworks better than another? I bet you do, don’t you? You’re a Svelte groupie, I can tell.

Component Party is a website that shows you how to do the basic and important stuff in each major framework (React, Svelte, Vue 2/3, Angular, Lit, Ember, and several more). Very clever, I think! It’s helpful to know one of the frameworks so you can verify that’s how you would have done it there, then see how it works in another framework. For example, if you need to loop over data in React, you probably end up doing a .map() thing, but in Svelte there is #each, whereas in Vue it’s a v-for attribute. I don’t work across different frameworks enough to have all this memorized, so a big 👍 from me for making a useful reference here.


“Sometime this fall (2023)” is all we know for the release date of macOS Sonoma. Normally, operating system releases aren’t that big of a deal for web designers and developers, who are more interested in browser version releases. But Sonoma has a trick up its sleeve.

With macOS Sonoma, Apple goes all-in on the concept of installable web apps. They’re highly integrated in the overall macOS experience and don’t give away their web roots by not showing any Safari UI at all.

Thomas Steiner, Web Apps on macOS Sonoma 14 Beta

Installable web apps, you say? Like… PWAs? (Progressive Web Apps). The point of PWAs, at least in my mind, is that they are meant to be real competitors to native apps. After installation, they are clickable icons right there next to any other app. Level playing field. But to become installable, there was kind of a minimum checklist of requirements, starting with a manifest.json.

The word from Apple is that there are literally zero requirements for this. You can “Add to Dock” any website, and it’ll work. I guess that means it’s possible to have a docked app that just entirely doesn’t work offline, but 🤷‍♀️.

Sites installed this way do respect all the PWA mechanics like one would hope! Sites with a manifest won’t show any Safari UI at all. I don’t think there are install prompts offered yet, so users would have to know about this and/or find the menu item. There are prompts though in regular Safari if users go to a website that is already installed (to “Open” the “app”).

Overall, apps installed this way look pretty nicely integrated into the OS. But I also agree with Thomas’ wishlist, all of which seem like they would make things much better still.


How to Translate a WordPress Plugin in Your Language

Are you looking for a way to translate a WordPress plugin into your language?

By translating a WordPress plugin into another language, you will make it accessible to a broader audience. This allows users from different countries to use the plugin in their native languages.

In this article, we will show you how to easily translate a WordPress plugin into your language.


Why Translate WordPress Plugins?

By default, WordPress is available in many languages and can be used to easily create a multilingual website using a plugin.

Similarly, most of the top WordPress plugins are also translation-ready. All you have to do is ask the plugin author if you can help by contributing translations in other languages.

By translating the plugin, you can increase its reach and create a larger user base. This can lead to more installs, feedback, and exposure for the plugin.

It can also help you establish yourself in the WordPress community and provide you with new networking opportunities with other developers, translators, and businesses in the industry.

You can even add the translation to your portfolio and demonstrate your skills and contributions to the WordPress community.

That being said, let’s take a look at how to easily translate WordPress plugins in your language. We will cover a few different methods in this post, and you can use the quick links below to jump to the method you want to use:

Method 1: Translate a WordPress Plugin Into Your Language for Everyone

If you want to translate a WordPress plugin in a way that helps other people use the plugin in their languages, then this method is for you.

WordPress.org currently hosts a web-based translation tool that allows anyone to contribute translations for plugins within the WordPress repository.

First, you will need to visit a plugin’s page on the WordPress.org website. Once you are there, just switch to the ‘Development’ tab at the top.

Here, you will see a link asking you to help translate the plugin into your language.

You can simply click on it to start contributing to the plugin translation.


However, if the link isn’t available, then you can visit the Translating WordPress website.

Once there, you will see a list of languages on the screen. From here, find your language and simply click the ‘Contribute Translation’ button under it.


This will take you to a new screen, where you need to switch to the ‘Plugins’ tab.

After that, search for the plugin you want to translate using the search field and then click the ‘Translate Project’ button under it.


This will direct you to the plugin translation page, where you must select the ‘Stable (latest release)’ sub-project from the left column.

If you want to translate the plugin’s development or readme files, then you can choose those sub-projects from the list instead.


Finally, you will be taken to a new page where you will see the original strings in one column and the translations in another.

Keep in mind that you will need to be logged in to your WordPress.org account to contribute translations.

From here, just click on the ‘Details’ link in the right column to open up the string you want to translate.


Once you have done that, a text field will open where you can add a translation for the original string.

Once you are done, simply click the ‘Save’ button to submit your translations.

Method 2: Translate a WordPress Plugin for Your Own Website

If you only want to translate a WordPress plugin for your own website, then this method is for you.

First, you will need to install and activate the Loco Translate plugin. For detailed instructions, please see our beginner’s guide on how to install a WordPress plugin.

Upon activation, head over to the Loco Translate » Plugins page from the WordPress admin sidebar.

Here, you will see a list of plugins that are currently installed on your website. Just click on the plugin you want to translate.


This will take you to a new screen, where you will see a list of languages available for the plugin, along with the translation progress status for each language.

If the language you want to translate the plugin into is listed there, then simply click on the ‘Edit’ link under it.

If not, then you need to click the ‘New language’ button at the top.


This will direct you to a new page where you can start by selecting a language.

From here, you can pick the ‘WordPress language’ option and then choose your language from the dropdown menu under it.

This option will automatically load the language file for users who set their WordPress admin area to that language.


If you don’t want to use a WordPress language, then you can select the ‘Custom Language’ option.

Next, you have to choose where you want to store the translation files. By default, Loco Translate will recommend saving the translation files in its own folder.

However, you can easily change that and save the files in the WordPress languages folder or the plugin's own languages folder instead.

Once you have done that, just click the ‘Start translating’ button to continue.


This will take you to a new screen, where you will see a text source section along with a translation field.

You can now start by adding a translation for the source string and then select the next string to translate.

Once you are done, don’t forget to click the ‘Save’ button at the top to store your settings.


Method 3: Translate a WordPress Plugin on Your Computer

If you want to translate a WordPress plugin on your computer using gettext translation apps, then this method is for you.

Keep in mind that you can also submit these translations to plugin authors so that they can include them in their plugins.

First, you need to download the plugin you want to translate to your computer. Next, double-click the plugin zip file to extract it.

Once you have done that, you need to open the plugin’s folder and then find and click on the ‘languages’ folder.


You should find a .pot file inside this folder. This is the translation template file that you will need to translate the plugin.

If the plugin doesn’t have a .pot file or a languages folder, then it is most likely not translation-ready.

In that case, you can contact the plugin author and ask if they have any plans for their plugin translation. For more details, please see our guide on how to ask for WordPress support and get it.

Once you have the .pot file, you are ready to translate the plugin into any language.


Next, you need to download and install the Poedit app, a free translation app for Mac and Windows, on your computer.

After you have the app installed, go ahead and open it up. This will launch the Poedit home screen, where you must click the ‘Create New’ option.


You will now be directed to your computer’s file manager. From here, simply find and select the .pot file for the plugin that you want to translate.

Once you have done that, Poedit will ask you to choose a language for translation from the dropdown menu.

After that, click the ‘OK’ button to continue forward.


Poedit will now show the translation interface, where you will see the list of strings available.

All you have to do is click on a string to select it and provide a translation in the ‘Translation’ field.


Once you are done translating the plugin, go to File » Save from the menu bar at the top and name your file after the language name and country code.

For example, if you are saving a French translation for the plugin, then you should save the file as ‘fr_FR’ for French and France.


Poedit will save your translation as .po and .mo files.

Now, all you need to do is place these files in your plugin’s languages folder to start using the translated plugin on your website.

We hope this article helped you learn how to translate a WordPress plugin easily. You may also want to see our beginner’s guide on how to translate your WooCommerce store and our top picks for the best WordPress translation plugins.



Improving Inventory Management Using Machine Learning and Artificial Intelligence

In today's digital age, managing inventory efficiently and accurately is a challenge that many businesses face. The use of Artificial Intelligence (AI) can greatly enhance the effectiveness of inventory management systems, helping to forecast demand, optimize stock levels, and reduce waste. Let's delve into the details and illustrate with practical examples.

AI has the ability to analyze large amounts of data quickly and accurately. In inventory management, this translates into capabilities like predicting product demand, identifying patterns in sales, detecting anomalies, and making recommendations for restocking. Here's how you might use AI to accomplish these tasks:
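For demand prediction, for example, a deliberately simple stand-in for a trained model is a moving-average forecast; the Python sketch below turns it into a restocking recommendation (all of the numbers are hypothetical):

from statistics import mean

weekly_sales = [42, 38, 51, 47, 55, 49]  # units sold per week (hypothetical)

def forecast_demand(history, window=4):
    """Naive forecast: the average of the last `window` weeks."""
    return mean(history[-window:])

def reorder_quantity(history, stock_on_hand, lead_time_weeks=2, safety_stock=20):
    """Recommend how many units to order to cover demand over the lead time."""
    expected = forecast_demand(history) * lead_time_weeks
    return max(0, round(expected + safety_stock - stock_on_hand))

print(f"Forecast demand/week: {forecast_demand(weekly_sales):.1f}")
print(f"Suggested reorder: {reorder_quantity(weekly_sales, stock_on_hand=60)}")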

Tips To Keep Track of Code and Infrastructure Security Risks

Nowadays, most people take it as a given that the software we use daily is secure, yet that is not really representative of the reality of the software industry. A lot of the software on the market today has been written with the priority of getting to production as soon as possible and without much consideration for security. This neglect of code and infrastructure security risks poses a significant threat. A single security vulnerability can lead to a wide variety of problems, including data breaches, financial losses, legal concerns, and a long list of other harms to customers and companies alike.

In this article, we will go through potential security vulnerabilities that can be found in code and in infrastructure. By understanding these risks, we can better address the challenges of maintaining secure software systems. We will also explore some metrics that are useful for tracking potential security vulnerabilities and mitigating them effectively.

Continuous Integration for iOS and macOS

This is an article from DZone's 2023 Development at Scale Trend Report.


The no-code approach to continuous integration (CI) on mobile projects works reasonably well when teams start with one or two developers, a small project, and a cloud service. Over time, as a team grows and a project becomes more complex, it is natural to transition to self-hosted runners for faster feedback, more reliable tests, and a quicker path to code in production. This is the low-code approach to automation, as it has evolved.

Transforming Text Messaging With AI: An In-Depth Exploration of Natural Language Processing Techniques

In today's fast-paced world, text messaging has become an integral part of our daily communication. With billions of messages exchanged every day, the need for more efficient, engaging, and personalized messaging experiences has grown exponentially. Thanks to the advancements in Artificial Intelligence (AI) and Machine Learning (ML), we are witnessing a transformative shift in the way text messaging platforms operate. This article delves into the deep technical aspects of how Natural Language Processing (NLP) techniques are at the forefront of this transformation, enhancing the capabilities of text messaging and revolutionizing the way we communicate.

Understanding Natural Language Processing

At the core of the AI revolution in text messaging lies Natural Language Processing. NLP is a subfield of AI that focuses on enabling machines to understand, interpret, and generate human language. Its applications in text messaging encompass a wide range of tasks, such as sentiment analysis, part-of-speech tagging, named entity recognition, and more. NLP algorithms process unstructured text data and extract meaningful information, paving the way for more intelligent and context-aware conversations. 
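As a toy illustration of one of those tasks, sentiment analysis, here is a tiny lexicon-based scorer in Python; production systems rely on trained models rather than hand-written word lists like this one:

# Toy sentiment analysis: score a message by counting lexicon hits.
POSITIVE = {"great", "love", "thanks", "awesome", "happy"}
NEGATIVE = {"bad", "hate", "late", "angry", "broken"}

def sentiment(message: str) -> str:
    words = {w.strip(".,!?").lower() for w in message.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Thanks, the delivery was great!"))          # positive
print(sentiment("My order is late and the app is broken."))  # negative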

Is It Okay To Stop Running Your Tests After the First Failure?

Running tests is a necessary evil. I have never heard two or more engineers at the water cooler talking about the joys of test execution. Don't get me wrong, tests are great; you should definitely have plenty of them in your project. It is just not a pleasant experience to wait for their execution, and if they start to fail, that is even worse. Someone could probably become very popular if they could reliably tell which tests would fail before executing them. As for the rest of us, I think we should keep running our tests.

Can We Somehow Reduce the Execution Time at Least?

We can at least try! Let's start by understanding the problem better and look at solutions later; one quick lever is sketched below.
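That quick lever is fail-fast execution, which ties into this article's title question: stop the run at the first failure so a broken build is reported sooner. A minimal sketch, assuming a pytest-based Python suite (the tests/ path is a hypothetical example):

import sys
import pytest

# Fail-fast test run: equivalent to `pytest -x` or `pytest --maxfail=1`
# on the command line.
if __name__ == "__main__":
    # --maxfail=1 aborts the session after the first failing test;
    # -q keeps the output short.
    sys.exit(pytest.main(["--maxfail=1", "-q", "tests/"]))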

Data Warehouse Using Azure

Businesses in the modern, data-driven economy rely heavily on data to make wise decisions. A data warehouse is an essential part of data architecture because it offers a centralized location for storing, managing, and analyzing massive amounts of data from many sources. Microsoft Azure provides a robust and scalable platform for developing and deploying data warehouses. In this step-by-step guide, we will walk you through creating a data warehouse using Azure services, with the help of real-world examples.

1. Requirements

It's important to have a clear understanding of your data warehouse requirements. Identify the different data sources, the volume of data, the types of data, and the reporting and analytics needs. Connect with stakeholders to create a solid foundation for your data warehouse project.

Recreating YouTube’s Ambient Mode Glow Effect

I noticed a charming effect on YouTube’s video player while using its dark theme some time ago. The background around the video would change as the video played, creating a lush glow around the video player, making an otherwise bland background a lot more interesting.

This effect is called Ambient Mode. The feature was released sometime in 2022, and YouTube describes it like this:

“Ambient mode uses a lighting effect to make watching videos in the Dark theme more immersive by casting gentle colors from the video into your screen’s background.”
— YouTube

It is an incredibly subtle effect, especially when the video’s colors are dark and have less contrast against the dark theme’s background.

Curiosity hit me, and I set out to replicate the effect on my own. After digging around YouTube’s convoluted DOM tree and source code in DevTools, I hit an obstacle: all the magic was hidden behind the HTML <canvas> element and bundles of mangled and minified JavaScript code.

Despite having very little to go on, I decided to reverse-engineer the code and share my process for creating an ambient glow around the videos. I prefer to keep things simple and accessible, so this article won’t involve complicated color sampling algorithms, although we will utilize them via different methods.

Before we start writing code, I think it’s a good idea to revisit the HTML Canvas element and see why and how it is used for this little effect.

HTML Canvas

The HTML <canvas> element is a container element on which we can draw graphics with JavaScript using its own Canvas API and WebGL API. Out of the box, a <canvas> is empty — a blank canvas, if you will — and the aforementioned Canvas and WebGL APIs are used to fill the <canvas> with content.

HTML <canvas> is not limited to presentation; we can also make interactive graphics with it that respond to standard mouse and keyboard events.

But SVG can also do most of that stuff, right? That’s true, but <canvas> is more performant than SVG because it doesn’t require any additional DOM nodes for drawing paths and shapes the way SVG does. Also, <canvas> is easy to update, which makes it ideal for more complex and performance-heavy use cases, like YouTube’s Ambient Mode.

As you might expect with many HTML elements, <canvas> accepts attributes. For example, we can give our drawing space a width and height:

<canvas width="10" height="6" id="js-canvas"></canvas>

Notice that <canvas> is not a self-closing tag like <img>. We can add content between the opening and closing tags, which is rendered only when the browser cannot render the canvas. This can also be useful for making the element more accessible, which we’ll touch on later.

Returning to the width and height attributes, they define the <canvas>’s coordinate system. Interestingly, we can apply a responsive width using relative units in CSS, but the <canvas> still respects the set coordinate system. We are working with pixel graphics here, so stretching a smaller canvas in a wider container results in a blurry and pixelated image.

The downside of <canvas> is its accessibility. All of the content updates happen in JavaScript in the background as the DOM is not updated, so we need to put effort into making it accessible ourselves. One approach (of many) is to create a Fallback DOM by placing standard HTML elements inside the <canvas>, then manually updating them to reflect the current content that is displayed on the canvas.

Numerous canvas frameworks — including ZIM, Konva, and Fabric, to name a few — are designed for complex use cases that can simplify the process with a plethora of abstractions and utilities. ZIM’s framework has accessibility features built into its interactive components, which makes developing accessible <canvas>-based experiences a bit easier.

For this example, we’ll use the Canvas API. We will also use the element for decorative purposes (i.e., it doesn’t introduce any new content), so we won’t have to worry about making it accessible; instead, we can safely hide the <canvas> from assistive devices.

That said, we will still need to disable — or minimize — the effect for those who have enabled reduced motion settings at the system or browser level.

requestAnimationFrame

The <canvas> element can handle the rendering part of the problem, but we need to somehow keep the <canvas> in sync with the playing <video> and make sure that the <canvas> updates with each video frame. We’ll also need to stop the sync if the video is paused or has ended.

We could use setInterval in JavaScript and rig it to run at 60fps to match the video’s frame rate, but that approach comes with some problems and caveats. Luckily, there is a better way of handling a function that must be called that often.

That is where the requestAnimationFrame method comes in. It instructs the browser to run a function before the next repaint. That function runs asynchronously and returns a number that represents the request ID. We can then use the ID with the cancelAnimationFrame function to instruct the browser to stop running the previously scheduled function.

let requestId;

const loopStart = () => {
  /* ... */

  /* Initialize the infinite loop and keep track of the requestId */
  requestId = window.requestAnimationFrame(loopStart);
};

const loopCancel = () => {
  window.cancelAnimationFrame(requestId);
  requestId = undefined;
};

Now that we have all our bases covered by learning how to keep our update loop and rendering performant, we can start working on the Ambient Mode effect!

The Approach

Let’s briefly outline the steps we’ll take to create this effect.

First, we must render the displayed video frame on a canvas and keep everything in sync. We’ll render the frame onto a smaller canvas (resulting in a pixelated image). When an image is downscaled, the important and most-dominant parts of an image are preserved at the cost of losing small details. By reducing the image to a low resolution, we’re reducing it to the most dominant colors and details, effectively doing something similar to color sampling, albeit not as accurately.

Next, we’ll blur the canvas, which blends the pixelated colors. We will place the canvas behind the video using CSS absolute positioning.

And finally, we’ll apply additional CSS to make the glow effect a bit more subtle and as close to YouTube’s effect as possible.

HTML Markup

First, let’s start by setting up the markup. We’ll need to wrap the <video> and <canvas> elements in a parent container because that allows us to contain the absolute positioning we will be using to position the <canvas> behind the <video>. But more on that in a moment.

Next, we will set a fixed width and height on the <canvas>, although the element will remain responsive. By setting the width and height attributes, we define the coordinate space in CSS pixels. The video’s frame is 1920×720, so we will draw a 10×6-pixel image on the canvas. As we’ve seen in the previous examples, we’ll get a pixelated image with the dominant colors somewhat preserved.

<section class="wrapper">
  <video controls muted class="video" id="js-video" src="video.mp4"></video>
  <canvas width="10" height="6" aria-hidden="true" class="canvas" id="js-canvas"></canvas>
</section>

Syncing <canvas> And <video>

First, let’s start by setting up our variables. We need the <canvas>’s rendering context to draw on it, so saving it as a variable is useful, and we can do that by calling the canvas’s getContext method. We’ll also use a variable called step to keep track of the request ID of the requestAnimationFrame method.

const video = document.getElementById("js-video");
const canvas = document.getElementById("js-canvas");
const ctx = canvas.getContext("2d");

let step; // Keep track of requestAnimationFrame id

Next, we’ll create the drawing and update loop functions. We can draw the current video frame on the <canvas> by passing the <video> element to the drawImage function, along with four values that define where the image lands in the <canvas> coordinate system: the x and y coordinates of its top-left corner and its width and height. That coordinate system, if you remember, is mapped to the width and height attributes in the markup. It’s that simple!

const draw = () => {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
};

Now, all we need to do is create the loop that calls the drawImage function while the video is playing, as well as a function that cancels the loop.

const drawLoop = () => {
  draw();
  step = window.requestAnimationFrame(drawLoop);
};

const drawPause = () => {
  window.cancelAnimationFrame(step);
  step = undefined;
};

And finally, we need to create two main functions that set up and clear event listeners on page load and unload, respectively. These are all of the video events we need to cover:

  • loadeddata: This fires when the first frame of the video loads. In this case, we only need to draw the current frame onto the canvas.
  • seeked: This fires when the video finishes seeking and is ready to play (i.e., the frame has been updated). In this case, we only need to draw the current frame onto the canvas.
  • play: This fires when the video starts playing. We need to start the loop for this event.
  • pause: This fires when the video is paused. We need to stop the loop for this event.
  • ended: This fires when the video reaches its end and stops playing. We need to stop the loop for this event.

const init = () => {
  video.addEventListener("loadeddata", draw, false);
  video.addEventListener("seeked", draw, false);
  video.addEventListener("play", drawLoop, false);
  video.addEventListener("pause", drawPause, false);
  video.addEventListener("ended", drawPause, false);
};

const cleanup = () => {
  video.removeEventListener("loadeddata", draw);
  video.removeEventListener("seeked", draw);
  video.removeEventListener("play", drawLoop);
  video.removeEventListener("pause", drawPause);
  video.removeEventListener("ended", drawPause);
};

window.addEventListener("load", init);
window.addEventListener("unload", cleanup);

Let’s check out what we’ve achieved so far with the variables, functions, and event listeners we have configured.

Creating A Reusable Class

Let’s make this code reusable by converting it to an ES6 class so that we can create a new instance for any <video> and <canvas> pairing.

class VideoWithBackground {
  video;
  canvas;
  step;
  ctx;

  constructor(videoId, canvasId) {
    this.video = document.getElementById(videoId);
    this.canvas = document.getElementById(canvasId);

    window.addEventListener("load", this.init, false);
    window.addEventListener("unload", this.cleanup, false);
  }

  draw = () => {
    this.ctx.drawImage(this.video, 0, 0, this.canvas.width, this.canvas.height);
  };

  drawLoop = () => {
    this.draw();
    this.step = window.requestAnimationFrame(this.drawLoop);
  };

  drawPause = () => {
    window.cancelAnimationFrame(this.step);
    this.step = undefined;
  };

  init = () => {
    this.ctx = this.canvas.getContext("2d");
    this.ctx.filter = "blur(1px)";

    this.video.addEventListener("loadeddata", this.draw, false);
    this.video.addEventListener("seeked", this.draw, false);
    this.video.addEventListener("play", this.drawLoop, false);
    this.video.addEventListener("pause", this.drawPause, false);
    this.video.addEventListener("ended", this.drawPause, false);
  };

  cleanup = () => {
    this.video.removeEventListener("loadeddata", this.draw);
    this.video.removeEventListener("seeked", this.draw);
    this.video.removeEventListener("play", this.drawLoop);
    this.video.removeEventListener("pause", this.drawPause);
    this.video.removeEventListener("ended", this.drawPause);
  };
}

Now, we can create a new instance by passing the id values for the <video> and <canvas> elements into a VideoWithBackground() class:

const el = new VideoWithBackground("js-video", "js-canvas");

Respecting User Preferences

Earlier, we briefly discussed that we would need to disable or minimize the effect’s motion for users who prefer reduced motion. We have to consider that for decorative flourishes like this.

The easy way out? We can detect the user’s motion preferences with the prefers-reduced-motion media query and completely hide the decorative canvas if reduced motion is the preference.

@media (prefers-reduced-motion: reduce) {
  .canvas {
    display: none !important;
  }
}

Another way we respect reduced motion preferences is to use JavaScript’s matchMedia function to detect the user’s preference and prevent the necessary event listeners from registering.

constructor(videoId, canvasId) {
  const mediaQuery = window.matchMedia("(prefers-reduced-motion: reduce)");

  if (!mediaQuery.matches) {
    this.video = document.getElementById(videoId);
    this.canvas = document.getElementById(canvasId);

    window.addEventListener("load", this.init, false);
    window.addEventListener("unload", this.cleanup, false);
  }
}

Final Demo

We’ve created a reusable ES6 class that we can use to create new instances. Feel free to check out and play around with the completed demo.

See the Pen Youtube video glow effect - dominant color [forked] by Adrian Bece.

Creating A React Component

Let’s migrate this code to the React library, as there are key differences in the implementation that are worth knowing if you plan on using this effect in a React project.

Creating A Custom Hook

Let’s start by creating a custom React hook. Instead of using the getElementById function for selecting DOM elements, we can create refs with the useRef hook and assign them to the <canvas> and <video> elements.

We’ll also reach for the useEffect hook to initialize and clear the event listeners to ensure they only run once all of the necessary elements have mounted.

Our custom hook must return the ref values we need to attach to the <canvas> and <video> elements, respectively.

import { useRef, useEffect } from "react";

export const useVideoBackground = () => {
  const mediaQuery = window.matchMedia("(prefers-reduced-motion: reduce)");
  const canvasRef = useRef();
  const videoRef = useRef();

  const init = () => {
    const video = videoRef.current;
    const canvas = canvasRef.current;
    let step;

    if (mediaQuery.matches) {
      return;
    }

    const ctx = canvas.getContext("2d");

    ctx.filter = "blur(1px)";

    const draw = () => {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    };

    const drawLoop = () => {
      draw();
      step = window.requestAnimationFrame(drawLoop);
    };

    const drawPause = () => {
      window.cancelAnimationFrame(step);
      step = undefined;
    };

    // Initialize
    video.addEventListener("loadeddata", draw, false);
    video.addEventListener("seeked", draw, false);
    video.addEventListener("play", drawLoop, false);
    video.addEventListener("pause", drawPause, false);
    video.addEventListener("ended", drawPause, false);

    // Run cleanup on unmount event
    return () => {
      video.removeEventListener("loadeddata", draw);
      video.removeEventListener("seeked", draw);
      video.removeEventListener("play", drawLoop);
      video.removeEventListener("pause", drawPause);
      video.removeEventListener("ended", drawPause);
    };
  };

  useEffect(init, []);

  return {
    canvasRef,
    videoRef,
  };
};

Defining The Component

We’ll use similar markup for the actual component, then call our custom hook and attach the ref values to their respective elements. We’ll make the component configurable so we can pass any <video> element attribute as a prop, like src, for example.

import React from "react";
import { useVideoBackground } from "../hooks/useVideoBackground";

import "./VideoWithBackground.css";

export const VideoWithBackground = (props) => {
  const { videoRef, canvasRef } = useVideoBackground();

  return (
    <section className="wrapper">
      <video ref={ videoRef } controls className="video" { ...props } />
      <canvas width="10" height="6" aria-hidden="true" className="canvas" ref={ canvasRef } />
    </section>
  );
};

All that’s left to do is to call the component and pass the video URL to it as a prop.

import { VideoWithBackground } from "../components/VideoWithBackground";

function App() {
  return (
    <VideoWithBackground src="http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4" />
  );
}

export default App;

Conclusion

We combined the HTML <canvas> element and the corresponding Canvas API with JavaScript’s requestAnimationFrame method to recreate the same charming — but performance-intensive — visual effect behind YouTube’s Ambient Mode feature. We found a way to draw the current <video> frame on the <canvas>, keep the two elements in sync, and position them so that the blurred <canvas> sits properly behind the <video>.

We covered a few other considerations in the process. For example, we established the <canvas> as a decorative image that can be removed or hidden when a user’s system is set to a reduced motion preference. Further, we considered the maintainability of our work by establishing it as a reusable ES6 class that can be used to add more instances on a page. Lastly, we converted the effect into a component that can be used in a React project.

Feel free to play around with the finished demo. I encourage you to continue building on top of it and share your results with me in the comments, or, similarly, you can reach out to me on Twitter. I’d love to hear your thoughts and see what you can make out of it!


Software Project Management Methodologies

Software development projects are complex endeavors that require careful planning, execution, and monitoring to ensure successful outcomes. Software project management methodologies are sets of practices, techniques, and frameworks that guide the planning, execution, and control of software projects. They provide a systematic approach to software development, from planning to deployment, to ensure that projects are completed on time, within budget, and meet stakeholders' quality requirements.

Several software project management methodologies are widely used in the industry. Each methodology has its own unique characteristics, advantages, and disadvantages and is suitable for different types of projects.