6 Steps SREs Should Take to Prepare for Black Friday and Cyber Monday 2021

Being an SRE is a tough (if rewarding) job on any day of the year. But it's especially challenging on Black Friday and Cyber Monday, the post-Thanksgiving event that has become the biggest online shopping day of the year. We'll refer to it simply as Cyber Monday throughout this guide.

And for 2021, Cyber Monday promises to include not just the standard challenges associated with massive spikes in traffic but also cybersecurity attacks, which the FBI expects to surge in frequency this holiday season. And although security may not be SREs' main job, they'll be expected to assist security and DevSecOps teams in confronting the reliability threats that hackers pose.

Building a Unicorn Engineering Org at GRIN

How do you build an engineering organization that can drive your company to a billion-dollar valuation and unicorn status?

And how do you do it in an emerging and highly competitive product category like influencer/creator management? Brent Bartlett, VP of Engineering at GRIN, joins the podcast this week to share his blueprint for success and his path to leadership.

Know These Risks Before You Dive Into WebRTC

WebRTC is changing the way we live by establishing new norms in communication. WebRTC makes this possible by supporting real-time browser-to-browser communication without additional plugins. It provides peer-to-peer (P2P) file sharing and streaming of P2P audio and video calls. And all these are done by incorporating real-time communication directly in the end user’s browser. 

Security Measures Implemented by WebRTC

So, now that this technology is selling like hotcakes, you might be tempted to dig into it. However, it would be advisable to first understand the risks and threats that come with it. The good news is that most of these risks can be mitigated, and this article will help you do just that.

Network Admission Control

The NAC solution implements security controls on users requesting network access in order to provide end-to-end security.

What Are the Capabilities of NAC?

NAC provides the following capabilities:

The 10 Commandments for Performing a Data Science Project

In designing a data science project, establishing what we, or the users we are building models for, want to achieve is vital, but this understanding only provides a blueprint for success. To truly deliver against a well-established brief, data science teams must follow best practices in executing the project. To help establish what that might mean, I have come up with ten points to provide a framework that can be applied to any data science project.

1. Understand the Problem 

The most fundamental part of solving any problem is knowing exactly what problem you are solving. Make sure that you understand what you are trying to predict, any constraints, and what the ultimate purpose for this project will be. Ask questions early on and validate your understanding with peers, domain experts, and end-users. If you find that answers are aligning with your understanding, you know that you are on the right path. 

How to Use Minimal Hybrid to Quickly Migrate Exchange Mailboxes to Office 365

With the increasing popularity of cloud-based services, more and more organizations and businesses are shifting their on-premises Exchange to Office 365 or Microsoft 365. If you are planning to migrate your Exchange on-premises Server to Microsoft 365 or Office 365, you have several options, such as 

  • Cutover Migration
  • Staged Migration
  • Hybrid Migration
  • IMAP-Based Migration
  • Office 365 Import Service
  • Third-Party Software 

You can choose the Office 365 migration option based on the on-premises Exchange Server version your organization is running on. 

How (and Why) to Move from Spark on YARN to Kubernetes

Apache Spark is among the most usable open-source distributed computing frameworks because it allows data engineers to parallelize the processing of large amounts of data across a cluster of machines.

When it comes to data operations, Spark provides a tremendous advantage because it aligns with the things that make DataOps valuable. It is optimized for machine learning and AI workloads, which rely on batch processing (in real time and at scale), and it is adept at operating within different types of environments.

When is it “Right” to Reach for contain and will-change in CSS?

I’ve got some blind spots in CSS-related performance things. One example is the will-change property. It’s a good name. You’re telling the browser some particular property (or the scroll-position or content) uh, will, change:

.el {
  will-change: opacity;
}
.el.additional-hard-to-know-state {
  opacity: 0;
}

But is that important to do? I don’t know. The point, as I understand it, is that it will kick .el into processing/rendering/painting on the GPU rather than CPU, which is a speed boost. Sort of like the classic transform: translate3d(0, 0, 0); hack. In the exact case above, it doesn’t seem to my brain like it would matter. I have in my head that opacity is one of the “cheapest” things to animate, so there is no particular benefit to will-change. Or maybe it matters noticeably on some browsers or devices, but not others? This is front-end development after all.

There was a spurt of articles about will-change around 2014/2015 that warn about weird behavior, like unexpected changes in stacking contexts and being careful not to use it “too much.” There was also advice spreading around that you should never use this property directly in CSS stylesheets; you should only apply it in JavaScript before the state change, then remove it after you no longer need it.
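
For what it's worth, here is a rough sketch of what that JavaScript-managed approach might look like, assuming a hypothetical .el element whose opacity transitions when the pointer enters it (the selector and events are placeholders, not a recommendation):

const el = document.querySelector<HTMLElement>(".el");

if (el) {
  // Hint the browser just before the change is likely to happen…
  el.addEventListener("mouseenter", () => {
    el.style.willChange = "opacity";
  });

  // …and remove the hint once the transition finishes, so the browser can
  // release whatever layers or memory it set aside for the optimization.
  el.addEventListener("transitionend", () => {
    el.style.willChange = "auto";
  });
}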

I have no idea if any of those things are still true. Sorry! I’d love to read a 2022 deep dive on will-change. We’re capable of that kind of testing, so I’ll put it in the idea pile. But my point is that there are things in CSS that are designed explicitly for performance that are confusing to me, and I wish I had a more full understanding of them because they seem like Very Big Deals.

Take “How I made Google’s data grid scroll 10x faster with one line of CSS” by Johan Isaksson. A 10✕ scrolling performance improvement is a massive deal! Know how they fixed it?

[…] as I was browsing the “Top linking sites” page I noticed major scroll lag. This happens when choosing to display a larger dataset (500 rows) instead of the default 10 results.

[…]

So, what did I do? I simply added a single line of CSS to the <table> on the Elements panel, specifying that it will not affect the layout or style of other elements on the page

table {
  contain: strict; 
}

The contain property is another that I sort of get, but I’d still call it a blind spot because my brain doesn’t just automatically think of when I could (or should?) use it. But that’s a bummer, because clearly I’m not building interfaces as performant as I could be if I did understand contain better.

There’s another! The content-visibility property. The closest I came to understanding it was after watching Jake and Surma’s video on it where they used it (along with contain-intrinsic-size and some odd magic numbers) to dramatically speed up a long page. What hasn’t stuck with me is when I should use it on my pages.
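
As a rough sketch of the pattern from that video (the class name and the size estimate are placeholders, not recommendations), it looks something like this:

.below-the-fold-section {
  /* Let the browser skip rendering work until the section nears the viewport. */
  content-visibility: auto;
  /* Reserve an estimated size so the scrollbar doesn't jump around while the
     skipped content has no rendered height yet. */
  contain-intrinsic-size: 1000px;
}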

Are all three of these features “there if you need them” features? Is it OK to ignore them until you notice poor performance on something (like a massive page) and then reach for them to attempt to solve it? Almost “don’t use these until you need them,” otherwise you’re in premature optimization territory. The trouble with that is the classic situation where you won’t actually notice the poor performance unless you are very actively testing on the lowest-specced devices out there.

Or are these features “this is what modern CSS is and you should be thinking of them like you think of padding” territory? I kind of suspect it’s more like that. If you’re building an element you know won’t change in certain ways, it’s probably worth “containing” it. If you’re building an element you know will change in certain ways, it’s probably worth providing that info to browsers. If you’re building a part of page you know is always below the fold, it’s probably worth avoiding the paint on it. But personally, I just don’t have enough of this fully grokked to offer any solid advice.



How to Build a 3D Product Model In Just 5 Minutes

Displaying products with 3D models is something too good for an e-commerce app to ignore. With these fancy visuals, an app can give users a fresh first impression of its products!

The 3D model plays an important role in boosting user conversion. It allows users to carefully view a product from every angle before they make a purchase. Together with AR technology, which gives users an insight into how the product will look in reality, the 3D model brings a fresher online shopping experience that can rival offline shopping.

Video on Demand (VOD) Processing Using AWS

The demand for video is growing, and more and more businesses are finding infinite possibilities in this sector. We're not just referring to entertainment or instructional materials here. Content providers, small businesses, and corporate brands are all benefiting from video on demand. Brands can build stronger relationships with their customers by offering them access to the information they want, whenever and however they want it. Video has thus become one of the most convenient ways to share data with software users.

VOD (video on demand) refers to any content delivery method that lets users choose when, where, and how they interact with media. This can be accomplished either by streaming from an internet source or by the user downloading the video to a personal device for later viewing. This is in contrast to traditional broadcasting, where the viewer can only watch a program on a device with a satellite or cable connection at a scheduled time.

From Naked Objects to Naked Functions

Functional programming (FP) is, today, roughly where object-oriented programming (OOP) was in the late 1990s. Pure FP languages are gaining popularity, and mainstream languages increasingly support FP idioms. There are some application domains where FP has already become the dominant paradigm – scientific computing, big data, some fin-tech – but there are also substantial application domains where FP has made little impact, one such being transactional enterprise applications built on relational databases. Granted, this is no longer considered to be the ‘hot’ end of systems development, but it still accounts for a huge proportion of commercial programming. Developers working on such systems today might use functional idioms where they can, but it is rare to see one built using FP as the core design ethic.

This situation might be attributed to traditional conservatism in that sector, but I believe there is a bigger issue, which derives from the central conundrum of FP, elegantly articulated by Simon Peyton Jones (lead on the Glasgow Haskell Compiler):

The Math Behind a Scanning App

In 2012, I made a simple camera app for Android that undoes the natural projection in a picture.

Of course, now we have all sorts of scanning apps on every phone, but at the time this was a novelty. Let's dissect the application and see what it's made of.

Bootstrap carousel

Hi guys, I'm using the Bootstrap carousel and I have a problem with the slide display.
I can't switch from one slide to another.
In the code I have commented out the <div class = "item active">.
This way the video appears as desired and is fully functional.
If I remove the comment, the video still works but is displayed in the lower part of the "container".
In both cases the buttons to scroll the slides do not work.
Could the problem be that I have inserted a video instead of an image?

<div class="carousel-inner">

                    <!--<div class="item active">-->
                        <div class="video-container">
                        <video playsinline autoplay muted loop>
                        <source src= "images/video.mp4" width="400" height="200" type="video/mp4">
                        </video>

                        <!-- Content -->
                        <div class="container"> 
                            <div class="row blurb scrollme animateme" data-when="exit" data-from="0" data-to="1" data-opacity="0" data-translatey="100">
                                <div class="col-md-9">                              
                                    <span class="title"></span>
                                    <h1>Prova video</h1>    
                                    <div class="buttons">                               
                                <a href="" data-vbtype="video" class="venobox btn btn-default">
                                    <i class="material-icons">play_arrow</i>
                                    <span>Guarda il video</span>
                                </a>
                                    </div>
                                </div>
                            </div>
                        </div>
                    </div>

Thanks to those who want to help me

A Handy Little System for Animated Entrances in CSS

I love little touches that make a website feel like more than just a static document. What if web content wouldn’t just “appear” when a page loaded, but instead popped, slid, faded, or spun into place? It might be a stretch to say that movements like this are always useful, though in some cases they can draw attention to certain elements, reinforce which elements are distinct from one another, or even indicate a changed state. So, they’re not totally useless, either.

So, I put together a set of CSS utilities for animating elements as they enter into view. And, yes, this is pure CSS. It not only has a nice variety of animations and variations, but supports staggering those animations as well, almost like a way of creating scenes.

You know, stuff like this:

Which is really just a fancier version of this:

We’ll go over the foundation I used to create the animations first, then get into the little flourishes I added, how to stagger animations, then how to apply them to HTML elements before we also take a look at how to do all of this while respecting a user’s reduced motion preferences.

The basics

The core idea involves adding a simple CSS @keyframes animation that’s applied to anything we want to animate on page load. Let’s make it so that an element fades in, going from opacity: 0 to opacity: 1 in a half second:

.animate {
  animation-duration: 0.5s;
  animation-name: animate-fade;
  animation-delay: 0.5s;
  animation-fill-mode: backwards;
}

@keyframes animate-fade {
  0% { opacity: 0; }
  100% { opacity: 1; }
}

Notice, too, that there’s an animation-delay of a half second in there, allowing the rest of the site a little time to load first. The animation-fill-mode: backwards is there to make sure that our initial animation state is active on page load. Without this, our animated element pops into view before we want it to.

If we’re lazy, we can call it a day and just go with this. But, CSS-Tricks readers aren’t lazy, of course, so let’s look at how we can make this sort of thing even better with a system.

Fancier animations

It’s much more fun to have a variety of animations to work with than just one or two. We don’t even need to create a bunch of new @keyframes to make more animations. It’s simple enough to create new classes where all we change is which frames the animation uses while keeping all the timing the same.

There’s nearly an infinite number of CSS animations out there. (See animate.style for a huge collection.) CSS filters, like blur(), brightness(), and saturate(), and of course CSS transforms, can also be used to create even more variations.

But for now, let’s start with a new animation class that uses a CSS transform to make an element “pop” into place.

.animate.pop {
  animation-duration: 0.5s;
  animation-name: animate-pop;
  animation-timing-function: cubic-bezier(.26, .53, .74, 1.48);
}

@keyframes animate-pop {
  0% {
    opacity: 0;
    transform: scale(0.5, 0.5);
  }

  100% {
    opacity: 1;
    transform: scale(1, 1);
  }
}

I threw in a little cubic-bezier() timing curve, courtesy of Lea Verou’s indispensable cubic-bezier.com for a springy bounce.

Adding delays

We can do better! For example, we can animate elements so that they enter at different times. This creates a stagger that makes for complex-looking motion without a complex amount of code.

This animation on three page elements, using a CSS filter and a CSS transform and staggered by about a tenth of a second each, feels really nice:

All we did there was create a new class for each element that staggers when the elements start animating, using animation-delay values that are just a tenth of a second apart.

.delay-1 { animation-delay: 0.6s; }  
.delay-2 { animation-delay: 0.7s; }
.delay-3 { animation-delay: 0.8s; }

Everything else is exactly the same. And remember that our base delay is 0.5s, so these helper classes count up from there.

Respecting accessibility preferences

Let’s be good web citizens and remove our animations for users who have enabled their reduced motion preference setting:

@media screen and (prefers-reduced-motion: reduce) {
  .animate { animation: none !important; }
}

This way, the animation never loads and elements enter into view like normal. It’s here, though, that it’s worth a reminder that “reduced” motion doesn’t always mean “remove” motion.
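
If you’d rather tone the motion down than switch it off entirely, one possible middle ground (not part of the system below, just a sketch) is to force every entrance back to the simple fade:

@media screen and (prefers-reduced-motion: reduce) {
  .animate {
    /* Keep the gentle fade, drop the movement-heavy variations. */
    animation-name: animate-fade !important;
    animation-timing-function: ease !important;
  }
}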

Applying animations to HTML elements

So far, we’ve looked at a base animation as well as a slightly fancier one that we were able to make even fancier with staggered animation delays that are contained in new classes. We also saw how we can respect user motion preferences at the same time.

Even though there are live demos that show off the concepts, we haven’t actually walked through how to apply our work to HTML. And what’s cool is that we can use this on just about any element, whether it’s a div, span, article, header, section, table, form… you get the idea.

Here’s what we’re going to do. We want to use our animation system on three HTML elements where each element gets three classes. We could hard-code all the animation code to the element itself, but splitting it up gives us a little animation system we can reuse.

  • .animate: This is the base class that contains our core animation declaration and timing.
  • The animation type: We’ll use our “pop” animation from before, but we could use the one that fades in as well. This class is technically optional but is a good way to apply distinct movements.
  • .delay-<number>: As we saw earlier, we can create distinct classes that are used to stagger when the animation starts on each element, making for a neat effect. This class is also optional.

So our animated elements might now look like:

<h2 class="animate pop">One!</h2>
<h2 class="animate pop delay-1">Two!</h2>
<h2 class="animate pop delay-2">Three!</h2>

Let’s count them in!

Conclusion

Check that out: we went from a seemingly basic set of @keyframes and turned it into a full-fledged system for applying interesting animations for elements entering into view.

This is ridiculously fun, of course. But the big takeaway for me is how the examples we looked at form a complete system that can be used to create a baseline, different types of animations, staggered delays, and an approach for respecting user motion preferences. These, to me, are all the ingredients for a flexible system that is easy to use, while giving us a lot with a little and without a bunch of extra cruft.

What we covered could indeed be a full animation library. But, of course, I didn’t stop there and have my entire CSS file of animations in all its glory for you. There are several more types of animations in there, including 15 classes of different delays that can be used for staggering things. I’ve been using these on my own projects, but it’s still an early draft and I’d love feedback on it, so please enjoy and let me know what you think in the comments!

/* ==========================================================================
Animation System by Neale Van Fleet from Rogue Amoeba
========================================================================== */
.animate {
  animation-duration: 0.75s;
  animation-delay: 0.5s;
  animation-name: animate-fade;
  animation-timing-function: cubic-bezier(.26, .53, .74, 1.48);
  animation-fill-mode: backwards;
}

/* Fade In */
.animate.fade {
  animation-name: animate-fade;
  animation-timing-function: ease;
}

@keyframes animate-fade {
  0% { opacity: 0; }
  100% { opacity: 1; }
}

/* Pop In */
.animate.pop { animation-name: animate-pop; }

@keyframes animate-pop {
  0% {
    opacity: 0;
    transform: scale(0.5, 0.5);
  }
  100% {
    opacity: 1;
    transform: scale(1, 1);
  }
}

/* Blur In */
.animate.blur {
  animation-name: animate-blur;
  animation-timing-function: ease;
}

@keyframes animate-blur {
  0% {
    opacity: 0;
    filter: blur(15px);
  }
  100% {
    opacity: 1;
    filter: blur(0px);
  }
}

/* Glow In */
.animate.glow {
  animation-name: animate-glow;
  animation-timing-function: ease;
}

@keyframes animate-glow {
  0% {
    opacity: 0;
    filter: brightness(3) saturate(3);
    transform: scale(0.8, 0.8);
  }
  100% {
    opacity: 1;
    filter: brightness(1) saturate(1);
    transform: scale(1, 1);
  }
}

/* Grow In */
.animate.grow { animation-name: animate-grow; }

@keyframes animate-grow {
  0% {
    opacity: 0;
    transform: scale(1, 0);
    visibility: hidden;
  }
  100% {
    opacity: 1;
    transform: scale(1, 1);
  }
}

/* Splat In */
.animate.splat { animation-name: animate-splat; }

@keyframes animate-splat {
  0% {
    opacity: 0;
    transform: scale(0, 0) rotate(20deg) translate(0, -30px);
  }
  70% {
    opacity: 1;
    transform: scale(1.1, 1.1) rotate(15deg);
  }
  85% {
    opacity: 1;
    transform: scale(1.1, 1.1) rotate(15deg) translate(0, -10px);
  }
  100% {
    opacity: 1;
    transform: scale(1, 1) rotate(0) translate(0, 0);
  }
}

/* Roll In */
.animate.roll { animation-name: animate-roll; }

@keyframes animate-roll {
  0% {
    opacity: 0;
    transform: scale(0, 0) rotate(360deg);
  }
  100% {
    opacity: 1;
    transform: scale(1, 1) rotate(0deg);
  }
}

/* Flip In */
.animate.flip {
  animation-name: animate-flip;
  transform-style: preserve-3d;
  perspective: 1000px;
}

@keyframes animate-flip {
  0% {
    opacity: 0;
    transform: rotateX(-120deg) scale(0.9, 0.9);
  }
  100% {
    opacity: 1;
    transform: rotateX(0deg) scale(1, 1);
  }
}

/* Spin In */
.animate.spin {
  animation-name: animate-spin;
  transform-style: preserve-3d;
  perspective: 1000px;
}

@keyframes animate-spin {
  0% {
    opacity: 0;
    transform: rotateY(-120deg) scale(0.9, 0.9);
  }
  100% {
    opacity: 1;
    transform: rotateY(0deg) scale(1, 1);
  }
}

/* Slide In */
.animate.slide { animation-name: animate-slide; }

@keyframes animate-slide {
  0% {
    opacity: 0;
    transform: translate(0, 20px);
  }
  100% {
    opacity: 1;
    transform: translate(0, 0);
  }
}

/* Drop In */
.animate.drop { 
  animation-name: animate-drop; 
  animation-timing-function: cubic-bezier(.77, .14, .91, 1.25);
}

@keyframes animate-drop {
  0% {
    opacity: 0;
    transform: translate(0, -300px) scale(0.9, 1.1);
  }
  95% {
    opacity: 1;
    transform: translate(0, 0) scale(0.9, 1.1);
  }
  96% {
    opacity: 1;
    transform: translate(10px, 0) scale(1.2, 0.9);
  }
  97% {
    opacity: 1;
    transform: translate(-10px, 0) scale(1.2, 0.9);
  }
  98% {
    opacity: 1;
    transform: translate(5px, 0) scale(1.1, 0.9);
  }
  99% {
    opacity: 1;
    transform: translate(-5px, 0) scale(1.1, 0.9);
  }
  100% {
    opacity: 1;
    transform: translate(0, 0) scale(1, 1);
  }
}

/* Animation Delays */
.delay-1 {
  animation-delay: 0.6s;
}
.delay-2 {
  animation-delay: 0.7s;
}
.delay-3 {
  animation-delay: 0.8s;
}
.delay-4 {
  animation-delay: 0.9s;
}
.delay-5 {
  animation-delay: 1s;
}
.delay-6 {
  animation-delay: 1.1s;
}
.delay-7 {
  animation-delay: 1.2s;
}
.delay-8 {
  animation-delay: 1.3s;
}
.delay-9 {
  animation-delay: 1.4s;
}
.delay-10 {
  animation-delay: 1.5s;
}
.delay-11 {
  animation-delay: 1.6s;
}
.delay-12 {
  animation-delay: 1.7s;
}
.delay-13 {
  animation-delay: 1.8s;
}
.delay-14 {
  animation-delay: 1.9s;
}
.delay-15 {
  animation-delay: 2s;
}

@media screen and (prefers-reduced-motion: reduce) {
  .animate {
    animation: none !important;
  }
}


Is Flutter a Good Choice for Creating iOS Apps?

Recently, Flutter app development has become a new, easy, and productive way to create applications. Lots of teams are considering it as a possible technology for their next project, which is no surprise, since it offers the advantages of a native framework while being cross-platform. In this article, we will focus on how Flutter development is different in general and how Flutter mobile development for iOS works in particular.

Today, the growing number of frameworks offers developers a lot of possibilities. Flutter app development is one of the most recent options available to both Android and iOS engineers. The market is full of well-established technologies as well as new ones, and foundation teams and communities constantly improve them and develop new front-end frameworks that make programmers’ work easier and faster. Suddenly, a new big player has arrived, and its name is Flutter.

How To Maintain A Large Next.js Application

Maintaining a large application is always a difficult task. It might have outdated dependencies which can cause maintainability issues. It can also have tests that are flaky and don’t inspire any confidence. There can also be issues with large JavaScript and CSS bundles that cause the application to deliver a sub-optimal user experience to end users.

However, there are a few ways in which you can make a large code-base easy to maintain. In this article, we will discuss a few of those techniques as well as some of the things I wish I had known earlier to help manage large Next.js applications.

Note: While this article is specific to Next.js, some of the points will also work for a wide variety of front-end applications.

Use TypeScript

TypeScript is a strongly typed programming language which means that it enforces certain strictness while intermixing different types of data. According to StackOverflow Developer Survey 2021, TypeScript is one of the languages that developers want to work with the most.

Using a strongly typed language like TypeScript will help a lot when working with a large codebase. It will help you understand if there is a possibility that your application will break when there is a change. It is not guaranteed that TypeScript will always complain when there is a chance of breakage. However, most of the time, TypeScript will help you eliminate bugs even before you build your application. In certain cases, the build will fail if there are type mismatches in your code, as Next.js checks type definitions at build time.

From the Next.js docs:

“By default, Next.js will do type checking as part of the next build. We recommend using code editor type checking during development.”

Note that next build is the script that creates an optimized production build of your Next.js application. From my personal experience, it helped me a lot when I was trying to update Next.js to version 11 for one of my applications. As part of that update, I also decided to update a few other packages. Because of TypeScript and VSCode, I was able to catch those breaking changes even before I had built the application.
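
As a tiny illustration (the Product type and formatPrice helper here are hypothetical, not from any particular library), a mismatch like the one below would be flagged in the editor and would fail next build before it ever reached production:

// types/product.ts (a hypothetical shape shared across the app)
export interface Product {
  id: string;
  name: string;
  price: number;
}

// lib/format.ts
import type { Product } from "../types/product";

export function formatPrice(product: Product): string {
  // If an update changed `price` to a string, `toFixed` would no longer
  // exist on it and `next build` would fail with a type error here.
  return `$${product.price.toFixed(2)}`;
}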

Use A Mono-Repo Structure Using Lerna Or Nx

Imagine that you are building a component library along with your main Next.js application. You might want to keep the library in a separate repository to add new components, build and release them as a package. This seems clean and works fine when you want to work in the library. But when you want to integrate the library in your Next.js application, the development experience will suffer.

This is because when you integrate the component library with your Next.js application, you might have to go back into the library’s repository, make changes, release the updates, and then install the new version in your Next.js application. Only then will the new changes from the component library start reflecting in the Next.js application. Imagine your whole team doing this multiple times. The amount of time spent on building and releasing the component library separately adds up to a huge chunk.

This problem can be resolved if you use a mono-repo structure where your component library resides with your Next.js application. In this case, you can simply update your component library and it will immediately reflect in your Next.js application. There is no need for a separate build and release of your component library.

You can use a package like next-transpile-modules so that you don’t even need to build your component library before your Next.js application can consume it. However, if you are planning to release your component library as an npm package, you might need to have a build step.
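
For instance, a next.config.js using next-transpile-modules might look roughly like the sketch below; the @acme/ui package name is a made-up placeholder for whatever workspace package your mono-repo contains:

// next.config.js: a sketch, assuming a workspace package named "@acme/ui"
// that lives alongside the app in the mono-repo.
const withTM = require("next-transpile-modules")(["@acme/ui"]);

module.exports = withTM({
  reactStrictMode: true,
});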

For managing a mono-repo, you can use tools like Lerna, Nx, Rush, Turborepo, yarn workspaces, or npm workspaces. I liked using Lerna together with yarn workspaces when I needed to configure my build pipeline. If you prefer something which will automate a bunch of things via CLI, you can take a look at Nx. I feel that all of them are good but solve slightly different problems.

Use Code Generators Like Hygen To Generate Boilerplate Code

When a lot of developers start contributing to a large code-base, there is a good chance that there will be a lot of duplicate code. This happens mainly because there is a need to build a page, component, or utility function which is similar to an already existing one with slight modifications.

You can think of writing unit test cases for your components or utility functions. You might want to copy the boilerplate code as much as possible and then make modifications for the new file. However, this litters your code-base with duplicated code and badly named variables. A proper code-review process can reduce this, but a better way is to automate the generation of the boilerplate code.

Unless you are using Nx, you will need to have a way in which you can automate a lot of code generation. I have used Hygen to generate the boilerplate code for Redux, React components, and utility functions. You can check out the documentation to get started with Hygen. They also have a dedicated section for generating Redux boilerplate. You can also use Redux Toolkit to reduce the boilerplate code necessary for your Redux applications. We will discuss this package next.

Use A Well-Established Pattern Like Redux With Lesser Boilerplate Via Redux Toolkit

Many developers will argue that Redux increases the complexity of the code-base or React Context is much easier to maintain. I think that it depends mostly on the type of application that you are building as well as the expertise of the whole development team. You can choose whatever state management solution your team is most comfortable with, but try to choose one that doesn’t need to have a lot of boilerplate.

In this article, I’m mentioning Redux because it is still the most popular state management solution out there according to npm trends. In the case of Redux, you can reduce a lot of boilerplate code by using Redux Toolkit. This is a very opinionated and powerful library that you can use to simplify your state management. Check out their documentation regarding how to get started with Redux Toolkit.
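
To give a feel for how much boilerplate it removes, here is a minimal sketch of a hypothetical cart slice built with createSlice, which generates the action creators and reducer that would otherwise be written by hand:

import { configureStore, createSlice, PayloadAction } from "@reduxjs/toolkit";

// A hypothetical cart slice: createSlice derives the action creators
// and the reducer from this single definition.
const cartSlice = createSlice({
  name: "cart",
  initialState: { items: [] as string[] },
  reducers: {
    addItem(state, action: PayloadAction<string>) {
      // Immer (built into Redux Toolkit) lets us "mutate" the draft safely.
      state.items.push(action.payload);
    },
    clearCart(state) {
      state.items = [];
    },
  },
});

export const { addItem, clearCart } = cartSlice.actions;

export const store = configureStore({
  reducer: { cart: cartSlice.reducer },
});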

I have used Redux, Zustand, and Redux Toolkit while building Next.js applications. I feel that Zustand is very simple and easy to understand. However, I still use Redux in case I need to build something complex. I haven’t used XState but it is also a popular choice.

Use React Query Or SWR For Fetching Async Data

Most front-end applications will fetch data from a back-end server and render it on the page. In a Next.js application or any JavaScript application, you can fetch data using the Fetch API, Axios, or similar libraries. However, as the application grows, it becomes very difficult to manage this async state of your data. You might create your own abstractions using utility functions or wrappers around Fetch or Axios, but when multiple developers are working on the same application, these utility functions or wrappers will soon become difficult to manage. Your application might also suffer from caching and performance issues.

To resolve these kinds of issues, it is better to use packages like React Query or SWR. These packages provide a default set of configurations out of the box. They handle a lot of things like caching and performance which are difficult to manage on your own. Both of these packages provide some default configuration and options which you can use to customize their behaviors according to the requirements of your application. These packages will fetch and cache async data from your back-end API endpoints and make your application state much more maintainable.
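
As a small sketch of what this looks like with React Query’s v3-style API (the /api/products endpoint and Product type are hypothetical):

import { useQuery } from "react-query";

interface Product {
  id: string;
  name: string;
}

// A plain fetcher; React Query handles caching, deduplication,
// and background re-fetching around it.
async function fetchProducts(): Promise<Product[]> {
  const res = await fetch("/api/products");
  if (!res.ok) {
    throw new Error("Failed to fetch products");
  }
  return res.json();
}

export function useProducts() {
  // The "products" key identifies this query in the cache.
  return useQuery("products", fetchProducts);
}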

I have used both React Query and SWR in my projects and I like both of them. You can take a look at their comparison and features to decide which one you should use.

Use Commitizen And Semantic Release With Husky

If you deploy and release your application often, then you might have encountered issues with versioning. When you are working on a big application with multiple developers contributing to it, managing releases becomes even more difficult. Keeping track of the changelog is hard, manually updating it is tedious, and slowly it falls out of date.

You can combine packages like Commitizen and Semantic Release to help you with versioning and maintaining a changelog. These tools help you in automating part of your release process by keeping the changelog in sync with what changes were deployed in a particular release. You can use a tool like Husky to ensure that all the contributors are following the established pattern for writing commit messages and helping you in managing your changelog.

Use Storybook For Visualizing UI Components

In a large code-base, your application will most likely consist of a lot of components. Some of these components will be outdated, buggy, or not necessary anymore. However, it is very difficult to keep track of this kind of thing in a large application. Developers might create new components whose behavior might be similar to an already existing component because they don’t know that the previous component exists. This happens often because there is no way to keep track of what components the application currently has and how they interact with each other.

Tools like Storybook will help you keep track of all the components that your code-base currently consists of. Setting up Storybook is easy and can integrate with your existing Next.js application. Next.js has an example that shows how to set up Storybook with your application.

I have always liked using Storybook because it helps my team of developers understand how each component behaves and what APIs it exposes. It serves as a source of documentation for every developer. Storybook also helps designers understand the behavior of all the components and interactions. You can also use Chromatic along with Storybook for visual testing and catching regression issues during each release.
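
A story file in Storybook’s Component Story Format can be as small as the sketch below; the Button component and its props are hypothetical stand-ins for whatever lives in your library:

// Button.stories.tsx
import React from "react";
import { Button } from "./Button"; // hypothetical component

export default {
  title: "Components/Button",
  component: Button,
};

// Each named export becomes a selectable story in the Storybook UI.
export const Primary = () => <Button>Buy now</Button>;
export const Disabled = () => <Button disabled>Out of stock</Button>;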

Recommended Reading: “Building React Apps With Storybook” by Abdulazeez Adeshina

Write Maintainable Tests From The Start

Writing tests consumes time. As a result, many companies tend not to invest time in writing any sort of test. Because of this, the application might suffer in the long run. As the application grows, the complexity of the application also increases. In a complex application, refactoring becomes difficult because it is very hard to understand which files might break because of the changes.

One solution to this problem would be to write as many tests as possible from the start. You can follow Test-Driven Development (TDD) or any other similar concept that works for you. There is an excellent article, “The Testing Trophy and Testing Classifications” by Kent C. Dodds, which talks about the different types of tests that you can write.

Writing maintainable tests does take time, but I think tests are essential for large applications, as they give developers the confidence to refactor files. Generally, I use Jest, React Testing Library, and Cypress for writing tests in my applications.
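
For reference, a minimal Jest and React Testing Library test might look like the sketch below, assuming a hypothetical Button component that renders a native button and accepts an onClick prop:

// Button.test.tsx
import React from "react";
import { render, screen, fireEvent } from "@testing-library/react";
import { Button } from "./Button"; // hypothetical component

test("calls onClick when the button is pressed", () => {
  const onClick = jest.fn();
  render(<Button onClick={onClick}>Buy now</Button>);

  // Query by accessible role and name, then simulate a click.
  fireEvent.click(screen.getByRole("button", { name: /buy now/i }));

  expect(onClick).toHaveBeenCalledTimes(1);
});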

Use Dependabot To Update Packages Automatically

When multiple feature teams contribute to the same application, there is a good chance that the packages used in it will become outdated. This happens because updating a package with breaking changes can require a considerable amount of time, which might result in missing deadlines for shipping features. However, this practice hurts in the long run: working with outdated packages can cause a lot of issues, like security vulnerabilities, performance problems, and so on.

Fortunately, tools like Dependabot can help your team by automating the update process. Dependabot can be configured to check for outdated packages and send updated pull requests as often as you need. Using tools like Dependabot has helped me a lot in keeping the dependencies of my applications updated.

Things I Wish I Had Known Earlier

There are many things that I wish I had known earlier while building Next.js applications. However, the most important is the Going to Production section of the Next.js documentation. This section outlines some of the most important things that one should implement before deploying a Next.js application to production. Before I read this section, I used to arbitrarily guess what to do before deploying any application to production.

Always check what browsers you need to support before deploying your application to production and shipping them to your customers. Next.js supports a wide range of browsers. But it is essential to understand what type of users you are shipping your application to and what type of browsers they use.

Conclusion

These are some of the things that I learned while building and maintaining a large Next.js application. Most of these points will apply to any front-end application. For any front-end application, the main priority should always be shipping a product that has a very good user experience, is fast, and feels smooth to use.

I try to keep all these points in mind whenever I develop any application. I hope that they’ll prove to be useful to you, too!