What Is Screen Scraping?

Have you ever wanted to extract the UI elements displayed on your screen? You can do this with a simple technique called screen scraping. Let us begin by understanding the concept.

What Is Screen Scraping?

Screen scraping is a technique for extracting the UI elements shown on the screen so that their data can be fed into other applications. For example, you could scrape the mark sheets of individual students and use that data to calculate the school’s overall results in software.

Easily Find & Kill MongoDB Operations from MongoLab’s UI

A few months ago, we wrote a blog post on finding and terminating long-running operations in MongoDB. To help make it even easier for MongoLab users* to quickly identify the cause behind database unresponsiveness, we’ve integrated the currentOp() and killOp() methods into our management portal.

* currentOp and killOp functionality is not available on our free Sandbox databases because they run on multi-tenanted mongod processes.

Creating Custom HTML

An exciting feature of the HTML specification is custom HTML elements, which let you define your own HTML elements along with their own JavaScript API. This can be useful when building interfaces with reused...


Things I Wish I Had Known About Angular When I Started

I’ve been using Angular since version 2, and it has come a long way since those days to what it is right now. I’ve worked on various Angular projects over the years, yet I keep finding new things, which goes to show how massive the framework is. Here are some things I wish I had known about Angular when I started, so you don’t have to learn them the hard way.

Modularize Your Application

Angular has detailed documentation outlining the recommended approach to structure your application. Angular also provides a CLI to help scaffold your application that adheres to their recommendations.

I’ve had my fair share of mistakes when it comes to structuring the application. As you follow tutorials, you’re guided through where you should put your files and which modules the components or services belong to. However, when you venture beyond the tutorial, you sometimes end up with a structure that doesn’t scale well. This could lead to issues down the road.

Below are some mistakes I’ve made that came back and bit me.

Split Your Components Into Modules

The release of Standalone Components in Angular 14 makes NgModules no longer a requirement when creating components. You can choose not to use modules for your components, directives, and pipes. However, you could still follow the folder structure outlined below, omitting the module files.
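
If you do go module-less, marking a component as standalone is a one-line change on the decorator. Here is a minimal sketch (the component name is hypothetical):

import { Component } from '@angular/core';

@Component({
  selector: 'app-card',
  // No NgModule needed; the component declares itself standalone.
  standalone: true,
  template: `<ng-content></ng-content>`,
})
export class CardComponent {}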

Initially, I put all the components into the default module you get when creating a new Angular app. As the application grew, I ended up with a lot of components in the same module. They were separate components and didn’t have any need to be in the same module.

Split your components into separate modules, so you can import and load only the required modules. The common approach is to divide your application into the following modules:

  • Core module for singleton services and components that are used once at the app level (example: navigation bar and footer).
  • Feature modules for each feature — code related to the specific functionality of your application. For example, a simple e-commerce application could have a feature module for products, carts, and orders.
  • Shared module for code that is referenced across different parts of the application. This can include components, directives, and pipes.

Dividing the application into separate modules helps partition your application into smaller, more focused areas. It creates clear boundaries between the different types of modules and each feature module. This separation helps maintain and scale the application as different teams can work on separate parts with a lower risk of breaking another part of the application.

Lazy Load Your Routes

This is a result of my first mistake of putting everything in a single module. Because all the components were in the same module, I couldn’t lazy load the modules. All the modules were imported at the root level, eventually affecting the initial load time. After separating your components into modules, lazy load your routes, so the modules only get loaded when you navigate to the route that requires them.
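
As a rough sketch, a lazily loaded route could look like this (the ProductsModule and its path are hypothetical):

import { Routes } from '@angular/router';

const routes: Routes = [
  {
    path: 'products',
    // ProductsModule is only downloaded when the user first navigates to /products.
    loadChildren: () =>
      import('./products/products.module').then((m) => m.ProductsModule),
  },
];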

Single Responsibility

This applies to all types of files in an Angular app. I’ve let my service and component files grow beyond their scope, which made them difficult to work with. The general rule is to keep each component/service/pipe/directive performing a specific set of tasks. If a component is trying to do more than what it was initially made for, it might be worth refactoring and splitting it into several smaller components. This will make testing and maintenance a lot easier.

Use The Angular CLI

You’ve probably used the ng serve command either directly in your command line or through a script in your package.json file. This is one of Angular CLI’s commands. However, the CLI comes with more handy commands that can speed up your development especially when it comes to initializing and scaffolding.

Initially, I did most of these manually as I didn’t understand how to use the CLI except for starting and stopping the local server. I would create component files manually, add the boilerplate code, and add them to the right modules. This was okay for smaller projects but became a tedious task as the project grew. That’s when I learned how to use the CLI and use it to automate most of the manual work I do. For example, instead of creating all the boilerplate for a card component, the following command will create them for you:

ng g c card

You can use the CLI by installing it globally via npm using the command below:

npm install -g @angular/cli

To view the available commands, execute the code below:

ng help

Most projects have custom, project-specific configurations, and you have to modify the code generated by the CLI. Angular provides an elegant solution for these scenarios: schematics. A schematic is a template-based code generator — a set of instructions to generate or modify code for your project. Similar to the Angular CLI, your custom schematics are packaged and can be installed via npm in whichever project needs them.

Path Aliases And Barrel Exports

As I was learning Angular, I tried to keep my project neat by putting all the services into a services folder, models in a models folder, and so on. However, after some time, I ended up with a growing list of import statements like this:

import { UserService } from '../../services/user.service';
import { RolesService } from '../../services/roles.service';

TypeScript path aliases can help simplify your import statements. To set up path aliases, open your tsconfig.json and add the desired path name and its actual path:

{
  "compilerOptions": {
    "paths": {
      "@services/*": ["src/app/services/*"]
    }
  }
}

Now the import statements above can be re-written as:

import { UserService } from '@services/user.service';
import { RolesService } from '@services/roles.service';

An added benefit of using path aliases is that it allows you to move your files around without having to update your imports. You’d have to update them if you were using relative paths.

This can be further simplified by using barrel exports. Barrels are a handy way to export multiple files from a single folder (think of it as a proxy for your files). Add an index.ts in the services folder with the following contents:

export * from './user.service';
export * from './roles.service';

Now, update the tsconfig.json to point to the index.ts file instead of the asterisk (*).

{
  "compilerOptions": {
    "paths": {
      "@services": ["src/app/services/index.ts"]
    }
  }
}

The import statements can now be further simplified into:

import { UserService, RolesService } from '@services';

Embrace TypeScript’s Features

I started by learning JavaScript, so I wasn’t used to the type system and the other features that TypeScript offers. My exposure to TypeScript came through Angular, and it was overwhelming to learn both a new language (although it’s a superset of JavaScript, some differences tripped me up every time) and a new framework. I often found TypeScript slowing me down instead of helping me with development, so I avoided its features and overused the any type in my project.

However, as I got more acquainted with the framework, I began to understand the benefits of TypeScript when used correctly. TypeScript offers a lot of useful features that improve the overall developer experience and make the code you write cleaner. One of the benefits I’ve grown accustomed to is the IntelliSense, or autocomplete, it provides in your IDE. Its type safety and static type checking have also helped catch potential bugs at compile time that could otherwise have snuck in.

The nice thing about TypeScript is its flexible configuration. You can toggle its settings easily via tsconfig.json to suit your project’s needs, and change them again later if you change your mind. This allows you to make the rules as loose or as strict as you’d like.
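
For example, a minimal sketch of a stricter setup in tsconfig.json could look like the following (note that "strict" already enables a family of checks, including the two listed here for illustration):

{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}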

Improve Performance By Using trackBy

Performance is crucial for applications, and Angular provides various ways to optimize your applications. This is often a problem that you won’t run into at the beginning as you are probably working with small data sets and a limited number of components. However, as your application grows and the number of components being rendered grows and becomes increasingly complex, you’ll start to notice some performance degradation. These performance degradations are usually in the form of slowness in the app: slow to respond, load, or render and stuttering in the UI.

Identifying the source of these problems is an adventure on its own. I’ve found that most of the performance issues I’ve run into in the applications are UI related (this doesn’t mean that other parts of the application don’t affect performance). This is especially prominent when rendering components in a loop and updating an already rendered component. This usually causes a flash in the component when the components are updated.

Under the hood, when a change occurs in these types of components, Angular needs to remove all the DOM elements associated with the data and re-create them with the updated data. That is a lot of DOM manipulations that are expensive.

A solution I’ve found to fix this issue is to use the trackBy function whenever you’re rendering components using the ngFor directive (especially when you’re frequently updating the rendered components).

The ngFor directive needs to uniquely identify items in the iterable to correctly perform DOM updates when items in the iterable are reordered, new items are added, or existing items are removed. For these scenarios, it is desirable only to update the elements affected by the change to make the updates more efficient. The trackBy function lets you pass in a unique identifier to identify each component generated in the loop, allowing Angular to update only the elements affected by the change.

Let’s look at an example of a regular ngFor that creates a new div for each entry in the users array.

@Component({
  selector: 'my-app',
  template: `
    <div *ngFor="let user of users">
      {{ user.name }}
    </div>
  `,
})
export class App {
  users = [
    { id: 1, name: 'Will' },
    { id: 2, name: 'Mike' },
    { id: 3, name: 'John' },
  ];
}

Keeping most of the code the same, we can help Angular keep track of the items in the template by adding the trackBy function and assigning it to a function that returns the unique identifier for each entry in the array (in our case, the user’s id).

@Component({
  selector: 'my-app',
  template: `
    <div *ngFor="let user of users; trackBy: trackByFn">
      {{ user.name }}
    </div>
  `,
})
export class App {
  users = [
    { id: 1, name: 'Will' },
    { id: 2, name: 'Mike' },
    { id: 3, name: 'John' },
  ];

  trackByFn(index, item) {
    return item.id;
  }
}

Use Pipes For Data Transformations

Data transformations are inevitable as you render data in your templates. My initial approach to this was to:

  • Bind the template to a function that accepts the data as the input:

interface User {
  firstName: string;
  middleName: string;
  lastName: string;
}

@Component({
  selector: 'my-app',
  template: `
    <h1>{{ formatDisplayName(user) }}</h1>
  `,
})
export class App {
  user: User = {
    firstName: 'Nick',
    middleName: 'Piberius',
    lastName: 'Wilde',
  };

  formatDisplayName(user: User): string {
    return `${user.firstName} ${user.middleName.substring(0, 1)}. ${user.lastName}`;
  }
}
  • Create a new variable, assign the formatted data to the variable, and bind the new variable in the template:

interface User {
  firstName: string;
  middleName: string;
  lastName: string;
}

@Component({
  selector: 'my-app',
  template: `
    <h1>{{ displayName }}</h1>
  `,
})
export class App {
  user: User = {
    firstName: 'Nick',
    middleName: 'Piberius',
    lastName: 'Wilde',
  };

  displayName = `${this.user.firstName} ${this.user.middleName.substring(0, 1)}. ${this.user.lastName}`;
}

Neither approach was clean or performant, nor is it what Angular recommends for data transformations. For these scenarios, Angular recommends using pipes. Pipes are functions specifically designed to be used in templates.

Angular provides built-in pipes for common data transformations such as internationalization, date, currency, decimals, percentage, and upper and lower case strings. In addition, Angular also lets you create custom pipes that can be reused throughout your application.

The data transformation above can be re-written using a pipe as follows:

@Pipe({ name: 'displayName' })
export class DisplayNamePipe implements PipeTransform {
  transform(user: User): string {
    return `${user.firstName} ${user.middleName.substring(0, 1)}. ${user.lastName}`;
  }
}

The pipe can then be used in the template by using the pipe (|) character followed by the pipe name.

@Component({
  selector: 'my-app',
  template: `
    <h1>{{ user | displayName }}</h1>
  `,
})
export class App {
  user: User = {
    firstName: 'Nick',
    middleName: 'Piberius',
    lastName: 'Wilde',
  };
}

Improve Performance With OnPush Change Detection

Angular applications are made up of a tree of components that rely on their change detectors to keep the view and their corresponding models in sync. When Angular detects a change in the model, it immediately updates the view by walking down the tree of change detectors to determine if any of them have changed. If the change detector detects the change, it will re-render the component and update the DOM with the latest changes.

There are two change detection strategies provided by Angular:

  • Default
    The change detection cycle runs on every event that occurs inside the component.
  • OnPush
    The change detection cycle only runs when one of the component’s event handlers is triggered, when an observable bound to the template via the async pipe emits a new value, or when one of the component’s input references changes.

In addition to the reduced number of change detection cycles and its performance boost, the restrictions imposed by using the OnPush change detection strategy also make you architect your app better by pushing you to create more modular components that utilize one of the three recommended ways mentioned above to update the DOM.
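
Enabling the strategy is a one-line change on the component decorator. Here is a minimal sketch (the user-card component is hypothetical):

import { ChangeDetectionStrategy, Component, Input } from '@angular/core';

@Component({
  selector: 'user-card',
  template: `<p>{{ user.name }}</p>`,
  // Re-check this component only when an @Input reference changes,
  // one of its own event handlers fires, or an async pipe emits a new value.
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class UserCardComponent {
  @Input() user!: { name: string };
}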

RxJS Is Your Friend

RxJS is a JavaScript library that uses observables for reactive programming. While RxJS isn’t exclusively used in Angular, it plays a big role in the Angular ecosystem. Angular’s core features, such as Routing, HttpClient, and FormControl, leverage observables by default.

RxJS is a part of Angular that has been largely unexplored for me as I was learning the framework. I’ve avoided using it unless I had to. It was a new concept, and I found it quite hard to wrap my head around it. I’ve worked with JavaScript Promises, but observables and streams are a new paradigm for me.

After working for a while with Angular, I eventually took the time to learn and understand RxJS and try to use them in my projects. It wasn’t long before I realized the numerous benefits of RxJS that I’ve been missing out on all this time. RxJS, with its large collection of chainable operators, excels in handling async tasks.

I’ve been using RxJS with Angular for a few years now, and my experience has been nothing short of positive. The set of operators RxJS offers is really handy; there seems to be an operator (or a chain of operators) for every use case. Commonly used operators include the following (see the sketch after this list):

  • map: passes each source value through a transformation function to get corresponding output values.
  • tap: modify the outside state when the observable emits a new value without altering the stream.
  • switchMap: maps each value to an Observable, then flattens all of these inner Observables.
  • filter: emits a value from the source if it passes a criterion function.
  • combineLatestWith: create an observable that combines the latest values from all passed observables and the source into an array and emits them.
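
To give a feel for how these operators compose, here is a small, self-contained sketch (the data is made up):

import { of } from 'rxjs';
import { filter, map, tap } from 'rxjs/operators';

// A made-up stream of ids: keep the even ones, double them,
// and log each emission without altering the stream itself.
of(1, 2, 3, 4)
  .pipe(
    filter((id) => id % 2 === 0),
    map((id) => id * 2),
    tap((id) => console.log('emitting', id))
  )
  .subscribe((id) => console.log('received', id));
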
Learn How To Spot And Prevent Memory Leaks

Memory leaks are one of the worst types of issues you run into — hard to find, debug, and often hard to solve. This might not be a concern initially, but it becomes crucial when your application reaches a certain size. Common symptoms of memory leaks are performance that degrades the longer the app is used, or the same events being fired multiple times. Two of the most common sources of memory leaks I’ve run into are:

1. Subscriptions That Are Not Cleaned Up

Unlike the async pipe, listening to an observable using the subscribe method won’t get cleaned up automatically. You will have to manually clean up the subscriptions by calling unsubscribe on the subscription or using the takeUntil operator.

The example below shows a memory leak introduced by listening to the route params observable. Every new instance of MyComponent creates a new subscription which will continue to run even after the component is destroyed.

export class MyComponent {
  constructor(private route: ActivatedRoute) {
    this.route.params.subscribe((params) => {
      // Do something
    });
  }
}

As mentioned above, you can fix the memory leak by either calling unsubscribe or using the takeUntil operator.

  • Fixing the memory leak using the unsubscribe method:

export class MyComponent {
  private routeSubscription;

  constructor(private route: ActivatedRoute) {
    this.routeSubscription = this.route.params.subscribe((params) => {
      // Do something
    });
  }

  ngOnDestroy() {
    this.routeSubscription.unsubscribe();
  }
}
  • Fixing the memory leak using the takeUntil operator:

export class MyComponent {
  private componentDestroyed$ = new Subject<boolean>();

  constructor(private route: ActivatedRoute) {
    this.route.params
      .pipe(takeUntil(this.componentDestroyed$))
      .subscribe((params) => {
        // Do something
      });
  }

  ngOnDestroy() {
    this.componentDestroyed$.next(true);
    this.componentDestroyed$.complete();
  }
}

2. Event Listeners That Are Not Cleaned Up

Another common source of memory leaks is event listeners that aren’t unregistered when no longer used. For example, the scroll event listener in the code below gets instantiated on every new instance of MyComponent and continuously runs even after the component is destroyed unless you unregister it.

export class MyComponent {
  constructor(private renderer: Renderer2) {}

  ngOnInit() {
    this.renderer.listen(document.body, 'scroll', () => {
      // Do something
    });
  }
}

To fix this and stop listening to the event after the component is destroyed, assign it to a variable and unregister the listener on the ngOnDestroy lifecycle method.

export class MyComponent {
  private listener;

  constructor(private renderer: Renderer2) {}

  ngOnInit() {
    this.listener = this.renderer.listen(document.body, 'scroll', () => {
      // Do something
    });
  }

  ngOnDestroy() {
    this.listener();
  }
}

Consider Using A State Management Library (If Applicable)

State management is another part of the stack that you don’t usually think about until you need it. Most small and simple applications don’t need any external state management library. However, as the project grows and managing your application’s state gets more complicated, it might be time to re-think if the project could benefit from implementing more robust state management.

There is no one-size-fits-all solution for state management, as every project’s requirements are different. Luckily, there are several state management libraries in the Angular ecosystem, each offering a different set of features, so you can pick whichever best fits your project.

Wrapping Up

If you’ve just started to learn Angular and it hasn’t quite clicked yet, be patient! It will eventually start to make sense, and you’ll see what the framework has to offer. I hope my personal experience can help you accelerate your learning and avoid the mistakes I’ve made.

Everything Developers Must Know About Figma

We must understand the possibilities and limitations of each other’s tools to work hand in hand, so let me show you the design side of things and all the little Figma treasures you might not yet understand fully.

  1. We work with components and variants in Figma.
  2. We work with styles in Figma, but they are not very smart.
  3. We can set up and test responsive design!
  4. We have no breakpoints in Figma.
  5. We can also work with actual data (sort of).
  6. You might want to point out soft grid vs. hard grid to us.
  7. Why we sometimes mess up line-height.
  8. All we have in Figma is PX.
  9. We can set up pretty sweet prototypes in Figma.
  10. We will invite you to ‘View Only’ rights, giving you access to everything you need as a developer.
1. We Work With Components And Variants In Figma

Components In Figma

In Figma, we can set up re-usable UI components and create instances. Components can also be nested. Hence we can follow a nice atomic design path.

Tip: With true/false or yes/no, you can create a toggle for the entire component. This is a great way to create a light/dark mode. I saw this setup in Joey Banks’s excellent iOS 16 UI Kit for Figma. Best file setup I have ever seen in general!

We Have Props!

Component properties were released in March 2022, so I assume a lot of developers do not know about the possibility of using them in design yet. So far, we have text props, instance swap props, and boolean (toggle) props. And of course, we can combine them all together.

Opportunities Between Design And Code

Align UI And Code Components In Naming And Structure

Due to the use of components, variants, and props, we can align our UI components with code components. However, to do so, we need information about the structure, naming, behavior, etc., from development. So sit down with us, have a coffee, and show us the code base you have or dream of building. Many videos and tutorials show how different teams handle this alignment process. I leave you to the rabbit hole.

Quick Link UI And Code Components In Figma

If you want to link components to a code base without much effort and documentation, you can simply add a link and a description to the Figma component documentation (a bit hidden). The link will create a button in the inspect tab linking directly to, e.g., the Github section of the same component in code. The Figma component search also picks up the description, which is handy for larger systems.

Note: Aligning components is fantastic, but it also takes a lot of effort and, most of all, maintenance, so use it where it makes sense, e.g., a design system. If you just design a one-pager website, you still use components with a clean and scalable design and clear building blocks to be coded, but they do not necessarily need to align with the code. It’s like you would not build an assembly line to streamline the process of making a cake if you would only want to bake a birthday cake for your friend. Yet you still use the same basic ingredients.

2. We Work With Styles In Figma, But They Are Not Very Smart

Styles In Figma

In Figma, we can create styles for color, text, grids, and things like shadows or blurs and re-apply them across our design. However, that’s pretty much it.

Opportunities Between Design And Code

Figma Token Plugin to create or connect with existing tokens

As you can see, Figma styles are a bit isolated and do not interact with one another. So you cannot set a base font size to scale and adapt the scaling rate; you can only set a fixed size. Also, we have no styles for spacing systems (yet). However, with the Figma Tokens plugin, you can create tokens in Figma and work with them. And even more impressive, you can connect to and align with existing code tokens. Check out the (really well-made) documentation and this fantastic video by the creator Jan Six. So amazing!!!

3. We Can Set Up And Test Responsive Design!

This is a big one! Let’s look at it step by step. The tools we have in Figma for responsive design are constraints, grids, and auto layout.

We can use these tools individually, not at all, or combine them. It depends a lot on what we want to build. There is no right or wrong.

Very important to know from a developer’s point of view is that we have no automated breakpoints in Figma (I will talk about how to deal with that in a bit).

Auto Layout

Auto layout is really powerful but takes some practice to work with (and will drive you nuts to start with, but stick with it!!!). It is (loosely) based on flexbox, as you will notice when you glance at the Inspect tab.

Combine Grids And Constraints

The cool thing is that as soon as a grid is applied to a frame, the constraints will assume the columns as the parent frame. So we can set up really nice and straightforward responsive behavior by combining grids and constraints.

Combine Grid, Constraints, And Auto Layout Elements

So even though we cannot combine auto layout and constraints within a frame, we can place auto layout elements/instances inside a parent frame and then use constraints around them. In this way, the content reshuffles nicely, keeping all set parameters.

We Can Make Our Own Breakpoints By Hand!

However, we can create our own breakpoints by hand! So with the technical information given, we can set up the visual representation in Figma. I am just using a random example of breakpoints here.

We can then place our auto layout components within those ranges and see where adjustments are necessary. In my example, I switch from a full fluid screen on mobile to an overlay with a fixed size at breakpoint S.

Note: Sometimes, you might use the same grid for several breakpoints, then just note, e.g., Grid: S+M (from 576 to 992). This way, you could always split it in two again in case the margin or anything changes in the future.

Responsive Typography Is Non-Existent

Unfortunately, what kicks in automatically with media queries in CSS needs to be added by hand in Figma. We can set up a responsive type scale and then need to make sure to change the text style (if applicable) when breakpoints change. It’s a bit annoying and full of potential errors, I know.

If you want to work with fluid typography (vw units, clamp(), calc(), you name it), this is best tested in the browser, as we cannot simulate fluid text behavior in Figma. We can, however, pick a specific min and max screen size to get a rough idea of the situation at a specific width.

Breakpoint Plugin

However, to end on an exciting topic: Once you go through the effort of setting up your components and pages responsively, you can chuck them into the breakpoints plugin and get a really lovely overall idea of the design.

5. We Can Also Work With Actual Data (Sort Of)

Figma cannot connect to a classic database, but we can use actual data with some preparation. You can use the Google Sheets Sync plugin and just add actual content there. Simply name your layers with #columnname, run the plugin, add the link, and hit sync. And boom, there you go. There are also plugins for Airtable and Notion sync that work in pretty much the same way.

In general, we should test components with different content such as ideal state, little content, heavy content, empty, error, and loading states where applicable. I made a checklist for components you can use before release.

Working with actual data gives us a good idea of potential shortcomings. We can also see if the database needs some grooming or if the image pool needs a bit of love and attention to live up to the brand promise.

6. You Might Want To Point Out Soft Grid vs. Hard Grid To Us

When we click on Grids, Figma adds this px grid to the background. Order! Structure! As a designer, you jump at this, and as you were told to space with 8pt, you use the grid.

So we have this grid, which is why many designers jump to using a hard grid to set their spacing (it can be handy for other alignment tasks and in mobile setups, though). We have no spacing blocks or cubes to create a soft grid; we can set this by hand, though, and nudge in steps of 8, but that is about it.

Tip: In Figma, we can alter the nudge amount. Press Cmd + / and type "nudge", then change the amount to 8. Make sure to keep Alt pressed when nudging to see the distances. By pressing Shift and the up and down arrows, we then nudge in 8pt steps.

Opportunities Between Design And Code

How Does Spacing Work For You In CSS?

Feel free to point out (preferably at the beginning of the project) that there is no such magic background grid in CSS and that the spacing system means measuring in spacing blocks from element to element (including the line height!). Or, in other words, explain the difference between the hard grid vs. the soft Grid that we use later in UI Design and CSS.

And yet again: Use the Figma Tokens Plugin.

Here we can just pull the real spacing system with spacing tokens and apply it to our components. We can also set up our own tokens just in Figma right in the plugin.

Note: We cannot set line height in Figma with a unitless value like 1.5! By default, Figma uses px, but we can cheat a little and use %, so 1.5 in CSS would be 150% in Figma. You will still find only the px value in the inspect tab.

Opportunities Between Design And Code

Explain It!

So as a developer, you might find that the line height is randomly set to 1. This is a desperate design attempt to get rid of the “random” space we do not understand (yet). So it makes sense to remind (new) designers that UI Design is dynamic. Screen sizes change, and content length will vary (either because the content is added or translated into a new language). Thus, we can never assume a single line of the text remains a single line of text forever. Also, we do not want to create too many styles. So explain that working with the natural line height is just fine, and you will do the same in CSS.

8. All We Have In Figma Is PX

In Figma, we can only work with px, and we work at 1px=1pt. We do not have rem, em, or any other relative way to define things like font size. So if you see px everywhere in a UI Design, this does not mean we want it hard coded!!!

9. We Can Set Up Pretty Sweet Prototypes In Figma

We can create rather impressive prototypes directly from our design files in Figma. If you hit the play button in the file (top right), you can see them. We can link frames to new pages or overlays and also animate within component sets from variant to variant.

Opportunities Between Design And Code

As a developer, you will be able to navigate the file and pull out all information you need:

Pages

You can navigate the different frames on the canvas but note how there are different pages above the layers menu on the left. Every team uses pages differently, some for versions and sprints, some to structure the file into the design, components, and testing. In any case, ensure not to overlook the pages as they are the file’s structure.

Inspect Mode

When entering a file in view mode, you will see the inspect menu open by default. Click on an element, and you will be shown the distance to the nearest objects and the specs in the right-hand side menu.

You can switch between CSS, iOS, and Android.

When clicking on the main component, you will see the link to the code documentation (if applicable) and any comments in inspect mode.

This only shows up if it was added to the design tab’s component documentation. And you obviously only need this if you want to align UI and code components.

By the way, it works with any link. However, some, such as GitHub links, create a nice custom button.

Styles Overview

Click on the canvas to get an overview of all styles in the file. Note that this only shows local styles; some might be pulled in from an external library. So it’s best to check the style documentation (every design team should set this up for you) to make sure you have all the information.

You should, however, still receive a general overview of all styles from your design team, including internal and external styles used now or in the future.

Jump To The Main Component

This is really important yet a bit hidden. Click on any instance on the canvas and then click on the diamond-shaped symbol, and you will jump to the main component and its documentation. This is where you can get all the information and measurements.

You should then be led to the Figma UI component library. This might be a local page or an external UI component document giving you all the necessary information and specs defined by the UI team. If you do not find such an overview, kindly ask your design team to set this up for you.

There is no magic automation for style and component overview in Figma. This needs to be set up and documented by the design team, and the format may vary.

Export Assets Of Any Size And Form

Assets can be exported in any format (JPG, PNG, SVG) and at any @size from “view only” mode, so no bulk export by the design team is needed anymore.

Tip: For a specific height or width, instead of 3x, 2x, just enter the width followed by w (e.g., 300w), and it will export it, keeping the image proportions. It also works for height (h).

Comment

Leave comments and discuss within your team.

Prototype

Hit the play button (top right corner of your design file), and you will jump to presentation mode, where you can see your prototype in action. Usually, the designer was nice enough to add some flows and structure the prototype, so you get a good idea of the different flows.

Tip: Individual links can be created from every flow of the prototype menu. I like using this to set up an overview of the design and testing stages. You can also link to any other team planning file here.

Stay In Touch!

If you liked this article, make sure to subscribe and visit me on moonlearning.io, where I teach about UX/UI Design+Figma. This article is also the base of my talk and workshop during the Smashing Conference New York, the 10th to the 13th of October 2022. See you there!

Testable Frontend: The Good, The Bad And The Flaky

I often come across front-end developers, managers, and teams facing a repeating and legitimately difficult dilemma: how to organize their testing between unit, integration, and E2E testing and how to test their UI components.

Unit tests often seem not to catch the “interesting” things happening to users and systems, and E2E tests usually take a long time to run or require a messy configuration. In addition to that, there are so many tools around (Jest, Cypress, Playwright, and so on). How does one make sense of it all?

Note: This article uses React for examples and semantics, but some of the values apply to any UI development paradigm.

Why Is Testing Front-end Difficult?

We don’t tend to author our front-end as a system but rather as a bunch of components and functions that make up the user-interface stories. With component code mainly living in JavaScript or JSX, rather than separating between HTML, JS, and CSS, it’s also more tempting than ever to mix view code and business-logic code. When I say “we,” I mean almost every web project I encountered as a developer or consultant.

When we come around to test this code, we often start from something like the React Testing Library which renders React components and tests the result, or we faff about with configuring Cypress to work nicely with our project and many times end up with a misconfiguration or give up.

When we talk with managers about the time required to set up the front-end testing system, neither they nor we know exactly what it entails, whether our efforts would bear fruit, or how whatever we build would contribute to the quality of the final product and the velocity of building it.

Tools And Processes

It gets worse if we have some sort of a “mandatory TDD” (test-driven development) process in the team, or even worse, a code-coverage gate where you have to have X% of your code covered by tests. We finish the day as a front-end developer, fix a bug by fixing a few lines sprinkled across several React components, custom hooks, and Redux reducers, and then we need to come up with a “TDD” test to “cover” what we did.

Of course, this is not TDD; in TDD, we would have written a failing test first. But in most front-end systems I’ve encountered, there is no infrastructure to do something like that, and the request to write a failing test first while trying to fix a critical bug is often unrealistic.

Coverage tools and mandatory unit tests are a symptom of our industry being obsessed with specific tools and processes. “What is your testing strategy?” is often answered by “We use TDD and Cypress” or “we mock things with MSW,” or “we use Jest with React Testing Library.”

Some companies with separate QA/testing organizations do try to create something that looks more like a test plan. Still, those often reach a different problem, where it’s hard to author the tests together with development.

Tools like Jest, Cypress and Playwright are great, code coverage has its place, and TDD is a crucial practice for maintaining code quality. But too often, they replace architecture: a good plan of interfaces, good function signatures between units, a clear API for a system, and a clear UI definition of the product — a good-old separation of concerns. A process is not architecture.

The Bad

To respect our organization’s process, like the mandatory testing rule or some code-coverage gate in CI, we use Jest or whatever tool we have at hand, mock everything around the parts of the codebase we’ve changed, and add one or more “unit” tests that verify that it now gives the “correct” result.

The problem with it, apart from the test being difficult to write, is that we’ve now created a de-facto contract. We’re not only verifying that a function gives some set of expected results, but we’re also verifying that this function has the signature the test expects and uses the environment in the same way our mocks simulate. If we ever want to refactor that function signature or how it uses the environment, the test will become dead weight, a contract we don’t intend to keep. It might fail even though the feature works, and it might succeed because we changed something internal, and the simulated environment doesn’t match the real environment anymore.

If you’re writing tests like this, please stop. You’re wasting time and making the quality and velocity of your product worse.

It’s better to not have auto-tests at all than to have tests that create fantasy worlds of unspecified simulated environments and rely on internal function signatures and internal environment states.

Contracts

A good way to understand if a test is good or bad is to write its contract in plain English (or in your native language). The contract needs to represent not just the test but also the assumptions about the environment. For example, “Given the username U and password Y, this login function should return OK.” A contract is usually a state and an expectation. The above is a good contract; the expectations and the state are clear. For companies with transparent testing practices, this is not news.

It gets worse when the contract becomes muddied with implementation detail: “Given an environment where this useState hook currently holds the value 14 and the Redux store holds an array called userCache with three users, the login function should…”.

This contract is highly specific to implementation choices, which makes it very brittle. Keep contracts stable, change them when there is a business requirement, and let implementations be flexible. Make sure the things you rely on from the environment are sturdy and well-defined.
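
Expressed as code, the good contract above could look like the following sketch, assuming a Jest-style runner and a hypothetical login function that resolves to a result object:

import { login } from './auth'; // hypothetical module under test

// "Given the username U and password Y, this login function should return OK."
test('login returns OK for valid credentials', async () => {
  const result = await login('U', 'Y');
  expect(result.status).toBe('OK');
});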

The Flaky

When separation of concerns is missing, our systems don’t have a clear API between them, and we lack functions with a clear signature and expectation, so we end up with E2E tests as the only way to test features or regressions. This is not bad in itself, as E2E tests run the whole system and ensure that a particular story that’s close to the user works as expected.

The problem with E2E tests is that their scope is very wide. Because they test a whole user journey, the environment usually needs to be set up from scratch: authenticating, going through the entire flow to find the right spot where the new feature lives or the regression occurred, and then running the test case.

Because of the nature of E2E, each of these steps might incur unpredictable delays, as it relies on many systems, any of which could be down or laggy when the CI runs, as well as on the careful crafting of “selectors” (how to programmatically mimic what the user is doing). Some bigger teams have root-cause-analysis systems in place for this, and there are solutions like testim.io that address the problem. However, it is not an easy problem to solve.

Often a bug is in a function or system, and running the whole product to get there tests too much. New code changes might show regressions in unrelated user journey paths because of some failure in the environment.

E2E tests definitely have their place in the overall blend of tests and are valuable in finding issues that are not specific to a subsystem. However, relying too much on them is an indication that perhaps the separation of concerns and API barriers between the different systems is not defined well enough.

The Good

Since unit-testing is limited or relies on a heavily-mocked environment, and E2E tests tend to be costly and flaky, integration tests often supply a good middle ground. With UI integration tests, our whole system runs in isolation from other systems, which can be mocked, but the system itself is running without modification.

When testing the front-end, it means running the whole front-end as a system and simulating the other systems/”backends” it relies on to avoid flakiness and downtimes unrelated to your system.
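
If you already use a tool like MSW (mentioned earlier), one way to do that is to simulate the backend at the network boundary while the front-end itself runs unmodified. A sketch, assuming the MSW 1.x API and a Jest-style setup (the endpoint and payload are made up):

import { rest } from 'msw';
import { setupServer } from 'msw/node';

// The front-end keeps making real HTTP calls; only the backend is simulated.
const server = setupServer(
  rest.get('/api/user', (req, res, ctx) => res(ctx.json({ id: 1, name: 'Ada' })))
);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());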

If the front-end system gets too complicated, also consider porting some of the logic code to subsystems and define a clear API for these subsystems.

Strike A Balance

Separating code into subsystems is not always the right choice. If you find yourself updating both the subsystem and the front-end for every change, the separation may become unhelpful overhead.

Separate UI logic to subsystems when the contract between them can make them somewhat autonomous. This is also where I would be careful with micro-frontends as they are sometimes the right approach, but they focus on the solution rather than on understanding your particular problem.

Testing UI Components: Divide And Conquer

The difficulty in testing UI components is a special case of the general difficulty in testing. The main issue with UI components is that their API and environments are often not properly defined. In the React world, components have some set of dependencies; some are “props,” and some are hooks (e.g., context or Redux). Components outside the React world often rely on globals instead, which is a different version of the same thing. When looking at the common React component code, the strategy of how to test it can be confusing.

Some of this is inescapable as UI testing is hard. But by dividing the problem in the following ways, we reduce it substantially.

Separate UI From Logic

The main thing that makes testing component code easier is having less of it. Look at your component code and ask, does this part actually need to be connected to the document in any way? Or is it a separate unit/system that can be tested in isolation?

The more code you have as plain JavaScript “logic,” agnostic to a framework and unaware that it’s used by the UI, the less code you need to test in confusing, flaky, or costly ways. Also, this code is more portable and can be moved into a worker or to the server, and your UI code is more portable across frameworks because there is less of it.
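
As a tiny illustration (the names are made up), logic extracted from the UI can be exercised without rendering anything:

// Plain logic, unaware of React or the DOM.
export function totalPrice(items: { price: number; qty: number }[]): number {
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

// A straightforward unit test; no component environment needed.
test('totalPrice sums price times quantity', () => {
  expect(totalPrice([{ price: 2, qty: 3 }, { price: 5, qty: 1 }])).toBe(11);
});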

Separate UI Building Blocks From App Widgets

The other thing that makes UI code difficult to test is that components are very different from each other. For example, your app can have a “TheAppDashboard” component, which contains all the specifics of your app’s dashboard, and a “DatePicker” component, which is a general-purpose reusable widget that appears in many places throughout your app.

DatePicker is a UI building block, something that can be composed into the UI in multiple situations but doesn’t require a lot from the environment. It is not specific to the data of your own app.

TheAppDashboard, on the other hand, is an app widget. It probably doesn’t get re-used a lot throughout the application; perhaps it appears only once. So, it doesn’t require many parameters, but it does require a lot of information from the environment, such as data related to the purpose of the app.

Testing UI Building Blocks

UI building blocks should, as much as possible, be parametric (or “prop-based” in React). They shouldn’t draw too much from the context (global, Redux, useContext), so they also should not require a lot in terms of per-component environment setup.

A sensible way to test parametric UI building blocks is to set up an environment once (e.g., a browser, plus whatever else they need from the environment) and run multiple tests without resetting the environment.

A good example for a project that does this is the Web Platform Tests — a comprehensive set of tests used by the browser vendors to test interoperability. In many cases, the browser and the test server are set up once, and the tests can re-use them rather than have to set up a new environment with each test.
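
In the spirit of the DatePicker example, a sketch with React Testing Library might look like this (the component’s props and rendered text are assumptions, not a real API):

import { fireEvent, render, screen } from '@testing-library/react';
import { DatePicker } from './DatePicker'; // hypothetical building block

// The DOM environment is set up once for the whole file; each test only varies the props.
test('renders the given month', () => {
  render(<DatePicker month={3} year={2022} />);
  expect(screen.getByText('March 2022')).toBeTruthy();
});

test('calls onSelect when a day is clicked', () => {
  const onSelect = jest.fn();
  render(<DatePicker month={3} year={2022} onSelect={onSelect} />);
  fireEvent.click(screen.getByText('14'));
  expect(onSelect).toHaveBeenCalled();
});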

Testing App Widgets

App widgets are contextual rather than parametric. They usually require a lot from the environment and need to operate in multiple scenarios, but what makes those scenarios different is usually something in the data or user interaction.

It’s tempting to test app widgets the same way we test UI building blocks: create some fake environment for them that satisfies all the different hooks, and see what they produce. However, those environments tend to be brittle, constantly changing as the app evolves, and those tests end up being stale and give an inaccurate view of what the widget is supposed to do.

The most reliable way to test contextual components is within their true context — the app, as seen by the user. Test those app widgets with UI integration tests and sometimes with e2e tests, but don’t bother unit-testing them by mocking the other parts of the UI or utils.

Testable UI Cheat Sheet

Summary

Front-end testing is complex because often UI code is lacking in terms of separation of concerns. Business logic state-machines are entangled with framework-specific view code, and context-aware app widgets are entangled with isolated, parametric UI building blocks. When everything is entangled, the only reliable way to test is to test “everything” in a flaky and costly e2e test.

To manage this problem, rely on architecture rather than specific processes and tools:

  • Convert some of your business-logic flows into view-agnostic code (e.g., state machines).
  • Separate building blocks from app widgets and test them differently.
  • Mock your backends and subsystems, not other parts of your front-end.
  • Think and think again about your system signatures and contracts.
  • Treat your testing code with respect. It’s an important piece of your code, not an afterthought.

Striking the right balance between front-end and subsystems and between different strategies is a software architecture craft. Getting it right is difficult and requires experience. The best way to gain this kind of experience is by trying and learning. I hope this article helps a bit with learning!

My gratitude to Benjamin Greenbaum and Yehonatan Daniv for reviewing this from the technical side.

A New Pattern For The Jamstack: Segmented Rendering

If you think that static rendering is limited to generic, public content that is the same for every user of your website, you should definitely read this article.

Segmented Rendering is a new pattern for the Jamstack that lets you personalize content statically, without any sort of client-side rendering or per-request Server-Side Rendering. There are many use cases: personalization, internationalization, theming, multi-tenancy, A/B tests…

Let’s focus on a scenario very useful for blog owners: handling paid content.

Congratulations On Your New Job

Wow, you just got promoted! You are now “Head of Performance” at Repairing Magazine, the most serious competitor to Smashing Magazine. Repairing Magazine has a very peculiar business model. The witty jokes in each article are only visible to paid users.

Why did the programmer cross the road?

I bet you’d pay to know the answer.

Your job for today is to implement this feature with the best possible performance. Let’s see how you can do that. Hint: we are going to introduce a new pattern named “Segmented Rendering.”

The Many Ways To Render A Web Page With Modern JavaScript Frameworks

Next.js’s popularity stems from its mastery of the “Triforce of Rendering”: the ability to combine client-side rendering, per-request server-side rendering, and static rendering in a single framework.

CSR, SSR, SSG… Let’s Clarify What They Are

Repairing Magazine’s user interface relies on a modern JavaScript library: React. Like other similar UI libraries, React provides two ways of rendering content: client-side and server-side.

Client-Side Rendering (CSR) happens in the user’s browser. In the past, we would have used jQuery to do CSR.

Server-side rendering happens on your own server, either at request-time (SSR) or at build-time (static or SSG). SSR and SSG also exist outside of the JavaScript ecosystem. Think PHP or Jekyll, for instance.

Let’s see how those patterns apply to our use case.

CSR: The Ugly Loader Problem

Client-Side Rendering (CSR) would use JavaScript in the browser to add witty jokes after the page is loaded. We can use “fetch” to get the joke’s content and then insert it into the DOM.

// server.js
const wittyJoke =
  "Why did the programmer cross the road?\
   There was something he wanted to C.";
app.get("/api/witty-joke", (req) => {
  if (isPaidUser(req)) {
    return { wittyJoke };
  } else {
    return { wittyJoke: null };
  }
});

// client.js
const ClientArticle = () => {
  const { wittyJoke, loadingJoke } = customFetch("/api/witty-joke");
  // THIS I DON’T LIKE...
  if (loadingJoke) return <p>Ugly loader</p>;
  return (
    <p>
      {wittyJoke
        ? wittyJoke
        : "You have to pay to see jokes.\
         Humor is a serious business."}
    </p>
  );
};

CSR involves redundant client-side computations and a lot of ugly loaders.

It works, but is it the best approach? Your server will have to serve witty jokes for each reader. If anything makes the JavaScript code fail, the paid user won’t have their dose of fun and might get angry. If users have a slow network or a slow computer, they will see an ugly loader while their joke is being downloaded. Remember that most visitors browse via a mobile device!

This problem only gets worse as the number of API calls increases. Remember that a browser can only run a handful of requests in parallel (usually 6 per server/proxy). Server-side rendering is not subject to this limitation and will be faster when it comes to fetching data from your own internal services.

SSR Per Request: Bitten By The First Byte

Per-request Server-Side Rendering (SSR) generates the content on demand, on the server. If the user is paid, the server returns the full article directly as HTML. Otherwise, it returns the bland article without any fun in it.

// page.js: server-code
async function getServerSideProps(req) {
  if (isPaidUser(req)) {
    const { wittyJoke } = getWittyJoke();
    return { wittyJoke };
  } else {
    return { wittyJoke: null };
  }
}
// page.js: client-code
const SSRArticle = ({ wittyJoke }) => {
  // No more loader! But...
  // we need to wait for "getServerSideProps" to run on every request
  return (
    <p>
      {wittyJoke
        ? wittyJoke
        : "You have to pay to see jokes. Humor is a serious business."}
    </p>
  );
};

SSR removes client-side computations, but not the loading time.

We don’t rely on client-side JavaScript anymore. However, it’s not energy-efficient to render the article for each and every request. The Time To First Byte (TTFB) is also increased because we have to wait for the server to finish its work before we start seeing some content.

We’ve replaced the ugly client-side loader with an even uglier blank screen! And now we even pay for it!

The “stale-while-revalidate” cache control strategy can reduce the TTFB issue by serving a cached version of the page until it’s updated. But it won’t work out-of-the-box for personalized content, as it can cache only one version of the page per URL without taking cookies into account and cannot handle the security checks needed for serving paid content.
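
For reference, this strategy usually boils down to a single response header. A minimal sketch with Next.js (the values are arbitrary):

// pages/some-page.js
export async function getServerSideProps({ res }) {
  // Serve the cached page for up to 10s, and keep serving a stale copy
  // for up to 59s while it revalidates in the background.
  res.setHeader(
    "Cache-Control",
    "public, s-maxage=10, stale-while-revalidate=59"
  );
  return { props: {} };
}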

Static Rendering: The Key To The Rich Guest/Poor Customer Problem

At this point, you are hitting what I call the “rich guest/poor customer” problem: your premium users get the worst performance instead of getting the best.

By design, client-side rendering and per-request server-side rendering involve the most computations compared to static rendering, which happens only once at build time.

99% of the websites I know will pick either CSR or SSR and suffer from the rich guest/poor customer problem.

Deep-Dive Into Segmented Rendering

Segmented Rendering is just a smarter way to do static rendering. Once you understand that it’s all about caching renders and then getting the right cached render for each request, everything will click into place.

Static Rendering Gives The Best Performance But Is Less Flexible

Static Site Generation (SSG) generates the content at build-time. That’s the most performant approach because we render the article once and for all. It is then served as pure HTML.

This explains why pre-rendering at build-time is one of the cornerstones of the Jamstack philosophy. As a newly promoted “Head of Performance,” that’s definitely what you want!

As of 2022, all Jamstack frameworks take roughly the same approach to static rendering:

  • you compute a list of all possible URLs;
  • you render a page for each URL.
const myWittyArticles = [
  "/how-to-repair-a-smashed-magazine",
  "/segmented-rendering-makes-french-web-dev-famous",
  "/jamstack-is-so-2022-discover-haystack",
];

Result of the first step of static rendering: computing a bunch of URLs that you will prerender. For a blog, it’s usually a list of all your articles. In step 2 you simply render each article, one per URL.

This means that one URL strictly equals one version of the page. You cannot have a paid and a free version of the article on the same URL even for different users. The URL /how-to-repair-a-smashed-magazine will deliver the same HTML content to everyone, without any personalization option. It’s not possible to take request cookies into account.

Segmented Rendering can go a step further and render different variations for the same URL. Let’s learn how.

Decoupling URL And Page Variation

The most naive solution to allow personalized content is to add a new route parameter to the URL, for instance, “with-jokes” versus “bland.”

const premiumUrl = "/with-jokes/how-to-repair-a-smashed-magazine";
const freeUrl = "/bland/how-to-repair-a-smashed-magazine";

An implementation with Next.js will look roughly like this:

// e.g. in pages/[routeParam]/how-to-repair-a-smashed-magazine.js
export async function getStaticPaths() {
  return {
    paths: [
      // for paid users
      "/with-jokes/how-to-repair-a-smashed-magazine",
      // for free users
      "/bland/how-to-repair-a-smashed-magazine",
    ],
    fallback: false,
  };
}

export async function getStaticProps({ params }) {
  if (params.routeParam === "with-jokes") {
    const { wittyJoke } = getWittyJoke();
    return { props: { wittyJoke } };
  }
  return { props: { wittyJoke: null } };
}

The first function computes 2 URLs for the same article, a fun one and a bland one. The second function gets the joke, but only for the paid version.

Great, you have 2 versions of your articles. We can start seeing the “Segments” in “Segmented Rendering” — paid users versus free users, with one rendered version for each segment.

But now, you have a new problem: how to redirect users to the right page? Easy: redirect users to the right page, literally! With a server and all!

It may sound weird at first that you need a web server to achieve efficient static rendering. But trust me on this: the only way to achieve the best performance for a static website is by doing some server optimization.

A Note On “Static” Hosts

If you come from the Jamstack ecosystem, you may be in love with static hosting. What’s a better feeling than pushing a few files and getting your website up and running on GitHub Pages? Or hosting a full-fledged application directly on a Content Delivery Network (CDN)?

Yet “static hosting” doesn’t mean that there is no server. It means that you cannot control the server. There is still a server in charge of pointing each URL to the right static file.

Static hosting should be seen as a limited but cheap and performant option to host a personal website or a company landing page. If you want to go beyond that, you will need to take control over the server, at least to handle things such as redirection based on the request cookies or headers.

No need to call a backend expert though. We don’t need any kind of fancy computation. A very basic redirection server that can check if the user is paid will do.

Great news: modern hosts such as Vercel or Netlify implement Edge Handlers, which are exactly what we need here. Next.js implements those Edge Handlers as “middlewares,” so you can code them in JavaScript.

The “Edge” means that the computations happen as close as possible to the end-user, as opposed to having a few big centralized servers. You can see them as the outer walls of your core infrastructure. They are great for personalization, which is often related to the actual geographical location of the user.

Easy Redirection With Next.js Middlewares

Next.js middlewares are dead fast and dead simple to code. Unlike cloud proxies such as AWS Gateway or open-source tools such as Nginx, middlewares are written in JavaScript, using web standards, namely the Fetch API.

In the “Segmented Rendering” architecture, middlewares are simply in charge of pointing each user request to the right version of the page:

import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export async function middleware(req: NextRequest) {
  // isPaidFromReq can read the cookies, get the authentication token,
  // and verify if the user is indeed a paid member or not
  const isPaid = await isPaidFromReq(req);
  const routeParam = isPaid ? "with-jokes" : "bland";
  return NextResponse.redirect(
    new URL(`/${routeParam}/how-to-repair-a-smashed-magazine`, req.url)
  );
}

A middleware that implements Segmented Rendering for paid and free users.

Well, that’s it. Your first day as a “Head of Performance” is over. You have everything you need to achieve the best possible performance for your weird business model!

Of course, you can apply this pattern to many other use cases: internationalized content, A/B tests, light/dark mode, personalization… Each variation of your page makes up a new “Segment:” French users, people who prefer the dark theme, or paid users.

Cherry On The Top: URL Rewrites

But hey, you are the “Head of Performance,” not the “Average of Performance”! You want your web app to be perfect, not just good! Your website is certainly very fast on all metrics, but now your article URLs look like this:

/bucket-A/fr/light/with-jokes/3-tips-to-make-an-url-shorter

That’s not really good-looking… Segmented Rendering is great, but the end-user doesn’t have to be aware of its own “segments.” The punishment for good work is more work, so let’s add a final touch: instead of using URL redirects, use URL rewrites. They are exactly the same thing, except that you won’t see parameters in the URL.

// A rewrite won’t change the URL seen
// by the end user => they won’t see the "routeParam"
return NextResponse.rewrite(
  new URL(`/${routeParam}/how-to-repair-a-smashed-magazine`, req.url)
);

The URL /how-to-make-an-url-shorter, without any route parameter, will now display the right version of the page depending on the user’s cookies. The route parameter still “exists” in your app, but the end-user cannot see it, and the URL stays clean. Perfect.

Summary

To implement Segmented Rendering:

  1. Define your “segments” for a page.
    Example: paid users versus free users, users from company A versus users from company B or C, etc.
  2. Render as many static variations of a page as you need, with one URL per segment.
    Example: /with-jokes/my-article, /bland/my-article. Each variation matches a segment, for instance, paid or free users.
  3. Set up a very small redirection server that checks the HTTP request content and redirects the user to the right variation, depending on their segment.
    Example: paid users are redirected to /with-jokes/my-article. We can tell if a user is paid or not by checking their request cookies.

What’s Next? Even More Performance!

Now you can have as many variations of the same page as you want. You solved your issue with paid users elegantly. Better yet, you implemented a new pattern, Segmented Rendering, that brings personalization to the Jamstack without sacrificing performance.

Final question: what happens if you have a lot of possible combinations? Like 5 parameters with 10 values each? You cannot render an infinite number of pages at build-time — that would take too long. And maybe you don’t actually have any paid users in France that picked the light theme and belong to bucket B for A/B testing. Some variations are not even worth rendering.

Thankfully, modern frontend frameworks have you covered. You can use an intermediate pattern such as Incremental Static Regeneration from Next.js or Deferred Static Generation from Gatsby to render variations only on demand.
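
As a rough sketch (reusing the with-jokes/bland route parameter from before), Incremental Static Regeneration in Next.js could look like this:

export async function getStaticPaths() {
  return {
    // Don’t prerender every combination upfront…
    paths: [],
    // …instead, render a variation the first time it is requested,
    // then cache the result like any other static page.
    fallback: "blocking",
  };
}

export async function getStaticProps({ params }) {
  return {
    props: { routeParam: params.routeParam },
    // Optionally re-render the cached page in the background after 60 seconds.
    revalidate: 60,
  };
}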

Website personalization is a hot topic, sadly adversarial to performance and energy consumption. Segmented Rendering resolves this conflict elegantly and lets you statically render any content, be it public or personalized for each user segment.


Resolving Conflicts Between Designers And Engineers

In software development, UX designers and software engineers can get locked in verbal combat, which feels like a chess game of debate and wordplay. I’ve been there too many times and have the battle scars to prove it. If there is anything I’ve learned from conflicts in the software development process, it’s that they are not exclusively a cultural phenomenon, and, obviously, they are not productive.

The people involved in the conflicts generally react to a negative state of the world that no one wishes to be in. Sometimes it says more about a particular culture than us as individuals. Regardless of the reason, it’s not a good state for the organization or company to be in and will rob you of productivity, team cohesion, and focus on delivering customer value. Every company and department has its own challenges. In this article, we will go over some areas where you might find the challenges manifesting, what some of the contributing factors are, and strategies to work through the challenges.

I used to see the conflict between design and engineering in the software development process as annoying and an impediment to software design. When I began to see patterns in people and organizations involved in the conflicts mentioned above, I started to lean into the conflicts and see them more as an opportunity. I want to highlight some common contributing factors to design and engineering conflicts and lay out some strategies to work through them.

While engineers hold the keys to feasibility and how the technology works, designers hold the keys to the customer/market need and principles toward a delightful user experience.

If the two sides (design and engineering) don’t come together in perfect harmony, it could result in a functional feature that customers don’t enjoy or a good experience that’s not efficient and/or expensive to build. It’s not an either-or but an “and.” The design and the tech have to come together in harmony to balance the feasibility with aspects of a delightful experience.

I have found that, often, it’s easier for an engineer to see the aspects of a good user experience than for a designer to understand the technical aspects of how something functions, let alone how feasible it is. Regardless, the two sides work best when both lower their guards and reach across the aisle. Only then are your teams truly committed to delivering a great customer experience.

Assessing The Chasms

There can be several contributors to the aforementioned dissension. From an organizational structure standpoint, contributors to the chasm could be boiled down to the following categories: culture, teams, roles, and individuals.

A review of the literature in saylor.org’s course lays these categories out in a bit more detail, and they note that “sometimes the structure of the organization itself can directly lead to conflict” whether that be based on the organizational structure or the authority provided within the structure.

I’ve seen these categories manifested in several ways. You may be able to relate to some of them. Let’s discuss them below.

Culture

People do what they get rewarded for. If your organization incentivizes teams based on productivity, they will get good at cranking out widgets. Even if those widgets fall short of the customer expectations, the teams will become good at delivering what you asked for. Their focus becomes output over outcomes. This can cause teams to cut corners and produce “good-enough” solutions that miss the mark once released to customers. This is largely driven by the culture within the organization and can permeate the management hierarchy.

Teams

Similar to what I mentioned regarding culture, a team, while aligned with the larger culture, typically has a team sub-culture. Sometimes a team culture can be seemingly at odds with the corporate culture. For example, a corporate culture that is collaborative and open can have teams that are hyper-focused on metrics and productivity that misconstrue the intent of the culture to meet productivity metrics. This is generally done to reduce uncertainty. I’ve worked with teams that became so hyper-focused on themselves that sizing a single user story practically took up an entire meeting.

Roles

Staying in our lanes as designers and engineers. Personally, I couldn’t be more opposed to people staying in their lanes. I recognize that designers and engineers fulfill specific business needs, but we should be a unified body when it comes down to delivering outcomes. Engineers should have opinions on design, and designers should have opinions on the applied technical solution. The cross-pollination of views should support one another, which almost always leads to a better outcome.

Individuals

I can be a contributor or a hindrance. I can become so focused on myself and my desires that I close myself off to others’ views. Rather than focusing on the outcome and the team, my views are forced on others, including the customer. I have seen this cause engineers to become entirely focused on output, “Just tell me what you want from me,” rather than collaborating on the outcome.

United We Stand Or Divided We Fall

Let me provide a few examples from my experience. I worked at one company where I tried several experiments to work more effectively with engineering. At a certain point, a schism had become entrenched, along with a rather strong engineering-focused culture of “let us build it, get out of the way, and we’ll let you know what we need.” This tension had created an “us vs. them” mentality.

For this same company, I was brainstorming with a team for an upcoming project. As we started to get into sketching, I was met with blank stares followed by a litany of technical constraints. It was awkward, to say the least. Reflecting on it, I don’t think it went well because there wasn’t Product Owner buy-in, nor was the proper context set. I realized from that experience that when we are entrenched in a particular culture/team mindset focused on output for an extended period of time, it can be poisonous and entrench not only our behavior but also our mindset.

Last but not least, while working at a different company, I noticed a middle management cultural problem. The senior leadership spoke about collaboration, great design, and high quality. The agile teams executing on work agreed with this philosophy and were trying to work towards the leadership vision. However, middle management was making decisions that conflicted with senior leadership. The engineering manager would tell engineers to build things that completely excluded design. When designers tried to engage in the process, the engineers were just following orders. Engineering was building and shipping things with little to no Product Management or UX Design input, and it showed in the product.

For an organization to be successful, they need to be united. We need to balance our thought processes and cross-pollinate our skills for our organizations to thrive.

It is wise to give the organization and teams the benefit of the doubt. Too often, we’re all just caught in the middle of what appears to be dysfunction. That dysfunction can be an opportunity for change if we learn to navigate the distraction that leads to conflicts.

Distractions

Several studies have dug into this problem space of conflicts within teams, specifically between UX designers and software engineers. A 2002 study by Dejana Tomasevic and Tea Pavicevic found that “task conflicts are the most common type of conflict.” A 2019 study by Marissa Wilson also found that “100% of conflicts reported were task conflicts.”

After being in the trenches for a while, I’m not surprised by these studies, so I’d like to add some color to the study findings directly from the trenches. From my experience, most of the barriers to engineering and design collaboration are simply distractions. Although cultural issues can be barriers to collaboration, if people really want to bring about a positive change, they will find a way.

Let’s review some distractions that you’ll likely need to overcome:

  • Timing
    Designers generally build up ideas (induction) while engineers break them down (deduction) to chunk out building the solution. While there is time to build up ideas and time to break down work, getting the timing right for either of these is important to avoid conflicts. Too often, we’re trying to build up ideas, and our partners are trying to break down the work.
  • Miscommunications
    Designers and engineers have different skill sets and backgrounds. As a result, both groups come from different perspectives, speak different languages, and use different terms. This can lead to tremendous frustration, as Ari Joury points out in his article on designer and developer clashes.
  • Misalignments
    If you don’t have the same frame of reference, you will have disagreements. It might be impossible to share every detail at every moment with everyone on the team, but there must be a shared understanding of the problem and the value the team is targeting. Misalignments can cause teams to pull in different directions.
  • Role factions
    Sometimes teams get too focused on who’s responsible/owns what part of the solution. In a truly collaborative environment, the team owns the solution they’re working toward. Designers should get comfortable leaning into the engineering space, even if it’s to learn more about it. Same for engineers, lean into the design space and learn.
  • Metrics
    Metrics help teams to be more focused on the outcome. They also help us be more incremental in our approach. You definitely want a healthy balance here because metrics on value delivery vs. metrics on team productivity can sometimes feel at odds. This can lead teams to focus on getting all the details right, which robs time away from delivering an outcome. Winston Churchill once said, “perfection is the enemy of progress.”

Though the list above is not exhaustive, there have been common themes in the various companies and teams I’ve worked with. The teams you work with within your company may be experiencing one or multiple of these distractions. Hopefully, not all of them! Regardless, designers and engineers will have conflicts if they work together. It’s our job as business partners to lean into them and work through them for teams to be successful.

Striving For Unity

I think it’s crucial to understand that trust has to be the cornerstone of any team unity. In an article by Built In in 2021, they provide a variety of examples of uniting teams. In it, Jillian Priese, Engineering Manager at Granular, tells us that for her teams, “When trust is present, it makes all the difference in the world,” and that without trust, it’s “easy for engineers and designers to question one another’s motivations and abilities.” Whatever the barrier, we must employ strategies to close the gaps and bring unity to the team.

Here are some tips that I’ve found to be effective:

  • Discover together.
    While the design team is in the discovery phase of a project, make a point to include the engineers. In the research stage, they can be observers while hearing from actual users of their code. Don’t forget to include them in the ideation process as well. Give them room to fall in love with the problem space. As you discover, iterate, and refine, pull in engineers to get their input, particularly in areas where you want special interactions to take place. They can help you balance the approach to provide more value and be more feasible.
  • Be curious.
    Try not to assume too much and be prepared to learn. Designers have a lot to learn from engineers, and engineers can learn a lot from designers. Cross-pollination of skills strengthens teams. You don’t have to be a designer or engineer, but you should spend some time learning a little about your partner’s role and their work. Exercise empathy and keep each other honest.
  • Speak their language.
    As Ari Joury notes in his article, designers and engineers speak different languages. They sometimes assume they’re speaking the same language and using the same terms, only to find, when the wires get crossed, that they are talking about different things. Sometimes you will need to slow down for clarity and shared understanding. Engineers need to be willing to patiently translate unfamiliar technology for designers, and designers need to be ready to patiently sketch and use visuals to translate concepts to engineers.
  • Be together.
    I literally mean that you sit with each other when you can. As a designer, I have learned a lot from sitting with or near engineers: about their work, about each of them individually, about myself, and about the need to modify my work behavior to be a better partner. If you happen to be remote, make a team commitment to be available whenever needed, and be sure you follow through on that commitment to help build trust.

It’s really powerful and rewarding when engineers are more aligned with UX designers because they can elevate good designs to be great designs when they’re fully engaged. I like to believe that engineers breathe life into UX designs through the power of technology.

Practical Examples

As noted above, no company is the same, and different tactics should be used depending on your team’s challenges. Here are some practical examples of how I have put the tips above into practice.

Growing By Learning

At one organization, I came in as the lone designer being dropped into an existing team. It was awkward because the culture had generally been tech-centric. I was the outsider and struggled to make headway. Over time, I realized that the team was open to more design collaboration but was a bit new to working with a designer. The team was in another country, so I petitioned to spend a few days working with them.

Part of my plan was to focus on our epic, which had a lot of frontend work; the other part was applying some design exercises. Since the team was new to design thinking, we did a lateral thinking exercise and a UI pattern workshop. After that, things began to gel with us. The team became more user-aware and empathetic and started to come to me with UI defects and great ideas for solutions. I enjoyed working with that team.

Make Yourself Available

At another smaller organization, the UX team was positioned within the Product Management (PM) department. The PM and Engineering departments were located on separate floors of the same building. It didn’t take long to realize that the barriers to collaboration, while manifesting in several ways, were rooted in physical separation.

To start working to resolve this, I set up shop in the engineering space a few times per week. A sort of “UX Help Desk,” if you will. At first, I think they thought it was weird, but eventually, people began to open up. It facilitated many opportunities to better understand the team’s needs, educate them on the users’ needs, learn about their tech stack, and find inroads with Product Owners and Engineering Managers. Fortunately, much of the engineering team appreciated it. So, we built great relationships and made a lot of progress in a short amount of time.

Playing To Their Tune

At a much larger organization, I worked against a heavily entrenched engineering-centric culture. I made quite a few mistakes in that environment, such as not seeking more clarity on the authority of roles for the project, not pursuing more clarity in the project direction and priorities, and not pushing back against unreasonable, hot-headed stakeholders.

I gained a lot of patience working with architects who had little experience working with UX designers. We were speaking different languages at different levels about different needs. They had a ton of domain knowledge from years of experience. So, they would pull obscure edge cases out of thin air in conversations as a sort of trump card to any reasonable design recommendation. It was frustrating and humbling. To them, UX was all about “looking pretty” (the visceral aspects of the user interface). Sigh.

From my end, they saw UX as the lipstick they could just apply to the pigs they wanted to build. The in-road there was playing to their mindset. The architects fundamentally wanted to quickly build a system that was robust, scalable, and easy to maintain. The system being user-centered was the least important thing in their minds, and even that was generally boiled down to “looking nice,” which is not user-centered.

However, I believed we could build user-centered solutions and teach them along the way, but I had to think in a more modular and scalable way. We needed to establish a frontend framework quickly and lay down some foundational guidelines we could build upon. We used those as building blocks that engineering could buy into. That helped them see UX as an ally to their goals rather than an adversary. We created a design system that helped us focus on user needs yet design efficiently at scale. While we got buy-in pretty quickly with engineers, we eventually began to see traction with the architects as the slow, grueling process slogged on.

Conclusion

Finding the impediments that are preventing the unification of the clan is important. It’s essential for your organization, your customers, and your sanity. It does entail effort but is well worth it. Experiment with your teams to find what works for them. The same strategy might not work for every team. When you meet resistance, don’t pull away, but lean into it and be patient.

As a reminder of the things we covered:

  • Assess the level at which you’re finding the biggest chasms;
  • Identify the distractions you’re seeing in your teams and what might be contributing to them;
  • Take action and experiment with different tactics to establish unity.

Challenge the status quo when appropriate. You may ruffle some feathers at first, but sometimes disruption is needed to get to a better state.

You may not make friends at first, but you will earn respect. You may find that thirty percent of the people are on board with you. Another thirty percent are interested but not yet sold. The remaining nay-sayers who want to continue with the status quo will eventually come along as the rest of the clan unites around them. Fight the good fight, my friends, and unite the clans!


Collective #719

Preline

Preline UI is an open-source set of prebuilt UI components based on the utility-first Tailwind CSS framework.

Check it out

Bun

Bun is a new JavaScript runtime with a native bundler, transpiler, task runner and npm client built-in.

Check it out

Felt

Felt brings the simple elegance of modern creative software to the world of maps.

Check it out

Masonry? In CSS?!

Michelle Barker explains the current approach for creating a Masonry layout and what the downsides are.

Read it

Adobe X Bowie

Step into Bowie’s virtual dressing room. An amazing gaming project made by Bruno Arizio and Resn for Adobe.

Check it out

The post Collective #719 appeared first on Codrops.

Collective #715

Meet Web Push

WebKit now supports the W3C standards for Push API, Notifications API, and Service Workers to enable Web Push.

Read it

GitNoter

GitNoter is a web application that allows users to store notes in their git repository.

Check it out

Orbit Gallery

Infinite orbit gallery made with THREE.js by Michal Zalobny, based on Luis Bizarro’s Awwwards course. Code can be found .

Check it out

Monorepos in JavaScript & TypeScript

A tutorial on how to use a monorepo architecture in frontend JavaScript and TypeScript with tools like npm/yarn/pnpm workspaces, Turborepo/NX/Lerna, Git Submodules and more.

Read it

Redactle

A daily puzzle game where you have to find the title of a random Wikipedia article by guessing words to reveal them on the page.

Check it out

The post Collective #715 appeared first on Codrops.

The Case For Prisma In The Jamstack

The Jamstack approach originated from a speech given by Netlify’s CEO Matt Biilmann at Smashing Magazine’s very own Smashing Conf in 2016.

Jamstack sites serve static pre-rendered content through a CDN and generate dynamic content through microservices, APIs & serverless functions. They are commonly created using JavaScript frameworks, such as Next.js or Gatsby, and static site generators — Hugo or Jekyll, for example. Jamstack sites often use a Git-based deployment workflow through tools, such as Vercel and Netlify. These deployment services can be used in tandem with a headless CMS, such as Strapi.

The goal of using Jamstack to build a site is to create a site that is highly performant and economical to run. These sites achieve high speeds by pre-rendering as much content as possible and by caching responses on “the edge” (i.e. executing on servers as close to the user as possible, e.g. serving a Mumbai-based user from a server in Singapore instead of San Francisco).

Jamstack sites are more economical to run, as they don’t require a dedicated server as a host. Instead, they can provision usage from cloud services (PaaS), hosts, and CDNs at a lower price. These services are also set up to scale in a cost-efficient manner without developers having to change their infrastructure, which reduces their workload.

The other tool that makes up this combination is Prisma — an open-source ORM (object-relational mapper) built for TypeScript & JavaScript.

Prisma is a JavaScript / TypeScript tool that interprets a schema written in Prisma’s schema language and generates a type-safe module that provides methods to create, read, update, and delete records (CRUD).

Prisma handles connections to the database (including pooling) and database migrations. It can connect with databases that use PostgreSQL, MySQL, SQL Server or SQLite (additionally MongoDB support is in preview).

To help you get a sense of Prisma, here’s some basic example code to handle the CRUD of users:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

const user = await prisma.user.create({
  data: {
    name: 'Sam',
    email: 'sam@sampoder.com',
  },
})

const users = await prisma.user.findMany()

const updateUser = await prisma.user.update({
  where: {
    email: 'sam@sampoder.com',
  },
  data: {
    email: 'deleteme@sampoder.com',
  },
})

const deleteUser = await prisma.user.delete({
  where: {
    email: 'deleteme@sampoder.com',
  },
})

The associated project’s Prisma schema would look like:

datasource db {
  url      = env("DATABASE_URL")
  provider = "postgresql"
}

generator client {
  provider = "prisma-client-js"
}

model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  name      String?
}

The Use Cases for Prisma

Armed with knowledge of how Prisma operates, let’s now explore where we can use it within Jamstack projects. Data is important in two aspects of the Jamstack: whilst pre-rendering static pages and on API routes. These are tasks often achieved using JavaScript tools, such as Next.js for static pages and Cloudflare Workers for API routes. Admittedly, these aren’t always achieved with JavaScript — Jekyll, for example, uses Ruby! So, maybe I should amend the title to the case for Prisma in the JavaScript-based Jamstack. Anyhow, onwards!

A very common use case for the Jamstack is a blog, and Prisma comes in handy there for building a reactions system. You’d use it in two API routes: one that fetches and returns the reaction count, and another that registers a new reaction. To achieve this, you could use Prisma’s create and findMany methods!
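
As a rough sketch, both routes could even live in a single Next.js API handler. The Reaction model, its articleSlug field, and the shared prisma instance in lib/prisma are assumptions made for this example:

// pages/api/reactions.js
import prisma from '../../lib/prisma'

export default async function handler(req, res) {
  if (req.method === 'POST') {
    // Register a new reaction for an article
    const reaction = await prisma.reaction.create({
      data: { articleSlug: req.body.articleSlug },
    })
    return res.status(201).json(reaction)
  }
  // Fetch all reactions for an article and return the count
  const reactions = await prisma.reaction.findMany({
    where: { articleSlug: req.query.articleSlug },
  })
  res.json({ count: reactions.length })
}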

Another common use case for the Jamstack is a landing page, and there’s nothing better than a landing page with some awesome stats! In the Jamstack, we can pre-render these pages with stats pulled from our databases, which we can achieve using Prisma’s read methods.
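
For instance, a small sketch of a stats section pre-rendered with Next.js could look like this (the user and order models are assumptions for the example):

import prisma from '../lib/prisma'

export async function getStaticProps() {
  // Pull the numbers straight from the database at build time
  const [userCount, orderCount] = await Promise.all([
    prisma.user.count(),
    prisma.order.count(),
  ])
  return { props: { userCount, orderCount } }
}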

Sometimes, however, Prisma can be slightly overkill for certain tasks. I’d recommend avoiding using Prisma and relational databases in general for solutions that need only a single database table, as it adds additional and often unnecessary development complexity in these cases. For example, it’d be overkill to use Prisma for an email newsletter signup box or a contact form.

Alternatives to Prisma

So, we could use Prisma for these tasks, but we could use a plethora of other tools to achieve them. So, why Prisma? Let’s go through three Prisma alternatives, and I’ll try to convince you that Prisma is preferable.

Cloud Databases / Services

Services like Airtable are incredibly popular in the Jamstack space (I myself have used them a ton); they provide you with a database-like platform that you can access through a REST API. They’re good fun to use and prototype with; however, Prisma is arguably a better choice for Jamstack projects.

Firstly, with cost being a major factor in Jamstack’s appeal, you may want to avoid some of these services. For example, at Hack Club, we spent $671.54 on an Airtable Pro subscription last month for our small team (yikes!).

On the other hand, hosting an equivalent PostgreSQL database on Heroku’s platform costs $9 a month. There certainly is an argument to make for these cloud services based on their UI and API, but I would respond by pointing you to Prisma Studio and the aforementioned JavaScript / TypeScript client.

Cloud services also suffer from a performance issue, especially considering that you, as the user, have no ability to change or improve the performance. The cloud service providing the database puts a middleman between your program and the underlying database, slowing down how fast you can reach it. With Prisma, however, you’re making direct calls to your database from your program, which reduces the time it takes to query or modify the database.

Writing Pure SQL

So, if we’re going to access our PostgreSQL database directly, why not just use the node-postgres module or — for many other databases — their equivalent drivers? I’d argue that the developer experience of using Prisma’s client makes it worth the slightly increased load.

Where Prisma shines is with its typings. The module generated for you by Prisma is fully type-safe — it interprets the types from your Prisma schema — which helps you prevent type errors with your database. Furthermore, for projects using TypeScript, Prisma auto-generates type definitions that reflect the structure of your model. Prisma uses these types to validate database queries at compile-time to ensure they are type-safe.
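
As a tiny illustration, reusing the User model from the schema above, the generated types let the compiler catch mistakes before they ever reach the database:

import { PrismaClient, User } from '@prisma/client'

const prisma = new PrismaClient()

async function findUserByEmail(email: string): Promise<User | null> {
  // A misspelled field name or a wrongly typed value fails at compile time
  return prisma.user.findUnique({ where: { email } })
}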

Even if you aren’t using TypeScript, Prisma also offers autocompletion / IntelliSense, linting, and formatting through its Visual Studio Code extension. There are also community-built and community-maintained plugins for Emacs (emacs-prisma-mode), Neovim (coc-prisma), JetBrains IDEs (Prisma Support), and Nova (the Prisma plugin) that implement the Prisma Language Server to achieve code validation. Syntax highlighting is also available for a wide array of editors through plugins.

Other ORMs

Prisma is, of course, not the only ORM available for JavaScript / TypeScript. For example, TypeORM is another high-quality ORM for JavaScript projects. In this case, it is going to come down to personal preference, and I encourage you to try a range of ORMs to find your favourite. I personally chose Prisma for my projects for three reasons: the extensive documentation (especially this CRUD page, which is a lifesaver), the additional tooling within the Prisma ecosystem (e.g. Prisma Migrate and Prisma Studio), and the active community around the tool (e.g. Prisma Day and the Prisma Slack).

Using Prisma in Jamstack Projects

So, if I’m looking to use Prisma in a Jamstack project, how do I do that?

Next.js

Next.js is growing to be a very popular framework in the Jamstack space, and Prisma is a perfect fit for it. The examples below are fairly standard and can be transferred to other projects using different JavaScript / TypeScript Jamstack tools.

The main rule of using Prisma within Next.js is that it must be used in a server-side setting; this means it can be used in getStaticProps, getServerSideProps, and in API routes (e.g. api/emojis.js).

In code, it looks like this (example taken from a demo app I made for a talk at Prisma Day 2021 which was a virtual sticker wall):

import prisma from '../../../lib/prisma'
import { getSession } from 'next-auth/client'

function getRandomNum(min, max) {
  return Math.random() * (max - min) + min
}

export async function getRedemptions(username) {
  let allRedemptions = await prisma.user.findMany({
    where: {
      name: username,
    },
    select: {
      Redemptions: {
        select: {
          id: true,
          Stickers: {
            select: { nickname: true, imageurl: true, infourl: true },
          },
        },
        distinct: ['stickerId'],
      },
    },
  })
  allRedemptions = allRedemptions[0].Redemptions.map(x => ({
    number: getRandomNum(-30, 30),
    ...x.Stickers,
  }))
  return allRedemptions
}

export default async function RedeemCodeReq(req, res) {
  let data = await getRedemptions(req.query.username)
  res.send(data)
}

As you can see, it integrates really well into a Next.js project. But you may notice something interesting: '../../../lib/prisma'. Previously, we imported Prisma like this:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

Unfortunately, this is due to a quirk in Next.js’ hot reloading during development, which can spawn a new PrismaClient on every reload and exhaust the database connection limit. So, Prisma recommends instantiating the client once in a dedicated file and importing it from there in every file that needs it.
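
The recommended snippet is roughly the following singleton pattern (a sketch based on Prisma’s guidance; the file name lib/prisma.js is just a convention):

// lib/prisma.js
import { PrismaClient } from '@prisma/client'

// Reuse a single PrismaClient across hot reloads in development
// so we don’t exhaust the database connection limit.
const prisma = global.prisma || new PrismaClient()

if (process.env.NODE_ENV !== 'production') {
  global.prisma = prisma
}

export default prisma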

Redwood

Redwood is a bit of an anomaly in this section, as it isn’t necessarily a Jamstack framework. It began under the banner of bringing full stack to the Jamstack but has transitioned to being inspired by Jamstack. I’ve chosen to include it here, however, as it takes an interesting approach of including Prisma within the framework.

It starts, as always, with creating a Prisma schema, this time in api/db/schema.prisma (Redwood adds this to every new project). However, to query and modify the database, you don’t use Prisma’s default client. Instead, in Redwood, GraphQL mutations and queries are used. For example, in Redwood’s example todo app, this is the GraphQL mutation used to create a new todo:

const CREATE_TODO = gql`
  mutation AddTodo_CreateTodo($body: String!) {
    createTodo(body: $body) {
      id
      __typename
      body
      status
    }
  }
`

And in this case, the Prisma model for a todo is:

model Todo {
  id     Int    @id @default(autoincrement())
  body   String
  status String @default("off")
}

To trigger the GraphQL mutation, we use the useMutation function which is based on Apollo’s GraphQL client imported from @redwoodjs/web:

const [createTodo] = useMutation(CREATE_TODO, {
    //  Updates Apollo's cache, re-rendering affected components
    update: (cache, { data: { createTodo } }) => {
      const { todos } = cache.readQuery({ query: TODOS })
      cache.writeQuery({
        query: TODOS,
        data: { todos: todos.concat([createTodo]) },
      })
    },
  })

  const submitTodo = (body) => {
    createTodo({
      variables: { body },
      optimisticResponse: {
        __typename: 'Mutation',
        createTodo: { __typename: 'Todo', id: 0, body, status: 'loading' },
      },
    })
  }

With Redwood, you don’t need to worry about setting up the GraphQL schema / SDLs after creating your Prisma schema, as you can use Redwood’s generator commands to convert the Prisma schema into GraphQL SDLs and services — yarn rw g sdl Todo, for example.

Cloudflare Workers

Cloudflare Workers is a popular platform for hosting Jamstack APIs, as it puts your code on the “edge.” However, the platform has its limitations, including a lack of TCP support, which the traditional Prisma Client relies on. Now, though, through the Prisma Data Proxy, it is possible.

To use it, you’ll need a Prisma Data Platform account, which is currently free. Once you’ve followed the setup process (make sure to enable the Prisma Data Proxy), you’ll be provided with a connection string that begins with prisma://. You can use that Prisma connection string in your .env file in place of the traditional database URL:

DATABASE_URL="prisma://aws-us-east-1.prisma-data.com/?api_key=•••••••••••••••••"

And then, instead of using npx prisma generate, use this command to generate a Prisma client:

PRISMA_CLIENT_ENGINE_TYPE=dataproxy npx prisma generate

Your database requests will be proxied through, and you can use the Prisma Client as usual. It isn’t a perfect setup, but for those looking for database connections on Cloudflare Workers, it’s a relatively good solution.

Conclusion

To wrap up, if you’re looking for a way to connect Jamstack applications with a database, I wouldn’t look further than Prisma. Its developer experience, extensive tooling, and performance make it the perfect choice. Next.js, Redwood, and Cloudflare Workers — each of them has a unique way of using Prisma, but it still works very well in all of them.

I hope you’ve enjoyed exploring Prisma with me. Thank you!


Magical SVG Techniques

SVGs have become more and more popular in the past few years. For good reasons. They are scalable, flexible, and, most importantly, lightweight. And, well, they have even more to offer than you might think. We came across some magical SVG techniques recently that we’d love to share with you. From SVG grids and fractional SVG stars to SVG masks, fancy grainy SVG gradients, and handy SVG tools. We hope you’ll find something useful in here.

By the way, a while ago, we also looked at SVG Generators — for everything from shapes and backgrounds to SVG path visualizers, cropping tools, and SVG → JSX generators. If you’re tinkering with SVG, these might come in handy, too.

Generative SVG Grids

Generative art is a wonderful opportunity for everyone who would love to create art but feels more at home in code. Let’s say you want to create geometric patterns, for example. Generative art will take away the difficult decisions from you: What shapes do I use? Where do I put them? And what colors should I use? If you want to give it a try, Alex Trost wrote a tutorial on creating generative art with SVG grids that is bound to tickle your creativity — and teach you more about SVG.

The generative art that Alex creates is a grid of blocks with a random number of rows and columns. Each block has a randomly chosen design and colors from a shared color palette. Alex takes you step by step through the process of coding this piece: from setting up the grid and creating isolated functions to draw SVGs to working with color palettes, adding animations, and more. A fun little project — not only if you’re new to generative art and creative coding.

Generative Landscape Rolls

An awe-inspiring project that bridges the gap between a centuries-old tradition and state-of-the-art coding is {Shan, Shui}. Created by Lingdong Huang and inspired by traditional Chinese landscape rolls, it creates procedurally generated, infinitely-scrolling Chinese landscapes in SVG format. The mountains and trees in the landscape are modeled from scratch using noise and mathematical functions. Fascinating!

Now, if you’re asking yourself how something this complex might work, you’re not alone. Victor Shepelev wanted to get to the bottom of {Shan, Shui}* and made it his advent project to understand how it works. And, indeed, it took him 24 days to fully dig into the code. He summarized his findings in a series of articles.

SVG Paths With Masks

SVGs have a lot of benefits compared to raster images. They are small in size, scalable, animatable, they can be edited with code, and a lot more. You can’t get the textured feel that raster graphics can provide, though. However, we can combine the strengths of vector and raster to create some charming effects. Like Tom Miller did in his Silkscreen Squiggles demo.

Silkscreen Squiggles is an animation where squiggles fill a rectangular canvas. What makes the squiggles special is that they appear to have a paintbrush texture. The secret: a mask with an alpha layer that gives the simple squiggly paths their texture. Alex Trost dissects how it works. Inspiring!

Grainy Gradients

Noise is a simple technique to add texture to an image and make otherwise solid colors or smooth gradients more realistic. But despite designers’ affinity for texture, noise is rarely used in web design. Jimmy Chion explores how we can add texture to a gradient with only a small amount of CSS and SVG.

The trick is to use an SVG filter to create the noise, then apply that noise as a background. Layer it underneath your gradient, boost the brightness and contrast, and that’s it. Potential use cases could be light and shadows or holographic foil effects, for example. The core of this technique is supported by all modern browsers. A clever visual effect to add depth and texture to a design.

Adding Texture And Depth

“Analog” materials like paint and paper naturally add depth to an artwork, but when working digitally, we often sacrifice the organic depth they provide for precision and speed. Let’s bring some texture back into our work! George Francis shares three ways to do so.

The techniques that George explores are quite simple but effective. Tiny random shapes added to a canvas at random points, solid shape fills with lines, and non-overlapping circles distributed evenly but randomly with an algorithm. Inspiring ideas to tinker with.

Cut-Out Effects With CSS And SVG

In a recent front-end project that Ahmad Shadeed was working on, one of the components included a cut-out effect where an area is cut out of a shape. And because there are multiple ways to create such an effect in CSS or SVG, he decided to explore the pros and cons that each of the solutions brings along.

In his blog post “Thinking About The Cut-Out Effect”, Ahmad takes a look at three different use cases for a cutout effect: an avatar with a cut-out status badge that indicates that a user is currently online, a “seen avatar” that consists of overlapping circle avatars that are indicators that a message has been seen in a group chat, as well as a website header with a cut-out area behind a circular logo. Ahmad presents different solutions for each use case — SVG-only, CSS-only, and a mix of both — and explains the pros and cons of each one of them. A comprehensive overview.

Fractional SVG Stars

Are you building a rating component and you want it to support fractional values like 4.2 or 3.7 stars but without using images? Good news, you can achieve fractional ratings with only CSS and inline SVG. Samuel Kraft explains how it works.

The component basically consists of two parts: a list of star icons based on the max rating and an “overlay” div that will be responsible for changing the colors of the stars underneath. This is the magic that makes the fractional part work. The technique is supported in all modern browsers; for older browsers, you can fall back to opacity instead. Clever!

Generative Mountain Ridge Dividers

When Alistair Shepherd built his personal website, he wanted to have section dividers that match the mountain theme of the site. But not any mountain dividers, but dividers with unique ridges for every divider.

Instead of creating a variety of different dividers manually, Alistair decided to use a combination of SVG and terrain generation, a technique that is usually used in game development, to generate the dividers automatically. In a blog post, he explains how it works.

If you’re up for some more horizontal divider inspiration, also be sure to check out Sara Soueidan’s blog post “Not Your Typical Horizontal Rules” in which she shows how she turned a boring horizontal line into a cute “birds on a wire” divider with the help of some CSS and SVG.

Flexible Repeating SVG Masks

Sometimes it’s a small idea, a little detail in a project that you tinker with and can’t let go of until you come up with a tailor-made solution to make it happen. Nothing that seems like a big deal at first glance, but that requires you to think outside the box. In Tyler Gaw’s case, this little detail was a flexible header with a little squiggle at the bottom instead of a straight line. The twist: to make the component future-proof, Tyler wanted to use a seamless, horizontally repeating pattern that he could color with CSS.

To get the job done, Tyler settled on flexible repeating SVG masks. SVG provides the shape, CSS handles the color, and mask-image does the heavy lifting by hiding anything in the underlying div that doesn’t intersect with the shape. A clever approach that can be used as the base for some fun experiments.

Swipey Image Grids

When you think of “SVG animation”, what comes to your mind? Illustrative animation? Well, SVG can be useful for much more than pretty graphics. As Cassie Evans points out, a whole new world of UI styling opens up once you stop looking at SVG purely as a format for illustrations and icons. One of her favorite use cases for SVG: responsive animated image grids.

Cassie doesn’t build her image grid on CSS Grid but uses SVG’s internal coordinate system (which is responsive by design) to design the grid layout. She then adds images to the grid and positions them with preserveAspectRatio. clipPath “swipes” the images in. The final animation relies on GreenSock to ensure that the transforms work consistently across browsers. If you want to dig deeper into the code, be sure to check out Cassie’s blog post in which she explains each step in detail.

Animated SVG Debit Card Illustrations

What if you could animate a debit card design? Probably not on an actual physical card, but rather for a landing page where you’d like to drive interest towards the card’s design or features? Well, that’s an unusual challenge to tackle, and Tom Miller decided to take it on.

In a series of SVG debit card animations, Tom uses GreenSock to animate SVG paths and shapes smoothly, so every card literally comes to life on its own, transforming, rotating, and scaling beautifully, alongside just a few lines of JavaScript. A wonderful inspiration for your next landing page design!

Raster Image To SVG Converter

You need to quickly convert a raster image into an SVG? Then SVGcode is for you. The progressive web app converts image formats like JPG, PNG, GIF, WebP, and AVIF to vector graphics in SVG format.

To convert an image, drop your raster image into the SVGcode app, and the app will trace the image, color by color, until a vectorized version of the input appears. You can choose between color SVG and monochrome SVG, and there are also a number of customization settings to improve the output further, by suppressing speckles and adjusting the color, for example. If you install the PWA, you can even use it as a default file handler on your machine. A real timesaver.

Download SVGs From Any Site

A handy little tool to enhance your SVG workflow is SVG Gobbler. The browser extension finds the vector content on the page you’re viewing and gives you the option to download, optimize, copy, view the code, or export it as an image.

When you click the browser extension, it shows you all SVGs detected on the site. You can quickly download the ones you like or copy them to your clipboard. When you view the code, you can toggle optimization options from SVGO — to beautify the markup or clean up attributes or numeric values, for example. And if you need a PNG version of an SVG, you can export it in any size you want. A fantastic addition to any developer’s toolkit.

Scaling SVGs Made Simple

Scaling svg elements can be a daunting task, since they act very differently than normal images. Amelia Wattenberger came up with an ingenious comparison to help us make sense of SVGs and their special features: “The svg element is a telescope into another world.”

Based on the idea of the telescope, Amelia explains how to use the viewBox property to zoom in or out with your “telescope”, and, thus, change the size of your <svg>. A small tip that works wonders.

Wrapping Up

We hope that these techniques will tickle your curiosity and inspire you to try some SVG magic yourself. If you came across an interesting SVG technique that left you in awe, please don’t hesitate to share it in the comments below. We’d love to hear about it. Happy creating!

You Don’t Need A UI Framework

Every now and then, someone will ask for my recommendations on UI frameworks. By “UI framework”, I mean any third-party package that is focused on providing styled UI components. It includes CSS frameworks like Bootstrap, as well as JS component libraries like Material UI or Ant Design.

My answer to this question tends to catch people off guard: I don’t use them, and I don’t think they should be used for most consumer-facing products. 😅

To be clear, I have nothing against these tools, and I do think there are some valid use cases for them. But I’ve seen so many developers reach for these tools with unrealistic expectations about the problems they’ll solve, or how easy it’ll be to build applications with them.

In this article, I’m going to make my case for why you probably don’t need these tools. I’ll also share some of my go-to strategies for building professional-looking applications without a design background.

The Appeal Of UI Frameworks

There are lots of reasons that developers reach for a UI framework. Here are the three most common reasons I’ve seen:

  1. They want their app/site to look polished and professional, and these tools provide nicely-designed UI components.
  2. They want to get something up and running quickly without spending a bunch of time building everything from scratch.
  3. They recognize that many UI components — things like modals, dropdowns, and tooltips — are really hard to get right, especially when considering accessibility, so they want to make sure they get it right.

These are totally reasonable things to want, and I can absolutely see the appeal of finding a solution for these problems. But in some cases, I think there’s a mismatch between expectation and reality. In others, I think there are better tools for the job.

Let’s get into it.

Professional Design

This first reason might be the most common. There are tons of developers who want to build stuff, but who don’t have a design background. Rather than spend years learning how to design, why not use a third-party package that provides beautifully-designed components right out of the box?

Here’s the problem, in my opinion: design is about so much more than nice-looking pieces.

A little while ago, I received a LEGO Nintendo Entertainment System as a gift:

It was a really fun kit to build. If you’re a LEGO fan, I highly recommend checking it out!

Here’s the thing, though: I was able to build this model because the kit came with a 200-page book that told me exactly where to place each brick.

If I was given all of the pieces but no instructions, my NES would look much worse than it does. Having high-quality bricks isn’t enough; you also need to know how to use them.

A component library can give you nice buttons, date pickers, and pagination widgets, but it’s still your job to assemble them.

The blocks in a design system like Material Design were built by a talented design team. That team understands the framework, and they have the skills to assemble the pieces into beautiful interfaces. We have access to the same pieces, but that doesn’t mean we’ll automatically achieve the same results.

I remember hearing a designer say that only Google can make Material Design apps that look good. The Google Play Store is full of third-party apps that use the same professionally-designed components but don’t look professional at all.

There are so many intangible aspects to good design — things such as balance, spacing, and consistency. To use a component library effectively, you need to put yourself in the shoes of the designers who created it and understand how they’re intended to be deployed.

Plus, no matter how comprehensive the library is, it’ll never have all the pieces you need. Every app and website is unique, and there will always be special requirements. Creating a brand-new component that “blends in” with an existing third-party design system is really friggin’ hard.

I don’t think it’s impossible — I’m sure there are examples of professional-looking apps with third-party styles. But if you’re able to make it look good, you probably have some pretty significant design chops and don’t need these tools in the first place.

I empathize with developers who want to launch a professional-looking project without any sort of design intuition… But it doesn’t usually work out that way, from what I’ve seen.

Saving Time

The next reason I’ve heard is that UI frameworks help save time. Building a whole component library from scratch is a significant undertaking and one that can be skipped by relying on a UI framework.

There’s some truth to this, but from what I’ve seen, it’s often a tortoise-and-hare situation.

I spent a few years teaching web development fundamentals to bootcamp students at Concordia University. The program culminates in a 2-week personal project. Students decide what to build, and it’s up to them to do it. As an instructor, I’d answer questions and help get them unstuck.

We noticed a trend: students who pick a UI framework like Bootstrap or Material UI get off the ground quickly and make rapid progress in the first few days. But as time goes on, they get bogged down. The daylight grows between what they need and what the component library provides. And they wind up spending so much time trying to bend the components into the right shape.

I remember one student spent a whole afternoon trying to modify the masthead from a CSS framework to support their navigation. In the end, they decided to scrap the third-party component, and they built an alternative themselves in 10 minutes.

Writing your own styles feels to me a bit like writing tests: it’s slower at first, but that early effort pays off. In the long run, you’ll save a lot of time, energy, and frustration.

Usability And Accessibility

The final reason I've heard is a super valid one. The web doesn’t have a very robust “standard library” when it comes to things like modals, dropdowns, and tooltips. Building a modal that works well for mouse users, keyboard users, and screen-reader users is incredibly difficult.

UI frameworks have a hit-or-miss record when it comes to usability and accessibility. Some of the libraries are actually quite good in this respect. But in most cases, it’s a secondary focus.

Thankfully, there’s another category of tools that focuses exclusively on usability and accessibility, without prescribing a bunch of styles.

Here are some of my favorite tools in this category:

  • Reach UI
    A set of accessibility-focused primitives for React. Built by Ryan Florence, co-creator of React Router and Remix.
  • Headless UI
    A set of unstyled, fully accessible UI components for React and Vue. Built and maintained by the Tailwind team.
  • Radix Primitives
    A set of unstyled, accessibility-focused components for React. This library has a very broad set of included components, lots of really neat stuff!
  • React ARIA
    A library of React hooks you can use to build accessible components from scratch.

Note: I realize that this list is very React-heavy; there may be similar tools for Angular, Svelte, and other frameworks, but I’m not as active in those communities, so I’m not sure. Feel free to let me know on Twitter if you know of any!

Nobody should be building a modal from scratch in the year 2022, but that doesn’t mean you need an enormous styles-included UI framework! There are tools that precisely solve the most important accessibility challenges while remaining totally agnostic when it comes to cosmetics and styles.
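To give a sense of what these tools look like in practice, here’s a minimal sketch of a modal built on Radix’s Dialog primitive. It’s an illustrative example, not a drop-in implementation: the class names are placeholders for your own styling approach, and the copy is invented.

```tsx
// ConfirmDialog.tsx: a minimal sketch of an accessible modal using Radix's
// Dialog primitive. Radix handles focus management, Escape-to-close, and ARIA
// wiring; the class names below are placeholders for your own styles.
import * as React from 'react';
import * as Dialog from '@radix-ui/react-dialog';

export function ConfirmDialog() {
  return (
    <Dialog.Root>
      {/* The trigger is a plain button; Radix wires it to the dialog state. */}
      <Dialog.Trigger asChild>
        <button className="trigger">Delete account</button>
      </Dialog.Trigger>

      <Dialog.Portal>
        {/* Overlay and content ship unstyled; bring your own CSS. */}
        <Dialog.Overlay className="overlay" />
        <Dialog.Content className="content">
          <Dialog.Title>Are you sure?</Dialog.Title>
          <Dialog.Description>This action cannot be undone.</Dialog.Description>
          <Dialog.Close asChild>
            <button className="cancel">Cancel</button>
          </Dialog.Close>
        </Dialog.Content>
      </Dialog.Portal>
    </Dialog.Root>
  );
}
```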

Rebuttals

I’ve been speaking with developers about this subject for a couple of years now, and I have heard some pretty compelling rebuttals.

Familiarity

First, Daniel Schutzsmith pointed out that “industry-standard” tools like Bootstrap have one big advantage: familiarity.

It’s easier to onboard new developers and designers when using tools that are widely understood. New teammates don’t have to spend a ton of time learning the ins and outs of a custom framework; they can hit the ground running.

From the perspective of an agency that takes on lots of short/medium-term projects, this could make a lot of sense. They don’t have to start every new project from zero. And as the team gets more and more comfortable with the tool, they learn to use it really effectively.

I haven’t done much agency work, so it’s hard for me to say. I’ve spent most of my career working for product companies. None of the places I’ve worked for have ever used a third-party UI framework. We always built something in-house (eg. Wonder Blocks at Khan Academy, or Walrus at DigitalOcean).

Internal Tools

I think that it can make sense to use a UI framework when building internal tools or other not-for-public-consumption projects (eg. prototypes).

If the goal is to quickly get something up and running, and you don’t need the UI to be 100% professional, I do think it can be a bit of a time-saver to quickly drop in a bunch of third-party components.

What About Tailwind and Chakra UI?

So, I don’t consider Tailwind or Chakra UI to be in this same category of “UI frameworks”.

Tailwind doesn’t provide out-of-the-box components, but it does provide design tokens. As Max Stoiber says, Tailwind gives developers a set of guardrails. You still need a design intuition to use it effectively, but it isn’t quite as daunting as designing something from scratch.
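As a rough illustration of those guardrails, here’s a small card marked up with Tailwind’s default utility classes. The component and copy are made up for the example; the point is that every value (spacing, color, radius, shadow) comes from Tailwind’s built-in scale rather than a hand-picked pixel value.

```tsx
// A sketch of Tailwind's "guardrails": every class maps to a token from
// Tailwind's default scale, so you choose from a curated palette instead of
// inventing arbitrary values.
export function SignUpCard() {
  return (
    <div className="max-w-sm rounded-xl bg-white p-6 shadow-md">
      <h2 className="text-lg font-semibold text-slate-900">Create an account</h2>
      <p className="mt-2 text-sm text-slate-600">
        Free for 30 days. No credit card required.
      </p>
      <button className="mt-4 rounded-lg bg-indigo-600 px-4 py-2 text-sm font-medium text-white hover:bg-indigo-500">
        Get started
      </button>
    </div>
  );
}
```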

Chakra UI does provide styled components out of the box, but they’re very minimal and low-level. They mostly just look like nicer versions of platform defaults.

My good friend Emi mentioned to me that she likes using Chakra UI because it provides her with a set of sensible defaults for things like checkboxes and radio buttons. She’s good enough at design to avoid the customization pitfalls, but not so confident that she’d be comfortable creating a whole design system from scratch. This tool is the perfect middle ground for someone in her situation.
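For a sense of what that middle ground looks like, here’s a minimal sketch assuming Chakra UI v2’s @chakra-ui/react package. The checkboxes and radios render with Chakra’s sensible defaults, and everything remains customizable through props and theme overrides.

```tsx
// A sketch of Chakra UI's "nicer platform defaults" for form controls.
// Assumes @chakra-ui/react v2; the settings shown are invented for the example.
import { ChakraProvider, Stack, Checkbox, Radio, RadioGroup } from '@chakra-ui/react';

export function NotificationSettings() {
  return (
    <ChakraProvider>
      <Stack spacing={3}>
        <Checkbox defaultChecked>Email me about new features</Checkbox>
        <Checkbox>Send a weekly digest</Checkbox>
        {/* Radios come pre-styled but stay customizable via props and theming. */}
        <RadioGroup defaultValue="daily">
          <Stack direction="row" spacing={4}>
            <Radio value="daily">Daily</Radio>
            <Radio value="weekly">Weekly</Radio>
          </Stack>
        </RadioGroup>
      </Stack>
    </ChakraProvider>
  );
}
```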

I think the difference is that these solutions don’t claim to solve design for you. They help nudge you in the right direction, but they make sure that everything is customizable, and that you aren’t locked into a specific design aesthetic.

My Suggested Alternative

So, if you’re a solo developer who wants to build professional-looking websites and applications, but who doesn’t have a design background, what should you do?

I have some suggestions.

Develop a Design Intuition

So, here’s the bad news: I do think you should spend a bit of time learning some design fundamentals.

This is one of those things where a little bit goes a long way. You don’t need to go to an art school or dedicate years to learning a new craft. Design is hard, but we aren’t trying to become world-class designers. Our goals are much more modest, and you might be surprised by how quickly they can be attained, or how far along you are already!

Even if you’re not that interested in design, I think building a design intuition is a critical skill for front-end developers. Believe it or not, we’re constantly making design decisions in our work. Even the most detailed high-fidelity mockup is still missing a ton of important context.

For example:

  • If we’re lucky, we might be given 3 screen sizes, but it’s up to us to decide how the UI should behave between those screen sizes.
  • Data is rarely as clean as it appears in mockups, and we have to decide how to handle long names, missing data, etc.
  • Loading, empty, and error states are often missing from mockups (see the sketch after this list).
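To make that last point concrete, here’s a rough sketch of the kind of decisions a component ends up making on its own when the mockup only covers the happy path. The component name, props, and copy are invented for the example.

```tsx
// A hypothetical <DriverList> that has to invent its own loading, empty, and
// error states because the mockup only showed the "happy path".
type Driver = { id: string; name: string };

type Props = {
  drivers: Driver[] | null; // null while the data is still loading
  error?: string;
};

export function DriverList({ drivers, error }: Props) {
  if (error) {
    // The mockup never showed an error state; we decide to keep it low-key.
    return <p role="alert">Couldn’t load drivers. Please try again.</p>;
  }
  if (drivers === null) {
    return <p>Loading drivers…</p>;
  }
  if (drivers.length === 0) {
    // Empty state: another decision the design didn't specify.
    return <p>No drivers yet. Invite your first driver to get started.</p>;
  }
  return (
    <ul>
      {drivers.map((driver) => (
        // Long names can overflow; truncating them is a judgment call we make here.
        <li
          key={driver.id}
          style={{ overflow: 'hidden', textOverflow: 'ellipsis', whiteSpace: 'nowrap' }}
        >
          {driver.name}
        </li>
      ))}
    </ul>
  );
}
```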

One of my super-powers as a developer is having enough design sense to be able to figure out what to do when I run into a situation not clearly specified in the design. Instead of being blocked while I wait for the designer to respond to my questions, I can rely on my intuition. I won’t always get it right, but I usually will (and when I don’t, it’s another opportunity to improve my design intuition!).

How do you develop a design intuition?

If you work with a product/design team, you have a tremendous resource available to you! Think critically about the designs they produce. Ask lots of questions — most designers will be delighted to help you understand how things are structured, and why they made the decisions they did. Treat it as an engineering challenge. You can learn the systems and processes that lead to good designs.

I wrote a blog post a while back, called “Effective Collaboration with Product and Design”. It goes a bit deeper into some of these ideas.

If you don’t work with any designers (or have any designer friends), you can try to reverse-engineer the products you use every day. Take note of how things are spaced and what font sizes are used. With a critical eye, you’ll start to see patterns.

Steal

Alright, so even with a keen design instinct, it’s still really hard to come up with a design from scratch. So, let’s not do that.

Instead, let’s try and find some professional designs that are similar to the thing we’re trying to build. You can search on designer-community sites like Dribbble or Behance, or use archives like Awwwards.

For example, let’s say we’re building an Uber-for-dogs startup, and we’re trying to design the driver dashboard. A Dribbble search for “dashboard” turns up a ton of interesting designs:

Dribbble tends to skew very “designery”, and so you might want to use real-world products for inspiration. That works too!

The trick is to use multiple sources. If you steal 100% of a design, it’s plagiarism and bad form. People will notice, and it’ll cause problems.

Instead, we can mix 3 or 4 designs together to create something unique. For example, maybe I’ll take the color scheme from one site, the general layout and spacing from another, and the typography styles from a third!

When I’ve mentioned this strategy to actual designers, they laugh and say that it’s what they all do. I think this is their version of the “joke” that programmers spend half their time googling things.

This strategy feels like such a life hack. It’s not effortless, and it does require some design chops. The designs you use for inspiration won’t 100% match the thing you’re building, and you’ll need to use your intuition to fill in the gaps. But it’s by far the fastest way I’ve found to come up with a professional-looking design without a design background.

Putting It All Together

As developers, it can be tempting to believe that a UI framework will absolve us from needing to learn anything about design. Unfortunately, it doesn’t usually work out that way. At least, not from what I’ve seen.

Here’s the good news: you can definitely build a professional-looking product without a designer! With a few high-quality reference designs and a dash of design intuition, you can build something that hits the “good-enough” threshold, where a product feels legitimate and “real”.

There’s one more aspect we haven’t really spoken much about: CSS.

Lots of front-end developers struggle with CSS. I struggled with it too! CSS is a deceptively complex language, and it can often feel inconsistent and frustrating, even after you have years of experience with the language.

This is a problem I feel very passionately about. I spent all of last year focused full-time on building a CSS course to help developers gain confidence with the language.

It’s called CSS for JavaScript Developers. It’s made specifically for folks who use a JS framework like React or Angular. The course is focused on giving you a robust mental model so that you have an intuitive understanding of how CSS works.

If you feel like CSS is unpredictable, I really hope you'll check it out. 9000+ developers have gone through the course, and the response has been overwhelmingly positive.

You can learn more here: css-for-js.dev.

Collective #704

Lapce

Another code editor project: a modern open source code editor in Rust.

Check it out

AgnosticUI

In case you didn’t know about it: UI components you can use across multiple projects.

Check it out

Cirrus CSS

The SCSS framework for the modern web. It’s component based, customizable, and completely open source. By Stanley Lim.

Check it out

RP2040 Doom

The Making of RP2040 Doom, a fully-featured Doom port for the Raspberry Pi RP2040 microcontroller.

Check it out

Metarank

Still in the early stage of development: Metarank is a low-code Machine Learning tool that personalizes product listings, articles, recommendations, and search results.

Check it out

Yuga Labs

A really cool website with some nice pattern interactivity made by the folks of Antinomy Studio.

Check it out

Diskernet

DiskerNet (codename PROJECT 22120) is an archivist browser controller that caches everything you browse, plus a library server with full-text search to serve your archive.

Check it out

The post Collective #704 appeared first on Codrops.

Documenting Angular Components Using Storybook

As developers, in our daily work, we like to find good documentation of the libraries and technologies we use. It is, therefore, our responsibility to leave our work well documented. Those who come after us to use it and/or continue it will appreciate it. At Apiumhub we are very fond of documenting our projects.

There are many tools that allow us to write documentation in Markdown (.md) format, and some others that also allow us to document our UI components. Most of them are written for and focused on React. What happens then if we want to document the components of our Angular project?

Build and Ship a Design System in 8 Steps Using Backlight

What is a Design System

If you ever wondered how Apple, Uber or Spotify keep their UI and UX perfectly consistent over their hundreds of pages, it’s because they use a Design System. An enhanced version of what used to be “pattern libraries” or “branding guidelines”, a Design System can be defined as a library of reusable components, with documentation alongside each component to ensure its proper use and its consistency across the different applications. The documentation is at the core of the system, going beyond components by covering accessibility, layouts, overall guidelines, and much more.

By creating Design Systems, companies are building a Single Source of Truth for their front-end teams, thus allowing for shipment of products at scale, with perfect consistency of the User Experience guaranteed over the entire product range.

As well documented in this article, a Design System is made of different pieces which we can split into four main categories: Design tokens, Design kit, Component Library, and a Documentation site.

Who Design Systems are for

You could think that a Design System is costly to build and maintain and would need a dedicated team. While some companies do rely on such a team, there are now tools that allow any company to benefit from a Design System, no matter the size of their frontend team or their existing product. One of these tools is Backlight.

What is Backlight

Backlight is an all-in-one collaborative tool that allows teams to build, ship, and maintain Design Systems at scale.

With Backlight, every aspect of the Design System is kept under a single roof: teams can build every component, document it, share it to gather feedback, and ship it, all without leaving the Backlight environment. This allows for seamless collaboration between Developers and Designers, on top of the productivity gain and the assurance of perfect UI and UX consistency among the products relying on the Design System.

Steps to build your Design System

#1 Pick your technology

You might already have existing components and you could choose to stick with your existing technology. While a lot of companies go for React, other technologies are worth considering.

If you would prefer not to start a new Design System from scratch, Backlight offers a number of Design System starter kits to choose from. Each comes with built-in tokens, components, interactive documentation, and Storybook stories, all ready to be customized to your liking for your products.

#2 Set your Design Tokens

Once your technology is picked, you often start by creating (or customizing, if you chose to use a starter kit) the basic Design tokens. Design tokens are the values that rule your Design System components, such as color, spacing, typography, radii…

In Backlight, Design tokens are conveniently listed in the left-side panel so you can get an overview at a glance.

To create a new Token, simply hit the + button and start coding. In edit mode, the code is displayed next to the token, so you can edit as you go with the preview window side by side with the code. Any change to the token code can be pushed automatically to the preview window, so you can see the result of your changes instantly.

For users simply consulting the Design System, the list is displayed next to a preview for better clarity. You can observe that the UI of the documentation mode doesn’t display the code, which allows for a simpler, noise-free view of your Design System. You can see for yourself by playing with this starter-kit.
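The exact shape of a token file depends on the starter kit and technology you chose. Purely as an illustration, here’s what a small set of tokens might look like expressed as a plain TypeScript module; the names and values are invented for this sketch and are not Backlight’s own format.

```ts
// tokens.ts: an illustrative set of Design tokens. Real Design Systems often
// express these as CSS custom properties or platform-specific formats; the
// names and values below are invented for the example.
export const color = {
  brand: '#3b5bdb',
  text: '#1f2933',
  surface: '#ffffff',
  danger: '#c92a2a',
} as const;

export const spacing = {
  xs: '4px',
  sm: '8px',
  md: '16px',
  lg: '32px',
} as const;

export const typography = {
  fontFamily: "'Inter', system-ui, sans-serif",
  sizes: { body: '16px', heading: '24px' },
} as const;

export const radii = {
  sm: '4px',
  pill: '9999px',
} as const;
```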

#3 Build your Components

Components are the core of your Design System; you can picture them as reusable building blocks. Buttons, avatars, radios, tabs, accordions: the list will be as complex or as simple as your UI needs.

Most companies already have existing components. To get started with your Design System, the first step is to create an exhaustive list of every component used in the products to date and identify the most appropriate architecture; then you can start building them one by one.

In Backlight you can build your components straight in the built-in browser IDE, always keeping the preview panel next to it, to verify the result at all times.

Once a component is created, it will live in your Design System for as long as the system exists (or until you delete it), and because it will have to grow with the system, Backlight makes it extra easy to update components on the go.

Also, if you build upon existing assets, native GitHub and GitLab support lets you push changes to branches directly from Backlight and review pull requests in a click.
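To connect the two previous steps, here’s a sketch of a reusable Button that consumes the token module from the earlier example. It’s written as a React component purely for illustration; your Design System might use a different technology, and the variants shown are assumptions made up for the sketch.

```tsx
// Button.tsx: a sketch of a reusable component built on the illustrative
// tokens module above. Every visual decision comes from a token, not a
// hard-coded value.
import * as React from 'react';
import { color, spacing, radii, typography } from './tokens';

type ButtonProps = React.ButtonHTMLAttributes<HTMLButtonElement> & {
  variant?: 'primary' | 'danger';
};

export function Button({ variant = 'primary', style, ...rest }: ButtonProps) {
  return (
    <button
      {...rest}
      style={{
        background: variant === 'danger' ? color.danger : color.brand,
        color: color.surface,
        padding: `${spacing.sm} ${spacing.md}`,
        borderRadius: radii.sm,
        fontFamily: typography.fontFamily,
        fontSize: typography.sizes.body,
        border: 'none',
        ...style, // allow local overrides while keeping token defaults
      }}
    />
  );
}
```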

#4 Add Stories

Collaboration between Designers and Developers is one of the bottlenecks that every team creating a Design System will have to solve. One way to ensure alignment between the two is by providing simple visual iterations of a component’s states: live representations of the code rather than screenshots taken at a given time.

In order to do so, Backlight added support for the most common solution: Storybook.

Backlight natively supports Storybook’s story files. Stories are visual representations of a state of a UI component given a set of arguments, and one of the best ways to visualize a Design System or simply get a quick overview of a component’s iterations.

Stories can be coded directly into Backlight and displayed next to the documentation.
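For example, a story for the Button sketched above could look like the following, using Storybook’s Component Story Format. The exact imports depend on your Storybook version and framework; this sketch assumes a recent React setup.

```tsx
// Button.stories.tsx: stories in Component Story Format for the Button
// sketched above. Each named export captures one visual state of the component.
import type { Meta, StoryObj } from '@storybook/react';
import { Button } from './Button';

const meta: Meta<typeof Button> = {
  title: 'Design System/Button',
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

// The "primary" state, rendered from a set of arguments.
export const Primary: Story = {
  args: { variant: 'primary', children: 'Save changes' },
};

// The "danger" state, useful for reviewing destructive-action styling.
export const Danger: Story = {
  args: { variant: 'danger', children: 'Delete' },
};
```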

#5 Link your Design assets

If you already have design assets, Backlight supports Figma as well as Adobe XD and Sketch. By embedding a link to the assets, Backlight will display them live within the interface, along with the documentation and the code, so developers can make sure that both are in sync.

  • Figma libraries

Among designer tools, Figma is often the go-to, and its libraries can be natively displayed within Backlight, giving Developers direct access to the visuals.

  • Adobe XD

Next to Figma, Adobe XD holds a special place in the designer community, and it is supported in Backlight as well.

  • Sketch

By supporting Sketch links and allowing them to be embedded within the documentation, Backlight once again ensures proper alignment between Designers and Developers, removing the need for long back-and-forths and for team members to rely on tools they are not comfortable with.

#6 Generate the Documentation

A Design System is only as great as its documentation. At the core of your system, the documentation has multiple facets, but it will mostly:

  • Facilitate the adoption of the design system among the product team, thanks to visual previews and concise how-tos.
  • Ease maintenance: a well-documented system is like well-documented code; knowing the how and why of every bit makes it easier for the team to scale or adapt parts of it.
  • Ensure the survival of the Design System: easy-to-digest documentation will keep team members from taking shortcuts and ending up not using it.

Backlight supports multiple technologies for building your documentation: MDX, MD Vue, mdjs, or Astro; you can pick the one that suits you best. If you are wondering which technology to choose, this article will be able to guide you. However, keep in mind that the best practice is to use a technology that can embed your real components, thus ensuring that the documentation shows the latest visual iteration of them at all times.

Backlight allows users to build interactive documentation, with menus, dark and light modes, live code sandboxes, component previews, and more.

Like for the rest of the Design System, the code is displayed next to the preview to have visual feedback at all times.

For inspiration purposes, here is a list of the best Design Systems document sites to date.

#7 Gather feedback from the team

One, if not THE, main bottleneck front-end teams encounter while building a Design System is communication between Developers and Designers. Each side lives within its own tools, and teams often end up creating multiple out-of-sync Design Systems, which are costly to maintain and a frequent source of mistakes.

Backlight offers a platform that not only regroups everything under a single roof, but also outputs documentation and visuals that are easy to share with entire teams.

  • At any time, a Developer can share a live preview of what they’re working on and edit the components as they receive feedback. Each edit is pushed to the live preview, and the other side can see the results directly.
  • Designers can update a Figma or Adobe XD library, and it will automatically be shown in the respective tab inside Backlight for a Developer to update the impacted components.
  • Thanks to the live preview panel, Designers who know code can quickly update any component or token to their liking, and the change can then be reviewed by a Developer before pushing for release.

#8 Ship your Design System

Once you have a proper Design System, with tokens, components, and the documentation that goes with them, it is time to use it, which means generating the outputs of the Design System (code, documentation site…) for the team to consume.

Before releasing you can double-check unreleased changes at a glance, using the built-in visual diff panel, and even automate testing.

Once everything is properly verified, to facilitate the release Backlight has a baked-in npm package publisher, so you can compile, package and version your Design System on demand.

Once published, you will be able to see the history of previous releases and access every corresponding package directly from the release panel.

Kickstart your own Design System

By simplifying every step and keeping it all under the same roof, Backlight makes Design Systems affordable for every team, no matter their size.

Sound promising? Wait until you learn that there are a LOT more built-in features and that you can start your own Design System now! More than a product, Backlight is a community that will set the starting blocks for you and guide you through to the finish line.

The post Build and Ship a Design System in 8 Steps Using Backlight appeared first on Codrops.

Collective #695

CSS Speedrun

A small fun app to test your CSS knowledge. Find the correct CSS selectors for the 10 puzzles as fast as possible.

Play it

Faker

Generate massive amounts of fake data in the browser and node.js. Initially deleted by its owner, the project is again available on npm under new management.

Check it out

Essence

A desktop operating system built from scratch, for control and simplicity.

Check it out

The post Collective #695 appeared first on Codrops.

Demystifying Grids For Developers and Designers

Designers and developers each have their own definition and use of grids, which makes grids a relatively nebulous concept for everyone. To some, grids mean layout and structure, while to others they refer to interactive tables that manage data. Understanding the target audience is key here, because without a universally understood direction, designers can be misguided during the cross-collaboration process. When given the time, developers and designers can fully evaluate the user story and create a thoughtful user experience together through the use of grids. But first, we need to find common ground to work from.

Identifying a Grid

As we mentioned, it is important to know your audience when talking about grids. If you come from a typical design background, the word “Grid” instantly brings you to think about layout (either print or online). The term has even made its way into CSS with Grid Layout. You can see this in this article, which demonstrates the use of the term “Grid” generically for design layout purposes.