How to Create a Private Store in WooCommerce

Recognized as the leading eCommerce platform, WooCommerce powers 28.19% of all online stores worldwide. It lets you turn your WordPress site into a fully functional eCommerce shop without touching a bit of code. You’re able to add products, control customer data, manage the checkout process, and more. As a matter of fact, the […]

The post How to Create a Private Store in WooCommerce appeared first on WPExplorer.

Going Serverless With Adnan Rahic

Serverless computing is growing in popularity and is heavily promoted by public cloud providers. The much-touted benefit of serverless computing is to allow developers to focus on their code whilst the public cloud provider manages the environment and infrastructure that will be running it.

But how is serverless different from container-based services? What are the best use cases for serverless? How about the challenges? And can this architecture move forward in the future? We answer these questions and more in this episode of Coding Over Cocktails.

Python Code Review Checklist

As developers, we are all too familiar with code reviews. Having another pair of eyes take a look at our code can be wonderful; it shows us so many aspects of our code we would not have noticed otherwise. A code review can be informative, and it can be educational. I can confidently attribute most of what I know about good programming practices to code reviews.

The amount of learning a reviewee takes away from a code review depends on how well the review is performed. It thus falls on the reviewer to make their review count by packing as many lessons into it as possible.

Event-Driven Architecture With Apache Kafka for .NET Developers Part 1: Event Producer

In This Series:

  1. Development Environment and Event Producer (this article)
  2. Event Consumer (coming soon)
  3. Azure Integration (coming soon)

Introduction

An event-driven architecture utilizes events to trigger and communicate between microservices. An event is a change in the service's state, such as an item being added to the shopping cart. When an event occurs, the service produces an event notification which is a packet of information about the event.

The architecture consists of an event producer, an event router, and an event consumer. The producer sends events to the router, and the consumer receives the events from the router. Depending on the capability, the router can push the events to the consumer or send the events to the consumer on request (poll). The producer and the consumer services are decoupled, which allows them to scale, deploy, and update independently.

Tips and Tricks for Spring in IntelliJ IDEA

Have you ever wondered if you are making proper use of IntelliJ’s Spring integration features?

From useful navigation shortcuts & compile-time error highlighting, to analyzing Spring contexts, beans & dependencies, to utilities that help you write REST services or good old HTML pages, there is a lot of functionality that IntelliJ offers you when it comes to working with Spring projects.

HTML Inputs and Labels: A Love Story

Most inputs have something in common — they are happiest with a companion label! And the happiness doesn’t stop there. Forms with proper inputs and labels are much easier for people to use and that makes people happy too.

A happy label and input pair

In this post, I want to focus on situations where the lack of a semantic label and input combination makes it much harder for all sorts of people to complete forms. Since millions of people’s livelihoods rely on forms, let’s get into the best tips I know for creating a fulfilling and harmonious relationship between an input and a label.

Content warning: In this post are themes of love and relationships.

The love story starts here! Let’s cover the basics for creating happy labels and inputs.

How to pair a label and an input

There are two ways to pair a label and an input. One is by wrapping the input in a label (implicit), and the other is by adding a for attribute to the label and an id to the input (explicit).

Think of an implicit label as hugging an input, and an explicit label as standing next to an input and holding its hand.

<label> 
  Name:
  <input type="text" name="name" />
</label>

An explicit label’s for attribute value must match its input’s id value. For example, if for has a value of name, then id should also have a value of name.

<label for="name">Name: </label>
<input type="text" id="name" name="name" />

Unfortunately, an implicit label is not handled correctly by all assistive technologies, even when for and id attributes are also added. Therefore, it is always best to use an explicit label instead of an implicit one.

<!-- IMPLICIT LABEL (not recommended for use) - the label wraps the input. -->
<label> 
  Name:
  <input type="text" name="name" />
</label>

<!-- EXPLICIT LABEL (recommended for use) - the label is connected to an input via "for" and "id" -->
<label for="explicit-label-name">Name: </label>
<input type="text" id="explicit-label-name" name="name" />

Each separate input element should only be paired with one label. And a label should only be paired with one input. Yes, inputs and labels are monogamous. ♥️

Note: There’s one sort of exception to this rule: when we’re working with a group of inputs, say several radio buttons or checkboxes. In these cases, a <fieldset> groups the inputs, its <legend> serves as the accessible label for the entire group, and each individual input still keeps its own label (see the sketch below).
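
Here is a minimal sketch of that grouping pattern (the fruit-themed names are just for illustration): the <fieldset> groups the controls, the <legend> names the group, and each radio button keeps its own explicit label.

<fieldset>
  <legend>Favorite fruit</legend>

  <!-- Each input in the group still gets its own label -->
  <input type="radio" id="fruit-orange" name="fruit" value="orange">
  <label for="fruit-orange">Orange</label>

  <input type="radio" id="fruit-banana" name="fruit" value="banana">
  <label for="fruit-banana">Banana</label>
</fieldset>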

Not all inputs need labels

An input with a type="submit" or type="button" does not need a label — the value attribute acts as the accessible label text instead. An input with type="hidden" is also fine without a label. But all other inputs, including <textarea> and <select> elements, are happiest with a label companion.
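For example, a minimal sketch:

<!-- No <label> needed: the value doubles as the accessible name -->
<input type="submit" value="Subscribe">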

What goes in a label

The content that goes inside of a label should:

  • Describe its companion input. A label wants to show everyone that it belongs with its input partner.
  • Be visible. A label can be clicked or tapped to focus its input. The extra space it provides to interact with the input is beneficial because it increases the tap or click target. We’ll go into that more in just a bit.
  • Only contain plain text. Don’t add elements like headings or links. Again, we’ll go into the “why” behind this further on down.

One useful thing you can do with the content in a label is add formatting hints. For example, the label for <input type="date" id="date"> will be <label for="date"> as we’d expect. But then we can provide a hint for the user that the date needs to be entered in a specific format, say DD-MM-YYYY, in the form of a <span> between the label and input that specifies that requirement. The hint not only spells out the date format, but has an id value that corresponds to an aria-describedby attribute on the input.

<!-- Describes what the input is for - should be plain text only -->
<label for="date">Start date</label>
<!-- Describes additional information, usually about formatting -->
<span id="date-format">(DD-MM-YYYY):</span>
<input type="date" id="date" aria-describedby="date-format" min="2021-03-01" max="2031-01-01" />

This way, we get the benefit of a clear label that describes what the input is for, and a bonus hint to the user that the input needs to be entered in a specific format.

Best practices for a healthy relationship

The following tips go beyond the basics to explain how to make sure a label and input are as happy as can be.

Do: Add a label in the right place

There is a WCAG success criterion that states the visual order of a page should follow the order in which elements appear in the DOM. That’s because:

A user with low vision who uses a screen magnifier in combination with a screen reader may be confused when the reading order appears to skip around on the screen. A keyboard user may have trouble predicting where focus will go next when the source order does not match the visual order.

Think about it. If we have some standard HTML where the label comes before the input:

<label for="orange">Orange</label>
<input type="checkbox" id="orange" name="orange">

That places the label before the input in the DOM. But now let’s say our label and form are inside a flexible container and we use CSS order to reverse things where the input visually comes before the label:

label { order: 2; }
input { order: 1; }

A screen reader user, who is navigating between elements, might expect the input to gain focus before the label because the input comes first visually. But what really happens is the label comes into focus instead. See here for the difference between navigating with a screen reader and a keyboard.

So, we should be mindful of that. It is conventional to place the label on the right-hand side of the input for checkboxes and radio buttons. This can be done by placing the label after the input in the HTML, ensuring the DOM and visual order match.

<form>
  <!-- Checkbox with label on the right -->
  <span>
    <input type="checkbox" id="orange" name="orange">
    <label for="orange">Orange</label>
  </span>
  <!-- Radio button with label on the right -->
  <span>
    <input type="radio" id="banana" name="banana">
    <label for="banana">Banana</label>
  </span>
  <!-- Text input with label on the left -->
  <span>
    <label for="apple">How do you like them apples?</label>
    <input type="text" id="apple" name="apple">
  </span>
</form>

Do: Check inputs with a screen reader

Whether an input is written from scratch or generated with a library, it is a good idea to check your work using a screen reader. This is to make sure that:

  • All relevant attributes exist — especially the matching values of the for and id attributes.
  • The DOM matches the visual order.
  • The label text sounds clear. For example, “dd-mm-yyyy” is read out differently to its uppercase equivalent (DD-MM-YYYY).

Screen reader reading out a lower case and upper case date input hint.
Just because the letters are the same doesn’t mean that they get read the same way by a screen reader.

Over the last few years, I have used JavaScript libraries, like downshift, to build complex form elements such as autocomplete or comboboxes on top of native HTML ones, like inputs or selects. Most libraries make these complex elements accessible by adding ARIA attributes with JavaScript.

However, the benefits of native HTML elements enhanced using JavaScript are totally lost if JavaScript is broken or disabled, making them inaccessible. So check for this and provide a server-rendered, no-JavaScript alternative as a safe fallback.

Check out these basic form tests to determine how an input and its companion label or legend should be written and announced by different screen readers.

Do: Make the label visible

Connecting a label and an input is important, but just as important is keeping the label visible. Clicking or tapping a visible label focuses its input partner. This is a native HTML behavior that benefits a huge number of people.

Imagine a label wanting to proudly show its association with an input:

A visible happy label proudly showing off its input partner.
A label really wants to show off its input arm candy. 💪🍬

That said, there are going to be times when a design calls for a hidden label. So, if a label must be hidden, it is crucial to do it in an accessible way. A common mistake is to use display: none or visibility: hidden to hide a label. These CSS properties completely hide an element — not only visually but also from screen readers.

Consider using the following code to visually hide labels:

/* Visually hides a non-natively-focusable element while keeping it available to screen readers. */
/* For natively focusable elements, use .visually-hidden:not(:focus):not(:active) instead. */
.visually-hidden {
  border-width: 0 !important;
  clip: rect(1px, 1px, 1px, 1px) !important;
  height: 1px !important;
  overflow: hidden !important;
  padding: 0 !important;
  position: absolute !important;
  white-space: nowrap !important;
  width: 1px !important;
}

Kitty Giraudel explains in depth how to hide content responsibly.

What to Avoid

To preserve and maintain a healthy relationship between inputs and labels, there are some things not to do when pairing them. Let’s get into what those are and how to prevent them.

Don’t: Expect your input to be the same in every browser

There are certain types of inputs that are unsupported in some older desktop browsers. For example, an input with type="date" isn’t supported in Internet Explorer (IE) 11, or even in Safari 14¹ (released September 2020). An input like this falls back to type="text". If a date input does not have a clear label, and it automatically falls back to a text input in older browsers, people may get confused.

Don’t: Substitute a label with a placeholder

Here is why a placeholder attribute on an input should not be used in place of a label:

  • Not all screen readers announce placeholders.
  • The value of a placeholder is in danger of being cut off on smaller devices, or when a page is translated in the browser. In contrast, the text content of a label can easily wrap onto a new line.
  • Just because a developer can see the pale grey placeholder text on their large retina screen, in a well-lit room, in a distraction-free environment, doesn’t mean everyone else can. A placeholder can make even those with good vision scrunch their eyes up and eventually give up on a form.
An input on a mobile device where the placeholder text is cut off.
Make sure this is somewhere I can what? It’s hard to tell when the placeholder gets cut off like that.
An input where the translated placeholder is cut off.
Work on a multi-lingual site? A translated placeholder can get cut off even though the English translation is just fine. In this case, the translated placeholder should read “Durchsuchen Sie die Dokumentation.”

  • Once a character is entered into an input, its placeholder becomes invisible — both visually and to screen readers. If someone has to back-track to review information they’ve entered in a form, they’d have to delete what was entered just to see the placeholder again.
  • The placeholder attribute is not supported in IE 9 and below, and disappears when an input is focused in IE 11. Another thing to note: the placeholder color cannot be customized with CSS in IE 11.

Placeholders are like the friend that shows up when everything is perfect, but disappears when you need them most. Pair up an input with a nice, high-contrast label instead. Labels are not flaky and are loyal to inputs 100% of the time.

The Nielsen Norman Group has an in-depth article that explains why placeholders in form fields are harmful.

Don’t: Substitute a label with another attribute or element

When no label is present, some screen readers will look for adjacent text and announce that instead. This is a hit-and-miss approach because it’s possible that a screen reader won’t find any text to announce.

The below code sample comes from a real website. The label has been substituted with a <span> element that is not semantically connected to the input.

<div>
  <span>Card number</span>
  <div>
    <input type="text" value="" id="cardNumber" name="cardNumber" maxlength="40">
  </div>
</div>

The above code should be rewritten to ensure accessibility by replacing the span with a label that has for="cardNumber" on it. This is by far the simplest and most robust solution, and it benefits the most people.
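
Here is a sketch of that rewrite, reusing the attributes from the snippet above:

<div>
  <!-- The span becomes a real label tied to the input's existing id -->
  <label for="cardNumber">Card number</label>
  <div>
    <input type="text" value="" id="cardNumber" name="cardNumber" maxlength="40">
  </div>
</div>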

While a label could be substituted with a span that has an id with a value matching the input’s aria-labelledby attribute, people won’t be able to click the span to focus the input in the same way a label allows. It is always best to harness the power of native HTML elements instead of re-inventing them. The love story between native input and label elements doesn’t need to be re-written! It’s great as-is.
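
For completeness, here is a sketch of the aria-labelledby workaround described above (the cardNumberLabel id is just an illustrative name). Screen readers can announce it, but clicking the span still won’t focus the input the way a real label would:

<div>
  <!-- Announced correctly, but loses the click-to-focus behavior of a label -->
  <span id="cardNumberLabel">Card number</span>
  <div>
    <input type="text" id="cardNumber" name="cardNumber" aria-labelledby="cardNumberLabel" maxlength="40">
  </div>
</div>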

Don’t: Put interactive elements inside labels

Only plain text should be included inside a label. Avoid inserting things such as headings, or interactive elements such as links. Not all screen readers will announce a label correctly if it contains something other than plain text. Also, if someone wants to focus an input by clicking its label, but that label contains a link, they may click the link by mistake.

<form>
  <div>
    <!-- Don't do this -->
    <input type="checkbox" id="t-and-c" name="name" />
    <label for="t-and-c">I accept the <a href="https://link.com">terms and conditions</a></label>
  </div>
  <div>
    <!-- Try this -->
    <input type="checkbox" id="t-and-c2" name="name" />
    <label for="t-and-c2">I accept the terms and conditions.</label>
    <span>Read <a href="https://link.com">terms and conditions</a></span>
  </div>
</form>

Real-life examples

I always find that real-life examples help me to properly understand something. I searched the web and found examples of labels and inputs from a popular component library and website. Below, I explain where the elements fall short and how they can be improved to ensure a better pairing.

Component library: Material

MaterialUI is a React component library based on Google’s design system. It includes a text input component with a floating label pattern that has become a popular go-to for many designers and developers:

The floating label pattern starts with the label in the placeholder position, then moves the label to its proper place above the field once the input is focused or filled. The idea is that this achieves a minimal design while solving the issue of placeholders disappearing when typing.

Clicking on the input feels smooth and looks great. But that’s the problem. Its qualities are mostly skin-deep.

At the time of writing this post, some issues I found with this component include:

  • There is no option to have the label outside of the input in order to offer an increased interactive area for focusing the input.
  • There is an option to add a hint like we saw earlier. Unfortunately, the hint is only associated with the input via proximity and not through a matching id and aria-describedby. This means that not all screen readers will be able to associate the helper message with the input.
  • The label is behind the input in the DOM, making the visual order incorrect.
  • The empty input looks like it is already filled out, at least as long as it is not active.
  • The label slides up when the input is clicked, making it unsuitable for those who prefer reduced motion.

Adam Silver also explains why float labels are problematic and gets into a detailed critique of Material’s text input design.

Website: Huffpost

The Huffpost website has articles containing a newsletter subscription form:

Huffpost newsletter signup form
This form looks quite nice but could be improved.

At the time of writing this blog post, the email input that Huffpost uses could benefit from a number of improvements:

  • The lack of a label means a smaller click or tap target. Instead of a label there is an aria-label attribute on the input.
  • The font size of the placeholder and input values is a tiny 11px, which can be hard to read.
  • The entire input disappears without JavaScript, meaning people have no way to sign up to the newsletter if JavaScript is disabled or broken.

Closing remarks

A surprising number of people struggle to enter information into poorly-constructed inputs. That list includes people with cognitive, motor and physical disabilities, autism spectrum disorders, brain injuries, and poor vision. Other people who struggle include those in a hurry, on a poor connection, on a small device, on an old device, and unfamiliar with digital forms, among many others.

Additionally, there are numerous reasons why JavaScript might break or be switched off in a browser, meaning inputs become dysfunctional or completely inaccessible. There are also people who are fully capable of viewing a web page but who may choose to use a keyboard along with a screen reader.

The message I want to get across is that happy label and input pairs are crucial. It doesn’t matter how beautiful your form is if it is unusable. I can bet that almost everyone would rather fill out an ugly but easy-to-use form than a pretty one that causes problems.

Thanks

I want to warmly thank the following people for helping me with this post: Eric Eggert, Adam Silver, Dion Dajka, and Kitty Giraudel. Their combined accessibility knowledge is a force to be reckoned with!

Footnotes

  • 1 The datepicker is actually well-supported in iOS 14 and very nice. One would imagine that a macOS version has got to be on the horizon.


The post HTML Inputs and Labels: A Love Story appeared first on CSS-Tricks.


ShopTalk Patreon

Dave and I launched a Patreon for ShopTalk Show. You get two completely priceless things for backing us:

  1. That great feeling you’re supporting the show, which has costs like editing, transcribing, developing, and hosting.
  2. Access to our backer-only Discord.

I think the Discord might be my favorite thing we’ve ever done. Sorry if I’m stoking the FOMO there, but just saying, it’s a good gang. My personal intention is to be helpful in there, but everyone else is so helpful themselves that I’ve actually learned more than I’ve shared.



The post ShopTalk Patreon appeared first on CSS-Tricks.


GoDaddy Pro Will Kick off 2-Day Expand Event on April 27

GoDaddy Pro is launching its first virtual Expand event on April 27. The free-to-attend conference will focus on web professionals and will last until April 28.

In February, GoDaddy re-launched GoDaddy Pro as a formal sub-brand of the company. The “Pro” branding was not a new outing. However, it was a fresh take on an old idea. As part of this launch, they introduced a re-engineered interface named the Hub. The experience was geared toward website designers and developers, providing them with a central location to manage their client projects. More than simply a new set of tools, the company created a dedicated branding experience for GoDaddy Pro.

“We are committed to, passionate about, and truly in awe of web designers and developers,” said Adam Warner, the Global Field Marketing Sr. Manager at GoDaddy. “It has been a pleasure supporting this audience, but it’s now our mission to up the ante and deepen our commitment. Our goal with GoDaddy Pro is to empower web pros with a unified tool that helps them create outsize efficiency for their operation, deliver incredible results for their clients, grow their skills and operation, and connect and inspire by fostering community.”

The Expand 2021 event is a continuation of that goal. The target audience is described as “web designers and developers who identify as eager side hustlers or website freelancers.” The conference’s sessions are meant to provide instructions, resources, and connections to help developers grow their businesses.

The event will begin with an opening keynote from GoDaddy’s Aman Bhutani, the CEO; Tara Wellington, the Senior Director of Product Management; and Warner. It will feature eight sessions, split evenly between each day. The event’s schedule will fit into a small window between 10 am and 12:30 pm Pacific Time on both days.

By some conference standards, even virtual ones, this may seem like a small event. However, the tighter focus could be a welcome one for people suffering from online event fatigue. Each session is scheduled to last around 30 minutes.

The sessions will focus on a wide span of topics like client management, eCommerce, creating a project starter stack, website security, and scaling a freelance business with care plans.

The History Behind GoDaddy Pro Expand

Spearheading the first Expand conference, Warner avoided talking about target audiences, performance indicators, or budgetary concerns in an internal memo sent throughout GoDaddy. The effort is about creating a series of GoDaddy Pro Expand events globally in the coming months and years that help the community.

The idea for a conference had been in the making since 2018. Warner described the purpose of GoDaddy Pro, Expand 2021, and meetups as empowering and inspiring the next generation of web designers and developers to deliver for their clients and create self-sustaining freelance businesses.

Expand 2021 is, in some way, my Opus for giving back to the community that raised me and giving the new generation a head start in following their own passions. It’s the culmination of my years exploring and creating on the internet. My first time on the internet was in 1993. I was in my third year of college, working at Hungry Howie’s Pizza in small-town Michigan. A co-worker had some of us to her house, and her boyfriend logged us on to a Bulletin Board System (BBS). We chatted in a dull, orange text late into the night to people around the world.

I was amazed then, and that sense of wonder and possibility stayed with me as I started to learn HTML and build simple websites a few years later. Then came my discovery of WordPress in early 2005 and the incredible global community of users willing to lift each other up for collective success.

It was at that moment, I knew I had to find a way to make websites and WordPress my career.

Now, 28 years later, Warner is taking the next step by organizing his first event from scratch along with over 40 others. He wanted to build a conference devoid of “talking heads touting theoretical advice” and instead offer an experience with actionable knowledge that web professionals can use immediately.

“The real purpose of Expand 2021 is to ‘pay it forward’ by advising and inspiring the next generation of web designers and developers, enabling them to take the next steps in their paths,” said Warner.

Best HTML Editors

Making mistakes when writing code is inevitable. But just one mistake can cause glitches, security issues, and countless other headaches. Trying to find an error in your HTML code manually with your own eyes is near impossible.

For beginners and experienced web developers alike, HTML editors will improve your coding.

These tools help check for errors and even speed up your workflow with useful features like syntax highlighting, auto-completion, spell checking, and so much more.

If a basic word processor just isn’t getting the job done, it’s time to upgrade to an HTML editor.

The Top 7 Best HTML Editors

  1. Sublime Text — Best for Customization
  2. Atom — Best for Collaborative Coding
  3. UltraEdit — Best Versatility for Advanced Users
  4. Visual Studio Code — Best for Debugging Code
  5. BBEdit — Best for Simple HTML Editing
  6. NoteTab — Best for Fast HTML Coding
  7. TinyMCE — Best Flexible and Powerful WYSIWYG HTML Editor

After extensive research and testing, we’ve narrowed down the top seven HTML editors on the market today. Check out the following reviews to help you find the best HTML editor for your unique needs.

#1 – Sublime Text — Best For Customization

  • Multi-language support
  • Total customization with JSON files
  • Cross-platform editor
  • Only $80 for full license
Try for free

Sublime Text is an advanced version of a basic text editor. It’s an ideal solution if you need multi-language support.

Users love Sublime Text’s clean interface, robust performance, and advanced features.

What really makes Sublime Text unique is the ability to customize anything. The tool gives you total flexibility, as the settings can be modified on a per-project and per-file type basis. Nearly every aspect of Sublime Text can be customized with JSON files.

Use Sublime Text for customizing symbol indexing on a per-syntax basis. Customize the menus, macros, key bindings, completions, snippets, and so much more.

Another cool feature of Sublime Text is the “Goto Anything” capability. This shortcut allows you to open files or jump to lines, symbols, or words with just a few clicks. Overall, the feature really helps speed up the coding process and improves the user experience.

You’ll also benefit from top features like split editing, instant project switching, and the ability to make multiple changes simultaneously.

Sublime Text is a cross-platform editor, available on Mac, Windows, and Linux. With just a single license, you can use it on every computer you own, regardless of the operating system.

You can download and try Sublime Text for free. But for continued use, the license is $80.

#2 – Atom — Best For Collaborative Coding

  • Free and open-source text editor
  • Real-time collaborative editing
  • Thousands of open-source add-ons
  • Helpful for knowledge sharing
Try for free

Atom is a free and open-source text editor. The tool was initially developed by GitHub, and it’s still maintained by the same community.

If you’re working on a team and need to write code collaboratively, Atom will be a top choice to consider.

The Teletype package offered from Atom supports shared workspaces and real-time editing.

Here’s how it works. A host user has the ability to invite collaborators to join. Once those collaborators are in, they can start editing in real-time. Even as the host user moves between different files, it’s easy for the collaborators to follow along.

Not only is this great for team projects, but it’s also really helpful for knowledge sharing.

Teletype is just one of many open-source packages offered by Atom. You can browse from thousands of other open-source packages to add features and functionality to your workspace. You’ll also benefit from features like autocompletion, split interfaces, cross-platform editing, and more.

Download and start using Atom today—it’s free.

#3 – UltraEdit — Best Versatility For Advanced Users

  • Hundreds of features
  • Fully customizable UI
  • Trusted by 4+ million users
  • 30-day money back guarantee
Starts at $79.95/year

UltraEdit is a robust and secure text editor that’s loaded with features. The tool definitely makes it easier for users to speed up their coding and reduce errors.

With 4+ million users trusting the platform, UltraEdit is one of the most powerful HTML editors on the planet.

UltraEdit can be used by programmers, web developers, database managers, system administrators, and more. It supports multiple languages and potential use cases.

Overall, UltraEdit is so feature-rich and powerful that a beginner will be overwhelmed by its capabilities. From text editing to web development and cloud services, the platform can handle everything.

Noteworthy features and highlights include:

  • Supports files larger than 4 GB
  • XML tree view, validation, reformatting, and more
  • Filtered spell checker
  • Hex editing
  • CSV data reformatting
  • Macros and scripts for automating editing
  • Block mode editing
  • Code syntax formatting for nearly all programming languages
  • Powerful bookmarking
  • Robust search
  • File compare

The list goes on and on. There are literally hundreds of features.

The UI is fully customizable as well, so you can make it your own. If you need a text editor that can do more than just help you write and edit basic HTML, look no further than UltraEdit.

UltraEdit starts at $79.95 per year, which includes up to five installs. The All Access version costs $99.95 per year. Your purchase is backed by a 30-day money-back guarantee.

#4 – Visual Studio Code — Best For Debugging Code

  • Free and open-source text editor
  • Integrates with Git and other SCM providers
  • In-editor debugging functionality
  • Easy to customize
Get it for free

Visual Studio Code is another free and open-source editor. This modern tool is designed to help build and debug web applications and cloud applications.

The software is available on Windows, Linux, and Mac.

While lots of HTML editors help you avoid errors and make fewer mistakes, not all of them have debugging tools. That’s where Visual Studio Code really shines and stands out from alternatives on the market.

Print statement debugging is outdated. With Visual Studio Code, you can debug your code directly from the editor. Use the tool to debug with call stacks, breakpoints, and an interactive console.

Another cool feature of Visual Studio Code is that it works directly with Git and other SCM providers. In fact, Git commands are built into the software. You’ll be able to push and pull your work from whatever hosted SCM service you’re using.

I like Visual Studio Code because you can customize the platform and extend functionality based on your needs and preferences. Simply install an extension if you want a new theme, want to add new languages, or connect to a third-party service.

All of the extensions run as a separate process, so your HTML editor won’t be slowed down.

Download Visual Studio Code and get started for free.

#5 – BBEdit — Best For Simple HTML Editing

  • Simple, straightforward functionality
  • Built for macOS
  • Special Mac Store pricing
  • 30-day free trial
Try for free

BBEdit is an HTML and text editor built for macOS. While so many HTML editors on the market are made for advanced users and professional developers, BBEdit stands out for its simplicity.

You’ll obviously still need to have a coding background. But BBEdit delivers basic functionality without complex bells and whistles.

The tool is trusted by software developers, writers, and web authors.

BBEdit has features for basic editing and searching. It can also be used for manipulating prose, textual data, and source code.

Other noteworthy features and highlights of BBEdit include:

  • FTP and SFTP open and save
  • Search and replace across multiple files
  • Code folding
  • Function navigation and syntax coloring for source code
  • Text and code completion
  • Grep pattern matching

Again, all of these will accommodate basic needs. But if you’re looking for something more advanced like a tool for debugging or checking for errors, BBEdit falls a bit short.

You can try all of BBEdit’s features during a 30-day trial period. For continued use, individual licenses start at $49.99. Alternatively, you can subscribe from the Mac App Store for $3.99 per month or $39.99 per year.

#6 – NoteTab — Best For Fast HTML Coding

  • Packed with productivity tools
  • Syntax highlighting built in
  • Only $38.95
  • 90-day money back guarantee
Try for 30 days free

If you’ve been using Notepad to write code and you’re looking for an upgrade, NoteTab will be a great option to consider. This award-winning HTML editor is known for its versatility and speed.

What’s unique about NoteTab is that it combines the benefits of a dedicated HTML editor with a feature-packed text editor.

Webmasters love NoteTab because the features are designed to speed up the HTML coding process. Huge collections of HTML code snippets are available at your fingertips. From single characters to complete web pages, you can implement these snippets into your code.

Instead of spending time copying, pasting, and editing large blocks of HTML and tags, NoteTab streamlines the way you work. You’ll be able to:

  • Drag-and-drop snippets of code
  • Insert code snippets with your keyboard
  • Leverage autocomplete to add tags while you’re typing
  • Add tags and HTML code from the toolbar
  • Set attributes for tags

Like many other HTML editors, NoteTab has syntax highlighting. I really like that they don’t go overboard with a crazy number of different colors, so it’s easier for you to work without feeling overwhelmed.

If you’re looking for a way to boost your productivity while writing HTML, NoteTab has you covered.

The software costs $38.95. Take advantage of NoteTab’s 30-day free trial to test it out before you buy. All purchases are backed by a 90-day money-back guarantee.

#7 – TinyMCE — Best Flexible and Powerful WYSIWYG HTML Editor

  • Get set up in five minutes or less
  • Over 50 plugins included
  • Free open-source option included
  • Paid plans start at $25
Try for 14 days free

TinyMCE is built to simplify content creation on websites. This WYSIWYG (What You See Is What You Get) HTML editor has 350+ million downloads.

As one of the most popular rich text editors on the market today, TinyMCE is really easy to use. They have a simple guide that can get you up and running in less than five minutes. Plus, they provide you with everything you need to customize TinyMCE for your needs.

TinyMCE offers over 50 plugins to extend its functionality. That flexibility can be added with just a single line of code. In addition to its flexibility, TinyMCE is highly customizable. The plugins are super easy to configure, and there are 100+ different customization options.
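
As a rough sketch of how little setup that can mean (the script URL is a placeholder and the plugin list is illustrative, not tied to a specific plan), a page loads the library and attaches the editor to every textarea with a single init call:

<!-- Illustrative quick start: load TinyMCE, then turn every <textarea> into an editor -->
<script src="https://cdn.example.com/tinymce/tinymce.min.js"></script>
<script>
  tinymce.init({
    selector: 'textarea',        // which elements become editors
    plugins: 'lists link table'  // extend functionality with plugins
  });
</script>

<textarea>Hello from TinyMCE!</textarea>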

Another cool part about TinyMCE is its versatility. It can be used for such a wide range of possibilities. Common use cases include:

  • CMS platforms
  • Email marketing
  • CRM and marketing automation
  • Learning management systems
  • Content creation in SaaS

No matter what you’re building, TinyMCE can help. The WYSIWYG editor is really user-friendly, so you don’t have to be an advanced developer to get the most out of this tool.

There’s an open-source version of TinyMCE that’s free forever. For advanced features, upgrade to Cloud Essential or Cloud Professional. These start at $25 and $75 per month, respectively. You can try either paid version for free with a 14-day trial.

How to Find the Best HTML Editor For You

As you’ve likely learned by now, HTML editors are not created equal. The best option for me might not be the best choice for you.

With that said, there are definitely certain factors that should be taken into consideration as you’re evaluating HTML editors and comparing them side-by-side. I’ll explain these in greater detail below to help you make an informed decision.

Text Editors vs. WYSIWYG Editors

Generally speaking, HTML editors fall into one of these two categories. The biggest difference between the two is that a WYSIWYG (What You See Is What You Get) tool is connected to a visual editor.

So when you’re working in a WYSIWYG editor, you can see exactly how things will be displayed when everything is published on a web page. Then you can generate HTML code.

With a text editor, you’ll have to write all of your HTML code manually. Lots of developers still prefer to use these, but WYSIWYG tools are great for beginners and people who want to save time and avoid doing things by hand.

Time-Saving Tools

One of the benefits of using an HTML editor as opposed to a plain notepad is the fact that you’ll have tons of different tools at your disposal. This can make your life much easier and save you time as you’re writing code.

Features like autocomplete, find and replace, and syntax highlighting are just a few things to look out for. With something like autocomplete, the HTML editor will provide you with suggestions based on what you’re doing. So with a single click, you can fill in blocks of code.

Collaboration

In some cases, you might be working on a project with team members or other users. Not every HTML editor is ideal for this. With some options, you’ll be forced to copy and paste and do a ton of manual work to collaborate with others.

But some of the best HTML editors are specifically designed for collaborative work. You can even find tools that allow you to invite collaborators to edit your code in real-time. So you and a few other people on your team can all be working on the same thing simultaneously and see those updates as they happen.

Customization and Extensibility

Some HTML editors don’t have a ton of features out-of-the-box. But you can customize the editor and extend the functionality by adding plugins or extensions.

If you’re using a free and open-source tool, expect to do lots more customizing on your own. Paid HTML editors typically come with more functionality.

Avoid rigid platforms if customization is important to you. But if you just need a basic HTML editor, this is something that you can probably look past and not worry about.

Summary

What’s the best HTML editor? It depends on what you’re looking for.

Sublime Text is my top recommendation for anyone who prioritizes total customization when they’re coding.

Atom is the best option for collaborative coding. UltraEdit is better for advanced users, and it’s extremely versatile.

If you want an HTML editor with debugging capabilities, check out Visual Studio Code.

For those of you searching for a solution towards the simple end of the spectrum, try BBEdit or NoteTab.

TinyMCE is perfect if you want a WYSIWYG HTML editor.

Regardless of your needs, you can find what you’re looking for based on the reviews and recommended use cases in this guide.

Instant API Backends

Is backend creation blocking your Mobile App Dev? Or strategic B2B/internal integration?

Imagine that you could...

  • Create a database API, instantly?
  • And declare logic, with spreadsheet-like rules?
  • Plus an instant Web App, to engage Business Users?

You want margin-inline-start

David Bushell in “Changing CSS for Good”:

I’m dropping “left” and “right” from my lexicon. The new CSS normal is all about Logical Properties and Values […] It can be as easy as replacing left/right with inline start/end. Top/bottom with block start/end. Normal inline flow, Flexbox, and Grid layouts reverse themselves automatically.

I figured it made sense as a “You want…” style post. Geoff has been documenting these properties nicely in the Almanac.
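
As a tiny sketch of the swap (class names made up for the example), the logical property follows the writing direction while the physical one does not:

<style>
  /* Physical property: always the left edge, regardless of writing direction */
  .old { margin-left: 2rem; }

  /* Logical property: the inline-start edge (left in LTR, right in RTL) */
  .new { margin-inline-start: 2rem; }
</style>

<div dir="rtl">
  <p class="old">Still pushed from the left, even in right-to-left text.</p>
  <p class="new">Pushed from the reading-start side, which is the right here.</p>
</div>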


The post You want margin-inline-start appeared first on CSS-Tricks.


Deliver Enhanced Media Experiences With Google’s Core Web Vitals

Hello! Satarupa Chatterjee from Cloudinary. There is a big change coming from Google in May 2021 having to do with their Core Web Vitals (CWVs). It’s worth paying attention here, as this is going to be an SEO factor.

I recently spoke with Tamas Piros about CWVs. The May 2021 update will factor in CWVs, along with other factors like mobile-friendliness and safe browsing, to generate a set of benchmarks for search rankings. Doubtless, the CWVs will directly affect traffic for websites and apps alike. Tamas is a developer-experience engineer at Cloudinary, a media-optimization expert, and a Google developer-expert in web technologies and web performance.

Here’s a written version of the video interview, where the questions (Qs) are me, Satarupa, asking and Tamas answering (As).

Q: How did Google arrive at the three Core Web Vitals and their values?

A: As a dominant force in the search space, Google has researched in depth what constitutes a superb user experience, arriving at three important factors, which the company calls, collectively, the Core Web Vitals.

Before explaining them, I’d like to recommend an informative article, published last May on the Chromium Blog, titled The Science Behind Web Vitals. At the bottom of the piece are links to papers on the research that led to the guidelines for accurately evaluating user experiences.

Now back to the three Core Web Vitals. The first one affects page-load speed, which Google calls Largest Contentful Paint (LCP) with a recommendation of 2.5 seconds or less for the largest element on a page to load.

The second metric is First Input Delay (FID), which is a delta between a user trying to interact with a page, and the browser effectively executing that action. Google recommends 100 milliseconds or less. 

The third and last metric is Cumulative Layout Shift (CLS), which measures how stable a site is while it’s loading or while you’re interacting with it. In other words, it is a measurement of individual layout shifts as well as unexpected layout shifts that happen during the lifespan of a page. The calculation involves impact and distance fractions, which are multiplied together to give a final value. Google advocates keeping this value at 0.1 or less.

Q: How do the Core Web Vitals affect e-commerce?

A: Behind the ranking of Google search results are many factors, such as whether you use HTTPS and how you structure your content. Let’s not forget that relevant and well-presented content is as important as excellent page performance. The difference that Core Web Vitals will make cannot be overstated. Google returns multiple suggestions for every search; however, remember that good relevance is going to take priority. In other words, good page experience will not override having great, relevant content. For example, if you search for Cloudinary, Google will likely show the Cloudinary site at the top of the results page. Page experience will become relevant when there are multiple available results for a more generic search, such as ‘best sports car’. In this case, Google establishes that ranking based on the page’s user experience, too, which is determined by the Core Web Vitals.

Q: What about the other web vitals, such as the Lighthouse metrics? Do they still matter?

A: Businesses should focus primarily on meeting or staying below the threshold of the Core Web Vitals. However, they must also keep in mind that their page load times could be affected by other metrics, such as the length of time the first purchase takes and the first contentful paint.

For example, to find out what contributes to a bad First Input Delay (FID), check the Total Blocking Time and Time to Interactive. Those are also vitals, just not part of the Core Web Vitals. You can also customize metrics with the many robust APIs from Google. Such metrics could prove to be invaluable in helping you identify and resolve performance issues.

Q: Let’s talk about the Largest Contentful Paint metric, called LCP. Typically, the heaviest element on a webpage or in an app is an image. How would you reduce LCP and keep it below the Google threshold of 2.5 seconds?

A: What’s important to remember with regards to LCP is that we are talking about the largest piece of content that gets loaded on a page, and that is visible in the viewport (that is, it’s visible above the fold). Due to popular UX design patterns it’s likely that the largest, visible element is a hero image.

Google watches for <img> elements as well as <image> elements inside an SVG element. Video elements are considered too but only if they contain a poster attribute. Also of importance to Google are block-level elements, such as text-related ones like <h1>, <h2>, etc., and <span>.

All that means that you must load the largest piece of content as fast as possible. If your LCP is a hero image, be sure to optimize it—but without degrading the visual effects. Check out Cloudinary’s myriad effective and intuitive options for optimization. If you can strike a good balance between the file size and the visual fidelity of your image, your LCP will shine. 
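
As a minimal sketch of what that optimization can look like in markup (the file names are hypothetical), explicit dimensions let the browser reserve space for the hero up front, while srcset and sizes let it download the smallest candidate that still looks sharp in the viewport:

<!-- Hypothetical hero image: sized up front, with responsive candidates -->
<img
  src="/images/hero-1200.jpg"
  srcset="/images/hero-600.jpg 600w,
          /images/hero-1200.jpg 1200w,
          /images/hero-2000.jpg 2000w"
  sizes="100vw"
  width="1200"
  height="600"
  alt="Product hero image">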

Q: Suppose it’s now May 2021. What’s the likely effect of Google’s new criteria for search rankings for an e-commerce business that has surpassed the thresholds of all three or a couple of the Core Web Vitals?

A: According to Google, sites that meet the thresholds of the Core Web Vitals enjoy a 24-percent lower abandonment rate. The more you adhere to Google’s guidelines, the more engaging your site or app becomes and the faster your sales will grow. Needless to say, an appealing user experience attracts visitors and retains them, winning you an edge over the competition. Of course bear in mind the other search optimization guidelines set out by Google.

Again, be sure to optimize images, especially the most sizable one in the viewport, so that they load as fast as possible.

Q:  It sounds like e-commerce businesses should immediately start exploring ways to meet or better the vitals’ limits. Before we wrap up, what does the future look like for Core Web Vitals?

A: Late last year, Google held a conference and there were multiple talks touching upon this exact subject. All major changes will go into effect on a per-year basis, and Google has committed to announcing them well in advance.

Behind the scenes, Google is constantly collecting data from the field and checking them against user expectations. The first contentful paint, which I mentioned before, is under consideration as another Core Web Vital. Also, Google is thinking about reducing the yardstick for the First Input Delay metric—the FID, remember?—from 100 milliseconds to 75 or even 50.

Beyond that, Google has received a lot of feedback about some of the Core Web Vitals not working well for single-page apps. That’s because those apps are loaded only once. Even if they score an ideal Cumulative Layout Shift—that’s CLS—as you click around the page, things might move around and bring down the score. Down the road, Google might modify CLS to better accommodate single-page apps. 

Also on Google’s radar screen are metrics for security, privacy, and accessibility. Google promises to fine-tune the current metrics and launch new ones more frequently than major releases, including the introduction of new Core Web Vital metrics. 

So, change is the only constant here. I see a bright future for the vitals and have no doubt that we’re in good hands. Remember that Google vigilantly collects real user data as analytics to help figure out the appropriate standards. As long as you keep up with the developments and ensure that your site or app complies with the rules, you’ll get all greens throughout the scoreboard. That’s a great spot to be in.

Cloudinary offers myriad resources on media experience (MX), notably the MX Matters podcast, which encompasses experts’ take on the trends in today’s visual economy along with bulletins on new products and enhancements. Do check them out.


The post Deliver Enhanced Media Experiences With Google’s Core Web Vitals appeared first on CSS-Tricks.


A Guide to Payment Gateway Testing

It’s easy to see why more consumers are switching to digital payment methods: it’s more convenient, easier to track, and reduces contact. To keep up with consumer expectations, eCommerce websites must ensure their payment channels are secure and seamless. Any issue with online payments directly impacts revenue, website traffic, and customer loyalty.

In short, payment channels that regularly undergo payment gateway testing are ready for market, ready for the consumer, and ready to combat any cybercriminal attempts. But how can QA testers keep track of every test case scenario for payment gateway channels?

The Best Defect Tracking Tools in 2021

It’s very rare to find a defect-free product. Every software application has some kind of bug crawling through its code, even those designed by highly talented developers and tested by vigilant QA teams. And these bugs can multiply quickly when a disjointed process—or worse, no process—is in place.

Perfection, of course, is always the goal when QA testing. But reducing the number of defects within the system while adhering to tight deadlines and lean teams can make this goal seem unattainable.

Choosing A New Serverless Database Technology At An Agency (Case Study)

Adopting a new technology is one of the hardest decisions for a technologist in a leadership role. This is often a large and uncomfortable area of risk, whether you are building software for another organization or within your own.

Over the last twelve years as a software engineer, I’ve found myself in the position of having to evaluate a new technology at increasing frequency. This may be the next frontend framework, a new language, or even entirely new architectures like serverless.

The experimentation phase is often fun and exciting. It is where software engineers are most at home, embracing the novelty and euphoria of “aha” moments while grokking new concepts. As engineers, we like to think and tinker, but with enough experience, every engineer learns that even the most incredible technology has its blemishes. You just haven’t found them yet.

Now, as the co-founder of a creative agency, my team and I are often in a unique position to use new technologies. We see many greenfield projects, which become the perfect opportunity to introduce something new. These projects also see a level of technical isolation from the larger organization and are often less burdened by prior decisions.

That being said, a good agency lead is entrusted to care for someone else’s big idea and deliver it to the world. We have to treat it with even more care than we would our own projects. Whenever I’m about to make the final call on a new technology, I often ponder this piece of wisdom from Stack Overflow co-founder Joel Spolsky:

“You have to sweat and bleed with the thing for a year or two before you really know it’s good enough or realize that no matter how hard you try you can’t...”

This is the fear, this is the place that no tech lead wants to find themselves in. Choosing a new technology for a real-world project is hard enough, but as an agency, you have to make these decisions with someone else’s project, someone else’s dream, someone else’s money. At an agency, the last thing you want is to find one of those blemishes near the deadline for a project. Tight timelines and budgets make it nearly impossible to reverse course after a certain threshold is crossed, so finding out a technology can’t do something critical or is unreliable too late into a project can be catastrophic.

Throughout my career as a software engineer, I’ve worked at SaaS companies and creative agencies. When it comes to adopting a new technology for a project these two environments have very different criteria. There is overlap in criteria, but by and large, the agency environment has to work with rigid budgets and rigorous time constraints. While we want the products we build to age well over time, it’s often more difficult to make investments in something less proven or to adopt technology with steeper learning curves and rough edges.

That being said, agencies also have some unique constraints that a single organization may not have. We have to bias for efficiency and stability. The billable hour is often the final unit of measurement when a project is complete. I’ve been at SaaS companies where spending a day or two on setup or a build pipeline is no big deal.

At an agency, this type of time cost puts strain on relationships as finance teams see narrowing profit margins for little visible results. We also have to consider the long-term maintenance of a project, and conversely what happens if a project needs to be handed back off to the client. We therefore must bias for efficiency, learning curve, and stability in the technology we choose.

When evaluating a new piece of technology I look at three overarching areas:

  1. The Technology
  2. The Developer Experience
  3. The Business

Each of these areas has a set of criteria I like to have met before I start really diving into the code and experimenting. In this article, we’ll take a look at these criteria, use the example of considering a new database for a project, and review it at a high level under each lens. Working through a tangible decision like this will help demonstrate how we can apply this framework in the real world.

The Technology

The very first thing to take a look at when evaluating a new technology is if that solution can solve the problems it claims to solve. Before diving into how a technology can help our process and business operations, it’s important to first establish that it is meeting our functional requirements. This is also where I like to take a look at what existing solutions we are using and how this new one stacks up against them.

I’ll ask myself questions like:

  1. Does it at a minimum solve the problem my existing solution does?
  2. In what ways is this solution better?
  3. In what ways is it worse?
  4. For areas that it is worse, what will it take to overcome those shortcomings?
  5. Will it take the place of multiple tools?
  6. How stable is the technology?

Our Why?

At this point, I also want to review why we are seeking another solution. A simple answer is that we are encountering a problem existing solutions don’t solve. However, this is rarely the case. We have solved many software problems over the years with all of the technology we have today. What typically happens is that we get turned onto a new technology that makes something we are currently doing easier, more stable, faster, or cheaper.

Let’s take React as an example. Why did we decide to adopt React when jQuery or Vanilla JavaScript was doing the job? In this case, using the framework highlighted how this was a much better way to handle stateful frontends. It became faster for us to build things like filtering and sorting features by working with data structures instead of direct DOM manipulation. This was a saving in time and increased stability of our solutions.

TypeScript is another example: we adopted it because we found it increased the stability and maintainability of our code. When adopting new technologies, there often isn’t a clear problem we are looking to solve; more often we are simply trying to stay current and, in doing so, discover solutions that are more efficient and stable than what we are currently using.

In the case of a database, we were specifically considering moving to a serverless option. We had seen a lot of success with serverless applications and deployments reducing our overhead as an organization. One area where we felt this was lacking was our data layer. We saw services like Amazon Aurora, Fauna, Cosmos and Firebase that were applying serverless principles to databases and wanted to see if it was time to take the leap ourselves. In this case, we were looking to lower our operational overhead and increase our development speed and efficiency.

It’s important at this level to understand your why before you start diving into new offerings. This may be because you are solving a novel problem, but far more often you are looking to improve your ability to solve a type of problem you are already solving. In that case, you need to take inventory of where you have been to figure out what would provide a meaningful improvement to your workflow. Building upon our example of looking at serverless databases, we’ll need to take a look at how we are currently solving problems and where those solutions fall short.

Where we have been...

As an agency, we have previously used a wide range of databases including but not limited to MySQL, PostgreSQL, MongoDB, DynamoDB, BigQuery, and Firebase Cloud Storage. The vast majority of our work centered around three core databases though: PostgreSQL, MongoDB, and Firebase Realtime Database. Each one of these does, in fact, have semi-serverless offerings, but some key features of newer offerings had us re-evaluating our previous assumptions. Let’s take a look at our historical experience with each of these first and why we are left considering alternatives in the first place.

We typically chose PostgreSQL for larger, long-term projects, as it is the battle-tested gold standard for almost everything. It supports classic transactions and normalized data, and is ACID compliant. There is a wealth of tools and ORMs available in almost every language, and it can even be used as an ad-hoc NoSQL database thanks to its JSON column support. It integrates well with many existing frameworks, libraries, and programming languages, making it a true go-anywhere workhorse. It is also open source and therefore doesn’t lock us into any one vendor. As they say, nobody ever got fired for choosing Postgres.
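To show what that ad-hoc NoSQL usage can look like from Node, here is a rough sketch using the pg client; the events table, its data JSONB column, and the DATABASE_URL variable are hypothetical examples, not from any particular project:

```ts
// Sketch: using a Postgres JSONB column as an ad-hoc document store from Node.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Store an arbitrary JSON payload without defining a fixed schema for it.
async function saveEvent(payload: Record<string, unknown>) {
  await pool.query("INSERT INTO events (data) VALUES ($1::jsonb)", [
    JSON.stringify(payload),
  ]);
}

// Query document fields with JSONB operators, schemaless but still inside Postgres.
async function findByType(type: string) {
  const { rows } = await pool.query(
    "SELECT data FROM events WHERE data->>'type' = $1",
    [type]
  );
  return rows.map((r) => r.data);
}
```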

That being said, we gradually found ourselves using PostgreSQL less and less as we became more of a Node-oriented shop. We found the ORMs for Node to be lackluster, often requiring more custom queries (although this has become less of a problem since), and NoSQL felt like a more natural fit when working in a JavaScript or TypeScript runtime. We still had projects, such as e-commerce workflows, that could be done quite quickly with classic relational modeling. However, dealing with local database setup, unifying the testing flow across teams, and managing local migrations were things we didn’t love and were happy to leave behind as cloud-based NoSQL databases became more popular.

MongoDB increasingly became our go-to database as we adopted Node.js as our preferred back end. Working with MongoDB Atlas made it easy to spin up quick development and testing databases that our team could use. For a while, MongoDB was not ACID compliant, didn’t support transactions, and discouraged join-like operations, so for e-commerce applications we still reached for Postgres most often. That being said, there is a wealth of libraries that go with it, and Mongo’s query language and first-class JSON support gave us a speed and efficiency we had not experienced with relational databases. MongoDB has since added support for ACID transactions, but for a long time their absence was the chief reason we would opt for Postgres instead.
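As a quick illustration of those query ergonomics, here is a rough sketch using the official MongoDB Node driver; the connection string, database, collection, and field names are hypothetical:

```ts
// Sketch: documents go in and come out as plain JSON-like objects, no ORM required.
import { MongoClient } from "mongodb";

const client = new MongoClient(
  process.env.MONGODB_URI ?? "mongodb://localhost:27017"
);

async function recentOrders(customerId: string) {
  await client.connect();
  const orders = client.db("shop").collection("orders");

  // Query, sort, and limit directly against the document structure.
  return orders
    .find({ customerId, status: { $in: ["paid", "shipped"] } })
    .sort({ createdAt: -1 })
    .limit(20)
    .toArray();
}
```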

MongoDB also introduced us to a new level of flexibility. In the middle of an agency project, requirements are bound to change; no matter how hard you defend against it, there is always a last-minute data requirement. With NoSQL databases in general, the flexibility of the data structure made those kinds of changes less painful. We didn’t end up with a folder full of migration files that added, removed, and re-added columns before a project even saw daylight.

MongoDB Atlas was also pretty close to what we wanted in a database cloud service. I like to think of Atlas as a semi-serverless offering, since you still have some operational overhead in managing it: you have to provision a database of a certain size and select an amount of memory upfront. These things will not scale automatically, so you need to monitor the cluster and decide when it is time to provision more space or memory. In a truly serverless database, this would all happen automatically and on demand.

We also utilized Firebase Realtime Database for a few projects. This was indeed a serverless offering where the database scales up and down on-demand, and with pay-as-you-go pricing, it made sense for applications where the scale was not known upfront and the budget was limited. We used this instead of MongoDB for short-lived projects that had simple data requirements.

One thing we did not enjoy about Firebase was that it felt further from the relational model built around normalized data that we were used to. Keeping the data structures flat meant we often had more duplication, which could get ugly as a project grew. You end up having to update the same data in multiple places, or joining different references together with multiple queries that become hard to reason about in the code. While we liked Firebase, we never really fell in love with the query language and sometimes found the documentation to be lackluster.
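Here is a small sketch of that duplication problem using the Firebase Admin SDK against the Realtime Database; the paths and fields are hypothetical examples of denormalized author data, not from a real project:

```ts
// Sketch: with denormalized data, one change fans out to many paths in a single
// multi-path write, because the same value is stored in multiple places.
import { initializeApp } from "firebase-admin/app";
import { getDatabase } from "firebase-admin/database";

initializeApp({ databaseURL: process.env.FIREBASE_DATABASE_URL });

async function renameUser(userId: string, newName: string, postIds: string[]) {
  const db = getDatabase();

  // Every copy of the author's name has to be updated together.
  const updates: Record<string, string> = {
    [`/users/${userId}/name`]: newName,
  };
  for (const postId of postIds) {
    updates[`/posts/${postId}/authorName`] = newName;
  }

  await db.ref().update(updates);
}
```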

In general, both MongoDB and Firebase shared a focus on denormalized data, and without access to efficient transactions, workflows that were easy to model in a relational database led to more complex code at the application layer with their NoSQL counterparts. If we could get the flexibility and ease of these NoSQL offerings with the robustness and relational modeling of a traditional SQL database, we would really have found a great match. We felt MongoDB had the better API and capabilities, while Firebase had the truly serverless operational model.

Our Ideal

At this point, we can start looking at what new options we will consider. We’ve clearly defined our previous solutions and we’ve identified the things that are important for us to have at a minimum in our new solution. We not only have a baseline or minimum set of requirements, but we also have a set of problems that we’d like the new solution to alleviate for us. Here are the technical requirements we have:

  1. Serverless operationally with on-demand scale
  2. Flexible modeling (schemaless)
  3. No reliance on migrations or ORMs
  4. ACID compliant transactions
  5. Supports relationships and normalized data
  6. Works with both serverless and traditional backends

Now that we have a list of must-haves, we can actually evaluate some options. It may not be important that the new solution nails every target; it may be enough that it hits a combination of features that no existing solution covers. For instance, with databases it was long the case that if you wanted schemaless flexibility, you had to give up ACID transactions.

An example from another domain: if you want TypeScript validation in your template rendering, you need to be using TSX and React. Options like Svelte or Vue give you this only partially through their template rendering. So a solution that offered the tiny footprint and speed of Svelte with the template-level type checking of React and TypeScript could be compelling enough to adopt, even if it were missing another feature. The balance of wants and needs will change from project to project; it is up to you to figure out where the value lies and how to tick the most important boxes in your analysis.

We can now take a look at a candidate and see how it measures up against our ideal. Fauna is a serverless database solution that boasts on-demand scale with global distribution. It is a schemaless database that provides ACID-compliant transactions and supports relational queries and normalized data as a feature. Fauna can be used in both serverless applications and more traditional backends, and provides libraries for the most popular languages. It additionally provides workflows for authentication as well as easy and efficient multi-tenancy. These are solid additional features to note, because they could be the swaying factors when two technologies are neck and neck in our evaluation.

After looking at all of these strengths, we have to evaluate the weaknesses. One of them is that Fauna is not open source. That means there are risks of vendor lock-in, or of business and pricing changes that are out of your control. Open source can be nice because you can often pick the technology up and take it to another vendor if you please, or potentially contribute back to the project.

In the agency world, vendor lock-in is something we have to watch closely, not so much because of price, but because the viability of the underlying business matters. Having to change databases on a project, whether it is in the middle of development or a few years old, is disastrous for an agency. Often a client will have to foot the bill for it, which is not a pleasant conversation to have.

One other weakness we were concerned with is the focus on JAMstack. While we love JAMstack, we find ourselves building a wide variety of traditional web applications more often. We want to be sure that Fauna continues to support those use cases. We had a bad experience in the past with a hosting provider that went all-in on JAMstack and we ended up having to migrate a rather large swath of sites from the service, so we want to feel confident that all use cases will continue to see solid support. Right now, this seems to be the case, and the serverless workflows provided by Fauna actually can complement a more traditional application quite nicely.

At this point, we’ve done our functional research, and the only way to know if the solution is viable is to get down and write some code. In an agency environment, we can’t just take weeks out of the schedule for people to evaluate multiple solutions. This is the nature of working in an agency versus a SaaS environment: in the latter, you might build a few prototypes to get to the right solution. In an agency, you get a few days to experiment, or maybe the opportunity to do a side project, but by and large we have to narrow things down to one or two technologies at this stage and then put fingers to the keyboard.

The Developer Experience

Judging the experience side of a new technology is perhaps the most difficult of the three areas since it is by nature subjective. It will also have variability from team to team. For example, if you asked a Ruby programmer, a Python programmer, and a Rust programmer about their opinions on different language features, you will get quite an array of responses. So, before you begin to judge an experience, you must first decide what characteristics are most important to your team overall.

For agencies I think there are two major bottlenecks that come up with regard to developer experience:

  1. Setup time and configuration
  2. Learnability

Both of these affect the long-term viability of a new technology in different ways. Keeping transient teams of developers in sync at an agency can be a headache, and tools with lots of upfront setup cost and configuration are notoriously difficult for agencies to work with. The other concern is learnability: how easily developers can grow into the new technology. We’ll go into both in more detail and why they are my baseline when starting to evaluate developer experience.

Setup Time And Configuration

Agencies tend to have little patience and time for configuration. Personally, I love sharp, ergonomic tools that let me get to work on the business problem at hand quickly. A number of years ago I worked for a SaaS company with a complex local setup that involved many configuration steps and often failed at random points in the process. Once you were set up, the conventional wisdom was not to touch anything and to hope you weren’t at the company long enough to have to set it up again on another machine. I’ve met developers who greatly enjoyed configuring each little piece of their Emacs setup and thought nothing of losing a few hours to a broken local environment.

In general, I have found that agency engineers have a disdain for these kinds of things in their day-to-day work. They may tinker with such tools at home, but on a deadline there’s nothing like tools that just work. At agencies, we would typically prefer to learn a few new things that work well and consistently, rather than be able to configure each piece of tech to every individual’s personal taste.

One good thing about working with a cloud platform that is not open source is that the vendor owns the setup and configuration entirely. While the downside is vendor lock-in, the upside is that these tools usually do the thing they were built to do well. There is no tinkering with environments, no local setup, and no deployment pipelines, and we have fewer decisions to make.

This is inherently the appeal of serverless. Serverless in general relies more heavily on proprietary services and tools. We trade flexibility in hosting and source code for greater stability and the ability to focus on the business domain we are trying to solve for. I’ll also note that if, while evaluating a technology, I already get the feeling we might need to migrate off the platform, that is usually a bad sign at the outset.

In the case of databases, a set-it-and-forget-it setup is ideal when working with clients whose database needs are ambiguous. We’ve had clients who were unsure how popular a program or application would be. We’ve had clients we technically were not contracted to support in this way who nonetheless called us in a panic when they needed us to scale their database or application.

In the past, we always had to factor in things like redundancy, data replication, and sharding for scale when we crafted our SOWs. Trying to cover each scenario while also being prepared to move a full book of business around if a database wasn’t scaling is an impossible situation to prepare for. In the end, a serverless database makes these things easier.

You never lose data, you don’t have to worry about replicating it across a network or provisioning a larger database and machine to run it on -- it all just works. We focus only on the business problem at hand; the technical architecture and scale are managed for us. For our development team, this is a huge win; we have fewer fire drills, less monitoring, and less context switching.

Learnability

There is a classic user experience measure that I think applies equally to developer experience: learnability. When designing for a certain user experience, we don’t just look at whether something is apparent or easy on the first try; most technology has more complexity than that. What matters is how easily a new user can learn and master the system.

When it comes to technical tools, especially powerful ones, it would be a lot to ask for there to be zero learning curve. Usually what we look for is great documentation for the most common use cases, and for that knowledge to be easy to build on quickly once we are in a project. Losing a little time to learning on the first project with a technology is okay. After that, we should see efficiency improve with each successive project.

What I look for specifically here is how we can leverage knowledge and patterns we already know to help shorten the learning curve. For instance, with serverless databases, there is going to be virtually zero learning curve for getting them set up in the cloud and deployed. When it comes to using the database one of the things I like is when we can still leverage all the years of mastering relational databases and apply those learnings to our new setup. In this case, we are learning how to use a new tool but it’s not forcing us to rethink our data modeling from the ground up.

As an example, when using Firebase, MongoDB, and DynamoDB, we found that each encouraged denormalized data rather than joining different documents. This created a lot of cognitive friction when modeling our data, as we needed to think in terms of access patterns rather than business entities. On the other side of this, Fauna allowed us to leverage our years of relational knowledge, as well as our preference for normalized data, when modeling.

The part we had to get used to was using indexes and a new query language to bring those pieces together. In general, I’ve found that preserving concepts that are a part of larger software design paradigms makes it easier on the development team in terms of learnability and adoption.
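To make that concrete, here is a rough sketch of what that index-plus-query-language pattern looks like with Fauna’s classic FQL driver for Node; the collections, the index name, and the secret variable are our own hypothetical examples rather than anything prescribed by Fauna:

```ts
// Sketch: normalized, join-like modeling in Fauna via an index over references.
import faunadb, { query as q } from "faunadb";

const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET ?? "" });

async function ordersForCustomer(customerId: string) {
  // "orders_by_customer" would be an index over the orders collection, keyed by a
  // reference to the customer document, so orders stay normalized instead of
  // embedding customer data in each order.
  return client.query(
    q.Map(
      q.Paginate(
        q.Match(
          q.Index("orders_by_customer"),
          q.Ref(q.Collection("customers"), customerId)
        )
      ),
      q.Lambda("orderRef", q.Get(q.Var("orderRef")))
    )
  );
}
```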

How do we know that a team is adopting and loving a new technology? I think the best sign is when we find ourselves asking whether other tools integrate with it. When a new technology reaches a level of desirability and enjoyment where the team is searching for ways to incorporate it into more projects, that is a good sign you have a winner.

The Business

In this section, we have to look at how a new technology meets our business needs. These include questions like:

  • How easily can it be priced and integrated into our support plans?
  • Can we transition it to clients easily?
  • Can clients be onboarded to this tool if need be?
  • How much time, if any, does this tool actually save?

The rise of serverless as a paradigm fits agencies well. When we talk about databases and DevOps, the need for specialists in these areas at agencies is limited. Often we are handing off a project when we are done with it or supporting it in a limited capacity long term. We tend to bias toward full-stack engineers as these needs outnumber DevOps needs by a large margin. If we hired a DevOps engineer they would likely be spending a few hours deploying a project and many more hours hanging out waiting for a fire.

In this regard, we always have some DevOps contractors at the ready, but do not staff these positions full time. This means we cannot rely on a DevOps engineer being available to jump on an unexpected issue. We know we can get better rates on hosting by going to AWS directly, but we also know that by using Heroku we can rely on our existing staff to debug most issues. Unless we have a client we need to support long term with specific backend needs, we default to managed platforms as a service.

Databases are no exception. We love leaning on services like MongoDB Atlas or Heroku Postgres to make this process as easy as possible. As more and more of our stack headed into serverless tools like Vercel, Netlify, or AWS Lambda, our database needs had to evolve with it. Serverless databases like Firebase, DynamoDB, and Fauna are great because they integrate well with serverless apps but also free our business completely from provisioning and scaling.

These solutions also work well for more traditional applications, where we don’t have a serverless application but we can still leverage serverless efficiencies at the database level. As a business, it is more productive for us to learn a single database that can apply to both worlds than to context switch. This is similar to our decision to adopt Node and isomorphic JavaScript (and TypeScript).

One of the downsides we have found with serverless has been coming up with pricing for clients whose services we manage. In a more traditional architecture, flat-rate tiers are easy to translate into a client rate, with predictable circumstances for incurring increases and overages. With serverless, this can be ambiguous. Finance people don’t typically like hearing things like “we charge a tenth of a penny for every read beyond one million,” and so on.

This is hard to translate into a fixed number, even for engineers, as we are often building applications whose usage we cannot predict. We often have to create tiers ourselves, but the many variables that go into the cost of a lambda can be hard to wrap your head around. Ultimately, pay-as-you-go pricing models are great for a SaaS product, but at an agency the accountants like more concrete and predictable numbers.

When it came to Fauna, this was definitely more ambiguous to figure out than, say, a standard MySQL database with flat-rate hosting for a set amount of space. The upside was that Fauna provides a nice calculator that we were able to use to put together our own pricing schemes.
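To make that more concrete, here is a toy estimator showing how metered usage can be turned into a flat tier we can put in front of a client. All of the rates and the rounding scheme are made up for illustration; they are not Fauna’s actual pricing.

```ts
// Toy sketch: converting pay-as-you-go usage into a predictable monthly tier.
type Usage = { reads: number; writes: number; storageGb: number };

// Hypothetical rates in USD, for illustration only.
const HYPOTHETICAL_RATES = {
  perMillionReads: 0.5,
  perMillionWrites: 2.5,
  perGbStoragePerMonth: 0.25,
};

function estimateMonthlyCost(u: Usage): number {
  return (
    (u.reads / 1_000_000) * HYPOTHETICAL_RATES.perMillionReads +
    (u.writes / 1_000_000) * HYPOTHETICAL_RATES.perMillionWrites +
    u.storageGb * HYPOTHETICAL_RATES.perGbStoragePerMonth
  );
}

// Pad the estimate and round up to a tier the finance team can put on an SOW.
function quoteTier(u: Usage, marginMultiplier = 1.5): number {
  return Math.ceil((estimateMonthlyCost(u) * marginMultiplier) / 25) * 25;
}
```

For example, with these made-up numbers, 10 million reads, 1 million writes, and 5 GB of storage estimate to roughly $8.75 a month and would be quoted as a $25 tier.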

Another difficult aspect of serverless is that many of these providers do not allow for an easy cost breakdown per hosted application. Heroku, for instance, makes this quite easy through pipelines and teams. We can even enter a client’s credit card for them in case they don’t want to use our hosting plans, and this can all be done within the same dashboard, so we don’t need multiple logins.

With other serverless tools this was much more difficult. Among the serverless databases we evaluated, Firebase supports splitting payments by project. With Fauna or DynamoDB this is not possible, so we have to do some work to monitor usage in their dashboards, and if a client wants to leave our service, we would have to transfer the database over to their own account.

Ultimately, serverless tools provide great business opportunities in terms of cost savings, management, and process efficiency. However, often they do prove challenging for agencies when it comes to pricing and account management. This is one area where we have had to leverage cost calculators to create our own predictable pricing tiers or set clients up with their own accounts so they can make the payments directly.

Conclusion

Adopting a new technology as an agency can be a difficult task. While we are in a unique position to work with new, greenfield projects that have opportunities for new technologies, we also have to consider the long-term investment they represent. How will they perform? Will our people be productive and enjoy using them? Can we incorporate them into our business offering?

You need to have a firm grasp of where you have been before you figure out where you want to go technologically. When evaluating a new tool or platform it’s important to think of what you have tried in the past and figure out what is most important to you and your team. We took a look at the concept of a serverless database and passed it through our three lenses -- the technology, the experience, and the business. We were left with some pros and cons and had to strike the right balance.

After evaluating serverless databases, we decided to adopt Fauna over the alternatives. We felt the technology was robust and ticked all the boxes in our technology filter. When it came to the experience, virtually zero configuration and the ability to leverage our existing knowledge of relational data modeling made it a winner with the development team. On the business side, serverless provides clear wins in efficiency and productivity; however, pricing and account management still present some difficulties. We decided the benefits in the other areas outweighed those difficulties.

Overall, we highly recommend giving Fauna a shot on one of your next projects. It has become one of our favorite tools and our go-to database for smaller serverless projects and even larger, more traditional backend applications. The community is very helpful, the learning curve is gentle, and we believe you’ll find levels of productivity you hadn’t reached with your existing databases.

When we first use a new technology on a project, we start with something either internal or on the smaller side. We try to mitigate the risk by wading into the water rather than leaping into the deep end with a large and complex project. As the team builds understanding of the technology, we start using it for larger projects, but only after we feel comfortable that it has handled similar use cases well for us in the past.

In general, it can take up to a year for a technology to become a ubiquitous part of most projects, so it is important to be patient. Agencies have a lot of flexibility, but they are also required to ensure stability in the products they produce; we don’t get a second chance. Always be experimenting and pushing your agency to adopt new technologies, but do so carefully and you will reap the benefits.


Inspirational Websites Roundup #24

It’s always such a pleasure to make these roundups of wonderful designs. It gives a glimpse into what the current trends are and what could be the next cool thing when it comes to colors, layout and typography.

This past month, some amazing designs were unleashed upon the web, reflecting an immense diversity of ideas. It’s exciting to see new shapes being used to enhance individual layouts. Expressive typography is becoming more prominent and is used to define the mood and character of each design.

I really hope you enjoy this roundup and find it inspirational!

Jonathan Alpmyr

Hanai World

Swissdent

Gunnar Freyr

François-Xavier Manceau

Cosmic Collusion

Marcus Eriksson

Bruno

Tangan

Descente Allterrain

JAM!

Allies for Talents and Brands

Web_vsign

Mars Rouge

Bridge Tour

Lamanna – Bakery

Château Boll

Armadillo

Stykka

The Hiring Chain

Garoa Skincare

Garet Font

Fabio Formato

Fat Free

Le chef d’orchestre du corps

komnata

Brandon Tyson

BLUE HAMHAM

Mank the Unmaking

école supérieure de mode

The post Inspirational Websites Roundup #24 appeared first on Codrops.