Front-End Testing is For Everyone

Testing is one of those things that you either get super excited about or you kinda close your eyes and walk away from. Whichever camp you fall into, I’m here to tell you that front-end testing is for everyone. In fact, there are many types of tests, and perhaps that is where some of the initial fear or confusion comes from.

I’m going to cover the most popular and widely used types of tests in this article. This might be nothing new to some of you, but it can at least serve as a refresher. Either way, my goal is that you’re able to walk away with a good idea of the different types of tests out there. Unit. Integration. Accessibility. Visual regression. These are the sorts of things we’ll look at together.

And not just that! We’ll also point out the libraries and frameworks that are used for each type of test, like Mocha, Jest, Puppeteer, and Cypress, among others. And don’t worry — I’ll avoid a bunch of technical jargon. That said, you should have some front-end development experience to understand the examples we’re going to cover.

OK, let’s get started!

What is testing?

Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test.

Cem Kaner, “Exploratory Testing” (November 17, 2006)

At its most basic, testing is an automated way to find errors in your work as early as possible. That way, you’re able to fix those issues before they make it into production. Tests also serve as a reminder that you may have forgotten to check your own work in a certain area, say accessibility.

In short, front-end testing validates that what people see on the site and the features they use on it work as intended.

Front-end testing is for the client side of your application. For example, front-end tests can validate that pressing a “Delete” button properly removes an item from the screen. However, it won’t necessarily check if the item was actually removed from the database — that sort of thing would be covered during back-end testing.

That’s testing in a nutshell: we want to catch errors on the client side and fix them before code is deployed.

Different tests look at different parts of the project

Different types of tests cover different aspects of a project, so it is important to differentiate them and understand the role each type plays. Confusing which tests do what makes for a messy, unreliable testing suite.

Ideally, you’d use several different types of tests to surface different types of possible issues. Some testing tools report a coverage metric that shows just how much of your code (as a percentage) is exercised by your tests. That’s a great feature, and while I’ve seen developers aim for 100% coverage, I wouldn’t rely on that metric alone. The most important thing is to make sure all possible edge cases are covered and taken into account.

So, with that, let’s turn our attention to the different types of testing. Remember, it’s not so much that you’re expected to use each and every one of these. It’s about being able to differentiate the tests so that you know which ones to use in certain circumstances.

Unit testing

Unit testing is the most basic building block for testing. It looks at individual components and ensures they work as expected. This sort of testing is crucial for any front-end application because, with it, your components are tested against how they’re expected to behave, which leads to a much more reliable codebase and app. This is also where things like edge cases can be considered and covered.

Unit tests are particularly great for testing APIs. But rather than making calls to a live API, hardcoded (or “mocked”) data makes sure that your test runs are consistent every time.
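
As a rough sketch of that idea (the endpoint and data below are made up purely for illustration), a function that normally calls a live API can be swapped for hardcoded data in the test, so the result never depends on the network:

// Real implementation: calls a live API
const getUser = (id) => fetch(`/api/users/${id}`).then((res) => res.json());

// In a test, we'd swap it for hardcoded ("mocked") data instead,
// so every run sees exactly the same response:
const mockGetUser = () => Promise.resolve({ id: 1, name: "Evgeny" });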

Let’s take a super simple (and primitive) function as an example:

const sayHello = (name) => {
  if (!name) {
    return "Hello human!";
  }

  return `Hello ${name}!`;
};

Again, this is a basic case, but you can see that it covers a small edge case where someone may have neglected to provide a first name to the application. If there’s a name, we’ll get “Hello ${name}!” where ${name} is what we expect the person to have provided.

“Um, why do we need to test for something small like that?” you might wonder. There are some very important reasons for this:

  • It forces you to think deeply about the possible outcomes of your function. More often than not, you really do discover edge cases, which you can then cover in your code.
  • Some other part of your code may rely on this edge case, and if someone deletes something important, the test will warn them that the code is relied upon and can’t simply be removed.

Unit tests are often small and simple. Here’s an example:

describe("sayHello function", () => {
  it("should return the proper greeting when a user doesn't pass a name", () => {
    expect(sayHello()).toEqual("Hello human!")
  })

  it("should return the proper greeting with the name passed", () => {
    expect(sayHello("Evgeny")).toEqual("Hello Evgeny!")
  })
})

describe and it are just syntactic sugar; they break the tests into logical blocks that are printed to the terminal. The most important lines are the ones with expect and toEqual. The expect function accepts the input we want to validate, while toEqual accepts the desired output. There are a lot of different functions and methods you can use to test your application.

Let’s say we’re working with Jest, a popular library for writing unit tests. In the example above, Jest will display “sayHello function” as a title in the terminal. Everything inside an it function is considered a single test and is reported in the terminal below that title, making everything very easy to read.

The green checkmarks mean both of our tests have passed. Yay!
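
Before we move on: toEqual is only one of those many functions. Here are a few other common Jest matchers, purely as a sketch, since none of these assertions are part of our example:

expect(sayHello("Evgeny")).toContain("Evgeny");  // substring check
expect(sayHello()).not.toBe("");                 // negated assertion
expect([1, 2, 3]).toHaveLength(3);               // array length
expect(() => JSON.parse("{oops")).toThrow();     // expecting an error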

Integration testing

If unit tests check the behavior of a block, integration tests make sure that blocks work flawlessly together. That makes integration testing super important because it opens up testing interactions between components. It’s very rare (if ever) that an application is composed of isolated pieces that function by themselves. That’s why we rely on integration tests.

Let’s go back to the function we unit tested, but this time use it in a simple React application. Let’s say that clicking a button triggers a greeting to appear on the screen. That means a test involves not only the function but also the HTML DOM and a button’s functionality. We want to test how all these parts play together.

Here’s the code for a <Greeting /> component we’re testing:

// useState comes from React; sayHello is the function from the unit testing
// example (the import path here is assumed)
import { useState } from "react";
import { sayHello } from "./sayHello";

export const Greeting = () => {
  const [showGreeting, setShowGreeting] = useState(false);

  return (
    <div>
      <p data-testid="greeting">{showGreeting && sayHello()}</p>
      <button data-testid="show-greeting-button" onClick={() => setShowGreeting(true)}>Show Greeting</button>
    </div>
  );
};

Here’s the integration test:

// render and fireEvent come from a DOM testing helper
// (@testing-library/react is assumed here)
import { render, fireEvent } from "@testing-library/react";
import { Greeting } from "./Greeting";

describe('<Greeting />', () => {
  it('shows correct greeting', () => {
    const screen = render(<Greeting />);
    const greeting = screen.getByTestId('greeting');
    const button = screen.getByTestId('show-greeting-button');

    expect(greeting.textContent).toBe('');
    fireEvent.click(button);
    expect(greeting.textContent).toBe('Hello human!');
  });
});

We already know describe and it from our unit test; they break the test up into logical parts. The render function displays the <Greeting /> component in an emulated DOM so we can test interactions with the component without touching the real DOM, which would be costly.

Next up, the test queries the <p> and <button> elements via their test IDs (greeting and show-greeting-button, respectively). We use test IDs because it’s easier to get the elements we want from the emulated DOM. There are other ways to query components, but this is how I do it most often.
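
If the rendering helper really is Testing Library (an assumption on my part, since the article never names it), those other queries look something like this, using roles and visible text instead of test IDs:

// Query by accessible role and label...
const button = screen.getByRole("button", { name: /show greeting/i });
// ...or by the text rendered on screen (once the greeting is visible)
const greeting = screen.getByText("Hello human!");

For this article, though, we’ll stick with test IDs.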

It’s not until the first expect that the actual integration test begins! We first check that the <p> tag is empty. Then we click the button by simulating a click event. And lastly, we check that the <p> tag contains “Hello human!” inside it. That’s it! All we’re testing is that an empty paragraph contains text after a button is clicked. Our component is covered.

We can, of course, add input where someone types their name and we use that input in the greeting function. However, I decided to make it a bit simpler. We’ll get to using inputs when we cover other types of tests.
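
That said, purely as a sketch, the input variation of this integration test could look something like the following. It assumes a hypothetical name-input test ID and that the component passes the typed value to sayHello, neither of which exists in the component above:

it("greets the name typed into the input", () => {
  const screen = render(<Greeting />);

  // fireEvent.change simulates typing into the (hypothetical) input
  fireEvent.change(screen.getByTestId("name-input"), { target: { value: "Evgeny" } });
  fireEvent.click(screen.getByTestId("show-greeting-button"));

  expect(screen.getByTestId("greeting").textContent).toBe("Hello Evgeny!");
});

But again, the simpler version is what we’ll actually run.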

Check out what we get in the terminal when running the integration test:

Terminal message showing a passed test like before, but now with a specific test item for showing the correct greeting. It includes the number of tests that ran, how many passed, how many snapshots were taken, and how much time the tests took, which was 1.085 seconds.
Perfect! The <Greeting /> component shows the correct greeting when clicking the button.

End-to-end (E2E) testing

  • Level: High
  • Scope: Tests user interactions in a real-life browser by providing it instructions for what to do and expected outcomes.
  • Possible tools: Cypress, Puppeteer

E2E tests are the highest level of testing in this list. E2E tests care only about how people see your application and how they interact with it. They don’t know anything about the code and the implementation.

E2E tests tell the browser what to do, what to click, and what to type. We can create all kinds of interactions that test different features and flows as the end user experiences them. It’s literally a robot that’s instructed to click through an application to make sure everything works.

E2E tests are similar to integration tests in some ways. However, E2E tests are executed in a real browser with a real DOM rather than something we mock up — we generally work with real data and a real API in these tests.

It is good to have full coverage with unit and integration tests. However, users can face unexpected behaviors when they run an application in the browser — E2E tests are the perfect solution for that.

Let’s look at an example using Cypress, an extremely popular testing library. We are going to use it specifically for an E2E test of our previous component, this time inside a browser with some extra features.

Again, we don’t need to see the code of the application. All we’re assuming is that we have some application and we want to test it as a user. We know what buttons to click and the IDs those buttons have. That’s all we really have to go off of.

describe('Greetings functionality', () => {  
  it('should navigate to greetings page and confirm it works', () => {
    cy.visit('http://localhost:3000')  
    cy.get('#greeting-nav-button').click()  
    cy.get('#greetings-input').type('Evgeny', { delay: 400 })  
    cy.get('#greetings-show-button').click()  
    cy.get('#greeting-text').should('include.text', 'Hello Evgeny!')  
  })  
})

This E2E test looks very similar to our previous integration test. The commands are extremely similar, the main difference being that these are executed in a real browser.

First, we use cy.visit to navigate to a specific URL where our application lives:

cy.visit('http://localhost:3000')

Second, we use cy.get to get the navigation button by its ID, then instruct the test to click it. That action will navigate to the page with the <Greeting /> component. In fact, I’ve added the component to my personal website and provided it with its own URL route.

cy.get('#greeting-nav-button').click()

Then, sequentially, we get text input, type “Evgeny,” click the #greetings-show-button button and, lastly, check that we got the desired greeting output.

cy.get('#greetings-input').type('Evgeny', { delay: 400 })
cy.get('#greetings-show-button').click()
cy.get('#greeting-text').should('include.text', 'Hello Evgeny!')  

It is pretty cool to watch how the test clicks buttons for you in a real live browser. I slowed down the test a bit so you can see what is going on. All of this usually happens very quickly.

Here is the terminal output:

Terminal showing a run test for greetings.spec.js that passed in 12 seconds.

Accessibility testing

Web accessibility means that websites, tools, and technologies are designed and developed so that people with disabilities can use them.

W3C

Accessibility tests make sure people with disabilities can effectively access and use a website. These tests validate that you follow the standards for building a website with accessibility in mind.

For example, many unsighted people use screen readers. Screen readers scan your website and attempt to present it to users with disabilities in a format (usually spoken) those users can understand. As a developer, you want to make a screen reader’s job easy, and accessibility testing will help you understand where to start.

There are a lot of different tools, some automated and some manual, for validating accessibility. For example, Chrome already has one tool built right into its DevTools. You may know it as Lighthouse.

Let’s use Lighthouse to validate the application we made in the E2E testing section. We open Lighthouse in Chrome DevTools, click the “Accessibility” test option, and “Generate” the report.

That’s literally all we have to do! Lighthouse does its thing, then generates a lovely report, complete with a score, a summary of audits that ran, and an outline of opportunities for improving the score.

But this is just one tool that measures accessibility from its particular lens. We have all kinds of accessibility tooling, and it’s worth having a plan for what to test and the tooling that’s available to hit those points.
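
For example, if you’d rather have accessibility checks run automatically alongside your other tests, there are libraries that wire an accessibility engine into a test runner. One option is cypress-axe, which plugs the axe-core engine into Cypress. This is my suggestion rather than something the demo above uses, and it’s only a minimal sketch:

describe("Greetings page accessibility", () => {
  it("has no detectable accessibility violations", () => {
    cy.visit("http://localhost:3000");
    cy.injectAxe();  // provided by cypress-axe once it's installed and registered
    cy.checkA11y();  // fails the test if axe reports violations on the page
  });
});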

Visual regression testing

  • Level: High
  • Scope: Tests the visual structure of the application, including the visual differences produced by a change in the code.
  • Possible tools: Cypress, Percy, Applitools

Sometimes E2E tests aren’t enough to verify that the latest changes to your application didn’t break its visual appearance somewhere in the interface. Have you ever pushed changes to production only to realize they broke the layout of some other part of the application? Well, you are not alone. More often than you’d think, changes to a codebase break an app’s visual structure, or layout.

The solution is visual regression testing. The way it works is pretty straightforward. Visual tests simply take a screenshot of pages or components and compare them with screenshots that were captured in previous successful tests. If these tests find any discrepancies between the screenshots, they’ll give us some sort of notification.

Let’s turn to a visual regression tool called Percy to see how visual regression testing works. There are a lot of other ways to do visual regression tests, but I think Percy is simple to show in action. In fact, you can jump over to Paul Ryan’s deep dive on Percy right here on CSS-Tricks. But we’ll do something considerably simpler to illustrate the concept.

I intentionally broke the layout of our Greeting application by moving the button to the bottom of the input. Let’s try to catch this error with Percy.

Percy works well with Cypress, so we can follow their installation guide and run Percy regression tests along with our existing E2E tests.

describe('Greetings functionality', () => {
  it('should navigate to greetings page and confirm everything is there', () => {
    cy.visit('http://localhost:3000')
    cy.get('#greeting-nav-button').click()
    cy.get('#greetings-input').type('Evgeny', { delay: 400 })
    cy.get('#greetings-show-button').click()
    cy.get('#greeting-text').should('include.text', 'Hello Evgeny!')

    // Percy test
    cy.percySnapshot()
  })
})

All we added at the end of our E2E test is a one-liner: cy.percySnapshot(). This will take a screenshot and send it to Percy to compare. That is it! After the tests have finished, we’ll receive a link to check our regressions. Here is what I got in the terminal:

Terminal output that shows white text on a black background. It displays the same result as before, but with a step showing that Percy created a build and where to view it.
Hey, look, we can see that the E2E tests have passed as well! That shows how E2E testing won’t always catch a visual error.

And here’s what we get from Percy:

Animated gif of a webpage showing a logo and navigation above a form field. The animation overlays the original snapshot with the latest to reveal differences between the two.
Something clearly changed and it needs to be fixed.

Performance testing

Performance testing is great for checking the speed of your application. If performance is crucial for your business — and it likely is given the recent focus on Core Web Vitals and SEO — you’ll definitely want to know if the changes to your codebase have a negative impact on the speed of the application.

We can bake this into the rest of our testing flow, or we can run them manually. It’s totally up to you how to run these tests and how frequently to run them. Some devs create what’s called a “performance budget” and run a test that calculates the size of the app — and a failed test will prevent a deployment from happening if the size exceeds a certain threshold. Or, test manually every so often with Lighthouse, as it also measures performance metrics. Or combine the two and build Lighthouse into the testing suite.
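
To make the performance budget idea a little more concrete, here’s a rough sketch of what an automated budget could look like with Lighthouse CI’s lighthouserc.js config file. This is my own example rather than part of the demo in this article, and the URL and thresholds are made up:

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      url: ["http://localhost:3000"], // the page(s) to audit
    },
    assert: {
      assertions: {
        // fail the run if the Lighthouse performance score drops below 90
        "categories:performance": ["error", { minScore: 0.9 }],
        // warn if first contentful paint takes longer than 2 seconds
        "first-contentful-paint": ["warn", { maxNumericValue: 2000 }],
      },
    },
  },
};

Hooked into CI, a failed assertion can block a deployment, which is exactly the “failed test prevents a deployment” flow described above.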

Performance tests can measure anything related to performance. They can measure how fast an application loads, the size of its initial bundle, and even the speed of a particular function. Performance testing is a somewhat broad, vast landscape.
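
As a toy example of that last point, timing a single function inside a test might look something like this, with a completely arbitrary 5ms budget:

it("greets quickly enough", () => {
  const start = performance.now();
  sayHello("Evgeny");
  const duration = performance.now() - start;

  // fail if the function exceeds our (completely arbitrary) time budget
  expect(duration).toBeLessThan(5);
});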

Here’s a quick test using Lighthouse. I think it’s a good one to show because of its focus on Core Web Vitals as well as how easily accessible it is in Chrome’s DevTools without any installation or configuration.

A Lighthouse report open in Chrome DevTools showing a Performance score of 55 indicated by an orange circle bearing the score. Various metrics are listed below the score, including a timeline of the page as it loads.
Not a great score, but at least we can see what’s up and we have some recommendations for how to make improvements.

Wrapping up

Here’s a breakdown of what we covered:

Type | Level | Scope | Tooling examples
Unit | Low | Tests the functions and methods of an application. | Jest, Mocha
Integration | Medium | Tests interactions between units. | Jest (with a component rendering helper)
End-to-end | High | Tests user interactions in a real-life browser by providing it instructions for what to do and expected outcomes. | Cypress, Puppeteer
Accessibility | High | Tests the interface of your application against accessibility standards criteria. | Lighthouse
Visual regression | High | Tests the visual structure of the application, including the visual differences produced by a change in the code. | Cypress, Percy, Applitools
Performance | High | Tests the application for performance and stability. | Lighthouse

So, is testing for everyone? Yes, it is! Given all the available libraries, services, and tools we have to test different aspects of an application at different points, there’s at least something out there that allows us to measure and test code against standards and expectations — and some of them don’t even require code or configuration!

In my experience, many developers neglect testing and think that a simple click-through or post-release check will catch any possible bugs introduced by a change in the code. If you want to make sure your application works as expected, is inclusive to as many people as possible, runs efficiently, and is well-designed, then testing needs to be a core part of your workflow, whether it’s automated or manual.

Now that you know what types of tests there are and how they work, how are you going to implement testing into your work?




9 Test Automation Predictions for 2021: Automated Visual Testing

Every year, pundits and critics offer their predictions for the year ahead. Here are my personal predictions for test automation in 2021.

Prediction 1: Stand-Alone QA Faces Challenges of Dev Teams With Integrated Quality Engineering

Teams running Continuous Integration/Continuous Deployment (CI/CD) have learned that developers must own the quality of their code. In 2021, everyone else will figure that out, too. Engineers know that the delay between developing code and finding bugs produces inefficient development teams. Companies running standalone QA teams find bugs later than teams with integrated quality. In 2021, this difference will start to become painful as more companies adopt quality engineering in the midst of development.

The Future of Testing and the Big Bang of Software – Automated Visual Testing

In an era of digital transformation, software is no longer just a tool; it is at the heart of every business. Why is it at the heart of a business? Because we are constantly interacting with our users and customers through a booming number of software applications: web, mobile, and native.

Applitools recently sponsored an independent survey with 400 engineering and quality assurance leaders from a variety of Fortune 500 companies. Among other things, they were asked how many different applications they used in their organization and how many pages each application included. Let's look at the findings of the survey.

Visual Testing With Appium, Applitools, and Amazon Device Farm

Visual UI testing is more than just testing your app on desktop browsers and mobile emulators. In fact, visual UI testing also lets you run your tests on physical mobile devices.

Visual UI testing compares the visually-rendered output of an application against itself in older iterations. Users call this type of test version checking. Some users apply visual testing for cross-browser tests, running the same software version across different target devices, operating systems, browsers, and viewports. For either purpose, we need a testing solution that is accurate, fast, and works with a range of browsers and devices. For these reasons, we chose Applitools.

Using BugHerd to Track Visual Feedback on Websites

BugHerd is about collecting visual feedback for websites.

If you’re like me, you’re constantly looking at your own websites and you’re constantly critiquing them. I think that’s healthy. Nothing gets better if you look at your own work and consider it perfectly finished. This is where BugHerd shines. With BugHerd, anytime you have one of those little “uh oh this area is a little rough” moments while looking at your site, you can log it to be dealt with.

Let’s take a look at a workflow like that. I’m going to assume you’ve signed up for a BugHerd account (if not, grab a free trial here) and either installed the script on your site or have installed the browser extension and are using that.

I’ve done that for this very site. So now I’m looking at a page like our Archives Page, and I spot some stuff that is a little off.

I’ve taken a screenshot and circled the things that I think are visually off:

  1. The “Top Tags” and dropdown arrow are pretty far separated with nothing much connecting them. Maybe dropdowns like that should have a background or border to make that more clear.
  2. There is a weird shadow in the middle of the bottom line.

With BugHerd, I can act upon that stuff immediately. Rather than some janky workflow involving manual screenshots and opening tickets on some other unrelated website, I can do it right from the site itself.

  1. I open the BugHerd sidebar
  2. I click the green + button
  3. Select the element around where I want to give the visual feedback
  4. Enter the details of the bug

Their help video does a great job of showing this.

Here’s me logging one of those bugs I found:

Now, the BugHerd website becomes my dashboard for dealing with visual bugs. This unlocks a continual cycle of polish, and that’s how great websites get great!

Note the kanban board setup, which is always my preferred way to work on any project. Cards are things that need to be worked on and there are columns for cards that aren’t started, started, and finished. Perhaps your team works another way though? Maybe you have a few more columns you generally kanban with, or you name them in a specific way? That’s totally customizable in BugHerd.

I love that BugHerd itself is customizable, but at a higher level, the entire workflow is customizable and that’s even better.

  • I can set up BugHerd just for myself and use it for visual improvement work on my own projects
  • I can set up BugHerd for just the design team and we can use it among ourselves to track visual issues and get them fixed.
  • I can set up BugHerd for the entire company, so everyone feels empowered to call out visual rough spots.
  • I can set up BugHerd for clients, if I’m a freelancer or agency worker, so that the clients themselves can use it to report visual feedback.
  • I can open up BugHerd wide open so that guests of these websites can use it to report visual problems.

Check out this example of a design team with core members and guests and their preferred workflow setup:

It’s hard to imagine a better dedicated tool than BugHerd for visual feedback.


Getting Started with Front End Testing

Amy Kapernick covers four types of testing that front-end devs could and should be doing:

  1. Linting (There's ESLint for JavaScript and Stylelint or Prettier for CSS.)
  2. Accessibility Testing (Amy recommends pa11y, and we've covered Axe.)
  3. Visual Regression Testing (Amy recommends Backstop, and we've covered Percy.)
  4. End to End Testing (There's Cypress and stuff like jest-puppeteer.)

Amy published something similar over on 24 ways, listing out 12 different testing tools.

As long as we’re being comprehensive, we might consider performance testing to be part of all this, a la SpeedCurve or Calibre, to mention a couple of web services.

I've liked what Harry Roberts has said lately about performance budgets. They don't need to be fancy; they just need to prevent you from bad screwups.

[...] most organisations aren’t ready for challenges, they’re in need of safety nets. Performance budgets should not be things to work toward, they should be things that stop us slipping past a certain point. They shouldn’t be aspirational, they should be preventative.



Complex Functional Testing, Simplified

A different view on functional testing.

How does functional testing with visual assertions help simplify test development for complex real-world apps? Like, say, a retail app with inventory, product details, rotating displays, and shopping carts?

My special blog series discusses Modern Functional Testing with Visual AI, Raja Rao’s course on Test Automation University. I arrived at Chapter 6 – E-Commerce Real World Example. In this review, I hope to give you an overview of Raja’s examples and how they might apply to your test challenges.

A/B Testing: Validating Multiple Variations

Can you spot the difference? Is there one?

When you have multiple variations of your app, how do you automate the process to validate each variation?

A/B testing is a technique used to compare multiple experimental variations of the same application to determine which one is more effective with users. You typically run A/B tests to get statistically valid measures of effectiveness. But, do you know why one version is better than the other? It could be that one contains a defect.

Data-Driven Testing With Visual AI


Let's be honest: if you're using legacy test approaches, you spend a ton of time maintaining your data-driven tests. And that time slows you down when you're trying to keep up with a dev team that thinks, "We're coding to standards — it should all run everywhere."

Think about the most difficult parts of coding and maintaining your test infrastructure. The simplest part involves writing the initial tests. You use what you see and your understanding of expected behavior to drive the tests. Test maintenance costs can drive you crazy.

Tools and Frameworks for Faster Front End Testing

Tools and frameworks, just for you!


In every web app, the front end is the face of the application that is visible to users. It includes the graphical user interface, functionality, and usability of the site. If the front end is not working properly, you will not be able to win over potential users for your website. That’s why performing front-end testing for your web app is crucial.

Visual Testing

In this Refcard, we cover everything you need to know about visual testing, from best practices and benefits to the tools and coding required to perform visual testing.

How Visual Testing Is Transforming the Way Modern Teams Test Software

Visual testing is the automated process of detecting and reviewing visual UI changes. Sometimes called visual regression testing or UI testing, it’s all about what your users actually see and interact with.

The visuals and UI of applications are critical parts of how our users use software, but often teams are still relying on slow, error-prone manual testing processes. The confidence gained by these manual processes is often minimal at best and lacks comprehensive coverage of applications.

What Is Visual Testing? A Definitive Answer [and Approach]

What Is Visual Testing?

Visual testing is how you ensure that your app appears to the user as you intended.

In today’s world of HTML, web developers create pages that appear on a mix of browsers and operating systems. Because HTML and CSS are standards, front-end developers want to feel comfortable with a “write once, run anywhere” approach to their software, which also translates to “Let QA sort out the implementation issues.” QA is still stuck checking each possible output combination for visual bugs.