Squeezing Neurons into Narrow Spaces: AI in QA

Today, AI based on neural networks is at a very interesting stage of its development. It has clearly taken off: we see numerous applications, from reading CT scans to picking fruit. But adoption rates vary a lot. Recommendation engines, customer support bots, and other applications of what’s been called “internet AI” are fairly widespread; we see them everywhere in our lives. However, most areas, including software testing, aren’t quite there yet.

What makes AI widely adopted, and what’s tripping it up? Let’s take a look at our own field, software testing. Specifically, I want to see how well neural networks can generate automated tests, and what barriers stand in the way of mass adoption of AI in our field. I’ll be looking first and foremost at the problems and difficulties, not because I’m trying to be glass-half-empty, but because things that go wrong always tell us more about how a system works than things that go right.
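To make the question concrete, here is a minimal sketch of what test generation with a neural network can look like in practice, assuming an OpenAI-style chat API; the model name, the prompt, and the function under test are invented for illustration, not a recommendation.

# A hedged sketch: asking a large language model to draft a unit test.
# Assumes the openai Python package (>= 1.0) and an OPENAI_API_KEY in the
# environment; the prompt and the function under test are invented examples.
from openai import OpenAI

client = OpenAI()

source_under_test = """
def slugify(title: str) -> str:
    return title.lower().strip().replace(" ", "-")
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "system",
         "content": "You write concise pytest unit tests for Python code."},
        {"role": "user",
         "content": f"Write pytest tests for this function:\n{source_under_test}"},
    ],
)

# The generated test still has to be reviewed and executed by a human or CI.
print(response.choices[0].message.content)

Getting a model to emit something that looks like a test is the easy part; the barriers start once you ask whether those tests are correct, maintainable, and trustworthy, which is exactly what we’ll be examining.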

Metrics Part 2: The DORA Keys

The DORA metrics are pretty much an iceberg, with the five indicators sticking out above the surface and plenty of research hidden beneath the waves. Given the amount of work that has gone into the program, the whole thing can seem fairly opaque when you first start working with the metrics. Let’s try to peek under the surface and see what’s going on down there.

After our last post about metrics, we thought it might be interesting to look at how metrics are used at different organizational levels. Starting from the top, DORA is one of the more popular programs today. Here, we’ll share some ideas we’ve had on how to put the DORA metrics to use, but first, we’d like to walk through some questions we’ve been asking ourselves about the research and its methodology.
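As a concrete anchor for that discussion, here is a minimal sketch of how two of the key indicators, deployment frequency and change failure rate, can be derived from a team’s deployment log; the record format is invented for illustration, and real data would come from your CI/CD system.

# A hedged sketch: deriving two DORA indicators from a deployment log.
# The Deployment record format is an invented example.
from dataclasses import dataclass
from datetime import date

@dataclass
class Deployment:
    day: date
    caused_failure: bool  # did this change degrade service in production?

def deployment_frequency(deployments: list[Deployment], period_days: int) -> float:
    """Average number of deployments per day over the period."""
    return len(deployments) / period_days

def change_failure_rate(deployments: list[Deployment]) -> float:
    """Share of deployments that led to a failure in production."""
    failed = sum(1 for d in deployments if d.caused_failure)
    return failed / len(deployments)

log = [
    Deployment(date(2023, 5, 1), caused_failure=False),
    Deployment(date(2023, 5, 2), caused_failure=True),
    Deployment(date(2023, 5, 4), caused_failure=False),
]
print(deployment_frequency(log, period_days=7))  # ~0.43 deployments/day
print(change_failure_rate(log))                  # ~0.33

The arithmetic is trivial; our questions are about what such numbers actually mean and how they were gathered, which is where the research and its methodology come in.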

Building QA Metrics That Actually Work: Part 1

Metrics are strange things: it might seem like they don’t cost anything, but they do. As the team behind Allure Report and Allure TestOps, we can confirm that QA metrics are no exception. Gathering them requires work and infrastructure. They have to be maintained. And if a metric carries an actual, measurable cost, its existence can only be justified when it provides an actual, measurable benefit: the metric should allow us to make decisions that result in a net benefit for the team and the company.

Say we’ve counted the number of bugs that have emerged in production. What next? Sure, having zero bugs would be awesome, but what does that number tell us about how we should distribute our effort? In and of itself, nothing, which means the cost of gathering that metric is not offset by any benefit.
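To make the point concrete, here is a minimal sketch of the difference between a bare count and a view of the same data that points at a decision; the bug records and categories are invented for illustration.

# A hedged sketch: a bare count versus a breakdown that suggests a decision.
# The bug records and the "escaped_from" categories are invented examples.
from collections import Counter

production_bugs = [
    {"id": 101, "escaped_from": "integration tests"},
    {"id": 102, "escaped_from": "code review"},
    {"id": 103, "escaped_from": "integration tests"},
    {"id": 104, "escaped_from": "integration tests"},
]

# The bare count: accurate, cheap to read, and useless on its own.
print(len(production_bugs))  # 4 -- so what?

# The same data, grouped by the stage that should have caught each bug,
# at least hints at where to invest effort.
print(Counter(bug["escaped_from"] for bug in production_bugs))
# Counter({'integration tests': 3, 'code review': 1})

The count alone answers no question; the breakdown at least suggests where to look first. That gap between a number and a decision is what separates a metric that pays for itself from one that doesn’t.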