Will ChatGPT and Generative AI “Replace” Testing?

There is a lot of buzz within the software testing and development communities about ChatGPT and the role of generative AI in testing.

Some of the opinion pieces, webinars, and videos focus on the potentially beneficial applications of generative AI for testing speed and quality. However, many focus instead on how ChatGPT might affect testers’ job prospects and job security. Some go so far as to ask whether ChatGPT will replace manual testers and SDETs altogether.
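One concrete example of the beneficial side is using a generative model to draft test ideas that a tester then reviews. The sketch below is illustrative only: the openai Python client, the model name, and the prompt are all assumptions, and any LLM interface would serve the same purpose.

```python
# A hedged sketch of one beneficial application: asking a generative
# model to draft test cases for human review. The openai client and
# model name are assumptions; requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Suggest boundary-value test cases for a password field that "
    "must be 8-64 characters and contain at least one digit. "
    "Return one test case per line."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute any available model
    messages=[{"role": "user", "content": prompt}],
)

# The model drafts; a tester still decides what is worth automating.
print(response.choices[0].message.content)
```

The tester’s judgment in deciding which of those suggestions are worth automating is precisely what the “replacement” debate tends to underplay.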

The Democratization of (Test) Data

A glance at industry research from recent years shows that test data remains one of the major bottlenecks in DevOps and CI/CD:

  1. The most recent Continuous Testing Report found that the average test team spends a massive 44% of their time finding, waiting for, or making test data.[i]
  2. The 2021-22 World Quality Report found that incomplete test data continues to undermine software quality, as organizations still lack sufficient data for all of their testing.[ii]
  3. Test data practices further pose costly compliance risks, as testers at 45% of organizations admit that they do not always follow security and privacy regulations for test data.[iii]

We have written extensively elsewhere on techniques for making complete and compliant data available on the fly to testers, developers, automation frameworks, and CI/CD pipelines. This article will highlight a principle underpinning many of these techniques, which is often missing from test data strategies today. Let’s call this missing ingredient of test data success “the democratization of data.”
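To make that principle concrete, below is a minimal sketch of generating complete, compliant data on the fly. It uses the open-source Faker library purely as an assumption (the article does not prescribe a tool); every record is synthetic, so nothing sensitive ever leaves production.

```python
# A minimal sketch of "on the fly" synthetic test data, using the
# open-source Faker library (an assumed choice, not a prescription).
# Every field is generated, so no real customer data is exposed.
from faker import Faker

fake = Faker()

def make_customer() -> dict:
    """Generate one synthetic, compliant customer record."""
    return {
        "name": fake.name(),
        "email": fake.email(),
        "iban": fake.iban(),
        "created": fake.date_this_decade().isoformat(),
    }

if __name__ == "__main__":
    # Provision ten fresh records for a test run; no production copy needed.
    for _ in range(10):
        print(make_customer())
```

Democratization here means that any tester, developer, or pipeline can call for fresh records on demand, rather than queuing behind a central provisioning team.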

If Testing Were a Race, Data Would Win Every Time

Okay, so that title doesn’t make complete sense. However, if you read to the end of this article, all will become clear. I’ll first discuss some of the persistent barriers to in-sprint testing and development, before setting out a viable route to delivering rigorously tested systems in short sprints.

The two kingpins in this approach will be data and automation, working in tandem to convert insights about what needs testing into rigorous automated tests. But first, let’s consider why it remains so challenging to design, develop, and test in-sprint.

Moving From GDPR Compliance to Full-Blown “Test Data Automation”

I’ve written fairly frequently on the impact of the GDPR on testing, often responding to the news and research that continues to flow in. The infographic below summarizes my thinking. It draws on news and research from 2019-2020 to show the need to address test data privacy issues today.

The stats tell a pretty consistent story: the risk of a data breach continues to rise, as do the associated fines and brand damage. Meanwhile, the most effective way to mitigate this risk in testing is to limit the amount of sensitive information that reaches test environments.
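One common way to keep sensitive values out of test environments is deterministic masking. The sketch below is illustrative rather than a prescribed approach, and the salt handling is an assumption: hashing with a secret salt means the same input always masks to the same output, so joins across tables still line up.

```python
# Illustrative deterministic masking: one way (of several) to stop
# real personal data reaching test environments. The salt below is a
# placeholder; in practice it would live in a managed secret store.
import hashlib

SALT = b"rotate-me-and-keep-me-out-of-version-control"

def mask_email(email: str) -> str:
    """Replace a real email with a stable, non-reversible stand-in."""
    digest = hashlib.sha256(SALT + email.lower().encode()).hexdigest()[:12]
    return f"user_{digest}@example.com"

# Deterministic: both calls print the same masked address, preserving
# referential integrity between, say, a customers and an orders table.
print(mask_email("jane.doe@example.org"))
print(mask_email("jane.doe@example.org"))
```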

5 Reasons to Model During QA, Part 5: A “Single Pane of Glass” For Technologies and Teams

Modeling lets helpful information flow both ways.

Welcome to the final installment of 5 Reasons to Model During QA! If you have missed any of the previous four articles, jump back in to find out how modeling can:

  1. Identify bugs during the requirements analysis and design phase, where they require far less time and cost to fix;
  2. Drive up testing efficiency, automating the creation of test cases, test data, and automated test scripts (a simplified sketch follows this list);
  3. Maximize test coverage and shorten test cycles, focusing QA on the most critical, high-risk functionality;
  4. Introduce QA resilience and flexibility to change, automatically updating a rigorous test suite as requirements evolve.
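To give point 2 some concrete shape, here is a deliberately simplified sketch of model-based test generation: the system is modeled as a directed graph of steps, and every complete path through the graph becomes a test case. The login flow is a hypothetical example, not output from any particular modeling tool.

```python
# A toy model-based test generation sketch: model the system as a
# directed graph, then derive one test case per start-to-end path.
# The login flow and step names are hypothetical.
MODEL = {
    "start": ["enter_credentials"],
    "enter_credentials": ["valid_login", "invalid_login"],
    "valid_login": ["dashboard"],
    "invalid_login": ["error_message"],
    "dashboard": [],
    "error_message": [],
}

def all_paths(node="start", path=None):
    """Depth-first walk yielding every complete path through the model."""
    path = (path or []) + [node]
    if not MODEL[node]:  # a leaf node completes one test case
        yield path
    for nxt in MODEL[node]:
        yield from all_paths(nxt, path)

for i, test_case in enumerate(all_paths(), 1):
    print(f"Test {i}: " + " -> ".join(test_case))
```

When the model changes, say a “locked account” branch is added, rerunning the walk regenerates the suite, which is the resilience described in point 4.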

This last article in the series shifts focus to consider modeling within the broader context of the Software Delivery Lifecycle. It goes beyond QA, considering how models deliver value to the BAs, developers, and testers who can work collaboratively from them.

Test Data Management Lagging Behind Automation in Testing: 10 Reasons Why That’s a Problem

Test Data Management: Still a Problem Worth Solving

The latest QA industry research suggests that Test Data Management (TDM) has remained static at many organizations, in spite of all the advances in test automation and the move towards DevOps.

In fact, the tools and techniques used to provision test data remain largely the same as when the Curiosity Software team first began in QA, some 30 years ago. 65% of organizations still use production data in testing. 36% mask it and 30% subset data before provisioning it to test environments. Meanwhile, just 18% synthesize data using automated techniques.[i]
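For readers less familiar with the terminology, subsetting means copying only a consistent slice of production rather than the whole database. The pandas-based sketch below is purely illustrative; the library choice, file names, and 5% sampling rate are all assumptions, and a subset of production data would typically still need masking before use.

```python
# An illustrative subsetting sketch: copy a 5% sample of customers and
# only their orders, keeping parent and child rows consistent. The
# file names and sampling rate are placeholders.
import pandas as pd

customers = pd.read_csv("customers.csv")  # hypothetical production extract
orders = pd.read_csv("orders.csv")

# Sample parents first, then keep only the matching child rows.
sample = customers.sample(frac=0.05, random_state=42)
order_subset = orders[orders["customer_id"].isin(sample["customer_id"])]

sample.to_csv("test_customers.csv", index=False)
order_subset.to_csv("test_orders.csv", index=False)
```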

5 Reasons to Model During QA, Part 1/5: “Shift Left” QA Uproots Design Defects

Model-Based Testing (MBT) itself is not new, but Model-Based Test Automation is experiencing a resurgence in adoption. Model-Based Testing is the automation technique with the greatest current business interest according to the 2018 World Quality Report, with 61% of respondents stating that they can foresee their organization adopting it in the coming year.[1]

Technologies like The VIP Test Modeler have significantly reduced the time and technical knowledge needed to model complex systems. Organizations can now enjoy all the benefits of Model-Based techniques within short iterations, whereas previously modeling had been reserved for only the most high-stakes projects, such as lengthy Waterfall projects in aerospace and defense.

GDPR and Testing: Are You a Skeptic or a Gambler?

I started writing and speaking about the significance of the EU General Data Protection Regulation (GDPR) for testing some five years ago. My alarm at the implications of the tightening legislation was frequently met with two forms of response:

1. The skeptic: “Big organizations will simply group together and resist this in the courts. Nothing will change in practice and there’s no way that national data protection agencies will be able to demand so much change so quickly, let alone levy fines this big.”