New Ways for CNAPP to Shift Left and Shield Right: The Technology Trends That Will Allow CNAPP to Address More Extensive Threat Models

Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Cloud Native: Championing Cloud Development Across the SDLC.


The cloud-native application protection platform (CNAPP) model is designed to secure applications that leverage cloud-native technologies. Applications outside its scope are typically legacy systems that were not designed to operate within modern cloud infrastructures. In practice, then, CNAPP covers the security of containerized applications, serverless functions, and microservices architectures, potentially running across different cloud environments.

The Impact of AI and Platform Engineering on Cloud Native’s Evolution: Automate Your Cloud Journey to Light Speed

Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Cloud Native: Championing Cloud Development Across the SDLC.


2024 and the dawn of cloud-native AI technologies marked a significant jump in computational capabilities. We are entering an era in which artificial intelligence (AI) and platform engineering converge to transform the cloud computing landscape: AI now transcends traditional boundaries, offering scalable, efficient, and powerful solutions that learn and improve over time, while platform engineering provides the backbone these AI systems need to operate seamlessly within cloud environments.

Observations on Cloud-Native Observability: A Journey From the Foundations of Observability to Surviving Its Challenges at Scale

Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Cloud Native: Championing Cloud Development Across the SDLC.


Cloud native and observability are integral parts of developers' lives. Understanding their responsibilities within observability at scale helps developers tackle the challenges they face daily. There is more to observability than just collecting and storing data, and developers are essential to surviving these challenges.

Software Supply Chain Security

Securing software supply chains has become a first-class consideration, along with coding and CI/CD pipelines, when developing a software product. Far too many vulnerabilities have been surreptitiously introduced into software products, resulting in catastrophic breaches, for us as diligent developers to treat supply chain security as an afterthought. The core practices and principles outlined in this Refcard provide a foundation for creating secure supply chains that produce deliverables and products that others can trust.

Product Security (DevSecOps Practices)

What Is Product Security?

Product security is a process within the cybersecurity function that aims to deliver a secure product, covering the organization's web applications, web services, mobile applications, and any hardware it manufactures. It focuses on considering security at every stage, from design through development and implementation, i.e., a secure SDLC process.

Product security involves multiple activities, including threat modeling, static application security testing (SAST), dynamic application security testing (DAST), penetration testing, secure coding practices, incident response, and continuous monitoring. Its primary goal is to protect the CIA triad: confidentiality, integrity, and availability.

Using Automated Test Results To Improve Accessibility

A cursory Google search will return a treasure trove of blog posts and articles espousing the value of adding accessibility checks to the test automation pipeline. These articles are rife with tutorials and code snippets demonstrating just how simple it can be to grab one’s favorite open-source accessibility testing library, jam it into a Cypress project, and presto change-o, shifting left, and accessibility has been achieved… right?

Unfortunately, no, because actioning results in a consistent, repeatable process is the actual goal of shift-left, not just injecting more testing. Unlike the aforementioned treasure trove of blog posts about how to add accessibility checks to testing automation, there is a noticeable dearth of content focused on how to leverage the results from those accessibility checks to drive change and improve accessibility.

With that in mind, the following article aims to fill that gap by walking through a variety of ways to answer the question of “what’s next?” after the testing integration has been completed.

Status Quo

The confluence of maximum scalability and accessibility as requirements has brought most modern-day digital teams to the conclusion that the path to sustainable accessibility improvements requires a shift left with accessibility. Not surprisingly, the general agreement on the merits of shifting left has led to a tidal wave of content focused on how important it is to include accessibility checks in DevOps processes, like frontend testing automation, as a means to address accessibility earlier on in the product life cycle.

Unfortunately, there has yet to be a similar tidal wave of content addressing the important next steps of how to effectively use test results to fix problems and how to create processes and policies to reduce repeat issues and regression. This gap in enablement creates the problem that exists today:

The dramatic increase in the amount of accessibility testing performed in automation is not correlating to a proportional increase in the accessibility of the digital world.

Problem

The problem with the status quo is that without guidance on what to do with the results, increased testing does not correlate with increased accessibility (or a decrease in accessibility bugs).

Solutions

In order to properly tackle this problem, development teams need to be enabled and empowered to make the most of the output from automated accessibility testing. Only then can they effectively use the results to translate the increase in accessibility testing in their development lifecycle to a proportional decrease in accessibility issues that exist in the application.

How can we achieve this? With a combination of strategically positioned, mindfully structured quality gates within the CI/CD pipeline and freely available tools and technologies for efficiently remediating bugs when they are uncovered, your development team can be well on its way to effectively using automated accessibility results. Let’s dive into each of these ideas!

Quality Gates

Creating a quality gate is an easy and effective way to automate an action on your project when code is committed. Most development teams already create gates that check that there are no linting errors, that all test cases have passed, or that the project builds without errors. Automated accessibility results can fit right into this same model with ease!

Where Do The Gates Exist?

For the most part, the two primary locations for quality gates within the software development lifecycle (SDLC) are during pull requests (PRs) and build jobs (in CI).

With pull requests, one of the most commonly used tools is GitHub Actions, which allows development teams to automate a set of tasks that should be completed or checked when code is committed or deployed. In CI jobs, the tool’s built-in functionality (Azure DevOps, Jenkins) is used to create a script that checks whether test cases or scenarios have passed. So, where does it make sense for your team to have one?

It all depends on the level at which development teams want to put a gate in place for accessibility testing results. If the team is doing more linting and component-level testing, the accessibility gate makes the most sense at the pull request level. If the automated tests run at an integration level, meaning against a fully built site ready for deployment, then the gate can be placed in a CI job, as in the sketch below.
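
For the CI-job case, the gate can be as small as a script that parses the scanner’s output and sets the process exit code, which Jenkins, Azure DevOps, or GitHub Actions will treat as pass or fail. Here is a minimal sketch in TypeScript; the axe-results.json file name and the choice of blocking severities are assumptions for illustration, not a standard:

```typescript
// ci-a11y-gate.ts: a minimal "hard" CI gate over axe-core JSON output.
// Assumes a prior test step wrote its results to axe-results.json.
import { readFileSync } from "fs";

interface AxeViolation {
  id: string;
  impact: "minor" | "moderate" | "serious" | "critical";
  help: string;
  nodes: { html: string; target: string[] }[];
}

const results = JSON.parse(readFileSync("axe-results.json", "utf8"));
const violations: AxeViolation[] = results.violations ?? [];

// Block the build only on the severities the team has agreed to gate on.
const blocking = violations.filter((v) =>
  ["critical", "serious"].includes(v.impact)
);

for (const v of blocking) {
  console.error(
    `${v.impact.toUpperCase()}: ${v.id} (${v.nodes.length} nodes) - ${v.help}`
  );
}

// A non-zero exit code fails the CI job, closing the gate.
process.exit(blocking.length > 0 ? 1 : 0);
```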

Types Of Gates

There are two different ways that quality gates can operate: a soft check and a hard assertion.

A soft check is relatively simple by definition: it looks at whether or not the accessibility tests were executed. That is it! If the accessibility checks were run, the gate passes. In contrast, assertions are more specific and stringent about what is allowed to pass. For example, if my accessibility test case runs and finds even ONE issue, the assertion fails, and the gate will report that it has not passed.

So which one is most effective for your team? If you are looking to get more teams to buy into accessibility testing as a whole, a best practice is not to throw a hard assertion at them right away. Teams initially struggle with added tasks or requirements, and accessibility is no different. Starting with a soft gate allows teams to see what the requirement is going to be and what will be expected of them.

Once a certain amount of time has passed, that soft gate can switch to a hard assertion that will not allow a single automated issue out the door. However, if your team is mature and has been using accessibility automation for a while, a hard assertion may be used from the start, as the team already has experience with it. Both styles are sketched below.
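
As a concrete illustration, here is what the two gate styles might look like with cypress-axe, one widely used library for this purpose. The page URL is a placeholder, and the soft check assumes a "log" task has been registered in the Cypress config:

```typescript
// cypress/e2e/a11y-gate.cy.ts: soft check vs. hard assertion.
import "cypress-axe";

describe("accessibility quality gate", () => {
  beforeEach(() => {
    cy.visit("/"); // illustrative entry page
    cy.injectAxe(); // make axe-core available on the page
  });

  it("soft check: scan and report, but never fail", () => {
    // Passing skipFailures=true logs violations without failing the test.
    cy.checkA11y(
      undefined,
      undefined,
      (violations) => {
        cy.task("log", `${violations.length} accessibility violation(s) found`);
      },
      true
    );
  });

  it("hard assertion: even one violation fails the gate", () => {
    cy.checkA11y();
  });
});
```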

Creating Effective Gates

Whether you are using a soft or hard gate, you have to create requirements that govern what the quality gate does with the automated accessibility results. Simply stating, “The accessibility test case failed,” is not the most effective use of those results. Creating gates that are data-driven, meaning they are based on specific data from the results, helps make a more effective gate that matches your development team’s or organization’s accessibility goals.

Here are three methods of applying assertions to govern accessibility quality, each of which is sketched in code after this list:

  • Issue severity
    Pass or fail based on the existence or count of issues of a specific severity (critical, serious, and so on).
  • Most common issues
    Pass or fail based on the existence or count of specific issue types that are known to be the most common (either globally or organization-specific).
  • Critical or targeted UI/UX
    Do these bugs exist in high-traffic areas of the application, or do they directly impede a user along a critical path through the UX?
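
Continuing the cypress-axe sketch from above, each method maps onto a small change to the gate; the selector and rule IDs here are illustrative assumptions, not recommendations:

```typescript
// 1. Issue severity: fail only on critical or serious violations.
cy.checkA11y(undefined, { includedImpacts: ["critical", "serious"] });

// 2. Most common issues: gate only on the rules your org fails most often.
cy.checkA11y(undefined, {
  runOnly: { type: "rule", values: ["image-alt", "label", "color-contrast"] },
});

// 3. Critical or targeted UI/UX: scan only a high-traffic checkout flow.
cy.checkA11y("#checkout-form");
```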

Fixing Bugs

The creation and implementation of quality gates is an essential first step, but unfortunately, it is only half the battle. Ultimately, a development organization needs to be able to fix the bugs found at the various quality gate inspection points. Otherwise, the application’s quality will never improve, and nothing will clear the gates that were just put in place. What a terrifying thought that is.

In order to translate the adoption of the quality gates into improved accessibility, it is vital to be able to make effective use of the accessibility test results and leverage tools and technologies whenever possible to help drive remediation, which eliminates accessibility blockers and ultimately creates more inclusive experiences for users.

Accessibility Test Results

There is a common adage that “there is no such thing as bug-free software,” and given that accessibility conformance issues are bugs, this axiom applies to accessibility as well. As such, it is absolutely necessary to be able to clearly prioritize and triage accessibility test results in order to apply limited resources to seemingly unlimited bugs to fix them in as efficient and effective a way as possible.

It is helpful to have a few prioritization metrics to assist in the filtration and triage work when working with test results. Typically, context is an effective top-level filter, which is to say, attacking bugs and blockers that exist in high-traffic pages or screens or critical user flows is a useful way to drive maximal impact on the user experience and the application at large.

Another common filter, and one that is often secondary to the “context” filter mentioned above, is to prioritize bugs by their severity, which is to say, the impact on the user caused by the bug’s existence. Most free or open-source automated accessibility tools and libraries apply some form of issue severity or criticality label to their test results to help with this kind of prioritization.

Lastly, as a tertiary filter, some development teams are able to organize these bugs or tasks by thinking about the level of effort to implement a fix. This last filter isn’t something that will commonly be found in the test results themselves. Still, developers or product managers may be able to infer a level of effort estimation based on their own internal understanding of the application infrastructure and underlying source code.

Thankfully, accessibility test results share a level of consistency regardless of which library generates them: they generally provide details about which specific checks failed; where the failures occurred, in terms of page URL and sometimes CSS or XPath selectors as well as the specific component HTML; and, finally, actionable recommendations on how to fix the components that failed. That way, a developer always has a result that clearly states what’s wrong, where it’s wrong, and how to fix what’s wrong.

In the above ways, developers can clearly stack-rank and prioritize the tasks that result from automated accessibility tests. The test results themselves are typically designed to be clear and actionable so that each task can be remediated in a timely fashion. Again, the focus here is to be able to effectively deliver maximal impact with limited resources; a minimal sketch of this triage follows.
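
As one concrete way to express that triage, the sketch below sorts axe-core-style results by severity first and by the number of affected nodes as a tie-breaker; the field names follow the axe-core results format, while the ordering itself is just one reasonable policy:

```typescript
// triage.ts: severity-first ordering of axe-core-style violations.
interface Violation {
  id: string;
  impact: "minor" | "moderate" | "serious" | "critical";
  helpUrl: string;
  nodes: { target: string[]; failureSummary: string }[];
}

// Lower rank = fix first.
const severityRank: Record<Violation["impact"], number> = {
  critical: 0,
  serious: 1,
  moderate: 2,
  minor: 3,
};

export function triage(violations: Violation[]): Violation[] {
  // Sort by severity, then by how many elements each violation touches.
  return [...violations].sort(
    (a, b) =>
      severityRank[a.impact] - severityRank[b.impact] ||
      b.nodes.length - a.nodes.length
  );
}
```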

Helpful Tools

The above strategies are well and good in terms of having a clear direction for attacking known bugs within a project. Still, it can be daunting to figure out whether a remediation actually worked, or, further, to chart a path forward that prevents similar issues from recurring. This is where a number of free community tools can come into play, supporting and empowering development organizations to expedite remediation and validate fixes, which ultimately improves downstream accessibility while maintaining development velocity.

One such family of free tools is the accessibility browser extension. These are free tools that can help teams locate, fix, and validate the remediation of accessibility bugs. It is likely that whatever accessibility library is being used in the CI/CD pipeline has an accompanying (and free) browser extension that can be used in local development environments. A couple of examples of browser extensions include axe DevTools (Deque) and WAVE (WebAIM).

Browser extensions allow a developer to quickly and easily scan a page in the browser and identify issues on it, or, as in the case described above, confirm that an issue detected during test automation, and since remediated, no longer exists (validation!). Browser extensions are also a fantastic tool during active development for finding and fixing bugs before code gets committed. Often, they are used as a quality check during the pull request approval process, which can help prevent bugs from making their way downstream.

Another group of free tools that can help developers fix accessibility bugs is linters, which integrate with the developer’s integrated development environment (IDE) and automatically identify, and sometimes automatically remediate, accessibility bugs detected within the actual source code before it compiles and renders into HTML in a browser.

Linters are fantastic because they function similarly to a spell checker in a document editor like Microsoft Word: largely automated, requiring little to no effort from the developer. The downside is that linters typically have a limited number of reliable accessibility checks that can be executed at the point of source code editing. Well-known accessibility linters include eslint-plugin-jsx-a11y for React/JSX projects and Deque’s axe Accessibility Linter.
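
For instance, enabling the ESLint plugin in a React project takes only a few lines; the rule promoted to an error below is an arbitrary example, assuming eslint and eslint-plugin-jsx-a11y are installed:

```js
// .eslintrc.cjs: a minimal sketch of accessibility linting for JSX.
module.exports = {
  plugins: ["jsx-a11y"],
  extends: ["plugin:jsx-a11y/recommended"],
  rules: {
    // Promote a frequently violated check from warning to error.
    "jsx-a11y/alt-text": "error",
  },
};
```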

Equipping a development team with browser extensions and linters is a free and fast way to empower them to find and fix accessibility bugs immediately. The tools are simple to use, and no special accessibility training is required to execute the tests or consume and action the results. If the goal is to get farther faster with regard to actioning automated accessibility test results and improving accessibility, the adoption of these tools is a great first step.

The Next Level

Now that we have strategies for using results to improve accessibility at an operational level, what’s next? How can we ensure that the whole organization knows accessibility is a practical piece of our development lifecycle? How can we build out our regression testing to include accessibility so that issues are not reintroduced?

Codify it!

One way we can truly ensure that the practices above are followed on a daily basis is to bring accessibility into your organization’s policy (also known as code policy or policy of code). Establishing this means that accessibility will be included throughout the SDLC as a foundational requirement and not an optional feature.

Although putting accessibility into policy can take a while to achieve, the benefits are immeasurable. It creates a set of accessible coding practices that are clearly defined and established, spelling out how accessibility becomes part of the acceptance criteria or definition of “done” at the company level. We can use the automated accessibility results to drive this policy of code and ensure that teams are doing full testing, using gates, and fixing the issues covered by the policy!

Automate it!

Most automated accessibility testing libraries are standard, out-of-the-box libraries that test generically for accessibility issues that exist on the page. They typically catch around 40% of issues, which is a good amount. However, there is a way we can write automated accessibility tests that go above and beyond even that!

Accessibility regression scripts allow you to check accessibility functionality and markup to ensure that the contents of your page behave the way they should. Will this guarantee it works with a screen reader? Nope. But it will ensure that its accessible functionality is working properly.

For example, let’s say you have an expand/collapse section that shows extra details when you click the button. Automated accessibility libraries would be able to check that the button has accessible text and maybe that it has a focus indicator. By writing a regression script, you could additionally check that:

  • It works with a keyboard (Enter and Space);
  • aria-expanded="true"/"false" is properly set on the button;
  • The content in the collapsed section is properly hidden from screen readers.

Doing this on key components can help ensure that the markup is properly set for assistive technology, and if there is an issue, it becomes easier to tell whether the problem lies in the code or is potentially a bug in the assistive technology itself. A sketch of such a regression script follows.
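
Here is what such a script might look like as a Cypress test; the selectors and URL are hypothetical placeholders, and Cypress’s simulated key events are assumed to reach the widget’s key handler (a library like cypress-real-events can be swapped in for native key presses):

```typescript
// cypress/e2e/disclosure-regression.cy.ts: accessibility regression
// checks for a hypothetical expand/collapse widget.
describe("details disclosure widget", () => {
  const toggle = "[data-testid=details-toggle]"; // hypothetical selector
  const panel = "[data-testid=details-panel]"; // hypothetical selector

  beforeEach(() => cy.visit("/product/123")); // illustrative URL

  it("toggles with the keyboard and keeps aria-expanded in sync", () => {
    cy.get(toggle).should("have.attr", "aria-expanded", "false");

    cy.get(toggle).focus().type("{enter}"); // Enter expands
    cy.get(toggle).should("have.attr", "aria-expanded", "true");

    cy.get(toggle).focus().type(" "); // Space collapses
    cy.get(toggle).should("have.attr", "aria-expanded", "false");
  });

  it("hides the collapsed content from screen readers", () => {
    // display:none (or hidden) removes content from the accessibility tree.
    cy.get(panel).should("not.be.visible");
  });
});
```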

Conclusion

The “shift left” movement within the accessibility industry over the last few years has done a lot of good in terms of generating awareness and momentum. It has helped engage and activate companies and teams to actually take action to impact accessibility and inclusion within their digital properties, which in and of itself is a victory.

Even so, the actual impact on the overall accessibility of the digital world will remain somewhat limited until teams are not only empowered to execute tests in efficient ways but also enabled to effectively use the test results to govern overall quality, drive rapid remediation, and ultimately put process and structure in place to prevent regression.

In the end, the goal is really more than simply shifting left with accessibility, which often amounts to taking the bottleneck of testing in the QA stage of the SDLC and simply dragging it left, upstream, into the CI/CD pipeline. If sustainable digital accessibility transformation is the goal, what is really needed is to decentralize the accessibility work and democratize it across the entire development team (and hopefully into design as well!) so that everyone participates in the process.

The huge increase in automated accessibility testing adoption is a wonderful first step, but ultimately its impact is limited if we don’t know what to do with the results. If teams better understand how to use these test results, then the increase in testing will, by default, increase the accessibility of the end product. Simple gatekeeping, effective tool use, and a mindful approach can have a major impact and lead to a more accessible digital world for all.

Related Reading on Smashing Magazine

SAST in Secure SDLC: 3 Reasons to Integrate It in a DevSecOps Pipeline

Vulnerabilities create enormous reputational and financial risks. As a result, many companies are focused on security and want to build a secure software development life cycle (SSDLC). So, today we're going to discuss SAST, one of the SSDLC components.

SAST (static application security testing) searches for security defects in application source code. SAST examines the code for potential vulnerabilities: possible SQL injections, XSS, SSRF, data encryption issues, etc. These vulnerabilities are included in the OWASP Top 10, CWE Top 25, and other lists.

Ingredients for a Perfect Design Document

Introduction

If you are a software developer or an architect, then surely you know how important software design documents (SDDs) are. You can find countless articles on the internet about the types of design documents, which type should have which headings, etc., so I am not going to waste your time explaining those things again.

Now the question is this: if so much information is already available on the internet, what does this article cover, and why?

What Can Go Wrong While Following Agile Methodology

Introduction

Most of the job postings you will see mention the term Agile process somewhere in the job description. If you talk to developers or managers in your network, most of them will tell you that their teams are using an Agile methodology.

So I believe it is fair to say that the term Agile process is trending in the software industry nowadays, and using an Agile process on your team is as cool as using AI/ML in your code. Agree?

The Complete Guide to SDLC

Introduction

35. That is the average number of applications installed on a smartphone. I believe it won’t be out of place to say that we live in a software world. I mean, we are in a digital age, aren’t we? Have you ever wondered what it takes to build this software? Yes, we all know developers make it, but that isn’t all. It takes a lot more.

This article will discuss the Software Development Life Cycle (SDLC) and its phases and methodologies.

Low Code Platforms Require Software Development Skills

Why it matters: IT leaders seeking to democratize technology with low-code citizen-developer projects may not realize that low-code platforms still require application development skills.

The big picture: Regardless of the development effort required, building more apps for workers to contend with increases complexity and builds technical debt.

Is Proof of Concept Approach Crucial in Software Development?

It’s a technique that we all love! PoC, or the proof-of-concept approach, is a trend in software development: creating a demo of a project in advance. It’s a fairly new method, especially when compared to old-school alternatives such as the RFP (request for proposal).

However, the big question to address is “How essential is the PoC approach in the SDLC?”

Why Continuous Performance Testing for Retail Apps Matters

The retail industry has been actively adopting digital transformation in order to provide a better user experience. Current trends show an enormous reliance on digital channels, placing them at the core of all significant online retail operations.

Mobile sales increased by 68 percent in 2020 and are anticipated to surpass other channels as the largest source of all sales by the end of this year. Can you identify the true reason for the increase in sales through retail mobile applications? The answer is “digital confidence.”

Solution vs. Software Architecture

In my tenure as a solution architect in financial services working for a global consulting firm, I have often questioned the best way to practice enterprise architecture. A common challenge that many architecture consultants face is that most client firms insist on using their proprietary enterprise architecture content. I have observed that such architecture content frequently fails to distinguish between solution and software architecture.

This article is not an academic paper. In it, I present my experiences and ideas to help my colleagues better understand the subtle differences between two similar but distinct architecture disciplines. At the very least, this article may trigger a conversation via the comments section that persuades me that I am wrong.

If Testing Was a Race, Data Would Win Every Time

Okay, so that title doesn’t make complete sense. However, if you read to the end of this article, all will become clear. I’m first going to discuss some of the persistent barriers to in-sprint testing and development. I will then discuss a viable route to delivering rigorously tested systems in short sprints.

The two kingpins in this approach will be data and automation, working in tandem to convert insights about what needs testing into rigorous automated tests. But first, let’s consider why it remains so challenging to design, develop and test in-sprint.

DevSecOps: Agile Security in the Face of Rapid Change

Most organizations have a streamlined process in place to create, release, and maintain functional software. However, when it comes to securing the software, things are not as smooth.

Insecure software puts businesses and customers at risk as hackers expose and exploit inherent vulnerabilities. With the world becoming ever more interconnected, loopholes in software prove to be costlier than ever before.

7 Software Development Models You Should Know

The Software Development Life Cycle, or SDLC, is the process of planning, designing, developing, testing, and deploying high-quality software at the lowest possible cost, preferably in the shortest amount of time. To achieve this goal, software engineering teams must choose the software development model that best fits their organization’s requirements, stakeholders’ expectations, and the project itself. There are myriad software development models, each with distinct advantages and disadvantages.

Details of the project, including its timeframe and budget, should influence your choice of model. The goal is to select a software development model that will ensure project success. Selecting the wrong model will result in drawn-out timelines, exceeded budgets, low-quality outputs, and even project failure.

Can Your Software Development Processes Withstand a Software Supply Chain Attack?

Enterprise software development has graduated from the “waterfall” framework of development and operations, becoming less linear, more complex, and, in several ways, more difficult to secure. While contemporary software supply chain practices allow developers to manage that complexity and deliver software efficiently at scale, unaddressed gaps and vulnerabilities within the process continue to be exploited by threat actors.

That’s why security measures within every step of software development and the supply chain must take top priority, as attacks continue to target the application layer and often succeed in penetrating the network and executing malicious instructions.