Patchstack Whitepaper: WordPress Ecosystem Records 150% Increase in Security Vulnerabilities in 2021

Patchstack has published its State of WordPress Security whitepaper with a summary of threats to the WordPress ecosystem recorded in 2021. The whitepaper aggregates data from multiple sources, including the Patchstack Vulnerability Database, the Patchstack Alliance (the company’s bug bounty platform), and publicly reported CVEs from other sources.

In 2021, Patchstack recorded nearly 1,500 vulnerabilities, a 150% increase over the ~600 recorded in 2020. Patchstack found that the majority of these come from the WordPress.org directory:

The WordPress.org repository leads the way as the primary source for WordPress plugins and themes. Vulnerabilities in these components represented 91.79% of vulnerabilities added to the Patchstack database.

The remaining 8.21% of the vulnerabilities reported in 2021 were in premium or paid versions of WordPress plugins or themes that are sold through marketplaces like Envato’s ThemeForest and CodeCanyon, or made available for direct download only.

WordPress core shipped four security releases, and only one included a patch for a critical vulnerability. This particular vulnerability was not in WordPress itself but rather in one of its bundled open source libraries, the PHPMailer library.

Patchstack estimates that 99.31% of all security bugs from 2021 were in components – WordPress plugins and themes. Themes had the most critical vulnerabilities, logging 55 this year. Patchstack found that 12.4% of vulnerabilities reported in themes had a critical CVSS score of 9.0-10.0. Arbitrary file upload vulnerabilities were the most common.

Plugins had a total of 35 critical security issues, fewer than themes, but 29% of these received no public patch.

“The most surprising finding was really also the most unfortunate truth,” Patchstack Security Advocate Robert Rowley said. “I was not expecting to see so many plugins with critical vulnerabilities in them not receive patches.

“Some of those vulnerabilities required no authentication to perform, and have publicly available proof of concepts (exploit code) made widely available online. It is probably already too late for the site owners who did not get a notice that their websites were vulnerable.”

Patchstack surveyed 109 WordPress site owners and found that 28% of respondents had zero budget for security, 27% budgeted $1-3/month, and just 7% budgeted ~$50/month. Agencies were more likely to allocate monthly costs to security than individual site owners.

Conversely, results from these same respondents showed $613 as the average cost of malware removal. Post-compromise cleanup prices ranged from $50 to $4,800.

Rowley sees the significant increase in security vulnerabilities found in 2021 as evidence of more engaged security professionals, not a sign of the WordPress ecosystem becoming less secure.

“Most likely this is due to more security bugs being reported (more vulnerable code being found, because more people are looking),” Rowley said. “Patchstack runs a bug bounty program which pays security researchers for the bugs they report in the WordPress ecosystem, which incentivizes security researchers (and even developers familiar with WordPress) to look for more security bugs.”

Overall, Patchstack’s findings this year show that WordPress core is very secure and the vast majority of vulnerabilities are found in themes and plugins. Users should monitor their extensions and periodically check to see if they have been abandoned, as not all vulnerable software is guaranteed to get patched. Check out the full security whitepaper for more details on the types of vulnerabilities most commonly found in 2021.

Should WordPress 6.0 Remove the “Beta” Label From the Site Editor?

The short answer: probably not.

The longer answer…

It will depend on many things going right over the next couple of months. WordPress 6.0 is scheduled to launch on May 24, 2022, with Beta 1 landing on April 12. Even with a slightly extended cycle, major updates come fast.

Anne McCarthy opened a discussion in the Gutenberg repository on Tuesday. She posted:

Across the community, I’m getting questions around when the beta label for the site editor will be removed. While not explicitly stated in the 6.0 post, I know it’s been discussed around having it removed for 6.0 as this was meant to be a temporary label to communicate that the site editor is still in early stages.

In order to track this concern and have a place to point people, I’m opening this issue so, when the time comes, a decision can be made and folks can be notified.

In response, Courtney Robertson said she would like to see a unified effort on creating support material across LearnWP, HelpHub, and DevHub. “What would the community’s expectation be for supporting resources related to beta, and what capacity do teams have to reach that goal?” she asked.

In a separate reply, she posed additional questions:

  • What does the 3rd-party ecosystem struggle with today for adopting FSE?
  • What impact will this have for end site builders using products (plugins) that want to but can’t yet work with the Site Editor because features are missing (custom post types, templates, shortcodes), and what is the resulting impact on their clients?
  • What areas still feel incomplete in the Site Editor?

“Before removing the label, we need feedback about the expectations when there is no beta label,” she wrote.

Alex Stine noted accessibility issues as a blocker for removing the beta label. Dave Ryan added items that theme authors were still hard-coding because they are unsupported in the editor.

Avoiding a repeat of the WordPress 5.0 Gutenberg debacle should be a priority. The block editor was arguably the worst feature rollout in the platform’s history, one that left a fractured community that is, over three years later, still picking up the pieces.

The problem was more about communication than anything. It was not that the block editor was in and of itself a poor product. It just felt very much like beta software that was switched on overnight — the platform’s users its guinea pigs. Plus, the lack of a built-in method of staying on the classic editor without installing a plugin made for a rough transition.

Aurooba Ahmed noted a similar risk with removing the beta label early:

[Josepha Haden Chomphosy] talked about the impact and experience of social proof not being on Gutenberg’s side early on in the project. I think a lot of that had to do with how things were presented and a bunch of PR issues. Removing the beta label from the Site Editor could be just as problematic.

Some FSE features, like block-based widgets and nav menus, have also had problematic rollouts. Developers and end users have often had to scramble for solutions because the features were switched on without an appropriate transition period.

However, the site editor and global styles have been entirely opt-in FSE features thus far. That is not changing anytime soon. Users must explicitly activate a block theme to access them.

This has made for a far gentler transition, allowing early adopters to test the waters before the rest of the world. And, make no mistake, the site editor and block-based themes fundamentally change how WordPress’s theme and customization system has worked for years.

We will be lucky if even 100 block themes are in the official directory when WordPress 6.0 launches. Today, there are 53, a fraction of the thousands of themes in total.

There is little harm in keeping the site editor in beta for a while longer. When something breaks, it feels better knowing it is an experimental feature.

Of course, it must come to an end one day as we peel back the label and let the site editor shine in its own light. It cannot stay in beta endlessly, and “6.0” is a nice, round, feel-good number. Despite WordPress marching to the beat of its own versioning drum, it does not erase how much those “x.0” releases feel like they should be revolutionary in some way. Putting a stamp of approval on the site editor would be a highlight, but it would likely be premature.

WordPress 6.1 may be a more opportune moment. That would leave no need to rush support material, gloss over accessibility issues, or deny features a cycle or two to mature.

AI and Explainability: Discover Why Your Models Make Their Decisions

This is an article from DZone's 2022 Enterprise AI Trend Report.


Explainable artificial intelligence, sometimes referred to as XAI, is exactly what it sounds like — explaining how and why a machine learning model makes a prediction. While models are usually classified as either "black box" or "glass box," it isn't quite as simple as that; there are some that fall somewhere in between. Some models are more naturally transparent than others, and their uses depend on the application. 

The Top 11 AWS Certificates You Need to Know

As a whole, the world is going through a digital revolution that shows no signs of slowing down any time soon. One of the main reasons for the ongoing digital transformation is the growth of cloud computing. The COVID-19 pandemic has accelerated this growth to new highs as physical businesses, nonprofits, and entrepreneurs have been forced to adapt to the new climate and embrace a fully online way of working and providing products and services.

Essentially, lower maintenance, electricity, and storage costs, together with greater reliability, speed, and cost-effectiveness, make migrating from the costly and clunky physical servers to the thin, elastic, and multiplatform cloud incredibly appealing to every type of business or entrepreneur.

Functional Testing For ASP.NET Core API

What Is Functional Testing, Actually?

Functional testing is the process through which we determine whether a piece of software behaves according to predetermined requirements. It uses black-box testing techniques, in which the tester does not know the internal system logic.

In the scope of an API, we need to prepare the requests we expect to receive and make sure the endpoints return a proper response for any payload.

How Milvus Balances Query Load Across Nodes

In previous blog articles, we have successively introduced the deletion, bitset, and compaction functions in Milvus 2.0. To conclude this series, we would like to share the design behind load balance, a vital function in the distributed cluster of Milvus.

Usage

Milvus 2.0 supports automatic load balance by default. But you can still trigger load balance manually. Please note that only sealed segments can be transferred across query nodes.

An Engineer’s Guide to TODOs: How to Get Things Done

We've long been promised a world where automation and other tech would free up our time to focus on more creative, rewarding pursuits.

However, we still find ourselves battling with small but time-sucking tasks. We all want to surrender ourselves to the deep focus we need to complete more important work, but there are two big time sucks:

How to Create a Truly Immersive Metaverse Experience – Implementing Spatial Audio with the Web Audio API

With the rise of the metaverse and 3D battle royale games, the demand for immersive audio experiences in virtual environments is growing rapidly. Spatial audio, a technology that allows users to perceive the location and distance of a sound source around them in a virtual scene, is quickly becoming an essential part of creating immersive virtual experiences.

In response to this rapidly growing demand for an immersive audio experience, we've added a proximity voice module to the ZEGOCLOUD Express Web SDK (since v2.10.0), which provides the following features:

Standard app for PDF continuously taken over by Edge

Hi
what could be the cause that Edge keeps being set as the standard app for PDF files?

I've changed it in the standard app settings.
I've changed it in the Explorer menu: right-click > Open with... > Choose another app > always open with the chosen app.
I've also made changes in the registry.
I even uninstalled Edge, but even then Edge becomes the standard app, and the process hangs and is hard to end.

But this annoying behaviour is still there.

thx for helping me out
grt Frank

Google Cloud Update Disrupts Discord API Availability

Earlier this week users of both Discord and Spotify noticed problems with accessing each platform, leading to significant frustration and Discord playfully tweeting that it's “time to go outside everyone.” The underlying issue ended up being connected to a Google Cloud component update that was later rolled back. 

Where to get Windows 10 iso

Hello,

I'm trying to make a nice USB stick with a Windows 10 ISO file on it. But where do I get that ISO file? I'm not looking for the tool which makes your USB stick bootable; I'm looking for the actual ISO file.

Thanks

Sanitize PHP user input strings

Suppose you have a PHP script where a user is prompted to enter a number. You then do something with that number: you increment it, perform some other math calculation on it, search the database for records with the ID # the user passed in as a query string, etc.

But what if your script is expecting a number, but they passed in something like apple?

What if you were expecting the end-user to visit the URL www.example.com?id=5 but, instead, they went to the URL www.example.com?id=apple?

You can't increment apple, as it would throw an error. You can't look up the ID # apple in your database ... or worse yet, a malicious person can use an SQL injection attack string that could actually destroy your database!

Therefore, you always want to sanitize user input into the format you are expecting.

If you are expecting $variable to be an integer, then do $variable = intval($variable); and that will convert whatever $variable happens to be to its integer equivalent. If you are expecting $variable to be a positive integer (e.g. an ID # in a database) then do $variable = abs(intval($variable));. If you want to strip HTML tags from a string, you can use the strip_tags() php function.

PHP additionally has sanitization functions to ensure a string is properly formatted as an email or a URL. The PHP manual has a more complete list of filters, but you can use them like so:

// To strip all characters except those that are permitted in email addresses
$email = filter_var($email, FILTER_SANITIZE_EMAIL);

You can also use validation filters which return true or false depending on if a string is in a specific format. For example:

if (filter_var($email, FILTER_VALIDATE_EMAIL)) {
    echo 'This string looks like a valid email address.';
}

By using filter_var() with sanitization and validation flags, you can ensure that a string doesn't contain hidden, weird, or invalid characters, or an invalid format, that can screw up what you're expecting a string to look like.

Suppose you want to pass a string into MySQL. I've seen people write MySQL queries like this: SELECT * FROM user_table WHERE string = '$string';

The problem with this is what if $string contains a quote?! What if the value of string is:

My name is 'Dani'

Then, you'd actually be running the SQL query:

SELECT * FROM user_table WHERE string = 'My name is 'Dani'';

Notice the extra single quote at the end there. That would throw a MySQL error. But it gets worse! What if the value of the string that the end-user passed into the form or URL is:

My name is 'Dani'; DROP TABLE user_table;

You would then execute two SQL queries:

SELECT * FROM user_table WHERE string = 'My name is 'Dani';
DROP TABLE user_table;

The end-user could literally delete your entire table!

If you want to pass a string into MySQL, PHP's MySQLi extension has a sanitization function that automatically escapes potentially dangerous characters from the string, so that you aren't susceptible to these types of hacks, and the string stays within the quotes the way you intended it to. The mysqli::real_escape_string() function is what you want to use anytime you need to pass a string into a MySQL query.

You can use it like this:

// New database connection
$mysqli = new mysqli("localhost", "my_user", "my_password", "database_name");

// Value of the string (either via query string, form, some other unknown/unsanitized user input, etc.)
$string = "This is Dani's string";

// It's important to sanitize the string before using it in a query!
$string = $mysqli->real_escape_string($string);

$query = " SELECT string FROM table WHERE string = '$string' ";

// Execute the MySQL query
$result = $mysqli->query($query);

If you want to sanitize a string before being echo'ed out to the web browser, you want to use htmlspecialchars(). You would do something such as:

$string = 'An apple & a banana is invalid HTML.';

// Converts the & (which is invalid HTML) to &amp;
echo htmlspecialchars($string);

In conclusion, always sanitize any variable where you don't have 1000% control over its value (e.g. all user input). However, sanitization should always be the last step before it is used in a database query, echo'ed to the screen, etc. You don't want to accidentally perform PHP-based calculations or manipulation with sanitized data, or you might wind up with unexpected results, depending on what you're trying to do.

An Introduction To AWS Cloud Development Kit (CDK)

When you start building a cloud-based back-end system for your application, you have a choice, on the one hand, to do it manually using a graphical user interface (GUI) or the command-line interface (CLI) or, on the other hand, to do it programmatically. If your application uses just a handful of cloud resources, you can easily manage it using the GUI console. As the complexity of your system increases, the underlying infrastructure will also grow, and managing it manually will become a nightmare. Moreover, it’s prone to human error — a small user error could potentially bring the system into a bad state. Managing your infrastructure programmatically is a much better alternative, whether you are an indie developer using just a small bunch of cloud resources or a large organization with very complex infrastructure requirements.

Before jumping into AWS CDK, I’ll provide a brief overview of the workflow for manual infrastructure deployment and discuss a few points to determine whether managing the infrastructure manually is the right choice for your project. Next, we’ll look into ways to programmatically manage your infrastructure and briefly discuss different tools that you can use to do so. Finally, we’ll dive deep into using AWS CDK, an infrastructure-as-code (IaC) tool offered by AWS, and see an example of how to use it to manage your infrastructure.

Manual Infrastructure Deployment

Manual infrastructure deployment refers to using the GUI or CLI made available by a cloud provider to deploy your cloud resources. Because it involves manual intervention, new environments cannot be created in a repeatable, reliable, or consistent fashion. Moreover, the run books need to be kept up to date, and knowledge transfer is required whenever there is a change in personnel.

For example, if you need cloud storage for your application and you decide to use AWS for your cloud requirements, then you can simply browse to the AWS cloud console, log in to it, click on “Create a new bucket”, and fill out the web form to provision an AWS S3 bucket. The diagram below shows an example of the form that you need to fill out in order to create the bucket.

If you prefer to use the CLI instead, open your terminal and run the create-bucket command.

aws s3api create-bucket --bucket my-bucket --region us-east-1

Similarly, if your application uses multiple cloud resources, you would need to repeat these steps for each of the services involved. In addition to provisioning the resources, you will need to ensure that the inter-service permissions are set correctly. And if you are using a different cloud provider, then you would have to perform a similar set of steps in their console. All of the major cloud providers have a GUI and a CLI interface that can be used to create, modify, or delete any cloud resources.

If your process is more formalized, then any infrastructure change might require a new service request. The diagram below shows a general workflow for manually processing any service request. A development and IT operations (DevOps) engineer might be responsible for processing this request and would need to perform a series of steps to make the changes. The DevOps engineer would first determine the list of affected cloud services, and then log in to the corresponding service account to create, modify, or delete resources. Moreover, the engineer would also update the access-control policies for inter-service communication. Finally, the engineer might need to set up any event triggers. For example, let’s say that a function needs to be triggered whenever a new object is uploaded to the cloud storage. In such a scenario, and assuming that the function already exists, the engineer would need to create a new event trigger that invokes the function every time the cloud storage emits a PUT object event.
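To make the last example concrete: the trigger the engineer sets up is ultimately just a piece of notification configuration on the bucket. A sketch of what that configuration might look like, assuming a hypothetical Lambda function named process-upload, is shown below; it can be applied with the aws s3api put-bucket-notification-configuration command.

```json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "invoke-on-upload",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-upload",
      "Events": ["s3:ObjectCreated:Put"]
    }
  ]
}
```

Note that S3 must also be granted permission to invoke the function (via lambda add-permission) before the trigger will fire, which is exactly the kind of inter-service wiring the engineer has to remember to do by hand.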

From the examples above, we get a sense that manually managing infrastructure isn’t a viable option for large projects with complex cloud requirements. For smaller projects, where you need to use just a few cloud resources that do not change often, you could very well manage it manually, because managing another code base for your infrastructure would be too much overhead. When you start working on a new prototype, you could start with manual deployment and switch to IaC once you see a need for frequent changes.

Programmatic Infrastructure Deployment

Programmatic infrastructure management refers to managing infrastructure in a descriptive model, using the same versioning as the DevOps team uses for source code. Most major cloud providers offer some way for you to manage infrastructure using code or templates.

AWS infrastructure can be managed programmatically using either AWS CloudFormation templates or AWS CDK. AWS CloudFormation templates are YAML- or JSON-based configuration files that describe the desired resources and their dependencies, so you can launch and configure them together as a stack. Google Cloud recommends the use of its Deployment Manager to manage your infrastructure. Similar to AWS CloudFormation, Google Cloud’s Deployment Manager templates are YAML templates that can be used to describe your resources. Microsoft Azure offers Azure Resource Manager (ARM) templates to deploy and manage Azure services. ARM templates are JSON templates that can be used to define resources and their relationships. Moreover, Terraform is an open-source IaC tool that supports hundreds of cloud providers, including AWS, Google Cloud, and Microsoft Azure, and can be used to manage your infrastructure. Terraform configurations are maintained in .tf files and are written in the HashiCorp Configuration Language (HCL).
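For a sense of what these templates look like, here is a minimal CloudFormation template that provisions a single S3 bucket (the bucket name is illustrative):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Provisions a single S3 bucket
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket
      VersioningConfiguration:
        Status: Enabled
```

Every resource in the stack is declared the same way: a logical name, a Type, and a block of Properties; CloudFormation figures out the ordering and dependencies at deploy time.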

AWS CloudFormation, Google Cloud Deployment Manager, Microsoft ARM, and HashiCorp Terraform all require YAML-, JSON-, or HCL-based templates, which might not be intuitive to developers. As complexity increases, working with YAML, JSON, or Terraform files becomes harder because the configuration is difficult to modularize. If you are working with AWS, you have the option to use AWS CDK, which we will discuss in detail in the coming sections. If you are using some other cloud provider, Terraform is currently the best IaC alternative, because it supports a declarative language (HCL) for defining your infrastructure.

In the coming sections, I will provide a brief overview of AWS CDK and its benefits, and I’ll dive deep into CDK constructs, apps, stacks, and the deployment process.

Introduction To AWS CDK

AWS CDK is an open-source framework that lets you model and provision AWS cloud resources using the programming language of your choice. It enables you to model application infrastructure using TypeScript, Python, Java, or .NET. Behind the scenes, it uses AWS CloudFormation to provision resources in a safe and repeatable manner.

The diagram below shows the infrastructure management workflow with AWS CDK.

Benefits Of AWS CDK

CDK offers multiple advantages, making it one of the preferred choices for programmatically managing infrastructure.

  • Easier cloud onboarding
    CDK lets you leverage your existing skills and tools to build a cloud infrastructure. Developers can use their language of choice and continue using their preferred integrated development environment (IDE) to write a CDK app. CDK also provides various high-level components that can be used to preconfigure cloud resources with proven defaults, helping you build on AWS without needing to be an expert.
  • Faster development process
    The expressive power of programming languages and features, such as objects, loops, and conditions, can significantly accelerate the development process. Moreover, writing unit test cases for infrastructure components is also possible. Being able to unit test infrastructure code is of immense value, and it bolsters the developer’s confidence whenever they make any changes.
  • Customizable and shareable
    CDK allows you to extend existing components to create custom components that meet your organization’s security, compliance, and governance requirements. These components can be easily shared within your organization, enabling you to rapidly bootstrap new projects with best practices built in.
  • No context switching
    You can write your runtime code and define your AWS resources with the same programming language, and you can continue using the same IDE for runtime code and infrastructure development. Moreover, you can visualize your CDK application stacks and resources with the AWS Toolkit for Visual Studio Code. The toolkit provides an integrated experience for developing serverless applications, including a getting-started guide, step-through debugging, and deployment from the IDE.

In the next few sections, I will provide a brief overview of CDK concepts, and then we will use the AWS CDK toolkit to deploy a sample application to an AWS account.

CDK Constructs

AWS CDK constructs are cloud components that encapsulate configuration detail and glue logic for one or more AWS services. CDK provides a library of constructs covering most of the commonly used AWS services and features. You can customize these constructs based on your needs and create reusable components for your organization. You can easily change any of the parameters or encode your own custom construct. In addition to the constructs made available through these libraries, CDK provides one-to-one mapping with base-level AWS CloudFormation resources, providing a way to define it with a programming language. These resources provide complete coverage and make it possible to provision any AWS resource using CDK.

AWS CDK supports TypeScript, JavaScript, Python, Java, and C#/.NET, with Go in developer preview. A construct represents a cloud component and encapsulates everything that AWS CloudFormation needs to create the component. When the CDK objects in your application are synthesized, they produce an AWS CloudFormation template that is deployed as a stack.

The CDK construct library covers most of the resources available on AWS. For example, s3.Bucket represents an Amazon S3 bucket, and sqs.Queue represents an Amazon SQS queue. The library contains three different levels of constructs: L1, L2, and L3.

L1 Constructs

The low-level constructs, L1, map one-to-one onto CloudFormation resources and are prefixed with Cfn. These constructs directly represent the resources available in AWS CloudFormation. For example, the s3.CfnBucket class represents an Amazon S3 bucket, and the dynamodb.CfnTable class represents an Amazon DynamoDB table. Let’s take a few examples to understand how constructs can be defined in a CDK application.

S3 Bucket Construct

The following code snippet can be used to create an S3 bucket and attach a policy to it that grants GetObject permission to the AWS account’s root user. In this example, we are using the addToResourcePolicy method to attach an IAM PolicyStatement to the bucket in order to provide fine-grained permissions:

import * as s3 from "@aws-cdk/aws-s3";
import * as iam from "@aws-cdk/aws-iam";

const bucket = new s3.Bucket(this, "CdkPlayBucket");
const result = bucket.addToResourcePolicy(
  new iam.PolicyStatement({
    actions: ["s3:GetObject"],
    resources: ["*"],
    principals: [new iam.AccountRootPrincipal()],
  })
);

DynamoDB Construct

The following code snippet can be used to create a DynamoDB table and attach autoscaling rules to it:

import * as dynamodb from "@aws-cdk/aws-dynamodb";

const table = new dynamodb.Table(this, "CdkPlayTable", {
  partitionKey: { name: "id", type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
});

const readScaling = table.autoScaleReadCapacity({
  minCapacity: 1,
  maxCapacity: 50,
});

readScaling.scaleOnUtilization({
  targetUtilizationPercent: 50,
});

The examples above demonstrate how constructs can be used to string together resources and configurations for your application.

L2 Constructs

The next level of constructs, L2, represents AWS resources with a higher-level, intent-based API. They provide sensible defaults, boilerplate code, and glue logic on top of the low-level L1 constructs. For example, bucket.addLifecycleRule() adds a lifecycle rule to an S3 bucket. The code snippet below shows how it can be done:

bucket.addLifecycleRule({
  abortIncompleteMultipartUploadAfter: Duration.days(7),
  enabled: true,
  id: 'BucketLifecycleRule'
})

Additionally, you can add a CORS rule to the bucket by using the addCorsRule construct. These rules are useful when you need to access the objects in a bucket from a third-party domain.

bucket.addCorsRule({
  allowedMethods: [
    s3.HttpMethods.GET,
    s3.HttpMethods.POST,
    s3.HttpMethods.PUT,
  ],
  allowedOrigins: ["https://smashingmagazine.com"],
  allowedHeaders: ["*"],
});

L3 Constructs

The highest level of constructs, L3, are also called patterns. These constructs are designed to help you complete common tasks in AWS, often involving multiple kinds of resources. For instance, aws-apigateway.LambdaRestApi represents an AWS API Gateway API that is backed by an AWS Lambda function. The code snippet below shows how it can be used.

Note: We are creating a lambda.Function with inline code that is being passed to the LambdaRestApi method in order to connect it with the API Gateway.

const backend = new lambda.Function(this, "CDKPlayLambda", {
  code: lambda.Code.fromInline(
    'exports.handler = function(event, ctx, cb) { return cb(null, "success"); }'
  ),
  handler: "index.handler",
  runtime: lambda.Runtime.NODEJS_14_X,
});
const api = new apigateway.LambdaRestApi(this, "CDKPlayAPI", {
  handler: backend,
  proxy: false,
});

const items = api.root.addResource("items");
items.addMethod("GET"); // GET /items
items.addMethod("POST"); // POST /items

CDK Stacks And Apps

AWS CDK apps are composed of building blocks known as constructs, which are combined together to form stacks and apps.

CDK Stacks

A stack is the smallest deployable unit in AWS CDK. All of the resources defined in a stack are provisioned as a single unit. A CDK stack is subject to the same limits as an AWS CloudFormation stack, such as the maximum number of resources per stack. You can define any number of stacks in your AWS CDK app. The code snippet below shows the scaffolding for a sample stack:

import * as cdk from "@aws-cdk/core";
export class CdkPlayStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // resources
  }
}

CDK Apps

As discussed above, all constructs that represent AWS resources must be defined within the scope of a stack construct. To deploy a stack, we need to initialize it within some scope. To define the stack within the scope of an application, we can use the App construct. The code snippet below instantiates CdkPlayStack and synthesizes the AWS CloudFormation template for the stack it defines.

import { App } from "@aws-cdk/core";
import { CdkPlayStack } from "./cdk-play-stack";

const app = new App();
new CdkPlayStack(app, "hello-cdk");
app.synth();
Using the CDK Toolkit

AWS provides a CLI tool, the CDK Toolkit, which is the primary way to interact with your AWS CDK application. It builds, synthesizes, and deploys the resources defined in your CDK application.

Create the App

The cdk init command can be used to initialize a new application in the language of your choice. Each CDK app maintains its own set of module dependencies and should be created in its own directory. For example, we can create a TypeScript CDK application with the sample-app template by using the following command:

cdk init sample-app --language=typescript

Executing this command will generate several files, but the file that interests us the most is lib/cdk-init-stack.ts, which contains a single stack with a few constructs initialized in it. The code snippet below shows the stack that was generated for us:

import * as sns from '@aws-cdk/aws-sns';
import * as subs from '@aws-cdk/aws-sns-subscriptions';
import * as sqs from '@aws-cdk/aws-sqs';
import * as cdk from '@aws-cdk/core';

export class CdkInitStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    const queue = new sqs.Queue(this, 'CdkInitQueue', {
      visibilityTimeout: cdk.Duration.seconds(300)
    });
    const topic = new sns.Topic(this, 'CdkInitTopic');
    topic.addSubscription(new subs.SqsSubscription(queue));
  }
}

The cdk init command also initializes the project as a Git repository, along with a .gitignore file. Apart from that, it generates a package.json file for managing project dependencies and a tsconfig.json file for the TypeScript configuration.

Once you have initialized the project, you can run the build command to compile the app manually. This step isn’t mandatory, because the CDK Toolkit does it for you before you deploy the changes, but a manual build can sometimes help in catching syntax errors. Here’s how it can be done:

npm run build

We saw earlier that the project was initialized with a single stack. We can verify this by executing the following command:

cdk ls

The ls command lists the stacks in the app; in our case, it returns the name of our app’s directory as the name of the stack. We can also review the changes made since the last deployment by using the cdk diff command.

Synthesize An AWS CloudFormation Template

Once we are done making changes to our stack, we can use the synth command to synthesize the stack to an AWS CloudFormation template. If our application contains multiple stacks, we will need to specify the name of the stack when executing the synth command. Here’s how we synthesize the stack:

cdk synth

This prints a YAML-formatted template to the terminal and writes the synthesized output to the cdk.out directory, with the resources defined in the stack converted to their AWS CloudFormation equivalents. The beginning of the YAML output is shown below:

Resources:
  CdkPlayQueue78BDD396:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 300
    UpdateReplacePolicy: Delete
    DeletionPolicy: Delete
    Metadata:
      aws:cdk:path: CdkPlayStack/CdkPlayQueue/Resource
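Because the synthesized template is plain data, it can also be inspected programmatically, which is handy for unit-testing your infrastructure. The sketch below uses a hand-written stand-in for the template above; in a real project you would load the generated template from the cdk.out directory, or use CDK's own assertion utilities:

```typescript
// Stand-in for the synthesized template shown above. In a real test you
// would read it from the cdk.out directory after running `cdk synth`.
const template = {
  Resources: {
    CdkPlayQueue78BDD396: {
      Type: "AWS::SQS::Queue",
      Properties: { VisibilityTimeout: 300 },
    },
  },
};

// Collect all resources of a given CloudFormation type from the template.
function resourcesOfType(tpl: typeof template, type: string) {
  return Object.values(tpl.Resources).filter((r) => r.Type === type);
}

const queues = resourcesOfType(template, "AWS::SQS::Queue");
console.log(queues.length); // 1
console.log(queues[0].Properties.VisibilityTimeout); // 300
```

Checks like this let you catch misconfigured resource properties in a code review or CI pipeline before anything is deployed.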

The YAML template generated by cdk synth is a perfectly valid AWS CloudFormation template, and it can be deployed either manually via the console or by using any other tool. The CDK Toolkit also supports deploying the template, and the next section describes how it can be done.

Deploy The Stack

Before trying to deploy the stack, make sure that you have the AWS CLI installed and that your AWS credentials are configured on your device. Refer to the quick-start document for more details on how to set up your credentials.

Finally, in order to deploy the stack using AWS CloudFormation, we will have to execute the following command:

cdk deploy

Similar to the synth command, we don’t need to specify the name of the stack if our application contains a single stack. If our stack results in any security-sensitive changes in our account, such as IAM policy changes, then the toolkit will ask us to confirm those changes before proceeding with the deployment. The screenshot below shows the confirmation prompt when we try to deploy the stack:

The toolkit displays the progress of the deployment, and once it succeeds, we can visit the AWS CloudFormation console to see our stack listed there. Also, if you check the SNS and SQS consoles, you will find the respective resources created for you.

Note: If you don’t see the resources or the stack, make sure that the region selected in the AWS console matches the region that you configured using the CLI.

The commands described above are some of the most commonly used toolkit commands. For a detailed overview of other commands, refer to the official documentation.

Conclusion

This article provided a quick overview of manual and programmatic deployment processes. Also, we talked about the different IaC options available, based on the cloud provider you are using, and then we went into detail on using AWS CDK to programmatically manage your AWS infrastructure. As we’ve seen, CDK offers multiple advantages over traditional techniques. It allows you to use logical statements and object-oriented techniques when modeling a system. You can define high-level abstractions, share them, and publish them to your team, company, or community. Moreover, the infrastructure project can be organized into logical modules and reused as a library. In addition to these benefits, CDK also makes the infrastructure code testable by using industry-standard protocols. It lets you leverage the existing code-review workflow for your infrastructure project.

Also, we saw how you can use the AWS CDK toolkit to interact with the CDK app. The toolkit allows you to synthesize the stacks to the AWS CloudFormation template and to deploy it to an AWS account. The complete source code of the sample CDK application that was used in this article can be found on GitHub. Moreover, you can refer to the cdk-samples repository for more examples of CDK-based stacks.

We also saw a few examples of the AWS Construct Library and how you can use L1, L2, and L3 constructs to glue together the system architecture. The AWS Construct Library reduces the complexity involved in integrating various AWS services for your application.

Relevance of Project Management Skills in an Agile World

As organizations move toward Agile transformation, several roles involved in software delivery are being redefined. One role that is gaining popularity is that of a Product Owner. On the other hand, a traditional role that is losing its significance in this context is that of a Project Manager, a role that is not part of the Scrum Team. While both the Project Manager and the Product Owner are accountable for the successful delivery of a product, the roles and responsibilities of each differ. This article examines the project management skills that are essential for the success of Product Owners in a Scrum Team.

Scrum Team and Roles

The Scrum Guide describes the Scrum Team as a small team of people consisting of one Scrum Master, one Product Owner, and Developers. 

WP Force SSL

The emphasis on safe websites is probably more prominent than it’s ever been. Because content on the web is more accessible than ever, the sheer amount of information that’s available, shared and interacted with raises...

The post WP Force SSL appeared first on 85ideas.com.

Have Maturity Models Become Irrelevant?

What Is It About?

Maturity models are based on consistent, systematic, linear-scale assessments and representations of existing software delivery processes, applied through standardized methods of evaluation. This makes it possible to quantify the maturity of methods, ways of working, and applications of technology in the software delivery process.

What Is It Good For?

Maturity models are arguably good for organizations that seek consistent, measurable improvement. In those organizations, different areas of software development can be assessed and rated (e.g., defect management process maturity or test data management process maturity) and subsequently benchmarked against a standard or an industry average.