WordPress Security Team Discusses Backporting Security Releases to Fewer Versions

The WordPress Security Team is exploring different approaches to backporting security fixes to older versions of the software. The effort that goes into supporting versions back to 3.7 (the release that introduced automatic background updates) increases with each major version released.

“For the Core Security team, that means when security updates need to be released, we have to take the testing and release process not just to the current version of WordPress, but we have to test the changes, create code patches, and then release to every major version all the way back to 3.7,” security team lead Jake Spurlock said. “With 5.3 around the corner that puts us at over fifteen major versions of WordPress to support long term.”

Spurlock said 3.7 represents 0.1% of all WordPress sites but noted that supporting older versions requires “a large amount of time and energy and hurts the team’s ability to work effectively.”

When asked how much of a time investment is involved, Spurlock said it varies depending on how many tickets/issues have to be ported. All patches are reviewed, tested, and committed by several team members. There are approximately 50 security experts on the team, many of whom are employed by Automattic, although some are volunteers.

“The problem with developing security releases for older versions of WordPress lies in the amount of testing and then reengineering that is specific to each older version of WordPress,” Spurlock said. “As an example, WordPress 4.2 received a fairly large refactor, and so taking a fix back before that time means extra testing, and ensuring that paths works for patches and more. Getting the testing suite to work on older versions has been difficult too with the code changes that accompany each version.”

Spurlock called for feedback and ideas on how the security team can support fewer versions of WordPress while keeping users secure. An active discussion is underway and opinions range from enthusiastic support for the idea to opposition.

Some who weighed in prefer to focus on urging users to update via emails to admins on older installs and/or a “please upgrade” widget ported back to older versions. As big version jumps can be intimidating for users, some recommended WordPress provide better ways to do incremental updates from older versions to the next most recent.

“If the goal is to keep WordPress users secure against hackers and other rogue agents, you should continue supporting older versions with security releases,” WordPress core contributor Rami Yushuvaev said.

“WordPress 3.7 represents 0.1% of all WordPress sites but WordPress 3.0 – 3.6 represents 1.6% of all WordPress sites. You don’t want to increase the number of sites using un-secure versions. With the current policy, ‘old version’ is not the same as ‘un-secure version.’

“I think you should educate users to use updated software, not to stop releasing security releases for older versions.”

Several commenters are in favor of limiting the backporting of security fixes to a set number of versions, as outlined by former WordPress security lead Aaron Campbell:

I like the idea of supporting X versions back. That allows users to know that they don’t have to update to the latest version no matter what our release cycles are, and also makes sure we can eventually hone in on how many versions are actually tenable to support.

Supporting X years back would allow users to know they can avoid upgrading for a certain amount of time, but it would also mean that the security team wouldn’t always be supporting the same number of versions and if a release ever took longer than our supported time then all users would be expected to upgrade to the latest version (exceptions could always be made, but it’s harder to rely on those).

Stephen Edgar, one of the maintainers of WordPress’ build tools component, suggested implementing automatic major version upgrades to keep moving users forward to supported versions in waves.

“Maybe continue to ship them until ‘major’ updates are implemented,” Edgar said. “The current thinking is to add major updates to 3.7 first, bumping 3.7 to 3.8 via automatic updates. Once that’s completed then security updates would no longer be backported to the 3.7 branch.

“And similarly, once 3.8 major updates are implemented, i.e. 3.8 gets bumped to x.x then again, backports to 3.8 would cease at the same time and so forth through the branches.”

Edgar also noted that providing users a way to opt into automatic updates for major core releases is one of the nine projects that Matt Mullenweg had identified for working on in 2019.

Several other commenters said they would like to see WordPress implement semantic versioning and adopt a long-term support (LTS) policy. WordPress would then clearly communicate the number of years those versions would be supported. Older sites could then be auto-updated to the LTS version.

No decision has been made on the ideas proposed and the discussion is still ongoing. If you have experience maintaining older sites or have input on how WordPress can best keep users secure while decreasing the workload, leave a comment on the Make WordPress Core post.

How much specificity do @rules have, like @keyframes and @media?

I got this question the other day. My first thought was: weird question! Specificity is about selectors, and at-rules are not selectors, so... irrelevant?

To prove that, we can use the same selector inside and outside of an at-rule and see if it seems to affect specificity.

body {
  background: red;
}
@media (min-width: 1px) {
  body {
    background: black;
  }
}

The background is black. But... is that because the media query increases the specificity? Let's switch them around.

@media (min-width: 1px) {
  body {
    background: black;
  }
}
body {
  background: red;
}

The background is red, so nope. The red background wins here just because it is later in the stylesheet. The media query does not affect specificity.

If it feels like an at-rule is increasing specificity and overriding other styles with the same selector, it's likely just because that rule comes later in the stylesheet.

Still, the @keyframes in the original question got me thinking. Keyframes, of course, can influence styles. They don't change specificity, but it can feel like they do when styles end up overridden.

See this tiny example:

@keyframes winner {
  100% { background: green; }
}
body {
  background: red !important;
  animation: winner forwards;
}

You'd think the background would be red, especially with the !important rule there. (By the way, !important doesn't affect specificity; it's applied per declaration and handled separately by the cascade.) It is red in Firefox, but it's green in Chrome. So that's a funky thing to watch out for. (It's been a bug since at least 2014, according to Estelle Weyl.)

The post How much specificity do @rules have, like @keyframes and @media? appeared first on CSS-Tricks.

Intrinsically Responsive CSS Grid with minmax() and min()

The most famous line of code to have come out of CSS grid so far is:

grid-template-columns: repeat(auto-fill, minmax(10rem, 1fr));

Without any media queries, that will set up a grid container with a flexible number of columns. The columns stretch a little until there is enough room for another one, at which point a new column is added; the same thing happens in reverse as the container shrinks.

The only weakness here is that first value in minmax() (the 10rem value above). If the container is narrower than whatever that minimum is, elements in that single column will overflow. Evan Minto shows us the solution with min():

grid-template-columns: repeat(auto-fill, minmax(min(10rem, 100%), 1fr));

The browser support isn't widespread yet, but Evan demonstrates some progressive enhancement techniques to take advantage of now.

Direct Link to Article

The post Intrinsically Responsive CSS Grid with minmax() and min() appeared first on CSS-Tricks.

How to Test WordPress Site Backups

If you have been managing WordPress websites for long, then you are aware of the importance of taking regular backups. Backups save you from a whole lot of trouble in the event of any unforeseen disaster. In addition to taking regular backups, it’s also important to handle your backups securely. Why? Because a number of things […]

The post How to Test WordPress Site Backups appeared first on WPExplorer.

Creating Dynamic Routes in a Nuxt Application

In this post, we’ll be using an ecommerce store demo I built and deployed to Netlify to show how we can make dynamic routes for incoming data. It’s a fairly common use case: you get data from an API, and you don’t know exactly what that data might be, there’s a lot of it, or it might change. Luckily for us, Nuxt makes the process of creating dynamic routing very seamless.

In the last post, we showed how to set up Stripe payments with Netlify Functions, which allow us to create serverless lambda functions with ease. We’ll use this same application to show another Nuxt-specific functionality as well.

Let’s get started!

Creating the page

In this case, we’ve got some dummy data for the store that I created in Mockaroo and am storing in the static folder. Typically you’ll use fetch or axios and an action in the Vuex store to gather that data. Either way, we store the data with Vuex in store/index.js, along with the UI state and an empty array for the cart.

import data from '~/static/storedata.json'

export const state = () => ({
 cartUIStatus: 'idle',
 storedata: data,
 cart: []
})
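
If you were pulling that data from an API instead, a minimal sketch of the same store using an axios-powered action might look like this (the endpoint URL is a placeholder and the action/mutation names are my own, not necessarily what the demo uses):

import axios from 'axios'

export const state = () => ({
  cartUIStatus: 'idle',
  storedata: [],
  cart: []
})

export const mutations = {
  // Replace the empty storedata array with whatever the API returned
  setStoreData(state, data) {
    state.storedata = data
  }
}

export const actions = {
  async getStoreData({ commit }) {
    // Placeholder endpoint; swap in your own API
    const { data } = await axios.get('https://your-api-here/products')
    commit('setStoreData', data)
  }
}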

It’s important to mention that in Nuxt, all we have to do to set up routing in the application is create a .vue file in the pages directory. So we have an index.vue page for our homepage, a cart.vue page for our cart, and so on. Nuxt automagically generates all the routing for these pages for us.

In order to create dynamic routing, we will make a directory to house those pages. In this case, I made a directory called /product, since that’s what the routes will be: a view of each individual product’s details.

In that directory, I’ll create a page whose filename starts with an underscore, followed by the unique identifier I want to use per page to create the routes. If we look at the data I have in the store, it looks like this:

[
 {
   "id": "9d436e98-1dc9-4f21-9587-76d4c0255e33",
   "color": "Goldenrod",
   "description": "Mauris enim leo, rhoncus sed, vestibulum sit amet, cursus id, turpis. Integer aliquet, massa id lobortis convallis, tortor risus dapibus augue, vel accumsan tellus nisi eu orci. Mauris lacinia sapien quis libero.",
   "gender": "Male",
   "name": "Desi Ada",
   "review": "productize virtual markets",
   "starrating": 3,
   "price": 50.40,
   "img": "1.jpg"
 },
  …
]

You can see that the ID for each entry is unique, so that’s a good candidate to use. We’ll call the page:

_id.vue
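
Assuming the file names used in this post, the pages directory now looks something like this, with Nuxt generating a route for each file:

pages/
├── index.vue       → /
├── cart.vue        → /cart
└── product/
    └── _id.vue     → /product/:id (dynamic)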

Now, we can store the id of the particular page in our data by using the route params:

data() {
 return {
   id: this.$route.params.id,
  }
},

For the entry from above, our data if we looked in devtools would be:

id: "9d436e98-1dc9-4f21-9587-76d4c0255e33"

We can now use this to retrieve all of the other information for this entry from the store. I’ll use mapState:

import { mapState } from "vuex";

computed: {
 ...mapState(["storedata"]),
 product() {
   return this.storedata.find(el => el.id === this.id);
 }
},

And we’re filtering the storedata to find the entry with our unique ID!

Let the Nuxt config know

If we were building the app with yarn build, we’d be done. But we’re using Nuxt to create a static site, which means we’ll build with the yarn generate command, and that means we have to let Nuxt know about the dynamic routes via the generate property in nuxt.config.js.

The routes option inside generate expects either a plain array of routes or a function that returns a promise resolving to such an array. Hard-coded, it looks like this:

export default {
  generate: {
    routes: [
      '/product/1',
      '/product/2',
      '/product/3'
    ]
  }
}

To create this, at the top of the file we’ll bring in the data from the static directory, and create the function:

import data from './static/storedata.json'
let dynamicRoutes = () => {
 return new Promise(resolve => {
   resolve(data.map(el => `/product/${el.id}`))
 })
}

We’ll then call the function within our config:

generate: {
  routes: dynamicRoutes
},

If you’re gathering your data from an API with axios instead (which is more common), it would look more like this:

import axios from 'axios'
let dynamicRoutes = () => {
 return axios.get('https://your-api-here/products').then(res => {
   return res.data.map(product => `/product/${product.id}`)
 })
}

And with that, we’re completely done with the dynamic routing! If you shut down and restart the server, you’ll see the dynamic routes per product in action!

For the last bit of this post, we’ll keep going, showing how the rest of the page was made and how we’re adding items to our cart, since that might be something you want to learn, too.

Populate the page

Now we can populate the page with whatever information we want to show, with whatever formatting we would like, as we have access to it all with the product computed property:

<main>
 <section class="img">
   <img :src="`/products/${product.img}`" />
 </section>
 <section class="product-info">
   <h1>{{ product.name }}</h1>
   <h4 class="price">{{ product.price | dollar }}</h4>
   <p>{{ product.description }}</p>
 </section>
 ...
</main>
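
One thing the template relies on that isn’t shown in this post is the dollar filter applied to product.price. A minimal sketch of such a filter, registered as a Nuxt plugin (the file name and formatting logic here are assumptions, not necessarily what the demo uses), could look like:

// plugins/filters.js (hypothetical file name)
import Vue from 'vue'

// Format a number as a dollar string, e.g. 50.4 -> "$50.40"
Vue.filter('dollar', value => `$${Number(value).toFixed(2)}`)

The plugin would then be registered in nuxt.config.js via plugins: ['~/plugins/filters.js'].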

In our case, we’ll also want to add items to the cart that lives in the store. We’ll add controls to increase and decrease the quantity (while not letting it dip below zero):

<p class="quantity">
 <button class="update-num" @click="quantity > 0 ? quantity-- : quantity = 0">-</button>
 <input type="number" v-model="quantity" />
 <button class="update-num" @click="quantity++">+</button>
</p>
...
<button class="button purchase" @click="cartAdd">Add to Cart</button>

In the methods on that component, we’ll take the item, add a new field for the quantity, and pass it as the payload to a mutation in the store.

methods: {
 cartAdd() {
   // Grab the product from the computed property and attach the chosen quantity
   let item = this.product;
   item.quantity = this.quantity;
   // tempcart is a local array kept in this component's data (not shown in this post)
   this.tempcart.push(item);
   // Commit the item to the Vuex store
   this.$store.commit("addToCart", item);
 }
}

In the Vuex store, we’ll check if the item already exists. If it does, we’ll just increase the quantity. If not, we’ll add the whole item with quantity to the cart array.

addToCart: (state, payload) => {
 let itemfound = false
 state.cart.forEach(el => {
   if (el.id === payload.id) {
     el.quantity += payload.quantity
     itemfound = true
   }
 })
 if (!itemfound) state.cart.push(payload)
}

We can now use a getter in the store to calculate the cart total, which is what we’ll eventually pass to our Stripe serverless function (the other post picks up from here). We’ll use a reduce for this, as reduce is very good at deriving one value from many. (I wrote up more details on how reduce works here).

cartTotal: state => {
 if (!state.cart.length) return 0
 return state.cart.reduce((ac, next) => ac + next.quantity * next.price, 0)
}
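
As a quick sketch of how that getter might be consumed in a component (not necessarily how the demo wires it up), mapGetters makes it available as a computed property:

import { mapGetters } from "vuex";

computed: {
  ...mapGetters(["cartTotal"])
}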

And there you have it! We’ve set up individual product pages, and Nuxt generates all of our individual routes for us at build time. You’d be Nuxt not to try it yourself. 😬

The post Creating Dynamic Routes in a Nuxt Application appeared first on CSS-Tricks.

The Simplest Way to Load CSS Asynchronously

Scott Jehl:

One of the most impactful things we can do to improve page performance and resilience is to load CSS in a way that does not delay page rendering. That’s because by default, browsers will load external CSS synchronously—halting all page rendering while the CSS is downloaded and parsed—both of which incur potential delays.

<link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">

Don't just up and do this to all your stylesheets, though; otherwise, you'll get a pretty nasty "Flash of Unstyled Content" (FOUC) as the page loads. You need to pair the technique with a way to ship critical CSS. Do that, though, and like Scott's opening sentence says, it's quite impactful.

Interesting side story... on our Pen Editor page over at CodePen, we had a FOUC problem:

What makes it weird is that we load our CSS in <link> tags in the <head> completely normally, which should block rendering and prevent FOUC. But there is some hard-to-reproduce bug at work. Fortunately we found a weird solution, so now we have an empty <script> tag in the <head> that somehow solves it.

Direct Link to Article

The post The Simplest Way to Load CSS Asynchronously appeared first on CSS-Tricks.

How to Use Cookies in Spring Boot

An HTTP Cookie (also known as a web cookie or browser cookie) is a small piece of information stored by the server in the user's browser. The server sets the cookies while returning the response for a request made by the browser. The browser stores the cookies and sends them back with the next request to the same server. Cookies are generally used for session management, user-tracking, and to store user preferences.

Cookies help the server remember the client across multiple requests. Without cookies, the server would treat every request as if it came from a new client.

Secure Your Cloud Estate with Continuous Audits

To meet the demands of an ever more connected world, executing on a comprehensive cloud strategy has become a critical component for organizations at any scale. While cloud platforms have made it incredibly easy to define and scale environments on demand, with those capabilities come new challenges in how to validate that those environments have been securely designed. With high-profile data breaches making headlines on a regular basis, it's only natural to feel some anxiety, but by implementing a process for continuous, automated audits, organizations can detect and correct deviations from security best practices at any scale.

To put a finer point on the challenges facing cloud security, CSO published The Dirty Dozen: 12 Top Cloud Security Threats. In this article, CSO outlines 12 threats collected and ranked by the Cloud Security Alliance (CSA), providing examples and outlining the severity of each threat. In our latest webinar, Secure Your Cloud Estate with Continuous Audits, we provided guidance on how organizations can use Chef to address those threats consistently and continuously.

4 Advantages to Building Cloud-Native Applications With AWS

The State of Cloud-Native Security Report 2018 states that 62% of enterprises today choose cloud-native applications for more than half of their new applications, and this number is set to grow by 80% over the next three years. This is no surprise given that most organizations are already heavily invested in their chosen cloud platform and would like to use it to its full potential.

Cloud-native applications are essentially those created specifically for the cloud and designed to leverage the gamut of resources available on the cloud. Being "cloud-native" means that an application has a vast operational landscape, capable of being available from wherever the cloud is instead of being tied down to a physical location. 

How Do Developers Choose the Best Fit for Containers-as-a-Service (CaaS)?

The emergence of cloud-native development and containers has redefined how software is developed. But not all organizations have the resources or expertise to set up the required infrastructure to support a containerized application. Luckily, cloud vendors offer Containers-as-a-Service to help developers capitalize on the benefits of cloud-native development.

All three leading cloud providers have CaaS products but choosing the right one can be a challenge. While everyone has different requirements, it is always beneficial to understand what solutions others are using and why, to help inform decisions.

Rotating Service Credentials for IBM Cloud Functions

If you have followed some of my work, you know that I use IBM Cloud Functions, i.e., a serverless approach, for many projects. The tutorials with a database-driven (Db2-backed) Slackbot and the GitHub traffic analytics are such examples. In this blog post, I want to detail some of the security-related aspects. This includes how to share service credentials (think of a database username and password) with a cloud function and how to rotate the credentials.

Create and Bind Credentials

In order for a user or an app to access a service like a database system or a chatbot, a username and password or API keys are needed. In general, they are called service credentials. For many cloud computing technologies, sharing those credentials between services and apps is called binding a service.

Role of Enterprise Architecture in DevOps Adoption

Introduction

In traditional development, the development team and operations team always work in silos. Development teams focus on functionality, features, user experience, architecture, and non-functional requirements. The focuses of operations teams are cost, reliability, security, risk, and manageability.

DevOps is the set of concepts, cultural philosophies, practices, team organization structures, and tools that increases an organization’s ability to deliver applications and services to clients at high velocity. It helps teams respond quickly to new requirements and to problems that occur in production. This enables organizations to better serve their customers and compete more effectively in the market.

CALMS for DevSecOps: Part 3—How Lean Improves Performance

This is the third of my blog posts investigating DevSecOps through the CALMS lenses. We’ve looked at Culture and Automation already; now it's time for:

Lean

In the previous blog post, I looked at some DevSecOps tools I particularly like and introduced you to TaskTop. TaskTop is the ultimate tool for optimizing your flow from idea to realization—a connectivity framework that takes the pain away from writing and managing integrations and visualizing your lead and cycle times. It’ll show you where your bottlenecks are.

Run useEffect Only Once

React has a built-in hook called useEffect. Hooks are used in function components; the closest class component equivalents to useEffect are the lifecycle methods componentDidMount, componentDidUpdate, and componentWillUnmount.

useEffect will run when the component renders, which might be more times than you think. I feel like I've had this come up a dozen times in the past few weeks, so it seems worthy of a quick blog post.

import React, { useEffect } from 'react';

function App() {
  useEffect(() => {
    // Run! Like go get some data from an API.
  });

  return (
    <div>
      {/* Do something with data. */}
    </div>
  );
}

In a totally isolated example like that, it's likely the useEffect will run only once. But in a more complex app with props flying around and such, it's certainly not guaranteed. The problem with that is that if you're doing something like fetching data from an API, you might end up double-fetching, which is inefficient and unnecessary.

The trick is that useEffect takes a second parameter.

The second param is an array of dependencies: values that React compares against the previous render, re-running the effect only when one of them has changed. You could put whatever bits of props and state you want in there to check against.
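
For example, here's a sketch of an effect that re-fetches only when a userId prop changes (the prop name and endpoint are made up for illustration):

import React, { useEffect, useState } from 'react';

function Profile({ userId }) {
  const [user, setUser] = useState(null);

  useEffect(() => {
    // Re-runs only when userId changes, not on every render
    fetch(`/api/users/${userId}`)
      .then(res => res.json())
      .then(setUser);
  }, [userId]);

  return <div>{user ? user.name : 'Loading...'}</div>;
}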

Or, put nothing:

import React, { useEffect } from 'react';

function App() {
  useEffect(() => {
    // Run! Like go get some data from an API.
  }, []);

  return (
    <div>
      {/* Do something with data. */}
    </div>
  );
}

That will ensure the useEffect only runs once.

Note from the docs:

If you use this optimization, make sure the array includes all values from the component scope (such as props and state) that change over time and that are used by the effect. Otherwise, your code will reference stale values from previous renders.

The post Run useEffect Only Once appeared first on CSS-Tricks.