7 Tips on Choosing WordPress Themes for Your Website

In 2020, a whopping 35% of the Internet is powered by WordPress, which means over 455,000,000 websites are using WordPress as their platform. WordPress is so popular because it is easy to install, customizable down to the last detail, secure, and approachable for beginners. One of the most distinctive features […]

The post 7 Tips on Choosing WordPress Themes for Your Website appeared first on WPArena.

Nailing the Perfect Contrast Between Light Text and a Background Image

Have you ever come across a site where light text is sitting on a light background image? If you have, you’ll know how difficult that is to read. A popular way to avoid that is to use a transparent overlay. But this leads to an important question: Just how transparent should that overlay be? It’s not like we’re always dealing with the same font sizes, weights, and colors, and, of course, different images will result in different contrasts.

Trying to stamp out poor text contrast on background images is a lot like playing Whac-a-Mole. Instead of guessing, we can solve this problem with HTML <canvas> and a little bit of math.

Like this:

We could say “Problem solved!” and simply end this article here. But where’s the fun in that? What I want to show you is how this tool works so you have a new way to handle this all-too-common problem.

Here’s the plan

First, let’s get specific about our goals. We’ve said we want readable text on top of a background image, but what does “readable” even mean? For our purposes, we’ll use the WCAG definition of AA-level readability, which says text and background colors need enough contrast between them such that one color is 4.5 times lighter than the other.

Let’s pick a text color, a background image, and an overlay color as a starting point. Given those inputs, we want to find the overlay opacity level that makes the text readable without hiding the image so much that it, too, is difficult to see. To complicate things a bit, we’ll use an image with both dark and light space and make sure the overlay takes that into account.

Our final result will be a value we can apply to the CSS opacity property of the overlay that gives us the right amount of transparency that makes the text 4.5 times lighter than the background.

Optimal overlay opacity: 0.521

To find the optimal overlay opacity we’ll go through four steps:

  1. We’ll put the image in an HTML <canvas>, which will let us read the colors of each pixel in the image.
  2. We’ll find the pixel in the image that has the least contrast with the text.
  3. Next, we’ll prepare a color-mixing formula we can use to test different opacity levels on top of that pixel’s color.
  4. Finally, we’ll adjust the opacity of our overlay until the text contrast hits the readability goal. And these won’t just be random guesses — we’ll use binary search techniques to make this process quick.

Let’s get started!

Step 1: Read image colors from the canvas

Canvas lets us “read” the colors contained in an image. To do that, we need to “draw” the image onto a <canvas> element and then use the canvas context (ctx) getImageData() method to produce a list of the image’s colors.

function getImagePixelColorsUsingCanvas(image, canvas) {
  // The canvas's context (often abbreviated as ctx) is an object
  // that contains a bunch of functions to control your canvas
  const ctx = canvas.getContext('2d');


  // The width can be anything, so I picked 500 because it's large
  // enough to catch details but small enough to keep the
  // calculations quick.
  canvas.width = 500;


  // Make sure the canvas matches proportions of our image
  canvas.height = (image.height / image.width) * canvas.width;


  // Grab the image and canvas measurements so we can use them in the next step
  const sourceImageCoordinates = [0, 0, image.width, image.height];
  const destinationCanvasCoordinates = [0, 0, canvas.width, canvas.height];


  // Canvas's drawImage() works by mapping our image's measurements onto
  // the canvas where we want to draw it
  ctx.drawImage(
    image,
    ...sourceImageCoordinates,
    ...destinationCanvasCoordinates
  );


  // Remember that getImageData only works for same-origin or 
  // cross-origin-enabled images.
  // https://developer.mozilla.org/en-US/docs/Web/HTML/CORS_enabled_image
  const imagePixelColors = ctx.getImageData(...destinationCanvasCoordinates);
  return imagePixelColors;
}

The getImageData() method gives us a list of numbers representing the colors in each pixel. Each pixel is represented by four numbers: red, green, blue, and opacity (also called “alpha”). Knowing this, we can loop through the list of pixels and find whatever info we need. This will be useful in the next step.
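For instance, here’s a minimal sketch of how you could pull the RGBA values of a single pixel out of that flat list. This helper isn’t part of the tool itself — the function name is made up for illustration:

function getPixelColorAtIndex(imagePixelColors, pixelIndex) {
  // Each pixel occupies four consecutive entries in the flat data array
  const offset = pixelIndex * 4;
  return {
    r: imagePixelColors.data[offset],     // red, 0-255
    g: imagePixelColors.data[offset + 1], // green, 0-255
    b: imagePixelColors.data[offset + 2], // blue, 0-255
    a: imagePixelColors.data[offset + 3], // alpha, 0-255
  };
}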

Image of a blue and purple rose on a light pink background. A section of the rose is magnified to reveal the RGBA values of a specific pixel.

Step 2: Find the pixel with the least contrast

Before we do this, we need to know how to calculate contrast. We’ll write a function called getContrast() that takes in two colors and spits out a number representing the level of contrast between the two. The higher the number, the better the contrast for legibility.

When I started researching colors for this project, I was expecting to find a simple formula. It turned out there were multiple steps.

To calculate the contrast between two colors, we need to know their luminance levels, which essentially measure their brightness. (Stacie Arellano does a deep dive on luminance that’s worth checking out.)

Thanks to the W3C, we know the formula for calculating contrast using luminance:

const contrast = (lighterColorLuminance + 0.05) / (darkerColorLuminance + 0.05);

Getting the luminance of a color means we have to convert the color from the regular 8-bit RGB value used on the web (where each color is 0-255) to what’s called linear RGB. The reason we need to do this is that brightness doesn’t increase evenly as colors change. We need to convert our colors into a format where the brightness does vary evenly with color changes. That allows us to properly calculate luminance. Again, the W3C is a help here:

const luminance = (0.2126 * getLinearRGB(r) + 0.7152 * getLinearRGB(g) + 0.0722 * getLinearRGB(b));

But wait, there’s more! In order to convert 8-bit RGB (0 to 255) to linear RGB, we need to go through what’s called standard RGB (also called sRGB), which is on a scale from 0 to 1.

So the process goes: 

8-bit RGB → standard RGB → linear RGB → luminance

And once we have the luminance of both colors we want to compare, we can plug in the luminance values to get the contrast between their respective colors.

// getContrast is the only function we need to interact with directly.
// The rest of the functions are intermediate helper steps.
function getContrast(color1, color2) {
  const color1_luminance = getLuminance(color1);
  const color2_luminance = getLuminance(color2);
  const lighterColorLuminance = Math.max(color1_luminance, color2_luminance);
  const darkerColorLuminance = Math.min(color1_luminance, color2_luminance);
  const contrast = (lighterColorLuminance + 0.05) / (darkerColorLuminance + 0.05);
  return contrast;
}


function getLuminance({r,g,b}) {
  return (0.2126 * getLinearRGB(r) + 0.7152 * getLinearRGB(g) + 0.0722 * getLinearRGB(b));
}
function getLinearRGB(primaryColor_8bit) {
  // First convert from 8-bit RGB (0-255) to standard RGB (0-1)
  const primaryColor_sRGB = convert_8bit_RGB_to_standard_RGB(primaryColor_8bit);


  // Then convert from sRGB to linear RGB so we can use it to calculate luminance
  const primaryColor_RGB_linear = convert_standard_RGB_to_linear_RGB(primaryColor_sRGB);
  return primaryColor_RGB_linear;
}
function convert_8bit_RGB_to_standard_RGB(primaryColor_8bit) {
  return primaryColor_8bit / 255;
}
function convert_standard_RGB_to_linear_RGB(primaryColor_sRGB) {
  const primaryColor_linear = primaryColor_sRGB < 0.03928 ?
    primaryColor_sRGB/12.92 :
    Math.pow((primaryColor_sRGB + 0.055) / 1.055, 2.4);
  return primaryColor_linear;
}
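To sanity-check these helpers, here’s a quick usage example of my own (the numbers are approximate and not from the article): pure white text over a mid-gray pixel lands just under the 4.5 target.

const white = { r: 255, g: 255, b: 255 };
const gray = { r: 119, g: 119, b: 119 }; // #777777
console.log(getLuminance(white));                 // 1 — white is as bright as it gets
console.log(getLuminance(gray).toFixed(3));       // roughly 0.184
console.log(getContrast(white, gray).toFixed(2)); // roughly 4.48, just shy of the 4.5 goal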

Now that we can calculate contrast, we’ll need to look at our image from the previous step and loop through each pixel, comparing the contrast between that pixel’s color and the foreground text color. As we loop through the image’s pixels, we’ll keep track of the worst (lowest) contrast so far, and when we reach the end of the loop, we’ll know the worst-contrast color in the image.

function getWorstContrastColorInImage(textColor, imagePixelColors) {
  let worstContrastColorInImage;
  let worstContrast = Infinity; // This guarantees we won't start too low
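  // Each pixel spans four entries in the data array (r, g, b, alpha), hence the step of 4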
  for (let i = 0; i < imagePixelColors.data.length; i += 4) {
    let pixelColor = {
      r: imagePixelColors.data[i],
      g: imagePixelColors.data[i + 1],
      b: imagePixelColors.data[i + 2],
    };
    let contrast = getContrast(textColor, pixelColor);
    if(contrast < worstContrast) {
      worstContrast = contrast;
      worstContrastColorInImage = pixelColor;
    }
  }
  return worstContrastColorInImage;
}

Step 3: Prepare a color-mixing formula to test overlay opacity levels

Now that we know the worst-contrast color in our image, the next step is to establish how transparent the overlay should be and see how that changes the contrast with the text.

When I first implemented this, I used a separate canvas to mix colors and read the results. However, thanks to Ana Tudor’s article about transparency, I now know there’s a convenient formula to calculate the resulting color from mixing a base color with a transparent overlay.

For each color channel (red, green, and blue), we’d apply this formula to get the mixed color:

mixedColor = baseColor + (overlayColor - baseColor) * overlayOpacity

So, in code, that would look like this:

function mixColors(baseColor, overlayColor, overlayOpacity) {
  const mixedColor = {
    r: baseColor.r + (overlayColor.r - baseColor.r) * overlayOpacity,
    g: baseColor.g + (overlayColor.g - baseColor.g) * overlayOpacity,
    b: baseColor.b + (overlayColor.b - baseColor.b) * overlayOpacity,
  }
  return mixedColor;
}
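As a quick sanity check (my own example, not from the article): a pure white pixel under a black overlay at 50% opacity should come out mid-gray.

const whitePixel = { r: 255, g: 255, b: 255 };
const blackOverlay = { r: 0, g: 0, b: 0 };
console.log(mixColors(whitePixel, blackOverlay, 0.5)); // { r: 127.5, g: 127.5, b: 127.5 }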

Now that we’re able to mix colors, we can test the contrast when the overlay opacity value is applied.

function getTextContrastWithImagePlusOverlay({textColor, overlayColor, imagePixelColor, overlayOpacity}) {
  const colorOfImagePixelPlusOverlay = mixColors(imagePixelColor, overlayColor, overlayOpacity);
  const contrast = getContrast(textColor, colorOfImagePixelPlusOverlay);
  return contrast;
}

With that, we have all the tools we need to find the optimal overlay opacity!

Step 4: Find the overlay opacity that hits our contrast goal

We can test an overlay’s opacity and see how that affects the contrast between the text and image. We’re going to try a bunch of different opacity levels until we find the contrast that hits our mark where the text is 4.5 times lighter than the background. That may sound crazy, but don’t worry; we’re not going to guess randomly. We’ll use a binary search, which is a process that lets us quickly narrow down the possible set of answers until we get a precise result.

Here’s how a binary search works:

  • Guess in the middle.
  • If the guess is too high, we eliminate the top half of the answers. Too low? We eliminate the bottom half instead.
  • Guess in the middle of that new range.
  • Repeat this process until we get a value.

I just so happen to have a tool to show how this works:

In this case, we’re trying to guess an opacity value that’s between 0 and 1. So, we’ll guess in the middle, test whether the resulting contrast is too high or too low, eliminate half the options, and guess again. If we limit the binary search to eight guesses, we’ll get a precise answer in a snap.
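Here’s a toy sketch of that narrowing process (my own illustration, not the article’s code). Each guess halves the remaining range, so eight guesses shrink the 0–1 range down to a window of 1/256, or roughly 0.004:

let lowerBound = 0;
let upperBound = 1;
const pretendAnswer = 0.521; // pretend this is the opacity we're searching for

for (let guessNumber = 1; guessNumber <= 8; guessNumber++) {
  const guess = (lowerBound + upperBound) / 2;
  if (guess < pretendAnswer) {
    lowerBound = guess; // too low: throw away the bottom half
  } else {
    upperBound = guess; // too high: throw away the top half
  }
}

console.log(upperBound - lowerBound); // 0.00390625, i.e. 1/256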

Before we start searching, we’ll need a way to check if an overlay is even necessary in the first place. There’s no point optimizing an overlay we don’t even need!

function isOverlayNecessary(textColor, worstContrastColorInImage, desiredContrast) {
  const contrastWithoutOverlay = getContrast(textColor, worstContrastColorInImage);
  return contrastWithoutOverlay < desiredContrast;
}

Now we can use our binary search to look for the optimal overlay opacity:

function findOptimalOverlayOpacity(textColor, overlayColor, worstContrastColorInImage, desiredContrast) {
  // If the contrast is already fine, we don't need the overlay,
  // so we can skip the rest.
  const overlayIsNecessary = isOverlayNecessary(textColor, worstContrastColorInImage, desiredContrast);
  if (!overlayIsNecessary) {
    return 0;
  }


  const opacityGuessRange = {
    lowerBound: 0,
    midpoint: 0.5,
    upperBound: 1,
  };
  let numberOfGuesses = 0;
  const maxGuesses = 8;


  // If there's no solution, the opacity guesses will approach 1,
  // so we can hold onto this as an upper limit to check for the no-solution case.
  const opacityLimit = 0.99;


  // This loop repeatedly narrows down our guesses until we get a result
  while (numberOfGuesses < maxGuesses) {
    numberOfGuesses++;


    const currentGuess = opacityGuessRange.midpoint;
    const contrastOfGuess = getTextContrastWithImagePlusOverlay({
      textColor,
      overlayColor,
      imagePixelColor: worstContrastColorInImage,
      overlayOpacity: currentGuess,
    });


    const isGuessTooLow = contrastOfGuess < desiredContrast;
    const isGuessTooHigh = contrastOfGuess > desiredContrast;
    if (isGuessTooLow) {
      opacityGuessRange.lowerBound = currentGuess;
    }
    else if (isGuessTooHigh) {
      opacityGuessRange.upperBound = currentGuess;
    }


    const newMidpoint = ((opacityGuessRange.upperBound - opacityGuessRange.lowerBound) / 2) + opacityGuessRange.lowerBound;
    opacityGuessRange.midpoint = newMidpoint;
  }


  const optimalOpacity = opacityGuessRange.midpoint;
  const hasNoSolution = optimalOpacity > opacityLimit;


  if (hasNoSolution) {
    console.log('No solution'); // Handle the no-solution case however you'd like
    return opacityLimit;
  }
  return optimalOpacity;
}

With our experiment complete, we now know exactly how transparent our overlay needs to be to keep our text readable without hiding the background image too much.
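For completeness, here’s a rough sketch of how the pieces might be wired together and the result applied to an overlay element. This is my own glue code, not part of the original demo; the element IDs and color choices are made up for illustration.

function applyOptimalOverlay() {
  // Assumed markup: a hero image, a scratch canvas, and an overlay element
  const image = document.getElementById('hero-image');
  const canvas = document.getElementById('scratch-canvas');
  const overlay = document.getElementById('hero-overlay');

  const textColor = { r: 255, g: 255, b: 255 }; // white text
  const overlayColor = { r: 0, g: 0, b: 0 };    // black overlay
  const desiredContrast = 4.5;                  // WCAG AA for normal-size text

  // Step 1: read the image's pixels; Step 2: find the worst-contrast pixel
  const imagePixelColors = getImagePixelColorsUsingCanvas(image, canvas);
  const worstContrastColorInImage = getWorstContrastColorInImage(textColor, imagePixelColors);

  // Steps 3 and 4: search for the overlay opacity that hits the contrast goal
  const optimalOpacity = findOptimalOverlayOpacity(
    textColor,
    overlayColor,
    worstContrastColorInImage,
    desiredContrast
  );

  overlay.style.opacity = optimalOpacity;
}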

We did it!

Improvements and limitations

The methods we’ve covered only work if the text color and the overlay color have enough contrast to begin with. For example, if you were to choose a text color that’s the same as your overlay, there won’t be an optimal solution unless the image doesn’t need an overlay at all.

In addition, even if the contrast is mathematically acceptable, that doesn’t always guarantee it’ll look great. This is especially true for dark text with a light overlay and a busy background image. Various parts of the image may distract from the text, making it difficult to read even when the contrast is numerically fine. That’s why the popular recommendation is to use light text on a dark background.

We also haven’t taken where the pixels are located into account or how many there are of each color. One drawback of that is that a pixel in the corner could possibly exert too much influence on the result. The benefit, however, is that we don’t have to worry about how the image’s colors are distributed or where the text is because, as long as we’ve handled where the least amount of contrast is, we’re safe everywhere else.

I learned a few things along the way

There are some things I walked away with after this experiment, and I’d like to share them with you:

  • Getting specific about a goal really helps! We started with a vague goal of wanting readable text on an image, and we ended up with a specific contrast level we could strive for.
  • It’s so important to be clear about the terms. For example, standard RGB wasn’t what I expected. I learned that what I thought of as “regular” RGB (0 to 255) is formally called 8-bit RGB. Also, I thought the “L” in the equations I researched meant “lightness,” but it actually means “luminance,” which is not to be confused with “luminosity.” Clearing up terms helps how we code as well as how we discuss the end result.
  • Complex doesn’t mean unsolvable. Problems that sound hard can be broken into smaller, more manageable pieces.
  • When you walk the path, you spot the shortcuts. For the common case of white text on a black transparent overlay, you’ll never need an opacity over 0.54 to achieve WCAG AA-level readability.

In summary…

You now have a way to make your text readable on a background image without sacrificing too much of the image. If you’ve gotten this far, I hope I’ve been able to give you a general idea of how it all works.

I originally started this project because I saw (and made) too many website banners where the text was tough to read against a background image or the background image was overly obscured by the overlay. I wanted to do something about it, and I wanted to give others a way to do the same. I wrote this article in hopes that you’d come away with a better understanding of readability on the web. I hope you’ve learned some neat canvas tricks too.

If you’ve done something interesting with readability or canvas, I’d love to hear about it in the comments!


The post Nailing the Perfect Contrast Between Light Text and a Background Image appeared first on CSS-Tricks.


More Control Over CSS Borders With background-image

You can make a typical CSS border dashed or dotted. For example:

.box {
   border: 1px dashed black;
   border: 3px dotted red;
}

You don’t have all that much control over how big or long the dashes or gaps are. And you certainly can’t give the dashes slants, fading, or animation! You can do those things with some trickery though.

Amit Sheen built this really neat Dashed Border Generator:

The trick is using four backgrounds. The background property takes comma-separated values, so by setting four backgrounds (one along the top, right, bottom, and left) and sizing them to look like a border, it unlocks all this control. The color stops in each repeating gradient (10px and 20px in the code below) are what control the dash and gap lengths.

So like:

.box {
  background-image:
    repeating-linear-gradient(0deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px),
    repeating-linear-gradient(90deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px),
    repeating-linear-gradient(180deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px),
    repeating-linear-gradient(270deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px);
  background-size: 3px 100%, 100% 3px, 3px 100%, 100% 3px;
  background-position: 0 0, 0 0, 100% 0, 0 100%;
  background-repeat: no-repeat;
}

I like gumdrops.


The post More Control Over CSS Borders With background-image appeared first on CSS-Tricks.


New WordPress Plugins Disable Unsplash CDN

In light of the recent conversations about the Unsplash plugin’s CDN, several extensions have popped up this week for disabling it. By default, the plugin serves images from the CDN but saves copies to the WordPress media library in case the plugin is disabled or removed. The plugin does not currently have an option to change this.

Disable Unsplash CDN is the first to be published to the WordPress.org directory for changing the plugin’s default behavior. There are no options or settings – activating it turns it on. Xaver Birsak, a prolific WordPress plugin author, created it to help users who may experience slower page speed caused by the Unsplash CDN.

“I’ve followed the release of the official Unsplash plugin as well as the strange one-star rating from Matt Mullenweg, which I think is not appropriate,” Birsak said. “The problem he mentioned was, in addition to the Unsplash license, the fact that images are being served from Unsplash (Imgix) servers. I don’t think that this is totally unnecessary from Unsplash as a CDN can serve images much quicker in most cases. For some users this is maybe not the case.”

Birsak was referencing Matt Mullenweg’s recent one-star review of the Unsplash plugin, which drew the ire of many plugin developers whose ability to monetize their products can hinge on getting decent reviews. The review called the plugin “sketchy” and called into question the practice of making the CDN the default:

It’s unclear why they want you to use their CDN and make that the default, it’s probably to support their new advertising business model and get analytics for it. Running a CDN is expensive, and if you’re not paying for it then you are the product. I would not be surprised if Unsplash hotlinked images broke at some point in the future.

If you want a CDN, you should run one for your entire site, not just certain images from a single source — in fact having multiple CDNs running at the same time could slow down your site because of the additional DNS lookups.

Birsak said he checked the plugin and found a simple solution for bypassing the hotlinking, which only requires a few lines of code.

“Since it’s so easy, and others may find it useful, I released this plugin,” Birsak said. “Nowadays with GDPR and the invalidation of the Privacy Shield people are more likely to be concerned about sending data to third party services. So disabling the CDN should at least be an option.”

WordPress developer Tom Nowell also created a quick plugin to disable Unsplash’s CDN, which is now available on GitHub.

“I don’t have qualms with Unsplash themselves but I did miss having the option to choose for myself,” Nowell said regarding the plugin’s CDN default. “Rather than argue to add it, I spent a little time and built the plugin, it’s only small so didn’t take much time. As for the CDN, it’s nice to save bandwidth, though for local development it’s always faster to switch it off.”

Unsplash Plugin Will Not Add an Option to Disable the CDN – Its API Guidelines Require Apps to Use It

The plugins that disable Unsplash’s CDN could immediately become obsolete if Unsplash decided to build an option into the official plugin to do the same. The company has confirmed the team has no current plans to do so.

“The CDN is a feature that dynamically serves the right size and format of image, and includes performance optimizations not available via additional plugins like WordPress.com’s Jetpack or most CDNs,” Unsplash co-founder Luke Chesser said. “We do this to improve the performance of the image loading and allow Unsplash contributors to count the number of times their images have been seen.”

In addition to sharing this data with contributing photographers, Unsplash also needs it so that advertisers can continue getting value from the new Unsplash for Brands business model.

The total monthly cost in 2019 for the company’s image hosting with Imgix was $42,408, which means Unsplash spends north of $500k per year to serve optimized images via its CDN. Chesser said the cost of the CDN is “very low relative to the number of requests and traffic it can serve,” given how optimized and performant the image serving infrastructure is. Last year Unsplash sent petabytes of data through Imgix’s CDN for 250 million variations of the library’s source images.

“We treat brands as contributors as they also share images on Unsplash,” Chesser said. “We report downloads and views back to them. So yes, the view and download counts do matter to our business from a monetary perspective, but to be clear, if you take away brands, we would still have this requirement as it’s central to growing the library and encouraging more contributors.”

Providing stats to brands undoubtedly helps pay the bills and keeps the lights on, so it is no wonder the requirement to use the CDN will remain in the WordPress plugin. In fact, this requirement was built into Unsplash’s API guidelines in 2018 and applies to all applications accessing the collection:

All API uses must use the hotlinked image URLs returned by the API under the photo.urls properties. This applies to all uses of the image and not just search results.

In 2019, Unsplash received more traffic from its API partners than from the company’s own website and official apps. Any successful monetization strategy that hinges on advertising will need to deliver those stats, and requiring applications to use the CDN in order to use the API is one way to do that.

Matt Mullenweg recently asked what these API guidelines mean for existing WordPress plugins, like Instant Images, that serve Unsplash images without using the CDN. The plugin has more than 50,000 active installations.

“When we released the updated guidelines we applied them proactively to new apps and worked with developers on a case by case basis over a one year period to consider hotlinking and downloads for legacy apps,” Chesser said. “Instant Images was built before we made the update to the guidelines and so we exempted them long ago, along with a number of other legacy apps.”

Instant Images plugin developer Darren Cooney said he will not be adding an option to his plugin for turning on the CDN and declined to comment further on his reasons.

“I will say that I think the CDN should be opt-in and it should be more clear what happens on the Unsplash side when the CDN is in use,” Cooney said. “What is tracked, why it’s tracked and what benefit do added views provide the contributors.”

When asked whether Unsplash plans to update the plugin to deny API access to sites that have added a plugin to disable the CDN, Chesser said no. WordPress plugins weaponizing themselves against each other is not unheard of, although it is unusual and frowned upon.

“We don’t do things like that,” Chesser said. “I think anyone who knows our team and our community will know that we always try to take reasonable actions as we’re representing a lot of contributors and a large community. If a user wants to install a plugin to deactivate the CDN but still access the library, they can do that by all means, but we don’t want to build, promote, and support that functionality ourselves because it works against our community, our business, and our mission.”

The bottom line is Unsplash is a business, and a business needs to make money. Certainly a company doesn’t commission a WordPress plugin from a team of the caliber of XWP without hoping for a return on that kind of investment. The plugin’s setup process makes it effortless for users to connect to the Unsplash API, but there isn’t any transparency during this process regarding what data users are agreeing to send Unsplash. The plugin needs to be more forthcoming about the data the CDN collects on views and downloads. This would go a long way towards establishing more credibility with skeptics. Those who are wary of the requirement to use the CDN can use a plugin to disable it or install an alternative like Instant Images.

Gutenberg 8.7 Adds Minor Changes, Updates Block Pattern Designs, and Continues Full-Site Editing Work

On Wednesday, the Gutenberg team pushed out a release made up primarily of minor enhancements and bug fixes for the WordPress platform’s flagship project. Everyone is mostly gearing up for the WordPress 5.5 release, so we are not seeing any major features dropping at the moment. However, steady work continues on improving the Gutenberg plugin.

Gutenberg 8.7 contains over 30 bug fixes, of which nearly a third were accessibility-related changes. Around half of the new enhancements focused on updating block patterns.

Users can look forward to several minor enhancements that should improve the editor, such as the Buttons block getting a proper preview in the inserter. The monitoring solution behind auto-saving should also work more consistently with this update.

The biggest user-facing enhancement is the change in dealing with invalid blocks. The latest version of the plugin makes the attempt block recovery option the default. This change hides the resolve, convert to classic, and convert to HTML options under the sub-menu (ellipsis button). This is a nice touch and makes the most sense. Attempting to recover a block should generally be the first step when correcting invalid block output.

Block Pattern Updates

I can now proceed to eat my earlier words of frustration with block patterns. Or, perhaps I can praise myself in some small way for pushing the Gutenberg team to up their game. I was unhappy with the abysmal designs that were originally going to ship with WordPress 5.5. The team has taken what was looking to be one of the most disappointing first outings for a feature and turned it into something the project can be proud of.

It did not take much. A photo here. A touch of pizazz there. The Don Quixote images and text bring a cohesive theme to the patterns, breathing a touch of life into an otherwise desolate and barren feature.

The “large header with a heading” pattern dropped the blinding background gradient and replaced it with an image. The “quote” pattern now has a face instead of an impersonal icon. Even the “two images side by side” pattern fits in thematically.

If anything, I am not a fan of the long pattern names. “Large header with a heading and a button” and “three columns of text with buttons” do not exactly roll off the tongue. Nor do they make it easy to write about them. I do not wish the pain of typing them out on any support volunteers.

At least we are working with a somewhat decent set of patterns going forward, and that is enough to be thankful for at this point. I will now await the first theme author to truly impress me with custom patterns.

Experimental Features Update

Beta version of the site editor in the Gutenberg plugin.

Much of the work for this release centered on the plugin’s experimental features. The bulk of it went toward post-related blocks. At this point, these features are so experimental that even experienced developers outside of the inner Gutenberg circle have trouble following the progress. It is nice to see the continual movement in this area. However, from a user viewpoint, it is not even ready for a quick look. Enable at your own risk. Wait until the product is a bit more polished.

I typically enable Gutenberg’s experimental features once a month or so. I want to keep up with the progress and not feel out of the loop. Such was the case over the past couple of days as I tinkered a bit more with the full-site editing and demo templates features. I am unsure what I was hoping for. Mostly, I wanted some indication of a bright future — one that I fully expect to be realized at some point. I wanted to be wowed.

I understand why the wow factor is not there. The feature is far from ready. More than that, I know that, as a developer, you have a vision of the finished product in your head, and as the UI is in flux, others cannot see that vision. It is a step-by-step process that you simply have to continue working through.

I am still of the belief that full-site editing will not be close to a viable feature until 2021. Even with all hands on deck, four months is too small a window to make anything remotely competitive with existing solutions out there. When full-site editing does land in core WordPress, it needs to do so with a bang, not a whimper.

What does 100% mean in CSS?

When using percentage values in CSS like this…

.element {
  margin-top: 40%;
}

…what does that % value mean here? What is it a percentage of? There’ve been so many times when I’ll be using percentages and something weird happens. I typically shrug, change the value to something else and move on with my day.

But Amelia Wattenberger says no! in this remarkable deep dive into how percentages work in CSS and all the peculiar things we need to know about them. And as is par for the course at this point, any post by Amelia has a ton of wonderful demos that perfectly describe how the CSS in any given example works. And this post is no different.



The post What does 100% mean in CSS? appeared first on CSS-Tricks.


#279: Mini-Releases

Show Description

Dee, Stephen and Marie talk about CodePen’s evolving strategy of releasing mini features and updates rather than waiting for a big, mega update. They also talk about using feature flagging to test new features with specific users, and how to avoid building "the Homer car" with a minimum viable product mentality.

Time Jumps

  • 07:19 Update to collections
  • 14:04 Feature flagging
  • 20:21 Sponsor: WordPress
  • 22:30 Assets and feature flagging
  • 25:15 Minimum viable product

Sponsor: WordPress.com

Have an idea for a subscription business? Need to charge customers on a monthly or yearly basis for something, not just a one-off charge?

WordPress.com can do that for you. You connect your Stripe account (the best payment gateway out there!) and the rest is as easy as adding button blocks to your site. The UI your customers see is clean and clear, and the pricing is straightforward. The eCommerce plan pays no fees, the Business plan pays 2%, and the less expensive plans go up from there.


The post #279: Mini-Releases appeared first on CodePen Blog.

Cutting Step-Functions Costs on Enterprise-Scale Workflows

AWS Step Functions is a great service for orchestrating multi-step workflows with complex logic. It’s fast to implement, relatively easy to use and just works. The problem is its price.

For relatively low-scale projects, it’s a feasible solution. But for large-scale, enterprise-grade orchestration with hundreds of millions of processes, each with dozens of steps, it can be cost-prohibitive.

OpenAI GPT-3: How It Works and Why It Matters

You have probably heard about an innovative language model called GPT-3. The hype is so overwhelming that we decided to dig into how it works and what it means for the major tech players. Let’s explore whether the model deserves this much attention and what makes it so exceptional.

What Is GPT-3? Key Facts

GPT-3 is a text-generating neural network that was released in June 2020, with testing reportedly costing $14 million. Its creator is the AI research lab OpenAI, backed by names such as Sam Altman, Marc Benioff, Elon Musk, and Reid Hoffman.

The model is based on 175 billion parameters and is far more accurate than its predecessors. For example, GPT-2 had only 1.5 billion parameters, and Microsoft’s Turing-NLG 17 billion. Thus, GPT-3’s sheer capacity significantly surpasses the alternatives.

The MQTT Essentials are Back

For data exchange with constrained devices and server applications, MQTT is the top choice of large enterprises around the world. You are just as likely to find MQTT connecting things in your living room or your car as within a factory. MQTT's widespread adoption has led to its acceptance as an open OASIS and ISO standard. Today, MQTT is the de facto standard protocol for the Internet of Things (IoT) worldwide (see Google Trend Chart).

In 2015, HiveMQ wrote the MQTT Essentials blog series. A new article introducing the core concepts of MQTT was published every Monday - that was the beginning of MQTT Monday. By explaining the protocol's features and other essential information, we aimed to provide easy access to the subject. The idea was to make it easy to understand and implement MQTT quickly and successfully.

AI Can Replicate Any Human Voice: What Does That Mean for Podcasts?

Podcasting is moving towards a more informal genre of audio narrative. More emphasis is placed on the relationship between host and listener, fostered by a less crafted use of language.

That is to say, the host attempts to speak everybody’s language, making everything easier to understand and react to. For this reason, audio storytelling is trending upward in popularity. The numbers support this claim.

Integrating Codecov Test Coverage With Nebula Graph

A solid testing strategy is key to the successful adoption of agile development. Test coverage is a metric used to measure how much of the source code of a program is executed by running a set of tests. It helps developers identify the code in their application that was not tested.

Ideally, the tests for a piece of software should exercise all of its behaviors. However, this is rarely achieved. That is where test coverage comes into play.

How Do I SELECT These Two Tables Using A Single PDO Query

I have two MySQL tables, which I want to SELECT using a single PDO query and positional placeholders.

I've been going through similar questions here to find a solution, but none seems to match the issues I'm having.

The following code is the section of my script:

<?php
// query users table to retrieve its contents   
if (isset($_SESSION["user_id"]["0"]))
{               
    // select a particular user by user_id
    $user_id = isset($_POST["user_id"]) ? $_POST["user_id"] : '';

    $stmt = $pdo->prepare("SELECT * FROM users WHERE user_id=?",$_SESSION["user_id"]["0"]);
    $stmt->execute([$user_id]); 
    $user = $stmt->fetch(); # get user data

}

// query courses table to retrieve its contents
$cid = $_POST["cid"] ?? NULL;
if (is_null($cid))
{
    $stmt = $pdo->query("SELECT * FROM courses");
}
else
{
    $stmt = $pdo->prepare("SELECT * FROM courses WHERE cid = ?");
    $stmt->execute([$cid]);
}

$results = $stmt->fetchAll(PDO::FETCH_ASSOC);

echo '<option value="">'. "Select a course to proceed" .'</option>';

foreach ($results as $row) {
    echo '<option value=" '. $row["cid"] .' ">'. $row["c_name"] .'</option>';
}

Apart from echoing $row["cid"] (course ID) and $row["c_name"] (course name) from the courses table, I also want to echo the following from the same courses table: $row["code"], $row["duration"], $row["start"]

In the users table, I have the logged in user's "user_id", "firstname", "lastname", "username", "email", which I also want to echo in the above foreach loop. That means the user must be logged in.

Thank you in advance for your time and help.

Breaking Down Serverless Anti-Patterns

Serverless adoption rates have been climbing ever since the technology was brought into the spotlight with the release of AWS Lambda in 2014. That is because serverless makes an offer that cloud developers simply cannot resist, providing the following benefits:

  • Server management is abstracted away to the vendor
  • Pay-as-you-go model where you only pay for what you use
  • Automatically scalable and highly available

These benefits are achieved by the characteristics that define the technology. Serverless applications are stateless distributed systems that scale to the needs of the system, providing event-based and async models of development. This has worked in favor of the technology, resulting in a desirable solution for the cloud.