How to Speed Up Hosting with Static Server Cache

What the H double T P? If site loading slowdowns have you shaking your fist, it’s probably time to cache out.

When it comes to fast-loading WordPress sites, caching is crucial. A well-optimized page cache can dramatically improve page load speed for visitors and reduce the load on your server.

You have a slew of options when it comes to caching. You could go with a caching plugin from WordPress.org (there are many, so we compiled a “best of the bunch” short list), or a caching module installed on top of a web server.

Of course, the caching method you choose will produce greatly varying results in terms of quality and impact on your site performance. So what’s the best option?

In this article, we’re going to look more closely at what static server cache is, explain why we recommend FastCGI (with a peek into Static Server Cache FastCGI), and how implementing it can optimize your site speed and user experience.

Let’s get started.

All About That Cache

Rendering or fetching a page or post in WordPress requires queries to be sent back and forth to the database. Many of these posts and pages won’t be updated every day.

Rather than having the server interpret the site code, query the database, and return an HTML document to the visitor before finally loading the page every time, static caching saves the result of those first steps and serves that document to anyone else making the same request.

Static assets like CSS, JavaScript, and images are stored in browser caching, so the browser can retrieve them from its local cache. This is faster than downloading the resources again from the page’s server.

Caching in WordPress has multiple benefits, the top amongst them being: speed and performance boosts, hosting server load reductions, and more favorable rankings with search engines. As stated in prior optimization articles, this will be affected by other metrics as well, as there are many components that factor into speed and performance.

There are different caching modules, such as Varnish and FastCGI, along with different types of web servers, such as Nginx, Apache, and LiteSpeed. These modules and servers work in tandem to provide superior caching.

Caching modules and services
Some modules and servers in the caching arena.

Varnish is a web application accelerator, also known as a caching HTTP reverse proxy. One of its key features is its configuration language, VCL. Offering great flexibility, VCL enables you to write policies on the handling of incoming requests, such as what content you want to serve, where you want to get the content from, and how the request or response should be altered.

Nginx (pronounced Engine-X) started as a simple web server designed for maximum stability and performance, and has evolved into a high-performance powerhouse, with capabilities to handle reverse proxying with caching, load balancing, WebSockets, index files & auto-indexing, FastCGI support with caching, and more. As the fastest‑growing open source web server, with more than 450 million sites dependent on its technology, Nginx is incredibly stable.

We believe FastCGI, served by Nginx, is the cream of the crop. Read on for why.

Why FastCGI Rules

FastCGI―an enhanced version of its predecessor, CGI (Common Gateway Interface)―is a binary protocol for interfacing interactive programs with a web server. Its primary function is to reduce the overhead of interfacing between the web server and CGI programs, allowing a server to handle more web page requests per unit of time.

Instead of creating a new process for each request, FastCGI uses persistent processes to handle a series of requests. Using Nginx FastCGI, when a user visits the same WordPress page as they did prior, your website will not perform the same PHP and database requests again because the page is already cached and served by FastCGI. Thus, users will have a much faster server response time after the initial visit.
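To make this concrete, here is a minimal sketch of what an Nginx FastCGI cache configuration can look like. The zone name, cache path, and timings are illustrative examples, not WPMU DEV’s production settings:

```nginx
# Illustrative sketch only -- zone name, path, and timings are examples.
# Define a cache zone in the http context.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    # Skip the cache for logged-in users (simplified cookie check).
    set $skip_cache 0;
    if ($http_cookie ~* "wordpress_logged_in") { set $skip_cache 1; }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;

        fastcgi_cache WORDPRESS;       # serve repeat requests from the cache
        fastcgi_cache_valid 200 60m;   # cache successful responses for an hour
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
    }
}
```

With a configuration along these lines, only the first request for a page hits PHP-FPM; subsequent requests are answered directly from the cache zone.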

You’ll also have reduced PHP-FPM and MariaDB (MySQL) load, as well as lower CPU usage. And finally, your server will be able to handle more traffic with the same specs, enabling you to better meet more demanding needs.

For a visual on how these elements work together for superior caching, see the infographic below. (We’ll talk about object caching in a bit.)

Browser request path infographic
Serving a browser request, using FastCGI and Memcached Object Cache.

All WordPress pages can gain huge benefits when using FastCGI.

Caching Policies

There are two content types to consider when setting your cache: static and dynamic.

Static content is any file that is stored in a server and is the same every time it’s delivered to users. Dynamic content changes based on factors specific to the user such as time of visit, location, and device.

Social media pages are a good example of dynamic content. Twitter feeds look totally different for any given user, and users can interact with the content in order to change it (e.g., by liking, re-tweeting, or commenting).

E-commerce sites are commonly heavy on dynamic content as well. With WooCommerce, for example, certain pages like Home, Shop, and single product pages can be fully cached. However, Cart, Checkout, and My Account pages should be excluded. You do not want to fully page cache these dynamic pages, as the latest changes would not be seen.

Dynamic web pages are not stored as static HTML files. Generated server-side, they typically come from origin servers, not from a cache. Since dynamic content is unique to each user, it can’t be served to multiple users and is difficult to cache. However, with advancements in technology, caching dynamic content is possible.

One way to speed up dynamic web pages is to use dynamic compression. In this manner, the content still comes from the origin server instead of a cache, but the HTML files generated are made significantly smaller so that they can reach the client device more quickly.

Just as page caching works on HTML page output, object caching works on your database queries. Object caching is a fantastic solution for caching dynamic content.

Like the other caching components we discussed, there are several persistent object cache contenders in the field, the most well-known being Memcached, Redis, and APCu.
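Conceptually, a persistent object cache sits between your code and the database: a key is looked up in the store first, and the database is queried only on a miss. A minimal sketch in JavaScript, with a plain Map standing in for Memcached or Redis:

```javascript
// Conceptual sketch of object caching: expensive "database" lookups are
// memoized in a key-value store (a Map stands in for Memcached/Redis here).
const objectCache = new Map();
let dbQueries = 0;

function slowDbLookup(key) {
  dbQueries += 1;               // pretend this is an expensive SQL query
  return `value-for-${key}`;
}

function cachedGet(key) {
  if (!objectCache.has(key)) {
    objectCache.set(key, slowDbLookup(key));
  }
  return objectCache.get(key);  // subsequent calls skip the database
}

cachedGet('home_widgets');
cachedGet('home_widgets');
console.log(dbQueries); // 1 -- the second call was served from the cache
```

The same pattern is what a persistent object cache applies to WordPress database queries, except the store survives across requests and processes.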

As far as setting your caching policies goes, there isn’t a one-size-fits-all answer. What makes a static cache policy more or less effective is basing it on the type of content your site is comprised of.

For sites where user comments are steadily being added & approved (often by the minute), or frequent new content is the norm, you should structure your cache policy to clear more often, such as daily or even hourly.

For content that is regularly updated, just not that often, a 30-day cache policy is more than enough.

For static elements like logos, images, page fonts, JS, and core CSS stylesheets, you can extend the max age to one year.
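These tiers translate directly into Cache-Control max-age values. Here is a hedged sketch, where the tier names and exact values are illustrative choices rather than prescribed settings:

```javascript
// Illustrative sketch: map the cache-policy tiers described above to
// Cache-Control max-age values. Tier names and values are assumptions,
// not WPMU DEV defaults.
const DAY = 60 * 60 * 24;

function cacheControlFor(assetType) {
  switch (assetType) {
    case 'html':       // frequently updated pages/comments: clear daily
      return `public, max-age=${DAY}`;
    case 'media':      // regularly but infrequently updated content: ~30 days
      return `public, max-age=${30 * DAY}`;
    case 'static':     // logos, fonts, JS, core CSS: up to one year
      return `public, max-age=${365 * DAY}, immutable`;
    default:
      return 'no-cache'; // unknown types: always revalidate
  }
}

console.log(cacheControlFor('static')); // public, max-age=31536000, immutable
```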

Even Google says there’s no one best cache policy, but they do offer some tips to assist in devising your caching strategy, beyond the scope of your static assets. These are:

  • Use consistent URLs
  • Ensure that the server provides a validation token (ETag)
  • Identify which resources can be cached by intermediaries (like a CDN)
  • Determine the optimal lifetime for each resource
  • Determine the best cache hierarchy for your site
  • Minimize churn (for a particular part of a resource that is often updated [e.g., JS function or set of CSS styles], deliver that code as a separate file)

GTmetrix, a popular site speed performance tester, considers a resource cacheable if the following conditions are met:

  • It’s a font, image, media file, script, or stylesheet
  • It has a 200, 203, or 206 HTTP status code
  • It doesn’t have an explicit no-cache policy
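As a sketch, GTmetrix’s three conditions above can be expressed as a simple predicate. The resource shape ({ type, status, cacheControl }) is an assumed illustration, not GTmetrix’s actual data model:

```javascript
// Sketch of the cacheability conditions listed above. The resource
// object shape is an assumption for illustration.
const CACHEABLE_TYPES = new Set(['font', 'image', 'media', 'script', 'stylesheet']);
const CACHEABLE_STATUSES = new Set([200, 203, 206]);

function isCacheable(resource) {
  return (
    CACHEABLE_TYPES.has(resource.type) &&
    CACHEABLE_STATUSES.has(resource.status) &&
    !/\bno-cache\b/.test(resource.cacheControl || '')  // no explicit no-cache policy
  );
}

console.log(isCacheable({ type: 'image', status: 200, cacheControl: 'max-age=3600' })); // true
console.log(isCacheable({ type: 'script', status: 200, cacheControl: 'no-cache' }));    // false
```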

If you use a CDN like Cloudflare, you can set your cache policies through your account. Additionally, if you use our Hummingbird plugin, you can access these settings via the built-in Cloudflare integration.

As a WPMU DEV hosted member, you can access the primary Static Cache settings through The Hub to enable the extremely efficient static cache policy.

Static Server Cache
Turn on Static Server Cache from The Hub.

Ultimately, how you design your cache policy should be based on the type of content you serve, your web traffic, and any application-specific needs for that data.

There are a number of tools you can use directly within WordPress that make implementing a static cache policy quick and easy. We’ll look at those next.

Plugin Possibilities

A quick search for caching plugins on WordPress.org will return in excess of a thousand results. That’s a lot of options to wade through. We handpicked a few that we believe to be solid options.

Hummingbird

Hummingbird banner

Hummingbird is a one-of-a-kind, world-class caching suite, active on more than 1 million websites.

With Hummingbird’s WordPress speed optimization, your pages will load faster, your search rankings and PageSpeed scores will be higher, and your visitors will be happier. In fact, speeding up your site has never been easier.

Here is just a selection of HB’s standout features: full page, browser, RSS, & Gravatar caching; performance reports; minification and combination of JavaScript, CSS, and Google Font files; GZIP compression for blazing-fast HTML, JS, and stylesheet transfer; configs (set & save your preferred performance settings, and instantly upload them to any other site)―and more.

Hummingbird scans your site and provides one-click fixes to speed up WordPress in a flash. And it’s completely free. (Consider Smush as well; while not a static caching solution, it will compress and lazy load your images for marked speed improvements, and is also free.)

WP Rocket

WP Rocket banner

With more than 1.5 million users, WP Rocket is a favored caching plugin for WordPress. It’s a premium service, which you can only install directly from their website.

It’s easy for non-techie users to understand, while more knowledgeable developers can customize it to their liking. It’s compatible with many hosting providers, e-commerce platforms, themes, and other plugins.

WP Rocket automatically starts caching your pages, without any need to tweak the code or mess with settings. Pricing starts at $49, for 1 website/1 year.

WP Super Cache

WP Super Cache Banner

WP Super Cache is from the team behind WordPress.com and WooCommerce… Automattic. This plugin is free, and has an astounding 2 million+ active installations.

WP Super Cache serves cached files in 3 ways, ranked by speed. Expert (the fastest) bypasses PHP by using Apache mod_rewrite to serve static HTML files. Simple (mid-level speed, and the recommended way of using the plugin) uses PHP and does not require configuration of the .htaccess file, allowing you to keep portions of your page dynamic. WP-Caching mode (the slowest) mainly caches pages for known users, and is the most flexible method.

WP Super Cache comes with recommended settings, one of which is: If you’re not comfortable with editing PHP files, use Simple mode.

W3 Total Cache

W3 Total Cache banner

W3 Total Cache (W3TC) has over a million users, with an average rating of 4.4 out of 5 stars. It is a free plugin.

W3TC improves the SEO and user experience of your site by increasing website performance, and reducing load times, leveraging features like CDN integration and the latest best practices.

W3 Total Cache remedies numerous performance-reducing aspects of any website. It requires no theme modifications, modifications to your .htaccess (mod_rewrite rules) or programming compromises to get started. The options are many and setup is easy.

Some of W3TC’s features include: transparent CDN management with the Media Library, theme files, and WordPress itself; mobile support; SSL support; AMP support; minification & compression of pages/posts in memory; and minification of CSS, JavaScript, and HTML with granular control.

WP Fastest Cache

WP Fastest Cache banner

WP Fastest Cache is another million+ user caching plugin.

Setup is easy; no need to modify the .htaccess file (it’s done automatically). It’s got a more minimal set of features, including SSL support, CDN support, Cloudflare support, preload cache, cache timeout for specific pages, and the ability to enable/disable cache option for mobile devices. WP Fastest Cache is also compatible with WooCommerce.

WP Fastest Cache is free, but offers a premium-for-pay version, which adds additional features, such as: Widget Cache, Minify HTML Plus, Minify CSS Plus, Minify JS, Defer Javascript, Optimize Images, Convert WebP, Google Fonts Async, and Lazy Load.

LiteSpeed Cache

LiteSpeed Cache banner

LiteSpeed Cache for WordPress (LSCWP) is an all-in-one site acceleration plugin, with more than 2 million active installations.

It features an exclusive server-level cache and a collection of optimization features, such as: free QUIC.cloud CDN cache; lossless/lossy image optimization; minification of CSS, JavaScript, and HTML; asynchronous loading of CSS; deferred/delayed JS loading; and WebP image format support.

LSCWP does require use with a web server (LiteSpeed, Apache, NGINX, etc.). It supports WordPress Multisite, and is compatible with most popular plugins, including WooCommerce, bbPress, and Yoast SEO.

LiteSpeed Cache is free, but some of the premium online services provided through QUIC.cloud (CDN Service, Image Optimization, Critical CSS, Low-Quality Image Placeholder, etc.) require payment at certain usage levels.

Ok, now that we’ve covered some viable plugin options for caching, let’s look at what you can do with the cache settings in WPMU DEV’s hosting platform.

(Con)figuring it All Out

The best WordPress hosting providers―leading in sales and racking up rave reviews―have caching built in. Without it, they wouldn’t be competitive enough in today’s market of tech-savvy web developers.

If you’re looking for tools that are integrated on managed WordPress hosting environments, WPMU DEV Hosting, WPEngine, Flywheel, and Kinsta all have caching built in. Quite frankly, the systems used by hosting companies are quicker and more effective than WordPress plugins.

With WPMU DEV hosting, we use our own mega caching tool, Static Server Cache. This is page caching at the server level using FastCGI. Much faster than any PHP plugin, Static Server Cache greatly speeds up your site and allows for an average of 10 times more concurrent visitors.

Understanding and managing the settings for caching in WPMU DEV hosting is an easy, hassle-free experience. C’mon along and you’ll see what I mean.

From your WordPress admin page, go to WPMU DEV, Plugins, then click on The Hub icon.

Hub icon in WP dashboard
One-click access to the Hub from the WPMU DEV dashboard.

Next, from The Hub landing page, click on the site of your choice, under My Sites.

Listed sites in WPMU DEV's Hub
The Hub lists all of your hosted sites.

From here, click on either of the Hosting headers.

Hub hosting settings in WPMU DEV
Two options to get to the hosting tools page.

Next you’ll click on Tools, and scroll down to Static Server Cache. Click the Off button, then click Continue from the “Turn on Static Server Cache” popup window. (Note: Static Server Cache will be enabled by default for all new server/hosting accounts created with us.)

Turning Static Server Cache on is a breeze through The Hub.

You can also do a quick manual clear of the Static Server Cache from here. Simply click the Clear button, then click Continue from the “Are you sure?” popup window.

Clear static server cache
You’ll get a confirmation message indicating the cache clearing action is complete.

Static Server Cache is fully integrated with our Hummingbird performance plugin, so any action or process in Hummingbird that triggers clearing of the page cache will clear the Static Server Cache as well.

For example, if you click the Clear Cache button in the Hummingbird plugin and have Page Caching enabled in settings, the Static Server Cache will be cleared as well. Likewise, if you have options like Clear cache on interval or Clear full cache when post/page is updated enabled in Hummingbird, Static Server Cache will follow suit with those settings.

Cache setting in Hummingbird
Static Server Cache respects cache settings enabled in Hummingbird.

WooCommerce is also supported by default, hence any dynamic process in Woo is not cached. So if a user on your site adds items to their cart, that would not be cached by the Static Server Cache.

Below is an itemized list of what does or does not get cached when Static Server Cache is enabled. (Note: The max size of any item is 1 GB.)

Cached:

  • GET/HEAD requests (that’s your content; e.g., posts, pages, etc.)

NOT cached:

  • POST requests (e.g., forms or any other frontend submission)
  • Query strings
  • wp-admin, xmlrpc, wp-*.php, feed, index.php, sitemap URIs
  • If these cookies are found:
    comment_author, wordpress_, wp-postpass, wordpress_no_cache, wordpress_logged_in, woocommerce_items_in_cart
  • If these WooCommerce URIs are found:
    /store, /cart, /my-account, /checkout, /addons
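As a rough sketch, the rules above amount to a predicate over incoming requests. The request shape below is an assumption for illustration (and wp-*.php pattern matching is omitted for brevity); the real checks happen at the server level:

```javascript
// Sketch of the cached / not-cached rules listed above for Static
// Server Cache. A simplified request shape is assumed.
const EXCLUDED_URIS = ['/wp-admin', '/xmlrpc', '/feed', '/index.php', '/sitemap',
                       '/store', '/cart', '/my-account', '/checkout', '/addons'];
const EXCLUDED_COOKIES = ['comment_author', 'wordpress_', 'wp-postpass',
                          'wordpress_no_cache', 'wordpress_logged_in',
                          'woocommerce_items_in_cart'];

function isServedFromStaticCache(req) {
  if (!['GET', 'HEAD'].includes(req.method)) return false;  // POST: never cached
  if (req.uri.includes('?')) return false;                  // query strings bypass
  if (EXCLUDED_URIS.some((p) => req.uri.startsWith(p))) return false;
  if (EXCLUDED_COOKIES.some((c) =>
    (req.cookies || []).some((name) => name.startsWith(c)))) {
    return false;                                           // logged-in / cart cookies
  }
  return true;
}

console.log(isServedFromStaticCache({ method: 'GET', uri: '/blog/my-post', cookies: [] })); // true
console.log(isServedFromStaticCache({ method: 'GET', uri: '/cart', cookies: [] }));         // false
```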

If you want to check if any page is being cached by our Static Server Cache, pull up our detailed documentation for a walkthrough.

Please note that Static Server Cache is not enabled on staging sites.

Your Cache Cow

Caching is a reliable and worthy solution to improve your pages’ load speed, and thus your users’ experience. It’s powerful enough to allow refined subtleties for specific content types, but yielding enough to allow easy updates when your site content changes.

While many forms of caching are available, static caching is a method for converting the page generated by a user’s request into an HTML document to serve any subsequent requests to that same page.

Caching images and other static objects will certainly speed up page load time, but caching items such as full HTML documents is what can really amplify a website.

Apart from just basic page caching, make sure your caching solution combines and minifies JavaScript and CSS. Then add Object Caching to take advantage of serving dynamic content, without sacrificing load time or CPU usage.

If you’re looking for a full-featured caching solution, then WPMU DEV’s Hosting plan might be your answer. Pair our FastCGI, accessible via the streamlined Hub interface, with our caching queen, Hummingbird, for the speed round’s 1-2 knock-out punch. With our 30-day money-back guarantee, you’ve got nothing to lose!

If you’re a WPMU DEV paid plan user, you already enjoy the full functionality of this feature. Not a member yet? Try it for yourself, free for 7 days, and see why we have so many five-star reviews.

Whatever method you opt for, you’re well advised to put caching tools and policies in place, so response and loading time is never a hindrance to your visitors’ experience, or your conversion success rates. As someone once said… Cache is King!

Gutenberg 11.4 Overhauls Galleries, Adds Axial Padding for Buttons, and Lays Groundwork for Global Spacing

Another two weeks have flown by, and another Gutenberg plugin update is in the books. I always look forward to the latest release, awaiting what goodies our contributor community has produced. Sometimes I jump the gun and install a development version of the plugin to understand an upcoming feature, such as the new “block gap” style setting. Other times, I like to be surprised with enhancements like the new vertical/horizontal padding controls for the Button block.

Of course, there is always a good chance that a plugin update will throw off our theme’s editor styles in a new and exciting way. It feels like it has been a while since Gutenberg caught me off guard. At least it is only the post title this go-round. The WP Tavern theme is aging a bit anyway. It is due for an update (hint, hint).

Aside from block gap and axial padding, Gutenberg 11.4 turns the Gallery block into a container for nested Image blocks and adds duotone filter support to featured images. Other notable enhancements include an option for adding alt text to the Cover block and font-weight support to the Post Date, Post Terms, and Site Tagline blocks.

Axial Padding for Button Block

Adjusting the top and bottom (vertical) and left and right (horizontal) padding for an individual Button block in the WordPress editor.
Adjusting vertical and horizontal Button padding.

The Button block now supports changing the spacing along the X or Y axis when unlinking the padding. Previously, users could define the padding for all sides, but this could be tedious work. In most designs, top and bottom (vertical) padding should match, and left and right (horizontal) should get the same treatment.

This change should speed up padding customization in nearly all cases. However, it does introduce a regression. The consensus in the ticket was that a less cumbersome experience was worth losing some flexibility for edge cases.

Overall, this should be a win for most. I am already a happier user.

Gallery Block Uses Nested Images

New WordPress Gallery block in the editor with the toolbar link option open.
Adding a link to an Image block within a Gallery.

The Gallery block in Gutenberg 11.4 supports nesting individual Image blocks. It is currently hidden behind an experimental support flag and must be enabled via the Gutenberg > Experiments settings screen.

Effectively, the Gallery block is now a container. Inserting media still works the same way. The difference is that end-users have access to customize each Image block within a Gallery separately.

One use case for this feature is to allow users to add custom links around images. However, they now have access to more of the Image block’s options, such as custom theme styles.

Last week, I covered this feature in-depth because it is expected to land in WordPress 5.9, and theme authors should be ready for the transition. This is a breaking change in terms of HTML. Any themer with custom Gallery block styles should test the front-end and editor output before WordPress merges the changes.

Featured Image Duotone Support

Post Featured Image block shown in the WordPress editor with a custom duotone filter selected.
Applying a duotone filter to the Post Featured Image block.

While we are still missing an image size control, I will take any Post Featured Image block improvements I can get at this point. The block felt like a second-class citizen for so long that I am giddy about any enhancements.

Duotone filters, which landed in WordPress 5.8, allow end-users to add a CSS filter over images to control shadow and highlight colors. Themes can register custom ones, or users can modify them. The latest Gutenberg plugin update brings this feature to the Post Featured Image block.

This change allows theme authors to explore adding some visual flair since the Post Featured Image block is meant for templating or site editing. It still has a long way to go before it is ready for more advanced theme design, but the tools are getting us closer.

Global Block “Gap” for Themes

Developer tools view of the block gap (top margin) feature on the front end of a site.
Highlighting a Paragraph block and its preceding “gap” (top margin).

One custom feature that has become commonplace with themes that support the block editor is a “global spacing” style rule, which controls the whitespace between elements. Gutenberg contributors have noticed this trend and are now shipping a standard solution for it. Themes that use a theme.json file will automatically opt into support.

The gap feature adds a top margin to all adjacent sibling elements within block containers. This creates the space between each block using a standard method. Theme authors can control this via the styles.spacing.blockGap key in their theme.json files.
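As a minimal illustration, a theme.json opting into a global block gap might look like the following. The gap value here is an arbitrary example, and the version number may differ depending on your Gutenberg/WordPress version:

```json
{
  "version": 1,
  "styles": {
    "spacing": {
      "blockGap": "1.5rem"
    }
  }
}
```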

If you are a theme developer, this is one of the most crucial components of block theming from a pure design viewpoint. It is not something to avoid until it lands in WordPress. The time to test and provide feedback is now.

It is also merely a first step. There are pieces left to implement and problems to solve. There is currently an open pull request to bring this to editor block controls. There is also another ticket for zeroing out the margins for the first and last blocks, which would typically not need any. There are still some open questions on how to best deal with exceptions to the default block gap in the original ticket.

Regardless of its unfinished nature, it is an exciting development if you care anything at all about vertical rhythm in design systems.

Accessing Your Data With Netlify Functions and React

(This is a sponsored post.)

Static site generators are popular for their speed, security, and user experience. However, sometimes your application needs data that is not available when the site is built. React is a library for building user interfaces that helps you retrieve and store dynamic data in your client application. 

Fauna is a flexible, serverless database delivered as an API that completely eliminates operational overhead such as capacity planning, data replication, and scheduled maintenance. Fauna allows you to model your data as documents, making it a natural fit for web applications written with React. Although you can access Fauna directly via a JavaScript driver, this requires a custom implementation for each client that connects to your database. By placing your Fauna database behind an API, you can enable any authorized client to connect, regardless of the programming language.

Netlify Functions allow you to build scalable, dynamic applications by deploying server-side code that works as API endpoints. In this tutorial, you build a serverless application using React, Netlify Functions, and Fauna. You learn the basics of storing and retrieving your data with Fauna. You create and deploy Netlify Functions to access your data in Fauna securely. Finally, you deploy your React application to Netlify.

Getting started with Fauna

Fauna is a distributed, strongly consistent OLTP NoSQL serverless database that is ACID-compliant and offers a multi-model interface. Fauna also supports document, relational, graph, and temporal data sets from a single query. First, we will start by creating a database in the Fauna console by selecting the Database tab and clicking on the Create Database button.

Next, you will need to create a Collection. For this, you will need to select a database, and under the Collections tab, click on Create Collection.

Fauna uses a particular structure when it comes to persisting data. The design consists of attributes like the example below.

{
  "ref": Ref(Collection("avengers"), "299221087899615749"),
  "ts": 1623215668240000,
  "data": {
    "id": "db7bd11d-29c5-4877-b30d-dfc4dfb2b90e",
    "name": "Captain America",
    "power": "High Strength",
    "description": "Shield"
  }
}

Notice that Fauna keeps a ref attribute, a unique identifier used to identify a particular document. The ts attribute is a timestamp recording when the record was created, and the data attribute holds the document’s data.

Why creating an index is important

Next, let’s create two indexes for our avengers collection. This will be pretty valuable in the latter part of the project. You can create an index from the Index tab or from the Shell tab, which provides a console to execute scripts. Fauna supports two types of querying techniques: FQL (Fauna’s Query language) and GraphQL. FQL operates based on the schema of Fauna, which includes documents, collections, indexes, sets, and databases. 

Let’s create the indexes from the shell.

This command will create an index on the Collection, which will create an index by the id field inside the data object. This index will return a ref of the data object. Next, let’s create another index for the name attribute and name it avenger_by_name.
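The shell commands themselves aren’t reproduced here, but index creation in FQL generally follows this shape. This is a sketch: the avenger_by_id name is an assumption (the text only specifies an index on the id field), while avenger_by_name is named explicitly:

```
// Sketch of FQL index creation, run from the Fauna shell.
// "avenger_by_id" is an assumed name for the id index.
CreateIndex({
  name: "avenger_by_id",
  source: Collection("avengers"),
  terms: [{ field: ["data", "id"] }],
  unique: true
})

CreateIndex({
  name: "avenger_by_name",
  source: Collection("avengers"),
  terms: [{ field: ["data", "name"] }]
})
```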

Creating a server key

To create a server key, we need to navigate to the Security tab and click on the New Key button. This section will prompt you to create a key for a selected database and user role.

Getting started with Netlify functions and React

In this section, we’ll see how to create Netlify functions with React. We will be using create-react-app to create the React app.

npx create-react-app avengers-faunadb

After creating the react app, let’s install some dependencies, including Fauna and Netlify dependencies.

yarn add axios bootstrap node-sass uuid faunadb react-netlify-identity react-netlify-identity-widget

Now let’s create our first Netlify function. To do that, we first need to install the Netlify CLI globally.

npm install netlify-cli -g

Now that the CLI is installed, let’s create a .env file on our project root with the following fields.

FAUNADB_SERVER_SECRET= <FaunaDB secret key>
REACT_APP_NETLIFY= <Netlify app url>

Next, let’s see how we can start creating Netlify functions. For this, we will need to create a directory in our project root called functions and a file called netlify.toml, which will be responsible for maintaining the configuration of our Netlify project. This file defines our function directory, build directory, and commands to execute.

[build]
command = "npm run build"
functions = "functions/"
publish = "build"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200
  force = true

We will do some additional configuration in the Netlify configuration file, like the redirection section in this example. Notice that we are changing the default Netlify function path of /.netlify/functions/* to /api/*. This configuration is mainly to improve the look and feel of the API URL. So to trigger or call our function, we can use the path:

https://domain.com/api/getPokemons

 …instead of:

https://domain.com/.netlify/functions/getPokemons

Next, let’s create our Netlify function in the functions directory. But, first, let’s make a connection file for Fauna called util/connection.js, returning a Fauna connection object.

const faunadb = require('faunadb');
const q = faunadb.query

const clientQuery = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET,
});

module.exports = { clientQuery, q };

Next, let’s create a helper file with functions for building response objects and parsing request data, since we will need to do both on several occasions throughout the application. This file will be util/helper.js.

const responseObj = (statusCode, data) => {
  return {
    statusCode: statusCode,
    headers: {
      /* Required for CORS support to work */
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Headers": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    },
    body: JSON.stringify(data)
  };
};

const requestObj = (data) => {
  return JSON.parse(data);
}

module.exports = { responseObj: responseObj, requestObj: requestObj }

Notice that the above helper functions handle CORS headers and the stringifying and parsing of JSON data. Let’s create our first function, getAvengers, which will return all the data.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  try {
    let avengers = await clientQuery.query(
      q.Map(
        q.Paginate(q.Documents(q.Collection('avengers'))),
        q.Lambda(x => q.Get(x))
      )
    )
    return responseObj(200, avengers)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
};

In the above code example, you can see that we have used several FQL commands like Map, Paginate, and Lambda. The Map function is used to iterate through an array, and it takes two arguments: an Array and a Lambda. We passed Paginate as the first argument, which returns a page of results (an array of refs). Then we used a Lambda, an anonymous function quite similar to an anonymous arrow function in ES6.

Next, let’s create our function AddAvenger, responsible for creating/inserting data into the collection.

const { requestObj, responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  let data = requestObj(event.body);

  try {
    let avenger = await clientQuery.query(
      q.Create(
        q.Collection('avengers'),
        {
          data: {
            id: data.id,
            name: data.name,
            power: data.power,
            description: data.description
          }
        }
      )
    );

    return responseObj(200, avenger)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
 
};

To save a document in a particular collection, we have to pass the fields inside a data: {} object, as in the above code example. We then pass that object to the Create function, pointing it at the collection we want along with the data. So, let’s run our code and see how it works through the netlify dev command.
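As a sketch of what travels over the wire (the field values below are placeholders), the JSON body AddAvenger expects and recovers looks like this:

```javascript
// What the client will POST; field names come from the handler above.
const payload = {
  id: '1',
  name: 'Thor',
  power: 'Lightning',
  description: 'God of Thunder',
};
const body = JSON.stringify(payload); // event.body arrives as a string
const data = JSON.parse(body);        // requestObj(event.body) does exactly this
console.log(data.name); // Thor
```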

Let’s trigger the GetAvengers function in the browser via the URL http://localhost:8888/api/GetAvengers.

Next, let’s invoke the GetAvengerByName FQL function, which fetches the avenger object matching the name property via the avenger_by_name index, through a Netlify function. For that, let’s create a function called SearchAvenger.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  const {
    queryStringParameters: { name },
  } = event;

  try {
    let avenger = await clientQuery.query(
      q.Call(q.Function("GetAvengerByName"), [name])
    );
    return responseObj(200, avenger)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
};

Notice that the Call function takes two arguments: the first is the reference to the FQL function that we created, and the second is the data that we need to pass to that function.
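The destructuring at the top of the handler is also worth a second look. With a hypothetical event object it works like this:

```javascript
// A Netlify function receives query parameters on event.queryStringParameters,
// so /api/SearchAvenger?name=Hulk produces the object below.
const event = { queryStringParameters: { name: 'Hulk' } };
const {
  queryStringParameters: { name },
} = event;
console.log(name); // Hulk
```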

Calling the Netlify function through React

Now that several functions are available, let’s consume them through React. Since the functions are REST APIs, let’s consume them via Axios, and for state management, let’s use React’s Context API. Let’s start with the application context, called AppContext.js.

import { createContext, useReducer } from "react";
import AppReducer from "./AppReducer"

const initialState = {
    isEditing: false,
    avenger: { name: '', description: '', power: '' },
    avengers: [],
    user: null,
    isLoggedIn: false
};

export const AppContext = createContext(initialState);

export const AppContextProvider = ({ children }) => {
    const [state, dispatch] = useReducer(AppReducer, initialState);

    const login = (data) => { dispatch({ type: 'LOGIN', payload: data }) }
    const logout = (data) => { dispatch({ type: 'LOGOUT', payload: data }) }
    const getAvenger = (data) => { dispatch({ type: 'GET_AVENGER', payload: data }) }
    const updateAvenger = (data) => { dispatch({ type: 'UPDATE_AVENGER', payload: data }) }
    const clearAvenger = (data) => { dispatch({ type: 'CLEAR_AVENGER', payload: data }) }
    const selectAvenger = (data) => { dispatch({ type: 'SELECT_AVENGER', payload: data }) }
    const getAvengers = (data) => { dispatch({ type: 'GET_AVENGERS', payload: data }) }
    const createAvenger = (data) => { dispatch({ type: 'CREATE_AVENGER', payload: data }) }
    const deleteAvengers = (data) => { dispatch({ type: 'DELETE_AVENGER', payload: data }) }

    return <AppContext.Provider value={{
        ...state,
        login,
        logout,
        selectAvenger,
        updateAvenger,
        clearAvenger,
        getAvenger,
        getAvengers,
        createAvenger,
        deleteAvengers
    }}>{children}</AppContext.Provider>
}

export default AppContextProvider;

Let’s create the reducers for this context in the AppReducer.js file, which will consist of a reducer case for each operation in the application context.

const updateItem = (avengers, data) => {
    let avenger = avengers.find((avenger) => avenger.id === data.id);
    let updatedAvenger = { ...avenger, ...data };
    let avengerIndex = avengers.findIndex((avenger) => avenger.id === data.id);
    return [
        ...avengers.slice(0, avengerIndex),
        updatedAvenger,
        ...avengers.slice(avengerIndex + 1),
    ];
}

const deleteItem = (avengers, id) => {
    return avengers.filter((avenger) => avenger.data.id !== id)
}

const AppReducer = (state, action) => {
    switch (action.type) {
        case 'SELECT_AVENGER':
            return {
                ...state,
                isEditing: true,
                avenger: action.payload
            }
        case 'CLEAR_AVENGER':
            return {
                ...state,
                isEditing: false,
                avenger: { name: '', description: '', power: '' }
            }
        case 'UPDATE_AVENGER':
            return {
                ...state,
                isEditing: false,
                avengers: updateItem(state.avengers, action.payload)
            }
        case 'GET_AVENGER':
            return {
                ...state,
                avenger: action.payload.data
            }
        case 'GET_AVENGERS':
            return {
                ...state,
                avengers: Array.isArray(action.payload && action.payload.data) ? action.payload.data : [{ ...action.payload }]
            };
        case 'CREATE_AVENGER':
            return {
                ...state,
                avengers: [{ data: action.payload }, ...state.avengers]
            };
        case 'DELETE_AVENGER':
            return {
                ...state,
                avengers: deleteItem(state.avengers, action.payload)
            };
        case 'LOGIN':
            return {
                ...state,
                user: action.payload,
                isLoggedIn: true
            };
        case 'LOGOUT':
            return {
                ...state,
                user: null,
                isLoggedIn: false
            };
        default:
            return state
    }
}

export default AppReducer;
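To see the reducer pattern in action outside React, here is a minimal sketch exercising the LOGIN and LOGOUT branches (the logic is repeated inline so the example runs standalone):

```javascript
// A trimmed-down reducer with the same LOGIN/LOGOUT shape as AppReducer.
const reducer = (state, action) => {
  switch (action.type) {
    case 'LOGIN':
      return { ...state, user: action.payload, isLoggedIn: true };
    case 'LOGOUT':
      return { ...state, user: null, isLoggedIn: false };
    default:
      return state;
  }
};

let state = { user: null, isLoggedIn: false };
state = reducer(state, { type: 'LOGIN', payload: { name: 'nick' } });
console.log(state.isLoggedIn); // true
state = reducer(state, { type: 'LOGOUT' });
console.log(state.user); // null
```

This is exactly what useReducer does for us: each dispatch call runs the state and action through the reducer and stores the returned object as the next state.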

Since the application context is now available, we can fetch data from the Netlify functions that we have created and persist it in our application context. So let’s see how to call one of these functions.

const { avengers, getAvengers } = useContext(AppContext);

const GetAvengers = async () => {
  let { data } = await axios.get('/api/GetAvengers');
  getAvengers(data)
}

To get the data into the application context, we import the getAvengers function from our application context and pass it the data fetched by the GET call. This function calls the reducer function, which keeps the data in the context. To access the context, we can use the attribute called avengers. Next, let’s see how we can save data to the avengers collection.

const { createAvenger } = useContext(AppContext);

const CreateAvenger = async (e) => {
  e.preventDefault();
  let new_avenger = { id: uuid(), ...newAvenger }
  await axios.post('/api/AddAvenger', new_avenger);
  clear();
  createAvenger(new_avenger)
}

The above newAvenger object is the state object that keeps the form data. Notice that we pass a new id of type uuid to each of our documents, so it is stored with the data saved in Fauna. We then use the createAvenger function from the application context to save the data in our context as well. Similarly, we can invoke all the Netlify functions for CRUD operations like this via Axios.
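The id/spread step can be sketched on its own. In the snippet below, uuid is a stand-in for the uuid package and newAvenger holds hypothetical form values:

```javascript
// Stand-in for the uuid package's v4 generator (good enough for a sketch).
const uuid = () => Math.random().toString(36).slice(2);

const newAvenger = { name: 'Wanda', power: 'Magic', description: 'Chaos magic user' };
// Spread the form fields and attach a fresh id, exactly as CreateAvenger does.
const new_avenger = { id: uuid(), ...newAvenger };

console.log(Object.keys(new_avenger).length); // 4
console.log(new_avenger.name);                // Wanda
```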

How to deploy the application to Netlify

Now that we have a working application, we can deploy this app to Netlify. There are several ways that we can deploy this application:

  1. Connecting and deploying the application through GitHub
  2. Deploying the application through the Netlify CLI

Using the CLI will prompt you to enter specific details and selections, and the CLI will handle the rest. But in this example, we will deploy the application through GitHub. First, log in to the Netlify dashboard and click the New site from Git button. Next, it will prompt you to select the repo you need to deploy and the configuration for your site, like the build command, build folder, etc.

How to authenticate and authorize functions by Netlify Identity

Netlify Identity provides a full suite of authentication functionality for your application, which will help us manage authenticated users throughout the application. Netlify Identity can be integrated into the application easily, without any other third-party services or libraries. To enable Netlify Identity, we need to log in to our Netlify dashboard, open the Identity tab under our deployed site, and enable the Identity feature.

Enabling Identity will provide a link to your Netlify Identity instance. You will have to copy that URL and add it to the .env file of your application as REACT_APP_NETLIFY. Next, we need to add Netlify Identity to our React application through the react-netlify-identity-widget package and the Netlify functions. But first, let’s use the REACT_APP_NETLIFY property for the Identity context provider component in the index.js file.

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import "react-netlify-identity-widget/styles.css"
import 'bootstrap/dist/css/bootstrap.css';
import App from './App';
import { IdentityContextProvider } from "react-netlify-identity-widget"
const url = process.env.REACT_APP_NETLIFY;

ReactDOM.render(
  <IdentityContextProvider url={url}>
    <App />
  </IdentityContextProvider>,
  document.getElementById('root')
);

This component is the navigation bar that we use in this application. Since this component sits on top of all the other components, it is the ideal place to handle authentication. The react-netlify-identity-widget package will add another component that handles user sign in and sign up.

Next, let’s use the Identity in our Netlify functions. Identity will introduce some minor modifications to our functions, like the below function GetAvenger.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
    if (context.clientContext.user) {
        const {
            queryStringParameters: { id },
        } = event;
        try {
            const avenger = await clientQuery.query(
                q.Get(
                    q.Match(q.Index('avenger_by_id'), id)
                )
            );
            return responseObj(200, avenger)
        } catch (error) {
            console.log(error)
            return responseObj(500, error);
        }
    } else {
        return responseObj(401, 'Unauthorized');
    }
};

The context of each request will include a property called clientContext, which contains the authenticated user’s details. In the above example, we use a simple if condition to check the user context.

To get the clientContext in each of our requests, we need to pass the user’s token in the Authorization header.

const { user } = useIdentityContext();

const GetAvenger = async (id) => {
  let { data } = await axios.get('/api/GetAvenger/?id=' + id, user && {
    headers: {
      Authorization: `Bearer ${user.token.access_token}`
    }
  });
  getAvenger(data)
}

This user token becomes available in the user context once you log in to the application through the Netlify Identity widget.

As you can see, Netlify functions and Fauna look to be a promising duo for building serverless applications. You can follow this GitHub repo for the complete code and refer to this URL for the working demo.

Conclusion

In conclusion, Fauna and Netlify look to be a promising duo for building serverless applications. Netlify also provides the flexibility to extend its functionality through plugins to enhance the experience. The pay-as-you-go pricing plan is ideal for developers getting started with Fauna. Fauna is extremely fast, and it auto-scales so that developers have more time than ever to focus on their development. Fauna can handle complex database operations like those you would find in relational, document, graph, and temporal databases. Fauna drivers support all the major languages and platforms, such as Android, C#, Go, Java, JavaScript, Python, Ruby, Scala, and Swift. With all these excellent features, Fauna looks to be one of the best serverless databases. For more information, go through the Fauna documentation.


The post Accessing Your Data With Netlify Functions and React appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

10 Marketing Automation Tools for WordPress Businesses

Marketing automation plugins can save you valuable time when managing your marketing strategy. Relying on the best tools for your WordPress site can provide many benefits; unfortunately, website owners often fail to use them. Either they end up choosing irrelevant marketing tools, or they are not aware of which tools can be most useful. Why […]

The post 10 Marketing Automation Tools for WordPress Businesses appeared first on WPExplorer.

What I Wish I Knew About CSS When Starting Out As A Front-Ender

Nathan Hardy shares when things “clicked”:

Reflecting back on this time, I think there are a few key concepts that were vital to things finally all making sense and fitting together. These were:

• The Box Model (e.g. box-sizing, height, width, margin, padding)
• Layout (e.g. display)
• Document Flow and Positioning (e.g. position, top, left, etc.)

I called this my ah-ha moment a few years back:

For me, it was a couple of concepts that felt like an unlocking of real power. It was a combination of these concepts that were my “Ah-ha!” moment.

• Every page element is a box.
• I can control the size and position of those boxes.
• I can give those boxes background images.

People shared their own as well. And again.

It’s really the deal.

Direct Link to ArticlePermalink


The post What I Wish I Knew About CSS When Starting Out As A Front-Ender appeared first on CSS-Tricks.

Paychex Review

Paychex offers an all-in-one, highly customizable online payroll service and HR solution. 

It includes payroll, HR, business insurance, attendance tracking, payment processing, and employee benefits. 

Users can find tax credits and ensure regulatory compliance with help from Paychex experts and guides. 

Paychex has solutions for solopreneurs all the way to large corporations in multiple industries. 

Paychex Pros and Cons

Pros

  • Fits All Business Sizes
  • Tax Credits Feature
  • Benefits Packages
  • Payment Processing Solutions
  • Business Insurance Options

Cons

  • Tedious Sign-Up Process
  • Complex Platform Structure
  • No Free Trial
Compare The Online Payroll Providers
We reviewed dozens of online payroll providers and narrowed them down to the best options.
See Top Picks

How Paychex Compares to Top Online Payroll Services

Paychex is the best online payroll service for complex payroll cycles. It is a highly flexible platform suited for businesses of all sizes and various industries. It offers services for everything from employee payroll to business insurance and is a great platform for implementing employee benefits and tax credits. 

Paychex isn’t the only payroll service that could fit your needs, however. If you want a smoother tax season, QuickBooks Payroll is an excellent option. Consider purchasing Gusto if you’re looking for a more straightforward platform. Check out our helpful guide for the complete list of our top online payroll services.

Paychex Management & Employee Self-Service

A Paychex small business survey concluded that 73% of employees expected self-service options for tasks like requesting time off, entering bank information, and managing retirement accounts. Self-service options make tasks more efficient and cut out the middleman when entering data. 

Paychex provides self-service options when employees log into People View. Here, they can complete tasks like updating tax withholding and entering personal information. Employees can submit pay adjustments, request time off, view time-off balances and work schedules, and approve timecards. 

Self-service options are also available using Paychex’s mobile app. Workers can clock in, schedule meal breaks, and transfer hours from the app. They can schedule and request shift times and leave notes for managers as well.

Paychex Multi-State Tax Filings

Because remote work has become more mainstream, it can be beneficial to have a payroll service that considers multi-state tax filings. This is because employees may be working in a different location than a company’s headquarters. It’s always important to ensure your employees are filing state taxes correctly, and Paychex can help with this. 

Paychex helps users with payroll tax payment, payroll tax calculations, ensuring taxes are filed with the right agency, and eliminating inaccurate or late payment penalties. You should reach out to Paychex directly to address your specific needs for more information on multi-state employee taxes. 

Paychex also helps companies find tax credits, and users only pay after credits are found. For example, some states offer tax credits as high as $3,000 per employee to spur economic growth. Paychex helps companies find these location-based credits and reduce their income liability. 

In addition to state tax credits, Paychex also helps with credits for domestic production deduction, grant screening, training incentives, cost segregation, and research and development (R&D). 

Users can search for work opportunity tax credits (WOTC) by hiring from groups like unemployed veterans and food stamp recipients. To qualify for this credit, wages and hours must be tracked, and applicant pre-screening forms must be filled out. 

Its experts also track regulation and legislation updates, manage seven-year audit trail documentation, create an annual tax return report, and log employee wages, hours, and location changes. 

Paychex Specialized Payroll Solutions

Some payroll services focus on a niche group of businesses, while others have options for all types. Paychex is a comprehensive and flexible platform that can be customized for startups to large enterprises. When applying for Paychex, you have the option to complete a quiz to assess needs and must contact a Paychex expert for quotes. 

Paychex partners with MyCorporation to let self-employed users create filings for federal tax IDs, corporate formations, business licenses, and state IDs. Users can make 401(k) contributions as an employee and employer. It also helps them with W-2 and W-4 filing and direct deposits. 

Small businesses with one to nine employees get three pay entry options, multiple payment options, flexible payroll processing, 24/7 customer service, plus recruiting, benefits, and HR services. Paychex also provides users with articles and tutorials on the mobile app and payroll tax filing and payments. 

Businesses with 10 to 49 employees receive specific Paychex midsize business solutions. These include more efficient payroll processing, compliance services, customized payroll solutions, extra HR support, analytics and reporting insights, and additional benefit packages. 

If your company has over 1,000 employees, Paychex also has solutions for you. These include large business solution integrations, more flexible HR technology, scalable HR support, access to compliance experts, and single employee records. 

Paychex Pay Cycle Frequency

Some online payroll services restrict how often users can run payroll. Fortunately, Paychex offers a highly customizable service where users can design their payroll structure precisely as needed. It suits businesses of any size and industry, so it understands that pay cycle frequency and other options must be highly customizable. 

Users can design complex payroll cycles and variable schedules with this platform. In addition, users can customize earnings, deductions, payment options, self-service options, and more. 

Paychex All-in-One Benefits

As far as employee benefits are concerned, Paychex has got you covered. It has an entire service dedicated to employee benefits. Options include individual health insurance, 401(k) plans, group health insurance, dental and vision insurance, premium-only plans (POP), flexible spending accounts (FSA), and health savings accounts (HSA). 

It lets users offer comprehensive financial wellness programs to employees. These plans have been proven to reduce employee stress, improve productivity, and increase loyalty. These financial wellness programs help employees budget for insurance, taxes, savings, investing, debt and credit management, and household expenses. 

Paychex is an experienced retirement plan provider and has the U.S.’s top 401(k) recordkeeper expertise. Users have access to benefits administration, employee benefit self-service, full-service management, and flexible plan options. Its mobile app lets employees stay updated on things like health, retirement, and Section 125 plans. 

Paychex users also gain access to its professional employer organization (PEO) services. These are designed to help companies afford more expensive all-in-one benefits. Users will get professional service, detailed invoicing, and access to safety representatives for help with OSHA regulatory compliance. 

Paychex PEO services can help with things like workers’ compensation insurance, health benefit accounts, state unemployment insurance (SUI) management, employee assistance programs (EAP), health benefit accounts, group health insurance, employment practices liability insurance (EPLI), employee benefits management, employee performance administration, and more.

Paychex Payroll Services

Paychex’s premier focus is on its payroll services. It has several flexible plans suited for businesses of any size. It has won awards for best user experience and vendor satisfaction, and it has 24/7 customer support. 

When you sign up with Paychex, its team helps you set up your year-to-date payroll data. Customer support options include a knowledge base of how-to guides and articles, in-app assistance options, live chat access, and regular support from experienced professionals. 

It provides users with a simple platform and mobile app with three pay entry options. It also improves accuracy by alerting users when something goes wrong. For example, users will receive an error notification when an employee’s online pay stub doesn’t match their record. Employees can access W-2s, check stubs, and other payroll information. 

Paychex’s payment options include pay-on-demand, paper checks, pay cards, and direct deposit. 

Paychex Payroll Services and other Paychex solutions are part of Paychex Flex, and they can be paired together depending on each business’s needs. Paychex Go is another option meant for smaller businesses.  

Paychex Time Clocks

Paychex Time Clocks is a cloud-based system that’s fully integrated with Paychex Flex. It can be used to track time and attendance and imports the data into the other HR solutions. It also helps with remote workforce scheduling and management, compliance and safety support, and extended leave and time-off tracking. 

To make sure employees are paid accurately, you can automate time and attendance tracking. This platform makes administrative tasks easier to perform and streamlines scheduling requests. It also keeps operations running smoothly by helping users navigate fluctuating employee dynamics. 

Users can clock in through its InVision Iris Time Clock, web punch, mobile app, or tablet kiosk app with an option to set up facial recognition. Users can see when employees are working, taking a break, or taking time off. Users can manage teams, approve timesheets, and review requests from the manager dashboard. 

Paychex HR Services

Paychex HR Services is designed to simplify employee, administrator, and HR team tasks. It helps users comply, administer, and plan every aspect of HR management. It offers users dedicated HR professionals averaging eight years of experience for advice. These experts help guide users through the process and understand important regulatory requirements. 

The Paychex experts create a service action plan for your specific needs and show users how to document it and take action. They help with HR duties like performance management, leave of absence policies, and remote work arrangements. The experts also stay up to date on all regulations and inform users of new changes. 

The platform helps onboard new hires, discover critical details about candidates, find qualified candidates, conduct interview best practices, and create accurate job descriptions. Users can access reminders, quizzes, videos, and courses from the learning management system to enhance employee retention strategies. 

It comes with an HR calendar that keeps users updated on employee work anniversaries and birthdays. To show employees the monetary value of their benefits, users can create a comprehensive compensation summary report. Users can also chat with experts to ensure they comply with federal, state, and local regulations.

Paychex Employee Benefits Services

Paychex Employee Benefits Services allows users to provide employees with retirement services, health insurance, health savings accounts, tax savings plans, dental and vision, and financial wellness programs. 

Users can work with a professional employer organization (PEO) to help them afford expensive benefits. Users can also speak with a Paychex professional to assess their needs and create an action plan. 

Paychex Employee Benefits Services provides efficient recordkeeping and benefits management. Employees can check Section 125 plans, health plans, retirement plans, and other benefits directly from their desktop or mobile device. 

Paychex Business Insurance Solutions

Paychex Business Insurance Solutions helps provide comprehensive coverage for employees, properties, and businesses. This solution helps with compliance, billing, deductions, and selecting the best provider. Paychex partners with top carriers, has over 20 years of experience, and has been rated a top 25 insurance agency for the past five years. 

Users can bundle business protection with a business owner’s policy (BOP) to cover business risks. Cyber liability insurance helps protect against the growing number of hackers and cybercriminals. Entire fleets or single vehicles can be covered from work-related accidents with commercial auto insurance. 

For protection against wrongful termination, workplace violence, discrimination, and harassment claims, employers can use Paychex to purchase an employment practices liability insurance (EPLI) policy. Contractor, employee, manager, and owner mistakes can be protected with professional liability insurance. 

To extend the limits of other policies, employers can purchase a commercial umbrella plan. Damaged equipment, inventory, and buildings can be protected with commercial property insurance. Paychex users can also find general liability insurance policies and workers’ compensation insurance. 

Paychex Payment Processing Solutions

Paychex has the latest technology for payment processing needs. In addition to payment processing, this platform enhances customer engagement with automated emails, upgrades decision-making with inventory and sales data, and offers more efficient employee management. 

Users can completely customize their payment process with online payment services, debit and credit card processing, point of sale (POS) solutions, eCheck processing, automatic clearing house (ACH) payment, transparent pricing, PCI compliance notifications, interchange optimization, and real-time, same-day, or next-day funding abilities. 

Paychex’s credit card processing includes payment from credit cards like MasterCard, Visa, and Discover, along with gift cards and EMV chip cards. It also gives users the ability to add credit card surcharges. International purchasers can be accommodated using dynamic currency conversion and receive receipts showing exchange rate details and other statistics. 

Users can take payments with PC, iPad, iPhone, and Android devices. Users can accommodate additional customers, get paid faster, and save processing costs through electronic processing. Paychex integrates with QuickBooks, and it has PCI DSS compliance reminders to protect users from liability and fraud issues. 

Compare The Online Payroll Providers
We reviewed dozens of online payroll providers and narrowed them down to the best options.
See Top Picks

Summary

If you’re looking for an online service that can do much more than manage payroll, Paychex is for you. You can use its platform for employee benefits, tax credits, HR management, business insurance, and much more. This flexible platform fits businesses of any type and size, and it is the best option for complex payroll cycles. 

Microservices and Workflow Engines

Automation of business processes enables organizations to better meet critical factors for success across industries today — from increased team agility and faster time-to-market to lower costs and improved customer service. However, many are hindered by the existing dependencies between their software, systems, and teams, making process automation and business efficiency all the more challenging to achieve and maintain.

This Refcard introduces a way to address such challenges using a microservice architectural style and a workflow engine for orchestration. You will learn key techniques in areas such as microservice design, communication, and state management, as well as first steps to take when getting started with business process automation.

331: Next.js + Apollo + Server Side Rendering (SSR)

Our goal here was to explore server-side rendering (SSR) in Next.js using data from Apollo GraphQL, for faster client-rendering and SEO benefits.

There are a variety of approaches, but Shaw has set his sights on a very developer-ergonomic version here where you can leave queries on individual components and mark them as SSR-or-not.

There are two “official” approaches:

  1. Apollo’s documentation
  2. Next.js’ example

These are sorta-kinda-OK, except…

  • They have to be configured per-page
  • They are mostly limited to queries at the top page level
  • You’d likely need to duplicate queries with slightly differently handling from client to server
  • May or may not populate the client cache, so the client may not be able to run live queries without re-querying the same data. For example, say you have data that you want to change on the client side (pagination, fetching new results). If it’s fetched and rendered server-side, you have to fetch it again client-side, or send the cache over from the server so the queries on the client side are ready to update with new data.

These limitations are workable in some situations, but we want to avoid duplicating code and also have server-side rendered queries that aren’t top-level on the page.

A probably-better approach is to use the getDataFromTree method, which walks down the tree and executes the queries to fill up the ApolloClient cache. We got this from a two-year-old Gist from Tylerian showing a way to integrate getDataFromTree into Next.js. Tylerian’s Gist had some extra complications that might’ve been due to older Next.js limitations, but the overall process was sound:

  1. Create a shared ApolloClient instance
  2. Render the page using getDataFromTree() to fill the cache with data
  3. Render the page again with that data using Next’s Document.getInitialProps()
  4. Extract the cache to deliver with the page and hydrate the client-side Apollo cache
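Step 4 can be sketched with plain objects standing in for ApolloClient’s cache.extract() and cache.restore() (no real Apollo calls here, just the serialize/hydrate round trip):

```javascript
// Server side: getDataFromTree has filled the cache; extract and serialize it.
const serverCache = { 'Avenger:1': { __typename: 'Avenger', name: 'Thor' } };
const serialized = JSON.stringify(serverCache); // embedded in the HTML payload

// Client side: parse and restore, so queries resolve from cache without refetching.
const restored = JSON.parse(serialized);
console.log(restored['Avenger:1'].name); // Thor
```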

The benefits:

  • Quick to set up
  • No duplicate queries
  • Nothing special per page to handle server-side rendering or client-side queries.
  • Cache hydration to keep client active without re-querying data
  • Easy to enable / disable server-side rendering for individual queries

Here’s a repo with Shaw’s findings.

The post 331: Next.js + Apollo + Server Side Rendering (SSR) appeared first on CodePen Blog.

introduction of kasmar45, me

Hello,

My name is George, but many call me kasmar45. I am 78 years old and long retired. I have worked in the IT business for 50 years. I started out wiring and operating tab equipment, i.e. (403, 407, 604, sorter, etc., etc.). Then I got a job operating a 305 RAMAC. That was fun. At my next job I operated a 1401 computer. It had 4k of usable memory. How could you write a program with only 4k of memory, you might ask? Well, back in those days code was written in line: it started at the top and finished at the bottom. Our devs programmed in assembler. When a program was started it read the first 4k into memory from the ram file, and when it came to the end of that it read in the next 4k, etc., etc. That's a simplified explanation. I ended my career doing app support at Pick Systems. I code today as a hobby.

Co-Founding Kubernetes with Microsoft CVP Brendan Burns

The creation of Kubernetes was never a foregone conclusion, it required years of hard work and evangelism. Microsoft Corporate Vice President Brendan Burns is here to bring you that story.

In the final episode of our two-part series, Brendan returns to the Dev Interrupted podcast to tell us the founding story of Kubernetes. He discusses the battle to make Kubernetes open source, why a strong community was vital to early success and learning when to let others take the lead, avoiding a "dictator for life" approach to development.

how to define a var = ???.text in vb.net

I want to make several of my textbox.text properties equal to variables. I tried:

 Dim outcome As New txtResult.Text
    outcome = "xxxxx"

this produced an error. :(

first, is this possible and if it is, what am I doing wrong???

Complete Tutorial On Creating Deterministic Finite Automata in Swift

This is a complete tutorial on creating a Deterministic Finite Automata (DFA) in Swift.

Go through the entire post to get familiar with all aspects of DFAs, and then implement your own DFA in Swift from scratch by following the steps given here. This will give you real-world experience in creating such a thing.

What is a DFA?

A DFA is a mathematical model of computation used to decide which input strings a finite state machine accepts.

An input string is accepted if processing it, symbol by symbol, ends in an accepting state; the path taken through the machine is called an accepting path. The set of all accepting paths through the graph defines the language accepted by the automaton, and also serves as its computational description. A deterministic finite automaton is a theoretical device consisting of a finite number of states, and transitions between those states.

Transitions are labeled with single characters, or with character classes covering a range (for example, ranges of ASCII codes), so the machine can classify real-life strings like aaa, ab, abc, or yyy. You can find all of this information in this excellent tutorial created by Tal Sus, which is a gem of a tutorial from the perspective of sheer simplicity and ease. You'll understand that once you check it out.

What's so special about DFAs? The big idea behind using DFAs is that they are deterministic: if some state accepts an input string (or sequence of symbols), then any other run starting at the same state will also accept the very same input string.

Moreover, a DFA can check whether it accepts a given word in time linear in the word's length, i.e., much faster than the exponential worst case you can hit when matching with backtracking regular-expression engines.

The idea of a DFA helps you in that it is often easier to design an algorithm for processing input strings around one than it is around regular expressions.

Representation of a DFA

A DFA can be represented by a directed graph where each node (or vertex) represents a state, and a directed edge connects one node to another (possibly itself) for each transition; edges carry labels, typically the input character (or character class) that triggers the transition.

The "initial" state is denoted as Q0 and the "final" state as F. Every non-final node's outEdges list must be finite. Here is what the above theory means in actual code:

A DFA is a data type that you can use to perform pattern matching on strings. In order to process strings over an alphabet, each state needs one outgoing transition for every character in that alphabet. For example, if I want to match only lowercase letters "a" through "z", each state of my automaton would have 26 outgoing transitions, and the machine as a whole would have an initial state and one or more final states.

Every time we consume a character and enter a new state, we can keep track of which characters were consumed, what the next character in our string is, and whether or not we have finished processing all characters in the string. Once every letter has been matched, we return true or false depending on whether we ended in a final state.

DFA in Swift

The implementation of a DFA in Swift is fairly simple. Here's how you can do it:
State
First, create a struct called "State" and make it conform to Equatable, since we want to be able to compare our states. Give it a variable called "name" of type String, and a variable called "isFinal" of type Bool.

[Image: Creating the State struct for the DFA]

The name variable is obviously the main point of comparison for our states, but the Bool value is important so that our machine knows whether or not it has reached a final state.
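
Since the tutorial's Swift code lives in the screenshot, here is a quick sketch of the same struct, in Python purely as a stand-in for the Swift original:

```python
from dataclasses import dataclass

# A frozen dataclass gives us value equality out of the box,
# playing the role of Swift's Equatable conformance.
@dataclass(frozen=True)
class State:
    name: str       # the main point of comparison between states
    is_final: bool  # tells the machine whether it has reached a final state
```

With this, `State("s1", False) == State("s1", False)` holds, just as two Swift State values with equal fields compare equal.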

Transition
Next, we create a struct called "Transition", where we define a transition as something that has a "fromState", an input character, and a "toState". We set the fields of this struct accordingly.

A sequence of transitions in a DFA from one state to another is called a run. A run has exactly one start state and exactly one end state, the end state being where its last transition leaves off. The start state of a run is called the starting state.

[Image: Defining the variables for our Transition struct]

A run is accepting if it ends in a final state after consuming the whole input. If an accepting run exists for a given string, the DFA is said to accept that string; otherwise it rejects it. The set of all accepted strings is the language defined by the DFA.
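
The Transition struct can be sketched the same way (again in Python as a stand-in for the Swift shown in the screenshot; State is repeated so the snippet is self-contained):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    name: str
    is_final: bool

# A transition: leave from_state on seeing `input`, arrive in to_state.
@dataclass(frozen=True)
class Transition:
    from_state: State
    input: str
    to_state: State
```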

Creating the States of Our Machine

For pedagogical purposes, we will hardcode the states of our DFA, but there are also implementations where these states are read from the user through I/O.

We will name our states S1-S4 as seen in the code block below. It is important that the second parameter of the constructor, isFinal, be set correctly for each state.

[Image: Hardcoding each state of the machine from S1-S4]

We made clear which state(s) are terminal states by setting the isFinal variable to true.

DFA Dynamics: Defining the Brain of Our Transitions

Finally, the last element of our DFA is what we will call our Transition Brain where the logic behind our transitions is actually defined.

[Image: The TransitionBrain struct is the most important element]

To do this, let's create a struct called TransitionBrain, and inside it, a static variable called dynamics. The dynamics variable shall be a list of Transitions wherein each fromState, input, and toState is hardcoded.
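
The tutorial's actual transition table lives in its screenshot, so here is a hypothetical dynamics list in the same shape (Python stand-in; this toy machine accepts strings of a's and b's that end in b):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    name: str
    is_final: bool

@dataclass(frozen=True)
class Transition:
    from_state: State
    input: str
    to_state: State

s1, s2 = State("s1", False), State("s2", True)

# Hypothetical rules, hardcoded the way the tutorial hardcodes dynamics:
DYNAMICS = [
    Transition(s1, "a", s1),  # an 'a' keeps us in the non-final state
    Transition(s1, "b", s2),  # a 'b' moves us to the final state
    Transition(s2, "a", s1),
    Transition(s2, "b", s2),
]
```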

Creating the Logic Behind the Automata

Now that we have the necessary elements for our deterministic finite machine, it is time to actually design the automata including the functions that will control the behavior of our machine.

The first thing we have to do is create our initial state through a variable called currentState of type State. For simplicity, we will simply use s1 as our currentState. The functions of our automata are as follows:
Getting the Correct State

Our first function is a mutating function called getState. As the name suggests, it gets the correct state that we are looking for, based on a parameter called testInput of type String.

[Image: Creating our Automata struct, setting the initial currentState, and implementing getState]

Through -> State, we make it clear that our function must return a State. We then use an enhanced for loop to search for an element in the dynamics list of our TransitionBrain struct. It is worth noting that each element will be of type Transition.

The logic behind this search is simple: if the fromState of the current element of the for loop is equal to our currentState, and the input value of the current element is equal to the current testInput value, we make a transition by setting the currentState to the toState of that element, and then break the loop.

This function will then return the final value of the currentState variable.

Checking User Input for Validity
The final and most exciting part of the automata logic is where we actually check the user's input String to see whether or not it is valid under the transition rules of our DFA. We shall call this function checkString and have it return true if the input is acceptable, and false if it is not. Naturally, this will be a mutating function that takes in a String parameter.

[Image: Checking if the user's input is valid under the transition logic of our DFA]

The first thing we have to do is create a placeholder finalState variable; we will set it to s1, but it does not actually matter what you set it to at this point.

Next, we run an enhanced for loop over the user's input and check every character, updating the value of our finalState variable with our previously defined getState method, passing each character (typecast as String for syntactic validity) as the parameter value.

Once we arrive at the last finalState value, we output the name of the finalState with the line print("Final state is \(finalState.name)").

Screen_Shot_2021-08-24_at_8_28_38_AM.png
A test run for our DFA

If the isFinal variable of the last state is true, then the String is acceptable under the transition rules of our automata.
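
Putting all the pieces together, here is the whole machine as a runnable sketch, in Python rather than the tutorial's Swift; the transition rules are hypothetical (the originals are only shown in screenshots), but the structure mirrors getState and checkString:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    name: str
    is_final: bool

@dataclass(frozen=True)
class Transition:
    from_state: State
    input: str
    to_state: State

s1, s2 = State("s1", False), State("s2", True)

# Hypothetical transition table: accept strings of a's and b's ending in 'b'.
DYNAMICS = [
    Transition(s1, "a", s1), Transition(s1, "b", s2),
    Transition(s2, "a", s1), Transition(s2, "b", s2),
]

class Automata:
    def __init__(self, initial=s1):
        self.current_state = initial  # start in the initial state

    def get_state(self, test_input: str) -> State:
        # Search the table for a rule matching (current state, input)
        for t in DYNAMICS:
            if t.from_state == self.current_state and t.input == test_input:
                self.current_state = t.to_state  # make the transition
                break
        return self.current_state

    def check_string(self, s: str) -> bool:
        final_state = self.current_state  # placeholder, overwritten below
        for ch in s:
            final_state = self.get_state(ch)
        print(f"Final state is {final_state.name}")
        return final_state.is_final      # accepted iff we ended in a final state

print(Automata().check_string("aab"))    # ends in 'b', so accepted
```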

In-Memory Database Architecture: Ten Years of Experience Summarized (Part 1)

An in-memory database is not a new concept. However, it is associated too closely with terms like "cache" and "non-persistent". In this article, I want to challenge these ideas. In-memory solutions have much wider use cases and offer higher reliability than it would seem at first glance.

I want to talk about the architectural principles of in-memory databases, and how to take the best of the "in-memory world"— incredible performance — without losing the benefits of disk-based relational systems. First of all, how to ensure data safety.

WordPress Translation Day 2021 Kicks Off September 1, Expanded to Month-Long Event

WordPress Translation Day 2021

WordPress Translation Day kicked off today, and the event has been expanded to run from September 1-30 this year. WordPress Polyglots contributors from all over the world will be hosting mini-events throughout the month where they will be translating themes, plugins, apps, meta, docs, and other important projects. Events will also focus on recruitment, virtual training for new PTEs/GTEs, and general process improvements.

In the past, the event has been a boon for the Polyglots contributor base. In 2020, the teams hosted more than 20 local events, resulting in more than 175,000 strings translated. French, Spanish, and Japanese-language locales logged the most translated strings during the first week last year.

There are currently seven mini-events scheduled for 2021 in different locales throughout the month of September. From Portugal to Tehran to Jakarta, contributors are planning sprints to translate popular plugins and WordPress core. In Bengaluru, one of the largest IT hubs in India, organizers will be onboarding new translators, including high school students who are interested in contributing to WordPress.

WordPress Translation Day will also include some global events during the second half of the month. These events will be hosted in English and contributors of all experience levels are welcome to attend:

  • Friday, September 17th (time to be announced): Introduction to WordPress Translation Day
  • Sunday, September 19th at 12:00 UTC: Panel on Polyglots Tools
  • Tuesday, September 21st at 11:00 UTC: Panel on Open Source Translation Communities
  • Thursday, September 30th (time to be announced): Closing Party – Why do you translate?

Attendees will be able to participate live as the events are broadcasted on YouTube. The final session will recap the month’s events, highlight success stories, and will also include some activities and games.

This year translators are extending their volunteer efforts to some newer projects, including working with the Training Team to translate video workshops hosted on learn.wordpress.org, translating Community team resources, translating the Block Patterns project, and translating the Pattern Directory itself.

The global events combined with the local mini-events are essentially like a virtual Polyglots WordCamp held over the span of a month. Attendees will have opportunities to connect with other translators and team leaders and share their experiences contributing to WordPress. If you are new and thinking of joining the Polyglots team, check out the new Polyglots Training course on Learn WordPress.org to find out more about contributing.

Some Typography Links VII


The post Some Typography Links VII appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

DataGridView current cell after new data input

Hi guys, I have one question about Visual Basic and DataGridView behaviour.
So, after I fill in all the text boxes and click the button (save data), I would like the user to get focus on that new data. Right now, each time the button is clicked, the DataGridView goes to the first row. The new data can be in many positions, depending on user input.

This is my code, and I wonder if there is a way to catch the TextBox1 value and, after the button completes its tasks, find the appropriate row based on the TextBox1 value.

Private Sub Button2_Click(sender As Object, e As EventArgs) Handles Button2.Click
        '     Dim inn As Integer  = Convert.ToInt32(TextBox1.Text)
        Dim test As Boolean = False
        For Each row In DataGridView1.Rows
            If TextBox1.Text = Trim(row.Cells("NewEntry").Value.ToString) Then
                test = True
                MsgBox("Double entry, try with a new one")
                TextBox1.Focus()
                TextBox1.BackColor = Color.Red
            End If
        Next
        If test = False Then
            If TextBox1.Text = "" Or String.IsNullOrEmpty(TextBox1.Text) Or TextBox2.Text = "" Or String.IsNullOrEmpty(TextBox2.Text) Or TextBox3.Text = "" Or String.IsNullOrEmpty(TextBox3.Text) Then
                MessageBox.Show("Check for all data")
            Else
                con.Open()
                Dim cmd As SqlCommand = New SqlCommand("Insertjmj", con)
                cmd.Parameters.AddWithValue("@NewEntry", Trim(TextBox1.Text))
                cmd.Parameters.AddWithValue("@NewMark", Trim(TextBox2.Text))
                cmd.Parameters.AddWithValue("@NewDescription", Trim(TextBox3.Text))
                cmd.Connection = con
                cmd.CommandType = CommandType.StoredProcedure
                Try
                    Dim rdr As SqlDataReader = cmd.ExecuteReader
                    Dim dt As New DataTable
                    dt.Load(rdr)
                    rdr.Close()
                    DataGridView1.DataSource = dt
                    con.Close()
                    '              DataGridView1.CurrentCell = DataGridView1.Rows(inn).Cells(0)

                Catch ex As SqlException
                    MessageBox.Show(ex.Message.ToString(), "Error Message")
                End Try
            End If
        End If
    End Sub

A Guide to Web Scraping in Python using BeautifulSoup

Today we’ll discuss how to use the BeautifulSoup library to extract content from an HTML page. After extraction, we’ll convert it into a Python list or dictionary!

What Is Web Scraping, and Why Do I Need It?

The simple answer is this: not every website has an API to fetch content. You might want to get recipes from your favorite cooking website or photos from a travel blog. Without an API, extracting the HTML, or scraping, might be the only way to get that content. I’m going to show you how to do this in Python.
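
As a minimal taste of what follows, here is a hedged sketch: given some HTML (inlined here instead of fetched from a real site), BeautifulSoup turns it into a plain Python list of dictionaries. The markup and CSS selector are invented for the example; a real site's structure will differ.

```python
# Requires: pip install beautifulsoup4
from bs4 import BeautifulSoup

# Stand-in for a page we would normally download; the structure is made up.
html = """
<ul class="recipes">
  <li><a href="/pancakes">Pancakes</a></li>
  <li><a href="/ramen">Ramen</a></li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")

# Turn the scraped markup into a plain Python list of dictionaries.
recipes = [{"title": a.get_text(), "url": a["href"]}
           for a in soup.select("ul.recipes a")]
print(recipes)
```

On a live site you would fetch `html` over HTTP first; everything after that line stays the same.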