Forget SLAs – Today, It’s All About Service-Level Objectives (SLOs)

What’s Wrong With SLAs?

“A service-level agreement (SLA) is a commitment between a service provider and a client,” according to Wikipedia. “Particular aspects of the service – quality, availability, responsibilities – are agreed between the service provider and the service user.”

SLAs are thus contracts between two parties – contracts that can drive the wrong business outcomes.

DevOps vs. Agile: What’s Right for Your Team?

There are two primary software development methodologies these days: Agile and DevOps. Research from Gartner has found that 70% of IT teams use DevOps. Has DevOps totally usurped its predecessor, then? Or can developers use both methodologies for rapid time to market and problem-free deployment? It helps to first examine the primary difference between the two.

The goal of Agile is to make sure the dev team and its processes have the agility to make quick changes, whereas DevOps prizes end-to-end business solutions, enabling collaboration between the development and operations teams to increase the speed of work in both stages. Both approaches speed up the deployment pipeline, but in different ways. So this brings up the question: which is better?

Why should I choose Quarkus over Spring for my microservices?

As interest grows in microservices and containers, Java developers have struggled to make applications smaller and faster to meet today’s demands and requirements. In the modern computing environment, applications must respond to requests quickly and efficiently, be suitable for running in volatile environments such as virtual machines or containers, and support rapid development. Because of this, Java, and popular Java runtimes, are sometimes considered inferior to runtimes in other languages such as Node.js and Go.

The Java language and the Java platform have been very successful over the years, preserving Java as the predominant language in current use. Analysts have estimated the global application server market size at $15.84 billion in 2020, with expectations of growing at a rate of 13.2% from 2021 to 2028. Additionally, tens of millions of Java developers worldwide work for organizations that run their businesses using Java. Faced with today’s challenges, these organizations need to adapt and adopt new ways of building and deploying applications. Forgoing Java for other application stacks isn’t a choice for many organizations. It would involve re-training their development staff and re-implementing processes to release and monitor applications in production.

Exploring Spring Cloud Configuration Server in Microservices

In this article, we will learn how to use the Spring Cloud Configuration server to centralize the management of our microservices' configuration. A growing number of microservices typically comes with a growing number of configuration files that need to be managed and updated.

With the Spring Cloud Configuration server, we can place the configuration files for all our microservices in a central configuration repository that will make it much easier to handle them. Our microservices will be updated to retrieve their configuration from the configuration server at startup. 
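As a sketch of what this looks like from the client side, a microservice can point at the central server with a few lines of configuration (the service name, port, and file names here are illustrative, assuming the Spring Boot 2.4+ config-import style):

```yaml
# application.yml of a hypothetical product-service microservice
spring:
  application:
    name: product-service                           # resolves product-service.yml in the central repo
  config:
    import: "configserver:http://localhost:8888"    # contact the config server at startup
```

At startup, the service asks the config server for the properties matching its application name, so a configuration change only needs to be made once, in the central repository.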

A Guide to Spot-Readiness in Kubernetes

Using spot nodes in your Kubernetes cluster can be intimidating due to their lack of availability guarantees. Kubecost’s Spot-Readiness Checklist is here to give you more confidence. The checklist investigates your public cloud Kubernetes workloads to identify candidates for safe scheduling on spot instance types, which can save you up to 90% on cloud resource costs. Kubecost automatically performs a series of checks on your AWS (EKS), Azure (AKS), and Google Cloud (GKE) clusters using your workload configurations to determine readiness. It then estimates the savings impact from making the transition to Spot.

What Are Spot Instances and Why Use Them?

Spot instances are spare compute instances that public cloud providers offer to customers at a deeply discounted rate—potentially up to 90% cheaper. However, spot nodes vary in their availability and pricing, depending on the supply and demand of compute resources at a given time and fluctuate per instance size, instance family, and deployment location. Once the demand for a particular instance type increases, spot instances may receive an interruption notice and spin down within a short shutdown window (usually a few minutes). For this reason, spot resources are best used for fault-tolerant and flexible applications like Spark/Hadoop nodes, microservices that can be replicated, etc.

Advanced PostgreSQL Features: A Guide

Despite the rise in popularity of NoSQL databases, relational databases continue to be preferred for many applications. This is because of their strong querying abilities and their robustness.

Relational databases excel at running complex queries and reporting on data whose structure does not change frequently. Open-source relational databases like MySQL and PostgreSQL provide a cost-effective, stable, production-grade alternative to licensed counterparts like Oracle, MSSQL, etc.

Node.js vs. PHP: Understanding Server-Side Development

Creating the right toolkit of languages, frameworks, libraries, and databases is the first step towards executing a successful project. While understanding each tool’s pros and cons is a logical route for this comparative analysis, frontend and backend development teams can benefit more if they understand the context that dictates the ideal tools.

Server-side development is essential for engineering a functional and fluid web-based product — a website, an app, or a native web app. Client-side development deals with the user experience and how information is laid out; server-side development is responsible for the efficient organization of, and access to, the data stored in databases and web applications, delivered to the client through HTML templates and static resources like CSS and JavaScript, or as raw data.

State Management In Next.js

This article is intended as a primer for managing complex state in a Next.js app. Unfortunately, the framework is way too versatile for us to cover all possible use cases in this article. But these strategies should fit the vast majority of apps out there with little to no adjustment. If you believe there is a relevant pattern to be considered, I look forward to seeing you in the comments section!

React Core APIs For Data

There is only one way a React application carries data: by passing it down from parent components to child components. Regardless of how an app manages its data, it must pass data from top to bottom.

As an application grows in complexity and its rendering tree branches out, multiple layers surface. Sometimes data must be passed down through several layers of parent components until it finally reaches the component it is intended for; this is called Prop Drilling.
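To make the pattern concrete, here is a minimal sketch with plain functions standing in for components (all names hypothetical): user is consumed only by the innermost component, yet every intermediate layer must accept and forward it.

```javascript
// Plain functions standing in for React components, to isolate the pattern.
// Only Avatar uses `user`; Header and Page merely thread the prop through.
const Avatar = ({ user }) => `<img alt="${user}">`;
const Header = ({ user }) => Avatar({ user }); // forwards the prop, nothing else
const Page = ({ user }) => Header({ user });   // forwards the prop, nothing else

console.log(Page({ user: 'ada' })); // the prop was "drilled" through two layers
```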

As one could anticipate, Prop Drilling can become a cumbersome and error-prone pattern as apps grow. To circumvent this issue, React provides the Context API, which adds 3 elements to this equation:

  1. Context
    The data which is carried forward from Provider to Consumer.
  2. Context Provider
    The component from which the data originates.
  3. Context Consumer
    The component which will use the data received.

The Provider is invariably an ancestor of the consumer component, though likely not a direct ancestor. The API then skips all other links in the chain and hands the data (context) directly to the consumer. This is the entirety of the Context API: passing data. It has as much to do with the data itself as the post office has to do with your mail.

In a vanilla React app, data may be managed by 2 other APIs: useState and useReducer. It would be beyond the scope of this article to suggest when to use one or the other, so let's keep it simple:

  • useState
    Simple data structure and simple conditions.
  • useReducer
    Complex data structures and/or intertwined conditions.
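To show why useReducer suits intertwined conditions: the reducer you pass it is just a pure (state, action) => newState function, so all transitions live in one place. A minimal sketch — the cart shape and action names below are made up for illustration:

```javascript
// A hypothetical cart reducer: every state transition lives in one pure function.
function cartReducer(state, action) {
  switch (action.type) {
    case 'add':
      return { ...state, items: [...state.items, action.item] };
    case 'remove':
      return { ...state, items: state.items.filter((i) => i !== action.item) };
    case 'clear':
      return { ...state, items: [] };
    default:
      return state;
  }
}

// Inside a component: const [state, dispatch] = useReducer(cartReducer, { items: [] })
// For a simple flag, useState is enough: const [open, setOpen] = useState(false)
```

Because the reducer is pure, it can be unit-tested without rendering anything.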

The fact that Prop Drilling and data management in React are wrongly conflated as one pattern is partially owed to an inherent flaw in the legacy Context API. When a component's re-render was blocked by shouldComponentUpdate, it would prevent the context from continuing down to its target. This issue steered developers toward third-party libraries when all they needed was to avoid prop drilling.

For a comparison of the most useful libraries, I recommend this post about React State Management.

Next.js is a React framework, so any of the solutions described for React apps can be applied to a Next.js app. Some will require extra effort to set up, and some will have their tradeoffs redistributed based on Next.js' own functionality. But everything is 100% usable; you can pick your poison freely.

For the majority of common use cases, the combination of Context and State/Reducer is enough. We will assume that combination in this article and not dive too deep into the intricacies of complex states. We will, however, take into consideration that most Jamstack apps rely on external data, and that is also state.

Propagating Local State Through The App

A Next.js app has 2 crucial components for handling all pages and views in our application:

  • _document.{t,j}sx
    This component is used to define the static mark-up. This file is rendered on the server and is not re-rendered on the client. Use it to affect the <html> and <body> tags and other metadata. If you don’t want to customize these things, including this file is optional.
  • _app.{t,j}sx
    This one is used to define the logic that should spread throughout the app. Anything that should be present on every single view of the app belongs here. Use it for <Provider>s, global definitions, application settings, and so on.

To be more explicit, Context providers are applied here, for example:

// _app.jsx or _app.tsx

import { AppStateProvider } from './my-context'

export default function MyApp({ Component, pageProps }) {
  return (
    <AppStateProvider>
      <Component {...pageProps} />
    </AppStateProvider>
  )
}

Every time a new route is visited, our pages can tap into the AppStateContext and have their definitions passed down as props. When our app is simple enough that it only needs one definition to be spread out like this, the previous pattern should be enough. For example:

export default function ConsumerPage() {
  const { state } = useAppStateContext()
  return (
    <p>
      {state} is here! 🎉
    </p>
  )
}

You can check a real-world implementation of this Context API pattern in our demo repository.

If you have multiple pieces of state defined in a single context, you may start running into performance issues. This is because when React sees a state update, it performs all of the necessary re-renders to the DOM. If that state is shared across many components (as it is when using the Context API), it can cause unnecessary re-renders, which we don’t want. Be discerning with the state variables you share across components!

One way to stay organized with your state-sharing is to create multiple pieces of Context (and thus different Context Providers) to hold different pieces of state. For example, you might share authentication in one Context, internationalization preferences in another, and website theme in another.

Next.js also provides a <Layout> pattern that you can use for something like this, abstracting all of this logic out of the _app file and keeping it clean and readable.

// _app.jsx or _app.tsx
import { DefaultLayout } from './layout'

export default function MyApp({ Component, pageProps }) {
  const getLayout = Component.getLayout || (
    page => <DefaultLayout>{page}</DefaultLayout>
  )

  return getLayout(<Component {...pageProps} />)
}



// layout.jsx
import { AppState_1_Provider } from '../context/context-1'
import { AppState_2_Provider } from '../context/context-2'

export const DefaultLayout = ({ children }) => {
  return (
    <AppState_1_Provider>
      <AppState_2_Provider>
        <div className="container">
          {children}
        </div>
      </AppState_2_Provider>
    </AppState_1_Provider>
  )
}

With this pattern, you can create multiple Context Providers and keep them well defined in a Layout component for the whole app. In addition, the getLayout function will allow you to override the default Layout definitions on a per-page basis, so every page can have its own unique twist on what is provided.

Creating A Hierarchy Amongst Routes

Sometimes the Layout pattern may not be enough, though. As apps grow in complexity, a need may surface to establish a provider/consumer relationship between routes: a route will wrap other routes and thus provide them with common definitions instead of making developers duplicate code. With this in mind, there is a Wrapper Proposal in the Next.js discussions to provide a smooth developer experience for achieving this.

For the time being, there is not a low-config solution for this pattern within Next.js, but from the examples above, we can come up with a solution. Take this snippet directly from the docs:

import Layout from '../components/layout'
import NestedLayout from '../components/nested-layout'

export default function Page() {
  return {
    /** Your content */
  }
}

Page.getLayout = (page) => (
  <Layout>
    <NestedLayout>{page}</NestedLayout>
  </Layout>
)

Again the getLayout pattern! Now it is provided as a property of the Page object. It takes a page parameter, just as a React component takes the children prop, and we can wrap as many layers as we want. Abstract this into a separate module, and you can share this logic with certain routes:

// routes/user-management.jsx

export const MainUserManagement = (page) => (
  <UserInfoProvider>
    <UserNavigationLayout>
      {page}
    </UserNavigationLayout>
  </UserInfoProvider>
)


// user-dashboard.jsx
import { MainUserManagement } from '../routes/user-management'

export const UserDashboard = (props) => (<></>)

UserDashboard.getLayout = MainUserManagement

Growing Pains Strike Again: Provider Hell

Thanks to React's Context API, we avoided Prop Drilling, which was the problem we set out to solve. Now we have readable code, and we can pass props down to our components touching only the required layers.

Eventually, our app grows, and the number of props that must be passed down increases ever faster. If we are careful enough to isolate state and eliminate unnecessary re-renders, we will likely gather an uncountable number of <Provider>s at the root of our layouts.

export const DefaultLayout = ({ children }) => {
  return (
    <AuthProvider>
      <UserProvider>
        <ThemeProvider>
          <SpecialProvider>
            <JustAnotherProvider>
              <VerySpecificProvider>
                {children}
              </VerySpecificProvider>
            </JustAnotherProvider>
          </SpecialProvider>
        </ThemeProvider>
      </UserProvider>
    </AuthProvider>
  )
}

This is what we call Provider Hell. And it can get worse: what if SpecialProvider is only aimed at a specific use case? Do you add it at runtime? Adding both a Provider and a Consumer at runtime is not exactly straightforward.

With this dreadful issue in focus, Jōtai has surfaced. It is a state management library with a very similar signature to useState. Under the hood, Jōtai also uses the Context API, but it abstracts the Provider Hell away from our code and even offers a “Provider-less” mode in case the app only requires one store.

Thanks to its bottom-up approach, we can define Jōtai's atoms (the data layer of each component that connects to the store) at the component level, and the library will take care of linking them to the provider. The <Provider> utility in Jōtai carries a few extra functionalities on top of React's default Context.Provider. It will always isolate the values of each atom, and it takes an initialValues property to declare an array of default values. So the above Provider Hell example would look like this:

import { Provider } from 'jotai'
import {
  AuthAtom,
  UserAtom,
  ThemeAtom,
  SpecialAtom,
  JustAnotherAtom,
  VerySpecificAtom
} from '@atoms'

const DEFAULT_VALUES = [
  [AuthAtom, 'value1'],
  [UserAtom, 'value2'],
  [ThemeAtom, 'value3'],
  [SpecialAtom, 'value4'],
  [JustAnotherAtom, 'value5'],
  [VerySpecificAtom, 'value6']
]

export const DefaultLayout = ({ children }) => {
  return (
    <Provider initialValues={DEFAULT_VALUES}>
      {children}
    </Provider>
  )
}

Jōtai also offers other approaches to easily compose and derive state definitions from one another. It can definitely solve scalability issues in an incremental manner.

Fetching State

Up until now, we have created patterns and examples for managing state internally within the app. But we should not be naïve: it is hardly ever the case that an application does not need to fetch content or data from external APIs.

For client-side state, there are again two different workflows that need acknowledgement:

  1. fetching the data
  2. incorporating data into the app's state

When requesting data from the client-side, it is important to be mindful of a few things:

  1. the user's network connection: avoid re-fetching data that is already available
  2. what to do while waiting for the server response
  3. how to handle when data is not available (server error, or no data)
  4. how to recover if integration breaks (endpoint unavailable, resource changed, etc)

And now is when things start getting interesting. Item 1 is clearly related to the fetching state, while Item 2 slowly transitions towards the managing state. Items 3 and 4 are definitely in the managing-state scope, but they both depend on the fetch action and the server integration. The line is definitely blurry. Dealing with all these moving pieces is complex, and these are patterns that do not change much from app to app. Whenever and however we fetch data, we must deal with those 4 scenarios.
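As an illustration of concern 1 alone, here is a deliberately naive cache wrapper in plain JavaScript. This is not SWR's or React-Query's actual API; all names are hypothetical, and real libraries layer revalidation, deduping, and stale-data handling on top of this idea.

```javascript
// Naive in-memory cache: reuse data we already fetched (concern 1).
const cache = new Map();

async function cachedFetch(key, fetcher) {
  if (cache.has(key)) return cache.get(key); // already available: skip the network
  const data = await fetcher(key);           // caller shows a loading state meanwhile (concern 2)
  cache.set(key, data);                      // errors from fetcher propagate to the caller,
  return data;                               // which decides how to recover (concerns 3 and 4)
}
```

A second call with the same key never hits the network, which is exactly the behavior the first bullet asks for.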

Luckily, thanks to libraries such as React-Query and SWR, every pattern shown for local state applies smoothly to external data. Libraries like these handle caching locally, so whenever the state is already available they can, depending on your settings, either renew the data or serve it from the local cache. Moreover, they can even provide the user with stale data while they refresh the content, prompting an interface update whenever possible.

In addition to this, the React team has been transparent from a very early stage about upcoming APIs which aim to improve the user and developer experience on that front (check out the proposed Suspense documentation here). Thanks to this, library authors have prepared for when such APIs land, and developers can start working with similar syntax as of today.

So now, let's add external state to our MainUserManagement layout with SWR:

import { useSWR } from 'swr'
import { UserInfoProvider } from '../context/user-info'
import { ExtDataProvider } from '../context/external-data-provider'
import { UserNavigationLayout } from '../layouts/user-navigation'
import { ErrorReporter } from '../components/error-reporter'
import { Loading } from '../components/loading'

export const MainUserManagement = (page) => {
  const { data, error } = useSWR('/api/endpoint')

  if (error) return <ErrorReporter {...error} />
  if (!data) return <Loading />

  return (
    <UserInfoProvider>
      <ExtDataProvider>
        <UserNavigationLayout>
          {page}
        </UserNavigationLayout>
      </ExtDataProvider>
    </UserInfoProvider>
  )
}

As you can see above, the useSWR hook provides a lot of abstractions:

  • a default fetcher
  • zero-config caching layer
  • error handler
  • loading handler

With two conditions we can provide early returns within our component for when the request fails (error), or while the round-trip to the server is not yet done (loading). For these reasons, these libraries sit close to state management libraries. Although they are not exactly state management, they integrate well and provide us with enough tools to simplify managing these complex asynchronous states.

It is important to emphasize something at this point: a great advantage of an isomorphic application is that it can save requests by fetching on the back-end side. Additional requests made once the app is already on the client side will affect perceived performance. There’s a great article (and e-book!) on this topic here that goes much more in-depth.

This pattern is not intended in any way to replace getStaticProps or getServerSideProps in Next.js apps. It is yet another tool in the developer's belt for when peculiar situations present themselves.

Final Considerations

As we wrap up with these patterns, it is important to stress a few caveats that may creep up on you if you are not mindful when implementing them. First, let us recapitulate what we have covered in this article:

  • Context as a way of avoiding Prop Drilling;
  • React core APIs for managing state (useState and useReducer);
  • Passing client-side state throughout a Next.js application;
  • How to prevent certain routes from accessing state;
  • How to handle data-fetching on the client-side for Next.js apps.

There are two important tradeoffs that we need to be aware of when opting for these techniques:

  1. Using the server-side methods for generating content statically is often preferable to fetching the state from the client-side.
  2. The Context API can lead to multiple re-renders if you aren’t careful about where the state changes take place.

Weighing those points carefully is important. In addition, all the good practices for dealing with state in a client-side React app remain useful in a Next.js app. The server layer may offer a performance boost, and that by itself may mitigate some computation issues. But the app will also benefit from sticking to the common best practices for rendering performance.

Try It Yourself

You can check the patterns described in this article live on nextjs-layout-state.netlify.app or check out the code on github.com/atilafassina/nextjs-layout-state. You can even just click this button to instantly clone it to your chosen Git provider and deploy it to Netlify:

In case you would like something less opinionated, or are just thinking about getting started with Next.js, there is this awesome starter project to get you going, all set up for easy deployment to Netlify. Again, Netlify makes it easy as pie to clone it to your own repository and deploy:

Play two videos side by side for comparison

Comparing two videos side by side

I have been creating walk-through videos on the cheap by just wandering through an area with my little Sony camera. Lacking a steadicam, I just try to hold the camera as steady as possible. Fortunately, with the proper (free) tools, I can still end up with a reasonable result. It's nice to be able to compare before and after videos, so I'll describe the application I put together and the tools I used to build it.

First, the tools (again, all free):

  1. ffplay (included with ffmpeg version 4.2.3) download
  2. ffprobe (included with ffmpeg)
  3. AutoIt (version 3) download
  4. Python (version 3.8 or newer) download
  5. VirtualDubMod (optional - version 1.10.5) download

At the heart of it all is ffplay.exe, which comes bundled with ffmpeg. Ffmpeg is a free and open-source command-line video suite that, with a little effort (warning: there is a learning curve), is unbelievably versatile. While ffmpeg.exe is the main engine that you'll use to apply complex filters or convert formats, ffplay.exe offers a small, simple playback utility as well as a way to preview effects in real time.

ffprobe.exe is used to get the frame size of the video.

AutoIt (or more specifically AutoItX) is used to control windows/applications from within Python. In this case, I use it to resize and place the playback windows which will be approximately half the screen wide, and placed side by side (thus the name of the application).

Python, of course, will be used to tie everything together.

Unless you are de-shaking video you won't need VirtualDubMod.

Making sure you have the tools

Once you have downloaded ffmpeg, unzip it into a folder and add the full path of ffmpeg\bin to your system PATH environment variable.

To use AutoItX you'll also need the Python interface module which you can get by running

python -m pip install --upgrade pip
pip install PyAutoIt
pip install pywin32

The first line upgrades to the latest version of pip. In my experience, this should always be done whenever you run pip. It updates frequently.

PyAutoIt is the interface to AutoIt and pywin32 is the interface to various Windows components.

Running the application

In its simplest form, you can run two videos side by side by typing

SideBySide video1.ext video2.ext

or by creating a shortcut to SideBySide.py on your desktop and dragging the two videos onto it.

The videos do not have to be the same size or even the same format; however, they should have the same aspect ratio. You may want to compare the quality of a video after converting it to another format, resizing it, or decreasing the bitrate. You may also want to compare the result of tweaking a video before actually applying the tweak to the entire video. For that, specify the video file as the first parameter and -same as the second parameter. This tells the app to use the same video for both windows. That's not very useful unless you also specify an ffmpeg video filter. For example, to apply a blur filter you can type:

SideBySide video.ext -same -vf smartblur=5:0.8:0

To increase the contrast slightly you can type:

SideBySide video.ext -same -vf colorlevels=rimin=0.1:gimin=0.1:bimin=0.1

The parameters are obtuse, but there is lots of documentation explaining the options. You can find it in the ffmpeg docs that were installed when you unzipped it, or you can google "ffmpeg video filters". Because of a slight delay in loading the videos, the two windows may not be exactly in sync, but they should be close enough to make an easy comparison possible.

I've added command line options for some of the more common effects.

-gamma #.#          gamma correction (see ffmpeg doc for details)
-contrast #.#       contrast correction (see ffmpeg doc for details)
-grey               convert to grey scale
-sepia              convert to sepia

As an aside, I've seen lots of questions over the years asking "how do I convert from format X to format Y?" In ffmpeg, to convert (for example) from AVI to MP4 you type:

ffmpeg -i input.avi output.mp4

Of course you can also apply a ton of options and filters between the two file names. For example, to rescale a video to 1280x720 you type

ffmpeg -i input.ext -s 1280x720 output.mp4

Just replace ext and mp4 with your extensions of choice. You can also use ffmpeg to convert image and even subtitle formats.
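If you script these conversions from Python, you can build the same command lines programmatically and hand them to subprocess. A sketch — the helper name is mine, and actually running the command assumes ffmpeg is on your PATH:

```python
import subprocess

def ffmpeg_cmd(src, dst, size=None):
    """Build an ffmpeg conversion command like the examples above."""
    cmd = ['ffmpeg', '-i', src]
    if size:                       # e.g. '1280x720' for the -s rescale option
        cmd += ['-s', size]
    cmd.append(dst)
    return cmd

# To execute: subprocess.run(ffmpeg_cmd('input.avi', 'output.mp4', size='1280x720'))
```

Building the argument list (instead of one big string) avoids quoting problems with file names that contain spaces.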

If anyone is interested in learning how to de-shake a video using VirtualDubMod please post the request in this thread and I'll be happy to write something up.

The code:
"""
    Name:

        SideBySide.py

    Description:

        Given two video files, runs them concurrently in two side by side
        instances of ffplay. This is very useful when you have processed a
        video and want to compare the original with the processed version.

        If you want to test a process (e.g. a filter) before processing the
        entire video, run the script by specifying -same as the second video
        as in

            SideBySide video1.mp4 -same  -vf smartblur=5:0.8:0

        Try the following filter to increase the contrast

            -vf colorlevels=rimin=0.2:gimin=0.2:bimin=0.2

        Convert to greyscale

            -vf colorchannelmixer=.3:.4:.3:0:.3:.4:.3:0:.3:.4:.3

        Convert to sepia

            -vf colorchannelmixer=.393:.769:.189:0:.349:.686:.168:0:.272:.534:.131

        adjust gamma/saturation

            -vf eq=gamma=1.5:saturation=1.3 

    Requires:

        Python version 3.8 or later
        ffmpeg (which includes ffplay)
        autoit (version 3)

    Usage:

        SideBySide video1 video2

    Notes:

        Regardless of the dimensions of the input videos, they will always be scaled so that
        they can be placed side by side, each filling just under half the width of the display.

        I haven't verified this, but I'm assuming that manipulating windows by handle rather
        than by name is more efficient which may be a consideration because I do it repeatedly
        in the wait loop at the bottom.

    Audit:

        2021-08-31  rj  original code

"""        

import os
import re                   #needed to extract video frame size
import tkinter              #needed to get display size
import win32com.client      #needed to create the AutoIt com object
import subprocess           #needed to run ffprobe.exe
import sys
import time


def DisplaySize():
    """Returns the monitor display resolution as (width, height)"""
    root = tkinter.Tk(None)
    return root.winfo_screenwidth(), root.winfo_screenheight()

def VideoSize(file):
    """Returns the frame size of a video as (width, height)"""

    #Run ffprobe to get video info
    res = subprocess.run(['ffprobe', '-i',  file], shell=True, stderr=subprocess.PIPE, text=True)

    #Extract frame size
    for line in res.stderr.split('\n'):
        if 'Video:' in line:
            if (search := re.search(r' \d+x\d+ ', line)):
                w,h = line[1+search.start():search.end()-1].split('x')
                return int(w), int(h)

    return 0, 0

def WaitFor(title, timeout):
    """Waits for up to timeout seconds for the window with the given title to be active"""
    timeout *= 10
    while not aut.WinActive(title):
        time.sleep(0.1)
        timeout -= 1
        if timeout == 0:
            print('expired')
            sys.exit()
    return


#check for sufficient number of parameters
if len(sys.argv) < 3:
    print("""
SideBySide video1 video2

    Displays two videos side by side for comparison. This is useful to see
    before and after video effects such as colour/contrast manipulation or
    scaling.

    If you want to try some ffmpeg filters before applying them to a complete
    video you can supply ffmpeg parameters ad hoc. To use the same video for
    both panels specify '-same' as the second video. For example, to see the
    effect of a gamma correction you can type:

        sidebyside video.mp4 -same -vf eq=gamma=0.9

    To save you the trouble of remembering ffmpeg filters several shortcuts
    are provided as follows:

        sidebyside video.mp4 -same -gamma 0.9        apply gamma correction
        sidebyside video.mp4 -same -contrast .12     apply contrast correction
        sidebyside video.mp4 -same -grey             convert to greyscale
        sidebyside video.mp4 -same -sepia            convert to sepia
""")
    sys.exit()

#get file names and command line options
video1 = sys.argv[1]
video2 = sys.argv[2]

if video2 == '-same':
    video2 = video1

if len(sys.argv) > 3:
    if sys.argv[3].lower() == '-grey':
        args = '-vf colorchannelmixer=.3:.4:.3:0:.3:.4:.3:0:.3:.4:.3'
    elif sys.argv[3].lower() == '-sepia':
        args = '-vf colorchannelmixer=.393:.769:.189:0:.349:.686:.168:0:.272:.534:.131'
    elif sys.argv[3].lower() == '-contrast' and len(sys.argv) > 4:
        cval = sys.argv[4].lstrip('0')      # '0.12' -> '.12'; lstrip so '10' stays '10'
        args = '-vf colorlevels=rimin=%s:gimin=%s:bimin=%s' % (cval, cval, cval)
    elif sys.argv[3].lower() == '-gamma' and len(sys.argv) > 4:
        gval = sys.argv[4].lstrip('0')      # '0.9' -> '.9'
        args = '-vf eq=gamma=%s' % gval
    else:
        args = ' '.join(sys.argv[3:])
else:
    args = ''

if not os.path.isfile(video1):
    print('Could not find:', video1)
    sys.exit()

if not os.path.isfile(video2):
    print('Could not find:', video2)
    sys.exit()

#personal hack - when I deshake a video I add '-ds' to the end of
#the filename. The following forces the -ds video to the right hand
#frame. Feel free to remove these two lines.

if '-ds.' in video1:
    video1,video2 = video2,video1

#create unique window titles
title1 = '1: ' + video1
title2 = '2: ' + video2

#create the AutoIt com object
aut = win32com.client.Dispatch("AutoItX3.Control")
aut.Opt("WinTitleMatchMode", 3)     #3 = Match Exact Title String

#get the display width and height, and same for video
dw,dh  = DisplaySize()
vw,vh  = VideoSize(video1)
aspect = vw / vh

#Calculate size and position of playback windows
vw = int((dw-20) / 2)
vh = int(vw / aspect)
x1 = '10'
y1 = '35'
x2 = str(int((dw/2)) + 5)
y2 = '35'

#set up the commands to run ffplay
#  -v 0 suppresses the standard ffplay output
#  -window_title guarantees unique window titles even if using the same video
cmd1 = 'ffplay -v 0 -window_title "' + title1 + '" -i "' + video1 + '"' \
     + ' -x ' + str(vw) + ' -y ' + str(vh) + ' -left ' + x1 + ' -top ' + y1
cmd2 = 'ffplay -v 0 -window_title "' + title2 + '" -i "' + video2 + '" ' + args \
     + ' -x ' + str(vw) + ' -y ' + str(vh) + ' -left ' + x2 + ' -top ' + y2

#Run ffplay on the first video. Wait for it to be active then get the handle.
print('\n' + cmd1)
if (p1 := aut.Run(cmd1)) == 0:
    print('Could not start ffplay.exe')
    sys.exit()

WaitFor(title1, 5)
handle1 = aut.WinGetHandle(title1)
handle1 = '[HANDLE:%s]' % handle1
#print('video 1 active - handle is', handle1)

#Run ffplay on the second video. Wait for it to be active then get the handle.
print('\n' + cmd2)
if (p2 := aut.Run(cmd2)) == 0:
    print('Could not start ffplay.exe')
    sys.exit()

WaitFor(title2, 5)
handle2 = aut.WinGetHandle(title2)
handle2 = '[HANDLE:%s]' % handle2
#print('video 2 active - handle is', handle2)

#This loop will terminate on CTRL-C or when both video players are closed
try:
    while aut.WinExists(handle1) or aut.WinExists(handle2):
        time.sleep(1)
except KeyboardInterrupt:
    pass

Surface Sampling in Three.js

One day, while lost in the Three.js documentation, I came across something called “MeshSurfaceSampler”. After reading the brief description on the page, I opened the provided demo and was blown away!

What exactly does this class do? In short, it’s a tool you attach to a Mesh (any 3D object); you can then call it at any time to get a random point along the surface of that object.

The function works in two steps:

  1. Pick a random face from the geometry
  2. Pick a random point on that face
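The two steps above can be sketched in plain JavaScript. This is an illustrative sketch, not the actual Three.js implementation: `triangleArea` and `samplePoint` are hypothetical helpers working on raw vertex arrays. A face is picked with probability proportional to its area (so large triangles don’t end up with fewer points per unit of surface), then a uniform point on that face comes from folded barycentric coordinates.

```javascript
// Sketch of the two sampling steps, with no Three.js dependency.
// Each triangle is [a, b, c], where a/b/c are [x, y, z] arrays.

function triangleArea([ax, ay, az], [bx, by, bz], [cx, cy, cz]) {
  // Area = half the length of the cross product of two edges
  const ux = bx - ax, uy = by - ay, uz = bz - az;
  const vx = cx - ax, vy = cy - ay, vz = cz - az;
  const crx = uy * vz - uz * vy;
  const cry = uz * vx - ux * vz;
  const crz = ux * vy - uy * vx;
  return 0.5 * Math.sqrt(crx * crx + cry * cry + crz * crz);
}

function samplePoint(triangles) {
  // Step 1: pick a face with probability proportional to its area
  const areas = triangles.map(t => triangleArea(...t));
  const total = areas.reduce((sum, a) => sum + a, 0);
  let r = Math.random() * total;
  let i = 0;
  while (i < areas.length - 1 && r > areas[i]) { r -= areas[i]; i++; }
  const [a, b, c] = triangles[i];

  // Step 2: uniform barycentric coordinates on that face
  let u = Math.random(), v = Math.random();
  if (u + v > 1) { u = 1 - u; v = 1 - v; } // fold back into the triangle
  return a.map((_, k) => a[k] + u * (b[k] - a[k]) + v * (c[k] - a[k]));
}
```

The real class does this more efficiently, precomputing the face-area distribution once in `build()` so each `sample()` call stays cheap.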

In this tutorial we will see how you can get started with the MeshSurfaceSampler class and explore some nice effects we can build with it.

💡 If you are the kind of person who wants to dig right into the demos, please do! I’ve added comments in each CodePen to help you understand the process.

⚠ This tutorial assumes basic familiarity with Three.js

Creating a scene

The first step in (almost) any WebGL project is to set up a basic scene with a cube.
In this step I will not go into much detail; you can check the comments in the code if needed.

We are aiming to render a scene with a wireframe cube that spins. This way we know our setup is ready.

⚠ Don’t forget to also load OrbitControls, as it is not included in the core Three.js package.

// Create an empty scene, needed for the renderer
const scene = new THREE.Scene();
// Create a camera and translate it
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(1, 1, 2);

// Create a WebGL renderer and enable the antialias effect
const renderer = new THREE.WebGLRenderer({ antialias: true });
// Define the size and append the <canvas> in our document
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Add OrbitControls to allow the user to move in the scene
const controls = new THREE.OrbitControls(camera, renderer.domElement);

// Create a cube with basic geometry & material
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshBasicMaterial({
  color: 0x66ccff,
  wireframe: true
});
const cube = new THREE.Mesh(geometry, material);
scene.add(cube);

// Render the scene on each frame
function render () {  
  // Rotate the cube a little on each frame
  cube.rotation.y += 0.01;
  
  renderer.render(scene, camera);
}
renderer.setAnimationLoop(render);

See the Pen by Louis Hoebregts (@Mamboleoo) on CodePen.

Creating a sampler

For this step we will create a new sampler and use it to generate 300 spheres on the surface of our cube.

💡 Note that MeshSurfaceSampler is not built-in with Three.js. You can find it in the official repository, in the ‘examples’ folder.

Once you have added the file in your imported scripts, we can initiate a sampler for our cube.

const sampler = new THREE.MeshSurfaceSampler(cube).build();

This needs to be done only once in our code. If you want to get random coordinates on multiple meshes, you will need to store a new sampler for each object.
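If you do end up juggling several meshes, one convenient pattern (a hypothetical sketch, not part of the tutorial's demos) is to cache one sampler per mesh. Here `buildSampler` stands in for `mesh => new THREE.MeshSurfaceSampler(mesh).build()`, injected as a parameter so the pattern can be shown without a full Three.js scene:

```javascript
// Sketch: build each mesh's sampler lazily, once, and reuse it afterwards.
function makeSamplerCache(buildSampler) {
  const samplers = new Map(); // mesh -> sampler
  return function getSampler(mesh) {
    if (!samplers.has(mesh)) {
      samplers.set(mesh, buildSampler(mesh)); // expensive build runs only once
    }
    return samplers.get(mesh);
  };
}
```

In a real scene you would create the cache with the MeshSurfaceSampler builder and call `getSampler(mesh).sample(tempPosition)` wherever a point is needed.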

Because we will be displaying hundreds of copies of the same geometry, we can use the InstancedMesh class to achieve better performance. Just like a regular Mesh, we define a geometry (SphereGeometry for the demo) and a material (MeshBasicMaterial). Once you have those two, you can pass them to a new InstancedMesh and define how many objects you need (300 in this case).

const sphereGeometry = new THREE.SphereGeometry(0.05, 6, 6);
const sphereMaterial = new THREE.MeshBasicMaterial({
 color: 0xffa0e6
});
const spheres = new THREE.InstancedMesh(sphereGeometry, sphereMaterial, 300);
scene.add(spheres);	

Now that our sampler is ready to be used, we can create a loop to define a random position and scale for each of our spheres.

Before we loop, we need two dummy variables for this step:

  • tempPosition is a 3D Vector that our sampler will update with the random coordinates
  • tempObject is a 3D Object used to define the position and scale of a sphere and generate a matrix from it

Inside the loop, we start by sampling a random point on the surface of our cube and store it into tempPosition.
Those coordinates are then applied to our tempObject.
We also define a random scale for the dummy object so that not every sphere will look the same.
Because we need the Matrix of the dummy object, we ask Three.js to update it.
Finally we add the updated Matrix of the object into our InstancedMesh’s own Matrix at the index of the sphere we want to move.

const tempPosition = new THREE.Vector3();
const tempObject = new THREE.Object3D();
for (let i = 0; i < 300; i++) {
  sampler.sample(tempPosition);
  tempObject.position.set(tempPosition.x, tempPosition.y, tempPosition.z);
  tempObject.scale.setScalar(Math.random() * 0.5 + 0.5);
  tempObject.updateMatrix();
  spheres.setMatrixAt(i, tempObject.matrix);
}	

See the Pen #1 Surface Sampling by Louis Hoebregts (@Mamboleoo) on CodePen.

Amazing, isn’t it? With only a few steps, we already have a working scene with random meshes scattered along a surface.

Phew, let’s just take a breath before we move to more creative demos ✨

Playing with particles

Because everybody loves particles (I know you do), let’s see how we can generate thousands of them to create the feeling of volume only from tiny dots. For this demo, we will be using a Torus knot instead of a cube.

This demo will work with a very similar logic as for the spheres before:

  • Sample 15000 coordinates and store them in an array
  • Create a geometry from the coordinates and a material for Points
  • Combine the geometry and material into a Points object
  • Add them to the scene

/* Sample the coordinates */
const vertices = [];
const tempPosition = new THREE.Vector3();
for (let i = 0; i < 15000; i ++) {
  sampler.sample(tempPosition);
  vertices.push(tempPosition.x, tempPosition.y, tempPosition.z);
}

/* Create a geometry from the coordinates */
const pointsGeometry = new THREE.BufferGeometry();
pointsGeometry.setAttribute('position', new THREE.Float32BufferAttribute(vertices, 3));

/* Create a material */
const pointsMaterial = new THREE.PointsMaterial({
  color: 0xff61d5,
  size: 0.03
});
/* Create a Points object */
const points = new THREE.Points(pointsGeometry, pointsMaterial);

/* Add the points into the scene */
scene.add(points);		

Here is the result, a 3D Torus knot only made from particles ✨
Try adding more particles or play with another geometry!

See the Pen #3 Surface Sampling by Louis Hoebregts (@Mamboleoo) on CodePen.

💡 If you check the code of the demo, you will notice that I don’t add the torus knot into the scene anymore. MeshSurfaceSampler requires a Mesh, but it doesn’t even have to be rendered in your scene!

Using a 3D Model

So far we have only been playing with native geometries from Three.js. That was a good start, but we can go a step further by using our code with a 3D model!

There are many websites that provide free or paid models online. For this demo I will use this elephant from poly.pizza.

See the Pen #4 Surface Sampling by Louis Hoebregts (@Mamboleoo) on CodePen.

#1 Loading the .obj file

Three.js doesn’t have a built-in loader for OBJ models, but many loaders are available in the official repository.

Once the file is loaded, we will update its material to activate wireframe mode and reduce the opacity so we can easily see through it.

/* Create global variable we will need for later */
let elephant = null;
let sampler = null;
/* Load the .obj file */
new THREE.OBJLoader().load(
  "path/to/the/model.obj",
  (obj) => {
    /* My file loads as a Group, so I pick its first child */
    elephant = obj.children[0];
    /* Update the material of the object */
    elephant.material = new THREE.MeshBasicMaterial({
      wireframe: true,
      color: 0x000000,
      transparent: true,
      opacity: 0.05
    });
    /* Add the elephant in the scene */
    scene.add(obj);
    
    /* Create a surface sampler from the loaded model */
    sampler = new THREE.MeshSurfaceSampler(elephant).build();

    /* Start the rendering loop */ 
    renderer.setAnimationLoop(render);
  }
);	

#2 Setup the Points object

Before sampling points along our elephant, we need to set up a Points object to store them all.

This is very similar to what we did in the previous demo, except that this time we will define a custom color for each point. We are also using a texture of a circle to make our particles rounded instead of the default square.

/* Used to store each particle coordinates & color */
const vertices = [];
const colors = [];
/* The geometry of the points */
const sparklesGeometry = new THREE.BufferGeometry();
/* The material of the points */
const sparklesMaterial = new THREE.PointsMaterial({
  size: 3,
  alphaTest: 0.2,
  map: new THREE.TextureLoader().load("path/to/texture.png"),
  vertexColors: true // Let Three.js know that each point has a different color
});
/* Create a Points object */
const points = new THREE.Points(sparklesGeometry, sparklesMaterial);
/* Add the points into the scene */
scene.add(points);	

#3 Sample a point on each frame

It is time to generate the particles on our model! But you know what? It works the same way as on a native geometry 😍

Since you already know how to do that, you can check the code below and notice the differences:

  • On each frame, we add a new point
  • Once the point is sampled, we update the position attribute of the geometry
  • We pick a color from an array of colors and add it to the color attribute of the geometry

/* Define the colors we want */
const palette = [new THREE.Color("#FAAD80"), new THREE.Color("#FF6767"), new THREE.Color("#FF3D68"), new THREE.Color("#A73489")];
/* Vector to sample a random point */
const tempPosition = new THREE.Vector3();

function addPoint() {
  /* Sample a new point */
  sampler.sample(tempPosition);
  /* Push the point coordinates */
  vertices.push(tempPosition.x, tempPosition.y, tempPosition.z);
  /* Update the position attribute with the new coordinates */
  sparklesGeometry.setAttribute("position", new THREE.Float32BufferAttribute(vertices, 3));
  
  /* Get a random color from the palette */
  const color = palette[Math.floor(Math.random() * palette.length)];
  /* Push the picked color */
  colors.push(color.r, color.g, color.b);
  /* Update the color attribute with the new colors */
  sparklesGeometry.setAttribute("color", new THREE.Float32BufferAttribute(colors, 3));
}

function render() {
  /* If there are fewer than 10,000 points (30,000 coordinates), add a new one */
  if (vertices.length < 30000) {
    addPoint();
  }
  renderer.render(scene, camera);
}		

Animate a growing path

A cool effect we can create using the MeshSurfaceSampler class is to create a line that will randomly grow along the surface of our mesh. Here are the steps to generate the effect:

  1. Create an array to store the coordinates of the vertices of the line
  2. Pick a random point on the surface to start and add it to your array
  3. Pick another random point and check its distance from the previous point
    1. If the distance is short enough, go to step 4
    2. If the distance is too far, repeat step 3 until you find a point close enough
  4. Add the coordinates of the new point to the array
  5. Update the line geometry and render it
  6. Repeat steps 3-5 to make the line grow on each frame

The key is step 3, where we keep picking random points until we find one that is close enough. This way we won’t connect two points on opposite sides of the mesh. Connecting any two random points could work for a simple convex object (like a sphere or a cube), since every line would stay inside the object. But think about our elephant: if a point on the trunk were connected to one of the back legs, you would end up with lines crossing spaces that should be ‘empty’.

Check the demo below to see the line coming to life!

See the Pen #5 Surface Sampling by Louis Hoebregts (@Mamboleoo) on CodePen.

For this animation, I’m creating a Path class, as I find that a cleaner approach if we want to create multiple lines later. The first step is to set up the constructor of that Path. Similar to what we have done before, each path requires five properties:

  1. An array to store the vertices of the line
  2. The final geometry of the line
  3. A material specific for Line objects
  4. A Line object combining the geometry and the material
  5. The previous point Vector

/* Vector to sample the new point */
const tempPosition = new THREE.Vector3();
class Path {
  constructor () {
    /* The array with all the vertices of the line */
    this.vertices = [];
    /* The geometry of the line */
    this.geometry = new THREE.BufferGeometry();
    /* The material of the line */
    this.material = new THREE.LineBasicMaterial({color: 0x14b1ff});
    /* The Line object combining the geometry & the material */
    this.line = new THREE.Line(this.geometry, this.material);
    
    /* Sample the first point of the line */
    sampler.sample(tempPosition);
    /* Store the sampled point so we can use it to calculate the distance */
    this.previousPoint = tempPosition.clone();
  }
}		

The second step is to create a function we can call on each frame to add a new vertex at the end of our line. Within that function we will execute a loop to find the next point for the path.
When that next point is found, we can store it in the vertices array and in the previousPoint variable.
Finally, we need to update the line geometry with the updated vertices array.

class Path {
  constructor () {...}
  update () {
    /* Variable used to exit the while loop when we find a point */
    let pointFound = false;
    /* Loop while we haven't found a point */
    while (!pointFound) {
      /* Sample a random point */
      sampler.sample(tempPosition);
      /* If the new point is less than 30 units from the previous point */
      if (tempPosition.distanceTo(this.previousPoint) < 30) {
        /* Add the new point in the vertices array */
        this.vertices.push(tempPosition.x, tempPosition.y, tempPosition.z);
        /* Store the new point vector */
        this.previousPoint = tempPosition.clone();
        /* Exit the loop */
        pointFound = true;
      }
    }
    /* Update the geometry */
    this.geometry.setAttribute("position", new THREE.Float32BufferAttribute(this.vertices, 3));
  }
}

function render() {
  /* Stop the progression once we have reached 10,000 points (30,000 coordinates) */
  if (path.vertices.length < 30000) {
    /* Make the line grow */
    path.update();
  }
  renderer.render(scene, camera);
}		

💡 How close the new point needs to be to the previous one depends on the scale of your 3D model. For a very small object the threshold could be ‘1’; with the elephant model we are using, ‘30’ works well.
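One way to avoid hand-tuning that value is to derive it from the model’s size, for example as a fraction of its bounding-box diagonal. This is a hypothetical helper, not part of the demo; in a real scene the `min`/`max` corners could come from `new THREE.Box3().setFromObject(mesh)`:

```javascript
// Hypothetical helper: scale the "close enough" distance to the model.
// min and max are the bounding-box corners as [x, y, z] arrays.
function distanceThreshold(min, max, fraction = 0.1) {
  const dx = max[0] - min[0];
  const dy = max[1] - min[1];
  const dz = max[2] - min[2];
  // 10% of the bounding-box diagonal by default
  return fraction * Math.sqrt(dx * dx + dy * dy + dz * dz);
}
```

You would then compare `tempPosition.distanceTo(this.previousPoint)` against this threshold instead of a hard-coded 30.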

Now what?

Now that you know how to use MeshSurfaceSampler with particles and lines, it is your turn to create funky demos with it!
What about animating multiple lines together, starting a line from each leg of the elephant, or even popping particles from each new point of the line? The sky is the limit ⛅

See the Pen #6 Surface Sampling by Louis Hoebregts (@Mamboleoo) on CodePen.

This article does not cover all the available features of MeshSurfaceSampler. There is also a weight attribute that lets you make points more or less likely to land on certain faces. When we sample a point, we could also use the normal or the color at that point for other creative ideas. That could be part of a future article one day… 😊

Until next time, I hope you learned something today and that you can’t wait to use that new knowledge!

If you have questions, let me know on Twitter.

The post Surface Sampling in Three.js appeared first on Codrops.

Why and When to Opt for a Multicloud Strategy?

With the ever-increasing demand for DevOps and rapid software releases, the strategies that support this approach are also in high demand. Technologies such as Docker and Kubernetes have revolutionized the way software is built and shipped. Enterprises today have innovative ways to streamline their business and reduce the noise and distractions that hamper their growth. The major clouds provide many of the resources and much of the support that any organization needs.

But along with the benefits of DevOps come challenges such as vendor lock-in, rigid systems with no room for customization, and more. To address such scenarios, organizations are intentionally pursuing strategies that include a multicloud approach.

How to Add Content Locking in WordPress (2 Methods)

Do you want to add content locking on your WordPress site?

Many websites use content locking to boost their lead generation, increase sales, or build their email list. You will see this on many news and journalism sites that lock articles to make money online.

In this article, we will show you how to add content locking in WordPress without annoying users.

How to add content locking in WordPress

What Is Content Locking?

Content locking is a technique used by site owners to encourage their users to take action.

That action might be anything from signing up for an email newsletter to paying for premium content.

Content locking OptinMonster

Content locking works similarly to content upgrades. When you offer valuable content on your WordPress website, such as a course or eBook download, you give your visitors a reason to take the initiative and sign up for a membership.

Doing this effectively can help you generate leads, build an email list, and grow your business. But if you don’t set up content locking in the best way, then visitors to your site may find it annoying.

That being said, let’s see how you can easily add content locking in WordPress the right way. Method 1 is best for exclusive free content, and Method 2 is for premium paid content:

Method 1: Add Content Locking With OptinMonster (Free Content)

OptinMonster is the best lead generation plugin for WordPress on the market. It’s the best choice if you want to use exclusive content to grow your email list.

You will need a Plus or higher plan for content locking. WPBeginner users can get a 10% discount by using our OptinMonster coupon.

First, you will need to visit the OptinMonster website and click the ‘Get OptinMonster Now’ button to sign up for a plan.

OptinMonster – The best WordPress popup plugin

Next, you need to install and activate the OptinMonster plugin. For more details, see our guide on how to install a WordPress plugin.

Upon activation, you will see the welcome screen and the setup wizard. Simply click the ‘Connect Your Existing Account’ button and follow the on-screen instructions.

Connect your existing account

Next, you will see a new popup window open to connect your WordPress site with OptinMonster.

Go ahead and click the ‘Connect To WordPress’ button.

Connect OptinMonster to WordPress

Once you’ve done that, you will then need to log in to your OptinMonster account or create a new one.

After you are successfully connected, you should navigate to the OptinMonster » Campaigns page in your WordPress dashboard. Since you haven’t yet made a campaign, you will be asked to create a new one.

Create first OptinMonster campaign

When you click the ‘Create Your First Campaign’ button, a popup window will open.

OptinMonster will ask you to choose from templates or playbooks. If you select the ‘Templates’ option, then you can pick from 300+ designs for your campaign.

On the other hand, you can also opt for the ‘Playbooks’ option and use a ready-to-use campaign inspired by successful brands.

Creating a campaign using a ready-made playbook

For this tutorial, we will select the ‘Templates’ option.

Next, you will be taken to the OptinMonster website and asked to pick a campaign type.

To add content locking, you will need to choose the Inline option.

OptinMonster campaigns

You will then need to scroll down and choose a template. OptinMonster offers multiple templates, and they all work great across any device.

You can view each template by clicking on the ‘Preview’ button. When you find one that matches your needs, you will need to click the ‘Use Template’ button.

Choose an inline campaign template

You will then be asked to provide a title for your campaign.

Once you’ve typed it in, simply click the ‘Start Building’ button.

Click the Start Building button

This will open the OptinMonster editor. Here, you can spend time perfecting the content and appearance of the popup.

For instance, you can use the drag-and-drop campaign builder to add different blocks to the template. There are blocks for adding columns, text, images, videos, countdown timers, and more.

Customize your content locking campaign

You can click on any section to change the wording, edit fonts, add images, change colors, and more.

You can also customize the success message that’s displayed to your users after they sign up.

Edit success view of campaign

Once you are happy with the way your popup looks, you need to activate content locking.

To do that, you will need to switch to the ‘Display Rules’ tab. Here, you can choose when and where the campaign will be displayed to users.

For instance, you can show the content lock popup on all the pages of your WordPress website or just on selected pages.

Display rules settings OptinMonster

Once you are done, simply click the ‘Done? Go To Actions’ button.

On the next screen, you will see options to add animation with MonsterEffects and play a sound when the popup appears.

To lock content behind the popup, simply go to the ‘Lock Content’ section and click the ‘Enable Content Locking’ toggle.

Enable content locking option

If you scroll down, there are more campaign display options.

For instance, OptinMonster lets you configure the cookie settings and set the days after which the popup will show to different users.

Changing the cookie settings

After that, you need to make the campaign active.

Simply click on the ‘Published’ tab at the top of the screen and then click on the ‘Publish’ button.

Publish your content locking campaign

Then, you can save your campaign by clicking the ‘Save’ button at the top right and closing the campaign builder.

Next, you will see the WordPress Output Settings for your campaign. Here, OptinMonster will ask how you’d like to show your inline campaign.

The Automatic option is the simplest to set up. It will lock content automatically after a specified number of words or paragraphs. For example, you could lock all content after the first three paragraphs of each post.

Select how inline campaign will appear with automatic mode

The Manual setting requires a little more work, but it lets you choose exactly which content will be locked.

You can do this by adding a shortcode to each post that you wish to lock.

Select how inline campaign will appear with manual mode

Simply start by copying the shortcode.

To add it to your content, you will need to edit the post or page where you want to enable content locking.

When you are in the WordPress content editor, just add a Shortcode block and paste the shortcode just before the content you wish to lock.

Enter shortcode for content locking

Now, you can save and publish your post or page.

If you’d like to see content locking in action, then simply visit the post or page in a new browser window.

Content locking example

The locked content is blurred or hidden.

Once a user enters an email address, the locked content will be displayed.

Content unlocked preview

Method 2: Add Content Locking With MemberPress (Paid Content)

MemberPress is the best membership plugin for WordPress, and you can use it to easily and effectively lock your WordPress content. It’s the best choice when you want to make money by charging for premium content.

Is MemberPress the right membership plugin for your WordPress website?

You will need at least a Basic plan for content locking. WPBeginner users can save up to $479 off their first year of MemberPress using our MemberPress coupon.

The first thing you need to do is install and activate the MemberPress plugin. For more details, see our guide on how to install a WordPress plugin.

On activation, you will have to enter your MemberPress license key. To do that, navigate to MemberPress » Settings and paste your key into the text box. You then need to click the ‘Activate License Key’ button.

Adding a license key to your MemberPress plugin

When you first set up your membership site, you will have to select a payment method, decide on pricing, create a signup page, and more.

Check out our ultimate guide on how to create a WordPress membership site for all the details.

In this tutorial, we will show you how to use MemberPress to create a premium subscription and then determine which content can only be accessed after paying for a membership.

Let’s start by setting up a new membership plan for your subscribers. To create a membership level, you need to go to the MemberPress » Memberships page and click on the ‘Add New’ button at the top.

Adding membership levels to your WordPress website

You will need to give the plan a name and decide on the cost of the content and the billing type.

We will create a one-time lifetime payment, but you could choose one of the regular subscription options.

Creating multiple membership levels in WordPress

After that, you need to scroll down to the membership options meta box below the post editor.

This is where you can customize permissions and membership options for this particular plan.

The Membership Options settings

If you need more than one membership plan, then go ahead and repeat this process.

When you are finished, you can click on the ‘Publish’ button on the right of the screen to make it available.

Publishing a membership level with a free trial

The next step is to select which content is available to each membership plan on your website. MemberPress makes it easy to control access using Rules.

You can set up your rules by visiting the MemberPress » Rules page and clicking on the ‘Add New’ button at the top.

Adding a new rule to your WordPress membership site

The rule edit page allows you to select different conditions and associate them with a membership plan.

For example, we can protect all content with the ‘Gold’ tag so that it’s available only to members of our Gold plan.

Setup rules conditions

Below the rules editor, you will see the content drip and expiration options. These let you release content gradually and make it unavailable after a period of time.

If you’d like all the content to be available as soon as users sign up and remain available without expiring, then you should leave these boxes unchecked.

Content drip and expiration settings

You can repeat the process to create more rules as needed for your membership site.

Once you are satisfied, go ahead and click on the ‘Save Rule’ button on the right to save your rule settings.

Click save rule button

Now, all we need to do is to add our locked content.

In our example, we add the ‘gold’ tag to the posts we want only Premium members to be able to access.

Adding tag to post

Next, scroll down to the ‘MemberPress Unauthorized Access’ meta box below the post editor.

Here, you can select what logged-out users (non-members) will see when they reach this content.

Unauthorized access section

You can see the content lock in action by visiting your WordPress site.

For example, on our demo site, someone who is not a premium member will see the pricing page when they access the locked content.

View pricing page for locked content

Premium members will be able to see the content when they subscribe to a plan and log in to your WordPress site.

Bonus: How to Use Content Locking to Grow Your Business

Now that you know how to add content locking to your site, let’s look at some use cases for growing your email list.

1. Lock Content for Registered Users

If you are running a membership site and want to make money from your WordPress blog, then you can lock exclusive content for registered users.

For example, you could lock content like interviews, online courses, videos, podcast episodes, cheat sheets, and other content for registered users.

Content locking preview

This way, users will have to subscribe to a premium plan and submit their email address to access the exclusive content.

2. Restrict Content Based on User Roles

You can also use content locking to restrict access to specific pages and sections on your site. This is really useful if you run a multi-author website and don’t want writers or contributors to view certain pages.

Similarly, you can restrict content based on user roles on a membership site. For instance, only users with a subscriber user role can view the video section on your site or access online courses.

3. Offer Content Upgrades to Visitors

Content upgrades are pieces of bonus content that users can unlock by signing up for your email list.

You can use content locking to offer free upgrades to visitors and encourage them to join your newsletter. This way, users will have an incentive to get bonus content while you will get more subscribers.

Offer Content Upgrades

We hope this article helped you add content locking in WordPress. You may also want to learn the right way to create an email newsletter or check out our list of must-have WordPress plugins to grow your site.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Add Content Locking in WordPress (2 Methods) first appeared on WPBeginner.

How To Prepare for SOC 2 Compliance: SOC 2 Types and Requirements

To be reliable in today’s data-driven world, SOC 2 compliance is essential for all cloud-based businesses and technology services that collect and store their clients’ information. This gold standard of information security certifications helps validate your data privacy practices and security infrastructure to prevent any kind of data breach.

Data breaches are all too common nowadays among small to large scale companies across the globe in all sectors. According to PurpleSec, half of all data breaches will occur in the United States by 2023. 

Hi, I’m bboycage

Hi, I'm bboycage, and I work as a Data Analyst / Data Scientist at a financial consulting company. Python is my main language, but I had to learn SAS for my job, along with the 1992 version of ANSI SQL as available through PROC SQL. I've also taken up Java on the side and plan on learning C++ down the line.

Automattic Acquires Frontity, Founders to Work Full-Time on Gutenberg

Frontity co-founders Pablo Postigo and Luis Herranz

Automattic has acquired Frontity, the company behind an open source framework for building WordPress themes with React. The acquisition comes more than a year after the company raised €1M in funding in a round led by K Fund, with Automattic covering 22%. Frontity co-founders Pablo Postigo and Luis Herranz and their team will no longer be developing and maintaining the framework. Their new focus will be on contributing to the WordPress open source project and improving the full site editing developer experience.

“After a series of conversations, Automattic offered to sponsor our team to work directly on the WordPress open source project,” Frontity’s founders said in the announcement. “In particular, to contribute our expertise in developer experience, frontend tooling, performance, and UX to the WordPress core itself, instead of doing so only for an external tool.”

In a separate FAQ document, Frontity clarified that this acquisition does not mean the framework will be merged into WordPress, nor does it mean the team plans to bring React into the WordPress PHP or full site editing themes. The founders intend to apply their expertise to the Gutenberg project full time:

Even though Frontity is a React framework, it doesn’t mean that we are going to push React to the WordPress frontend. We will look at the Gutenberg and full site editing space to identify those areas in which our work could have the most significant impact, and work closely with the WordPress community to help improve its developer experience.

WordPress is already the best content platform on the web. We want to help it become the best development platform on the web.

In addition to putting the Frontity team to work on improving developer experience, Automattic is investing in other ways that expand its support of the Gutenberg project. The company recently hired a new head of developer relations, who is building out a team tasked with improving the developer experience with Gutenberg and full-site editing. Birgit Pauli-Haack is a new member of that team, and Automattic is also sponsoring her curation of the Gutenberg Times publication and the Changelog Podcast.

Frontity Framework Will Transition to a Community-Led Project

As a result of the acquisition and the team’s reassignment to working on Gutenberg, Frontity’s founders are transitioning the framework to a community-led project. The team has prepared to leave the project in “a stable, bug-free position,” with documentation covering the features they were working on. The framework is used by many companies and agencies, including high-profile sites like the TikTok Creator Portal, popular Catholic news site Aleteia, and Diariomotor, a popular Spanish automotive publication.

“As far as we know, Automattic is not using Frontity Framework in any of its products,” Frontity CEO and co-founder Pablo Postigo said. “But we know there are a lot of Automatticians who have been following our progress closely. 

“We are aware that WordPress VIP does recommend Frontity for decoupled solutions, too. We are sure our experience and knowledge might be of help for this team as well.”

The departure of Frontity’s founders and team introduces some uncertainty into the future of the framework. When asked if it can survive as a community-led project, Postigo was optimistic but not certain.

“We still think that Frontity Framework is the best way to run a decoupled WordPress site with React and that this will be the case for a long time,” Postigo said.

“It is still too early to know what will happen. Frontity has a great community behind it, there are a lot of great projects which are using the framework in production, and there’s also a nice group of really active contributors. We feel really positive about the future of the framework.”

Google Chrome 94 Beta Includes WebCodecs API

Google has announced the beta release of v94 of Google Chrome and is highlighting the inclusion of a new WebCodecs API, which gives developers low-level access to the browser’s built-in video encoders and decoders. This functionality was first introduced as an origin trial in Chrome 93.

The Chromium blog outlined the importance of this new API.
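To make the "low-level" part concrete, here is a minimal sketch of the WebCodecs encoding path. The helper name `makeEncoderConfig` is ours, not part of the API; the `VideoEncoder` usage follows the shape of the API as announced, and the encoder branch only runs in a browser that supports WebCodecs (such as Chrome 94+):

```javascript
// Build a configuration object of the kind VideoEncoder.configure() accepts.
// (Helper name is ours; the fields are standard WebCodecs config fields.)
function makeEncoderConfig(width, height) {
  return {
    codec: "vp8",       // codec string, e.g. VP8
    width,
    height,
    bitrate: 1_000_000, // target bitrate in bits per second (1 Mbps)
    framerate: 30,
  };
}

if (typeof VideoEncoder !== "undefined") {
  // Browser path: create an encoder that emits compressed chunks via a callback.
  const encoder = new VideoEncoder({
    output: (chunk) => console.log("encoded chunk:", chunk.byteLength, "bytes"),
    error: (e) => console.error("encoder error:", e),
  });
  encoder.configure(makeEncoderConfig(640, 480));
  // In real use, each captured VideoFrame would be passed to encoder.encode(frame).
} else {
  console.log("WebCodecs is not available in this environment");
}
```

The key design point is that, unlike higher-level APIs such as MediaRecorder, WebCodecs hands the application each encoded chunk individually, so it can implement its own muxing, transport, or processing pipeline.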

Announce Your Plugin to the World, Shout It From the Rooftop

The easiest way to kill your WordPress plugin is to fail to let the world know about it. If you cannot manage a tweet, blog post, or quick note on Facebook, you may as well sign the death certificate then and there.

I get it. I have been there. Not everyone is a marketing guru, so putting out the right messaging might seem like speaking in a foreign language. But no messaging at all? That will not bode well for your young project.

Part of my job is finding plugins and sharing them with the community. Every week, I am on the lookout for that next great idea. Or, at least, a sort-of-good idea. I scour Twitter, regular blogs that I read, and official WordPress directories for plugins and themes. What I like most about writing about our beloved platform is not big business deals or the latest drama. While those pieces can be fun, I am most interested in what people create on top of the software. Whether a large company or an individual builds a new plugin, I am always excited when Monday rolls around. I can begin my search anew.

Often, I will find a new plugin that looks promising, so I dive into it. I install and activate it. At times, I find something so interesting that I have no choice but to share it. However, most of the time, I need a little push. To understand “the why” behind it. I do a quick check to see if they have written a blog post, tweeted about it, or shared it in some way. More often than not, nothing exists about it other than its listing in the plugin directory. And, reaching out to devs via email is often a hit-or-miss affair.

When you do not announce your new project to the world, it feels like you are not passionate about it.

I understand that some people simply hash out an idea and decide to drop it in the plugin directory. They are not in it for glory or even recognition. For them, it is just a piece of code they thought might come in handy for others. But, usage is the lifeblood of software. If no one else downloads, installs, and activates your plugin, can we really call it software?

Like the proverbial tree falling in the forest, whether it makes a sound is irrelevant if no one is around to hear it.

I have been mulling over whether to finish writing this post for months, unsure if I was ever going to hit the publish button. I initially scratched down some notes in early April, attempting to understand why so few go through the trouble of doing any marketing of their plugins. I reached out to Bridget Willard to get insight from someone with a rich history in the marketing world. She had just published How to Market Your Plugin the month before, so the timing made sense.

However, I still felt too frustrated with the status quo in the WordPress community. A message from a reader wishing that we would mention alternative choices for plugin-related posts prompted me to revisit this. The truth is simple. So many projects fly under the radar because their authors begin and end their marketing by submitting to WordPress.org.

“Marketing is communication,” said Willard. “At the basic level, you must ‘tell people’ you have a product. The basic minimum is a blog post with social posts on Twitter, Facebook, and LinkedIn. It’s scary to market while you build, but that’s what the automobile industry does (along with others). You have to create the desire for the product — more than fixing a problem.”

While she tends to focus on products and services, I asked her what developers should be doing regardless of whether their plugins are commercial or free.

“I advocate with all of my being having a landing page on your main site (not a new site) promoting your plugin — while you’re building it,” paraphrasing from a chapter in her book. “Take signups for beta testers, start email marketing. The blog post is anticlimactic in many ways, and one or two tweets aren’t enough. Even better is to customize the sign-up ‘thank you page’ with something special — a video talking about your goals, for example. It’s not the time to have a tutorial or demo. This is about communicating your vision.

“The sad thing is that many plugin developers don’t see the need to spend money on a ‘free’ plugin. The axiom is true, ‘it takes money to make money.’ If you want sales, you need marketing. The sale for a free plugin is a download, and those are just as important.”

Part of me misses the old Weblog Tools Collection era. Every few days, the site would share a post with the latest plugins (themes too) with short descriptions of each. It was an easy win if you had no marketing skills. Developers could submit their new projects, and the team would share them with the community. When I was a young, up-and-coming developer, it was one of the only ways I knew how to reach folks in the WordPress community aside from pushing stuff from my own blog.

Today, we have far more avenues for sharing our work via social networking. Of course, the downside is that you have to cut through the noise.

In the long run, I would like to see an overhaul of the WordPress.org directory, focusing on the discoverability of plugins by feature instead of only popularity. Not all developers are known for their marketing skills. Having a little help from the directory they feed into could make it easier for budding developers to jump from hobby to business.

Until then, let the world know about your plugin. Even if it seems like you are shouting into the abyss, you may just hear an answer from someone who noticed your passion. If nothing else, let us know about it here at WP Tavern.