Should I Update WordPress or Plugins First? (Proper Update Order)

Not sure whether to update WordPress core or your plugins first?

Often beginners don’t install updates because they are afraid of breaking their site. Updating your WordPress site in the correct order will help you prevent common errors and troubleshoot more easily.

In this article, we’ll show you the proper order for updating WordPress core, plugins, and themes.


Why Keep WordPress Up to Date?

It’s important to always use the latest version of WordPress. This will make sure your website has the latest security patches, newest features, and the best speed and performance.

Unfortunately, on rare occasions, updating WordPress or a plugin can break your website. This can happen if there’s a bug in the code or if the update introduces some kind of conflict with a theme or plugin.

That’s why we always recommend that you create a complete WordPress backup before performing any updates. You can also create a staging site where you can test the updates and catch any errors without risking your live website.

It’s also helpful to use the proper WordPress update order. You may be wondering whether it’s best to update WordPress core or your plugins first.

Our expert team recommends updating WordPress in this order:

  1. First, update WordPress core
  2. Then update your plugins
  3. Finally, update your theme last

Let’s take a look at the best order to update your WordPress website.

Before You Start, Make a Complete WordPress Backup

Before you update anything, it’s important to perform a full backup of your WordPress website. You should store the backup on your computer or in cloud storage, not just on your hosting server.

That’s because there is always some risk that an update may break your site, no matter how careful you are or which order you perform the updates.

A complete WordPress backup includes everything:

  • Your WordPress database
  • All your images and media uploads
  • Your WordPress plugins and themes
  • Core WordPress files

UpdraftPlus is the best WordPress backup plugin and is used by more than 3 million websites. You can use it to create a complete backup of your WordPress site and store it on the cloud or download it to your computer.

Back Up Your Website With UpdraftPlus

You can learn the best way to use UpdraftPlus to back up your website step by step by visiting our guide on how to back up and restore your WordPress site.

First, Update WordPress Core

If a new version of WordPress core is available, then you should update that first. This follows the update order as it is listed on the Dashboard » Updates page and helps minimize the risk to your site.

Because plugin and theme updates are tested to work with the latest WordPress version, you are less likely to have conflicts by updating your plugins and themes after the latest version of WordPress is installed.

The simplest way to update WordPress core is to navigate to the Dashboard » Updates page and then click the ‘Update Now’ button.

Updating WordPress Core From the Dashboard

When you press ‘Update Now,’ WordPress will automatically put your site in maintenance mode, then fetch the latest version of the software and install it for you. You will see the update progress on your screen.

Advanced users can also update WordPress manually by downloading the latest version from the WordPress download page, and then logging into their WordPress hosting account and using FTP to upload the new WordPress files.

To learn how to update WordPress core using either of these methods, see our beginner’s guide and infographic on how to safely update WordPress.

WordPress Update Flowchart

Troubleshooting a WordPress Core Update

Before you move on to update your plugins, you should first make sure that there are no problems with your website now that it is running the latest version of WordPress.

Simply visit your website in a new browser window to see if anything isn’t working or looks out of place. You should also review the settings in your WordPress admin area.

If you come across any issues, then take a look at our list of common WordPress errors and how to fix them.

If the problem you are facing is not listed there, then you should follow the steps in our WordPress troubleshooting guide to figure out the problem and apply a solution.

After That, Update Your Plugins

Once you have upgraded WordPress on your website, then you can update your plugins.

An easy way to do that is to scroll further down the Dashboard » Updates page to the ‘Plugins’ section.

Simply select the specific plugins you wish to update and click the ‘Update Plugins’ button. You can select all of the plugins listed by checking the ‘Select All’ box at the top of the list.

Updating WordPress Plugins From the Updates Page

You may also notice a red number next to Plugins in the admin sidebar. Clicking it will show you a yellow notice under each plugin that needs to be updated.

Then, all you have to do is click the ‘Update now’ link under any plugin you want to update without having to leave the page.

How to update plugins in WordPress

For more detailed information, see our step-by-step guide on how to properly update WordPress plugins.

Troubleshooting a Plugin Update

As you did after updating WordPress core, you should visit your website in a new browser window to see if you encounter any error messages or other problems.

You may sometimes discover that one of your plugins is not compatible with the latest WordPress version.

When that happens, you should follow the steps in our WordPress troubleshooting guide to see if you can find a solution to the problem.

If you can’t, then reach out to the developer and see if they plan to release an update. If the plugin is from the WordPress Plugin Directory, then you can contact the developer using the site’s support forum. Otherwise, check the official website for support information.

How to get WordPress support in the official support forums

If no further development is planned, then you will need to look for a different plugin that performs the same task. You might like to take a look at our beginner’s guide on how to choose the best WordPress plugin.

If you’re not ready to move on to a different plugin, or if there are other issues with the update that you can’t resolve, then you may need to restore your WordPress site from the backup you made before you began the update process.

Alternatively, you can roll back WordPress to the previous version.

Finally, Update Your Theme

After you have updated WordPress core and your plugins, and you have checked that your website is working, you can update your theme, if an update is available.

However, when you update a theme, you will overwrite the existing theme files with new ones and lose any changes you made. If you added any code to your theme, then you should carefully check our guide on how to update a WordPress theme without losing customization.

Once you are ready to update your theme, you can simply scroll to the ‘Themes’ section at the bottom of the Dashboard » Updates page.

Once there, you can select the themes you want to update, then click the ‘Update Themes’ button. The ‘Select All’ checkbox will automatically select all available theme updates.

Updating Themes From the Dashboard » Updates Page

Alternatively, you can navigate to Appearance » Themes in your admin area. If any updates are available, you will notice a red number next to ‘Themes’ in the admin sidebar.

Updating Themes From the Appearance » Themes Page

Simply click the ‘Update now’ link above any theme you wish to update.

Troubleshooting Your Theme Update

Troubleshooting a theme update is similar to troubleshooting a plugin update. You should start by visiting your website in a new browser window to see if there are error messages or other problems.

If there are, you can follow our WordPress troubleshooting guide to find a solution, or reach out to the developer for help.

If the theme is from the WordPress Theme Directory, then you can contact the developer using the support forum for that theme. Otherwise, check the official website for support information.

What Is the Proper WordPress Update Order?

In conclusion, let’s summarize the proper order to update your WordPress website:

  • First, you should back up your website
  • Then, update the core WordPress files
  • Next, update your plugins
  • Finally, update your theme

Always make sure your website is working properly before moving on to the next step.

Of course, if there is no update for WordPress core, then you can update your plugins or theme whenever new versions become available.

We hope this tutorial helped you learn the correct order to use when updating WordPress core and plugins. You may also want to learn how to properly install Google Analytics in WordPress, or check out our list of must-have WordPress plugins to grow your site.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.


What Makes A Great Toggle Button? (Case Study, Part 2)

In the first article of this two-parter, we covered a crucial yet unresolved problem in UI design concerning toggle buttons. Getting across which option is active isn’t easy. There are many visual cue types to choose from: font style, colors, and outlines, just to name a few. To assess which visual cues most effectively communicate the option toggled on, we conducted a thorough case study with over 100 real users and 27 visual cues. Read on to learn about our findings and their implications to take away when designing toggle buttons of your own.

Case Study Results

Let’s see what we found out about effective ways to put an emphasis on a button to make it clear that it’s active. But first, a quick summary of our participants.

Participant Review

After our data collection was completed, we first had to review the quality of the participants in our study. This review led to the disqualification of some participants, mainly those who showed signs of choosing answers at random (a 50-50 split), a clear sign of not making a genuine effort to complete the tasks. After we removed these offenders, we were left with the following numbers of participants per study:

  • 5-Second Test — Group 1: 28 participants, Group 2: 29 participants
  • 20-Second Test — Group 1: 30 participants, Group 2: 27 participants

Note: These numbers are still higher than the minimum number of results we set out to collect, since we accounted for a dropout rate of up to 16% when launching our recruitment online.

Metric For Comparing Utility Of Visual Cues

We conducted four studies with the Five Second Test tool: two with a 5-second time limit and two with a 20-second limit. We needed a metric that could objectively compare toggles to each other and show how a specific toggle fared in the 5-second and 20-second test variants.

We created a weighted metric, which we named the Success-Confidence score. The Success-Confidence score is derived from the number of correct answers (according to expectations) combined with the Likert scale answers to the question: “How sure do you feel about your answer?”

First, we calculate the average confidence for correct and incorrect answers separately and for every toggle. Average confidence can range from 0 to 1 based on how participants answered the Likert scale question. For example, if every respondent who chose the correct toggle side were to respond with “Absolutely sure” on the Likert, the average confidence for the correct answers for the given toggle would be 1.

We then used the calculated average confidence for correct and incorrect answers and calculated the Success-Confidence score of the toggle by using the following formula:

Success-Confidence score = (correct_num × correct_conf) − (incorrect_num × incorrect_conf)

where:

  • correct_num — number of correct answers for the toggle
  • incorrect_num — number of incorrect answers for the toggle
  • correct_conf — average confidence of the correct answers
  • incorrect_conf — average confidence of the incorrect answers

Since we had different numbers of participants available for each test, we normalized the Success-Confidence score by dividing it by the total number of participants for the given test. This results in the following formula:

Normalized Success-Confidence score = Success-Confidence score / number of participants

The normalized Success-Confidence score ranges from -1 to 1. A score of -1 designates a toggle where all participants gave wrong answers with high confidence, and 1 designates a toggle where all respondents answered correctly with high confidence.
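To make the calculation concrete, here is a small sketch of how the normalized score could be computed from raw answers. The data shape (an array of answers with a correctness flag and a 0-1 confidence value) is our own illustration, not the exact format used in the study.

```typescript
// One participant's answer for a single toggle: whether they picked the
// expected side, and their Likert confidence mapped to the 0-1 range.
interface ToggleAnswer {
  correct: boolean;
  confidence: number; // 0 = "Not sure at all", 1 = "Absolutely sure"
}

function normalizedSuccessConfidenceScore(answers: ToggleAnswer[]): number {
  const correct = answers.filter((a) => a.correct);
  const incorrect = answers.filter((a) => !a.correct);

  // Average confidence within a group; 0 if the group is empty.
  const avgConfidence = (group: ToggleAnswer[]): number =>
    group.length === 0
      ? 0
      : group.reduce((sum, a) => sum + a.confidence, 0) / group.length;

  const score =
    correct.length * avgConfidence(correct) -
    incorrect.length * avgConfidence(incorrect);

  // Normalize by the total number of participants so tests of different
  // sizes can be compared; the result falls between -1 and 1.
  return score / answers.length;
}

// Example: 27 of 30 participants answer correctly with high confidence.
const answers: ToggleAnswer[] = [
  ...Array.from({ length: 27 }, () => ({ correct: true, confidence: 0.9 })),
  ...Array.from({ length: 3 }, () => ({ correct: false, confidence: 0.5 })),
];
console.log(normalizedSuccessConfidenceScore(answers).toFixed(2)); // 0.76
```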

Evaluation Of Research Questions

RQ 1: Bold text

A low error rate of 1.7% and a high Success-Confidence score of 0.86 confirmed our expectation that bold text makes a toggle option appear active compared to regular text. This version of the toggle even performed well enough to earn the third best average rank among all the evaluated toggles. Based on this result, we can safely pronounce bold text in the active button as a functional yet simple solution for communicating which toggle option is selected. This knowledge should be particularly useful if all your toggle buttons use fonts of equal weight, as is often the case.

RQ 2: Text size

We tested four toggles with varying size differences between the text in the active and inactive buttons. As predicted, the toggle where the font size difference was barely noticeable performed the worst with an error rate of almost 15% and a confidence score of only 0.63. Compared to that, the toggle with the greatest difference in font size was perceived with an error rate of only 4.4% and a confidence score of 0.81, which are both significant improvements when compared to the smallest difference. The performance of the two middle toggles was set between these two extremes. Unexpectedly, the toggle with the second smallest difference slightly outperformed the toggle with the second greatest difference. However, this irregularity is small enough to be explained by noise in the data.

Since the performance rate grew in general with the font size, our expectation of “greater size difference means better visual cue for toggles” was confirmed. However, since using a significantly bigger font to represent the active state can prove to be visually unappealing, we suggest using bold text instead. Bold text is not only easier to include in your design, but also performs better.

RQ 3: Contrast of inverted colors in text labels

The black & white and orange & blue inverted color combinations proved to be some of the worst performing toggles we tested, with error rates of 19.3% and 23.7% and confidence of only 0.56 and 0.41, respectively. The low confidence levels suggest that even the respondents who managed to select the correct answer weren’t at all sure of their choice. Our prediction that the darker color would be perceived as active was confirmed by an error rate below 0.5 (50%) in both cases. However, the low confidence undermines the value of those lower error rates. This means that our hypothesis that inverted font colors are an ineffective visual cue was confirmed. Try to avoid using colors of the same visual importance, as also seen in research question number 8, which concerns toggle backgrounds.

RQ 4: Cultural perception of red vs. green in text labels

A seemingly surprising (although not completely unexpected) result came from the toggle with red and green text. The error rate for this toggle was 32.5% and the confidence only 0.32, making it one of the worst performing toggles overall, with an average rank of 24.67. This result suggests that the red/green combination not only fails to improve on other color pairs but actually makes things worse. A possible explanation is that the green color was perceived as a switch rather than a sign of an active state. Red-green colorblindness is also the most common type of color vision deficiency, which is reason enough not to use this visual cue, as the wrong answers in our experiment also reflect.

RQ 5: Color vs. black/white in text labels

The combination of colorful and white labels performed well (avg rank of 9.33). The toggle which was surprisingly problematic was the combination of color and black. This toggle with an error rate of 14% and confidence of only 0.59 shows that the participants weren’t able to pick the active side reliably. We predict that this phenomenon was most likely caused by the visual strength of black text compared to colored text, regardless of hue. Therefore, simply distinguishing active and inactive toggles by turning inactive black text colorful isn’t recommended. For better color-based approaches, simply continue reading our findings for the next research questions.

RQ 6: Primary color vs. neutral color (shades of gray) in text labels

Compared to the toggles from the immediately preceding research question, this toggle represents a middle ground between the white and black inactive options with its gray color. This was reflected in the resulting average rank of 12, which is better than the color vs. black option, but worse than color vs. white.

RQ 7: Different saturation of the same color in text labels

The last text color variant of toggles we tested also confirmed our theory. The difference in saturation was a strong enough cue to secure satisfying results (an error rate of 8.7% with a confidence of 0.77). This suggests that the respondents reliably selected the correct option. Note that while the error rate was comparable to primary vs. neutral color, different saturations of the same color inspired higher confidence. Therefore, the preferable option is to use a lower saturation of the same color instead of greyscale for inactive toggle buttons.

RQ 8: Contrast of inverted colors in the background and RQ 9: Cultural perception of red and green in the background

The toggles defined in these hypotheses were counterparts to the toggles from hypotheses 3 and 4. We kept the color pairs the same, but this time we filled the toggle’s backgrounds with the color instead of coloring the text. The results with background colors followed the same pattern as with the text, with the black-&-white combination performing the best, the orange-&-blue coming second and red-&-green taking last place. However, compared to the colored text variants, the filling variants performed better than their text alternative (error rate improvement by 5-8%).

What may seem counterintuitive at first is that although black-&-white filling has a stronger potential to stimulate confusion due to dark/light mode settings, it still performed better than black-&-white text alternative or inverted colors with hue. How a light/dark mode setting would affect the results for this specific toggle could bear further investigation. However, for building an optimal toggle, it might be unnecessary, considering the overall better results achieved by other types of toggle backgrounds.

RQ 10: Different saturations of the same color in the background

Different shades of orange achieved an error rate of 9.7% and a normalized Success-Confidence score of 0.72. Compared to that, different shades of gray had an error rate of 15% and a normalized Success-Confidence score of 0.63 for the gray toggle — both overall decent scores which proved these visual cues as usable. The improvement of the orange color over the greyscale variant has been significant (resulting in an average rank of 13.67 compared to 18). It is important to note that even though the orange variant performed better than the gray one, their performance was still average at best. If background colors are used in this form, we recommend combining them with another visual cue.

RQ 11: Saturated colors and grayscale colors in the background

As expected, the version where the inactive button was a lighter shade of gray performed better (6.1% error rate and 0.79 confidence) than the darker gray version (12.3% error rate and 0.66 confidence). It also outperformed the orange version from hypothesis 10 and overall performed well, earning the average rank of 6.67 (sixth best). The more saturated version was placed in the bottom half but still managed to outperform the grayscale version from hypothesis 10 (average rank 15 compared to 18). The results of these two hypotheses suggest that if we want to use a saturated color fill to denote activity, it is best coupled with low saturated gray.

RQ 12: Inverted design of buttons

We believed that the inversion of design would be more confusing to the users than the saturations described in hypotheses 10 and 11. With a 6.1% error rate and 0.78 Success-Confidence score, this toggle ranks just below the best saturation variant (saturated color and less saturated gray), scoring seventh place overall with an average rank of 7.33. However, it is important to note that this toggle performed significantly worse in the 20-second test compared to the 5-second test (a drop of 9 between the rankings). This can be explained by the fact that the half with the filled background (the correct one to pick) lures the user’s attention very quickly (resulting in better performance on a 5-second test). However, when the user is provided with a longer time to observe the toggle, they start to question their instincts, resulting in a more than doubled error rate (from 3.5% to 8.8%). Therefore, we recommend avoiding inversion of toggle buttons in favor of visual cues that avoid potential confusion and don’t highlight the inactive button in any way.

RQ 13: Highlighted outline of the active button

As expected, the highlighted outline provided a reliable cue for respondents to decide (8.8% error rate and 0.76 Success-Confidence score). The average rank of 10 puts this toggle in the top half of toggles performance-wise. Due to being outperformed by other visual cues, a combination with another cue is recommended for better visual clarity.

RQ 14: Inactive button coincides with the background

Another exciting result. Although we suspected that the respondents could have problems perceiving the inactive button as a button at all, this toggle achieved stellar results. With an error rate of only 0.9% and confidence of more than 0.89, it ranked first overall with an average rank of 1.33, which is an improvement over the simple saturated color vs. grayscale toggle seen in RQ11. This means that having the inactive button of the same color as the surroundings is a supreme way to communicate selection in a toggle button.

RQ 15: Embossed vs. debossed button

The error rate for both embossed toggles was 83.3%, and both had the same confidence score of -0.58. This means that chasing skeuomorphism isn’t always the right solution, at least when it comes to toggles.

We expect this result is due to the common use of embossing effects in digital interfaces to bestow more weight on interface elements. A toggle with more visual weight would be perceived as active.

RQ 16: Check sign

As expected from its straightforward nature, the check sign icon added to the active button performed very well, achieving the second best average rank of 2.33 with only a 5% error rate and a Success-Confidence score of 0.86. The only problem we see in choosing this toggle is its potentially cumbersome inclusion in a web design, and it may invite unwanted associations with checkboxes.

RQ 17: Radio button

Even though the nature of the radio button toggle is similar to the check sign design, when used as an icon, its meaning is less explicit. This was confirmed by achieving a worse average rank of 5.67 and a higher error rate of 9% combined with a lower Success-Confidence score of only 0.8. Despite the rather good performance of this visual cue, using radio buttons as toggles doesn’t align with their semantics since the radio buttons are meant to be used in forms, while toggles are meant to signify an immediate change of state.

Ranking The Visual Cues

We ranked the visual cues represented by toggles separately for the results they achieved in 5-second tests, 20-second tests, and the two combined. This resulted in 3 separate rankings. We calculated the average rank for every toggle and came up with the three worst and three best toggles.

Worst Toggles

Third last place — Toggle #9 — Red & Green Text Labels

  • Average rank: 24.67
  • 5-second test rank: 25
  • 20-second test rank: 24
  • Combined rank: 25

Second last place — Toggle #22 — Embossed button (no shadow version)

  • Average rank: 26.33
  • 5-second test rank: 27
  • 20-second test rank: 26
  • Combined rank: 26

Last place — Toggle #27 — Embossed button (shadow version)

  • Average rank: 26.67
  • 5-second test rank: 26
  • 20-second test rank: 27
  • Combined rank: 27

Best Toggles

Third place winner — Toggle #2 — Bold text

  • Average rank: 2.67
  • 5-second test rank: 4
  • 20-second test rank: 2
  • Combined rank: 2

Second place winner — Toggle #24 — Check sign

  • Average rank: 2.33
  • 5-second test rank: 1
  • 20-second test rank: 3
  • Combined rank: 3

First place winner — Toggle #26 — Inactive button coincides with the background

  • Average rank: 1.33
  • 5-second test rank: 2
  • 20-second test rank: 1
  • Combined rank: 1

Difference Between The 5-Second And 20-Second Tests

Our secondary goal was to learn how the perception of toggles differs based on the time respondents had to observe them before deciding on an answer. Our expectation was that the results from the 20-second tests would be better overall (lower error rate and higher confidence score) than the results of the 5-second tests, since the participants would have more time to think about the toggles in front of them.

We have calculated the average values and the results can be seen in the following table:

  • 5-second test — average error rate: 0.1728, average normalized Success-Confidence score: 0.5749
  • 20-second test — average error rate: 0.1670, average normalized Success-Confidence score: 0.6013

The results confirmed our expectations since the average error rate was lower in the 20-second tests and the Success-Confidence score was higher. However, these differences were not significant. What interested us was whether any specific toggles showed significant differences between the two test variants. Therefore we focused on toggles that showed the biggest improvements/deteriorations between the 5-second and 20-second test results.

Toggles that performed better after 20 seconds

The greatest improvement in the number of ranks gained between a 5-second and a 20-second test is shared between toggles #4, #11, and #18 seen below. They all gained 6 ranks once participants had more time to observe them. This signifies that the clarity of the cues improved with added observation time.

  • 5-second test rank: 16
  • 20-second test rank: 10
  • Error Rate Difference: -0.0527
  • Normalized Success-Confidence Score Difference: 0.0913

This visual cue had the second smallest font size difference between the active and inactive states. We believe the change in rank is due to some participants needing time to notice smaller font size differences. However, the difference was noticeable enough to matter when the additional time was added to the test.

The next two toggles have enough in common for us to analyze them together.

  • 5-second test rank: 12
  • 20-second test rank: 6
  • Error Rate Difference: -0.0526
  • Normalized Success-Confidence Score Difference: 0.0912

  • 5-second test rank: 17
  • 20-second test rank: 11
  • Error Rate Difference: -0.0526
  • Normalized Success-Confidence Score Difference: 0.0772

Both these cues were designed in a way that the more pronounced/saturated color denotes the active option while the inactive option is portrayed by a lighter color. The difference in results shows that a certain percentage of users initially view a lighter color as the more pronounced one. However, the percentage decreases when users spend more seconds thinking about the toggle. To make a toggle that is easy to comprehend right away, an interface designer should probably look at the other visual cues.

Toggles that performed worse after 20 seconds

Toggle 15

Toggle 17

  • Toggle 15 — 5-second test rank: 11, 20-second test rank: 19, error rate difference: 0.0526, normalized Success-Confidence score difference: -0.1018
  • Toggle 17 — 5-second test rank: 15, 20-second test rank: 21, error rate difference: 0.0877, normalized Success-Confidence score difference: -0.1299

Toggle 15 showed the biggest drop in rank, while toggle 17 suffered the most significant negative changes in error rate and confidence score.

We explain the drop in these two by the fact that both toggles are similar in one way: each has a dark half and a light half, which means they would be perceived differently depending on, for example, whether a mobile device is set to light or dark mode. While the user’s instinctive reaction may be to pick the darker color as active, given some time, more people begin to second-guess themselves. Instead of the darker color capturing their gaze, they may start overthinking that the brighter color is highlighted against the dark. A good toggle shouldn’t encourage such doubts.

Potential For Future Research

All the cues we tested in our study were simple and singular. Going from here, the natural next step for research would be to go deeper, with a study that evaluates our findings in more detail: Can bold text still signal the active option if the inactive button’s text is even bolder? Will a combination of visual cues perform better than either cue individually? While the answers may seem intuitive, research data may prove otherwise, as our study has shown.

Another next step would be testing the effect of color alterations. Would the saturation of green work just as well as the saturation of orange?

Testing the performance of visual cues in prototypes of website navigation using different color schemes is another ambitious area for continued research. We tested our toggles in the void, but it’s possible that their performance would vary depending on the visual context.

Conclusion

In this article, we described our research where we analyzed a complex list of visual cues used by toggle buttons to communicate which of their options is active. By testing our research questions with real users, we collected a respectable amount of data to make reliable statements about the effectiveness of visual cues.

Here are some of the main points we arrived at that you should keep in mind when designing your next toggle button:

  • If you choose to use color as the main lead, we suggest you use a combination of a saturated lively color (ideally corresponding with your CTA color scheme) and a light grayscale neutral color. Using the colors in the toggle’s background fill is preferable to using colored text. If the color of the inactive button is the same as the surrounding background, this will further improve the button’s comprehensibility.
  • Contrasting colors of similar visual weight should not be used under any circumstances. Red and green’s cultural perceptions won’t help you communicate what’s selected. There are much better ways to go about this. Be wary of the black and white combination as well. Toggles that use this color scheme are the ones most prone to the confusion rooted in the dark/light mode settings.
  • You may choose a minimalistic path and use the font itself to show the difference between button states. The bold-thin combination is the go-to solution, but you may also use different font sizes. Just make sure to differentiate the font sizes well enough. Using font-weight or size is recommended to support other visual cues as well since it’s very flexible.
  • If you decide to use embossment as the main cue — you really shouldn’t. It proved to be unreliable at communicating the active state of a toggle. Even a simple border was more effective. If you decide to use embossed toggles for their visual appeal, we suggest combining embossment with a primary visual cue, such as bold text or color fill.
  • There’s no shame in using designs that you are sure will work. A tick or a radio-button icon both performed very well. The evident drawback of choosing them is the cumbersome inclusion in the design of your web since radio buttons as UI elements serve a different function from toggles. The ticks could be perceived as outdated (akin to a physical form more than a website). As for radio button icons, you might as well use a radio button instead.

Follow these tips, and your toggle button designs will no longer cause users to hesitate about what’s selected at the moment.
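To translate these takeaways into markup, here is a minimal React sketch. It is our own illustration rather than code from the study, and it combines the best-performing cues: the active button gets a saturated background fill and bold text, while the inactive button blends into the surrounding background.

```tsx
import { useState } from "react";

// A two-option toggle that combines the winning cues from the case study:
// bold text and a saturated fill on the active button, while the inactive
// button shares the background color of its surroundings.
export function Toggle({ options }: { options: [string, string] }) {
  const [active, setActive] = useState(0);

  return (
    <div style={{ display: "inline-flex", background: "#f4f4f4", padding: 4, borderRadius: 8 }}>
      {options.map((label, index) => {
        const isActive = index === active;
        return (
          <button
            key={label}
            onClick={() => setActive(index)}
            aria-pressed={isActive}
            style={{
              border: "none",
              borderRadius: 6,
              padding: "8px 16px",
              cursor: "pointer",
              fontWeight: isActive ? 700 : 400, // bold text marks the active option
              // Active: saturated fill; inactive: same color as the surroundings.
              background: isActive ? "#e8650d" : "#f4f4f4",
              color: isActive ? "#fff" : "#333",
            }}
          >
            {label}
          </button>
        );
      })}
    </div>
  );
}

// Usage: <Toggle options={["Monthly", "Yearly"]} />
```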

Resources

How to Choose the Right Digital Experience Monitoring Solution

Today’s business landscape is increasingly competitive, demanding that companies maintain an agile mindset when differentiating their products and services from competing brands. For many organizations, this differentiation comes in the form of better user engagement strategies designed to improve the availability and reliability of web services and applications.

But while most businesses understand the basic concept of managing the “digital experience” of their customers, many do not recognize the key ingredients to its long-term success—performance monitoring and optimization.

Databases For Front-End Developers: The Rise Of Serverless Databases (Part 1)

As front-end developers, we understand the foundational role data plays in our daily jobs. It may come from an external API, a CMS, or even a spreadsheet. But god forbid we need to talk about setting up databases.

Those days are over. With serverless databases becoming popular by the day, it has never been easier to create a full-stack architecture with both vertical and horizontal scaling, high availability, and bulletproof consistency.

To fully reap the benefits of such an architecture, it’s essential to understand what decisions are made for you. In the same way that the “learn JavaScript, not a framework” mantra became popular, we also ought to understand the concepts behind database architecture in order to use them reliably. So, welcome to the first part of our “Databases for Front-end Developers” series.

This series is not going to make you an expert on distributed systems or capable of jumping into a database admin role, but it will shed some light on the concepts, terms, and acronyms you will face when getting ready to choose your next stack. See it as a primer on (serverless) databases. Hopefully, it will give you a push into the rabbit hole and make you confident in joining conversations to evaluate tradeoffs for different solutions.

Spreadsheets And Content Management Systems

What?! Spreadsheets? Well, yes. The user interface (you and I, or U and I, or UI) is quite similar to that of a database. Spreadsheets give you a table in which to store data. In some cases, they will only allow you to define specific data types per column. The similarities are there, but they come to an abrupt end once we pop the hood.

The availability is questionable: spreadsheets are meant to store content, not serve it. For starters, they will not fuel an app as it scales, and they may not obey certain best practices when it comes to assuring data integrity. Until very recently, they were the quickest way to get started with some data layer. But now, there is no point for an app not to use a real (serverless) database (more on this later).

A Content Management System (CMS) is another kind of database. “Content” is a special kind of data that the CMS specializes in. It will provide the user (developer) with enough abstractions to facilitate managing such data to a point where the underlying database is not a concern. It will handle the deliverability, availability, and integrity of your data. But the heavier the abstraction is, the higher the tradeoff. The data types are limited to what the CMS will give you, with most even imposing their own architecture for handling relations, queries, types, etc. Of course, there are still significant and viable use cases for CMSs, and they aren’t going anywhere. So, as long as you’re sure that’s your use case, you’ll be fine with one.

Growth Pains

If you choose the simpler, “abstractionful” route of a spreadsheet or a CMS as your source of truth and your data begins to diversify, obstacles will show up. The first issue with a spreadsheet is usually the underlying API: it’s often not built for the traffic of even an average-sized app, and then the first refactoring conversations begin.

With a CMS, APIs are usually not the problem, but managing the data can be. As an app grows and data diversifies, some of it ends up not being content anymore and may be more related to application logic.

When data is not content, managing it in a CMS is not ideal. It’s less flexible and often doesn’t fit the owner-team workflow. Now, while it is perfectly possible for other databases and CMSs to coexist, it’s up to the developers to understand the pros and cons of each solution and decide what is best for their app’s delivery and user experience.

Database Admin Is Hard

As front-end developers, the first time we talk about databases is usually a conversation about “relational vs. non-relational.” From then on, while trying to figure out the differences, we loosely hear a myriad of terms, such as ACID, BASE, and even CAP Theorem. This article will skip a thorough explanation of these differences. We will look better into them in the next part of this series. For now, it is sufficient to say “non-relational” databases impose eventual consistency on an app.

Eventual consistency can also be unwrapped into a longer discussion, but let’s take it as this:

Eventual consistency means that in certain special conditions, the data received is stale.

Take comments on a blog post: it won’t hurt your app if, a few seconds after a write, you still don’t see the latest one. But password updates need to be strongly consistent at all times, not eventually consistent.

Of course, those are not the only differences. Query performance also differs between each type of database. One can imagine that eventual consistency allows for quicker reads because fewer guarantees are involved.
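As a rough mental model (a toy simulation of our own, not any particular database’s API), imagine a primary that copies writes to a read replica with a small delay; reads that hit the replica before replication finishes return stale data.

```typescript
// A toy model of eventual consistency: writes go to the primary and are
// copied to a replica after a replication delay. Reads from the replica
// can return stale data until replication catches up.
const primary = new Map<string, string>();
const replica = new Map<string, string>();

const REPLICATION_DELAY_MS = 100;

function write(key: string, value: string): void {
  primary.set(key, value);
  // Replication happens asynchronously, some time after the write succeeds.
  setTimeout(() => replica.set(key, value), REPLICATION_DELAY_MS);
}

function readFromReplica(key: string): string | undefined {
  return replica.get(key);
}

write("latest-comment", "Great post!");
console.log(readFromReplica("latest-comment")); // undefined — still stale
setTimeout(() => {
  console.log(readFromReplica("latest-comment")); // "Great post!" — converged
}, REPLICATION_DELAY_MS * 2);
```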

More Growth Pains

Once the database is decided, the app can grow steadily and smoothly for a while. As an app gets big, data complexity grows, and as data complexity grows, the database becomes slower. At scale, how do we make a database faster?

  • Do you add more resources to a single server? (vertical scale)
  • How do you replicate data across a cluster of machines?
    • Do you split your database into smaller partitions (shards) instead? (horizontal scale, more about this in part 2)
  • Do you add a faster in-memory database in front of it for common queries? (key-value store)

Those are not easy questions to answer. It depends on the user base, the type of data, the amount, frequency, and origin of queries. Is your database read-heavy or write-heavy? And though there is a multitude of factors impacting this decision, there’s also a high cost attached to making the wrong choice.
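To make the last option from the list above more concrete, here is a minimal cache-aside sketch. The in-memory Map stands in for a key-value store such as Redis, and `queryDatabase` is a hypothetical helper, not a real API.

```typescript
// Cache-aside: check the fast in-memory store first, fall back to the
// database on a miss, then populate the cache for subsequent reads.
const cache = new Map<string, string>(); // stand-in for Redis or similar

// Hypothetical database call; in a real app this would be your DB client.
async function queryDatabase(key: string): Promise<string> {
  return `value-for-${key}`;
}

async function getWithCache(key: string): Promise<string> {
  const cached = cache.get(key);
  if (cached !== undefined) {
    return cached; // cache hit: no database round trip
  }
  const value = await queryDatabase(key); // cache miss: ask the database
  cache.set(key, value); // store for the next read
  return value;
}

getWithCache("popular-post").then(console.log);
```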

Additionally, some use cases may even require making data easier to search from user-land. A search engine is not an easy problem to solve and often requires an additional type of database to properly index your data (if the data is sharded, it’s even harder). Keeping all of this around your users’ data also brings a whole set of tools along with it just to stay maintainable.

Even more, keeping an eye on our databases (now “data infrastructure” if we’ve got a search engine in the mix) requires a high level of observability and OLAP (Online Analytical Processing). This introduces a whole new level of complexity!

As you may have noticed, very high stakes are associated with creating, maintaining, and growing a database. These are decisions that can make or break an app, that are costly to go back on, and that must be made relatively early.

Serverless Databases Are Fun

Because of all the complexity mentioned above, many investors and incubators have their eyes turned to startups creating serverless databases. They are a whole new category of databases. The concepts of traditional ones still apply, but differently.

Serverless Databases

To understand what a “serverless database” really is, we first need to deconstruct the term. It is a common joke that “serverless” is a misnomer. Still, the point of a serverless architecture is to abstract away from the consumer (developer) the complexity of site reliability and server maintenance, which is handled by a serverless vendor such as Netlify, Vercel, Amazon Web Services (AWS), and so many others. I tend to like Xata’s definition of “serverless database”.

A “serverless database” does for databases what serverless does for servers. The complexity is lifted away (to different degrees depending on the chosen platform). Some, like Supabase and Firebase, will offer a multitude of serverless-related features to couple with your database; others, like AWS Aurora or PlanetScale, focus on making it easier to use and scale PostgreSQL and MySQL databases. And finally, there are others that abstract the database entirely, like Xata. They provide you with an ORM-like SDK, keep the database behind an API, and are able to offer a complex set of database features, bending the current limitations of traditional relational and non-relational databases.
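As a taste of what an ORM-like SDK looks like in practice, here is roughly how a query reads with Supabase’s JavaScript client. The table and column names are made up for the example; check the Supabase docs for the exact API.

```typescript
import { createClient } from "@supabase/supabase-js";

// The URL and anon key come from your Supabase project settings.
const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

async function getPublishedPosts() {
  // Reads like a query builder: no connection pools or SQL strings to manage.
  const { data, error } = await supabase
    .from("posts")
    .select("id, title, published_at")
    .eq("status", "published")
    .order("published_at", { ascending: false })
    .limit(10);

  if (error) throw error;
  return data;
}
```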

Once we get to the next part of this series, we will talk about different kinds of databases. Then you will be ready to pop the hood on any serverless database offering you want and understand the differences for yourself. Meanwhile, let’s keep it superficial.

Batteries Included

Don’t take the “serverless” prefix lightly; these databases are a different breed. They are able to offer guarantees and performance that “traditional” databases require considerable effort to reach, and sometimes never do. This is because with serverless databases, the work has already been done, just not by your team.

The same way “serverless” means you don’t need to handle your server, “serverless database” means you don’t need to handle your database. The platform will handle it for you.

Because of this, the decisions about scalability and deliverability are often made external to your team. What your team gets is the assurance that any request will receive a response in a timely manner and that data will respect the consistency guarantees. Again, different solutions have different tradeoffs. It’s important to check what each offering imposes before jumping in.

See You On The Next One

Hopefully, this has been enough to spark your curiosity. This is the first article of a 3-part series. In the next ones, we will cover more in-depth information about what databases actually are. Specifically, we’ll look into:

  • Schemas,
  • Theorems and models,
  • Types of databases,
  • whatever you suggest in the comments below!

All that necessary knowledge will enable you to choose the best solution for your app. Understanding the tradeoffs of different serverless solutions and surrounding yourself with the right kind of help is crucial to setting your app up for success. Reach out to me if you need anything meanwhile. Otherwise, see you in a few days!


Core Web Vitals Tools To Boost Your Web Performance Scores

The success of your website depends on the impression it leaves on its users. By optimizing your Core Web Vitals scores, you can gauge and improve user experience. Essentially, a web vital is a quality standard for UX and web performance set by Google. Each web vital represents a discrete aspect of a user’s experience. It can be measured based on real data from users visiting your sites (field metric) or in a lab environment (lab metric).

Several user-centric metrics are used to quantify web vitals. They keep evolving, too: there have been conversations around slowly adding accessibility and responsiveness as web vitals as well. In fact, Core Web Vitals are just a part of this larger set of vitals.

It’s worth mentioning that good Core Web Vitals scores don’t necessarily mean that your website scores in the high 90s on Lighthouse. You might have a pretty suboptimal Lighthouse score while having green Core Web Vitals scores. Ultimately, for now it seems that only the latter contributes to SEO ranking — both on mobile and on desktop.

While most of the tools covered below rely only on field metrics, others use a mix of both field and lab metrics.
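If you want to collect the field side of that data yourself, Google’s web-vitals library reports these metrics from real users’ browsers. A minimal sketch (function names follow web-vitals v3; earlier versions used getCLS-style names):

```typescript
import { onCLS, onINP, onLCP } from "web-vitals";

// Each callback fires when the metric value is ready (or finalized as the
// page is hidden) so it can be logged or sent to an analytics endpoint.
function reportMetric(metric: { name: string; value: number; rating: string }) {
  // Replace with a beacon to your own analytics backend.
  console.log(`${metric.name}: ${metric.value} (${metric.rating})`);
}

onCLS(reportMetric);
onINP(reportMetric);
onLCP(reportMetric);
```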

PageSpeed Compare

PageSpeed Compare is a page speed evaluation and benchmarking tool. It measures the web performance of a single page using Google PageSpeed Insights. It can also compare the performance of multiple pages of your site or those of your competitors’ websites. It evaluates lab metrics, field metrics, page resources, DOM size, CPU time, and potential savings for a website. PageSpeed Compare measures vitals like FCP, LCP, FID, CLS, and others using lab and field data.

The report it generates lists the resources loaded by a page, the overall size for each resource type category, and the number of requests made for each type. Additionally, it examines the number of third-party requests and resources a page makes. It also lists cached resources and identifies unused JavaScript. PageSpeed Compare checks the DOM of the page and breaks down its size, complexity, and children. It also identifies unused images and visualizes layout shifts in a graph.

When it comes to CPU time, the tool breaks down the CPU time spent on various tasks, JavaScript execution time, and CPU blocking. Lastly, it recommends optimizations you can make to improve your page. It graphs server, network, CSS, JavaScript, critical content, and image optimizations to show the potential savings you could gain by incorporating fixes into your site. It gives resource-specific suggestions for optimizing the performance of your page. For example, it could recommend that you remove unused CSS and show you the resulting savings in a graph.

PageSpeed Compare provides web performance reports in a dashboard-like overview with a set of graphs. You can compare up to 12 pages at once, and because it uses PageSpeed Insights to generate reports, the results are presented in a simple and readable way. Network and CPU are throttled for lab data tests to simulate more realistic conditions.

Bulk Core Web Vitals Check

Experte's Bulk Core Web Vitals Check is a free tool that crawls up to 500 pages of the entire domain and provides an overview of the Core Web Vitals scores for them. Once the tool has crawled all the pages, it starts performing a Core Web Vitals check for each page and returns the results in a table. Running the test takes a while, as each web page test is done one at a time. So it’s a good idea to let it run for 15-30 mins to get your results.

What’s the benefit then? You get a full overview of the pages that perform best and the pages that perform worst — and you can compare the values over time. Under the hood, the tool uses PageSpeed Insights to measure Core Web Vitals.

You can export the results as a CSV file for Excel, Google Sheets or Apple Pages. The table format in which the results are returned makes it easy to compare web vitals across different pages. The tests can be run for both mobile and desktop.

Alternatively, you can also check David Gossage's article on How to review Core Web Vitals scores in bulk, in which he shares the scripts and how to get an API key to run the script manually without any external tools or services.
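For reference, a bulk check like this boils down to calling the PageSpeed Insights v5 API for each URL. A rough sketch in TypeScript (the response field names below are from memory, so verify them against the API documentation):

```typescript
// Query the PageSpeed Insights v5 API for one URL and pull out the
// CrUX field data used for the Core Web Vitals assessment.
async function checkCoreWebVitals(url: string, apiKey: string) {
  const endpoint = new URL("https://www.googleapis.com/pagespeedonline/v5/runPagespeed");
  endpoint.searchParams.set("url", url);
  endpoint.searchParams.set("key", apiKey);
  endpoint.searchParams.set("strategy", "mobile");

  const response = await fetch(endpoint);
  if (!response.ok) throw new Error(`PSI request failed: ${response.status}`);

  const result = await response.json();
  // loadingExperience holds field (real-user) data when enough is available.
  const metrics = result.loadingExperience?.metrics ?? {};
  return {
    lcp: metrics.LARGEST_CONTENTFUL_PAINT_MS?.percentile,
    cls: metrics.CUMULATIVE_LAYOUT_SHIFT_SCORE?.percentile,
  };
}

// Usage: loop over a list of URLs, waiting between requests to stay within
// the API quota, and write the results to a CSV for comparison.
```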

Treo

If you’re looking for a slightly more advanced option for bulk Core Web Vitals check, this tool will cover your needs well. Treo Site Speed also performs site speed audits using data from the Chrome UX Report, Lighthouse and PageSpeed Insights.

The audits can be performed across various devices and network conditions. With Treo, you can also track the performance of all the pages in your sitemap and even set up alerts for performance regressions. Additionally, you can receive monthly updates on your website’s performance.

With Treo Site Speed, you can also benchmark a website against competitors. The reports Treo generates are comprehensive, broken down by devices and geography. They are granular and available at domain and page levels. You can export the reports or access their data using an API. They are also shareable.

WebPageTest Core Web Vitals Test

WebPageTest is, of course, a performance testing suite on its own. Yet one of the useful features it provides is a detailed breakdown of Core Web Vitals metrics and pointers to problematic areas and how to fix them.

There are also plenty of Core Web Vitals-related details in the actual performance audit, along with suggestions for improvements which you can turn on without changing a line of code. For some, you will need a pro account though.

Cumulative Layout Shift Debuggers

Basically, the CLS Debugger helps you visualize CLS. It uses the Layout Instability API in Chromium to load pages and calculate their CLS. The CLS is calculated for both mobile and desktop devices and takes a few minutes to complete. The network and CPU are throttled during the test, and the pages are requested from the US.

The CLS debugger generates a GIF image with animations showing how the viewport elements shift. The generated GIF is a practical way to visualize layout shifts. The elements that contribute most to CLS are marked with squares so you can see their size and shift visually. They are also listed in a table together with their CLS scores.


CLS debugger in action: highlighting the shifts frame by frame.

Although the CLS is calculated as a lab metric initially, the CLS debugger receives CLS measurements from the Chrome UX Report as well. The CLS, then, is a rolling average of the past 28 days. The CLS debugger allows you to ignore cookie interstitials — plus, you can generate reports for specific countries, too.

Alternatively, you can also use the Layout Shift GIF Generator. The tool is available on its webpage or as a command line tool. With the CLI tool, you can specify additional options, such as the viewport width and height, cookies to supply to the page, the GIF output options, and the CLS calculation method.

Polypane Web Vitals

If you want to keep your Core Web Vitals scores nearby during development, Polypane Web Vitals is a fantastic feature worth looking into. Polypane is a standalone browser for web development that includes tools for accessibility, responsive design and, most recently, performance and Core Web Vitals, too.

You can automatically gather Web Vitals scores for each page, and these are then shown at the bottom of your page. The tool also provides LCP visualization, and shows layout shifts as well.

Notable Mentions
  • Calibre’s Core Web Vitals Checker allows you to check Core Web Vitals for your page with one click. It uses data from the Chrome UX Report and measures LCP, CLS, FID, TTFB, INP and FCP.

How to Disable Gravatars in WordPress

Do you want to disable Gravatars in WordPress?

WordPress uses Gravatars to display user profile photos, or avatars. Gravatar is a third-party service that allows users to have the same profile photo on different websites.

Gravatars are highly useful, particularly in WordPress comments. However, some users may not want to use Gravatars at all.

In this article, we’ll show you how to easily disable Gravatars in WordPress. We’ll also show you how to use local avatars instead.

Turn off Gravatars in WordPress

Why Disable Gravatars in WordPress

Gravatars are a third-party service that allows users to add a profile photo to their WordPress website and use it across the internet.

Basically, you create an account and then upload your profile photo.

Managing Gravatar profile

After that, whenever you use that particular email address on a website that supports Gravatar, it will automatically show your profile photo from the Gravatar website.

To learn more see our explainer, What is Gravatar and why you should use it.

However, some website owners may not want to use Gravatars for several reasons.

For instance, they may want to turn it off to improve website performance and speed.

Similarly, some site owners may not want to use Gravatar due to privacy concerns.

That being said, let’s take a look at how to easily disable Gravatars in WordPress.

Disabling Gravatars in WordPress

WordPress makes it super easy to customize or turn off Gravatars on your website.

First, you need to log in to the admin area of your website and then go to the Settings » Discussion page.

From here, you need to scroll down to the Avatars section and uncheck the box next to the ‘Show Avatars’ option.

Turn off Gravatars in WordPress

Don’t forget to click on the Save Changes button to store your settings.

WordPress will now disable Gravatars across your website. You’ll now see a generic user icon in the admin toolbar instead of your Gravatar image.

User profile photo disabled

Similarly, the comments page inside the admin area will also stop showing Gravatars.

Comments page without Gravatar images

WordPress will also stop showing Gravatar images in the comments area under your posts and pages.

Comments without Gravatars

How to Replace Gravatar with Local Avatars in WordPress?

Some users may want to disable Gravatar but still want to display profile photos under author bios and other places.

This allows you to keep the avatar functionality in WordPress and enable users to upload their own profile photos. At the same time, it disables Gravatars and prevents your website from making any requests to the Gravatar website.

To do this, you’ll need to install and activate the WP User Avatars plugin. For more details, see our step-by-step guide on how to install a WordPress plugin.

Upon activation, you need to visit the Settings » Discussion page and check the box next to the ‘Block Gravatar’ option.

Block Gravatar

Don’t forget to click on the Save Changes button to store your settings.

The plugin will now block any requests to Gravatar.com while allowing you to keep avatar functionality.

To set a profile photo, users will need to upload their own image under their profile.

Simply go to the Users » Profile page. From here, you can upload an image from your computer or use one from the media library.

Local avatar

Don’t forget to click on the Update Profile button to save your changes.

WordPress will now use custom profile photos instead of Gravatars. For all unregistered users, it will show the default avatar image you have set in the settings.

For all registered users, it will use the custom avatar image that they uploaded. If a user hasn’t uploaded their custom avatar image, then the plugin will use the default avatar image.

We hope this article helped you learn how to disable Gravatars in WordPress. You may also want to see our guide on how to make a membership website in WordPress, and our comparison of the best WordPress page builder plugins.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.


Collective #717




State of GraphQL

The new annual developer survey of the GraphQL ecosystem. Fill out the survey to help identify which GraphQL tools and features are actually being used.

Check it out










ATMOS

Get on board and discover the most surreal facts about the aviation industry. A fantastic experiment by the folks of Leeroy.

Check it out




Markwhen

In case you didn’t know about it: Markwhen is a text-to-timeline tool. You write markdown-ish text and it gets converted into a nice looking cascading timeline.

Check it out



Modern CSS Reset

CSS Reset that uses modern CSS features such as :where(), logical properties, prefers-reduced-motion and more. By Elly.

Check it out






7 Best Call Center Software For 2022 (Expert Pick)

Are you looking for call center software for your business?

Many customers like to reach out to businesses by phone to get information or help. Having a call center service for your business can streamline your customer support and provide a better user experience.

In this article, we’ll share the top call center software so that you can select the best option for your business.

Best call center software expert pick

How to Find the Best Call Center Software

Offering customer support through phone calls is a great way of helping your users. However, it can be hard for small businesses that are quickly growing to handle a large volume of phone calls.

With the help of call center software, you can respond to multiple phone calls at once, answer customers’ questions more quickly, help your sales team reach a larger audience, improve the overall support process, and increase customer satisfaction.

There are a few features you should look for when selecting call center software for your business, such as:

  • Interactive Voice Response (IVR) – You should choose a software solution that offers automated responses through IVR, greets customers when they call, and helps direct them to the right department.
  • Cloud Contact Center – A cloud-based call center allows your remote team to attend to customers from anywhere in the world without having to be on-premises.
  • Multichannel Support – Also called omnichannel routing, this lets your support staff respond to customers from social, live chat, email, phone calls, and other channels, all using the same software.
  • Call Routing & Voicemail Option – You should look for software that allows voicemail and call routing features, so customers can share their queries even when the call center agents aren’t available.
  • CRM Integrations – By integrating customer relationship management (CRM) software, you can make the best use of your customer information and get a complete picture of how often they call, their support tickets, and more. Some tools also offer CTI (computer telephony integration) to identify customers through phone numbers.
  • Reporting and Analytics Tools – Your call center software should provide additional reporting and analytics tools to see how well your customer support is performing.
  • Call Recording – You can perform quality management checks and training by listening to recent call recordings.

That said, let’s take a look at some of the best call center software you can choose for your business.

1. Nextiva

Nextiva

Nextiva is the best virtual business phone number service in the market. It’s the perfect solution for remote teams, since Nextiva is completely cloud-based.

Your support agents can simply log in to the Nextiva desktop or mobile app to handle all incoming calls. Plus, it includes complete help desk software as well. It lets you talk with customers across multiple communication channels, be it phone, voice, SMS, live chat, video, or social media.

With the Nextiva contact center solution, you also get screen popups that can be tailored according to your brand. There’s also a speech-enabled IVR feature that helps customers when they contact you.

You can take IVR a step further and automate routine tasks. This way, you can reduce the cost of hiring more agents and efficiently handle high call volumes. It also allows support agents to focus on attending important calls while IVR solves repetitive problems.

Besides that, Nextiva offers affordable cell phone plans and more features like a toll-free number, voicemail to email option, call recording, HD video conferencing, auto-attendant, and more.

You can also easily integrate it with different CRMs and communication tools like Salesforce, Oracle Sales Cloud, Microsoft, and more. It even offers APIs and SDKs for specific uses and allows you to set up workflow automation.

Note: At WPBeginner, we use Nextiva for all our business phone needs because the software offers robust features and affordable pricing plans. As a small business, it helps our team attend to incoming customer calls from anywhere in the world without having to share their personal cell phone numbers.

Besides that, Nextiva also allows us to send text messages and connect with customers through its video conferencing features.

Expert Review: In our experience, Nextiva helps provide exceptional customer experience and offers a complete cloud-based business phone service, which makes it the best call center software.

2. RingCentral

RingCentral

RingCentral is a popular business VoIP service provider and lets you set up a cloud call center solution for your business. You can quickly provide customer support from anywhere in the world and at any time.

It offers an omnichannel solution where you can define rules to route calls based on capacity, availability, and more. This way, you can speed up your customer support and easily have customer interactions on multiple channels at once.

With RingCentral, you can also boost your support agents’ productivity. The software offers gamification options that can be used to provide incentives to agents. Besides that, it includes complete workforce management tools that make it super easy to handle your team’s schedule and plan for inbound calls based on traffic volumes.

Another advantage of using RingCentral is that it provides detailed analytics about your customer support performance. You can monitor key performance indicators (KPIs) in real-time, track agent performance, set up call monitoring, self-service resources, and more.

Plus, there are data visualization and root cause analysis tools that help you build custom stats dashboards for reporting. Other than that, you get an automated IVR system, seamless integrations, a predictive dialer, and more with RingCentral.

Expert Review: RingCentral offers tailored solutions based on your audience or industry. Whether you’re in the finance, healthcare, education, or government sector, or have an eCommerce store, RingCentral is great call center software to have.

3. Ooma

Ooma

Ooma is an all-in-one virtual phone solution for businesses of all sizes. Whether you have a startup or a small business, or are running an enterprise, Ooma offers lots of features to keep your remote teams and customers connected.

Ooma makes it very easy for you to set up a cloud call center and provide exceptional customer support. It provides intelligent call routing functionality and lowers long call queues by helping customers find the right agent without going through repetitive or redundant steps.

You can also create customized call flows for your support team. The service offers a drag and drop call flow designer that helps you build a call sequence in a few minutes.

Other than that, Ooma also offers features like multi-level IVR and automatic call distribution based on caller data, business hours, and agent skills. You can even match callers to the right agent with intelligent reconnect, where the customer is automatically connected with the person they were speaking to before the call dropped.

Ooma also lets you monitor your customer support performance. However, it doesn’t match the 45 different reporting features and reports that Nextiva has to offer for measuring your VoIP call center efficiency.

Expert Review: Ooma is a great solution for small to medium-sized businesses looking to add a call center solution.

4. FreshDesk

FreshDesk

FreshDesk is a well-known platform that offers a complete contact center solution for businesses. Over 50,000 companies use FreshDesk to provide customer support.

FreshDesk Contact Center, previously known as Freshcaller, is easy to use and can be set up in just a few clicks. It also offers many features, like automated voice responses powered by artificial intelligence.

With FreshDesk, there’s an option to create a global contact center. You can bring your own carrier (BYOC) or purchase phone numbers from over 90 countries. Plus, it offers affordable pricing plans that you can scale as your business grows.

Other than that, it’s a complete omnichannel solution for your customer support. You can convert a call to a ticket and offer support to users from multiple channels in a single place while lowering wait time.

More features offered by FreshDesk include call recordings, call transcripts, call lifecycle information, voice bots, speech-enabled IVR, reporting tools to monitor agent performance and improve customer support, and more.

Expert Review: FreshDesk is a beginner-friendly call center solution. However, if you’re looking for more powerful features, then we recommend checking out Nextiva.

5. LiveAgent

LiveAgent

LiveAgent is the next call center software on our list, and it offers a lot of features like other services we’ve covered. However, what makes this service different is that you get a 14-day free trial to try the software before committing to a premium plan.

With LiveAgent, you get a cloud-based call center solution. The VoIP phone system helps your support agents to connect with customers from anywhere.

Plus, you get features like IVR, call back requests, call transfers, unlimited call recording, smart call routing, video conference calling functionality, in-app push notifications, chatbot, and automatic call distribution (ACD).

The software also integrates with popular CRMs like Salesforce and HubSpot. It also works seamlessly with email marketing tools like AWeber and Mailchimp. You can even integrate it with your WordPress website and add a live chat button.

Expert Review: If you’re looking for affordable pricing plans along with a free trial to test the software, then LiveAgent is the perfect tool for you.

6. 8×8

8x8

8×8 is a cloud communication platform that offers a secure call center solution. The service is loaded with features and offers 99.99% uptime across UCaaS and CCaaS.

What this means is that the service is reliable and guarantees faster performance without any delays or downtime. Besides, it has 35 data centers located globally to provide great quality of service.

It has a simple user interface and offers a detailed knowledge base, expert connect, and a complete communication hub to help you get started.

8×8 call center also provides features to handle inbound and outbound calls. For instance, you get easy call routing, call recording, speech and text analytics, omnichannel support, IVR, agent workspace management, a click-to-call option, and more. However, you’ll find more features in other software we’ve covered, like Nextiva and RingCentral.

The service also easily integrates with CRMs such as Salesforce, Microsoft Dynamics 365, Azure, and Zendesk. You can also improve your customer support through contact center analytics and even conduct surveys to get customer feedback.

8×8 call center pricing plans are on the expensive side, as they start from $85 per user per month. If you want a more affordable solution, then you’ll get more value for money using Nextiva.

Expert Review: 8×8 is a powerful virtual phone platform that offers a robust call center solution. It is great for SaaS enterprises and large organizations.

7. CloudTalk

CloudTalk

CloudTalk is the last call center solution on our list. It’s a popular virtual call center platform and powers over 2,500 call centers, including companies like DHL, Mercedes Benz, Fujitsu, and GoStudent.

The service offers 140 national phone numbers that you can use for your business or select a toll-free number. CloudTalk has also partnered with multiple telcos across the globe to provide a strong network and ensure crystal clear calls and reliable performance.

Other features offered by CloudTalk include call queuing, call recording, voicemail, adding extensions, fax to email, business hours, conference calls, call masking, 3-way calling, smart outbound auto dialer, and more.

You also get intelligent routing features like a complete call flow designer to create automated workflows, IVR, ACD, skill-based call routing, set a preferred agent for clients, call forwarding, VIP queues, auto-answer functionality, and more.

Expert Review: CloudTalk is a dedicated call center software. You can use it to provide inbound support, outbound sales, and easily collaborate with remote teams.

Which is the Best Call Center Software?

If you’re looking for a complete cloud-based call center, then we highly recommend Nextiva. The software ticks all the boxes for what you should look for in a call center solution.

It offers powerful features that go beyond simply creating a call center. Nextiva is a complete virtual phone solution for businesses that want to take their customer support to the next level.

You get IVR, call recording, video conferencing, mobile and desktop apps, detailed reports and metrics to track performance, and so much more with Nextiva. Plus, it easily integrates with different CRMs and marketing tools.

We hope this article helped you find the best call center software. You may also want to see our guide on how to choose the best blogging platform and the best WordPress plugins.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post 7 Best Call Center Software For 2022 (Expert Pick) first appeared on WPBeginner.

The Case For Prisma In The Jamstack

The Jamstack approach originated from a speech given by Netlify’s CEO Matt Biilmann at Smashing Magazine’s very own Smashing Conf in 2016.

Jamstack sites serve static pre-rendered content through a CDN and generate dynamic content through microservices, APIs & serverless functions. They are commonly created using JavaScript frameworks, such as Next.js or Gatsby, and static site generators — Hugo or Jekyll, for example. Jamstack sites often use a Git-based deployment workflow through tools, such as Vercel and Netlify. These deployment services can be used in tandem with a headless CMS, such as Strapi.

The goal of using Jamstack to build a site is to create a site that is highly performant and economical to run. These sites achieve high speeds by pre-rendering as much content as possible and by caching responses on “the edge” (A.K.A. executing on servers as close to the user as possible, e.g. serving a Mumbai-based user from a server in Singapore instead of San Francisco).

Jamstack sites are more economical to run, as they don’t require a dedicated server as a host. Instead, they can provision usage from cloud services (PaaS providers), hosts, and CDNs for a lower price. These services are also set up to scale in a cost-efficient manner without developers having to change their infrastructure, which reduces their workload.

The other tool that makes up this combination is Prisma — an open-source ORM (object-relational mapper) built for TypeScript & JavaScript.

Prisma is a JavaScript / TypeScript tool that interprets a schema written in Prisma’s schema language and generates a type-safe module that provides methods to create, read, update, and delete records (CRUD).

Prisma handles connections to the database (including pooling) and database migrations. It can connect with databases that use PostgreSQL, MySQL, SQL Server or SQLite (additionally MongoDB support is in preview).

To help you get a sense of Prisma, here’s some basic example code to handle the CRUD of users:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

const user = await prisma.user.create({
  data: {
    name: 'Sam',
    email: 'sam@sampoder.com',
  },
})

const users = await prisma.user.findMany()

const updateUser = await prisma.user.update({
  where: {
    email: 'sam@sampoder.com',
  },
  data: {
    email: 'deleteme@sampoder.com',
  },
})

const deleteUser = await prisma.user.delete({
  where: {
    email: 'deleteme@sampoder.com',
  },
})

The associated project’s Prisma schema would look like:

datasource db {
  url      = env("DATABASE_URL")
  provider = "postgresql"
}

generator client {
  provider = "prisma-client-js"
}

model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  name      String?
}
The Use Cases for Prisma

Armed with a knowledge of how Prisma operates, let’s now explore where we can use it within Jamstack projects. Data is important in two aspects of the Jamstack: whilst pre-rendering static pages and on API routes. These are tasks often achieved using JavaScript tools, such as Next.js for static pages and Cloudflare Workers for API routes. Admittedly, these aren’t always achieved with JavaScript — Jekyll, for example, uses Ruby! So, maybe I should amend the title to the case for Prisma in the JavaScript-based Jamstack. Anyhow, onwards!

A very common use case for the Jamstack is a blog, where Prisma comes in handy for building a reactions system. You’d use it in two API routes: one that fetches and returns the reaction count, and another that registers a new reaction. To achieve this, you could use Prisma’s create and findMany methods — there’s a small sketch below!
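
For instance, both routes could live in a single Next.js API route. The following is only a rough sketch — the Reaction model, its emoji field, and the file path are hypothetical and not from the article:

// pages/api/reactions.js — hypothetical file, model, and field names, for illustration only.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

export default async function handler(req, res) {
  if (req.method === 'POST') {
    // Register a new reaction
    const reaction = await prisma.reaction.create({
      data: { emoji: req.body.emoji },
    })
    return res.status(201).json(reaction)
  }

  // Fetch all reactions and return a simple count
  const reactions = await prisma.reaction.findMany()
  res.status(200).json({ count: reactions.length })
}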

Another common use case for the Jamstack is a landing page, and there’s nothing better than a landing page with some awesome stats! In the Jamstack, we can pre-render these pages with stats pulled from our database, which we can achieve using Prisma’s read methods — see the sketch below.
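
With Next.js, for example, that could look roughly like this. It’s a sketch that assumes the User model from earlier and queries the stat at build time so it gets baked into the static page:

// pages/index.js — a rough sketch; count() is one of the client's read/aggregation methods.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

export async function getStaticProps() {
  const userCount = await prisma.user.count()
  return { props: { userCount } }
}

export default function Landing({ userCount }) {
  return <h1>Join {userCount.toLocaleString()} happy users!</h1>
}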

Sometimes, however, Prisma can be slightly overkill for certain tasks. I’d recommend avoiding using Prisma and relational databases in general for solutions that need only a single database table, as it adds additional and often unnecessary development complexity in these cases. For example, it’d be overkill to use Prisma for an email newsletter signup box or a contact form.

Alternatives to Prisma

We could use Prisma for these tasks, but we could also use a plethora of other tools to achieve them. So, why Prisma? Let’s go through three alternatives to Prisma, and I’ll try to convince you that Prisma is preferable.

Cloud Databases / Services

Services like Airtable are incredibly popular in the Jamstack space (I myself have used it a ton). They provide you with a database-like platform that you can access through a REST API. They’re good fun to use and prototype with; however, Prisma is arguably a better choice for Jamstack projects.

Firstly, with cost being a major factor in Jamstack’s appeal, you may want to avoid some of these services. For example, at Hack Club, we spent $671.54 on an Airtable Pro subscription last month for our small team (yikes!).

On the other hand, hosting an equivalent PostgreSQL database on Heroku’s platform costs $9 a month. There certainly is an argument to make for these cloud services based on their UI and API, but I would respond by pointing you to Prisma’s Studio and aforementioned JavaScript / TypeScript client.

Cloud services also suffer from a performance issue, especially considering that you, as the user, have no ability to change or improve that performance. The cloud services providing the database put a middleman between your program and the database they’re using, slowing down how fast you can reach the data. With Prisma, however, you’re making direct calls to your database from your program, which reduces the time it takes to query or modify the database.

Writing Pure SQL

So, if we’re going to access our PostgreSQL database directly, why not just use the node-postgres module or — for many other databases — their equivalent drivers? I’d argue that the developer experience of using Prisma’s client makes it worth the slightly increased load.

Where Prisma shines is with its typings. The module generated for you by Prisma is fully type-safe — it interprets the types from your Prisma schema — which helps you prevent type errors with your database. Furthermore, for projects using TypeScript, Prisma auto-generates type definitions that reflect the structure of your model. Prisma uses these types to validate database queries at compile-time to ensure they are type-safe.
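
As a small illustration (my example, not the article’s), with the User model defined earlier, a misspelled field name is caught before the query ever runs:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // OK: `email` is a unique field on the User model, so this call is fully typed
  const user = await prisma.user.findUnique({ where: { email: 'sam@sampoder.com' } })
  console.log(user)

  // Compile-time type error: 'emial' does not exist in the User where input
  // const typo = await prisma.user.findUnique({ where: { emial: 'sam@sampoder.com' } })
}

main()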

Even if you aren’t using TypeScript, Prisma also offers autocomplete / Intelli-sense, linting, and formatting through its Visual Studio Code extension. There are also community built / maintained plugins for Emacs (emacs-prisma-mode), neovim (coc-prisma), Jetbrains IDE (Prisma Support), and nova (the Prisma plugin) that implement the Prisma Language Server to achieve code validation. Syntax highlighting is also available for a wide array of editors through plugins.

Other ORMs

Prisma is, of course, not the only ORM available for JavaScript / TypeScript. For example, TypeORM is another high-quality ORM for JavaScript projects. In this case, it comes down to personal preference, and I encourage you to try a range of ORMs to find your favourite. I personally chose Prisma for my projects for three reasons: the extensive documentation (especially this CRUD page, which is a lifesaver), the additional tooling within the Prisma ecosystem (e.g. Prisma Migrate and Prisma Studio), and the active community around the tool (e.g. Prisma Day and the Prisma Slack).

Using Prisma in Jamstack Projects

So, if I’m looking to use Prisma in a Jamstack project, how do I do that?

Next.js

Next.js is growing to be a very popular framework in the Jamstack space, and Prisma is a perfect fit for it. The examples below will serve as pretty standard examples that you can transfer into other projects using different JavaScript / TypeScript Jamstack tools.

The main rule of using Prisma within Next.js is that it must be used in a server-side setting. This means it can be used in getStaticProps, getServerSideProps, and API routes (e.g. api/emojis.js).

In code, it looks like this (example taken from a demo app I made for a talk at Prisma Day 2021 which was a virtual sticker wall):

import prisma from '../../../lib/prisma'
import { getSession } from 'next-auth/client'

function getRandomNum(min, max) {
  return Math.random() * (max - min) + min
}

export async function getRedemptions(username) {
  let allRedemptions = await prisma.user.findMany({
    where: {
      name: username,
    },
    select: {
      Redemptions: {
        select: {
          id: true,
          Stickers: {
            select: { nickname: true, imageurl: true, infourl: true },
          },
        },
        distinct: ['stickerId'],
      },
    },
  })
  allRedemptions = allRedemptions[0].Redemptions.map(x => ({
    number: getRandomNum(-30, 30),
    ...x.Stickers,
  }))
  return allRedemptions
}

export default async function RedeemCodeReq(req, res) {
  let data = await getRedemptions(req.query.username)
  res.send(data)
}

As you can see, it integrates really well into a Next.js project. But you may notice something interesting: '../../../lib/prisma'. Previously, we imported Prisma like this:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

Unfortunately, this is due to a quirk in Next.js’ fast refresh (hot reloading): in development, every reload would otherwise create a new PrismaClient — and a new database connection pool — which can quickly exhaust your database connections. So, Prisma recommends putting the client instantiation in a single shared file and importing that one instance into each file that needs it.
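
That shared file commonly looks something like this — a typical version of the pattern, adapted rather than copied from Prisma’s docs:

// lib/prisma.js — reuse a single PrismaClient across hot reloads in development.
import { PrismaClient } from '@prisma/client'

const prisma = global.prisma || new PrismaClient()

if (process.env.NODE_ENV !== 'production') {
  // Store the client on the global object so fast refresh reuses it instead of creating new ones
  global.prisma = prisma
}

export default prisma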

Redwood

Redwood is a bit of an anomaly in this section, as it isn’t necessarily a Jamstack framework. It began under the banner of bringing full stack to the Jamstack but has transitioned to being inspired by Jamstack. I’ve chosen to include it here, however, as it takes an interesting approach of including Prisma within the framework.

It starts, as always, with creating a Prisma schema, this time in api/db/schema.prisma (Redwood adds this to every new project). However, to query and modify the database, you don’t use Prisma’s default client. Instead, in Redwood, GraphQL mutations and queries are used. For example, in Redwood’s example todo app, this is the GraphQL mutation used to create a new todo:

const CREATE_TODO = gql`
  mutation AddTodo_CreateTodo($body: String!) {
    createTodo(body: $body) {
      id
      __typename
      body
      status
    }
  }
`

And in this case, the Prisma model for a todo is:

model Todo {
  id     Int    @id @default(autoincrement())
  body   String
  status String @default("off")
}

To trigger the GraphQL mutation, we use the useMutation function, which is based on Apollo’s GraphQL client and imported from @redwoodjs/web:

const [createTodo] = useMutation(CREATE_TODO, {
    //  Updates Apollo's cache, re-rendering affected components
    update: (cache, { data: { createTodo } }) => {
      const { todos } = cache.readQuery({ query: TODOS })
      cache.writeQuery({
        query: TODOS,
        data: { todos: todos.concat([createTodo]) },
      })
    },
  })

  const submitTodo = (body) => {
    createTodo({
      variables: { body },
      optimisticResponse: {
        __typename: 'Mutation',
        createTodo: { __typename: 'Todo', id: 0, body, status: 'loading' },
      },
    })
  }

With Redwood, you don’t need to worry about setting up the GraphQL schema / SDLs after creating your Prisma schema, as you can use Redwood’s scaffold command to convert the Prisma schema into GraphQL SDLs and services — yarn rw g sdl Todo, for example.

Cloudflare Workers

Cloudflare Workers is a popular platform for hosting Jamstack APIs, as it puts your code on the “edge”. However, the platform has its limitations, including a lack of TCP support, which the traditional Prisma Client relies on. Now, though, using Prisma there is possible through the Prisma Data Proxy.

To use it, you’ll need a Prisma Cloud Platform account which is currently free. Once you’ve followed the setup process (make sure to enable Prisma Data Proxy), you’ll be provided with a connection string that begins with prisma://. You can use that Prisma connection string in your .env file in place of the traditional database URL:

DATABASE_URL="prisma://aws-us-east-1.prisma-data.com/?api_key=•••••••••••••••••"

And then, instead of using npx prisma generate, use this command to generate a Prisma client:

PRISMA_CLIENT_ENGINE_TYPE=dataproxy npx prisma generate

Your database requests will be proxied through, and you can use the Prisma Client as usual. It isn’t a perfect set-up, but for those looking for database connections on Cloudflare Workers, it’s a relatively good solution.
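
Inside the Worker itself, the client is then used like anywhere else. Here’s a rough sketch (not from the article), assuming the client was generated in data proxy mode and DATABASE_URL holds the prisma:// connection string:

// A minimal Worker (service-worker syntax) that reads users through the Data Proxy.
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest())
})

async function handleRequest() {
  const users = await prisma.user.findMany()
  return new Response(JSON.stringify(users), {
    headers: { 'content-type': 'application/json' },
  })
}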

Conclusion

To wrap up, if you’re looking for a way to connect Jamstack applications with a database, I wouldn’t look further than Prisma. Its developer experience, extensive tooling, and performance make it the perfect choice. Next.js, Redwood, and Cloudflare Workers — each of them has a unique way of using Prisma, but it still works very well in all of them.

I hope you’ve enjoyed exploring Prisma with me. Thank you!

Further Reading on Smashing Magazine

Don’t Sink Your Website With Third Parties

You’ve spent months putting together a great website design, crowd-pleasing content, and a business plan to bring it all together. You’ve focused on making the web design responsive to ensure that the widest audience of visitors can access your content. You’ve agonized over design patterns and usability. You’ve tested and retested the site for errors. Your operations team is ready to go. In short, you’ve done your due diligence in the areas of site design and delivery that you directly control. You’ve thought of everything… or have you?

Your website may be using more third-party services than you realize. These services use requests to external hosts (not servers you control) to deliver JavaScript framework libraries, custom fonts, advertising content, marketing analytics trackers, and more.

You may have a lean, agile, responsive site design only to find it gradually loaded down with more and more “extras” that are often put onto the site by marketing departments or business leaders who are not always thinking about website performance. You cannot always anticipate what you cannot control.

There are two big questions:

  1. How do you quantify the impact that these third-party requests have on website performance?
  2. How do you manage or even mitigate that impact?

Even if you cannot prevent all third-party requests, web designers can make choices that will have an impact. In this article, we will review what third-party resource requests are, consider how impactful they can be to the user experience, and discuss common optimization strategies to reduce the impact on the user experience. By carefully considering how third-party requests will fit into your website during the design stage, you can avoid the most significant negative impacts.

What Are Third-Party Services?

In order to understand third-party services, it may be easier to start with your own website content. Any resource (HTML, CSS, JavaScript, image, font, etc.) that you host and serve from your own domain(s) is called a “first-party” resource. You have control over what these resources are. All other requests that happen when visitors load your pages can be attributed to other parties.

Every major website on the Internet today relies — to some degree — on third-party services. The third-party in this case is someone (usually another commercial enterprise) other than you and your site visitors. In this case, we are not going to be talking about infrastructure services, such as a cloud computing platform like Microsoft Azure or a content distribution network like Akamai. Many websites use these services to deploy and run their businesses and understanding how they impact the user experience is important.

In this article, however, we are going to focus on the third-party services that work their way into the design of your web pages. These third-party resource requests load in your visitor’s browser while your web page is loading, even if your visitors don’t realize it. They may be critical to site functionality, or they have been added as an afterthought, but all of them can potentially affect how fast users perceive your page load times.

The HTTP Archive tracks third-party usage across a large swath of all active websites on the Internet today. According to the Third Parties chapter of their 2021 Web Almanac report, “a staggering 94.4% of mobile sites and 94.1% of desktop sites use at least one third-party resource.” They also found out that “45.9% of requests on mobile and 45.1% of requests on desktop are third-party requests.”

As it was noted in the report, third-party services share a few characteristics, such as:

  • hosted on a shared and public origin,
  • widely used by a variety of sites,
  • uninfluenced by an individual site owner.

In other words, third-party services on your site are outsourced and operated by another party other than you. You have no direct control over where and how the requests are being hosted online. Many other websites may be using the same service, and the company that provides it must balance how to run their services to benefit all of their customers, not just you.

The upside to using third-party services on your site is that you do not need to develop everything you want to do yourself. In many cases, they can be super convenient to add or remove without having to push code changes to the site. The downside is that third-party requests can impact website visitors. Pages loaded up with dozens or hundreds of third-party calls can take longer to render or longer to become interactive.

What About Fourth-Party Or Second-Party Services?

While the earliest, simplest third-party services were simple 1x1 pixel images used for tracking visitors, almost all third-party requests today load JavaScript into the browser. And JavaScript can certainly make requests for additional network resources. If these follow-on requests are to a different host or service, you might think of them as “fourth-party services”. If the fourth-party service, in turn, makes a request to yet another domain, then you get “fifth-party service” requests, and so forth. Technically, all of them might be “third parties” in the sense that they are neither you (the “first party”) nor your site visitor (the “second party”), but I think it helps to understand that these services are even more removed from your direct control than the ones you work directly with.

The most common scenario I see where fourth-party requests come into play is in advertising services. If you serve ads on your website through an ad broker, you may not even know what service will finally deliver the ad image that gets displayed in the browser.

Feeling like this is a little bit out of control? There’s at least one other way that resource requests you have no direct control over can impact your visitors’ experience. Sometimes, the visitor’s browser itself can be the origin of network activity.

For example, users can install browser plugins to suggest coupon codes when they are shopping, to scan web pages for malware, to play games or message friends, or do any number of other things. These plugins can fire off “second-party” requests in the middle of your page load, and there is nothing you can do about it. Unlike third-party services, the best you can do is be aware of what these second-party services are, so you know what to ignore when troubleshooting problems.

Create Your Own Request Map

As a discovery tool, request maps are great for identifying the source of third-party, fourth-party, fifth-party, etc., requests. They can also highlight very long redirection chains in your third-party traffic. Simon Hearne, an independent web performance consultant and one of the co-organizers of the London Web Performance Group, maintains an online Request Map tool that uses WebPageTest to get the data and Ghostery to visualize it.

Ad Blockers Make Sites Faster

In addition to the browser plugins mentioned above, users love to install ad blockers. While many are motivated simply by a desire to see fewer ads, ad blockers also often make web pages load faster. Maciej Kocemba published research findings from Opera that showed that a typical website with ads could be rendered 51% faster if the ads were blocked.

This is obviously a concern for any website owner that monetizes page impressions with ads. They may not realize it, but users may be motivated to block ads partly to deal with slow third-party and fourth-party resource requests that lead to frustrating experiences. Faster page loads may reduce the motivation to use ad blockers.

The Revenue Trade-off You Need To Think About

Poor performance of third-party services can have other business impacts even if your website does not use advertising. Researchers and major companies have been publishing case studies for years, proving that slower page load experiences impact business metrics, including conversion rate, revenue, bounce rate, and more.

No matter how valuable you think a particular third-party service is to your business, that benefit needs to be compared to the cost of lost visitor engagement. Can a fancy third-party custom font give your site a new look and feel? Yes. Will the conversion rate or session length go down slightly as users see slower page loads? Or will visitors find the new look and feel worth the wait?

How To Identify Problematic Third-Party Services On Your Website

If you are like most websites, about half of the resource requests that load in your customers’ browsers — when they load a page from your website — are third-party requests. Identifying them should be straightforward.
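
For a quick, informal look (a minimal sketch, not part of the original article), you can even list third-party hosts straight from the browser console using the Resource Timing API:

// Run in the browser console: list third-party hosts and how long each request took.
const entries = performance.getEntriesByType('resource')
const thirdParty = entries.filter((e) => new URL(e.name).host !== location.host)

for (const e of thirdParty) {
  console.log(new URL(e.name).host, Math.round(e.duration) + 'ms')
}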

Measuring Performance Impact

To quantify the performance impact of third-party resource requests on the user experience, we need to start by measuring page load performance. Many web performance measurement tools can measure the network load times of individual resource requests, and others can measure the client-side impacts of JavaScript resource requests. You may find that no single tool will answer every performance question you have about third parties. These tools fall into several categories.

Some tools that can be helpful in evaluating the impact of third-party resource requests are what you might describe as auditing tools. The most popular, by far, are the Google Lighthouse report (available in Chrome Developer Tools) and Google’s PageSpeed Insights. These tools generally work with data from a single page load but go into greater depth on impact than the tools designed for ongoing monitoring.

Synthetic web performance measurements use scripts to visit one or more pages on your website from one or more probe locations. Much like a laboratory environment (and depending to some degree on the features offered by the particular tool), you have control over the variables of the measurement. You can adjust what browser is used, the kind of network connection to employ, the locations to test from, whether or not the browser’s cache is empty or full, how frequently to take the measurements, and more.

Because most of the variables remain fixed from one test run to the next, synthetic measurements are great for measuring the impact of change but less capable of accurately or comprehensively identifying real visitor experience. They are more of a benchmark than a true measurement of real user experience. Some of the popular synthetic measurement tools are WebPageTest, SiteSpeed.io, Splunk Synthetic Monitoring, and Dynatrace.

For a more comprehensive measurement of visitor experience, you need Real User Measurements (RUM). RUM systems embed a small JavaScript payload onto every page of your site. The code interacts with industry-standard APIs implemented by modern browsers to collect performance data, augments it with additional custom data collection, and transmits this very high-resolution data about the page as a whole and every resource request.

The data may have some limitations, though — the only data that can be collected is what the APIs support, and Cross Origin Resource Sharing (CORS) restrictions in the browsers limit some details, especially around third-party resource requests. Some of the more popular RUM services are offered by Akamai, New Relic, Dynatrace, and AppDynamics.

What To Measure

Third-party resource requests can impact the user experience in several different ways, depending on whether they load early in the page load process or after the page is mostly complete. The risks you should be looking for in the measurement data you are collecting include:

  • Delaying initial render
    Resources that load prior to the initial page render (or First Contentful Paint) can be the most impactful overall. Many studies have shown that site visitors are more sensitive to delays at this point in the page load experience than any point after some visual progress has been achieved. Look for third-party requests that force new DNS lookups, require establishing connections to new origins, introduce redirection chains, include substantial client-side processing delay, or take a long time to download.
  • Other blocking effects
    Any JavaScript resource that blocks other resources from being requested until its processing is completed is a concern. A third-party font request could cause render-blocking. Look for third-party JavaScript resources that block other JavaScript resources that are not being loaded asynchronously from being requested in a timely manner. Avoid third-party requests that introduce contention for scarce resources like bandwidth or CPU utilization. If a third-party resource request is blocking, consider alternatives or approaches to mitigate the risk if the third party is slower than normal or fails.
  • Single Points of Failure (SPOFs)
    A resource request can be considered a SPOF if the web page fails to load or the load time is disastrously longer should the resource itself fail to load. For example, if a third-party host is down and your request takes 60 seconds to time out, if the initial render of the page is delayed 60 seconds as a result, then this is a SPOF.

Testing The Impact Of Specific Requests

Once you have identified potentially impactful third-party resource requests, measuring the specific performance impact of those requests can be challenging. Trying to separate the impact of a single request from all the others can be akin to trying to break down an alloy into its constituent metals because third-party requests are often made in parallel with first-party requests or third-party requests to other hosts, and they are competing with each other for the limited network, CPU, and memory resources of the client. Even with highly-detailed RUM or synthetic measurement data, it may not be practical.

The best way to approach the problem is through applied testing. Specifically, deliver pages with the third-party request or service as normal, and compare the performance to pages delivered without that particular third-party service but which are otherwise identical.

This is easiest to do with synthetic measurement tools. You can blackhole a particular domain so that the synthetic browser will never make the requests in the first place, simulating a page loading without that service on it. This can inform you about the performance (load times) impact of that third-party service. WebPageTest — a free synthetic measurement service — makes this easy.

A more sophisticated approach is to perform multivariate testing on your production site. In a multivariate test, you serve a version of the page with the third-party tag on it to one segment of your visitor population, and the other segment gets a version of the page without the third-party tag.

By using RUM tools, you can directly measure the real-world performance differences between the two test segments as well as the effects on business metrics (such as bounce rate, conversion or session length). Managing multivariate testing is a significant undertaking, but it can pay off in the long run.

Design Optimizations

Once you have a baseline of your site performance and some tools to test the basic performance impact of key third-party resource requests, it is time to implement some strategies to mitigate the impact that third-party services can have on performance.

Consider Removing Unneeded Services

By far, the most impactful change you can make is to remove any obsolete, unused, or unnecessary third-party tags from your site. After all, no resource loads faster than not making a resource request at all. Ironically, this may also be the most challenging optimization to put into practice. In many organizations, third-party tags or services “belong” to a variety of stakeholders, and finding a way to manage them is as much a cultural challenge as a technical one. Some basic steps to take include:

  1. Audit all third-party requests appearing on your pages on a periodic basis (for example, quarterly).
    To make sure you capture all third-party requests, use a RUM service that collects data about every page view. If a third-party domain is showing up in more than a small fraction of page views and you do not already know what it is, find out immediately. New third-party tags may have been added by some stakeholders within your organization, or you may be finding a fourth-party tag because a third-party service changed its behavior. Either way, you need to understand what the third-party tags are and who in your organization is using them.
  2. Keep records on third-party services.
    Specifically, you want to know who the internal stakeholder is that “owns” that service and how it gets on the site. Is it hard-coded into the page HTML source? Is there JavaScript injected on the page by a CDN configuration? Are you using one (or more than one) tag manager? When does the contract with that service expire? The important thing is to have all the information on hand to know how to suspend or remove every third-party service if it becomes a performance issue or suddenly stops working, and who in your organization that is going to need to know.
  3. Consider a periodic stakeholders meeting that includes a discussion of all third-party services to review the cost/benefit they introduce to the business. Even if it is still under contract, consider removing third-party services that stakeholders no longer use.

Geographically Align Your Third-Party Services With Your Visitors

If most of your visitors are in Europe, but a third-party service you are using is serving its resource content from the United States, those requests will likely have very slow load times as the traffic must cross an ocean each way. Some third-party services use a CDN of their own to ensure that they are serving requests from locations close to your visitors, but not all will do so. You may need to ensure that you are using appropriate hostnames or parameters in your requests. CDN Finder is a convenient tool to investigate which CDNs (if any) a third-party tag is using.

Loading Scripts Asynchronously

Blocking other resource requests from being made by the browser (often called “parser blocking”) is one of the most impactful (in a negative way) things a third-party resource can do. Historically, browsers have blocked while loading scripts to ensure that the page load experience is predictable. If scripts always load and evaluate in the same order, there are no surprises. The downside to this is that it takes longer to load the page.

Fortunately, identifying blocking third-party script resources is relatively easy. Both WebPageTest and PageSpeed Insights (free-to-use tools) highlight resource requests that block other resource requests from being made. These tools work on one page (URL) at a time, so you will need to use them on a representative set of URLs to pick up all the blocking tags on your site.

Depending on how the third-party tag gets onto the page, you may be able to change a blocking script into a non-blocking script. Modern browsers support attributes on the script tag that give the browser the flexibility to load resources in a non-blocking manner (there’s a short example after this list). These are your basic options:

  • <script>
    Without an additional attribute, many browsers will block the loading of subsequent scripts until after the script in question is loaded and evaluated. With third-party scripts, this is not only a performance concern but also a potential for a single point of failure (SPOF).
  • <script async>
    With async, the browser can download the script resource in parallel with other HTML parsing and downloading activity, but it will evaluate the JavaScript immediately once it is done downloading and pause HTML parsing while the script evaluation happens. If the script evaluation needs to happen early in the page load, this is the best choice.
  • <script defer>
    With defer, the script load will happen in parallel with HTML parsing and the fetching of other resources, and the script will only be evaluated after the HTML is fully parsed. This is the best choice for any third-party tag whose evaluation is less important than a fast render experience for your visitor.
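
Putting those attributes together, a page might load a chat widget that must initialize early with async and an analytics tag that can wait with defer. The URLs below are hypothetical, purely for illustration:

<!-- Hypothetical third-party URLs, for illustration only -->
<script async src="https://widget.example.com/chat.js"></script>
<script defer src="https://analytics.example.com/tag.js"></script>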

Cascading StyleSheets

Another kind of blocking that can be impactful to the user experience is render-blocking. Cascading StyleSheets almost always block page render while they are being downloaded and evaluated because the browsers do not want to render content on the screen only to have to change how it looks partway through the page load. For this reason, best practice advice is to load CSS resources as early as you can in the page load, so the browser has all the information to render the page as soon as possible.

Third-party CSS requests are uncommon (mostly limited to custom font support), but if for some reason they are part of your site design, consider loading them directly through link tags in the base page HTML or through your CDN. Using a tag manager will just introduce additional delay before this critical resource reaches the browser.

Some Further Thoughts On Fonts

Like CSS, custom fonts are also render-blocking. Fonts can radically change the visual appearance of text, so browsers do not want to render text on the screen only to have a visually disruptive change mid-page load. Unlike CSS, I see far more sites using third-party resources for their custom fonts, with Google Fonts and Adobe Typekit being the most popular.

Some implementations of custom fonts also involve loading third-party CSS, which introduces additional render-blocking. The resource requests for these fonts (.woff, .woff2, or .ttf files, usually) are also not always made early in the page load. This is a problem for performance and a potential single point of failure.

Here are some ideas for managing third-party custom fonts on your site:

  • Give serious consideration to whether you need custom fonts at all.
    Page load times will be faster without them, and if the custom font is almost visually identical to some of the fantastic pre-installed system fonts now available in modern browsers, the brand impression benefit may be outweighed by the cost of slightly slower page loads frustrating your visitors.
  • If custom fonts are a requirement, consider how to deliver them as first-party resources.
    You may be limited by font licensing restrictions in this respect, and serving fonts from your own domains will result in delivering more bytes to visitors from your CDN or ISP, which can increase costs. On the other hand, you no longer have a SPOF vulnerability, you gain control over caching headers, and your visitors can avoid making connections to yet another third-party host and all the delays that it introduces.
  • If you cannot avoid having third-party fonts on your site, consider using font-display properties in your CSS.
    Setting the font-display property to swap (instead of block), for example, allows the browser to use system fonts until the custom fonts can be swapped in. If the visual change of the custom font is not too disruptive, this could be the best choice to give your visitors the content as early as possible while still giving them the brand experience when the fonts do load. The fallback value is another choice: it uses a very short blocking period and then behaves like swap for only a limited time before settling on the fallback font. The CSS-Tricks website has good documentation on font-display.

Two Script Management Solutions

One interesting approach to managing the performance impact of third parties is to move as many of them as possible to load via Web Workers. The core idea is to reserve the main thread in the browser for your first-party core scripts and let the browser manage and optimize your resource-intensive third-party scripts using Web Workers. Doing this is not trivial, since Web Workers are limited to asynchronous communication with the main thread, and many third-party scripts expect synchronous access to browser resources, such as documents and windows.

A new open-source project called Partytown provides a library that implements a communications layer to make this work. It is still in the early stages of development, and you would want to test extensively for potential weird side effects. It might also not work well with a tag manager system if that’s a part of your architecture.

Akamai Script Management is a solution that uses Service Workers. This service essentially acts as a proxy inside the browser that has knowledge about the third-party services on the site and a policy about how to handle specific third-party requests. The policy can block requests for specific third parties, defer their request to later in the page load, or change the waiting time before throwing a timeout error for a request. If a third-party request is render blocking but that third-party service is down, for example, Script Management can mitigate the impact by reducing the length of time that the browser waits before deciding that the response is never going to arrive.

Conclusion

Third-party resource requests have become an integral part of the web. These services can provide value to your business, but they do come at a potential cost to the user experience.

You need the right tools for detection and measurement and knowledge of the best practices that help reduce the negative impacts of third-party requests.

A great way to start managing the impacts of third-party requests on your site’s user experience is to audit your site to see which and how many third-party domains and requests are being used. Next, use performance measurement tools to identify those that have the potential to degrade the user experience through render-blocking, resource contention, or single points of failure.

As you apply changes to mitigate the impact of third parties, develop a plan to use ongoing testing (such as Real User Measurement Services) to keep on top of site changes and unexpected changes to your third-party services.

By carefully considering how third-party requests will fit into your site during the design stage, you can avoid the most significant negative impacts. With ongoing performance monitoring, you can ensure that new problems with third-party requests are identified early. Don’t sink your website with third parties!

PostgreSQL EXPLAIN – What Are the Query Costs?

Understanding the Postgres EXPLAIN Cost

EXPLAIN is very useful for understanding the performance of a Postgres query. It returns the execution plan generated by the PostgreSQL query planner for a given statement. The EXPLAIN command specifies whether the tables referenced in a statement will be searched using an index scan or a sequential scan. When reviewing the output of the EXPLAIN command, you’ll notice the cost statistics, so it’s natural to wonder what they mean, how they’re calculated, and how they’re used. In short, the PostgreSQL query planner estimates how much time the query will take (in an arbitrary unit), with both a startup cost and a total cost for each operation. More on that later. When it has multiple options for executing a query, it uses these costs to choose the cheapest, and therefore hopefully fastest, option.

What Unit Are the Costs In?

The costs are in an arbitrary unit. A common misunderstanding is that they are in milliseconds or some other unit of time, but that’s not the case. The cost units are anchored (by default) to a single sequential page read costing 1.0 units (seq_page_cost). Each row processed adds 0.01 (cpu_tuple_cost), and each non-sequential page read adds 4.0 (random_page_cost). There are many more constants like this, all of which are configurable. That last one, random_page_cost, is a particularly common candidate for tuning, at least on modern hardware. We’ll look into that more in a bit.
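
To make the arithmetic concrete: with those defaults, a sequential scan over a hypothetical table stored in 100 pages and holding 5,000 rows would be estimated at roughly 100 × 1.0 + 5,000 × 0.01 = 150 cost units, which is the kind of total cost you would see on that node in the EXPLAIN output. (The table size here is invented purely to show how the constants combine.)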

5 Common Step Functions Issues

Step Functions is the serverless finite state machine service from AWS. Together with DynamoDB, Lambda, and API Gateway, it forms the core of AWS’s serverless services. If you have tasks with multiple steps and you want to ensure they are executed in the proper order, Step Functions is your service of choice.

It offers direct integrations with many AWS services, so you don’t need to use Lambda Functions as glue. This can improve the performance of your state machine and lower its costs.

Collective #707

Building a dialog component

A foundational overview of how to build color-adaptive, responsive, and accessible mini and mega modals with the <dialog> element. By Adam Argyle.

Read it

Lexical

Lexical is an extensible text editor framework that provides excellent reliability, accessibility and performance.

Check it out

Eight Colors

Eight Colors is a block shifting game where the goal is to shift circular blocks to reach the target. Made by Shubham Jain.

Check it out


Dave Seidman

Dave Seidman’s portfolio has a cool project slideshow with a 3D shape in a polygonal look that morphs into another one.

Check it out

The post Collective #707 appeared first on Codrops.

How to Preload Links in WordPress for Faster Loading Speeds

Do you want to preload links in WordPress and improve loading speeds?

Link preloading is a browser technology that will load links in the background before a site visitor clicks them, making your website seem faster.

In this article, we’ll show you how to preload WordPress links for faster loading speeds easily. 

How to preload links in WordPress for faster loading speeds (easy)

Why Preload Links in WordPress?

Link preloading is when your web browser will load the link in the background before the user clicks it. That way, the moment they get to the page, it’s already loaded. 
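
Under the hood, tools like this typically add a prefetch hint (or fetch the page) when a link scrolls into view or is hovered. Purely to illustrate the general idea — this is not the plugin’s actual code — a hover-based version might look like this:

// Prefetch internal pages when the visitor hovers over a link (illustrative sketch only).
document.querySelectorAll('a[href^="/"]').forEach((link) => {
  link.addEventListener(
    'mouseover',
    () => {
      const hint = document.createElement('link')
      hint.rel = 'prefetch'
      hint.href = link.href
      document.head.appendChild(hint)
    },
    { once: true } // only prefetch each link once
  )
})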

Improving your WordPress speed and performance is one of the most important things you can do for your site since it makes the user experience better.

Having a faster site can help to increase your blog traffic by improving your WordPress SEO. When your internal pages are preloaded, your visitors are more likely to stay on your website longer and view more pages.

Although there’s a lot more you can do to make your WordPress website faster, link preloading is very simple, and it can have big benefits for speed.

The only catch is that you’ll need to make sure you set up preloading the right way and avoid common mistakes. For example, if your settings are too aggressive and all your internal links are preloaded, then preloading could have the opposite effect and even crash your server altogether.

That being said, let’s show you how to preload links in WordPress the right way, step by step. 

Preloading WordPress Links and Making WordPress Faster

The easiest way to preload links is by using the Flying Pages plugin. It simply adds intelligent preloading to make sure preloading won’t crash your site or even slow it down.

If it detects any issues like that, then the plugin will stop all preloading. 

The first thing you need to do is install and activate the plugin. For more details, see our guide on how to install a WordPress plugin.

Upon activation, navigate to Settings » Flying Pages in your WordPress admin panel to configure the plugin settings. 

Then, you need to set the ‘Delay to start preloading’ time in the drop down. This is the delay to start preloading links if your user’s mouse isn’t moving in the browser window.

You can change this, but we’ll keep the default recommended setting of ‘0 second’.

Flying Pages settings to set the preloader delay

Next, you can change the ‘Max requests per second’. The lower you set this number, the less impact it will have on your server. 

We’ll keep the default setting of ‘3 requests’ which should work for most WordPress hosting environments.

Set max requests per second

After that, you can check the box to ‘Preload only on mouse hover’. This will only preload links when a user hovers over them, loading the page just before they click.

This makes the perceived load time nearly instant because there is typically a delay of around 400ms between when a user brings their mouse over a link and when they click it.

You can also set the ‘Mouse hover delay’. This is the time that will pass after a user hovers over a link before preloading starts. 

Set preload time and hover delay

Below that, there’s a list of keywords that the plugin will ignore for preloading.

These are standard login pages and image files. You can leave the list as it is or add more keywords if you like.

Set keywords to ignore for preloading

If you’re running an online store, then you may want to add pages like /cart and other dynamic pages to this list, so they are not preloaded.

Similarly, if you’re using an affiliate marketing plugin like ThirstyAffiliates or PrettyLinks, then it’s important that you add your affiliate prefix like /refer/ or /go/ to this ignore keywords list. Otherwise, it can break affiliate link tracking.

The final option is to disable preloading for admins.

Overall, this will help to reduce your website server load. If you want to only preload for website visitors who aren’t logged-in admins, then simply check the box.

Disable preloading for admins and save

Once you’re finished, click the ‘Save Changes’ button at the bottom of the page.

That’s it, you’ve successfully enabled link preloading on your website.

Note: If you’re running a website speed test and you don’t see your score get better, that’s completely normal. Preloading links only speeds up navigation after a link is clicked; it doesn’t speed up the first time your site loads.

We hope this article helped you learn how to preload links in WordPress for faster loading speeds. You may also want to see our guide on how to create an email newsletter, and our expert picks of the must have WordPress plugins for your websites. 

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Preload Links in WordPress for Faster Loading Speeds first appeared on WPBeginner.

A Guide To Audio Visualization With JavaScript And GSAP (Part 2)

Last week in Part 1, I explained the idea and how to record audio input from users, and then moved on to the visualization. After all, without any visualization, any type of audio recording UI isn’t very engaging, is it? Today, we’ll dive into more detail on adding features and any extra touches you like!

We’ll be covering the following:

  • Pausing and resuming a recording
  • Padding out the visuals
  • Finishing a recording and rewinding on stop
  • Scrubbing the values on playback
  • Audio playback from other sources
  • Storing and playing back saved recordings
  • Turning this into a React application

Please note that in order to see the demos in action, you’ll need to open and test them directly on the CodePen website.

Pausing A Recording

Pausing a recording doesn’t take much code at all.

// Pause a recorder
recorder.pause()
// Resume a recording
recorder.resume()

In fact, the trickiest part about integrating recording is designing your UI. Once you’ve got a UI design, it’s mostly about wiring up the state changes that design needs.

Also, pausing a recording doesn’t pause our animation. So we need to make sure we stop that too. We only want to add new bars whilst we are recording. To determine what state the recorder is in, we can use the state property mentioned earlier. Here’s our updated toggle functionality:

const RECORDING = recorder.state === 'recording'
// Pause or resume recorder based on state.
TOGGLE.style.setProperty('--active', RECORDING ? 0 : 1)
timeline[RECORDING ? 'pause' : 'play']()
recorder[RECORDING ? 'pause' : 'resume']()

And here’s how we determine whether or not to add new bars in the REPORT function (we’ll see the full function in a moment):

REPORT = () => {
  if (recorder && recorder.state === 'recording') {

Challenge: Could we also remove the report function from gsap.ticker for extra performance? Try it out.

For our demo, we’ve changed it so the record button becomes a pause button. And once a recording has begun, a stop button appears. This will need some extra code to handle that state. React is a good fit for this kind of thing, but we can lean on the recorder.state value instead.
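
Here’s a rough sketch of that state handling. It assumes the demo’s STOP button element and simply shows or hides it based on the recorder’s state — this isn’t the demo’s exact code, just the general shape:

// Sketch only: show the stop button once a recording has begun,
// and hide it again when the recorder is inactive.
const syncStopButton = () => {
  const ACTIVE = recorder && recorder.state !== 'inactive'
  STOP.style.display = ACTIVE ? 'block' : 'none'
}

You’d call something like this from the record, pause, and stop handlers so the controls always reflect recorder.state.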

See the Pen 15. Pausing a Recording by Jhey.

Padding Out The Visuals

Next, we need to pad out our visuals. What do we mean by that? Well, we go from an empty canvas to bars streaming across. It’s quite a contrast, and it would be nice to have the canvas filled with zero-volume bars on start. Based on how we generate our bars, there’s no reason we can’t do this. Let’s start by creating a padding function, padTimeline:

// Move BAR_DURATION out of the function scope so it becomes a shared variable.
const BAR_DURATION =
  CANVAS.width / ((CONFIG.barWidth + CONFIG.barGap) * CONFIG.fps)

const padTimeline = () => {
  // Doesn’t matter if we have more bars than width. We will shift them over to the correct spot
  const padCount = Math.floor(CANVAS.width / CONFIG.barWidth)

  for (let p = 0; p < padCount; p++) {
    const BAR = {
      x: CANVAS.width + CONFIG.barWidth / 2,
      // Note the volume is 0
      size: gsap.utils.mapRange(
        0,
        100,
        CANVAS.height * CONFIG.barMinHeight,
        CANVAS.height * CONFIG.barMaxHeight
      )(0),
    }
    // Add to bars Array
    BARS.push(BAR)
    // Add the bar animation to the timeline
    // Each new bar is staggered by one frame (1 / fps seconds), so a bar moves
    // over by one slot (bar width + gap) before the next one comes in.
    // e.g. at 50fps with a 4px slot, that's 4px per frame, or 50 * 4 = 200px per second.
    timeline.to(
      BAR,
      {
        x: `-=${CANVAS.width + CONFIG.barWidth}`,
        ease: 'none',
        duration: BAR_DURATION,
      },
      BARS.length * (1 / CONFIG.fps)
    )
  }
  // Sets the timeline to the correct spot for being added to
  timeline.totalTime(timeline.totalDuration() - BAR_DURATION)
}

The trick here is to add new bars and then set the playhead of the timeline to where the bars fill the canvas. At the point of padding the timeline, we know that we only have padding bars so totalDuration can be used.

timeline.totalTime(timeline.totalDuration() - BAR_DURATION)

Notice how that functionality is very much like what we do inside the REPORT function? That’s a good opportunity to refactor. Let’s create a new function named addBar, which adds a new bar based on the passed volume.

const addBar = (volume = 0) => {
  const BAR = {
    x: CANVAS.width + CONFIG.barWidth / 2,
    size: gsap.utils.mapRange(
      0,
      100,
      CANVAS.height * CONFIG.barMinHeight,
      CANVAS.height * CONFIG.barMaxHeight
    )(volume),
  }
  BARS.push(BAR)
  timeline.to(
    BAR,
    {
      x: `-=${CANVAS.width + CONFIG.barWidth}`,
      ease: 'none',
      duration: BAR_DURATION,
    },
    BARS.length * (1 / CONFIG.fps)
  )
}

Now our padTimeline and REPORT functions can make use of this:

const padTimeline = () => {
  const padCount = Math.floor(CANVAS.width / CONFIG.barWidth)
  for (let p = 0; p < padCount; p++) {
    addBar()
  }
  timeline.totalTime(timeline.totalDuration() - BAR_DURATION)
}

REPORT = () => {
  if (recorder && recorder.state === 'recording') {
    ANALYSER.getByteFrequencyData(DATA_ARR)
    const VOLUME = Math.floor((Math.max(...DATA_ARR) / 255) * 100)
    addBar(VOLUME)
  }
  if (recorder || visualizing) {
    drawBars()
  }
}

Now, on load, we can do an initial rendering by invoking padTimeline followed by drawBars.

padTimeline()
drawBars()

Put it all together, and that’s another neat feature!

See the Pen 16. Padding out the Timeline by Jhey.

How We Finish

Do you want to pull the component down, do a rewind, or maybe a rollout? How does each option affect performance? A rollout is easier, but a rewind is trickier and might have performance costs.

Finishing The Recording

You can finish up your recording any way you like. You could stop the animation and leave it where it is. Or, once we stop, we could roll the animation back to the start. This is often used in various UI/UX designs. And the GSAP API gives us a neat way to do this. Instead of clearing our timeline on stop, we can move that into where we start a recording so the timeline gets reset there. But, once we’ve finished a recording, let’s keep the animation around so we can use it.

STOP.addEventListener('click', () => {
  if (recorder) recorder.stop()
  AUDIO_CONTEXT.close()
  // Pause the timeline
  timeline.pause()
  // Animate the playhead back to the START_POINT
  gsap.to(timeline, {
    totalTime: START_POINT,
    onComplete: () => {
      gsap.ticker.remove(REPORT)
    }
  })
})

In this code, we tween the totalTime back to where we set the playhead in padTimeline. That means we need a shared variable for that point.

let START_POINT

And we can set that within padTimeline.

const padTimeline = () => {
  const padCount = Math.floor(CANVAS.width / CONFIG.barWidth)
  for (let p = 0; p < padCount; p++) {
    addBar()
  }
  START_POINT = timeline.totalDuration() - BAR_DURATION
  // Sets the timeline to the correct spot for being added to
  timeline.totalTime(START_POINT)
}

We can clear the timeline inside the RECORD function when we start a recording:

// Reset the timeline
timeline.clear()

And this gives us what is becoming a pretty neat audio visualizer:

See the Pen 17. Rewinding on Stop by Jhey.

Scrubbing The Values On Playback

Now that we’ve got our recording, we can play it back with the <audio> element. But we’d like to sync our visualization with the recording playback. With GSAP’s API, this is far easier than you might expect.

const SCRUB = (time = 0, trackTime = 0) => {
  gsap.to(timeline, {
    totalTime: time,
    onComplete: () => {
      AUDIO.currentTime = trackTime
      gsap.ticker.remove(REPORT)
    },
  })
}
const UPDATE = e => {
  switch (e.type) {
    case 'play':
      timeline.totalTime(AUDIO.currentTime + START_POINT)
      timeline.play()
      gsap.ticker.add(REPORT)
      break
    case 'seeking':
    case 'seeked':
      timeline.totalTime(AUDIO.currentTime + START_POINT)
      break
    case 'pause':
      timeline.pause()
      break
    case 'ended':
      timeline.pause()
      SCRUB(START_POINT)
      break
  }
}

// Set up AUDIO scrubbing
['play', 'seeking', 'seeked', 'pause', 'ended']
  .forEach(event => AUDIO.addEventListener(event, UPDATE))

We’ve refactored the functionality that we use when stopping to scrub the timeline. And then it’s a case of listening for different events on the <audio> element. Each event requires updating the timeline playhead. We can add and remove REPORT to the ticker based on when we play and stop audio. But, this does have an edge case. If you seek after the audio has "ended", the visualization won’t render updates. And that’s because we remove REPORT from the ticker in SCRUB. You could opt to not remove REPORT at all until a new recording begins or you move to another state in your app. It’s a matter of monitoring performance and what feels right.

The fun part here though is that if you make a recording, you can scrub the visualization when you seek 😎

See the Pen 18. Syncing with Playback by Jhey.

At this point, you know everything you need to know. But, if you want to learn about some extra things, keep reading.

Audio Playback From Other Sources

One thing we haven’t looked at is how you visualize audio from a source other than an input device. For example, an mp3 file. And this brings up an interesting challenge or problem to think about.

Let’s consider a demo where we have an audio file URL and we want to run it through our visualization. We can explicitly set our AUDIO element’s src before visualizing.

AUDIO.src = 'https://assets.codepen.io/605876/lobo-loco-spencer-bluegrass-blues.mp3'
// NOTE: This is required in some circumstances due to CORS
AUDIO.crossOrigin = 'anonymous'

We no longer need to think about setting up the recorder or using the controls to trigger it. Since we have an audio element, we can hook the visualization into the source directly.

const ANALYSE = stream => {
  if (AUDIO_CONTEXT) return
  AUDIO_CONTEXT = new AudioContext()
  ANALYSER = AUDIO_CONTEXT.createAnalyser()
  ANALYSER.fftSize = CONFIG.fft
  const DATA_ARR = new Uint8Array(ANALYSER.frequencyBinCount)
  SOURCE = AUDIO_CONTEXT.createMediaElementSource(AUDIO)
  const GAIN_NODE = AUDIO_CONTEXT.createGain()
  GAIN_NODE.gain.value = 0.5
  GAIN_NODE.connect(AUDIO_CONTEXT.destination)
  SOURCE.connect(GAIN_NODE)
  SOURCE.connect(ANALYSER)

  // Reset the bars and pad them out...
  if (BARS && BARS.length > 0) {
    BARS.length = 0
    padTimeline()
  }

  REPORT = () => {
    if (!AUDIO.paused || !played) {
      ANALYSER.getByteFrequencyData(DATA_ARR)
      const VOLUME = Math.floor((Math.max(...DATA_ARR) / 255) * 100)
      addBar(VOLUME)
      drawBars()  
    }
  }
  gsap.ticker.add(REPORT)
}

By doing this, we can connect our AudioContext to the audio element. We do this using createMediaElementSource(AUDIO) instead of createMediaStreamSource(stream). And then the audio element’s controls will trigger data getting passed to the analyzer. In fact, we only need to create the AudioContext once, because once we’ve played the audio track, we aren’t working with a different audio track afterwards. Hence the early return if AUDIO_CONTEXT already exists.

if (AUDIO_CONTEXT) return

One other thing to note here. Because we’re hooking up the audio element to an AudioContext, we need to create a gain node. This gain node allows us to hear the audio track.

SOURCE = AUDIO_CONTEXT.createMediaElementSource(AUDIO)
const GAIN_NODE = AUDIO_CONTEXT.createGain()
GAIN_NODE.gain.value = 0.5
GAIN_NODE.connect(AUDIO_CONTEXT.destination)
SOURCE.connect(GAIN_NODE)
SOURCE.connect(ANALYSER)

Things do change a little in how we process events on the audio element. In this example, once we’ve finished the audio track, we can remove REPORT from the ticker and add drawBars in its place. That way, if we play the track again or seek, we don’t need to process the audio again. This mirrors how we handled playback of the visualization with the recorder.

This update happens inside the SCRUB function and you can also see a new played variable. We can use this to determine whether we’ve processed the whole audio track.

const SCRUB = (time = 0, trackTime = 0) => {
  gsap.to(timeline, {
    totalTime: time,
    onComplete: () => {
      AUDIO.currentTime = trackTime
      if (!played) {
        played = true
        gsap.ticker.remove(REPORT)
        gsap.ticker.add(drawBars) 
      }
    },
  })
}

Why not add and remove drawBars from the ticker based on what we are doing with the audio element? We could do this. We could look at gsap.ticker._listeners and determine if drawBars was already used or not. We may choose to add and remove when playing and pausing. And then we could also add and remove when seeking and finishing seeking. The trick would be making sure we don’t add to the ticker too much when "seeking". And this would be where to check if drawBars was already part of the ticker. This is of course dependent on performance though. Is that optimization going to be worth the minimal performance gain? It comes down to what exactly your app needs to do. For this demo, once the audio gets processed, we are switching out the ticker function. That’s because we don’t need to process the audio again. And leaving drawBars running in the ticker shows no performance hit.

const UPDATE = e => {
  switch (e.type) {
    case 'play':
      if (!played) ANALYSE()
      timeline.totalTime(AUDIO.currentTime + START_POINT)
      timeline.play()
      break
    case 'seeking':
    case 'seeked':
      timeline.totalTime(AUDIO.currentTime + START_POINT)
      break 
    case 'pause':
      timeline.pause()
      break
    case 'ended':
      timeline.pause()
      SCRUB(START_POINT)
      break
  }
}

Our switch statement is much the same, but this time we only call ANALYSE if we haven’t already played the track.

And this gives us the following demo:

See the Pen 19. Processing Audio Files by Jhey.

Challenge: Could you extend this demo to support different tracks? Try extending the demo to accept different audio tracks. Maybe a user can select from a dropdown or input a URL.

This demo leads to an interesting problem that arose when working on "Record a Call" for Kent C. Dodds. It’s not one I’d needed to deal with before. In the demo above, start playing the audio and seek forwards in the track before it finishes playing. Seeking forwards breaks the visualization because we are skipping ahead in time, which means we skip processing certain parts of the audio.

How can you resolve this? It’s an interesting problem. You want to build the animation timeline before you play audio. But, to build it, you need to play through the audio first. Could you disable "seeking" until you’ve played through once? You could. At this point, you might start drifting into the world of custom audio players. Definitely out of scope for this article. In a real-world scenario, you may be able to put server-side processing in place. This might give you a way to get the audio data ahead of time before playing it.

For Kent’s “Record a Call”, we can take a different approach. We are processing the audio as it’s recorded. And each bar gets represented by a number. If we create an Array of numbers representing the bars, we already have the data to build the animation. When a recording gets submitted, the data can go with it. Then when we make a request for audio, we can get that data too and build the visualization before playback.

We can use the addBar function we defined earlier whilst looping over the audio data Array.

// Given an audio data Array example
const AUDIO_DATA = [100, 85, 43, 12, 36, 0, 0, 0, 200, 220, 130]

const buildViz = DATA => {
  DATA.forEach(bar => addBar(bar))
}

buildViz(AUDIO_DATA)

Building our visualizations without processing the audio again is a great performance win.

Consider this extended demo of our recording demo. Each recording gets stored in localStorage. And we can load a recording to play it. But, instead of processing the audio to play it, we build a new bars animation and set the audio element src.

Note: You need to scroll down to see stored recordings in the <details> and <summary> element.

See the Pen 20. Saved Recordings ✨ by Jhey.

What needs to happen here to store and play back recordings? Well, it doesn’t take much, as we have the bulk of the functionality in place. And because we’ve refactored things into mini utility functions, it’s even easier.

Let’s start with how we are going to store the recordings in localStorage. On page load, we are going to hydrate a variable from localStorage. If there is nothing to hydrate with, we can instantiate the variable with a default value.

const INITIAL_VALUE = { recordings: []}
const KEY = 'recordings'
const RECORDINGS = window.localStorage.getItem(KEY)
  ? JSON.parse(window.localStorage.getItem(KEY))
  : INITIAL_VALUE

Now, it’s worth noting that this guide isn’t about building a polished app or experience. It’s giving you the tools you need to go off and make it your own. I’m saying this because you might want to implement some of the UX in a different way.

To save a recording, we can trigger a save in the ondataavailable method we’ve been using.

recorder.ondataavailable = (event) => {
  // All the other handling code
  // save the recording
  if (confirm('Save Recording?')) {
    saveRecording()
  }
}

The process of saving a recording requires a little "trick". We need to convert our AudioBlob into a String. That way, we can save it to localStorage. To do this, we use the FileReader API to convert the AudioBlob to a data URL. Once we have that, we can create a new recording object and persist it to localStorage.

const saveRecording = async () => {
  const reader = new FileReader()
  reader.onload = e => {
    const audioSafe = e.target.result
    const timestamp = new Date()
    RECORDINGS.recordings = [
      ...RECORDINGS.recordings,
      {
        audioBlob: audioSafe,
        metadata: METADATA,
        name: timestamp.toUTCString(),
        id: timestamp.getTime(),
      },
    ]
    window.localStorage.setItem(KEY, JSON.stringify(RECORDINGS))
    renderRecordings()
    alert('Recording Saved')  
  }
  await reader.readAsDataURL(AUDIO_BLOB)
}

You could use whatever format you like here. For ease, I’m using the time as an id. The metadata field is the Array we use to build our animation, and the timestamp doubles as the recording’s name. But you could do something like name it based on the number of recordings, update the UI to allow users to rename a recording, or even ask for a name during the save step with window.prompt.
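
As a quick sketch (not the demo’s exact code), prompting for a name at save time could be as simple as:

// Ask for a name, falling back to the timestamp if the user cancels or leaves it blank.
const name = window.prompt('Name this recording?') || timestamp.toUTCString()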

In fact, this demo uses the window.prompt UX so you can see how that would work.

See the Pen 21. Prompt for Recording Name 🚀 by Jhey.

You may be wondering what renderRecordings does. Well, as we aren’t using a framework, we need to update the UI ourselves. We call this function on load and every time we save or delete a recording.

The idea is that if we have recordings, we loop over them and create list items to append to our recordings list. If we don’t have any recordings, we are showing a message to the user.

For each recording, we create two buttons. One for playing the recording, and another for deleting the recording.

const renderRecordings = () => {
  RECORDINGS_LIST.innerHTML = ''
  if (RECORDINGS.recordings.length > 0) {
    RECORDINGS_MESSAGE.style.display = 'none'
    RECORDINGS.recordings.reverse().forEach(recording => {
      const LI = document.createElement('li')
      LI.className = 'recordings__recording'
      LI.innerHTML = `<span>${recording.name}</span>`
      const BTN = document.createElement('button')
      BTN.className = 'recordings__play recordings__control'
      BTN.setAttribute('data-recording', recording.id)
      BTN.title = 'Play Recording'
      BTN.innerHTML = SVGIconMarkup
      LI.appendChild(BTN)
      const DEL = document.createElement('button')
      DEL.setAttribute('data-recording', recording.id)
      DEL.className = 'recordings__delete recordings__control'
      DEL.title = 'Delete Recording'
      DEL.innerHTML = SVGIconMarkup
      LI.appendChild(DEL)
      BTN.addEventListener('click', playRecording)
      DEL.addEventListener('click', deleteRecording)
      RECORDINGS_LIST.appendChild(LI)
    })
  } else {
    RECORDINGS_MESSAGE.style.display = 'block'
  }
}

Playing a recording means setting the AUDIO element src and generating the visualization. Before playing a recording or when we delete a recording, we reset the state of the UI with a reset function.

const reset = () => {
  AUDIO.src = null
  BARS.length = 0
  gsap.ticker.remove(REPORT)
  REPORT = null
  timeline.clear()
  padTimeline()
  drawBars()
}

const playRecording = (e) => {
  const idToPlay = parseInt(e.currentTarget.getAttribute('data-recording'), 10)
  reset()
  const RECORDING = RECORDINGS.recordings.filter(recording => recording.id === idToPlay)[0]
  RECORDING.metadata.forEach(bar => addBar(bar))
  REPORT = drawBars
  AUDIO.src = RECORDING.audioBlob
  AUDIO.play()
}

The actual method of playback and showing the visualization comes down to four lines.

RECORDING.metadata.forEach(bar => addBar(bar))
REPORT = drawBars
AUDIO.src = RECORDING.audioBlob
AUDIO.play()

  1. Loop over the metadata Array to build the timeline.
  2. Set the REPORT function to drawBars.
  3. Set the AUDIO src.
  4. Play the audio which in turn triggers the animation timeline to play.

Challenge: Can you spot any edge cases in the UX? Any issues that could arise? What if we are recording and then choose to play a recording? Could we disable controls when we are in recording mode?

To delete a recording, we use the same reset method, but we also set a new value in localStorage for our recordings. Once we’ve done that, we call renderRecordings to show the updates.

const deleteRecording = (e) => {
  if (confirm('Delete Recording?')) {
    const idToDelete = parseInt(e.currentTarget.getAttribute('data-recording'), 10)
    RECORDINGS.recordings = [...RECORDINGS.recordings.filter(recording => recording.id !== idToDelete)]
    window.localStorage.setItem(KEY, JSON.stringify(RECORDINGS))
    reset()
    renderRecordings()    
  }
}

At this stage, we have a functional voice recording app using localStorage. It makes for an interesting starting point that you could take further by adding new features and improving the UX. For example, how about making it possible for users to download their recordings? Or what if different users could have different themes for their visualization? You could store colors, speeds, etc. against recordings. Then it would be a case of updating the canvas properties and catering for changes in the timeline build. For “Record a Call”, we supported different canvas colors based on the team a user was part of.

This demo supports downloading tracks in the .ogg format.

See the Pen 22. Downloadable Recordings 🚀 by Jhey.
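
Downloading comes down to turning the recorded Blob into an object URL and clicking a temporary link. Here’s a rough sketch of that idea — the helper name is made up, and the demo’s code may differ:

// Sketch: offer the recorded audio Blob as a file download.
const downloadRecording = () => {
  const url = URL.createObjectURL(AUDIO_BLOB)
  const link = document.createElement('a')
  link.href = url
  // MediaRecorder output here is ogg; we can't encode mp3 in the browser.
  link.download = `recording-${new Date().getTime()}.ogg`
  link.click()
  URL.revokeObjectURL(url)
}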

But you could take this app in various directions. Here are some ideas to think about:

  • Reskin the app with a different "look and feel"
  • Support different playback speeds
  • Create different visualization styles. For example, how might you record the metadata for a waveform type visualization?
  • Displaying the recordings count to the user
  • Improve the UX by catching edge cases, such as the recording-to-playback scenario from earlier.
  • Allow users to choose their audio input device
  • Take your visualizations 3D with something like ThreeJS
  • Limit the recording time. This would be vital in a real-world app. You would want to limit the size of the data getting sent to the server. It would also enforce recordings to be concise.
  • Currently, downloading would only work in .ogg format. We can’t encode the recording to mp3 in the browser. But you could use serverless with ffmpeg to convert the audio to .mp3 for the user and return it.

Turning This Into A React Application

Well, if you’ve got this far, you have all the fundamentals you need to go off and have fun making audio recording apps. But, as I mentioned at the top of the article, we used React on the project. As our demos have got more complex and we’ve introduced "state", using a framework makes sense. We aren’t going to go deep into building the app out with React, but we can touch on how to approach it. If you’re new to React, check out this "Getting Started Guide" that will get you in a good place.

The main problem we face when switching over to React land is thinking about how we break things up. There isn’t a right or wrong answer. And that introduces the question of how we pass data around via props, etc. For this app, it’s not too tricky. We could have a component for the visualization, the audio playback, and the recordings. And then we may opt to wrap them all inside one parent component.

For passing data around and accessing things in the DOM, React.useRef plays an important part. This is “a” React version of the app we’ve built.

See the Pen 23. Taking it to React Land 🚀 by Jhey.

As stated before, there are different ways to achieve the same goal and we won’t dig into everything. But, we can highlight some of the decisions you may have to make or think about.

For the most part, the functional logic remains the same. But, we can use refs to keep track of certain things. And it’s often the case we need to pass these refs in props to the different components.

return (
  <>
    <AudioVisualization
      start={start}
      recording={recording}
      recorder={recorder}
      timeline={timeline}
      drawRef={draw}
      metadata={metadata}
      src={src}
    />
    <RecorderControls
      onRecord={onRecord}
      recording={recording}
      paused={paused}
      onStop={onStop}
    />
    <RecorderPlayback
      src={src}
      timeline={timeline}
      start={start}
      draw={draw}
      audioRef={audioRef}
      scrub={scrub}
    />
    <Recordings
      recordings={recordings}
      onDownload={onDownload}
      onDelete={onDelete}
      onPlay={onPlay}
    />
  </>
)

For example, consider how we are passing the timeline around in a prop. This is a ref for a GreenSock timeline.

const timeline = React.useRef(gsap.timeline())

And this is because some of the components need access to the visualization timeline. But we could approach this a different way. The alternative would be to pass event handlers in as props and keep access to the timeline in the parent’s scope. Either way would work. But each way has trade-offs.
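
As a sketch of that alternative (component and prop names here are illustrative, not the demo’s exact API), the parent could own the timeline and hand plain callbacks down instead of the ref:

const Recorder = () => {
  // The parent owns the timeline ref...
  const timeline = React.useRef(gsap.timeline({ paused: true }))
  // ...and children receive handlers rather than the ref itself.
  const onPlay = () => timeline.current.play()
  const onPause = () => timeline.current.pause()
  return <RecorderControls onPlay={onPlay} onPause={onPause} />
}

Neither approach is wrong; it mostly changes where the timeline logic lives.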

Because we’re working in "React" land, we can shift some of our code to be "Reactive". The clue is in the name, I guess. 😅 For example, instead of trying to pad the timeline and draw things from the parent, we can make the canvas component react to audio src changes. By using React.useEffect, we can re-build the timeline based on the metadata available:

React.useEffect(() => {
  barsRef.current.length = 0
  padTimeline()
  drawRef.current = DRAW
  DRAW()
  if (src === null) {
    metadata.current.length = 0      
  } else if (src && metadata.current.length) {
    metadata.current.forEach(bar => addBar(bar))
    gsap.ticker.add(drawRef.current)
  }
}, [src])

The last part that would be good to mention is how we persist recordings to localStorage with React. For this, we are using a custom hook that we built before in our "Getting Started" guide.

const usePersistentState = (key, initialValue) => {
  const [state, setState] = React.useState(
    window.localStorage.getItem(key)
      ? JSON.parse(window.localStorage.getItem(key))
      : initialValue
  )
  React.useEffect(() => {
    // Stringify so we can read it back
    window.localStorage.setItem(key, JSON.stringify(state))
  }, [key, state])
  return [state, setState]
}

This is neat because we can use it just like React.useState, and the persistence logic is abstracted away for us.

// Deleting a recording
setRecordings({
  recordings: [
    ...recordings.filter(recording => recording.id !== idToDelete),
  ],
})
// Saving a recording
const audioSafe = e.target.result
const timestamp = new Date()
const name = prompt('Recording name?')
setRecordings({
  recordings: [
    ...recordings,
    {
      audioBlob: audioSafe,
      metadata: metadata.current,
      name: name || timestamp.toUTCString(),
      id: timestamp.getTime(),
    },
  ],
})

I’d recommend digging into some of the React code and having a play if you’re interested. Some things work a little differently in React land. Could you extend the app and make the visualizer support different visual effects? For example, how about passing colors via props for the fill style?

That’s It!

Wow. You’ve made it to the end! This was a long one.

What started as a case study turned into a guide to visualizing audio with JavaScript. We’ve covered a lot here. But, now you have the fundamentals to go forth and make audio visualizations as I did for Kent.

Last but not least, here’s one that visualizes a waveform using @react-three/fiber:

See the Pen 24. Going to 3D React Land 🚀 by Jhey.

That’s ReactJS, ThreeJS and GreenSock all working together! 💪

There’s so much to go off and explore with this one. I’d love to see where you take the demo app or what you can do with it!

As always, if you have any questions, you know where to find me.

Stay Awesome! ʕ •ᴥ•ʔ

P.S. There is a CodePen Collection containing all the demos seen in the articles along with some bonus ones. 🚀

7 Fresh Links on Performance For March 2022

I have a handful of good links to articles about performance that are burning a hole in my bookmarks folder, and wanna drop them here to share.

Screenshot of the new WebPageTest homepage design (WebPageTest is a tool for testing performance metrics).

7 Fresh Links on Performance For March 2022 originally published on CSS-Tricks. You should get the newsletter.

Getting Started With Pandas: Lesson 4

Introduction

We begin with the fourth and final article of our Pandas training saga. In this article, we are going to summarize the different functions Pandas provides for handling missing data. Dealing with missing data is a standard challenge of day-to-day data science work, and it has a direct impact on the performance of your algorithms.

Missing Data

Before we start, let’s visualize the example dataset that we are going to use to explain the functions. It is a dataset we created ourselves, called `uncompleted_data`, which includes several use cases so we can clearly walk through all the examples.

How to Create a Fitness Tracker in WordPress (With Charts)

Do you want to create a fitness tracker in WordPress?

Many health and fitness-related businesses and online communities offer fitness tracking tools for their users. This helps to keep users engaged and grow your business.

In this article, we’ll show you how to easily create a fitness tracker in WordPress to boost user engagement on your website.

Creating a fitness tracker in WordPress

What is a Fitness Tracker?

A fitness tracker is an online tool that helps users track different aspects of their health and fitness performance.

It could be a weight loss tracker, a BMI calculator, a meal planner, or another type of health tracker. These online tools can be created using no-code WordPress plugins that calculate different values on the fly.

Why You Should Add a Fitness Tracker to Your WordPress Site

If you run a WordPress website for a health and fitness business or an online community, then adding a fitness tracker to your website is an easy way to build user engagement.

This includes websites like:

  • Gym websites
  • Weight loss websites
  • Fitness trainer’s personal site
  • Nutritional site or food blog
  • Health and fitness community
  • Lifestyle communities
  • and more

You can provide your users with actual tools to track their fitness performance, which is more likely to keep them on your site longer.

Improved user engagement leads to higher conversion rates and better customer retention for your business.

Building an Online Fitness Community

One of the easiest ways to monetize a health and fitness website is by using MemberPress. It is the best WordPress membership plugin and allows you to easily sell online courses and subscriptions.

You can create different types of fitness plans, hide members-only content behind a paywall, create online courses, and more.

Users can then use your built-in fitness tracker to measure their performance and progress over time. This helps them spend more time on your website which improves subscription renewals, upsells, and customer retention.

For more details, see our step by step tutorial on how to create a membership website in WordPress.

Creating an Online Fitness Tracker in WordPress

To create an online fitness tracker in WordPress, you’ll need Formidable Forms.

It is the best WordPress calculator plugin on the market that allows you to create advanced forms and calculators for your website. The drag and drop form builder makes it easy to create your fitness tracking forms without having to write any code or hire a developer.

Plus, it works great with other tools that you may already be using like MemberPress, WooCommerce, or your email service provider.

First, you need to install and activate the Formidable Forms plugin. For more details, see our step-by-step guide on how to install a WordPress plugin.

Note: There is a limited free version of the plugin called Formidable Lite. However, you’ll need the premium version to unlock more features.

Upon activation, you need to visit the Formidable » Global Settings page to enter your plugin license key. You can find this information under your account on the Formidable Forms website.

Formidable Forms license key

After that, you need to visit the Formidable » Forms page.

Here, simply click on the Add New button to create your fitness tracking form.

Create a new fitness tracking form

Next, you will be asked to choose a template for your form.

There are a bunch of templates that you can use, but for this tutorial we’ll be starting with a blank form.

Choose blank form template

Next, provide a name and description for your form and click on the Create button.

This will launch the Formidable Forms drag and drop builder. In the left column, you’ll see a list of the form fields that you can add.

To your right, you’ll see the form preview. Since our form is blank, there are no fields in the preview column.

Form builder interface

Let’s change that and add the form fields for our weight loss fitness tracker.

For this tracker, we’ll be adding the following form fields.

  1. User ID – This will be automatically filled by Formidable Forms for logged in users so that users can see their own performance.
  2. Date – Users will be able to enter the date they measured their weight.
  3. Number – We’ll rename this field to ‘Weight’ and ask users to enter their weight in lbs or kg.

Add form fields

After adding the fields, you can just click on a field to change its properties.

For instance, we edited the number field to change its label to ‘Weight’ and provided instructions in the description option.

Edit the form field

Once you are finished editing the form, click on the Update button to save your form.

Save your form

Adding Fitness Tracker in a WordPress Post or Page

Next, you’ll want to add the fitness tracker form to your WordPress website.

If you are using MemberPress, then you can simply edit the Account page. You can also create a new page and restrict it to members only. This way, users will be required to log in to enter their fitness data.

On the page edit screen, simply add the Formidable Forms block to your page and choose your Fitness Tracker from the drop down menu.

Formidable Forms block

Formidable Forms will now display a preview of your form in the page editor. You can go ahead and save your changes.

You can now log in with a dummy user account and fill out a few test entries.

Live fitness tracker

Display Fitness Form Tracker Data in WordPress

Formidable Forms makes it super easy to display the data collected by your forms on your WordPress website.

You can choose exactly which fields you want to show and display the data in graphs and charts.

Simply edit the post or page where you want to display the form data. Obviously, if you are using MemberPress then you want to restrict that page so that only logged in users can view their own fitness data.

Next, you will need to add a shortcode to your page in the following format.

[frm-graph fields="22" y_title="Weight" x_axis="21" x_title="Date" type="line" title="Weight tracking" user_id="current" data_type="average"]

This shortcode has the following parameters.

  • fields – The ID of the field whose data you want to display (in this case, the weight field).
  • y_title – The title for the Y axis. In this case, we use Weight.
  • x_axis – The ID of the field to use for the X axis. In this case, the date field.
  • x_title – The title for the X axis. In this case, Date.
  • user_id – Set to ‘current’ so that logged-in users only see their own data.

You can find the field ID by simply editing your fitness tracker form. You’ll see the ID for each field in the form preview.

Finding the field ID

After adding the shortcode, don’t forget to save your changes.

Next, you need to login with the dummy user account you used earlier to add test entries, and visit the page you just created.

Here is how it looked on our test website:

Fitness tracker data chart

Creating More Fitness Tracking Tools in WordPress

Formidable Forms is the most advanced form and calculator builder for WordPress.

Apart from the weight-loss tracking form, you can also use it to create several other types of online fitness calculators and tools.

It even comes with built-in templates for a BMI calculator and a Daily Calorie Intake calculator.

Fitness calculator templates

We hope this article helped you learn how to easily add a fitness tracker in WordPress. You may also want to see our expert pick of the best live chat software for small business, or follow our complete WordPress SEO guide to get more free visitors from search engines.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Create a Fitness Tracker in WordPress (With Charts) first appeared on WPBeginner.