Automattic Invests $30M in Titan, a Business Email Startup

source: Titan.email

Automattic has invested $30 million in Titan, a professional email suite aimed at businesses and companies offering white-labeled email solutions for customers. At WordCamp India 2021, Automattic CEO Matt Mullenweg said that the company had just made “a pretty large investment” in the India-based startup and stated that it “will be a big part of how WordPress.com offers email going forward.” The Series A investment in Titan is Automattic’s largest to date and values the company at $300 million.

Although Automattic is well known for its “no offices or email” approach to business, most of the working world has not yet transitioned away from relying heavily on email.

“I think email is definitely on its way out, between things like P2 and Slack, which is a workplace chat tool,” Mullenweg said on Glenn Leibowitz’s podcast in 2015. “Email just has so many things wrong with it. I’ve never heard anyone say they love email, they want more of it. Have you?”

Six years later, email is still a reliable source of misery for most working people, but with the help of Automattic’s investment, Titan aims to transform it into a more meaningful communication channel for businesses. The suite includes features like scheduled send, follow-up reminders, smart filters and custom folders, email templates, and white labeling with deep integration for various platforms.

WordPress.com’s marketing has increasingly been aimed at small businesses over the past few years with a strong push for users to make money by selling things through their websites. It’s easy to see how Titan makes sense as a supporting product that legitimizes any business with a custom branded email address. Customers who have registered, transferred, or mapped a custom domain through WordPress.com are offered a three-month free trial of Titan-powered email services.

Setting up custom branded email addresses separately would be a much more inconvenient process, and most customers with custom domains are likely better off rolling email services into their existing WordPress.com setup. This strategically enables WordPress.com to be more of a one-stop shop for business needs. People are often reluctant to change their email providers, so Titan has the effect of making WordPress.com’s products a stickier subscription that would require some effort to reproduce elsewhere.

“We need an alternative to Google and Microsoft, which have started to monopolize email,” Mullenweg told Bloomberg. “Of about 6 billion email accounts in the world, only a fraction are small business email accounts and they need a product that’s focused on their needs,” he said.

After just two years, Titan has more than 100,000 small business customers. In addition to its relationships with WordPress.com, HostGator, NameSilo, and other web providers, Titan aims to grow its customer base by partnering with popular hosting companies, domain registrars, and site builders.

Is WordPress Development Really All That Hard To Get Into Today?

Oh, how easily we forget the WordPress of 10, 15 years ago.

We are spoiled. We are spoiled by a glut of documentation and tutorials, a wealth of knowledge created over more than a decade. We are spoiled by our own expertise, built in our more vigorous youth, now sitting on our haunches as we have aged along with our beloved platform.

We have grown to become the proverbial grumpy old men. “Back in my day, we didn’t need all these fancy tools to help us write code. We pulled ourselves up by our bootstraps and built everything from scratch.”

I kid. Sort of. I count myself among the old-school developers who helped build the WordPress that so many are still nostalgic about — I think I have earned the right to joke about myself. They were “simpler” times but not really.

Having been in the community as long as I have, I can remember the backlash each time a new feature landed. I recall the days when documentation was practically non-existent for pretty much everything.

Lately, there has been a growing conversation around the difficulty of overcoming WordPress’s current barrier to entry for developers. This has been an ongoing discussion for a few years now, but the latest flare-up comes on the heels of a tweet by Chris Wiegman:

The deeper I get with modern WP dev the more I understand why newer devs don’t like to work on it. This is not the same project as it was in the past. The learning curve is now extremely high regardless of past experience.

I built my first block plugin in a few hours about a month ago. Writing about the experience, I said the barrier to entry was much higher than when I had built my first plugin in 2007. Having had the time to sit back and think about that, I am not sure it was a fair statement. We tend to view the past through rose-colored glasses while forgetting the real struggle.

What I had wanted was to build the plugin in 30 minutes. Had everything been in PHP, that would have been an easy feat for me. Objectively, I am an expert (or close enough) in the language. However, my JavaScript knowledge is 10 years behind.

It had been a while since I had been challenged in that way. That was a distressing experience for someone who had become comfortable in his own skills.

I griped about the docs. But, let’s be honest. WordPress has never had the sort of deep documentation that could teach a budding developer everything. I know this because I have written at least a couple hundred tutorials in my career. Nearly every time, I dug into the project’s source code to make sense of it, which allowed me to teach other developers how to work with various features. And many other developers in the space did the same.

In time, WordPress.org added more robust developer documentation, but this was not built overnight. It is a constantly evolving project.

I also built my first block type with vanilla JavaScript. No build tools. No React docs open. Just plain ol’ JS code in my editor. I needed to crawl before I could walk, and getting that first iteration of the code into a workable state was necessary before I jumped into anything more complex.

In the days after, I re-coded it all to use more modern JavaScript and compiled it with webpack. A week after that, I built a second block plugin with more advanced features.

Was it hard? Definitely. Was the barrier to entry higher than when I first developed plugins? Probably. Truthfully, I did not struggle as much, but I am also at a different point in my life. At 37, I no longer have quite as much drive, and I likely have less capacity for picking up new skills as quickly as in my late teens and early 20s. However, I have a strong foundation and enough experience to overcome some of the hurdles I encountered.

Would a 20-year-old me struggle with this JavaScript landscape more than a strictly PHP-based WordPress? I doubt it. Both had huge learning curves for someone new.

Someone’s first introduction to Subversion or Composer can be just as scary as their initial dive into webpack and npm. For a fresh mind, an open canvas that has yet to be painted with over a decade of doing things the “WordPress way,” I am unsure if the barrier to entry is so much higher.

For us old-schoolers, our world has been flipped upside down. There is no denying that. The Gutenberg project, which is at the core of nearly every new WordPress feature, moves so fast that it is next to impossible to keep up with while also upping your skills. It is easy to get overwhelmed. When this happens to me, I usually take a step back and return when I have had a chance to rest my mind.

Contributing to the WordPress ecosystem has always had one barrier or another. Whether it be the privilege of time, knowledge of PHP, or some other skill, the project has left some people out. That is changing in some ways. Some parts are now available to users that were never accessible before. This is easiest to see from the theming side of things.

“I wish people would see that theme development is heading the opposite way,” tweeted Carolina Nymark. “The entry barrier for designers and new developers will be lower. When people get stuck saying, ‘But I can’t use my hooks in a block theme,’ it is because they are looking at what exists today, not ahead.”

Having spent more time on the theming side of the block editor than plugin development, I agree wholeheartedly. Theme authors have been given a clean slate, or at least by the time block-based themes are supported in core WordPress, this will be true.

While I could write ad nauseam on the details of how theme development itself is leaps and bounds better, the revolutionary part is how the system welcomes those who had no entryway in the past.

Alongside version 5.8, WordPress.org opened the first iteration of its pattern directory. Soon, any user will be able to contribute custom block patterns without writing a single line of code. They can simply create layouts from the editor, copy them, and share them with others.

When the site editor lands, it will once again change the game. Non-coders will have the power to essentially create entire front-end designs without any preexisting programming knowledge.

If WordPress must become more complex for developers to provide end-users this much power, I can live with that.

The highest barrier to entry — as it has always been — is contributing directly to WordPress. Or at least contributing to the block side of things via Gutenberg.

The Getting Started With Code Contribution section of the Block Editor Handbook is a dizzying list of installation notes and procedures that can be off-putting to even the most seasoned developer. Because just about everything is a third-party tool, any trouble you run into just setting up your system is likely to land you in support forums or chatrooms outside of WordPress. Even moving past setup, contributing code to Gutenberg is unlike the days of yore.

What is lacking is the history. We had a decade and a half to perfect our systems for classic WordPress. It was often ugly and brutal building the platform and the ecosystem around it to a point where it was a comfortable space for developers. We have had only three years with modern WordPress; it will take time before it feels as natural as what came before.

I am ever the optimist, hoping that in another 15 years’ time, we are having these same discussions about the new technology stack that WordPress 10.0 has introduced. In the meantime, I look forward to seeing our documentation evolve, our developer community expanding its skillset, and new WordPressers coming along for the journey.

Continued Reading

In this discussion, there are no right or wrong answers. The conversation matters because it enriches our knowledge and informs how we build the next version of WordPress and the web.

The following are links related to this topic that helped inform my thoughts. Each is worth a read, listen, or viewing. If I missed any that others have published, feel free to link them in the comments.

How to Fine-Tune BERT Transformer With spaCy v3.0

Since the seminal paper “Attention Is All You Need” by Vaswani et al., transformer models have become by far the state of the art in NLP technology. With applications ranging from NER and text classification to question answering and text generation, the possibilities of this amazing technology are limitless.

More specifically, BERT — which stands for Bidirectional Encoder Representations from Transformers — leverages the transformer architecture in a novel way. For example, BERT analyzes both sides of a sentence around a randomly masked word in order to predict it. In addition to predicting the masked token, BERT is trained on sentence-order prediction: a classification token [CLS] is added at the beginning of the first sentence and a separator token [SEP] between the two sentences, and the model tries to predict whether the second sentence follows the first.
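To make that layout concrete, here is a schematic sketch in TypeScript, purely for illustration (real BERT pipelines do this with subword tokenizers, typically in Python):

// Hypothetical sketch: how a sentence pair is laid out for BERT's two
// pre-training tasks (masked-token prediction and next-sentence prediction).
const sentenceA = ["the", "cat", "sat", "on", "the", "mat"];
const sentenceB = ["it", "purred", "happily"];

// [CLS] opens the pair; [SEP] separates and terminates the sentences.
const tokens = ["[CLS]", ...sentenceA, "[SEP]", ...sentenceB, "[SEP]"];

// Randomly mask one ordinary token for the model to predict.
const maskable = tokens
  .map((t, i) => ({ t, i }))
  .filter(({ t }) => t !== "[CLS]" && t !== "[SEP]");
const { i } = maskable[Math.floor(Math.random() * maskable.length)];
const masked = [...tokens];
masked[i] = "[MASK]";

console.log(masked.join(" "));
// e.g. "[CLS] the cat sat on the [MASK] [SEP] it purred happily [SEP]"
// BERT predicts the masked token, and the [CLS] position is used to
// classify whether sentence B actually follows sentence A.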

Choice Words about the Upcoming Deprecation of JavaScript Dialogs

It might be the very first thing a lot of people learn in JavaScript:

alert("Hello, World");

One day at CodePen, we woke up to a ton of customer support tickets about their Pens being broken, which ultimately boiled down to Chrome shipping a version that ripped out alert() from functioning in cross-origin iframes. Along with all the other native “JavaScript dialogs” like confirm(), prompt(), and I-don’t-know-what-else (onbeforeunload?, .htpasswd-protected assets?).

Cross-origin iframes are essentially the heart of how CodePen works. You write code, and we execute it for you in an iframe that doesn’t share the same domain as CodePen itself, as the very first line of security defense. We didn’t hear any heads up or anything, but I’m sure the plans were on display.

I tweeted out of dismay. I get that there are potential security concerns here. JavaScript dialogs look the same whether they are triggered by an iframe or not, so apparently it’s confusing-at-best when they’re triggered by an iframe, particularly a cross-origin iframe where the parent page likely has little control. Well, outside of, ya know, a website like CodePen. Chrome cites performance concerns as well, as the nature of these JavaScript dialogs is that they block the main thread when open, which essentially halts everything.

There are all sorts of security and UX-annoyance issues that can come from iframes though. That’s why sandboxing is a thing. I can do this:

<iframe sandbox></iframe>

And that sucker is locked down. If some form tried to submit something in there: nope, won’t work. What if it tries to trigger a download? Nope. Ask for device access? No way. It can’t even load any JavaScript at all. That is unless I let it:

<iframe sandbox="allow-scripts allow-downloads ...etc"></iframe>

So why not an attribute for JavaScript dialogs? Ironically, there already is one: allow-modals. I’m not entirely sure why that isn’t good enough, but as I understand it, nuking JavaScript dialogs in cross-origin iframes is just a stepping stone on the ultimate goal: removing them from the web platform entirely.

Daaaaaang. Entirely? That’s the word. Imagine the number of programming tutorials that will just be outright broken.

For now, even the cross-origin removal is delayed until January 2022, but as far as we know this is going to proceed, and then subsequent steps will happen to remove them entirely. This is spearheaded by Chrome, but the status reports that both Firefox and Safari are on board with the change. Plus, this is a specced change, so I guess we can waggle our fingers literally everywhere here, if you, like me, feel like this wasn’t particularly well-handled.

From what we’ve been told so far, the solution is to use postMessage if you really, absolutely need to keep this functionality for cross-origin iframes. That sends the string the user passes to window.alert up to the parent page, which triggers the alert from there. I’m not the biggest fan here, because:

  1. postMessage is not blocking like JavaScript dialogs are. This changes application flow.
  2. I have to inject code into users’ code for this. This is new technical debt, and it can break expectations about the output (e.g. an extra <script> in their HTML has weird implications, like changing what :nth-child and friends select).
  3. I’m generally concerned about passing anything user-generated to a parent to execute. I’m sure there are theoretical ways to do it safely, but XSS attack vectors are always surprising in their ingenuity.

Even lower-key suggestions, like window.alert = console.log, have essentially the same issues.
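For the record, here is a minimal sketch of what such a postMessage shim might look like; the "codepen:alert" message type is made up for this example, and a real version would need origin checks and escaping:

// Inside the cross-origin iframe: forward alert() calls to the parent.
window.alert = (message?: any): void => {
  window.parent.postMessage({ type: "codepen:alert", message: String(message) }, "*");
};

// On the parent page: listen for the message and trigger the real dialog.
window.addEventListener("message", (event: MessageEvent) => {
  // In production you'd verify event.origin against the expected iframe origin.
  if (event.data && event.data.type === "codepen:alert") {
    alert(String(event.data.message));
  }
});

Note that, per concern #1 above, the overridden alert() returns immediately instead of blocking, which can change application flow.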

Allow me to hand the mic over to others for their opinions.

Couldn’t the alert be contained to the iframe instead of showing up in the parent window?

Jaden Baptista, Twitter

Yes, please! Doesn’t that solve a big part of this? While making the UX of these dialogs more useful? Put the dang dialogs inside the <iframe>.

“Don’t break the web.” to “Don’t break 90% of the web.” and now “Don’t break the web whose content we agree with.”

Matthew Phillips, Twitter

I respect the desire to get rid of inelegant parts [of the HTML spec] that can be seen as historical mistakes and that cause implementation complexity, but I can’t shake the feeling that the existing use cases are treated with very little respect or curiosity.

Dan Abramov, Twitter

It’s weird to me this is part of the HTML spec, not the JavaScript spec. Right?!

I always thought there was a sort of “prime directive” not to break the web? I’ve literally seen web-based games that used alert as a “pause”, leveraging the blocking nature as a feature. Like: <button onclick="alert('paused')">Pause</button>[.] Funny, but true.

Ben Lesh, Twitter

A metric was cited that only 0.006% of all page views contain a cross-origin iframe that uses these functions, yet:

Seems like a misleading metric for something like confirm(). E.g. if account deletion flow is using confirm() and breaks because of a change to it, this doesn’t mean account deletion flow wasn’t important. It just means people don’t hit it on every session.

Dan Abramov, Twitter

That’s what’s extra concerning to me: alert() is one thing, but confirm() literally returns true or false, meaning it is a logical control structure in a program. Removing that breaks websites, no question. Chris Ferdinandi showed me a little obscure website that uses it.

Speaking of Chris:

The condescending “did you actually read it, it’s so clear” refrain is patronizing AF. It’s the equivalent of “just” or “simply” in developer documentation.

I read it. I didn’t understand it. That’s why I asked someone whose literal job is communicating with developers about changes Chrome makes to the platform.

This is not isolated to one developer at Chrome. The entire message thread where this change was surfaced is filled with folks begging Chrome not to move forward with this proposal because it will break all-the-things.

Chris Ferdinandi, “Google vs. the web”

And here’s Jeremy:

[…] breaking changes don’t happen often on the web. They are—and should be—rare. If that were to change, the web would suffer massively in terms of predictability.

Secondly, the onus is not on web developers to keep track of older features in danger of being deprecated. That’s on the browser makers. I sincerely hope we’re not expected to consult a site called canistilluse.com.

Jeremy Keith, “Foundations”

I’ve painted a pretty bleak picture here. To be fair, there were some tweets with the Yes!! Finally!! vibe, but they didn’t feel like critical assessments to me as much as random Google cheerleading.

Believe it or not, I generally am a fan of Google and think they do a good job of pushing the web forward. I also think it’s appropriate to waggle fingers when I see problems and request they do better. “Better” here means way more developer and user outreach to spell out the situation, way more conversation about the potential implications and transition ideas, and way more openness to bending the course ahead.



Move Fast, Break Things & Win: How Facebook Builds Software

Founder, professional poker player, podcaster, and author Jeff Meyerson “broke his brain” to bring you the inside story of Facebook.

Interviewing more than two dozen Facebook engineers, Jeff spent two and a half years writing his new book Move Fast: How Facebook Builds Software. Written from Facebook’s own view of its software strategy and tactics, Move Fast explores the product strategy, cultural principles, and technologies that made Facebook the dominant social networking company.

gPass: A Password Generator CLI

Yes, it’s true: I created my own password generator CLI tool and published it as an npm package. It’s only ~12 kB, easy to use, and open source.

You can find the package here.

Serverless Kafka in a Cloud-Native Data Lake Architecture

Apache Kafka became the de facto standard for processing data in motion. Kafka is open, flexible, and scalable. Unfortunately, the latter makes operations a challenge for many teams. Ideally, teams can use a serverless Kafka SaaS offering to focus on business logic. However, hybrid scenarios require a cloud-native platform that provides automated and elastic tooling to reduce the operations burden. This blog post explores how to leverage cloud-native and serverless Kafka offerings in a hybrid cloud architecture. We start from the perspective of data at rest with a data lake and explore its relation to data in motion with Kafka.

Data at Rest - Still the Right Approach?

Data at rest means storing data in a database, data warehouse, or data lake. In many use cases, this means the data is processed too late - even if a real-time streaming component (like Kafka) ingests it. The data processing is still a web service call, SQL query, or map-reduce batch process away from providing a result to your problem.

Microservices vs Monolith: The Ultimate Comparison 2021

Microservices are something new that has hit the software market. In the emerging trend of microservices, debates over microservices vs monoliths are inevitable. The microservice architecture provides tangible benefits like scalability and flexibility and is a cost-effective way to build large applications. Tech giants like Netflix, Amazon, and Oracle implement microservices in one application or another. On the contrary, the monolithic approach is losing its value as it poses risks to current software delivery methodologies. Before we get to the final comparison of microservices vs monoliths, let’s look at both architectures one by one.

1. What Are Microservices?

Microservices are small, independently deployable services that together model a complex application. The microservice architecture is essentially a newer take on service-oriented architecture (SOA). Services communicate with each other using different techniques, and they also have the advantage of being technology agnostic.

How TypeScript Enums Work

Enums, short for enumerations, are preset constants that a developer can define for use elsewhere in the code.

For JavaScript developers, the concept of enums is usually new, but they are relatively straightforward to understand. Enums add context to what we’re writing.
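As a minimal sketch of the idea, here are the two common flavors (the enum and function names are invented for this example):

// A numeric enum: members auto-increment from 0 unless initialized.
enum Direction {
  Up,    // 0
  Down,  // 1
  Left,  // 2
  Right, // 3
}

// A string enum: every member must be initialized explicitly.
enum LogLevel {
  Debug = "DEBUG",
  Error = "ERROR",
}

function log(message: string, level: LogLevel): void {
  console.log(`[${level}] ${message}`);
}

// The enum name documents intent at the call site:
log("disk almost full", LogLevel.Error); // "[ERROR] disk almost full"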

Guide to Verification Emails – Best Practices and Examples

Email validation scares entrepreneurs away because it sounds like something that requires extraordinary technical skills. That is not entirely true. Even though it is a complex mechanism that includes a series of sophisticated tests and algorithms, it is straightforward …


How AI Can Eliminate Coupon Frauds

The use of online coupons and vouchers has changed the outlook on online shopping. Rarely sighted a few years ago, digital coupons have become an integral part of the eCommerce industry. Whatever website or application you are using, whether it is an online mart or an appliance store, a restaurant or an online boutique, these vouchers will greet you everywhere.

The purpose of these vouchers is to increase customer traffic on a company’s websites and applications. E-commerce sites use coupons to attract more people to their websites and to drive downloads of their mobile apps. Discounts are offered for placing a first order, referring friends, and in many other ways designed to turn users into permanent customers.

5 Optimistic Ways That Artificial Intelligence Is Revolutionizing Mental Healthcare

Indeed, William Gibson stated it well: “The future is already here; it’s just not very evenly distributed.”

Revolutionary artificial intelligence algorithms are creeping into mental healthcare and reshaping its dimensions. You might already be discussing the question “how does it make you feel to hear that?” with an AI bot right now. Your AI therapist might even be good enough to ease your worry about what direction the future will take with the advent of artificial intelligence. Looking beyond the horrifying headlines of Skynet coming true, the progressive use of artificial intelligence in mental healthcare is splendid news for many of us.

HTTP/3 From A To Z: Core Concepts (Part 1)

You may have read some blog posts or heard conference talks on this topic and think you know the answers. You’ve probably heard things like: “HTTP/3 is much faster than HTTP/2 when there is packet loss”, or “HTTP/3 connections have less latency and take less time to set up”, and probably “HTTP/3 can send data more quickly and can send more resources in parallel”.

These statements and articles typically skip over some crucial technical details, are lacking in nuance, and usually are only partially correct. Often they make it seem as if HTTP/3 is a revolution in performance, while it is really a more modest (yet still useful!) evolution. This is dangerous, because the new protocol will probably not be able to live up to these high expectations in practice. I fear this will lead to many people ending up disappointed and to newcomers being confused by heaps of blindly perpetuated misinformation.

I’m afraid of this because we’ve seen exactly the same happen with HTTP/2. It was heralded as an amazing performance revolution, with exciting new features such as server push, parallel streams, and prioritization. We would have been able to stop bundling resources, stop sharding our resources across multiple servers, and heavily streamline the page-loading process. Websites would magically become 50% faster with the flip of a switch!

Five years later, we know that server push doesn’t really work in practice, streams and prioritization are often badly implemented, and, consequently, (reduced) resource bundling and even sharding are still good practices in some situations.

Similarly, other mechanisms that tweak protocol behavior, such as preload hints, often contain hidden depths and bugs, making them difficult to use correctly.

As such, I feel it is important to prevent this type of misinformation and these unrealistic expectations from spreading for HTTP/3 as well.

In this article series, I will discuss the new protocol, especially its performance features, with more nuance. I will show that, while HTTP/3 indeed has some promising new concepts, sadly, their impact will likely be relatively limited for most web pages and users (yet potentially crucial for a small subset). HTTP/3 is also quite challenging to set up and use (correctly), so take care when configuring the new protocol.

This series is divided into three parts:

  1. HTTP/3 history and core concepts
    This is targeted at people new to HTTP/3 and protocols in general, and it mainly discusses the basics.
  2. HTTP/3 performance features (coming up soon!)
    This is more in depth and technical. People who already know the basics can start here.
  3. Practical HTTP/3 deployment options (coming up soon!)
    This explains the challenges involved in deploying and testing HTTP/3 yourself. It details how and if you should change your web pages and resources as well.

This series is aimed mainly at web developers who do not necessarily have a deep knowledge of protocols and would like to learn more. However, it does contain enough technical details and many links to external sources to be of interest to more advanced readers as well.

Why Do We Need HTTP/3?

One question I’ve often encountered is, “Why do we need HTTP/3 so soon after HTTP/2, which was only standardized in 2015?” This is indeed strange, until you realize that we didn’t really need a new HTTP version in the first place, but rather an upgrade of the underlying Transmission Control Protocol (TCP).

TCP is the main protocol that provides crucial services such as reliability and in-order delivery to other protocols such as HTTP. It’s also one of the reasons we can keep using the Internet with many concurrent users, because it smartly limits each user’s bandwidth usage to their fair share.

Did You Know?

When using HTTP(S), you’re really using several protocols besides HTTP at the same time. Each of the protocols in this “stack” has its own features and responsibilities (see image below). For example, while HTTP deals with URLs and data interpretation, Transport Layer Security (TLS) ensures security by encryption, TCP enables reliable data transport by retransmitting lost packets, and Internet Protocol (IP) routes packets from one endpoint to another across different devices in between (middleboxes).

This “layering” of protocols on top of one another is done to allow easy reuse of their features. Higher-layer protocols (such as HTTP) don’t have to reimplement complex features (such as encryption) because lower-layer protocols (such as TLS) already do that for them. As another example, most applications on the Internet use TCP internally to ensure that all of their data are transmitted in full. For this reason, TCP is one of the most widely used and deployed protocols on the Internet.

TCP has been a cornerstone of the web for decades, but it started to show its age in the late 2000s. Its intended replacement, a new transport protocol named QUIC, differs enough from TCP in a few key ways that running HTTP/2 directly on top of it would be very difficult. As such, HTTP/3 itself is a relatively small adaptation of HTTP/2 to make it compatible with the new QUIC protocol, which includes most of the new features people are excited about.

QUIC is needed because TCP, which has been around since the early days of the Internet, was not really built with maximum efficiency in mind. For example, TCP requires a “handshake” to set up a new connection. This is done to ensure that both client and server exist and that they’re willing and able to exchange data. It also, however, takes a full network round trip to complete before anything else can be done on a connection. If the client and server are geographically distant, each round-trip time (RTT) can take over 100 milliseconds, incurring noticeable delays.

As a second example, TCP sees all of the data it transports as a single “file” or byte stream, even if we’re actually using it to transfer several files at the same time (for example, when downloading a web page consisting of many resources). In practice, this means that if TCP packets containing data of a single file are lost, then all other files will also get delayed until those packets are recovered.

This is called head-of-line (HoL) blocking. While these inefficiencies are quite manageable in practice (otherwise, we wouldn’t have been using TCP for over 30 years), they do affect higher-level protocols such as HTTP in a noticeable way.

Over time, we’ve tried to evolve and upgrade TCP to improve some of these issues and even introduce new performance features. For example, TCP Fast Open gets rid of the handshake overhead by allowing higher-layer protocols to send data along from the start. Another effort is called MultiPath TCP. Here, the idea is that your mobile phone typically has both Wi-Fi and a (4G) cellular connection, so why not use them both at the same time for extra throughput and robustness?

It’s not terribly difficult to implement these TCP extensions. However, it is extremely challenging to actually deploy them at Internet scale. Because TCP is so popular, almost every connected device has its own implementation of the protocol on board. If these implementations are too old, lack updates, or are buggy, then the extensions won’t be practically usable. Put differently, all implementations need to know about the extension in order for it to be useful.

This wouldn’t be much of a problem if we were only talking about end-user devices (such as your computer or web server), because those can relatively easily be updated manually. However, many other devices are sitting between the client and the server that also have their own TCP code on board (examples include firewalls, load balancers, routers, caching servers, proxies, etc.).

These middleboxes are often more difficult to update and sometimes more strict in what they accept. For example, if the device is a firewall, it might be configured to block all traffic containing (unknown) extensions. In practice, it turns out that an enormous number of active middleboxes make certain assumptions about TCP that no longer hold for the new extensions.

Consequently, it can take years, or even more than a decade, before enough (middlebox) TCP implementations become updated to actually use the extensions on a large scale. You could say that it has become practically impossible to evolve TCP.

As a result, it was clear that we would need a replacement protocol for TCP, rather than a direct upgrade, to resolve these issues. However, due to the sheer complexity of TCP’s features and their various implementations, creating something new but better from scratch would be a monumental undertaking. As such, in the early 2010s it was decided to postpone this work.

After all, there were issues not only with TCP, but also with HTTP/1.1. We chose to split up the work and first “fix” HTTP/1.1, leading to what is now HTTP/2. When that was done, the work could start on the replacement for TCP, which is now QUIC. Originally, we had hoped to be able to run HTTP/2 on top of QUIC directly, but in practice this would make implementations too inefficient (mainly due to feature duplication).

Instead, HTTP/2 was adjusted in a few key areas to make it compatible with QUIC. This tweaked version was eventually named HTTP/3 (instead of HTTP/2-over-QUIC), mainly for marketing reasons and clarity. As such, the differences between HTTP/1.1 and HTTP/2 are much more substantial than those between HTTP/2 and HTTP/3.

Takeaway

The key takeaway here is that what we needed was not really HTTP/3, but rather “TCP/2”, and we got HTTP/3 “for free” in the process. The main features we’re excited about for HTTP/3 (faster connection set-up, less HoL blocking, connection migration, and so on) are really all coming from QUIC.

What Is QUIC?

You might be wondering why this matters. Who cares if these features are in HTTP/3 or QUIC? I feel this is important, because QUIC is a generic transport protocol which, much like TCP, can and will be used for many use cases in addition to HTTP and web page loading. For example, DNS, SSH, SMB, RTP, and so on can all run over QUIC. As such, let’s look at QUIC a bit more in depth, because it’s here where most of the misconceptions about HTTP/3 that I’ve read come from.

One thing you might have heard is that QUIC runs on top of yet another protocol, called the User Datagram Protocol (UDP). This is true, but not for the (performance) reasons many people claim. Ideally, QUIC would have been a fully independent new transport protocol, running directly on top of IP in the protocol stack shown in the image I shared above.

However, doing that would have led to the same issue we encountered when trying to evolve TCP: All devices on the Internet would first have to be updated in order to recognize and allow QUIC. Luckily, we can build QUIC on top of the one other broadly supported transport-layer protocol on the Internet: UDP.

Did You Know?

UDP is the most bare-bones transport protocol possible. It really doesn’t provide any features, besides so-called port numbers (for example, HTTP uses port 80, HTTPS is on 443, and DNS employs port 53). It does not set up a connection with a handshake, nor is it reliable: If a UDP packet is lost, it is not automatically retransmitted. UDP’s “best effort” approach thus means that it’s about as performant as you can get: There’s no need to wait for the handshake and there’s no HoL blocking. In practice, UDP is mostly used for live traffic that updates at a high rate and thus suffers little from packet loss because missing data is quickly outdated anyway (examples include live video conferencing and gaming). It’s also useful for cases that need low up-front delay; for example, DNS domain name lookups really should only take a single round trip to complete.
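A minimal Node.js sketch (the port and payload are arbitrary, chosen for this example) shows just how bare-bones this is: no handshake, no acknowledgement, and no retransmission if the datagram never arrives:

import dgram from "node:dgram";

// A toy receiver bound to an arbitrary local port.
const receiver = dgram.createSocket("udp4");
receiver.on("message", (msg, rinfo) => {
  console.log(`got "${msg}" from ${rinfo.address}:${rinfo.port}`);
});
receiver.bind(9999);

// The sender: no connection set-up, no delivery guarantee.
// send() fires the datagram and moves on; if it's lost, nothing retransmits it.
const sender = dgram.createSocket("udp4");
sender.send(Buffer.from("hello"), 9999, "127.0.0.1", () => sender.close());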

Many sources claim that HTTP/3 is built on top of UDP because of performance. They say that HTTP/3 is faster because, just like UDP, it doesn’t set up a connection and doesn’t wait for packet retransmissions. These claims are wrong. As we’ve said above, UDP is used by QUIC and, thus, HTTP/3 mainly because the hope is that it will make them easier to deploy, because it is already known to and implemented by (almost) all devices on the Internet.

On top of UDP, then, QUIC essentially reimplements almost all features that make TCP such a powerful and popular (yet somewhat slower) protocol. QUIC is absolutely reliable, using acknowledgements for received packets and retransmissions to make sure lost ones still arrive. QUIC also still sets up a connection and has a highly complex handshake.

Finally, QUIC also uses so-called flow-control and congestion-control mechanisms that prevent a sender from overloading the network or the receiver, but that also make TCP slower than what you could do with raw UDP. The key thing is that QUIC implements these features in a smarter, more performant way than TCP. It combines decades of deployment experience and best practices of TCP with some core new features. We will discuss these features in more depth later in this article.

Takeaway

The key takeaway here is that there is no such thing as a free lunch. HTTP/3 isn’t magically faster than HTTP/2 just because we swapped TCP for UDP. Instead, we’ve reimagined and implemented a much more advanced version of TCP and called it QUIC. And because we want to make QUIC easier to deploy, we run it over UDP.

The Big Changes

So, how exactly does QUIC improve upon TCP, then? What is so different? There are several new concrete features and opportunities in QUIC (0-RTT data, connection migration, more resilience to packet loss and slow networks) that we will discuss in detail in the next part of the series. However, all of these new things basically boil down to four main changes:

  1. QUIC deeply integrates with TLS.
  2. QUIC supports multiple independent byte streams.
  3. QUIC uses connection IDs.
  4. QUIC uses frames.

Let’s take a closer look at each of these points.

There Is No QUIC Without TLS

As mentioned, TLS (the Transport Layer Security protocol) is in charge of securing and encrypting data sent over the Internet. When you use HTTPS, your plaintext HTTP data is first encrypted by TLS, before being transported by TCP.

Did You Know?

TLS’s technical details, luckily, aren’t really necessary here; you just need to know that encryption is done using some pretty advanced math and very large (prime) numbers. These mathematical parameters are negotiated between the client and the server during a separate TLS-specific cryptographic handshake. Just like the TCP handshake, this negotiation can take some time. In older versions of TLS (say, version 1.2 and lower), this typically takes two network round trips. Luckily, newer versions of TLS (1.3 is the latest) reduce this to just one round trip. This is mainly because TLS 1.3 severely limits the different mathematical algorithms that can be negotiated to just a handful (the most secure ones). This means that the client can just immediately guess which ones the server will support, instead of having to wait for an explicit list, saving a round trip.

In the early days of the Internet, encrypting traffic was quite costly in terms of processing. Additionally, it was also not deemed necessary for all use cases. Historically, TLS has thus been a fully separate protocol that can optionally be used on top of TCP. This is why we have a distinction between HTTP (without TLS) and HTTPS (with TLS).

Over time, our attitude towards security on the Internet has, of course, changed to “secure by default”. As such, while HTTP/2 can, in theory, run directly over TCP without TLS (and this is even defined in the RFC specification as cleartext HTTP/2), no (popular) web browser actually supports this mode. In a way, the browser vendors made a conscious trade-off for more security at the cost of performance.

Given this clear evolution towards always-on TLS (especially for web traffic), it is no surprise that the designers of QUIC decided to take this trend to the next level. Instead of simply not defining a cleartext mode for HTTP/3, they elected to ingrain encryption deeply into QUIC itself. While the first Google-specific versions of QUIC used a custom set-up for this, standardized QUIC uses the existing TLS 1.3 itself directly.

For this, it sort of breaks the typical clean separation between protocols in the protocol stack, as we can see in the previous image. While TLS 1.3 can still run independently on top of TCP, QUIC instead sort of encapsulates TLS 1.3. Put differently, there is no way to use QUIC without TLS; QUIC (and, by extension, HTTP/3) is always fully encrypted. Furthermore, QUIC encrypts almost all of its packet header fields as well; transport-layer information (such as packet numbers, which are never encrypted for TCP) is no longer readable by intermediaries in QUIC (even some of the packet header flags are encrypted).

For all this, QUIC first uses the TLS 1.3 handshake more or less as you would with TCP to establish the mathematical encryption parameters. After this, however, QUIC takes over and encrypts the packets itself, whereas with TLS-over-TCP, TLS does its own encryption. This seemingly small difference represents a fundamental conceptual change towards always-on encryption that is enforced at ever lower protocol layers.

This approach provides QUIC with several benefits:

  1. QUIC is more secure for its users.
    There is no way to run cleartext QUIC, so there are also fewer options for attackers and eavesdroppers to listen in on. (Recent research has shown how dangerous HTTP/2’s cleartext option can be.)
  2. QUIC’s connection set-up is faster.
    While for TLS-over-TCP, both protocols need their own separate handshakes, QUIC instead combines the transport and cryptographic handshake into one, saving a round trip (see image above). We’ll discuss this in more detail in part 2 (coming soon!).
  3. QUIC can evolve more easily.
    Because it is fully encrypted, middleboxes in the network can no longer observe and interpret its inner workings like they can with TCP. Consequently, they also can no longer break (accidentally) in newer versions of QUIC because they failed to update. If we want to add new features to QUIC in the future, we “only” have to update the end devices, instead of all of the middleboxes as well.

Next to these benefits, however, there are also some potential downsides to extensive encryption:

  1. Many networks will hesitate to allow QUIC.
    Companies might want to block it on their firewalls, because detecting unwanted traffic becomes more difficult. ISPs and intermediate networks might block it because metrics such as average delays and packet loss percentages are no longer easily available, making it more difficult to detect and diagnose problems. This all means that QUIC will probably never be universally available, which we’ll discuss more in part 3 (coming soon!).
  2. QUIC has a higher encryption overhead.
    QUIC encrypts each individual packet with TLS, whereas TLS-over-TCP can encrypt several packets at the same time. This potentially makes QUIC slower for high-throughput scenarios (as we’ll see in part 2 (coming soon!)).
  3. QUIC makes the web more centralized.
    A complaint I’ve encountered often is something like, “QUIC is being pushed by Google because it gives them full access to the data while sharing none of it with others”. I mostly disagree with this. First, QUIC doesn’t hide more (or less!) user-level information (for example, which URLs you are visiting) from outside observers than TLS-over-TCP does (QUIC keeps the status quo).

Secondly, while Google initiated the QUIC project, the final protocols we’re talking about today were designed by a much wider team in the Internet Engineering Task Force (IETF). IETF’s QUIC is technically very different from Google’s QUIC. Still, it is true that the people in the IETF are mostly from larger companies like Google and Facebook and CDNs like Cloudflare and Fastly. Due to QUIC’s complexity, it will be mainly those companies that have the necessary know-how to correctly and performantly deploy, for example, HTTP/3 in practice. This will probably lead to more centralization in those companies, which is a real concern.

On A Personal Note:

This is one of the reasons I write these types of articles and do a lot of technical talks: to make sure more people understand the protocol’s details and can use them independently of these big companies.

Takeaway

The key takeaway here is that QUIC is deeply encrypted by default. This not only improves its security and privacy characteristics, but also helps its deployability and evolvability. It makes the protocol a bit heavier to run but, in return, allows other optimizations, such as faster connection establishment.

QUIC Knows About Multiple Byte Streams

The second big difference between TCP and QUIC is a bit more technical, and we will explore its repercussions in more detail in part 2 (coming soon!). For now, though, we can understand the main aspects in a high-level way.

Did You Know?

Consider first that even a simple web page is made up of a number of independent files and resources. There’s HTML, CSS, JavaScript, images, and so on. Each of these files can be seen as a simple “binary blob” — a collection of zeroes and ones that are interpreted in a certain way by the browser. When sending these files over the network, we don’t transfer them all at once. Instead, they are subdivided into smaller chunks (typically, of about 1400 bytes each) and sent in individual packets. As such, we can view each resource as being a separate “byte stream”, as data is downloaded or “streamed” piecemeal over time.

For HTTP/1.1, the resource-loading process is quite simple, because each file is given its own TCP connection and downloaded in full. For example, if we have files A, B, and C, we would have three TCP connections. The first would see a byte stream of AAAA, the second BBBB, the third CCCC (with each letter repetition being a TCP packet). This works but is also very inefficient because each new connection has some overhead.

In practice, browsers impose limits on how many concurrent connections may be used (and thus how many files may be downloaded in parallel) — typically, between 6 and 30 per page load. Connections are then reused to download a new file once the previous has fully transferred. These limits eventually started to hinder web performance on modern pages, which often load many more than 30 resources.

Improving this situation was one of the main goals for HTTP/2. The protocol does this by no longer opening a new TCP connection for each file, but instead downloading the different resources over a single TCP connection. This is achieved by “multiplexing” the different byte streams. That’s a fancy way of saying that we mix data of the different files when transporting it. For our three example files, we would get a single TCP connection, and the incoming data might look like AABBCCAABBCC (although many other ordering schemes are possible). This seems simple enough and indeed works quite well, making HTTP/2 typically just as fast or a bit faster than HTTP/1.1, but with much less overhead.

Let’s take a closer look at the difference.
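As a toy sketch (invented for this article, not how any real HTTP/2 implementation schedules frames): HTTP/1.1 would send AAAA, BBBB, and CCCC over three separate connections, while a single multiplexed connection interleaves the files’ chunks round-robin:

// Toy illustration of multiplexing: interleave chunks of three "files"
// over one connection, the way HTTP/2 frames might be scheduled.
const files: Record<string, string[]> = {
  A: ["AA", "AA"],
  B: ["BB", "BB"],
  C: ["CC", "CC"],
};

function multiplex(streams: Record<string, string[]>): string[] {
  const wire: string[] = [];
  let remaining = true;
  while (remaining) {
    remaining = false;
    for (const name of Object.keys(streams)) {
      const chunk = streams[name].shift();
      if (chunk !== undefined) {
        wire.push(chunk); // one "packet" from this stream per round
        remaining = true;
      }
    }
  }
  return wire;
}

console.log(multiplex(files).join("")); // "AABBCCAABBCC"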

However, there is a problem on the TCP side. You see, because TCP is a much older protocol and not made for just loading web pages, it doesn’t know about A, B, or C. Internally, TCP thinks it’s transporting just a single file, X, and it doesn’t care that what it views as XXXXXXXXXXXX is actually AABBCCAABBCC at the HTTP level. In most situations, this doesn’t matter (and it actually makes TCP quite flexible!), but that changes when there is, for example, packet loss on the network.

Suppose the third TCP packet is lost (the one containing the first data for file B), but all of the other data are delivered. TCP deals with this loss by retransmitting a new copy of the lost data in a new packet. This retransmission can, however, take a while to arrive (at least one RTT). You might think that’s not a big problem, as we see there is no loss for resources A and C. As such, we can start processing them while waiting for the missing data for B, right?

Sadly, that’s not the case, because the retransmission logic happens at the TCP layer, and TCP does not know about A, B, and C! TCP instead thinks that a part of the single X file has been lost, and thus it feels it has to keep the rest of X’s data from being processed until the hole is filled. Put differently, while at the HTTP/2 level, we know that we could already process A and C, TCP does not know this, causing things to be slower than they potentially could be. This inefficiency is an example of the “head-of-line (HoL) blocking” problem.

Solving HoL blocking at the transport layer was one of the main goals of QUIC. Unlike TCP, QUIC is intimately aware that it is multiplexing multiple, independent byte streams. It, of course, doesn’t know that it’s transporting CSS, JavaScript, and images; it just knows that the streams are separate. As such, QUIC can perform packet loss detection and recovery logic on a per-stream basis.

In the scenario above, it would only hold back the data for stream B, and unlike TCP, it would deliver any data for A and C to the HTTP/3 layer as soon as possible. (This is illustrated below.) In theory, this could lead to performance improvements. In practice, however, the story is much more nuanced, as we’ll discuss in part 2 (coming soon!).
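To illustrate, here is a toy model (heavily simplified, invented for this article) of per-stream reassembly: a hole in stream B stalls only B, while A and C are delivered in full:

// Toy model: packets tagged with (stream, seq, data); packet (B, 0) was lost.
type Packet = { stream: string; seq: number; data: string };

const received: Packet[] = [
  { stream: "A", seq: 0, data: "AA" }, { stream: "A", seq: 1, data: "AA" },
  { stream: "B", seq: 1, data: "BB" }, // B's seq 0 is missing!
  { stream: "C", seq: 0, data: "CC" }, { stream: "C", seq: 1, data: "CC" },
];

// QUIC-style delivery: each stream is reassembled independently, so a hole
// in one stream never holds back data from the others.
function deliverable(packets: Packet[]): Record<string, string> {
  const out: Record<string, string> = {};
  for (const name of Array.from(new Set(packets.map((p) => p.stream)))) {
    const chunks = packets
      .filter((p) => p.stream === name)
      .sort((a, b) => a.seq - b.seq);
    let data = "";
    let expected = 0;
    for (const p of chunks) {
      if (p.seq !== expected) break; // hole: stall *this* stream only
      data += p.data;
      expected += 1;
    }
    out[name] = data;
  }
  return out;
}

console.log(deliverable(received)); // { A: "AAAA", B: "", C: "CCCC" }
// TCP, by contrast, sees one byte stream and would stall everything after
// the first hole, delaying A's and C's later data as well.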

We can see that we now have a fundamental difference between TCP and QUIC. This is, incidentally, also one of the main reasons why we can’t just run HTTP/2 as is over QUIC. As we said, HTTP/2 also includes a concept of running multiple streams over a single (TCP) connection. As such, HTTP/2-over-QUIC would have two different and competing stream abstractions on top of one another.

Making them work together nicely would be very complex and error-prone; so, one of the key differences between HTTP/2 and HTTP/3 is that the latter removes the HTTP stream logic and reuses QUIC streams instead. As we’ll see in part 2 (coming soon!), though, this has other repercussions in how features such as server push, header compression, and prioritization are implemented.

Takeaway

The key takeaway here is that TCP was never designed to transport multiple, independent files over a single connection. Because that is exactly what web browsing requires, this has led to many inefficiencies over the years. QUIC solves this by making multiple byte streams a core concept at the transport layer and handling packet loss on a per-stream basis.

QUIC Supports Connection Migration

The third major improvement in QUIC is the fact that connections can stay alive longer.

Did You Know?

We often use the concept of a “connection” when talking about web protocols. However, what exactly is a connection? Typically, people speak of a TCP connection once there has been a handshake between two endpoints (say, the browser or client and the server). This is why UDP is often (somewhat misguidedly) said to be “connectionless”, because it doesn’t do such a handshake. However, the handshake is really nothing special: It’s just a few packets with a specific form being sent and received. It has a few goals, main among them being to make sure there is something on the other side and that it’s willing and able to talk to us. It’s worth repeating here that QUIC also performs a handshake, even though it runs over UDP, which by itself doesn’t.

So, the question becomes, how do those packets arrive at the correct destination? On the Internet, IP addresses are used to route packets between two unique machines. However, just having the IPs for your phone and the server isn’t enough, because both want to be able to run multiple networked programs at each end simultaneously.

This is why each individual connection is also assigned a port number on both endpoints to differentiate connections and the applications they belong to. Server applications typically have a fixed port number depending on their function (for example ports 80 and 443 for HTTP(S), and port 53 for DNS), while clients usually choose their port numbers (semi-)randomly for each connection.

As such, to define a unique connection across machines and applications, we need these four things, the so-called 4-tuple: client IP address + client port + server IP address + server port.
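Sketched as a data structure (the addresses below are from documentation ranges, purely illustrative), the identity of a TCP connection is nothing more than:

// A TCP connection is identified purely by this 4-tuple.
type FourTuple = {
  clientIp: string;
  clientPort: number;
  serverIp: string;
  serverPort: number;
};

const onWifi: FourTuple = {
  clientIp: "192.0.2.10", // IP on the Wi-Fi network
  clientPort: 50311,      // (semi-)randomly chosen by the client
  serverIp: "203.0.113.5",
  serverPort: 443,        // fixed, since this is HTTPS
};

// Switch to 4G and the client IP changes; to TCP this is a *different*
// connection, even though nothing else changed.
const onCellular: FourTuple = { ...onWifi, clientIp: "198.51.100.7" };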

In TCP, connections are identified by just the 4-tuple. So, if just one of those four parameters changes, the connection becomes invalid and needs to be re-established (including a new handshake). To understand this, imagine the parking-lot problem: You are currently using your smartphone inside of a building with Wi-Fi. As such, you have an IP address on this Wi-Fi network.

If you now move outside, your phone might switch to the cellular 4G network. Because this is a new network, it will get a completely new IP address, because those are network-specific. Now, the server will see TCP packets coming in from a client IP that it hasn’t seen before (although the two ports and the server IP could, of course, stay the same). This is illustrated below.

But how can the server know that these packets from a new IP belong to the “connection”? How does it know these packets don’t belong to a new connection from another client in the cellular network that chose the same (random) client port (which can easily happen)? Sadly, it cannot know this.

Because TCP was invented before we were even dreaming of cellular networks and smartphones, there is, for example, no mechanism that allows the client to let the server know it has changed IPs. There isn’t even a way to “close” the connection, because a TCP reset or FIN sent to the old 4-tuple wouldn’t even reach the client anymore. As such, in practice, every network change means that existing TCP connections can no longer be used.

A new TCP (and possibly TLS) handshake has to be executed to set up a new connection, and, depending on the application-level protocol, in-process actions would need to be restarted. For example, if you were downloading a large file over HTTP, then that file might have to be re-requested from the start (for example, if the server doesn’t support range requests). Another example is live video conferencing, where you might have a short blackout when switching networks.

Note that there are other reasons why the 4-tuple might change (for example, NAT rebinding), which we’ll discuss more in part 2 (coming soon!).

Restarting the TCP connections can thus have a severe impact (waiting for new handshakes, restarting downloads, re-establishing context). To solve this problem, QUIC introduces a new concept named the connection identifier (CID). Each connection is assigned another number on top of the 4-tuple that uniquely identifies it between two endpoints.

Crucially, because this CID is defined at the transport layer in QUIC itself, it doesn’t change when moving between networks! This is shown in the image below. To make this possible, the CID is included at the front of each and every QUIC packet (much like how the IP addresses and ports are also present in each packet). (It’s actually one of the few things in the QUIC packet header that aren’t encrypted!)

With this set-up, even when one of the things in the 4-tuple changes, the QUIC server and client only need to look at the CID to know that it’s the same old connection, and then they can keep using it. There is no need for a new handshake, and the download state can be kept intact. This feature is typically called connection migration. This is, in theory, better for performance, but, as we will discuss in part 2 (coming soon!), it’s, of course, a nuanced story again.

There are other challenges to overcome with the CID. For example, if we would indeed use just a single CID, it would make it extremely easy for hackers and eavesdroppers to follow a user across networks and, by extension, deduce their (approximate) physical locations. To prevent this privacy nightmare, QUIC changes the CID every time a new network is used.

That might confuse you, though: Didn’t I just say that the CID is supposed to be the same across networks? Well, that was an oversimplification. What really happens internally is that the client and server agree on a common list of (randomly generated) CIDs that all map to the same conceptual “connection”.

For example, they both know that CIDs K, C, and D in reality all map to connection X. As such, while the client might tag packets with K on Wi-Fi, it can switch to using C on 4G. These common lists are exchanged fully encrypted in QUIC, so potential attackers won’t know that K and C are really X, but the client and server do, and they can keep the connection alive.
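
Put differently, the endpoints share a private, many-to-one mapping. A toy illustration of that idea (the CID values are obviously made up):

```python
# Toy illustration: several CIDs, agreed upon over the encrypted
# connection, all refer to the same conceptual connection. An on-path
# observer only ever sees unrelated-looking identifiers.
cid_to_connection = {b"K": "connection-X",
                     b"C": "connection-X",
                     b"D": "connection-X"}

# The client tags packets with K on Wi-Fi, then switches to C on 4G...
assert cid_to_connection[b"K"] == cid_to_connection[b"C"] == "connection-X"
# ...and only the two endpoints can tell that both belong to connection X.
```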

It gets even more complex, because clients and servers each have different lists of CIDs that they choose themselves (much like they have different port numbers). This is mainly to support routing and load balancing in large-scale server set-ups, as we’ll see in more detail in part 3 (coming soon!).

Takeaway

The key takeaway here is that in TCP, connections are defined by four parameters that can change when endpoints change networks. As such, these connections sometimes need to be restarted, leading to some downtime. QUIC adds another parameter to the mix, called the connection ID. Both the QUIC client and server know which connection IDs map to which connections, which makes QUIC much more robust against network changes.

QUIC Is Flexible and Evolvable

A final aspect of QUIC is that it’s specifically made to be easy to evolve. This is accomplished in several different ways. First, as discussed, the fact that QUIC is almost fully encrypted means that we only need to update the endpoints (clients and servers), and not all middleboxes, if we want to deploy a newer version of QUIC. That still takes time, but typically on the order of months, not years.

Secondly, unlike TCP, QUIC does not use a single fixed packet header to send all protocol metadata. Instead, QUIC has short packet headers and uses a variety of “frames” (kind of like miniature specialized packets) inside the packet payload to communicate extra information. There is, for example, an ACK frame (for acknowledgements), a NEW_CONNECTION_ID frame (to help set up connection migration), and a STREAM frame (to carry data).

This is mainly done as an optimization, because not every packet carries all possible metadata (and so the TCP packet header usually wastes quite a few bytes). A very useful side effect of using frames, however, is that defining new frame types as extensions to QUIC will be very easy in the future. A very important one, for example, is the DATAGRAM frame, which allows unreliable data to be sent over an encrypted QUIC connection.
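
To give a feel for frame-based parsing, here is a simplified sketch. The frame type values are the real QUIC version 1 ones, but the naive (type, length, body) layout is an assumption for illustration; real QUIC uses variable-length integers and per-type layouts:

```python
# Simplified sketch: a QUIC packet payload is a sequence of typed frames.
ACK, STREAM, NEW_CONNECTION_ID = 0x02, 0x08, 0x18  # real QUIC v1 type values

def parse_frames(payload: bytes):
    frames, offset = [], 0
    while offset < len(payload):
        # Naive 1-byte type + 1-byte length encoding, for illustration only.
        ftype, length = payload[offset], payload[offset + 1]
        frames.append((ftype, payload[offset + 2 : offset + 2 + length]))
        offset += 2 + length
    return frames

payload = bytes([ACK, 1, 42]) + bytes([STREAM, 3]) + b"abc"
for ftype, body in parse_frames(payload):
    print(hex(ftype), body)  # 0x2 b'*', then 0x8 b'abc'
```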

Thirdly, QUIC uses a custom TLS extension to carry what are called transport parameters. These allow the client and server to choose a configuration for a QUIC connection. This means they can negotiate which features are enabled (for example, whether to allow connection migration, which extensions are supported, etc.) and communicate sensible defaults for some mechanisms (for example, maximum supported packet size, flow control limits). While the QUIC standard defines a long list of these, it also allows extensions to define new ones, again making the protocol more flexible.
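
Conceptually, the transport parameters act like a per-connection configuration exchanged during the handshake. A hedged sketch follows (the parameter names match RFC 9000, but the values and the "negotiation" logic are illustrative only):

```python
# Illustrative sketch: each endpoint advertises its own limits during the
# handshake, and each side must then honor what the *peer* advertised.
client_params = {"max_udp_payload_size": 1350,
                 "initial_max_data": 1_000_000,
                 "disable_active_migration": False}
server_params = {"max_udp_payload_size": 1472,
                 "initial_max_data": 10_000_000,
                 "disable_active_migration": False}

# The client may not send more than the server's flow-control limit, and
# connection migration is only possible if neither side forbids it.
client_send_limit = server_params["initial_max_data"]
migration_allowed = not (client_params["disable_active_migration"] or
                         server_params["disable_active_migration"])
print(client_send_limit, migration_allowed)  # 10000000 True
```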

Lastly, while not a real requirement of QUIC by itself, most implementations are currently done in “user space” (as opposed to TCP, which is usually done in “kernel space”). The details are discussed in part 2 (coming soon!), but this mainly means that it’s much easier to experiment with and deploy QUIC implementation variations and extensions than it is for TCP.

Takeaway

While QUIC has now been standardized, it should really be regarded as QUIC version 1 (which is also clearly stated in the Request For Comments (RFC)), and there is a clear intent to create version 2 and beyond fairly quickly. On top of that, QUIC allows for the easy definition of extensions, so even more use cases can be implemented.

Conclusion

Let’s summarize what we’ve learned in this part. We have mainly talked about the omnipresent TCP protocol and how it was designed in a time when many of today’s challenges were unknown. As we tried to evolve TCP to keep up, it became clear this would be difficult in practice, because almost every device has its own TCP implementation on board that would need to be updated.

To bypass this issue while still improving TCP, we created the new QUIC protocol (which is really TCP 2.0 under the hood). To make QUIC easier to deploy, it is run on top of the UDP protocol (which most network devices also support), and to make sure it can evolve in the future, it is almost entirely encrypted by default and makes use of a flexible framing mechanism.

Other than this, QUIC mostly mirrors known TCP features, such as the handshake, reliability, and congestion control. The two main changes besides encryption and framing are the awareness of multiple byte streams and the introduction of the connection ID. These changes were, however, enough to prevent us from running HTTP/2 on top of QUIC directly, necessitating the creation of HTTP/3 (which is really HTTP/2-over-QUIC under the hood).

QUIC’s new approach opens the door to a number of performance improvements, but their potential gains are more nuanced than typically communicated in articles on QUIC and HTTP/3. Now that we know the basics, we can discuss these nuances in more depth in the next part of this series. Stay tuned!

How to Add Live Ajax Search to Your WordPress Site (The Easy Way)

Do you want to add live Ajax search to your WordPress site?

Adding instant search to WordPress improves the default site search and makes it easier for your visitors to find the pages and posts they’re looking for.

In this article, we’ll show you how to add live Ajax search to your WordPress site, step by step.

Why Add Live Ajax Search to WordPress?

Live Ajax search, also called instant search, improves the default WordPress search experience by adding the drop-down autocomplete feature that’s common in search engines like Google.

Live search guesses what users are searching for as they type and helps them find relevant content faster. This is a huge improvement over the default WordPress search.
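
Under the hood, the idea is simply to re-run a lightweight query against your content on every keystroke and show the top matches. Here is a toy Python sketch of that concept (purely illustrative; it is not how SearchWP or WordPress implement it, which is via Ajax requests to your site’s search index):

```python
# Toy sketch of the instant-search idea: match the partial query against
# titles on every keystroke. Purely conceptual, not SearchWP's internals.
posts = ["How to Install a WordPress Plugin",
         "How to Add a Search Bar to Your WordPress Menu",
         "Best Webinar Software for Small Businesses"]

def live_search(partial_query: str, limit: int = 5):
    q = partial_query.lower()
    return [title for title in posts if q in title.lower()][:limit]

print(live_search("sea"))     # suggestions appear as the visitor types
print(live_search("search"))  # and are refined with every character
```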

By helping users find what they’re looking for quickly, live search will help them stay on your site longer, which can increase pageviews and reduce bounce rate.

That being said, let’s take a look at how you can simply add live Ajax search to your WordPress blog or website.

Adding Ajax Search to WordPress with a WordPress Plugin

The easiest way to add Ajax live search to WordPress is by using the SearchWP plugin. It’s the best WordPress search plugin on the market, used by over 30,000 websites.

This plugin goes beyond indexing post content and will index everything on your website, like custom fields, PDF documents, text files, WooCommerce products, and more.

For this tutorial, you can use the free SearchWP Live Ajax Lite Search plugin, since it automatically enables Ajax live search.

The first thing you need to do is install and activate the plugin. For more details, see our step by step guide on how to install a WordPress plugin.

Upon activation, the default WordPress search form will automatically include the Ajax live search feature.

Displaying Ajax Live Search on Your WordPress Site

Since any active search bar now has live search, all you have to do is decide where you want it to appear.

Below you’ll learn how to add the live search bar to common locations on your WordPress website.

Adding Live Ajax Search to WordPress Sidebar

One of the most popular areas to add a search bar is the WordPress sidebar. This makes it easy for your visitors to do a search no matter where they are on your website.

To add the search widget to WordPress, simply go to Appearance » Widgets to bring up the block-based widget editor.

Each widget area of your WordPress theme will have a separate tab in the block editor.

On our test site, our sidebar widget area is called ‘Right Sidebar,’ but yours may have a different name depending on your theme.

Simply click the ‘+’ icon underneath the sidebar section.

Then, type ‘SearchWP’ into the search bar and click the ‘SearchWP Live Search’ icon.

This will automatically insert the live Ajax search widget into your sidebar.

You can customize the Title section to change the heading for the search box. When you’re finished, click the ‘Update’ button to save your changes and make the search bar live.

Now your live search bar will be usable by all of your website visitors.

You can follow the same process to add the live search bar to any other widget area of your website. To add it to your navigation menu area, see our guide on how to add a search bar to your WordPress menu.

Adding Live Ajax Search to WordPress Pages

You may also want to add a live Ajax search box to other pages of your website. For example, you could have an archive page that lets your visitors search through your content.

To do this, you’ll need to navigate to the post or page you want to edit. For this example, we’ll show you how to add the live search bar to a WordPress page.

First, go to Pages » All Pages and then click on the page you want to edit.

Once the page is open, click the ‘+’ icon on the page editor screen.

This will bring up the blocks menu.

Next, type ‘Search’ into the box and then click on the ‘Search’ icon to add it to your page.

It will automatically place the search bar for you.

You can also customize the search title and the placeholder text inside the search box.

After that, make sure to click the ‘Update’ button in the upper right corner of your page.

Now, your visitors can use the live search bar on your website to quickly find what they’re looking for.

You can use the same process to add a search bar to any post or page.

Customizing Instant WordPress Search Results

SearchWP is a very versatile search plugin. It goes beyond just post content and indexes everything on your site, including custom fields, ACF fields, text files, PDF document content, custom tables, custom taxonomies, WooCommerce product attributes, and more.

The pro version of the SearchWP plugin lets you completely customize your search results by creating your own relevance scale and adjusting the algorithm without writing any code.

You also get the Metrics feature that lets you see what your visitors are searching for, along with many other powerful features to further improve your on-site search.

We hope this article helped you learn how to add live Ajax search to your WordPress site. You may also want to see our guide on how to choose the best domain name registrar, and our expert picks of the best webinar software for small businesses.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.
