Creating the Perfect Contact Form with Forminator

Contact forms not only let visitors get in touch with you, they are also essential for lead generation. With our free 5-star Forminator plugin, adding a contact form to your WordPress site is easy, as this guide will show you!

This tutorial will demonstrate how to set up the perfect contact form, making it possible for visitors to reach you (and for you to reach back out to them) in just a few clicks.

Here are the areas that we’ll be covering:

  1. Create Contact Forms Faster Using Time-Saving Templates
  2. Customize Forms to Suit Your Needs by Adjusting Their Appearance
  3. Control the Behavior of Your Form for Maximum Performance
  4. Set Up Email Notifications for Instant Alerts
  5. Advance Forms Further by Integrating with Popular 3rd Party Apps
  6. Control Your Data Storage to Your Standards in Settings
  7. Separate a Form Perfectly with Pagination
  8. Preview and Easily Implement Your Form Using Shortcodes

A new form is a cinch with Forminator. Let’s get started!

1. Create Contact Forms Faster Using Time-Saving Templates

With Forminator’s premade templates, just click Contact Form, name it, hit Create, and it will be set up for you immediately.

Templates in Forminator.
Click this button, and let’s create a contact form for your site.

This form comes with all the essentials a simple contact form needs: First Name, Email Address, Phone Number, and Message fields, plus a Send Message button.

Contact form fields.
Use the default fields to instantly create a basic contact form.

You can add more fields and edit accordingly. However, if the default serves your purpose, just click Publish, and Forminator delivers a shortcode for use on any WordPress page, post, or shortcode-enabled widget.

Creating a contact form with Forminator is quick, simple, and easy!

Forminator automatically syncs with your admin email, so when users opt in, you’ll get notified.

You now have a form that’s ready to collect information and be used on your website. It’s that simple.

2. Customize Forms to Suit Your Needs by Adjusting Their Appearance

Forminator gives you a lot of flexibility when it comes to adjusting the appearance of your contact form with colors, fonts, custom CSS, and more.

The Design Style area comes with several premade styles.

The design style area.
Pick Forminator’s premade designs or jazz it up with some CSS.

You can set the Design Style to Default, Flat, Bold, or Material, or you can opt for None.

Pick the style that works best for your contact form.

From here, you can change the colors in the Colors section, customizing the form to fit your website’s color palette perfectly.

Adjust the colors on many elements of the form, such as the form container, submission indicator, field basics, and more. Or use the default colors.

Various color options.
Color options are endless when it comes to customizing your form.

Click on an individual element to change its colors.

Add custom colors to any form element.

Similar to the colors, you can adjust the typography at many levels in the Fonts section. Forminator can inherit your theme’s fonts, or you can customize each one.

The fonts area.
Match your brand’s fonts throughout the entire form.

Any of these fonts can be overridden with custom ones from Google Fonts. You can also change the font size and weight.

Choose from hundreds of fonts.

You can also customize your form container’s border and padding in the Form Container area.

Beyond that, you can adjust the border’s radius, thickness, and style. In the Spacing area, you can choose between Comfortable, Compact, or Custom spacing; the latter lets you enter your values in pixels.

The form container where you can adjust spacing.
Need more space? The border and spacing can be customized however you’d like.

You can also add custom CSS for more advanced modifications.

3. Control the Behavior of Your Form for Maximum Performance

Your contact form’s behavior is how your form functions while a user is filling it out and when the user hits submit.

Being able to edit and modify key points any way you’d like puts you in control of your contact form’s behavior.

To start, you can adjust the Submission Behavior: display an Inline Message, Redirect to a specific URL, or Hide Form upon submission.

You can also customize the message and set the time (in seconds) after which the success message (which indicates the form was submitted successfully) auto-closes.

Submission behavior section.
Add a personalized message to your users.

Forminator also lets you choose the submission Method: use AJAX to send the form without reloading the page, or submit with a full page reload.

The Validation setting applies to fields you’ve chosen to validate and determines how those checks behave: run them as the user submits the form using AJAX, or run them server-side using PHP and return any error messages after a page reload.
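
To make the distinction concrete, here is a generic sketch of the AJAX approach in JavaScript (this is illustrative only, not Forminator’s actual code; the form ID and the showInlineMessage helper are hypothetical):

    // Intercept the submit event so the browser never reloads the page.
    document.querySelector('#contact-form').addEventListener('submit', async (event) => {
      event.preventDefault(); // stop the normal full-page submission
      // Post the form data in the background; WordPress plugins commonly
      // route such requests through admin-ajax.php or the REST API.
      const response = await fetch('/wp-admin/admin-ajax.php', {
        method: 'POST',
        body: new FormData(event.target),
      });
      const result = await response.json();
      showInlineMessage(result); // hypothetical helper that renders the success or error text
    });

With server-side (PHP) validation, by contrast, the form posts normally: the page reloads and any error messages are printed back into the markup.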

There is also a Submission Indicator option that displays a loader while the form is being submitted.

The method of submission including validation and submission indicator.
Keep users in the loop by letting them know that the form will be submitted.

There are also options for enabling Autofill, Security (Honeypot protection and logged-in submissions only), and the Lifespan of a form.

Lifespan allows your form to expire on a specific date or after a certain number of submissions.

The Autofill area.
Make it easy to submit a form with Autofill.

4. Set Up Email Notifications for Instant Alerts

Customize email notifications for your admin and site visitors in just a few clicks with Email Notifications.

You can create emails that get delivered to your admin team or any specified email addresses when a form is submitted.

There are also advanced options for CC emails, BCC emails, and Reply-to emails. Plus, you can add conditions and rules based on user input for additional customization.

Where you add email notifications.
Get a notification the way that you want it when a new form is submitted.

5. Advance Forms Further by Integrating with Popular 3rd Party Apps

You can send a specific form’s data to a connected 3rd party app, such as MailChimp, FortressDB, Slack, and more. It’s all done right in Forminator’s dashboard.

Forminator will show you a list of the connected apps you have available; from there, hit the plus sign and Activate App to connect one with your form.

The Integrations area.
All contacts from a form can go directly into MailChimp.

Each app has its own prompts to get it synchronized and will walk you through exactly how to do it. Once finished, your form will sync with the app and work accordingly.

You can connect and disconnect from any 3rd party app at any time.

3rd party apps are perfect for continuing marketing through email providers, storing information in the cloud, and much more.

6. Control Your Data Storage to Your Standards in Settings

In Settings, you can control Data Storage by choosing whether submissions are stored in your database. By default, all submissions are stored unless you disable this option.

In the Privacy settings, you can also set how long to retain a form’s submissions.

Finally, there are options for handling account erasure requests and for keeping submitted files if the form gets deleted.

Settings area with data storage and privacy settings.
Data storage controls have never been more straightforward.

Edit data storage in Forminator’s global privacy settings at any time.

7. Separate a Form Perfectly with Pagination

Add pagination to your form to break it up into individual sections. Pagination keeps the layout from looking cluttered, which is especially helpful when a form has a lot of required fields.

Simply add the Page Break field, and Pagination will appear on top. You can then insert page breaks in between various fields and separate the form however you’d like.

You can adjust the Progress Indicator, Buttons Text, and the Labels.

Once completed, your users will go step-by-step through the form.

Use pagination to break up complex forms.

8. Preview and Easily Implement Your Form Using Shortcodes

Does your form look good? Hit Preview any time to get a glimpse of it and try it out…

Preview and test your new contact form.

As mentioned at the beginning of this article, once you hit Publish on your new contact form, Forminator provides you with a shortcode that you can paste into WordPress pages, posts, or shortcode-enabled widgets to display your form.
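
The shortcode itself is a single line. A Forminator form shortcode looks something like [forminator_form id="123"], where the ID (a hypothetical value here) identifies the specific form you built.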

A look at Forminator's shortcode.
Forminator’s shortcode gets the “thumbs-up”.

You can always access the shortcode in Forminator’s admin. You can also edit and adjust your form at any time, and the changes will automatically propagate throughout your site wherever you have added the form via its shortcode.

And Like That, It Forms

With all the elements in place, you can have your contact form up and running in no time.

The perfect contact form contains just the fields required to serve its purpose. With Forminator, you can adjust a form exactly how you want to ensure it works perfectly for your WordPress site.

You can read more about constructing contact forms and about getting started with them.

Speaking of contact forms, be sure to use ours to reach out to our 24/7 support team superheroes if you ever have any questions and need to contact us.

And to keep tabs on what’s coming up with Forminator, check out our Roadmap any time.

JSHint is Now Free Software after Updating License to MIT Expat

The world of open source tooling has expanded to welcome JSHint, as the project’s maintainers have finally completed the necessary work to adopt the MIT Expat license. Previously, the JavaScript linter’s code was partially published under the JSON license, with an additional seemingly innocuous clause that stated: “The Software shall be used for Good, not Evil.” This clause prevented it from being recognized by the FSF as a free software license, and it was similarly not recognized as open source by the Open Source Initiative.

In an essay titled Watching the Ship Sink, JSHint co-maintainer Mike Pennisi describes how the license hurt the project. Despite earning the distinction of being the most popular JavaScript linter in 2015, the tool has been brutally outpaced over the past five years by its contemporary, ESLint, largely due to the effects of its non-free licensing.

credit: Mike Pennisi

“Legally-conscious objectors aren’t betraying their own dastardly motivations; they’re refusing to enter into an ambiguous contract,” Pennisi said. “Put differently: they’re not saying, ‘I’m an evildoer,’ they’re saying, ‘I don’t understand what you want.’ This consideration disqualified JSHint from inclusion in all sorts of contexts.”

Licensing concerns prevented developers from the Debian and Fedora GNU/Linux distributions from including JSHint. Pennisi even dips into a bit of WordPress history, detailing how programming platforms that “repackaged” JSHint also reconsidered due to its additional clause.

“There was a time when the popular content management system WordPress repackaged JSHint in this way,” he said. “Once they learned of the JSON license, they replaced JSHint in a matter of weeks.” Pennisi referenced a ticket for WordPress 4.9 wherein JSHint was removed from core’s implementation of CodeMirror, as well as WordPress’ build tools.

“When a project like JSHint loses users, it also loses contributors,” Pennisi said. “This slows the addition of new features and the correction of bugs. Timeliness is important for these things, and people perceive delays very negatively. The best example of this comes from JSHint’s delayed support for async functions.”

JSHint had become what Pennisi described as a “bizarrely-encumbered JavaScript linter.” Unfortunately, the process of going open source after seven years was not as simple as submitting a pull request for a license change. In a series of essays, he unfolds the grueling process of requesting permission from all of the project’s 200+ contributors, only to receive one refusal, with several contributors unreachable. Ultimately, the JSHint team was forced to rewrite the parts of the source code contributed by the five people who had not granted permission for the license change.

At the beginning of August, JSHint adopted the MIT Expat license in version 2.12.0 and is now GPL-compatible. Pennisi’s cautionary tale of what he called “the liberation of JSHint” is a fascinating read that details the struggle of escaping the project’s original license. The key takeaway from this story is that software creators should strongly consider the ramifications of licensing up front, even if a large community of users seems unimaginable at first. Open source licensing takes a project further than its creator could ever have brought it alone.

“For many people, licensing is an esoteric part of software development,” Pennisi said. “It’s a relatable opinion: the legal frameworks are intimidating, and most considerations can be addressed by simply defaulting to well-known free/open-source licenses.

“The trouble is that not all software is distributed under well-known free/open-source licenses. My hope is that the particulars of JSHint’s decay help folks understand why licensing matters.”

Top 7 Security Measures for IoT Systems

The Internet of Things (IoT) is based on the concept of providing users remote access from anywhere in the world to acquire data and to operate computers and other devices. A widespread IoT network includes computing devices along with otherwise unrelated machines that are solely responsible for transferring data, without requiring human-to-computer or human-to-human involvement.

The proliferation of smart devices in diverse sectors such as energy, finance, and government makes it imperative to focus on their security standards. According to the security firm Kaspersky, close to one-third (28%) of companies managing IoT systems were threatened with attacks impacting their internet-connected devices during 2019. Furthermore, almost 61% of organizations actively make use of IoT platforms, enhancing the overall scope for IoT security in the coming years.

8 Best Big Data Tools in 2020

In today’s reality, the data gathered by a company is a fundamental source of information for any business. Unfortunately, it is not that easy to derive valuable insights from it.

The problems all data scientists deal with are the sheer amount of data and its structure. Data has no value unless we process it, and to do so, we need big data software that helps us transform and analyze it.

Chapter 2: Browsers

Previously in web history…

Sir Tim Berners-Lee creates the technologies behind the web — HTML, HTTP, and the URL, which blend hypertext with the Internet — with a small team at CERN. He convinces the higher-ups in the organization to put the web in the public domain so anyone can use it.

Dennis Ritchie had a problem.

He was working on a new, world-class operating system. He and a few other colleagues were building it from the ground up to be simple, clean, and versatile. It needed to run anywhere, and it needed to be fast.

Ritchie worked at Bell Labs. A hotbed of innovation in the 60s and 70s, Bell employed some of the greatest minds in telecommunications. While there, Ritchie had worked on a time-sharing project known as Multics. He was fiercely passionate about what he saw as the future of computing. Still, after years of development and little to show for it, Bell eventually dropped the project. But Ritchie and a few of his colleagues refused to let the dream go. They transformed Multics into a new operating system adaptable and extendable enough to be used for networked time sharing. They called it Unix.

Ritchie’s problem was with Unix’s software. More precisely, his problem was with the language the software ran on. He had been writing most of Unix in assembly code, quite literally feeding paper tape into the computer, the way it was done in the earliest days of computing. Programming directly in assembly — being “close to the metal” as some programmers refer to it — made Unix blazing fast and memory efficient. The process, on the other hand, was laborious and prone to errors.

Ritchie’s other option was to use B, an interpreted programming language developed by his co-worker Ken Thompson. B was much simpler to code with, several steps abstracted from the bare metal. However, it lacked features Ritchie felt were crucial. B also suffered under the weight of its own design; it was slow to execute and lacked the resilience needed for time-sharing environments.

Ritchie’s solution was to choose neither. Instead, he created a compiled programming language with many of the same features as B, but with more access to the kinds of things you could expect from assembly code. That language is called C.

By the time Unix shipped, it had been fully rewritten in C, and the programming language came bundled with every operating system that ran on top of it, which, as it turned out, was a lot of them. As more programmers tried C, they adapted to it quickly. It blended, some might say perfectly, abstract functions and methods for creating predictable software patterns with the ability to get right down to the metal if needed. It isn’t prescriptive, but it doesn’t leave you completely lost. Saron Yitbarek, host of the Command Line Heroes podcast, describes C as “a nearly universal tool for programming; just as capable on a personal computer as it was on a supercomputer.”

C has been called a Swiss Army language. There is very little it can’t do, and very little that hasn’t been done with it. Computer scientist Bill Dally once said, “It set the tone for the way that programming was done for several decades.” And that’s true. Many of the programming paradigms developed in the latter half of the 20th century originated in C. Compilers were developed beyond Unix, available in every operating system. Rob Pike, a software engineer involved in the development of Unix, and later Go, has a much simpler way of putting it. “C is a desert island language.”

Ritchie had a saying of his own he was fond of repeating: “C has all the elegance and power of assembly language with all the readability and maintainability of… assembly language.” C is not necessarily everyone’s favorite programming language, and there are plenty of problems with it. (C#, created in the early 2000s, was one of many attempts to improve it.) However, as it proliferated out into the world, bundled with Unix-like operating systems such as Linux and Mac OS X, software developers turned to it as a way to speak to one another. It became a kind of common tongue. Even if you weren’t fluent, you could probably understand the language conversationally. And if you needed to bundle up and share some code, C was a great way to do it.

In 1993, Jean-François Groff and Sir Tim Berners-Lee released a package containing all of the technologies of the web. It could be used to build web servers or browsers. They called it libwww, and released it to the public domain. It was written in C.


Think about the first time you browsed the web. That first webpage. Maybe it was a rich experience, filled with images, careful design and content you couldn’t find anywhere else. Maybe it was unadorned, uninteresting, and brief. No matter what that page was, I’d be willing to bet that it had some links. And when you clicked that link, there was magic. Suddenly, a fresh page arrives on your screen. You are now surfing the web. And in that moment you understand what the web is.

Sir Tim Berners-Lee finished writing the first web browser, WorldWideWeb, in the final days of 1990. It ran on his NeXT machine, and had read and write capabilities (the latter of which could be used to manage a homepage on the web). The NeXTcube wasn’t the heaviest computer you’ve ever seen, but it was still a desktop. That didn’t stop Berners-Lee from lugging it from conference to conference so he could plug it in and show people the web.

Again and again, he ran into the same problem. It seems obvious now: demonstrating a globally networked hypertext application is difficult when it runs on a little-used operating system (NeXT), on a not-widely-owned computer (the NeXT Computer System), alone at a conference without the Internet. The real problem came after the demo, with the inevitable question: how can I start using it? The web loses its magic if you can’t connect to the network yourself. It’s entirely useless isolated on a single computer. To make the idea click, Berners-Lee needed to get everybody surfing the web. And he couldn’t very well lend his computer out to anybody who wanted to use it.

That’s where Nicola Pellow came in. An undergraduate at Leicester Polytechnic, Pellow was an intern at CERN. She was assigned to Berners-Lee’s and Cailliau’s team, so they tasked her with building an interoperable browser that could be installed anywhere. The fact that she had no background in programming (she was studying mathematics) and that she was at CERN as part of an internship didn’t concern her much. Within a couple of months she had picked up a bit of C programming and built the Line Mode Browser.

Using the Line Mode Browser today, you would probably feel like a hacker from the 1980s. It was a text-only browser designed to run from a command line terminal. In most cases, just plain white text on a black background, pixels bleeding from edge to edge. Typing out a web address into the browser would bring up that website’s text on the screen. The up and down arrows on a keyboard could be used for navigation. Links were visible as a numbered list, and one could jump from site to site by entering the right number.

It was designed that way for a reason. Its simplicity guaranteed interoperability. The Line Mode Browser holds the unique distinction of being the only browser for many years to be platform-agnostic. It could be installed anywhere, on just about any computer or operating system. It made getting online easy, provided you knew what to do once you installed it. Pellow left CERN a few months after she released the Line Mode Browser. She returned after graduation, and helped build the first Mac browser.

Almost as soon as Pellow left, Berners-Lee and Cailliau wrangled another recruit. Jean-François Groff was working at CERN, one office over. A programmer for years, Groff had written the French translation of The C Programming Language, the official guide by Brian Kernighan and the language’s creator, Dennis Ritchie. He was working on a bit of physics software for UNIX systems when he got a chance to see what Berners-Lee was working on.

Not everybody understood what the web was going for. It can be difficult to grasp without the worldwide picture we have today. Groff was not one of those people. He longed for something just like the web. He understood perfectly what the web could be. Almost as soon as he saw a demo, he requested a transfer to the team.

He noticed one problem right away. “So this line mode browser, it was a bit of a chicken and egg problem,” he once described in an interview, “because to use it, you had to download the software first and install it and possibly compile it.” You had to use the web to download a web browser, but you needed a web browser to use the web. Groff found a clever solution. He built a simple mechanism that allowed users to telnet in to the NeXT server and browse the web using its built-in Line Mode Browser. So anyone in the world could remotely access the web without even needing to install the browser. Once they were able to look around, Groff hoped, they’d be hooked.

But Groff wanted to take it one step further. He came from UNIX systems, and C programming. C is a desert island language. Its versatility makes it invaluable as a one-size-fits-all solution. Groff wanted the web to be a desert island platform. He wanted it to be used in ways he hadn’t even imagined yet, ways that scientists at research institutions couldn’t even fathom. The one medium you could do anything with. To do that, he would need to make the web far more portable.

Working alongside Berners-Lee, Groff began pulling out the essential elements of the NeXT browser and porting them to the C programming language. Groff chose C not only because he was familiar with it, but because he knew most other programmers would be as well. Within a few months, he had built the libwww package (its official title would come a couple of years later). The libwww package was a set of common components for making graphical browsers. Included was the necessary code for parsing HTML, processing HTTP requests and rendering pages. It also provided a starting point for creating browser UI, and tools for embedding browser history and managing graphical windows.

Berners-Lee announced the web to the public for the first time on August 7, 1991. He posted a brief description along with a simple note:

If you’re interested in using the code, mail me. It’s very prototype, but available by anonymous FTP from info.cern.ch. It’s copyright CERN but free distribution and use is not normally a problem.

If you were to email Sir Tim Berners-Lee, he’d send you back the libwww package.

By November of 1992, the library had fully matured into a set of reusable tools. When CERN put the web in the public domain the following year, its terms included the libwww package. By 1993, anyone with a bit of time on their hands and a C compiler could create their own browser.

Before he left CERN to become one of the first web consultants, Groff did one final thing. He created a new mailing list, called www-talk, for a new generation of browser developers to talk shop.


On December 13, 1991 — almost a year after Berners-Lee had put the finishing touches on the first ever browser — Pei-Yuan Wei posted to the www-talk mailing list. After a conversation with Berners-Lee, he had built a browser called ViolaWWW. In a few months, it would be the most popular of the early browsers. In the middle of his post, Wei offhandedly — in a tone that would come off as bragging if it weren’t so sincere — mentioned that the browser build was a one night hack.

A one night hack. Not even Berners-Lee or Pellow could pull that off. Wei continued the post with the reasons he was able to get it up and running so quickly. But that nuance would be lost to history. What programmers would remember is that it only took one day to build a browser. It was “hacked” together and shipped to the world, buggy, but usable. That phrase would set the tone and pace of browser development for at least the next decade. It is arguably the dominant ideology among browser makers today.

The irony is that the opposite was true. ViolaWWW was the product of years of work that simply culminated in a single night. Wei is a great software programmer. But he also had all the pieces he needed before the night even started.

Pei-Yuan Wei has made a few appearances on the frontlines of web history. Apart from the ViolaWWW browser, he was hired by Dale Dougherty to work on an early version of GNN.com, the first commercial website. He was at a meeting of web pioneers the day the idea of the W3C was first discussed. In 2012, he was on the list of witnesses set to testify about the many dangers of the Stop Online Piracy Act (SOPA). Throughout the web’s early history, Wei was a persistent presence.

Wei was a student at UC Berkeley in the early 90s. It was HyperCard that set off his fascination with hypertext software. HyperCard was an application built for the Mac operating system in the late 80s. It allowed its users to create stacks of virtual “cards,” each with a bit of info. Users could then connect these cards however they wanted, and quickly sort, search, and navigate through their stacks. People used it to organize their recipes, replace their Rolodexes, organize research notes, and a million other things. HyperCard is the kind of software that attracts a person who demands a certain level of digital meticulousness, the kind of user who organizes their desktop folders into neat sections and precisely tags their data. This core group of power users manipulated the software using its built-in scripting language, HyperTalk, to extend it to new heights.

Wei had barely glimpsed HyperCard before he knew he needed to use it. But he was on an X-Windows computer, and HyperCard could only run on a Mac. Wei was not to be deterred. Instead of buying a Mac (an expensive but reasonable solution to the problem), Wei began to write software of his own. He even went one step further: Wei began by creating his very own programming language. He called it Viola, and the first thing he built with it was a HyperCard clone.

Wei felt that the biggest limitation of HyperCard — and by extension his own hypertext software — was that it lacked access to a network. What good was data if it was locked up inside of a single computer? By the time he had reached that conclusion, it was nearing the end of 1991, around the time he saw a mention of the World Wide Web. So one night, he took Viola, combined it with libwww, and built a web browser. ViolaWWW was officially released.

ViolaWWW was built so quickly because most of it was already done by the time Wei found out about the web project. The Viola programming language was in the works for a couple of years at that point. It had already been built to accept hyperlinks and hypermedia for the HyperCard clone. It had been built to be extendable to other possible applications. Once Wei was able to pick apart libwww, he ported his software to read HTML, which itself was still a preposterously simple language. And that piece, the final tip of the iceberg, only took him a single night.

ViolaWWW would be the site of a lot of experimentation on the early web. Wei was the first to include an early version of stylesheets. He added a bookmarking function. The browser supported forms and embedded media. In a prescient move, Wei also included downloadable applets, allowing fairly advanced applications to run inside the browser. This became the template for what would eventually become Java applets.

For X-Windows users, ViolaWWW was the most popular browser on the market. Until the next thing came along.


Releasing a browser in the early 90s was almost a rite of passage. It was a useful exercise to download the libwww package and open it up in your text editor. The web wasn’t all that complicated: there was a bit of code for rendering HTML and processing HTTP requests from web servers (or other origins, like FTP or Gopher). Programmers of the web used a browser project as a way of getting familiar with its features. It was kind of like the “Hello World” of the early web.

In June of 1993, there were 130 websites in the entire world. There were easily a dozen browsers to choose from. That’s roughly one browser for every ten websites.

This rapid development of browsers was driven by the nature of innovation in the web community. When Berners-Lee put the web in the public domain, he did more than just give it to the world. He put openness at the center of its ideology. It would take five years — with the release of Netscape — for the web to get its first commercial browser. Until then, the “browser makers” were a small community of programmers talking things out on the www-talk mailing list, trying to make web browsing feel as revolutionary as they wanted it to be.

Some of the earliest projects ported one browser to another operating system. Occasionally, one of the browser makers would spontaneously release something that now feels essential. The first PDF rendering inside of a browser window was a part of the Midas browser. HTML tables were introduced and properly laid out in another called Arena. Tabbed browsing was a prominent feature in InternetWorks. All of these features were developed before 1995.

Most early browsers have faded into obscurity. But the people behind them didn’t. Counted among the earliest browser makers are future employees at Netscape, members of the W3C and the web standards movement, the inventor of cookies (and the blink tag), and the creators of some of the most important websites of the early web.

Of course, no one knew that at the time. To most of the creators, it was simply an exercise in making something cool they could pass along to their Internet friends.


The New York Times introduced its readers to the web on December 8, 1993. “Think of it as a map to the buried treasures of the Information Age,” read the first line. But the “map” the writer was referring to — one he would spend the first half of the article describing — wasn’t the World Wide Web; it was its most popular browser. A browser called Mosaic.

Mosaic was created, in part, by Marc Andreessen. Like many of the early web pioneers, Andreessen is a man of lofty ambition. He is drawn to big ideas and grand statements (he once declared that software is “eating the world”). In college, he was known for being far more talkative than your average software engineer, chatting it up about the next big thing.

Andreessen has had a decades-long passion for technology. Years later, he would capture the imagination of the public with the world’s first commercial browser: Netscape Navigator. He would grace the cover of Time magazine. He would become a cornerstone of Silicon Valley, define its rapid “ship first, think later” ethos for years, and seek and capture his fortune in the world of venture capital.

But Mosaic’s story does not begin with a commanding legend of Silicon Valley overseeing, for better or worse, the future of technology. It begins with a restless college student.

When Sir Tim Berners-Lee posted the initial announcement about the web, about a year before the article in The New York Times, Andreessen was an undergraduate student at the University of Illinois. While he attended school he worked at the university-affiliated computing lab known as the National Center for Supercomputing Applications (NCSA). NCSA occupied a similar space as ARPA in that they both were state-sponsored projects without an explicit goal other than to further the science of computing. If you worked at NCSA, it was possible to move from project to project without arousing too much suspicion from the higher-ups.

Andreessen was supposed to be working on visualization software, which he had found a way to run mostly on auto-pilot. In his spare time, Andreessen would ricochet around the office listening to everyone about what it was they were interested in. It was during one of those sessions that a colleague introduced him to the World Wide Web. He was immediately taken with it. He downloaded the ViolaWWW browser, and within a few days he had decided that the web would be his primary focus. He decided something else, too: he needed to make a browser of his own.

In 1992, browsers could be cumbersome software. They lacked the polish and the conventions of modern browsers, without decades of accumulated lessons to build on. They were difficult to download and install, often requiring users to make modifications to system files. And early browser makers were so focused on developing the web that they didn’t think too much about the visual interface of their software.

Andreessen wanted to build a well-designed, performant, easy-to-install browser while simultaneously building on the features that Wei was adding to the ViolaWWW browser. He pitched his idea to a programmer at NCSA, Eric Bina. “Marc’s a very good salesman,” Bina would later recall, so he joined up.

Taking their cue from the pace of others, Andreessen and Bina finished the first version of the Mosaic browser in just a few weeks. It was available for X Windows computers. To announce the browser, Andreessen posted a download link to the www-talk mailing list, with the message “By the power vested in me by nobody in particular, alpha/beta version 0.5 of NCSA’s Motif-based networked information systems and World Wide Web browser, X Mosaic, is hereby released.” The web got more than just a popular browser. It got its first pitchman.

That first version of the browser was impressive in a somewhat crowded field. To be sure, it had forms and some media support early on. But it wasn’t the best browser, nor was it the most advanced browser. Instead, Andreessen and Bina focused on something else entirely. Mosaic set itself apart because it was the easiest to use. The installation process was simple and the interface was, relatively speaking, intuitive.

The Mosaic browser’s secret weapon was its iteration. Before long, other programmers at NCSA wanted in on the project. They parceled out different operating systems to port the browser to. One team took the Mac, another Windows. By the fall of 1993, a few months after its initial release, Mosaic had feature-parity versions on Mac, Windows, and Unix systems, as well as compatible server software.

After that, the pace of development only accelerated. Beta versions were released often and were available to download via FTP. New features were added at a rapid pace, and new versions seemed to ship every week. The NCSA Mosaic team was fully engaged with the web community, active on the www-talk mailing list, talking with users and gathering bug reports. It was not at all unusual to submit a bug report and hear back a few hours later from an NCSA programmer with a fix.

Andreessen was a particularly active presence, posting to threads almost daily. When the Mosaic team decided they might want to collect anonymous analytics about browser usage, Andreessen polled the www-talk list to see if it was a good idea. When he got a lot of questions about how to use HTML, he wrote a guide for beginners.

When one Mosaic user posted some issues he was having, it led to a tense back and forth between that user and Andreessen. The user claimed he wasn’t a customer, so Andreessen shouldn’t care too much about what he thought. Andreessen replied, “We do care what you think simply because having the wonderful distributed beta team that we essentially have due to this group gives us the opportunity to make our product much better than it could be otherwise.” What Andreessen understood better than any of the early browser makers was that Mosaic was a product, and feedback from his users could drive its development. If they kept the feedback loop tight, they could keep the interface clean and bug-free while staying on the cutting edge of new features. It was the programming parable “given enough eyeballs, all bugs are shallow” come to life in browser development.

There was an electricity to Mosaic development at NCSA. Internal competition fueled the OS teams to get features out the door. Sometimes the Mac version would get to something first. Sometimes it was Bina and Andreessen continuing to work on X-Mosaic. “We would get together, middle of the night, and come up with some cool idea — images was an example of that — then we would go off and race and see who would do it first,” Jon Mittelhauser, creator of the Windows version of Mosaic, later recalled. Sometimes the features were duds and would hardly go anywhere at all. Other times, as Mittelhauser points out, they were absolutely essential.

In the months after launch, they started to surpass the feature list of even their nearest competitor, ViolaWWW. They added forms support and rich media. They added bookmarks for users to keep track of their links. They even created their own “What’s New” page, updated every single day, which tracked the web’s most popular links. When you opened up Mosaic, the NCSA What’s New page was the first thing you saw. They weren’t just building a browser. They were building a window to the web.

As Mittelhauser points out, it was the <img> tag that became Mosaic’s defining feature. It did two things. The tag was added without input from Sir Tim Berners-Lee or the wider web community (Andreessen posted a note to www-talk only after it had already been implemented). So firstly, it set the Mosaic team in conflict with other browser makers and some parts of the web community, a conflict that would last for years.

Secondly, it made Mosaic infinitely more popular. The <img> tag allowed images to be embedded directly inline in the Mosaic browser. Until then, many people had found the web boring to browse. It was sterile, rigid, and scientific. Inline images changed all that. Within a few months, a new class of web designer was beginning to experiment with what was possible with images on the web. In some ways, it was the tag that made the web famous.

The image tag prompted the feature in The New York Times, and a subsequent write-up in Wired. By the time the press got around to talking about the web, Mosaic was the most popular browser and became a surrogate for the larger web world. “Mosaic” was to browsing the web as “Google” is to searching now.

Ultimately, the higher-ups got involved. NCSA was not a tech company; it was a supercomputing lab. They came in to help make the Mosaic browser more cohesive and, maybe, more profitable. Licenses were parceled out to a dozen or so companies. Mosaic was bundled into Spry’s Internet in a Box product. It was embedded in enterprise software by the Santa Cruz Operation.

In the end, Mosaic split off into two directions. Pressure from management pushed Andreessen to leave and start a new company. It would be called Netscape. Another of the licensees of the software was a company called Spyglass. They were beginning to have talks with Microsoft. Both would ultimately choose to rewrite the Mosaic browser from scratch, for different reasons. Yet that browser would be their starting point and their products would have lasting implications on the browser market for decades as the world began to see its first commercial browsers.



Improved Server-Side Rendering of Dynamic WordPress Blocks

Over the weekend, David Gwyer announced a custom server-side render component for block plugins. The co-founder of WPGO Plugins primarily built his component to speed up the rendering process for dynamic blocks within his own plugins. However, he has now released this component for other block developers in the WordPress community.

Most blocks are static. Their output remains the same and has no need to change. However, some blocks are dynamic. Their output needs to change based on a variety of reasons, such as the context they are output in or other changes within the WordPress system. For example, the core Latest Posts block is dynamic because the posts that it displays change as new posts are written. If they were output as a static block, the end-user would need to update the block each time a new post was written. Therefore, dynamic blocks come in handy because they are rendered by the server in the editor and on the front end.
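
For block developers, the difference shows up in the block’s save function. Here is a minimal sketch (the block names are hypothetical; registerBlockType is the standard @wordpress/blocks API):

    import { registerBlockType } from '@wordpress/blocks';

    // Static block: the markup returned by save() is written into post
    // content, so it never changes until the user edits the block again.
    registerBlockType('demo/static-notice', {
      title: 'Static Notice',
      category: 'widgets',
      edit: () => <p>Hello!</p>,
      save: () => <p>Hello!</p>,
    });

    // Dynamic block: save() returns null, so nothing is stored and a PHP
    // render_callback on the server produces fresh markup on every load.
    registerBlockType('demo/latest-things', {
      title: 'Latest Things',
      category: 'widgets',
      edit: () => <p>Preview is rendered by the server…</p>,
      save: () => null,
    });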

The problem with rendering from the server is that it can be slow, especially if the user is making several successive option changes to a particular block. With each change, the block must be re-rendered. The core experience with dynamic blocks has not been ideal.

Gwyer’s new component is available via GitHub. The project has little code, and its primary JavaScript file weighs in at just over 4kb (uncompressed). It introduces a new <ServerSideRenderX /> component, which works similarly to WordPress’s existing <ServerSideRender />. Block developers should have little trouble switching to this version for a quick test.
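
As a rough sketch, swapping the fork in for a quick test might look like this (the import path for Gwyer’s component is hypothetical and depends on how you bundle it; the core component ships in the @wordpress/server-side-render package):

    import ServerSideRender from '@wordpress/server-side-render';
    // Hypothetical local path: point it at wherever you include the fork.
    import ServerSideRenderX from './server-side-render-x';

    // Before (core): a spinner replaces the whole block while it re-renders.
    const EditBefore = ({ attributes }) => (
      <ServerSideRender block="demo/flexible-faq" attributes={attributes} />
    );

    // After (fork): the previous output stays in place as a placeholder.
    const EditAfter = ({ attributes }) => (
      <ServerSideRenderX block="demo/flexible-faq" attributes={attributes} />
    );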

He is currently using his component within the Flexible FAQs plugin. After running through a few tests, the plugin’s dynamic blocks feel much more responsive, almost to the point where there is little difference between it and a JavaScript-rendered static block.

Example of how the <ServerSideRenderX /> component works when updating a block.
Live rendering when updating a dynamic block’s options in the editor.

He also has plans to use it within his Simple Sitemap plugin and any other future dynamic blocks. That is assuming WordPress does not improve its server-side rendering component in the meantime.

How the Component Works

Gwyer’s component is a fork of the core <ServerSideRender> component, which he says works well aside from the point where it transitions between render states. His custom component seeks to rectify that issue. “The main additions were a new piece of state to track the previous block content to use as placeholder content, and a new component prop to handle the spinner location,” he said.

He laid out how both components work. The core component renders as follows:

  1. Render block.
  2. Block attribute(s) updated.
  3. Replace entire block content with spinner.
  4. Render new block content.

His new component makes an important change that creates at least a perceived visual speed increase:

  1. Render block.
  2. Block attribute(s) updated.
  3. Replace entire block content with placeholder content (current/previous content), plus spinner in the top right corner.
  4. Render new block content.

“Because the block content is essentially left unaltered until the new content is ready to be rendered it looks a lot faster as well as smoother,” said Gwyer.
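
A minimal sketch of that pattern (this is not Gwyer’s actual code, just the underlying idea: keep the last rendered markup on screen and fetch the next render from the standard block-renderer REST endpoint):

    import { useState, useEffect } from '@wordpress/element';
    import apiFetch from '@wordpress/api-fetch';
    import { addQueryArgs } from '@wordpress/url';
    import { Spinner } from '@wordpress/components';

    function PlaceholderServerRender({ block, attributes }) {
      const [html, setHtml] = useState('');           // last rendered markup
      const [isLoading, setIsLoading] = useState(false);

      useEffect(() => {
        setIsLoading(true); // show a small spinner, but keep the old markup
        const path = addQueryArgs(`/wp/v2/block-renderer/${block}`, {
          context: 'edit',
          attributes,
        });
        apiFetch({ path }).then((response) => {
          setHtml(response.rendered); // swap content only once it is ready
          setIsLoading(false);
        });
      }, [block, attributes]);

      return (
        <div style={{ position: 'relative' }}>
          {isLoading && <Spinner />}
          <div dangerouslySetInnerHTML={{ __html: html }} />
        </div>
      );
    }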

The real question is whether this fork should make its way into the Gutenberg project and eventually be merged into WordPress. WordPress developer Ben Gillbanks thinks so and has created a new GitHub ticket with the request.

“I’d like to see it added to Gutenberg as it’s a much better rendering experience for dynamic blocks,” said Gwyer. “I’d happily liaise with the team if they’re interested in including it in core.”

Seeing 5XXs When Configuring a Kubernetes API Gateway for the First Time?

Kubernetes is a fantastic foundation for an application platform, but it is just that: a foundational component. For K8s to be useful to application developers, the following components must be added on top: ingress, an API gateway, and observability. You need to get user traffic into your applications, and you need to be able to understand what is going on.

Getting K8s Ingress up and running for the first time can be challenging due to the various cloud vendor load balancer implementations. I've seen my fair share of 5XX HTTP errors, and have not been able to identify where the problem lies...

Continuous Delivery Pipeline for Kubernetes Using Spinnaker

Kubernetes is now the de facto standard for container orchestration. With more and more organizations adopting Kubernetes, it is essential that we get our fundamental ops infrastructure in place before any migration. This post will focus on pushing out new releases of the application to our Kubernetes cluster, i.e., Continuous Delivery.

Prerequisites

  1. A running Kubernetes cluster (GKE is used for the purposes of this post).
  2. A Spinnaker setup with Jenkins CI enabled.
  3. A GitHub webhook enabled for Jenkins jobs.

Strategy Overview

  1. GitHub + Jenkins: CI system to build the Docker image and push it to the registry.
  2. Docker Hub: registry to store Docker images.
  3. Spinnaker: CD system to enable automatic deployments to the staging environment and supervised deployments to production.

Continuous Delivery Pipeline for Kubernetes

Continuous Integration System

Although this post is about the CD system using Spinnaker, I want to briefly go over the CI pipeline so that the bigger picture is clear.

The Principles of Chaos Engineering

Resilience is something those who use Kubernetes to run apps and microservices in containers aim for. When a system is resilient, it can handle losing a portion of its microservices and components without the entire system becoming inaccessible.

Resilience is achieved by integrating loosely coupled microservices. When a system is resilient, microservices can be updated or taken down without having to bring the entire system down. Scaling becomes easier too, since you don’t have to scale the whole cloud environment at once.

A Non-Technical Release Lead’s Journey to Becoming a Mentor for WordPress Core Development

In the summer of 2019, I was asked to help out with a WordPress release. A few months earlier, Core Team representatives had reached out to other teams in an effort to increase the diversity of the release squads, and I started seriously considering it.

At the time, I was already heavily involved in the WordPress ecosystem and was in my second year as the WordPress Community and Partnership Manager at SiteGround, but I had no experience whatsoever with how WordPress gets made from a Core point of view. Still, when Josepha Haden, WordPress.org Executive Director, pinged me, I said yes without hesitation. And it proved to be one of the most challenging and rewarding experiences of my life. Here is how.

Josepha Haden and Francesca Marano walking around Vienna
Josepha and I walking around Vienna, WCEU 2016 – Photo by Luca Sartoni

An Accidental Contributor: My Path in Tech

From an early age, I seemed to be predestined to become a developer. My parents are programmers; they started in the sixties, and I got my first personal computer in 1982, when people in Italy didn’t really have an idea of what those were.

I admired their work ethic, and I thought their job was fascinating, making a machine do what you want, but I was drawn to other career options. In fact, I didn’t really know what I wanted to do when I grew up, but computers and websites kept being a big part of my personal and professional life.

While back-end programming was never something that interested me, I found myself taking a class on web design in 1999, then signed up for a degree in Arts and Multimedia in 2004. I finally found WordPress in 2008 and started making a living off of it in 2010. 

Soon, I realized my true skill was helping clients who came to me asking for a website to focus on their “why” for that website, and to think about their business and marketing strategy, before they hired me. I wrote books on business planning, productivity, and websites. I also started giving talks at WordCamps and other events to educate freelancers on those topics.

In 2015, I happened to meet some people who were involved in the WordPress community, which led me to start contributing too. I didn’t have development skills, so I never thought I could contribute to OSS, but it turns out they weren’t necessary. I met people who pointed me to the many different teams that make WordPress, and I became active in Polyglots first and Community later.

Francesca Marano speaking at WordCamp London 2016
My first WordCamp Talk: The Rebirth of the Italian Community, at WordCamp London 2016

I kept working on my business, but the more I contributed to WordPress, the more I wanted to find a way to help thousands of people at a time. My outreach efforts of giving talks, helping community organizers, and writing content needed to scale. 

This is where I met SiteGround. In the summer of 2017, they were looking for a Community Manager, and despite not being one by trade, I decided to apply and got the job. Joining the company allowed me to have sponsored time to contribute to WordPress. It also allowed me to tap into the collective knowledge of my colleagues when I started cooking up new ideas for the project.

So I said yes without hesitation, but the truth is that this yes was almost five years in the making. In addition, I felt that Josepha and SiteGround trusted me to do a good job. In return, I trusted the WordPress community to help me figure out all the things that I needed to learn.

How WordPress Gets Done

The other encouraging factor was that ever since WordPress 5.0, a release is no longer made by one person, as it was for years, or by a person with a couple of deputies. Now there is a whole team at work, affectionately known as “the squad,” so there are many hands on deck.

A Lot of Communication

During a release cycle, there is a lot of communication. There are blog posts from the different Make teams. At each stage of the release, there are blog posts in the News section of WordPress.org. There is constant chatter in the public Slack channel, and there is a private one that serves as a safety net for new people who might initially feel intimidated by asking questions in a large public channel.

The Different Roles in the Release Squad

Screenshot of the WordPress 5.3 Development Cycle Page with the names of the squad
WordPress 5.3 had a release squad of 12 people and 654 contributors. WordPress 5.5 knocked it out of the park with 805 contributors!

The thing that I love the most about this model for the release is the variety of roles that it includes. There are developers, designers, marketers, technical writers, and project managers. WordPress is not only made of code, and it’s great to see all these different skills coming together to contribute to its release. 

The role of the Release Coordinator (the one I covered for WordPress 5.3 and 5.4) and of the Triage PM (a role covered by the excellent David Baumwald for 5.3, 5.4, and 5.5) is to try to keep an eye on all the moving parts. And I say try because it’s nearly impossible. This is why there are focus leads for the different parts being worked on.

Matt Mullenweg is the project lead and has been the release lead since WordPress 5.0. He comes up with the high-level roadmap and the focus projects. But beyond that, he is not involved with the day-to-day life of Core development. In over one year of being involved in Core releases, Matt asked only once to add a feature.

I am annoyed when people think that everything that happens in WordPress is because Matt wants it that way. It diminishes the role of all the people who care about the project and take it upon themselves to move things forward, to shepherd issues, to champion tickets, and in general to commit to making WordPress better for everyone, whether they do it for one ticket or work on it full time.

Component Maintainers and Core Committers

A group of people who are instrumental in shaping a release are the component maintainers. They are responsible for looking after a certain component that makes up Core and for seeing how tickets in that area are proceeding. They are the ones who can evaluate whether a ticket is ready to be merged.

Once a ticket is deemed ready, Core Committers enter the scene. They do a final review of the ticket. They might request some changes, or make the changes themselves while committing. This is probably the thing that surprised me the most: I really didn’t think that a commit could take hours, but it definitely can. In the releases I coordinated, I also observed that there wasn’t a lot of engagement from maintainers and committers, and this is very demotivating for people working on tickets. Not everything can go into a release, even if the patch is ready, because there aren’t enough people to review, give feedback, and ultimately commit. With few resources, you have to make choices, and those won’t always align with every WordPress user’s or contributor’s preferences.

This is probably one of the biggest challenges WordPress will have to tackle moving forward: how can we re-engage the people who could be a big help?

The Release Party

People Dancing at WordCamp Europe Party
Photo by Florian Ziegler

Despite these issues, things get done and when the release is ready, we celebrate with a party. I don’t know who started calling them Release Parties or when they started. What I know is that for 5.3 and 5.4, I hosted quite a few, and they were all a lot of fun. 

On the day of each step of the release (it might be a Beta, a Release Candidate, or the general release), the Core channel gets very active: a lot of people come online to see how the new version of WordPress gets released. There are multiple steps and different people involved with different tasks. The release steps are documented in the Core handbook and are followed publicly, so everyone can see them all.

The biggest party is on general release day, and there is one specific moment that is incredibly powerful. WordPress has a download counter, so before releasing the new version, the squad takes a screenshot of the previous one; we all say goodbye and welcome the new kid. Despite everything being virtual, this moment is almost tangible and will never cease to move me. We made WordPress, once again.

12 Months as a Core Contributor

While I was writing this article, it occurred to me that I have been a Core contributor for a year now. I still have my full-time role at SiteGround, which at times I found hard to juggle, so I have to give my team credit for their support.

I still can’t write PHP and despise JavaScript deeply, but when I look back, I am incredibly proud of the changes that have happened in the past 12 months. I cannot take credit for all of them, but I am happy I was able to be a part of them in some way.

Release Schedule

One thing that a lot of contributors asked for was a mid-term schedule of releases, to better fit them around their work and personal calendars. Being the new kid can be hard because you don’t know the whole history and background of why things are done a certain way, but that is also a perk: you are free to restart conversations. After discussing it with the squad and other teams, it was clear to me that it was just a matter of who was going to bring this up with Matt. And so I did. A couple of days later, a tentative release schedule up to WordPress 6.0 was published on the Core blog, and we have been using it ever since.

Bigger Release Squad and Mentorship

The release squad is also getting bigger with every release. Many teams are involved in making a release and are affected by it, and it’s important for all of them to be represented in the process. In WordPress 5.5, there are several new roles, and in 5.6 there will be even more: Test, Documentation, and Support are all vital components of what makes WordPress great, so having their feedback while the software is in active development is important.

And it’s important to have mentors. This is a major improvement that Josepha introduced in WordPress 5.3. The release squad is not only made of focus leads; there is also a growing group of mentors able to help new contributors learn the ropes. The idea is that those new contributors will eventually become mentors themselves and teach the next wave. This is another great way to get more and more people involved in Core, with different skills and backgrounds.

And this brings me to the biggest change (and challenge) of all. WordPress 5.6, which is shaping up to be a massive release, will have a squad entirely made of women and people who identify as female. Like a lot of things in WordPress, it all started with a “Thinking out loud” moment and is now a reality. Work on this release will start very soon, and I am excited to be part of it as a mentor.

Women walking in hallway during WordCamp Torino
Fellow female contributors leading the Polyglots team at WordCamp Torino 2018. Picture by Gianni Vascellari

WordPress Needs Your Help

I wish I could say it is all unicorns and rainbows, but it’s not. The number of people actively involved in making this project a reality is still very small compared to the magnitude of its reach.

I am very much a doer, so I wish people took the time and energy they put into critiquing WordPress and turned it into active contribution time. Yes, sometimes it requires being very stubborn about a ticket and following up on it relentlessly, but I still think it’s worth it.

Active participation also means leaving constructive feedback in tickets or offering to take notes during dev chat. That is the curse and the beauty of a massive project. There is always something to do!

In the last few years, I have also seen an increase in contributions from different kinds of companies. At SiteGround, for example, we mostly contributed to events and the community for years: we sponsored and volunteered, and we were organizers and speakers. We worked a lot within the Spanish WordPress community to help it develop and grow, and it is now one of the largest in the global community. In the last year, we have increased the hours we dedicate to the more technical teams. I am still active in Core as a mentor and as the team representative. One of our WordPress engineers, Stanimir Stoyanov, is part of the Security team, and one of our JavaScript engineers, Kiril Zhelyazkov, is now dedicating a couple of days per week to Gutenberg.

Stanimir Stoyanov from SiteGround on stage at WordCamp Sofia 2019
My colleague and Core and Security contributor, Stanimir Stoyanov

These topics align with our values, so it was a natural progression for us to become more involved.

Finally, I hope to see people get involved in a proposal I published a few days ago on the Core blog about end-to-end tests. Right now there is exactly one, and I’m sure we can do better. Again, developers are not the only ones needed. Users are the rarest contributors, and probably the ones the project needs most in order to finally have some user testing in place. I am not a developer, and I’m happy that non-developers can make an impact.

My Personal Concerns and Hopes for the Future of the Project

When I started contributing to Core, I started a note on my computer with some observations. Not having 17 years of experience in the project helps me see things without bias, and not being a developer helps me see the project more as a living, breathing body, instead of components or tickets. Allow me to share my concerns, hopes, and dreams for the future.

Component Maintainers and Core Committers: You Are Needed More Than Ever

At the time of writing, the project has about 60 committers and 60 component maintainers, with a lot of people pulling double, triple, and sometimes sextuple duty. But the reality is that in WordPress 5.4 and 5.5, hundreds of commits were made by Sergey Biryukov alone. I am incredibly grateful for Sergey’s work. At the same time, I feel like we are inadvertently building a bus factor into Core. The majority of the people with Core commit access did not commit a single ticket. Similarly, I reached out to all the component maintainers to hear about their plans for the upcoming releases, and maintainers for only about 50% of the components replied.

How do we make sure that the people who have the power, and thus the responsibility, to help with committing and shepherding tickets stay involved? And how do we encourage people to step down and declare themselves inactive so new people can step up?

My career spans over 25 years across different industries, and one thing remains the same: when people see that someone else is already filling a role, they are less motivated, and sometimes even intimidated, to step up. Scarcity not only drives purchases; it drives new engagement.

The Community Team, for example, maintains a list of deputies and their current statuses. I have been wondering if Core could do something similar, so that when new people want to step up, they can see at first glance which components are missing maintainers. People who complain about “the Core developers” would stop seeing them as a blob and start seeing them as individuals who, at any point in time, might be inactive for a period. When you see that there are actually only a few people actively reviewing and committing, you might be more inclined to understand why not every ticket can make it to the finish line.

Documentation Is the Highest Form of Generosity

I say this every time I speak about contributing to OSS: documentation is frequently lacking, and oftentimes what is there is outdated.

How do we make sure that documentation is not an afterthought but is baked into the development process?

Screenshot of the documentation for translating WordPress into Italian
Handbook from it.wordpress.org – How to translate WordPress to Italian. Photo by Gianni Vascellari

There is a lot of work put into writing dev notes for the changes that affect development, but that is not the only documentation that is needed. Some of the processes described in the Core handbooks are outdated, and some are missing entirely because they live only in experienced contributors’ minds.

As a big fan of Gutenberg and rich, engaging text, I wish our handbooks would fully leverage the power of the block editor and be more inviting. Right now they are a wall of text, and whenever we tell people to look at the handbooks, I feel my heart shrink.

Possible solutions (which I am not sure are technically doable, but a girl can dream): sync with GitHub to solve at least the version control issue, then recruit, recruit, recruit, and work with the Documentation, Meta, and Design teams to provide useful, engaging, readable, easy-to-scan handbooks.

Keep Track of the Moving Parts and Work as One

The other thing that I notice often is how teams, focuses, and components work in silos.

This is absolutely not done to gatekeep; it’s just how every team has self-organized over the years.

We need to find a way to get a bird’s-eye view of what is going into the next release and of all its moving parts.

People sitting at round tables during a contributor day
People Making WordPress at Contributor Day, WordCamp Europe 2015 – Photo by Florian Ziegler

Trac is very granular: there are a number of ready-made reports, and you can filter by milestone and see how many tickets are in each component. But that is just part of the story.

Yes, I am talking about finding a way to manage the project as a whole and not as bits and bobs. 

Enter GitHub. At Some Point. 

This is not happening anytime soon, but I hope it will eventually happen. Move development and project management of WordPress to GitHub, like Gutenberg has been doing. 

I know that for many it will be an incentive to contribute to WordPress in a way that is more familiar. It will lower the barrier to entry, which is always welcome. And with some handy tutorials, it will allow non-technical people to contribute to documentation, testing, and project management.

The Future is Bright

Despite all the issues, or maybe because of them, the future of WordPress is bright. 

I have been lurking around multiple teams over these years, and lately I notice more people coming on board, more people involved in each release, and more people stepping up into leadership roles across different teams. I have also noticed an increase in diversity, which is always a welcome change.

Bottom line: WordPress needs all of us to make it happen. I hope to see you on board!

Practical Use Cases for JavaScript’s closest() Method

Have you ever had the problem of finding the parent of a DOM node in JavaScript, but weren’t sure how many levels you’d have to traverse up to get to it? Let’s look at this HTML, for instance:

<div data-id="123">
  <button>Click me</button>
</div>

That’s pretty straightforward, right? Say you want to get the value of data-id after a user clicks the button:

var button = document.querySelector("button");

button.addEventListener("click", (evt) => {
  console.log(evt.target.parentNode.dataset.id);
  // prints "123"
});

In this case, the Node.parentNode API is sufficient. It returns the parent node of a given element. In the example above, evt.target is the button that was clicked; its parent node is the div with the data attribute.

But what if the HTML structure is nested deeper than that? It could even be dynamic, depending on its content.

<div data-id="123">
  <article>
    <header>
      <h1>Some title</h1>
      <button>Click me</button>
    </header>
     <!-- ... -->
  </article>
</div>

Our job just got considerably more difficult by adding a few more HTML elements. Sure, we could do something like element.parentNode.parentNode.parentNode.dataset.id, but come on… that isn’t elegant, reusable or scalable.

The old way: Using a while-loop

One solution would be to make use of a while loop that runs until the parent node has been found.

function getParentNode(el, tagName) {
  // Walk up the tree one parent at a time...
  while (el && el.parentNode) {
    el = el.parentNode;

    // ...and stop at the first element with a matching tag name.
    if (el && el.tagName === tagName.toUpperCase()) {
      return el;
    }
  }

  // No matching parent was found.
  return null;
}

Using the same HTML example from above again, it would look like this:

var button = document.querySelector("button");

console.log(getParentNode(button, "div").dataset.id);
// prints "123"

This solution is far from perfect. Imagine if you want to use IDs, classes, or any other type of selector instead of the tag name. At least it allows for a variable number of child nodes between the parent and our source.
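
If you’re curious, the loop can be generalized to accept any CSS selector rather than a tag name by leaning on Element.matches(). Here is a rough sketch of that idea (getClosestParent is a hypothetical helper name, not something from this article):

function getClosestParent(el, selector) {
  while (el && el.parentNode) {
    el = el.parentNode;

    // Element.matches() tests an element against any CSS selector.
    // The guard matters because the walk can reach `document`,
    // which has no matches() method.
    if (el.matches && el.matches(selector)) {
      return el;
    }
  }

  return null;
}

console.log(getClosestParent(button, "[data-id]").dataset.id);
// prints "123"

Which, as it turns out, is more or less what the browser now hands us for free.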

There’s also jQuery

Back in the day, if you didn’t want to deal with writing the sort of function we wrote above for each project (and let’s be real, who wants that?), a library like jQuery came in handy (and it still does). It offers a .closest() method for exactly that:

$("button").closest("[data-id='123']")

The new way: Using Element.closest()

Even though jQuery is still a valid approach (hey, some of us are beholden to it), adding it to a project for this one method alone is overkill, especially if you can have the same thing with native JavaScript.

And that’s where Element.closest comes into action:

var button = document.querySelector("button");

console.log(button.closest("div"));
// prints the HTMLDivElement

There we go! That’s how easy it can be, and without any libraries or extra code.

Element.closest() allows us to traverse up the DOM until we get an element that matches the given selector. The awesomeness is that we can pass any selector we would also give to Element.querySelector or Element.querySelectorAll. It can be an ID, class, data attribute, tag, or whatever.

element.closest("#my-id"); // yep
element.closest(".some-class"); // yep
element.closest("[data-id]:not(article)") // hell yeah

If Element.closest finds the parent node based on the given selector, it returns it the same way document.querySelector does. Otherwise, if it doesn’t find a matching parent, it returns null instead, making it easy to use with if conditions:

var button = document.querySelector("button");

console.log(button.closest(".i-am-in-the-dom"));
// prints HTMLElement

console.log(button.closest(".i-am-not-here"));
// prints null

if (button.closest(".i-am-in-the-dom")) {
  console.log("Hello there!");
} else {
  console.log(":(");
}

Ready for a few real-life examples? Let’s go!

Use Case 1: Dropdowns

Our first demo is a basic (and far from perfect) implementation of a dropdown menu that opens after clicking one of the top-level menu items. Notice how the menu stays open even when clicking anywhere inside the dropdown or selecting text? But click somewhere on the outside, and it closes.

The Element.closest API is what detects that outside click. The dropdown itself is a <ul> element with a .menu-dropdown class, so clicking anywhere outside the menu will close it. That’s because the value for evt.target.closest(".menu-dropdown") is going to be null since there is no parent node with this class.

function handleClick(evt) {
  // ...
  
  // if a click happens somewhere outside the dropdown, close it.
  if (!evt.target.closest(".menu-dropdown")) {
    menu.classList.add("is-hidden");
    navigation.classList.remove("is-expanded");
  }
}

Inside the handleClick callback function, a condition decides whether to close the dropdown. If the click happened somewhere inside the unordered list, Element.closest will find and return it, and the dropdown stays open.
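
For context, here is a minimal sketch of what the full handler might look like. Only the outside-click check comes from the snippet above; the toggle logic, the .menu selector, and the .menu-item-has-children class are assumptions made for illustration:

var navigation = document.querySelector(".menu");
var menu = document.querySelector(".menu-dropdown");

function handleClick(evt) {
  // Toggle the dropdown when a top-level menu item is clicked.
  // (.menu and .menu-item-has-children are assumed class names.)
  if (evt.target.closest(".menu-item-has-children")) {
    menu.classList.toggle("is-hidden");
    navigation.classList.toggle("is-expanded");
    return;
  }

  // If a click happens somewhere outside the dropdown, close it.
  if (!evt.target.closest(".menu-dropdown")) {
    menu.classList.add("is-hidden");
    navigation.classList.remove("is-expanded");
  }
}

window.addEventListener("click", handleClick);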

Use Case 2: Tables

This second example renders a table that displays user information, let’s say as a component in a dashboard. Each user has an ID, but instead of showing it, we save it as a data attribute for each <tr> element.

<table>
  <!-- ... -->
  <tr data-userid="1">
    <td>
      <input type="checkbox" data-action="select">
    </td>
    <td>John Doe</td>
    <td>john.doe@gmail.com</td>
    <td>
      <button type="button" data-action="edit">Edit</button>
      <button type="button" data-action="delete">Delete</button>
    </td>
  </tr>
</table>

The last column contains two buttons for editing and deleting a user from the table. The first button has a data-action attribute of edit, and the second button is delete. When we click on either of them, we want to trigger some action (like sending a request to a server), but for that, the user ID is needed.

A click event listener is attached to the global window object, so whenever the user clicks somewhere on the page, the callback function handleClick is called.

function handleClick(evt) {
  var { action } = evt.target.dataset;
  
  if (action) {
    // `action` only exists on buttons and checkboxes in the table.
    let userId = getUserId(evt.target);
    
    if (action == "edit") {
      alert(`Edit user with ID of ${userId}`);
    } else if (action == "delete") {
      alert(`Delete user with ID of ${userId}`);
    } else if (action == "select") {
      alert(`Selected user with ID of ${userId}`);
    }
  }
}

If a click happens anywhere other than one of these elements, no data-action attribute exists, hence nothing happens. However, when clicking one of the buttons or the checkbox, the action will be determined (that’s called event delegation, by the way), and as the next step, the user ID will be retrieved by calling getUserId:

function getUserId(target) {
  // `target` is always a button or checkbox.
  return target.closest("[data-userid]").dataset.userid;
}

This function expects a DOM node as the only parameter and, when called, uses Element.closest to find the table row that contains the pressed button. It then returns the data-userid value, which can now be used to send a request to a server.
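
To wire this up yourself, the only missing piece is the listener itself. And if you want to guard against clicks that carry a data-action but somehow sit outside a row, a null check is cheap. This defensive variant is my own sketch, not the article’s version (which assumes the target is always inside a row):

window.addEventListener("click", handleClick);

// Defensive variant of getUserId (an assumption, not the original):
// closest() returns null when no ancestor matches, so pass that along.
function getUserId(target) {
  var row = target.closest("[data-userid]");
  return row ? row.dataset.userid : null;
}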

Use Case 3: Tables in React

Let’s stick with the table example and see how we’d handle it on a React project. Here’s the code for a component that returns a table:

function TableView({ users }) {
  function handleClick(evt) {
    var userId = evt.currentTarget
      .closest("[data-userid]")
      .getAttribute("data-userid");

    // do something with `userId`
  }

  return (
    <table>
      {users.map((user) => (
        <tr key={user.id} data-userid={user.id}>
          <td>{user.name}</td>
          <td>{user.email}</td>
          <td>
            <button onClick={handleClick}>Edit</button>
          </td>
        </tr>
      ))}
    </table>
  );
}

I find that this use case comes up frequently. It’s fairly common to map over a set of data and display it in a list or table, then allow the user to do something with it. Many people use inline arrow functions, like so:

<button onClick={() => handleClick(user.id)}>Edit</button>

While this is also a valid way of solving the issue, I prefer the data-userid technique. One of the drawbacks of the inline arrow function is that each time React re-renders the list, it needs to create the callback function again, which can become a performance issue when dealing with large amounts of data.

In the callback function, we simply deal with the event by extracting the target (the button) and getting the parent <tr> element that contains the data-userid value.

function handleClick(evt) {
  var userId = evt.target
    .closest("[data-userid]")
    .getAttribute("data-userid");

  // do something with `userId`
}

Use Case 4: Modals

This last example is another component I’m sure you’ve all encountered at some point: a modal. Modals are often challenging to implement since they need to provide a lot of features while being accessible and (ideally) good looking.

We want to focus on how to close the modal. In this example, that’s possible by either pressing Esc on a keyboard, clicking on a button in the modal, or clicking anywhere outside the modal.

In our JavaScript, we want to listen for clicks somewhere in the modal:

var modal = document.querySelector(".modal-outer");

modal.addEventListener("click", handleModalClick);

The modal is hidden by default through a .is-hidden utility class. It’s only when a user clicks the big red button that the modal opens by removing this class. And once the modal is open, clicking anywhere inside it — with the exception of the close button — will not inadvertently close it. The event listener callback function is responsible for that:

function handleModalClick(evt) {
  // `evt.target` is the DOM node the user clicked on.
  if (!evt.target.closest(".modal-inner")) {
    handleModalClose();
  }
}

evt.target is the DOM node that was clicked, which in this example is the entire backdrop behind the modal, <div class="modal-outer">. This DOM node is not within <div class="modal-inner">, so Element.closest() can traverse up all it wants and won’t find it. The condition checks for that and triggers the handleModalClose function.

Clicking somewhere inside the modal, say the heading, would make <div class="modal-inner"> the parent node. In that case, the condition isn’t truthy, leaving the modal in its open state.
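
The Escape key behavior mentioned earlier isn’t shown above, but a minimal sketch of it could look like this, assuming it reuses the same handleModalClose function the click handler calls:

// Close the modal when the Escape key is pressed.
// (A sketch; handleModalClose is the function from the demo above.)
document.addEventListener("keyup", (evt) => {
  if (evt.key === "Escape") {
    handleModalClose();
  }
});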

Oh, and about browser support…

As with any cool “new” JavaScript API, browser support is something to consider. The good news is that Element.closest is not that new and has been supported in all of the major browsers for quite some time, with a whopping 94% support coverage. I’d say this qualifies as safe to use in a production environment.

The only browser not offering any support whatsoever is Internet Explorer (all versions). If you have to support IE, then you might be better off with the jQuery approach.
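
Alternatively, a small polyfill gets you most of the way there. A sketch along the lines of the commonly cited MDN approach builds closest() on top of Element.matches, with the msMatchesSelector fallback that IE needs:

// Polyfill sketch for Element.closest(), modeled on the well-known
// MDN version; treat it as illustrative rather than battle-tested.
if (!Element.prototype.matches) {
  Element.prototype.matches = Element.prototype.msMatchesSelector;
}

if (!Element.prototype.closest) {
  Element.prototype.closest = function (selector) {
    var el = this;

    // Walk up element nodes until one matches the selector.
    while (el && el.nodeType === 1) {
      if (el.matches(selector)) {
        return el;
      }
      el = el.parentElement || el.parentNode;
    }

    return null;
  };
}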


As you can see, there are some pretty solid use cases for Element.closest. What libraries like jQuery made relatively easy for us in the past can now be done natively with vanilla JavaScript.

Thanks to the good browser support and the easy-to-use API, I rely heavily on this little method in many applications and haven’t been disappointed yet.

Do you have any other interesting use cases? Feel free to let me know.

