How do I create a log history in VB.NET and save it to a database?

Hi, below is the code in my login button, but I'm confused about what code to put in the logout button. I only have a little knowledge of VB.NET, and I'm trying to understand it as much as I can. Thank you for understanding! :)

Public Class Form2

    Private OPConStr As String = ("server=localhost;username=root;password=07292021;database=usersaccount")

    Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
        Dim ID As Integer
        Using connection As New MySqlConnection(OPConStr),
            cmd As New MySqlCommand("SELECT `StudentId` FROM `usersaccount` WHERE `StudentId` = @username AND `Account Password` = @password", connection)
            cmd.Parameters.Add("@username", MySqlDbType.VarChar).Value = Username.Text
            cmd.Parameters.Add("@password", MySqlDbType.VarChar).Value = Pass.Text
            connection.Open()
            ID = CInt(cmd.ExecuteScalar())
        End Using
        If ID = 0 Then
            MessageBox.Show("Invalid Username Or Password")
            Exit Sub
        End If
        Using con As New MySqlConnection("server=localhost;username=root;password=07292021;database=logsrecord"),
                cmd As New MySqlCommand("Insert into loghistory.logsrecord (StudentID, DateIn, Action) Values (@ID, @In, @Action);", con)
            cmd.Parameters.Add("@ID", MySqlDbType.VarChar).Value = ID
            cmd.Parameters.Add("@In", MySqlDbType.DateTime).Value = Now()
            cmd.Parameters.Add("@Action", MySqlDbType.Int32).Value = 1
            con.Open()
            cmd.ExecuteNonQuery()
        End Using
        Form3.Show()
        Hide()
    End Sub

End Class

This is the table in my database:

  • StudentID (VarChar)
  • In (Datetime)
  • Out (Datetime)
  • Action (Int)
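Since the question is about the logout button, here is a sketch of what its handler could look like, based on the login code above. It would live in the form shown after login (Form3 in this code), not the login form. The button name `LogoutButton`, the `CurrentStudentId` variable (saved somewhere at login time), and `Action = 0` meaning "logout" are all assumptions for illustration. Note that `Out` is a reserved word in MySQL, so it needs backticks:

```vb
' Sketch of a logout handler, mirroring the login insert above.
' Assumptions: a LogoutButton control, a CurrentStudentId value captured
' at login, and the convention Action = 0 for "logout".
Private Sub LogoutButton_Click(sender As Object, e As EventArgs) Handles LogoutButton.Click
    Using con As New MySqlConnection("server=localhost;username=root;password=07292021;database=logsrecord"),
            cmd As New MySqlCommand("INSERT INTO loghistory.logsrecord (StudentID, `Out`, Action) VALUES (@ID, @Out, @Action);", con)
        cmd.Parameters.Add("@ID", MySqlDbType.VarChar).Value = CurrentStudentId
        cmd.Parameters.Add("@Out", MySqlDbType.DateTime).Value = Now()
        cmd.Parameters.Add("@Action", MySqlDbType.Int32).Value = 0 ' 0 = logout (assumed convention)
        con.Open()
        cmd.ExecuteNonQuery()
    End Using
    Form2.Show() ' return to the login form
    Hide()
End Sub
```

An alternative design, since the table has both `In` and `Out` columns, would be to UPDATE the row created at login (setting its `Out` column) instead of inserting a second row, so that each session occupies a single record.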

Birmingham to Host First In-Person WordCamp, February 4-5, 2022

WordCamp Birmingham is the first in-person WordCamp on the schedule for 2022. The event will be held at the Sidewalk Film Center and Cinema in downtown Birmingham on February 4-5. It is one of the first cracks in WordPress’ iced-over event landscape after the pandemic brought in-person gatherings to a halt.

“WordCamp Birmingham was one of 40 or more WordCamps that needed to cancel or postpone in 2020,” co-organizer Ryan Marks said. “We had intentions of just postponing until 2021. During WordFest 2021 in July, Matt Mullenweg said, ‘I encourage people to start planning. As soon as you feel safe to do so, do so.’ The proposal to return to in-person WordCamps was announced within a week of that interview. The local team met in August and targeted early February for our event. When the announcement updating the guidelines for in-person WordCamps was posted in September, it gave us a green light to keep moving forward.” 

The updated guidelines for in-person WordCamps require that attendees be fully vaccinated, have recently tested negative, or have recovered within the last three months. Marks said the organizing team has not made a decision about whether to require masks but will be monitoring local health guidelines and will communicate any expectations to attendees in January.

WordCamp Birmingham secured a flexible arrangement with the venue in case they need to cancel.

“The contract with Sidewalk Film Center + Cinema gives us the flexibility to cancel without any loss of deposit as long as we give notice more than 7 days before the event,” Marks said. “This was sufficient for our local organizing team and WordPress Community Support.”

Marks reported that the process of getting the WordCamp approved was “quite smooth” thanks to assistance from their mentor, Kevin Cristiano, who worked with them on budget review.

Although the maximum capacity of the venue is 300, the organizers have capped attendee numbers at 200 as a precaution.

Tickets are on sale, and the calls for speakers, sponsors, and volunteers are open. Organizers expect that it will sell out quickly since it’s the first in-person WordCamp since all the pandemic cancellations.

“The best thing about WordPress isn’t the software, it’s the community,” WordCamp Birmingham speaker wrangler Nathan Ingram said. “And WordCamps are where the community meets, shares, and learns together. Virtual WordCamps have been necessary, but just aren’t the same as being together face to face.

“WordCamp Birmingham is the oldest WordCamp in the Southeast – our first WordCamp was in 2008. I hope that this year’s WordCamp Birmingham is a family reunion – a place where friends and colleagues can gather and enjoy the community that makes WordPress so great.”

Everything You Need to Know About Web Application Firewalls (WAFs)

This article is your one-stop, 360-degree resource covering all the information you need to know about WAFs, including how they function, what they protect against, how to implement them, and much more!

Protecting your web applications against malicious security attacks is essential. Luckily, WAFs (Web Application Firewalls) are here to help.

In a nutshell, a WAF works as a shield between the web application and the internet, preventing mishaps that could occur without it.

WAFs can protect you and your clients’ applications from cross-site request forgery (CSRF) attacks, XSS (cross-site scripting), and SQL injections, amongst others.

diagram of a waf
WAFs are here to help protect your site from hackers and malicious threats.

Web application security has become increasingly crucial, considering that web application attacks are one of the most common causes of breaches.

As you’re about to see, WAFs are a critical part of security to guard against vulnerabilities.

Let’s start at the beginning, with…

What is a WAF?

A Web Application Firewall (WAF) is a specific type of firewall that protects your web applications from malicious application-based attacks.

In layman’s terms, a WAF acts as the middle person or security guard for your WordPress site.

It will help protect web applications from attacks like cross-site scripting (XSS), cookie poisoning, SQL injection, cross-site forgery, and more.

WAFs will stand guard between the internet and your web applications, all the while monitoring and filtering the HTTP traffic that wants to get to your server.

It does this by adhering to policies that help determine which traffic is malicious and which isn’t. Just as a forward proxy acts as a mediator to protect the identity of clients, a WAF does the same job in reverse.

It’s a reverse proxy: a go-between that protects the web application server from potentially malicious clients.

WAFs use a set of rules (or policies) to help identify who’s actually on your guest list and who’s just looking to cause trouble.
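That rule-matching idea can be sketched in a few lines of Python. This is a toy illustration, not a production WAF; the rule names, signatures, and request shape are invented for the example — real WAFs combine parsing, scoring, and anomaly detection rather than a handful of regexes:

```python
import re

# A tiny, illustrative rule set: each rule is a name plus a regex signature.
RULES = [
    ("sql_injection", re.compile(r"(\bUNION\b.+\bSELECT\b|'\s*OR\s+'1'\s*=\s*'1)", re.IGNORECASE)),
    ("xss", re.compile(r"<script\b", re.IGNORECASE)),
    ("path_traversal", re.compile(r"\.\./")),
]

def inspect_request(path: str, query: str, body: str = "") -> tuple:
    """Return (allowed, reason) for an incoming HTTP request."""
    payload = " ".join([path, query, body])
    for name, pattern in RULES:
        if pattern.search(payload):
            return False, name  # block, and report which rule matched
    return True, "clean"

# A classic SQL injection probe is blocked; ordinary traffic passes.
print(inspect_request("/login", "user=admin' OR '1'='1"))  # blocked by sql_injection
print(inspect_request("/blog", "page=2"))                  # allowed
```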

WAFs and Network Firewalls

WAFs should not be confused with your standard Network Firewall (Packet Filtering), which assesses incoming data based on a set of criteria, including IP addresses, packet type, port numbers, and more.

Network firewalls are great at what they do. The downside is that they don’t understand HTTP, and as a result, they cannot detect attacks that target security flaws in web applications.

That’s where WAFs save the day, bolstering your web security in ways a Network Firewall cannot. A network has many layers, and employing different security measures can help you protect each of those layers individually.

The OSI Model

To understand these layers, you need to understand the OSI Model (Open Systems Interconnection Model).

The OSI model is a framework that divides the overall architecture of a network into seven different sections.

Every layer has its own security postures and mechanisms, and anyone serious about security should know how to detect threats and establish appropriate security methods for each.

The seven network layers are as follows:

A look at the various layers of a network
The OSI model breaks a network into seven distinct layers.

When analyzing the layers above, your typical Network Firewall helps secure layers 3–4 (network and transport), while a WAF assists with the protection of layer 7 (the application layer).

This should also serve as a reminder that WAFs are NOT a one-size-fits-all solution. And they’re best paired with other effective security measures – such as a quality Network Firewall.

Differences Between Network-Based, Host-Based, and Cloud-Based WAFs

WAFs are deployed in one of three ways: network-based, host-based, or cloud-based. Each has benefits and disadvantages, so let’s take a look at each one individually and see how they compare.

Network-Based: Network-based WAFs are typically hardware-based. They are installed locally; therefore they minimize latency. However, they’re an expensive option that also requires storage and maintenance of equipment.

Host-Based: Host-based WAFs cost less than network-based ones and offer more customization options. The downsides are that they consume local server resources, carry maintenance costs, and can be complex to implement.

Cloud-Based: This is an affordable option that is easy to implement; usually it’s just a matter of a DNS change to redirect traffic. Cloud-based WAFs also have a low upfront cost and flexible payment options, and they are consistently updated to protect against the newest threats without any work or expense on the user’s side.

Probably the biggest downside of this type of WAF is that it comes from a third-party source, so you are limited in customization options and rely entirely on the provider’s services.

Now that we have a basic idea of what a WAF is and the different types, let’s dive deeper into HOW it protects your precious web apps.

How WAFs Protect Your Web Applications From Malicious Attacks

According to a 2019 web applications report by Positive Technologies, on average, hackers can attack users in 9 out of 10 web applications. Yikes!

The report also found that breaches of sensitive data were a threat in 68% of web applications.

Statistics like these reinforce the need for more effective web app protection.

As mentioned earlier, WAFs protect your server by analyzing the HTTP traffic passing through – detecting and blocking anything malicious BEFORE it reaches your web applications (see below).

A look at how a WAF protects your site from cyber attacks
Talk to the WAF hand pesky attacker.

As we just discussed, WAFs can be network-based (hardware), host-based (software), or cloud-based; in other words, physical or virtual.

When it comes to how WAFs filter, detect, and block malicious traffic – they achieve this in a couple of different ways…

WAF Security Models: Blocklist, Allowlist, Or Both

WAFs typically follow either a “Blocklist” (negative) or “Allowlist” (positive) security model, or sometimes both.

When employing a Blocklist security model, you assemble a list of unwanted IP addresses or user agents, and your WAF automatically blocks them.

The Allowlist model does the opposite and allows you to create an exclusive list of IP addresses and user agents that are permitted. Everything else is denied.

Both models have their pros and cons, so modern WAFs often offer a hybrid security model that gives you access to both.
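The three decision models above can be sketched as follows. The function names and the IP-only matching are simplifications for illustration (real WAFs also match user agents, headers, and URLs), and the addresses come from the reserved documentation ranges:

```python
# Illustrative sketch of blocklist vs. allowlist vs. hybrid decisions.

def blocklist_decision(client_ip: str, blocklist: set) -> bool:
    """Negative model: everything is allowed unless explicitly blocked."""
    return client_ip not in blocklist

def allowlist_decision(client_ip: str, allowlist: set) -> bool:
    """Positive model: everything is denied unless explicitly allowed."""
    return client_ip in allowlist

def hybrid_decision(client_ip: str, allowlist: set, blocklist: set) -> bool:
    """Hybrid model: the blocklist wins first, then the allowlist is consulted."""
    if client_ip in blocklist:
        return False
    return client_ip in allowlist

blocked = {"203.0.113.9"}   # known bad actor (documentation example IP)
allowed = {"198.51.100.7"}  # trusted address (documentation example IP)

print(blocklist_decision("198.51.100.7", blocked))  # True: not on the blocklist
print(allowlist_decision("203.0.113.9", allowed))   # False: not on the allowlist
```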

Attacks Prevented by WAFs

Obviously, not every attack out there can be stopped by a WAF; however, they handle a lot of them.

Some of the major attacks that WAF security can help stop are:

SQL Injection: Malicious code is injected or inserted into a web entry field. The injections allow attackers to compromise the application as well as the underlying systems.

Cross-site Scripting (XSS): Client-side scripts are injected by attackers into web pages other users view.

Web Scraping: Automated extraction of data from websites, often performed at scale without permission.

Unvalidated Input: HTTP requests are tampered with by attackers to bypass security mechanisms on a site.

Cookie Poisoning: When a cookie is modified to gain unauthorized info about the user for malicious purposes, such as identity theft.

Layer 7 DoS: An HTTP flood attack that makes use of valid-looking requests to overwhelm the application layer.
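A Layer 7 flood is hard to spot request-by-request because each request looks valid, so WAFs commonly fall back on per-client rate limiting. A minimal sliding-window counter might look like the sketch below; the class name, limits, and window are arbitrary choices for the example:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class RateLimiter:
    """Allow at most `limit` requests per client within `window` seconds."""

    def __init__(self, limit: int = 100, window: float = 10.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client_ip -> timestamps of recent requests

    def allow(self, client_ip: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # Evict timestamps that have slid out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # request rate looks like an HTTP flood: refuse it
        q.append(now)
        return True

# Three requests per second allowed; the fourth inside the window is refused.
limiter = RateLimiter(limit=3, window=1.0)
print([limiter.allow("203.0.113.9", now=t) for t in (0.0, 0.1, 0.2, 0.3)])
```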

Security enhancements are constantly being updated and implemented, so keep in mind that a good WAF can cover a lot more than just what’s noted above.

When determining a WAF provider, or implementing one, be sure it’s up-to-date and includes the essentials, especially the OWASP Top 10 — which we’ll be discussing next.

How WAFs Guard Your Web Apps Against the OWASP Top 10

OWASP image
OWASP has a Top 10 that all good WAFs should protect against — or else that can sting.

As well as performing based on one of the three security models mentioned earlier, WAFs come automatically armed with a specific set of rules (or policies).

These policies combine rule-based logic, parsing, and signatures to help detect and prevent many different web application attacks like previously mentioned.

In particular, WAFs are well known for protecting against a number of the top 10 web application security risks published by OWASP (the Open Web Application Security Project).

This includes risks such as Server-Side Request Forgery (SSRF), Injection, and Security Logging and Monitoring Failures.

Here’s a look at the current Top 10. You can see that there is some consolidation and new categories from 2017.

owasp top 10
These are what’s ranking in 2021 for OWASP. (Source: https://owasp.org/www-project-top-ten/)

Find more information about OWASP here.

Virtual Patch

Another valuable safeguard you’ll hear many WAF providers talk about is something called a “virtual patch.”

A virtual patch is essentially a rule (or often a set of rules) that can help resolve a vulnerability in your software without needing to adjust the code itself.

Many WAFs can deploy virtual patches to repair WordPress core, plugin, and theme vulnerabilities when required.
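As a concrete illustration, virtual patches are often expressed as ModSecurity-style rules. Everything specific below is hypothetical — the rule ID, the vulnerable `item_id` parameter, and the vulnerability itself — but it shows the idea: exploit input is rejected at the firewall until the vulnerable plugin code is actually fixed:

```apache
# Hypothetical virtual patch: a plugin's "item_id" parameter is vulnerable
# to SQL injection, so reject any value that is not purely numeric.
SecRule ARGS:item_id "!@rx ^[0-9]+$" \
    "id:900101,phase:2,deny,status:403,log,msg:'Virtual patch: non-numeric item_id blocked'"
```

Once the plugin ships a real fix, the rule can simply be removed.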

How WAFs Also Help You Meet Legal Security Standards

Along with security, a WAF can help with legalities.

If your organization works with, processes, or stores sensitive information (credit card details, etc.), it’s essential you comply with security requirements and standards. This is where a WAF comes into play.

WAFs can help businesses of all sizes comply with regulatory standards like PCI DSS, HIPAA, and GDPR, making the firewall valuable from both compliance and security perspectives.

For example, the number one requirement for organizations under the Payment Card Industry Data Security Standard (PCI DSS) is: “Installing and maintaining a firewall configuration to protect cardholder data.”

And let’s face it, keeping in compliance with legalities also gives you a great reputation. It’s a win-win to use a WAF to meet legal standards.

Different Types of WordPress Firewalls

Considering WordPress is the world’s most popular content management system and a frequent target of attacks, it’s important that WordPress sites have a WAF in place. There are several types of firewalls you can deploy:

  • WAF Security Plugins
  • On-site Dedicated WordPress WAFs
  • Online WordPress Website WAFs

Here’s a look at each one.

WAF Security Plugins

Most self-hosted WordPress firewalls are WordPress plugins. They’re ideal considering how easy they are to implement and how affordable they are. Plus, it’s common for WAF plugins to include malware scanners, too.

Some follow a “SaaS” model, offering an easy and stress-free introduction to the world of application firewalls.

On the other side of the coin, some plugins won’t fit the bill. It all depends on the level at which the WAF sits.

For example, some plugin WAFs sit at the DNS level, which means HTTP traffic is monitored and filtered by the provider’s cloud proxy servers before it ever reaches your site.

This is the recommended level for these kinds of firewall plugins. Some well-known WAF providers are set up in this way (e.g. Cloudflare — which is one of the providers we’ll be discussing later in this article).

Then you have other WordPress security plugins with built-in WAFs that sit at the application level. This means the firewall examines incoming traffic after it has already reached your server – but before loading WordPress scripts.

Plugins are a simple and effective way to deploy a WAF and generally work well for small or medium-sized websites. We’ll be going over some WAF vendor options later on in this article.

On-site Dedicated WordPress WAFs

These types of firewalls are installed between your WordPress sites and an internet connection. This means that every HTTP request sent to your WordPress site initially passes through the WAF.

These dedicated WAFs are a somewhat more secure option than plugins. That being said, they’re more expensive and will require some technical knowledge to manage.

Online WordPress Firewalls

This type of firewall does not need to be installed on the same network as your webserver to function. It’s an online service that works like a proxy server, where your site’s traffic comes through it for filtering and is then forwarded to your website.

With an online WordPress firewall, your domain’s DNS records will need to be configured to point to the online WAF. This means your visitors communicate with the online WordPress firewall rather than directly with your WordPress website.

The downside? Your web server needs to be accessible over the internet for the WAF to forward traffic to your website. In other words, people can still communicate directly with your web server if its IP address is known.

Basically, in a non-targeted WordPress attack, in which attackers scan entire networks for vulnerable sites, your web server and site will still be reachable.

Luckily, you can configure your server’s firewall to respond only to traffic coming from the online WordPress firewall, so even a direct-to-IP attack won’t reach your site.
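For example, on a Linux server the host firewall could be told to accept web traffic only from the online WAF’s network. The address range below is a documentation placeholder; in practice you would substitute your provider’s published IP ranges:

```shell
# Accept HTTPS traffic only from the online WAF's network (placeholder range),
# then drop direct-to-IP connections from everyone else.
iptables -A INPUT -p tcp --dport 443 -s 198.51.100.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP
```

The same effect can be achieved at the web-server level (for example, with `allow`/`deny` directives in nginx), though a network-level rule also covers non-HTTP probes.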

Limitations of WordPress Firewalls

Like anything, firewalls can be imperfect. Sure, they offer added protection, but there are some vulnerabilities.

A couple of examples of this are Limited Zero-Day Vulnerability Protection, and Web Application Firewall Bypasses.

With a zero-day WordPress vulnerability, there’s a chance that your WordPress firewall won’t block the attack.

This is why your vendor’s responsiveness is critical. Plus, you should always use software from responsive and trusted businesses to ensure the firewall rules are kept up to date.

In the case of web application firewall bypasses, WAFs can have vulnerabilities of their own, and techniques exist for getting around their protection.

Here again, if your vendor is responsive and can remediate issues in a quick time frame, you should be okay.

It’s also not uncommon for WAFs to produce false positives (blocking harmless traffic) and false negatives (letting harmful traffic through). This is partly because the application protected by the WAF changes regularly.

Additionally, some security practices are often neglected, such as preventative measures like code and infrastructure audits.

There will always be new WAF vulnerabilities that arise as new digital tools emerge. Many security issues get resolved, but some aren’t noticed right away.

All this being said, WAFs need to be actively maintained and configured to ensure they’re up-to-date.

WAF Deployment

WAFs are deployed in a few ways. This all depends on where your applications are deployed, what services are needed, how you want them managed, and the level of flexibility and performance required.

Here’s the quick rundown…

Reverse Proxy: The WAF acts as a proxy for the application server, so client traffic goes directly to the WAF.

Transparent Reverse Proxy: A reverse proxy running in transparent mode. The WAF forwards filtered traffic to the web applications separately, which allows for IP masking by hiding the address of the application server.

Transparent Bridge: HTTP traffic goes straight to the web application, so the WAF sits transparently between the device and the server.

You’ll have to decide what method of deployment works best and covers all that you need.

WAF Vendors

When it comes to implementing WAFs, there’s no shortage of companies and vendors that are out there to help. Just google “WAF Vendors” — and a ton of results will appear, including a lot of Top 10 lists and more.

That being said, here is a look at some of the top companies out there that have stuck out to us as major contenders when it comes to WAFs. They all have features that cater to individual needs.

We’ll take a look at the following WAF vendors:

  • AWS
  • Cloudflare
  • Azure
  • WPMU DEV
  • Imperva
  • Prophaze
  • Akamai
  • Wordfence
  • Sucuri

Below is a summary of who they are and what they’re best at. Plus, we’ll point out some of the top features of each company and the significant preventative security measures they take care of.

AWS

aws logo.
AWS is an excellent WAF solution for small to large businesses.

Amazon’s AWS WAF helps stop attacks from web exploits and bots that can affect availability, compromise your security, and consume excessive resources.

With this WAF, you’ll be in control of how traffic reaches your applications by setting up security rules that filter bot traffic and block common attack patterns (e.g. SQL injections).

This WAF is deployed on Amazon CloudFront as part of your CDN. What’s especially lovely about this WAF is that you pay only for what you use, and the costs are based on the number of rules you have. Plus, there are costs associated with the number of web requests your application receives.

Top Features: Amazon’s AWS WAF offers cost-effective web application protection along with ease of deployment and maintenance. Security is also integrated with how you develop your applications, giving you more customization options than other WAFs.

Best For: Businesses of all sizes, as long as they’re AWS clients.

Helps Mitigate: DDoS attacks, SQL Injections, and Cross-Site Scripting (XSS).

Cloudflare

Cloudflare logo.
Cloudflare is here to help secure your assets with layered defenses.

Cloudflare is a top-rated cloud-delivered application security company. And, of course, a powerful WAF is integrated with its protection. Their WAF blocks over 57 billion cyber threats per day.

Its global 100 Tbps network sees 30M requests per second, so it’s up for the job when it comes to handling your websites. It offers complete application security from the same cloud network, making it practical and uniform when it comes to security posture.

Cloudflare’s network has unparalleled visibility into threats, which yields the sharpest and most effective machine learning.

Top Features: It has layered defenses, including Cloudflare managed rules, that offer advanced zero-day vulnerability protections. Plus, it utilizes the core OWASP rules, supports custom rulesets, monitors and blocks stolen or exposed credentials, and has flexible response options.

Additionally, it has logging & reporting, issue tracking, analytics, and application-layer control.

Best For: Personal use to small and mid-sized businesses. Also, it’s excellent for high-level enterprises and companies. Plus, it has WordPress WAF rules, so it’s great for WordPress sites.

Helps Mitigate: OWASP Top 10, Comment Spam, DDoS attacks, SQL injections, HTTP Headers, and more.

Azure

Azure logo.
Azure is Microsoft’s WAF solution.

Microsoft’s Azure offers a cloud-native WAF on one of the most successful cloud platforms out there.

The Azure service offers a range of software that provides utilities to other systems, and one of those products is the WAF. It tracks the top ten vulnerabilities logged by OWASP, and you can add custom rules, too.

It has a metered charge rate, calculated on an hourly rate and data throughput and then billed monthly. This means much lower upfront costs compared to some other WAF providers.

Top Features: Azure has comprehensive protection for OWASP, real-time visibility into your environment, and security alerts. Plus, it has full REST API support so that it can automate DevOps processes. It also has DDoS protection.

Best For: Large and small businesses alike.

Helps Mitigate: OWASP Top 10, DDoS attacks, anything covered by custom rules, and more.

WPMU DEV

wpmu dev logo
Yes, our hosting includes a WAF.

We couldn’t let this article go by without mentioning our very own highly optimized WAF here at WPMU DEV. Our WAF is completely free to use with our hosting, already tweaked for WordPress, updated daily, and much more.

The WAF we use consumes fewer server resources by not running in PHP. Additionally, it doesn’t require adding a single line of code, so your site’s performance will remain strong.

We also have more than 300 firewall rules (or policies). These policies combine rule-based logic, parsing, and signatures, which lets them detect and stop web application attacks.

Top Features: In our testing, our WAF is 25% faster than the leading plugin-based firewall. On top of our 300+ firewall ruleset, we also protect against the OWASP Top Ten. Additionally, it’s free with any hosted account!

Best For: Small to major WordPress sites, hosting resellers, and any agency or individual that manages multiple websites.

Helps Mitigate: Attacks ranging from SQL injections, XSS, and many more.

Imperva

Imperva logo.
Imperva is a great option that you can try for free.

Imperva’s WAF stops attacks with practically zero false positives. It also has a global SOC (Security Operations Center) to make sure your company is protected within moments of an attack’s discovery.

It’s an all-in-one security solution that has all the features required for website security. There are free tools for Data Classification and Database Vulnerability Testing.

Top Features: Imperva features secure cloud and on-premises applications. It stops OWASP Top 10 and Automated Top 20, plus has attack detection, SIEM integration, and reporting.

Best For: Small to large-sized companies.

Helps Mitigate: OWASP Top 10, Automated Top 20, and more.

Prophaze

Prophaze logo
Prophaze offers unlimited rule sets.

Prophaze WAF handles a ton when it comes to security. It’s not only a WAF but also a combination of RASP, CDN, DDoS protection, and more.

It offers real-time website protection by implementing powerful cloud-based technologies that work against the latest threats. It automatically scans your site for thousands of vulnerabilities and the OWASP Top 10. On top of that, it doesn’t need any additional configuration and updates automatically to tackle new threats.

Prophaze has unlimited rule sets, plus custom integrations with SIEM solutions, and it supports all public clouds (e.g. AWS).

Top Features: Some key security features are Bot Mitigation, a Real-Time Dashboard, 24/7 support, and ML-Based Threat Intelligence.

Best For: A range from midmarket to high level enterprise.

Helps Mitigate: OWASP Top 10 API, DDoS, Bot Protection, and more.

Akamai

Akamai WAF image.
Akamai WAF uses crowdsourced intelligence to help protect against threats.

Akamai’s WAF is a dependable solution that will protect your site against all known attacks. It’s a world leader in DDoS protection and integrates complete DDoS protection with its WAF, so you won’t need to route traffic through two companies to receive legitimate requests at your web server.

With Akamai, you can detect threats with crowdsourced intelligence, plus deploy and manage the WAF efficiently with just a few clicks.

Top Features: Akamai has more automation than many other options. It’s also easy to use with protection against DDoS attacks and more. It also features a dashboard, alerts, and additional information about blocked attacks and how your site was protected.

Best For: Small to Large Companies

Helps Mitigate: DDoS Attacks and all OWASP Top 10.

Wordfence

Wordfence logo
Wordfence is a WAF that runs at the endpoint, which makes for deep integration with WordPress.

Wordfence is another solid WAF option made for WordPress sites: a popular all-in-one security plugin with over two million active installs. It includes an endpoint firewall and malware scanner that were specifically built for WordPress.

Its WAF runs at the endpoint, which enables deep integration with WordPress. Unlike cloud alternatives, it doesn’t break encryption, can’t be bypassed, and can’t leak data.

It also comes with a nice dashboard that indicates security threats, scans, and more.

Top Features: Spam filter, scheduled security scans, brute force attack prevention, live traffic monitoring, and more.

Best For: WordPress sites and small to large corporations.

Helps Mitigate: Brute force attacks, OWASP Top 10, and other malicious attacks.

Sucuri

sucuri logo
Another excellent option for your WAF and WordPress.

Sucuri is a leading security company for WordPress. It features a cloud-based WAF that’s consistently updated to improve detection and mitigation against new and evolving threats. Plus, you can add your own custom rules.

With Sucuri, you can also enhance your WordPress site’s performance. It features caching optimization, an Anycast CDN, and website acceleration.

Top Features: DNS Level Firewall, malware & blocklist removal services, and brute force protection.

Best For: WordPress sites and companies/businesses of any size.

Helps Mitigate: All known attacks (e.g. SQL injections, RCE, RFU, etc.).

Of course, there are many more options out there as well. This is just a shortlist of some highly rated companies that can serve you well when it comes to WAFs.

It’s No Gaffe That You Need a WAF

Now that we’ve covered the spectrum of WAFs, you can see that they’re beneficial for security, compliance, reputation, and peace of mind. And hopefully, you learned more about WAFs than you ever thought you would!

Plus, with the many vendors providing WAFs, you can have one up and running in a matter of moments. Whether you run a WordPress site or not, there’s a WAF for you.

Hopefully, this reference guide has helped to answer any questions you or your clients have about WAFs.

340: With George Francis

Chris gets to chat with George Francis, an incredible digital artist in the generative art space as well as educator and all-around developer. George has been all over the place lately, producing really outstanding work (CSS-Tricks uses a Houdini paint worklet from George in the header and footer). Not only does George make art that has that little special something that turns heads, he helps you do it too by sharing all the tools and techniques he uses in blog posts.

Time Jumps

  • 00:28 Guest introduction
  • 01:35 Do you like the term Generative Code?
  • 03:27 Limiting the randomness
  • 06:04 How do I random blob?
  • 10:52 Sponsor: Netlify
  • 12:22 Which blobs get popular on CodePen
  • 16:00 Working with Houdini
  • 23:08 What goals do you have with your work?
  • 26:49 NFTs and generative code
  • 29:46 Tell us about your day job

Sponsor: Netlify

Netlify has used the slogan “Static without limits” — which I really like. It’s useful to think of Netlify as a static file host as the foundation. That’s what Jamstack is, after all. But it doesn’t mean that you are limited in what you can build. You can make a site that is every bit as dynamic as any other architecture out there, and Netlify not only encourages it but will help you do it with features like hosted cloud functions.

The post 340: With George Francis appeared first on CodePen Blog.

WordPress Has Never Offered an Ideal Writing Experience

It needed to be said. I know some of you loved writing in the classic editor. I know some of you enjoy the current block editor. Some of you may have even been thrilled with the platform’s earlier attempt at a distraction-free writing mode.

But, for actual writing, WordPress has always been kind of, sort of, OK — maybe even good — but not great.

Coupled with a content-focused theme with great typography and a registered editor stylesheet, both the classic and block editors could be equals. They would offer an interface and experience of editing the content as seen on the front end. However, having the back and front ends meet does not necessarily mean you have an ideal writing experience. It can be a top-tier platform for layout and design. However, for typing words on a screen, there are better tools.

When I talk about writing, I am generally referring to mid or long-form content. If you are penning 200-word posts, dropping in photos, or designing a landing page, WordPress is as good as it comes. For publishing software, it is a powerhouse that few systems can rival.

However, publishing and writing are two different things.

There was a time that I wrote pages upon pages of essays, fiction, and everything else by hand. With a pen and pad, I spent hours drafting papers for my college classes. Even in my final two years, as I took four or five English and journalism courses at a time, I clung to what I knew best. The feel of the pen in my hand was a source of comfort. It glided atop the page in legible-but-imperfect cursive.

It was not until an ethnography class that I had to put down the pen and move on to the technological upgrade of the computer. Don’t get me wrong. I was a speedy typist at the time and was well on my way to becoming a WordPress developer. I did not come of age with computers, but I picked up the skills I needed quickly. I was even writing blog posts in the OG classic editor back then.

However, writing was such a personal act for me, and the keyboard and screen felt impersonal. A 30-page ethnographic paper on modern literacy changed my view on the matter. Since then, I have not looked back.

If you are concerned that I will say that you are stuck in the past, that is not the case. The tools we use can be a great comfort to us. I would not tell a pianist not to compose their next piece on the old church piano they have played since childhood. That may be one source of their inspiration; the same goes for someone’s favorite writing software.

What I have learned is to try out new things once in a while. I am very much the type of person who gets stuck using the tools that I am comfortable with, so I remind myself to mix it up from time to time.

The classic WordPress editor and I never clicked. Eventually, I learned to write in Markdown and port those posts to the WordPress editor. Mark Jaquith’s Markdown on Save plugin was a godsend for many years. Eventually, I switched to Jetpack’s Markdown module. Today, the block editor converts my preferred writing format to blocks automatically as I paste it in.

As much as I love the block editor, I rarely use it during the drafting process. I am literally writing this post in Atom.

Screenshot of a blog post written in a monospace font in the Atom editor.
My writing workspace (Atom).

Atom is known more for being a code editor, but its packages come in handy for Markdown enthusiasts. I also like using something with quick folder access for traversing through various ongoing stories and projects. I use a simple “bucket” system for working, published, and trashed posts to organize everything. Once I finish drafting and running the first edit, I copy and paste the text directly into the WordPress editor. Then, I dive into the final editing rounds. This is where WordPress becomes far more beneficial to my flow. I can make adjustments that I did not see in plainer text format, and dropping in media is simple.

I am sure many people would dislike my choice of writing tools or my workflow. Some people enjoy writing in Microsoft Word — really, I have heard such people exist. Others publish via email, apps, or other computer programs.

Currently, I am giving Dabble a try during National Novel Writing Month (NaNoWriMo). I wrote via Atom the last time I participated in the writing challenge. However, the tool I enjoy most for writing blog posts offers a sub-par experience for something as complex as a 50,000-word manuscript.

Screenshot of the Dabble writing software. Content areas shows words written on the screen.
Writing a novel manuscript in Dabble.

Dabble is a platform specifically built for writing books. I wish it was open-source, but it is hard to come by equivalent software without restrictive licensing. Nevertheless, it does its job and sticks in its lane. It also does not hurt that it updates word counts through the NaNoWriMo API.

Thus far, I am loving the Dabble experience. It is also imperative that those who work on the WordPress platform step outside our bubbles and try related software. We should learn and grow from it. Then, bring those experiences back into the WordPress fold.

I cannot imagine writing a novel in WordPress without first creating a plugin that added the extra bits, such as scene and character cards, and cut away almost everything else. The editing canvas might be acceptable with the right style adjustments. Note: if anyone wants to build this, I would be happy to offer direct feedback.

WordPress may never be the ideal writing experience for all people. However, it should always offer a pathway toward publishing, regardless of what tools its users prefer.

It should also continue striving to create a more well-rounded writing experience. Besides a few oddities, the block editor seems to be on this path. Every now and again, I write a post in it. It is part of my promise to step outside my comfort zone. Each time, the experience is better. It continues to be in that “sort of good” zone, and I am OK with that. WordPress is making progress.


Java Localization – Create ResourceBundles from text files

Introduction

When localizing your application, ResourceBundle can be a great way to load Strings from different locales.

The only concrete implementation of ResourceBundle in the JDK is PropertyResourceBundle, which allows us to load static Strings from a text file. Because .properties is the common file extension used for ResourceBundles, creating ResourceBundle objects from this file format will be the focus of this tutorial.

Property files are commonly used to load application properties in Spring Boot projects, so if you have worked with Spring Boot, this file format should already be familiar to you. The difference today is that we will use property files to load localization data for ResourceBundle objects, which imposes some rules that we must follow.

Goals

At the end of the tutorial, you will have learned:

  1. How to create ResourceBundle objects from property files.

Prerequisite Knowledge

  1. Basic Java.
  2. Basic understanding of the java.util.Locale class.

Tools Required

  1. A Java IDE such as IntelliJ Community Edition.

Project Setup

To follow along with the tutorial, perform the steps below:

  1. Create a new Java project.
  2. Create a package com.example.
  3. Create a class called Entry.
  4. Create the main() method inside Entry.java.
Concept: Family

When working with ResourceBundle, the most basic concept that developers need to understand is family. A family is technically just a group of property files with a common base (file) name. For example, all of the property files in the list below belong to the same family

  • Menu.properties
  • Menu_en.properties
  • Menu_en_US.properties
  • Menu_es.properties
  • Menu_es_MX.properties

because they all share the common base name of Menu.

Creating the property files

To continue with our tutorial, we need to create some resource bundle files. The scenario that we will use for this tutorial is a restaurant menu that can display multiple languages.

The instructions below are specific to IntelliJ, but other IDEs might have similar options. You only need to make sure that the directory containing the property files is on the class-path/module-path when building your project.

  1. Right-click on the project root -> New -> Directory.
  2. Name it res, which will be used to store our property files.
  3. Right-click on the res directory -> Mark directory as -> Resources Root

Property files are just text files, but my IDE (IntelliJ) treats them as special files. I can simply

  1. right-click on res ->
  2. New ->
  3. Resource Bundle

resourcebundle.png

Then a Create Resource Bundle menu will pop up, at which point I can conveniently add multiple resource files for different locales at once.

create_resource_bundle.png

The created files will also be nicely grouped together by the common base name as well.

resource_bundle_group.png

If your IDE does not support this feature, you can just create 3 empty property files using the same names above.

Language and country codes

If you have been wondering what the language and country codes after the base name (as in Menu_en_US) do, I will explain them now.

To create a ResourceBundle object, we can use the static factory method getBundle(). It requires that the language and country codes appear in a specific order, separated by underscores (_). The formula is

baseName + _ + language + _ + script + _ + country + _ + variant

Except for the baseName, everything else is optional. If getBundle() cannot find a file matching the most specific name, it falls back through the less specific candidates (and the default Locale) until it finds a match. We are not concerned with the script and the variant in this tutorial.
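To see the formula in action, note that Locale.toString() joins the language, country, and variant with the same underscores, so we can preview the file name that getBundle() will look for. A small sketch (the class name BundleNames is just for this demo):

```java
import java.util.Locale;

public class BundleNames {
    public static void main(String[] args) {
        // Locale.toString() joins language, country, and variant with
        // underscores, mirroring the candidate file names getBundle() tries.
        Locale viVN = new Locale("vi", "VN");
        System.out.println("Menu_" + viVN);           // matches Menu_vi_VN.properties
        System.out.println("Menu_" + Locale.ENGLISH); // matches Menu_en.properties
    }
}
```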

The vi_VN in the property file name stands for Vietnamese_VIETNAM. I chose Vietnamese as a translation target because I am fluent in it, so the translations will be as natural as possible. I did not want to pick a language that I was not familiar with and risk translating English into something weird/offensive with Google Translate.

Property file format

In its simplest form, the content inside the property file follows a simple format

propertyName=propertyValue

The property name follows the camelCase convention, whereas the property value is implementation-specific. In our scenario, we can just use plain English/Vietnamese. We can also skip the double quotes entirely.

In the Menu_en.properties file, add the properties below.

pumpkinSoup=Pumpkin Soup
cheeseCake=Cheese Cake

In the Menu_vi_VN.properties file, add the properties below.

pumpkinSoup=Canh Bí
cheeseCake=Bánh Phô Mai

Create the ResourceBundle

Now, we can create a method (inside the Entry class) to show the menu to our English-speaking patrons using the code below.

private static void printEnglishMenu(){
   ResourceBundle rb = ResourceBundle.getBundle("Menu", Locale.ENGLISH); //1
   System.out.println("~~~English Menu~~~");
   System.out.println(rb.getString("pumpkinSoup")); //2
   System.out.println(rb.getString("cheeseCake")); //3
}
  1. As explained previously, we use the getBundle() method to get an instance of ResourceBundle, which is what line 1 does.
  2. We access a property value by passing the property name to the getString() instance method, as on lines 2 and 3.

When we call this method in main(), we get:

~~~English Menu~~~
Pumpkin Soup
Cheese Cake

If you have already set up your property files correctly (added them to the classpath), then ResourceBundle should have no problem finding your resource bundles. If your program throws a MissingResourceException, however, you should double-check your build/run configurations to see whether the property files are included.
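For example, requesting a base name that does not exist on the classpath fails immediately. A minimal sketch (NoSuchBundle is a deliberately made-up, nonexistent base name):

```java
import java.util.MissingResourceException;
import java.util.ResourceBundle;

public class MissingBundleDemo {
    public static void main(String[] args) {
        try {
            // No NoSuchBundle*.properties file exists anywhere on the classpath.
            ResourceBundle.getBundle("NoSuchBundle");
        } catch (MissingResourceException e) {
            // getBundle() throws once every candidate name has been tried.
            System.out.println("Bundle not found for base name NoSuchBundle");
        }
    }
}
```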

Now that we have an English menu ready, the next step is to add another method to show the Vietnamese menu.

Because the Locale class does not provide a convenient constant for the vi_VN Locale, we will create one in our program to keep the code clean whenever we want to use it. In the Entry class, create a constant VIETNAM as in the code below:

private static final Locale VIETNAM = new Locale("vi", "VN");

Finally, add the code for the printVietnameseMenu() method:

private static void printVietnameseMenu(){
   ResourceBundle rb = ResourceBundle.getBundle("Menu", VIETNAM); //using constant
   System.out.println("~~~Vietnamese Menu~~~");
   System.out.println(rb.getString("pumpkinSoup"));
   System.out.println(rb.getString("cheeseCake"));
}

The printVietnameseMenu() method is almost identical to the printEnglishMenu() method, except for the VIETNAM constant. When called in main(), this method prints:

~~~Vietnamese Menu~~~
Canh Bí
Bánh Phô Mai

Solution Code

Entry.java

package com.example;

import java.util.Locale;
import java.util.ResourceBundle;

public class Entry {
   private static final Locale VIETNAM = new Locale("vi", "VN");

   public static void main(String[] args){
       //printEnglishMenu();
       printVietnameseMenu();
   }

   private static void printEnglishMenu(){
       ResourceBundle rb = ResourceBundle.getBundle("Menu", Locale.ENGLISH); //1
       System.out.println("~~~English Menu~~~");
       System.out.println(rb.getString("pumpkinSoup")); //2
       System.out.println(rb.getString("cheeseCake")); //3
   }

   private static void printVietnameseMenu(){
       ResourceBundle rb = ResourceBundle.getBundle("Menu", VIETNAM); //using constant
       System.out.println("~~~Vietnamese Menu~~~");
       System.out.println(rb.getString("pumpkinSoup"));
       System.out.println(rb.getString("cheeseCake"));
   }
}

Menu_en.properties

pumpkinSoup=Pumpkin Soup
cheeseCake=Cheese Cake

Menu_vi_VN.properties

pumpkinSoup=Canh Bí
cheeseCake=Bánh Phô Mai

Summary

Besides text files, there are other ways to create ResourceBundles, such as extending ListResourceBundle or implementing your own ResourceBundle subclass.
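As a sketch of the ListResourceBundle route (the Menu_fr class and its French strings are made up for illustration): a class whose name follows the same family naming rules is picked up by getBundle() just like a property file.

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

// A bundle defined in code instead of a .properties file. The class name
// Menu_fr obeys the same family naming rules as Menu_fr.properties would.
class Menu_fr extends ListResourceBundle {
    @Override
    protected Object[][] getContents() {
        return new Object[][]{
                {"pumpkinSoup", "Soupe de citrouille"},
                {"cheeseCake", "Gateau au fromage"},
        };
    }
}

public class ListBundleDemo {
    public static void main(String[] args) {
        // getBundle() resolves the Menu_fr class for the French Locale.
        ResourceBundle rb = ResourceBundle.getBundle("Menu", Locale.FRENCH);
        System.out.println(rb.getString("pumpkinSoup"));
        System.out.println(rb.getString("cheeseCake"));
    }
}
```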

Also, another benefit of property files for translations is that you can have the translation team responsible for their own files without access to your code base.

The full project code can be found here https://github.com/dmitrilc/DaniwebPropertiesFile

JUnit 5 – Test Instance Lifecycle

Introduction

JUnit is a popular framework for creating tests for Java applications. Although individual unit tests are mostly straightforward, integration and functional tests are a little more involved because they usually require multiple components working together. For that reason, understanding the lifecycle of JUnit tests can be greatly beneficial.

In this tutorial, we will learn about the lifecycle of JUnit 5 test instances.

Goals

At the end of the tutorial, you will have learned:

  1. The different stages of a JUnit 5 test instance lifecycle.

Prerequisite Knowledge

  1. Basic Java.
  2. Basic JUnit.

Tools Required

  1. A Java IDE such as IntelliJ Community Edition.

Project Setup

To follow along with the tutorial, perform the steps below:

  1. Create a new Gradle Java project. I am using Java 17 and Gradle 7.2, but any Java version 8+ should work.
  2. Add the dependency for junit-jupiter-engine. The latest version is 5.8.1 as of this writing.

Below is the content of my build.gradle file.

plugins {
   id 'java'
}

group 'org.example'
version '1.0-SNAPSHOT'

repositories {
   mavenCentral()
}

dependencies {
   testImplementation 'org.junit.jupiter:junit-jupiter-api:5.8.1'
   testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.8.1'
}

test {
   useJUnitPlatform()
}
  3. Under src/test/java, create a new package called com.example.
  4. Under com.example, create a new class called LifecycleTest.
  5. We do not need the main() method for this tutorial.

Lifecycle Overview

There are 5 stages in a JUnit 5 test instance lifecycle. The list below orders them from first to last. The @ symbol means that there is an annotation matching that lifecycle stage as well.

  1. @BeforeAll: executed before all tests.
  2. @BeforeEach: executed before each test.
  3. @Test: the test itself.
  4. @AfterEach: executed after each test.
  5. @AfterAll: executed after all tests.

To see how they work together, we need to add some tests into our code. In LifecycleTest.java, add the code below.

package com.example;

import org.junit.jupiter.api.*;
import static org.junit.jupiter.api.TestInstance.Lifecycle.PER_CLASS;

//@TestInstance(PER_CLASS)
class LifecycleTest {

   @BeforeAll
   void beforeAll(){
       System.out.println("Before All");
   }

   @BeforeEach
   void beforeEach(){
       System.out.println("Before Each");
   }

   @Test
   void test(){
       System.out.println("Test");
   }

   @AfterEach
   void afterEach(){
       System.out.println("After Each");
   }

   @AfterAll
   void afterAll(){
       System.out.println("After All");
   }
}

But there is a problem with the code above. If we run the test, it will actually throw a JUnitException.

@BeforeAll method 'void com.example.LifecycleTest.beforeAll()' must be static unless the test class is annotated with @TestInstance(Lifecycle.PER_CLASS).

The reason for the JUnitException is that methods annotated with @BeforeAll and @AfterAll must either be static, or the enclosing class must be annotated with @TestInstance(Lifecycle.PER_CLASS). By default, JUnit creates a new instance of LifecycleTest for every test, equivalent to @TestInstance(Lifecycle.PER_METHOD). So if we want the code to run correctly, we need to add @TestInstance(Lifecycle.PER_CLASS) on top of the LifecycleTest class declaration.
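The other fix, which the error message itself suggests, is to make beforeAll() and afterAll() static. The plain-Java sketch below (no JUnit required; the class and counters are illustrative) shows why: under the default PER_METHOD lifecycle a fresh instance is created for every test, so no instance exists yet when @BeforeAll must run, while PER_CLASS reuses one instance throughout.

```java
public class LifecycleSketch {
    static int instancesCreated = 0;
    LifecycleSketch() { instancesCreated++; }

    void test1() {}
    void test2() {}

    public static void main(String[] args) {
        // PER_METHOD (the default): JUnit builds a fresh instance per test.
        new LifecycleSketch().test1();
        new LifecycleSketch().test2();
        System.out.println("PER_METHOD instances: " + instancesCreated);

        // PER_CLASS: one shared instance runs every test method, so a
        // non-static @BeforeAll has an instance to be invoked on.
        instancesCreated = 0;
        LifecycleSketch shared = new LifecycleSketch();
        shared.test1();
        shared.test2();
        System.out.println("PER_CLASS instances: " + instancesCreated);
    }
}
```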

Go ahead and uncomment the line

//@TestInstance(PER_CLASS)

in the code. When we run the test again, we should now see the test passing and the output prints out the lines in expected order.

Before All
Before Each
Test
After Each
After All

Repeated Tests

While the lifecycle is simple to understand when executing only a single test, repeated tests behave a little bit differently. When running repeated tests, @BeforeAll and @AfterAll are only executed once, while @BeforeEach and @AfterEach are always executed for each test. To demonstrate, let us comment out the @Test method in our code.

//    @Test
//    void test(){
//        System.out.println("Test");
//    }

And add a repeated test method. This test method will run twice because of the int value we passed to the @RepeatedTest annotation.

@RepeatedTest(2)
void repeatTest(){
   System.out.println("Repeat Test");
}

When we run the test, we will see that @BeforeAll and @AfterAll were executed only once, while @BeforeEach and @AfterEach were each executed twice (I have added some line separators to make the output easier to read).

Before All

Before Each
Repeat Test
After Each

Before Each
Repeat Test
After Each

After All

Solution Code

package com.example;

import org.junit.jupiter.api.*;
import static org.junit.jupiter.api.TestInstance.Lifecycle.PER_CLASS;

@TestInstance(PER_CLASS)
class LifecycleTest {

   @BeforeAll
   void beforeAll(){
       System.out.println("Before All");
   }

   @BeforeEach
   void beforeEach(){
       System.out.println("Before Each");
   }

//    @Test
//    void test(){
//        System.out.println("Test");
//    }

   @RepeatedTest(2)
   void repeatTest(){
       System.out.println("Repeat Test");
   }

   @AfterEach
   void afterEach(){
       System.out.println("After Each");
   }

   @AfterAll
   void afterAll(){
       System.out.println("After All");
   }
}

Summary

We have learned about the different stages of a JUnit 5 test instance lifecycle. The full project code can be found here https://github.com/dmitrilc/DaniwebJunitLifecycle/tree/master

Chapter 10: Browser Wars

In June of 1995, representatives from Microsoft arrived at the Netscape offices. The stated goal was to find ways to work together—Netscape as the single dominant force in the browser market and Microsoft as a tech giant just beginning to consider the implications of the Internet. Both groups, however, were suspicious of ulterior motives.

Marc Andreessen was there. He was already something of a web celebrity. Newly appointed Netscape CEO James Barksdale also came. On the Microsoft side was a contingent of product managers and engineers hoping to push Microsoft into the Internet market.

The meeting began friendly enough, as the delegation from Microsoft shared what they were working on in the latest version of their operating system, Windows 95. Then, things began to sour.

According to accounts from Netscape, “Microsoft offered to make an investment in Netscape and give Netscape’s software developers crucial technical information about the Windows operating system if Netscape would agree not to make a browser for [the] Windows 95 operating system.” If that was to be believed, Microsoft would have tiptoed over the line of what is legal. The company would be threatening to use its monopoly to squash competition.

Andreessen, no stranger to dramatic flair, would later dress the meeting up with a nod to The Godfather in his deposition to the Department of Justice: “I expected to find a bloody computer monitor in my bed the next day.”

Microsoft claimed the meeting was a “setup,” initiated by Netscape to bait them into a compromising situation they could turn to their advantage later.

There are a few different places to mark the beginning of the browser wars. The release of Internet Explorer 1, for instance (late summer, 1995). Or the day Andreessen called out Microsoft as nothing but a “poorly debugged set of device drivers” (early 1995). But June 21, 1995—when Microsoft and Netscape came to a meeting as conceivable friends and left as bitter foes—may be the most definitive.


Andreessen called it “free, but not free.”

Here’s how it worked. When the Netscape browser was released, it came with a fee of $39 per copy. That was officially speaking. But fully functional Netscape beta versions were free to download from their website. And universities and non-profits could easily get zero-cost licenses.

For the upstarts of the web revolution and open source tradition, Netscape was free enough. Buttoned-up corporations buying in bulk with specific contractual needs could license the software for a reasonable fee. Free, but not free. “It looks free optically, but it is not,” a Netscape employee would later describe it. “Corporations have to pay for it. Maintenance has to be paid.”

“It’s basically a Microsoft lesson, right?” was how Andreessen framed it. “If you get ubiquity, you have a lot of options, a lot of ways to benefit from that.” If people didn’t have a way to get quick and easy access to Netscape, it would never spread. It was a lesson Andreessen had learned behind his computer terminal at the NCSA research lab at the University of Illinois. Just a year prior, he and his friends built the wildly successful, cross-platform Mosaic browser.

Andreessen worked on Mosaic for several years in the early ’90s. But he began to feel cramped by increasing demands from higher-ups at NCSA hoping to capitalize on the browser’s success. At the end of 1993, Andreessen headed west to stake his claim in Silicon Valley. That’s where he met James Clark.

Netscape Communications Corporation co-founders Jim Clark, left, and Marc Andreessen (AP Photo/HO)

Clark had just cut ties with Silicon Graphics, the company he created. A legend in the Bay Area, Clark was well known in the valley. When he saw the web for the first time, someone suggested he meet with Andreessen. So he did. The two hit it off immediately.

Clark—with his newly retired time and fortune—brought an inner circle of tech visionaries together for regular meetings. “For the invitees, it seemed like a wonderful opportunity to talk about ideas, technologies, strategies,” one account would later put it. “For Clark, it was the first step toward building a team of talented like-minded people who populate his new company.” Andreessen, still very much the emphatic and relentless advocate of the web, increasingly moved to the center of this circle.

The duo considered several ideas. None stuck. But they kept coming back to one. Building the world’s first commercial browser.

And so, on a snowy day in mid-April 1994, Andreessen and Clark took a flight out to Illinois. They were there with a single goal: Hire the members of the original Mosaic team still working at the NCSA lab for their new company. They went straight to the lobby of a hotel just outside the university. One by one, Clark met with five of the people who had helped create Mosaic (plus Lou Montulli, creator of Lynx and a student at University of Kansas) and offered them a job.

Right in a hotel room, Clark printed out contracts with lucrative salaries and stock options. Then he told them the mission of his new company. “Its mandate—Beat Mosaic!—was clear,” one employee recalled. By the time Andreessen and Clark flew back to California the next day, they’d have the six new employees of the soon-to-be-named Netscape.

Within six months they would release their first browser—Netscape Navigator. Six months after that, the easy-to-use, easy-to-install browser would overrun the market and bring millions of users online for the first time.

Clark, speaking to the chaotic energy of the browser team and the speed at which they built software that changed the world, would later say Netscape gave “anarchy credibility.” Writer John Cassidy puts that into context. “Anarchy in the post-Netscape sense meant that a group of college kids could meet up with a rich eccentric, raise some money from a venture capitalist, and build a billion-dollar company in eighteen months,” adding, “Anarchy was capitalism as personal liberation.”


Inside of Microsoft were a few restless souls.

The Internet, and the web, was passing the tech giant by. Windows was the most popular operating system in the world—a virtual monopoly. But that didn’t mean they weren’t vulnerable.

As early as 1993, three employees at Microsoft—Steven Sinofsky, J. Allard, and Benjamin Slivka—began to sound the alarms. Their uphill battle to make Microsoft realize the promise of the Internet is documented in the “Inside Microsoft” profile penned by Kathy Rebello, published in Bloomberg in 1996. “I dragged people into my office kicking and screaming,” Sinofsky told Rebello. “I got people excited about this stuff.”

Some employees believed Microsoft was distracted by a need to control the network. Investment poured into a proprietary network in the vein of CompuServe or Prodigy, called the Microsoft Network (or MSN). Microsoft wanted to control the entire networked experience. But MSN would ultimately be a huge failure.

Slivka and Allard believed Microsoft was better positioned to build with the Internet rather than compete against it. “Microsoft needs to ensure that we ride the success of the Web, instead of getting drowned by it,” wrote Slivka in some of his internal communication.

Allard went a step further, drafting an internal memo named “Windows: The Next Killer Application for the Internet.” Allard’s approach, laid out in the document, would soon be the cornerstone of Microsoft’s Internet strategy. It consisted of three parts. First, embrace the open standards of the web. Second, extend its technology to the Microsoft ecosystem. Finally (and often forgotten), innovate and improve web technologies.

After a failed bid to acquire BookLink’s InternetWorks browser in 1994—AOL swooped in and outbid them—Microsoft finally got serious about the web. And their meeting with Netscape didn’t yield any results. Instead, they negotiated a deal with NCSA’s commercial partner Spyglass to license Mosaic for the first Microsoft browser.

In August of 1995, Microsoft released Internet Explorer version 1.0. It wasn’t very original, based on code that Spyglass had licensed to dozens of other partners. Shipped as part of an Internet Jumpstart add-on, the browser was bare-bones, clunkier and harder to use than what Netscape offered.

Source: Web Design Museum

On December 7th, Bill Gates hosted a large press conference on the anniversary of Pearl Harbor. He opened with news about the Microsoft Network, the star of the show. But he also demoed Internet Explorer, borrowing language directly from Allard’s proposal. “So the Internet, the competition will be kind of, once again, embrace and extend,” Gates announced, “And we will embrace all the popular Internet protocols… We will do some extensions to those things.”

Microsoft had entered the market.


Like many of her peers, Rosanne Siino began learning the world of personal computing on her own. After studying English in college—with an eye towards journalism—Siino found herself at a PR firm with clients like Dell and Seagate. Siino was naturally curious and resourceful, and read trade magazines and talked to engineers to learn what she could about personal computing in the information age.

She developed a special talent for taking the language and stories of engineers and translating them into bold visions of the future. Friendly, and always engaging, Siino built up a Rolodex of trade publication and general media contacts along the way.

After landing a job at Silicon Graphics, Siino worked closely with James Clark (he would later remark she was “one of the best PR managers at SGI”). She identified with Clark’s restlessness when he made plans to leave the company—an exit she helped coordinate—and decided if the opportunity came to join his new venture, she’d jump ship.

A few months later, she did. Siino was employee number 19 at Netscape; its first public relations hire.

When Siino arrived at the brand new Netscape offices in Mountain View, the first thing she did was sit down and talk to each one of the engineers. She wanted to hear—straight from the source—what the vision of Netscape was. She heard a few things. Netscape was building a “killer application,” one that would make other browsers irrelevant. They had code that was better, faster, and easier to use than anything out there.

Siino knew she couldn’t sell good code. But a young and hard working group of fresh-out-of-college transplants from rural America making a run at entrenched Silicon Valley; that was something she could sell. “We had this twenty-two-year-old kid who was pretty damn interesting and I thought, ‘There’s a story right there,'” she later said in an interview for the book Architects of the Web, “‘And we had this crew of kids who had come out from Illinois and I thought, ‘There’s a story there too.'”

Inside of Netscape, some executives and members of the board had been talking about an IPO. With Microsoft hot on their heels, and competitor Spyglass launching a successful IPO of their own, timing was critical. “Before very long, Microsoft was sure to attack the Web browser market in a more serious manner,” writer John Cassidy explains, “If Netscape was going to issue stock, it made sense to do so while the competition was sparse.” Not to mention, a big, flashy IPO was just what the company needed to make headlines all around the country.

In the months leading up to the IPO, Siino crafted a calculated image of Andreessen for the press. She positioned him as a leader of the software generation, an answer to the now-stodgy, silicon-driven hardware generation of the ’60s and ’70s. In interviews and profiles, Siino made sure Andreessen came off as a whip-smart visionary ready to tear down the old ways of doing things; the “new Bill Gates.”

That required a fair bit of cooperation from Andreessen. “My other real challenge was to build up Marc as a persona,” she would later say. Sometimes, Andreessen would complain about the interviews, “but I’d be like, ‘Look, we really need to do this.’ And he’s savvy in that way. He caught on.” Soon, it was almost natural, and as Andreessen traveled around with CEO James Barksdale to talk to potential investors ahead of their IPO, Netscape hype continued to inflate.

August 9, 1995, was the day of the Netscape IPO. Employees buzzed around the Mountain View offices, too nervous to watch the financial news beaming from their screens or the TV. “It was like saying don’t notice the pink elephant dancing in your living room,” Siino said later. They shouldn’t have worried. In its first day of trading, the Netscape stock price rose 108%. It was the best opening day for a stock on Wall Street. Some of the founding employees went to bed that night millionaires.

Not long after, Netscape released version 2 of their browser. It was their most ambitious release to date. Bundled in the software were tools for checking email, talking with friends, and writing documents. It was sleek and fast. The Netscape homepage that booted up each time the software started sported all sorts of nifty and well-known web adventures.

Not to mention Java. Netscape 2 was the first version to ship with Java applets, small applications run directly in the browser. With Java, Netscape aimed to compete directly with Microsoft and their operating system.

To accompany the release, Netscape recruited young programmer Brendan Eich to work on a scripting language that riffed on Java. The result was JavaScript. Eich created the first version in 10 days as a way for developers to make pages more interactive and dynamic. It was primitive, but easy to grasp, and powerful. Since then, it has become one of the most popular programming languages in the world.

Microsoft wasn’t far behind. But Netscape felt confident. They had pulled off the most ambitious product the web had ever seen. “In a fight between a bear and an alligator, what determines the victor is the terrain,” Andreessen said in an interview from the early days of Netscape. “What Microsoft just did was move into our terrain.”


There’s an old adage at Microsoft: it never gets something right until version 3.0. It was true even of their flagship product, Windows, and has notoriously been true of its most famous applications.

The first version of Internet Explorer was a rushed port of the Mosaic code that acted as little more than a public statement that Microsoft was going into the browser business. The second version, released just after Netscape’s IPO in late 1995, saw rapid iteration, but lagged far behind. With Internet Explorer 3, Microsoft began to get the browser right.

Microsoft’s big, showy press conference hyped Internet Explorer as a true market challenger. Behind the scenes, it operated more like a skunkworks experiment. Six people were on the original product team, in a company of tens of thousands. “A bit like the original Mac team, the IE team felt like the vanguard of Microsoft,” one-time Internet Explorer lead Brad Silverberg would later say, “the vanguard of the industry, fighting for its life.”

That changed quickly. Once Microsoft recognized the potential of the web, they shifted their weight to it. In Speeding the Net, a comprehensive account of the rise of Netscape and its fall at the hands of Microsoft, authors Josh Quittner and Michelle Slatalla describe the Microsoft strategy. “In a way, the quality of it didn’t really matter. If the first generation flopped, Gates could assign a team of his best and brightest programmers to write an improved model. If that one failed too, he could hire even better programmers and try again. And again. And again. He had nearly unlimited resources.”

By version 3, the Internet Explorer team had a hundred people on it (including Chris Wilson of the original NCSA Mosaic team). That number would reach the thousands in a few short years. The software rapidly closed the gap. Internet Explorer matched the features that had given Netscape an edge—and even introduced its own HTML extensions, dynamic animation tools for developers, and rudimentary support of CSS.

In the summer of 1996, Walt Mossberg talked up Microsoft’s browser. Only months prior he had labeled Netscape Navigator the “clear victor.” But he was beginning to change his mind. “I give the edge, however, to Internet Explorer 3.0,” he wrote upon the release of Microsoft’s version 3. “It’s a better browser than Navigator 3.0 because it is easier to use and has a cleaner, more flexible user interface.”

Microsoft Internet Explorer 3.0.01152
Netscape Navigator 3.04

Still, most Microsoft executives knew that competing on features would never be enough. In December of 1996, senior VP James Allchin emailed his boss, Paul Maritz. He laid out the current strategy, an endless chase after Netscape’s feature set. “I don’t understand how IE is going to win,” Allchin conceded, “My conclusion is that we must leverage Windows more.” In the same email, he added, “We should think first about an integrated solution — that is our strength.” Microsoft was not about to simply lie down and allow themselves to be beaten. They focused on two things: integration with Windows and wider distribution.

When it was released, Internet Explorer 4 was more tightly integrated with the operating system than any previous version; an almost inseparable part of the Windows package. It could be used to browse files and folders. Its “push” technology let you stream the web, even when you weren’t actively using the software. It used internal APIs that were unavailable to outside developers to make the browser faster, smoother, and readily available.

And then there was distribution. Days after Netscape and AOL shook on a deal to include their browser on the AOL platform, AOL abruptly changed their mind and went with Internet Explorer instead. It would later be revealed that Microsoft had made them, as one writer put it (extending The Godfather metaphor once more), an “offer they couldn’t refuse.” Microsoft had dropped their prices down to the floor and—more importantly—promised AOL precious real estate pre-loaded on the desktop of every copy of the next Windows release.

Microsoft fired their second salvo with Compaq. Up to that point, all Compaq computers had shipped with Netscape pre-installed on Windows. When Microsoft threatened to suspend Compaq’s license to use Windows at all (a threat revealed later in court documents), that changed to Internet Explorer too.

By the time Windows 98 was released, Internet Explorer 4 came already installed, free for every user, and impossible to remove.


“Mozilla!” interjected Jamie Zawinski. He was in a meeting at the time, which now rang in deafening silence for just a moment. Heads turned. Then, they kept going.

This was early days at Netscape. A few employees from engineering and marketing huddled together to try to come up with a name for the thing. One employee suggested they were going to crush Mosaic, like a bug. Zawinski—with a dry, biting humor he was well known for—thought Mozilla, “as in Mosaic meets Godzilla.”

Eventually, marketer Greg Sands settled on Netscape. But around the office, the browser was, from then on, nicknamed Mozilla. Early marketing materials on the web even featured a Mozilla-inspired mascot, a green lizard with a know-it-all smirk, before they shelved it for something more professional.

Credit: Dave Titus


It would be years before the name would come back in any public way; and Zawinski would have a hand in that too.

Zawinski had been with Netscape since almost the beginning. He was employee number 20, brought in right after Rosanne Siino, to take on the work Andreessen had done at NCSA: building the flagship version of Netscape for X-Windows. By the time he joined, he already had something of a reputation for solving complex technical challenges.

Jamie Zawinski

Zawinski’s earliest memories of programming date back to eighth grade. In high school, he was a terrible student. But he still managed to get a job after school as a programmer, working on the one thing that managed to keep him interested: code. After that, he started work for the startup Lucid, Inc., which boasted a strong pedigree of programming legends at its helm. Zawinski worked on the Common Lisp programming language and the popular IDE Emacs; technologies revered in the still small programming community. By virtue of his work on the projects, Zawinski had instant credibility among the tech elite.

At Netscape, the engineering team was central to the way things worked. It was why Siino had chosen to meet with members of that team as soon as she began, and why she crafted the story of Netscape around the way they operated. The result was a high-pressure, high-intensity atmosphere so central to the company that it would become part of its mythology. They moved so quickly that many began to call such a rapid pace of development “Netscape Time.”

“It was really a great environment. I really enjoyed it,” Zawinski would later recall. “Because everyone was so sure they were right, we fought constantly but it allowed us to communicate fast.” But tempers did flare (one article details a time when he threw a chair against the wall and left abruptly for two weeks after his computer crashed), and many engineers would later reflect on the toxic workplace. Zawinski once put it simply: “It wasn’t healthy.”

Still, engineers had a lot of sway at the organization. Many of them, Zawinski included, were advocates of free software. “I guess you can say I’ve been doing free software since I’ve been doing software,” he would later say in an interview. For Zawinski, software was meant to be free. From his earliest days on the Netscape project, he advocated for a more free version of the browser. He and others on the engineering team were at least partly responsible for the creative licensing that went into the company’s “free, but not free” business model.

In 1997, technical manager Frank Hecker breathed new life into the free software paradigm. He wrote a 30-page whitepaper proposing what several engineers had wanted for years—to release the entire source of the browser for free. “The key point I tried to make in the document,” Hecker asserted, “was that in order to compete effectively Netscape needed more people and companies working with Netscape and invested in Netscape’s success.”

With the help of CTO Eric Hahn, Hecker and Zawinski made their case all the way to the top. By the time they got in the room with James Barksdale, most of the company had already come around to the idea. Much to everyone’s surprise, Barksdale agreed.

On January 23, 1998, Netscape made two announcements. The first everyone expected. Netscape had been struggling to compete with Microsoft for nearly a year. The most recent release of Internet Explorer version 4, bundled directly into the Windows operating system for free, was capturing ever larger portions of their market share. So Netscape announced it would be giving its browser away for free too.

The next announcement came as a shock. Netscape was going open source. The browser’s entire source code—millions of lines of code—would be released to the public and open to contributions from anybody in the world. Led by Netscape veterans like Michael Toy, Tara Hernandez, Scott Collins, and Jamie Zawinski, the team would have three months to excise the code base and get it ready for public distribution. The effort had a name too: Mozilla.

Firefox 1.0 (Credit: Web Design Museum)

On the surface, Netscape looked calm and poised to take on Microsoft with the force of the open source community at their back. Inside the company, things looked much different. The three months that followed were filled with frenetic energy, close calls, and unparalleled pace. Recapturing the spirit of the earliest days of innovation at Netscape, engineers worked frantically to patch bugs and get the code ready to be released to the world. In the end, they did it, but only by the skin of their teeth.

In the process, the project spun out into an independent organization under the domain Mozilla.org. It was staffed entirely by Netscape engineers, but Mozilla was not technically a part of Netscape. When Mozilla held a launch party in April of 1998, just months after their public announcement, it didn’t just have Netscape members in attendance.

Zawinski had organized the party, and he insisted that a now growing community of people outside the company who had contributed to the project be a part of it. “We’re giving away the code. We’re sharing responsibility for development of our flagship product with the whole net, so we should invite them to the party as well,” he said, adding, “It’s a new world.”


On the day of his testimony in November of 1998, Steve McGeady sat, as one writer described, “motionless in the witness box.” He had been waiting for this moment for a long time; the moment when he could finally reveal, in his view, the nefarious and monopolist strain that coursed through Microsoft.

The Department of Justice had several key witnesses in their antitrust case against Microsoft, but McGeady was a linchpin. As Vice President at Intel, McGeady had regular dealings with Microsoft; and his company stood outside of the Netscape and Microsoft conflict. There was an extra layer of tension to his particular testimony though. “The drama was heightened immeasurably by one stark reality,” one journalist noted in an accounting of the trial, “nobody—literally, nobody—knew what McGeady was going to say.”

When he got his chance to speak, McGeady testified that high-ranking Microsoft executives had told him that their goal was to “cut off Netscape’s air supply.” Using their monopoly position in the operating system market, Microsoft threatened computer manufacturers—many of which Intel dealt with regularly—to ship their computers with Internet Explorer or face having their Windows licenses revoked entirely.

Drawing on the language Bill Gates used in his announcement of Internet Explorer, McGeady claimed that one executive had laid out their strategy: “embrace, extend and extinguish.” According to his allegations, Microsoft never intended to enter into a competition with Netscape. They were ready to use every aggressive tactic and walk the line of legality to crush them. It was a major turning point for the case and a massive win for the DOJ.

The case against Microsoft, however, had begun years earlier, when Netscape retained a team from the antitrust law firm Sonsini Goodrich & Rosati in the summer of 1995. The legal team included outspoken anti-Microsoft crusader Gary Reback, as well as Susan Creighton. Reback would be the most public member of the firm in the coming half-decade, but it would be Creighton’s contributions that would ultimately turn the attention of the DOJ. Creighton began her career as a clerk for Supreme Court Justice Sandra Day O’Connor. She quickly developed a reputation for precision and thoroughness. Her deliberate and methodical approach made her a perfect fit for a full and complete breakdown of Microsoft’s anti-competitive strategy.

Susan Creighton (Credit: Wilson Sonsini Goodrich & Rosati)

Creighton’s work with Netscape led her to write a two-hundred-and-twenty-two-page document detailing the anti-competitive practices of Microsoft. She laid out her case plainly and simply. “It is about a monopolist (Microsoft) that has maintained its monopoly (desktop operating systems) for more than ten years. That monopoly is threatened by the introduction of a new technology (Web software)…”

The document was originally planned as a book, but Netscape feared that if the public knew just how much danger they were in from Microsoft, their stock price would plummet. Instead, Creighton and Netscape handed it off to the Department of Justice.

Inside the DOJ, it would trigger a renewed interest in ongoing antitrust investigations of Microsoft. Years of subpoenaing, information gathering, and lengthy depositions would follow. After almost three years, in May of 1998, the Department of Justice and 20 state attorneys general filed an antitrust suit against Microsoft, a company which had only just then crossed over a 50% share of the browser market.

“No firm should be permitted to use its monopoly power to develop a chokehold on the browser software needed to access the Internet,” announced Janet Reno—the prosecuting attorney general under President Clinton—when charges were brought against Microsoft.

At the center of the trial was not necessarily the stranglehold Microsoft had on the software of personal computers—not technically an illegal practice. It was the way they used their monopoly to directly counter competition in other markets. For instance, the practice of threatening to revoke the licenses of manufacturers that packaged computers with Netscape. Netscape’s account of the June 1995 meeting factored in as well (when Andreessen was asked why he had taken such detailed notes on the meeting, he replied “I thought that it might be a topic of discussion at some point with the US government on antitrust issues.”)

Throughout the trial, both publicly and privately, Microsoft reacted to scrutiny poorly. They insisted that they were right; that they were doing what was best for the customers. In interviews and depositions, Bill Gates would often come off as curt and dismissive, unable or unwilling to yield to any cession of power. The company insisted that the browser and operating system were co-existent, one could not live without the other—a fact handily refuted by the judge when he noted that he had managed to uninstall Internet Explorer from Windows in “less than 90 seconds.” The trial became a national sensation as tech enthusiasts and news junkies waited with bated breath for each new revelation.

Microsoft President Bill Gates, left, testifies on Capitol Hill on Tuesday, March 3, 1998. (Credit: Ken Cedeno/AP file photo)

In November of 1999, the presiding judge issued his ruling. Microsoft had, in fact, used its monopoly power and violated antitrust laws. That was followed in the summer of 2000 by a proposed remedy: Microsoft was to be broken up into two separate companies, one to handle its operating software, and the other its applications. “When Microsoft has to compete by innovating rather than reaching for its crutch of the monopoly, it will innovate more; it will have to innovate more. And the others will be free to innovate,” Iowa State Attorney General Tom Miller said after the judge’s ruling was announced.

That never happened. An appeal in 2002 resulted in a reversal of the ruling and the Department of Justice agreed to a lighter consent decree. By then, Internet Explorer’s market share stood at around 90%. The browser wars were, effectively, over.


“Are you looking for an alternative to Netscape and Microsoft Explorer? Do you like the idea of having an MDI user interface and being able to browse in multiple windows?… Is your browser slow? Try Opera.”

That short message announced Opera to the world for the first time in April of 1995, posted by the browser’s creators to a Usenet forum about Windows. The tone of the message—technically meticulous, a little pointed, yet genuinely idealistic—reflected the philosophy of Opera’s creators, Jon Stephenson von Tetzchner and Geir Ivarsøy. Opera, they claimed, was well-aligned with the ideology of the web.

Opera began as a project run out of the Norwegian telecommunications firm Telenor. Once it became stable, von Tetzchner and Ivarsøy rented space at Telenor to spin it out into an independent company. Not long after, they posted that announcement and released the first version of the Opera web browser.

The team at Opera was small, but focused and effective, loyal to the open web. “Browsers are in our blood,” Tetzchner would later say. Time and time again, the Opera team would prove that. They were staffed by the web’s true believers, and have often prided themselves on leading the development of web standards and an accessible web.

In the mid-to-late 90’s, Geir Ivarsøy was the first person to implement the CSS standard in any browser, in Opera 3.5. That would prove more than enough to convince the creator of CSS, Håkon Wium Lie, to join the company as CTO. Ian Hickson worked at Opera during the time he developed the CSS Acid Test at the W3C.

The original CSS Acid Test (Credit: Eric Meyer)

The company began developing a version of their browser for low-powered mobile devices in developing nations as early as 1998. They have often tried to push the entire web community towards web standards, leading when possible by example.

Years after the antitrust lawsuit of Microsoft, and resulting reversal in the appeal, Opera would find themselves embroiled in a conflict on a different front of the browser wars.

In 2007, Opera filed a complaint with the European Commission. Much like the case made by Creighton and Netscape, Opera alleged that Microsoft was abusing its monopoly position by bundling new versions of Internet Explorer with Windows 7. The EU had begun to look into allegations against Microsoft almost as soon as the Department of Justice, but the Opera complaint added a substantial and recent area of inquiry. Opera claimed that Microsoft was limiting user choice by obscuring alternative browser options. “You could add more browsers, to give consumers a real choice between browsers, you put them in front of their eyeballs,” Lie said at the time of the complaint.

In Opera’s summary of their complaint, they painted themselves as champions of a free and open web. Opera, they argued, were advocates of the web as it was intended—accessible, universal, and egalitarian. Once again citing the language of “embrace, extend, and extinguish,” the company also called out Microsoft for trying to take control over the web standards process. “The complaint calls on Microsoft to adhere to its own public pronouncements to support these standards, instead of stifling them with its notorious ‘Embrace, Extend and Extinguish’ strategy,” it read.

The browser “ballot box” (Credit: Ars Technica)

In 2010, the European Commission issued a ruling, forcing Microsoft to show a so-called “ballot box” to European users of Windows—a website users could see the first time they accessed the Internet that listed twelve alternative browsers to download, including Opera and Mozilla. Microsoft included this website in their European Windows installs for five years, until their obligation lapsed.


Netscape Navigator 5 never shipped. It echoes, unreleased, in the halls of software’s most public and recognized vaporware.

After Netscape open-sourced their browser as part of the Mozilla project, the focus of the company split. Between being acquired by AOL and continuing pressure from Microsoft, Netscape was on its last legs. The public trial of Microsoft brought some respite, but too little, too late. “It’s one of the great ironies here,” Netscape lawyer Gary Reback would later say, “after years of effort to get the government to do something, by [1998] Netscape’s body is already in the morgue.” Meanwhile, management inside of Netscape couldn’t decide how best to integrate with the Mozilla team. Rather than work alongside the open-source project, they continued to maintain a version of Netscape separate and apart from the public project.

In October of 1998, Brendan Eich—who was part of the core Mozilla team—published a post to the Mozilla blog. “It’s time to stop banging our heads on the old layout and FE codebase,” he wrote. “We’ve pulled more useful miles out of those vehicles than anyone rightly expected. We now have a great new layout engine that can view hundreds of top websites.”

Many Mozilla contributors agreed with the sentiment, but the rewrite Eich proposed would spell the project’s initial downfall. While Mozilla tinkered away on a new rendering engine for the browser—which would soon be known as Gecko—Netscape scrapped its planned version 5.

Progress ground to a halt. Zawinski, one of the Mozilla team members opposed to the rewrite, would later describe his frustration when he resigned from Netscape in 1999. “It constituted an almost-total rewrite of the browser, throwing us back six to 10 months. Now we had to rewrite the entire user interface from scratch before anyone could even browse the Web, or add a bookmark.” Scott Collins, one of the original Netscape programmers, would put it less diplomatically: “You can’t put 50 pounds of crap in a ten pound bag, it took two years. And we didn’t get out a 5.0, and that cost us everything, it was the biggest mistake ever.”

The result was a world-class browser with great standards support and a fast-running browser engine. But it wasn’t ready until April of 2000, when Netscape 6 was finally released. By then, Microsoft had eclipsed Netscape, owning 80% of the browser market. It would never be enough to take back a significant portion of that browser share.

“I really think the browser wars are over,” said one IT exec after the release of Netscape 6. He was right. Netscape would sputter out for years. As for Mozilla, that would soon be reborn as something else entirely.


The post Chapter 10: Browser Wars appeared first on CSS-Tricks.

#9 – Tara King on Encouraging Developers Towards a Gutenberg Future

About this episode.

On the podcast today we have Tara King.

Tara has recently begun working for Automattic in the developer relations role. Tara will lead a newly formed team who will get out and about; trying to understand the pain points which people are having with the new Block Editor and Full Site Editing. They will then report their findings back to the developer and contributor teams, and hopefully establish a feedback loop to make the editor better.

They are also creating blogs, podcasts, courses and many other types of content to help people get up to speed with the Block Editor.

It’s no secret that whilst there are many people who love the Block Editor, there are many who remain unconvinced. Unconvinced might not be a strong enough word, but you get the idea. I wanted to hear about the purpose of this new team and how it’s going to be working. Will it have a real impact upon the future of the Block Editor? What will they be offering? How can they be reached? Who is deciding what’s included and what’s left out? What motivations are behind all these decisions?

We also get into a chat about the fact that WordPress is changing; moving away from a legacy of easy-to-understand PHP code and moving towards a JavaScript and React based future. Is the pain of learning these new skills going to be worth it, and is there going to be any support to help people get there?

It’s a wide-ranging discussion at an important moment in WordPress’ history.

Time will tell if Tara’s team can win the hearts and minds of unconvinced developers.

Have a listen to the podcast and leave a comment below.

Tara’s email address: tara.king [at] automattic [dot] com

Automattic: Developer Relations Job Description

Transcript
Nathan Wrigley

Welcome to the ninth edition of the Jukebox podcast from WP Tavern. My name is Nathan Wrigley. Jukebox is a podcast which is dedicated to all things WordPress, the people, events, plugins, themes, blocks, and in this case developers and Gutenberg. Each month we’re bringing you someone from that community to discuss a topic of current interest.

If you liked the podcast, please share it with your friends. You might also like to think about subscribing so that you’ll get all of the episodes in your podcast player automatically, and you can do that by searching for WP Tavern in your podcast player, or by going to WP Tavern dot com forward slash feed forward slash podcast.

You can also play the podcast episodes on the WP Tavern website if you prefer that. If you have any thoughts about the podcast, perhaps a suggestion of a guest or an interesting subject, then please head over to WP Tavern dot com forward slash contact forward slash jukebox, and use the contact form there. We would certainly welcome your input.

Okay, so on the podcast today we have Tara King. Tara has recently begun working for Automattic, in developer relations, and it’s an important role within the WordPress community. Tara will be leading a newly formed team who will be getting out and about, trying to understand the pain points which people are having with the new block editor and with full site editing. They will then report this back to the developer and contributor teams and hopefully establish a feedback loop to make the editor better. They are also creating blogs, podcasts, courses, and all sorts of other content to help people get up to speed, and perhaps begin using, or better understanding, the block editor.

It’s no secret that whilst there are many people who love the block editor, there are many who remain unconvinced. Unconvinced might not be a strong enough word, but you get the idea.

I wanted to hear about the purpose of this new team and how it’s going to be working. Will it have a real impact upon the future of the block editor? What will they be offering? How can they be reached and who is making the decisions about what’s included and what’s left out? And what motivations are behind all of these decisions?

We also get into a chat about the fact that WordPress is changing. It’s moving away from a legacy of easy to understand PHP code and moving towards a JavaScript and React based future. Is the pain of learning these new skills going to be worth it? And is there going to be any support to help people get there?

It’s a wide ranging discussion at an important moment in WordPress’s history. Time will tell if Tara’s team are able to win the hearts and minds of unconvinced developers.

If any of the points raised in this podcast resonate with you, be sure to head over and find the post at WP Tavern dot com forward slash podcast, and leave a comment there. And so without further delay, I bring you Tara King.

I am joined today on the podcast by Tara King. Hello, Tara.

Tara King

Hello. Thanks for having me.

Nathan Wrigley

You are very welcome. It is an absolute pleasure. We’ve spent the last couple of minutes just getting to know one another. We haven’t ever spoken before, so this’ll be a really interesting chat. We’ve got a lot of ground to cover. And first of all, I’d like to congratulate you on your brand new, shiny new job over at Automattic. I wonder if you might spend the first couple of minutes telling us what your new job is and what your title is and what you do.

Tara King

Yeah. So I’ll give you the short version first, which is that my job is to lead a team that is basically going out into the community to hear where people are struggling with Gutenberg, struggling with full site editing. Bring that context back into the development teams and the contributor teams that are building the product and then make it better. And in addition to that feedback cycle part of things, we’re also creating content, courses, blogs, podcasts, all kinds of things to help people get up to speed with where Gutenberg is right now, where it’s going to go next and how to make the leap over from the Classic Editor.

Nathan Wrigley

That’s really interesting. I did actually read the job description that was posted on the website. I don’t know obviously what the final job description entails, but I was really fascinated to see that it was very much bolted on to the Gutenberg project as opposed to something a bit wider. And that is fascinating. It does feel at this moment in time, we’re recording it towards the latter end of 2021. And it does feel at this time that there’s quite a lot of disagreement, shall we say, about how the WordPress project is being taken forward. And a lot of that disagreement is centering around Gutenberg, and it seems that a few new roles, not just your role, but some others, have been created particularly to handle the way that the community interacts and the way that they feel and the way that they’re receiving knowledge about it. Have I got that right? Are Automattic putting jobs out there for people to do exactly that?

Tara King

Yeah, I think my team especially it came out of the 5.0 retrospective. So when Gutenberg came out and it was pushed out into the community, I think anybody who was around at that time of the WordPress community was aware of the pushback and the unhappiness in the community, around some of the things that happened.

And looking at that, I was not part of the project in a serious way, at that point, I was actually doing support, so I was hearing all of the people who are unhappy tell us how can I change away from Gutenberg? How can I fix this? So that was my role in it at the time was just living through person by person with the impact of it.

What I’ve heard is basically they looked at the 5.0 release and said we need to communicate better, first of all right, but it’s not just pushing information out, it’s also, we need to listen better. We need to be aware of what people are feeling earlier so that when we’re trying to make this work, it’s not only perceived as, but I think experienced as a one-way street kind of thing, because I don’t think that’s ever been the spirit of the WordPress project, and I don’t think Gutenberg actually was meant to change that feeling, if that makes sense. But it did so for some people it really did. And so my team especially is really about listening and trying to engage more people, bring more people into the room to be part of those discussions, to be part of those decisions, because I don’t think anybody wants Gutenberg to succeed just for Gutenberg’s sake.

I think it’s a really good tool. And so we’re trying to make sure that everyone can be involved. So that’s my team in particular, but I think in general, there is a sense that Gutenberg is still struggling to be understood. It’s a really big change for the community on a technical level. And so we just need to be putting more energy and more attention to helping people bridge the gap between where they are now and where they need to be for Gutenberg.

Nathan Wrigley

Just dwelling on the team for a moment, you may be allowed, you may not be allowed, I don’t know, to describe how big that team is and the specifics of how it’s going to implement that. I’m just wondering if you can give us some insight, because it would be interesting, certainly from my part, to know how many people are on the ground now, doing that kind of work specifically in your team.

Tara King

Yeah, there are four people aside from myself, so five people in total. We have people doing specific programs. Anne McCarthy has been doing amazing work around the full site editing outreach program. That’s been part of this team; before I started, they were doing that work. And then we have other folks doing courses and meetup presentations. Daisy Olsen has been doing those, also for a while. We have two new teammates. Birgit from Gutenberg Times, which is a very amazing connection to have, is going to keep doing that. We’re basically supporting Birgit to do more and more Gutenberg Times, as much as she’s willing to do. And then we have Ryan Welcher, a new hire from 10up, who is helping on the sort of more technical side.

So we have four people which means each person is responsible for, I think, 10 and a half percent of the internet. So it’s quite a big job, I would say.

Nathan Wrigley

That’s a fascinating way of actually thinking about it. Forgive me, I’m going to quote from the Automattic job description. This is the thing that you applied for. And again, please forgive me if this has now morphed in some way, but it basically says, “We’re looking for someone to join our Automattic team dedicated to aiding the WordPress open source systems effort, specifically around developer relations. Your focus will be communicating with community developers about WordPress, Gutenberg and the surrounding ecosystem to build a positive and sustainable relationship with WordPress developers and reduce barriers to Gutenberg adoption”. And then there’s a bullet point list of what the ideal candidate will have, which presumably you met admirably. Congratulations. The thing that jumps out for me there is that the word developer is used multiple times. Is that where your efforts are going to lie? You’re reaching out to developers, as opposed, say, to end users, or perhaps people that are new in the community who are unfamiliar with how Gutenberg and WordPress work.

Tara King

Yeah. So we are one month in, so we’re still working out the details, but very much focused on developers. I’ll say for myself, I am actually from the Drupal project. I’ve been in WordPress for a long time, but I have a much deeper kind of contributor history in Drupal, actually. And the Drupal project is always developers first. It’s not official, but it’s very focused on the developer experience. And coming to WordPress, I was always looking around… who’s talking about developers in WordPress? Where are they meeting? Where are they talking? So it’s a very natural thing for me to focus on developers. But I do think it’s a little bit new in the WordPress project. It’s certainly not developer first. I think the user, maybe even the visitor, is always going to be first; the user of WordPress is always going to be the primary audience. But I think Gutenberg is really a product, a tool for the user, and in order to get it out there, I think developers really need to adopt it. Especially anybody who’s extending WordPress, we need them to understand how to make Gutenberg work with that. Because I am blown away every time I use Gutenberg, and I know it’s my job to say that, but it’s actually also true. It’s part of why I took the job. I think it’s such a fantastic tool when you’re giving somebody a site and they’re going to be managing it. Without any code, they can do really advanced things in terms of layout and display.

We need all the developers in the community to get on board and make it available via their various extensions. So we really are focused on developers, and that goes everywhere. There’s this model of the WordPress community, I don’t know if you’re familiar: there’s the leadership, there’s the contributors, there’s the extenders, then users and visitors. I’m kind of sticking with that model. There’s developers all the way down to the user level, people who are not writing a lot of new code necessarily, but maybe a little bit here and there. So we’re talking to those folks. We’re talking definitely to the extender group, so people who are writing plugins and themes, people who are running hosting companies or agencies, large universities, anybody kind of working with WordPress at a larger scale. And then of course the contributors, who are literally developing the project. So it doesn’t sound very focused when I say it like that, because that’s a lot of people, but it’s everybody who’s writing code to support WordPress, whether that’s for one site or for all of WordPress.

Nathan Wrigley

I am really interested by the fact that this role and this team now exist. As far as I’m aware, it’s the first time that your role has existed. That’s right, isn’t it? You are the first person to hold it.

Tara King

That is correct.

Nathan Wrigley

Okay. So that speaks to me that the team over at Automattic, as you said, they’ve listened and they’ve realized that this team needs to exist. WordPress is growing and fortunately Automattic have the capability to put this together. I suppose later on in the podcast, we’ll get into the problems that people are experiencing and some of the things that presumably you’re going to be addressing, but I am also keen to understand how people will be interacting with you.

So in the future, how are they going to be getting their concerns in front of you and your team? Is it all about outreach from you, or is it doors open, you can email me? How are people going to make contact with you and your team and express what it is that they need to express?

Tara King

Yeah, that’s a great question. Going back to the 10.5% of the internet per person. It’s a really hard problem to solve. We can’t be everywhere at once. As much as I would like to have someone who’s on every WordPress related Stack Overflow or Stack Exchange, Reddit, Facebook, Twitter, I could keep going with where we might go to listen to the community.

We’re still working that out in terms of the details. We’re in Make Slack all the time, but I know Make Slack isn’t always terribly welcoming. It’s welcoming, nobody’s mean, but it’s a bit confusing, I think, when you’re new to the community or new to that space. There’s always meetings happening. I know I personally have had the experience of not knowing when I’m supposed to talk, or if I’m interrupting a meeting.

You know, there are places that we definitely are. I think my email is going to, I assume it’ll be in the podcast notes, it’s tara dot king at automattic dot com. I am happy to hear from folks. I may regret saying that, I don’t know how many emails I’ll get. But I think for me right now, especially because I am transitioning from having one foot in both Drupal and WordPress into being more WordPress focused, I’m really looking to meet people and genuinely hear what folks are struggling with, because it’s wide. My mandate is Gutenberg focused, but that’s not the only thing that causes problems. You might also be struggling with some particular part of Core that isn’t Gutenberg. It’s a very tangled knot when you’re having issues with WordPress. So I like to hear about it, because maybe right now the radar is focused on Gutenberg, but we’re not going to be focused on Gutenberg forever. We can expand out from that narrow focus.

So long story long, I wish I had a super simple answer other than to email me personally. We are listening as much as we can, going to events, camps, and meetups and things. We are listening on Twitter. We’re listening on Post Status. We are trying to be in all the major places, but feel free to reach out to me or anyone else on the team that you feel comfortable with.

There’s a lot of people out there for a very small team, but we are trying to listen. One other thing I’ll say before I finish on this topic is there are very specific calls for testing that we’re doing, if you want to be involved in the full site editing development before it happens, right? A lot of people have a very reactive approach, which is, it comes out and they’re unhappy, but actually there are pathways to be involved sooner. And this is one of the easiest ones. You can go to the Make Slack; there’s a channel called fse-outreach. And if you join that channel, you will be presented with calls for testing that are, in my opinion, and I hope other people find this to be true as well, fairly clearly outlined. You know, step-by-step how to do the test in question, and then where to give your feedback. This is helping with everything from how a navigation block works to how widgets work. There’s been some open-ended ones around what themers, theme builders, need. So that’s one way to give a very specific kind of feedback, right? It’s not general, but it’s a very effective way to give specific feedback.

Nathan Wrigley

I sometimes feel that despite the fact that those channels are publicly available and anybody can hop in, there’s room for improvement in things like the Make Slack and GitHub and so on. And I don’t mean throw the baby out with the bath water, but they can be quite intimidating. It is difficult to backtrack and figure out where the conversation that’s currently going on began. The interface for Slack is excellent if you’re part of a team and your daily grind is to be in a particular Slack channel, and you’re constantly checking in and you see where the conversation has flowed from and where the conversation is right now. But I feel it’s difficult for people who are just hopping in to make almost any sense of those conversations at all. And so of course, the easy thing to do is to open the door a tiny bit, stare through the crack, and then just run away in fear and continue to feel annoyed.

Tara King

Yeah, I totally agree. I think, here I am barging in, kind of new to the community, with lots of opinions. I’ve been feeling very much like the Make Slack and the Make blogs are more welcoming to people who are contributing, because they’re in it every day. It’s easy for them to understand what’s happening. Whereas I don’t actually feel like we have a great location for developers at large. We have documentation. We have the GitHub for Gutenberg, but again, they’re very contributor focused. There are people who just need to know how to build a plugin, how to build a block pattern, on WordPress in general. And I don’t feel like we have a great place for those discussions to happen right now. I don’t know what we’re going to land on, but that’s one thing that this team is working on, trying to figure out what would be the right way to consolidate conversations for that community. Because right now it does feel like if you’re a developer who has a WordPress problem, you shout into the void and you hope somebody hears. That might happen on Twitter. It might happen at a WordCamp. There are ways to be heard, but they’re hard to find. I think we need much better pathways to have those conversations.

Nathan Wrigley

I am not committing you to any particular platform or any particular piece of software, but it is nice to hear that you have that on the radar, that you’re thinking about that, because I think that’s really important. Many of us are used to different platforms, probably more social in nature, that seem to work, in inverted commas, “better”. But that’s fascinating, thank you.

Okay. Let’s get stuck into the side of Gutenberg where people are concerned, feeling disgruntled. Now I do want to definitively spell out at this point that you are not responsible for the way that Gutenberg is right now. I really want to make that very clear. So anybody listening to the podcast, it is not your fault. But people have concerns.

I think right now we seem to be seeing more concern than ever before. I’ve been using WordPress for about, I’m going to go for nine years, that feels more or less right. Prior to that I was using a piece of software which you just mentioned, Drupal. And I was extremely happy with Drupal. Drupal did everything that I wanted to do. It really was fabulous. In fact, if you could rewind the clock, I was telling my clients that Drupal was probably going to overtake WordPress in its use. How wrong could I have been? But there came a moment in time where that community became something that I was no longer part of. And it was because of the fact that Drupal deals with point releases, so from five to six to seven, there is a real line drawn in the sand. Drupal five doesn’t sit well with Drupal six, and six doesn’t sit well with seven, and so on. And I left at the point where there was one of these moments. It was from Drupal seven to Drupal eight, and I couldn’t cope with the fact that I was going to have to do an enormous amount of work just to keep things that I had already built up and running. Now the parallels are fairly major, I think. WordPress has done an unbelievably good job of being backwards compatible, but now we have what feels like, I’m going to call it, a Drupal moment, where we are at an inflection point. Something radical has changed in WordPress, and it really is bifurcating the path. Some users are extremely happy, giving it a go, getting involved, loving it; other people are disliking it, not wanting to be a part of it, and ultimately just stopping being part of the community and not using WordPress at all. So I hope my analogy there with Drupal sits, and you understand what I’m saying?

Tara King

Yeah, it does. Yep. I was part of the Drupal community when that happened as well. It is very interesting. For a long time, I’ve talked to people in both communities, I’ve talked to people using both pieces of software. And one of the differences, when people ask what’s the difference, is that WordPress is backwards compatible and Drupal’s not. And the main change from seven to eight was Drupal becoming object oriented. People were used to writing procedural PHP, and now they had to write object oriented code, and they weren’t used to it. And not only were they not used to it, it was just an unbelievable amount of work to update all of the extensions and make everything work. And then there’s no migration path that’s very clean between seven and eight in Drupal. Having lived through that, the Drupal project forked at that point; there’s now a separate fork of the project called Backdrop. It was a very painful time. It was honestly a very painful time for me personally. I’m sure it was painful for other people as well, but it was painful for me because I had gotten into Drupal in Drupal six and I was essentially a solo shop. I was building sites by myself, occasionally getting in a contractor, and I could make sites pretty cheaply and pretty easily for lots and lots of people on Drupal. And like you said, I just loved the software. I loved it so much. And the switch to Drupal eight felt very personal, like, we don’t care about people like you, Tara. Obviously no one said that to me, but that’s what it felt like. It felt like, I don’t have the resources to make this kind of a change for my clients. And I think ultimately it led me to stop freelancing and start working for agencies, because they did have the resources. So it actually did change my career trajectory. So it’s very serious for people.
These kinds of changes in a software project seem kind of technical or niche, but they’re not. It’s people’s livelihoods and it’s people’s entire way of being in the world. You’re changing how someone is working, you’re changing what kind of work they’re able to do.

So I think it’s a really relevant parallel to draw to Gutenberg, because I think a lot of people are feeling that same way now, and it’s no surprise that they’re going to have very strong reactions when their livelihood is threatened. I don’t blame a single person for having that struggle. The reason I took the job… when I was talking about the job and interviewing and things like that, it definitely felt like we were starting a little bit behind, because the community is already upset. It would have been nice if we could have started before Gutenberg came out and built those relationships earlier, but hindsight’s 20/20.

And I thought to myself, there are so many people doing so many cool things with WordPress right now. I think Gutenberg is a really powerful tool. And if we can help people make that bridge, not have to build the bridge to becoming a Gutenberg developer themselves, but have one provided. If we can help people feel heard and welcomed and important again. Cause I think that’s why we come to these communities, is so we feel that way, we feel like we’re important and we have somewhere to matter. So anyway, for me, long story short, it’s very emotional, and I really want to honor and respect people and meet them where they’re at, because I’ve been there in the Drupal project.

Nathan Wrigley

A couple of quotes. I should say that I reached out to a few of my friends. I am going to name no names. They didn’t ask me not to name names, but I won’t. Just a few little things, just to give you an indication of where people are at. So this is from somebody who creates WordPress websites for a living. I don’t think they would describe themselves as a developer, but they say, “Push and you get push back. If Gutenberg had been developed as an add-on plugin, for example, which was optional, where folks could opt in, then it would have become something that they could choose. And that for me is what made WordPress so successful”.

So that was from one person, and then from another person who is involved in themes shall we say, “To every new feature or whatnot, which is added to Gutenberg, there’s a but to go with it. And those things are never addressed. All in all, that is why I’m losing passion for WordPress”.

It’s those kinds of feelings, I think. I could probably have put in some stronger ones, and certainly there were some which were less strong than that, but it gives you an indication. This is really, like I said, bifurcating the community, and it really isn’t a case of people just tutting a bit and being a little bit annoyed and then just shrugging it off and getting over it. This is genuinely people who’ve been doing things for a long time, are dedicated to WordPress, commit to WordPress, use it every day, promote it. And they’re thinking of walking away, like I did with Drupal.

Tara King

Yeah. Yeah. It’s really hard to hear quotes like that, but it’s also just so important. Honestly, I find that the WordPress community has been very patient. Gutenberg came out, I think, three years ago now. And obviously some people were not patient, some people took off. But I do feel like people have been pretty patient. Whereas in Drupal, before Drupal eight even came out, people were like, I made a fork, I’m leaving, here’s my talk at DrupalCon about how Drupal’s terrible. I really hope… I’m not here to try to save people.

Everybody has to make their own decisions about what software project is the right one for them. I think in general this is about people’s passions, whatever that might be; it’s not necessarily about WordPress. They want to be able to do what they need to do. I’m not trying to save every last person, but I do think it’s important to hear when people are having these reactions, and to really hear it, right, to let it sink in.

If my team can’t counteract some of these feelings, about the software being pushed onto people, about development ignoring the feedback that’s coming in, I think we will have failed. I am very optimistic at this time, one month in. I think we have some really good people who are really passionate and very deep in the community, who know what people need. They’re on the other side too, they’re also developers. We’re not hiring marketing people, no offense to marketing people, but that’s not what this team is. We sit inside the product team. We’re talking to the developers of the product. We’re talking to developers in the community. And like I said, there’s four of us and 42% of the web. We can’t really hear everyone, but I’m hopeful that as we listen… one person who stands up and says, I’m losing passion for WordPress because of this, represents a hundred people who didn’t, or a thousand people who didn’t, I don’t know what the numbers actually are. But if we can address these people one-to-one, with personal caring, with strong, clear feedback to the product teams that are working on WordPress, I am hopeful that we can make this feel more like a collaboration, more like you’re opting in and it’s your choice to use this cool tool, instead of, oh, I have to. So that’s the goal for the team.

Nathan Wrigley

The two things that keep coming up in the conversations that I have on this side of the fence are that it was pushed into Core without the necessary time for it to be examined and for the entire community to have their say on it.

And the other one seems to revolve around the fact that it’s now been going on for such a long time. It feels almost like a public beta that’s been running for two, nearly three years, where we are asked to use a piece of software which is still very much in development. So there are concerns around those. And I’m interested, you may know, you may not know, what the decision-making processes were in the past for how that happened. You may be able to talk about that, again, you may not. But I’m wondering if you have any thoughts on whether the decision-making process for how things are going to be implemented is going to change. Is there going to be more openness about what’s coming up? Are we able to communicate directly with the people who are making these changes? I think the feeling is it’s top down, that a few people make very big decisions, and the rest of us have to go along with that. And I think people would like to understand whether that governance model is up for debate. That’s my question, really.

Tara King

I don’t know if it’s up for debate, to be honest, in terms of the very highest levels of the project. I don’t think it is. I think that we have Matt, we have Josepha and they’re the leaders. And I think almost every major open source project has one or two people, typically one person in that position.

And I’m not in the room for those discussions. I should say, if anything is going to change there, I don’t know about it. That said, I think it’s very clear to me, I actually have not yet spoken with Matt, but I’ve spoken with Josepha, who’s on the dot org side. We work together pretty closely. And I can tell that Josepha is really listening. It’s obviously hard for someone who’s not seeing her regularly to know that. I fully understand why people think that it’s very top-down, but part of this team’s job is to go out and try to listen, and to help people understand how their feedback can come in.

That’s why I feel kind of terrible that I can’t say to you, this is exactly how you can give us your feedback, but that’s absolutely at the top of the list of priorities. Hopefully by the end of this year, we can have something clearer. There are the obvious ways: you can go in and contribute. But it’s a pretty high barrier to entry.

And I think what most people actually want is just to be able to give a little feedback. They don’t want to write new code to fix something; they want to be able to say, oh, this didn’t work for me because X. So that’s what my team is going to be doing. And maybe it’s not fair to call it a beta, I don’t think it is, but it is ongoing development in public, because that’s the open-source way. It’s very challenging. Having come from the Drupal community, where people are making these big changes all the time, it feels like, yeah, that’s what we do. But I know in WordPress that hasn’t been the case. What we are trying to do very specifically with my team is get ahead of the release. So 5.9 is coming out in December. We are working right now on documenting exactly what is and is not going in. Is there any kind of breaking change? Those are pretty rare still, but if there is anything like that, we want to get ahead of it. We want to know, is there education that needs to happen around a certain technology to make this a success?

And we’re trying to push that out. I think right now we’re going to try to push it out to things like large agencies, universities, big groups that can then disseminate it internally, just for purposes of scaling. Not because we don’t care about individuals, it’s just hard to reach them. So we’re trying to work that process, get that smoothed out. While that is getting refined, we’re also building ways for individual developers of any kind to opt into that kind of information. So it is very much an experimental piece of software at this point. It’s production ready too; it’s both. It’s very interesting to be in the middle of it all. And I know it feels like it’s been going on for a long time, and I know it feels like it’s never going to end, but it actually is going to end.

And as somebody whose mandate is to work on it, there’s even almost a little bit of, not dread, but an existential sort of conundrum: when Gutenberg ends, what do I do then? So as much as it feels like it’s never going to end, it is. It will be done, it will finish.

Nathan Wrigley

Moving the debate ever so slightly, but staying more or less in the same wheelhouse really, there seems to be this undercurrent, and in a sense it feels a little bit, I’m going to say, conspiratorial. There seem to be a lot of people who are equating the Gutenberg project, so the dot org side of things, with the dot com side of things, almost as if the people on the dot org side are the guinea pigs, for want of a better word, that is probably entirely the wrong word, but you get the idea, for the project, and that the dot com side obviously has a financial model which the dot org side doesn’t. And I just wondered if you had any thoughts on that, whether those concerns could be assuaged, or whether there is in fact a problem there or not.

Tara King

You know, I don’t see it. I have only been there a month; I don’t have the sort of deep WordPress roots that other folks do. So I’m new, I guess I’m an outsider still, a little bit. And when I took the job, I was a little concerned about that, because before I worked at Automattic, I was constantly thinking, oh, it’s so annoying that there’s a dot org and a dot com. It’s so confusing. It’s so annoying. So coming from outside the company, and from a fairly commercial place, honestly, from my interactions with WordPress, I don’t see it. I have not met anybody from the dot com side. And I mean that literally: from the entire non dot org side I’ve met one person, because she lives in my hometown. We had coffee, that’s it. So no one has told me anything from the dot com side needs to be implemented on our side. If anything, I almost feel like it’s inverted. I would guess if you talk to folks who work on dot com, they are maybe not just as frustrated, but close to as frustrated, as folks outside the company, as they’re waiting to ingest information from dot org. I’ve heard that from folks: we need training, we need to be able to train dot com customers. So there’s frustrations there too. So I hear the conspiracy. I see where that comes from and why it exists. My experience has been completely not that way.

Nathan Wrigley

Okay. It’s definitely something which gets raised from time to time, so I thought it was worth bringing up. But again, the caveats that we mentioned at the top of the podcast apply: you’ve just begun in your line of work, and so on.

Tara King

Exactly, I’m very new. And the thing about conspiracy theories is you can’t really prove them wrong. Most of the time they’re unable to be proven wrong. So I can’t prove it; I just don’t see it.

Nathan Wrigley

One of the things that I guess you are going to have repeatedly over the next year or so is the chatter about the move into new technologies in WordPress.

So React, increased reliance on JavaScript, and the move away from PHP. And this also speaks to the debate about people moving away, getting alarmed that the websites they’ve already built, and their capability to build things and have a business that’s easy for them to manage, are going to be difficult in the future. And I wonder what you thought about those horizons. I wonder if you’ve got any words of comfort for people who have those concerns. And a related question: I wondered if there was possibly a responsibility, and again maybe that’s too strong a word, but I’m going to use it, a responsibility on Automattic to provide guidance, training, materials, whatever would be needed to help people cross that bridge and ease the burden of learning these new things.

Tara King

Yeah. So I’m a PHP developer. I taught myself PHP, and I always held JavaScript at arm’s length, right? It was like, nope, that is too far, I will not do it. When JavaScript started becoming more and more popular, I just was like, nope, I don’t have to, I know PHP. So I feel very much the pain of, why do I have to learn JavaScript again?

I think the concerns I’m hearing, and again, I said my email, I’ll say it again at the end of the podcast, but the concerns I’m hearing and the concerns I’ve had are, yeah, I don’t want to learn JavaScript because I don’t need to, why do I need to? There’s a build step, right? There’s often a slightly more complicated kind of environment needed for Gutenberg development versus just straight PHP, where you just write it, hit save, and see if it works. So there are additional complications to writing Gutenberg code. Not every host is necessarily well set up if you wanted to do that remotely, or something like that. For some of us, I thought we had stopped compiling things; now I have to compile things. It feels a little old. So I hear all of that, and I’m sure there are other objections people have. The thing that I’m excited about, now that I’m having to do more of it, is I’m realizing JavaScript’s not that hard. We’re all going to be okay. It’s not that hard. And it opens up so much in terms of greater web technologies.

And again, this feels very parallel to Drupal seven to Drupal eight, which was moving to object oriented programming. It actually made me a better developer. I was a fine procedural PHP programmer, I was a reasonable developer, and then having to learn it, and I know it’s frustrating when you have to learn something, but I don’t regret having done it. It actually made it easier for me to get into Gutenberg development; it’s made my whole development life much easier. I don’t think JavaScript is going away on the web in general. I think, if anything, it’s going to continue to eat the web. As an individual, it’s powerful to have that tool in your toolbox. As an agency, it’s powerful to be able to sell that work, talk to people, have a more diverse set of skills on the team. I’m pro learning in general, right? It’s something that helps every open source project grow. As for the backwards compatibility of WordPress, I hesitate to say it, but it feels like it’s gone a little bit too far. At some point, if you maintain backwards compatibility, the software can’t move forward, because the old stuff is pulling it back.

I think it’s a wonderful model and Drupal is moving more towards it. It’s kind of interesting to see the two communities converge there, but this might just be a case where there’s going to be a few pain points. Every web developer, no matter what tool they’re working with, is going to have pain points where they have to learn something new.

I think it’s useful on an individual level. And then in terms of offering support for the transition, that is absolutely something that I think needs to happen. Whether or not it’s Automattic’s responsibility. I think it’s best when these things are community-wide efforts. I would love to see WordCamps and meetups offer, people volunteering to run… hey, this is how I got started. That happened a lot in Drupal. I have a friend who ran a talk about how Pokemon can teach you object oriented programming. Very accessible. And so I think, it’s not necessarily Automattic’s responsibility, but that said, it is something that my team is actively working on right now: what kind of materials are needed to help people get over there. Is it, we need to help people understand how to make that build environment, the dev environment that can do the build steps for React, or is it just general JavaScript knowledge? So we’re actually, this week, looking at what options are currently out there. What’s up to date. There were, when Gutenberg launched, a number of products and educational things that came out from the community that were great, but have not been updated. And people are still being directed to stuff that’s two years old and doesn’t help them now. So my team is actively working on this.

How can we help people do this? Because like I said, it’s actually not that hard, but we don’t give people the tools they need. I tried to build a Gutenberg plugin recently entirely just from wordpress dot org documentation. I was like, no blog posts, no outside resources, just wordpress dot org. And it was not easy. So whether or not it’s Automattic’s responsibility, it’s something that we’re taking on, because the community does need it. So look for something better in that space, soonish.

Nathan Wrigley

Thank you. Encouraging, just to hear that the flag has been raised and the concern has been written down and it does sound to me like you are actually planning to bring something to the table and it’s been thought about, so that’s really encouraging. Thank you for that.

It feels like we’ve been bashing for a long time, we’ve probably spent half an hour dissecting all the bad. So before we draw to a close let’s flip that entirely. Let’s turn it to the good. And I just want to offer you a platform to say why it is you’ve taken this job with Gutenberg as the sole focus. What is it about Gutenberg that you feel is better? Why do you think it’s the future? In other words, what I’m saying is, here’s a crowd of naysayers, here’s a crowd of people in front of you, they’ve got their pitchforks out, they are furious about the way that things are going, you’ve got an opportunity now to just address that crowd and see if you can turn some heads.

Tara King

Oh, I wish I had practiced. WordPress has always been about freedom and empowerment of people, of individuals. This is my personal take on it, this is not the Automattic take on it necessarily, it’s just how I feel. When I was building small sites, I used to run a consultancy for artists. Artists are, famously, not necessarily wealthy. They don’t have a lot of money to put into these things. And they’re also a very do it yourself kind of group. So I was making websites for artists. And if I could just get them started, give them a little push, install some WordPress on a server, maybe pick out a theme for them. They could do it. People who almost refuse to touch computers because they’re just busy off making their art could come back and use WordPress and share their work, talk about it, sell it, do really cool things.

And I think I’ve always been very passionate about that kind of end user being able to make their own website. I am personally just so not interested in having to go to a developer to say I need to post my new blog post. I need to add a little widget here with my new event. It feels so old-fashioned to me, and it so disempowers, like I said, the user of the website. And so when I was looking at this job, thinking to myself, self, nobody, like everybody’s mad about Gutenberg. Do you really want to talk about it and try to make them like it? What it really came down to was a genuine feeling, when I was interviewing and talking to people at Automattic, a genuine feeling that they wanted this to be a collaborative experience, that they wanted it to be in conversation with the entire community, which is really where my passion derives from. And then Gutenberg itself as a tool is just incredible. I wouldn’t have taken it if I didn’t think the tool was worth it. If it was like, oh, there’s this terrible piece of software, but it’s okay. I’m getting a salary. I’m not going to work 40 hours a week on something like that. So the tool allows people to do really powerful things and really control stuff that I haven’t seen in other CMSs. I’ve built sites for clients in Wix and Weebly and Squarespace and Drupal and WordPress and other more niche platforms. And I just see my clients over and over again, bumping up against, oh, I just want to put two pictures next to each other. And they can’t because they don’t know HTML or they don’t know how to make a table.

I just want to be able to make all my pictures, have a little, like a header cover image with some text on it and they have to call me and I have to code that in and put it up there. And obviously Gutenberg doesn’t have every kind of block and every kind of pattern that you might imagine. But having now built several sites, just with vanilla WordPress, I haven’t installed any themes or anything like that, and just a couple of block packages that are out there, you can get pretty far, I think much farther. Yesterday I was watching a video on YouTube about, it was 10 minutes to a block theme, and it was like, make these five files and now you can put a block widget as your header, which means the users can make their own headers. And I don’t have to go in and do all of those little things for them all the time. I think that’s scary for some folks because they rely on that work. They rely on it being difficult. But ultimately, it’s really empowering. It makes more people able to make more websites. Like it really grows the size of the pie if you will. Drupal’s like jealous of it and there’s a Gutenberg port to Drupal and it’s really very cool. It’s very powerful. And I think, the community can really benefit from it. We just need to be able to actually speak to each other and hear each other and work together. And that’s the part that my team is really trying to build that bridge and to make that a reality, obviously we can’t fix everything for everybody, but we can fix more things than we have been fixing.

Nathan Wrigley

That, I feel is a really excellent place to call it a day. You mentioned just before we finish, you did mention earlier that you were going to drop your email in once more. It may be that people have heard it and haven’t written it down. Can I encourage you to do that once again?

Tara King

Absolutely. My email is t a r a dot k i n g at automattic dot com. And there are two T’s on the end of that. So it’s a u t o m a t t i c dot com. I’m also sparklingrobots on Twitter. Like I said, R I P my inbox let’s see how this goes. But I, I believe my DMs are open on Twitter or you can just tweet at me because I am actively looking to have conversations in the community. One-on-one conversations actually move things forward quite a bit. So I’m excited to have those.

Nathan Wrigley

Tara thank you very much for coming on the podcast today.

A Deep Dive Into Serverless UI With TypeScript

If you’ve been looking for a clear explanation of how applications can be developed and deployed to AWS with as little configuration as possible, then I’ve prepared just the article for you. We’ll be breaking it all down into two parts: deploying a static web application (in this case a Notes application), and then a serverless web application to CloudFront using the Serverless UI library.

Note: To follow along, you’ll need a basic understanding of AWS and web development in order to understand how the TypeScript project is built and used to deploy to AWS.

Requirements

Before starting to build our project, the following requirements need to be met:

  • Basic knowledge of React, React Hooks, and Material UI;
  • Good knowledge of TypeScript;
  • Node.js version >= 12.x.x installed on your local machine;
  • Have an AWS verified account;
  • Configured your AWS CLI with local credentials;
  • Ensure that npm or yarn is also installed as the package manager.
Introduction

We’ll start with a brief introduction to Serverless UI, and by the end of this tutorial, you should be able to comfortably use Serverless UI in your applications — from installing it to understanding the concepts and implementing it in your very own projects. According to the docs on GitHub:

“Serverless UI is simply a free, open-source command-line utility for quickly building and deploying serverless applications on the AWS platform.”

As stated, it’s a lightweight library that’s quickly installed over the terminal, and can be used to configure domains and deploy static or serverless websites, all from the terminal. This lets you pair any front-end framework of your choice with Serverless UI to deploy existing and new applications to AWS stress-free.

Serverless UI also works great with any static website, and with websites that use serverless functions to handle requests to some sort of API. This makes it great for building serverless back-end applications. The deploy process through Serverless UI automatically gives each deployment, or iteration, of your application its own separate URL. This means you can monitor the continuous integration and testing of your application in real time, with confidence.

Using Serverless UI in production, you can choose to have your project or serverless functions written in native JavaScript or TypeScript. Either way, they’ll be bundled down extremely quickly and your functions deployed as Node.js 14 Lambda functions. Your functions within the ./functions folder are deployed automatically as serverless functions on AWS. This approach means that we’ll be writing our code in the form of functions that will handle different tasks or requests within the application. So when we deploy our functions, we’ll invoke them in the format of an event.
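To make the event-driven shape concrete, here is a minimal sketch of a function that could live in the ./functions folder. The file name, event shape, and response fields are illustrative (they follow the standard Node.js Lambda handler convention), not taken from the article’s repository:

```typescript
// Hypothetical file: ./functions/hello.ts
// Serverless UI deploys each file in ./functions as a Node.js Lambda.
// The handler receives an event object and returns an HTTP-style response.
export const handler = async (event: {
  queryStringParameters?: Record<string, string>;
}) => {
  // Read an optional query parameter from the invoking event.
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
};
```

Because the handler is just an async function of an event, it can be exercised locally before ever being deployed.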

Serverless UI also helps keep the application file size small and fast to load. Being a command-line tool, it doesn’t need to be bundled inside the application — it can be installed globally, npm install -g @serverlessui/cli, or as a devDependency within our application. This means it adds nothing to our bundle: the application ships only the code it needs to function. As with any migration, we developers know that migrating existing applications can be tough and troubling without downtime for our users, but it is doable depending on the use case.

Pros And Cons Of Using Serverless UI

Using Serverless UI within our projects, whether existing or new, brings some benefits:

  • There are no middleman services; Serverless UI gives you out-of-the-box benefits of a pre-configured infrastructure without having to go through a middleman.
  • It supports and works in almost any CI (Continuous Integration) environment, since it’s a command-line tool readily available via npm. This is a plus for the backend and infrastructure setup.
  • For already existing serverless applications, or those that may have additional CloudFormation and/or CDK infrastructure, it provides CDK constructs for each of the CLI actions.
  • Serverless UI covers almost every option when deploying your application: deploy your static website, Lambda functions, or production code.
  • Almost all configuration (such as configuring domains and deploying applications) is done on the command line.
  • Front-end frameworks like React, Svelte, Vue, or jQuery are all supported, as long as the project compiles down to static code.
  • It gives serverless applications the ability to scale dynamically per request, without any capacity planning or provisioning for the application.

These are some downsides of Serverless UI that we should consider before deciding to use it within our projects:

  • It only supports projects written in TypeScript or JavaScript.
  • The library’s core infrastructure is written with aws-cdk, which means AWS is the only platform our applications can be deployed to.

Recommended Reading: Local Testing A Serverless API (API Gateway And Lambda)

Setting Up The Notes Application

Nowadays, several tools are available for developers to efficiently manage infrastructures, for example, the Serverless UI, the console, or one of the frameworks available online. As explained above, our goal is to set up a simple demo of a Notes application in TypeScript, which will quickly help us to demonstrate how Serverless UI could be used in hosting it, so you can quickly grasp and implement it within your own projects.

For this tutorial, we’ll quickly explore and explain the different parts of a Notes application, then install the Serverless UI library to host the application on AWS.

We proceed to clone the remote repository on our local machine and run the command that will install all the dependencies.

git clone https://github.com/smashingmagazine/serverless-UI-typescript.git

yarn install

The above commands clone a Notes application that already has the functional components built, and then install the dependencies needed for those components to function. Here’s the list of dependencies required for this Notes application:

{
  ...
  "dependencies": {
    "@testing-library/jest-dom": "^5.11.4",
    "@testing-library/react": "^11.1.0",
    "@testing-library/user-event": "^12.1.10",
    "@types/jest": "^26.0.15",
    "@types/node": "^12.0.0",
    "@types/react": "^17.0.0",
    "@types/react-dom": "^17.0.0",
    "react": "^17.0.1",
    "react-dom": "^17.0.1",
    "react-scripts": "4.0.3",
    "typescript": "^4.1.2",
    "web-vitals": "^1.0.1"
  },
  ...
}

The above list contains dependencies and their type definitions so they work optimally with TypeScript. We’ll now walk through the working parts of the application. But let’s first define interfaces for the Note data and the Props argument that will be passed down into our functions. Create a /src/Interfaces.ts file and include the following:

export interface INote {
  id: number;
  note: string;
}
export interface Props {
  content: INote;
  delContent(noteToDelete: number): void;
}

Here we’re defining the type structure that acts as a syntax contract between our components and the props passed into them. It also defines INote, the unit of data in our application state.

For this application, we’ll focus mainly on the /src/Components folder and the /src/App.tsx file. We’ll start with the Components folder, then gradually explain the rest of the application.

Note: The styles defined and used throughout this Notes application can be found in the /src/App.css file.

The Components folder contains one file, Note.tsx, which defines the UI structure of each note we create.

import { INote } from "../Interfaces";

interface Props {
  content: INote;
  delContent(noteToDelete: number): void;
}

const Note = ({ content, delContent }: Props) => {
  return (
    <div className="note">
      <div className="content">
        <span>{content.note}</span>
      </div>
      <button
        onClick={() => {
          delContent(content.id);
        }}
      >
        X
      </button>
    </div>
  );
};
export default Note;

Within the Note function, we’re destructuring a props parameter with the type definition Props, which contains the content and delContent fields. The content field holds the note data, including the note text entered by the user, while delContent is a function that deletes a note from the application.

We’ll proceed to build the general UI of the application, defining its two sections: one for creating notes and the other containing the list of notes already created:

const App: FC = () => {
  return (
    <div className="App">
      <div className="header">
      </div>

      <div className="noteList">
      </div>
    </div>
  );
};
export default App;

The div tag with the header class contains the input and the button elements for creating and adding notes to the application:

const App: FC = () => {
  return (
    <div className="App">
      <div className="header">
        <div className="inputContainer">
          <input
            type="text"
            placeholder="Add Note..."
            name="note"
            value={noteContent}
            onChange={handleChange}
          />
        </div>
        <button onClick={addNote}>Add Note</button>
      </div>

      ...
    </div>
  );
};
export default App;

In the above code, we bound the input element’s value to a new piece of state, noteContent, and added an onChange event to update that value. The button element has an onClick event that will generate new content from the input’s value and add it to the application. The above UI markup, coupled with the already defined styles, will look like:

Now let’s define the new states, noteContent and noteList, then the two events, handleChange and addNote functions to update our application functionalities:

import { FC, ChangeEvent, useState } from "react";
import "./App.css";
import { INote } from "./Interfaces";

const App: FC = () => {
  const [noteContent, setNoteContent] = useState<string>("");
  const [noteList, setNoteList] = useState<INote[]>([]);

  const handleChange = (event: ChangeEvent<HTMLInputElement>) => {
      setNoteContent(event.target.value.trim());
  };

  const addNote = (): void => {
    const newContent = { id: Date.now(), note: noteContent };
    setNoteList([...noteList, newContent]);
    setNoteContent("");
  };

  return (
    <div className="App">
      <div className="header">
        <div className="inputContainer">
          <input
            type="text"
            placeholder="Add Note..."
            name="note"
            value={noteContent}
            onChange={handleChange}
          />
        </div>
        <button onClick={addNote}>Add Note</button>
      </div>

      ...
    </div>
  );
};
export default App;

The noteList state contains all the notes created within the application; we add to and remove from it to update the UI. Within the handleChange function, we update noteContent with every change made to the input field using the setNoteContent function. The addNote function creates a newContent object with an id generated from Date.now() and a note field whose value is taken from noteContent. It then calls the setNoteList function to create a new noteList array from its previous state plus newContent.
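Stripped of React, the two state updates are plain immutable list operations. As a quick sketch (not part of the repository; the helper names are ours), they could be written and unit-tested in isolation:

```typescript
interface INote {
  id: number;
  note: string;
}

// Mirrors setNoteList([...noteList, newContent]) in addNote:
// returns a new array with the note appended, leaving the input untouched.
const addNoteTo = (list: INote[], note: string, id: number): INote[] => [
  ...list,
  { id, note },
];

// Mirrors noteList.filter((content) => content.id !== noteID) in delContent:
// returns a new array without the note whose id matches.
const delContentFrom = (list: INote[], noteID: number): INote[] =>
  list.filter((content) => content.id !== noteID);
```

Keeping the updates immutable like this is what lets React detect the state change and re-render the list.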

Next is to update the second section of the App function with the JSX code to contain the list of notes created:

...

import Note from "./Components/Note";

const App: FC = () => {
  ...

  return (
    <div className="App">
      <div className="header">
        ...
      </div>

      <div className="noteList">
        {noteList.map((content: INote) => {
          return <Note key={content.id} content={content} delContent={delContent} />;
        })}
      </div>
    </div>
  );
};

export default App;

We’re looping through noteList using the Array.prototype.map method to render the list of notes within our application. We imported the Note component, which defines the UI of each note, passing the key, content and delContent props into it. The delContent function, as discussed earlier, deletes content from the application:

...
import Note from "./Components/Note";

const App: FC = () => {
  ...
  const [noteList, setNoteList] = useState<INote[]>([]);

  ...

  const delContent = (noteID: number) => {
    setNoteList(
      noteList.filter((content) => {
        return content.id !== noteID;
      })
    );
  };
  return (
    <div className="App">
      <div className="header">
        ...
      </div>

      <div className="noteList">
        {noteList.map((content: INote) => {
          return <Note key={content.id} content={content} delContent={delContent} />;
        })}
      </div>
    </div>
  );
};
export default App;

The delContent function filters noteList, keeping only the notes whose id does not match the noteID argument. Each rendered Note component receives delContent as a prop and calls it with its own content.id when its delete button is clicked.

Coupling the two sections of the App component together, your code should look like the below:

import { FC, ChangeEvent, useState } from "react";
import "./App.css";
import Note from "./Components/Note";
import { INote } from "./Interfaces";

const App: FC = () => {
  const [noteContent, setNoteContent] = useState<string>("");
  const [noteList, setNoteList] = useState<INote[]>([]);

  const handleChange = (event: ChangeEvent<HTMLInputElement>) => {
      setNoteContent(event.target.value.trim());
  };

  const addNote = (): void => {
    const newContent = { id: Date.now(), note: noteContent };
    setNoteList([...noteList, newContent]);
    setNoteContent("");
  };

  const delContent = (noteID: number): void => {
    setNoteList(
      noteList.filter((content) => {
        return content.id !== noteID;
      })
    );
  };

  return (
    <div className="App">
      <div className="header">
        <div className="inputContainer">
          <input
            type="text"
            placeholder="Add Note..."
            name="note"
            value={noteContent}
            onChange={handleChange}
          />
        </div>
        <button onClick={addNote}>Add Note</button>
      </div>

      <div className="noteList">
        {noteList.map((content: INote) => {
          return <Note key={content.id} content={content} delContent={delContent} />;
        })}
      </div>
    </div>
  );
};
export default App;

And if we go ahead and add a few notes to our application, then our final UI will look like this:

Now that we have created a simple Notes application to which we can add and delete notes, let’s move on to using Serverless UI to deploy this application to AWS, as well as deploy a serverless back-end application (serverless functions).

Deploying Notes Application With Serverless UI

Now that we’re done explaining the components that make up our Notes application, it’s time to deploy it using Serverless UI on the terminal. The first step in deploying our application to AWS is to configure the AWS CLI on our machine. Check here for comprehensive steps to take.

Next is to install the Serverless UI library globally on our local machine:

npm install -g @serverlessui/cli

This installs the package globally, meaning it adds nothing to our application’s build output.
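Alternatively, if you’d rather pin the CLI version per project than install it globally, it can be added as a devDependency and invoked through an npm script. A sketch (the script name and version range are illustrative, not from the article’s repository):

```json
{
  "devDependencies": {
    "@serverlessui/cli": "latest"
  },
  "scripts": {
    "deploy": "sui deploy --dir=\"build\""
  }
}
```

Either way, the CLI stays out of the application bundle, since npm scripts and global installs both run outside the build.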

Next, we make a build folder of the project; this is the folder we’ll reference within our terminal:

sui deploy --dir="build"
...
❯ Website Url: https://xxxxx.cloudfront.net

But for our project, we’ll run the yarn command that builds our application into a static website within the build folder, after which we run the Serverless UI command to deploy the application:

yarn build 
...
Done in 80.63s.

sui deploy --dir="build"
...

✅  ServerlessUIAppPreview1c9ec9f1

Outputs:
ServerlessUIAppPreview1c9ec9f1.ServerlessUIBaseUrlCA2DC891 = https://dal254gl37fow.cloudfront.net

Stack ARN:
arn:aws:cloudformation:us-west-2:261955174750:stack/ServerlessUIAppPreview1c9ec9f1/e4dc82e0-fe44-11eb-b959-064619847e85

Our application was successfully deployed, and the total time it took to deploy was less than five minutes. The application was deployed to CloudFront here.

Deploying Serverless Functions With Serverless UI

Here, we’ll focus on deploying Lambda functions written in our local environment, rather than in the editor provided on the AWS web console. With Serverless UI, we remove the hassle of doing a lot of configuration and setup before deploying to AWS.

You’ll also want to ensure your local environment is as close to the production environment as possible, including the runtime’s Node.js version. As a reminder, you need to install a version of Node.js supported by AWS Lambda.

The code for the /serverless folder used within this part of the article can be found here. This folder contains the source file that makes a request to an API to get a random note: a joke.

const nodefetch = require("node-fetch");

exports.handler = async (event, context) => {
  const url = "https://icanhazdadjoke.com/";
  try {
    const jokeStream = await nodefetch(url, {
      headers: {
        Accept: "application/json"
      }
    });
    const jsonJoke = await jokeStream.json();
    return {
      statusCode: 200,
      body: JSON.stringify(jsonJoke)
    };
  } catch (err) {
    return { statusCode: 422, body: err.stack };
  }
};

Before we deploy the serverless folder, we’ll need to install the esbuild library, which makes bundling the application files much faster.

npm install esbuild --save-dev

The next step in deploying the serverless function on AWS is to specify the folder location with the --functions flag, as we previously did with the --dir flag when deploying our static website.

sui deploy --functions="serverless"

The above command bundles the serverless function and deploys it successfully:

...

✅  ServerlessUIAppPreview560dbd41

Outputs:
ServerlessUIAppPreview560dbd41.ServerlessUIFunctionPathjokesD9F032B9 = https://dwh6k64yrlqcn.cloudfront.net/api/jokes

Stack ARN:
arn:aws:cloudformation:us-west-2:261955174750:stack/ServerlessUIAppPreview560dbd41/21de6780-fb93-11eb-a0fb-061a2a83f0b9
The serverless function is now deployed to CloudFront here.

As a side note, we should be able to reference our API URL by relative path in our UI code like /api/jokes instead of the full URL if deployed at the same time with the /dist or /build folder. This should always work — even with CORS — since the UI and API are on the same domain.
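One way to keep that flexibility in code is a tiny helper that falls back to the relative path when no base URL is configured. This is a sketch; the function name is ours, not part of Serverless UI:

```typescript
// Resolve the API endpoint: use the full base URL when one is supplied
// (e.g. a separate CloudFront distribution), otherwise fall back to a
// relative path served from the same domain as the UI.
const apiPath = (path: string, base?: string): string =>
  base ? new URL(path, base).toString() : path;
```

With that, the UI could call something like apiPath("/api/jokes", process.env.API_LINK) and work whether or not the functions share the UI’s domain.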

By default, Serverless UI will create a new stack for every preview deployed, which means each URL will be different and unique. In order to deploy to the same URL multiple times, the --prod flag needs to be passed.

sui deploy --prod --dir="dist" --functions="serverless"

Let’s create a /src/Components/Quote folder and inside it an index.tsx file. This contains the JSX code that will house the jokes.

import { useState } from "react";

const Quote = () => {
  const [joke, setJoke] = useState<string>();
  return (
    <div className="container">
      <p className="fade-in">{joke}</p>
    </div>
  );
};
export default Quote;

Next, we will make a request to the deployed serverless function to retrieve a joke at a set interval. This way the note, i.e. the joke, within the <p className="fade-in">{joke}</p> JSX markup gets updated every 2000 milliseconds.

import { useEffect, useState } from "react";

const Quote = () => {
  const [joke, setJoke] = useState<string>();

  useEffect(() => {
    const getRandomJokeEveryTwoSeconds = setInterval(async () => {
      const url = process.env.API_LINK || "https://dwh6k64yrlqcn.cloudfront.net/api/jokes";
      const jokeStream = await fetch(url);
      const res = await jokeStream.json();
      const joke = res.joke;
      setJoke(joke);
    }, 2000);
    return () => {
      clearInterval(getRandomJokeEveryTwoSeconds);
    };
  }, []);

  return (
    <div className="container">
      <p className="fade-in">{joke}</p>
    </div>
  );
};
export default Quote;

The code snippet added above uses the useEffect hook to make API calls to the serverless function, updating the UI with the joke returned from each request via the setJoke function provided by the useState hook.

Let’s restart our local development server to see the new changes added to our UI:

Before deploying the updates to your existing application, you can set up a custom domain, and using Serverless UI deploy and push subsequent code updates to this custom domain.

Configure Domain With Serverless UI

We can deploy our serverless application to our custom domain rather than the default one provided by CloudFront. Configuring and deploying to our custom domain may take 20 – 48 hours to fully propagate but only needs to be completed once. Navigate into your project directory and run the command:

sui configure-domain --domain="<custom-domain.com>"

Replace the above value of the --domain flag with your own custom URL. Then you can continuously update the already deployed project by adding the --prod flag when using the sui deploy command again.

Recommended Reading: Building A Serverless Contact Form For Your Static Site

Conclusion

In this article, we introduced Serverless UI by discussing the merits that make it a good fit for deploying your applications. We also created a demo of a simple Notes application and deployed it with the library. You can further build back-end serverless functions that are triggered by events happening within the application, and deploy them to AWS Lambda.

For the advanced use case of Serverless UI, we configured the default domain provided by CloudFront with our own custom domain name using Serverless UI. And for existing serverless projects or those that may have additional CloudFormation and/or CDK infrastructure, Serverless UI provides CDK constructs for each of the CLI actions. And with Serverless UI, we can easily configure a private S3 bucket — an extra desired feature for enhanced security on our serverless applications. Click here to read up more on it.

  • The code used within this article can be found on Github.

Resources

Quality backlinks

I want to get some ideas and strategies on how to get more quality backlinks to improve my site. If you have some ideas or are an SEO expert, feel free to leave a comment; any suggestions will be appreciated.

How to Effectively Attract and Manage Guest Bloggers in WordPress

Are you looking for ways to attract guest bloggers and manage them in WordPress?

Guest blogging is a powerful way to gain exposure and build brand awareness. You can publish different types of content through guest post submissions and boost your traffic.

In this article, we will show you how to effectively attract and manage guest bloggers in WordPress.

Manage Guest Bloggers in WordPress

Benefits of Accepting Guest Posts for Publishers

Guest blogging has tons of benefits for the guest author or the company they represent, helping them to get publicity and backlinks to their website.

But what’s in it for you as a publisher?

Here are some of the advantages of accepting guest posts on your website.

  • New Perspective – Every author brings their unique perspective with their writing. Your audience will like a little change of pace and ideas.
  • New Audience – Often the guest author will share the published post with their audience. This will attract new users to your website and grow your audience.
  • New Connection – By allowing the other person to guest post on your site, you can build a relationship with them. This increases your chances of helping each other in the future.
  • New Post – You get an extra post on your site that you didn’t have to write. You can use that time to focus on growing other areas of your brand.

Now that you see the benefits of accepting guest posts on your site, let’s find out how to attract guest bloggers in WordPress.

Attracting Guest Bloggers in WordPress

There are various ways you can attract guest bloggers in WordPress. The simplest way of doing this is by creating a Write for Us page on your site.

You can highlight the details for guest post submissions and offer publishing guidelines for writers on the ‘Write for Us’ page. If you have particular topics you’d like covered, then you can also list them on the page.

Write for us section

Besides that, it’s a good idea to make this page visible to your visitors. You can place the link in the main navigation area of your WordPress website, like in the top menu or sidebar.

Other than that, you can link to the ‘Write for Us’ page after each post, in the author bio, or on each guest author post.

Bonus Tip: You can create stunning write for us pages using SeedProd. It’s the best landing page plugin for WordPress and offers a drag and drop builder along with numerous customization options. You can follow our detailed guide on how to create a landing page in WordPress for more details.

Another way to attract guest bloggers to your website is by offering them a monetary reward. You can set different prices for different types of content.

For example, DAME Magazine offers guest authors a monetary reward of $150 for essays and between $300 to $500 for reported features.

Sites that pay for guest posts

You can also partner with other businesses by guest posting on other sites and allowing their authors to guest post on your website.

Often bloggers tend to reciprocate guest posts, which can work out great, especially if you are in the same niche.

Lastly, you can join different communities of guest bloggers and look for opportunities to attract new guest posts to your website.

Once you know how to attract guest bloggers, let’s find out how to accept guest posts on your WordPress blog.

Accepting Guest Posts in WordPress

There are several ways to accept guest posts in WordPress. The easiest way is by allowing users to submit guest posts from the front end of your WordPress website.

This way, you won’t have to give access to the WordPress admin area or require users to register. Guest bloggers can simply upload their content using a post-submission form.

For this tutorial, we’ll be using WPForms. It’s the best form plugin for WordPress and offers a drag and drop form builder. The plugin offers a post submissions addon that makes it easy for users to upload content to your site.

You’ll need the WPForms Pro version as it includes the post submissions addon.

First, you’ll need to install and activate the WPForms plugin. If you need help, then simply follow our guide on how to install a WordPress plugin.

Once the plugin is active, you can head over to WPForms » Settings from your WordPress admin area and enter the license key. You can find the license key in your WPForms account area.

Enter WPForms license key

Next, you’ll need to go to WPForms » Addons page. Then scroll down to the Post Submissions Addon and click the ‘Install’ button. The addon will now automatically install and activate.

Installing the WPForms post submissions addon

Upon installing the addon, you’re now ready to create your post submission form. To start, simply go to WPForms » Add New to launch the WPForms form builder.

After that, go ahead and enter a name for your form and then select the ‘Blog Post Submission Form’ template in the Select a Template area.

Select Blog Post Submission Form template

Now, you can use the drag and drop builder to customize your form. Simply add new form fields by dragging them from the options on your left and placing them where you want in the form.

Add new form fields

WPForms also lets you customize each individual field. All you have to do is click on any field you want to edit, and you’ll see options to change its label, size, and format, add a description, and more.

After you’re done customizing your post submission form, you can head over to the ‘Settings’ tab.

In the General settings, you’ll be able to edit your form name, form description, change the submit button text, edit the anti-spam protection option, enable AJAX form submissions, and more.

General Form Settings

Next, you can go to the Notifications settings tab to change the email address and message you’ll receive when someone submits a guest post using the form.

Once that’s done, you can head over to the Confirmations settings tab and edit the message people will see once they submit a form. WPForms lets you show a message, direct users to a new URL, or display a page.

After that, go ahead and click on the Post Submissions settings tab to map each form field to the respective fields in WordPress.

Change the Post Submission settings

Now, save your settings to store your post submission form and exit the form builder.

Next, you’ll need to add your guest post submission form to your website. You can do that by adding a new page or editing an existing one.

Once you’re in the WordPress block editor, simply click the plus (+) button and add a WPForms block.

WPForms block

After that, you’ll need to select your post submission form from the dropdown menu in the WPForms block.

Select your post submission form from the dropdown menu

You can now go ahead and publish your page and visit your website to see the post submission form in action.

Post submission form example
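If you use the classic editor, or want to place the form somewhere a block won’t work, WPForms also supports embedding via shortcode. The form ID below is a placeholder; you can find your form’s actual ID under WPForms » All Forms:

```
[wpforms id="123" title="false"]
```

The title="false" attribute hides the form’s name above the form, which usually looks cleaner when the page already has its own heading.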

Aside from using WPForms, there are more ways to allow users to submit guest posts to your WordPress website. For instance, you can create individual WordPress accounts for each contributor.

However, this would mean allowing guest writers to access your WordPress admin area and view other blog posts and pages on your website.
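If you do go the individual-accounts route, a rough sketch of creating contributor accounts from the command line is shown below. This assumes WP-CLI is installed on your server (it isn’t bundled with WordPress itself), and the usernames and email addresses are placeholders:

```shell
# Create a guest author with the Contributor role.
# Contributors can write and submit posts for review,
# but cannot publish them or edit other users' content.
wp user create janedoe jane@example.com --role=contributor

# Optionally email the new user their login credentials
wp user create johndoe john@example.com --role=contributor --send-email
```

The Contributor role is the safest built-in choice for guest writers, since every submission still goes through your review before publishing.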

If you’re looking for more options to accept guest posts, then please refer to our guide on how to allow users to submit posts to your WordPress site.

Set Up Website Traffic Tracking by Authors

Once you start publishing guest posts, it’s important to know how they’re performing. One way of tracking their performance is by finding out which authors drive the most traffic to your website.

This way, you’ll get to see the most popular guest author on your website. You’ll also get to know which content your audience likes, so you can accept more guest posts on similar topics.

With MonsterInsights, you can easily set up author tracking in Google Analytics. MonsterInsights is the best Analytics solution for WordPress and is used by over 3 million businesses.

It makes it very easy to add Google Analytics to WordPress without editing code or hiring a professional. Using the MonsterInsights Dimensions addon, you can identify the most popular contributors on your blog.

Select Author from the custom dimensions dropdown menu

The Dimensions addon lets you set up custom dimensions in WordPress. Custom dimensions are additional information that you can track in Google Analytics. This includes authors, post type, user ID, category, logged-in users, and more.

The best part about using MonsterInsights is that you can see the data inside your WordPress admin area and don’t have to switch between tabs or windows.

To view the most popular author on your site, simply head over to Insights » Reports and go to the ‘Dimensions’ tab.

Top authors report in MonsterInsights

For more details, you can follow our step-by-step guide on how to enable author tracking in WordPress.

Bonus: Tips for Accepting Guest Posts

Ever since Google started cracking down on paid text links, SEO companies and spammers have relied on guest posts to pick up the slack. For this very reason, no matter how popular your blog is, you will see at least a few guest post requests.

When your blog is relatively new and you get a guest post request, it’s easy to get excited. In that excitement, you may make the mistake of approving sub-par or even low-quality content.

To help you out, here are some rules that we think you should follow when accepting guest posts.

Ask Which Keywords or Backlinks They Want

You don’t want to link to spammy sites promoting things like porn, inkjet printers, car insurance, and so on. You also don’t want to link to a specific keyword that isn’t relevant to your industry or niche.

If you don’t ask your guest bloggers which keyword they’re focusing on, or whether they’re linking to spammy websites, then you may end up with an article that doesn’t add value.

At this point, if you reject their post, it sort of looks bad. It’s best not to waste time and get this out of the way.

Ask for Topic Ideas and Summary Before the Final Post

Often these SEO companies and spammers have pre-written articles. They will say they want to write for your blog, but they won’t suggest any ideas.

Chances are, you will get a pre-written post that has been published on numerous sites. This is bad for your site as duplicate content can hurt your WordPress SEO.

It’s always best to ask them for topic ideas along with a summary or an outline of the article. This shows you how qualified they are to write the post, and you can approve or reject the topic.

It will also help you screen out generic posts or list posts that have already been covered by numerous other websites.

We hope this article helped you learn how to effectively attract and manage guest bloggers in WordPress. You may also want to check out our guides on how to choose the best blogging platform and our expert pick of the must have WordPress plugins for all websites.

If you liked this article, then please subscribe to our YouTube Channel for WordPress video tutorials. You can also find us on Twitter and Facebook.

The post How to Effectively Attract and Manage Guest Bloggers in WordPress appeared first on WPBeginner.