How to Fix the Most Common SEO and UX Issues on Your Site Using 301 Redirects

Whenever you see a commercial or an article about creating websites, or you hear someone talking about starting one, the emphasis is always on “I was surprised at how quickly I created my website; it only took me a couple of days!” It makes you feel like creating an online presence and earning […]

Parallel Grid Search in H2O

H2O is, at its core, a platform for distributed, in-memory computing. On top of that distributed computation platform, machine learning algorithms are implemented. At H2O, we design every operation, be it data transformation, training of machine learning models, or even parsing, to utilize the distributed computation model; this is necessary in order to work with big data fast.

However, a single operation usually cannot utilize a cluster's computational resources to the fullest. Data needs to be distributed across the cluster, and many operations require sequential execution of tasks which, even if implemented in a distributed manner, follow one another and require data exchange. These and many other smaller factors, summed together, can introduce significant overhead.

Principles to Handle Thousands of Connections in Java Using Netty

The C10K problem is shorthand for handling ten thousand connections concurrently. To achieve this, we often need to change the settings of the network sockets we create and the default settings of the Linux kernel, monitor the usage of the TCP send/receive buffers and queues, and, in particular, adjust our application to be a good candidate for solving this problem.

In today's article, I'm going to discuss common principles that need to be followed if we want to build a scalable application that can handle thousands of connections. I will refer to the Netty framework, TCP and socket internals, and some useful tools that can give you insights into your application and the underlying system.

Join the Future of WordPress Themes Conversation: Theme Review Team to Hold Biweekly Discussions

In collaboration with the core design and editor teams, the WordPress theme review team will begin hosting biweekly (fortnightly) meetings on the future of themes. The meetings will be held every other Wednesday on the #themereview WordPress Slack channel at 16:00 UTC. The first meeting is on February 5.

Phase 2 of the Gutenberg project is about tackling site customization. This covers everything from turning sidebars into block containers to redefining how themes will work in a block-based system in the coming years. The latter is a huge unanswered question. There are several ideas on how themes should be handled.

Kjell Reigstad, a design director for Automattic, proposed the meeting as a step toward answering the future-of-themes question. “The main thing I’d like to accomplish is to build up regular cross-team communication around the theme plus full-site editing work,” he said. “There are so many potential changes on the horizon, and we really need perspective from both the Gutenberg folks and theme authors. I know it’s difficult to keep up with all the development happening, and I thought this dedicated meeting would be a great place to stay up to date and share ideas on a regular basis.”

The agenda for the first meeting is still open but should be posted next week. Anyone who wants to participate or make sure an idea sees discussion should let the team know in the announcement post’s comments.

“I’d initially like to try and get everyone on the same page in terms of what’s happening already on the Gutenberg front,” said Reigstad. “So for instance, the experimental block-based themes implementation and the global styles work. We’ll likely go over those a little bit, share links and updates, and then pivot into some discussion questions.”

Bringing in the theme review team is imperative for a smooth transition into whatever themes eventually become. “There’s already a lot of full-site editing work going on, and there are already experimental reference documents for block-based themes,” said Reigstad earlier this week in the team’s regular meeting. “It’s important for the TRT and the theme community to keep up to date on this work, and to develop a clear communication loop with the Gutenberg teams.”

There is some concern that the concept of full, block-based themes will simply be railroaded into core WordPress, regardless of feedback. Not all members of the theme review team or theme authors are supportive of the idea.

Theme reviewer Joy Reynolds pointed out in the announcement’s comments that using the phrase “block-based themes” in the meeting title shows bias in favor of themes made of blocks. “Why is the current Full Site Editing code outside the scope of the Customizer?” she asked. “What is the goal? Is it even something that makes sense for themes? Don’t we need a merge proposal? Or even a consensus on design before forcing these changes into core and having meetings about using experimental code as if it’s the only choice?”

These are questions that will certainly come up in the meeting.

Block-based themes already feel like a foregone conclusion. The initial code is currently in the Gutenberg plugin, albeit as an experimental feature. There is already documentation for building such themes, and there is a core theme experiments repository. Everything seems to be moving full steam ahead in that direction.

Whatever direction themes end up going, the meeting will at least offer an opportunity for the community to add its input. To succeed, the editor, design, and theme review team members will need to find some common ground to begin their discussions.

How to Implement Splunk Enterprise On-Premise for a MuleSoft App

What Is Splunk?

Splunk is a tool for logging, analyzing, reporting on, visualizing, monitoring, and searching machine data in real time.

Machine data is information generated by a computer process, application, device, or any other mechanism without active human intervention. Machine data is everywhere; it can be generated automatically by sources as varied as computer processes, elevators, cars, and smartphones, and it typically takes the form of events in an unstructured format.
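The excerpt ends before the implementation steps, but the usual integration point for getting such events into Splunk is its HTTP Event Collector (HEC). The sketch below is a generic illustration of that API rather than the article's MuleSoft-specific setup; the host, token, and index name are placeholders.

```typescript
// Sketch: sending one machine-data event to Splunk's HTTP Event Collector.
// HEC listens on port 8088 by default and authenticates with "Splunk <token>".
// The URL, token, and index below are placeholders, not values from the article.
const HEC_URL = "https://splunk.example.com:8088/services/collector/event";
const HEC_TOKEN = "00000000-0000-0000-0000-000000000000";

async function logToSplunk(event: Record<string, unknown>): Promise<void> {
  const res = await fetch(HEC_URL, {
    method: "POST",
    headers: {
      Authorization: `Splunk ${HEC_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      sourcetype: "_json", // tell Splunk to treat the payload as JSON
      index: "mule_app",   // hypothetical index for the app's events
      event,               // the machine data itself
    }),
  });
  if (!res.ok) throw new Error(`Splunk HEC responded with ${res.status}`);
}

// Example: record a processed API call as an event.
await logToSplunk({ flow: "orders-api", status: "SUCCESS", elapsedMs: 132 });
```

In practice, a Mule app would more often forward its logs through a log4j appender or connector pointed at the same HEC endpoint; the raw request above just shows what arrives on the wire.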

Innovation Can’t Keep the Web Fast

Every so often, innovation bears fruit in the form of improvements to the foundational layers of the web. In 2015, HTTP/2 became a published standard in an effort to update an aging protocol. This was both necessary and overdue, as HTTP/1 had turned web performance into an arcane sort of discipline built on strange workarounds of the protocol’s limitations. Though HTTP/2 proliferation isn’t absolute — and there are kinks yet to be worked out — I don’t think it’s a stretch to say the web is better off because of it.

Unfortunately, the rollout of HTTP/2 has presided over a 102% increase in the median number of bytes transferred over mobile in the last four years. If we look at the 90th percentile of that same dataset — because it’s really the long tail of performance we need to optimize for — we see an increase of 239%. From 2016 (PDF warning) to 2019, the average mobile download speed in the U.S. increased by 73%. In Brazil and India, average mobile download speeds increased by 75% and 28%, respectively, over the same period.

While page weight alone doesn’t necessarily tell the whole story of the user experience, it is, at the very least, a loosely related phenomenon that threatens the collective user experience. The story that HTTP Archive tells through data acquired from the Chrome User Experience Report (CrUX) can be interpreted a number of different ways, but this one fact is steadfast and unrelenting: most metrics gleaned from CrUX over the last couple of years show little, if any, improvement despite various improvements in browsers, the HTTP protocol, and the network itself.

Given these trends, all that can be said of the impact of these improvements is that they have helped to stem the tide of our excesses, but have done precious little to reduce them. Despite every significant improvement to the underpinnings of the web and the networks we access it through, we continue to build for it in ways that suggest we’re content with the never-ending Jevons paradox in which we toil.

If we’re to make progress in making a faster web for everyone, we must recognize some of the impediments to that goal:

  1. The relentless desire to monetize every square inch of the web, as well as the army of third party vendors which fuel the research mandated by such fevered efforts.
  2. Workplace cultures that favor unrestrained feature-driven development. This practice adds to — but rarely takes away from — what we cram down the wire to users.
  3. Developer conveniences that make the job of the developer easier, but can place an increasing cost on the client.

Counter-intuitively, owners of mature codebases which embody some or all of these traits continue to take the same unsustainable path to profitability they always have. They do this at their own peril, rather than acknowledge the repeatedly established fact that performance-first development practices will do as much — or more — for their bottom line and the user experience.

It’s with this understanding that I’ve come to accept that our current approach to remedy poor performance largely consists of engineering techniques that stem from the ill effects of our business, product management, and engineering practices. We’re good at applying tourniquets, but not so good at sewing up deep wounds.

It’s becoming increasingly clear that web performance isn’t solely an engineering problem, but a problem of people. This is an unappealing assessment in part because technical solutions are comparably inarguable. Content compression works. Minification works. Tree shaking works. Code splitting works. They’re undeniably effective solutions to what may seem like entirely technical problems.

The intersection of web performance and people, on the other hand, is messy and inconvenient. Unlike a technical solution as clearly beneficial as HTTP/2, how do we qualify what successful performance cultures look like? How do we qualify successful approaches to get there? I don’t know exactly what that looks like, but I believe a good template is the following marriage of cultural and engineering tenets:

  1. An organization can’t be successful in prioritizing performance if it can’t secure the support of its leaders. Without that crucial element, it becomes extremely difficult for organizations to create a culture in which performance is the primary feature of their product.
  2. Even with leadership support, performance can’t be effectively prioritized if the telemetry isn’t in place to measure it. Without measurement, it becomes impossible to explain how product development affects performance. If you don’t have the numbers, no one will care about performance until it becomes an apparent crisis.
  3. When you have the support of leadership to make performance a priority and the telemetry in place to measure it, you still can’t get there unless your entire organization understands web performance. This is the time at which you develop and roll out training, documentation, best practices, and standards the organization can embrace. In some ways, this is the space which organizations have already spent a lot of time in, but the challenging work is in establishing feedback loops to assess how well they understand and have applied that knowledge.
  4. When all of the other pieces are finally in place, you can start to create accountability in the organization around performance. Accountability doesn’t come in the form of reprisals when your telemetry tells you performance has suffered over time, but rather in the form of guard rails put in place in the deployment process to alert you when thresholds have been crossed.
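As one purely illustrative example of such a guard rail (the script, the dist/ layout, and the budget numbers below are assumptions, not something the essay prescribes), a deploy step might refuse to ship assets that exceed a gzipped size budget:

```typescript
// check-budget.ts — an illustrative guard rail for a deployment pipeline.
// Fails the build when a shipped asset exceeds its gzipped size budget;
// the budgets and the dist/ layout are assumptions for the sketch.
import { readdirSync, readFileSync } from "node:fs";
import { gzipSync } from "node:zlib";
import { extname, join } from "node:path";

const BUDGETS_BYTES: Record<string, number> = {
  ".js": 170 * 1024, // illustrative per-file budget for JavaScript
  ".css": 60 * 1024, // illustrative per-file budget for CSS
};

let overBudget = false;
for (const file of readdirSync("dist")) {
  const budget = BUDGETS_BYTES[extname(file)];
  if (!budget) continue;
  const gzipped = gzipSync(readFileSync(join("dist", file))).length;
  if (gzipped > budget) {
    console.error(`${file}: ${gzipped} B gzipped exceeds the ${budget} B budget`);
    overBudget = true;
  }
}
process.exit(overBudget ? 1 : 0);
```

The specific thresholds matter far less than the fact that the check runs on every deploy, so a regression is flagged before it reaches users rather than months later.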

Now comes the kicker: even if all of these things come together in your workplace, good outcomes aren’t guaranteed. Barring some regulation that forces us to address the poorly performing websites in our charge — akin to how the ADA keeps us on our toes with regard to accessibility — it’s going to take continuing evangelism and pressure to ensure performance remains a priority. Like so much of the work we do on the web, the work of maintaining a good user experience in evolving codebases is never done. I hope 2020 is the year that we meaningfully recognize that performance is about people, and adapt accordingly.

As technological innovations such as HTTP/3 and 5G emerge, we must take care not to rest on our laurels and simply assume they will heal our ills once and for all. If we do, we’ll certainly be having this discussion again when the successors to those technologies loom. Innovation alone can’t keep the web fast because making the web fast — and keeping it that way — is the hard work we can only accomplish by working together.

Smaller HTML Payloads with Service Workers

Short story: Philip Walton has a clever idea for using service workers to cache the top and bottom of HTML files, reducing a lot of network weight.

Longer thoughts: When you're building a really simple website, you can get away with literally writing raw HTML. It doesn't take long to need a bit more abstraction than that. Even if you're building a three-page site, that's three HTML files, and your programmer's mind will be looking for ways to not repeat yourself. You'll probably find a way to "include" all the stuff at the top and bottom of the HTML, and just change the content in the middle.

I have tended to reach for PHP for that sort of thing in the past (<?php include('header.php'); ?>), although these days I'm feeling much more Jamstack-y and I'd probably do it with Eleventy and Nunjucks.

Or, you could go down the SPA (Single Page App) route just for this basic abstraction if you want. Next and Nuxt are perhaps a little heavy-handed for a few includes, but hey, at least they are easy to work with and the result is a nice static site. The thing about these JavaScript-powered SPA frameworks (Gatsby is in here, too) is that they "hydrate" from static sites into SPAs as the JavaScript loads. Part of the reason for that is speed. No longer does the browser need to reload and request a whole big HTML page again to render; it just asks for whatever smaller amount of data it needs and replaces it on the fly.

So in a sense, you might build a SPA because you have a common header and footer and just want to replace the guts, for efficiency's sake.

Here's Phil:

In a traditional client-server setup, the server always needs to send a full HTML page to the client for every request (otherwise the response would be invalid). But when you think about it, that’s pretty wasteful. Most sites on the internet have a lot of repetition in their HTML payloads because their pages share a lot of common elements (e.g. the <head>, navigation bars, banners, sidebars, footers etc.). But in an ideal world, you wouldn’t have to send so much of the same HTML, over and over again, with every single page request.

With service workers, there’s a solution to this problem. A service worker can request just the bare minimum of data it needs from the server (e.g. an HTML content partial, a Markdown file, JSON data, etc.), and then it can programmatically transform that data into a full HTML document.

So rather than PHP, Eleventy, a JavaScript framework, or any other solution, Phil's idea is that a service worker (a native browser technology) can save a cache of a site's header and footer. Then server requests only need to be made for the "guts" while the full HTML document can be created on the fly.
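To make that concrete, here's a heavily simplified sketch of such a service worker. It is not Phil's actual code (his version also streams the response, which this skips), and the shell file names and the ?partial=true convention are made up for illustration:

```typescript
// sw.ts — a minimal sketch of the technique, not Phil's implementation.
// Assumes "WebWorker" is in the tsconfig "lib" so the service worker types exist,
// and that the server returns just the page "guts" when ?partial=true is appended.
const sw = self as unknown as ServiceWorkerGlobalScope;
const SHELL_CACHE = "shell-v1";
const SHELL = ["/shell-start.html", "/shell-end.html"]; // shared header/footer

sw.addEventListener("install", (event) => {
  // Cache the shared top and bottom of every page once, at install time.
  event.waitUntil(caches.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL)));
});

sw.addEventListener("fetch", (event) => {
  if (event.request.mode !== "navigate") return; // only intercept page loads

  event.respondWith(
    (async () => {
      const cache = await caches.open(SHELL_CACHE);
      const [start, end] = await Promise.all(
        SHELL.map(async (url) => (await cache.match(url))!.text())
      );

      // Ask the server for only the content that differs per page.
      const partialUrl = new URL(event.request.url);
      partialUrl.searchParams.set("partial", "true");
      const guts = await (await fetch(partialUrl.toString())).text();

      // Stitch a full HTML document back together on the fly.
      return new Response(start + guts + end, {
        headers: { "Content-Type": "text/html" },
      });
    })()
  );
});
```

A real implementation would also want cache versioning and a fallback to a plain fetch() when the partial request fails, but the shape of the trick is all there: the repeated header and footer never cross the network again after the first visit.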

It's a super fancy idea, and no joke to implement, but the fact that it could be done with less tooling might be appealing to some. On Phil's site:

On this site over the past 30 days, page loads from a service worker had 47.6% smaller network payloads, and a median First Contentful Paint (FCP) that was 52.3% faster than page loads without a service worker (416ms vs. 851ms).

Aside from configuring a service worker, I'd think the most finicky part is having to configure your server/API to deliver a content-only version of your stuff or build two flat file versions of everything.


Prettier

Prettier has become the predominant code formatting tool used by web designers and developers. It has a nice set of defaults that work great for CSS, JavaScript, and even HTML (and most of the preprocessors we support, like SCSS).

We've long offered one-click access to formatting, and now that formatting is powered by Prettier.

Here's an extreme example (video) of minified CSS:

Your CSS is more likely to have minor indentation weirdnesses and such, but hey, this thing can whip anything into shape. As long as it's valid, that is, which is an important point: if your code has syntax errors, we won't format it. It's just a silent failure for now; we'll have to see if we can improve the error messaging there. If your code doesn't format after selecting the option, that's probably why.

Here's some vanilla JavaScript:

And here's some HTML:

Note the formatting is a little unusual for some folks, with attributes on their own line and force-truncated line lengths. That's valid HTML and how it's commonly seen in JSX.
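If you're curious what this looks like outside of our UI, here's a rough sketch using Prettier's own Node API (generic Prettier usage, not our internal code); it also shows why invalid code is left alone, since Prettier throws on a parse error rather than returning half-formatted output:

```typescript
// Generic Prettier usage (Prettier 3.x, where format() is async).
import * as prettier from "prettier";

const minifiedCss = "a{color:red;background:blue}.card{margin:0;padding:1rem}";

// Valid input comes back fully expanded and re-indented.
const pretty = await prettier.format(minifiedCss, { parser: "css" });
console.log(pretty);

// Invalid input throws instead of returning partially formatted code, which is
// presumably why one-click formatting silently leaves broken code untouched.
try {
  await prettier.format("a{color:", { parser: "css" });
} catch {
  console.log("Syntax error: leaving the code as-is");
}

// The same API handles HTML and JavaScript by switching parsers,
// e.g. { parser: "html" } or { parser: "babel" }.
```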

The Complete Unit Testing Collection [Tutorials and Frameworks]

Breaking Unit Testing Down

Unit Test Lifecycle

Componentized Cloud Management: The Way Ahead For AWS Automation

When something gets complex, our primary approach is to break it down — even cloud management. If you’re a part of a growing company that uses the cloud, you can see your infrastructure becoming more complex as you scale and expand. That means its management does, too. 

In the past, you could put a few scripts together and use them to perform basic management functions, or you could use an external tool for certain functions, but such tools are limited in what they offer: only backups, or only scheduling, or only monitoring, all of which is a little too siloed. Now these methods simply don't scale, and it's not about "basic" cloud management functions anymore; we've moved on and become more sophisticated.

Hybrid Cloud: Cloud Rolls Out To Data Centers in Different Hues

The term "hybrid cloud" in popular vocabulary represents a topology in which an organization's IT infrastructure is spread over public cloud(s) and on-premise data centers. An on-premise data center can be the enterprise's own data center or any colocation facility used by the enterprise. Lately, hybrid has also been extended to include edge locations, whether in a device or at a telecom provider's site. These variants are sometimes also referred to as "private cloud."

Although in a utopian world the complete data center could be placed in the cloud, in reality there are invariably some use cases that require workloads to run on-premise. This is especially true of large enterprises with considerable IT assets, many of which need to continue to reside in the private cloud for various reasons.

DevOps: Architecture Monitoring

“It is not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change” – Charles Darwin

Software development is constantly changing. Teams need to be responsive to survive. DevOps was created to help organizations deal with constant change by responding quickly. This movement is designed to bring development and operations closer together so that they may collaborate and communicate more effectively.