Recognizing Constraints

There’s a “C” word in web development that we don’t give enough attention to. No, I’m not talking about “continuous integration”, or even “CSS”. The “C” word I’m talking about is “constraints”. Understanding constraints is a vital part of building software that works the best it can in its targeted environment(s). Yet, the difficulty of that task varies based on the systems we develop for.

Super Nintendo games were the flavor of the decade when I was younger, and there’s no better example of building incredible things within comparably meager constraints. Developers on SNES titles were limited to, among other things:

  • 15-bit color (32,768 possible colors).
  • Eight channels of stereo audio output.
  • Cartridges with storage capacities measured in megabits, not megabytes.
  • Limited 3D rendering capabilities on select titles that embedded a special chip in the cartridge.

(Image: an SNES cartridge for Secret of Mana.)

Despite these constraints, game developers cranked out incredible and memorable titles that will endure beyond our lifetimes. Yet, the constraints SNES developers faced were static. You had a single platform with a single set of capabilities. If you could stay within those capabilities and maximize their potential, your game could be played—and adored—by anyone with an SNES console.

PC games, on the other hand, had to be developed within a more flexible set of constraints. I remember one of my first PC games had its range of system requirements displayed on the side of the box:

  • Have at least a 386 processor—but Pentium is preferred.
  • Ad Lib or PC speaker supported—but Sound Blaster is best.
  • Show up to the party with at least 4 megabytes of RAM—but more is better.

(Image: the retail box for Hexen, with its system requirements on the side.)

If you didn’t have a world-class system at the time, you could still have an enjoyable experience, even if it was diminished in some ways.

Console and PC game development are great examples of static and variable constraints, respectively. One forces buy-in of a single hardware configuration to participate, while the other allows participation on a variety of hardware configurations with a gradient of performance outcomes.

Does this sound familiar?

Web developers arguably have the most difficult set of constraints to contend with. This is because we have to reconcile three distinct variables to create fast websites:

  1. The network.
  2. The device.
  3. The browser.

With every year that passes, I gain more understanding of just how challenging those constraints are to work within. It’s a lesson I learn repeatedly with every project, every client, and every new technology I evaluate.

Coping with the constraints the web imposes is hard work. The part of me that abhors how much JavaScript we ship struggles to know where to draw the line, the point at which too much becomes too much. Developer experience has a role in our day-to-day work, and we need just enough of it to grease the skids, but not so much that it tanks the user experience. Because, as our foundational documents tell us, users are first in line for consideration.

So what did I learn this year?

The same thing I relearn every year, just in a subtly different way every time: there are costs and trade-offs associated with our technology choices. This year I relearned—in clear and present fashion—how our technology choices can lock us into architectures that can both harm the user experience if we don’t step lightly and become increasingly difficult to break out of when we must.

Another thing I learned is that using the platform is hard work. Yet, the more I use it, the stronger my grasp on its abstractions becomes. Direct use of the platform isn’t always the best or most scalable way to work, but using it regularly, instead of installing whatever package scratches whatever itch I have right this second, helps me understand how the web works at a deeper level. That’s valuable knowledge that pays off over time, and building useful abstractions becomes much harder without it.
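
For example, a need I might once have solved by installing a package, lazy-loading offscreen images, is something the platform now handles directly. Here is a minimal sketch using IntersectionObserver; the data-src attribute convention is my own assumption for illustration, not a standard:

```typescript
// A minimal sketch of lazy-loading images with the native IntersectionObserver
// API instead of reaching for a package. The data-src attribute convention is
// an assumption for illustration, not a standard.
const lazyImages = document.querySelectorAll<HTMLImageElement>("img[data-src]");

const onIntersect: IntersectionObserverCallback = (entries, observer) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;

    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src!; // swap in the real source as the image nears the viewport
    observer.unobserve(img);    // each image only needs to be handled once
  }
};

// Begin fetching shortly before the image scrolls into view.
const imageObserver = new IntersectionObserver(onIntersect, { rootMargin: "200px" });
lazyImages.forEach((img) => imageObserver.observe(img));
```

(For images specifically, the native loading="lazy" attribute gets you most of the way there with no script at all; the point is that the primitives are already in the browser.)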

Finally, I learned yet again this year that our constraints are variable. It’s acceptable if some things don’t work as well as they should everywhere—but we need to be very mindful of what those things are. How acceptable those lapses in our responsibility to the public are depends on the function we serve. If it’s a remotely crucial function, we need to proceed with the utmost care and consideration for users. If this year of rising unemployment and remote learning has taught us anything, it’s that the internet is for more than commerce.

My hope is that the web becomes more adaptive in 2021 than it has been in years past. I hope that we start to have the same expectations for the user experience that we did when we were kids playing PC games—that an experience can vary in its fidelity in order to accommodate slower systems—and that’s a perfectly fine thing for the web. It’s certainly more flexible than expecting everyone to cope with the exact same experience, whether they’re on an iPhone 12 or an Android Go phone.
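
In code, that kind of adaptation can be as simple as checking the hints the platform already exposes before loading the heaviest parts of a page. A hedged sketch follows; loadLiteExperience and loadRichExperience are hypothetical stand-ins for your own code paths, and browser support for the connection and device-memory signals varies:

```typescript
// A sketch of adapting fidelity to the user's device and network, assuming
// hypothetical loadLiteExperience()/loadRichExperience() code paths of your
// own. The connection and memory signals are real platform APIs, but support
// varies by browser, so they're treated as optional hints.
type ConnectionHints = { saveData?: boolean; effectiveType?: string };

const loadLiteExperience = () => { /* hypothetical: static images, fewer scripts */ };
const loadRichExperience = () => { /* hypothetical: animation, video, rich embeds */ };

function shouldServeLiteExperience(): boolean {
  const connection = (navigator as Navigator & { connection?: ConnectionHints }).connection;

  // Honor an explicit request for reduced data usage.
  if (connection?.saveData) return true;

  // Treat slow effective connection types as a signal to dial fidelity down.
  if (connection?.effectiveType === "slow-2g" || connection?.effectiveType === "2g") return true;

  // navigator.deviceMemory reports approximate device memory in gigabytes.
  const deviceMemory = (navigator as Navigator & { deviceMemory?: number }).deviceMemory;
  if (deviceMemory !== undefined && deviceMemory <= 1) return true;

  return false;
}

if (shouldServeLiteExperience()) {
  loadLiteExperience();
} else {
  loadRichExperience();
}
```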



Innovation Can’t Keep the Web Fast

Every so often, innovation bears fruit in the form of improvements to the foundational layers of the web. In 2015, HTTP/2 became a published standard in an effort to update an aging protocol. This was both necessary and overdue, as HTTP/1’s limitations had turned web performance into an arcane discipline of strange workarounds. Though HTTP/2 proliferation isn’t absolute — and there are kinks yet to be worked out — I don’t think it’s a stretch to say the web is better off because of it.

Unfortunately, the rollout of HTTP/2 has presided over a 102% increase in the median number of bytes transferred over mobile connections in the last four years. If we look at the 90th percentile of that same dataset — because it’s really the long tail of performance we need to optimize for — we see an increase of 239%. From 2016 (PDF warning) to 2019, the average mobile download speed in the U.S. increased by 73%. In Brazil and India, average mobile download speeds increased by 75% and 28%, respectively, over that same period.

While page weight alone doesn’t necessarily tell the whole story of the user experience, it is, at the very least, a loosely related phenomenon that threatens the collective user experience. The story that HTTP Archive tells through data acquired from the Chrome User Experience Report (CrUX) can be interpreted a number of different ways, but this one fact is steadfast and unrelenting: most metrics gleaned from CrUX over the last couple of years show little, if any, improvement despite various improvements to browsers, the HTTP protocol, and the network itself.

Given these trends, all that can be said of these improvements at this point is that they have helped to stem the tide of our excesses, but have done precious little to reduce them. Despite every significant improvement to the underpinnings of the web and the networks we access it through, we continue to build for it in ways that suggest we’re content with the never-ending Jevons paradox in which we toil.

If we’re to make progress in making a faster web for everyone, we must recognize some of the impediments to that goal:

  1. The relentless desire to monetize every square inch of the web, as well as the army of third-party vendors that fuel the research mandated by such fevered efforts.
  2. Workplace cultures that favor unrestrained feature-driven development. This practice adds to — but rarely takes away from — what we cram down the wire to users.
  3. Developer conveniences that make the job of the developer easier, but can place an increasing cost on the client.

Counter-intuitively, owners of mature codebases that embody some or all of these traits continue to take the same unsustainable path to profitability they always have. They do this at their own peril, rather than acknowledging the repeatedly established fact that performance-first development practices will do as much — or more — for their bottom line and the user experience.

It’s with this understanding that I’ve come to accept that our current approach to remedying poor performance largely consists of engineering techniques that stem from the ill effects of our business, product management, and engineering practices. We’re good at applying tourniquets, but not so good at sewing up deep wounds.

It’s becoming increasingly clear that web performance isn’t solely an engineering problem, but a problem of people. This is an unappealing assessment, in part because technical solutions are comparatively inarguable. Content compression works. Minification works. Tree shaking works. Code splitting works. They’re undeniably effective solutions to what may seem like entirely technical problems.
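
To make one of those concrete, here is a minimal sketch of code splitting with a dynamic import(). The ./chart.js module and its renderChart export are hypothetical; the pattern of deferring heavy code until the user actually asks for it is the point:

```typescript
// A minimal sketch of code splitting with a dynamic import(). The "./chart.js"
// module and its renderChart export are hypothetical; bundlers emit dynamically
// imported modules as separate files that are fetched on demand.
const chartButton = document.querySelector<HTMLButtonElement>("#show-chart");

chartButton?.addEventListener("click", async () => {
  // The charting code doesn't ride along with the initial page load; it is
  // downloaded only once the user asks for it.
  const { renderChart } = await import("./chart.js");
  renderChart(document.querySelector("#chart-root"));
});
```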

The intersection of web performance and people, on the other hand, is messy and inconvenient. Unlike a technical solution as clearly beneficial as HTTP/2, how do we qualify what successful performance cultures look like? How do we qualify successful approaches to get there? I don’t know exactly what that looks like, but I believe a good template is the following marriage of cultural and engineering tenets:

  1. An organization can’t be successful in prioritizing performance if it can’t secure the support of its leaders. Without that crucial element, it becomes extremely difficult for organizations to create a culture in which performance is the primary feature of their product.
  2. Even with leadership support, performance can’t be effectively prioritized if the telemetry isn’t in place to measure it (a rough sketch of what that collection can look like follows this list). Without measurement, it becomes impossible to explain how product development affects performance. If you don’t have the numbers, no one will care about performance until it becomes an apparent crisis.
  3. When you have the support of leadership to make performance a priority and the telemetry in place to measure it, you still can’t get there unless your entire organization understands web performance. This is the time at which you develop and roll out training, documentation, best practices, and standards the organization can embrace. In some ways, this is the space in which organizations have already spent a lot of time, but the challenging work is in establishing feedback loops to assess how well that knowledge is understood and applied.
  4. When all of the other pieces are finally in place, you can start to create accountability in the organization around performance. Accountability doesn’t come in the form of reprisals when your telemetry tells you performance has suffered over time, but rather in the form of guard rails put in place in the deployment process to alert you when thresholds have been crossed (see the budget-check sketch below).
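
On the telemetry point in item 2, the collection itself doesn’t have to be elaborate to be useful. Here is a rough sketch of field measurement using the browser’s built-in PerformanceObserver; the /analytics/perf endpoint is an assumption, standing in for whatever your own pipeline expects:

```typescript
// A rough sketch of field telemetry (RUM) using the browser's built-in
// PerformanceObserver. The /analytics/perf endpoint is an assumption; point
// this at your own backend or analytics pipeline.
const lcpObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Each largest-contentful-paint candidate is reported as it is observed;
    // the last candidate before user input is the page's effective LCP.
    navigator.sendBeacon(
      "/analytics/perf",
      JSON.stringify({
        metric: entry.entryType,
        value: entry.startTime,
        page: location.pathname,
      })
    );
  }
});

// buffered: true replays entries recorded before the observer was created.
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
```

Libraries such as web-vitals wrap these details up more robustly, but even the raw platform APIs are enough to start building the historical record the rest of this list depends on.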

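As for the guard rails in item 4, one form they can take is a small budget check that runs before deployment. Everything specific in this sketch, the dist/assets directory and the 170 KB figure, is an assumption to adapt to your own build:

```typescript
// A hypothetical CI guard rail: fail the deployment step if any JavaScript
// bundle exceeds a byte budget. The dist/assets path and the 170 KB figure
// are illustrative assumptions, not recommendations.
import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";

const BUDGET_BYTES = 170 * 1024;  // assumed per-bundle budget
const bundleDir = "dist/assets";  // assumed build output directory

const oversized = readdirSync(bundleDir)
  .filter((file) => file.endsWith(".js"))
  .map((file) => ({ file, size: statSync(join(bundleDir, file)).size }))
  .filter(({ size }) => size > BUDGET_BYTES);

if (oversized.length > 0) {
  for (const { file, size } of oversized) {
    console.error(`${file} is ${size} bytes, over the ${BUDGET_BYTES}-byte budget.`);
  }
  process.exit(1); // a crossed threshold blocks the deployment
}
```

Purpose-built tools such as Lighthouse CI, or a bundler’s own performance budgets, do the same job with more nuance; what matters is that the check runs automatically rather than relying on someone remembering to look.
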
Now comes the kicker: even if all of these things come together in your workplace, good outcomes aren’t guaranteed. Barring some regulation that forces us to address the poorly performing websites in our charge — akin to how the ADA keeps us on our toes with regard to accessibility — it’s going to take continuing evangelism and pressure to ensure performance remains a priority. Like so much of the work we do on the web, the work of maintaining a good user experience in evolving codebases is never done. I hope 2020 is the year that we meaningfully recognize that performance is about people, and adapt accordingly.

As technological innovations such as HTTP/3 and 5G emerge, we must take care not to rest on our laurels and simply assume they will heal our ills once and for all. If we do, we’ll certainly be having this discussion again when the successors to those technologies loom. Innovation alone can’t keep the web fast because making the web fast — and keeping it that way — is the hard work we can only accomplish by working together.
