Chris Corner: Unusual Ideas with Great Results

SVG Short Circuiting

SVG is normally a pretty efficient file format. If an image is vector in nature, leaving it as vector is normally a good plan, as it will likely scale well and look pretty darn crisp. But of course, It Depends. Super complex vector graphics can get huge, and a raster version (i.e. JPG, PNG, etc.) can actually be smaller. This can happen with tiny images too, where a straight-up low number of pixels is just pretty efficient.

This should be the kind of thing computers are good at, right? You’re in luck if you’re using Eleventy. Zach wrote about a feature of Eleventy’s Image plugin called SVG Short Circuiting. The idea: if your source image is SVG, the plugin can make raster versions to help with efficiency, but if the SVG ends up smaller than any of the produced raster versions, it discards the rasters.
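Here’s roughly what that looks like wired up with the @11ty/eleventy-img plugin (a minimal sketch; the paths, widths, and formats are assumptions for illustration):

const Image = require("@11ty/eleventy-img");

(async () => {
  const metadata = await Image("./src/images/illustration.svg", {
    formats: ["svg", "webp", "png"],
    widths: [400, 800],
    // with "size", raster outputs are thrown away whenever the
    // original SVG is smaller than they are
    svgShortCircuit: "size",
    outputDir: "./_site/img/",
  });
  console.log(metadata);
})();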

A nice looking font that helps dyslexia

Worth knowing:

According to the International Dyslexia Association, as much as 15 to 20 percent of the U.S. population may have symptoms of dyslexia. Those include slow or inaccurate reading, weak spelling, and poor writing.

Jill Stakke

Also worth knowing: these people, and really probably anybody, can be helped along with better typefaces. That is, typefaces designed in such a way that they are less confusing and less problematic for people with dyslexia.

I’ve seen Dyslexie before, which is pretty neat. But to be frank, it does look a smidge childish which might make it a tough choice when a brand voice needs to be more serious looking. A crappy trade-off, but such is life.

I’ve just seen Olivia King’s Inclusive Sans which, to my eyes, is extremely nice looking and covers the general criteria laid out by Sophie Beier in Designing for Legibility.

  1. Clear distinction between I, l and 1
  2. Non-mirroring of letters d, b, q and p
  3. Distinction between O and 0
  4. Wider, more open counter forms on c, o, a and e
  5. A higher x-height for easier readability at small sizes
  6. Wider default letter-spacing
  7. Clear difference between capital height and ascender height

Just look at how #2 is handled:

Super classy if you ask me. I wanna use it for something. I’m stoked at how good it looks at body copy sizes.

An HTML element as a mask

The vast majority of masks are either shapes in black/white, such that they hide or reveal what is behind them in exactly that shape, or gradients, such that they fade out what is behind them little by little.

Artur Bień has another idea of what a mask can be: any HTML element. You can set up a simple-but-clever SVG filter that knocks out all the black.

I gave it a quick shot myself just to have a play and it worked great.
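Here’s a minimal sketch of the kind of filter involved (the exact matrix in Artur’s article may differ; this one sums RGB into the alpha channel so pure black becomes fully transparent):

<svg width="0" height="0" aria-hidden="true">
  <filter id="knock-out-black" color-interpolation-filters="sRGB">
    <!-- the last row computes alpha as R + G + B, so black (0,0,0) ends up transparent -->
    <feColorMatrix type="matrix"
      values="1 0 0 0 0
              0 1 0 0 0
              0 0 1 0 0
              1 1 1 0 0" />
  </filter>
</svg>

Then the element acting as the mask layer gets filter: url(#knock-out-black) in CSS.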


Now that you’re primed into thinking of layering things on top of each other and doing exotic filtering to get weird and cool results, you’re ready for this next one.

Javier Bórquez: Motion extraction with mostly CSS.

Say you wanted to look at a video where only the things that are moving are visible, and the rest is essentially blacked out. Why? I don’t know, don’t think about that part too hard. Maybe it’s a way to spot changes in security footage more easily. Or, more likely, it’s just a really cool final effect.

You’d think getting that done would involve sophisticated video processing technology. But nope: CSS. The trick is so perfect:

One video is placed on top of the other, playing slightly ahead. Then, by styling the top video with mix-blend-mode: difference in CSS, we make it so only the pixels that have changed between the two frames are shown.
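In code, that might look something like this (the video file and the exact delay are placeholder assumptions):

<div class="motion">
  <video class="base" src="clip.mp4" autoplay muted loop></video>
  <video class="delayed" src="clip.mp4" autoplay muted loop></video>
</div>

<style>
  .motion { position: relative; }
  .motion .delayed {
    position: absolute;
    inset: 0;
    /* identical pixels cancel out to black; only moving pixels survive */
    mix-blend-mode: difference;
  }
</style>

<script>
  // nudge the top copy slightly ahead so the two frames differ
  const delayed = document.querySelector(".delayed");
  delayed.addEventListener("loadedmetadata", () => {
    delayed.currentTime = 0.05;
  });
</script>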

So cool. That’s my favorite trick I’ve seen in a while.

Single Element Gradient Borders

Actually, I have another trick that is right in the zone with the last two and is also just extremely cool. You gotta admit the gradient border look is pretty hot right now.

There are a number of ways to pull that off, but they typically involve multiple stacked elements and decently involved trickery or limitations. The above is just one element, and it’s showcasing how you also aren’t limited in what you can do in the body of the element (there, it’s using a backdrop-filter blur).

Ben Frain documents a trick he found in the freeCodeCamp forums. You slap a pseudo element on the main element to create the border, and then essentially knock out a hole in the middle.

Here is the clever bit I have never seen before; we then use a mask, and a mask composite. This allows us to create a ‘shape’, that our gradient border will inhabit. To create this shape, we need to composite two images together and find the difference. That might sound like a lot of work but we can make those two images with CSS using a linear-gradient. It doesn’t matter that the linear-gradient is actually just a flat white colour, the fact that it is defined as a linear-gradient means that the browser renders the outcome of that notation as an image and the image can be composited. So the first mask is a linear gradient set to the padding-box, which then crucially does not include the border, and the second gradient is the full size, and the difference between them is the border shape. Genius!!!!

Genius indeed.
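A commonly shared version of the technique looks something like this (my colors and border width; Ben’s exact declarations may differ):

.gradient-border {
  position: relative;
  border-radius: 1rem;
}

.gradient-border::before {
  content: "";
  position: absolute;
  inset: 0;
  border-radius: inherit;
  padding: 2px; /* this becomes the border thickness */
  background: linear-gradient(45deg, deeppink, orange);
  /* two flat-white "images": one covers everything, one covers only the content box */
  mask:
    linear-gradient(#fff 0 0) content-box,
    linear-gradient(#fff 0 0);
  /* compositing them knocks out the middle, leaving just the ring */
  mask-composite: exclude;
}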

Chris’ Corner: Design the Job

Y’all use Figma for design work? I’d be willing to bet a lot of you do at your organization. I’m still wrapping my brain around the fact that Adobe has to write a billion dollar check to not acquire it. It’s no wonder why they wanted it — there is a new household name in design software after Adobe had that on lock for(counts fingers)ever.

I have no particular allegiances, except to the web, so I’m pleased that Figma is very web native. I’m also impressed that Photoshop is a website now, too, but Figma entirely embraces webness. Figma has been doing lots of releases focused on web developers. Variables seems big. Cool to see CodePen alum Jake talk Code Connect, a way to wire up your real componentry with Figma (!!).

Not to mention their whole Dev Mode thing, a way of using Figma that’s more like a developer consuming what’s there rather than a designer building what’s there.

You even build in Figma with components, which obviously jibes with most modern-day web development. But there is some buy-in cost to building a component. I think of this banger intro paragraph from Blair Culbreth:

When to make components in Figma? Start too soon in your design process and you feel too locked in when you’re still experimenting. Too late and suddenly going back and componentizing is a huge undertaking.

Eternal Struggle of the Systemless Design File: Getting Into the Habit of Using Components in Figma

I don’t have any great advice beyond what Blair says. I really like her advice on making a Header and Footer component right away. At least you’ll have that going for you. Then make more as soon as it’s… fairly obvious you should.


Speaking of componentry, the one everyone always thinks of first is the Button. Of course, all websites are littered with the things. The variations can be absolutely endless: sizes, colors, groups, icons, full width, toggles. Then of course all the different states. Do you consider the links inside menus buttons? Sometimes? Phew. I likely can’t write anything you haven’t heard before, but I do like reading what other companies actually do. For example, Domas Markevičius’ Designing the perfect button for Wix. It sounds like they had some success thinking about buttons from fairly first principles, so, hey, nice. I particularly like the focus on clarity:

A button must clearly communicate what it does, with zero space for interpretation. Text is the primary element that explains intention.


Speaking of what companies actually do, I appreciated Design Engineering at Vercel. Watch all the little 8 second videos in the post! They offer a pretty decent definition even:

Design Engineers care about delivering exceptional user experiences that resonate with the viewer. For the web, this means:

  • Delightful user interactions and affordances
  • Building reusable components/primitives
  • Page speed
  • Cross-browser support
  • Support for inclusive input modes (touch, pointers, etc.)
  • Respecting user preferences
  • Accessible to users of assistive technology

There is a lot of work behind the pretty pixels. Design Engineers must go beyond visual appeal and ensure the other pieces that make an exceptional user experience are taken care of.

Even the job title Design Engineering feels relatively new, almost retroactively applied to people already doing that style of work. Maybe Design Engineering is a way for the less-JavaScript-focused side of The Divide to come back into good graces. I love writeups of specific examples, like Jim there writing about the intricate details of how a resizer bar works (don’t we know it). It certainly feels good to see the work being appreciated instead of the gaslighting situation mired in the opposite.

Selfishly, I think of Design Engineering partially as “the kind of cool stuff you see on CodePen a lot, but you do it for a job.”

Chris’ Corner: Server Side Reconnaissance

If you tend to follow React stuff, you might know that React has a new thing called “Server Components”. Mayank has an excellent blog post about them. It starts out by calling out the nice things about them, and then fairly calls out all sorts of not-so-good things. Me, I think it’s all weird as hell. Just the fact that React, “just a UI library” for so long, now needs a Node.js server behind it to take full advantage is a heck of a leap. And it’s already gone so far that you have to say "use client" when you want a component not to be a server component? (But actually it means: “it’s both a server component and a client component”.) Ooof.

I’d link you to the docs for Server Components, but there aren’t any. There is just an update blog post, and a little mention in the Bleeding-edge React frameworks section:

These features are getting closer to being production-ready every day, and we’ve been in talks with other bundler and framework developers about integrating them.

So if you want to use them, you may only do so in Next.js. If you’d like to build them into your own framework, you’ll have to hold your breath until the React team reaches out to collaborate. Maybe that’s a little unfair, but I don’t see what you’d read to get started with it all, aside from digging through Next.js code to see how they did it. We use Next.js here at CodePen, so we’ll be able to take advantage; I just think it all feels strange.

Next.js is an ultra popular way to use React. So if you happen to be using Next and staying up to date, you’re using Server Components. That might be advantageous to you. That’s frameworks at their best, really. You do very little, frameworks evolve, and you take advantage of the magical things they do behind the scenes. But reality has shown that framework upgrades can be painful. Rarely are there major version upgrades that don’t require work due to incompatibilities. One casualty of this Server Components changeup is most of the CSS-in-React landscape.

Josh Comeau has a solid deep dive into this situation. It’s ultimately a pretty simple problem with no real solution at the moment for some of the libraries, like styled-components, arguably the biggest player:

The fundamental incompatibility is that styled-components are designed to run in-browser, whereas Server Components never touch the browser.

Internally, styled-components makes heavy use of the useContext hook. It’s meant to be tied into the React lifecycle, but there is no React lifecycle for Server Components. And so, if we want to use styled-components in this new “React Server Components” world, every React component that renders even a single styled-component needs to become a Client Component.

It’s not the end of the world because, well, if you use them your stuff will need to be client-side only like it already is.

Just to prove what a smart, forward-thinking, attractive, good-smelling person I am: I’ve long been a fan of CSS Modules, and they have no such problem, as they don’t promise to do dynamic things that only JavaScript can do; it’s largely just a scoping API. Likewise, any of the other CSS-in-React libraries that promise “Zero-Runtime” are in good shape. That was really the way to go all along, if you ask me. Styling choices shipping as static CSS is with the grain of the web in a good way.
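For the unfamiliar, the CSS Modules idea in miniature (file and class names made up):

/* Button.module.css — classes here are scoped to whatever imports this file */
.button {
  padding: 0.5rem 1rem;
  border-radius: 0.5rem;
}

A component imports it with import styles from "./Button.module.css" and renders className={styles.button}; the build step rewrites the class to a unique hashed name, so nothing ships to the client but plain, collision-free, static CSS.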

React is evolving in other similarly massive ways as well. Adrienne Ross has a pretty great rundown in Get your codebase ready for React 19. The massive thing is that it’s going to be a compiled framework (!!!!?!). So totally gone is the “it’s just a UI library” situation that was never really true but now is extremely very not true. While it’s a massive change for React itself, I imagine it won’t be a massive change for developers. People developing with React are almost certainly using a build process anyway and the React compiler will become a part of that. If you’re on the Next bandwagon, surely it will smurf its way into that pipeline. Maybe it will only be available in Next?! I’d say that sounds wild, but since that’s literally what is going on with Server Components, it almost seems likely.

Svelte feels like the first major framework of this generation to require a compiler, and by and large I think people applaud it. It makes client-side bundles smaller and it makes authoring easier. Easier, because you aren’t responsible for figuring out specific details when the framework needs help being performant. React’s useMemo and useCallback are performance-specific hooks that, if you aren’t using them (or are using them incorrectly), are hurting your application. That sucks. The fact that you don’t have to think about them anymore with a compiler is a welcome upgrade.
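For context, this is the sort of manual memoization the compiler aims to make unnecessary (a hypothetical component; the props are assumptions):

import { useMemo, useCallback } from "react";

function List({ items, byName, setSelected }) {
  // without useMemo, this re-sorts on every single render
  const sorted = useMemo(() => [...items].sort(byName), [items, byName]);

  // without useCallback, children see a "new" function identity each render
  const onSelect = useCallback((id) => setSelected(id), [setSelected]);

  return sorted.map((item) => (
    <button key={item.id} onClick={() => onSelect(item.id)}>
      {item.name}
    </button>
  ));
}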

What I’d like to see, and I know there are many who agree here, is client-side bundle sizes actually coming down. In that first article I linked up, Mayank noted that despite Server Components existing now, JavaScript bundles headed to the client are increasing. Again, that sucks. I’m sure the story is very different in fully fleshed-out applications that can take big advantage of Server Components than it is for a Hello, World scaffold, but still, we want them coming down across the board.

React is such a monster player on the web right now, I think of it in the Too Big to Fail category. Whatever whacky choices they make, developers will just fall in line. Companies write checks for developers that know React, and the job market sucks right now, so the pressure is even higher to know React. Perhaps even force yourself to love it.

Technologies do tend to come and go. I’m sure we all have our own examples of web tech that was once big and is now all but gone, or at least gone from good graces. But when tech gets big enough, it tends not to go. WordPress is huge, and it’s been huge every second of my entire web dev career. To me it echoes social media in a way. In the middle days of Facebook, its demise was often predicted. Friendster died, after all. MySpace bit the dust. Google+ came and went. People are fickle. So too would Facebook die and be replaced by the new and shiny. But it didn’t, and its demise is no longer predicted. It’s too big to fail. So too is React.

Chris’ Corner: Things I Like

I like Melanie Sumner’s coining of the phrase Continuous Accessibility. To me, it’s a play on the term Continuous Integration (CI), which is very pervasive. We build CI pipelines primarily to lint and test our code, but all sorts of things can be done. We can test new code for performance regressions, for example. So too could, and should, we be testing our new code for accessibility issues. Continuously, as it were. I’m very guilty of failing at the continuous part. I care about and act on accessibility issues, but I tend to do it in waves or sprints rather than all the time.


I like the career value that Ben Nadel assigns to learning RegEx. He says not a day goes by that he doesn’t use them in some form (!). I wouldn’t say that’s true for me, but certainly every week or two I need to think about them, and over the years, my fear factor in using them has scaled down to zero. They really aren’t that bad; it’s just a long, steady learning curve to the point where eventually you feel like, even if I’m slow, I can ultimately reason this out. Either figuring out an existing one or writing a new one. Ben doesn’t just talk about it abstractly, he lists loads of practical examples.

Before I move on, allow me to show you Hillel Wayne agreeing with Regexes are Cool and Good (that’s my kind of title). Hillel mentions some valid reasons why people have a distaste for them, but then brings up a super good point: RegExes are particularly good when you’ve got some muscle memory for them and use them in little one-off use cases.

… where regex really shines is in interactive use. When you’re trying to substitute in a single file you have open, or grep a folder, things like that. Readability doesn’t matter because you’re writing a one-off throwaway, and fragility is fine because you’re a human-in-the-loop. If anything goes wrong you will see that and tweak the regex.

Heck yeah. If you get good at it, using them for find-and-replace in your code editor can make you look like a damn superhero.
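For example, a hypothetical one-off in an editor’s regex find-and-replace, converting CommonJS requires to imports:

Find:    var (\w+) = require\('([^']+)'\);
Replace: import $1 from '$2';

Throwaway, fragile, and totally fine, because you’re watching every replacement happen.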

Oh and thank heavens for RegEx101 and sites like it. So good.


I really like the CSS only “scroll-to-top” trick that David Darnes created and Stefan Judis wrote up. It’s just so deliciously clever. A scroll-to-top link is just a UX convenience and accessibility feature, as it not only scrolls to the top but moves focus back up there as well. It’s like…

<a href="#top" class="back-to-top-link">Back to Top</a>

But where and when do you show it? It could just be down in the footer of a site. But a classy way to do it is to show it on long-scrolling pages pretty much all the time — just not when the page is already scrolled to the top. Say you want to wait until the user has scrolled down at least 200px or something like that. Feels like JavaScript territory, but no, that’s where David’s trick shines: this can all be done in CSS.

The bare-bones part of the trick:

.back-to-top-link {
  margin-block-start: calc(100vh + 200px);
  position: sticky;
  bottom: 1rem;
  left: 1rem;
}

Here’s a demo.


I like the relative color syntax. Support for it is coming along and it’s in Interop 2024 so “actually using it” isn’t terribly far away. What I like is that it allows you to manipulate a color on the fly in a way that will actually work well without needing anything other than native CSS.

Thought process:

  • I’ve got this orange color
  • I wish I had a version of it that was a bit darker
  • … and is a bit alpha transparent.

So I’ve got the color:

body {
  --color: #f06d06;
}

Then I can use it, and I can use my modified version easily.

.box {
  background: var(--color);
  border: 20px solid oklch(from var(--color) calc(l - 0.5) c h / 0.5);
}

I’m using OKLCH there because it has “perceptually uniform lightness” so if I manipulate all my colors the same amount, they will feel the same amount manipulated. I don’t have to use that function, I could use rgb() or hsl() or even the generic color(). That’s the thing with the relative color syntax, it’s not any particular function, it’s largely that from keyword.


I like the idea of things challenging the dominance of npm. Much like Deno is challenging Node, and is literally from the same creator, now vlt is challenging npm and is from the same creator. Well, he’s on the team anyway. I remember listening to Darcy Clarke on Syntax saying some smart stuff about this new package manager possibility. Like, I’m the user, right? It’s my computer, and I’m asking this tool to go get a package for me. Why can’t I say “don’t get the README though, I don’t need it, and definitely skip the 1.5 MB JPG in that README, I just need the JavaScript file.” Makes a lot of sense to me. Give me a pre-built version! Don’t give me the types, I’m not using TypeScript. That kind of thing. I’m sure that’s like 1% of what this thing will be able to do; I just like the fresh thinking. I’m sure it’s more about the security. We’ve got JSR in there shaking stuff up now too, which I like because it isn’t friggin lowercase.

Chris’ Corner: Tricks With CSS

There are plenty of very legit reasons you’d want to have a scrolling element start out scrolled to the bottom, and stay scrolled to the bottom (as long as a user hasn’t scrolled back up). As ever, you could do this with JavaScript, as JavaScript can adjust the scroll positions of elements. There is a way to do this primarily with CSS now that the overflow-anchor property exists, and I think it’s an extremely great CSS trick.

There is another way though! Kitty Giraudel covers it in CSS-only bottom-anchored scrolling area. The base of the trick is quite simple and requires no additional elements. You just set flex-direction: column-reverse; and then put the HTML inside in reverse order. So the stuff you want pinned at the bottom visually you put at the top of the element. In a way, this makes sense to me, as the thing you want to read first is at the top.

Element with scrolling pinned to bottom (as long as you add the stuff at the visual-bottom to the top of the DOM). Think of a chat interface.
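The core of it is tiny (a sketch; the class name is mine):

.chat-log {
  display: flex;
  /* visual order is flipped: the first child in the DOM renders at the bottom */
  flex-direction: column-reverse;
  overflow-y: auto;
  height: 20rem;
}

Because the content flows upward, new messages prepended to the top of the DOM stay pinned to the visual bottom until the user scrolls away.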

But there is an accessibility concern that Kitty notes. It “creates a disconnect between the visual order and the DOM order, which can be confusing for screen-reader users.” I’d want to verify that with a screen reader user, I think (it probably applies mostly to people who use a screen reader and have some vision). But it’s a good point and a classic problem that comes up any time you use CSS to position things in such a way that they appear visually different from what the source order suggests. I’m sure you can imagine the awkwardness of focus states jumping around the screen unpredictably.

The thing that makes all this so news-worthy to me is that CSS is working on a solution for this that I didn’t know about:

reading-order: normal | flex-visual | flex-flow | grid-rows | grid-columns | grid-order

In our case, we could use reading-order: flex-visual to align the way sighted users and screen-reader users consume our feed.

So we’ve reversed the order using flexbox, but we can make the elements still read top-to-bottom (visual order) by forcing it with this property. I might argue, again, that in this case, users might want to read bottom-to-top. But at least you’ve got options now.

And this reading-order stuff is generally interesting. Like, if you use flexbox and totally mess with where the flex items are placed with the order property (placing, for instance, the 7th item in the 2nd place, and the 19th item in the 1st), updating the reading order to flex-visual will be great. I notice there is no grid-visual though, which is curious, since you can mess with the order of grid just the same.


Jonathan Snook has a play with the idea of lenticular cards. Those are the ridged plastic novelty cards that show two different images depending on the angle you look at them from. Or more!

Since Apple released Live Photos, I’ve always felt like they could be used to create a similar effect and yet, no photo app that I’ve seen has implemented it, from what I’ve come across.

I enjoyed playing with the demo on mobile (where the DeviceOrientation API is a thing):

I love the experimentation spirit here. Like thinking of something you think should exist, but doesn’t seem to in an obvious way, then building it anyway.

Yair Even Or had the idea that a box-shadow could be cool if it… wasn’t actually a shadow, but was a blur instead.

The implementation is that perfect tornado of cleverness that appeals to me. It’s not incredibly complicated, but it requires a number of different CSS features that you might not think of immediately. In the end, it’s:

  • Place a pseudo element behind the element, a specified amount larger than the original element.
  • Blur the background with that pseudo element using backdrop-filter.
  • This doesn’t “fade out” the effect like a box-shadow would naturally, so instead, two masks are used to fade out the effect (vertical and horizontal).
  • Mask compositing is used to combine the masks.

I think the two masks are needed because of the rectangular nature of the element. I’d be tempted to try it with a single radial-gradient, but I think you’d lose blurring near the corners.
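Putting those steps together, a rough sketch (the sizes and blur amount are arbitrary; Yair’s exact values differ):

.blur-shadow {
  position: relative;
}

.blur-shadow::before {
  content: "";
  position: absolute;
  inset: -20px; /* extend past the element by the "shadow" size */
  z-index: -1;
  backdrop-filter: blur(8px);
  /* fade the effect toward the edges, one mask per axis */
  mask:
    linear-gradient(to right, transparent, #fff 20px calc(100% - 20px), transparent),
    linear-gradient(to bottom, transparent, #fff 20px calc(100% - 20px), transparent);
  /* keep only where both masks overlap */
  mask-composite: intersect;
}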


Dan Wilson always does a good job looking at new CSS features and the possibilities they unlock. Particularly the new features that are a bit esoteric, or seem to be at first glance, like math functions.

In The New CSS Math: pow(), sqrt(), and exponential friends, Dan looks at those new CSS functions (and a few more) and points out some somewhat practical things they can do. For example, a typographic system where the header sizes aren’t a straight multiple of one another, but grow on a curve. Or simulating an easing effect by animating a number linearly, but having the movement distance calculated by a pow() on that number. There is even a function now that makes quick work of the Pythagorean theorem.
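The type-scale idea might look like this (the ratio and step names are my own assumptions):

:root {
  --ratio: 1.25;
  /* each step grows along an exponential curve, not a flat multiple */
  --step-1: calc(1rem * pow(var(--ratio), 1));
  --step-2: calc(1rem * pow(var(--ratio), 2));
  --step-3: calc(1rem * pow(var(--ratio), 3));
}

h3 { font-size: var(--step-1); }
h2 { font-size: var(--step-2); }
h1 { font-size: var(--step-3); }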

If you’re into this stuff, Dan looked at rem() and mod() here, which are similar methods for determining what is left over when you divide one number into another. Is 9 divisible by 3? Yes, and you can know because the remainder is 0. But in web design, you could do things like figure out how many 125px grid columns could fit into 693px of space, if you needed to.
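A tiny sketch of that last one, using the numbers from the example:

.grid {
  /* 125px fits into 693px five times, with 68px left over */
  --leftover: mod(693px, 125px);
  /* e.g., split the leftover space evenly on both sides */
  padding-inline: calc(var(--leftover) / 2);
}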

Dan has looked at trig functions as well, and shortly after that, Hypersphere looked at simulating randomness in CSS with those functions. The sin() function, for example, modulates from -1 to 1. So by farting around with that and incorporating a seed value, you can build pretty darn random-looking CSS output.


The can’t-miss link recently is Ahmad Shadeed’s An Interactive Guide to CSS Container Queries. His interactive guides are always outstanding. This one is full of practical examples of where container queries are useful.

Chris’ Corner: Hard Things

Julia Evans has an extremely relatable and extremely charming talk in Making Hard Things Easy. Julia has a way of putting her finger on technology concepts that are notoriously difficult and making them easier to understand. She does this both by sharing her own tactics, like learning a reduced set of options or commands, as well as by producing very approachable guides.

I like her formula: infrequent use + lots of gotchas = disaster.

(As a CSS guy who regularly hears people complain about CSS, this tracks.)

Another trick to avoiding that disaster is… using computers! Tools like linters can help you fix (or avoid) the very mistakes that can make a technology frustrating or error-prone. She uses the tool ShellCheck, which I’d never heard of, as an example of avoiding problems in Bash scripts. Then there’s sharing, when you find tools like this that actually help you. I found that last bit especially interesting. It’s good to be “intellectually honest” about sharing tools that really have helped you, not tools that merely seem like they could help you because they look nice or whatever.


Speaking of hard things… you know what can be hard? Refactoring. I’ve probably over-repeated this, but David Khorshid once said “It should be called legendary code, not legacy code”, referring to the idea that code that is in production doing work, even if you think it might be sloppy, inefficient, or inelegant, is literally doing the job it needs to do. Whereas some theoretically rewritten wonderful code has yet to prove itself.

Miroslav Nikolov writes:

Code refactoring may cost a fortune if not done right. A dysfunctional revamped system or new features coupled with incorrect rewrite is, with no doubt, damaging. One can argue to what extent.

Refactoring code can be very dangerous, so it’s worth being very considerate about what you’re doing. A few of Miroslav’s points:

✅ Isolate improvements from features. Do not apply them simultaneously.

❌ Do not mix expensive cleanups with other changes. But do that for small improvements.


This makes me think about TypeScript.

TypeScript is (uh, obviously) newer than JavaScript, so there is a good amount of code out there that has been refactored into TypeScript. Whether that was worth it or not is up for debate. People who love it might say that a refactor like this actually makes the code safer, and they probably aren’t wrong to some degree, although it wouldn’t be hard to argue that any refactored code carries risk.

There is also a cost to the TypeScript itself. Build tooling and whatnot, of course, but also the syntax itself. Remy Sharp has made the call that his own personal code isn’t in TypeScript, partially for this reason:

A “well crafted” definition, type or interface (still no idea when I should use each), is often a huge cognitive load on me.

Being presented with lots of double colons, <T> when I’m not sure what T refers to, a wall of interfaces and more is an upfront cost on me, the reader.

Often the types will be tucked away in other files (probably good) but working out the argument required to a function call often leaves me distracted in the task of understanding what’s required rather than making my function call.

I feel that. I’m slowly getting better at TypeScript myself, because at CodePen we’ve decided to take advantage of it when we can. I can see the value in it fairly regularly, but I’m also fairly regularly frustrated by it and question the hours lost. I’ve felt this way for years, and I’m still not quite sure what to make of that.


One of the reasons you might be refactoring something is because you’ve decided on some new abstractions. A classic, in my experience, is that you’re adding, dropping, or changing a framework. The old one just isn’t doing it anymore, times have changed, and you either want to go vanilla or move to something more modern. There is probably some kind of axiom where any sufficiently large codebase is always undergoing at least one refactoring per hundred thousand lines of code or the like.

Have you read the Hammer Factories thing? It’s a pretty satisfying read, save for a few dated stabs at comedy that read pretty misogynistically. Sometimes you just need a hammer, is the thing; it’s clearly the right tool for the job, but the industry wants you to use some all-in-one hammer, wait, no, a hammer factory, wait, no, a factory for building hammer factories, wait, no…


It feels true to me that front-end-specific work has always been treated as lower-value than back-end work. Don’t hate me, but part of me feels like that’s fair. I’m a front-end guy myself and actually think it’s extremely valuable, but ultimately most products’ real value lies in some kind of unique back-end magic. The problems on the back end, on the whole, are harder and riskier and scarier, and that translates to higher-paying roles. Of course there is tons of nuance here. A product with a very decent back end and total garbage front end is likely to have problems catching on, and may outright fail because of a poor experience for the people actually using the thing, and making an experience people love is weighted toward the front end. Or as Josh Collinsworth recently wrote:

In many ways, CSS has greater impact than any other language on a user’s experience, which often directly influences success. Why, then, is its role so belittled?

There used to be a time when, if you knew both front end and back end, you were a unicorn: it was considered very rare and you were a powerful force in this industry. Now unicorns are dead. We call that “full stack” now and it’s all but expected that you are one. Especially if you’re skilled in the front end, you can’t just say that; you have to say “full stack” or your job prospects ain’t looking great. Then the actual expectations of full stack mean that you’re good at the JavaScript stuff, you’re fine with the work that connects that JavaScript client work with JavaScript on the server, and you know enough front end to use a design system, library, or hack some workable things together.

It’s just a thought, anyway. It solidified in my mind reading Andrew Walpole:

The full-stack developer was borneth!

It looks great on paper, especially to the payroll department: One person to fill traditionally two roles. But in reality, we know it doesn’t work that way. It may be a role for a technology generalist to thrive in, but one person’s effort is finite, and consistent, quality development across the entire product development spectrum requires focus and expertise. Nevertheless, start-ups soaked up the efficiency, and in a tumultuous churn of web tech it was a decent defense.


There is a new Node.js website, and it’s always fun to read a little behind-the-scenes. That would be a hard project to take on, but it looks like they pulled it off well.

Chris’ Corner: Real World CSS

I enjoyed Lee Robinson’s take on How I’m Writing CSS in 2024. Rather than jump right into tools and syntax, it starts with the user:

What does a great experience look like loading stylesheets when visiting a website?

  1. Stylesheets should load as fast as possible (small file sizes)
  2. Stylesheets should not re-download unless changed (proper caching headers)
  3. The page content should have minimal or no layout shift
  4. Fonts should load as fast as possible and minimize layout shift

Agreed! Number 3, and to some degree 4, are almost more in the JavaScript bucket than CSS, but it’s a good starter list. I’d add “The page styles shouldn’t interfere with default accessibility”.

Then, after those, the developer experience is considered:

How can the DX of the styling tools we use help us create a better UX?

  1. Prune unused styles, minify, and compress CSS for smaller file sizes
  2. Generate hashed file names to enable safe, immutable caching
  3. Bundle CSS files together to make fewer network requests
  4. Prevent naming collisions to avoid visual regressions

What about to help us write more maintainable, enjoyable CSS?

  1. Easy to delete styles when deleting corresponding UI code
  2. Easy to adhere to a design system or set of themes
  3. Editor feedback with TypeScript support, autocompletion, and linting
  4. Receive tooling feedback in-editor to prevent errors (type checking, linting)

I like how the DX concerns are about making things easier that the UX demands. I want all that stuff! Although I admit I still bristle at the idea of dealing with unused styles. It’s very hard to properly detect unused styles and I worry about tools making those decisions.

Lee’s ultimate recommendations are CSS Modules, Tailwind, or StyleX (or just vanilla CSS on simple stuff), and I feel like those feel fair based on his own journey and accomplish the things he laid out. I’m a fan of the CSS Modules approach myself. It’s largely vanilla CSS, but with great scoping built in, it couples to components nicely, and is so well established it’s everywhere you need it.


Speaking of writing CSS in the real world, Ahmad Shadeed did quite a deep dive of looking at the TechCrunch Layout and approaching it with modern techniques.

Sure, it’s just a three-column layout, but the different columns have all sorts of different constraints. The first is in a fixed position; the main content has a maximum width but is otherwise fluid, and contains nested grids. There is a maximum width overall too, with the third column involving absolute positioning. That’s without getting into the (five!) major breakpoints and footer complexities. If you’re into nerding out on CSS layout, Ahmad tackles it literally five different ways, ultimately landing on a nice CSS-grid-powered technique. He called it easy to implement, but looking at the column declarations I think it only looks easy to someone who is on his fifth iteration. 🤣. And that’s only half the article.


To think that Ahmad’s tackling of a complex layout, in the end, boiled down to a few lines of CSS is rather incredible. CSS is certainly more powerful. But is it easier? Geoff Graham thinks yeah, it actually is a little easier to write, in some ways.

To name a few: grouping styles is easier, centering is easier, translation needs are easier, and spacing is easier. Geoff names more. And by easier, really truly easier in all ways. Less code, more direct code, easier to reason about, and it does what it says.
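Centering is the poster child. What once took odd hacks is now just (one of several modern one-liners):

.center {
  display: grid;
  place-content: center; /* centered both horizontally and vertically */
}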


Roman Komarov outlines The Shrinkwrap Problem, which is maybe a little niche but certainly a very interesting layout situation. The deal is that if content wraps, the element essentially takes up all available width. Not that strange, but when you look at how a wrapped title looks with text-wrap: balance;, for example, it looks a little weird. A header might only take up half the space visually, yet still take up all the available space.

Roman goes really deep on this, with solutions that involve even new tech like anchor positioning which is an awfully weird thing to invoke just for this, but hey, needs are needs. Just when you think this is all far too much for such a niche thing, Roman gets to the use-cases which are actually pretty basic and straightforward. Things like chat bubbles where full-width bubbles would look awkward. Or decorations on either side of a header.


David Bushell has a fun and illuminating post about button-specific CSS styles.

Have you ever repeatedly tapped on a button only for the page to zoom in unexpectedly? Rewind and fast-forward buttons in an audio player for example. This unwanted side effect can be removed with touch-action.
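That fix, as a sketch (the selector is made up):

.player button {
  /* allow panning and pinch-zoom, but suppress double-tap-to-zoom */
  touch-action: manipulation;
}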

There are four others in there that are all in the decent-chance-you-hadn’t-thought-of-it category.

Chris’ Corner: Cool Ideas

Lossy compression can be good. Something like a JPG naturally uses adjustable compression, and that compression leads to lost image data. In a good way. The image may end up being much smaller, which is good for performance. Sometimes the lost image data just isn’t a big deal; you barely notice it.

You don’t often think of lossy compression outside of media assets though. You can certainly compress a text asset like CSS, but you wouldn’t want to do that with a lossy compression. It would probably break syntax! The code wouldn’t work correctly! It’s total madness!

Unless you’re just having a bit of fun — like Daniel Janus was doing there.

  • Original: page, CSS, source SASS
  • 1 style rule: page, CSS (93% information loss)
  • 5 style rules: page, CSS (74% information loss)
  • 10 style rules: page, CSS (55% information loss)
  • 20 style rules: page, CSS (31% information loss)
  • 30 style rules: page, CSS (17% information loss)

When I think of JavaScript-based syntax highlighters, my go-to is Prism.js. I think it’s mostly used client-side, but I don’t see why you couldn’t run it server-side (and in fact you probably should).

But I digress already, I’m trying to link to Shiki, a syntax highlighter tool I hadn’t seen before. The results look really nice to me:

It’s based on TextMate grammars, using the same exact system as VS Code. And it can be run in any JS runtime. Pretty compelling to me!

I was able to Hello, World! it on CodePen very easily.
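Something along these lines is all it took (going from Shiki’s documented API at the time; the theme choice is arbitrary):

import { codeToHtml } from "shiki";

const html = await codeToHtml(`console.log("Hello, World!");`, {
  lang: "javascript",
  theme: "nord",
});

// drop the highlighted markup into the page
document.querySelector("#output").innerHTML = html;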


“Pull to refresh” is a wonderfully satisfying UI/UX interaction famously first invented by Tweetie. It’s the kind of interaction where you might assume you need some deeper level of control over view primitives to pull off, and that we essentially lack this control on the web. You’d be wrong though, as Adam Argyle has a really interesting solution for it.

The trick Adam uses is that the page starts scrolled down a smidge by virtue of the <main> region having a hard scroll snapping point. With that in place, you could scroll upwards to see another element (which says “Pull to refresh”), but the page would immediately snap back down if you let go. Turns out this is pretty tricky, involving a very deliberately sized snapping point and even an animation to delay the adding of the snap point. Then Scroll-Driven Animations are used to animate things as you scroll into that area, and a smidge of JavaScript, which you’d use anyway to “refresh” things, is used to fake the refreshing and put the page back in the original position.

See the demo, it’s very cool.


If you were going to pick some colors, you could do worse than just stealing them from Dieter Rams designs.

Speaking of ol’ Dieter, this collection of Framer components (which thus work on the web through just HTML/CSS and sometimes SVG) is very impressive. Just really elegant buttons and controls that have that “beg to be touched” look.


Have you heard people saying “LoFi” lately? Kinda sounds like a subgenre of electronic music. But no, it stands for “Local First Web Development”. And even that twists my brain a little, because I’m like “yes, obviously we all work on local development environments now, the edit-on-the-server days are all but gone”. But they don’t mean local development; they mean that the app’s architecture should support storing data locally, possibly to be synced later. This has many advantages, as Nils Riedemann writes:

“local first” is an approach to developing web applications in such a way, that they could be used offline and the client retains a copy of their data. That also effectively eliminates many loading spinners and latency issues. Kind of like optimistic updates, but on steroids with PWA, Wasm and a bunch of other new abbreviations and acronyms.

The premise is that it’s much easier to consider this behavior as the default and then expand, rather than the other way around.

There are some major apps like Figma that do this, and it’s fairly easy to point to them as examples and be like “see, good.” I don’t disagree, really. Especially the fact that it can help support the PWA offline approach, that just feels right. They have a homepage for the movement. There are some technologies that help support it. For instance, the concept of CRDTs to help sync data in a way where you can merge data rather than have one side or the other “win” is pretty transformative.

Chris’ Corner: Performance is Good for Brains

I was darn impressed by Scott Jehl’s personal charge to bring back an idea known as “responsive video”. If you’ve seen the <picture> element, and how you can provide multiple <source>s with different @media queries allowing only the best match to be shown, you already get it. It turns out that browsers, at one time, sensibly thought that was a good idea for video too and it made it into browsers; then it got ripped out for not-great reasons, and Scott wanted it back.
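The feature in question looks like this (file names are placeholders):

<video controls>
  <!-- the browser picks the first source whose media query matches -->
  <source src="video-small.mp4" media="(max-width: 599px)">
  <source src="video-large.mp4" media="(min-width: 600px)">
</video>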

Instead of just writing snarky blog posts like I would do, or using my best pretty please eyes on people I think could help, Scott just rolled up his sleeves and did it.

First, he had to kick it back up in the working group, the WhatWG as it’s called, and get conversation going. Conversation did get going, but then it died away. For years. That’s just how it goes sometimes. But by some stroke of luck, it kicked back up again and the spark moment happened:

… representatives from Firefox and Chrome chimed in to say that they agreed and intend to reinstate their support! Following that, implementation bugs were filed in the Chromium and Firefox trackers.

You might think things first get into “the spec” and then browsers agree to implement them. But it’s actually the other way around. Once the agreement to implement was in, the spec was amended to put responsive video back in.

Phew! That’s a lot!

But wait!

Just because browsers agree and the spec is updated still doesn’t mean it’s actually going to happen anytime soon. Because someone still needs to roll up their sleeves and actually do it. As an aside, I assume that’s why Igalia is so successful and involved in so many things like this — because they do the doing.

In this case, the doer of the doing was… Scott.

The tricky part is that writing code for web sites and web browsers is very different. Scott tackled Firefox, which is C++.

Following the initial steps, it took my aging Macbook Pro at least a few hours to clone and build Firefox Nightly, but I was pleased to see it all work without any trouble. After that, I moved on to reinstating the C++ code that would enable media attribute support in video source elements.

Fortunately, by the good graces of open source, Scott was able to find the old commits where these features were added/removed, so he had a starting point. But there ended up being a lot more to it, and through Scott’s intelligence, enthusiasm, and sheer will, he got it all pushed through! He was about to do Chrome, but they ended up doing it themselves. Cool!

Well! I accidentally re-blogged Scott’s blog post. Oops. Sorry, Scott. But that tees us up for a few more performance-related links.


Rick Viscomi has a good overview of web performance, how to think about it, and what’s going on as we start 2024. Unlike the accessibility world, where status quo or regressions are sadly common, there is some slow movement forward:

At the start of 2023, 40.1% of websites passed the Core Web Vitals assessment for mobile user experiences. Since then, we’ve seen steady growth. As of September 2023, we’re at 42.5% of websites passing the Core Web Vitals assessment, an improvement of 2.4 percentage points, or 6.0%. This is a new high, representing an incredible amount of work by the entire web ecosystem.

2.4 percentage points of movement across the entire web seems pretty darn good to me.

The post is loaded with more data and information. I found this bit about image performance interesting.

The slowest part is actually the resource load delay. Therefore, the biggest opportunity to speed up slow LCP images is to load them sooner. To reiterate, the problem is less about how long the image takes to load, it’s that we’re not loading it soon enough.

We often think of image size and format as vital to image performance, but images not loading soon enough is something like a 4× bigger issue overall. There are all sorts of things hurting this. Entirely client-side rendered apps hurt this. Using loading="lazy" on an image that is part of the “Largest Contentful Paint” hurts this, too. You can fight back with a rel="preload" thing, but it’s better to avoid the problem entirely if you can.
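That preload escape hatch is a one-liner in the <head> (the file name and the priority nudge are illustrative):

<link rel="preload" as="image" href="hero.avif" fetchpriority="high">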


Speaking of image performance, Tim Severien asks: Should AVIF be the dominant image format on the web? It’s very complicated, ultimately a little subjective, and varies with the type of image and your goals. My brain loves declaring a winner, but I’m afraid that’s not going to happen here. The comparison didn’t get into things like computational cost, which I always understood to be much higher with AVIF. I like the idea of tools that make the call based on individual images.


There are so many “why”s of web performance. They are all good. It’s good for business, because it makes the site feel more reliable and trustworthy, and people don’t get distracted waiting for things, meaning less abandonment. It’s good for the planet (fewer carbon emissions). It’s good for accessibility (more people can use the site on slow connections).

It’s also just… in our brains. Tammy Everts says:

If you don’t consider time a crucial usability factor, you’re missing a fundamental aspect of the user experience.

Her research, and the research of others she cites, shows just how important all this is. We’re impatient beings, it hasn’t changed over time, and it isn’t likely to.

The internet may change, and web pages may grow and evolve, but user expectations are constant. The numbers about human perception and response times have been consistent for more than 45 years. These numbers are hard-wired. We have zero control over them. They are consistent regardless of the type of device, application, or connection we are using at any given moment.


A lot of us have some locked-in knowledge that document.write() is bad and should never be used. It must be some maxim from the early days of people taking web performance seriously. Harry Roberts dug into that and explains exactly why, as it’s still definitely true.


Bonus:

1MB Club is a growing collection of performance-focused web pages weighing less than 1 megabyte.

Chris’ Corner: Some AdviCSS

Get it?! Like “advice”, but for CSS.

When should you nest CSS?

Scott Vandehey says:

There’s a simple answer and a slightly more complicated answer. The simple answer is “avoid nesting.” The more practical, but also more complex answer is “nest pseudo-selectors, parent modifiers, media queries, and selectors that don’t work without nesting.”

The big idea behind avoiding nesting (which is a native CSS feature now, if you hadn’t heard) is that it can lead to specificity increases that just aren’t necessary. Like:

.card {
  .content {
    .byline {

    }
  }
}

That .byline selector probably doesn’t gain anything by being nested like that. Break it out of there and it’ll be more re-usable and easier to override if you need to.

But this:

.card {
  @container (width > 60ch) {

  }
}

Is probably good! It just saves you from having to re-write the .card selector again. Scott gets more in-depth though with more examples and I largely agree.

How do you adjust an existing color lighter and darker?

I’m a biiiig fan of the relative color syntax, which is great at this job, but before I go on a tangent about that, it’s not well supported yet so let’s not. It’s on the Interop 2024 list though!

Better supported is color-mix(), and Cory LaViska has the story on using it for this job:

Using color-mix(), we can adjust the tint/shade based on the background color, meaning we don’t need to manually select lighter/darker colors for those states. And because we’re using OKLCH, the variations will be perceptually uniform, unlike HSL.

By mixing white and black into colors, and doing it in the OKLCH color space, we can essentially tint and shade the colors and know that we’re doing it evenly across any color we have. This is as opposed to the days when a lot of us tried to use darken() and such in Sass only to find extremely different results across colors.
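A sketch of the kind of thing Cory describes (the variable names and mix percentages are mine):

:root {
  --brand: oklch(65% 0.15 250);
  /* mixing in the OKLCH space keeps the steps perceptually even */
  --brand-tint: color-mix(in oklch, var(--brand), white 20%);
  --brand-shade: color-mix(in oklch, var(--brand), black 20%);
}

button:hover { background: var(--brand-tint); }
button:active { background: var(--brand-shade); }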

How are the final values of Custom Properties calculated?

Stephanie Eckles:

Custom properties – aka “CSS variables” – seem fairly straightforward. However, there are some behaviors to be aware of regarding how the browser computes the final values. A misunderstanding of this process may lead to an unexpected or missing value and difficulty troubleshooting and resolving the issue.

Custom Properties follow the cascade and are computed at runtime, for one thing, which is the whole reason that they cannot be preprocessed ahead of time. But it’s more complex than that. What if the value is valid for a custom property (most anything is), but not valid for the way you are trying to use it?

This is a real head scratcher:

html { color: red; }

p { color: blue; }

.card { --color: #notacolor; }

.card p { color: var(--color); }

Turns out .card p will actually be red (I would have guessed blue), but Stephanie explains:

The .card p will be the inherited color value of red as provided by the body. It is unable to use the cascaded value of blue due to the browser discarding that as a possible value candidate at “parse time” when it is only evaluating syntax.

How do you accommodate people who struggle with transparent interfaces?

Adam Argyle explains it can be like this, using this media query you may or may not have heard of:

.example {
  --opacity: .5;

  background: hsl(200 100% 50% / var(--opacity));

  @media (prefers-reduced-transparency: reduce) {
    --opacity: .95;
  }
}

Adam had lots of practical examples in the post, and does consider that word reduced, and how it doesn’t mean absolutely none ever.

What units should you use for spacing properties?

Me, I usually just use rem, as that’s what I use for nearly everything else. But Ashlee M Boyer argues that while relative units make good sense for stuff like text, spacing need not scale at that same rate:

When a user is customizing their viewing experience, the thing that’s most important to them and their task at hand is the content. Spacing isn’t often vital for a user to perform their task, so it doesn’t need to grow or scale at the same rate as the content itself.

When spacing between content grows, it eats up vital real estate and becomes harder to manage.

Ashlee proves it with a before video and an after video, after moving the spacing from relative units to absolute units.

How do you make every Sass file automatically include common imports?

This one hits home for me, as someone with a codebase of easily hundreds of Sass files that all start with something like @import "@codepen/variables";. Wouldn’t it be cool if Sass could just assume we wanted to do that for every file?

Austin Gil covered this a while back, doing it with Vite. In your Vite config, it goes something like this:

// vite.config.js
import { defineConfig } from "vite";

export default defineConfig({
  css: {
    preprocessorOptions: {
      scss: {
        additionalData: `@import "@/assets/_shared.scss";`
      }
    }
  }
});

I see webpack can do it too, but I’m not sure if Sass alone can be configured to do it, although I wish it could.

Chris’ Corner: Scroll Driven Delight

I’m pretty hot on Scroll-Driven Animations! What a wonderful idea that we can tie @keyframes animation timelines to scroll positions. And I’m sure the creators thought long and hard, because the API makes a ton of things possible. It’s not just “how far the entire page has scrolled”, although that’s possible. The progress of the animation can be tethered either to the scroll position of any element or to the position of an element within a scrollable container. Those are referred to as the Scroll Progress timeline and the View Progress timeline, respectively. Slow clap, people.
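A minimal sketch of a Scroll Progress timeline, the classic reading-progress bar (the class name is mine):

@keyframes grow {
  from { transform: scaleX(0); }
  to { transform: scaleX(1); }
}

.reading-progress {
  position: fixed;
  inset: 0 0 auto 0;
  height: 4px;
  background: deeppink;
  transform-origin: left;
  animation: grow auto linear;
  animation-timeline: scroll(root); /* progress tracks the document's scroll position */
}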

Bramus Van Damme makes that nicely clear in this overview article. Bramus has been following, working on, creating demos, and writing about this stuff for a long time, and it was a smart move to wrap all that stuff up in a dedicated website for it.

I’m also a big fan of his co-worker Adam’s approach to an intro article here. Adam makes demos that are a smidge more designery and those tend to land with me. And speaking of designery demos, Ryan Mulligan’s beginning explorations are wonderful. He’s got some Polaroid photo style images that “blur in” when they scroll into view and a pair of photos that shuffle themselves as you scroll. I share Ryan’s sentiment that the tools Bramus has built are nearly crucial in understanding this stuff, since all the different keywords and values have such big effects.
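A View Progress timeline sketch, in the spirit of that “blur in” effect (my values, not Ryan’s):

@keyframes blur-in {
  from { opacity: 0; filter: blur(12px); }
  to { opacity: 1; filter: blur(0); }
}

.polaroid {
  animation: blur-in auto linear both;
  animation-timeline: view(); /* progress tracks this element through the scrollport */
  animation-range: entry; /* only animate while it is entering the viewport */
}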

These scroll-driven animations tend to be things that are just fun and could easily be thought of as progressive enhancement. So the fact that this is Chrome-only for now isn’t terribly bothersome to me, although it is polyfillable. We also didn’t get scroll-driven animations on the Interop 2024 list, but that doesn’t mean we won’t get Safari or Firefox support this year. Still could happen, just not really a guarantee.

Why do I think this is so cool? It’s not like we absolutely couldn’t do this before. GreenSock has a ScrollTrigger plugin that is widely loved and has a pretty sweet API. (Here’s a great Collection.) I don’t think it’s terribly egregious to use JavaScript for these kinds of effects, particularly if it makes them more maintainable, more performant, or does things impossible any other way. But that’s the thing: when these abilities come to native web technology like CSS, chances are the performance is going to be great, and it’s arguably more maintainable long-term as the pool of people familiar with the technology grows.

Yuriko Hirota did a great job of proving how much more performant using CSS for these types of animations is. The single-threaded nature of JavaScript in the DOM means that if JavaScript is busy doing anything else, a JavaScript-powered animation is going to suffer from jankiness, that is, jerky and non-smooth animation. Even when JavaScript is quite busy, a CSS-powered animation is fine. Those “scroll progress animations” are the classic demo of this web tech. Michelle Barker went deep on those this past year, starting with the basics and getting lovably weird as the article goes on.

Let’s end with a little tip! Bramus mentioned that if you’re setting up a scroll-driven animation that involves a target element and different scrolling element, if it’s not working, there is a good chance…

The culprit: an overflow: hidden sitting somewhere in between the target and the scroller.

It’s always the overflow, isn’t it? I find overflow is usually the culprit in figuring out why a sticky positioned item isn’t working as well. The solution, if you do actually need to deal with hiding overflow, is to use overflow: clip; instead, a relatively new ability. Kevin Powell covered a couple other scenarios where overflow: clip; saves the day, so it’s definitely worth knowing about!

I’ve been playing with Scroll-Driven Animations myself a bit. I wrote about highlighting a bit of text as you scroll down a blog post, an idea inspired by Lene Saile’s blog. As a response to a reader question, I also figured out how to zoom in images as they come into the viewport. Both of those ultimately use the scroll position to control the point the animation is at, which I think is usually nice, but I also enjoyed the idea that you can un-tether those things (say, run a 3s animation once an element becomes fully visible) by flipping a --custom-property in a keyframe, which triggers a different animation, like Ryan Mulligan digs into.

It’s still early days for Scroll-Driven Animations and there are sure to be extremely clever ideas people will find for years. My jaw was already dropped by using them to fit text exactly to a container and to write conditional logic detecting if an element can scroll or not.

Chris’ Corner: More Like Scalable Vector Goodness

I’m going to do an SVG issue here, because I find that technology persistently interesting. It’s a bit of a superpower for front-end developers who know how it works and can leverage it when needed to pull off interesting effects. For example, this compelling line drawing scroll effect is powered by SVG features.

There have been some really cool SVG tools I’ve only just seen recently, and some great writing about SVG techniques. Warms my little heart to see SVG still being actively explored even as it sits rather dormant from a standards point of view.

Let’s start with some tools and resources, since those are easy to digest, and if you really love one of them you’ll be all like, thanks CodePen Spark, you’re a good newsletter, and ya know, that’s what we’re in it for.

Tech Icons

SVG icons tend to be single-color as a trend, but actual logos tend to involve brand colors and can often be multi-color, which is exactly what Tech Icons collects. I like how super easy it is to use, offering both downloads and quick copy-and-paste.

Durves

I can’t explain it but sometimes you need an SVG of a grid of dots that are waving. This allows you to control all the aspects of that. Has some tearable cloth vibes.

svghub

Squiggles, scribbles, shapes and… other stuff.

I love this because they are the kind of things that are perfect for vector art, but that you don’t typically find in things like icon sets. One click to copy right to clipboard or download.

SVGMix

Big one! 193 icon collections. I do like that they are grouped in collections, so in case you need a bunch of assets, there is a good chance they’ll go together aesthetically. I’m a big Noun Project guy, but find it isn’t quite as well organized into collections.

OK I suppose we’d better move on to some techniques and explanations.


SVG Gradients: Solving Curved Challenges

How do you get a color gradient to follow the path of SVG artwork? Michael Sydney Moore solved it by breaking up the art into smaller sections and applying gradients to each section.

This is an interesting contrast to another technique that Ksenia Kondrashova explains.

SVG viewBox

The viewBox on SVG is pretty simple really: it sets up the visible coordinate system where everything else is drawn. Interestingly, you can change it at any time, and it effectively acts as a camera, especially if you animate it.

Brad Woods has perhaps the best explanation of it I’ve ever seen, via an interactive post.
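
The “camera” idea in miniature, as a sketch (the numbers are arbitrary):

<svg id="scene" viewBox="0 0 100 100" width="300" height="300">
  <circle cx="25" cy="25" r="10" fill="tomato" />
  <circle cx="75" cy="75" r="10" fill="steelblue" />
</svg>

<script>
  const scene = document.querySelector('#scene');
  let t = 0;
  (function pan() {
    t += 0.01;
    // slide a 50×50 window back and forth over the 100×100 drawing
    const offset = 25 + 25 * Math.sin(t);
    scene.setAttribute('viewBox', `${offset} ${offset} 50 50`);
    requestAnimationFrame(pan);
  })();
</script>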

Making noisy SVGs

Turns out <feTurbulence> is up to the job of making a noise effect in SVG, but there is a little more to it to make it nice, as Daniel Immke writes up:

To create noise, I used the <feTurbulence> filter which is explicitly for generating artificial textures but required quite a bit of fiddling to get to my liking. Then, I had to use other filter effects to eliminate color variance and blend naturally with the fill color selected, and finally apply the filter to the circle.

Noise sometimes feels like the perfect way to chill out the mathematical sharpness of vector art.
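
The skeleton of that approach looks something like this (a rough sketch in the same spirit, not Daniel’s exact filter):

<svg width="200" height="200">
  <filter id="noise">
    <!-- generate the artificial texture -->
    <feTurbulence type="fractalNoise" baseFrequency="0.8" numOctaves="3" result="turbulence" />
    <!-- eliminate color variance by collapsing to grayscale -->
    <feColorMatrix in="turbulence" type="saturate" values="0" result="gray" />
    <!-- blend the noise with the shape's fill -->
    <feBlend in="SourceGraphic" in2="gray" mode="multiply" />
  </filter>
  <circle cx="100" cy="100" r="80" fill="teal" filter="url(#noise)" />
</svg>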

Also — did you know there is a weird trick to make noise with CSS gradients?

Responsive SVGs

There is a technique in this post from Nils Binder where he stretches just a part of an SVG according to variable content elsewhere and I love it.

Speaking of responsive… did you know the illustration in Ethan’s original article was responsive in itself?

Making SVG Loading Spinners: An Interactive Guide

This is part of what makes SVG so attractive to me: simple primitives that all combine to do elegant things. Here, to make a specific kind of fun spinner, Sébastien Noël uses (there’s a quick sketch after the list):

  1. <circle> with a stroke
  2. stroke-dasharray to control exactly how the stroke should be dashed
  3. stroke-linecap to control the nice look of the dashed parts
  4. stroke-dashoffset to control the position of the dashes
  5. @keyframes animation to animate the stroke-dasharray, making it feel like a spinner.
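
Putting those together might look something like this (a sketch with arbitrary values; I’m cycling stroke-dashoffset here for simplicity):

<svg width="48" height="48" viewBox="0 0 48 48">
  <circle class="spinner" cx="24" cy="24" r="20"
          fill="none" stroke="currentColor" stroke-width="4"
          stroke-linecap="round" stroke-dasharray="90 40" />
</svg>

<style>
  /* slide the dash pattern one full period (90 + 40) per second */
  @keyframes spin {
    to { stroke-dashoffset: -130; }
  }
  .spinner {
    animation: spin 1s linear infinite;
  }
</style>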

Icon transcendence: customizing icons to complement fonts

This one is from the “I hope your client has a lot of money” files. I love the idea but it’s wild. The idea is that SVG icons could swap out to match the vibe of the font they are next to.

But by “swap out”, really, somehow, it’s the same source icon.

Although these icons look quite differently visually, they were actually crafted by using the single source icon you saw above as a reference. For each of the fonts here, we’ve modified that source icon, thus producing a custom icon that better matches the style and mood of each font:

Chris’ Corner: People Be Doing Web Components

Native Web Components are still enjoying something of a moment lately. Lots of chatter, and a good amount of it positive. Other sentiment may be critical, but hopeful. Even more important, we’re seeing people actually use Web Components more and more. Like make them and share them proudly. Here are some recent ones:

  • David Darnes made a <storage-form> component. Here’s an example that happens to me regularly enough that I really notice it. Have you ever been on GitHub, typing up a PR description or something, but then accidentally navigated away or closed the tab? Then you go back, and everything you typed was still there. Phew! They are using the localStorage API to help there. They save the data you type in the form behind the scenes, and put it back if they need to.
  • Dave Rupert made <wobbly-box>, which draws a border around itself that is ever so slightly kittywampus. It uses border-image, which is nigh unlearnable, so I’d be happy to outsource that. Also interesting that the original implementation was a Houdini paint worklet thing, but since that’ll never be cross-browser compatible, this was the improvement.
  • Ryan Mulligan made a <target-toggler>, which wraps a <button> and helps target some other element (anywhere in the DOM) and hides/shows it with the hidden attribute. Plus it toggles the aria-expanded attribute properly on the button. Simple, handy, probably catches a few more details than you would crafting it up quick, and is only like 1KB.
  • Hasan Ali made a <cruk-textarea> that implements Stephen’s trick for auto-growing text areas. Probably isn’t needed for too much longer, but we’ll see.
  • Jake Lazaroff made a <roving-tabindex> component such that you can put whatever DOM stuff inside to create a focus trap on those things (as is required for things like modal implementations). I think you get this behavior “for free” with <dialog>, but that assumes you want to and can use that element. I also thought inert was supposed to make this easier (like inert the entire body and un-inert the part you want a focus trap on), but it doesn’t look like that’s as easily possible as I thought. Just makes this idea all the more valuable. Part of the success story, as it were.

Interesting point here: every single one of these encourages, nay requires, useful HTML inside of them to do what they do. Web Components in that vein have come to be called HTML Web Components. Scott Jehl took a moment to codify it:

They are custom elements that

  1. are not empty, and instead contain functional HTML from the start,
  2. receive some amount of progressive enhancement using the Web Components JavaScript lifecycle, and
  3. do not rely on that JavaScript to run for their basic content or functionality

He was just expanding on Jeremy Keith’s original coining and the excitement that followed.

Speaking of excitement, Austin Crim has a theory that there are two types of Web Components fans:

  1. The source-first fans. As in, close to the metal, nothing can break, lasts forever…
  2. The output-first fans. As in, easy to use, provide a lot of value, works anywhere…

I don’t know if I’m quite feeling that distinction. They feel pretty similar to me, really. At least, I’m a fan for both reasons. We could brainstorm some more fan types maybe! There’s This is the Best Way to Make a Design System group. There’s the This is Progressive Enhancement at its Finest group. There’s the If Chrome Says it’s Good, Then I Say it’s Good group. There’s the Ooooo Something New To Play With group. Your turn.


Let’s end here with two things related to the technology of Web Components you might want to know about.

One of the reasons people reach for JavaScript frameworks is essentially data binding. Like you have some variable that has some string in it (think: a username, for example) and that needs to make its way into HTML somewhere. That kind of thing has been done a million times, and we tend to think about putting that data in braces, like {username}. But the web platform doesn’t have anything like that yet. Like Rob Eisenberg says:

One of the longest running requests of the Web Platform is the ability to have native templating and data binding features directly in HTML. For at least two decades innovative developers have been building libraries and frameworks to make up for this platform limitation.

The Future of Native HTML Templating and Data Binding

DOM Parts is maybe the closest proposal so far, but read Rob’s article for more in-depth background.

Another thing I’m interested in, forever, is the styling of Web Components. I think it’s obnoxious we can’t reach into the Shadow DOM with outside CSS, even if we know fully what we’re doing. The options for styling within Web Components all suck if you ask me. Who knows if we’ll ever get anything proper (the old /deep/ stuff that had a brief appearance in CSS was removed apparently for good reason). But fortunately Brian Kardell has a very small and cool library that looks entirely usable.

Let’s say you are totally fine with making a request for a stylesheet from within a Web Component though, how does that work? Well there is a such thing as a Constructable StyleSheet, and if you have one of those on hand, you can attach it to a Shadow Root via adoptedStyleSheets. How do you get one of those from requesting a CSS file? The trick there is likely to be import assertions for CSS, which look like:

import sheet from './styles.css' assert {type: 'css'};
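
Now sheet is a Constructable StyleSheet, and adopting it inside a component might look like this (a quick sketch; the element is made up):

class MyWidget extends HTMLElement {
  connectedCallback() {
    const root = this.attachShadow({ mode: 'open' });
    root.innerHTML = '<p>Styled by the adopted sheet</p>';
    // attach the imported constructable stylesheet to the shadow root
    root.adoptedStyleSheets = [sheet];
  }
}
customElements.define('my-widget', MyWidget);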

I like that. But let’s say you’re bundling your CSS, which is generally a smart thing to do. Does that mean you need to start breaking it apart again, making individual component styles individually importable? Maybe not! There is a proposal that looks solid for declaring individually importable chunks of CSS within a @sheet block. Then, just like non-default exports in JavaScript, you can pluck them off by name:

@sheet sheet1 {
  :host {
    display: block;
    background: red;
  }
}

@sheet sheet2 {
  p {
    color: blue;
  }
}

import {sheet1, sheet2} from './styles1and2.css' assert {type: 'css'};

Pretty solid solution I think. I’d be surprised if it didn’t make it into the platform. If it doesn’t, I promise I’ll go awww sheet.

Chris’ Corner: More Like Celebrating Style Skills

It’s January and we’re seeing a little batch of CSS wishlists make the rounds.

  • Tyler Sticka is mostly hoping for better support on stuff that already is starting to cook, like View Transitions, anchor positioning, balanced text, and scroll-driven animations. But he’s got his own new wishes and has done the leg-work to properly propose them, like more useful vertical alignment (without extra work).
  • Christopher Kirk-Nielsen also has some wishes for better support of things that are already cooking, like Style Queries, but also some really interesting ideas of his own, like using transform to force an element of an unknown size into a specific size, which he also wrote up properly.
  • Manuel Matuzović has more love for Style (and State) Queries (agreed there is some real potential here to reduce CSS size and complexity) and @scope. On the doesn’t-exist-yet side, a vote for mixins to which I’ll echo: Yes, please!
  • Elly Loel weighed in last year using the same distinctions: stuff that is already cooking (e.g. Custom Media) and stuff that doesn’t exist yet (e.g. detecting flex-wrap). Good news for Elly: some stuff from the list got done, like subgrid.
  • Nathan Knowler joins the choir with strong votes for Style Queries, Scroll-Driven Animations, and @scope, among others.
  • Sarah Gebauer has a great one on her wishlist about borders and controlling the offset. Hey, SVG can do it and it’s super cool. Plus a shout out to a personal favorite: more useful attributes and the attr() function.

Phew! That’s a lot of people taking the time to be pumped about CSS and make clear what they want. The way CSS is going lately I would not be surprised to see massive progress again in 2024.


Stephanie Eckles points out 12 one-liners in CSS that are all tremendously useful and powerful.

One liners.

Stuff like p { text-wrap: pretty; } that just makes web type all the nicer. Five years ago our gaping mouths would have been on the floor if we’d known what was coming.


Brecht De Ruyte took two brand new CSS and HTML things and smashed them together:

  • The new <selectmenu> element, which is exactly like <select> except fully CSS styleable.
  • The Anchor Position API

It’s a cool interaction that is supposed to end up like this:

But this was actually published mid-last-year, and the demo is already broken. It only ever worked in Chrome Canary with the Experimental Web Platform Features flag turned on, so that just goes to show you how fast this experimental stuff moves.

It’s not <selectmenu> any more, it’s <selectlist>, but even after forking the demo and trying to fix it, I couldn’t get it working. I suspect it’s changes to the Anchor Positioning API that I’m not up to snuff on. I do suspect this interaction is possible today, but you’d be using cutting-edge APIs that may or may not be polyfillable, and/or recreating a ton of interaction and accessibility stuff that you’d get for free with Brecht’s implementation.


Ben Frain has a very clever idea for drawing angled lines in CSS (like a line chart). Which, weirdly, isn’t a particularly easy thing to do in CSS. You can make lines various ways and rotate them and stuff, but it’s harder to be like: draw a line from here to here. SVG is typically the path for that.

In Ben’s technique, you take advantage of the polygon() function and clip-path. You “draw” points in the clip path at certain coordinates, then go back over the exact same coordinates with a 1px difference, leaving that little gap where a new color can “shine through”.

Like:

clip-path: polygon(
  0% 60%,
  20% 90%,
  40% 43.33%,
  60% 61.67%,
  80% 23.33%,
  100% 18.33%,
  100% calc(18.33% - 1px),
  80% calc(23.33% - 1px),
  60% calc(61.67% - 1px),
  40% calc(43.33% - 1px),
  20% calc(90% - 1px),
  0% calc(60% - 1px)
);
Demo

Robin Rendle once made a bunch of different chart types in CSS only, but I don’t think I’ve seen a CSS “sparkline” like this until now.

Chris’ Corner: Type

I’m in the mood for a typography focused edition. I have some links saved up I’ve been meaning to read. I’m going to start reading now and the links that turn out any good I’ll put below.


Mike Mai put together a Typography Manual (for type on the web). It’s a pretty random smattering of 11 bits of advice. Originally a Pen! I can’t help but read through each of them and raise my Well, Actually finger, but I shall keep my finger down because more and more I like eliminating nuance in this industry. “Just do this” advice is pretty valuable. If you have no idea where to start, well, just follow the advice, and once you’ve leveled up you can do your own rule breaking.

Like #1 is “Use One Font” but Henry, as a very experienced designer, can do what he wants.


This was mid-last-year, but I still think Stephanie Eckles has the best guide at the moment for modern fluid type. There was this whole period where “fluid type” meant using viewport units (e.g. vw), ideally in a calc(), to set type size (and sometimes line-height). Then things got a little better when we got clamp() because the code got a lot more straightforward (by the way, this is a helpful mind trick). Now things are changing one more time, because we have container units and they change the approach again.

Just as 1vw equals 1% of the viewport width, so does 1cqi equal 1% of a container’s inline size. We’ll be using cqi for purposes of defining fluid typography since we want the size to be associated with the horizontal axis of the writing mode.
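
In practice, container-based fluid type might look something like this sketch (the class name and numbers are arbitrary):

.card {
  container-type: inline-size;
}

.card h2 {
  /* fluid between 1.5rem and 3rem, driven by the container's inline size */
  font-size: clamp(1.5rem, 1rem + 2.5cqi, 3rem);
}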


Speaking of relatively new units, we now have units that represent the current line height (and “root” line height) in CSS: lh and rlh. Paweł Grzybek writes about how to use them to achieve the idea of “vertical rhythm”:

Vertical rhythm is a design concept that helps to create a harmonious layout by following consistent spacing between elements, typically using the height of a line as a base. I learned it in my design days when printed media was still a thing.

It’s kind of an invisible idea that theoretically makes a page more pleasant to look at.

In the past this was quite a bit harder to pull off, and these units are yet another example of a new CSS technology making an old idea a lot easier.
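
For instance, a sketch of the idea:

:root {
  line-height: 1.5;
}

h2, p, figure {
  /* exactly one root line height of space above and below keeps the rhythm */
  margin-block: 1rlh;
}

blockquote {
  /* one of this element's own line heights */
  padding-block: 1lh;
}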


The why of typography is interesting. There are aesthetics. Making type look good is an art, but it’s an art with everyday consequences. Poor typography can make people feel a product is shoddy, a restaurant doesn’t care, or a service isn’t trustworthy. Great type can be a cheat code in making people choose one thing over another simply through aesthetics. But another aspect of typography is legibility. If you want people to read text, and you do (and maybe even have them feel a certain way while reading it) then you’re very concerned with legibility.

Mary C. Dyson has a whole new book on this: Legibility. It’s certainly a book-worthy topic, as Mary makes clear in an early chapter:

Within typographic and graphic design, we might consider whether signs are legible (in particular from a distance), whether we can decipher small print (especially later in life), if icons can be easily identified or recognised (without text labels), if a novel or textbook is set in a readable type (encouraging us to read on). These questions emphasise that it is not only the physical characteristics of the text or symbol that need to be considered in determining whether or not the designs are legible, or how legible they are. The purpose for reading, the context of reading, and the characteristics of the reader also determine legibility.

My mind goes: pick fonts that are obviously readable, be generous with line height, don’t make the line length too long, and go big (but not too big). But that’s like legibility 101, and there is a lot more to consider, and a lot more depth to those basics.


Where do you actually go to find fresh fonts? I wish I had a perfect answer for you, but there are hundreds of font foundries with individual websites that all do things differently. My best advice is to bookmark them when you come across them, and when it’s time to pick fonts, make plenty of time to go window shopping.

Here’s one to save for sure though, because although I’m usually quite happy to pay for fonts, not every project has that in the budget, so free is what is needed. Google Fonts, as ever, has a lot of potential there, but is more limited than you might think in the greater world of fonts. OK here it is for real: Collletttivo.

Collletttivo is an Open-Source type foundry and a network of people promoting the practice of type design through mutual exchange and collaboration.

It’s a pretty darn nice group of typefaces already.


I bet you know there are some generic keywords for fonts in CSS already, like serif and sans-serif. More recently, we’ve gotten keywords like system-ui, which is supposed to pick whatever font the operating system uses primarily (which is awesome). There are more in that vein:

font-family: system-ui;
font-family: ui-serif;
font-family: ui-sans-serif;
font-family: ui-monospace;
font-family: ui-rounded;

There is now discussion in the W3C for more generic font families, a lot of which is centered around fonts for non-Latin languages. I think that’s a fantastic idea. Imagine how disappointing it would be to choose a custom font for a non-Latin language, have there be some problem in loading it, and have the next font down the list not support the language you need.

These new generic font choices have practical consequences and apply in situations where your browser could cause readers problems if it falls back to a random font: either because different fonts are conventionally used to distinguish one part of text from another (eg. headings from body text), or because the text may become unreadable with the wrong font (eg. non-nastaliq styles in Kashmir). It’s more than just presentational preferences.


Variable fonts: still cool.

Mandy Michael resurrected her site with the perfect URL: https://variablefonts.dev/

I’m tempted to say that variable fonts didn’t hit as hard as I thought they would when they came out. But… I might be wrong about that. They are supported across the board on the web. There are tons of them. Their support in design tools is pretty darn good. There are lots of good resource sites like Mandy’s. People generally know about them and think they are a good idea. So that’s all pretty darn good. I just feel like I don’t see them in use a ton. The biggest strike against them is how big the files tend to be, and I think that scares people off.


How about we end with an actual font: Playpen Sans! It’s like a classy version of Comic Sans. I think it’s both more legible (a feat, as Comic Sans is already super legible) and more fun. I really like how there are a ton of alternate glyphs for each letter that automatically activate, meaning it actually looks like handwriting a lot more than if there is only one like most fonts. Plus it’s FREE so that rules.

Reminds me of Comic Code (which we offer as a code font family on CodePen) and all the variations of the Inkwell family.

Chris’ Corner: Swinging For It

New year, new local code editor? It’s maybe worth a peek at Zed, at least. They do a good job in the one-sentence pitch:

Zed is a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter.

All tech things have to be fast, so check. No shade either, speed is vital in all things tech, especially something you use heavily all day. Their main competition is clearly VS Code, which already gets some flak for slowness (feels fast to me 🤷), so leaning into a comparison that shows them some 4x faster on startup is sensible marketing.

“Multiplayer”… eh. I’m skeptical about how much teams really care about this. VS Code did a whole “live share” thing a while back and I don’t think it struck much of a chord. I’m not against it as a feature, it’s just more like table stakes these days rather than a killer feature.

The last one… “from the creators of Atom and Tree-sitter” is a great pitch. People loved Atom. People are not happy Atom went away, so you get those people right out of the gate. Even if you didn’t use it, I think a lot of people respected it. Then Tree-sitter is this best-of-breed code parsing tool that all sorts of stuff uses (the new version of CodePen that will be out this year uses it heavily).

Will I actually switch? No idea. I’m always down to try new things. But I’ve written about this before, and I have my own criteria on switching code editors. For me, the new one needs to be able to behave essentially like the old one. If anything is too obnoxious, I’ll just switch back. If I can switch without terrible annoyances, then I’m happy to explore changes and different features and such.

And to keep me (and I think this is generally true, not just for me), it needs killer features. Maybe that is speed, but I don’t think the competition in this area is slow enough for that to be the big thing.

Maybe the AI stuff will be one of the killer features. AI is such a big thing these days, but VS Code has GitHub Copilot, which is great and a huge competitive advantage. Except… Zed supports GitHub Copilot! Nice move. And check out the video where they’ve integrated GPT-4 into the editor, and you invoke it by highlighting a block of text and typing a prompt. Very classy, I think.

It appears as if they are really making a swing for it with Zed and that’s good for everyone. Making a business out of it is going to be tricky too. Looks like they already have a team of 10. I can tell you that 10 world-class developers aren’t cheap. It doesn’t look like Zed is open source yet, so best guess is the plan is to make it paid. That’s tough in a world where VS Code is free (although Copilot is not). Panic makes it work with Nova though, so it’s not uncharted territory. I think developers are happy to throw bucks at tools that are even a little bit better, so I would bet on Zed doing pretty decently, myself. The good design of their landing page makes me feel like they have their heads screwed on straight.

Gotta love companies making a big swing on things. I feel like that’s what Arc has been doing since the beginning with their new web browser. They are constantly shipping and taking risks on big features. And it has all the hallmarks of big swings: some big hits, some big misses. I think Arc has massive potential as being the best web browser out there, but it seems like that’s not where they are headed. In a vague end-of-year where-we’re-headed video, they say they actually want to build a whole new computer. That seems like a wildly different task and set of skills needed, but hey, a big swing is a big swing. They are kind of “pre-money” so I guess it’s more of a pivot.

I feel like another company that is trying to make a swing for it in a crowded market this year is Bun. A brand new JavaScript runtime in a totally different language is a big endeavor. Competition in this space feels warranted and good for everyone. And also like the business model is just as nebulous as all the other companies mentioned here. There was some pretty pointed pushback about Bun, which both feels fair and like that’s exactly what you’d expect when you’re doing something new and bold.

Chris’ Corner: Switch

The “switch” is a pretty common design pattern on the web. It’s pretty much a checkbox. In fact, under the HTML hood, it really ought to be an <input type="checkbox"> or perhaps a <select> with just two <option>s (or a third if there is an indeterminate state).

But unfortunately, the web doesn’t give us any primitive that looks particularly switch-like, as in, some kind of knob that is flipped one way or the other. So we use CSS. For example, we hide the checkbox one way or another, making sure there is still a discoverable clickable area, then with the :checked selector, style something that looks switch-like.
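
The bare bones of that pattern, minus the pretty parts, as a sketch:

<label class="switch">
  <input type="checkbox">
  <span class="track"><span class="thumb"></span></span>
  Email notifications
</label>

<style>
  /* hide the checkbox visually but keep it clickable and focusable */
  .switch input {
    position: absolute;
    opacity: 0;
  }
  .switch .track {
    display: inline-block;
    width: 2.5em;
    height: 1.5em;
    border-radius: 1em;
    background: #ccc;
  }
  .switch .thumb {
    display: block;
    width: 1.1em;
    height: 1.1em;
    margin: 0.2em;
    border-radius: 50%;
    background: white;
    transition: translate 0.2s;
  }
  .switch input:checked + .track { background: seagreen; }
  .switch input:checked + .track .thumb { translate: 1em 0; }
</style>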

Here’s a very classic example.

Marcus Burnette nicely re-creating the iOS toggle look

I’m sure you could imagine using that for, say, toggling email notification settings on and off for some sort of app. We use them here on CodePen quite a bit, for stuff like toggling the privacy of a Pen. Or you might use a toggle to switch a site between dark mode and light mode.

Speaking of that, Aleksandr Hovhannisyan has a solid article about the struggles of a dark mode toggle. You’d think it would be pretty straightforward, but it really isn’t. Consider that users have system-level preferences in addition to your site-level preference, and you have to honor them in the proper order. Plus you have to set the controls properly as well as actually style the site accordingly, ideally without temporarily flashing the wrong colors (FART, the Flash of inAccurate coloR Theme). Aleksandr does a good job of it and links to other posts that have done a similarly good job. It’s always way more code than you want it to be, leading me to think browsers and standards could and should get more involved, but I also admit I don’t have a perfect idea on what they should do. Chrome has played with Auto Dark Mode, but it’s not clear how that trial went. (And speaking of Dark Mode, this gallery is pretty nicely done.)

Anyway, I was trying to talk about switches!

I saw Jen Simmons note that Safari is playing with a native switch. Here’s the HTML:

<input type="checkbox" switch>

Nice.

And here’s what it looks like by default:

No big surprise there! It’s the native iOS toggle come to life. It respects accent-color in CSS like other form controls, which is great. But better, it has really sensible pseudo elements you can grab and style. You get ::thumb and ::track elements (nice clear naming) plus ::before and ::after work on the element itself, so there are a lot of possibilities.

.custom-switch { }
.custom-switch::thumb { }
.custom-switch::track { }

.custom-switch:checked::thumb { }
.custom-switch:checked::track { }

.custom-switch:checked::after { }
.custom-switch:checked::before { }

Tim Nguyen has demos that do a good job of showing off the possibilities with clean readable CSS.

The best part of browsers providing this kind of thing for us, to me, is that now you don’t have to worry about dinking up the accessibility. Now, as long as you follow the normal HTML structure of a labelled checkbox in a form, you’re good. No worries about the way you hid the original checkbox screwing things up. You are taking visual control though, so do take care to make sure the checked and unchecked values are at least as clear as a checked or unchecked checkbox.

Chris’ Corner: Very Simple Web Components

Let’s do a Web Components issue this week. We gotta do it. As Scott Vandehey says: HTML Web Components Are Having a Moment. Scott links to a pretty healthy stew of other people’s writing where they have their own ah-ha moments where they can picture the usefulness of native Web Components in their own work.

I can feel it, as this has been creeping up on me as well. Here’s a line of thinking that might get you there:

  • You have a bit of HTML
  • You want to apply some functionality to it
  • So wrap it in a Web Component

Honestly I think you can compare it to jQuery style thinking. “Find something; do something” — except you’ve already found it, you just need to do it.

Now, before I seem to be reductionist, Web Components can get a lot more complicated. You can opt in to the Shadow DOM approach, which unlocks slots and more complex templating, as well as true encapsulation and such. But you don’t have to do that; you can level up to that as needed. The very basic use-case (and wow isn’t the web good at this?!) is useful, powerful, needs limited boilerplate, and no libraries or build steps.

This example is, uhh, pretty simple, but it’s an example of how my mind has flipped a bit. I wanted to make a box you could resize, locked to a 9:16 aspect ratio, that tells you its pixel dimensions. I need a little bit of HTML, a little bit of CSS, and a little bit of JavaScript. I don’t need to do that as a Web Component, but doing so is barely any more work, and it helps more tightly couple the HTML and JavaScript in a way that really helps communicate intention. Something in the shape of the sketch below.
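
(A rough sketch of that shape of component; the tag name and details here are made up for illustration, not my exact code.)

<size-reporter>
  <div class="box"></div>
  <output></output>
</size-reporter>

<script>
  customElements.define('size-reporter', class extends HTMLElement {
    connectedCallback() {
      const box = this.querySelector('.box');
      const output = this.querySelector('output');
      // the box itself is user-resizable via CSS, something like:
      // .box { resize: horizontal; overflow: hidden; aspect-ratio: 9 / 16; }
      new ResizeObserver(([entry]) => {
        const { width, height } = entry.contentRect;
        output.textContent = `${Math.round(width)} × ${Math.round(height)}px`;
      }).observe(box);
    }
  });
</script>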

I don’t have any grand plans, but I could now easily make this re-usable across, well, anywhere, by making the JavaScript importable and documenting the intended HTML to use. I should probably bundle the CSS too, but that’s a story for another day.


Let’s keep this whole “Web Components can be pretty darn simple” train rolling a little bit. In fact, I think it was Jim Nielsen who kicked the first domino on Web Components lately getting people thinking about them differently in Using Web Components on My Icon Galleries Websites.

Imagine a grid of icons. Jim wanted on-page controls, in the form of an <input type="range">, to control their size. By using a web component, he can insert that input only when the JavaScript executes, which is ideal. Then all the event listening on that range slider is also bundled into that web component. All the while, the HTML guts of the web component, which is what will render if JavaScript doesn’t, is still a perfectly serviceable grid of icons.

This is a nice approach (over stringing together a bunch of functions in a JS file) because it feels more encapsulated. All related logic and DOM manipulations are “under one roof”. Plus it works as a progressive enhancement: if the JS fails to load (or the user has it disabled), they can still navigate through different pages with lists of icons (and the <icon-list> component just works like a <div>). And if JS works, the <icon-list> component acts like a <div> with interactive super powers layered in.
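
A sketch of that shape of component (the details here are my guesses for illustration, not Jim’s actual code):

<icon-list>
  <span class="icon">★</span>
  <span class="icon">✿</span>
</icon-list>

<script>
  customElements.define('icon-list', class extends HTMLElement {
    connectedCallback() {
      // the control only exists if JavaScript actually runs
      const range = document.createElement('input');
      range.type = 'range';
      range.min = '16';
      range.max = '128';
      range.value = '48';
      range.addEventListener('input', () => {
        // drive icon size via a custom property, paired with CSS like:
        // icon-list .icon { font-size: var(--icon-size, 48px); }
        this.style.setProperty('--icon-size', `${range.value}px`);
      });
      this.prepend(range);
    }
  });
</script>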


Dave Rupert’s Web Component version of FitVids is even more straightforward. You probably have videos on your sites that are <iframe>s of stuff like YouTube and Vimeo. Just wrap ’em in <fit-vids> and call it a day.

<!-- import custom element -->
<script type="module" src="fit-vids.js"></script>

<!-- wrap embeds in fit-vids custom element -->
<fit-vids>
  <iframe src="https://youtube.com?v=123"></iframe>
</fit-vids>

If the JavaScript doesn’t load, all good, video is still there.

More, if somehow, say via a fetch request or the like, more videos get injected onto the page, there is no need to re-call any JavaScript to make them work. The way connectedCallback works, they’ll just do their thing automatically when they arrive in the DOM.


I’ll say I’m a fan of the “Light DOM Only” approach to Web Components. But there are some relatively big things you give up when you don’t use the Shadow DOM. Namely: slots. Slots are a pretty sweet mechanism for taking the “guts” HTML of a Web Component and slotting it properly into a likely-more-complicated bit of template HTML. This means that your fallback HTML can be simpler and possibly more appropriate.

Cory LaViska explains it well, here’s some usage:

<my-button>
  Click me
  <img slot="icon" src="..." alt="...">
</my-button>

Which gets smushed with:

<template>
  #shadow-root
    <button type="button">
      <span class="icon">
        <slot name="icon"></slot>
      </span>

      <span class="label">
        <slot></slot>
      </span>
    </button>
</template>

You can imagine how the <img> there makes it into the <slot> with a matching name. And then the rest of the content goes into the default slot.

Cory’s article is actually about extending this concept and building out HTML with as many slots as you need, as powered by attributes. It’s clever; go read it.


Speaking of parameters, that’s an approach that some web components take. Rather than any guts HTML at all, you say what you want of the component via attributes on the element. That’s a rather JavaScript-framework-y way of doing things. An example of this is <lite-youtube> which you use entirely with attributes:

<lite-youtube 
  videoid="ogfYd705cRs" 
  playlabel="Play: Keynote (Google I/O '18)"></lite-youtube>

That doesn’t prescribe light or shadow DOM, but it does mean that no-JavaScript means nothing rendered at all.

Unless you’ve figured out some kind of server-side rendering, which is another whole bag of peanuts.


So we’ve covered lots of options so far:

  1. All HTML needed provided in guts / Some HTML provided (likely fallback or designed for slots) / No HTML provided
  2. Light DOM / Shadow DOM
  3. Elegant Fallback / Some Fallback / No Fallback
  4. Attribute-Heavy / Attribute-Light / No Attributes

And there are more:

  1. Use Framework / No Framework
  2. Lazy Load / Eager Load
  3. Client-Side Render / Server-Side Render (via Build Step)
  4. Styles via external stylesheet / Styles baked into JavaScript / No Styles

The list could go on.

Zach Leatherman attempted to simplify the taxonomy down to HTML Web Components and JavaScript Web Components. I think that’s smart to try to smash down the complexity here, but also think that’s maybe too far. Someone could probably make a decision tree that paints a middle ground taxonomy.