Stay alert

A few days ago, Chris wrote up his thoughts about how alert(), confirm(), and prompt() were being deprecated by Chrome and collected a bunch of thoughts from developers. The fact that a major browser can essentially turn off long-standing features got a lot of folks worrying about the predictability of the web.

On that note, I really liked this note by Richard Harris:

We can’t normalise the attitude that collateral damage is the price of progress, even if we accept the premise — which I don’t — that removing APIs like alert represents progress. For all its flaws, the web is generally agreed to be a stable platform, where investments made today will stand the test of time. A world in which websites are treated as inherently transient objects, where APIs we commonly rely on today could be cast aside as unwanted baggage by tomorrow’s spec wranglers, is a world in which the web has already lost.

This specific bit of drama isn’t of much interest to me, I must admit. But! I think it brings up a super important distinction between software and the web. Here’s a story.

The other day I was faffing about with Astro (which I like a lot). I was rebuilding my personal site with it and I decided — in a spark of punk rock-ness — to update to the latest version of it. I thought perhaps it might make my build process a bit quicker and give me a chance to explore new features. But alas — everything broke. APIs had been deprecated! My build process broke! Everything crumbled down around me.

This isn’t me dunking on Astro. I love it, still. But it’s important to remember that Astro isn’t the web. Neither is React or any other framework, really. Those teams can feel free to deprecate things, improve things as much as they want. They can burn it all to the ground and start again. But stuff like alert(), old CSS features, and HTML elements aren’t in the same category. They can’t be deprecated in the same way because, as Jeremy said, the web needs to be predictable. And we can’t treat the web like plain ol’ software because no one team or individual owns those features.

Here’s the gist of my rant: alert() and confirm() aren’t features of Chrome, but of the web. But I fear that’s how a lot of folks might think about them.

This is also why standards are so important! Talking about new features in public lets us fix all the bugs and answer all the questions before a new feature ships onto this platform where you can’t just delete it when you realize you goofed up. I’m not even really dunking on Chrome here either, but this distinction between software and the open web is an important one to make. Right?




Choice Words about the Upcoming Deprecation of JavaScript Dialogs

It might be the very first thing a lot of people learn in JavaScript:

alert("Hello, World");

One day at CodePen, we woke up to a ton of customer support tickets about their Pens being broken, which ultimately boiled down to a Chrome release that ripped out alert() from cross-origin iframes, along with all the other native “JavaScript Dialogs” like confirm(), prompt(), and I-don’t-know-what-else (onbeforeunload?, .htpasswd-protected assets?).

Cross-origin iframes are essentially the heart of how CodePen works. You write code, and we execute it for you in an iframe that doesn’t share the same domain as CodePen itself, as the very first line of security defense. We didn’t hear any heads up or anything, but I’m sure the plans were on display.

I tweeted out of dismay. I get that there are potential security concerns here. JavaScript dialogs look the same whether they are triggered by an iframe or not, so apparently it’s confusing-at-best when they’re triggered by an iframe, particularly a cross-origin iframe where the parent page likely has little control. Well, outside of, ya know, a website like CodePen. Chrome cites performance concerns as well, as the nature of these JavaScript dialogs is that they block the main thread when open, which essentially halts everything.

There are all sorts of security and UX-annoyance issues that can come from iframes though. That’s why sandboxing is a thing. I can do this:

<iframe sandbox></iframe>

And that sucker is locked down. If some form tried to submit something in there: nope, won’t work. What if it tries to trigger a download? Nope. Ask for device access? No way. It can’t even load any JavaScript at all. That is unless I let it:

<iframe sandbox="allow-scripts allow-downloads ...etc"></iframe>

So why not an attribute for JavaScript dialogs? Ironically, there already is one: allow-modals. I’m not entirely sure why that isn’t good enough, but as I understand it, nuking JavaScript dialogs in cross-origin iframes is just a stepping stone toward the ultimate goal: removing them from the web platform entirely.

Daaaaaang. Entirely? That’s the word. Imagine the number of programming tutorials that will just be outright broken.

For now, even the cross-origin removal is delayed until January 2022, but as far as we know this is going to proceed, and then subsequent steps will happen to remove them entirely. This is spearheaded by Chrome, but the status reports that both Firefox and Safari are on board with the change. Plus, this is a specced change, so I guess we can waggle our fingers literally everywhere here, if you, like me, feel like this wasn’t particularly well-handled.

From what we’ve been told so far, the solution is to use postMessage if you really, absolutely need to keep this functionality for cross-origin iframes. That sends the string the user passes to window.alert up to the parent page, which triggers the alert from there (something like the sketch after this list). I’m not the biggest fan here, because:

  1. postMessage is not blocking like JavaScript dialogs are. This changes application flow.
  2. I have to inject code into users’ code for this. That’s new technical debt, and it can break expectations about the output (e.g. an extra <script> in their HTML has weird implications, like changing what :nth-child and friends select).
  3. I’m generally concerned about passing anything user-generated to a parent to execute. I’m sure there are theoretical ways to do it safely, but XSS attack vectors are always surprising in their ingenuity.
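
To make that concrete, here’s a rough sketch of the kind of shim this implies. The message shape is made up, and real code would check origins on both ends rather than using "*":

// Inside the cross-origin iframe: forward dialog calls to the parent
window.alert = function (message) {
  window.parent.postMessage({ type: "alert", message: String(message) }, "*");
};

// On the parent page: listen for those messages and show the dialog there
window.addEventListener("message", function (event) {
  // Real code should verify event.origin against the expected iframe origin
  if (event.data && event.data.type === "alert") {
    window.alert(event.data.message);
  }
});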

Even lower-key suggestions, like window.alert = console.log, have essentially the same issues.

Allow me to hand the mic over to others for their opinions.

Couldn’t the alert be contained to the iframe instead of showing up in the parent window?

Jaden Baptista, Twitter

Yes, please! Doesn’t that solve a big part of this? While making the UX of these dialogs more useful? Put the dang dialogs inside the <iframe>.

“Don’t break the web.” to “Don’t break 90% of the web.” and now “Don’t break the web whose content we agree with.”

Matthew Phillips, Twitter

I respect the desire to get rid of inelegant parts [of the HTML spec] that can be seen as historical mistakes and that cause implementation complexity, but I can’t shake the feeling that the existing use cases are treated with very little respect or curiosity.

Dan Abramov, Twitter

It’s weird to me this is part of the HTML spec, not the JavaScript spec. Right?!

I always thought there was a sort of “prime directive” not to break the web? I’ve literally seen web-based games that used alert as a “pause”, leveraging the blocking nature as a feature. Like: <button onclick="alert('paused')">Pause</button>[.] Funny, but true.

Ben Lesh, Twitter

A metric was cited that only 0.006% of all page views contain a cross-origin iframe that uses these functions, yet:

Seems like a misleading metric for something like confirm(). E.g. if account deletion flow is using confirm() and breaks because of a change to it, this doesn’t mean account deletion flow wasn’t important. It just means people don’t hit it on every session.

Dan Abramov, Twitter

That’s what’s extra concerning to me: alert() is one thing, but confirm() literally returns true or false, meaning it is a logical control structure in a program. Removing that breaks websites, no question. Chris Ferdinandi showed me a little obscure website that uses it for exactly that.
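
To make the control-flow point concrete, here’s a hedged sketch of the kind of pattern that quietly breaks if confirm() goes away (the function name and endpoint are made up):

async function deleteAccount() {
  // confirm() blocks and returns a boolean, so it acts as a branch in the program
  if (!confirm("Really delete your account? This cannot be undone.")) {
    return; // the user backed out
  }
  await fetch("/api/account", { method: "DELETE" }); // hypothetical endpoint
}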

Speaking of Chris:

The condescending “did you actually read it, it’s so clear” refrain is patronizing AF. It’s the equivalent of “just” or “simply” in developer documentation.

I read it. I didn’t understand it. That’s why I asked someone whose literal job is communicating with developers about changes Chrome makes to the platform.

This is not isolated to one developer at Chrome. The entire message thread where this change was surfaced is filled with folks begging Chrome not to move forward with this proposal because it will break all-the-things.

Chris Ferdinandi, “Google vs. the web”

And here’s Jeremy:

[…] breaking changes don’t happen often on the web. They are—and should be—rare. If that were to change, the web would suffer massively in terms of predictability.

Secondly, the onus is not on web developers to keep track of older features in danger of being deprecated. That’s on the browser makers. I sincerely hope we’re not expected to consult a site called canistilluse.com.

Jeremy Keith, “Foundations”

I’ve painted a pretty bleak picture here. To be fair, there were some tweets with the Yes!! Finally!! vibe, but they didn’t feel like critical assessments to me as much as random Google cheerleading.

Believe it or not, I generally am a fan of Google and think they do a good job of pushing the web forward. I also think it’s appropriate to waggle fingers when I see problems and request they do better. “Better” here means way more developer and user outreach to spell out the situation, way more conversation about the potential implications and transition ideas, and way more openness to bending the course ahead.



Web Features That May Not Work As You’d Expect

As the web gets more and more capable, developers are able to make richer online experiences. There are times, however, when some new web capabilities may not work as you would expect, in the interest of usability, security, and privacy.

I have run into situations like this. Like lazy loading in HTML. It’s easy to drop that attribute onto an image element only to realize… it actually needs more than that to do its thing. We’ll get into that specific one in a moment as we look at a few other features that might not work exactly as you’d expect.

You can’t detect the styles of visited links

This limitation has been around for a while, but it does show how browser features can be exploited. One possible exploit: give an anchor some :visited link styling in CSS and position it off screen. With the off-screen anchor, JavaScript could change the anchor’s href value and check whether a particular href makes the link appear visited, reconstructing a user’s history in the process.

Known as the CSS History Leak, this was so pervasive at one time that the Federal Trade Commission, the United States’ consumer protection agency, had imposed harsh fines for exploiting it.

These days, attempting to use getComputedStyle on a :visited link returns the style of the unvisited (:link) link instead. That’s just one of those things you have to know because that’s different from how it intuitively ought to work.
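
A quick sketch of what that looks like in practice (assuming the page has at least one plain link on it):

const link = document.querySelector("a");
const style = getComputedStyle(link);
// Even if the user has visited this link and :visited styles give it another color,
// the reported value is the unvisited (:link) color
console.log(style.color);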

There are two ways you could try to get around this, but neither of them works:

  1. make the visited link’s style trigger a side effect (e.g. a layout shift), or
  2. leverage the sibling (~ or +) or child (>) CSS selectors to render another style.

Regarding side effects, while there are some clever yet fragile ways to do this, the options we have for styling :visited links are limited and some styles (like background-color) will only work if they’re applied to unvisited links. As for using a sibling or child, executing getComputedStyle on these returns the style as if the link wasn’t visited to begin with.

Browsers don’t cache assets across sites anymore

One advantage of CDNs used to be that they allowed a particular resource (like a Google Font) to be cached in the browser once and reused across different websites. While that provided a big performance win, it has grave privacy implications.

Given that an asset that’s already cached will load faster than one that’s not, a site could perform a timing attack to not only see your browsing history but also expose both who you are and your online activity. Jeff Kaufman gives an example:

Unfortunately, a shared cache enables a privacy leak. Summary of the simplest version:

  • I want to know if you’re a moderator on www.forum.example.
  • I know that only pages under www.forum.example/moderators/ load www.forum.example/moderators/header.css.
  • When you visit my page I load www.forum.example/moderators/header.css and see if it came from cache.

In light of this, browsers don’t offer a shared cross-site cache anymore; cached assets are now partitioned by site.
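
The gist of the attack in that summary is just timing a request, something along these lines (the URL comes from the quoted example and won’t resolve outside of it, and the threshold is arbitrary):

const url = "https://www.forum.example/moderators/header.css";
const start = performance.now();
fetch(url, { mode: "no-cors" }).then(() => {
  const duration = performance.now() - start;
  // A suspiciously fast response used to suggest a cache hit from another site.
  // With partitioned caches (and coarser timers), this no longer tells you much.
  console.log(duration < 10 ? "probably cached" : "probably not cached");
});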

performance.now() may be inaccurate

A scary group of vulnerabilities came out a couple of years ago, one of which was called Spectre. For an in-depth explanation, see Google’s leaky.page proof of concept (works best in Chromium). But for the purposes of this article, just know that the exploit relies on getting highly accurate timing, which is something that performance.now() provides, to try and map sensitive CPU data.

[Screenshot: the demo at leaky.page]

To mitigate Spectre, browsers have reduced the accuracy of performance.now() and may add noise as well. The clamped resolutions range from 20μs to 1ms and can change based on various conditions, like HTTP headers and browser settings.
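
You can get a rough feel for the clamping by probing for the smallest non-zero step the timer will report, something like:

// Sample performance.now() repeatedly and record the smallest non-zero jump,
// which approximates the timer's current resolution in this browser
let previous = performance.now();
let smallestStep = Infinity;
for (let i = 0; i < 100000; i++) {
  const now = performance.now();
  if (now > previous) {
    smallestStep = Math.min(smallestStep, now - previous);
    previous = now;
  }
}
console.log(smallestStep === Infinity ? "no tick observed" : `~${smallestStep}ms resolution`);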

Lazy loading with the loading attribute doesn’t work without JavaScript

Lazy loading is a technique where assets are only loaded by the browser when they scroll into the viewport. Until recently, we could only implement this in JavaScript using IntersectionObserver or an onscroll handler. Now, except in Safari, we can apply the loading attribute to images (and to iframes in Chromium) and the browser will handle lazy loading for us.
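
The attribute itself is a one-liner; the image path and dimensions here are just placeholders:

<!-- The browser defers this request until the image is close to the viewport -->
<img src="photo.jpg" loading="lazy" width="400" height="300" alt="A description of the photo">

Including width and height is simply good practice so the page doesn’t shift around when the image eventually loads.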

Note that lazy loading can’t be polyfilled since an image is probably loading by the time you check for the loading attribute’s support.

Being able to do this in HTML makes it sound like the attribute doesn’t require JavaScript at all, but it does. From the WHATWG spec:

  1. If scripting is disabled for an element, return false.

    Note
    This is an anti-tracking measure, because if a user agent supported lazy loading when scripting is disabled, it would still be possible for a site to track a user’s approximate scroll position throughout a session, by strategically placing images in a page’s markup such that a server can track how many images are requested and when.

I’ve seen articles mention that this attribute is how you support lazy loading “without JavaScript” which isn’t true, though it is true you don’t have to write any.

Browsers can limit features based on user preferences

Some users might opt to heavily restrict browser functionality in the interest of further security and privacy. Firefox and Tor are two browsers that do this through the resist fingerprinting setting, which does things like reducing the precision of certain values (dimensions and time), omitting certain values entirely, limiting or disabling some Web APIs, and never matching certain media queries. WebKit has a document outlining how browsers can approach fingerprint resistance.

Note that this goes beyond the standard anti-tracking features that browsers implement. It’s unlikely that a user will enable this as they would need a very specific threat model to do so. Part of this can be countered with progressive enhancement, graceful degradation, and understanding your users. This limitation is a big issue when you actually need fingerprinting, like fraud detection. So, if it’s absolutely necessary, look for an alternative means.

Screen readers might not relay the semantics of certain elements

Semantic HTML is great for many reasons, most notably that it conveys meaning in markup that software, like screen readers, interpret and announce to users who rely on them to navigate the web. It’s essential for crafting accessible websites. But, at times, those semantics aren’t conveyed—at least how you might expect. Something might be accessible, but still have usability issues.

An example is the way removing a list’s markers removes its semantic meaning in WebKit with VoiceOver enabled. It’s a very common pattern, most notably for site navigation. Apple Accessibility Standards Manager James Craig explains why it’s a usability issue, though, citing the W3C’s Design Principle of Priority of Constituents:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors;
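
For context, the pattern James is describing looks roughly like this. It’s a sketch, and role="list" is the commonly suggested way to explicitly give the semantics back:

<style>
  /* Removing the markers is what can cause WebKit/VoiceOver to stop treating this as a list */
  .site-nav { list-style: none; margin: 0; padding: 0; }
</style>

<ul class="site-nav" role="list">
  <li><a href="/">Home</a></li>
  <li><a href="/about">About</a></li>
</ul>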

Another case where semantics might not be relayed is emphasis. Take inline elements like strong, em, mark, ins, del, and data: all elements that have semantic meaning but are unlikely to be read out because they can get noisy. This can be changed in a user’s screen reader settings, but if you really want the semantics to be announced, you can declare them as visually hidden text in the content property of a ::before or ::after pseudo-element.
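
A minimal sketch of that pseudo-element idea, where the announced wording is entirely made up:

/* Generated content is generally announced by screen readers,
   so this makes deletions and insertions explicit */
del::before { content: " [start of deleted text] "; }
del::after  { content: " [end of deleted text] "; }
ins::before { content: " [start of inserted text] "; }
ins::after  { content: " [end of inserted text] "; }

If you don’t want that text showing up visually, you’d pair it with a visually hidden technique applied to the pseudo-elements themselves.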

To illustrate this I made a brief example to see how NVDA with Firefox 89 and VoiceOver with Safari 14.6 read out semantic elements.

Unlike VoiceOver, NVDA reads out some of the semantic elements (del, ins and mark) and tries to convey emphasis by gradually increasing the volume of the emphasized text. Both of them read out the ::before/::after pseudo-elements without any trouble, however. Also, VoiceOver read out the tags’ brackets (greater than, less than), though both screen readers let you change how much punctuation is read.

To see whether or not you need to emphasize the emphasis, make sure you test with your users and see what they need. I didn’t focus on the visual aspect but the default styling of emphasis elements may be inconsistent across browsers, so make sure you provide suitable styling to go along with it.

Web storage might not be persistent

The WHATWG Web storage specification includes a section on privacy that outlines possible ways to prevent storage from being a tracking vector. One such way is to make the data expire. This is why Safari controversially limits script-writable storage to seven days. Note that this doesn’t apply to “installed” websites added to the home screen.
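
If long-lived data matters to your app, the StorageManager API at least lets you ask the browser not to evict it, though the browser is free to say no (and Safari’s seven-day rule is its own thing). A small sketch:

// Ask the browser to treat this origin's storage as persistent
if (navigator.storage && navigator.storage.persist) {
  navigator.storage.persist().then((granted) => {
    // true means the browser intends not to clear this origin's storage under pressure
    console.log(granted ? "storage marked persistent" : "storage may still be cleared");
  });
}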

Conclusion

Interesting, isn’t it? Some web features that we might expect to work a certain way just don’t. That isn’t to say that the features are wrong and need to be fixed, but more of a heads up as we write code.

It’s worth examining your own assumptions during development. Critically examine what your users need and factor it in as you make your site. You’re certainly welcome to work around these behaviors as you encounter them, but in cases where you’re unable to, make sure to find and provide reasonable progressive enhancement and graceful degradation. It’s OK if users don’t experience a website the exact same way in every browser as long as they’re able to do what they need to.

That’s my list of things that don’t work the way I expect them to. What’s on your list? I’m sure you’ve got some and I’d love to see them in the comments!



OAuth2 Bearer Token Usage

I have immersed myself in the digital identity space for the past few years. A good chunk of this work involves reading (and sometimes creating) specifications, as you can imagine. It is critical that they be written in such a way that two independent parties can build interoperable implementations without relying on each other’s code. With this in mind, let’s have a brief chat about OAuth2 Bearer Token Usage with a focus on the token’s encoding.

But first, let’s have a brief talk about what OAuth2 is.

Blue Beanie Day 2019

November 30th, the official "Blue Beanie Day," has come and gone. I'm not sure I ever grokked the exact spirit of it, but I've written about what it means to me. Last year:

Web standards, as an overall idea, has entirely taken hold and won the day. That's worth celebrating, as the web would be kind of a joke without them. So now, our job is to uphold them. We need to cry foul when we see a browser go rogue and ship an API outside the standards process.

Building off that, I'd add the need to prevent browsers from breaking things that have worked for half a decade, like iOS 13 did with CSS parallax.

Zeldman, the Blue Beanie Day champion, isn't particularly optimistic:

Mainly, though, Blue Beanie Day is receding from view because our industry as a whole thinks less and less about accessibility (not that we ever had an A game on the subject), and talks less and less about progressive enhancement, preferring to chase the ephemeral goal posts of over-engineered solutions to non-problems.

It's people who won the browser wars, so I suppose it's fitting that people are going to let it slip away.


iOS 13 Broke the Classic Pure CSS Parallax Technique

I know. You hate parallax. You know what we should hate more? When things that used to work on the web stop working without any clear warning or idea why.

Way back in 2014, Keith Clark blogged an exceptionally clever CSS trick where you essentially use a CSS transform to scale an element down, affecting how it appears to scroll, then "depth correct" it back to "normal" size. Looks like we got five years of fun out of that, but it's stopped working in iOS 13.
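
If you haven't seen the technique, the gist (as I understand it from Keith's post; the class names are illustrative) is a perspective-scrolling container plus a depth-corrected layer:

/* The scrolling container establishes a 3D perspective */
.parallax {
  perspective: 1px;
  height: 100vh;
  overflow-x: hidden;
  overflow-y: auto;
}

/* Push a layer back in 3D space so it appears to scroll slower,
   then scale it back up so it still renders at "normal" size */
.parallax__layer--back {
  transform: translateZ(-1px) scale(2);
}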

Here's a video of official simulators and the problem:

I'd like to raise my hand for un-breaking this. If we don't watchdog for the web, everything will suffer.


Some HTML is “Optional”

There is a variety of HTML that you can just leave out of the source and it's still valid markup.

Doesn't this look weird?

<p>Paragraph one.
<p>Paragraph two.
<p>Paragraph three.

It does to me, but the closing </p> tags are optional. The browser will detect that it needs them and build the DOM correctly anyway.

This probably happens to HTML you write and you don't even know it. For example...

<table>
  <tr>
    <td></td>
  </tr>
</table>

That looks perfectly fine to me, but the browser will inject a <tbody> in there around that <tr> for you. Optional in HTML, but the DOM will put it in anyway.

Heck, you don't really even need a <body> in the same fashion! Jens Oliver Meiert shares more:

<link rel=stylesheet href=default.css>

Some attributes are "optional" too in the sense that they have defaults you can leave out. For example, a <button> is automatically <button type="submit">.

Jens further argues that these can almost be considered optimizations, as leaving things out reduces file size and thus transfer time over the network.

Me, I don't like looking at HTML like that. Makes me nervous, since there are actual situations that screw up if you don't do it right. Not all file names can be left unquoted. Sometimes, leaving off closing tags means enveloping a sibling element in a way you didn't expect. I'd even sacrifice a tiny smidge of performance for a more resilient site. Sorta like how I know that * {} isn't a particularly efficient selector, but worrying about CSS selector performance is misplaced worry in most cases (the speed difference is negligible).
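
Here's the kind of enveloping surprise I mean, sketched out (whether it bites you depends on what element follows):

<!-- Without a closing </p>, the <em> ends up inside the paragraph in the DOM,
     not as a sibling, because </p> is only implied before certain elements -->
<p>First paragraph
<em>Meant to be a sibling</em>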

I actually quite like JSX in how strict it forces you to write "HTML." That strictness helps code formatting (e.g. Prettier) too, as a bonus.

But hey, a perf gain is a perf gain, so I wouldn't say no to tooling that automatically does this stuff to compiled output. That's apparently something HTMLminifier can do.
