Excalidraw and Project Fugu 🐡 at Google I/O

Google I/O 2020, like all the I/O conferences before, was planned as a physical event. But then the coronavirus struck, and I/O 2020 was the I/O that never was. In 2021, we had enough time to plan, so I/O 2021 was the first virtual event in the series.

The team outdid themselves and recreated the entire experience as a virtual game. As Ars Technica wrote, Google's I/O Adventure was almost as good as being there. To get a feel for it, here's the official teaser video. During the event, you could bump into Googlers and talk to them, almost like in the real world. Below, you can see a team photo we took at the obligatory lighthouse. Can you spot me?

Google I/O Adventure team photo.

Together with @lipis, I had the pleasure of giving a talk titled Excalidraw and Fugu: Improving Core User Journeys. You can watch the talk in the video embed below, or read my video write-up over on web.dev.

I also created a codelab that covers a lot of the Project Fugu 🐡 APIs. You can work your way through it at your own pace, or if you want, re-join me in a virtual workshop session where you see me do it—and run into some minor service worker caching issues… 😅

Google I/O 2021 was, in my opinion, a good event that worked well enough under the circumstances and within the constraints of the virtual format. With I/O in the books, we're looking at what to do about events in the future. Virtual, physical, or hybrid? The planning phase for Chrome Dev Summit 2021 has already started… Stay tuned, and always root for Team Web!

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2021/06/01/excalidraw-and-project-fugu-at-google-io/.

<ruby> HTML footnotes

It is sometimes surprising to me to see what kind of use cases HTML has a dedicated element for. Something that comes to mind is <output>, a container element into which a site or app can inject the results of a calculation or the outcome of a user action. For another use case that is arguably more common, and which is also the topic of this blog post, HTML has nothing specific to offer: footnotes. (Footnotes are notes at the foot of the page, while endnotes are collected under a separate heading at the end of a chapter, volume, or entire work. Unlike footnotes, endnotes have the advantage of not affecting the layout of the main text, but they may inconvenience readers who have to move back and forth between the main text and the endnotes.)

Footnotes in HTML, then and now

Despite several proposals to deal with footnotes at the language level, the HTML 3.0 draft was the last version of HTML to offer the FN element. It was designed for footnotes, and, when practical, footnotes were to be rendered as pop-up notes. You were supposed to use the element as in the code sample below (the inconsistent character casing appears like this in the source [sic]).

You should not have believed me, for virtue cannot so
<a href="#fn1">inoculate</a> our old stock but we shall
<a href="#fn2">relish of it</a>. I loved you not.

<DD>I was the more deceived.</DD>

Get thee to a nunnery. Why wouldst thou be a breeder of sinners? I am myself
<a href="#fn3">indifferent honest</a> ...

<fn id="fn1"><i>inoculate</i> - graft</fn>
<fn id="fn2"><i>relish of it</i> - smack of it (our old sinful nature)</fn>
<fn id="fn3"><i>indifferent honest</i> - moderately virtuous</fn>

The current HTML Living Standard (snapshot from January 22, 2021) remarks that HTML does not have a dedicated mechanism for marking up footnotes and recommends the following options for footnotes. For short inline annotations, the title attribute could be used.

<p><b>Customer</b>: Hello! I wish to register a complaint. Hello. Miss?</p>
<p><b>Shopkeeper</b>:
<span title="Colloquial pronunciation of 'What do you'">Watcha</span> mean,
miss?</p>
<p><b>Customer</b>: Uh, I'm sorry, I have a cold. I wish to make a complaint.</p>
<p><b>Shopkeeper</b>: Sorry,
<span title="This is, of course, a lie.">we're closing for lunch</span>.</p>

Using title comes with an important downside, though, as the spec rightly notes.

Unfortunately, relying on the title attribute is currently discouraged as many user agents do not expose the attribute in an accessible manner as required by this specification (e.g. requiring a pointing device such as a mouse to cause a tooltip to appear, which excludes keyboard-only users and touch-only users, such as anyone with a modern phone or tablet).

For longer annotations, the a element should be used, pointing to an element later in the document. The convention is that the contents of the link be a number in square brackets.

<p>Announcer: Number 16: The <i>hand</i>.</p>
<p>Interviewer: Good evening. I have with me in the studio tonight Mr Norman St
John Polevaulter, who for the past few years has been contradicting people. Mr
Polevaulter, why <em>do</em> you contradict people?</p>
<p>Norman: I don't. <sup><a href="#fn1" id="r1">[1]</a></sup></p>
<p>Interviewer: You told me you did! ...</p>

<p id="fn1">
<a href="#r1">[1]</a> This is, naturally, a lie, but paradoxically if it
were true he could not say so without contradicting the interviewer and thus
making it false.
</p>

This approach is what most folks use today, for example, Alex Russell or the HTML export of Google Docs documents.

The ruby element

The other day, I came across a tweet by Michael Scharnagl, whose website and Twitter handle are aptly named Just Markup and who runs a Twitter campaign this year called #HTMLElementInATweet:

Day 22: <ruby>

Represents small annotations

ℹ️ The term ruby originated as a unit of measurement used by typesetters, representing the smallest size that text can be printed on newsprint while remaining legible.

— Michael Scharnagl (@justmarkup) January 22, 2021

I had heard about ruby in the past, but it was one of these elements that I tend to look up and forget immediately. This time, for some reason, I looked closer and even consulted the spec.

The ruby element allows one or more spans of phrasing content to be marked with ruby annotations. Ruby annotations are short runs of text presented alongside base text, primarily used in East Asian typography as a guide for pronunciation or to include other annotations. In Japanese, this form of typography is also known as furigana.

The rt element marks the ruby text component of a ruby annotation. When it is the child of a ruby element, it doesn't represent anything itself, but the ruby element uses it as part of determining what it represents.

You are supposed to use it like so.

<ruby> 明日 <rp>(</rp><rt>Ashita</rt><rp>)</rp> </ruby>

The MDN docs describe the ruby element as follows.

The HTML <ruby> element represents small annotations that are rendered above, below, or next to base text, usually used for showing the pronunciation of East Asian characters. It can also be used for annotating other kinds of text, but this usage is less common.

The term ruby originated as a unit of measurement used by typesetters, representing the smallest size that text can be printed on newsprint while remaining legible.

Hmm 🤔, this sounds like it could fit the footnotes use case. So I went and tried my luck in creating ruby HTML footnotes.

Using ruby for footnotes

The markup is straightforward: all you need is ruby for the footnote and rt for the footnote text. I like that the footnote is just part of the flow text, so I do not need to mentally switch context when writing. I also do not have to manually number my footnotes or come up with and remember the values of ids. Another small advantage is that footnotes are not part of copied text, so when you copy content from my site, you do not end up with "text [2] like this". The snippet below shows the markup of a footnote.

<body tabindex="0">
  Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec consectetur
  dictum fermentum. Vivamus non fringilla dolor, in scelerisque massa. Quisque
  mattis elit quam, eu hendrerit diam ultricies ut. Nunc sit amet velit
  posuere, malesuada diam in, congue diam. Integer quis venenatis velit. Donec
  quis nunc
  <ruby tabindex="0">
    vel purus
    <rt>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</rt>
  </ruby>
  maximus dictum. Sed nec tempus odio. Vestibulum et lobortis ante. Duis
  blandit pulvinar lectus non sollicitudin. Nulla non imperdiet diam. Fusce
  varius ultricies sapien id pretium. Praesent ut pellentesque massa. Nunc eu
  tellus hendrerit risus maximus porta. Maecenas in molestie erat.
</body>

The CSS to make the automatic footnote numbering work is based on a CSS counter. The rt is styled in a way that it is not displayed by default, and only gets shown when the ruby's :after, which holds the footnote number, is focused. For this to function properly, it is important to make the <ruby> element focusable by setting tabindex="0". On mobile devices, the body needs to be focusable as well, so the footnote can be closed again by clicking/tapping anywhere in the page. The rt element can contain phrasing content, so links and images are all fine. Another thing to remember is to make sure the rt element remains visible on :hover, so links can be clicked even when the ruby element loses focus. I have moved the CSS display value of rt into a CSS custom property, so I could easily play with different values. The CSS below is all it takes to make the footnotes work.

/* Behavior */

/* Set up the footnote counter. */
body {
  counter-reset: footnotes;
}

/* Make footnote text appear as `inline-block`. */
ruby {
  --footnote-display: inline-block;
}

/* Display the actual footnote [1]. */
ruby:after {
  counter-increment: footnotes;
  /* The footnote is separated with a thin space. 🤓 */
  content: ' [' counter(footnotes) ']';
}

/* Remove the focus ring. */
ruby:focus {
  outline: none;
}

/* Display the footnote text. */
ruby:focus rt {
  display: var(--footnote-display);
}

/* Hide footnote text by default. */
rt {
  display: none;
}

/*
 * Make sure the footnote text remains visible,
 * so contained links can be clicked.
 */
rt:hover {
  display: var(--footnote-display);
}
The following CSS snippet determines the look and feel of the footnotes.

/* Look and feel */

/* Footnote text styling. */
rt {
  background-color: #eee;
  color: #111;
  padding: 0.2rem;
  margin: 0.2rem;
  max-width: 30ch;
}

/* Images in footnote text styling. */
rt img {
  width: 100%;
  height: auto;
  display: block;
}

/* Footnote styling. */
ruby:after {
  color: red;
  cursor: pointer;
  font-size: 0.75rem;
  vertical-align: top;
}

Something I could not get to work (yet) is giving the rt element an absolute CSS position. I got the best results so far by making the rt an inline block via the CSS property --footnote-display: inline-block. I am well aware of ruby-align and ruby-position. The former does not have great browser support at the moment but seems relevant, and the latter seems to have no effect when I change the display value of rt to anything other than the UA stylesheet default, which is block. If you manage to get it to work such that footnote texts open inline, floating right under the footnote without affecting the surrounding paragraph text, your help would be very welcome. I also still need to look into supporting printable footnotes. If you are interested, you can reach me and discuss this idea on Twitter.


I have enabled ruby footnotes right on my blog (footnote in the original post: "This is the second footnote, the other is at the top."), but you can also play with a standalone demo on Glitch and remix its source code.

⚠️ Please note that this is not production ready. Support seems decent on Blink/WebKit-based browsers, but not so great on Gecko-based browsers like Firefox. I have opened an Issue with the CSS Working Group to hear their opinion on the idea.

Other approaches

The "standards nerd and technology enthusiast" Terence Eden proposed using details in a blog post titled A (terrible?) way to do footnotes in HTML. Next, Peter-Paul Koch, web developer, consultant, and trainer, runs a side project named The Thidrekssaga and footnotes, where for the current iteration of the site he just notes that his "implementation of footnotes is mostly shit". If you have yet another approach apart from what is listed here and above, please reach out, and I will be happy to add it. And as I wrote before, I am looking for help from CSS experts to position rt absolutely. Sorry for the nerd-snipe.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2021/01/24/ruby-html-footnotes/.

Submitting and Distributing a Safari App Extension

Safari 14 has added support for the Web Extensions standard, which I consider a clever move on Apple's side; it deprecates Safari's previous .safariextz-style extensions. While there is great documentation for creating a Safari Web Extension from scratch or for converting a Web Extension for Safari (and at least the outlines for converting a legacy Safari extension to a Safari app extension), the documented path currently ends at building and running the application. This post documents the steps for submitting and distributing a Safari App Extension.

The post assumes you already have an Xcode project either created manually or via the converter script and that you use Swift. Here is an example build script for one of my extensions for reference. Some of these steps may improve or change over time for new versions of Xcode; this guide was written for Version 12.0 (12A7209). Caveat: this is my first time interacting with Xcode, so if any of the steps do not make sense, thanks for correcting me.

  1. Change the bundle identifier of the extension from com.example.foo-Extension to com.example.foo.Extension (that is, replace the '-' with a '.') and reflect the change in ViewController.swift. For some reason, this is necessary.
  2. Change the App Category.
  3. Change the version number for app and extension.
  4. Update the build number in app and extension.
  5. Create a new certificate via your developer profile.
  6. Create a new app via App Store Connect.
  7. In Xcode, run Product > Build and then Product > Archive.
  8. In Xcode, open Window > Organizer and then first validate, then distribute (don't change any of the settings).
  9. Hope for the best…

I have successfully gone through the process with two extensions now.

(Thanks to Timothy Hatcher who has been very helpful in navigating me through the process.)

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2020/11/09/submitting-and-distributing-a-safari-app-extension/.

Learning from Mini Apps—W3C TPAC Breakout Session

After my W3C TPAC breakout session focused on Project Fugu last year, this year, too, I ran a breakout session, titled "Learning from Mini Apps", at the fully virtual TPAC 2020 event. In this breakout session, I first explained what mini apps are and how to build them, and then moved on to an open discussion focused on what web developers can learn from mini apps and their developer experience. The TPAC folks have done an ace (👏) job and have put all the resources from my session online (and everyone else's, of course).

General event recap

For a first-time virtual event, communication went really well. It felt like everyone had learned by now how to discuss in virtual rooms, and Zoom as the communication platform held up well. While I appreciate the W3C team's effort to replace hallway conversations, I didn't attend any of these slots. It just felt exhausting to do them on top of 11pm meetings or 7am slots; apart from the "just fine" afternoon slots (the "golden hour" is actually super friendly for people in the EU), time zones are hard.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2020/11/05/learning-from-mini-apps-w3c-tpac-breakout-session/.

Play the Chrome dino game on your Nintendo Switch

I have landed an article over on web.dev that talks about the Gamepad API. One piece of information from this article that our editorial board was not comfortable with having in there was instructions on how to play the Chrome dino game on a Nintendo Switch. So, here you go with the steps right here on my private blog instead.

The hands of a person playing the Chrome dino game on a Nintendo Switch.
Press any of the Nintendo Switch's buttons to play!

The Nintendo Switch contains a hidden browser, which is used for logging in to Wi-Fi networks behind a captive portal. The browser is pretty barebones and does not have a URL bar, but, once you have navigated to a page, it is fully usable. When doing a connection test in the system settings, the Switch will detect that a captive portal is present and display an error when the response for http://conntest.nintendowifi.net/ does not include the X-Organization: Nintendo HTTP header. You can make creative use of this by pointing the Switch to a DNS server that simulates a captive portal and then redirects to a search engine.
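The connection-test heuristic can be sketched as a tiny predicate (the function name is my own, not Nintendo's; this is a simplification of what the console actually does):

```javascript
// Hedged sketch of the Switch's connection test: the response
// from http://conntest.nintendowifi.net/ is expected to carry
// the `X-Organization: Nintendo` header. If it is missing, the
// console assumes a captive portal and opens its hidden browser.
const looksLikeCaptivePortal = (responseHeaders) =>
  responseHeaders['x-organization'] !== 'Nintendo';

// A spoofing DNS server answers the connection test itself and
// omits the header, which is what forces the browser open.
console.log(looksLikeCaptivePortal({})); // true
console.log(looksLikeCaptivePortal({'x-organization': 'Nintendo'})); // false
```

This is why swapping the Primary DNS is enough to reach the hidden browser: the connection test never sees Nintendo's real response.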

  1. Go to System Settings and then Internet Settings and find the Wi-Fi network that your Switch is connected to. Tap Change Settings.
  2. Find the section with the DNS Settings and add as a new Primary DNS. Note that this DNS server is not operated by me but by a third party, so proceed at your own risk.
  3. Save the settings and then tap Connect to This Network.
  4. The Switch will tell you that Registration is required to use this network. Tap Next.
  5. On the page that opens, make your way to Google.
  6. Search for "chrome dino tomayac". This should lead you to https://github.com/tomayac/chrome-dino-gamepad.
  7. On the right-hand side in the About section, find the link to https://tomayac.github.io/chrome-dino-gamepad/. Enjoy!
  8. 🚨 For regular Switch online services to work again, turn your DNS settings back to Automatic. Conveniently, the Switch remembers previous manual DNS settings, so you can easily toggle between Automatic and Manual.

For the Chrome dino gamepad demo to work, I have ripped out the Chrome dino game from the core Chromium project (updating an earlier effort by Arnelle Ballane), placed it on a standalone site, extended the existing gamepad API implementation by adding ducking and vibration effects, created a full screen mode, and Mehul Satardekar contributed a dark mode implementation. Happy gaming!

You can also play Chrome dino with your gamepad on this very site. The source code is available on GitHub. Check out the gamepad polling implementation in trex-runner.js and note how it emulates key presses.
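The core of that idea can be sketched like this (a simplified sketch, not the actual trex-runner.js code; the helper names are mine): poll the connected gamepads on every animation frame and, when any button is pressed, synthesize the keydown event the game already listens for.

```javascript
// Is any button on a Gamepad-like object currently pressed?
const anyButtonPressed = (gamepad) =>
  gamepad.buttons.some((button) => button.pressed);

// Poll on every animation frame and emulate a Space keydown,
// which makes the dino jump. (Browser-only; simplified sketch.)
const pollGamepads = () => {
  for (const gamepad of navigator.getGamepads()) {
    if (gamepad && anyButtonPressed(gamepad)) {
      document.dispatchEvent(
          new KeyboardEvent('keydown', {key: ' ', keyCode: 32}));
    }
  }
  requestAnimationFrame(pollGamepads);
};
// Start polling with: requestAnimationFrame(pollGamepads);
```

The nice property of emulating key presses is that the game itself needs no changes; the gamepad layer sits entirely on top of the existing keyboard handling.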

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2020/11/04/play-the-chrome-dino-game-on-your-nintendo-switch/.

My Working From Home Setup During COVID-19

Google, like many other companies, has a required working from home (WFH) policy during the COVID-19 crisis. It has taken me a bit, but now I have found a decent WFH setup.

The Hardware

My COVID-19 working from home setup

The Software

  • The Sidecar feature, so I can use my iPad Pro as a second screen with my MacBook Air. The coolest thing about this feature is that I can multitask it away (see the next bullet) without the laptop readjusting the screen arrangement.
  • The Hangouts Meet app on my iPad Pro, so my laptop performance stays unaffected when I am on a video call. A nice side-effect is that the camera of the iPad Pro is in the middle of my two screens, so no weird "looking over the other person" effect when I am on a call.
  • The Gmail app on my iPad Air, so I can always have an eye on my email.
  • (Honorable mention) The iDisplay app on the iPad Air, with the iDisplay server running on the laptop, so I can use the iPad Air as a third screen. Unfortunately, since I do not have another free USB-C port on my laptop, it is really laggy over Wi-Fi, but it works when I really need maximum screen real estate.
  • (Out of scope) The Free Sidecar project promises to enable Sidecar on older iPads like my iPad Air 2, since apparently Apple simply blocks older devices for no particular reason. It requires temporarily turning off System Integrity Protection, which is something I cannot (and do not want to) do on my corporate laptop.

The Furniture

  • A school desk—This is a desk we had bought earlier on eBay, placed on two kids' chairs to convert it into a standing desk.
  • Some shoe boxes to elevate the two main screens to eye height. I had quite some neck pain during the first couple of days.

It is definitely not perfect, but I am quite happy with it now. I very much want the crisis to be over, but (with the kids back in school), I could probably get used to permanently working from home.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2020/03/23/my-working-from-home-setup-during-covid-19/.

Multi-MIME Type Copying with the Async Clipboard API

Copying an Image

The Asynchronous Clipboard API provides direct access to read and write clipboard data. Apart from text, since Chrome 76, you can also copy and paste image data with the API. For more details on this, check out my article on web.dev. Here's the gist of how copying an image blob works:

const copy = async (blob) => {
  try {
    await navigator.clipboard.write([
      new ClipboardItem({
        [blob.type]: blob,
      }),
    ]);
  } catch (err) {
    console.error(err.name, err.message);
  }
};
Note that you need to pass an array of ClipboardItems to the navigator.clipboard.write() method, which implies that you can place more than one item on the clipboard (but this is not yet implemented in Chrome as of March 2020).

I have to admit, I only used to think of the clipboard as a one-item stack, so any new item replaces the existing one. However, for example, Microsoft Office 365's clipboard on Windows 10 supports up to 24 clipboard items.

Pasting an Image

The generic code for pasting an image, that is, for reading from the clipboard, is a little more involved. Also be advised that reading from the clipboard triggers a permission prompt before the read operation can succeed. Here's the trimmed down example from my article:

const paste = async () => {
  try {
    const clipboardItems = await navigator.clipboard.read();
    for (const clipboardItem of clipboardItems) {
      for (const type of clipboardItem.types) {
        return await clipboardItem.getType(type);
      }
    }
  } catch (err) {
    console.error(err.name, err.message);
  }
};
See how I first iterate over all clipboardItems (reminder: there can only be one in the current implementation), but then also iterate over all clipboardItem.types of each individual clipboardItem, only to then just stop at the first type and return whatever blob I encounter there. So far I hadn't really paid much attention to what this enables, but yesterday, I had a sudden epiphany 🤯.

Content Negotiation

Before I get into the details of multi-MIME type copying, let me take a quick detour to server-driven content negotiation, quoting straight from MDN:

In server-driven content negotiation, or proactive content negotiation, the browser (or any other kind of user-agent) sends several HTTP headers along with the URL. These headers describe the preferred choice of the user. The server uses them as hints and an internal algorithm chooses the best content to serve to the client.

Server-driven content negotiation diagram

Multi-MIME Type Copying

A similar content negotiation mechanism takes place with copying. You have probably encountered this effect before when you have copied rich text, like formatted HTML, into a plain text field: the rich text is automatically converted to plain text. (💡 Pro tip: to force pasting into a rich text context without formatting, use Ctrl + Shift + v on Windows, or Cmd + Shift + v on macOS.)

So back to content negotiation with image copying. If you copy an SVG image, then open macOS Preview, and finally click "File" > "New from Clipboard", you would probably expect an image to be pasted. However, if you copy an SVG image and paste it into Visual Studio Code or into SVGOMG's "Paste markup" field, you would probably expect the source code to be pasted.

With multi-MIME type copying, you can achieve exactly that 🎉. Below is the code of a future-proof copy function and some helper methods with the following functionality:

  • For images that are not SVGs, it creates a textual representation based on the image's alt text attribute. For SVG images, it creates a textual representation based on the SVG source code.
  • At present, the Async Clipboard API only works with image/png, but nevertheless the code tries to put a representation in the image's original MIME type into the clipboard, apart from a PNG representation.

So in the generic case, for an SVG image, you would end up with three representations: the source code as text/plain, the SVG image as image/svg+xml, and a PNG render as image/png.

const copy = async (img) => {
  // This assumes you have marked up images like so:
  // <img
  //     src="foo.svg"
  //     data-mime-type="image/svg+xml"
  //     alt="Foo">
  // Applying this markup could be automated
  // (for all applicable MIME types):
  // document.querySelectorAll('img[src*=".svg"]')
  //     .forEach((img) => {
  //       img.dataset.mimeType = 'image/svg+xml';
  //     });
  const mimeType = img.dataset.mimeType;
  // Always create a textual representation based on the
  // `alt` text, or based on the source code for SVG images.
  let text = null;
  if (mimeType === 'image/svg+xml') {
    text = await toSourceBlob(img);
  } else {
    text = new Blob([img.alt], {type: 'text/plain'});
  }
  const clipboardData = {
    'text/plain': text,
  };
  // Always create a PNG representation.
  clipboardData['image/png'] = await toPNGBlob(img);
  // When dealing with a non-PNG image, create a
  // representation in the MIME type in question.
  if (mimeType !== 'image/png') {
    clipboardData[mimeType] = await toOriginBlob(img);
  }
  try {
    await navigator.clipboard.write([
      new ClipboardItem(clipboardData),
    ]);
  } catch (err) {
    // Currently only `text/plain` and `image/png` are
    // implemented, so if there is a `NotAllowedError`,
    // remove the other representation.
    console.warn(err.name, err.message);
    if (err.name === 'NotAllowedError') {
      const disallowedMimeType = err.message.replace(
          /^.*?\s(\w+\/[^\s]+).*?$/, '$1');
      delete clipboardData[disallowedMimeType];
      try {
        await navigator.clipboard.write([
          new ClipboardItem(clipboardData),
        ]);
      } catch (err) {
        throw err;
      }
    }
  }
  // Log what's ultimately on the clipboard.
  console.log(clipboardData);
};

// Draws an image on an offscreen canvas
// and converts it to a PNG blob.
const toPNGBlob = async (img) => {
  const canvas = new OffscreenCanvas(
      img.naturalWidth, img.naturalHeight);
  const ctx = canvas.getContext('2d');
  // This removes transparency. Remove at will.
  ctx.fillStyle = '#fff';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(img, 0, 0);
  return await canvas.convertToBlob();
};

// Fetches an image resource and returns
// its blob of whatever MIME type.
const toOriginBlob = async (img) => {
  const response = await fetch(img.src);
  return await response.blob();
};

// Fetches an SVG image resource and returns
// a blob based on the source code.
const toSourceBlob = async (img) => {
  const response = await fetch(img.src);
  const source = await response.text();
  return new Blob([source], {type: 'text/plain'});
};

If you use this copy function (demo below ⤵️) to copy an SVG image, for example, everyone's favorite symptoms of coronavirus 🦠 disease diagram, and paste it in macOS Preview (that does not support SVG) or the "Paste markup" field of SVGOMG, this is what you get:

The macOS Preview app with a pasted PNG image.
The SVGOMG web app with a pasted SVG image.


Unfortunately, you can't play with this code in an embedded example yet, since webappsec-feature-policy#322 is still open. The demo works if you open it directly on Glitch.


Programmatic multi-MIME type copying is a powerful feature. At present, the Async Clipboard API is still limited, but raw clipboard access is on the radar of the 🐡 Project Fugu team that I am a small part of. The feature is being tracked as crbug/897289.

All that being said, raw clipboard access has its risks, too, as clearly pointed out in the TAG review. I do hope use cases like multi-MIME type copying that I have motivated in this blog post can help create developer enthusiasm so that browser engineers and security experts can make sure the feature gets implemented and lands in a secure way.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2020/03/20/multi-mime-type-copying-with-the-async-clipboard-api/.

Brotli Compression with mod_pagespeed and ngx_pagespeed

The PageSpeed modules (not to be confused with the PageSpeed Insights site analysis service) are open-source web server modules that optimize your site automatically. Namely, there is mod_pagespeed for the Apache server and ngx_pagespeed for the Nginx server. For example, PageSpeed can automatically create WebP versions of all your image resources and conditionally serve the format only to clients that accept image/webp. I use it on this very blog; inspect a request for any JPEG image and see how, on supporting browsers, it gets served as WebP.

Chrome DevTools showing a request for a JPEG image that gets served as WebP

The impact of Brotli compression

When it comes to compression, Brotli really makes a difference. Brotli compression is only supported over HTTPS and is requested by clients by including br in the accept-encoding header. In practice, Chrome sends accept-encoding: gzip, deflate, br. As an example of the positive impact compared to gzip, check out a recent case study shared by Addy Osmani in which the web team of the hotel company Treebo shares their Tale of Brotli Compression.
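On the server side, this negotiation boils down to parsing the accept-encoding header. A minimal sketch (the function name is mine, and real servers also honor quality values like br;q=0.8, which this sketch ignores):

```javascript
// Pick a response encoding from an `Accept-Encoding` request
// header value. This only checks which content codings the
// client lists; production negotiation also weighs q-values.
const chooseEncoding = (acceptEncoding = '') => {
  const accepted = acceptEncoding
      .split(',')
      .map((coding) => coding.split(';')[0].trim().toLowerCase());
  if (accepted.includes('br')) return 'br'; // Brotli, HTTPS only in practice
  if (accepted.includes('gzip')) return 'gzip';
  return 'identity'; // No compression.
};

console.log(chooseEncoding('gzip, deflate, br')); // 'br' (what Chrome sends)
```

With Chrome's default header, Brotli wins; an older client sending only gzip, deflate falls back to gzip.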

PageSpeed does not support Brotli yet

While both webservers support Brotli compression out of the box, Apache via mod_brotli and Nginx via ngx_brotli, one thing that PageSpeed is currently missing is native Brotli support, causing resources that went through any PageSpeed optimization step to not be Brotli-encoded 😔. PageSpeed is really smart about compression in general, for instance, it always automatically enables mod_deflate for compression, optionally adds an accept-encoding: gzip header to requests that lack it, and automatically gzips compressable resources as they are stored in the cache, but Brotli support is just not there yet. The good news is that it is being worked on, GitHub Issue #1148 tracks the effort.

Making Brotli work with PageSpeed

The even better news is that while we are waiting for native Brotli support in PageSpeed, we can just outsource Brotli compression to the underlying webserver. To do so, simply disable PageSpeed's HTTPCache Compression. Quoting from the documentation:

To configure cache compression, set HttpCacheCompressionLevel to values between -1 and 9, with 0 being off, -1 being gzip's default compression, and 9 being maximum compression.

📢 So to make PageSpeed work with Brotli, what you want in your pagespeed.conf file is a new line:

# Disable PageSpeed's gzip compression, so the server's
# native Brotli compression kicks in via `mod_brotli`
# or `ngx_brotli`.
ModPagespeedHttpCacheCompressionLevel 0

One thing to keep an eye on is server load. Brotli compression is more demanding than gzip, so for your static resources, you probably want to serve pre-compressed content wherever possible, and, in the unlikely event that you operate at Facebook scale, maybe disable Brotli for your dynamic resources.

Chrome DevTools Network panel showing traffic for this blog with resources served Brotli-compressed highlighted

Happy Brotli serving, and, by the way, in case you ever wondered, Brotli is a 🇨🇭 Swiss German word for a bread roll and literally means "small bread".

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2020/01/24/brotli-compression-with-mod-pagespeed-and-ngx-pagespeed/.

Progressive Enhancement In the Age of Fugu APIs

Back in March 2003, Nick Finck and Steven Champeon stunned the web design world with the concept of progressive enhancement:

Rather than hoping for graceful degradation, [progressive enhancement] builds documents for the least capable or differently capable devices first, then moves on to enhance those documents with separate logic for presentation, in ways that don't place an undue burden on baseline devices but which allow a richer experience for those users with modern graphical browser software.

While in 2003 progressive enhancement was mostly about using presentational features, like then-modern CSS properties, unobtrusive JavaScript for improved usability, and even what are nowadays basic things like Scalable Vector Graphics, I see progressive enhancement in 2020 as being about using new functional browser capabilities.

Sometimes we agree to disagree

Feature support for core JavaScript language features by major browsers is great. Kangax's ECMAScript 2016+ compatibility table is almost all green, and browser vendors generally agree on these features and are quick to implement them. In contrast, there is less agreement on what we colloquially call Fugu 🐡 features. In Project Fugu, our objective is the following:

Enable web apps to do anything native apps can, by exposing the capabilities of native platforms to the web platform, while maintaining user security, privacy, trust, and other core tenets of the web.

You can see all the capabilities we want to tackle in the context of the project by having a look at our Fugu API tracker. I have also written about Project Fugu at W3C TPAC 2019.

To get an impression of the debate around these features when it comes to the different browser vendors, I recommend reading the discussions around the request for a WebKit position on Web NFC or the request for a Mozilla position on screen Wake Lock (both discussions contain links to the particular specs in question). In some cases, the result of these positioning threads might be a "we agree to disagree". And that's fine.

Progressive enhancement for Fugu features

As a result of this disagreement, some Fugu features will probably never be implemented by all browser vendors. But what does this mean for developers? Then as now, in 2003 just like in 2020, feature detection plays a central role. Before using a new browser capability like, say, the Native File System API, developers need to feature-detect the presence of the API. For the Native File System API, it might look like this:

if ('chooseFileSystemEntries' in window) {
  // Yay, the Native File System API is available! 💾
} else {
  // Nay, a legacy approach is required. 😔
}
In the worst case, there is no legacy approach (the else branch in the code snippet above). Some Fugu features are so groundbreakingly new that there simply is no replacement. The Contact Picker API (that allows users to select contacts from their device's native contact manager) is such an example.
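A hedged sketch of what this looks like for the Contact Picker API (the property names follow the spec; in a non-browser environment, detection simply fails and the feature should be hidden):

```javascript
// Feature-detect the Contact Picker API. There is no fallback, so the
// corresponding UI should only be shown when detection succeeds.
const supportsContactPicker =
  typeof navigator !== 'undefined' &&
  'contacts' in navigator &&
  'ContactsManager' in globalThis;

async function pickContacts() {
  if (supportsContactPicker) {
    // Let the user pick contacts from the device's native manager.
    return navigator.contacts.select(['name', 'email'], { multiple: true });
  }
  // No legacy approach exists: signal that the feature is unavailable.
  return null;
}
```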

But in other cases, like with the Native File System API, developers can fall back to <a download> for saving and <input type="file"> for opening files. The experience will not be the same (while you can open a file, you cannot write back to it; you will always create a new file that will land in your Downloads folder), but it is the next best thing.

A suboptimal way to deal with this situation would be to force users to load both code paths, the legacy approach and the new approach. Luckily, dynamic import() makes differential loading feasible and, as a stage-4 TC39 feature, has great browser support.
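The decision itself can be a tiny pure function; the module names below are hypothetical:

```javascript
// Pick which implementation module to load via dynamic import(); only
// the chosen module is ever downloaded by the browser.
function pickFileModule(supportsNativeFS) {
  return supportsNativeFS ? './fs-native.js' : './fs-legacy.js';
}

// In the browser (hypothetical module names):
// const impl = await import(pickFileModule('chooseFileSystemEntries' in window));
```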

Experimenting with browser-nativefs

I have been exploring this pattern of progressively enhancing a web application with Fugu features. The other day, I came across an interesting project by Christopher Chedeau, who also goes by @Vjeux in most places on the Internet. Christopher blogged about a new app of his, Excalidraw, and how the project "exploded" (in a positive sense). Curious after reading the blog post, I played with the app myself and immediately thought that it could profit from the Native File System API. I opened an initial Pull Request that was quickly merged and that implements the fallback scenario mentioned above, but I was not really happy with the code duplication I had introduced.

Excalidraw web app with open "file save" dialog.

As the logical next step, I created an experimental library that supports the differential loading pattern via dynamic import(). Introducing browser-nativefs, an abstraction layer that exposes two functions, fileOpen() and fileSave(), which under the hood either use the Native File System API or the <a download> and <input type="file"> legacy approach. A Pull Request based on this library is now merged into Excalidraw, and so far it seems to work fine (only the dynamic import() breaks CodeSandbox, likely a known issue). You can see the core API of the library below.

// The imported methods will use the Native File
// System API or a fallback implementation.
import {
  fileOpen,
  fileSave,
} from 'https://unpkg.com/browser-nativefs';

(async () => {
  // Open a file.
  const blob = await fileOpen({
    mimeTypes: ['image/*'],
  });

  // Open multiple files.
  const blobs = await fileOpen({
    mimeTypes: ['image/*'],
    multiple: true,
  });

  // Save a file.
  await fileSave(blob, {
    fileName: 'Untitled.png',
  });
})();

Polyfill or ponyfill or abstraction

Triggered by this project, I provided some feedback on the Native File System specification:

  • #146 on the API shape and the naming.
  • #148 on whether a File object should have an attribute that points to its associated FileSystemHandle.
  • #149 on the ability to provide a name hint for a to-be-saved file.

There are several other open issues for the API, and its shape is not stable yet. Some of the API's concepts like FileSystemHandle only make sense when used with the actual API, but not with a legacy fallback, so polyfilling or ponyfilling (as pointed out by my colleague Jeff Posnick) is—in my humble opinion—less of an option, at least for the moment.

My current thinking goes more in the direction of positioning this library as an abstraction like jQuery's $.ajax() or Axios' axios.get(), which a significant number of developers still prefer even over newer APIs like fetch(). In a similar vein, Node.js offers fsPromises.readFile(), which, apart from a FileHandle, also accepts a plain file path string; that is, it acts as an optional shortcut to fsPromises.open(), which returns a FileHandle that one can then use with filehandle.readFile(), which finally returns a Buffer or a string, just like fsPromises.readFile().

Thus, should the Native File System API then just have a window.readFile() method? Maybe. But more recently, the trend seems to be to expose generic tools, like AbortController, which can be used to cancel many things, including fetch(), rather than more specific mechanisms. When the lower-level primitives are there, developers can build abstractions on top and optionally never expose the primitives, just like the fileOpen() and fileSave() methods in browser-nativefs, which one can (but never has to) use without ever touching a FileSystemHandle.
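AbortController illustrates this "generic primitive" style well; a short sketch (the fetch() call is left as a comment because it needs a network, the signal semantics are the point):

```javascript
// One generic controller can cancel many consumers: a fetch(), an
// event listener, or any API that accepts an AbortSignal.
const controller = new AbortController();

// fetch('https://example.com/data.json', { signal: controller.signal })
//   .catch((err) => console.log(err.name)); // would log "AbortError"

controller.abort();
console.log(controller.signal.aborted); // true
```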


Progressive enhancement in the age of Fugu APIs is, in my opinion, more alive than ever. I have shown the concept using the example of the Native File System API, but there are several other new API proposals where this idea (which I by no means claim as new) could be applied. For instance, the Shape Detection API can fall back to JavaScript or WebAssembly libraries, as shown in the Perception Toolkit. Another example is the (screen) Wake Lock API, which can fall back to playing an invisible video, which is how NoSleep.js implements it. As I wrote above, the experience probably will not be the same, but it is the next best thing. If you want, give browser-nativefs a try.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2020/01/23/progressive-enhancement-in-the-age-of-fugu-apis/.

Same same but different: Unicode Variation Selector-16

The other day, I did an analysis of Facebook's WebView, which you are kindly invited to read. They have a code path in which they check whether a given page is using AMPHTML, where \u26A1 is the Unicode code point escape of the ⚡ High Voltage emoji.

var nvtiming__fb_html_amp =
  nvtiming__fb_html.hasAttribute("amp") ||
  nvtiming__fb_html.hasAttribute("\u26A1");
console.log("FBNavAmpDetect:" + nvtiming__fb_html_amp);

An undetected fake AMP page

I was curious to see if they did something special when they detect a page is using AMP (spoiler alert: they do not), so I quickly hacked together a fake AMP page that seemingly fulfilled their simple test.

<html ⚡️>
  <body>Fake AMP</body>
</html>

I am a big emoji fan, so instead of the <html amp> variant, I went for the <html ⚡> variant and entered the ⚡️ via the macOS emoji picker. To my surprise, Facebook logged "FBNavAmpDetect: false". Huh 🤷‍♂️?

⚡️ High Voltage sign is a valid attribute name

My first reaction was: <html ⚡️> does not quite look like what the founders of HTML had in mind, so maybe hasAttribute() is specified to return false when an attribute name is invalid. But what even is a valid attribute name? I consulted the HTML spec where it says (emphasis mine):

Attribute names must consist of one or more characters other than controls, U+0020 SPACE, U+0022 ("), U+0027 ('), U+003E (>), U+002F (/), U+003D (=), and noncharacters. In the HTML syntax, attribute names, even those for foreign elements, may be written with any mix of ASCII lower and ASCII upper alphas.

I was on the company chat with Jake Archibald at that moment, so I asked him to confirm my reading of the spec that ⚡️ is not a valid attribute name. Turns out, it is a valid name, but the spec is formulated in an ambiguous way, so Jake filed a spec issue on "HTML syntax" attribute names. And my lead to a rational explanation was gone.

Perfect Heisenbug?

Luckily, a valid AMP boilerplate example was just a quick Web search away, so I copy-pasted the code, and Facebook, as expected, reported "FBNavAmpDetect: true". I then reduced the AMP boilerplate example until it looked like my fake AMP page, but Facebook still detected the modified boilerplate as AMP while not detecting mine. Essentially, my experiment looked like the code sample below. Perfect Heisenbug?

JavaScript console showing the code sample from this post

The Unicode Variation Selector-16

Jake eventually traced it down to the Unicode Variation Selector-16:

An invisible code point which specifies that the preceding character should be displayed with emoji presentation. Only required if the preceding character defaults to text presentation.

You may have seen this in effect with the Unicode snowman, which appears in a textual ☃︎ as well as in an emoji representation ☃️ (depending on the device you read this on, they may both look the same). As far as I can tell, Chrome DevTools prefers to always render the textual variant, as you can see in the screenshot above. But with the help of the length property and the charCodeAt() function, the difference becomes visible.

'⚡' === '⚡️';
// false
'⚡️' === '⚡️';
// true
'⚡️'.length;
// 2
'⚡'.length;
// 1
'⚡'.charCodeAt(0) + ' ' + '⚡'.charCodeAt(1);
// "9889 NaN"
'⚡️'.charCodeAt(0) + ' ' + '⚡️'.charCodeAt(1);
// "9889 65039"

The AMP Validator and ⚡️

The macOS emoji picker produces the variant ⚡️, which includes the Variation Selector-16, but AMP requires the variant without it, which I have also confirmed in the validator code. You can see in the screenshot below how the AMP Validator rejects one of the two High Voltage symbols.

AMP Validator rejecting the emoji variant with Variation Selector-16
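If you control the comparison yourself, one way to treat both variants alike is to strip U+FE0F before comparing; a small sketch (not the validator's actual code):

```javascript
// Strip Variation Selector-16 (U+FE0F) so the emoji and textual
// presentations of the same character compare as equal.
function stripVS16(s) {
  return s.replace(/\uFE0F/gu, '');
}

console.log(stripVS16('\u26A1\uFE0F') === '\u26A1'); // true
```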

Making this actionable

I have filed crbug.com/1033453 against Chrome DevTools, asking for the characters to be rendered differently depending on whether the Variation Selector-16 is present or not. Further, I have opened a feature request on the AMP Project repo asking that AMP accept ⚡️ in addition to ⚡. Same same, but different.
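Until then, a detection that tolerates both variants could look like the sketch below (the element is stubbed for illustration; this is neither Facebook's nor AMP's actual code):

```javascript
// Accept "amp", "⚡" (U+26A1), and "⚡️" (U+26A1 + U+FE0F) as markers.
function isAmpHtml(el) {
  return (
    el.hasAttribute('amp') ||
    el.hasAttribute('\u26A1') ||
    el.hasAttribute('\u26A1\uFE0F')
  );
}
```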

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2019/12/12/same-same-but-different-unicode-variation-selector-16/.