My Working From Home Setup During COVID-19

Google, like many other companies, has a mandatory working from home (WFH) policy during the COVID-19 crisis. It took me a bit, but I have now found a decent WFH setup.

The Hardware

My COVID-19 working from home setup

The Software

  • The Sidecar feature, so I can use my iPad Pro as a second screen with my MacBook Air. The coolest thing about this feature is that I can multitask it away (see next bullet) without the laptop readjusting the screen arrangement.
  • The Hangouts Meet app on my iPad Pro, so my laptop performance stays unaffected when I am on a video call. A nice side-effect is that the camera of the iPad Pro is in the middle of my two screens, so no weird "looking over the other person" effect when I am on a call.
  • The Gmail app on my iPad Air, so I can always have an eye on my email.
  • (Honorable mention) The iDisplay app on the iPad Air with the iDisplay server running on the laptop, so I can use the iPad Air as a third screen. Unfortunately, since I do not have another free USB C port on my laptop, it is really laggy over Wi-Fi, but works when I really need maximum screen real estate.
  • (Out of scope) The Free Sidecar project promises to enable Sidecar on older iPads like my iPad Air 2, since apparently Apple simply blocks older devices for no particular reason. It requires temporarily turning off System Integrity Protection, which is something I cannot (and do not want to) do on my corporate laptop.

The Furniture

  • A school desk we had bought earlier on eBay, placed on two kids' chairs to convert it into a standing desk.
  • Some shoe boxes to elevate the two main screens to eye height. I had quite some neck pain during the first couple of days.

It is definitely not perfect, but I am quite happy with it now. I very much want the crisis to be over, but (with the kids back in school), I could probably get used to permanently working from home.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2020/03/23/my-working-from-home-setup-during-covid-19/.

Multi-MIME Type Copying with the Async Clipboard API

Copying an Image

The Asynchronous Clipboard API provides direct access to read and write clipboard data. Apart from text, since Chrome 76, you can also copy and paste image data with the API. For more details on this, check out my article on web.dev. Here's the gist of how copying an image blob works:

const copy = async (blob) => {
  try {
    await navigator.clipboard.write([
      new ClipboardItem({
        [blob.type]: blob,
      }),
    ]);
  } catch (err) {
    console.error(err.name, err.message);
  }
};

Note that you need to pass an array of ClipboardItems to the navigator.clipboard.write() method, which implies that you can place more than one item on the clipboard (but this is not yet implemented in Chrome as of March 2020).

I have to admit, I only used to think of the clipboard as a one-item stack, so any new item replaces the existing one. However, for example, Microsoft Office 365's clipboard on Windows 10 supports up to 24 clipboard items.

Pasting an Image

The generic code for pasting an image, that is, for reading from the clipboard, is a little more involved. Also be advised that reading from the clipboard triggers a permission prompt before the read operation can succeed. Here's the trimmed down example from my article:

const paste = async () => {
  try {
    const clipboardItems = await navigator.clipboard.read();
    for (const clipboardItem of clipboardItems) {
      for (const type of clipboardItem.types) {
        return await clipboardItem.getType(type);
      }
    }
  } catch (err) {
    console.error(err.name, err.message);
  }
};

See how I first iterate over all clipboardItems (reminder: there can only be one in the current implementation), but then also iterate over all clipboardItem.types of each individual clipboardItem, only to stop at the first type and return whatever blob I encounter there. So far I hadn't really paid much attention to what this enables, but yesterday, I had a sudden epiphany 🤯.
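If you wanted to be smarter than "first type wins", a tiny hypothetical helper (not part of the original code) could prefer a specific MIME type when the ClipboardItem advertises it:

```javascript
// Given the MIME types a ClipboardItem advertises, pick a preferred
// type if present, else fall back to the first advertised one.
const pickType = (types, preferredType) =>
  types.includes(preferredType) ? preferredType : types[0];

pickType(['text/plain', 'image/png'], 'image/png'); // → 'image/png'
```

The inner loop could then call clipboardItem.getType(pickType(clipboardItem.types, 'image/png')) instead of grabbing whatever type happens to come first.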

Content Negotiation

Before I get into the details of multi-MIME type copying, let me quickly derail to server-driven content negotiation, quoting straight from MDN:

In server-driven content negotiation, or proactive content negotiation, the browser (or any other kind of user-agent) sends several HTTP headers along with the URL. These headers describe the preferred choice of the user. The server uses them as hints and an internal algorithm chooses the best content to serve to the client.

Server-driven content negotiation diagram
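As a toy illustration of server-driven negotiation (deliberately ignoring q-values and wildcards for brevity; this is not how a real server implements it), a server could pick the best match like this:

```javascript
// Pick the first client-preferred type the server can produce,
// falling back to the server's default representation.
const negotiate = (acceptHeader, available) => {
  const preferred = acceptHeader
    .split(',')
    .map((type) => type.trim().split(';')[0]);
  return preferred.find((type) => available.includes(type)) || available[0];
};

negotiate('image/webp,image/png;q=0.9', ['image/png', 'image/webp']);
// → 'image/webp'
```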

Multi-MIME Type Copying

A similar content negotiation mechanism takes place with copying. You have probably encountered this effect before when you have copied rich text, like formatted HTML, into a plain text field: the rich text is automatically converted to plain text. (💡 Pro tip: to force pasting into a rich text context without formatting, use Ctrl + Shift + v on Windows, or Cmd + Shift + v on macOS.)

So back to content negotiation with image copying. If you copy an SVG image, then open macOS Preview, and finally click "File" > "New from Clipboard", you would probably expect an image to be pasted. However, if you copy an SVG image and paste it into Visual Studio Code or into SVGOMG's "Paste markup" field, you would probably expect the source code to be pasted.

With multi-MIME type copying, you can achieve exactly that 🎉. Below is the code of a future-proof copy function and some helper methods with the following functionality:

  • For images that are not SVGs, it creates a textual representation based on the image's alt text attribute. For SVG images, it creates a textual representation based on the SVG source code.
  • At present, the Async Clipboard API only works with image/png, but nevertheless the code tries to put a representation in the image's original MIME type into the clipboard, apart from a PNG representation.

So in the generic case, for an SVG image, you would end up with three representations: the source code as text/plain, the SVG image as image/svg+xml, and a PNG render as image/png.

const copy = async (img) => {
  // This assumes you have marked up images like so:
  // <img
  //   src="foo.svg"
  //   data-mime-type="image/svg+xml"
  //   alt="Foo">
  //
  // Applying this markup could be automated
  // (for all applicable MIME types):
  //
  // document.querySelectorAll('img[src*=".svg"]')
  //     .forEach((img) => {
  //       img.dataset.mimeType = 'image/svg+xml';
  //     });
  const mimeType = img.dataset.mimeType;
  // Always create a textual representation based on the
  // `alt` text, or based on the source code for SVG images.
  let text = null;
  if (mimeType === 'image/svg+xml') {
    text = await toSourceBlob(img);
  } else {
    text = new Blob([img.alt], {type: 'text/plain'});
  }
  const clipboardData = {
    'text/plain': text,
  };
  // Always create a PNG representation.
  clipboardData['image/png'] = await toPNGBlob(img);
  // When dealing with a non-PNG image, create a
  // representation in the MIME type in question.
  if (mimeType !== 'image/png') {
    clipboardData[mimeType] = await toOriginBlob(img);
  }
  try {
    await navigator.clipboard.write([
      new ClipboardItem(clipboardData),
    ]);
  } catch (err) {
    // Currently only `text/plain` and `image/png` are
    // implemented, so if there is a `NotAllowedError`,
    // remove the other representation.
    console.warn(err.name, err.message);
    if (err.name === 'NotAllowedError') {
      const disallowedMimeType = err.message.replace(
          /^.*?\s(\w+\/[^\s]+).*?$/, '$1');
      delete clipboardData[disallowedMimeType];
      await navigator.clipboard.write([
        new ClipboardItem(clipboardData),
      ]);
    }
  }
  // Log what's ultimately on the clipboard.
  console.log(clipboardData);
};

// Draws an image on an offscreen canvas
// and converts it to a PNG blob.
const toPNGBlob = async (img) => {
  const canvas = new OffscreenCanvas(
      img.naturalWidth, img.naturalHeight);
  const ctx = canvas.getContext('2d');
  // This removes transparency. Remove at will.
  ctx.fillStyle = '#fff';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(img, 0, 0);
  return await canvas.convertToBlob();
};

// Fetches an image resource and returns
// its blob of whatever MIME type.
const toOriginBlob = async (img) => {
  const response = await fetch(img.src);
  return await response.blob();
};

// Fetches an SVG image resource and returns
// a blob based on the source code.
const toSourceBlob = async (img) => {
  const response = await fetch(img.src);
  const source = await response.text();
  return new Blob([source], {type: 'text/plain'});
};

If you use this copy function (demo below ⤵️) to copy an SVG image, for example, everyone's favorite diagram of the symptoms of coronavirus 🦠 disease, and paste it into macOS Preview (which does not support SVG) or the "Paste markup" field of SVGOMG, this is what you get:

The macOS Preview app with a pasted PNG image.
The SVGOMG web app with a pasted SVG image.

Demo

You would normally be able to play with this code in an embedded example right here. Unfortunately, that doesn't work yet, since webappsec-feature-policy#322 is still open. The demo works if you open it directly on Glitch.

Conclusion

Programmatic multi-MIME type copying is a powerful feature. At present, the Async Clipboard API is still limited, but raw clipboard access is on the radar of the 🐡 Project Fugu team that I am a small part of. The feature is being tracked as crbug/897289.

All that being said, raw clipboard access has its risks, too, as clearly pointed out in the TAG review. I do hope use cases like multi-MIME type copying that I have motivated in this blog post can help create developer enthusiasm so that browser engineers and security experts can make sure the feature gets implemented and lands in a secure way.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2020/03/20/multi-mime-type-copying-with-the-async-clipboard-api/.

Brotli Compression with mod_pagespeed and ngx_pagespeed

The PageSpeed modules (not to be confused with the PageSpeed Insights site analysis service) are open-source webserver modules that optimize your site automatically: mod_pagespeed for the Apache server and ngx_pagespeed for the Nginx server. For example, PageSpeed can automatically create WebP versions of all your image resources and conditionally serve that format only to clients that accept image/webp. I use it on this very blog; inspect a request to any JPEG image and see how, on supporting browsers, it gets served as WebP.

Chrome DevTools showing a request for a JPEG image that gets served as WebP

The impact of Brotli compression

When it comes to compression, Brotli really makes a difference. Brotli compression is only supported over HTTPS and is requested by clients by including br in the accept-encoding header; in practice, Chrome sends accept-encoding: gzip, deflate, br. As an example of the positive impact compared to gzip, check out a recent case study shared by Addy Osmani in which the web team of the hotel company Treebo share their Tale of Brotli Compression.
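Checking for br on the server side boils down to parsing that header; here is a toy version (deliberately ignoring q-values, so an explicit br;q=0 would be a false positive):

```javascript
// Naive check whether a client advertises Brotli support
// in its `accept-encoding` request header.
const supportsBrotli = (acceptEncoding = '') =>
  acceptEncoding
    .split(',')
    .map((encoding) => encoding.trim().split(';')[0])
    .includes('br');

supportsBrotli('gzip, deflate, br'); // → true
```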

PageSpeed does not support Brotli yet

While both webservers support Brotli compression out of the box (Apache via mod_brotli and Nginx via ngx_brotli), one thing PageSpeed is currently missing is native Brotli support, causing resources that went through any PageSpeed optimization step to not be Brotli-encoded 😔. PageSpeed is really smart about compression in general: for instance, it always automatically enables mod_deflate for compression, optionally adds an accept-encoding: gzip header to requests that lack it, and automatically gzips compressible resources as they are stored in the cache. Brotli support is just not there yet. The good news is that it is being worked on; GitHub Issue #1148 tracks the effort.

Making Brotli work with PageSpeed

The even better news is that while we are waiting for native Brotli support in PageSpeed, we can just outsource Brotli compression to the underlying webserver. To do so, simply disable PageSpeed's HTTPCache Compression. Quoting from the documentation:

To configure cache compression, set HttpCacheCompressionLevel to values between -1 and 9, with 0 being off, -1 being gzip's default compression, and 9 being maximum compression.

📢 So to make PageSpeed work with Brotli, what you want in your pagespeed.conf file is a new line:

# Disable PageSpeed's gzip compression, so the server's
# native Brotli compression kicks in via `mod_brotli`
# or `ngx_brotli`.
ModPagespeedHttpCacheCompressionLevel 0

One thing to keep an eye on is server load. Brotli is more demanding than gzip, so for your static resources you probably want to serve pre-compressed content wherever possible, and in the unlikely case that you operate at Facebook scale, maybe disable Brotli for your dynamic resources.

Chrome DevTools Network panel showing traffic for this blog with resources served Brotli-compressed highlighted

Happy Brotli serving, and, by the way, in case you ever wondered, Brotli is a 🇨🇭 Swiss German word for a bread roll and literally means "small bread".

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2020/01/24/brotli-compression-with-mod-pagespeed-and-ngx-pagespeed/.

Progressive Enhancement In the Age of Fugu APIs

Back in March 2003, Nick Finck and Steven Champeon stunned the web design world with the concept of progressive enhancement:

Rather than hoping for graceful degradation, [progressive enhancement] builds documents for the least capable or differently capable devices first, then moves on to enhance those documents with separate logic for presentation, in ways that don't place an undue burden on baseline devices but which allow a richer experience for those users with modern graphical browser software.

In 2003, progressive enhancement was mostly about using presentational features: then-modern CSS properties, unobtrusive JavaScript for improved usability, and things considered basic nowadays, like Scalable Vector Graphics. In 2020, I see progressive enhancement as being about using new functional browser capabilities.

Sometimes we agree to disagree

Feature support for core JavaScript language features by major browsers is great. Kangax' ECMAScript 2016+ compatibility table is almost all green, and browser vendors generally agree and are quick to implement. In contrast, there is less agreement on what we colloquially call Fugu 🐡 features. In Project Fugu, our objective is the following:

Enable web apps to do anything native apps can, by exposing the capabilities of native platforms to the web platform, while maintaining user security, privacy, trust, and other core tenets of the web.

You can see all the capabilities we want to tackle in the context of the project by having a look at our Fugu API tracker. I have also written about Project Fugu at W3C TPAC 2019.

To get an impression of the debate around these features when it comes to the different browser vendors, I recommend reading the discussions around the request for a WebKit position on Web NFC or the request for a Mozilla position on screen Wake Lock (both discussions contain links to the particular specs in question). In some cases, the result of these positioning threads might be a "we agree to disagree". And that's fine.

Progressive enhancement for Fugu features

As a result of this disagreement, some Fugu features will probably never be implemented by all browser vendors. But what does this mean for developers? Then as now, in 2003 just like in 2020, feature detection plays a central role. Before using a new browser capability that may not be universally supported, like, say, the Native File System API, developers need to feature-detect the presence of the API. For the Native File System API, it might look like this:

if ('chooseFileSystemEntries' in window) {
  // Yay, the Native File System API is available! 💾
} else {
  // Nay, a legacy approach is required. 😔
}

In the worst case, there is no legacy approach (the else branch in the code snippet above). Some Fugu features are so groundbreakingly new that there simply is no replacement. The Contact Picker API (that allows users to select contacts from their device's native contact manager) is such an example.

But in other cases, like with the Native File System API, developers can fall back to <a download> for saving and <input type="file"> for opening files. The experience will not be the same (while you can open a file, you cannot write back to it; you will always create a new file that will land in your Downloads folder), but it is the next best thing.

A suboptimal way to deal with this situation would be to force users to load both code paths, the legacy approach and the new approach. Luckily, dynamic import() makes differential loading feasible and—as a stage 4 of the TC39 process feature—has great browser support.
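The differential loading pattern described above could be sketched like this (the module paths are hypothetical; only the branch the browser actually needs ever gets downloaded):

```javascript
// Feature-detect once, then dynamically import only the matching
// implementation. The module paths are made up for illustration.
const hasNativeFS =
  typeof window !== 'undefined' && 'chooseFileSystemEntries' in window;

const loadFileImplementation = () =>
  hasNativeFS
    ? import('./file-system-native.js') // hypothetical module
    : import('./file-system-legacy.js'); // hypothetical module
```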

Experimenting with browser-nativefs

I have been exploring this pattern of progressively enhancing a web application with Fugu features. The other day, I came across an interesting project by Christopher Chedeau, who also goes by @Vjeux in most places on the Internet. Christopher blogged about a new app of his, Excalidraw, and how the project "exploded" (in a positive sense). Made curious by the blog post, I played with the app myself and immediately thought that it could profit from the Native File System API. I opened an initial Pull Request that was quickly merged and that implements the fallback scenario mentioned above, but I was not really happy with the code duplication I had introduced.

Excalidraw web app with open "file save" dialog.

As the logical next step, I created an experimental library that supports the differential loading pattern via dynamic import(). Introducing browser-nativefs, an abstraction layer that exposes two functions, fileOpen() and fileSave(), which under the hood either use the Native File System API or the <a download> and <input type="file"> legacy approach. A Pull Request based on this library is now merged into Excalidraw, and so far it seems to work fine (only the dynamic import() breaks CodeSandbox, likely a known issue). You can see the core API of the library below.

// The imported methods will use the Native File
// System API or a fallback implementation.
import {
  fileOpen,
  fileSave,
} from 'https://unpkg.com/browser-nativefs';

(async () => {
  // Open a file.
  const blob = await fileOpen({
    mimeTypes: ['image/*'],
  });

  // Open multiple files.
  const blobs = await fileOpen({
    mimeTypes: ['image/*'],
    multiple: true,
  });

  // Save a file.
  await fileSave(blob, {
    fileName: 'Untitled.png',
  });
})();

Polyfill or ponyfill or abstraction

Triggered by this project, I provided some feedback on the Native File System specification:

  • #146 on the API shape and the naming.
  • #148 on whether a File object should have an attribute that points to its associated FileSystemHandle.
  • #149 on the ability to provide a name hint for a to-be-saved file.

There are several other open issues for the API, and its shape is not stable yet. Some of the API's concepts like FileSystemHandle only make sense when used with the actual API, but not with a legacy fallback, so polyfilling or ponyfilling (as pointed out by my colleague Jeff Posnick) is—in my humble opinion—less of an option, at least for the moment.

My current thinking goes more in the direction of positioning this library as an abstraction like jQuery's $.ajax() or Axios' axios.get(), which a significant number of developers still prefer even over newer APIs like fetch(). In a similar vein, Node.js offers a function fsPromises.readFile() that, apart from a FileHandle, also accepts a filename path string; that is, it acts as an optional shortcut to fsPromises.open(), which returns a FileHandle that one can then use with filehandle.readFile(), which finally returns a Buffer or a string, just like fsPromises.readFile() itself.

Thus, should the Native File System API then just have a window.readFile() method? Maybe. But more recently the trend seems to be to rather expose generic tools like AbortController that can be used to cancel many things, including fetch() rather than more specific mechanisms. When the lower-level primitives are there, developers can build abstractions on top, and optionally never expose the primitives, just like the fileOpen() and fileSave() methods in browser-nativefs that one can (but never has to) perfectly use without ever touching a FileSystemHandle.

Conclusion

Progressive enhancement in the age of Fugu APIs is, in my opinion, more alive than ever. I have shown the concept using the example of the Native File System API, but there are several other new API proposals where this idea (which I by no means claim as new) could be applied. For instance, the Shape Detection API can fall back to JavaScript or WebAssembly libraries, as shown in the Perception Toolkit. Another example is the (screen) Wake Lock API, which can fall back to playing an invisible video, which is the way NoSleep.js implements it. As I wrote above, the experience probably will not be the same, but it is the next best thing. If you want, give browser-nativefs a try.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2020/01/23/progressive-enhancement-in-the-age-of-fugu-apis/.

Same same but different: Unicode Variation Selector-16

The other day, I did an analysis of Facebook's WebView, which you are kindly invited to read. They have a code path in which they check whether a given page is using AMPHTML, where \u26A1 is the Unicode code point escape of the ⚡ High Voltage emoji.

var nvtiming__fb_html_amp =
    nvtiming__fb_html.hasAttribute("amp") ||
    nvtiming__fb_html.hasAttribute("\u26A1");
console.log("FBNavAmpDetect:" + nvtiming__fb_html_amp);

An undetected fake AMP page

I was curious to see if they did something special when they detect a page is using AMP (spoiler alert: they do not), so I quickly hacked together a fake AMP page that seemingly fulfilled their simple test.

<html ⚡️>
  <body>Fake AMP</body>
</html>

I am a big emoji fan, so instead of the <html amp> variant, I went for the <html ⚡️> variant and entered the emoji via the macOS emoji picker. To my surprise, Facebook logged "FBNavAmpDetect: false". Huh 🤷‍♂️?

⚡️ High Voltage sign is a valid attribute name

My first reaction was: <html ⚡️> does not quite look like what the founders of HTML had in mind, so maybe hasAttribute() is specified to return false when an attribute name is invalid. But what even is a valid attribute name? I consulted the HTML spec where it says (emphasis mine):

Attribute names must consist of one or more characters other than controls, U+0020 SPACE, U+0022 ("), U+0027 ('), U+003E (>), U+002F (/), U+003D (=), and noncharacters. In the HTML syntax, attribute names, even those for foreign elements, may be written with any mix of ASCII lower and ASCII upper alphas.

I was on company chat with Jake Archibald at that moment, so I confirmed my reading of the spec that ⚡️ is not a valid attribute name. Turns out, it is a valid name, but the spec is formulated in an ambiguous way, so Jake filed an issue, "HTML syntax" attribute names, against the HTML spec. And my lead to a rational explanation was gone.

Perfect Heisenbug?

Luckily, a valid AMP boilerplate example was just a quick Web search away, so I copy-pasted the code, and Facebook, as expected, reported "FBNavAmpDetect: true". I then reduced the AMP boilerplate until it looked like my fake AMP page, yet Facebook still detected the modified boilerplate as AMP while not detecting mine. Essentially, my experiment looked like the code sample below. A perfect Heisenbug?

JavaScript console showing the code sample from this post

The Unicode Variation Selector-16

Jake eventually traced it down to the Unicode Variation Selector-16:

An invisible code point which specifies that the preceding character should be displayed with emoji presentation. Only required if the preceding character defaults to text presentation.

You may have seen this in effect with the Unicode snowman, which appears in a textual ☃︎ as well as in an emoji representation ☃️ (depending on the device you read this on, they may both look the same). As far as I can tell, Chrome DevTools prefers to always render the textual variant, as you can see in the screenshot above. But with the help of the length property and the charCodeAt() function, the difference becomes visible.

document.querySelector('html').hasAttribute('⚡');
// false
document.querySelector('html').hasAttribute('⚡️');
// true
'⚡️'.length;
// 2
'⚡'.length;
// 1
'⚡'.charCodeAt(0) + ' ' + '⚡'.charCodeAt(1);
// "9889 NaN"
'⚡️'.charCodeAt(0) + ' ' + '⚡️'.charCodeAt(1);
// "9889 65039"

The AMP Validator and ⚡️

The macOS emoji picker creates the variant ⚡️, which includes the Variation Selector-16, but AMP requires the variant without, which I have also confirmed in the validator code. You can see in the screenshot below how the AMP Validator rejects one of the two High Voltage symbols.
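A hypothetical normalization helper that strips U+FE0F before comparing would make both variants compare as equal (to be clear, this is an illustration, not something AMP or Facebook actually do):

```javascript
// Remove Variation Selector-16 (U+FE0F) so that the text
// variant ⚡ and the emoji variant ⚡️ compare as equal.
const stripVS16 = (s) => s.replace(/\uFE0F/g, '');

stripVS16('\u26A1\uFE0F') === '\u26A1'; // → true
```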

AMP Validator rejecting the emoji variant with Variation Selector-16

Making this actionable

I have filed crbug.com/1033453 against Chrome DevTools, asking for the characters to be rendered differently depending on whether the Variation Selector-16 is present or not. Further, I have opened a feature request on the AMP Project repo asking that AMP respect ⚡️ in addition to ⚡. Same same, but different.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2019/12/12/same-same-but-different-unicode-variation-selector-16/.

Inspecting Facebook’s WebView

Both Facebook's Android app as well as Facebook's iOS app use a so-called in-app browser, sometimes also referred to as IAB. The core argument for using an in-app browser (instead of the user's default browser) is to keep users in the app and to enable closer app integration patterns (like "quote from a page in a new Facebook post"), while making others harder or even impossible (like "share this link to Twitter"). Technically, IABs are implemented as WebViews on Android, respectively as WKWebViews on iOS. To simplify things, from hereon, I will refer to both simply as WebViews.

In-App Browsers are less capable than real browsers

Turns out, WebViews are rather limited compared to real browsers like Firefox, Edge, Chrome, and to some extent also Safari. In the past, I have done some research on their limitations when it comes to features that are important in the context of Progressive Web Apps. The linked paper has all the details, but you can simply see for yourself by opening the 🕵️ PWA Feature Detector app that I have developed for this research in your regular browser, and then in a WebView like Facebook's in-app browser (you can share the link visible to just yourself on Facebook and then click through, or try to open my post in the app).

In-App Browsers can modify webpages

On top of limited features, WebViews can also be used for effectively conducting intended man-in-the-middle attacks, since the IAB developer can arbitrarily inject JavaScript code and also intercept network traffic. Most of the time, this feature is used for good. For example, an airline company might reuse the 💺 airplane seat selector logic on both their native app as well as on their Web app. In their native app, they would remove things like the header and the footer, which they then would show on the Web (this is likely the origin of the CSS can kill you meme).

If you build an IAB, don't use a WebView

For these reasons, people like Alex Russell—whom you should definitely follow—have been advocating against WebView-based IABs. Instead, you should wherever possible use Chrome Custom Tabs on Android, or the iOS counterpart SFSafariViewController. Alex writes:

Note that responsible native apps have a way of creating an "in app browser" that doesn't subvert user choice or break the web:

https://developer.chrome.com/multidevice/android/customtabs

Any browser can implement the protocol & default browser will be used. FB can enable this with their next update.

If you have to use a WebView-based IAB, mark it debuggable

Alex has been telling people for a long time that they should mark their WebView-based IABs debuggable. The actual code for that is a one-liner:

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) {
  WebView.setWebContentsDebuggingEnabled(true);
}

Looking into Facebook's IAB

The other day, I learned with great joy that Facebook has finally marked their IAB debuggable 🎉. Patrick Meenan (whom you should likewise follow and whom you might know from the amazing WebPageTest project) writes in a Twitter thread:

You can now remote-debug sites in the Facebook in-app browser on Android. It is enabled automatically so once your device is in dev mode with USB debugging and a browser open just visit chrome://inspect to attach to the WebView.

The browser (on iOS and Android) is just a WebView so it should behave mostly like Chrome and Safari but it adds some identifiers to the UA string which sometimes causes pages that UA sniff to break.

Finally, if your analytics aren't breaking out the in-app browsers for you, I highly recommend you see if it is possible to enable. You might be shocked at how much of your traffic comes from an in-app browser (odds are it is the 3rd most common browser behind Chrome and Safari).

I have thus followed up on the invitation and had a closer look at their IAB by inspecting example.org and also a simple test page facebook-debug.glitch.me that contains the debugger statement in its head. I have linked a debug trace 📄 that you can open for yourself in the Performance tab of the Chrome DevTools.

User-Agent String

As pre-announced by Patrick, Facebook's IAB changes the user-agent string. The default WebView user-agent string looks something like Mozilla/5.0 (Linux; Android 5.1.1; Nexus 5 Build/LMY48B; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/43.0.2357.65 Mobile Safari/537.36. Facebook's IAB currently sends this:

navigator.userAgent
// "Mozilla/5.0 (Linux; Android 10; Pixel 3a Build/QQ2A.191125.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/78.0.3904.108 Mobile Safari/537.36 [FB_IAB/FB4A;FBAV/250.0.0.14.241;]"

Compared to the default user-agent string, the identifying bit is the suffix [FB_IAB/FB4A;FBAV/250.0.0.14.241;].
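If you want to break in-app browser traffic out in your own analytics, a minimal check for that suffix might look like this (an admittedly brittle UA-sniffing heuristic, matched against the strings above; the tokens could change in future app versions):

```javascript
// Heuristic: detect Facebook's in-app browser via the
// FB_IAB/FBAV tokens it appends to the user-agent string.
const isFacebookIAB = (userAgent) =>
  /\bFB_IAB\//.test(userAgent) || /\bFBAV\//.test(userAgent);
```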

Added window properties

Facebook's IAB adds two new properties to window, with the values 0 and 1.

window.TEMPORARY
// 0
window.PERSISTENT
// 1

Added window object

Facebook further adds a new window object FbQuoteShareJSInterface.

document.onselectionchange = function() {
  window.FbQuoteShareJSInterface.onSelectionChange(
      window.getSelection().toString(),
      window.location.href
  );
};

This code is used for the "share quote" feature that allows users to mark text on a page to share.

Facebook's in-app browser and its "share quote" feature

Performance monitoring and feature detection

Facebook runs some performance monitoring via the Performance interface. This is split across two scripts, each of which they seem to run three times. They also check whether a given page is using AMP by testing for the presence of the amp or ⚡ attribute on <html>.

void (function() {
  try {
    if (window.location.href === "about:blank") {
      return;
    }
    if (
      !window.performance ||
      !window.performance.timing ||
      !document ||
      !document.body ||
      document.body.scrollHeight <= 0 ||
      !document.body.children ||
      document.body.children.length < 1
    ) {
      return;
    }
    var nvtiming__fb_t = window.performance.timing;
    if (nvtiming__fb_t.responseEnd > 0) {
      console.log("FBNavResponseEnd:" + nvtiming__fb_t.responseEnd);
    }
    if (nvtiming__fb_t.domContentLoadedEventStart > 0) {
      console.log(
        "FBNavDomContentLoaded:" + nvtiming__fb_t.domContentLoadedEventStart
      );
    }
    if (nvtiming__fb_t.loadEventEnd > 0) {
      console.log("FBNavLoadEventEnd:" + nvtiming__fb_t.loadEventEnd);
    }
    var nvtiming__fb_html = document.getElementsByTagName("html")[0];
    if (nvtiming__fb_html) {
      var nvtiming__fb_html_amp =
        nvtiming__fb_html.hasAttribute("amp") ||
        nvtiming__fb_html.hasAttribute("\u26A1");
      console.log("FBNavAmpDetect:" + nvtiming__fb_html_amp);
    }
  } catch (err) {
    console.log("fb_navigation_timing_error:" + err.message);
  }
})();
// FBNavResponseEnd:1575904580720
// FBNavDomContentLoaded:1575904586057
// FBNavAmpDetect:false

document.addEventListener("DOMContentLoaded", event => {
  console.info(
    "FBNavDomContentLoaded:" +
      window.performance.timing.domContentLoadedEventStart
  );
});
// FBNavDomContentLoaded:1575904586057
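Side note: the FBNav* values above are absolute Unix epoch timestamps, since the (now deprecated) performance.timing interface reports epoch milliseconds. To make them meaningful, one would subtract navigationStart; a small sketch with made-up sample numbers (the helper is mine, not Facebook's):

```javascript
// Sketch (illustrative): convert the absolute epoch timestamps from a
// performance.timing-shaped object into page-relative durations.
function navDurations(timing) {
  const start = timing.navigationStart;
  return {
    responseEnd: timing.responseEnd - start,
    domContentLoaded: timing.domContentLoadedEventStart - start,
    loadEventEnd: timing.loadEventEnd - start,
  };
}

// Sample values in the shape of window.performance.timing:
const sample = {
  navigationStart: 1575904580000,
  responseEnd: 1575904580720,
  domContentLoadedEventStart: 1575904586057,
  loadEventEnd: 1575904586500,
};

console.log(navDurations(sample));
// → { responseEnd: 720, domContentLoaded: 6057, loadEventEnd: 6500 }
```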

Feature Policy tests

They run some Feature Policy tests via a function named getFeaturePolicyAllowListOnPage(). You can see the documentation for the tested directives on the Mozilla Developer Network.

(function() {
  function getFeaturePolicyAllowListOnPage(features) {
    const map = {};
    const featurePolicy = document.policy || document.featurePolicy;
    for (const feature of features) {
      map[feature] = {
        allowed: featurePolicy.allowsFeature(feature),
        allowList: featurePolicy.getAllowlistForFeature(feature)
      };
    }
    return map;
  }
  const allPolicies = [
    "geolocation",
    "midi",
    "ch-ect",
    "execution-while-not-rendered",
    "layout-animations",
    "vertical-scroll",
    "forms",
    "oversized-images",
    "document-access",
    "magnetometer",
    "picture-in-picture",
    "modals",
    "unoptimized-lossless-images-strict",
    "accelerometer",
    "vr",
    "document-domain",
    "serial",
    "encrypted-media",
    "font-display-late-swap",
    "unsized-media",
    "ch-downlink",
    "ch-ua-arch",
    "presentation",
    "xr-spatial-tracking",
    "lazyload",
    "idle-detection",
    "popups",
    "scripts",
    "unoptimized-lossless-images",
    "sync-xhr",
    "ch-width",
    "ch-ua-model",
    "top-navigation",
    "ch-lang",
    "camera",
    "ch-viewport-width",
    "loading-frame-default-eager",
    "payment",
    "pointer-lock",
    "focus-without-user-activation",
    "downloads-without-user-activation",
    "ch-rtt",
    "fullscreen",
    "autoplay",
    "execution-while-out-of-viewport",
    "ch-dpr",
    "hid",
    "usb",
    "wake-lock",
    "ch-ua-platform",
    "ambient-light-sensor",
    "gyroscope",
    "document-write",
    "unoptimized-lossy-images",
    "sync-script",
    "ch-device-memory",
    "orientation-lock",
    "ch-ua",
    "microphone"
  ];
  return getFeaturePolicyAllowListOnPage(allPolicies);
})();

Not all directives are currently supported by the WebView, so a number of warnings are logged. The recognized ones (i.e., the output of the getFeaturePolicyAllowListOnPage() function above) result in an object as follows.

{
  "geolocation": { "allowed": true, "allowList": ["https://example.com"] },
  "midi": { "allowed": true, "allowList": ["https://example.com"] },
  "ch-ect": { "allowed": false, "allowList": [] },
  "execution-while-not-rendered": { "allowed": false, "allowList": [] },
  "layout-animations": { "allowed": false, "allowList": [] },
  "vertical-scroll": { "allowed": false, "allowList": [] },
  "forms": { "allowed": false, "allowList": [] },
  "oversized-images": { "allowed": false, "allowList": [] },
  "document-access": { "allowed": false, "allowList": [] },
  "magnetometer": { "allowed": true, "allowList": ["https://example.com"] },
  "picture-in-picture": { "allowed": true, "allowList": ["*"] },
  "modals": { "allowed": false, "allowList": [] },
  "unoptimized-lossless-images-strict": { "allowed": false, "allowList": [] },
  "accelerometer": { "allowed": true, "allowList": ["https://example.com"] },
  "vr": { "allowed": true, "allowList": ["https://example.com"] },
  "document-domain": { "allowed": true, "allowList": ["*"] },
  "serial": { "allowed": false, "allowList": [] },
  "encrypted-media": { "allowed": true, "allowList": ["https://example.com"] },
  "font-display-late-swap": { "allowed": false, "allowList": [] },
  "unsized-media": { "allowed": false, "allowList": [] },
  "ch-downlink": { "allowed": false, "allowList": [] },
  "ch-ua-arch": { "allowed": false, "allowList": [] },
  "presentation": { "allowed": false, "allowList": [] },
  "xr-spatial-tracking": { "allowed": false, "allowList": [] },
  "lazyload": { "allowed": false, "allowList": [] },
  "idle-detection": { "allowed": false, "allowList": [] },
  "popups": { "allowed": false, "allowList": [] },
  "scripts": { "allowed": false, "allowList": [] },
  "unoptimized-lossless-images": { "allowed": false, "allowList": [] },
  "sync-xhr": { "allowed": true, "allowList": ["*"] },
  "ch-width": { "allowed": false, "allowList": [] },
  "ch-ua-model": { "allowed": false, "allowList": [] },
  "top-navigation": { "allowed": false, "allowList": [] },
  "ch-lang": { "allowed": false, "allowList": [] },
  "camera": { "allowed": true, "allowList": ["https://example.com"] },
  "ch-viewport-width": { "allowed": false, "allowList": [] },
  "loading-frame-default-eager": { "allowed": false, "allowList": [] },
  "payment": { "allowed": false, "allowList": [] },
  "pointer-lock": { "allowed": false, "allowList": [] },
  "focus-without-user-activation": { "allowed": false, "allowList": [] },
  "downloads-without-user-activation": { "allowed": false, "allowList": [] },
  "ch-rtt": { "allowed": false, "allowList": [] },
  "fullscreen": { "allowed": true, "allowList": ["https://example.com"] },
  "autoplay": { "allowed": true, "allowList": ["https://example.com"] },
  "execution-while-out-of-viewport": { "allowed": false, "allowList": [] },
  "ch-dpr": { "allowed": false, "allowList": [] },
  "hid": { "allowed": false, "allowList": [] },
  "usb": { "allowed": true, "allowList": ["https://example.com"] },
  "wake-lock": { "allowed": false, "allowList": [] },
  "ch-ua-platform": { "allowed": false, "allowList": [] },
  "ambient-light-sensor": { "allowed": true, "allowList": ["https://example.com"] },
  "gyroscope": { "allowed": true, "allowList": ["https://example.com"] },
  "document-write": { "allowed": false, "allowList": [] },
  "unoptimized-lossy-images": { "allowed": false, "allowList": [] },
  "sync-script": { "allowed": false, "allowList": [] },
  "ch-device-memory": { "allowed": false, "allowList": [] },
  "orientation-lock": { "allowed": false, "allowList": [] },
  "ch-ua": { "allowed": false, "allowList": [] },
  "microphone": { "allowed": true, "allowList": ["https://example.com"] }
}
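One way to digest an object like this is to group the features by outcome: allowed everywhere, allowed for the page's origin only, or denied. A sketch over the shape above (the helper name and grouping are mine):

```javascript
// Sketch (illustrative): summarize a getFeaturePolicyAllowListOnPage()-style
// result object by outcome.
function summarizePolicies(map) {
  const summary = { everywhere: [], sameOrigin: [], denied: [] };
  for (const [feature, info] of Object.entries(map)) {
    if (!info.allowed) summary.denied.push(feature);
    else if (info.allowList.includes('*')) summary.everywhere.push(feature);
    else summary.sameOrigin.push(feature);
  }
  return summary;
}

// A small excerpt of the object above:
const sample = {
  'sync-xhr': { allowed: true, allowList: ['*'] },
  geolocation: { allowed: true, allowList: ['https://example.com'] },
  serial: { allowed: false, allowList: [] },
};

console.log(summarizePolicies(sample));
// → everywhere: ['sync-xhr'], sameOrigin: ['geolocation'], denied: ['serial']
```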

HTTP headers

I checked the request and response headers, but nothing special stood out. The only remarkable thing, given that they test for Feature Policy, is the absence of a Feature-Policy header.

Request header

:authority: facebook-debug.glitch.me
:method: GET
:path: /
:scheme: https
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
accept-encoding: gzip, deflate
accept-language: en-US,en;q=0.9,de-DE;q=0.8,de;q=0.7,ca-ES;q=0.6,ca;q=0.5
cache-control: no-cache
pragma: no-cache
referer: http://m.facebook.com/
sec-fetch-mode: navigate
sec-fetch-site: none
sec-fetch-user: ?1
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Linux; Android 10; Pixel 3a Build/QQ2A.191125.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/78.0.3904.108 Mobile Safari/537.36 [FB_IAB/FB4A;FBAV/250.0.0.14.241;]
x-requested-with: com.facebook.katana

Response header

accept-ranges: bytes
cache-control: max-age=0
content-length: 527
content-type: text/html; charset=utf-8
date: Mon, 09 Dec 2019 15:23:40 GMT
etag: W/"20f-16eeb283880"
last-modified: Mon, 09 Dec 2019 14:55:12 GMT
status: 200
vary: Origin

Conclusion

All in all, these are all the things Facebook did that I could observe on the pages that I tested. I didn't notice any click or scroll listeners (which could be used to track Facebook users' engagement with the pages they browse), or any other kind of "phoning home" functionality, but of course they could have implemented this natively via the WebView's View.OnScrollChangeListener or View.OnClickListener, as they did for long clicks with the FbQuoteShareJSInterface.
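For completeness, here's how one could check for JavaScript-added listeners in such an analysis: wrap addEventListener before the page's own scripts run and record what gets registered. A sketch of that auditing approach, with a stand-in target so it stays self-contained (all names are mine):

```javascript
// Sketch (illustrative): wrap an EventTarget's addEventListener to record
// which event types get listeners, e.g. to spot 'scroll' or 'click'
// engagement tracking added from JavaScript.
function auditListeners(target) {
  const registered = [];
  const original = target.addEventListener;
  target.addEventListener = function (type, listener, options) {
    registered.push(type);
    return original.call(this, type, listener, options);
  };
  return registered;
}

// A minimal stand-in for a DOM EventTarget keeps the sketch self-contained;
// in a browser, one would pass document or window instead.
const fakeTarget = { addEventListener() {} };
const seen = auditListeners(fakeTarget);
fakeTarget.addEventListener('scroll', () => {});
fakeTarget.addEventListener('click', () => {});

console.log(seen);
// → [ 'scroll', 'click' ]
```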

That being said, if after reading this you prefer your links to open in your default browser, it's well hidden, but definitely possible: Settings > Media and Contacts > Links open externally.

Facebook Settings option: "Links open externally"

It goes without saying, but just in case: all code snippets in this post are owned by and copyright of Facebook.

Did you run a similar analysis with similar (or maybe different) findings? Let me know on Twitter or Mastodon by posting your thoughts with a link to this post. It will then show up as a Webmention at the bottom. On supporting platforms, you can simply use the "Share Article" button below.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2019/12/09/inspecting-facebooks-webview/.

Animated SVG favicons

When it comes to animating SVGs, there're three options: using CSS, JS, or SMIL. Each comes with its own pros and cons, whose discussion is beyond the scope of this article, but Sara Soueidan has a great article on the topic. In this post, I add a repeating shrink animation to a circle with all three methods, and then try to use these SVGs as favicons.

Animating SVG with CSS

Here's an example of animating an SVG with CSS based on the animation and the transform properties. I scale the circle from the center and repeat the animation forever:

<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <style>
    svg {
      max-width: 100px;
    }

    circle {
      display: block;
      animation: 2s linear infinite both circle-animation;
      transform-origin: 50% 50%;
    }

    @keyframes circle-animation {
      0% {
        transform: scale(1);
      }
      100% {
        transform: scale(0);
      }
    }
  </style>
  <circle fill="red" cx="50" cy="50" r="45"/>
</svg>

Animating SVG with JS

The SVG <script> tag allows you to add scripts to an SVG document. It has some subtle differences from the regular HTML <script>; for example, it uses the href attribute instead of src. Above all, it's important to know that functions defined within any <script> tag have global scope across the entire current document. Below, you can see an SVG script that reduces the radius of the circle until it's equal to zero, then resets it to the initial value, and repeats this forever.

<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <circle fill="blue" cx="50" cy="50" r="45" />
  <script type="text/javascript"><![CDATA[
    const circle = document.querySelector('circle');
    let r = 45;
    const animate = () => {
      circle.setAttribute('r', r--);
      if (r === 0) {
        r = 45;
      }
      requestAnimationFrame(animate);
    };
    requestAnimationFrame(animate);
  ]]></script>
</svg>

Animating SVG with SMIL

The last example uses SMIL: via the <animate> tag inside the <circle> tag, I declaratively describe that I want to animate the circle's r attribute (which determines the radius) and repeat the animation indefinitely.

<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <circle fill="green" cx="50" cy="50" r="45">
    <animate attributeName="r" from="45" to="0" dur="2s" repeatCount="indefinite"/>
  </circle>
</svg>

Using Animated SVGs as Images

Before using animated SVGs as favicons, I want to briefly discuss how you can use each of the three examples on a website. Again, there're three options: referenced via the src attribute of an <img> tag, embedded in an <iframe>, or inlined in the main document. Remember that SVG scripts have access to the global scope, so they should definitely be used with care. Some user agents, for example Google Chrome, don't run scripts for SVGs referenced from an <img>. The Glitch embedded below shows all variants in action. My recommendation is to stick with CSS animations whenever you can, since it's the most compatible and future-proof variant.

Using Animated SVGs as Favicons

Now that crbug.com/294179 is fixed, Chrome finally supports SVG favicons, alongside many other browsers. I have recently experimented successfully with prefers-color-scheme in SVG favicons, so I wanted to see if animated SVGs work, too. Long story short: at the time of writing, only Firefox seems to support them, and only favicons animated with either CSS or JS. You can see this working in Firefox in the screencast embedded below. If you open my Glitch demo in a standalone window, you can test this yourself with the radio buttons at the top.

Should you use this in practice? Probably not, since it can be really distracting. It might be useful as a progressive enhancement to show activity during a short period of time, for example, while a web application is busy processing data. Before considering using this, I would definitely recommend taking the user's prefers-reduced-motion preference into account.
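Such a guard can be as simple as checking the media query before picking the icon; a sketch with matchMedia injected so the snippet stays self-contained (the function name and file names are mine):

```javascript
// Sketch (illustrative): serve the animated favicon only when the user has
// not asked for reduced motion. In a browser, pass window.matchMedia.
function chooseFavicon(matchMedia, animatedHref, staticHref) {
  const reduce = matchMedia('(prefers-reduced-motion: reduce)').matches;
  return reduce ? staticHref : animatedHref;
}

// Stubs standing in for window.matchMedia:
const reducedMotion = () => ({ matches: true });
const noPreference = () => ({ matches: false });

console.log(chooseFavicon(reducedMotion, 'animated.svg', 'static.svg'));
// → static.svg
console.log(chooseFavicon(noPreference, 'animated.svg', 'static.svg'));
// → animated.svg
```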

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2019/12/01/animated-svg-favicons/.

The redesigned Blogccasion is live

I started to blog way back in 2005, and while I was skeptical whether it would work out and be worth my time, I just tried it and danced 💃 and wrote like no one was watching (which definitely was the case for a while). Many of these early posts are slightly embarrassing from today's point of view, but I decided to keep them around nevertheless, since they are a part of me. Remember, this was before social networks became a thing. Actually, social networks almost killed my blogging: in both 2016 and 2017, I wrote only one post each, but plenty of tweets and Facebook posts. Here's a screenshot of the old blog:

The old Blogccasion

One of the reasons why I blogged less was also the hand-rolled stack that the blog was built upon: a classic LAMP stack, consisting of Linux, Apache, MySQL, and handwritten PHP 5. Since I had (and still have) no clue about MySQL character encoding configuration, I couldn't store emoji 🤔 in my database, but hey ¯\_(ツ)_/¯. The switch to HTTPS then killed my login system (which, back on HTTP, anyone could have sniffed; did I mention I was clueless?). In the end, I had to log into phpMyAdmin to enter new blog posts into the database by hand. It was clearly time for a change.

Luckily this was the time when static site builders became more and more popular. I had heard good things of Zach Leatherman's Eleventy, so I went for it. It was super helpful to have the eleventy-base-blog repository that shows how to get started with Eleventy. I took extra care to make sure all my old URLs still worked, and learned more than I wanted about .htaccess files and .htaccess rewrite maps, since we all know that cool URIs don't change. There I was with a modern stack, and a 2005 design.

Now, I've finally also updated the design, and, while I'm not a designer, I quite like it. Obviously it supports prefers-color-scheme, a.k.a. dark mode, and uses the <dark-mode-toggle> custom element, but I've also decided to go for a responsive "holy grail" layout based on CSS Grid.

Here're the resources that helped me build the new Blogccasion:

🙏 Thanks everyone for letting me stand on your shoulders!

There're still some rough edges, so if you encounter a problem, please report an issue. It's well known that there are a lot of encoding errors in the older posts; at some point, I broke my database in an attempt to convert it to UTF-8 🤦‍♂️… If you care, you can also propose an edit straightaway; the "edit this page on GitHub" link is 👇 at the bottom of each post. Thanks, and welcome to the new Blogccasion.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2019/09/29/the-redesigned-blogccasion-is-live/.

`prefers-color-scheme` in SVG favicons for dark mode icons

🎉 Chrome finally accepts SVG favicons now that crbug.com/294179, where this feature was requested on September 18, 2013(!), has been fixed. This means that you can style your icon with an inline prefers-color-scheme media query and you'll get two icons for the price of one!

<!-- icon.svg -->
<svg width="100" height="100" xmlns="http://www.w3.org/2000/svg">
  <style>
    circle {
      fill: yellow;
      stroke: black;
      stroke-width: 3px;
    }
    @media (prefers-color-scheme: dark) {
      circle {
        fill: black;
        stroke: yellow;
      }
    }
  </style>
  <circle cx="50" cy="50" r="47"/>
</svg>

<!-- index.html -->
<link rel="icon" href="/icon.svg">

You can see a demo of this in action at 🌒 dark-mode-favicon.glitch.me ☀️. Until this feature has landed in Chrome stable/beta/dev/canary, be sure to test it with the latest Chromium build, which you can download via François Beaufort's Chromium Downloader.

Demo app running in dark mode, showing the dark mode favicon being used.

Demo app running in light mode, showing the light mode favicon being used.

Full credits to Mathias Bynens, who independently created almost the same demo as mine (which I hadn't seen), and whose link to Jake Archibald's post SVG & media queries I did follow. Mathias has now filed the follow-up bug crbug.com/1026539, which will improve the favicon update behavior (currently you still need to reload the page after a color scheme change).
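Until that's fixed, a page could work around the stale icon itself by re-setting the favicon's href whenever the color scheme flips. A hedged sketch with the DOM dependencies injected so it stays self-contained (the cache-busting query parameter and all names are mine):

```javascript
// Sketch (illustrative): force a favicon re-fetch on prefers-color-scheme
// changes by re-assigning the <link> href with a cache-busting query.
// In a browser, one might call:
//   refreshFaviconOnSchemeChange(
//     window.matchMedia('(prefers-color-scheme: dark)'),
//     document.querySelector('link[rel="icon"]'));
function refreshFaviconOnSchemeChange(mql, link) {
  const refresh = () => {
    const base = link.href.split('?')[0];
    link.href = base + '?scheme=' + (mql.matches ? 'dark' : 'light');
  };
  mql.addEventListener('change', refresh);
  refresh();
}

// Stubs standing in for the MediaQueryList and the <link> element:
const listeners = [];
const mql = { matches: true, addEventListener: (type, fn) => listeners.push(fn) };
const link = { href: '/icon.svg' };

refreshFaviconOnSchemeChange(mql, link);
console.log(link.href); // → /icon.svg?scheme=dark

mql.matches = false;
listeners.forEach(fn => fn()); // simulate a color scheme change
console.log(link.href); // → /icon.svg?scheme=light
```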

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2019/09/21/prefers-color-scheme-in-svg-favicons-for-dark-mode-icons/.

Project Fugu 🐡 at W3C TPAC

This week, I attended my now third W3C TPAC. After TPAC 2017 in Burlingame, CA, United States of America, and TPAC 2018 in Lyon, France, TPAC 2019 was held in Fukuoka, Japan. For the first time, I felt like I could contribute somewhat meaningfully and had at least a baseline understanding of the underlying W3C mechanics. As every year, the TPAC agenda was crammed and overlaps were unavoidable. Below is a write-up of the meetings I had time to attend.

Day 1

On Monday, I attended the Service Workers Working Group (WG) meeting. The agenda this time was a mix of implementor updates, new proposals, and a lot of discussion of special cases. I know Jake Archibald is working on a summary post, so I leave it to him to summarize the day. The raw meeting minutes are available in case you're interested.

Day 2

On Tuesday, I visited the Web Application Security Working Group meeting as an observer. I was mostly interested in this WG because the agenda promised interesting proposals like Apple's /.well-known/change-password, which was met with universal agreement. Some interesting discussion also sparked around another Apple proposal, the isLoggedIn() API. I was reminded of why we can't have nice things on the web through an attack vector that leverages HSTS for tracking purposes; luckily, there is browser mitigation in place to prevent this. The meeting minutes cover the entire day.

Day 3

Wednesday was unconference day with 59(!) breakout sessions. In contrast to the at times tedious working group sessions, I often find breakout sessions more interesting and an opportunity to learn new things.

Breakout Session JS Built-in Modules

The first breakout session I attended was on JS built-in modules, a TC39 proposal by Apple for a JavaScript built-in library. The session's minutes are available; in general, there was a lot of discussion and disagreement around namespaces and how built-in modules should be governed.

Breakout Session New Module Types: JSON, CSS, and HTML

The next session was on new module types for JSON, CSS, and HTML. As the developer of <dark-mode-toggle>, I'm fully in favor of getting rid of the clumsy innerHTML all the things!!!1! approach that vanilla JS custom elements currently force the programmer to follow. If you're likewise interested, subscribe to the CSS Modules issue and the HTML Modules issue in the Web Components WG repo. The discussion revolved mostly around details of how imports would work and how to convey the type of the import to avoid security issues, for example, following the <link rel="preload"> way. The meeting minutes have the full details.

// Non-working example
import styles from 'styles.css' as stylesheet;
import settings from 'settings.json' as json;

Breakout Session Mini App Standardization

The Mini App Standardization session, organized by the Chinese Web Interest Group, was super interesting to me. In preparation for the Google Developer Days in Shanghai, China, where I spoke right before TPAC, I had looked at WeChat mini programs and documented the developer experience and how close to, and yet how far from, the web they are. A couple of days before TPAC, the Chinese Web Interest Group had released a white paper that documents their ideas. The success the various mini app platforms have achieved deserves our full respect. There were, however, various voices (including from the TAG) that urged the stakeholders to converge their work with efforts made in the area of Progressive Web Apps, for example around the Web App Manifest, rather than create yet another manifest-like format. Read the full session minutes for all details. One of the results of the session was the creation of the MiniApps Ecosystem Community Group, which I hope to join.

Breakout Session For a More Capable Web—Project Fugu

Together with Anssi Kostiainen from Intel and John Jansen from Microsoft, I organized a breakout session on a more capable web under the umbrella of Project Fugu 🐡. You can see our slides embedded below. In the session, we argued that to remain competitive with native, hybrid, or mini apps, web apps, too, need access to a comparable set of APIs. We briefly touched upon the APIs being worked on by the cross-company project partners, and then opened the floor for an open discussion on why we see the browser-accessible web in danger if we don't move it forward now, despite all the fully acknowledged challenges around privacy, security, and compatibility. You can follow the discussion in the excellent(!) session minutes, courtesy of Anssi.

Day 4 and Day 5

Thursday and Friday were dedicated to the Devices and Sensors WG. The agenda was not too packed, but still kept us busy for one and a half days. Almost from the start, we discussed permissions and how they should be handled. Permissions are a big topic in Project Fugu 🐡, and I'm happy that there's ongoing work in the TAG to improve the situation, including efforts around the Permissions API, which is unfortunately not universally supported. This leads to inconsistencies, with some APIs having a static method for getting permission, others asking for permission upon the first usage attempt, and yet others integrating with the Permissions API. For the Geolocation Sensor API, we agreed to try retrofitting expressive configuration of foreground tracking into the Geolocation API specification instead of doing it in Geolocation Sensor, which should improve vendor adoption. For geofencing and background geolocation tracking, we decided to explore Notification Triggers and Wake Locks respectively, neither of which was an option when the work on Geolocation Sensor initially started.
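Where the Permissions API is supported, querying permission state is uniform across features; a hedged sketch with the permissions object injected so it stays self-contained (the function name and the stub are mine):

```javascript
// Sketch (illustrative): query a permission's state via the Permissions API,
// falling back to 'unsupported' where the feature name is not recognized
// (the kind of inconsistency discussed above). In a browser, one would pass
// navigator.permissions.
async function permissionState(permissions, name) {
  try {
    const status = await permissions.query({ name });
    return status.state; // 'granted', 'denied', or 'prompt'
  } catch (err) {
    return 'unsupported';
  }
}

// Stub standing in for navigator.permissions:
const stub = {
  query: async ({ name }) => {
    if (name === 'geolocation') return { state: 'granted' };
    throw new TypeError('Unrecognized permission name.');
  },
};

permissionState(stub, 'geolocation').then(console.log); // → granted
permissionState(stub, 'wake-lock').then(console.log);   // → unsupported
```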

Maryam Mehrnezhad, an invited expert in the working group whose research focuses on privacy and security, presented on and discussed with us the implications sensors potentially have on both fields, and whether mitigations like accuracy bucketing or frequency capping are effective. The minutes capture the conversation well.

Finally, we changed the surface of the Wake Lock API, hopefully for the last time. The previous shape just didn't feel right from a developer experience point of view, so better to change the API while it's behind a flag than be sorry forever. I do feel sorry for the implementors Rijubrata Bhaumik and Raphael Kubo da Costa, though… 🙇

partial interface Navigator {
  [SameObject] readonly attribute WakeLock wakeLock;
};

partial interface WorkerNavigator {
  [SameObject] readonly attribute WakeLock wakeLock;
};

[Exposed=(Window,DedicatedWorker)]
interface WakeLock {
  Promise<unsigned long long> request(WakeLockType type);
  Promise<void> release(unsigned long long wakeLockID);
};

dictionary WakeLockEventInit {
  required unsigned long long wakeLockID;
};

[Exposed=(Window,DedicatedWorker)]
interface WakeLockEvent : Event {
  constructor(DOMString type, WakeLockEventInit init);
  readonly attribute unsigned long long wakeLockID;
};
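Based on this IDL, using the reworked API might look like the sketch below. Treat it as illustrative only: the API kept evolving after TPAC, the navigator object is injected to keep the snippet self-contained, and 'screen' as the wake lock type is my assumption.

```javascript
// Sketch (illustrative): request a wake lock, do some work, and release the
// lock via the returned ID, matching the request()/release() pair above.
async function keepAwakeWhile(nav, type, work) {
  const id = await nav.wakeLock.request(type);
  try {
    return await work();
  } finally {
    await nav.wakeLock.release(id);
  }
}

// Stub standing in for navigator so the sketch runs anywhere:
const calls = [];
const fakeNav = {
  wakeLock: {
    request: async type => { calls.push(['request', type]); return 7; },
    release: async id => { calls.push(['release', id]); },
  },
};

keepAwakeWhile(fakeNav, 'screen', async () => 'done').then(result => {
  console.log(result, calls);
  // → done [ [ 'request', 'screen' ], [ 'release', 7 ] ]
});
```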

As a general theme, we "hardened" a number of APIs; for example, we decided to integrate geolocation with Feature Policy and now require a secure connection for the Battery Status API. The chairs, Anssi and Reilly Grant, scribed the one and a half days brilliantly; the minutes for day 1 and day 2 are both online.

Conclusion

As I wrote in the beginning, TPAC is slowly starting to feel like a venue where I can make valuable contributions. Rowan Merewood put it like this in a tweet:

The biggest thing I'm learning at [#W3Ctpac] is if you want to change the web, it's a surprisingly small group of people you need to convince. The surrounding appearance of the W3C and all its language is intimidating, but underneath it's just other human beings you can talk to.

To which Mariko Kosaka fittingly responds:

[Y]eah, but let's not forget getting to talk to that small set of people most often comes with being very, very, very privileged. […]

It's indeed a massive privilege to work for a company that has the money to take part in W3C activities, fly people across the world, and let them stay in five-star conference hotels. With all the love for the web and all the great memories of a fantastic TPAC, let's not forget: the web is threatened from multiple angles, and being able to work in the standards bodies on defending it is a privilege of the few. Neither should be the case.

Thomas Steiner
This post appeared first on https://blog.tomayac.com/2019/09/21/project-fugu-at-w3c-tpac/.