The post assumes you already have an Xcode project either created
manually
or via the
converter script
and that you use Swift.
Here is an
example build script
for one of my extensions for reference.
Some of these steps may improve or change over time for new versions of Xcode; this guide was written for Version 12.0 (12A7209).
Caveat: this is my first time interacting with Xcode, so if any of the steps do not make sense, thanks for
correcting me.
Change the bundle identifier of the extension from com.example.foo-Extension to
com.example.foo.Extension (that is, replace the '-' with a '.') and reflect the change in
ViewController.swift. For some reason this is necessary.
After my W3C TPAC breakout session focused on
Project Fugu
last year, this year I again ran a breakout session, titled "Learning from Mini Apps",
at the fully virtual TPAC 2020 event.
In this breakout session, I first explained what mini apps are and how to build them,
and then moved on to an open discussion focused on what Web developers can learn from mini apps
and their developer experience.
The TPAC folks have done an ace (👏) job and have put all the resources from my session online
(and everyone else's of course):
The 📹 talk recording
if you fancy watching the whole talk.
For a first-time virtual event, communication went really well.
It felt like everyone had learned by now how to hold discussions in virtual rooms,
and Zoom as the communication platform held up well.
While I appreciate the W3C team having made an effort to replace hallway conversations,
I didn't attend any of these slots.
It just felt exhausting to do those on top of 11pm meetings or 7am slots,
apart from the "just fine" afternoon slots
(the
"golden hour"
is actually super friendly for people in the EU), but time zones are hard.
I have landed an article over on web.dev
that talks about the Gamepad API.
One piece of information from this article that our editorial board was not comfortable
with having in there was instructions on how to play the Chrome dino game on a Nintendo Switch.
So, here you go with the steps right here on my private blog instead.
Press any of the Nintendo Switch's buttons to play!
The Nintendo Switch contains a
hidden browser,
which serves for logging in to Wi-Fi networks behind a captive portal.
The browser is pretty barebones and does not have a URL bar,
but, once you have navigated to a page, it is fully usable.
When doing a connection test in System Settings, the Switch will detect that a captive portal
is present (and display an error) when the response for
http://conntest.nintendowifi.net/
does not include the X-Organization: Nintendo HTTP header.
I can make creative use of this by pointing the Switch to a DNS server
that simulates a captive portal that then redirects to a search engine.
Go to System Settings and then Internet Settings and
find the Wi-Fi network that your Switch is connected to. Tap Change Settings.
Find the section with the DNS Settings and add
45.55.142.122 as a new Primary DNS.
Note that this DNS server is not operated by me
but a third-party, so proceed at your own risk.
Save the settings and then tap Connect to This Network.
The Switch will tell you that Registration is required to use this network. Tap Next.
🚨 For regular Switch online services to work again,
turn your DNS settings back to Automatic.
Conveniently, the Switch remembers previous manual DNS settings,
so you can easily toggle between Automatic and Manual.
For the Chrome dino gamepad demo to work,
I have ripped out the Chrome dino game from the core Chromium project
(updating an earlier effort by
Arnelle Ballane),
placed it on a standalone site, extended the existing gamepad API implementation by adding ducking
and vibration effects, created a full screen mode,
and Mehul Satardekar
contributed a dark mode implementation.
Happy gaming!
You can also play
Chrome dino
with your gamepad on this very site.
The source code is available
on GitHub.
Check out the gamepad polling implementation in
trex-runner.js
and note how it is emulating key presses.
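The general idea is something like the sketch below. This is not the actual trex-runner.js code; the choice of button mapping and the space bar key are assumptions for illustration.

// Poll all connected gamepads and translate button presses
// into synthetic keyboard events that the game already understands.
let polling = false;

const pollGamepads = () => {
  let anyButtonPressed = false;
  for (const gamepad of Array.from(navigator.getGamepads())) {
    if (!gamepad) {
      continue;
    }
    if (gamepad.buttons.some((button) => button.pressed)) {
      anyButtonPressed = true;
    }
  }
  // Emulate pressing or releasing the space bar (jump).
  const type = anyButtonPressed ? 'keydown' : 'keyup';
  document.dispatchEvent(
      new KeyboardEvent(type, {key: ' ', code: 'Space'}));
  requestAnimationFrame(pollGamepads);
};

window.addEventListener('gamepadconnected', () => {
  if (!polling) {
    polling = true;
    requestAnimationFrame(pollGamepads);
  }
});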
Google, like many other companies, has a required working from home (WFH) policy
during the COVID-19 crisis.
It has taken me a bit, but now I have found a decent WFH setup.
An iPad Pro (12.9-inch) (2nd generation):
This is a dented, scratched device from 2017 that I bought for €349.99
(incl. tax, customs, and shipping) from a dealer
in the UK
on eBay
(this is the search deeplink that I used).
An iPad Air 2: This is a private device from 2014.
A Lenovo Smart Clock:
This mostly just serves as a photo frame, and of course as a desk clock.
The Sidecar feature, so I can use my iPad Pro
as a second screen with my MacBook Air.
The coolest thing about this feature is that I can multitask it away (see next bullet)
without the laptop readjusting the screen arrangement.
The Hangouts Meet app
on my iPad Pro, so my laptop performance stays unaffected when I am on a video call.
A nice side-effect is that the camera of the iPad Pro is in the middle of my two screens,
so no weird "looking over the other person" effect when I am on a call.
The Gmail app
on my iPad Air, so I can always have an eye on my email.
(Honorable mention) The iDisplay app on the iPad Air
with the iDisplay server running on the laptop, so I can use the iPad Air as a third screen.
Unfortunately, since I do not have another free USB C port on my laptop,
it is really laggy over Wi-Fi, but works when I really need maximum screen real estate.
(Out of scope) The Free Sidecar project promises to
enable Sidecar on older iPads like my iPad Air 2,
since apparently Apple simply blocks older devices for no particular reason.
It requires temporarily turning off System Integrity Protection,
which is something I cannot (and do not want to) do on my corporate laptop.
A school desk—This
is a desk we had bought earlier on eBay, placed on two kids' chairs to convert it into a standing desk.
Some shoe boxes to elevate the two main screens to eye height.
I had quite some neck pain during the first couple of days.
It is definitely not perfect, but I am quite happy with it now.
I very much want the crisis to be over, but (with the kids back in school),
I could probably get used to permanently working from home.
The Asynchronous Clipboard API
provides direct access to read and write clipboard data.
Apart from text, since Chrome 76, you can also copy and paste image data with the API.
For more details on this, check out my
article on web.dev.
Here's the gist of how copying an image blob works:
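(A minimal sketch; the image URL is just a placeholder.)

const copyImage = async () => {
  try {
    // Fetch the image and obtain it as a blob.
    const response = await fetch('/images/dino.png');
    const blob = await response.blob();
    // Write the blob to the clipboard, keyed by its MIME type.
    await navigator.clipboard.write([
      new ClipboardItem({[blob.type]: blob}),
    ]);
    console.log('Image copied.');
  } catch (err) {
    console.error(err.name, err.message);
  }
};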
Note that you need to pass an array of ClipboardItems to the navigator.clipboard.write() method,
which implies that you can place more than one item on the clipboard
(but this is not yet implemented in Chrome as of March 2020).
I have to admit, I only used to think of the clipboard as a one-item stack,
so any new item replaces the existing one.
However, for example, Microsoft Office 365's clipboard on Windows 10 supports
up to 24 clipboard items.
The generic code for pasting an image, that is, for reading from the clipboard,
is a little more involved.
Also be advised that reading from the clipboard triggers a
permission prompt
before the read operation can succeed.
Here's the trimmed down
example from my article:
const paste = async () => {
  try {
    const clipboardItems = await navigator.clipboard.read();
    for (const clipboardItem of clipboardItems) {
      for (const type of clipboardItem.types) {
        return await clipboardItem.getType(type);
      }
    }
  } catch (err) {
    console.error(err.name, err.message);
  }
};
See how I first iterate over all clipboardItems
(reminder, there can be just one in the current implementation),
but then also iterate over all clipboardItem.types of each individual clipboardItem,
only to then just stop at the first type and return whatever blob I encounter there.
So far I haven't really paid much attention to what this enables,
but yesterday, I had a sudden epiphany 🤯.
Before I get into the details of multi-MIME type copying,
let me quickly derail to
server-driven content negotiation, quoting straight from MDN:
In server-driven content negotiation, or proactive content negotiation, the browser
(or any other kind of user-agent) sends several HTTP headers along with the URL.
These headers describe the preferred choice of the user.
The server uses them as hints and an internal algorithm chooses the best content
to serve to the client.
A similar content negotiation mechanism takes place with copying.
You have probably encountered this effect before
when you have copied rich text, like formatted HTML, into a plain text field:
the rich text is automatically converted to plain text.
(💡 Pro tip: to force pasting into a rich text context without formatting,
use Ctrl + Shift + v on Windows,
or Cmd + Shift + v on macOS.)
So back to content negotiation with image copying.
If you copy an SVG image, then open macOS
Preview,
and finally click "File" > "New from Clipboard",
you would probably expect an image to be pasted.
However, if you copy an SVG image and paste it into
Visual Studio Code
or into SVGOMG's "Paste markup" field,
you would probably expect the source code to be pasted.
With multi-MIME type copying, you can achieve exactly that 🎉.
Below is the code of a future-proof copy function and some helper methods
with the following functionality:
For images that are not SVGs, it creates a textual representation
based on the image's alt text attribute.
For SVG images, it creates a textual representation based on the SVG source code.
At present, the Async Clipboard API only works with image/png,
but the code nevertheless tries to put a representation in the image's original MIME type
on the clipboard, alongside a PNG representation.
So in the generic case, for an SVG image, you would end up with three representations:
the source code as text/plain, the SVG image as image/svg+xml, and a PNG render as image/png.
const copy = async (img) => {
  // This assumes you have marked up images like so:
  // <img
  //     src="foo.svg"
  //     data-mime-type="image/svg+xml"
  //     alt="Foo">
  //
  // Applying this markup could be automated
  // (for all applicable MIME types):
  //
  // document.querySelectorAll('img[src*=".svg"]')
  //     .forEach((img) => {
  //       img.dataset.mimeType = 'image/svg+xml';
  //     });
  const mimeType = img.dataset.mimeType;
  // Always create a textual representation based on the
  // `alt` text, or based on the source code for SVG images.
  let text = null;
  if (mimeType === 'image/svg+xml') {
    text = await toSourceBlob(img);
  } else {
    text = new Blob([img.alt], {type: 'text/plain'});
  }
  const clipboardData = {
    'text/plain': text,
  };
  // Always create a PNG representation.
  clipboardData['image/png'] = await toPNGBlob(img);
  // When dealing with a non-PNG image, create a
  // representation in the MIME type in question.
  if (mimeType !== 'image/png') {
    clipboardData[mimeType] = await toOriginBlob(img);
  }
  try {
    await navigator.clipboard.write([
      new ClipboardItem(clipboardData),
    ]);
  } catch (err) {
    // Currently only `text/plain` and `image/png` are
    // implemented, so if there is a `NotAllowedError`,
    // remove the other representation.
    console.warn(err.name, err.message);
    if (err.name === 'NotAllowedError') {
      const disallowedMimeType = err.message.replace(
          /^.*?\s(\w+\/[^\s]+).*?$/, '$1');
      delete clipboardData[disallowedMimeType];
      try {
        await navigator.clipboard.write([
          new ClipboardItem(clipboardData),
        ]);
      } catch (err) {
        throw err;
      }
    }
  }
  // Log what's ultimately on the clipboard.
  console.log(clipboardData);
};
// Draws an image on an offscreen canvas
// and converts it to a PNG blob.
const toPNGBlob = async (img) => {
  const canvas = new OffscreenCanvas(
      img.naturalWidth, img.naturalHeight);
  const ctx = canvas.getContext('2d');
  // This removes transparency. Remove at will.
  ctx.fillStyle = '#fff';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(img, 0, 0);
  return await canvas.convertToBlob();
};
// Fetches an image resource and returns
// its blob of whatever MIME type.
const toOriginBlob = async (img) => {
  const response = await fetch(img.src);
  return await response.blob();
};
// Fetches an SVG image resource and returns
// a blob based on the source code.
const toSourceBlob = async (img) => {
  const response = await fetch(img.src);
  const source = await response.text();
  return new Blob([source], {type: 'text/plain'});
};
If you use this copy function (demo below ⤵️) to copy an SVG image,
for example, everyone's favorite
symptoms of coronavirus 🦠 disease diagram,
and paste it in macOS Preview (that does not support SVG) or the "Paste markup" field of
SVGOMG, this is what you get:
The macOS Preview app with a pasted PNG image.
The SVGOMG web app with a pasted SVG image.
Unfortunately, you can't play with this code in the embedded example below just yet,
since webappsec-feature-policy#322
is still open.
The demo works if you open it directly on Glitch.
Programmatic multi-MIME type copying is a powerful feature.
At present, the Async Clipboard API is still limited,
but raw clipboard access is on the radar of the
🐡 Project Fugu team
that I am a small part of.
The feature is being tracked as crbug/897289.
All that being said, raw clipboard access has its risks, too, as clearly pointed out in the
TAG review.
I do hope use cases like multi-MIME type copying that I have motivated in this blog post
can help create developer enthusiasm so that browser engineers and security experts can make sure
the feature gets implemented and lands in a secure way.
The PageSpeed modules
(not to be confused with the
PageSpeed Insights
site analysis service),
are open-source webserver modules that optimize your site automatically.
Namely, there is mod_pagespeed
for the Apache server and
ngx_pagespeed
for the Nginx server.
For example, PageSpeed can automatically create WebP versions for all your image resources,
and conditionally only serve the format to clients that accept image/webp.
I use it on this very blog; inspect a request to any JPEG image
and see how, on supporting browsers, it gets served as WebP.
When it comes to compression, Brotli really makes a difference.
Brotli compression is only supported over HTTPS and is requested by clients
by including br in the accept-encoding header.
In practice, Chrome sends
accept-encoding: gzip, deflate, br.
As an example for the positive impact compared to gzip, check out a recent case study shared by
Addy Osmani
in which the web team of the hotel company Treebo share their
Tale of Brotli Compression.
The even better news is that while we are waiting for native Brotli support in PageSpeed,
we can just outsource Brotli compression to the underlying webserver.
To do so, simply disable PageSpeed's HTTPCache Compression.
Quoting from the documentation:
To configure cache compression, set HttpCacheCompressionLevel to values between -1 and 9,
with 0 being off, -1 being gzip's default compression, and 9 being maximum compression.
📢 So to make PageSpeed work with Brotli, what you want in your
pagespeed.conf
file is a new line:
# Disable PageSpeed's gzip compression, so the server's
# native Brotli compression kicks in via `mod_brotli`
# or `ngx_brotli`.
ModPagespeedHttpCacheCompressionLevel 0
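Enabling Brotli on the underlying server then happens outside of PageSpeed. On Nginx with ngx_brotli, a minimal sketch could look like the following; the compression level and MIME types are just example values.

# Enable Brotli compression via ngx_brotli.
brotli on;
brotli_comp_level 6;
brotli_types text/css application/javascript application/json image/svg+xml;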
Rather than hoping for graceful degradation, [progressive enhancement] builds documents
for the least capable or differently capable devices first,
then moves on to enhance those documents with separate logic for presentation,
in ways that don't place an undue burden on baseline devices
but which allow a richer experience for those users with modern graphical browser software.
While in 2003 progressive enhancement was mostly about using presentational features
like then-modern CSS properties, unobtrusive JavaScript for improved usability,
and what are nowadays basic things like Scalable Vector Graphics,
I see progressive enhancement in 2020 as being about using new functional browser capabilities.
Support for core JavaScript language features in major browsers is great.
Kangax' ECMAScript 2016+ compatibility table
is almost all green, and browser vendors generally agree and are quick to implement.
In contrast, there is less agreement on what we colloquially call Fugu 🐡 features.
In Project Fugu,
our objective is the following:
Enable web apps to do anything native apps can,
by exposing the capabilities of native platforms to the web platform,
while maintaining user security, privacy, trust, and other core tenets of the web.
To get an impression of the debate around these features
when it comes to the different browser vendors, I recommend reading the discussions
around the request for a
WebKit position on Web NFC
or the request for a
Mozilla position on screen Wake Lock
(both discussions contain links to the particular specs in question).
In some cases, the result of these positioning threads might be a "we agree to disagree".
And that's fine.
As a result of this disagreement, some Fugu features
will probably never be implemented by all browser vendors.
But what does this mean for developers?
Now and then, in 2003 just like in 2020,
feature detection
plays a central role.
Before using a potentially future new browser capability like, say, the
Native File System API,
developers need to feature-detect the presence of the API.
For the Native File System API, it might look like this:
if ('chooseFileSystemEntries' in window) {
  // Yay, the Native File System API is available! 💾
} else {
  // Nay, a legacy approach is required. 😔
}
In the worst case, there is no legacy approach (the else branch in the code snippet above).
Some Fugu features are so groundbreakingly new that there simply is no replacement.
The Contact Picker API (that allows users to select contacts
from their device's native contact manager) is such an example.
But in other cases, like with the Native File System API,
developers can fall back to
<a download>
for saving and
<input type="file">
for opening files.
The experience will not be the same (while you can open a file, you cannot write back to it;
you will always create a new file that will land in your Downloads folder),
but it is the next best thing.
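A rough sketch of this legacy fallback could look like the code below. The helper names are made up; in a real app, the Native File System code path would replace them under the hood.

// Save via <a download>, open via <input type="file">.
const legacySave = (blob, fileName) => {
  const a = document.createElement('a');
  a.download = fileName;
  a.href = URL.createObjectURL(blob);
  a.addEventListener('click', () => {
    // Give the download time to start before revoking the URL.
    setTimeout(() => URL.revokeObjectURL(a.href), 30 * 1000);
  });
  a.click();
};

const legacyOpen = () => {
  return new Promise((resolve) => {
    const input = document.createElement('input');
    input.type = 'file';
    input.addEventListener('change', () => {
      resolve(input.files[0]);
    });
    input.click();
  });
};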
A suboptimal way to deal with this situation would be to force users to load both code paths,
the legacy approach and the new approach.
Luckily,
dynamic import()
makes differential loading feasible and—as a feature at
stage 4 of the TC39 process—has
great browser support.
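Sketched out, the pattern could look like this; the module paths and the shared interface are made up for illustration.

(async () => {
  // Load only the code path the current browser can actually use.
  const module = ('chooseFileSystemEntries' in window) ?
      await import('./native-file-system.mjs') :
      await import('./legacy-file-access.mjs');
  // Both modules are assumed to expose the same interface.
  const blob = await module.fileOpen();
})();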
I have been exploring this pattern of progressively enhancing a web application with Fugu features.
The other day, I came across an interesting project by
Christopher Chedeau, who also goes by
@Vjeux on most places on the Internet.
Christopher blogged
about a new app of his, Excalidraw, and how the project "exploded"
(in a positive sense).
Made curious from the blog post, I played with the app myself
and immediately thought that it could profit from the Native File System API.
I opened an initial Pull Request
that was quickly merged and that implements the fallback scenario mentioned above,
but I was not really happy with the code duplication I had introduced.
As the logical next step, I created an experimental library
that supports the differential loading pattern via dynamic import().
Introducing browser-nativefs,
an abstraction layer that exposes two functions, fileOpen() and fileSave(),
which under the hood either use the Native File System API or the <a download> and
<input type="file"> legacy approach.
A Pull Request based on this library is now merged
into Excalidraw, and so far it seems to work fine (only the dynamic import() breaks CodeSandbox,
likely a known issue).
You can see the core API of the library below.
// The imported methods will use the Native File
// System API or a fallback implementation.
import {
  fileOpen,
  fileSave,
} from 'https://unpkg.com/browser-nativefs';

(async () => {
  // Open a file.
  const blob = await fileOpen({
    mimeTypes: ['image/*'],
  });

  // Open multiple files.
  const blobs = await fileOpen({
    mimeTypes: ['image/*'],
    multiple: true,
  });

  // Save a file.
  await fileSave(blob, {
    fileName: 'Untitled.png',
  });
})();
Two issues that came up in this context are
#148
on whether a File object should have an attribute
that points to its associated
FileSystemHandle, and
#149
on the ability to provide a name hint for a to-be-saved file.
There are several other open issues
for the API, and its shape is not stable yet.
Some of the API's concepts like FileSystemHandle only make sense when used with the actual API,
but not with a legacy fallback,
so polyfilling
or ponyfilling (as pointed out by my colleague
Jeff Posnick) is—in my humble opinion—less of an option,
at least for the moment.
My current thinking goes more in the direction of positioning this library as an abstraction
like jQuery's $.ajax() or
Axios' axios.get(),
which a significant number of developers still prefer even over newer APIs like fetch().
In a similar vein, Node.js offers a function
fsPromises.readFile()
that—apart from a FileHandle—also
just takes a filename path string, that is, it acts as an optional shortcut to
fsPromises.open(),
which returns a FileHandle
that one can then use with
filehandle.readFile()
that finally returns a Buffer or a string, just like fsPromises.readFile().
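To illustrate the two paths side by side (a quick sketch; the file name is just a placeholder):

import {promises as fsPromises} from 'fs';

(async () => {
  // Shortcut: read a file directly via its path.
  const viaPath = await fsPromises.readFile('./notes.txt', 'utf8');

  // The longer way round: obtain a FileHandle first, then read from it.
  const fileHandle = await fsPromises.open('./notes.txt', 'r');
  const viaHandle = await fileHandle.readFile({encoding: 'utf8'});
  await fileHandle.close();

  console.log(viaPath === viaHandle); // true
})();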
Thus, should the Native File System API then just have a window.readFile() method? Maybe.
But more recently the trend seems to be to expose generic tools like
AbortController
that can be used to cancel many things, including
fetch(),
rather than more specific mechanisms.
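For example (a minimal sketch; the URL is a placeholder):

const controller = new AbortController();

// Start a request that can be canceled via the signal.
fetch('/big-resource', {signal: controller.signal})
    .then((response) => response.blob())
    .catch((err) => {
      if (err.name === 'AbortError') {
        console.log('Request was canceled.');
      }
    });

// Cancel the request, for example, when the user clicks "Stop".
controller.abort();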
When the lower-level primitives are there, developers can build abstractions on top,
and optionally never expose the primitives, just like the fileOpen() and fileSave() methods
in browser-nativefs that one can (but never has to) perfectly use
without ever touching a FileSystemHandle.
Progressive enhancement in the age of Fugu APIs in my opinion is more alive than ever.
I have shown the concept using the example of the Native File System API,
but there are several other new API proposals where this idea (which by no means I claim as new)
could be applied.
For instance, the Shape Detection API
can fall back to JavaScript or Web Assembly libraries, as shown in the
Perception Toolkit.
Another example is the (screen) Wake Lock API
that can fall back to playing an invisible video,
which is the way NoSleep.js implements it.
As I wrote above, the experience probably will not be the same,
but the next best thing.
If you want, give browser-nativefs a try.
I was curious to see if they did something special when they detect a page is using AMP
(spoiler alert: they do not),
so I quickly hacked together a fake AMP page that seemingly fulfilled their simple test.
<html ⚡️>
  <body>Fake AMP</body>
</html>
I am a big emoji fan, so instead of the
<html amp>
variant, I went for the <html ⚡> variant and entered the ⚡ via the macOS emoji picker.
To my surprise, Facebook logged "FBNavAmpDetect: false". Huh 🤷♂️?
My first reaction was: <html ⚡️> does not quite look like what the founders of HTML had in mind,
so maybe hasAttribute()
is specified to return false
when an attribute name is invalid.
But what even is a valid attribute name?
I consulted the HTML spec
where it says (emphasis mine):
Attribute names must consist of one or more characters
other than controls, U+0020 SPACE, U+0022 ("), U+0027 ('), U+003E (>), U+002F (/), U+003D (=),
and noncharacters. In the HTML syntax, attribute names, even those for foreign elements,
may be written with any mix of ASCII lower and ASCII upper alphas.
I was on company chat with Jake Archibald at that moment,
so I confirmed my reading of the spec that ⚡ is not a valid attribute name.
Turns out, it is a valid name, but the spec is formulated in an ambiguous way, so Jake filed
"HTML syntax" attribute names.
And my lead to a rational explanation was gone.
Luckily a valid AMP boilerplate example
was just a quick Web search away, so I copy-pasted the code and Facebook, as expected,
reported "FBNavAmpDetect: true".
I reduced the AMP boilerplate example until it looked like my fake AMP page,
yet Facebook still detected the modified boilerplate as AMP while not detecting mine as AMP.
Essentially my experiment looked like the below code sample.
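Roughly reconstructed (the original sample is not preserved here), the two documents looked identical on screen; only the second ⚡ carries an invisible extra code point.

<!-- Reduced AMP boilerplate, detected as AMP: -->
<html ⚡> <body>Fake AMP</body> </html>

<!-- My hand-typed page, not detected as AMP: -->
<html ⚡️> <body>Fake AMP</body> </html>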
Perfect Heisenbug?
An invisible code point which specifies that the preceding character should be displayed
with emoji presentation. Only required if the preceding character defaults to text presentation.
You may have seen this in effect with the Unicode snowman that appears in a textual ☃︎
as well as in an emoji representation ☃️ (depending on the device you read this on,
they may both look the same).
As far as I can tell, Chrome DevTools prefers to always render the textual variant,
as you can see in the screenshot above.
But with the help of the
length
property and the
charCodeAt()
function, the difference becomes visible.
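For instance (the variable names are just for illustration):

// The two symbols look alike, but differ by one invisible code point.
const typed = '⚡';   // U+26A1 only (what AMP expects).
const picked = '⚡️'; // U+26A1 followed by U+FE0F (macOS emoji picker).

typed.length;                       // 1
picked.length;                      // 2
picked.charCodeAt(1).toString(16);  // 'fe0f'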
The macOS emoji picker creates the variant ⚡️, which includes the Variation Selector-16,
but AMP requires the variant without, which I have also confirmed in the
validator code.
You can see in the screenshot below how the AMP Validator
rejects one of the two High Voltage symbols.
I have filed crbug.com/1033453 against the Chrome DevTools
asking for rendering the characters differently, depending on whether the Variation Selector-16
is present or not.
Further, I have opened a feature request on the AMP Project repo asking that
AMP respect ⚡️ in addition to ⚡.
Same same, but different.
Both
Facebook's Android app
as well as
Facebook's iOS app use a so-called in-app browser,
sometimes also referred to as IAB.
The core argument for using an in-app browser (instead of the user's default browser)
is to keep users in the app and to enable closer app integration patterns
(like "quote from a page in a new Facebook post"), while making others harder or even impossible
(like "share this link to Twitter").
Technically, IABs are implemented as
WebViews on Android,
respectively as
WKWebViews on iOS.
To simplify things, from hereon, I will refer to both simply as WebViews.
In-App Browsers are less capable than real browsers
Turns out, WebViews are rather limited compared to real browsers like Firefox, Edge, Chrome,
and to some extent also Safari.
In the past, I have done some research
on their limitations when it comes to features that are important in the context of Progressive Web Apps.
The linked paper has all the details, but you can simply see for yourself by opening
the 🕵️ PWA Feature Detector app
that I have developed for this research in your regular browser,
and then in a WebView like Facebook's in-app browser (you can share the
link visible to just yourself on Facebook
and then click through, or try to open my
post in the app).
On top of limited features, WebViews can also be used for effectively conducting intended
man-in-the-middle attacks,
since the IAB developer can arbitrarily
inject JavaScript code
and also
intercept network traffic.
Most of the time, this feature is used for good.
For example, an airline company might reuse the 💺 airplane seat selector logic
on both their native app as well as on their Web app.
In their native app, they would remove things like the header and the footer,
which they then would show on the Web (this is likely the origin of the
CSS can kill you meme).
For these reasons, people like Alex Russell—whom
you should definitely follow—have been advocating against WebView-based IABs.
Instead, you should wherever possible use
Chrome Custom Tabs on Android,
or the iOS counterpart
SFSafariViewController.
Alex writes:
Note that responsible native apps have a way of creating an "in app browser" that doesn't subvert user choice or break the web:
The other day, I learned with great joy that Facebook finally have marked their IAB debuggable 🎉.
Patrick Meenan—whom you should likewise follow
and whom you might know from the amazing WebPageTest
project—writes in a Twitter thread:
You can now remote-debug sites in the Facebook in-app browser on Android.
It is enabled automatically so once your device is in dev mode with USB debugging
and a browser open just visit chrome://inspect to attach to the WebView.
The browser (on iOS and Android) is just a WebView so it should behave
mostly like Chrome and Safari but it adds some identifiers to the UA string
which sometimes causes pages that UA sniff to break.
Finally, if your analytics aren't breaking out the in-app browsers for you,
I highly recommend you see if it is possible to enable.
You might be shocked at how much of your traffic comes from an in-app browser
(odds are it is the 3rd most common browser behind Chrome and Safari).
I have thus followed up on the invitation and had a closer look at their IAB by inspecting
example.org and also a simple test page
facebook-debug.glitch.me that contains the
debugger
statement in its head.
I have linked a debug trace 📄 that you can open for yourself
in the Performance tab of the Chrome DevTools.
As pre-announced by Patrick, Facebook's IAB changes the user-agent string.
The
default WebView user-agent string
looks something like Mozilla/5.0 (Linux; Android 5.1.1; Nexus 5 Build/LMY48B; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/43.0.2357.65 Mobile Safari/537.36
Facebook's IAB browser currently sends this:
navigator.userAgent // "Mozilla/5.0 (Linux; Android 10; Pixel 3a Build/QQ2A.191125.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/78.0.3904.108 Mobile Safari/537.36 [FB_IAB/FB4A;FBAV/250.0.0.14.241;]"
Compared to the default user-agent string, the identifying bit is the suffix [FB_IAB/FB4A;FBAV/250.0.0.14.241;].
Facebook runs some performance monitoring via the
Performance interface.
This is split up into two scripts, each of which they seem to run three times.
They also check if a given page is using AMP
by checking for the presence of the amp or ⚡️ attribute on <html>.
They run some Feature Policy
tests via a function named getFeaturePolicyAllowListOnPage().
You can see the documentation for the tested
directives
on the Mozilla Developer Network.
Not all directives are currently supported by the WebView, so a number of warnings are logged.
The recognized ones (i.e., the output of the getFeaturePolicyAllowListOnPage() function above)
result in an object as follows.
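The actual output is not reproduced here, but the kind of collection described could, hypothetically, be built along these lines; this is not Facebook's code, just a sketch using the standard document.featurePolicy interface.

// Collect the allowlist for every feature the browser knows about.
const getFeaturePolicyAllowListOnPage = () => {
  const allowList = {};
  for (const feature of document.featurePolicy.features()) {
    allowList[feature] =
        document.featurePolicy.getAllowlistForFeature(feature);
  }
  return allowList;
};

console.log(getFeaturePolicyAllowListOnPage());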
I checked the response and request headers, but nothing special stood out.
The only remarkable thing, given that they look at Feature Policy, is the absence of the
Feature-Policy header.
All in all, these are all the things Facebook did that I could observe
on the pages that I have tested.
I didn't notice any click listeners or scroll listeners
(that could be used for engagement tracking of Facebook users with the pages they browse on)
or any other kind of "phoning home" functionality,
but they could of course have implemented this natively
via the WebView's
View.OnScrollChangeListener
or
View.OnClickListener,
as they
did
for long clicks for the FbQuoteShareJSInterface.
That being said, if after reading this you prefer your links to open in your default browser,
it's well hidden, but definitely possible: Settings > Media and Contacts > Links open externally.
It goes without saying, but just in case:
all code snippets in this post are owned by and copyright of Facebook.
Did you run a similar analysis with similar (or maybe different) findings?
Let me know on Twitter or Mastodon by posting your thoughts with a link to this post.
It will then show up as a Webmention at the bottom.
On supporting platforms, you can simply use the "Share Article" button below.
When it comes to animating SVGs, there're three options: using
CSS,
JS, or
SMIL.
Each comes with its own pros and cons, whose discussion is beyond the scope of this article,
but Sara Soueidan has a great
article on the topic.
In this post, I add a repeating shrink animation to a circle with all three methods,
and then try to use these SVGs as favicons.
Here's an example of animating an SVG with CSS based on the
animation and the
transform properties.
I scale the circle from the center and repeat the animation forever:
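(A sketch reconstructed from this description; the animation duration is a placeholder value.)

<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <style>
    circle {
      transform-origin: 50% 50%;
      animation: shrink 1.5s linear infinite;
    }
    @keyframes shrink {
      from { transform: scale(1); }
      to { transform: scale(0); }
    }
  </style>
  <circle fill="blue" cx="50" cy="50" r="45"/>
</svg>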
The SVG <script>
tag allows you to add scripts to an SVG document.
It has some subtle differences
to the regular HTML <script>, for example, it uses the href instead of the src attribute,
but above all it's important to know that any functions defined within any <script> tag
have a global scope across the entire current document.
Below, you can see an SVG script used to reduce the radius of the circle until it's equal to zero,
then reset it to the initial value, and finally repeat this forever.
<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <circle fill="blue" cx="50" cy="50" r="45"/>
  <script type="text/javascript"><![CDATA[
    const circle = document.querySelector('circle');
    let r = 45;
    const animate = () => {
      circle.setAttribute('r', r--);
      if (r === 0) {
        r = 45;
      }
      requestAnimationFrame(animate);
    };
    requestAnimationFrame(animate);
  ]]></script>
</svg>
The last example uses SMIL, where, via the
<animate>
tag inside of the <circle> tag, I declaratively describe that I want to
animate the circle's r attribute (that determines the radius) and repeat it indefinitely.
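(Again a sketch based on that description; the duration value is a placeholder.)

<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <circle fill="blue" cx="50" cy="50" r="45">
    <animate
      attributeName="r"
      values="45;0;45"
      dur="1.5s"
      repeatCount="indefinite"/>
  </circle>
</svg>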
Before using animated SVGs as favicons, I want to briefly discuss
how you can use each of the three examples on a website.
Again there're three options: referenced via the src attribute of an <img> tag,
in an <iframe>, or inlined in the main document.
Again, SVG scripts have access to the global scope, so they should definitely be used with care.
Some user agents, for example, Google Chrome, don't run scripts for SVGs in <img>.
The Glitch embedded below shows all variants in action.
My recommendation would be to stick with CSS animations whenever you can,
since it's the most compatible and future-proof variant.
Since crbug.com/294179 is fixed, Chrome finally supports SVG favicons,
alongside many other browsers.
I have recently successfully experimented with
prefers-color-scheme in SVG favicons,
so I wanted to see if animated SVGs work, too.
Long story short, it seems only Firefox supports them at the time of writing,
and only favicons that are animated with either CSS or JS.
You can see this working in Firefox in the screencast embedded below.
If you open my Glitch demo in a standalone window,
you can test this yourself with the radio buttons at the top.
Should you use this in practice?
Probably not, since it can be really distracting.
It might be useful as a progressive enhancement to show activity during a short period of time,
for example, while a web application is busy with processing data.
Before considering using this, I would definitely recommend taking the user's
prefers-reduced-motion
preferences into account.