For obvious reasons, Web Directions Code 2020 adapted to become code://remote – a completely online experience, held over four Fridays.

The Vibe

>> Upbeat Electronic Music

When covid hit it was quickly obvious that our professional lives would be thrown into upheaval along with our personal lives.

We attend conferences for a lot of reasons – to update our knowledge, make connections with new people, to link up with the wider community and vendors, to see friends old and new.

The experience of attending includes the main event – the talks, Q&A and panels – but also the “hallway track”, taking a wander around the meetup muster and vendor booths, entering a few comps, maybe tuning out for a session to go grab a coffee and decompress a bit. Plus many people enjoy the after party or even just the excuse to travel to another city.

I won’t pretend code://remote was the same as meeting in real life, but I did get a sensation of logging in to an event. I wandered around the other content the way I’d wander around the booths, shared photos of my coffee rig in lieu of coffee missions with fellow coffee geeks, and chatted with other attendees. There were competitions, a job board, workshops – all the usual things you’d expect at a conference.

A few benefits of the format emerged as well. With no venue, travel or accommodation required the time and cost commitment was reduced. Pre-produced content meant captioned video and reference material were all available together (I am obviously biased to notice this). Having the speakers in live chat during their own talk was an unusual dynamic.

A few people also observed that it was a new experience to sit and watch with their cat curled up on their lap. That went well with a persistent feline theme throughout the content (from ecmeowscript to kittenhub...!).

It wasn’t all rosy of course. Personally I found it was harder to concentrate, harder to switch off other channels and escape interruptions. Without the feedback loop of an audience, speakers had a different kind of energy; and obviously mid-talk audience participation is out for a pre-record.

One thing was not as new as people think: the event was delayed on day four due to a streaming platform problem – possibly related to the Google outage that day. But there’s always something beyond the event’s control – like the time we had a fire alarm and full evacuation mid-talk at Web Essentials 2005.

Also it was interesting to see there were still moments where you had to be there. I could tell you about Ben Dechrai’s rogue thread, or Phil’s temporal banter, or the race against the clock for a chicken delivery… but, well, you had to be there!

While I’d still pick in-person over online if all else was equal… all else is not equal. It probably never was, from many points of view. Whatever our New Normal™ turns out to be, I can imagine conferences offering an online-only tier as an ongoing thing; and a Conffab subscription makes more sense than ever.

For the past four weeks we had a gathering to look forward to; and over time I think we’ll get better at creating a rich online experience. Isn’t this part of the promise of the web anyway? To break down barriers of distance and circumstance? We’ve become all too well acquainted with the downsides of a global network, but perhaps it’s a good time to revisit the potential and promise of the web.

After all, to borrow from its inventor, the web is for everyone.

>> Upbeat Electronic Music

The tunes

People loved the music played during session breaks, but not everyone realised it was created by one of the presenters (Aaron Turner, aka Groovy Kaiju)! Check out the playlist on Spotify.

This is not the normal Big Stonking Post

(...this is just a tribute?)

The BSP is normally written live during the event, then reviewed and updated with images afterwards. Adding social media posts brings more perspectives and helps build a sense of the event.

This year the content was prepared ahead of time; and structured specifically to synchronise with the video and slides in the conference system. Hopefully still useful, but certainly different.

I imagine most attendees will log into the conference system if they need a reference; so I’m mostly publishing this to mark the occasion.

Fugu and the Future of the Web Platform

Kenneth Rohde Christiansen, Sr. Web Platform Architect, Intel Corporation


Kenneth works at Intel and is also a member of the W3C TAG.

Why does Intel care about the web? Research suggests people spend more than 60% of their time on computers doing something in a browser.

Why does Kenneth love the web? It’s accessible, it’s friendly, the experience is familiar. Progressive Web Apps are part of the evolution of the web platform. So the web has world-wide reach, across all devices; and it’s incredibly flexible.

A quick recap of how PWAs work – while looking at a website, you get the option to install that website as an app on your device. It looks like any other app and works offline as well.

But what happens when you want to do something more than PWA APIs let you do? Maybe you’re worried it’s missing an API, or there’s not a clear roadmap, or it’s not fast enough?

So should you just make a native app? You can, but there are also a lot of ways that a native app isn’t better than the web. You have to download and maintain apps (which adds friction); you are bound by the APIs and rules of each specific platform; and each option is a separate skillset.

You might think multi-compilation solutions will solve the problem for you… but they are not always the best of both worlds! They bring their own problems.

So what’s blocking people adopting the web? Common reasons…

  • existing code that’s hard to port
  • performance issues on a target device
  • lack of good tooling
  • lack of capabilities

But it’s not really that bad, as the web evolves:

  • WASM, transpilation
  • off-thread performance
  • CSS Houdini
  • increasing toolkits and design systems

But we’re here to talk about capabilities, and this is where Project Fugu comes in. It was started by Google, but Intel and Microsoft have joined.

Fugu is named for the puffer fish – delicious if prepared well, deadly if not. Similarly, Project Fugu offers powerful APIs that must be used with care: exposing the capabilities of native apps, while maintaining security, trust and other core tenets of the web.

So what’s the process for getting something into a standard? It starts with getting feedback; slowly turning the idea into a solution; and when the solution has support you go into an ‘origin trial’, which Phil Nash will talk about later. Origin trial features can be enabled on your site with a token.

Where did Fugu start? Initially it was about connectivity, things like USB and Bluetooth. A lot of businesses use this heavily and it has lots of IoT applications. This is in Chrome and Edge today. There’s also a serial API because things like Arduino need serial comms.

Kenneth has worked on NFC (near field comms) – there are a lot of standards in this space, but they are starting with NDEF. The API is extremely simple, you create a reader instance and listen for reader events.
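The reading flow really is that small. A rough sketch of it (the reader class is injected here so the logic can be exercised outside a browser; in a real page you would pass `window.NDEFReader`, feature-detecting it first):

```javascript
// Web NFC sketch: scan for tags and collect any text records found.
// NDEFReaderImpl is injected — pass window.NDEFReader in a real page.
async function watchForTags(NDEFReaderImpl, onTexts) {
  const reader = new NDEFReaderImpl();
  await reader.scan(); // triggers the permission prompt in a browser
  reader.onreading = (event) => {
    const texts = [];
    for (const record of event.message.records) {
      if (record.recordType === "text") {
        // record.data is a DataView over the payload bytes
        texts.push(new TextDecoder(record.encoding || "utf-8").decode(record.data));
      }
    }
    onTexts(texts);
  };
  return reader;
}
```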

Generic sensors – created because existing sensor APIs had a lot of problems (not configurable enough, data not rich enough, etc). So the new API tries to cut across the problems and make sensors easier to use. As not all browsers support this yet, Kenneth created polyfills.
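The new sensor APIs all share one configure/listen/start shape. A hedged sketch (the constructor is injected; in a browser you would pass `window.Accelerometer` after feature-detecting it, and the same pattern applies to Gyroscope and friends):

```javascript
// Generic Sensor sketch: configure a sensor, listen for readings, start it.
function trackMotion(AccelerometerImpl, onReading) {
  const sensor = new AccelerometerImpl({ frequency: 10 }); // 10 readings per second
  sensor.addEventListener("reading", () => onReading(sensor.x, sensor.y, sensor.z));
  sensor.addEventListener("error", (event) => console.error(event.error));
  sensor.start();
  return sensor;
}
```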

Sharing content and receiving shares – despite being an incredibly common task, sharing content hasn’t been supported very well, so Web Share and Web Share Target are being created to finally make it easy to do with PWAs.
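In code, Web Share is a single call, best wrapped with a fallback since not every browser exposes it. A sketch (navigator is passed in so the fallback path is testable; the return values are just for illustration):

```javascript
// Web Share sketch with progressive enhancement: use the native share
// sheet when available, otherwise hand the data to a fallback.
async function shareArticle(nav, data, fallback) {
  if (nav && typeof nav.share === "function") {
    try {
      await nav.share(data); // data is { title, text, url }
      return "shared";
    } catch (err) {
      return "dismissed"; // the user closed the share sheet
    }
  }
  fallback(data);
  return "fallback";
}
```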

Reading and writing files – Native File System API. It does what you want – gives full file access to the browser.
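The read side is a short promise chain. A minimal sketch (the picker function is injected so the flow runs outside a browser; in supporting browsers it is `window.showOpenFilePicker` — the method name changed during the trial, which is itself a good argument for coding defensively):

```javascript
// Native File System sketch: let the user pick a file, then read its text.
// `picker` is injected — in supporting browsers, window.showOpenFilePicker.
async function readPickedFile(picker) {
  const [handle] = await picker({ multiple: false }); // resolves to file handles
  const file = await handle.getFile();                // a regular File object
  return file.text();
}
```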

Media and playback – Media Session API, Keyboard Media Buttons, Media Stream Image Capture, depth extensions (virtual chat backgrounds, anyone?)... all designed to unlock advanced interactions and editing that was previously locked up in native APIs.

Barcode scanner, face detection – very common applications in business and it would be nice to have the ability to work with them in the browser. Shape Detection API.

System integration – Badging API (alerts, messages, etc); Wake Lock API – keeping a device awake and alive. Imagine you want to have a recipe application that should stay on while you are cooking and have dirty fingers.
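The recipe scenario maps to a couple of calls. A sketch (`navigator.wakeLock` is passed in, and absence is treated as a graceful no-op rather than an error):

```javascript
// Screen Wake Lock sketch: keep the recipe visible while cooking,
// releasing the lock when done.
async function keepRecipeVisible(wakeLock) {
  if (!wakeLock) return null; // API unavailable: the page still works, screen may sleep
  const sentinel = await wakeLock.request("screen");
  return {
    type: sentinel.type,
    done: () => sentinel.release(), // call when hands are clean again
  };
}
```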

Accessing the user’s contacts is useful, but raises very high concerns around consent and privacy. So the Contact Picker API is designed to be very specific about what it asks for, to enable clear interactions.
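That specificity shows in the API shape: you list exactly the properties you want and nothing more. A sketch (`navigator.contacts` is injected so the logic is testable):

```javascript
// Contact Picker sketch: request only names and emails, nothing more.
async function pickEmailAddresses(contacts) {
  if (!contacts) return []; // unsupported: caller falls back to manual entry
  const picked = await contacts.select(["name", "email"], { multiple: true });
  // Each property comes back as an array (a contact can have several emails).
  return picked.flatMap((contact) => contact.email || []);
}
```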

Dual and foldable screen support – these devices have unusual characteristics requiring things like the ‘spanning’ CSS media query.

Title bar customisation is tricky – you want to enable some freedom, without making it easy to set up phishing attacks by perfectly emulating other software.

This has been a whirlwind tour of features and yet there are still heaps more coming! If you have any specific things you want for the platform, submit it!


The Web in the age of Surveillance Capitalism

Marcos Caceres, Standards Engineer, Mozilla


So what is “surveillance capitalism”? As it became cheap to process and store data, personal data became commodified. The term was coined by Shoshana Zuboff in the title of her book The Age of Surveillance Capitalism, to describe this new method of profit-making. She also observed that personal information had become the most valuable resource.

This data is captured through tracking – the collection and retention of data about a user as they use websites, which may be shared without their consent. Sometimes there are good reasons for tracking – measuring sales conversion for your business is not unreasonable, and tracking can help you improve the user experience of your product or website.

So why are companies tracking users? The data is useful for marketing, but also less comfortable things like measuring social or political views. It gets even less comfortable when you see entire user sessions being recreated.

If you can link this data to one person across multiple sites and interactions, that data gets deeper and more valuable – particularly to people who want to sell you something; or sell the information itself.

It’s particularly problematic that there are hundreds of trackers trying to do this. Who are we talking about? The biggest are companies you would recognise like Google and Facebook.

In addition to collecting data, some tracking methods include surprisingly large forced downloads – imagine the impact of pushing 1.5megs to someone in a developing nation on an expensive data connection.

Tracking techniques use lots of methods including cookies, URLs, ‘supercookies’, fingerprinting (to identify the user by their very specific device profile) and dual purpose apps.

Cookies are simple – key/value pairs. They do lots of useful things like maintaining state, keeping your session active, remembering your login on frequently-used systems. So there’s plenty to like about them, they’re not all bad.
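A cookie really is just that simple. As a small illustration, the single "name=value; name2=value2" string that `document.cookie` exposes can be turned into an object in a few lines:

```javascript
// Cookies are just key/value pairs: a tiny parser for the
// "name=value; name2=value2" string that document.cookie exposes.
function parseCookies(cookieString) {
  return Object.fromEntries(
    cookieString
      .split("; ")
      .filter(Boolean)
      .map((pair) => {
        const eq = pair.indexOf("=");
        return [pair.slice(0, eq), decodeURIComponent(pair.slice(eq + 1))];
      })
  );
}
```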

Where things get muddy is when we do something like read a news website: via the embedded ads, loaded in iframes (different origins), third parties gain the ability to serve things to the same person across multiple sites. This is where you get the sensation that an ad is ‘following you’ or ‘knows things about you’.

Supercookies come from the law of unintended consequences. They can’t be cleared and they persist in private browsing. They exploit browser features that make them resistant to erasure – it’s essentially a hack. The user does not control them and that makes them much more scary. They are really only used for dodgy purposes.

A supercookie example: using an HSTS attack to build up a binary data store, exploiting a trick on each site to set a bit on or off.

Fingerprinting example: this exploits the capabilities of the browser to create a unique, identifying profile of that browser.

It’s hard to defeat fingerprinting as the sheer number of data points is so high. The settings it has, the extensions installed, system fonts, viewport size, language, Do Not Track header enabled, device hardware, etc.

You can test your vulnerability using EFF’s Panopticlick.

This obviously has big ramifications, with incidents like the Facebook/Cambridge Analytica scandal – where personal data was used in political campaigns. The data was harvested when people did Facebook quizzes – and it was gathered even from people who had not taken those quizzes or given consent to share their data.

(Video of the BBC’s Katie Hile, talking about the personal information she found in the Cambridge Analytica data; and what it identified about her and her life. Ultimately she decided she had to take her information down, to essentially censor her online life.)

So how do we fix this? It’s hard and requires standards, education, governments, law enforcement and industry/NGO engagement. This is a lot of moving parts; and web developers play a role as well.

So what’s the role of educators? Mostly to teach people more about what’s going on; so people can make better choices.

(Video about Mozilla’s Lightbeam, designed to demonstrate the data being gathered; so people can understand what’s going on.)

An important aspect of industry activity is browser manufacturers engaging with this issue… with one key exception…

...Google has a vested interest in tracking, so Chrome does not include blocking options. So even though we love Chrome, if privacy is important it isn’t the best option.

(Demo of what Firefox blocks out of the box just visiting one website. Edge has similar controls built in.)

Where this gets tricky again is dual-purpose applications. Strict blocking will break sites like YouTube, which both provide useful content AND track the user. So there is always a balance to be found between privacy and functionality.

Firefox uses a third-party list as the basis for its tracker data; Safari uses an algorithmic approach; Edge will be doing their own version as well.

So what happens to you (or your code) if you are identified as a tracker? Browsers will partition your code, block your cookies (but tell you they were set), block storage, and block sensitive APIs. The browser tries to close the gaps trackers are sneaking in through, while smokescreening that fact. This can obviously lead to breakage if not applied perfectly.

Other key industry initiatives include Let’s Encrypt, safe browsing list, secure DNS, and so on.

So what role do standards play? Sometimes standards have unintentional consequences, even when they are good attempts (like the failed Do Not Track)... and even when standards (like the cookie standard) include dire warnings about the risks, they can’t stop bad things happening!

But standards evolve, and over time the list of questions and requirements gets better at heading off negative consequences. People who make standards are highly aware of all of this. The Payment Request API has lots of examples of privacy mechanisms built in (e.g. truncating postcodes).

Of course many of these things will require user-disruptive dialogs in order to give control to the user. Again this walks a difficult line between privacy and usability.

So what about developers? We may not have even realised we were complicit in tracking – just by using Google Fonts or a CDN, we will have contributed to user tracking. They’re hard to remove, but you can start with other choices like deciding if you really need to include social media widgets, Google Analytics and so on.

Ask yourself if you can work around including third party code – whether you need it at all, or you can write your own solutions instead. There’s a lot to think about!


Better Payments for the Web with the Payments API

Eiji Kitamura, Developer Advocate, Google


Eiji is going to talk about progress on the web payments API.

The most common payment method in most countries is to use a credit card. But there are pitfalls like people entering the wrong code, or they just give up on the purchase – which is not what an online retailer wants!

In-app payments have much lower friction, but require an existing relationship and prior setup. They make individual purchases very easy for the user.

So why not do something similar for the open web? That’s what the payment API is trying to do.

It’s also looking at the challenges of closed ecosystems, where developers have to deal with each separate API for multiple payment providers.

Comparison demo – native payment flow in an app, where the user never leaves their app context; vs existing web experience where users are sent out to third party sites, then back to the actual merchant.

Using web payments the UX is much closer to the mobile-native purchase – the user stays in the same context and the payment is modal. It also works across platforms, so the same system can be used across desktop and mobile.

Key roles in this:

  • Merchants – who sell things
  • Payment handlers – who handle the transaction

Code demo showing how the Payment Request API captures purchase requests, launches a payment app and awaits a payment credential for a successful payment. This allows the merchant site to complete the purchase.
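The merchant side of that demo looks roughly like this sketch (the method identifier and amounts are illustrative, not from Eiji's actual demo; `PaymentRequestImpl` is injected so the flow can be exercised with a mock — in a browser it would be `window.PaymentRequest`):

```javascript
// Payment Request sketch: build a request, show the modal payment sheet,
// and resolve with the payment credential for server-side processing.
async function charge(PaymentRequestImpl, value) {
  const methods = [{ supportedMethods: "https://pay.example.com" }]; // hypothetical handler
  const details = {
    total: { label: "Total", amount: { currency: "AUD", value } },
  };
  const request = new PaymentRequestImpl(methods, details);
  const response = await request.show();  // user stays in the merchant context
  await response.complete("success");     // dismiss the sheet
  return response.details;                // credential for the back end
}
```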

On the Payment Handler API side, listeners are set up to receive payment request events; then launch the payment interface.

There has been some confusion between existing JavaScript payment APIs and the new web APIs – they are low level standards, not specific implementations. There is also a polyfill for browsers that do not support it – although the evergreen browsers mostly have support.

Secure Payment Confirmation – while there are usually two players, the merchant and the payment handler, there may also be a bank or issuer involved as well. That adds a requirement for stronger verification that the customer is the legitimate card holder.

While more secure, it adds a lot of friction – and this can lead to abandoned purchases. So WebAuthn aims to reduce this friction, using FIDO standard biometric sensors (like a fingerprint reader) on the user’s device to reduce the need for slow verification.

If you are interested in this feature, please join the discussion at

Digital Goods API – this brings the idea of marketplace purchases to the web. It pairs well with Trusted Web Activities (TWAs), which expose web content in Android apps. While a specific marketplace will reduce user friction, you will still have to abide by the vendor’s rules.

If you are interested in more, go to


Say Goodbye to Passwords and Hello to WebAuthn

Ben Dechrai, Developer Advocate, Auth0


Ben’s been a software engineer for 20 years; and in that time he’s gone from trying to control everything, to understanding that it’s often better to outsource some parts so you can focus on the areas that provide differentiation and value. User credentials and authentication are a good candidate to outsource, as they are tricky and risky.

There are three main things we want credentials to have:

1) Easy to remember and hard to guess – too hard to remember and we won’t use it; too easy to guess and it’s not safe
2) Easy to change – so if there’s a breach, people can change their password quickly
3) Hard to intercept – resistant to attack

So how do different auth types fare against these three criteria?

Passwords – (1) low (2) high (3) medium
It’s hard to remember a secure password, but they are easy to change and reasonably hard to intercept. But once breached they are easy to share (haveibeenpwned?).

SMS or email tokens – (1) high (2) reasonably high (3) medium
There are vulnerabilities that make SMSes relatively easy to intercept; email is not particularly secure either.

Biometrics (voice, fingerprint, etc) – (1) high (2) very low (3) medium-high.
There are lots of quite interesting proof-of-concept attacks around biometrics, so they are not unbreakable.

Combined, these things make up multifactor authentication (MFA). It’s useful to think of things you know (password), things you have (a device receiving a token) and things you are (biometrics).

Other than actual passwords, most of these are ‘passwordless’ – a push notification or your voice can be used without entering a password.

Something we can now use with WebAuthn is a FIDO security key. So let’s see how they fit into the scale:

FIDO Security Key: (1) high (2) medium-high (3) high

They’re easy to use, mathematically improbable to guess, registering a new one is reasonably easy, and they are very hard to intercept.

These can also help protect against phishing attacks, but first let’s remind ourselves how those attacks play out. Phishing attacks rely on fooling you into entering real details into a fake UI; and they will often present a fake success screen to help cover up what’s happening.

When these fake interfaces are done well enough, you can also be tricked into entering multifactor authentication.

So what can WebAuthn do to help? Phishing fundamentally relies on fake login screens to capture details for use on the real site, so the WebAuthn registration and login flows block that vector by creating a detectable mismatch.

Instead of creating a password directly in the web UI, it uses challenges and APIs to create keys that are bound to the specific domain – so credentials created on a fake site are no good for logging into the real one. Instead of relying on human brains to notice a tricky URL, computers simply detect that the domains don’t match.
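The registration step sketches out like this (hedged: the relying party, user values and algorithm choice are illustrative; `credentials` stands in for `navigator.credentials` so the shape can be shown outside a browser):

```javascript
// WebAuthn registration sketch: the relying party id is baked into the
// credential, so keys minted on a look-alike domain can never satisfy a
// challenge from the real one.
async function registerSecurityKey(credentials, challenge, userId, userName) {
  return credentials.create({
    publicKey: {
      challenge,                                   // random bytes from the server
      rp: { id: "example.com", name: "Example" },  // the credential is scoped here
      user: { id: userId, name: userName, displayName: userName },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    },
  });
}
```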

(Demo of WebAuthn – walking through the process described earlier, using the laptop’s fingerprint reader for verification.)

WebAuthn can be passwordless, but it can also be _username_less. The authenticator can remember keys for you, so all you do is specify which key you want to provide.

(Demo of this process, with good ol’ Alice and Bob.)

As with all new tech, it will take a little while to get used to the new authentication patterns available with WebAuthn. Hopefully this demo has given you the interest to give it a try!


The Origin Trial

Phil Nash, Developer Evangelist, Twilio

When Phil first wrote the topic, “The Origin Trials” sounded like a dystopian young adult thriller… but since we have quite enough dystopia right now you’ll be glad to know this is a good thing!

You heard earlier about Project Fugu; and that’s what led Phil to the origin trials as well. What Phil has found is that people simply hadn’t heard of origin trials. So it’s important for us to know what they are and how we can use them to be part of the standards process.

Origin trials are designed to help design new APIs that developers actually want to use; and to make sure they are secure and private for users. Origin trials let us test experimental web features with real users.

To understand, let’s go back to 2016 when Firefox released version 49. While not otherwise a huge release, it implemented a large number of -webkit prefixed features… some implemented by other browsers, some implemented in Firefox… all under the webkit prefix. A bit bizarre!

But this was, after all, the time of vendor prefixes, when new features were tested by shipping prefixed versions. It was popular because it gave early access, but it made really messy, bloated code… or people would only use a prefixed version, or they’d leave out some browsers’ prefixes.

The idea was that devs would try things out behind ugly prefixes but then remove them… but we didn’t. We shipped them. The joy of new things outweighed the caveats.

This ultimately left the web broken in some browsers, which is how you end up with Firefox shipping webkit prefixes. Other browsers like Opera did this too. We ended up with sites dedicated to tracking which prefixes were still necessary.

Developers got better at this over time, using tools like autoprefixer to ship the right code; or we used polyfills to backport functionality to old browsers.

But what about things that can’t be polyfilled? Native features like file access don’t exist in JavaScript, so you can’t polyfill them. We got browser feature flags, so you can turn on new features in the browser to try them out. But this is just for devs to try things on their own machines; users aren’t going to do this.

But what about users? We don’t want to ship potentially dangerous features to users without testing them somehow. This is where origin trials come in – they let us test experimental features with our users.

They currently work in Chrome and Edge.

So you go to the websites…

...and you register which feature you want to try out. Then you get a key that you publish in a meta tag or HTTP header on your website, and your URL is now allowed to use the new feature.

Origin trials have some rules. There is a fixed time limit; the feature can be pulled if usage gets too high or there is a security breach; features can change; and features may not ship as a standard.

So if you are a developer working with these new features, you must code very defensively. Detect that the feature is available; check all parts of the feature are still available; use it as a progressive enhancement; and most of all, give feedback. The point is to get feedback on the developer and user experiences.
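That defensive style looks something like this sketch (`getBattery` is only a stand-in for whichever trial API you're experimenting with — the point is the shape, not the feature):

```javascript
// Defensive pattern for an origin-trial feature: check every part before
// use and treat it purely as a progressive enhancement.
async function enhanceIfAvailable(nav) {
  if (!nav || typeof nav.getBattery !== "function") {
    return "baseline"; // feature absent or withdrawn: core experience still works
  }
  const battery = await nav.getBattery();
  return battery.charging ? "enhanced" : "enhanced-low-power";
}
```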

It’s really important to highlight that features can change. This can and does happen. The Wake Lock API changed significantly during its trial. The way you released the lock was originally pretty weird, and devs didn’t like it. So it changed, and non-defensive implementations broke.

Origin trials are not standards. WebKit has announced they won’t implement a huge range of features that can be used to fingerprint users’ browsers. So no matter how exciting something may be during origin trial, it doesn’t mean it’s definitely going to become a standard. But if they’re really useful, it gives an effective pathway to petition browsers to implement them.

The web has a lot to offer and it’s getting better. But we need to get it right. We need to avoid the mistakes of the past; and to make sure we get things right for users. But we should experiment with new features! Try them out, see what’s going to be useful, send feedback and help get them released to production.

It’s exciting, this is a conversation between developers and browsers – make use of it!


The State of Web Components

Ana Cidre, Developer Advocate, Auth0

Ana is talking about the past, present and future of web components.

So what are web components? They are platform-agnostic components – they run equally well on all platforms.

Their main goal is to encapsulate the code so it’s reusable and things like styles don’t ‘leak’.

The term “web components” is often used interchangeably with “custom elements”, but that’s not quite accurate. Web components is an umbrella term for four technologies.

The first is the HTML template element – <template></template> – which has been around for a long time. This is where you put the template of your web component. It is only parsed once, no matter how many times you clone it.

(demo with an image component)

The second tech is Custom Elements, letting us create new HTML tags or extend existing tags. You must always include a dash (-) in the name to indicate it’s a component; and for the sake of your future self, name it in an understandable way.

You declare a class to extend HTMLElement, then link that class to your component with customElements.define. This gives access to some special methods known as custom element reactions.

Remember that if you build a completely custom component for something like a button, it will not behave like a normal button unless the author recreates all the usual functionality of a button.

So now you have a custom element you can style it as you normally would. The example uses CSS custom properties, which is just normal CSS.

This is a really nice alternative to the all-too-common Div Soup.

The third tech is Shadow DOM, which provides DOM encapsulation – this isolates your component so you have total control over its style, without clashes with other code. You initialise this using attachShadow, choose the mode, and attach it to the DOM.

The fourth tech is ES Modules, which enable web components to be developed in a modular way. Most JS development uses ES Modules these days and web components are no exception. You declare an interface for your component in a .js file, which you can then load with a script type="module" using either src or import syntax.

So how do these all work together? Let’s create one! This example is creating a login button.

First we need to create our template. This provides HTML, and CSS in a style element.

Then we create our custom element class, add a constructor and super, attach the Template and Shadow DOM.

Then we have our observedAttributes getter, which lists the component’s attributes. You can then react to changes to these attributes with attributeChangedCallback – one of the custom element reactions. Another is connectedCallback, which is called when the component is added to the DOM. You can then set default values and prepare your component for real world use.
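The walkthrough above condenses into something like this sketch (illustrative only, not Ana's actual Stackblitz code; the registration is guarded so the snippet is inert outside a browser):

```javascript
// All the pieces in one place: a template string, a custom element with
// Shadow DOM, and custom element reactions. Registration only runs where
// the browser globals exist.
const LOGIN_TEMPLATE = `
  <style>button { background: var(--login-bg, #1a73e8); color: #fff; }</style>
  <button type="button"><slot>Log in</slot></button>
`;

if (typeof HTMLElement !== "undefined" && typeof customElements !== "undefined") {
  class LoginButton extends HTMLElement {
    static get observedAttributes() { return ["disabled"]; }

    constructor() {
      super();
      // Shadow DOM: the component's markup and styles cannot leak out.
      this.attachShadow({ mode: "open" }).innerHTML = LOGIN_TEMPLATE;
    }

    connectedCallback() {
      // Called when the element lands in the DOM: sync the initial state.
      this.attributeChangedCallback();
    }

    attributeChangedCallback() {
      // Reflect the host's disabled attribute onto the inner button.
      this.shadowRoot.querySelector("button").disabled = this.hasAttribute("disabled");
    }
  }
  customElements.define("login-button", LoginButton); // names must contain a dash
}
```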

This is all native JavaScript, no libraries involved.

You can get the code from Stackblitz (click the logo in the slide).

So web components are completely awesome, right?!

Note you can reach out to Ana about this tutorial, or to ask about Auth0 where she’s a developer advocate.

Ana is also the founder and organiser of NGSpain, Gals Tech (for women getting started in tech in the Galicia region in Spain), and she’s a Women Techmakers Ambassador.

When Ana talks about web components on a more personal level, not as a developer advocate, she tends to get a common reaction that people just don’t want to know.

Mike Hartington frames it as a classic XYZ joke – “I have a web components joke, but you’ll probably hate it for no reason”.

So why is this? Let’s go back to when web components started – browser support was terrible and the polyfills were huge (by the standards of the time).

So a lot of people tried it, didn’t like it, don’t wanna try it again.

But many years have passed and browsers have adopted web components – not just one or two, but all the current browsers.

But why swap your framework of choice for web components?

Interoperability: you can share components across teams and the different frameworks they’ve chosen – say Angular, Vue and React. Instead of making a new implementation for each team, you can do it once in web components and share it across all of them. The Custom Elements Everywhere project has lots of information on using custom elements in a range of frameworks.

You also get big performance gains. Because web components are native to the browser, you don’t have to load heavy frameworks – yes, it’s even lighter than Svelte.

You get encapsulation – there is no leakage in or out of your component. This is a hot topic for styling.

You can use CSS custom properties to provide a styling interface for your component and give controlled flexibility. It’s ok, but it’s fairly verbose.

There is another way: CSS Shadow Parts (::part()). This lets you allow styling of specific parts of your component, while protecting others. Then people can write fairly normal-looking CSS to provide their custom extensions.
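For instance, if the login component’s author exposed part="label" on an internal element (a hypothetical name for this sketch), consumers could write fairly ordinary CSS against it:

```css
/* Style only the piece the component author chose to expose */
login-button::part(label) {
  font-weight: bold;
  text-transform: uppercase;
}
```

Anything inside the shadow root without an exposed part stays protected.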

Slots act as a placeholder to add content in defined locations in your components. For example a description for your login component. You name the slot and authors can pass content to that named slot.

Browser compatibility – it’s really important to look at all the green ticks. Two years ago this was not all green. Everything except IE11 works, but the good news is that polyfills are good now too. So there really is no need to worry.

While Ana has created web components in absolutely pure native code, there are frameworks and libraries to help you do it.

LitElement, Hybrids, Stencil, Polymer, Skate.js, Slim.js, Angular Elements and Vue.js just to name a few.

Lit-HTML and LitElement are the future of Polymer. The example shown earlier uses far fewer lines of code when using them. It also adds a lot of popular features you’d be used to using in common UI frameworks (but without a build step).

There are also some pre-made element libraries, like Wired Elements and Ionic.

It’s worth noting lots of big companies are adopting web components; and they’re not just using them for pure UI elements. They are using them for other features as well.

Examples: 3D model viewer, medical imaging, CSS Doodle, colour picker and so on.

So we’ve seen the past and present, what is the future for web components?

The first is the Custom State Pseudo Class :state() – this lets components expose their state in a similar way to built-in elements.

Declarative Shadow DOM will allow creation of shadow DOM just with HTML – no JavaScript.

Lastly the Form Participation API will allow devs to provide custom form elements without shipping heavy scripts that just reproduce what browsers already do.

It’s a great time to check out web components. Because they are native to the browser you no longer have to worry about big frameworks… they are just brilliant!


The State of JavaScript

Houssein Djirdeh, Developer Advocate Google

JavaScript constantly evolves and the full JavaScript ecosystem would probably take a book to cover. So Houssein is going to focus on JS on the web – although it will still only be scratching the surface.

So what’s the data source for this talk? HTTP Archive tracks over 5m URLs on a monthly basis. Note that only the home page is tracked, as the dataset is already massive.

So how can you read the data?

1. Google’s BigQuery data warehouse tool – allowing any query without having to download and manage the (extremely large) raw data yourself
2. Preset monthly reports covering high level trends
3. Almanac – yearly analysis starting from 2019, with community input into the analysis

Houssein created the first JavaScript chapter of the Almanac, with help from a lot of contributors. This talk traces the same topics with current/re-queried data from July 2020.

Almanac data comes from several tools:

1. WebPageTest – a performance testing tool
2. Lighthouse – a performance and profiling tool
3. Wappalyzer – detects the technologies used on a page

Some areas are also informed by the Chrome UX Report.

This talk focuses on the data from WebPageTest.

So let’s dive into some JavaScript stats. JS is the most expensive resource we send to browsers – it has to be downloaded, parsed, compiled and executed. While browsers have decreased the time it takes to execute, download remains expensive.

So how much do we actually use? The biggest sites (90th percentile) send over a megabyte even when compressed for final transfer; and at the 50th percentile it’s still over 450KB.

So it feels like we might send too much JS… but what is “too much”? That depends on the capabilities of the browsing device. We are sending much the same amount of JS to mobile as we do to desktop.

The impact of that is seen in V8 main thread processing times, where mobiles are significantly longer; and the gap gets bigger the more JS is sent.

Other interesting metrics:

  • number of requests shipped – with HTTP/2, multiplexed requests can enable faster transfer by sending small chunks
  • first vs third party requests – most sites are making more requests to third parties than they are making requests to deliver their own code! This trend is backed up by data showing we also send more bytes of third party code.

Compression is a key method of improving download time. Roughly 65% of sites use Gzip, 19.5% use Brotli, but over 15% don’t use compression at all.

We can also look at feature usage.

  • Less than 1% of sites use script type="module"
  • 14% of sites ship sourcemaps in production (for at least one script on the page)
  • Market share of libraries and frameworks – React, Angular and Vue make up just 6.53% of sites in the archive, while a whopping 83.16% use jQuery.

If you want to dive specifically into the performance of web frameworks, check out Houssein’s project, Perf Track

The first edition of the almanac was a huge commitment, with 93 contributors. They are always keen to hear from people who are willing to help. Get in touch if that might be you!

All this just scratches the surface. HTTP Archive and the Web Almanac are gold mines of information and Houssein hopes more people will take advantage of them.


ECMeowScript–what’s new in JavaScript explained by cats

Tomomi Imura, Software Engineer

Appropriately enough Tomomi begins by introducing us to her feline assistant, Jamie. Who looks like Maru’s cousin!

Tomomi is going to talk about things you may or may not know about JavaScript, illustrated with examples about cats. You may have seen earlier projects about HTTPcat, Raspberry Pi Kittycam or her cartoons – all cat themed.

JavaScript is technically EcmaScript with cats – aka. EcmeowScript! Yes she knows it’s silly ;)

Let’s begin with ES6/ES2015. This was the sixth edition of EcmaScript, with a lot of major changes and additions including arrow functions, let and const, promises, template literals and much more.

While not part of ES6, Intl.DateTimeFormat() was introduced around the same time and is worth mentioning as well.

Reiwa – JavaScript International Date Format and Japan’s New Imperial Era

So let’s get into the JavaScript… I mean, cats!

const cats = [
  {name: 'Jamie', type: 'tabby', img: '00_jamie.jpg'},
  {name: 'Leia', type: 'tuxedo', img: '01_leia.jpg'},
  {name: 'Alice', type: 'tortoiseshell', img: '02_alice.jpg'}
];

First example, creating an array of cat names from an array of cat objects. In ES5 you’d have done something like loop through the array and push values; but ES6 offers a couple of new ways to do this.

.map() lets you create a new array by mapping values from the old array. It runs a function over all the values and builds an array from the function returns. That’s pretty neat, and arrow functions let you write it with extremely concise syntax – omitting the parens when there is only one argument, and using implicit return for one-liners:

const catNames = cats.map(cat => cat.name);

You could also achieve this result with Object Destructuring, which makes it easy to create variables with values from an object:

const catNames = cats.map(({name}) => name);

This is a particularly useful technique when an API sends back a chunk of JSON and you only want part of it.
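As a quick sketch of that use case (the response object here is made up), destructuring pulls out just the fields you care about:

```javascript
// Hypothetical API response – we only want a couple of its fields
const response = {name: 'Jamie', type: 'tabby', img: '00_jamie.jpg', weight: 4.2};

const {name, type} = response; // pick out just name and type
console.log(name, type); // Jamie tabby
```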

Next up we have the spread operator (“three-dots” ...) which allows an iterable (array or string) to be expanded. Great for easily including an array in a new array.

const catNames = ['Jamie', 'Leia', 'Alice'];
const newCatNames = [...catNames, 'Yugi','Snori'];

The new array contains all five cat names.

Spread syntax can be used for a variety of useful (and concise) tricks:

Blog: Spread Syntax 'Three-dots’ Tricks You Can Use Now

Next we have the Map object – not the same as .map() from earlier, this is a hashmap. It has useful methods like get, set and has which provide intuitive ways to work with your data.
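A minimal sketch of those methods in action (the cat data is made up):

```javascript
const catAges = new Map();
catAges.set('Jamie', 3).set('Leia', 5); // set() returns the map, so calls chain

console.log(catAges.get('Jamie')); // 3
console.log(catAges.has('Alice')); // false
console.log(catAges.size);         // 2
```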

ES6 had lots of great stuff, but now let’s look at ES7. It was a relatively small update but brought in the Exponentiation Operator (**) and Array.prototype.includes()

a ** b is the same as Math.pow(a, b) (Math paws!)

Array.prototype.includes() is a neat boolean alternative to the clunky ES5 indexOf check that we all know and…use.

const coatColors = ['solid', 'bicolor', 'tabby', 'calico', 'tortoiseshell'];
coatColors.includes('tabby') // true
coatColors.includes('neon') // false
// VS
coatColors.indexOf('tabby') > -1 // true
coatColors.indexOf('neon') > -1 // false

Moving on to ES8 – this was a bigger update. Async/await, padStart()/padEnd(), trailing commas, Object.entries(), Object.values(), enumerable property values, shared memory and atomics.

Async/Await is basically syntactic sugar on top of promises, pausing execution of the async function to wait for the passed Promise’s resolution.

A silly example is pseudocode for a robot litterbox, that has to wait for the cat to leave to start a cleaning cycle.

const runLitterRobot = async () => {
  await cat.poop();
  clean(); // start the cleaning cycle once the cat is done
};

padStart and padEnd allow you to pad the start or end of a string, to a specified length.

const boxStr = '📦';
boxStr.padStart(4, '🐱');
boxStr.padEnd(4, '🐱');

While cute, a more practical example is zero-padding:

const str = '5';
str.padStart(4, '0'); // '0005'

These work correctly in right-to-left languages, which is also why they are not called padLeft and padRight.

Trailing commas aren’t really something you need to memorise, but trailing commas no longer cause an error in several cases where that was mostly just an annoyance.

const makeFood = (param1, param2,) => {..};
makeFood('tuna', 'salmon',);

On to ES9/ES2018. Spread & Rest properties, RegEx improvements, Asynchronous iteration and Promise finally()… but not enough cat jokes here so let’s move on!

ES10/ES2019 String.prototype.trimStart() and trimEnd(), Array.prototype.flat() and flatMap(), Object.prototype.fromEntries(), Function.prototype.toString(), well-formed JSON.stringify(), better array sort().

String.prototype.trimStart() and trimEnd() remove whitespace from the start and end of strings.

const greetStr = ' meow ';
greetStr.trimStart(); // 'meow '
greetStr.trimEnd(); // ' meow'

Array.prototype.flat() flattens arrays one level deep by default, but you can specify the depth to flatten more levels.

const colors = [['black', 'gray', ['orange', 'light orange']], 'bicolor', 'calico'];
const colors1 = colors.flat();  // ['black', 'gray', ['orange', 'light orange'], 'bicolor', 'calico']
const colors2 = colors.flat(2); // ['black', 'gray', 'orange', 'light orange', 'bicolor', 'calico']

flatMap is similar, but it maps each element before creating the new array.
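A small example of the difference (the strings are made up) – map() leaves the nested arrays in place, while flatMap() flattens them one level:

```javascript
const greetings = ['meow meow', 'purr'];

// map() gives [['meow', 'meow'], ['purr']]; flatMap() flattens one level
console.log(greetings.map(s => s.split(' ')));
console.log(greetings.flatMap(s => s.split(' '))); // ['meow', 'meow', 'purr']
```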

ES11/ES2020 – the latest version includes BigInt, globalThis, Dynamic import, Nullish coalescing operator (??), Optional chaining (?.), Promise.allSettled(), String.prototype.matchAll(), Module export. These are still pretty new so Tomomi can’t cover the entire suite just yet.

globalThis is a standard property for accessing globals across the different JavaScript environments, particularly browsers vs nodejs (which just has global).
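A tiny illustration (the property name is invented) – the same line works whether the global object is window, global or self:

```javascript
// globalThis resolves to window in browsers, global in Node, self in workers
globalThis.houseCat = 'Jamie'; // hypothetical property, for illustration only
console.log(globalThis.houseCat); // 'Jamie'
```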

?? (the nullish coalescing operator) is a neat way to provide default values while avoiding the issues || has with falsy values – it only falls back when the value is null or undefined.
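A short sketch of the difference from ||:

```javascript
const treatsEaten = 0;

console.log(treatsEaten || 5); // 5 – || treats 0 as falsy and falls back
console.log(treatsEaten ?? 5); // 0 – ?? only falls back on null/undefined
console.log(undefined ?? 5);   // 5
```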

BigInt is a very big cat, letting JavaScript handle much bigger numbers than Number. You can see the effect by trying to increment beyond MAX_SAFE_INTEGER. The notation for BigInt is a trailing n.

let max = Number.MAX_SAFE_INTEGER; // 9007199254740991
max++ // try this, doesn't work
let largerThanMax = 9007199254740992n;
largerThanMax++ // 9007199254740993n
largerThanMax++ // 9007199254740994n
largerThanMax++ // 9007199254740995n

A real world example of needing these numbers is Twitter hitting JavaScript’s number limit while generating tweet IDs. We tweet a lot…

Running out of time and cat puns… but hopefully we are all feline great about it by now!


Asynchronous and Synchronous JavaScript. There and back again.

Maciej Treder, Senior Software Development Engineer Akamai Technologies

JavaScript is a single-threaded language, meaning it can only do one thing at a time. When we need to do something time-consuming without blocking the main thread (querying APIs, loading content, etc), we use JavaScript’s event loop and callback queue mechanism.

Callbacks are a simple way to call JavaScript functions in order. But if you introduce a setTimeout with 0 delay, you can see the output change order. This demonstrates JavaScript’s execution queue.

function saySomethingNice() {
  console.log('Nice to meet you.');
}

function greet(name) {
  setTimeout(() => console.log(`Hello ${name}!`), 0);
}

Note that setTimeout, setInterval, fetch, DOM and window are external APIs and not part of JS itself. They communicate with JavaScript using the callback queue.

To understand this we can look at the Event Loop – how the JS stack, callback queue and external APIs interact. By calling out to an external API, the callback is taken out of the synchronous stack; it is instead executed from the callback queue once the stack is empty.

Maciej is going to run through making a series of REST API calls in a variety of ways.

The API has three endpoints:
/directors – a list of directors and director ID
/directors/{id}/movies – movies by director ID, with movie ID
/movies/{id}/reviews – movie reviews by movie ID


What we are going to do with the API is find out which of Quentin Tarantino’s movies has the best user reviews.

The first approach is to use the request API with nested callbacks to dig down through the data, aggregate the scores and identify the best-rated movie.

The problem is the “pyramid of doom” – lots of nested levels of code. For three REST calls the code ends up with eight nested levels.

The next approach is to break up the code into multiple functions; and at the end of each one call the next function (the callback pattern).

The problem is that the logical order of the code has now been reversed – you have to read bottom to top – so it’s hard to follow.

The next approach is to use Promises. This allows you to make requests from different places in the code and respond to them when they resolve.

The pitfall is that promises can only be resolved once. You also need to handle the rejection case; noting you cannot use try/catch as it will lead to an UnhandledPromiseRejectionWarning.

Next we look at promise chaining, which allows multiple actions without creating nesting issues. You can also use the catch method; and an action when all promises are resolved:

promise1
  .then(val => square(val))
  .then(value => console.log(value))
  .catch(error => console.log(error))
  .finally(() => console.log('Done'))

Another way of working with multiple promises is to combine them, using Promise.all.

It’s compact but the problem is if any of the promises are rejected, then Promise.all will reject. You can bypass this by chaining promises, with catches providing default values.
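A minimal sketch of that workaround – withDefault is a made-up helper that swaps any rejection for a fallback value, so one failure can’t sink the whole Promise.all:

```javascript
// Attach a catch to each promise before combining them
const withDefault = (promise, fallback) => promise.catch(() => fallback);

Promise.all([
  withDefault(Promise.resolve('8/10'), 'no review'),
  withDefault(Promise.reject(new Error('API down')), 'no review'),
]).then(reviews => console.log(reviews)); // ['8/10', 'no review']
```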

You can also use the allSettled method on the promise object (available in most browsers, but needs a shim in node). This returns an array with granular information about the status and value of each promise.
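A quick sketch of the shape of that result array:

```javascript
Promise.allSettled([
  Promise.resolve(42),
  Promise.reject(new Error('nope')),
]).then(results => {
  console.log(results[0]);        // { status: 'fulfilled', value: 42 }
  console.log(results[1].status); // 'rejected' – its reason property holds the error
});
```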

Another way to combine promises is with the race method, which gives you the fastest promise. This is useful if you are querying multiple sources but only need one of them, ie. the fastest will do.

const promise1 = Promise.resolve(3);
const square = (val) => Promise.resolve(val * val);
const promise2 = Promise.reject('Rejecting that!');
Promise.race([promise1, square(2), promise2])
.then(val => console.log(val))
.catch(error => console.log(`Failed because of: ${error}`));

This has the same pitfall as Promise.all, where one failure makes the entire set fail. Again you can chain promises with a catch statement to work around it.

But we really want the first successful promise, not just the first one to return. You can use a watchDog function to add a timeout.
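A sketch of the idea – slowRequest here is a stand-in for a real network call, and the watchdog turns “first to settle” into “answer within the time limit or fail”:

```javascript
// Rejects after ms milliseconds – race it against the real work
const watchdog = ms =>
  new Promise((_, reject) => setTimeout(() => reject(new Error('timed out')), ms));

// Stand-in for a real request that takes ~50ms
const slowRequest = new Promise(resolve => setTimeout(() => resolve('data'), 50));

Promise.race([slowRequest, watchdog(500)])
  .then(data => console.log(data))         // 'data' – the request beat the timeout
  .catch(err => console.log(err.message)); // 'timed out' – if it hadn't
```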

So let’s use that knowledge to answer the case study question: which of Quentin’s movies has the best reviews? This is using node-fetch as we’re working in a nodejs environment.

Let’s look at the next asynchronous technique. So far we’ve covered single-action promises and multi-action callbacks; let’s take a look at something that combines them – RxJS. The basic interface is based on observables.

So how can we use RxJS to answer the case study question?

One of the big pros for using observables is that not only are they repeatable, you have plenty of operators.

So Maciej said he’d show both the synchronous and asynchronous code – the talk is called There and Back Again after all. So let’s forget about callback, subscribe and so on…

How can we do these things in a more synchronous fashion? We can use async and await to make our asynchronous code read synchronously again.

Async functions are equivalent to functions that return a promise with a resolved value. The async IIFE wrapper adds a nesting layer, while the equivalent .then() version uses callbacks to work with the promises:

(async () => {
  let multiplier = await getMultiplier;
  let result = await multiply(10, multiplier);
  console.log(`Multiply result: ${result}`);
})();

getMultiplier
  .then(value => multiply(10, value))
  .then(result => console.log(`Multiply result: ${result}`));

Let’s look at another example using RxJS – because you can use RxJS observables together with async/await.

You need to be aware of the performance pitfalls of these techniques. Observables and promises are called at different times, which changes the execution time.

Let’s compare all four techniques:

  • Asynchronous? – callbacks: asynchronous; promises: asynchronous; RxJS: asynchronous; async/await: synchronous?
  • Repeatable? – callbacks: repeatable; promises: one-shot; RxJS: repeatable
  • Reusable? – callbacks: not reusable; promises: reusable; RxJS: reusable
  • Data manipulation – callbacks: not manipulatable; promises: manipulatable; RxJS: manipulatable
  • Use cases – callbacks: DOM events; promises: REST; RxJS: WebSocket; async/await: dependent operations

So after all that… what is the best movie by Quentin Tarantino, ranked by user reviews? Inglourious Basterds!



JavaScript debugging the hard way

Marcin Szczepanski, Principal Developer Atlassian

While this talk is about debugging, it’s mostly the story of a particularly difficult Webpack upgrade. While that might seem an unusual thing to have trouble with, the sheer size of the JIRA codebase turns it into a challenge.

JIRA’s codebase had 2,018,981 lines of code as at 2020.07.09 (not including external dependencies); and most of this goes through the webpack build.

This produces 200megs of total JavaScript assets, and 300megs of sourcemaps. Thankfully this is served in a very granular way, but the total is pretty big. The build took about 20 minutes.

So they needed to upgrade from Webpack 3 to 4 and scheduled a week to do it, based on previous experience. As you can guess from the fact Marcin is here talking to us… it did not take a week.

Normally you’d start with a minimal local build to see if the config is working, then push up to the CI environment for a full build.

But while it worked locally, it was timing out in CI. The main difference was that CI was not just emitting code, it was generating sourcemaps and running minification as well.

But to complicate matters there was a problem that made local builds run for over two hours, making repeated test runs impractical.

So they turned to profiling to work out what was going on in CI.

The most common types of profiling are memory and CPU; this talk will focus on CPU profiling, using flame charts and tree profiling.

Usually they’d do this locally by running webpack via node and using the node inspector:

node --inspect-brk ./node_modules/.bin/webpack

...but it was running out of memory. Eventually they found the cpuprofile-webpack-plugin which allowed them to generate a CPU profile in CI.

Comparing a typical flame chart for a normal build, they found the sourcemap stage was taking far longer than normal. The tree table view confirmed that the split function was the source of the problem.

They tracked that back to the sourcemap library, finding it was calling a regular expression a lot and triggering a great deal of garbage collection.

Then they discovered they were not the first people to find the problem; it had already been fixed in the sourcemap library – but that fix hadn’t made it into the Webpack codebase due to breaking changes. So why not backport it? Someone tried that too and the fix wasn’t accepted.

So what do you do when one of your upstream dependencies won’t update and you need the new version?

Forking was impractical in this case as you’d have to fork everything that referenced it. So instead they used patch-package to install the fixes during bundling.

So happy days, the build was working again and back to a normal build time. Time to get this in front of internal customers and that worked, but as soon as it went live to a small initial cohort people started reporting that the navigation wasn’t loading.

Browser dev tools to the rescue? They discovered an undefined error related to the broken feature; and after some digging through minified code, tracked the problem back to the webpack runtime.

Debugging this is difficult. A normal breakpoint is impractical as it’s called too many times, so you have to use a conditional breakpoint. They used a type check to only trigger the breakpoint when the undefined condition was met. This revealed which module was failing to be found.

There’s also a log point, which is a conditional console.log, which would also have located this issue.

The problem ultimately was that the component wasn’t being bundled in Webpack 4, when it had been in Webpack 3. They were able to bundle up all the information they had and provide it in a Webpack issue; and ultimately a new version of Webpack fixed the problem.

Key lessons:

  • understand the tools available to you
  • find out if others have hit the same problem before you
  • raise issues (just do your homework first!)


Native JavaScript Modules

Mejin Leechor, Software Engineer VMWare

Mejin started learning JS about three years ago and the import and export statements caught her eye – they made her realise her impression of JS was very outdated. She hadn’t used it since she was nine, making a Geocities site and using JS to add sparkly cursor trails.

The web in 2020 is very different from those days, but despite all the progress support for native modules is surprisingly recent. To clarify “native modules”:

✖ Not CommonJS:
const express = require('express')

✖ Not AMD

define(['jquery'], function ($) {
  // Module body
  return someModuleExport;
});

EcmaScript Modules, aka ES Modules, are the only import and export mechanism to make it into the native language. You may be using them in a library like React:

import x from "module";
import * as namespace from "module";
import {x} from "module";
// ...and more

This specification may be newer than you expect – modules were added to the ECMAScript spec in 2015, but only got support in browsers in 2018 and nodejs in 2019. Getting native modules took years of community effort.

So what’s all the fuss about modules? Why did we need them?

First they provide modularity – structure through boundaries. You can also think of this as building blocks for your code, small pieces that can be composed into bigger things.

They not only expose their own interface, they can express dependencies on other modules. Modules also get private scopes, beyond what you can do with namespaces.

The benefits of modularity include being able to reason about one piece of your code at a time; which encourages looser coupling and more-maintainable code. It also enables code re-use.

Modularity is for humans. We suffer when our code is difficult to manage, and benefit when it’s easier to understand and maintain.

So let’s consider modules vs modularity. Modules are a means to gain modularity, but you can also break that with tightly-coupled modules.

So how did we get here? Let’s take a look at a rough timeline of module history, in four broad eras…

  • pre-modules: mid 90s to early 00s
  • DIY modules: early to late 00s
  • Specification: early to mid 10s
  • Standardisation: mid to late 10s

In the early days there wasn’t a clear need for modular JavaScript – it was intended to be embedded in web pages, web 'applications’ weren’t a thing and JS wasn’t imagined as a language for codebases large enough to need modularity.

That gave way in the 2000s with the rise of AJAX driven web applications (particularly Gmail) alongside websites. Global variables became a huge problem, leading to DIY solutions like the module pattern and IIFE (Immediately Invoked Function Expression). It was an ingenious use of function scope, a trick that reappears many times.
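A sketch of that DIY module pattern – the closure created by the IIFE keeps count private while exposing a small interface (era-appropriate var/function syntax; the counter itself is a made-up example):

```javascript
// DIY module: an IIFE whose closure hides internal state from the global scope
var counterModule = (function () {
  var count = 0; // private – invisible outside the closure
  return {
    increment: function () { return ++count; },
    current: function () { return count; }
  };
})();

counterModule.increment();
counterModule.increment();
console.log(counterModule.current()); // 2
console.log(typeof count); // 'undefined' – the internals never leaked
```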

Other problems turned up in this era; including dependency management, performance issues and lack of reusability. This led to the next phase of module history – when specifications began to emerge.

Part of this was driven by JavaScript’s evolution into a server-side language (from sites, to apps, to server…).

Kevin Dangoor’s 2009 article What Server-Side JavaScript needs captures the zeitgeist of the time, and set out a wish list:

  • module system
  • package management system
  • package repository

Dangoor recognised that these things would be popular; and also that they were community organisation problems more than technical problems. He went on to form a group called CommonJS, but that was not where the specification called CommonJS eventually came from.

This was great for server-side JS, but it wasn’t ideal for the browser; which spawned Asynchronous Module Definition (AMD) which performed better in the browser.

There was also an attempt to unify both server- and client-side with Universal Module Definition (UMD), but the syntax wasn’t great and it never really took off.

This lack of unified module definition remained a problem.

One other major development emerged in the specification era: bundling. This is when we stitch together multiple modules for deployment. eg. Browserify, Webpack, Rollup and Parcel.

So in the mid 10s we finally converged on a single definition: ES Modules. There’s a big range of import/export options here – default exports, named exports, named imports, aliased named imports…

import * as arithmetic from './arithmetic.js'
import sweetExport from './cool-module.js'
import { default as sweetExport } from './cool-module.js'

The things you import and export are read-only live bindings. That’s very different from CommonJS so be aware of that if you are changing from one to the other – and why it was hard to change in nodejs.

If you do want a dynamic import, there is support for that now too using a promise-based API (useful for lazy loading):

import('./baby-bird.js').then(module => module.drawBird())

There is now native support for modules in the browser:

<script type="module" src="my-module.js"></script>
<script nomodule src="for-non-supporting-browsers.js"></script>

You can use it now but you do need to specify a nomodule fallback for older browsers.

So are we there yet? Have we arrived? Not really. We have native modules, but they exist alongside all the options that went before; and there are still interoperability issues to be resolved.

You’re probably using module syntax during development and bundling the result – even though some browsers could use modules.

The reason for this is performance. For a long time the wisdom has been to ship one big bundle because it was better for performance; and a 2018 study by the Chrome team still recommended bundling even with HTTP2. We still haven’t quite found the right combination to make the shift.

To think about your options, consider the ES Module Maturity Model:

0: no modules
1: modules syntax and bundling
2: unbundled dev (only bundling for production)
3: native, optimised modules in production

You can opt in to whichever level you prefer and gives you value. If level three is not possible today, maybe we’ll get there in future. Judging by our past, it seems likely that we will!


Day 3

Welcome to the Layouts of the Future!

Erin Zimmer, Senior developer Cogent

We’re probably all familiar with the “CSS is awesome” meme, where the text breaks out of the box – which is supposed to suggest CSS is not awesome. In fact the layout is what actually makes sense in reality.

There are a bunch of frameworks out there, weirdly obsessed with single-letter logos, but still…

Frameworks like Bootstrap enable a lot of powerful layouts without a lot of custom code. Or if you want to roll your own, it’s still only a few lines of code to get a basic responsive design working.

But there are some problems with the assumption that all responsive design should be attached to breakpoints.

The web is continuous – people don’t all use windows sized to your breakpoints. Many websites look surprisingly clunky if you happen to open your browser at a size that’s awkwardly placed between breakpoints.

You can alleviate this by setting breakpoints aligned to your design, rather than trying to align to screen sizes.

But when we write our CSS we don’t have all the information, such as how big the content will be; or how big the container will be. If the container was always the viewport, media queries would be perfect.

Design systems face this problem a lot, where a component is designed without knowing the context it will be used in.

So what can we do instead? Erin’s suggestion is let the browser work it out. The web’s flexibility is a feature, not a bug. Don’t try to place every pixel perfectly, give the browser boundaries and let it figure out what to do.

This is really what CSS is doing anyway – making suggestions to the user agent.

As an example, let’s look at making some responsive columns – the real newspaper-style column isn’t very popular on the web as it was really hard to do for years.

But now we have CSS columns which can do it in a single line of code:

.columns {
  columns: 3 200px;
}

Why is this just a suggestion? Well if the property isn’t supported, the content will behave as a normal container. We can suggest a maximum for the number of columns and how wide we want them, but we don’t know what will fit.

So the browser takes info and finds a layout that fits the suggestions within the constraints at the time of rendering.

Then you can combine columns with flexbox. Text is set with columns, while an accompanying image is set to the side or below. The browser then goes ahead and makes it work in whatever combination makes sense.

A more common use case is things that grow and shrink. Two buttons side by side is a really common pattern; as is wanting the button to stretch wide if the parent is wide; or stack up if the container is narrow.

This seems hard but you can make it work using flexbox and a minimum width on the buttons. The demo uses a new flexbox property gap which sets space between elements – browser support can’t come soon enough!

This works really well but you need to be judicious with your boundaries. By setting too many you can create unwanted effects.

(Demo of a set of buttons, where four were normal sized and the fifth full width; because they’d been given a boundary of 24% min-width. It works better without that boundary.)

Next is an issue that is near and dear to all web developers – how to align things.

  • Flexbox is good for alignment in one dimension (eg. vertical)
  • Grid is better for alignment in two dimensions (ie. both vertical and horizontal)

(Two minute crash course in Grid)

There are a couple of ways to make Grid responsive.

You can make Grid responsive to its own content using auto column widths. Auto sets the boundary that the column needs to be at least as wide as its content; and the guideline that it should stretch to fill the available space.

Or, you can dynamically change the number of columns. This may not be immediately obvious so let’s break down the code.

grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
grid-template-rows: auto;
gap: 20px;
width: 100%;

  • repeat(auto-fit, …) – fit as many columns as you can in the space available
  • minmax(300px, 1fr) – each column needs to be at least 300px wide, and stretches to fill the available space

The big difference in this option is that the columns will all be the same width and not driven by the width of the content. So it’s better when you know or control the size of the content.

The last example combines everything… columns reflow to fit content into the available space; the whole page reflows to adapt to viewport size.

Finishing with a quote from Jeremy Keith’s Resilient Web Design:

Relinquishing control does not mean relinquishing quality. Quite the opposite. In acknowledging the many unknowns involved in designing for the web, designers can craft in a resilient flexible way that is true to the medium.

A final note that browser support is better than you expect; although you will need some postprocessing for IE11.

@ErinJZimmer | slides: Welcome to the Layouts of the Future!

Know Your HTML

Chris Lienert, Front End Technical Lead Iress

Chris starts by noting the title is a bit of a lie… “Know Your HTML” seemed a bit too authoritarian, so he changed it to “What’s New In HTML”... but most of this isn’t new either… so really the title should just be What’s In HTML? Because most people don’t really seem to know.

A little history – HTML reached version 5 and it was brilliant, but the name has never changed because now we have a living standard instead of HTML 6… whut?

What that’s about is reflecting the way HTML and the web have evolved. Everything is an increment from a vendor, or the W3C, or the community. Each feature is released progressively, which is what always really happened.

There is some debate now about whether there’d be some marketing benefit to an HTML6, even if only to get people to take a new look at it.

When looking at what’s new, there are three key questions:

  1. Does it work?
  2. Is it accessible?
  3. Can we style it?

It seems odd to even ask if something works – ie. does it do the thing it set out to do? But not everything gets used as intended: notoriously, the web filled in the idea of a Date Picker with so many patterns that it was impossible to standardise.

Accessibility is reasonably straightforward and unambiguous – new things in HTML have to work for everyone.

Being able to style things isn’t so straightforward. Not everything can be styled out of the box.

Chris notes he is not the type of person who comes up with beautiful demos, so instead go to Olivia Ng’s codepen for nice examples! Chris’s own demos are relatively plain.

Codepen: range slider

✔ Works – good for quickly picking within an explicit set of values
✔ Accessible – works across devices and inputs
✔ Styleable – yes you can, although it is an absolute nightmare to do. TLDR: steal someone else’s example and if you are the praying type, pray.

This example also includes the output element. It’s a box you can assign an output to – a value on the screen that updates as the input changes. Very simple.

✔ Works – yes. It’s a box you can assign a value to…
✔ Accessible – yes as it will tell people what’s going on, although it is a little shouty.
✔ Styleable – yes. It’s a box. You can apply whatever you want to the box…

Basically you should use this any time you are writing output to the screen.
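A minimal sketch pairing a range input with output (the ids and inline handler are illustrative):

```html
<!-- The output element announces its updated value to assistive tech. -->
<form oninput="result.value = volume.value">
  <label for="volume">Volume</label>
  <input type="range" id="volume" name="volume" min="0" max="100" value="50">
  <output id="result" for="volume">50</output>
</form>
```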

Codepen: HTML Progress

Progress is what it sounds like: it indicates progress between a start and end value, or an indeterminate progress state for things like file downloads.

✔ Works – it shows progress
✔ Accessible – it announces progress
✔ Styleable – you need to remove the native style first with appearance:none, then work the design back up with browser-specific code. You will need to get into the shadow DOM to figure out what’s going on, so it’s worth digging into your devtools settings to enable shadow DOM inspection.
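The reset-then-rebuild approach looks roughly like this (colours illustrative; each browser needs its own shadow-DOM pseudo-elements):

```css
/* Remove the native appearance first... */
progress {
  -webkit-appearance: none;
  appearance: none;
}

/* ...then rebuild the design per browser via shadow DOM pseudo-elements. */
progress::-webkit-progress-bar { background: #eee; }
progress::-webkit-progress-value { background: rebeccapurple; }
progress::-moz-progress-bar { background: rebeccapurple; }
```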

Codepen: HTML Meter

Meter is another element that does one specific thing really well. It shows incremental progress towards an ideal value. A password strength indicator is a good example.

✔ Works – it shows progress
✔ Accessible – it announces progress
✔ Styleable – yes, in a similar manner to progress where you remove the native style and work it back up for each browser

Codepen: HTML Details/Summary

This one is weird – it seems like it would be an accordion but it’s not an accordion.

❓ Works – people aren’t too sure what it’s meant to do, but it does show and hide its content
❓ Accessible – since the purpose isn’t clear it’s hard to say
❓ Styleable – naturally Chris immediately wanted to animate it, since that’s almost certainly a design requirement… and you can animate out but not in.

So it’s not quite what we want yet.

Codepen: HTML Hidden attribute

The ultimate in doing one thing and doing it well…

✔ Works – it hides things
✔ Accessible – it’s not there when it’s hidden
✔ Styleable – ...well, it takes away style…!

Codepen: HTML Dialog

Awesome! Finally a native dialog right?

❓ Works – you can show and hide just by changing attributes; it handles the backdrop for you; there’s a DOM API that you must use for things to work; and you’ll need polyfills
❓ Accessible – there are some issues
❓ Styleable – if backdrop worked it would be great, otherwise it’s just a box

Painfully close, but not supported in Firefox yet.

So what makes a good HTML element? The things that work are the ones that do something really well, usually something small and well understood. But there are lots of pieces missing – no tabs, no accordion, things like that.

So what can you do about it? You can get involved – many features are being specced at the moment and many browsers let you vote for what you want them to work on.

So this was what’s in HTML. Many aren’t new – you can use them today. You don’t have to reinvent the wheel. For the things that aren’t there, you can make a difference to bring them into existence.


Escape the box with Houdini

Ruth John, Creative Developer

Note Ruth will tweet the resources – follow @rumyra

Ruth wrote documentation about CSS Houdini for MDN docs and this intro to Houdini comes from that experience.

Houdini is a suite of APIs to give developers better access to CSS through JavaScript. They are designed to solve the problem of new CSS features being unavailable for quite a long time.

What’s under the Houdini banner?

  • Typed OM
  • CSS Paint API
  • Properties & Values
  • Layout API
  • Animation Worklet

Ruth will be focusing on the first three partly due to time constraints, but also as they are closer to workable browser support.

Typed OM underpins a lot of the other APIs. It’s access to CSS values as types in JavaScript. A quick syntax refresh to set the scene:

button {
  border-top-width: 4px;
}
  • button is the selector
  • border-top-width is the property
  • 4px is the value

Typed OM gives access to the value, the 4px.

To access CSS values at the moment we need to use fairly cumbersome syntax, including getComputedStyle, which returns values as strings. So then you have to process the string to work out the value separately from the unit and so on. Then we have to put it all back together again to set the new style as a string.

Wouldn’t it be better if we could get style as an object splitting the value and unit? element.computedStyleMap().get('property') lets you do that. So you get { value: 4, unit: "px"} instead of a string. Then you can manipulate and set the value as a number instead of a string.

Value types include…

  • CSSStyleValue – not used by itself much, it is the base class the others extend
  • CSSNumericValue
  • CSSUnitValue
  • CSSMathSum – if you use calc/min/etc it will come back as a CSSMathSum
  • CSSTransformValue – rotations, skews, etc
  • CSSKeywordValue – values that are unquoted words like auto
  • CSSUnparsedValue – unknowable values like custom properties
  • CSSImageValue – a loose type as there are so many kinds of images

...and there are more to come. The idea is to have a different type for every value we have in CSS.

So how can we create one of these values?

new CSSUnitValue(10, 'px')

CSS.px(10)

Both create { value: 10, unit: "px" }

attributeStyleMap lets you set typed values on an element directly:

element.attributeStyleMap.set('border-top-width', CSS.px(10))

(attributeStyleMap also offers get, has, delete and clear methods.)
There are also numeric methods for working on numeric types. You can add, subtract, multiply and divide values; you can convert them using to; and test with equals.

So that’s Typed OM and it may seem like quite a lot when we can already manipulate style with JS, but it underpins everything else.

Properties & Values – we’ve probably all seen CSS variables by now; and we can use setProperty to manipulate them from JS.

This also means we can use JS to do things like generate our own custom gradient, then make it available using CSS.registerProperty – this is linked back to the type set in Typed OM.

Reference: Ana Tudor’s codepen

Workers are JS scripts that run off the main thread – ie. in the background. A worker communicates with the main thread using messages.

Worklets are the same, but with a predefined class and access to predefined properties specified in its API.

This sets us up to understand the CSS Paint API, which lets us register a paint worklet wherever we might want to use an image in our CSS. We already have that; but a worklet opens up opportunities to handle things like resizing images.

It works like this:

.slide {
  background-image: paint(dots);
}

Inside dots.js you can then write code to create the image with HTML5 canvas.

Reference: Jeremy Wagner’s paintlets project

To finish up, a quick look at Layout and Animation – they basically work the same way as Paint but letting you manipulate layout and animation. The key concept is that you can access worklets in your CSS; and those worklets have access to the element.

Please check these things out and give feedback to the W3C about what you want to see in them.


Prefers Reduced Motion: Designing Safer Animation for Motion Sensitivities

Val Head, Senior Design Advocate Adobe

Val’s going to be talking about the unusual intersection of animation and accessibility: Prefers Reduced Motion. Certain kinds of motion on screen can cause physical symptoms or responses – even when the screen is as small as an iPhone.

First-person accounts help us understand the impact of this.

You may not expect the problems that come up – not just motion but certain use of colour can trigger vestibular disorders.

Before we go too far – this talk is not saying animation is bad or not to use it. There is a lot we can do with animation that positively impacts UX and accessibility – Val’s written a whole book on this!

Rather it’s about how to use animation in a responsible and inclusive way, the same as other aspects of design like colour.

How can users request reduced motion in their browser? There are a few options:

  • OSX accessibility panel
  • Windows’ Ease of Access includes 'simplify and personalise windows’
  • Android and iOS have settings as well

So how can web developers tap into these settings? The prefers-reduced-motion media query (supported in almost everything other than IE11).

Since it’s a media query we use it the same way as any other, for example in CSS:

@media (prefers-reduced-motion: reduce) {
  /* alternative behaviour */
}

JavaScript matchMedia also works.

The query either matches (reduce) or it doesn’t (no-preference). Only a match is a really distinct indication of user preference, as the user has specifically and intentionally changed the setting. This is why we go against the usual progressive enhancement direction and reduce motion when reduction is requested, rather than adding it when it’s allowed.
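In practice that means animation is on by default and switched off (or toned down) when the preference matches; a sketch with an illustrative class:

```css
/* Default: full animation. */
.card {
  transition: transform 300ms ease-out;
}

/* Reduction requested: jump straight to the end state instead. */
@media (prefers-reduced-motion: reduce) {
  .card {
    transition: none;
  }
}
```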

So what types of motion can be problematic? WCAG’s definition covers “any motion that creates the illusion of movement”, which is a very broad brush, but it makes sense to err on the side of caution.

Val’s research suggests some types of motion are particularly, almost universally, problematic to people with motion sensitivities:

  • Spinning and vortex effects
  • Multi-speed or multi-directional movement, including parallax effects
  • Constant animation near text


One thing that’s not on the list: animated colour changes, opacity fades and so on. These don’t create a motion effect so they aren’t considered a problem.

How can we respect the request for reduced motion?

  1. Identify potentially triggering motion
  2. Decide on the best reduced effect based on context

Be careful not to remove content when you do this – you want to improve the experience, not break it!

Examples of reducing motion:

  • iOS has a pronounced zoom effect that changes to a crossfade
  • Hello, Universe website has an animated starfield background that is paused/not moving
  • Mockup with multi-directional animation with transform and transition – the reduced version just translates immediately to the final coordinate, sets opacity to zero and fades them in

What about sites that use animation heavily, or as a core part of the content? In that case you might make the motion toggle much more prominent – reveal it in your own UI and don’t rely on the OS setting at all. Not everyone will know about the (relatively new) reduced motion setting.

We also need to use the setting more on the web, to encourage more people who need it to use it (or discover it in the first place).

Example of a prominent toggle is the Animal Crossing website – being an animation-based game there is a lot of animation. When this is toggled you don’t lose any content, but the animation stops – and there’s a control to go back to animation if you prefer.

For a useful and smart toggle, check out Codepen: Animation with reduced-motion by Marcy Sutton

The idea of toggles might feel a bit weird, but when you look for them you find toggles everywhere! If you are going to add one, make sure it’s in a logical place that people will be able to find it.

Or you might make it a prominent part of the content, like turning animations on and off while reading a tutorial with animated illustrations.

Reduced Motion Picture Technique, Take Two | CSS-Tricks

Hopefully this talk has shown we can still be creative with motion, while still being inclusive and considering accessibility. Expanding your audience is pretty much always a good thing, it seems like a win-win!


Debugging CSS

Ahmad Shadeed, UX Designer • Front End Developer

Software developers spend 35-50% of their time validating and debugging software… which costs about $100b each year!

So what is debugging? The process of finding and solving issues in software, that are preventing the user from doing something.

How has CSS debugging changed over time? In 1998: Style Master appeared, letting us debug the CSS on a website; and in 2006 the Firebug extension arrived which was a huge milestone.

In 2020 we have lots of tooling options, but we also have a huge amount of devices to test and web projects can be huge.

Why teach debugging?

Debugging reduces the time devs spend on issues, but the fast development of browser DevTools can be overwhelming for newcomers.

Learning resources can make it less confusing.

How to define a CSS bug? This is when the developer has made an error in their CSS, not a browser bug. It could be a mis-aligned element, a horizontal scrollbar or clipped padding… things like that.

There are lots of ways to test web pages:

  • Browser devtools
  • Mobile devices
  • Emulators
  • Virtual machines
  • Online


Browser devtools offer a lot here:

  • We trigger this with the contextual Inspect menu
  • Device modes to emulate mobiles, tablets, etc – but it’s not 100% reliable
  • Scroll into view feature is very useful for testing small screens

Media queries can be particularly tricky to debug. Their order is important (their effect can be overridden by selectors further along) but this may not be obvious in the dev tools – try setting a background colour. It’s an old trick, but it checks out!

Also classic fixes like clearing cache and checking links still apply.

Beware of the double breakpoint bug, where two media queries set to the same value try to hide content. At that exact pixel value both queries match, so neither piece of content will be visible – offset your value by 1px.
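A sketch of the bug and the 1px fix (class names illustrative):

```css
/* Bug: at exactly 768px wide, both queries match and both navs are hidden. */
@media (max-width: 768px) { .desktop-nav { display: none; } }
@media (min-width: 768px) { .mobile-nav  { display: none; } }

/* Fix: offset one boundary by 1px so exactly one query matches. */
@media (max-width: 767px) { .desktop-nav { display: none; } }
@media (min-width: 768px) { .mobile-nav  { display: none; } }
```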

Debugging an element that is being added and removed by JavaScript is helped by watching where DevTools flashes (to show changes). You can also use the contextual Break on menu to pause JS execution when the DOM mutates.

Debugging common CSS issues…

  • Vertical padding won’t work with inline elements (fix by setting them to inline-block)
  • Space below images can be removed by setting images to display:block
  • Beware of inline CSS overriding the intended styles
  • Margin collapse catches a lot of people out – this is where the larger of two margins between elements 'wins’ and both collapse to that size
  • Just In Case Margin – this is where you set a margin even when your test case didn’t need it, but larger content would cause content to collide
  • Don’t rely on min-width alone, add horizontal padding as well
  • Elements floating above siblings – check for z-index
  • Transparent keyword can show a dark colour in gradients – set the colour you actually want instead
  • In the background shorthand, size must be included immediately after position (separated by a slash)
  • Amend transforms if setting multiple effects, to avoid overriding the earlier effects

Horizontal scrolling is a section to itself…

  • Use the Firefox 'scroll’ label to debug this
  • Check for absolutely positioned elements with large negative values
  • Ensure wide layouts like grid containers are set behind media queries
  • Use overflow-wrap:break-word on small screens to stop long words and URLs triggering scrollbars
  • Beware the difference between overflow:scroll and overflow:auto
  • Don’t forget to use flex-wrap:wrap
  • Don’t depend on space-between for the spacing between columns

There are lots of ways you can break a layout with CSS...

  • Revealing more content or switching languages
  • Forgetting the in-between sizes while resizing windows
  • Developing with ideal placeholder images that mask things like contrast problems


The Art of CSS

Voon Siong Wong, Technical Lead DiUS

As we know, CSS is simple, not easy. The syntax is easy, but the semantics are harder; so Voon will be talking about how we might use the cascade, inheritance and custom properties effectively.

Quick terminology refresher:

selector { /* declaration block */
  property: value; /* declaration */
  another: value;  /* declaration */
}

The cascade deals with the way multiple blocks can define values for the same element; and the specificity of the selector determines what applies.

CSS Cascade Level 0001:

  • elements and pseudo elements
  • carry semantics
  • they are the low level of the design
  • lo-fi look and feel

CSS Cascade Level 0010:

  • classes, pseudo-classes and attributes
  • ARIA attributes increase semantic richness
  • medium fidelity

CSS Cascade Level 0100:

  • IDs
  • no semantic value outside the app
  • high-fi

CSS Cascade Level 1000:

  • inline styles
  • no semantic value
  • can complete the look and feel

Ideally these properties cascade constructively through these levels to build the final style. When selectors all declare styles for the same element, there is destructive interference, where one of the styles 'wins’ and removes the others. These are not good and bad things; good CSS is a balance of both behaviours.

In addition the cascade is influenced by the source of styles:

  1. user agent
  2. user (eg. with browser extensions)
  3. author
  4. author !important
  5. user !important
  6. user agent !important (rare, possibly non-existent)

Next we look at inheritance. Because HTML is inherently hierarchical, inheritance is a natural part of how styling HTML works. Some values like color are inherited from parent or ancestor elements; others like display are not.

You can control inheritance explicitly using the inherit, initial and unset keywords.

Let’s talk about the Single Responsibility Principle. This is good in JavaScript, but what does it mean in CSS? Voon feels this means:

  • declaration blocks should be easy to read and understand without their output
  • use composition to build an element’s style
  • responsibility can vary by the role it is playing
.as-child {}
.as-self {}
.as-parent {}
.as-peer {}

If we break our CSS down this way, we can work out what CSS is doing without needing to see the HTML.

If we think about this more, CSS has the ability to describe hierarchy as well:

.foo {}
.foo > * {}

CSS and HTML are co-dependent documents – we should not really write one without the other. We can also choose whether to push the complexity into CSS or HTML.

You can have very simple HTML and use more-complex/verbose selectors to build up style – at its most extreme having absolutely no classes in the HTML. Or you can go the other way and make the HTML more complex/verbose – at the most extreme, using things like Atomic CSS or Tailwind where you don’t write your own CSS at all.

The art of CSS is finding a balance that works.

Finally let’s look at custom properties (aka. CSS variables):

  • defined like any other property
  • obey the same cascade rules
  • inherited by default
  • can provide a fallback

This enables data-driven styles; and fine control of how dynamic styles get applied. While this can look like inline CSS, it’s much more powerful when properties depend on other properties. The inline custom property tweaks a single value within a larger piece of CSS – your CSS behaves more like a function.
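A sketch of a custom property with a fallback (names and colours illustrative):

```css
:root { --accent: rebeccapurple; }

button {
  /* Falls back to steelblue if --accent is not defined. */
  background: var(--accent, steelblue);
}

/* Custom properties inherit, so buttons inside .warning pick this up. */
.warning { --accent: darkorange; }
```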

So when we talk about the art of CSS, really we are talking about finding a balance – eg. global-first vs local-first. Whether you decrease the scope as required, or increase the scope as required. Neither really works alone.

CSS is inherently designed for consistent design systems – to support global-first, with decreasing scope and increasing specificity.

But as developers we tend to focus on components which are local-first, increasing scope as required.

Voon would encourage you to embrace the cascade, don’t run away from it. Apply the common case, then apply for the exceptions. Think about the relationships between elements and describe that in your CSS, and you will find you reduce the verbiage in your HTML. This allows you to use composition of roles to build the styles you want; and make readable code.

Finally, custom properties are much more than a replacement for SCSS variables. They enable data-driven design. Why not try creating a bar chart just with DIVs and custom props!
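A sketch of that bar chart idea (markup and values illustrative):

```html
<div class="chart">
  <div class="bar" style="--value: 30"></div>
  <div class="bar" style="--value: 80"></div>
  <div class="bar" style="--value: 55"></div>
</div>
<style>
  .chart { display: flex; align-items: flex-end; height: 10em; gap: 4px; }
  /* Each bar's height is driven by its data via the custom property. */
  .bar { width: 2em; background: teal; height: calc(var(--value, 0) * 1%); }
</style>
```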

Day 4

Observability is for User Happiness

Emily Nakashima, Director, Engineering

Emily has always been a frontend engineer who loves to hang out with ops… which is possibly a bit unusual. She had a hard time adequately explaining why the roles have so much in common, until a boss commented:

Nines don’t matter if users aren’t happy. – Charity Majors

No matter which part of the stack we’re working on, we all have the same job – to deliver a great result, a great experience, to the user.

So the question is how do we know if the users are happy?

This is where observability comes in. But what does that mean, exactly? The term comes from control theory.

An observable system is one whose internal state can be deeply understood just by observing its outputs.

For the web this is probably more like…

An observable client app is one whose user experience can be deeply understood just by observing its outputs.

Looking at this definition you’ll realise it’s not a job, it’s a system property… and you can’t buy system properties. No matter who tells you otherwise!

These are things like usability, accessibility, performance, observability… You can buy tools that help you get there, but you can’t just swipe your credit card and get them delivered to you.

So this talk is about how you get there.

Some people like a three-pillar model of observability: logs, metrics and traces. Emily doesn’t agree with this – you can buy all these products and still have questions about your systems.

Emily has a much scarier graphic ;) A range of tools are involved across the traditional concerns of frontend, backend and ops tools.

Emily will focus on events and distributed tracing, as those two parts give you a lot of insight.

Distributed tracing sounds a bit scary but the concepts are reasonably easy. It started with logs.

If there’s one big tip today it’s to move to structured logs. Traditional logs in one-line format require a ton of regex to pull information out; while something with structured key/value pairs is much easier to work with. Also try to add request IDs everywhere so you can link different logs together.

127.0.0.1 – [12/Oct/2017 17:36:36] “GET / HTTP/1.1” 200 – 


{
  "upstream_address": "",
  "hostname": "my-awesome-appserver",
  "date": "2017-10-21T17:36:36",
  "request_method": "GET",
  "request_path": "/",
  "status": 200
}

Maybe you should also capture the duration of each request as well as the time it occurred.

So now you’ve gone from a single line log, into something that captures an event. Events are the fundamental data unit of observability – they tell us about units of work in our system.

Note that events do not mean DOM events in this talk! Events are often one http request, but it will depend on the work your system is doing.

The next way to add value to this data is to identify parent/child relationships between events. Which naturally leads you to start visualising things, which makes it easier to understand cause and effect; and which parts are running fast or slow.

Most people will have seen something that does this kind of visualisation, but it’s worth digging into why traces are useful. It’s also the reason the three pillars don’t work so well – logs and metrics are redundant if you have good traces.

This is why Emily’s diagram has such a large bubble for Distributed Tracing, it can encapsulate so much other data.

You may have been wondering about logging duration – that’s not 'normal’ for logging; and people generally don’t want to write the code required to do that. The way to simplify this is with a standard and a library – most people are using OpenTelemetry right now.

Most people are using this as a server-side tool, but what about using this in the browser? How might we use this for a complex React app?

We can definitely do this – we can pull out spans for fetching the bundle or fonts, running the bundle, rendering components, etc. It does take some code – there’ll be a link at the end. There isn’t a popular library yet.

It is really satisfying once you’ve got this up and running.

When we create events (spans):

  • On page load
  • On history state change (SPA navigation)
  • On significant user actions
  • On error (also send to error monitoring tools)
  • On page unload

Bringing this information together builds a good picture of the user’s experience.
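As plain data, those client-side spans might look like this – a sketch with an illustrative shape (real tools like OpenTelemetry define their own span format):

```javascript
// Minimal span records: each span shares a trace_id and points at its
// parent span, which is what lets a tool draw the waterfall view.
let nextId = 1;
function span(name, parent, durationMs, metadata = {}) {
  return {
    trace_id: parent ? parent.trace_id : "trace-1",
    span_id: `span-${nextId++}`,
    parent_id: parent ? parent.span_id : null, // null marks the root span
    name,
    duration_ms: durationMs,
    ...metadata, // e.g. route, user agent, error flags
  };
}

const pageLoad = span("page_load", null, 1800, { route: "/dashboard" });
const fetchBundle = span("fetch_bundle", pageLoad, 600);
const renderApp = span("render_components", pageLoad, 350);

const trace = [pageLoad, fetchBundle, renderApp];
```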

The exact contents of spans varies according to the tool you’re using; but there’s usually a type, duration and a range of relevant metadata. There can be a lot of depth to this information; and it allows you to compare data over time, using whatever cuts make sense.

A typical example would be heat mapping the total page load duration over a long period.

Traces can get really complicated, particularly when there’s a lot of user interaction. They are interesting as one-offs, but you can aggregate the data and see if there are correlations – eg. did slow page loads lead to lower conversions?

This all looks a lot like the information you get out of your browser’s network panel; but a key difference is your browser can only ever show you one person’s data. Nor does it capture as much context.

Fundamentally browser network data is synthetic data not RUM (Real User Monitoring) data from your production environment. Also the network tab’s data is extremely dense; you don’t want Personally Identifying Information; and so on… so your own tracing can cut the data down so you aren’t handling lots of overhead.

There is some overlap with session replay solutions. If they’re working for you, there’s no problem sticking with them.

So what next? Capturing more about the effect of interactions within applications. So far we don’t have this solved, although Conrad Irwin has a great blog post about this.

So what do you actually do with this data? Emily’s company is small, so the focus is on the customer.

Fast queries are really important for their customers – sub-second responses are good. But even so there was feedback that some queries blew out to multiple seconds.

They set a target of 1s, but found it was using polling set to 1s so it could never meet that. So they knew to shorten it; and they’d instrumented the response times and knew the median was about 411ms and many were faster.

So putting it all together, they dropped the polling interval to 250ms. A 20 line code change instead of launching an entire project to implement an alternative. 19/20 queries were faster from that change.

There is a blog post with more details but there are two things to take away:

  1. It doesn’t matter how fast your backend is if you don’t pass that benefit along to the user.
  2. The story feels almost silly – a little data and a small change had a big benefit – but do we have enough data to find all of these gains? Probably not.

Honeycomb has two versions – one queried directly in the browser, another version queried via an encrypted proxy. Not many people use the secure version but they are a very important minority of users – and it’s slow for just one team… and it was a really important customer. But they could not reproduce the problem.

They started looking at traces and the answer popped out – something was blocking data requests. It turned out the JS to manage the requests was complex; and being single-threaded it was delaying the requests. They batched the logic and improved performance.

How to find the needle in the haystack? Use the appropriate data:

  • For breadth use metrics (a horizontal slice across traffic)
  • For depth use tracing (deep cross section of a single interaction)

Common questions:

  • Privacy – don’t collect every bit of data you can, question if you need every piece, choose the least sensitive options, avoid PII
  • Performance (will this slow my app?) – done well it won’t. Batch requests, use the Beacon API for non-blocking send, use requestIdleCallback or setTimeout for slow calculations
  • Sampling – if you have a really large amount of data, you can work with a representative sample

Observability is not just for performance and bugfixing, it’s great for getting back to the question of “are the users happy?”. Good for UX tracking:

  • Refresh/reload tracking – excessive reloads can indicate something is wrong. They tracked ctrl+r/cmd+r and found things like people hammering the user invite page.
  • Rage clicking – you can guess what this means! Rapid re-clicking on a single element can indicate a high level of frustration. A common trigger – elements that load data but don’t show a spinner.
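One way to detect rage clicking is pure logic: flag repeated clicks on the same element within a short window. A hypothetical sketch (thresholds and the DOM wiring are illustrative, not from the talk):

```javascript
// Returns a click handler: call with (target, timestampMs) per click;
// it reports true once a streak of rapid same-target clicks crosses
// the threshold.
function makeRageClickDetector({ maxInterval = 500, threshold = 3 } = {}) {
  let lastTarget = null;
  let lastTime = -Infinity;
  let count = 0;
  return function onClick(target, now) {
    if (target === lastTarget && now - lastTime <= maxInterval) {
      count += 1; // continuing a rapid streak on the same element
    } else {
      count = 1; // new element or too slow: streak resets
    }
    lastTarget = target;
    lastTime = now;
    return count >= threshold;
  };
}

const detect = makeRageClickDetector();
detect("#save-button", 0);
detect("#save-button", 200);
const raged = detect("#save-button", 400); // third rapid click: flagged
```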

You can also use observability to drive design.

Honeycomb had a sidebar showing query history; but designers weren’t sure what users wanted there (if anything). They looked at data about the screen to window size ratio; and found users were making that page larger than any other screen in the app.

So they made the sidebar easier to read quickly; but also collapsible so people could tuck it away when they didn’t need it.

Then they went into the data again afterwards, to see if fewer people were increasing the screen size on that page – and there was a small improvement.

Isn’t this just product analytics? Pretty much. As our apps get more complicated, the tooling has to get more powerful.

Emily likes the emergence of the term “Product Engineer” in preference to 'full stack’ etc. It’s better if we are not all siloed away from each other.

When you look at production data, you too are an observability practitioner. Welcome to the club!

@eanakashima

Web Assembly at the Edge

Aaron Turner, Software Engineer Fastly

Aaron has made or is involved in lots of cool projects around WASM (WebAssembly) and WASI (Web Assembly System Interface). He’ll be talking about WASM, WASI and the Edge.

So what is WASM? It’s a universal, low-level bytecode for the web. It’s great for computationally-heavy stuff that doesn’t fare well in JavaScript. It runs in major browsers, nodejs and in standalone runtimes.
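As a tiny illustration of “universal bytecode”, here is a standard minimal example (not from the talk): a hand-assembled WASM module exporting an add function, instantiated from JS – the same bytes run in browsers, nodejs and standalone runtimes.

```javascript
// A minimal WASM module, byte by byte: it exports one function,
// add(a, b), returning a + b as an i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: local.get 0; local.get 1; i32.add
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const add = instance.exports.add;
```

In practice you would compile from AssemblyScript, Rust or C rather than write bytes by hand, but the output is the same kind of compact, portable module.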

There are lots of languages outputting to WASM, but the most mature are:

  • AS – AssemblyScript (if you can read Typescript you’ll be able to read AssemblyScript)
  • Rust has good tooling for WASM
  • emscripten – a very mature toolchain for C that now compiles to WASM

WASM uses linear memory – it’s like one big array you can share between WASM and JS. This makes it easy to partition – great for security. It also relies on a capability-based security system, which provides some further control.

WASI is a system interface for WebAssembly, a standardised set of system calls for interacting with system resources like file systems, randomness and time. There are proposals for more.

You can use WASI through standalone runtimes like Wasmtime and Lucet. There are tight requirements around the permissions you give WASM modules to do things like modify files on your system – you have to be very specific.

This gives performant modules with really powerful capabilities.

The Edge represents putting your servers closer to your users – eg. with Fastly’s edge cloud platform. The idea is to serve from locations that are optimised to deliver content to users, wherever they are. CDNs are the best known face of this.

You can also use this model for compute – commonly called “serverless” – where it’s the compute that’s closer to the users, not just storage and transfer.

To make this useful Fastly considered what users consider important for compute: language portability, security, runtime performance, memory usage, fast start/minimal cold starts. A cold start is where a user’s request has to wait while code is transferred to their nearest node, parsed and served.

So what are the options? We could ask people to containerise their code. That would give good portability and security, but cold starts would be really slow and resource consumption high.

You could also put JavaScript at the edge – V8 can run WASM but requires JS so there are multiple execution layers. Cold starts still wouldn’t be great and as an interpreted language execution could suffer.

But WASM is great for the edge:

  • portability is high – almost any language that compiles to WASM could be supported with a slim SDK
  • security is great – sandboxed, contiguous heap, capability-based security
  • runtime performance is generally good (there are always caveats with performance)
  • memory usage is reasonable
  • cold start times are much quicker with WASM than with containers or JS

So those are great technical reasons, but let’s look at this another way. Imagine an example where you have a large group of users with cheap, low-powered devices; and reasonable but not great internet speeds: for example lower-income earners in a big city.

WASM is kind to profiles like this – it takes less CPU and latency is reduced by edge hosting; so those devices can still use powerful features.

WASM is really new, with lots of work still to be done. Things that are second nature elsewhere are still missing, simply because they’re not done yet. But it has an amazing community driving it – people who are not just smart but see the good it can achieve. WASI is also growing, with a lot of cool innovation – eg. ideas coming forward for machine learning and cryptography interfaces.

Standalone runtimes:

  • Wasmtime is a popular choice for a general standalone runtime for WebAssembly. It’s designed to be light and fast.
  • Lucet is built by Fastly and it’s what they’re using for their edge solution. It’s notable for compiling ahead of time rather than at runtime. Lucet is fast to instantiate – as little as 50 microseconds (not milli, micro!), which is very good news for cold starts.

Languages that compile to WebAssembly:

  • Rust has been a strong contender in the WASM space for a long time. Good WASI support; enthusiastic community.
  • AssemblyScript is a very young language, but has some big backers; and has a good JS/WASM story; there is a lot of opportunity to get involved if you are interested.
  • Fastly have been experimenting with Go for WASM

Even if a language doesn’t compile to WASM... maybe its runtime will…!


There are lots of really exciting projects too:

  • Wasm3 runtime (small, good for IoT)
  • Wasm itself is also getting better
  • Lots of new projects around games – game engines, physics engines, etc

The tech is cool, but the reason Aaron stays so interested is the community is great to be part of. If you (and your company!) are interested you should get involved.

Together we can build an awesome WebAssembly for the browser, edge and beyond!


Tuning web performance with just browser APIs

Yaser Adel Mehraban, Lead Consultant TelstraPurple

New web performance APIs allow us to tune our sites with native tools, rather than having to rely on third party tools.

First of all we should set a definition:

Performance optimisation is the act of monitoring and analysing the speed and interactivity of the application and identifying ways to improve it.

This can be on both the server side and the client side; but this talk focuses on the client side.

The process for web performance is easy: measure some metrics, make changes to the site, then take measurements again to see if it made things better.

Some actions are done in the lab (localhost, dev environment, sometimes third parties) or in production. Ideas should be tested in the lab, tested in production, then taken back to the lab environment to continue improving things.

This is not all about hitting 100% in Lighthouse, it’s about user happiness. Performance optimisations can take a lot of time and money, so if users are already happy you may not want to push beyond that into low ROI work.

Aspects of performance that matter to users:
  • Perceived load speed
  • Load responsiveness
  • Runtime responsiveness
  • Visual stability
  • Smoothness

Three reasons to care about performance:

  • Conversion
  • Traffic
  • UX

The classic example of impact is that Amazon found that every 100ms delay cost them 1% in sales… which at their scale meant $330m/pa.

Common ways to measure performance:

  • Browser devtools
  • Lighthouse
  • Third party tools

Monitoring APIs:

  • User Timing
  • Performance Timeline
  • Navigation timing
  • Resource timing
  • Long tasks

User Timing

  • Performance marks
  • Performance measures
  • Accessed on the performance object in the client side

This API lets you set marks during execution of your code, then create measures between those marks. You can also clear these marks, because these might not get garbage collected effectively so clean up after yourself.

Browser support is good for this feature.
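A minimal sketch of marks and measures (the mark/measure names here are arbitrary examples):

```javascript
// Mark interesting points during execution
performance.mark('fetch-start');
// ... do some work here ...
performance.mark('fetch-end');

// Create a measure between the two marks
performance.measure('fetch-duration', 'fetch-start', 'fetch-end');

const [measure] = performance.getEntriesByName('fetch-duration');
console.log(measure.duration); // elapsed time in milliseconds

// Clean up after yourself, as the talk suggests
performance.clearMarks();
performance.clearMeasures();
```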

Performance Timeline helps you manage and observe your marks and measures.

  • performance.getEntries
  • PerformanceObserver

A PerformanceEntry looks like this:

{
  "name": "measure-1",
  "entryType": "measure",
  "startTime": 22.68000000003667,
  "duration": 0.025000001187436283
}

Again, good browser support for these features.

Navigation Timing

  • total page load time
  • request response time
  • page render time
const perfData = window.performance.timing;
const connectTime = perfData.responseEnd - perfData.requestStart;
const renderTime = perfData.domComplete - perfData.domLoading;

Lots of examples of how to use these on MDN

Excellent browser support, although the API might change in future.

You can also send the information to a server in JSON.

const [entry] = performance.getEntriesByType("navigation");

The data is quite rich and you can draw lots of insights from it.
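As a sketch of the kind of insights you can pull out, here is a small helper that derives a few headline numbers from a navigation entry (the property names are from the Navigation Timing Level 2 spec; the metric names are my own):

```javascript
// Derive a few headline numbers from a PerformanceNavigationTiming entry
function summariseNavigation(entry) {
  return {
    ttfb: entry.responseStart - entry.requestStart,     // time to first byte
    download: entry.responseEnd - entry.responseStart,  // response transfer time
    domReady: entry.domContentLoadedEventEnd - entry.startTime,
    pageLoad: entry.loadEventEnd - entry.startTime,     // total page load time
  };
}

// In the browser:
//   const [entry] = performance.getEntriesByType('navigation');
//   console.log(summariseNavigation(entry));
```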

Resource timing – Network request timing

  • high-res timestamps (ms)
  • resource loading timestamps
  • resource size
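One way to use resource timing data – find the heaviest resources on the page. This is a sketch; in the browser you would pass in performance.getEntriesByType('resource'):

```javascript
// Rank Resource Timing entries by transfer size (largest first)
function largestResources(entries, limit = 3) {
  return [...entries]
    .sort((a, b) => b.transferSize - a.transferSize)
    .slice(0, limit)
    .map((e) => ({
      name: e.name,
      transferSize: e.transferSize, // bytes over the wire
      duration: e.duration,         // total load time in ms
    }));
}
```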

The last API for today is Long Task:

  • Tasks running for 50ms or more
  • Culprit browsing context container
  • Attributions

This lets you flag tasks that are taking too long, using the longtask entry type.

NOTE this API is in draft, so its browser support isn’t as good as the other APIs.
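A hedged sketch of watching for long tasks – because the API is in draft, feature-detect and be prepared for the entry type to be unsupported:

```javascript
// Browser sketch: flag tasks of 50ms or more via the 'longtask' entry type.
// Returns the observer, or null if the API isn't available here.
function watchLongTasks(report) {
  if (typeof PerformanceObserver === 'undefined') return null;
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      report({ start: entry.startTime, duration: entry.duration });
    }
  });
  try {
    observer.observe({ entryTypes: ['longtask'] });
  } catch {
    return null; // entry type not supported in this environment
  }
  return observer;
}
```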

So let’s say we’ve used all those APIs to find some areas to improve.. there are some more APIs we can use to make those improvements.

  • Network Information
  • Page Visibility
  • Resize Observer
  • Intersection Observer

Network Information

  • detect network condition changes on the client side
  • preload large requests
  • exposed via navigator.connection, navigator.mozConnection, navigator.webkitConnection

Early days so mostly works in Chrome at the moment.

(demo of using this in action, sending lighter images to slower connections)
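In the same spirit as the demo, a sketch of choosing an image variant from the connection info (the URL scheme is hypothetical; effectiveType and saveData are real NetworkInformation properties):

```javascript
// Pick an image variant based on connection quality.
// Feature-detect: pass navigator.connection, which may be undefined.
function pickImageVariant(connection) {
  if (!connection) return 'image-high.jpg';        // no API: assume the best
  if (connection.saveData) return 'image-low.jpg'; // respect data saver
  switch (connection.effectiveType) {
    case 'slow-2g':
    case '2g':
      return 'image-low.jpg';
    case '3g':
      return 'image-medium.jpg';
    default: // '4g'
      return 'image-high.jpg';
  }
}
```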

Page Visibility

  • watch for when a page is not visible to the user
  • document.hidden
  • document.visibilityState
  • document.onvisibilitychange (the visibilitychange event)

This lets you know if the user is currently using the page or not; and you can do things like pause media, stop pre-fetching data, etc.
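A minimal sketch of the pause-when-hidden idea (the media object here is hypothetical):

```javascript
// Pause work when the page is hidden, resume when it's visible again
function onVisibilityChange(doc, media) {
  if (doc.visibilityState === 'hidden') {
    media.pause();  // e.g. pause video, stop pre-fetching data
  } else {
    media.resume();
  }
}

// In the browser:
//   document.addEventListener('visibilitychange', () =>
//     onVisibilityChange(document, player));
```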

Resize Observer

  • monitoring object sizes in a performant way
  • ResizeObserver
  • ResizeObserverEntry

This lets you react if the page or container is resized – this gets away from having to manage and debounce the page resize event which gets fired a lot.
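A sketch of the shape of a ResizeObserver callback – entries arrive batched, so there is nothing to debounce yourself (the element selector is hypothetical):

```javascript
// Pull the new sizes out of a batch of ResizeObserverEntry objects
function summariseResizes(entries) {
  return entries.map((entry) => ({
    width: entry.contentRect.width,
    height: entry.contentRect.height,
  }));
}

// In the browser:
//   const ro = new ResizeObserver((entries) => console.log(summariseResizes(entries)));
//   ro.observe(document.querySelector('#sidebar'));
```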

Intersection Observer

  • Observe changes in the intersection of an element with the viewport, asynchronously
  • IntersectionObserver
  • IntersectionObserverEntry

One example of this is that an element that is initially off screen lower down a page will fire an event once it intersects with the viewport.

Use cases: lazy loading images, infinite scrolling, ad revenue reporting, prevent running tasks or animations.

To set up your intersection observer you will need to pass in some options: root (the element whose bounds act as the viewport; null for the browser viewport), rootMargin (margin around the root, to grow or shrink the intersection area) and threshold (how much of the target must be visible before the callback fires).

(demo of lazy image loading)
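A minimal sketch of the lazy-loading idea from the demo – it assumes images carry their real URL in a data-src attribute (an assumption, not part of the API):

```javascript
// Swap in the real image URL when an element enters the viewport
function loadWhenVisible(entries, observer) {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // load the real image
      observer.unobserve(img);   // done with this element
    }
  }
}

// In the browser:
//   const io = new IntersectionObserver(loadWhenVisible, {
//     root: null,          // use the browser viewport
//     rootMargin: '200px', // start loading a little before it's visible
//     threshold: 0,
//   });
//   document.querySelectorAll('img[data-src]').forEach((img) => io.observe(img));
```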

@yashints

Understanding image compression

Andi Tjong, Developer Atlassian

There are many different forms of image compression. Andi is going to look at

  • the different formats
  • how important image compression is
  • why the different types exist
  • what are their use cases

Starting with the question “how important is it?”.

Example: an RGB photo with ~2 million pixels. Uncompressed this would be ~6 MB, but with JPEG compression it could be ~40 KB – a ~99% saving.
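A back-of-envelope check of those numbers: 2 million pixels at 3 bytes per pixel (one byte per RGB channel) uncompressed:

```javascript
const pixels = 2_000_000;
const bytesPerPixel = 3; // R, G, B at 8 bits each
const uncompressedMB = (pixels * bytesPerPixel) / 1_000_000;
console.log(uncompressedMB); // 6 (MB)

const jpegKB = 40;
const saving = 1 - (jpegKB * 1000) / (pixels * bytesPerPixel);
console.log((saving * 100).toFixed(1)); // "99.3" – a ~99% saving
```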

About half of web page data is images, so reducing that size has a significant effect.

Why are there so many compression algorithms? It’s partly history – people find new and better ways to compress data, or have a specific use case to solve. There also isn’t a universal definition of compression – “first million digits of pi” is a compression of 1m actual digits! It’s a different representation of the same information. You need to understand the content before you can understand how best to compress it.

Image formats – will focus on the most common formats PNG, JPG and GIF.

Graphics Interchange Format (GIF)

  • created in 1987, last update in 1989
  • good for low colour count images – logos, cartoons
  • bad for photos as they introduce a lot of noise
  • uses Quantization and LZW (lossless compression)

Quantisation reduces images to 256 colours; it may also apply dithering to soften the hard edges quantisation can introduce. When GIF was created, this was suitable for the kind of hardware people were using.

The other big use case for GIF is animation – but the images get very big. In fact a lot of “gifs” you see online are actually videos.

GIFs only have intraframe compression, while videos can also use interframe compression – storing only the differences between frames.

So in the end, you should probably use video for animation and PNG for static images.

The Portable Network Graphics (PNG) format was created in 1996 to get around the LZW compression patent. It has several modes:

  • Indexed, aka PNG8
  • Greyscale
  • RGB, aka PNG24
  • Greyscale + alpha
  • RGBA – RGB + alpha, aka PNG32

Key PNG use cases:

  • lossless images – PNG uses filtering + deflate compression (the same algorithm as GZIP), both lossless
  • images with low colour counts – PNG-8 replaces GIF for small static images, with smaller file sizes. Use this format if you can as you get good results.
  • partial transparency – PNG-32 or PNG-8+alpha. GIFs can’t do partial transparency; PNG can.

Joint Photographics Experts Group (JPEG)

  • created in 1992
  • uses a lossy compression method
  • based on Discrete Cosine Transform (DCT)

The JPG use case is photographic images – images with high colour counts.
Don’t use JPG for low colour count images as it introduces artefacts/noise; use PNG instead.

How JPG works

  • colour model conversion
  • chroma subsampling
  • block splitting – splits images into 8×8 blocks, transforms to numbers
  • DCT – uses a pattern grid to reproduce the 8×8 grid
  • quantisation
  • entropy coding

In short it tries to remove the details we don’t miss as much. But because it works in blocks, it adds noise to images with crisp lines – sharp edges are hard to reproduce with the cosine patterns used within each block.

What about other image formats? WebP, JPEG 2000, HEIC, AVIF, BPG... they have better compression, so why aren’t they being used? Basically they’re not widely supported in browsers.

By understanding the different image compression algorithms, you can understand which format to use (and which tools can help you reduce file sizes).

Predictive Pre-fetch

Divya Sasidharan, Developer Experience Engineer Netlify

Divya currently lives in the United States where people love their french fries… and you usually get two condiments: ketchup or mayo. Since most Americans prefer ketchup, you get it by default – you don’t have to ask for it.

It’s a frictionless, nice experience… and that’s what we want on the web too. Applying it to users browsing the web: they should just get the content they want without having to constantly ask. This is prefetching – having the browser fetch information before the user specifically requests it.

To set the scene, a run through of how websites load – from request, through the connection dance, so data can be transferred. That’s one level of latency. The next level is fetching the resources – HTML comes down first, then other resources. This all adds time to rendering that the user can see. This is basically the status quo.

Divya will be focusing on…

  • DNS Prefetching
  • Link Prefetching
  • Prerendering

DNS Prefetching is the first and simplest level. It establishes a connection for future requests to use.

Link Prefetching goes one step further, fetching priority resources that the developer has specified are important to load ahead of time.

Prerendering is the most extreme form of prefetching – it fetches an entire page and renders it ahead of time, in a different virtual layer. This makes navigation instantaneous… assuming the user goes to that page next.

Hint          | Cost if wrong | Benefit if correct
DNS Prefetch  | Very low      | Low
Link Prefetch | Mid-high      | High
Prerender     | Very high     | Very high

It is useful to consider both the costs and the benefits of these techniques.

Some use cases give more predictability; eg. the top results on a search page, or the post-login screen of an application.

These techniques all assume that we as devs know what the user is going to do… but we are speculating. Users aren’t really that predictable.

So a predictive approach is better.

An illustrative example is predicting weather – you predict future days based on past days. If you know that 80% of the time a cloudy day is followed by another cloudy day, you can make a reasonable prediction if today is cloudy.

Translating this to a website, you can look at user statistics to find the most common sequences. On a restaurant website you might find 50% of users go to the menu.

Google Analytics gives you a navigation summary that is useful for this; but Divya finds viewing raw data is more useful than pre-processed data, where people have assumed they know what you wanted.

You can consider not just pages people visited; but also where people exited and didn’t load another page at all.

So how to integrate this with your page? Build automation – Divya has it set up to query GA during the build and update the values in 11ty. It’s better than permanently hard-coding them.

Blog post: The subtle art of predictive prefetching

The result is a JSON asset that gives a set of paths and certainty values. So the prefetching can be set according to certainty thresholds.
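A sketch of how that asset could drive prefetching – the shape of the JSON (path → certainty) is an assumption based on the description above:

```javascript
// Keep only the paths whose certainty clears the threshold
function pathsToPrefetch(predictions, threshold) {
  return Object.entries(predictions)
    .filter(([, certainty]) => certainty >= threshold)
    .map(([path]) => path);
}

// In the browser, inject a <link rel="prefetch"> per candidate:
//   for (const path of pathsToPrefetch(predictions, 0.5)) {
//     const link = document.createElement('link');
//     link.rel = 'prefetch';
//     link.href = path;
//     document.head.appendChild(link);
//   }
```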

(Demo showing that menu is served from prefetched cache.)

This isn’t a new concept, this was pulled from guess.js – a project by the Google Chrome team. It makes a lot of this much easier to implement.

This is still a naive implementation with some hard-coded paths. It would be better to have more data, to make better predictions.

  • Thinking of the weather example, a month’s data will give different results than just a week’s data.
  • Cookie based tracking can enable predictions customised to a specific user’s habits.
  • Looking at more levels of navigation will reveal more detailed patterns.

It’s important to note with cookie-based tracking you might run afoul of things like the GDPR. You will need to ensure you handle all the required opt-ins and so on.

But if it is an option, it can shift from compile-time predictions to real-time predictions; which will be a better experience.

Another lens to help decisions is the user’s connection, including disabling prefetch if people are on data saver.

Bandwidth  | Threshold | Recommended
Slow 2G    | 0.9       | DNS/Link Prefetch
2G         | 0.9       | DNS/Link Prefetch
3G         | 0.5       | DNS/Link Prefetch
4G         | 0.2       | Link Prefetch
Data Saver | 0         | null
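That table translates fairly directly to code – a sketch that picks a certainty threshold from navigator.connection (the no-API fallback value is my own assumption):

```javascript
// Choose a prefetch certainty threshold from the connection info
function prefetchPolicy(connection) {
  // No Network Information API: assume a middle-ground threshold (assumption)
  if (!connection) return { threshold: 0.5, prefetch: true };
  if (connection.saveData) return { threshold: 0, prefetch: false };
  switch (connection.effectiveType) {
    case 'slow-2g':
    case '2g':
      return { threshold: 0.9, prefetch: true };
    case '3g':
      return { threshold: 0.5, prefetch: true };
    default: // '4g'
      return { threshold: 0.2, prefetch: true };
  }
}
```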

Also referring back to the cost table from earlier provides good guidelines.


Performance versus security, or Why we can’t have nice things

Yoav Weiss, Co-chair W3C Web Performance Working Group

Yoav gets a lot of questions in the form why can’t we just…? ...access more data about our users? ...avoid CORS for my specific case?

The reasons are user security and privacy… and people tend to respond “yeah I get that but...” which shows they don’t truly understand all the threat models on the web.

They basically don’t know what browsers are trying to defend against.

Note this talk is not about server-side security, defense in depth, third party tracking or fingerprinting.

Yoav will be talking about the broader categories of attacks, surfaces and vectors; and giving some examples where things went wrong.

Hopefully this will give insights into the constraints browsers operate with; and help answer a few of those “why can’t we just” questions.

Threat categories:

  • History leaks
  • Cross-site leaks
  • Speculative execution attacks (dangerous sub-type of cross-site leaks)

So what are history leaks?

Let’s say you love kittens and often browse kitten sites – but you don’t want every other website to start sending you kitten-related advertising. To avoid this, the browser has to prevent history data leaking between different websites. This is much harder than it seems…

The oldest history leak is :visited style – any website could put in a bunch of links to other URLs; then check properties like computed style to see if it was visited.

Blog: Plugging the CSS History Leak – Mozilla Security Blog

Mozilla closed a lot of these attacks but it resulted in very limited styling and slow rendering. Yoav feels visited link history really needs to be blocked entirely between sites – users probably wouldn’t have noticed but we’d have had nicer styles and faster rendering!

These links trigger paints that open up timing attacks – attackers can derive state from things like Frame Timing, Paint Timing or Element Timing.

Browsers are mostly defending against this, but it’s adding a lot of complexity to protect something that doesn’t give a lot of value to users.

Caching is great for performance; but it’s not always unicorns and rainbows – the dark side is caching attacks. If visited links were blocked, caching attacks would be a great way to find out about the user’s history.

A site can load a static resource from another site and time how long it takes to load; if it’s fast, it can deduce that you have visited that site. While you can defend against this by not caching anything, that’s not so great.

Safari was the first browser to add cache partitioning (or double-key caching) as another defence.

This means storing both the URL of the cached resource; and the top-level domain it’s loaded from. This prevents caching across sites (ie. prevents loading from a TLD other than the cached TLD) and prevents the leaks.

Partitioned caching also tackled a range of other privacy and security issues. Sadly other browsers are yet to follow, mostly for performance concerns; although the Chrome team are looking at this again.

Another example of cross-origin state leak is Service Worker installation state. Resource timing has an attribute called workerStart, which reveals the time it takes for a SW to start up. It was possible for a site to load another site in an iframe and inspect the state of its service worker – and figure out if the user had visited it before.

This was a bug that was fixed in both the specification and implementation.

Cross-site leaks are the next big category. This is where one site can deduce information about you from another site.

For example if you are logged into a social media site, this may be revealed to other sites; and it may even reveal details like the sections of the social site you use.

To prevent these leaks, browsers put in the Same-origin Policy (SOP). The mechanism for this is CORS (Cross Origin Resource Sharing). This allows legitimate sharing by enabling specific sites to read the data.

This protects us from direct leaks, but there are side channels like resource size. Let’s say you have been shopping on a website that’s running an A/B test of different icons for different age groups – the hypothesis being that older users like bigger icons. The resource size of those icons now reveals your age.

Cross-site search reveals information because an attacker can send search queries (eg. to your email inbox) to see if a certain keyword returns results. Let’s say it reveals once again that you are into kittens, because a search of your inbox for “kittens” does not return a zero result.

So exposing resource size is bad… what kind of idiot would have bugs like that? Uhh… (slide highlighting Yoav’s name on a security bug…) (The ticket lists the detail of the attack)

In a busy to-do list a task about changing resource timing implementation hadn’t made it to the top of the list (it didn’t seem high priority!). The bug coming in certainly provided new motivation to get it fixed, although Yoav also felt pretty bad…

Another place content sizes get exposed is the Cache API. It turns out the API has a quota; and initial implementations took the cache size into account while calculating that quota. This revealed the size. So now browsers have to pad the sizes out to arbitrary values to block the attack… which sadly makes things slower than they’d otherwise be.

Beyond bugs, there are also features that get blocked due to resource size exposure. The Content Performance Policy spec had to be abandoned because there was no way to expose the information that made it useful, without exposing that information to attack.

The Performance Memory API also fell victim to a similar problem. They are reviving some parts of the idea in the performance.measureMemory API.

Other things that give out details:

  • status code
  • processing and rendering timing

This is why Timing-Allow-Origin is another opt-in.

Speculative execution attacks – Yoav kept the best for last!

It turns out that CPUs also have caches. We’ve seen many attacks but nothing quite as dramatic as Meltdown and Spectre, which shattered previous expectations around multi-tenanted computing.

Basically when modern CPUs see an IF statement, they can speculatively execute both branches to save time; which gives big performance gains. It turns out there is an unexpected side effect: it can keep things in the CPU cache which can be observed by other programs running on the CPU.

Mitigation for this requires keeping processes separated. Chrome was relatively fortunate as they had already launched Site Isolation, which limited the impact on desktop at least. Other browsers didn’t have this in their architecture, although they are working on it now.

CORB (Cross Origin Resource Blocking) came out of this as well.

Spectre attacks also read the CPU cache state through timing attacks. High-resolution timers were facilitating this problem, which led to some being disabled or coarsened to lower resolution (although some have been re-enabled in isolated contexts).

Because these features are pretty useful… is there a way to re-enable them? We can try to create new types of secure or isolated contexts, which limit cross-origin vulnerabilities.

performance.measureMemory and JS Self-Profiling are still risky; but may be ok to expose in isolated contexts.

CORS is a high bar for opting in (there’s a lot of friction) and not something they want to require for isolated contexts. To tackle that…

  • Cross-Origin-Resource-Policy: cross-origin
  • Cross-Origin-Opener-Policy: same-origin
  • Cross-Origin-Embedder-Policy: require-corp
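As a sketch, the three opt-in headers above as a server might send them – how you actually attach these depends on your server framework:

```javascript
// The opt-in response headers listed above, as a plain object
function isolationHeaders() {
  return {
    'Cross-Origin-Resource-Policy': 'cross-origin',
    'Cross-Origin-Opener-Policy': 'same-origin',
    'Cross-Origin-Embedder-Policy': 'require-corp',
  };
}
```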

How many opt-ins is that? CORS, CORP, TAO, COOP, COEP... how do they all fit together and what should devs be doing?

  • CORS – good for public resources that don’t require credentialled requests, noting there are limits (eg. you can’t CORS-enable CSS background images)
  • CORP – where exposing some details like size are ok; but the content can’t be CORS enabled
  • TAO – can be used when exposing timing doesn’t reveal anything about the user

If it all sounds a bit vague… that’s because it is. There’s work to be done to clarify it all. It needs to be very clear what you are doing with those opt-ins.

To summarise…

  • adding APIs to browsers is hard – particularly because people do so much in the browser now and browsers have a duty to protect their data
  • fundamental changes are coming – cache partitioning, isolated contexts, opt-in rationalisation
  • we can’t have everything – some features just can’t be done safely!