Web Directions Code 2021 was held online over two days, September 17 and 24 2021.

Disclaimer & Credits

  • These notes were hammered out very quickly and paraphrase what speakers were saying. If you need an exact quote, use a definitive source such as a recording or slide deck. Note that you can review videos, plus download transcripts and slides from conffablive.com under “past sessions”.
  • Photo and other content credits as per the social media embeds.
  • Slide credits obviously to the speaker.

The State of PWAs

Hemanth HM, Member Of Technical Staff PayPal

...

Hemanth is a TC39 delegate and wrote the PWA chapter of the Web Almanac.

We start by going back in time to 2007, when Steve Jobs announced web 2.0 and AJAX as the vision for iPhone apps. But then…the app store happened instead, with a push to apps instead of a web experience.

Come forward in time to 2015 and the idea of progressive web apps (PWAs) appeared. The key concept was to enable “app like” experiences built on the native web. There are some huge success stories for converting to PWAs – lower data consumption, better performance, increased engagement and conversions. Lots of big names across publishing, social media and ecommerce.

What of the Web Almanac? It’s a project by the HTTP Archive to provide insights and reports on the state of the web. Hemanth wrote the PWA chapter, but is quick to call out all the contributors and reviewers who were critical to the chapter being written.

Screenshot of the PWA chapter in the 2020 Web Almanac

So how does the Web Almanac use large amounts of raw data and analysis to produce a useful publication?

The first step is to form a content team including at least one author, reviewer and analyst. Ideas for chapters are proposed and discussed; content is planned and deadlines set. Then the team will go ahead and gather the data, validate results and draft content. From there it’s content refinement, review and finally publication. So it’s a lot of work, with a lot of people working together.

There is a great deal of raw data, so Google BigQuery is used to enable the analytics and data visualisation. The amount of data available is pretty extraordinary, allowing solid insights into what technology is actually in use; and implementation analysis like Lighthouse (performance) scores.

They looked into the Lighthouse PWA audits for around 6 billion pages, and it gave a lot of useful insights – eg. 2.21% had an installable manifest (that’s a lot when you’re talking about 6b pages!). Then deep details can be derived about the specific PWA features being used, how people are using service workers and so on.

So what can we take from this as devs? For one thing, just having a JSON manifest doesn’t make something a PWA – it needs service workers and offline support. Many manifests appear to be automatically generated by big CMS systems. They also found a non-trivial number of typos! Things like theme_colour instead of theme_color.
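For reference, a minimal web app manifest might look like the sketch below (all values are illustrative) – note the correct key is theme_color, with an underscore:

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "theme_color": "#0b5fff",
  "background_color": "#ffffff",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

Even with all of this in place, it still isn’t a PWA until a service worker provides offline support.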

There are also some business insights, such as shopping being the most common category for mobile PWAs. This makes sense when you consider research shows a good, fast, flexible experience translates to revenue in ecommerce.

See The 2020 Web Almanac PWA chapter for all the data.

The next PWA chapter is being written now. Check it out!

The Evolution of the Web and OffscreenCanvas

Brian Kardell, Developer Advocate Igalia

...

The overall theme of this talk is ‘where do web standards come from and why?’ ... and in many ways the answer is it’s complicated and we still don’t truly know how to do this well. Many great ideas never make it as standards, some jump the queue and get implemented too fast, and so on.

It’s not mean or heretical to question how this happens, or to ask how it could be done better.

So let’s look at canvas. When creating this spec the team tried to pave as many cowpaths as possible; and the spec was made with awareness of SVG and MathML, which had already landed in browsers.

Canvas was introduced by Apple in 2004. While it paved a cowpath, it was not one that came from the web – it was built on contemporary graphics and animation libraries. This is where things like context/ctx come from; and specialised things like Uint8ClampedArray (that is really just a way to store a colour value).

Around the same time, Apple was launching a new version of Mac OS including Dashboard widgets, which had a low-level API. So people were interested in playing with these.

(Some demos of generating shape-based versions of raster images – a neat evolutionary demo)

Back to canvas – Jen Schiffer made a canvas version of MS Paint, and went on to work at Glitch, which created a tool called pixelatize. You can put an image in and generate a canvas version, and play with pixel size and colour snap.

Another super cool trick that canvas can do is hack each frame of a video during playback. To go super meta, Brian created a pixel version of Jen’s talk “literally everything is pixel art”.

More-practical applications of canvas are to build full-featured graphics editors that work right in the browser. But that makes it easy to think “hey so canvas is pretty niche…”. But it’s used really widely to display video, render maps, charts and other dataviz. Often the complexity of all this is abstracted away behind APIs.

Links:

So with most of the uses actually being really complex, how do you improve canvas? What cowpaths can be paved? Photoshop-in-the-browser is cool, but not something you need to replicate frequently.

We don’t just need high level features, those features need an architecture underneath that allows people to rethink, remix and adapt.

Canvas had tech debt – like doing everything on the main thread, which led to janky rendering and blocking input. Seriously-heavy tech becomes a big problem for people in rural areas and developing nations, where internet connections are still slow and expensive.

But performance will catch up with all of us, as browsers keep getting bigger but also keep getting embedded in a staggering array of devices. They’re cheaper, with less processing power and they update slowly if at all. So while your desktop browser is going to be fine, appliances with web-tech UI need things to be light and fast.

Igalia maintain WPE and care about this a lot. One of the things that appeared to address this tech debt is OffscreenCanvas, which allows developers to offload processing to separate threads.

The API is incredibly small, it’s the effects that are big. You can do big, heavy things which can just take as long as they take, without making your interface unusable.
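As a rough sketch of the pattern (the worker file name and drawing code are hypothetical), the main thread hands the canvas over and the worker does the heavy drawing at its own pace:

```javascript
// Feature-detect before using OffscreenCanvas, since support varies.
function supportsOffscreenCanvas() {
  return typeof HTMLCanvasElement !== 'undefined' &&
         'transferControlToOffscreen' in HTMLCanvasElement.prototype;
}

if (supportsOffscreenCanvas()) {
  const canvas = document.querySelector('canvas');
  const offscreen = canvas.transferControlToOffscreen();
  const worker = new Worker('render-worker.js'); // hypothetical file
  // Transfer ownership of the canvas surface to the worker thread.
  worker.postMessage({ canvas: offscreen }, [offscreen]);
}

// Inside render-worker.js the drawing can take as long as it takes
// without blocking input on the main thread:
// onmessage = (e) => {
//   const ctx = e.data.canvas.getContext('2d');
//   ctx.fillRect(0, 0, 100, 100);
// };
```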

(Demos drawing fractals; and the more-practical demo of drawing maps without locking up user input.)

Link: https://www.w3.org/community/maps4html/2020/04/07/better-web-maps-with-new-browser-features/

Evolution is interesting. When electricity first reached homes, it was only for lighting. But people made other things like fans and plugged them into light sockets. Some of these ideas were good but the implementation was dangerous, so we developed power plugs and more-efficient motors to put in fans.

Houdini is an interesting development that lets people plug straight into the underlying architecture of CSS. Custom paints in Houdini borrow performance patterns from OffscreenCanvas – you don’t have to reinvent everything, the idea has obvious links. You can look around at all the pieces that are currently available, and often use them already; or at least use them to imagine the next solution.

Perhaps you want an image that can be panned and zoomed… you can go ahead and make a web component for that. But then when you share it people might want to add more affordances, like pan/zoom on non-touch devices. The idea can be rapidly iterated and improved.

Link: https://open-ui.org/

When good ideas emerge and stabilise, they make great candidates for standardisation.

There is a good model around dictionaries. Dictionaries effectively standardise language, they don’t invent it. New words can appear, become common vernacular, then eventually get added to the dictionary – then it’s an ‘official’ word.

So how do we capture the slang of the web? What is the web’s vernacular, which parts of that will disappear quickly, and which should become standards?

The HTTP archive is now recording the use of custom elements, giving the ability to watch for ideas and trends. There may be a better way to do this; and the sample size is too small for any of the ideas to be ready for standards track… but it is a start for measurement, and it gives authors one more reason to go and try new things and be part of the story.

Invent the future, we’ll write it down.

@briankardell

The 2021 edition of dealing with files on the Web

Thomas Steiner, Developer Relations Engineer Google

...

(I had connection issues during this talk so the notes are brief)

Sure this is a remote event but let’s make the best of it and create a badge anyway… demo of creating a badge in the browser. Now this is a PWA so we can install it; handle offline issues and so on.

Some of the tricks used:

  • Use contentEditable to make an in-browser editor
  • Access system clipboard to copy some text
  • Load files from the file system
  • Copy an image from the browser into system clipboard
  • Save the whole badge as an HTML file
  • Pre-processing the HTML to remove UI and inject everything to make the downloaded HTML file self-contained
  • Intercepting ctrl/cmd+s keystrokes to save the processed badge, instead of triggering the browser to download the page
  • Setting up drag and drop to put avatars into the badge, instead of the browser opening the file
  • Storing files in the browser
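Two of those tricks, sketched together with plenty of assumptions – processBadge() is a hypothetical function returning the self-contained HTML, and the File System Access API is only available in some Chromium-based browsers, hence the fallback:

```javascript
// Save the processed badge HTML, preferring the File System Access API
// and falling back to the classic anchor-download trick.
async function saveBadge(html) {
  if ('showSaveFilePicker' in globalThis) {
    const handle = await showSaveFilePicker({
      suggestedName: 'badge.html',
      types: [{ description: 'HTML', accept: { 'text/html': ['.html'] } }],
    });
    const writable = await handle.createWritable();
    await writable.write(html);
    await writable.close();
  } else {
    const a = document.createElement('a');
    a.href = URL.createObjectURL(new Blob([html], { type: 'text/html' }));
    a.download = 'badge.html';
    a.click();
  }
}

if (typeof window !== 'undefined') {
  // Intercept ctrl/cmd+s so the browser doesn't download the raw page.
  window.addEventListener('keydown', (e) => {
    if ((e.ctrlKey || e.metaKey) && e.key === 's') {
      e.preventDefault();
      saveBadge(processBadge()); // processBadge() is hypothetical
    }
  });
}
```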

Links:

@tomayac

Practical uses for Web Components

Ben Taylor, Full-Stack Developer Explosion

...

In 2019 Ben was working on a codebase that was a collection of different styles and approaches, which had built up over time. Moving away from this was a problem as new frameworks all wanted to own the whole codebase rather than allowing incremental change and improvement.

Web components gave them a way forward.

So what is a web component? In a basic sense, it’s a custom element with a name that includes a dash in it (none of the standard HTML elements have a dash):

<ben-spin>
Hello World
</ben-spin>

Under the bonnet you create a template to construct an encapsulated element with its own style and script code. Web components can be mixed with standard HTML elements.

Web components are great for making small utilities – things you repeat a lot can be pushed into a component. Timestamps and icons are a classic case for this. In JavaScript we’d make a helper without really worrying about it, so it’s good to take the mindset across to HTML.
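A tiny timestamp helper as a component might look something like this sketch (the <rel-time> tag name and the formatting rules are made up for illustration):

```javascript
// Pure helper: turn a past timestamp into a rough relative description.
function relativeTime(thenMs, nowMs = Date.now()) {
  const mins = Math.floor((nowMs - thenMs) / 60000);
  if (mins < 1) return 'just now';
  if (mins < 60) return `${mins} min ago`;
  const hours = Math.floor(mins / 60);
  return hours < 24 ? `${hours} h ago` : `${Math.floor(hours / 24)} d ago`;
}

// Register the custom element where the platform supports it.
if (typeof customElements !== 'undefined') {
  customElements.define('rel-time', class extends HTMLElement {
    connectedCallback() {
      const then = Date.parse(this.getAttribute('datetime'));
      this.textContent = relativeTime(then);
    }
  });
}
// Usage: <rel-time datetime="2021-09-17T09:00:00Z"></rel-time>
```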

What about replacing big utilities? Ben had a page with an old jQuery UI carousel. As with anything built on that framework, it required a download of jQuery + jQuery UI. Instead Ben created a web component that worked on its own with native tech.

These are all choices!

Web components work anywhere, including inside frameworks like React. You don’t have to convert everything, you can just use them for one thing. The styles can be scoped as well so nothing will leak out if you don’t want it to.

Bonus link from attendee chat: codepen.io/abottega/pen/BapjYpN (comparison of HTML/CSS, web component and React).

There’s a lot to like about web components:

  • they’re a web standard, so they’re quite future proof (they won’t suddenly disappear or break)
  • they work with native code, no big frameworks to download
  • they work like any other HTML, so you can use them anywhere

...and they’re neat! They’re a great tool to have in the kit.

@taybenlor

What could you do with a neural network in your browser?

Ningxin Hu, Principal Engineer Intel

...

Ningxin is participating in the W3C Machine Learning for the Web community and working groups.

What’s the problem to be solved?

While ML and deep learning have been getting more prominent and important, they haven’t been moving into the web very quickly. While there are frameworks like TensorFlow.js (the JavaScript port of TensorFlow), there is a big performance gap between web and native. It’s particularly obvious on mobile devices.

This is due to things like VNNI and DSP – designed to accelerate performance – being available to native processing, but not exposed through web APIs. The web is disconnected from the hardware that’s best for ML.

One proposal to address this is the Web Neural Network (WebNN) API, an abstraction for neural networks in web browsers. The WebNN API is a specification for constructing and executing computational graphs of neural networks. It interoperates with WebGL, WebGPU and WebAssembly.

Link: w3.org/TR/webnn

There are three major interfaces within WebNN API:

  • MLContext
  • MLGraph
  • MLGraphBuilder

(Walkthrough of “Hello Tensors”)
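A hedged sketch in the spirit of that walkthrough, following the WebNN draft spec as it stood around this time (exact API shapes have shifted between spec revisions, so treat every call here as illustrative): build a graph computing c = a + b over 2×2 tensors.

```javascript
// "Hello tensors": construct and run a tiny computational graph with WebNN.
async function helloTensors() {
  const context = await navigator.ml.createContext();   // MLContext
  const builder = new MLGraphBuilder(context);          // MLGraphBuilder
  const desc = { type: 'float32', dimensions: [2, 2] };
  const a = builder.input('a', desc);
  const b = builder.input('b', desc);
  const c = builder.add(a, b);                          // a node in the graph
  const graph = await builder.build({ c });             // MLGraph
  const outputs = { c: new Float32Array(4) };
  await context.compute(graph,
    { a: new Float32Array([1, 2, 3, 4]),
      b: new Float32Array([1, 1, 1, 1]) },
    outputs);
  return outputs.c; // element-wise sum of the two input tensors
}
```

Since browser implementations are still to come, you’d run this against the WebNN polyfill.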

Demos:

  • browser instance of detecting objects in images and videos
  • Electron application wrapping the object detection sample

Links:

While browser implementation is yet to come, there is a polyfill so you can try this out now.

The State of Augmented Reality in the Web Platform

Ada Rose Cannon, Developer Advocate, co-chair Samsung Internet, W3C Immersive Web Groups

...

Ada is co-chair of the W3C immersive web groups, developing web APIs to allow web browsers to use immersive hardware such as virtual reality headsets and augmented reality equipment.

The two main ways people use AR right now:

  • dedicated headsets
  • handheld devices like smartphones

These devices use WebXR, an API that gives you access to the positional information of immersive hardware, so you know what the user is looking at and what they’re doing.

Most modern browsers support this, although iOS users will need to wait for Safari to catch up.
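A hedged sketch of checking what the current browser can actually do before offering an immersive experience (the return values are just illustrative labels):

```javascript
// Decide which immersive mode, if any, this browser/device can offer.
async function xrMode() {
  if (typeof navigator === 'undefined' || !('xr' in navigator)) {
    return 'none'; // e.g. current Safari
  }
  if (await navigator.xr.isSessionSupported('immersive-ar')) return 'ar';
  if (await navigator.xr.isSessionSupported('immersive-vr')) return 'vr';
  return 'inline';
}
```

You could call this before deciding whether to show an “Enter AR” button at all – progressive enhancement, as the talk goes on to describe.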

Demo: RollARcoaster

It’s important to note WebXR is designed to be futureproof, to avoid early missteps like WebVR packing such a huge range of features under one spec that no single device would ever support it. The new spec uses modules to make things more flexible and supportable.

In other words – you can use WebXR to build VR and AR experience using progressive enhancement principles. For example in VR you provide the whole environment, but in AR you shouldn’t provide things that already exist – like floors and ceilings.

When the user is in virtual reality, they exist in an infinite void…

That means there is no limit to the size of the objects you can place in VR. But in AR, space is no longer infinite and you need to fit objects accordingly.

You also need to consider things like lighting and reflections. In VR these come from virtual content; in AR it needs to estimate from the user’s surroundings.

The next API to discuss is DOM Overlay – instead of being decoupled, you can keep using normal web content like buttons, links and forms… and have your underlying scene react to it. You can manage the transitions between the scene and the DOM Overlay using an event listener for beforexrselect.
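A minimal sketch of that event listener (the overlay element id is illustrative): without it, a tap on a 2D button would also fire a ‘select’ on the 3D scene behind it.

```javascript
// Stop taps on the DOM overlay from also triggering XR select events.
function suppressSceneSelect(overlayEl) {
  overlayEl.addEventListener('beforexrselect', (event) => {
    event.preventDefault(); // handle the interaction in the 2D UI only
  });
}

if (typeof document !== 'undefined') {
  suppressSceneSelect(document.getElementById('xr-ui')); // id is hypothetical
}
```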

This also opens up some new usability challenges around where to place DOM overlays. Obscure the whole scene? Place it into the scene and let the user access it as an object? These questions and their best implementations aren’t resolved yet.

The hit-test API lets the developer access the position of objects in the space.

Anchors help solve an AR problem, balancing the practicalities of startup time against real-world objects. As the system learns more, mapped objects may ‘drift’ in the scene as its understanding updates. Anchors let you define a point that fixes a virtual object to a real object, updating when the information about that real object changes. It goes together with ar-hit-test.

Dealing with space and time is interesting where there is no constraint on the ‘space’ you are in. We could assume that a ‘zero point’ could be set on earth, but we already have AR/VR equipment being used in space where that doesn’t make sense as a base coordinate. So coordinates have to be relative.

Also if you have too much going on and things slow down, you can break the illusion you are trying to create… or worse, make the user ill. So the APIs include estimation capabilities to work out where a pose will need to be shown.

More APIs are being created, so if you are interested you should give feedback to the working group.

Cool stuff that’s coming:

  • depth sensing for object occlusion
  • real world geometry API
  • raw camera access to enable building custom extensions
  • in-VR navigation – moving between immersive websites without leaving VR; this has some serious implications for security and safety
  • geographic alignment

Links:

@AdaRoseCannon

PWAs & Project Fugu: Closing The Relevance Gap

Alex Russell, Partner Program Manager Microsoft Edge

...

Why did they start down the tracks of PWAs and Project Fugu? Consider the evolution of features over time. They may start as single-vendor/single-OS hardware or software; but over time become widely supported; or even supported by meta-platforms that sit across multiple lower-level platforms.

The press is invariably focused on the usually-proprietary bleeding edge, but if the meta-platforms don’t keep up you get a relevance gap. The potential for new things is to some extent limited by the commonly supported features of the platform.

The early web allowed content delivery and small amounts of interactivity, so sites like news and weather flourished as they worked well with those capabilities. Email took longer to reach the web as it was harder to bring the right kind of interaction to the web. The turning point was arguably Gmail, which was built off the back of the AJAX revolution.

A similar path has played out with web videoconferencing. Over the years hardware has made cameras ubiquitous; and web technologies made in-browser VC viable.

It’s often the combination of existing pieces that creates new things. We could always draw boxes on the screen, but what if we could put a WebRTC stream into one of those boxes? Video conferencing. What if we add access to the GPU and a gamepad? Web gaming.

Developers don’t experience platform progress in a straight line. They make broad-grained, infrequent bets with multi-year impacts. It’s difficult and expensive to change horses mid race. This is why teams will tend to choose safer options in the meta-platform commoditized layer. But what if the meta platform was able to close the relevance gap faster?

The core platform loop is a common pattern – the things teams make with available technology helps support more people adopting that technology. The core platform developers can’t make all the killer apps that rally people to the platform.

PWAs emerged as a way to try to level the playing field between installed/native apps and web apps. To earn their place on someone’s home screen, they needed to be as appealing and useful as an installable app. This was also required to win business decision makers over to the idea of web technology.

So PWAs paved the way for things like notifications and offline support, beyond basic installability.

By 2017 Microsoft started bringing PWAs to desktops, which allowed businesses to re-use their codebases for more platforms instead of building custom for all of them. Even though PWAs came out of Google, it was clear MS could see the potential.

Making it possible for web apps to provide value in more ways, on more devices, changes the entire picture for developers (who need to decide what to learn in their career) and businesses (who need to decide which platforms to invest in).

Project Fugu is a follow on from work that started in 2017, trying to make web apps first class citizens on desktops. It’s an open collaboration within the Chromium project, with a lot of heavy hitters involved. Everything is built in the open. The value of designing in the open is very high, particularly combined with ease of access/participation for developers.

The project is called Fugu because they are so well aware of the security risks. One “wrong cut” and bad things can happen! Fugu is up front about embracing the potential and not just shying away due to the risks.

Safe, secure access to things like Bluetooth and NFC opens up tremendous opportunities for the web. It’s worth the effort required to find a way, because the payoff is so big. Serial access is likely to be a big deal for makers – imagine using your IoT hardware without having to learn a ton of C!

There are a lot of things in the works – background app updates, barcode scanners, even firmware delivery secured to a specific trusted website. These are basic operating system conventions, making them available in the browser opens up a lot of opportunities.

Links:

So is all this truly viable? Lots of platforms have failed because they couldn’t keep up and stay relevant. Alex certainly thinks so! Major vendors have all committed to this pathway, to keep the web relevant for a long time to come.

@slightlylate

Conversational APIs

Léonie Watson, Director and co-founder Tetralogical

...

(This presentation featured a lot of voice recordings, so for the full effect you really need to go back to the video!)

We humans have been trying to talk to things other than ourselves for a remarkably long time! People were trying to mimic speech as far back as the 1700s, and we can thank Bell Labs for giving Kubrick the idea of a computer singing “Daisy”.

Conversational interfaces remain constrained by technology.

For example for many years TTS (text to speech) really only had one voice, which was robotic and male… the “female” version was just the same voice at a higher pitch.

Concatenative TTS created better speech, enabling richer simulation of gender, age and culture. But it takes a great deal of recorded speech to get to a basic level, and it’s practically impossible to make it perfect. There are too many possible segments and sequences of sound to cover all possible moments in speech with pre-recorded sounds.

There’s also a problem that most existing technology assumes binary gender.

Demo: Q, the world’s first genderless voice assistant.

Slide (transcript of speech by Q) - Hello, I'm Q, the world's first genderless voice assistant. I'm created for a future where we're no longer defined by gender, but rather, how we define ourselves. My voice was recorded by people who neither identify as male nor female and then altered to sound gender neutral, putting my voice somewhere between 145 and 175 Hertz. But, for me to become the third option for voice assistants, I need your help. Share my voice with Apple, Amazon, Google, and Microsoft, and together we can ensure that technology recognises us all. Thanks for listening. Q.

Parametric TTS was intended to solve the shortcomings of previous TTS. It’s based on recorded speech, but instead of breaking it down into segments for re-sequencing it trains a vocoder. This greatly improved the ability to express a greater range of gender, age and race; but even then remained oddly flat in tone.

So the next step is neural TTS engines. The biggest difference here is supplying absolutely vast amounts of data. Google’s WaveNet, for example, is trained on samples from all of Google’s voice services, with millions of users. This helped with subtle differences, like two different English accents.

What about conversational interfaces in the browser? Apart from a brief moment of excitement with MSAgent in 1997, there wasn’t much activity until the Web Speech API appeared in 2012. This addresses speech recognition and synthesis (talking to your browser, and having it talk to you).

It’s a good interface but it’s currently limited to webkit.

(Walkthrough of the Web Speech API and some demos of its ability to speak, transcribe, control pitch and volume, voice selection, etc.)
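In the spirit of that walkthrough, a hedged sketch of the synthesis side (pitch, volume and voice values are illustrative):

```javascript
// Speak a phrase with a chosen voice, pitch and volume via the Web Speech API.
function speak(text) {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.pitch = 1.2;  // 0–2, relative to the voice's default
  utterance.volume = 0.8; // 0–1, relative to the user's system volume
  const voices = speechSynthesis.getVoices();
  // Available voices vary by browser and OS, so always have a fallback.
  utterance.voice = voices.find((v) => v.lang === 'en-AU') || voices[0] || null;
  speechSynthesis.speak(utterance);
}

if ('speechSynthesis' in globalThis) {
  speak('Hello from the Web Speech API');
}
```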

It’s worth noting that user permission is required to access hardware; and all settings are relative to users’ system settings like volume. So you can’t suddenly blare sound at maximum volume, which would be distressing.

You should also avoid changing settings once you’ve sent utterances into the voice queue – while it can work, it should be avoided.

There is a cross-platform issue that not all browsers and platforms support the same voices, so your code has to account for that (similar to choosing typefaces). While there’s some hope of improvement via the SSML specification, most browsers don’t support it yet (the story of the web!). Another possibility is Emotion ML (emotion markup language), again something for the future.

Curiously the browser gives a lot of compelling use cases – not just assistive technology, but browsers are adding reader modes intended for all users as part of the normal experience. So it makes sense to give authors/developers more options to make this expressive.

The CSS Speech Module addresses this! ...but as usual, nothing supports it yet. It should be noted that CSS has been anticipating speech in some ways right back to CSS 2.1. The CSS Speech Module has properties like speak, voice-pitch, voice-volume, voice-family, pause-after, etc.
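Under the draft spec, styling speech might look something like this sketch (property names from the CSS Speech Module; the selector and values are illustrative, and no browser supports this yet):

```css
/* Give alerts a distinct spoken treatment, the way we style them visually. */
.alert {
  speak: always;
  voice-family: female;
  voice-pitch: high;
  voice-volume: loud;
  pause-after: strong;
}
```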

So the future of the web includes being able to style the speech output of our content, just as we style the visual output. Speech will become part of our brand, it will be possible to design speech content to produce emotional reactions.

As we start doing more with conversational interfaces, the more nuance we will want to build into them. So with more adoption and more noise from developers, hopefully we’ll see support for these specifications improve.

Links:

@leoniewatson

Desktop PWAs. About time.

Diego González, Program Manager Microsoft Edge

...

The capabilities of the web platform are already pretty amazing, but the future of the apps themselves is critical for the future of the platform.

When PWAs appeared a few years ago it felt like the beginning of something magical. Developers could use their existing skills to create apps that were equal to native applications.

The web has an ability to persist after walled gardens have come and gone; but it still has to adapt to the changing paradigms and requirements over time. The current paradigm is heavily skewed to apps, so PWAs need to bridge the “native app gap”.

A2HS (add to home screen) was the beginning, the start of PWAs. But it’s already largely obsolete – it’s time to integrate directly with app stores and move to the desktop.

Desktop PWAs have a huge range of possible features – responsive design, OS theme support, custom titlebars, shortcuts, sharing from the app, sharing to the app, handling schemes, links, files, and access to the file system itself. There’s also badging and push notifications, but they’re well covered and won’t be included in this talk.

PWAs have a logo/lettermark, designed to be adapted and generally played with; but also to give PWA advocates something to use as a bat signal.

This demo will be a recreation of the PWA logo printer (‘pwinter’) as a desktop PWA. The app creates a PWA logo with a random colour combination.

Code: github.com/diekus/pwinter

Features and notes:

  • Installation on Windows includes options like pinning to taskbar, adding to desktop and so on, like normal applications.
  • You can make the app responsive, since desktop apps can be resized just like a browser and the screen may be in different orientations
  • control over application titles and control over OS level overlays like showing or hiding the title bar
  • light and dark modes that can respect the OS settings (media query)
  • share to other apps (email, social media etc) with the webshare API
  • Web Share Target allows desktop PWAs to be a share target, not just a share source (demos shows sharing a colour into the pwinter)
  • Scheme/protocol/URL handlers – eg. you can set a desktop PWA email client to handle mailto links, or create a custom scheme (web+pwinter). If there are multiple apps the OS will provide a disambiguation interface. This can hugely expand the scope and usability of an app.
  • Open files and save files back to the host file system – the pwinter can be one of the “open with” options in the OS.
  • (there was more, there’s a lot in this demo!)
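Several of these features hang off the manifest. A hedged sketch of the relevant members (names from the experimental manifest extensions demoed here; the values are illustrative, not taken from the actual pwinter code):

```json
{
  "name": "PWinter",
  "display_override": ["window-controls-overlay"],
  "protocol_handlers": [
    { "protocol": "web+pwinter", "url": "/?colors=%s" }
  ],
  "file_handlers": [
    { "action": "/open", "accept": { "image/svg+xml": [".svg"] } }
  ]
}
```

display_override drives the titlebar/overlay control, protocol_handlers registers the custom scheme, and file_handlers makes the app an “open with” option.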

While notifications were not in this demo, there is a callout to be very careful and respectful if you do use them. The goal is to make things useful and usable, and just because you can do something doesn’t mean you should.

Diego thinks we’ll eventually see PWAs as a commodity like other things we assume now.

To try out the full features of the demo, you will need to install Edge Canary; and you will need to go to about://flags in your browser (search “PWA”).

Four important things:

  1. register for an edge origin trial aka.ms/origin-trials
  2. be mindful of your manifest file – it’s critical to most of this github.com/w3c/manifest-app-info/
  3. expand your distribution via pwabuilder.com
  4. give feedback! @MSEdgeDev or contact Diego @diekus

Desktop PWAs empower an already great platform. This is just the beginning!

@diekus

Web app installs: Why, when, how

Penny McLachlan, Product Manager Chrome Web Platform, Google

...

The word “app” is now pretty globally understood to mean a digital experience. But what does that really mean?

Consider the common attributes of a website – it can run on any device, is quick to open, runs in the browser and can be linked. It might have more attributes like responsive design and offline support. Compare that with apps, which are usually ecosystem-specific, usually have to be downloaded and installed, don’t run in a browser or need a server. But they aren’t linkable.

But how do users feel about the website vs app experience? They largely 'visit’ a website, they use it; but they 'get’ an app, they possess it. These words reveal some important implications about how users think about them.

Quick trip down memory lane – when Gmail appeared it was a huge change. It moved email off your computer, made it accessible from any computer and everyone could have a private email address. Even though Gmail didn’t have all the same features as desktop clients, it became hugely popular very quickly.

So how do PWAs stack up? They have many of the attributes of native apps, but not all – and we’ll probably never reach 100% parity. But does that matter? Gmail didn’t have 100% parity with desktop email clients. And do all use cases actually benefit from being native apps? So we want to close most gaps but not worry about 100%.

You really can install a web app on all your devices right now – try it! However if you offer this to your users, remember they will have expectations that the app will work like other apps on that device.

The manifest file is the starting point for all of this. Don’t forget to set things like starting URLs and screenshots to improve the installation experience; and the display mode to set appropriate windowing. Most apps use standalone.

So how and when should you promote your web app install? Please, just do this when it makes sense! Focus on the user – how are you helping them get their job done? If you aren’t helping, don’t do it.

Use normal design and information architecture techniques to work out an appropriate time and place to offer to install the site as a PWA. Inside the hamburger menu is good – the user is already exploring and looking for something. It’s not as jarring as, say, a permanent app install button sitting full time in the header.

Landing pages aren’t bad either, since they are already used to promote your site or app. You can also use the ‘feed pattern’ – “keep reading even when you’re on the train”.

Don’t prompt too often; and let users dismiss prompts. Find moments that make sense.

Principles for promoting PWA installation:

  1. don’t be annoying
  2. if it doesn’t benefit the user, don’t promote it
  3. use context to help the user understand the value of installing your app

Don’t forget your analytics – there are events for the site being qualified for install, for the prompt being clicked and the app being installed.
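A hedged sketch of wiring those events into analytics – `beforeinstallprompt` and `appinstalled` are the real browser events, while `sendAnalytics` and the event-name strings are hypothetical stand-ins for your analytics client:

```javascript
// Sketch: report install-funnel events to analytics. `analyticsLog` and
// `sendAnalytics` stand in for a real analytics backend.
const analyticsLog = [];
function sendAnalytics(eventName) {
  analyticsLog.push(eventName);
}

function wireInstallAnalytics(target) {
  // Fires when the browser decides the site qualifies for install.
  target.addEventListener('beforeinstallprompt', () => sendAnalytics('install-eligible'));
  // Fires after the user has actually installed the app.
  target.addEventListener('appinstalled', () => sendAnalytics('app-installed'));
}

// In a browser: wireInstallAnalytics(window);
```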

You can check for related native apps and not offer a duplicate download (web.dev/get-installed-related-apps). There are benefits to PWAs though, even if there is a native app. PWAs may be much lighter/use less system resources.
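One way to sketch that check – `getInstalledRelatedApps()` is Chromium-only, so this treats “API unsupported” as “go ahead and promote”:

```javascript
// Sketch: only promote the PWA install when no related native app is
// already installed. Takes a navigator-like object for testability.
async function shouldPromoteInstall(nav) {
  if (!('getInstalledRelatedApps' in nav)) return true; // unsupported: promote
  const relatedApps = await nav.getInstalledRelatedApps();
  return relatedApps.length === 0; // skip the promo if a native app exists
}

// In a browser: shouldPromoteInstall(navigator).then(promote => { /* ... */ });
```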

There is also a mini-infobar in browsers; if you want to avoid it, you can listen for beforeinstallprompt and call preventDefault() on that event.
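A sketch of that pattern – suppress the browser’s automatic prompt and trigger it later from your own UI. The handler shape matches the BeforeInstallPromptEvent interface in Chromium browsers:

```javascript
// Sketch: capture the install prompt event and replay it on demand.
let deferredPrompt = null;

function handleBeforeInstallPrompt(event) {
  event.preventDefault();   // stop the automatic mini-infobar
  deferredPrompt = event;   // keep the event to trigger the prompt later
}

async function promptInstall() {
  if (!deferredPrompt) return null;
  deferredPrompt.prompt();
  const { outcome } = await deferredPrompt.userChoice; // 'accepted' | 'dismissed'
  deferredPrompt = null;
  return outcome;
}

// In a browser:
// window.addEventListener('beforeinstallprompt', handleBeforeInstallPrompt);
```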

Hopefully this helps you decide how to promote installation of your app, do it in a way users are happy with, and measure its success.

@b1tr0t

Publishing a PWA to App Stores

Maximiliano Firtman, mobile+web developer

...

OK so you have a PWA and you want to publish it into app stores – that’s the goal! Maximiliano is taking us on the journey…

The first station along the line is the PWA itself. We don’t want to make this terribly difficult to maintain, nor do we want to move out of native web tech.

The next step is a plan. We have to know which app stores are compatible with PWAs. There are the ones you’ll immediately think of:

  • Google Play
  • Apple app store
  • Microsoft store

...but there are lots more on top of that:

  • Samsung Galaxy store
  • Amazon app store
  • Kaios store
  • Huawei app gallery

PWAs will have to comply with each store’s rules. Some apps won’t be accepted! There may be specific UI requirements; and each one has its own business/charging model. Plus there may be extra restrictions, like the Play store rejecting content for children.

So you need to do your homework and plan the steps ahead of time.

  • you will need to register with each store, and this process can take weeks
  • you may need to pay a fee to register
  • you will need a privacy policy URL
  • choose a package ID
  • prepare a demo account for reviewers with test data
  • set all the metadata required for each store

The next station is launchers. Apps are basically zip files with compiled code, assets, etc – critically, the app itself is included. Launchers, by contrast, are basically an empty package, without the actual web app. So a PWA launcher is a native shell that launches the PWA; it has the icon, name, start URL and other metadata.

Your PWA has to be published to a URL before you can set up launchers.

  • Windows needs an APPX launcher – you can create it manually with Visual Studio or use pwabuilder.com
  • Android with Play needs an Android App Bundle (AAB) – you can use Android Studio, the bubblewrap CLI tool, or pwabuilder.com which is actually using bubblewrap under the bonnet
  • Apple App Store … Apple doesn’t officially say PWAs work, so you won’t find docs. But it has something known as WKWebView, plus App Bound Domains. These combine into a solution enabling publication of PWAs to the App Store. You need to use Xcode and write Swift or Objective-C.
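For the Play route, the Bubblewrap flow sketched below is roughly what pwabuilder.com automates for you – the manifest URL is a placeholder, and you’ll need Node.js installed plus the Android tooling Bubblewrap offers to fetch:

```shell
# Install the CLI, point it at your live manifest, then build an .aab
npm install -g @bubblewrap/cli
bubblewrap init --manifest=https://example.com/manifest.json
bubblewrap build   # output: an Android App Bundle for the Play Console
```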

Link to learn more: firt.dev/learn

So the next station is extensions. Can we extend the web platform? It is actually necessary! Chromium has a lot of APIs available. Webkit is more restrictive. Basically it varies a lot across platforms, but you can bridge between web and native. If you need motivation here, it’s how you get to in-app billing…

Remember to use the “p” in PWA – progressive enhancement.
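As one hedged sketch of that progressive-enhancement approach, using the Digital Goods API (the Chromium/Play path to in-app billing) as the feature being detected – everything falls back gracefully where the API is absent:

```javascript
// Sketch: feature-detect the Digital Goods API (Chromium + Play only)
// and fall back gracefully everywhere else. Takes a window-like object
// for testability.
async function getBillingService(win) {
  if (!('getDigitalGoodsService' in win)) return null; // unsupported: use web checkout
  try {
    return await win.getDigitalGoodsService('https://play.google.com/billing');
  } catch {
    return null; // supported, but no billing provider on this device
  }
}

// In a browser: const billing = await getBillingService(window);
```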

The final station is app stores, but let’s take a quick side trip to the enterprise world for a moment.

You can deploy PWAs to corporate-managed devices on iOS and Android. iOS uses Apple Configurator; on Android it’s the managed Google Play iframe. Remember this is only for enterprise users, not end users!

Back to app stores (for end users)...

  • Apple – App Store Connect
  • Google Play Console
  • Microsoft Partner Center

Be prepared for initial rejection! You may need to do some iterations to get your app published.

But this is the end of the journey, you have your app in stores now!

@firt

How to outsmart time: Building futuristic JavaScript applications using Temporal

Ujjwal Sharma, Compilers Hacker, Igalia

...

What is Temporal? Well, Date is severely outdated and people don’t like using it; and while there are popular libraries for date/time handling, they don’t solve all the problems.

So Temporal is designed to replace Date with a more powerful and usable API for handling dates and times in JavaScript.
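A small taste of the API as a sketch – at the time of the talk Temporal isn’t shipped in JavaScript engines, so running this needs a polyfill such as @js-temporal/polyfill, and the dates below are made-up examples:

```javascript
// Sketch: Temporal's explicit, immutable types vs. Date's mutable mess.
const today = Temporal.Now.plainDateISO();           // today's calendar date
const launch = Temporal.PlainDate.from('2022-03-01');
const until = today.until(launch);                   // a Temporal.Duration
console.log(`Days until launch: ${until.total({ unit: 'days' })}`);

// Time zones are first-class, not an afterthought:
const sydney = Temporal.Now.zonedDateTimeISO('Australia/Sydney');
console.log(sydney.toString());
```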

For a lot more about the history of Temporal: https://www.youtube.com/watch?v=3F2A708c1o0

The spec is now approved, which means it’s time for implementation. It’s one of the biggest proposals they’ve had in TC39.