Web Directions Code 2019 was held on June 20 & 21 at the Arts Centre Melbourne.


Trends

I’m going for individual trends this time. Despite strong sartorial contenders Ben Dechrai (utilikilt and drinking horn) and Klee Thomas (whose shirt was bananas), I’ve got to give this to Yuriy Dybskiy for ordering tea at the bar. “Wait, that’s an option?! GAME CHANGER!”

Talks

The Evolution of Web Development – Erwin van der Koogh

To understand the evolution of web development, we need to look at where we are and how we got here.

  • 1842 – We should start at the start, paying homage to Ada Lovelace – who created the first program.
  • 1942 – We fast forward 100 years to one of the first digital computers, the ENIAC. The problem with the ENIAC was that the men who’d built it were shipped off to war, leaving six women – the “ENIAC Six” – to work out how to operate it. They were largely forgotten by history, which is crazy given they made such a massive contribution to the invention of programming.
  • 1952 – Grace Hopper came up with compilers, to make programming easier and more accessible.
  • 1969 – Bob Taylor thought about a computer network that could span the globe – Arpanet.
  • 1978 – Ward Christensen came up with BBSes, while snowed in with a modem. This was an amazing step as it linked computers around the world, anywhere you could connect by phone. It also led to the first arguments over using the phone to call someone while someone else was on the BBS… and each phone line could only serve one user at a time.
  • 1984 – Radia Perlman invents the Spanning Tree Protocol, which allowed people to connect multiple networks. The impact of the ability to bridge networks at scale is difficult to fully convey.
  • 1985 – Nicole Yankelovich created Anchor Links, letting people link between places in a marked up document.
  • 1989 – Tim Berners-Lee creates HTTP and HTML, under the world’s most under-stated job title “Web Developer”.
  • 1992 – Marc Andreessen creates Mosaic, which had the momentous ability to show text AND AN IMAGE! That really brought things to life for a lot of people.
  • 1993 – Rob McCool created CGI (the Common Gateway Interface, of cgi-bin fame), which hugely changed the capabilities of the web. It let you create HTML on the fly by calling out to a C program.
  • 1994 – Rasmus Lerdorf creates the Personal Home Page/Forms Interpreter – PHP. Another game changer. CGI was too hard for a lot of people, but PHP was much more accessible.
  • 1995 – Håkon Wium Lie creates CSS. This allowed us to stop using tables and spacer GIFs, a dramatic step forward for web development and design.
  • 1995 – Brendan Eich creates JavaScript. Just a little thing…
  • 2003 – Mike Little makes “The Comment That Changed The Internet”, suggesting they fork ‘b2’ – which became WordPress.
  • 2005 – Anne van Kesteren, XMLHttpRequest – the standardised version of what Microsoft had created. This kicked off the AJAX era.
  • 2006 – John Resig, jQuery. Hugely changed the game for adding behaviour to web pages, it was more accessible and efficient.
  • 2011 – Sophie Alpert (lead and top contributor) releases React, which changed the narrative of developing for the web.

So what we are seeing over the course of a few decades is the progression from flat documents, to logic on servers, to logic in browsers. This kind of progression keeps happening in IT!

  • Back in 1943 we had mainframes, and people thought the world would only ever need a handful of them.
  • In 1977 even the founder of DEC couldn’t imagine people wanting a computer in their home.
  • Then in 1981 Mark Dean came up with the personal computer.

We have a sort of pendulum effect – we go back and forth between centralised and decentralised computing, we go back and forth between the server and client.

But we are really moving along two axes – Cool vs Easy. How cool is it, how much can it do; vs how hard is it to do those things.

  • Mainframes weren’t very cool or easy
  • Traditional apps and mobile apps are cooler but not particularly easy
  • Front-end apps are cool and arguably easier


Photo: Angel’s Landing, a difficult path but an incredible view.

But where do we go from here? How do we get to cool AND easy!?

Level up the browser:

  1. WASM, which will change the scope of viable client-side applications
  2. variable fonts, check out variablefonts.dev
  3. Web Speech API, which can change how we interact in huge ways
  4. Web USB, making the browser closer to being an operating system
  5. Media Recorder, another aspect of conversational interfaces
  6. Progressive Web Apps, how do we make a great mobile experience without writing separate apps
  7. Push Notifications, another step towards PWAs
  8. WebGL, 3d-accelerated graphics inside your browser
  9. Web Monetization
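
As a small taste of items 6 and 7 above, registering a service worker (the backbone of a PWA) and subscribing to push might look roughly like this – a hedged sketch only, where the service worker file name is a placeholder and the VAPID key and message sending come from your own server:

async function enablePush(vapidPublicKey) {
  // Register a service worker - the building block of a PWA
  const registration = await navigator.serviceWorker.register('/sw.js');

  // Ask permission, then subscribe this browser to push messages
  if (await Notification.requestPermission() === 'granted') {
    const subscription = await registration.pushManager.subscribe({
      userVisibleOnly: true,
      applicationServerKey: vapidPublicKey, // Uint8Array of your server's public VAPID key
    });
    // Send `subscription` to your server so it can target push notifications
  }
}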

Level up the server:

  1. GraphQL, which is a massive change to how we create APIs. REST was always pretty restrictive. “Dave was right…” This really changes how we think of the backend, the whole relationship between frontend and backend is different.
  2. We are reaching a point where we can have true separation and have pure business logic on the back end. They’re stable, so we’ll slow down the rate of change (and innovation) on the back end. It’s hard and expensive to change backends, but you also don’t need to change them as often. They’re also exceptionally logical as it’s computers talking to computers. Frontends change a lot; and users are very random creatures – we still need to test UI with humans.
  3. We currently have static hosting or server hosting. Static is scalable, simple and cheap. Server is flexible, deployment is clear, SEO is good, there are lots of opportunities to tune and improve performance, it works without JS (which will be a huge deal for the next billion users), and we can do personalisation and localisation easily. So what do we do? SERVERLESS! Which is… MAGIC! but what next?
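
On the GraphQL point above: the big shift is that the client asks for exactly the shape of data it needs in a single request, rather than stitching together several REST endpoints. A rough illustration (the endpoint and fields are made up):

async function loadProfile() {
  const response = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: `{ user(id: "42") { name avatarUrl posts { title } } }`, // just the fields the UI needs
    }),
  });
  const { data } = await response.json();
  return data.user;
}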

Introducing FABs: Frontend Application Bundles. Trying to create Docker for front-end. Remembering Docker didn’t invent anything so much as it was the One Ring To Rule Them All. FABs aim to be 100% portable, so you can build once and run in many places with a lovely light bundle.

@evanderkoogh

The Metric System: Making Correct Performance Measurements – Henri Helvetica

(No it’s not really his last name ;) But it’s a great nom de plume!)

Evolution of web development doesn’t come without evolution of the web itself. The web wouldn’t be where it’s at today without evolution of the web browser. In the early days there were spirited debates about whether including images was a good idea, as it would potentially encourage the wrong kind of content and publisher. But in the end the evolution opportunities won out.

A lot of evolution has been around resource loading. The Firebug tool was created to tell you what was happening when your page first loaded, then what happened after that. It was an early example of starting to really measure performance, which is important because you can’t improve what you can’t measure.

Anything you can measure is essentially a metric. Metrics let you create comparisons.

metric: (noun) a system or standard of measurement

So what are the correct performance measurements?

Chart: W3C performance timing information

When first presented with this wall of numbers, it was hard to draw much from them.

  • onLoad was a huge number that people used to rely on very heavily, but it doesn’t convey much about modern experiences.
  • TTFB (time to first byte) gives an insight into the backend.
  • Start Render gives an indication of when the user may start seeing pixels painted to the viewport.
  • Speed Index is a visual representation of the load – it looks like a film strip, and gives a way to understand the visual experience on the way to Visually Complete. The Speed Index number is calculated from how long the page spends unrendered on the way to visually complete.
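
Several of these can be read straight out of the browser with the Navigation Timing API – a minimal sketch (the logging is illustrative; times are in milliseconds from the start of navigation):

const [nav] = performance.getEntriesByType('navigation');
console.log('TTFB:', nav.responseStart);                         // time to first byte
console.log('DOM content loaded:', nav.domContentLoadedEventStart);
console.log('onLoad:', nav.loadEventStart);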

Page performance is in equal parts proof and perception.

Performance is still hugely important because the next billion will be arriving on low powered phone handsets. In late 2016, mobile usage exceeded desktop usage for the first time. Emerging markets have a very high proportion of people on phones that can’t handle heavy content, or it will be prohibitively expensive to download the data.

So what newer metrics can we find?

Paint metrics:

  • First paint – this is when the browser first rendered after navigation, excluding default background render.
  • First contentful paint – the first time the user can start consuming content.
  • First meaningful paint – the paint that follows the “biggest layout change”. This metric is so broad and fuzzy it may go away, replaced by Largest Contentful Paint.

Interactivity metrics:

  • time to interactive – rendered and reliably usable/able to respond to user input
  • first input delay – measuring the time from when the user first interacts with the page, to when the browser is able to respond to that interaction
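
These paint and interactivity metrics can also be observed from the page itself with PerformanceObserver – a hedged sketch (the entry type names come from the Paint Timing and Event Timing specs; support for first-input was still settling at the time):

new PerformanceObserver(list => {
  for (const entry of list.getEntries()) {
    console.log(entry.name, entry.startTime); // first-paint, first-contentful-paint
  }
}).observe({ type: 'paint', buffered: true });

new PerformanceObserver(list => {
  const [firstInput] = list.getEntries();
  console.log('first input delay:', firstInput.processingStart - firstInput.startTime);
}).observe({ type: 'first-input', buffered: true });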

Experimental metrics – these get quite interesting…

Can we track the largest image on the page, or the biggest heading? A news website will really want to measure when the article’s headline loads. How can we detect or define the “most important” content?

Load is not one single moment, it’s an entire experience. – Philip Walton

Using good tooling will lead you to the right things to investigate. The Web Directions site is mostly great, but there’s a long time to first byte. So you know where to look to speed that up (the server). WebPageTest shows A ratings for all the other metrics, so overall it’s in good shape.

Where web performance is concerned, you can’t improve what you can’t measure. Use tools to give you insights into what’s going on, then ask questions about what you can do to improve.


@henrihelvetica

When Your Code Does a Number on You: Navigating Numbers in JavaScript – Meggan Turner

Lots of Beyoncé gifs ahead #sorrynotsorry

We use numbers for a lot of things, but generally to…

  • Count
  • Measure
  • Label
  • Identify

There are many, many different types of number…

  • Natural numbers (positive integers)
  • Integers (negative and positive whole numbers)
  • Rational numbers
  • Irrational numbers
  • Complex numbers
  • Transcendental numbers
  • Infinity and negative infinity
  • Real numbers – the superset of the rational and irrational numbers
  • ...and many more

We can communicate numbers in many different ways. We can say eight and people may think of 8, 八, or आठ.

We have many notations: base 10, binary, octal, hex, scientific…

JavaScript follows the IEEE 754 standard for floating point arithmetic. This is why 0.1 + 0.2 !== 0.3 in JS (and in many other languages that do unexpected things due to floating point maths).

This limits the range we can safely represent, although it runs into the quadrillions so it’s generally not a big issue. Precision is also limited: JS numbers give you roughly 17 significant digits.

Everything on computers ends up in binary, which can’t exactly represent most decimal fractions. Floating point arithmetic has to work around this by encoding numbers. You can break a 64-bit number down into its components: the sign (1 bit), the exponent (11 bits) and the significand (53 bits of precision, 52 explicitly stored).
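
Just to illustrate that layout, you can peek at the raw bits of a number from JS with a DataView (a throwaway sketch, not from the talk):

const buffer = new ArrayBuffer(8);
const view = new DataView(buffer);
view.setFloat64(0, 0.1); // store 0.1 as a 64-bit float

const bits = [...new Uint8Array(buffer)]
  .map(byte => byte.toString(2).padStart(8, '0'))
  .join('');
console.log(bits.slice(0, 1));  // sign bit
console.log(bits.slice(1, 12)); // 11-bit exponent
console.log(bits.slice(12));    // 52 explicitly stored significand bits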

Fractions are tricky. 1/3 is a recurring decimal, 0.3333…, so we round it. And you know this is painful when you are trying to create a three-column layout…

How big an issue is this? It depends what the number relates to. If you were measuring a light year, a rounding error of 0.00000000000000004 ly works out to roughly the size of a bowling pin. In most applications it just isn’t big enough to matter; but in something like banking it can become a really big problem very quickly.

Range: size does matter. We make a trade off between range and precision.

If you type 1.7e308 into your JS console, you get 1.7e+308; but if you type in 1.8e308 you get Infinity.

Number.MAX_VALUE
Number.MIN_VALUE

We have names for big numbers going up to a centillion (a 1 followed by 303 zeros), and JS goes beyond that. It’s a massive range but we can’t represent them all safely.

If you try…

Number.MAX_SAFE_INTEGER
Number.MAX_SAFE_INTEGER + 1
Number.MAX_SAFE_INTEGER + 2
Number.MAX_SAFE_INTEGER + 3
Number.MAX_SAFE_INTEGER + 4

...this will not do what you would expect! You get:

9007199254740991
9007199254740992
9007199254740992
9007199254740994
9007199254740996

Meggan ran into this problem while working on a very large music database at Jaxta. There was an error where the main artist on a listing was coming up as the featured artist. Which wasn’t right, even if Meggan does contend that Double Beyonce isn’t a bug!

The core of the problem was that two artist resources were coming back with the same ID. How was this possible? The actual IDs were returned as different numbers from the back end; but since they were beyond MAX_SAFE_INTEGER, JS was turning them into the same number. The solution was to move the IDs to 128-bit UUID strings.

So, long term, what can JS do about this? Enter BigInt: a new numeric primitive for JS, allowing representation of numbers beyond MAX_SAFE_INTEGER. A BigInt can be written with an n suffix or created with an explicit conversion:

100n
BigInt(100)

We can do arithmetic with BigInts, noting that division truncates to whole integers. You can’t mix BigInts and regular Numbers in arithmetic, although you can compare them (0n == 0 is true) and sort mixed arrays.

Plus if you try this…

let bigNumber = BigInt(Number.MAX_SAFE_INTEGER)
bigNumber + 1n
bigNumber + 2n
bigNumber + 3n
bigNumber + 4n
bigNumber + 5n

...you get the numbers you expect! Yay! This is a really important new feature for JavaScript. Sadly though, browser support is some way off.

Numbers in JS can be confusing, but when you understand why they were implemented that way it’s easier to figure out the bugs.

Things to look out for:

  • we only have 64 bits, so round numbers
  • remember MAX_SAFE_INTEGER
  • BigInt is coming!

@megganeturner

WebAssembly, your browser’s sandbox – Aaron Powell

Kicking off with a hello world example in WAT (the WebAssembly text format) that was far too long to write down ;) You can get the general vibe by looking at other hello world examples for WASM. The takeaway is that you have to do a lot of really low level stuff you’re probably not used to.

WASM is not the most readable language! Assembly languages are pretty complicated in general and not so accessible to most people. There is a reason compiled languages are popular.

But let’s step back and really start with What Is WebAssembly? It’s the evolution of asm.js, which was created to enable high-performance applications to run in the browser. asm.js was designed as a compilation target rather than a language you write by hand.

WASM is also designed as a target and not a language most people will write by hand. With WASM you end up with a compiled WASM binary produced from some other higher-level language.

The browser needs more C++ in it. – No one, probably ever

So what does WASM mean for JS devs?

It’s not a JS replacement. It’s possible to effectively replace JS, but it’s probably not the right way to go. Another platform tried to do that… it was called Flash. Browsers are really good at executing JS; and JS is great for things like DOM manipulation. WASM is good at other things. They complement each other.

Terminology:

  • Module – WASM binary
  • Instance – a WASM module running in a VM in the browser
  • Memory – an ArrayBuffer shared between JS and the VM. It only stores values. This does allow values to be passed between modules.
  • Table – like memory, but for function references
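
For context, loading a module and calling into it from JS looks roughly like this – a hedged sketch that assumes a compiled add.wasm exporting an add function:

// Inside an async function (or a module with top-level await)
const { instance } = await WebAssembly.instantiateStreaming(
  fetch('add.wasm'), // hypothetical compiled module
  {}                 // imports object - nothing needed for this example
);
console.log(instance.exports.add(2, 3)); // 5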

But why? Why would we use it?

  • Sandboxing – the more we build in the browser, the more risks there are (security, stability, memory leaks). Running in a dedicated memory space helps address that.
  • Shared client & server – not every server app is going to be written in JS, because it’s not the best language for all problems.
  • Image manipulation – eg. when we want to edit a photo. There are great libraries in C++ which we can compile to WASM and use in the client and server without differences.
  • Complex maths – WASM is better suited to heavy mathematics than JS

So what is it like to develop with WASM?

  • First, pick your language… something like C, C++, Rust, C#, F#, Go. Aaron picked Go as he wanted to learn it and it was a good opportunity.
  • Convert to a web application… Webpack is well suited to transpilation tasks, so Aaron used that
  • Create a UI over the top… React works well, although you can use anything – including (gasp) raw HTML

Demo: oz-tech-events.aaron-powell.com (github.com/aaronpowell/oz-dev-events, see also Aaron’s blog series behind the demo)

This was a super quick look at WASM. It’s here, it’s real, it’s in browsers. Be aware the binaries can be pretty big so think about appropriate cases to use it.

Use WASM to extend your app. Sandbox parts of the app that could benefit from being sandboxed; or push something to the client that could previously only be done on the server. Look at places it would be useful to reuse code.

While JS is great it’s not the best language for all cases – each language has its strengths and weaknesses.

aka.ms/learn-wasm
webassembly.org

@slace

Compression: slashing the bytes for faster web apps – Neil Jenkins

What is compression? Obviously it’s about making things smaller, to some a checkbox item you need to tick off… but ultimately compression is a bet that your CPU is faster than your network. This is a pretty good bet, most of the time.

Now if you take James Bond’s number and say ‘double-oh seven’, you’re compressing the number by referring to how many zeros come before the seven. Admittedly it’s not efficient at that length, but if you increase the zeros, 0000000000007 can be compressed to 12:07… twelve zeros and a seven.

This is an example of lossless compression, as you get the exact same number back. Many compression systems are lossy, for example JPG compression. But we’ll focus on lossless as we’re dealing with code, which must be lossless.

You have to compress before you encrypt – compressing after encryption doesn’t work, because encrypted data looks random. Compression can also open up security holes: if a response reflects both a value the attacker controls AND a secret key, compression makes an attack much faster and more likely to be successful.

But back to compression… most text files include a lot of repetition and that compresses really well. This does take some crunching however. Static files can be compressed at build time, which makes things nice and efficient. For dynamic responses, you have to compress on the fly with a cheaper/faster form of compression.

We have to remember that many network connections have much slower upload than download speeds, which means the request to an API will be transferred much more slowly than the response will be downloaded. So it would be nice to compress the request and not just the response.

JavaScript has a solution: Pako, a port of zlib to JS. It’s JIT-friendly and fast. zlib implements the DEFLATE algorithm (RFC 1951).

It does a few things…

(1) Remove duplicates.

var x; var y; – the repeated ‘var ’ doesn’t need to be stored twice; the second occurrence becomes a pointer back to the first 4 characters (“go back 7, copy 4”).

(2) Replace symbols based on frequency (Huffman Coding)

Some text characters are used much more than others, and you can replace a common 8-bit character with a much shorter code. So if you have a lot of ‘e’ in your document, you change it from 01100101 (8 bits) to something as short as a single 0.

...so now you have Pako, you can compress your fetch data before you send it. Again this is still a bet between CPU and network, so low-power devices will want the compression moved to a web worker to avoid blocking the thread. At Fastmail they added a shared dictionary to speed things up even further.
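
A hedged sketch of what compressing an upload with Pako might look like (the endpoint is made up, and the server needs to be set up to inflate the body – at Fastmail the real setup is more involved):

import pako from 'pako';

async function postCompressed(url, payload) {
  const body = pako.gzip(JSON.stringify(payload)); // Uint8Array of compressed bytes
  return fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Content-Encoding': 'gzip', // the server must expect a compressed body
    },
    body,
  });
}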

Compression isn’t magic, it’s maths. It’s a trade off between CPU and network. It makes things faster, but don’t forget to compress uploads as well as downloads.

github.com/nodeca/pako

How to AI in JS? – Asim Hussain

Website: aijs.rocks

Going to be looking at three applications:

TheMojifier.com

The app takes an image, detects faces, detects the emotion of the faces, then applies appropriate emoji over the faces. We can finally answer the age old question about whether the Mona Lisa is smiling or not!

But how does it calculate emotion on a face?

  1. detect facial features
  2. use a neural network

...so these are just incredibly easy. Right? Right. No?

Neural networks are based on biology. Neurons: dendrites->body->axons …easy to code right?

We create a graph, with nodes and edges replicating the idea of signals going into a body and passing along an effect. Signals are weighted; an activation function is called to crunch the numbers; and it returns the result. Yes, a lot of it is simple maths like multiplication. Forget the meme about if statements! There are no ifs! Choosing the right activation function and calculations can be tricky of course.
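
To make that concrete, a toy sketch of a single neuron – weighted inputs summed with a bias, then squashed by an activation function (sigmoid here; everything about it is illustrative):

const sigmoid = x => 1 / (1 + Math.exp(-x));

function neuron(inputs, weights, bias) {
  // Weighted sum of the incoming signals, plus a bias term
  const sum = inputs.reduce((total, input, i) => total + input * weights[i], bias);
  // The activation function decides how strongly the neuron "fires"
  return sigmoid(sum);
}

neuron([0.5, 0.8], [0.4, -0.6], 0.1); // a value between 0 and 1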

The inputs in a real scenario are the seed data. Facial recognition takes in a set of example face images which have been classified by humans. In an assisted learning process, you’ll initialise with random numbers; compare the initial outputs with human-determined scores; and use back propagation to tune the numbers. Once the outputs match what a human would apply, it’s been tuned.

There is already a library of human-rated emotion images. There is in fact a commoditisation of machine learning problems. Azure provides the Face API, which takes an image and returns JSON with face attributes including emotion. That’s what Asim used to create TheMojifier.

Neural networks are incredibly powerful, but simple to understand. Don’t assume you have to build everything at this point, do some searching to see what’s already available.

Tensorflow, Mobilenet & I’m fine

TensorFlow announced TensorFlow.js in 2018 and it really is TensorFlow written in JS (there is a version you can load off a CDN). This lets you train models, or load pre-trained models right in the browser. It’s a great way to get started.

So you can set up image recognition in four lines of JS. That’s pretty incredible.

github.com/tensorflow/tfjs-models
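
Those “four lines” look roughly like this using the pre-trained MobileNet model from that repo (it assumes the TensorFlow.js and MobileNet scripts are already loaded, and that there’s an image with id “photo” on the page):

// Inside an async function
const img = document.getElementById('photo');
const model = await mobilenet.load();          // pre-trained MobileNet
const predictions = await model.classify(img); // [{ className, probability }, ...]
console.log(predictions);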

Azure provides an image description tool, Computer Vision …which Sarah Drasner used to create an automatic ALT text demo. Unfortunately the accuracy wasn’t fantastic, which the internet in its usual way was kind enough to very politely and respectfully point out.

TensorFlow.js doesn’t have any dependencies; MobileNet is a simple way to analyse images; give it a go. For bigger applications you’ll want something with an API and a larger training pool backing it up.

Image2Image

It uses a Generative Adversarial Neural Network (GAN) to take one image and generate another.

Demo: turning outlines into cats

This uses a generator to create an image and a discriminator to decide if the image is a real cat (against a library of cat pics) or a sketch-derived cat pic. The two fight it out with each other (hence adversarial) until the discriminator can no longer tell if the image it’s been given is real or generated. So over time as the two get better the result gets better.

It’s not just pictures though, you can generate video; generate multiple outputs; etc.

github.com/NVIDIA/vid2vid

The input can be anything – not just outlines or segmented images. It can just be text describing an image.

github.com/hanzhanggit/StackGAN

How long until you just write “I want an ecommerce site, blue, using paypal…”?

GANs learn to generate new images, although they take a lot of compute to train.

...

land.asim.dev/tfjs – Asim is considering writing a book, sign up if you are interested.

Mojifier tutorial: aka.ms/mojifier

@jawache

In conversation with a browser – Phil Nash

Bots have been a hot topic lately, but they’ve been around for ages – Eliza was built back in the 60s, possibly to quietly prove they didn’t work very well. It was mostly just pattern matching.

But technology moved on and SmarterChild gained a surprisingly big following, despite being a ton of preset responses (this one was if statements).

There have been bots on IRC, SMS, then suddenly Slack invented bots! (general laughter)

Now we have in-home bots – Alexa and Google Home devices.

So the question is, how do we build our own conversational assistant using the web?

We have the Speech Synthesis API (text to speech), which allows you to have the browser speak with just a couple of lines of code. Browsers provide different voices to allow a little bit of customisation. This works in everything except IE11.
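
Those couple of lines, more or less (the phrase spoken is obviously just an example):

speechSynthesis.speak(new SpeechSynthesisUtterance('Hello, Web Directions'));
// speechSynthesis.getVoices() lists the available voices if you want to customise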

Speech Recognition API does what it sounds like, however it doesn’t have much support yet. Also Chrome sends all the data to the Google Cloud Speech API, which is likely to bother many people who are concerned about privacy.

Then there’s the MediaRecorder API, which lets you easily record audio or video in the browser; and use it immediately as a webm file.

Demo: http://web-recorder.glitch.me/

So then what? You can send the recording to a speech to text service like Google Cloud Speech, Azure Cognitive Services or IBM Watson.

The Web Audio API lets you use the raw audio bytes as they are being recorded, using an AudioWorklet. Combine that with WebSockets and you can create live transcription… which kinda works. There are also some polyfills.

It’s not great that all of these services send the data off to a third party. Privacy is important; and these services also cost money. This is why devices have “wake words” like “Alexa” or “OK Google”.

So we need to build our own wake word. TensorFlow.js to the rescue! You can set up speech commands using pre-trained models, which translates to an in-browser wake word. (Demo of waking up a service with the name “baxter”.)

Thinking about Conversation Design: the one piece that’s really important – speak your bot conversations out loud with someone else. Someone who doesn’t know what the responses should be. They’ll expose the cases you haven’t thought of.

While the technical journey is interesting, what’s more interesting is the potential for the web platform to take over from mystery boxes like Alexa. The web is about experimentation and freedom.

People have built proofs of concept adding sign language detection and speech-to-text reflection to Alexa. Gesture-based interaction can be very natural, and we can do it with open technology.

Phil is continuing with the original idea to build a web assistant. Feel free to join in the project. This is just the start of the journey.

@philnash

It’s time to hit record: an introduction to the Media Recorder API – Jessica Edwards

A few years ago, as a junior, Jessica was working in ad tech (“I know, I know…”). If you remember the Warcraft movie – yes, it was terrible – she had to create a “Warcraft yourself” tool that let people put their photo into the trailer. This meant Jessica watched that trailer hundreds of times… But the project worked and shipped under extremely tight deadlines.

But then the PM asked “can they save the video?”… and the answer was no. Which seemed weird given the pixels are right there on screen, but actually creating a file was just out of the question. The amount of backend work and pipelining required was prohibitive – it would have taken far too long to get it all going. They looked into the Media Recorder API but it just wasn’t ready to use. So that’s where it ended at the time.

But now Jess is working on the video team at Canva. It was a chance to revisit the Media Recorder API... and things have progressed!

Example: a generated canvas with 1000 dots on it. It’s fun, but what if you wanted to keep it? The Media Recorder API centres on new MediaRecorder(stream). What’s a stream? MediaStream has its own spec, and the Media Devices APIs give you one by capturing the screen or the camera.

(code examples of the API, which sets up a listener to capture blobs from a stream, and start/stop methods)
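
Since the live code wasn’t captured, here’s a hedged reconstruction of the general shape – record a camera stream into blobs, then stitch them into a playable webm file (the five-second timer and the <video> element are illustrative):

async function recordCamera() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];

  recorder.addEventListener('dataavailable', event => chunks.push(event.data));
  recorder.addEventListener('stop', () => {
    const file = new Blob(chunks, { type: 'video/webm' });
    document.querySelector('video').src = URL.createObjectURL(file); // play it back
  });

  recorder.start();
  setTimeout(() => recorder.stop(), 5000); // record five seconds
}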

You can create an ongoing stream from a file; and if you pause and restart the original video you can see the stream continues. This means you can create effects like jump cuts.

There is a canvas capture API, which lets you capture a stream from a canvas as you draw to it. You can choose the frame rate (the default is to capture whenever the canvas updates). It’s also possible to use this to capture specific frames.
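
That part looks something like this (a sketch only):

const canvas = document.querySelector('canvas');
const stream = canvas.captureStream(30); // 30fps; omit the argument to capture on every draw
const recorder = new MediaRecorder(stream);
// captureStream(0) plus stream.getVideoTracks()[0].requestFrame() captures specific frames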

MediaRecorder lets you set both the container MIME type and the codecs (so you can set audio and video codecs). Currently the browsers all produce quite different files despite all notionally supporting webm.

Issues… there’s a few!

  • Support – promising, but not there yet. And the different browsers producing different files can be an issue.
  • Bugs – there are quite a few, eg. Chrome gives a black screen if you ask for something that’s too high-res.
  • webm – just not very popular, and not natively supported on most OSes (just because we’re all nerds with VLC doesn’t mean all our clients use it)

So it’s not quite there for prime time, but it’s not that far away. It’s enough to be useful.

@jsscclr

Building secure web experiences with Passwordless Authentication – Matthew Kairys

Passwords are a problem. Yubico did a survey of devs…

  • 69% admitted they’ve shared passwords at work
  • 67% don’t use 2FA at home
  • 51% have experienced an attack
  • 57% aren’t going to change their password practices(!)

How can we solve this? WebAuthn is a W3C standard that aims to improve security when accepting user credentials by using public key cryptography.

How it works:

Authenticator (eg. a YubiKey or code generator) → Client (the browser) → Relying Party (your application’s server)

Demo:

  1. Enter a username
  2. Chrome prompts for an authenticator
  3. Yubikey is pressed
  4. Application accepts authentication

There are some browser prompts to accept/enable the tech.

Benefits of this:

  • Industry standard – good browser interest
  • No passwords
  • Seamless UX, faster authentication
  • Secure by design – private keys are never shared with the relying party; they stay on the device or authenticator (eg. behind a recognised fingerprint)

(code walkthrough was too complex to capture accurately)
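
Since the walkthrough wasn’t captured, here’s a hedged sketch of what the registration step generally looks like – the challenge, user details and relying party name all come from your own application, and the names below are placeholders:

// Inside an async function; challengeFromServer and userIdBytes are Uint8Arrays
// supplied by the relying party (your server)
async function registerCredential(challengeFromServer, userIdBytes) {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: challengeFromServer,
      rp: { name: 'Example Corp' },
      user: {
        id: userIdBytes,
        name: 'user@example.com',
        displayName: 'Example User',
      },
      pubKeyCredParams: [{ type: 'public-key', alg: -7 }], // ES256
    },
  });
  return credential; // credential.response goes back to the relying party for verification
}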

Browser support – ok apart from IE11 and Safari. Safari is on the way for desktop, there’s no clear advice for mobile.

Design considerations – authenticators are not all equal. Some options are easier than others, and people may not know the ins and outs of the options, like fingerprints being stored on a single device. People can also injure themselves and not be able to use the same fingerprint – you need to be able to handle that scenario. Or they might lose the security key, replace their mobile and so on.

@mkairys

Securing JavaScript – Laurie Voss

There are 11 million JS devs, 99% of whom use NPM. Not only do we use JS more, we are building bigger things; and 97% of the code comes from NPM. The average app uses over 2000 packages!

Open source JS is massive and a huge win, but it is not free. While NPM started out as a package manager, it has become a security company – because there was so much insecure stuff going on they had to do something about it.

What do devs do to ensure the security of their code? Most do code review, many do automated scans, a few get third party audits, and 23% do NOTHING.

How does Laurie define an incident?

Anything that can allow a malicious act to hurt users is a security failure. Even if nobody gets hurt.

Writing secure code is essentially the same in any language, and that includes knowing where your third-party code comes from.

Threat models:

  1. Angry bears 🐻 – literally an angry animal that bursts into your office. This would be exceptionally severe, but has a very low likelihood.
  2. Denial of service – NPM has 99.98% uptime, people put a lot of effort into the downtime scenario even though it is rare.
  3. Malicious packages – the amount of malware is going up, but the numbers are very low (less than 0.1% of NPM publishes are malicious; and nearly all of those are detected and deleted automatically before they even go live)
  4. Accidental vulnerabilities – this is worth a lot of attention, because accidental vulnerabilities happen relatively frequently AND people don’t update their dependencies. 33% of installs include vulnerable packages despite warnings!
  5. Social engineering – this is a huge problem, and hard to guard against
  6. Compliance failures – eg. using packages with prohibited licenses (many simply unrecognised like WTFPL)... this is at epidemic levels

Case study: financial industry. 8 of the world’s largest banks were analysed and discovered to have vulnerabilities in 3% of the packages they were downloading. They also use licenses they claim they prohibit.

If banks have tons of money and still get it wrong, what about the rest of us? JS snuck up on enterprises, they didn’t realise how much JS they were creating and using. Also the way security works in JS has to be different – you cannot use manual processes for 25k packages. Blacklists and whitelists don’t work either, you just can’t keep up.

The scale and nature of the JS ecosystem demands automation of security and compliance.

Security case studies:

  1. left-pad: the number of people who were directly affected is actually incredibly low! It’s famous but the impact wasn’t as big as people imagine. What went wrong? The package name ‘kik’ was given to the chat client with millions of users, instead of the existing owner, Azer, whose ‘kik’ had zero downloads. Kik tried asking nicely (Azer told them very literally to ‘fuck off’); when asking didn’t work they tried vague legal threats; and when that didn’t work either they eventually emailed NPM with their case. NPM made a huge mistake and gave them the name. In protest Azer removed every package he had, including left-pad… and thousands of packages broke, including some really big ones. In the end someone just re-published left-pad, because it was WTFPL licensed and that was fine. JavaScript now has String.prototype.padStart() and yet people still use left-pad. NPM no longer allows unpublishing packages after they’ve been up for 24 hours.
  2. eslint: ESLint is maintained by a bunch of people, most of whom use two factor auth. But just one person had reused their credentials across systems and didn’t have two factor auth. His account was breached and a malicious copy of eslint was published, with a credentials harvester. Luckily the attacker was a brilliant hacker but a terrible JavaScript author, so it mostly didn’t work anyway. Plus it got taken down very fast. Still, this showed that supply chain attacks are real; and 2FA needs to be enforced. You can now require 2FA in NPM projects so nobody slips through the cracks.
  3. event-stream: This one was much sneakier. It would only execute at runtime inside a specific application, a cryptocurrency wallet. The attacker got in by gaining trust as a legitimate contributor, before taking over the package from a very busy maintainer – then they injected their attack. On the plus side, 11 million users are a pretty good detection system and it was found. But we need to remember that open source maintainer burnout is an attack vector. The maintainer was too busy and exhausted to do a deep background check on the attacker, they just gratefully took a useful contribution. This also surfaces an issue where a huge proportion of packages are maintained by a very small group of prolific publishers.
  4. electron-native-notify: This one was detected by the NPM security team using internal tooling. The attacker’s approach was similar to event-stream: first submit a useful package, then add a malicious payload in a minor update. Sadly this means we really can’t assume open source contributors have good intentions – the rate of attack is going up.

So what is NPM doing about it?

  • In response to left-pad they changed the unpublish policy. Barring extreme cases (active security threats and legitimate legal takedowns) you can no longer take things down.
  • They also employ lawyers to ward off vague and dodgy legal threats, which might have defused left-pad.
  • They’ve added npm audit, which automatically scans for vulnerabilities; and npm audit fix will automatically update to secure versions where possible, based on SemVer. There is a force option to ignore SemVer.
  • You can set npm to fail tests if audit detects failures; and the severity level that should cause that failure.
  • 2FA support – although just 7% of authors use it, over 50% of packages are covered by those highly-prolific authors (most of those use 2FA).
  • Automatic token revocation – if you accidentally publish your token in a public repo, it gets revoked.
  • Registry-signed packages
  • Simple as it sounds, the “Report Vulnerability” button is on every NPM page. Reports are actively reviewed.
  • The NPM Security Team
  • Automated threat detection
  • SSO support in the NPM Enterprise product

Future steps? The goal is zero malware. They will continue to grow their security team, and machine learning is being used. They won’t be doing static code analysis however, as it’s provably impossible to catch all security issues that way.

Addressing social engineering is a big challenge. The community is too big to rely on “knowing each other”; and that also reduces diversity anyway. NPM is looking for ways to improve social signals without turning it all into a stupid game.

Maintainer burnout is a problem that everyone can fix. We need to find a way to compensate maintainers appropriately; and prevent them becoming so tired they make significant mistakes.

Security is hard, the world is scary, your paranoia is justified. But you can help by reporting vulnerabilities, or simply contributing and taking the load off maintainers.

@seldo | Laurie’s slides

Building a new Web Browser in 2019 – Yuriy Dybskiy

Yuriy started as a web dev “long before it was cool” in Ukraine, hand-coding HTML and CSS.

Yuriy is building a new browser, Puma.

So why build a new browser now, in 2019?

  1. The web never had a built-in payment model. How many people know 402? HTTP 402 “Payment Required”. It was reserved in the spec, but never actually used.
  2. The main monetisation model is ads, which has led to all kinds of tracking and privacy concerns
  3. The browsers stopped exploring, there hasn’t been much real experimentation for a long time (other than Brave in 2015)

In the meantime we have ads and paywalls, which make people sad. How do we fix that? People + Technology… and mostly people. We as devs need to come together to explore new ways to build things.

Yuriy has been collecting tweets from people dreaming of a better way to pay content creators; and also to avoid having ads everywhere.

Enter the Interledger Protocol (ILP). For a while everyone thought “blockchain all the things”, but this problem doesn’t seem to be solved with blockchain. ILP is an open standard that connects networks, is use-case agnostic and uses packetised money. It’s a new way of thinking about money, with some really positive implications.

Existing systems don’t do micro transactions – the average VISA transaction is $80, the average for Interledger is a fraction of one cent.

Interledger doesn’t care what currency you are sending. Both the sender and receiver can choose a connector for the right currency for them (including cryptocurrency). So the reader can send what they want and the author can receive what they want.

Sender - Connector - Connector - Receiver

Think of ILP as TCP/IP for payments.

It’s built in JavaScript and Rust and has a JS API, with events for things like starting and progressing monetisation.

To receive payment you add a payment pointer to your pages, eg. with xrptipbot:

<meta name="monetization" content="$twitter.xrptipbot.com/username"/>
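
On the JS side, the events look roughly like this (a hedged sketch – the API was still a draft proposal at the time, per the Web Monetization spec work):

// Only present in browsers/extensions that implement the proposal
if (document.monetization) {
  document.monetization.addEventListener('monetizationstart', () => {
    // the payment stream has started - eg. hide ads or unlock content
  });
  document.monetization.addEventListener('monetizationprogress', event => {
    const { amount, assetCode, assetScale } = event.detail;
    console.log(`received ${amount} (${assetCode}, scale ${assetScale})`);
  });
}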

...

So how are they building Puma browser? They have a very small team (three people, geographically separated). But it takes a village and a lot of people are keen to help. The Coil team are very keen and working on the content side.

The Puma team is very focused – they are starting by building mobile only. V1 with Expo and React Native supports both iOS and Android; but they are going to focus on iOS after this. Curiously all iOS browsers have to share the same engine, which makes it much easier for a small team to create a browser – the difference is what you build above that common layer.

(video break: a fan-made video for Puma, which is beautifully sincere and wonderfully over the top ;))

(Live demo showing how payments stream while you are consuming content; also that the original version of Puma didn’t even have tabs! The new version does, and – very important – dark mode by default)

What can we do?

  1. Web Monetize your site
  2. Play around with Puma Browser (and give feedback)
  3. They are hiring

The web is currently heading in a bad direction. There’s a focus on grabbing attention and showing ads, which leads to tracking you a lot so you can sell more ads…

Let’s push the web in the right direction.

@html5cat

Front end migrations in legacy code – Tanvi Patel

Tanvi starts with the story of Alice In Wonderland, going down the rabbit hole – little does Alice know what lies ahead. She’s lost and confused and doesn’t know how to survive in Wonderland.

This is how Tanvi felt when she joined Yelp. Yelp has a tradition of people pushing code on day one – and it took half a day just to get a trivial change pushed. It was a legacy code base that was difficult to work on. Tanvi had to learn how to survive in this strange Wonderland.

How does the legacy code jungle grow? As the size of the team and code base grows, different styles come into play; and the code starts to get messier. Eventually you find duplication of logic and a tangled set of dependencies that are difficult to unpick.

So why is it a concern? If your application isn’t going to be kept, not so much. But if you need to keep growing and extending the application, it causes problems:

  • architectural decisions – scalability, re-usability, ownership and tight coupling
  • maintenance – code gets more complex, changes take longer, deployments take longer
  • testing – code gets harder to test, particularly when “we’ll come back and write tests later”. Legacy code bases rarely have sufficient tests.
  • performance – the multiple layers of tangled code slow down the app. It can help to set a time/performance budget.

So how do you fix it? There’s no one-size-fits-all solution.

Preventive measures:

  • Preventive measures start when you recognise the problem early – looking ahead and realising you need to make changes now to avoid future problems.
  • Recognising the problem: tight coupling of unrelated code, frustrated devs, slow updates.
  • Foresee the future: don’t fall for immediate gratification. Put your foot down if you have to.
  • Control adding new code – avoid adding to legacy, invest in training, use code guards.

Reactive measures:

  • rewrite or iterative fixes? This depends on the team and context. Do you have the opportunity to shut down the app or parts of it, to rewrite?
  • smaller refactors are usually the way to go – break it up into smaller chunks, that can be more easily addressed and subsequently monitored
  • broadcast information – educate everyone about the refactoring that’s going on
  • monitor with metrics – invest in writing tests, monitoring and error detection (including client side with tools like Bugsnag)
  • dark launch – run experiments and test with small cohorts, monitor before rolling out to 100%
  • automated refactoring – Yelp built their own Python tool, Undebt, for defining complex find/replace rules

Frontend migrations:

  • the frontend changes relatively quickly, so it’s likely you’ll need to change your UI at some point
  • have good reasons to migrate – FOMO/“I like X” is not a good reason
  • define ways to do it – create a proper technical specification (Tanvi showed an example of a formalised tech proposal and project plan)
  • identify components that can be migrated – example of changing a spinner from jQuery to React
  • define strategy – are you using any common components that you can refactor in place? what infrastructure will you need? who are your supporters in the organisation and how will they get involved? what metrics and monitoring will you use?

Post migration:

  • monitor the results
  • build a case study – it may be useful to others, so share the knowledge
  • run retrospectives – what might you do differently in future?
  • broadcast that information in your community

Legacy is a natural phenomenon, migrations are inevitable. It takes planning, communication, patience and dedication.

Escape from the Wonderland of confusion!

@tanviOninsta

Picking up the pieces – A look at how to run post incident reviews. – Klee Thomas

Last year on Christmas Eve, Klee got a phone call and had to jump onto his laptop and fix problems… glass of champagne in hand. He realised this kept happening, and something needed to change.

Like any other team, NIB are trying their best to do a good job, with all the buzzwords (agile, pairing, TDD, CI/CD, devops…). But like everyone else, customers always want more and legacy code keeps growing. Things will inevitably go wrong. You need to build a culture that copes with failure and is prepared to recover.

Post incident review determines if you learn from incidents or not.

Incident life cycle:

  • detection – working out that something is happening
  • response – you work out what you’re going to do
  • resolution – going ahead with a fix
  • analysis – pulling apart what happened (this feeds into all the other steps in the life cycle)
  • readiness – preparation for next time
  • ...then back to detection

When should you run a PIR? ASAP! Within 2 days. People forget very quickly and replace real memories with their own version of events. Run PIRs regularly, on smaller incidents as well as big ones – so you’re practised and know what to do.

The path to a great PIR

Root cause analysis – get right down to the thing that caused the system to break; a great technique for this is the Five Whys. Keep asking “why” until you get past the surface reasons for an incident. But beware of blaming an individual; and beware that the people you ask will determine the answer you reach. Blame culture leads to fear; and fear leads to people hiding what really went on.

Blame is weird. People don’t blame one person for a big project’s success, so why do we blame a single person for a big project’s failures?

Blame the process, not the people. – W. Edwards Deming

Go back to Norm Kerth’s prime directive:

Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.

A good tool is a fishbone (Ishikawa) diagram, that lets you identify primary and secondary causes for a specific problem.

But you have to guard against biases:

  • anchoring – the first piece of evidence is the most relevant
  • availability – I can think of it so it’s true (juniors do this a lot)
  • bandwagon effect – getting swept up in the crowd
  • others – hindsight, outcome, confirmation

...many of them quickly lead back to blame culture. Yet again go with the prime directive of retros.

Other useful things to do:

  • Create a TLDR summary of an incident and its resolution.
  • Create a timeline of events, with multiple points of view.
  • Elaborate
    • don’t hide what happened
    • don’t ask “why X happened” if you can instead ask “what contributed to X happening” – the factors that led to a decision rather than the decision itself

Key metrics:

  • who was involved
  • time to acknowledge
  • time to resolve
  • severity

Remember to go through what went well – something has gone well, because you’re in a PIR and not still trying to fix it!

Action items:

  • Document them as they come up
  • Identify impact and urgency
  • Commit to some but usually not all
  • Actually do the ones you commit to – put them into tickets and schedule them
  • Feed back into all stages of the life cycle

If you can do all this well, your post-mortem can become a pre-mortem: something that will help you avoid doing this again.

@kleeut

From DevOps to a DevOps Culture – Michelle Gleeson

Alternative title: How CD killed my culture

In the early days of Michelle’s career, deployment involved floppy disks literally going from workstations to on-premises servers… and it never really seemed to work: it’d be all-hands heroics, pizza, beer, and thank-yous at the all-staff meeting every other launch.

Going forwards a few years, to monoliths in data centres. Now you have to go through a bunch of forms, send them off to a third party and cross your fingers that everything worked as intended. And it never did. So it would be all-hands heroics and thank-you lunches again…

Then the cloud came along and everyone tried to lift and shift, which didn’t work so well and led to creation of microservices and SPAs. Lots of learning required to figure out how it all worked. Lots of heroics and celebrations again…

Now we have CI/CD solutions that mean people don’t have to come together to figure things out any more. This has created siloed teams and individual contributors sitting around with their headphones on all day. People can get along on their own so much that they aren’t practised at banding together.

We have evolved to belong in tribes. When we don’t feel connected and share a purpose with our team, motivation drops.

Who is on a team matters less than how team members interact, structure their work and view their contributions. – Google 2015

Trust…

Without moments of adversity to bind the team together, we need to work out how to build those connections in other ways. Generally this means building trust.

The five dysfunctions of a team (Lencioni 2002) all start from a lack of trust. Psychological safety is critical to build high performing teams. You can start by asking key questions that investigate evidence of the five dysfunctions.

  • Build personal connections – if you are butting heads with someone, go sit down and have coffee with them. Find common ground, connect and even just acknowledge that you aren’t getting along and want to change that.
  • Be vulnerable – admit you don’t know things, be ok with mistakes, make it ok for others to follow suit.
  • Cultivate a feedback culture – give and invite feedback

Celebrate…

Do you have shared stories that are not about work? Do you know anything personal about your colleagues?

  • eat together – it’s that simple
  • celebrate birthdays, bring cake or even just some Tim Tams
  • find reasons for frequent, small celebrations

Appreciate…

Just appreciating the contributions of peers can help build trust. Do you tell people you appreciate their work? Does anyone say that to you?

Tangible things like kudos cards make gratitude permanent, giving people a reminder later on. Make them visible.

Craft…

Somewhere along the line of automate-all-the-things many people stopped thinking about software as a craft. Some thought microservices didn’t need tests because you wouldn’t maintain them, you’d just throw them away!

Do you have PRs that stay open for days? Do new starters take a long time (2+ days) to ship code? Do you have to delay releases if someone gets sick, or do people change holidays to avoid impacting the team?

  • clean code – produce code that is easy to read, write and maintain (and produce a culture of shared understanding and desire for quality)
  • TDD – this encourages small units of code
  • pair programming – this is not only a good way to work, it builds connections

Many of these things are hard or uncomfortable, but people have to push past discomfort if they are ever going to grow.

  • experiment – you can make a change look less threatening by running it as an experiment for a limited period of time, before a review.
  • create a team charter – workshop core values that the team can align on

Learning and collaboration…

Working in silos leads to groupthink, as people only seek ideas from their direct team. It’s better to grow a network across all teams.

Do people attribute failures to other teams? Do people avoid seeking help across teams? Do they break things you rely on?

  • lunch and learn – good way to start sharing knowledge
  • code retreats and dojos – work on the same code kata for a day, with different people. You throw away the code, the point is to learn how different people approach the same problem.
  • create and support communities of practice

...

Trust
Celebrate
Appreciate
Craft
Learning
Collaboration

Take one thing that resonates and take it to your team. Maybe run an experiment or do a lunch and learn. Go for coffee with someone you don’t see eye-to-eye with. Push out of your comfort zone.

Create a workplace people are happy to come to.

@shelleglee

The Anthropology of Testing: Past, Present and Future – Michel Boudreau

Testing is synonymous with software, but we don’t spend a lot of time actually thinking about it.

The term bug comes from bogey – a source of fear, perplexity or harassment, of unknown origin – although it was popularised in software when Grace Hopper found a literal bug (a moth, at least) in a relay.

37.7% of devs aren’t using tests. That’s insane! Why aren’t they doing it? Carelessness? Apathy? We don’t know! But we do know that those who use tests are happier!

Genesis…

Testing is another name for Quality Assurance, which is a very old term that dates back at least to the 1000s. At medieval markets, you had to trust the people who were selling things that the wares were good. As markets grew, you could no longer rely on knowing the person.

Many bakers were cutting flour with sawdust to make bread more cheaply, because flour was precious and expensive. This kept the volume of the bread, but lowered the weight. Those who didn’t cut the flour wanted a way to show they created a better quality product. Bakers created a guild, standards, and a standard mark to advertise the fact their bread was good. People who faked the badge were Dealt With...

Testing levels…

  • manual – this is only really good because there’s no barrier to entry, but it doesn’t scale. You can really throw this one out.
  • unit tests – (classic joke about ordering 1 beer, 0 beers, -1 beers, 9999 beers, 1 lizard…) trying different inputs to see if a unit of code produces the right output
  • integration tests – do units work together
  • system/e2e tests – everything from user input, through app, data and network layers etc
  • behavioural – this is testing state, if a certain sequence of events occurs, do you see the expected result in the state

What comes next? Well we don’t really know… but maybe…

Machine learning tests – can you fake data? can you predict what is “correct” for an ML system by faking ten years of activity? Or do we just fall back to manual? We actually do that a lot with CAPTCHAs – Google is learning how to detect cars at the moment.

Why test…

Some people hate tests, or they hate a particular kind of test, or they just want features and don’t want to spend time on tests.

Testing = Freedom & Creativity

You can try a lot more interesting things when you know what you’re breaking.

Testing shows the presence, not the absence of bugs. – Edsger W Dijkstra, 1969

Dijkstra released The Humble Programmer in 1971, and it holds up today as it set down so many ideas about testing theory.

...

Please test!

@AngryCanuck_

Super Hero Layouts – Anton Ball

(Lots of code demos: https://codepen.io/collection/DjwRaP/)

Comic books had ages that lasted for many years, going through gold, silver, bronze and modern over a total of 70 years.

Layout on the web has gone through many ages very rapidly:

  1. HTML only, including abuse of tables
  2. Flash – while there were issues it did allow a lot of creativity
  3. CSS with float and absolute layouts
  4. We are now heading into the intrinsic layout era with flexbox and grid

...all of that in just 26 years.

Anton has built a lot of websites and read a lot of comic books…

Action Comics #1 is an important book in comic history as it introduced Superman. The layouts of comics through this era were very much rows of rectangles. Each row would have 1-3 panels and reasonably consistent space between them. For CSS grid this derives a 16 column grid.

Moving forward in comic book history to Spider-man #121, the layout changed to use vertical panels to convey the drama of a fall from a bridge. This broke away from the row-based model.

Watchmen #7 is considered to kick off the modern era. Until that point they had stuck to the nine panel grid, but in this issue they started using tricks like an image that spanned full width, with panel gaps overlaid.

We can use subgrid for this, with pseudo-elements on the overlay element creating the visible gaps. News websites can use the same approach to display a set of tiles with equal-height elements, even when the text length varies a lot.
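
A loose sketch of the two grid ideas above (the row-of-panels page and the equal-height tiles). The class names, track counts and the JavaScript-injected style element are my own placeholders, not Anton’s code – his real demos are in the CodePen collection linked above:

```js
// Sketch only: a comic-style page grid, plus the equal-height-tiles trick with subgrid.
const comicStyles = document.createElement('style');
comicStyles.textContent = `
  .page   { display: grid; grid-template-columns: repeat(16, 1fr); gap: 0.5rem; }
  .panel  { grid-column: span 8; }   /* two panels per row */
  .splash { grid-column: 1 / -1; }   /* full-width panel with overlaid gaps */

  /* Equal-height tiles: each card spans three parent rows and shares them */
  .cards  { display: grid; grid-template-columns: repeat(3, 1fr); }
  .card   { grid-row: span 3; display: grid; grid-template-rows: subgrid; }
`;
document.head.append(comicStyles);
```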

Subgrid is awesome! At the moment it’s only in Firefox Nightly, but it should come through quickly in the other evergreen browsers.

We can create the same effect with display:contents, but beware of accessibility impacts.

Spider-man 2018 #9 introduces a very detailed, layered design. This is more complicated in Grid, but it still comes from the same basic process: work out how many rows you need, calculate the base pixel size within the overall space, then derive fractions from that. It can take a bit of work to figure out, but if you are methodical it will come together.

Firefox has an excellent grid inspector that reveals the rows and columns, and the positive and negative line numbers.

The modern era of comics did a lot with layering, and it offers rich design opportunities, but you do need to be very careful to set up a logical source order so it doesn’t become a confusing jumble for users with assistive tech. The tab order of the page is a good way to test the basics of this: people expect to tab to things in a logical order.

Reference: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout/CSS_Grid_Layout_and_Accessibility

There is a lot of criticism that web layout is still putting everything in boxes. Comics also broke out of their boxes in the golden age, using geometric and curved shapes to really get away from boxes.

CSS can use clip-path for this. IE11 won’t ever get it, and we have to wait for Edge to move to Chromium. Grid lines can only run straight, so we layer the images to get them into the page, then rotate the panel container to add angles, and finally use clip-path to create the irregular shapes. You can do this manually by setting the x/y coordinates of polygons, or use Firefox, which includes a visual tool. To add the border, use a transform to scale the image down and nudge the transform-origin so it aligns correctly – this reveals the white background beneath.
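
As a rough sketch of the rotate-and-clip step (the selector and coordinates are placeholders, applied from JavaScript so the snippet stands alone):

```js
// Angle a panel and clip it to an irregular quadrilateral.
const panel = document.querySelector('.panel--angled');
if (panel) {
  panel.style.transform = 'rotate(-2deg)';
  panel.style.transformOrigin = '50% 40%';   // nudged so the edges line up
  panel.style.clipPath = 'polygon(0% 2%, 100% 0%, 98% 100%, 2% 96%)';
}
```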

Detective Comics #876 has a spiralling fall illustrated with a sequence of rotated panels. In CSS we use transform and clip-path. But clip-path isn’t just for polygons, it can create curves as well with ellipse.

CSS Shapes are useful for flowing text into these layouts – again IE11 and pre-Chromium Edge miss out… and again Firefox dev tools include a visual editor to adjust the values and see the effect. You can also point shape-outside at a masking image so the text flows around an irregular shape.

What about non-western comics/manga that read right to left and top to bottom? Grid can be sorted out with dir="rtl" (right-to-left) and writing-mode.

Our layouts have been stuck in boxes for a long time, so it’s exciting to have so many options to break out of that while still being able to maintain a11y and i18n.

With great power comes great responsibility. – Uncle Ben

Use your powers for good. Excelsior!

...

Resources

@antonjb

Let’s build a web component! – Erin Zimmer

Code-heavy which doesn’t transcribe well – see…

A few notes…

  • the :host selector refers to your component when used inside its shadow DOM
  • CSS custom properties defined outside your shadow DOM are still accessible inside it
  • You cannot use attributes on your component that already exist in raw HTML
  • slot acts as a sort of symlink between the shadow DOM and light DOM, so you can place content
  • To style things in slots you use ::slotted, which can accept a selector
  • Adding behaviour is very much the same as normal DOM scripting and SPAs – it’s a matter of linking references and events. Custom events add flexibility.
  • Events won’t bubble up past the shadow root unless you set composed: true on your custom event (see the sketch after this list)
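
A small sketch pulling these notes together – the element name, event name and styling are made-up placeholders, not Erin’s demo code:

```js
class FancyPanel extends HTMLElement {
  constructor() {
    super();
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>
        :host { display: block; border: 2px solid var(--panel-colour, black); }
        ::slotted(h2) { margin: 0; }   /* style light-DOM content placed in the slot */
      </style>
      <slot name="title"></slot>
      <button type="button">Close</button>
    `;
    shadow.querySelector('button').addEventListener('click', () => {
      // composed: true lets the event bubble out past the shadow root
      this.dispatchEvent(new CustomEvent('panel-close', { bubbles: true, composed: true }));
    });
  }
}
customElements.define('fancy-panel', FancyPanel);
```

Used as <fancy-panel><h2 slot="title">Hello</h2></fancy-panel>, with a panel-close listener anywhere up the tree.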

@erinjzimmer

The Other 6 Billion – Jason O’Neil

The other 6 billion refers to the billions of people who don’t speak English (80% of humans!). Since not everyone is connected to the internet, 25% of internet users are non-English-speakers. In Australia, 27.3% of households are multilingual. This impacts a lot of people and will impact more as more get online.

Jason is monolingual, but he had a project to add support for Arabic in Culture Amp’s product. Initially he imagined the biggest amount of time would be getting the translations done…

Classic string-replacement localisation is a little hard to work with, as you have no real content in the code context. They are looking at a text transformation approach which uses a readable English key and handles the translations elsewhere, e.g. t("Dashboard").
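
Something in the spirit of that approach – a toy sketch of the idea, not Culture Amp’s implementation. The readable English string doubles as the lookup key and the fallback:

```js
// Hypothetical translation table; real projects load this from files or a service.
const translations = {
  ar: { 'Dashboard': 'لوحة التحكم' },
};

function t(englishText, locale = document.documentElement.lang || 'en') {
  const table = translations[locale];
  return (table && table[englishText]) || englishText;   // fall back to the English key
}

t('Dashboard');   // English key in, translated string out (or the key itself)
```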

Who does the translation? There are specialists who do this for you at surprisingly cheap prices (20c/word) and that’s pretty good for UI where the text doesn’t change too much. But if you have lots of content, the price gets much higher. What about in-house or crowdsourced translations? You can use Pontoon by Mozilla to translate in-place which makes things much easier.

Beware of code in your translation strings! CA hit a problem where the translation also updated a variable name, which broke things. Don’t expect translators to code!

Don’t forget to translate alt text – it’s a common mistake.

So now you have a great big file full of translations and you want to add some UI and you think “ahh we’ll use a flag!”. But you will almost certainly offend someone. Languages are not nations. There is no flag for Arabic, just as one example. So just use the name of the language.

Next you need to handle writing direction: dir="rtl" …and browsers are surprisingly clever at applying this, but you will still need to manually update anything not built with flexbox and grid. If you ARE using flex and grid you’re in luck, as they support RTL out of the box – it’s why we have names like flex-start and flex-end.
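
A minimal sketch of flipping direction at runtime (the language list is illustrative, not exhaustive):

```js
// Set lang and dir on the root element when the user switches language;
// flexbox, grid and text direction then follow automatically.
const RTL_LANGUAGES = ['ar', 'he', 'fa', 'ur'];

function applyLocale(locale) {
  document.documentElement.lang = locale;
  document.documentElement.dir = RTL_LANGUAGES.includes(locale) ? 'rtl' : 'ltr';
}

applyLocale('ar');   // e.g. switching the UI to Arabic
```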

For older code (which CA had to deal with at the time) you can use SCSS mixins; but now rtlcss.com provides a plugin which is well worth investigating as it does a lot of it for you.

Then… you still have to sort out things like buttons with directional icons.

The localisation maturity model follows the classic scale of reactive, repeatable, managed, optimised and transparent. Keep in mind that it’s a process – improve from wherever you are.

Let’s try to reach the other six billion!

@jasonaoneil

Designing with Components – Mark Dalgleish

Design systems are all about bridging the divide between design and code. What does that look like in practice for Seek? They use html-sketchapp to generate Sketch symbols, so devs and designers work from a common source when working with the design system.

There is a problem though – Sketch isn’t a browser, and it still works like the old world of static mockups and screenshots. A new generation of tools is trying to resolve this.

This is blurring the lines between design and code, and adding new options to the tooling choice.

Why still use a design tool? They’re fast, easy, new documents are cheap; and the work is easily shared.

What about dev tools? We have tools like…

  • JSBin
  • RunKit
  • Babel
  • Typescript playground

...which are essentially REPLs so you can easily get up and running. There are more extreme tools like CodeSandbox, which is a full IDE in the browser.

So can we bring this thinking into design systems? At Seek they came up with Playroom, which lets people write JSX in a visual tool and share it with a base64-encoded URL.

(Demo of Playroom)

Seek is currently working on a new design system, Braid; and Playroom can preview all the themes at once.

What they found was Playroom changed their workflows:

  • Developers started using Playroom as a complement to their IDE
  • PRs started using Playroom previews to assist reviewers and QA

The goal was to make component code more approachable, and it mirrors the advantages of traditional design tools (easy, fast, cheap).

Low fidelity iteration should be on paper, high fidelity iteration should be in code. Design systems are about designing a palette and reusing it and this encourages that, by letting people design in the target medium.

Playroom is open source (npm install playroom) so you can use this for your own design system.
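
Getting started is roughly a config file plus a start/build command. The option names below are from memory and may have changed, so treat this as a sketch and check the Playroom README:

```js
// playroom.config.js – option names are best-effort recollections, not gospel.
module.exports = {
  components: './src/components',   // where your design system components live
  outputPath: './dist/playroom',    // where the static Playroom build is written
  title: 'My Design System',
  widths: [320, 768, 1024],         // preview widths rendered side by side
};
```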

@markdalgleish | Playroom on github

Planning Your Progressive Web App – Jason Grigsby

Many of Jason’s clients had a situation where someone senior came to their web teams and demanded a PWA. Now… that’s not normal. But some common questions would come out…

PM – what is it?
Business owner – do we really need one?
Design – how did the CEO even know about that?
Dev – cool! I’ve wanted to use this stuff for ages!

So how do you make sense of what to build? How do you make decisions and answer your team’s questions?

...

So how does the CEO end up hearing about PWAs? It’s not just the tech press that’s covering them – general and niche publications are talking about them too. There are success stories that are getting people’s attention. PWAstats.com keeps track of these stories.

What is a PWA? The original definition described ten characteristics like app-like interaction, but two key ones are “linkable” (URLs still work) and progressive (they use progressive enhancement). So they are like native apps but they are of the web. But the term is difficult to distill, even for people who have shipped them.

Jeremy Keith articulated it well: a PWA is a website that has been enhanced with https, service workers and a manifest.

The key is the service worker. The main power comes from being able to intercept network requests and decide what to do with them. This is what gives people the real fast/instantaneous experiences.
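
The core of that interception is a small fetch handler in the service worker. A minimal sketch (the '/sw.js' path and the cache-first choice are just assumptions for illustration):

```js
// In the page: register the worker.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// In sw.js: every request from the page flows through here, and we decide
// what to respond with – cache, network, or something we construct ourselves.
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});
```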

But they can be so much more… PWAs let us build experiences that were previously restricted to native apps.

(video talking up PWAs as the next generation of experiences)

But are they really that much different than regular web best practices?

The name isn’t for you and worrying about it is distraction from just building things that work better for everyone. The name is for your boss, for your investor, for your marketeer. – Frances Berriman

Do you need a PWA?
Well, do you have a website? Then yes you probably would benefit from a PWA! Particularly if you make money on the website (especially ecommerce).

There is a lot of FUD about PWAs.

Why do we need a PWA if we already have a native app?
The two can coexist. Not everyone has your app installed; and none of your potential users have the native app yet either. Also, people aren’t installing as many apps any more, and they’re not using the apps they do install. Your website is often a customer’s first interaction with your company, and a good experience will increase your chances of them sticking around.

The web can’t do (x)!
Sure, there are some things that require a native app. But often people say it’s not possible and they’re simply wrong! You can access the hardware, geolocation, notifications… it’s hard to work out how people think the web can’t access geolocation given how many websites ask you for it! If you are wondering about any feature, revisit the question.

...

So the team is keen to build a PWA, what next? What are you actually going to build? People don’t really agree on what “feels like an app” even means.

Does that mean we want to make it look native? There are frameworks to help you do that. Do you make something that feels mobile-y but looks the same across all devices?

How immersive does the experience need to be? You can choose how much browser chrome to show, and it’s tempting to go full screen – but actually keeping some toolbars is really useful. If you take them out, you end up having to add them back in. So don’t assume you should get rid of it all. Adding a back button is a surprising amount of work; and making it work just like the native back button is tricky. There is a display-mode media query that can help you customise different browsing contexts if you need to.
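
The display-mode media query is also available from JavaScript via matchMedia. A small sketch (the class name is a placeholder) for showing your own back button only when running without browser chrome:

```js
const standalone = window.matchMedia('(display-mode: standalone)');

function updateChrome(isStandalone) {
  document.body.classList.toggle('show-back-button', isStandalone);
}

updateChrome(standalone.matches);
standalone.addEventListener('change', e => updateChrome(e.matches));
```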

Work on keeping the app smooth; use placeholders and ghost content to keep the experience feeling fast. The app shell model plays into this. Remember that perception of speed is critical. Beware, though: the app shell model generally leads to conversations about being required to use an SPA, and that’s not true.

Should 'feeling like an app’ really be the goal? Do users care if it feels like a site or an app, or do they just want the experience to be good? Also consider the cost/benefit of all the work required to make things feel like an app.

Manifests are simple JSON files, and you can do a lot just by defining one.

Add to Home Screen behaviour varies between browsers; some had quite intrusive banners, but they are moving away from that. It’s likely they’ll all settle on something like a badge in the omnibar to save the site to the home screen. There are events you can hook into to prompt the user yourself, but the browser has to have passed some engagement heuristics first. On the upside, you can show the prompt at a much more appropriate moment.
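
In Chromium browsers that hook is the beforeinstallprompt event. A sketch of the deferred-prompt pattern (the install button is a placeholder element):

```js
let deferredPrompt;

window.addEventListener('beforeinstallprompt', event => {
  event.preventDefault();     // suppress the browser's default prompt/banner
  deferredPrompt = event;     // stash it until a better moment
  document.querySelector('#install-button').hidden = false;
});

document.querySelector('#install-button').addEventListener('click', async () => {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();
  const { outcome } = await deferredPrompt.userChoice;   // 'accepted' or 'dismissed'
  console.log('install prompt outcome:', outcome);
  deferredPrompt = null;
});
```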

PWAs don’t need app stores, but that doesn’t mean people won’t ask about them! Microsoft lets you add PWAs to its app store, where they live side by side with native apps. Android has Trusted Web Activities, which let you prove you own both the app and the website, helping you ship your PWA to the Play Store.

PWAs can be marketed in any way you might promote your website. They’re just websites!

Does it matter if people actually 'install’ your PWA anyway? So much of the benefit isn’t the bit where you add it to the home screen. More effort should be spent on making things work fast, work offline, have a great manifest, etc. Everyone who visits the web page “installs” the PWA anyway. The icon on the home screen is just a cherry on top.

Offline mode gives us lots of cool tricks:

  • cache for performance and offline fallback (see the sketch after this list)
  • cache recently viewed content for offline use
  • display the info you have, even if you can’t display the entire page
  • pre-cache content – Financial Times do a great job of letting the user control what gets cached
  • full offline interactivity is the holy grail, like editing a document while offline and having it sync later… or maybe just disallow offline editing to avoid conflicts and data loss
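
A sketch of the first couple of patterns: pre-cache a handful of key files at install time and fall back to a cached offline page when the network fails (the file names and cache name are placeholders):

```js
// sw.js
const CACHE = 'site-v1';
const PRECACHE = ['/', '/styles.css', '/offline.html'];

self.addEventListener('install', event => {
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(PRECACHE)));
});

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request)
      .then(cached => cached || fetch(event.request))
      .catch(() => caches.match('/offline.html'))
  );
});
```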

Recommended resource: Workbox

Push notifications are interesting. The notification JavaScript is relatively easy, but designing notifications people actually want is tricky. Personalised notifications get pretty good engagement, but you still need to be careful about when to send them. If you send one at a moment when people don’t want it, they are likely to be pretty annoyed. Many people just hate them entirely.

Don’t instantly ask people for permission to send notifications – it’s annoying! Maybe add a button at the end of an article that prompts them to give permission if they click it. Twitter uses a “turn on notifications” button. Browsers have added kill switches now, so if you annoy the user you may never get to ask again. Android requires users to choose between 'block’ and 'allow’, and if you ask too soon they are almost always going to block.
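
A minimal sketch of asking only when the user shows intent (the button id is a placeholder; never call this on page load):

```js
document.querySelector('#enable-notifications').addEventListener('click', async () => {
  const result = await Notification.requestPermission();   // 'granted', 'denied' or 'default'
  if (result === 'granted') {
    new Notification('Thanks!', { body: "We'll let you know when there's news." });
  }
});
```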

Beyond PWAs…

  • there are ways to auto-authenticate when you open the PWA
  • payment request API

...these aren’t part of PWAs but they may be useful in that same space.

So let’s go back to the project manager… how are you going to make a plan for this?

Create a progressive roadmap for your progressive web app. Start by getting onto HTTPS, then add a service worker, then improve offline, then add notifications…

Whatever you do, benchmark before and after and measure what you are doing. Use data to keep the momentum.

Do a tech debt assessment. If your site takes 30 seconds to load on mobile, a PWA can’t make that suddenly become fast. If it’s not usable, it won’t make a great PWA. Sometimes you just have to make basic improvements first, before you do anything PWA-specific.

But the beauty of PWAs is everything you add makes things better!

@grigs_talks | Jason’s slides