The ansarada frontend crew headed to Web Directions Code in Melbourne, to heckle^H^Hsupport Clark while he was speaking and generally to fill our brains with web goodness. Much fun and coffee was had!

#ansarada crew with Elijah Manor, at Cup of Truth. Fun coffee mission from #wdcode :)

A photo posted by Ben Buchanan (@200ok)

(The G+ auto-animated version of Elijah's photobomb is even funnier!)

Disclaimer & Credits

  • These notes were hammered out very quickly and paraphrase what speakers were saying. If you need an exact quote, use a definitive source such as a recording or slide deck.
  • Photo and other content credits as per the social media embeds.
  • Slide credits obviously to the speaker.

Day one

Alex Russell – The future of the web

When asked to talk about this topic Alex worried at first… does the web have a future? Is the trajectory good? The truism that the “future has arrived but is not evenly distributed” is especially true in software.

There are great new features for the web, but many new ideas can't be used universally. Web components were better in this sense: they used the existing platform to extend, rather than building everything from scratch. So what about other parts of the platform?

With TC39 they did a sort of archaeology: he went back and asked what we thought the web was going to look like, back in 2006.

Slide: “WE WON ON DESKTOP” … we did really well with the web on desktop, but just in time for mobile to eat the world. The web is not doing as well on mobile as it did on desktop.

Web users spend 14% of their time on sites and 86% of their time in apps.

The return of Objective-C shows that it's not the technology that drives these things. Nobody would have predicted Objective-C back in 2006.

We aren't asking the right questions about why the web is not doing better on mobile. Case in point... the W3C's own mobile tech resources aren't responsive!

Google just had to put a graphic on their blog...


It's not about technology, it's about our expectations.

In the dialup era, going online was an occasion. You explicitly decided to sit down at a computer, plug in a modem and get online. You knew it would be slow and just dealt with it.

These days we expect things to work. So when the web fails – when the core tenet that “everything is a click away” fails – we fail the user. From the user's perspective we're a flaky friend you can't rely on when you need them.

The web has no application model. You cannot tell what is a page and what is an application. Apps with one single URL break the basic tenets of the web. But if we build everything the old way, with separate URLs, will it still work offline?

Our platform currently cannot differentiate page from apps. It's a question the platform simply doesn't answer.

With all these problems... what explains all the e-commerce success? Most of the time and money on mobiles is spent on social and gaming apps. However all the e-commerce happens on sites – in browsers, not apps!

Meanwhile it's expensive to get native app users. $1-2 per install; $9 per signed-in user. That's what you have to spend on advertising/promotion to get people to install AND use your app.

There's also the problem of zombie apps – unmaintained apps with tiny user bases (a couple of dozen users)... the majority of app developers don't make enough money to pay for the tools they need to create apps. “Developer poverty”... where devs are below the poverty line of their ecosystem.

Alex thinks of app development like buying a lottery ticket – it's a chance, a very small number of people get rich. The people who win do make a lot of money. The rest make nothing. “It's sad!”

Because people are desperate to get users into apps, they do really bad things like put full-page blocking interstitials over their website – trying to get the user to install an app instead.

Users use 12-20 apps per month, but visit more than 100 distinct sites per month. (Data from opted-in Chrome users.)

Distribution is an extremely hard problem in software. It used to be much worse, when you had to sell floppy disks! We don't do that now.

It's the friction that really kills us now. There's a 20% drop-off for apps for every action the user needs to take to get an app working. The web has nailed this – you go to a URL, the site works.

So what's missing?

  1. homescreen access – less typing, more tapping. Chrome can now suggest a user adds a regularly visited site to their home screen. Then you can launch that website full screen. They've moved lots of metadata out into the manifest: <link rel="manifest" href="manifest.json">. “Progressive apps” – you start using them as sites, and they start working like apps as you use them.
  2. push notifications – equal access to system UI matters hugely to devs. Chrome can now let a site prompt the user to ask if they want notifications. The notifications work even when Chrome is closed. This doesn't require an install step.
  3. offline support – it isn't an app if it doesn't start when you tap. Google Gears and AppCache were failures as they assumed far too much about what an app was going to look like. Now they're doing something different with service workers. A service worker loads asynchronously after the first site load; then it can intercept new network requests and show the user a response even if the network isn't available, so you can still load the shell of the app (works in Chrome and Firefox). Service workers are network progressive enhancement: sites still work in browsers that don't support them. They don't intercept the first-load experience – you still need that to work. After that, you can extend users' patience for load time, because they see a response quickly even when there's latency.
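As a hedged sketch of that "app shell" idea (file names and cache contents here are invented): the worker caches a shell on install, then serves from cache with a network fallback.

```javascript
// sw.js -- hypothetical app-shell service worker.
// On install, cache the shell; on fetch, serve from cache, falling back
// to the network. The first load is never intercepted because a worker
// only controls pages after it has installed.
const CACHE = 'shell-v1';
const SHELL = ['/', '/app.css', '/app.js'];

function onInstall(event) {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(SHELL)));
}

function onFetch(event) {
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
}

// Guarded so the sketch is inert outside a worker context.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', onInstall);
  self.addEventListener('fetch', onFetch);
}
```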


Alex Sexton – front-end ops (update!)

Quick recap: Alex's article appeared in Smashing Mag in 2013; the FeOps conference followed in 2014; then people started hiring for roles called front-end ops... naturally, naming front-end ops was the hardest part.

So what is front-end ops? Serving web pages is really hard. Front-End Ops is the collection of things you can do to make serving web pages easier.

It frees other devs from having to think about deployment stuff all the time. Developers split time between writing their app, and working on a great deal of other stuff which is not the app but still has to be done to get the app live.

FeOps also addresses the problem where it's considered normal for frontend developers NOT to be monitoring their application in production. For backend devs that would be insane, so why is it normal for frontend? Particularly now that app logic is being pushed to the client via client-side MVVM/MVC/etc.

Backend programmers habitually automate stuff like error logging, lifecycle logging, application measurement over time (eg. is the app getting faster or slower?)

Performance is the basis of UX; speed is how we measure that. Everything can be measured and tracked, but it's important to read that information correctly.

Tools not rules.

Make computers do the hard work. Use tools to evaluate things you are concerned about. There are testing tools for pretty much everything.


Step 1: forget everything you know, because it's wrong. All the stuff YSlow taught you is wrong.

Step 2: it's probably the network

Step 3: read High Performance Browser Networking by Ilya Grigorik to fix that...

(missed the others)

Then http2 will come along so we'll have to forget it all over again!

Use Chrome dev tools to simulate slow networks. “If you throttle everything to 3G all the time, you'll do a pretty good job of fixing speed problems...”

Theoretical graphs (timelines):

  • show the load time of different devices on a scale (ie. Laptops are quicker than mobiles..)
  • show the load time of different geographical regions
  • show the load time of different network types (wifi, 3g..)

Theoretical graphs:

  • plot the load times of desktop and mobile separately

Measure twice, optimise once!

Make a dashboard. Get this stuff visible and easy to read:

  • speed index over time (...and then mark commits/deployments)
  • graph your competitors' speed index over time!
  • Page weight over time
  • File requests over time
  • Errors – IF YOU TAKE ONE THING AWAY... log errors! (eg. New Relic)
  • Build time over time / general speed of tools
  • Cache everything – don't do the same thing twice

Speed of development = developer happiness.

Use component libraries.

  • They let you focus on re-use, adaptability, performance, test things thoroughly, etc...
  • Use your best days to build up your components, so you don't ruin your app on your worst days.
  • Build responsiveness and accessibility into your component library.
  • Combine documentation and code samples
  • bonus: you get consistency for internal apps

How to make one:

  • choose a preprocessor (they liked kfcss for light preprocessing, Suit CSS for scoping styles and linting them)
  • add your own tests and checks – eg. CSS Colorguard, you can test css!
  • Alex created a tool that converts :hover to .pseudo-hover so they were easier to test. Don't ship that CSS, it's just for testing.
  • Tool to automatically check in a screenshot of the changes that result from code changes.
  • Then... convert your CSS components into JS components (for your JS app) – that is, template it. Never use the patterns directly!

Spare no expense in your tooling! Automate everything, cache everything!

Future considerations: http2 support, async loading, non-js dependencies, web components...

The ultimate idea here is to have machines take the load off the humans, so they can focus their energy on the people using the application rather than the application itself.

(refer to slides for the huge list of tools)


Q: are there any really good resources to learn the deep depths of Chrome Dev Tools?

A: follow talks from Addy Osmani and Paul Irish, they do a good job showing off the shiny stuff.


Chris Roberts – offline with the service worker

Australia is still a very disconnected country. We have large areas with no data network. So Chris's example of needing to check a hotel booking on a plane is pretty common and will be a massive issue for many regions as they start getting online.

Expedia's site stores the user's most recently accessed information, as it's likely they will want it again.

Offline First: the next progressive enhancement technique. “Or perhaps it should be 'unreliable connection first'...”

Don't think of offline as an error situation – we have responsibility to handle the case. Time will not solve this problem – as time goes on, we won't suddenly have 100% of people on great connections. As time goes on people connect in more and more different ways.

Offline strategies:

  • reduce reliance on the server
  • sync when possible – make use of good connections
  • gracefully resolve conflicts
  • query device APIs

A great person to follow on this stuff is Jake Archibald. Also check out his “Offline Cookbook” from 2014 (on his website).

Service worker life cycle:

  • registration
  • installation/cache open
  • proxy the network
  • termination
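The page-side half of that life cycle can be sketched like this (the file name and scope are assumptions; the key point is that registration is harmless on browsers without support):

```javascript
// Register a service worker; a no-op where unsupported, which is what
// makes this progressive enhancement rather than a hard dependency.
function registerWorker() {
  if (typeof navigator === 'undefined' || !('serviceWorker' in navigator)) {
    return Promise.resolve(null); // unsupported: the site still works
  }
  return navigator.serviceWorker
    .register('/sw.js', { scope: '/' })
    .then((reg) => {
      console.log('service worker registered with scope', reg.scope);
      return reg;
    })
    .catch((err) => {
      console.error('service worker registration failed', err);
      return null;
    });
}
```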

Browser support:

  • Chrome and Opera yes, now!
  • Firefox soon
  • IE under consideration for Edge
  • Safari no

(John Allsopp notes afterwards – a great way to influence browser vendors is to use things and show the need and demand. Also as this is an enhancement you don't have to wait for full coverage to start giving the benefit to people who can use it.)


Q: do any frameworks have good support for service workers?

A: there are some, but in the meantime work on identifying the shell of your app that you should be serving while offline.

Q: any restrictions on what or how much you can cache?

A: pretty much any http request can be handled; noting CORS impacts


Simon Knox – media streams api (beyond video chat)

We need to escape the reliance on apps for video/access to the camera. Why do we need an app to read the humble QR code?

WebRTC 101 – lets browsers do live video and audio. You can also do screenshots and animated GIFs. The API is a bit messy (use a polyfill) and the interface is a little variable, but it's great for things like profile photos – we can't assume users have avatars sitting on their device.
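A hedged sketch of the profile-photo case – grabbing one webcam frame as an image. (Browser-only APIs; shown with the now-standard promise-based getUserMedia, which in this era typically meant a polyfill or vendor prefixes, as the talk notes.)

```javascript
// Capture a single webcam frame as a PNG data URL using getUserMedia
// plus an off-screen <video> and <canvas>.
async function captureFrame() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play();
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  stream.getTracks().forEach((track) => track.stop()); // release the camera
  return canvas.toDataURL('image/png');
}
```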

Audio API hooks in really well.

These APIs let you do lots of really cool stuff right now – get in there and play with them!


Jonathon Creenaune – web components

Web Directions Code 2015
(JC won the tshirt stakes. Photo by Steven Cooper.)

Works at Atlassian on their style guide and the UI library that implements it (AUI).

His message: you can start using web components today!

Demo: JC's meme generator web component. “Y U NO PUT STRUCTURE IN DOM!” (bitbucket:jcreenaune/meme-or-die)

It's all DOM. The structure, events and methods are all DOM.

So why would you do this stuff?

The raw markup of a library has a lot of boilerplate and code you can't change. It's nice to abstract that away, so people supply only the information actually needed to make things work; it also makes the API clear.
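A hypothetical sketch of that abstraction as a custom element (the element name and markup are invented, and this uses the era's v0 document.registerElement API – roughly the layer that skate.js builds on):

```javascript
// Usage in markup: <x-meme img="cat.png" caption="Y U NO"></x-meme>
// The consumer supplies only the data; the boilerplate lives here.
function defineMemeElement() {
  const proto = Object.create(HTMLElement.prototype);
  proto.createdCallback = function () {
    this.innerHTML =
      '<figure class="meme">' +
      '<img src="' + this.getAttribute('img') + '" alt="">' +
      '<figcaption>' + this.getAttribute('caption') + '</figcaption>' +
      '</figure>';
  };
  return document.registerElement('x-meme', { prototype: proto });
}
```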

[Aside: essentially web components are a universal templating option. Otherwise you need to build out a set for every stack that needs it.]

Web components spec is pretty huge – currently they're just using Custom Elements.

Atlassian created skate.js (IE9+). They're not providing an API to style the component, because that's directly opposed to having a style guide.

So... what sucks about this stuff?

  • Client-side performance – a lot of Atlassian's products have very big DOM views (think Bitbucket diff pages). They had to do heaps of optimisation.
  • Server-side rendering – still an unanswered question, SEO could be a problem
  • Trying to link the original API to the exploded component, eg. For ARIA attributes which need to use common IDs to link elements together.

Web components = separation of interface and implementation of HTML.



Mark Nottingham - HTTP/2 for front-end devs

“in one slide...”

  1. A fully multiplexed binary protocol
    • one connection per origin (better network utilisation)
    • browsers don't have to guess when/where to send requests
  2. ...with header compression
    • many requests in one packet
  3. ...and server push
    • put a response in the browser cache before it knows it needs it

Just by turning it on you can expect ~5-15% performance improvement in most cases. You can do much better by tweaking and tuning.

HTTP/2 is the same for APIs, headers, methods, etc.

You might need new URLs – it requires an encrypted connection, which means in effect https.

Recommendation: move to https now.

HTTP/1 optimisations that no longer work:

  • spriting, inline, concatenation
  • these things reduce the granularity of your code
  • that means this hurts you with http2

You can either serve differently to http2 capable browsers; or you can aggressively push on with http2, but the recommendation here is to get a strategy in place.

Think about prioritisation... in http1, browsers are responsible for deciding request priority and ordering. Http2 gets the server to do it. Look for APIs, data and studies. We don't have a lot of knowledge in this area yet as we haven't had access to do it. You can get this really wrong! It's a lot like chunking... get it wrong and you do a lot of harm.

Server push – in theory can save you a round trip by putting something into cache before the browser needs it. This is so new we don't have any data on how best to use this. Try it, measure, but be careful.

Header compression – it is coarse grained, it works on an entire header. Any header variation blows this up and you don't get any compression. Avoid header variability.

Quick points...

  • TLS – this needs tuning.
  • Client certificates don't work with http2

More reasons you might need new URLs...

Spreading traffic to multiple hosts hurts perf in both http1 and http2. Sharding etc can cause flooding and congestion. You can use connection coalescing if DNS and certificate agree. Recommendation: there can be only one (hostname).

DoS protection – the bad news is HTTP/2 needs more state. The good news is it lets you bound connections.

Connections are long-lived in http2, so you need to architect to accommodate this. It can mess up DNS load balancing (use GOAWAY and soon ALTSVC to manage this).

More servers or less? It's not totally nailed down yet. Twitter was able to reduce their servers; but it's so situational proceed with caution.

Key point: we are just getting started. Give this a year and people are going to have heaps more data, open source tools, frameworks etc.

(lots of deep detail in these slides, if you need the details definitely look this up)

Rhiana Heath – pop-up accessibility

Websites use modals and popups a lot... but screen readers tend not to know about them as there's no really standard way to do them.

She was working on a project using Bootstrap – which has been improving, but still isn't perfect.

Tested with JAWS and NV Access's NVDA – these two cover about 70% of screen reader users. VoiceOver is also quite popular; ChromeVox is nice but has less than 1% usage.

WAI-ARIA sends extra information to the browser, which helps with non-standard interactions and content.

Bootstrap has <div class="modal" role="dialog">... the advice was actually to remove the dialog role, as it forces all-in-one reading. Otherwise you need to also insert role="document" to reinstate the usual fine-grained control.

Mistake: used aria-hidden="false" and display:none together... it didn't work on the big screen readers. Correction: use the clip-rect “visually hidden” style.

aria-label is useful for giving extra cues, similar to title but without triggering tooltips.

ARIA implementation is often as simple as updating your existing show/hide functions so they also update aria-hidden's boolean. It's not hard at all.
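A minimal sketch of that pattern (the class name and element handling are assumptions – the key point is that visibility and aria-hidden change together):

```javascript
// Toggle a modal's visibility and its aria-hidden state in one place,
// using a "visually hidden" class (assumed to be the clip-rect pattern)
// rather than display:none.
function setModalVisible(modal, visible) {
  modal.setAttribute('aria-hidden', String(!visible));
  modal.classList.toggle('visually-hidden', !visible);
}
```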

To keep focus inside the dialog – focusable hidden element at start and end of the dialog, plus a third to close it.

Screen readers read things fast – it's hard to get used to when you don't use a reader all the time, but people who do use them run them even faster than the default.

(check slides for a demo of NVDA in action)


Warwick Cox – Console dot

Awesome things you might not be using right now in Chrome dev tools:

  • console.clear()
  • console.count(label)
  • console.dir(object) – prints out a DOM object
  • console.dirxml(object) – same thing with xml
  • console.error and console.warn – these are useful debuggers
  • console.group(label) – the following console output will be grouped, so you can create a collapsed set of debug information
  • console.groupEnd() - closes a group. Optional but lets you control things nicely.
  • console.profile() and console.profileEnd() - sends data to the profile tab.
  • console.time(label) and console.timeEnd(label) - sends data to the timeline tab.
  • console.table(object) – needs an array of objects, prints them out in a table with their keys. Doesn't print nested objects. You can control what's printed.
  • console.log() - prints a string. You can use %s and %d etc for fine grained control.
  • console.log("%cHack","color:red") You can also style the output (expedia have a console easter egg using this).

Just google “console api” as you'll never remember the URLs in the slides!
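A quick sketch combining a few of the calls above (output formatting varies by browser; the data is invented):

```javascript
// Group related debug output, render objects as a table, and time a task.
console.group('users');
console.table([
  { name: 'Ada', role: 'admin' },
  { name: 'Grace', role: 'dev' },
]);
console.groupEnd();

console.time('render');
// ... do some work here ...
console.timeEnd('render'); // prints the elapsed time for "render"
```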


Simon Swain – Canvas

16ms to jank! That's our budget.

Canvas transform/translate commands are the key to really understand how to get canvas to do cool stuff. They let you do transforms with a minimum of mathematical calculations.

Classic example is the swarm simulation: you don't define the behaviour, you define the conditions that the actors will respond to and what they will do. Then you see emergent behaviour.

This is how you can build up the Cold War game, where the nation states have a variety of weapons and counter-weapons. You can then run your simulation without actually knowing the result.

(look up the video for this one!)


Rachel Nabors – State of the Animation

Web Directions Code 2015
(photo by Steven Cooper)

Flash may be gone, but the era of web animation has just begun. We still really think of animation as decorative rather than useful.

“Animation is the high road to the brain's GPU – the visual cortex.”

Large companies are now putting serious effort into their design guidelines for animation – Google, Apple, IBM all have detailed guidelines.

Kinds of animation:

  • static – they run from start to finish without variation (start → end) (spinners are static. Start-end-loop)
  • stateful – (default → event → predefined state)
  • dynamic – multiple factors affect events, so there are multiple possible results rather than a predefined state. They determine their own behaviour and end state.

Multi-state animation is not dynamic. You can think of a Venn diagram, with stateful animation in the intersection of static and dynamic.

CSS devs naturally gravitate to stateful animation as they work well with classes; which means they degrade gracefully.

Animation libraries (just mentioning two as there as so many) – Greensock (GSAP) is the most flash-like. Velocity.js is very familiar to jQuery users.

There are a lot of organisations with great internal animation libraries... but they tend to have a commercial advantage to keeping them internal, so they don't end up being open source.

A spec you probably haven't heard of: Web Animation API. It adds lots of features that simply aren't available in CSS animation. It's a long spec though - “it's a two international flights spec...”

Naturally there is a polyfill (web-animations-js).

Performance is important, as jank breaks the benefits of animation for cognition. Hence Flipboard's 60fps requirement – they consider janky animation an accessibility issue. But they used canvas, which left people with nothing on other accessibility fronts.

The Web Animation API accesses the same rendering engine as CSS, so it will have much better performance.

Reflows are a rendering engine problem. The vendors are working on it.

The css property will-change is important to learn at this point.

The various players need to work together:

  • UI Designers – get up to speed on the libraries
  • Devs – check out polyfills, give feedback
  • Animation library devs – read the spec, give feedback
  • Animation spec authors – find better ways of taking feedback, meet people in the field

Discrimination between roles is harmful. People leave the conversation and community, then we end up reinventing and re-learning lessons.

Special announcement – Rachel is an invited expert for the W3C WG for Animation. You can talk to her. She is listening!



(Comment during this time) We are not our tools! We should identify ourselves according to what we make, not the tools we use.

Q: how do you avoid annoying animation?

A: if people notice it then it probably needs to be reviewed.

Day Two

Elijah Manor – Eliminating JS Code Smells

Convoluted Code Smell – too many statements, too much depth, too complex. But what does “complex” mean? JSHint can tune for these factors and give a complexity score. It can throw errors when there are too many arguments or callbacks, etc... Then when you write tests and refactor, you can put numbers on the reduction in complexity.

Copy Paste Code Smell – duplication in code. There are tools like Jsinspect, JSCPD, CodePen which can detect copy+paste code. You can adjust how exactly it needs to match.

Switch Statement Smell – when code violates the Open/Closed Principle (OCP), one of Uncle Bob's SOLID Principles. Instead you can use the Strategy Design Pattern to add new code without touching existing code. Sadly... no tooling for this.
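A hypothetical sketch of that refactor (the shipping-method domain is invented): behaviour lives in a lookup table, so adding a case is additive rather than editing a switch.

```javascript
// Strategy pattern: each "case" is a function in a lookup table.
const shippingStrategies = {
  standard: (weight) => weight * 1.0,
  express:  (weight) => weight * 2.5,
};

function shippingCost(method, weight) {
  const strategy = shippingStrategies[method];
  if (!strategy) throw new Error('Unknown shipping method: ' + method);
  return strategy(weight);
}

// Extending is additive -- no edits to shippingCost (open/closed):
shippingStrategies.overnight = (weight) => weight * 4.0;
```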

Edge cases? ESLint and JSHint can be disabled for a few lines if you need to.

The This Abyss – this=that, etc. Alternatives... bind, second param on forEach, ES6 fat arrow =>...

Long String Concatenation Smell – ugly string concatenation with lots of quoting. Alternatives? Thomas Fuchs' tweet-sized JS templating engine; ES6 template strings, including multiline; or use a full templating or MVC option.
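For example, an ES6 template literal keeps multiline markup readable (the names here are invented):

```javascript
const user = { name: 'Ada', items: 3 };

// Old-style concatenation: quote-juggling and no line breaks...
const before = '<li class="user">' + user.name + ' (' + user.items + ')</li>';

// ...versus a template literal: multiline and interpolated.
const after = `
  <li class="user">
    ${user.name} (${user.items})
  </li>`;
```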

jQuery Inquiry – excessive use of fluent code, where everything is chained. The problem is silent error handling in long chains of code. Instead, refactor out to use delegated events, and put less in document ready.

Temperamental Timer Smell – use of setInterval, which then gets out of sync. Use setTimeout with a callback instead, so you can't get out of sync.
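A sketch of that self-rescheduling setTimeout pattern: the next tick is only queued once the current one finishes, so slow work can't cause overlapping or piled-up runs the way a drifting setInterval can.

```javascript
// Run `task` roughly every `interval` ms; returns a function that stops it.
function repeat(task, interval) {
  let stopped = false;
  function tick() {
    if (stopped) return;
    task();
    setTimeout(tick, interval); // reschedule only after the work is done
  }
  setTimeout(tick, interval);
  return function stop() { stopped = true; };
}
```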

Repeat Reassign Smell – repetition. Lots of options, although many feel just as repetitive... using reduce works, but he ended up using lodash's flow.

Incessant Interaction Smell – code that needs to be debounced (autocompletes, listening to scroll events, etc). This is better than throttling as it waits for a pause from the user before making calls.
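A minimal debounce sketch (the wait time and the handler named in the usage comment are invented): the wrapped function only fires once calls stop arriving for `wait` ms.

```javascript
// Debounce: every call restarts the clock; fn runs after a quiet period.
function debounce(fn, wait) {
  let timer;
  return function (...args) {
    clearTimeout(timer); // cancel the pending call
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// e.g. only hit the autocomplete endpoint after the user pauses typing:
// input.addEventListener('input', debounce(fetchSuggestions, 250));
```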

Anonymous Algorithm Smell – when there are anonymous functions used in ways that mess up stack traces, code reuse, etc. Wherever you currently use an anon function, you can name it.

Last thoughts... Eslint is pluggable, so if you want to test something in particular you can write your own (or just as likely someone else has already done that). There are plugins for angular, react, backbone... npm search for eslint plugins!

(lots of code in this one, if you are interested look up the slides)



Q: what's the worst code smell you've encountered?

A: have done all of them... but the most teachable would be the this=that. Most are subjective, best to treat them as learning opportunities.

Q: is there a code smell for “over engineering”? Too much abstraction, etc?

A: yes... although there's a balance of needing to ship vs understandable code; and there are few smells that are always wrong.

Clark Pan – ES6 Symbols

Web Directions Code 2015
(photo by Steven Cooper)

To understand symbols you need to go back to basics of how objects work... and not work!

The ways js handles objects and strings can create unintended effects (presso shows example with values being overridden).

Symbols are a new primitive type – i.e. calling typeof on one will return “symbol”.

Properties keyed by a symbol can only be accessed via that symbol, which protects the values behind them.

Use cases?

Because symbols are unique, there is no risk of collision. This makes them useful for storing state within web components; and can be used to make private values – that is, avoid polluting your public API. It is much like using jQuery's .data but simpler.

ES6 has also defined built-in symbols. They allow you to hook into language features being exposed in ES6:

  • for of loop (distinct from for in; doesn't need to access the index)
  • symbol.iterator
  • instanceof
  • more to come as the language evolves

The biggest use case is encapsulation without (ab)using closures. The people consuming your code don't necessarily want to see everything, and you don't want everything in your public API. To hack around this, we have been using closures to make things pseudo-private. That has lots of side effects – hard to unit test, hard to debug, bad for code re-use, and a small performance penalty.

Symbols solve encapsulation nicely, letting you set a const and refer to it via symbol. (see slide for pattern)
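A sketch of the idea (not the exact pattern from the slides – the Account example is invented):

```javascript
// A symbol-keyed property: unique, non-colliding, and invisible to
// ordinary string-key enumeration like Object.keys().
const _balance = Symbol('balance');

class Account {
  constructor() { this[_balance] = 0; }
  deposit(amount) { this[_balance] += amount; }
  get balance() { return this[_balance]; }
}

const acct = new Account();
acct.deposit(50);
console.log(typeof _balance);   // "symbol"
console.log(Object.keys(acct)); // [] -- the symbol key doesn't show up
console.log(acct.balance);      // 50
```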

Browser support – great except for IE :( Useful in io.js or node with the --harmony flag


Ben Teese – ES6 Promises

Why promises? What problem are we trying to solve here? JS is fundamentally asynchronous; and async is hard to do well.

Quick overview...

What's a promise? The formal definitions are very dry and not always easy to understand. A good way to understand is to compare code with/without promises. Compare ye olde jQuery ajax vs the newer fetch API (fetch/then/catch example)... not hugely different.

But then you get into nested ajax calls... and you quickly get into Callback Hell(tm) aka the callback pyramid of doom. Nesting a fetch call is cleaner – (fetch/then with return fetch/then/catch).

Browser support? Latest browsers have them; and there are polyfills, transpilers and frameworks. Commonly people do use them in Node as well.

Deeper dive...

Chaining... then() and catch()

Composition... then and catch only take functions. They can return a promise; return an exception; or return a value.

Throw an exception... so long as you return correctly, you can pass meaningful errors.

Note that when you start getting something that looks like the pyramid of doom, you can use Promise.all to simplify the code.
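A sketch of that flattening (the fetch-like helpers are stand-ins): two independent requests run in parallel and their results arrive together.

```javascript
// Stand-ins for real fetch() calls.
function getUser(id) { return Promise.resolve({ id, name: 'Ada' }); }
function getOrders(id) { return Promise.resolve([{ id: 1 }, { id: 2 }]); }

function summarise(userId) {
  return Promise.all([getUser(userId), getOrders(userId)])
    .then(([user, orders]) => `${user.name} has ${orders.length} orders`);
}

summarise(7).then((text) => console.log(text)); // "Ada has 2 orders"
```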

A side effect is you can clearly document things in terms of inputs and outputs, and be really clear on the conditions that will affect it.

Unit Testing... because we are focusing on inputs and outputs, testing can be pretty clean. You can dummy up a response easily by using Promise.resolve(), which creates an immediately-resolving promise.

Promises are awesome!

  • chainable & composable
  • reinstate input/output semantics
  • testable & documentable

They are definitely here to stay – it's time to get across them!


James Hunter – Async & Await

Web Directions Code 2015
(photo by Steven Cooper)

Warning to functional programmers: this talk is all about side effects. Sorry!

Talking about code styles as personified friends... pretty funny ;)

But why does async have to be so hard? Why can't we write nice simple async code?

Well, you can – eg. with libraries like Bluebird to set up coroutines.

But then... what is this black magic? IT'S A HACK!

Three main points....

  1. iterators (control the rate data is returned)
  2. generators
  3. generators again! They have an advanced mode where you can feed values back in as well as get values out

ES2016 brings in async and await to make things much clearer and easier to write (and understand).

function steve (x) {
  const y = inc()
  return x + y
}

async function superSteve (x) {
  const y = await inc()
  return x + y
}
(Lots of code examples, easier to look up slides!)


Domenic Denicola – Async Frontiers

The usual framing is the four-quadrant sync/async, singular/plural matrix. But this is such a gross over-simplification it's not really useful.

Two axes of interestingness:

  • asynchronicity
  • pluralness

Promises represent the eventual result of an asynchronous action, but they are not a representation of the action itself.

Promises are multi-consumer; but consumers can't cancel the action.

Cancellable promises (“tasks”), appearing in fetch first. Promise subclass where consumers can request cancellation, while still allowing branching etc.

Also adding finally() to all promises (then/catch/finally). Finally always happens where then and catch may not.

Plural+Async... I/O streams, directory listings, sensor readings, events in general... this combination is actually pretty common.

(Extremely concept-dense talk so get into the slides. Lots of good detail on the design decisions that go into writing specs; including the evergreen “how to name it” problem and how to deal with the different points of view... eg. the python mindset vs the C mindset on the committee.)

Lots of inspiration (see the slides).

Is all this urgent? Mostly not. Streams are pretty urgent so they are working on that as fast as they can go; but many other considerations are interesting and potentially useful but we have usable options in the meantime.


Mark Dalgleish

(Couldn't use the laptop during this one, but it will be worth reviewing on video!)

Alex Mackey - Maths in Javascript

One person suggested this talk should be “don't use JS for maths...”

The most reported bug in JS: 0.1 + 0.2 != 0.3 (you get 0.30000000000000004 instead). What is actually going on here?

A brief history of maths... we have had different number systems through history; and new understandings and rules on how to use them.

Humans like to count in 10s. Nobody really knows why but it's probably to do with having ten fingers. Computers prefer 2s (powers of two underpin binary).

IEEE-754: a non-free standard that defines formats for numbers, interactions, and so on. It's used in lots of modern programming languages and devices. It is a little bit like scientific notation: 5.6 × 10^-9.

  • The sign: positive or negative (1 bit)
  • Exponent: the stored exponent minus a bias (very hard to write this bit down!)
  • Mantissa: always assumed to be preceded by “1.” (one-dot)
  • Special values: NaN, Infinity, 0, -Infinity, -0.
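A sketch (mine, not from the slides) that peeks at those fields with a DataView – the 64 bits split into 1 sign bit, 11 exponent bits, and 52 mantissa bits:

```javascript
// Decode the sign and unbiased exponent of -0.5 from its raw bits.
// -0.5 is -1.0 × 2^-1, so we expect sign = 1 and unbiased exponent = -1.
const buf = new ArrayBuffer(8);
const view = new DataView(buf);
view.setFloat64(0, -0.5);            // big-endian by default

const hi = view.getUint32(0);        // top 32 bits: sign + exponent + mantissa start
const sign = hi >>> 31;              // 1 bit: 1 means negative
const biased = (hi >>> 20) & 0x7ff;  // 11 bits: exponent stored with a bias of 1023

console.log(sign);          // 1
console.log(biased - 1023); // -1
```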

There is an IEEE-754 calculator online which helps understand this system. So why do we care?

All numbers in JS are 64 bit IEEE-754 floating point values.

Value Representation

  Original   Stored as
  0.5        0.5
  0.25       0.25
  0.1        0.100000000014351 (not the real number used in the presentation)

Some numbers don't store quite right. Think of it like trying to make 43c in Australian currency – you simply cannot do it, because there are no 1c or 2c coins, so no combination of coins adds up to that value.

So we get some calculation problems.

Integers are always stored exactly, but people tend not to work purely in Ints because it's a little clunky. Much better to use a library:

  • bigdecimal.js (beware: not easy to read or understand)
  • math.js
  • decimal.js
  • big.js

Issues with this? Speed – the libraries are not as fast as native. Also you generally can't do things you'd expect like x + y; you have to do things like sum(x, y) or similar syntax.
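One library-free alternative, sketched here as my own example rather than anything from the talk: since integers (up to Number.MAX_SAFE_INTEGER) are stored exactly, money maths can be done in whole cents. addDollars is a hypothetical helper:

```javascript
// Work in integer cents, then convert back; this sidesteps the
// 0.1 + 0.2 problem for currency-sized values.
function addDollars(a, b) {
  const cents = Math.round(a * 100) + Math.round(b * 100);
  return cents / 100;
}

console.log(0.1 + 0.2);            // 0.30000000000000004
console.log(addDollars(0.1, 0.2)); // 0.3
```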

Comparing numbers can be made more predictable by including a margin of error (the margin is usually denoted with the epsilon symbol, so expect to see ε in examples).
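A minimal sketch of that comparison (nearlyEqual is a hypothetical helper of mine; Number.EPSILON is the real ES2015 constant – the gap between 1 and the next representable double):

```javascript
// Treat two floats as equal if they differ by less than a tiny epsilon.
function nearlyEqual(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) < epsilon;
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```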

But what about the future? These things aren't great... so what might happen:

  • Decimal type: a long-standing debate in the ECMAScript community – it has been discussed since v3 – but there is lots of concern about interoperability.
  • Value types proposal
  • ES2016? Unlikely.

Other issues not even discussed...

  • rounding
  • exceeding limits (eg. Number.MAX_SAFE_INTEGER + 2)
  • problems parsing JSON
  • parseInt and parseFloat (eg. parseInt's radix param)
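Two of those gotchas, sketched (my examples, not the speaker's):

```javascript
// Past Number.MAX_SAFE_INTEGER (2^53 - 1), adjacent integers collapse
// into the same double – and parseInt is safest with an explicit radix.
console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991
console.log(9007199254740993 === 9007199254740992); // true – precision lost
console.log(parseInt('08', 10));                    // 8 (explicit radix)
```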


  • all numbers stored in IEEE-754, no matter what anything ever tells you
  • some values can't be represented in IEEE-754
  • libraries can help
  • one day the value type proposal could enable quite a lot of new things



Andy Sharman – Classes in ES6

Let's remember a little JS history:

  • Inline scripting used to be the thing (embedded in markup).
  • Then we started using the prototype.
  • Then we started using JS to fix browser shortcomings (jQuery).
  • Then we started building on top of JS itself (client-side mvc, templating)

Through these latter stages the libraries were creating classes – but they weren't true classes, so to speak.

ES6 Classes give...

  • inheritance
  • constructors
  • supers
  • getters/setters
  • default params
  • rest params (allows you to receive an unknown number of params)
  • arrow functions (=>)

Isn't all this just syntactic sugar? Yes... to an extent... but it helps create clean, understandable code. While this is possible in prototypal JS it's easy to go wrong, or it takes a lot of time.
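A small sketch (mine, not the speaker's code) touching most of the features listed above:

```javascript
class Shape {
  constructor(name = 'shape') {  // constructor with a default param
    this.name = name;
  }
  get label() {                  // getter
    return 'a ' + this.name;
  }
}

class Polygon extends Shape {    // inheritance
  constructor(name, ...sides) {  // rest params: any number of side lengths
    super(name);                 // super call into Shape's constructor
    this.perimeter = sides.reduce((sum, s) => sum + s, 0); // arrow function
  }
}

const tri = new Polygon('triangle', 3, 4, 5);
console.log(tri.label);      // "a triangle"
console.log(tri.perimeter);  // 12
```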

Support? Evergreen browsers, Edge, node/io.js... but there is a lot of red in the support chart.

You end up needing a transpiler, but good ones exist (Babel, Traceur) – so why not?


Jess Telford – scope chains and closures without the hand waving

Web Directions Code 2015
(photo by Steven Cooper)

1. Scope

  • var – function scoping
  • let – block scoping
  • const – block scoping

1.5 Hoisting

  • JS does 2 pass parsing.
  • First pass (hoisting) gives scope – variables get 'hoisted' up to the top of the function.
  • This is where lots of confusing bugs begin. Google this if you aren't comfortable with it.
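A quick sketch of the effect (my example):

```javascript
// The `var x` declaration is hoisted to the top of the function during
// the first parse, so x exists (but is undefined) before the line that
// assigns it – no ReferenceError.
function demo() {
  const beforeAssignment = typeof x; // "undefined": declaration hoisted
  var x = 5;                         // assignment happens here, in place
  return [beforeAssignment, x];
}

console.log(demo()); // ["undefined", 5]
```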

1 Scopes...

  • Scopes can be nested
  • You can access outer scope but not inner scope.
  • You can have multiple inner scopes
  • This does indeed look like a tree of scopes when you look from outer to inner (top down)
  • When you look the other way, inner to outer, you get a chain (bottom up) not a tree

2 Scope chains

  • If you look for a value, JS walks up the scope chain; if nothing has that value you end up in the global scope.
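Sketching that walk up the chain (my example): `inner` finds `a` in its own scope, `b` one level up, and `c` in the global scope.

```javascript
// Lookup walks from the innermost scope outward until a match is found.
const c = 'global';

function outer() {
  const b = 'outer';
  function inner() {
    const a = 'inner';
    return [a, b, c]; // resolved from three different scopes
  }
  return inner();
}

console.log(outer()); // ["inner", "outer", "global"]
```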

3 Closures

  • Closures occur when an inner scope references a variable in outer scope.
  • This closes over the referenced variable.
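A sketch of closing over a variable (my example, not from the talk):

```javascript
// The inner function references `count` from the outer scope, closing
// over it – so the variable survives after makeCounter returns.
function makeCounter() {
  let count = 0;
  return function () {
    count += 1; // outer-scope reference: this is the closure
    return count;
  };
}

const next = makeCounter();
console.log(next()); // 1
console.log(next()); // 2
```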

4 Garbage Collection

  • GC kicks in when all the code that might need a value has been executed

5 Shadowing

  • This one is a good concept but a lot of people don't understand it too well
  • When a value is defined more than once in a scope chain, JS reaches the first one and stops; the top-most scope is not closed over.
  • You can avoid a lot of problems here by emulating hoisting and putting all your vars at the top of scope. Use a linter to enforce this!
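Shadowing in miniature (my example): the inner `name` shadows the outer one, so lookup stops at the inner scope and the outer variable is untouched.

```javascript
const name = 'outer';

function greet() {
  const name = 'inner'; // shadows the outer `name` for this scope
  return 'hello ' + name;
}

console.log(greet()); // "hello inner"
console.log(name);    // "outer" – unaffected by the inner declaration
```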

If you are still not sure:

  • npm install -g scope-chains-closures (self paced workshop)
  • Slides


Kassandra Perch – stop the fanaticism

Privilege – we all have it. Everyone here.

Tech culture is toxic. We're still arguing about Codes of Conduct?! We're still arguing about semicolons! We're still making fun of PHP and Flash devs! We still say “soft skills” like it's a bad thing. We constantly judge each other's experiences instead of accepting them as they are or attempting to understand or learn. We idolise jerks! It's gross!

We happily ignore it even though it's destructive.

Fanaticism – marked by intense, uncritical devotion.

We have all done this at some point!

It shows in the way we stick to our languages - “we're (language) developers!” - even though 99% of the time the language you use will not affect your project's success.

We need to stop telling people the language they like is wrong just because we don't like it. It's possible to discuss a choice with some objectivity.

Similarly nobody will force you to use a language feature – you don't have to use them.

Of course if you expect a programming language to stay the same, you're in the wrong profession.

So many of our arguments boil down to wanting to look smart. We sometimes argue passionately without being able to explain exactly why!

We hire weird archetypes – the hard-to-work-with genius; and the 10x engineer. Both of these archetypes are considered negative qualities in all groups other than young white men. This is called unconscious bias.

We can fix this. We can do a lot with language – we can frame things differently.

Take the fanaticism out. We need to be enthusiastic and encouraging. We don't need to love what we do at the expense of someone else; we can just love what we do.

Invite new people, teach things, discuss your tech, participate in open source if you want to.


Don't discourage “noobs”. Don't say “noobs”. Don't tell people RTFM when you don't have docs!

Be enthusiastic, but stop short of fanaticism.

(Kassandra leading the audience all together...) There is nothing wrong with being nice!