The speaking is done, the blankets are packed away (to save us from the freezing aircon again next year), the drinks are drunk, and planes have returned our friends to various corners of the globe.

It must be time for the Web Directions South 2012 Big Stonking Post™.

Pile of appropriately coloured blankets
(photo by webdirections)

These are my rough notes hammered out on a laptop (to aid my own recall) and posted here in case they're useful for you too.

The usual notes: they're done in a hurry, if you need exact quotes check the videos later. Presume everything is a paraphrase rather than a quote.

Jump menu is old school but hey it's a big post.

Josh Clark: Beyond Mobile – Where no geek has gone before

We are hurtling into an era of science fiction... right now! But what's keeping Josh up at night is what's next? What's going to happen post-mobile?

Let's look into the future and think about what it means for the work we do and the way we work...

First where are we now? Mobiles were the first truly personal computer – not just because they're always on us, but they know so much about us and they have so many sensors about what's happening to us. GPS, audio, video, motion detector, gyro... yet mobile is often considered the "companion" or light computing experience.

The question is not how to strip down an experience – not to do less on a mobile, but how do we do more on a mobile? How can we push the boundaries? How can we make use of information ghosts – all the information about us, about the events we're in.

We need to think beyond proximity – mapping things that are nearby has been a big focus so far. How else can these devices help us in our lives?


"Shopper" is an app which rearranges your shopping list according to the store you're in.

"IntoNow" is like Shazam for TV, so you can figure out which season and episode you're watching.

Then there's augmented reality, where we can add visuals to things around us. So far it's been fairly gimmicky in its implementation, but there have been some compelling examples, eg. Skinvaders which makes your face into the game environment.

"Word Lens" is a little more practical: it uses OCR to live-translate signs into different languages – even keeping the font.

This is really significant because it moves the interface off the device. The UI has to be designed for the environment around us, rather than the device screen.

"Table Drum" turns any surface into a drum kit. These apps are replacing more traditional input methods.

"Anytouch" uses the camera to turn anything into a game controller.

Of course anyone with a Kinect knows you don't even need objects to be controllers. Sometimes the best touch interface is no touch at all. There are other sensors that offer a more natural interaction than touch – touch is just the first mature sensor. Voice controls are still early days – but they are just around the corner, we can see them coming. Siri has opened up expectations.

Then there are the combinations to consider – we tend to think of designing for touch, OR speech, OR gesture... but they will develop together. Perhaps we will use a gesture to tell a device to listen to what you say next.

Gesture + Speech = MAGIC

Then there is the truly free-form world of custom inputs and sensors. The medical world is doing a lot in this area. There are experiments going on with embedding blood pressure sensors in the body, even trying to download their collected data via touch (that is, put your finger on a sensor and the data is transferred).

You can get a sensor that turns plants into touch inputs! Although it seems kind of crazy, why not access your calendar using the bamboo on your desk?

Farmers are trialling monitors on cows that detect when they're in heat and text the farmer. Cue the jokes...

Mirroring – sharing data between devices – turns dumb devices into smart ones. Link your sensor-laden phone with your traditional television. Then you have things like the Samsung TV that uses voice and gesture instead of a remote control.

"Everything around us could potentially become smart devices – it's always toasters and fridges for some reason – but what I really don't want is all of them on different OSes with different UIs to learn. I don't need more devices and OSes in my life!"

We get into the era of remote control.

Games often lead the way in this area – eg. games that turn your iPad into a controller for what's happening on the TV. ("MetalStorm" plane game)

Innovation is going on in proprietary zones, standards bodies are terrible places for innovation... but lots of things are being done that way before they get standardised (which will help them spread without fragmentation..?)

Microsoft is almost an underdog but they understand the importance of the ecosystem, so they're doing things like "SmartGlass" which makes your phone into a controller.

Then pushing further into the future – migrating interfaces. Where the interface adapts to where you are. The most common example is plugging your phone into the car, so the call comes through your car... but you can unplug the phone and continue your call just on the phone. The phone was handling the call all the while, but its control surface migrated to the car.

Putting a Siri button on steering wheels, while a sad example of being stuck in proprietary solutions... is a sign we're moving towards more powerful migratory interfaces.

Corning makes the glass on many touch devices... they made a concept video where they really tried to make it real and show how it would work in your life. Example of a bedroom mirror that's a huge display for your tablet.

This is what Microsoft is trying to do with Windows 8 – make an OS that works across a wide range of devices. This is something we will all have to design for in the next few years, it's a challenge that's waiting for us to tackle it.

Much of Corning's video is just not possible due to the cost of the materials. But higher demand usually leads to lower cost; so when might we really see this in the world?

Bill Buxton says 20 years from conception to widespread use. So the things we'll be using in five years have been in the lab for fifteen years! So we can look into the future now...

(slides stop showing.... "uh, we've lost my slides – this is also part of the future!")

Flipping contexts becomes very fun, very sexy! You can use a grab gesture at your TV and a touch on your phone, and it feels like you're putting screenshots onto the phone – it brings natural physical gestures to our devices.

"Sifteo" game cubes are little devices that are aware of each other and also connect back to your main PC.

Just in time, not just in case.

Our PCs are just in case – everything is on there just in case we need it. Even our phones are like that with apps. But Sifteo cubes just download the little bit of software they need at the time, then discard it to make room for the next thing. Think of The Matrix, where you download the knowledge you need right there and then.

(shows Matrix clip of downloading the ability to fly a helicopter)

"Gratuitous Matrix clip.... you're welcome"

Next we get passive devices. Things that are smart enough to just do what they do, passively – that is, without your direct intervention. An example is the "Nest" thermostat which is smart enough to detect you're home, use wifi to check the weather, etc. It's a smart-dumb device.

The device itself is very simple, fairly dumb, but connects to smarter devices when they are available. We think devices are going to get ever more powerful; but the truth is we're going to have a lot more dumb devices, that do less. That do one simple thing.

How do we design for this future? We can't be future proof, but we can be future friendly.

"metadata is the new art direction" - Ethan Resnick @studip101

We need to use metadata as one of our most important tools. We need to structure our data well, describe it well, set up an API for our content. Let the robots do the work! Metadata gives the machines information about how to format the content appropriately.

Example: newspaper. We know the importance, the editorial judgement, based on the layout. But how do we get that information out of the InDesign file? Do you just have an Editor for every possible stream and device? The Guardian did it with metadata. They put the editorial priority into their content and let each stream order appropriately – the iPad edition could be an entirely different layout but show the priority well for that stream.
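A rough sketch of what that kind of metadata might look like on a single story – the field names here are invented for illustration, not The Guardian's actual schema:

```json
{
  "id": "story-4411",
  "headline": "Floods close city rail loop",
  "priority": 1,
  "updated": "2012-10-18T09:30:00+11:00",
  "related": ["story-4387", "story-4390"],
  "assets": [
    { "type": "image", "src": "floods-hero.jpg", "crop": "16x9" }
  ]
}
```

Each stream – website, iPad edition, mobile – can then sort and lay out stories from `priority` and pick the assets it can display, without an editor hand-placing anything.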

Presentation deprecates! Our work goes out of date, yet that's where we tend to focus all our attention... as Tom Coates would say, "your product is not a website!". The individual containers of our services and content are not the product.

We need to look beyond the application we're working on today, to look at the big picture.

As designers we need to work together across the whole stack; backenders need to design the content storage to cope with multiple displays.

What do we do?

  • Push sensors – push the web to use things like cameras and other sensors and get access to things everywhere, not just on phones in native apps
  • Think social
  • Think about your ecosystem – you're not building an app or website, you're building an overall system that has to keep up with a multi-device future
  • We're all cloud developers
  • Mind your metadata – data is for everyone to design, you need to know you can get at your content and display it properly to any device
  • Think about new input methods – we think too visually right now, we think about "pointing at stuff" too much. For those of you who've been designing for screen readers, you're ready for when speech becomes the interface!
  • The future is here! Bill Buxton's 20 year rule tells us the labs are full of the future!

"We have the best jobs in the world! The coolest jobs in the world in the most exciting time in the history of technology!" Think about the near future as well as how to bring this to your current work. Make something amazing!


Charlie Gleason: How to go from Design to Dev

"Who is this talk aimed at? Me! It's stuff I wish people had told me as a designer starting to do development." Charlie is a designer/dev.

He loves code and highly encourages designers to learn it.

But wait whut, he wasn't good at maths and stuff... so why do this? You are not a bad designer if you don't code, but you'll be a better one if you do.

Disclaimers and uke chord warnings for the lolcats in the presentation. "Everyone wins, nobody throws fruit at me!"

"If you stuck around to do honors, or as I thought of it 'more wine and less growing up'..."

He was inspired to make postofficebox16, an art sharing project. He designed it but was frustrated he couldn't build it. So he tried to ask for help from a dev.

"How not to talk to developers 101"... a placement's art project isn't a high priority.

Learning how to learn – also important to ask "why to learn". Not everyone has to learn to develop... the question really is "do you have something to build"? ...and if not, maybe just make up something to build.

Embrace your ideas – the tram is never on time, so how could the internet help with that? Charlie once made a website for a cafe he liked. 5/10 it's a bad idea. 4/10 someone's already built it. The other 1/10 you won't be able to get the domain name. But that's awesome, it means you are working up to something great.

If you build it, they might come, but at least you learned something. Charlie built the 98 tram site partly because he wanted to learn how to screen scrape.

The internet is full of people who will helpfully and/or snidely answer your questions. So the only thing you really need is boundless enthusiasm for getting annoyed at your computer!

What to learn? HTML, CSS, JS. Maybe some PHP, it doesn't have such a high learning curve. Python is very hip. Or ruby – check out Why's (Poignant) Guide To Ruby. Don't get hung up on what people think of the languages, just start.

Be forewarned about languages: everyone hates every one of them. Every language is hated. Everyone hates everything. Haters gonna hate. If it works it works. Find out what your friends use, what your employer uses, what you find easy to pick up.

As long as you're thinking about it, writing it and enjoying it then it doesn't really matter.

Charlie likes ruby+sinatra+haml+sass+git+github. Loves the fact the goal of Ruby is to make people happy.

Be ready for the fact you will get bad feedback. Do whatever to make YOU happy. If people say your choice of language is bad, or your ideas are bad, or your content is bad... don't let that deflate you and your enthusiasm. Not all criticism is constructive!

An aside on how great you are. I think it's worth mentioning: anyone who tells you that the language you're using is terrible, or that the code you're writing is terrible, or that your ideas are terrible, is just really boring. Don't let that hot air balloon of your boundless self-esteem be deflated by cranky humans. You are great. You have great hair, and a great fashion sense, and you're doing good. There is a big difference between constructive criticism and being a giant jerk face. Life is short - if someone is making you feel crummy, they're probably not worth your time. End aside!

Tools to learn? The Internet! With an exclamation mark! (get list of specific resources from site)

"Everyone has to start somewhere."

There are sites, blogs, tutorials, books... ask your friends that make stuff! If you don't know people that make stuff, there are meetups for people who do; coworking spaces; podcasts...

Coworking spaces - "the best way to get involved in a community is to start living in that community".

OK so you've started making things and you broke it:

  1. You can walk away! Get away from the angry. Walk around the block, google lolcats. Charlie's office has a swing - "it's quite hard to be angry on a swing"
  2. Google it
  3. ask a friend, noting they will tell you to Google it
  4. ask Stack Overflow – but ask a sensible, detailed question
  5. find a mentor - "I'm yet to find one who couldn't be swayed with cake"
  6. ask on IRC, but it's terrifying...

(Digression into colour theory – every other movie poster is orange and teal. Love how Charlie distracted us with colour theory then showed us how to install stuff on the command line! Bait and switch for designers? ;))

Soapbox: we need you, ladies! Don't be put off by crap like Brogramming memes. We need more female coders.

Four things to take away:

  1. Plan ahead – not planning is a recipe for disaster, doesn't have to be big. But write down what you want to do and how it would work.
  2. Design to build – learning to build makes you better at designing things that have to be built. It helps when you want to advocate things like A/B testing... "If all we have is data, let's look at data. If all we have are opinions, let's go with mine." - Jim Barksdale (Former CEO of Netscape)
  3. Less is more – build mobile first...
  4. Ask more questions – get onto Stack Overflow, get onto blogs, share what you learn


Mark Boulton – Responsive Design

Talking about content and business: the technical side is really the simple bit of the equation – but the business needs to change and it hurts. This talk will be cathartic, it will be about pain and you can all nod knowingly to each other...

(film clip – Alice down the rabbit hole from The Matrix. You are living in a prison for your mind. Take the red pill and stay in wonderland and see how deep the rabbit hole goes...)

John Allsopp's article A Dao Of Web Design showed us, long ago, that the web is not a medium of control. The first page ever made on the web was fluid, it was responsive, it adapted to whatever it was viewed on. Then we broke the web. But now we have finally taken the red pill.

Quote from Bobby Solomon (Disney) – that they're bored of talking about responsive design; it's how self-respecting websites should behave in 2012. But that's ok for Disney, who are huge and don't have clients the same way as designers! For the rest of us this has been hard.

Diagram of Van de Graaf canon

Designers have been making books for a long, long time. You can make a beautiful, legible, readable thing. The text block is based on the page itself (shows an example of a canon): start with the page and work inwards to set the grid. Responsive design turns that upside down. We can no longer design from the edge inwards, because we no longer know what the edges are!

Centuries of knowledge about creating grids... that's a lot of baggage to discard. That's a lot of knowledge to throw away. It's painful. We're already trying to force pages back into responsive design with breakpoints.

But what happens in between the breakpoints? That's where it needs to be fluid and we need to be designing in that fluidity.
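In practice, "designing in the fluidity" mostly means proportional units doing the work between the media queries – a minimal sketch (class names and numbers invented for illustration):

```css
/* Columns stay proportional at every width between breakpoints... */
.column {
  float: left;
  width: 65.104%;        /* 625px / 960px – a ratio, not a fixed edge */
  margin-right: 4.166%;  /* 40px / 960px */
}

/* ...and the breakpoint only restructures the layout, it doesn't fix it */
@media screen and (max-width: 600px) {
  .column {
    float: none;
    width: auto;
    margin-right: 0;
  }
}
```

The media query changes the structure at the breakpoint; everywhere in between, the percentages keep the design fluid.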

So how does responsive design affect and challenge businesses?

It costs more. Perhaps 30-50% more time than a traditional fixed website design. Because it's hard and we're still freaking out about it. We don't have a good body of knowledge about it yet. This costs money, so how do you convince the client to pay more? Well certainly don't pitch two options where responsive is more expensive... they'll pick the cheaper one or the one they at least understand. Instead sell the usefulness, sell the ways it will win them more customers/business, say you'll build something that adapts to different devices (don't say "responsive"!). Use distraction techniques! Talk about sustainable design...

Advertising. "Advertising has been ruining the design of content websites for 20 years" - Roger Black. Elephant in the room! It can make things a non-starter. When big display ads bring in the cash, the sales teams will say no. Technically speaking, serving responsive ads would be fairly straightforward; but the way ads are made and sold has been set for a long time. People buy slots and put known ad sizes into them. Including those massive takeovers. How do you make crap like that responsive? You can't really. The cultural shift required is to stop people buying "a slot" and start them buying a set of slots: the same ad in different contexts/breakpoints/etc. But what really kills it dead is mobile sites (m-dot sites), with their own teams and revenue streams. Going responsive kills an entire team and revenue stream – that can leave responsive dead in the water. It's difficult but not impossible to solve... but it's a big bit of business pain.

Process challenges. ("Google 'bob ross squirrel', I'm not going to say why..."). Bob Ross always turned little mistakes into something in his paintings. "Everyone has a game plan until you punch them in the face." - Mike Tyson. Responsive design is a right hook to design process. Mark spends much more time sketching now; and more of the time with clients. Good clients can handle it... you get more honest feedback. Some get a bit freaked out by the design process, they thought they were buying a pretty picture. But it works to get them involved. The old design processes don't work – wireframes don't work, are you going to multiply your work by the number of breakpoints? Prototypes work, get into the browser sooner. It gets you into the environment sooner - it gets the client into the Content Problem(tm) sooner (and they all have a content problem they don't want to deal with...).

Teams – specifically, silos. Break the silos, get people sitting together, you will learn heaps from each other. Work in product teams (not discipline teams) and iterate quickly.

"Does this work in X browser?" isn't really a question you can even answer any more. X browser in X resolution at X size......

"Responsive design forces a focus on content that many organisations don't understand" - @baskwalla ...Mark would say "don't want to understand".

Editorial process. News, for example. A news story starts as a little seed of content, it could be one single sentence when news breaks. More content comes in from the wire, from leads and contacts, other news organisations. The story coalesces, snowballs, picks things up on the way. The story, the content, is not a page! It has quotes, blogs, tweets, videos, links... this big snowball of stuff is the story. The content is not the page. The story is the link – the links between all that content. The story is the metadata.

We're the only people who resize their browser. (mimes us waving the browser window back and forwards) Only we do this!

The web is made of HTTP, links, files. But what is our content made of?

Talking about CERN – they have the LHC, but they also just invent things to solve their unusual problems. They have brilliantly interesting stories that aren't being told. They think boring graphs are awesome content. Mark had to develop a guide to "what is the wonder of this content"? Mapped out the comprehension levels from scientist to general public. Got the great visualisations instead of boring graphs.

We need to have good content with good metadata, so we can put it together into great things.

We've taken the red pill, it's an incredibly exciting time. We're changing thousands of years of design theory. It's up to us to push this forward, to push the tools forward, to challenge the way things are done.


Cam Adams: Opening Credits - behind the scenes

Creative coding – having an idea and the ability to implement it.

First off, why have an opening titles roll at a web conference? It sets up the event... inspirations for Cam include Saul Bass title designs; Catch Me If You Can titles; Se7en titles (which sparked a renaissance in title design).

Brainstorming – the title designs for WDS are quite visual, there's no speaking. Cam sketches ideas, works out what will work together, get the ideas down and work out how to fit them together.

Also for WDS being a web conference... it "behooves you to USE web technology". Video might be better for Cam's heart rate; but using web tech will push some new ideas onto your process.

Live video = the web + the real world. Developed the theme for the credits.

Flow – key aspect of the title. How do I get from one idea to another. Can be a technically complex problem – but the answer is "use a lot of duct tape".

Specific inspirations:

Technical breakdown:

  • Joy Division opening – the landscape is from the music waveform (frequency and volume) grabbed with the Web Audio API (a really good API with a clear metaphor driving it, that of a producer's physical studio – hooking things up). Take colour values from the webcam (with WebRTC); and apply those colours to the lines. Put that into three.js (WebGL).
  • 3D lettering: CSS3. Didn't do text in WebGL as that's a pain in the arse. CSS3 is great with text; even if the performance isn't so great.
  • Bridged CSS3 to WebGL for the 3D shapes.
  • MASH IT UP! Moving from CSS to WebGL is not neat, don't dig into that code too much.
  • 3D shapes formed into letters with canvas and random generation of shapes mapped to on/off pixels mapped from the original text.
  • Forest scene – layered PNGs for the screen; CSS3 text and text shadow for blurring.
  • Sliced video was a canvas manipulation. (Note for anyone who skipped the first video, the colours were from Cam waving glow sticks at his laptop!)
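The "colour values from the webcam" step could be sketched like this – draw the getUserMedia-fed video element onto a canvas and read a pixel back. This is a guess at the approach, not Cam's actual code; the function name is mine.

```javascript
// Hypothetical sketch: sample a colour from a webcam-backed <video> element.
// `ctx2d` is a canvas 2D context; in a browser the video would be wired up
// via navigator.getUserMedia (still vendor-prefixed in 2012).
function sampleColour(ctx2d, video, x, y) {
  ctx2d.drawImage(video, 0, 0);                 // copy the current frame
  var px = ctx2d.getImageData(x, y, 1, 1).data; // [r, g, b, a]
  return { r: px[0], g: px[1], b: px[2] };      // feed this into the line colours
}
```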


Tim Gleeson – Music Mash Up

Web Audio API – part of the HTML5-era set of media APIs. Currently an editor's draft at the W3C. Support is not great, but it is emerging technology, so let's hold out for it.

At the base of the API is the audio context, which is currently webkit prefixed (yey). You can load audio a few ways; it depends what you are loading – long or short soundtracks.
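A sketch of that first step, with the prefix handling made explicit (assumes a browser; the helper name is mine):

```javascript
// Sketch of grabbing the (still webkit-prefixed) audio context constructor.
// `global` is the browser's window object; returns null when unsupported.
function getAudioContextConstructor(global) {
  return global.AudioContext || global.webkitAudioContext || null;
}

// In a browser:
//   var context = new (getAudioContextConstructor(window))();
// Short sounds: load via XHR and decodeAudioData into an AudioBuffer.
// Long tracks: wire up an <audio> element with createMediaElementSource().
```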

(The demo gods strike! No audio coming through!)

You can pass your audio signal to things like gain. Create a gain node; then you have access to change the gain value: gainNode.gain.value = 0.5;
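Fleshed out a little as a sketch against the draft API of the day – `context` and `source` exist only in the browser, and the wrapper function is my own:

```javascript
// Route a source through a gain node and out to the speakers.
function attachGain(context, source, value) {
  var gainNode = context.createGain(); // older drafts called this createGainNode()
  source.connect(gainNode);
  gainNode.connect(context.destination);
  gainNode.gain.value = value;         // 0.5 halves the signal, 1.0 leaves it alone
  return gainNode;
}
```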

Compression: compressors can take different sources and normalise the volume.

Filters let you filter out frequencies based on the filter type – there are eight in the API. The default is lowpass, which lets you get a bassier audio track. Highpass brings out the treble. Panning controls the direction/position.
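Same idea as a sketch – insert a filter node between source and destination (draft-era API; the helper is mine, and early implementations set `type` with numeric constants rather than strings):

```javascript
// Insert a biquad filter into the chain. `type` is e.g. 'lowpass' (the
// default) or 'highpass'; `frequency` is the cutoff in Hz – a low cutoff
// on a lowpass filter is what gives you the bassier track.
function attachFilter(context, source, type, frequency) {
  var filter = context.createBiquadFilter();
  filter.type = type;
  filter.frequency.value = frequency;
  source.connect(filter);
  filter.connect(context.destination);
  return filter;
}
```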

(DEMO GODS!! Panning not working... friendly sound guy helps out ;))

Finally – frequency analysis, which gets data which can be rendered as waveforms and so on. But use this wisely! Do not play audio when your website loads!
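A hedged sketch of the analysis tap (same caveats: browser-only, helper name mine):

```javascript
// Tap the signal with an analyser node and keep the audio playing through.
function attachAnalyser(context, source, fftSize) {
  var analyser = context.createAnalyser();
  analyser.fftSize = fftSize;              // frequencyBinCount will be half this
  source.connect(analyser);
  analyser.connect(context.destination);   // pass-through: audio still plays
  return analyser;
}

// Then, per animation frame in the browser:
//   var data = new Uint8Array(analyser.frequencyBinCount);
//   analyser.getByteFrequencyData(data);  // draw `data` as bars or a waveform
```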


Dmitry - Animation

("next up is Dmitry, if you don't know his surname you haven't been paying attention")

What is animation for? It's for clarity, not to have fun. Make it easier for people to understand what the heck is going on. If you move things around with no animation or transition it's hard to follow what's going on. Sometimes the initial and final states look the same; so you might not even know something changed.

So, animation is important.

What is animation on the web? Usually we have a value; change it over a duration; and if we're lucky we have easing; extra lucky, iterations and delay. That's the anatomy of web animation. That sucks! It's very primitive! But nobody complains.
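That entire anatomy fits in one declaration – which is the point about how primitive it is (selector and values invented for illustration):

```css
/* value-over-duration, easing, delay, iterations: the whole vocabulary */
.loader {
  animation: spin 2s cubic-bezier(0.4, 0.0, 0.2, 1) 0.5s infinite;
}

@keyframes spin {
  to { transform: rotate(360deg); } /* the "value" being changed */
}
```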

Easing – we just have cubic-bezier. This is not enough! Why are you not complaining about this stuff? I'm not happy with this stuff!

Discrete animation – common example is a spinner – this is not as easy as it should be.
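The usual workaround is the steps() timing function, which jumps between frames instead of interpolating (numbers invented):

```css
/* A 12-frame spinner: snap through frames rather than easing smoothly */
.spinner {
  animation: frames 1s steps(12) infinite;
}

@keyframes frames {
  to { transform: rotate(360deg); } /* rendered as 12 discrete 30-degree jumps */
}
```

It works, but you're encoding the frame count into the easing function – hence "not as easy as it should be".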

Reusing animation – you can't do it! You have to make multiple animations even if the result is the same. But then you can't apply multiple animations to one element.

Getting and setting animation status – you can't do that!

Want custom animation! I want to be able to write things that maybe my browser doesn't know about. Would prefer if the browser just did it...

There is hope! Web Animations 1.0 is an emerging web standard from the W3C. Only problem is it could be done wrong, of course.

Dmitry is one of the editors of the spec so he is trying to put his money where his mouth is.

So why is he talking to us? We should be complaining about the state of animation!

"Being happy and smiling all the time is not productive! We should be productively unhappy!"

If we don't complain it means we're happy; if we're happy nothing needs to change; then progress won't happen. Developers are too happy to talk about how good things are – but only seem to complain about JavaScript for some reason... ("I think it's perfect.")

Be productively unhappy, change the world.


John Allsopp: What we talk about when we talk about the web

"Warning: you will get nothing practical and useful out of this! It's a philosophical discussion."

Reference: "What We Talk About When We Talk About Love", in which two couples discuss love. You'd think even if we couldn't define it we would know it when we saw it. But in the book people disagree deeply.

So how does this apply to WDS? Well, what is the web?

The proverb of the blind men and the elephant teaches us observation is important... but the web is constantly changing whether we observe or not; it does not exist as an object; it is the product of our minds.

But as a city is more than its buildings and roads and places, the web is more than specs and implementation.

To understand we need to look back and understand where we came from.

Precambrian web

TimBL built the web – the protocols and the tools. But they were very simple, links for example were one-directional, non-annotatable hyperlinks. It was very simple and not extensible.

Sometimes it's the things which don't change that are interesting. The principles of the web haven't changed. TimBL later wrote 'the web and the web of life' which clearly links real world philosophical principles to the principles driving the web. Decentralisation, invention expecting the results to be good but unpredictable. Tolerance – a tolerant protocol is robust. Interoperability.

"try to find screenshots of websites from the early nineties..."

In the early days even the conversations were difficult – Microsoft had a conflicting definition of what a "link" was ("embed here" not "go there"). There was no W3C, just a rough working consensus of code – and the consensus was often very rough.

Cambrian explosion

There was a great deal of innovation going on – IMG, FONT, even the allegedly-meant-as-a-joke BLINK. Speaking about things like accessibility and cross-browser support was often referred to as holding things back.

Creating Killer Web Sites was a massive book; and the website was... well... we'd consider it terrible, but people were so excited about it!

From this era things like the Web Standards Project were born, where people were trying to sell the standards-based approach. They'd been talking about a better way to build things.


Then IE5 for Mac and IE6 for Windows happened... and they introduced doctype switching so you could move forward without breaking the old; and they supported CSS quite well.

The 2002 Wired redesign was news; and the CSS Zen Garden showed what was possible.

The conversations about the web change the web. We do need to take time to think about what we do, not just spend all our time on implementation.

But a lot of talking was referring to the past; and redoing things better.

Around this point the name AJAX appeared; which encouraged people to talk about an idea that had been around for a while. Giving things a name changes things because it enables people to talk about it.

Generally people talked about Web 2.0. (a term used in Darcy DiNucci's "Fragmented Future" article)


The current discussions around HTML5 "readiness" remind John of the Creating Killer Web Sites conversations. We focus on what's bad (eg. performance) and not enough on the great things we can do with it.

Scott Jenson describes native apps as a remnant of the Jurassic period of the web.

Many discussions now are focussing on minutiae and not so much about where we are going. Talking about how things look; how we can make them... important discussions but not the only ones to have.

Walter Benjamin – Theses on the Philosophy of History; pictures the angel of history having his face turned to the past.

We should think of ourselves standing on the beach of a vast ocean with continents beyond the horizon; talking about how best to ride waves back to shore. We should think about going forwards too.

"I want to talk about a web that isn't just a two point oh." How can it augment us, how can it help us know ourselves better, know our society better, know our world better. How can it help stop us killing ourselves and our planet – to understand the true cost and result of our actions.

What will this web look like? That's what John wants to talk about when we talk about the web.

Of course we will still design interactions and they will still be interesting interactions. We will still be using all the technical pieces and talk about those things; but we need to rediscover the unpredictability that made the web grow. We need to look into the future and not focus on history, look ahead not behind.

"So that's what we should talk about when we talk about the web: we should talk about the future."


Miles E: is Win8 the dream of the web being "native"?

John: it's not the first time we've seen this; there have been other implementations using web technology as their native technology. Win8 is probably the most ambitious attempt to date. It's a lot of the way there. At the moment though it's not 100% – the capabilities exposed to a Win8 app still aren't all made available to the browser. But we're still thinking about apps, discrete packages of code – silos of functionality. What about decoupled things: small pieces, loosely coupled?

This is not all to say we're talking about the wrong things – just that we should ask "what else should we be talking about".

Chaals: one of the reasons why we build apps is because we can sell them. In the future where does the money go?

John: it's kind of a miracle that the money's in apps. If you are not in the top 250 apps in the iOS store you will not make enough money to support two developers...

Really we are building services, not apps. Users engage with that service (netflix, amazon) not the specific apps. Business is in service. Some people make a lot of money making apps; but so too some people make a lot of money gambling. We're success-biased.


Ben Hammersley: The flower, the field and the stack

No slides – possibly a Web Directions first!

"You may not know this, but the Normals are freaking out."

They are losing their minds about the things we do – the web, technology, digital design... they are completely losing it.

Pretty much every country in the world has a law or push to attempt to "record everything on the web and monitor and censor it"... he worries politicians have shares in hard drive companies.

Recently Britain had some "rather rubbish" riots; and in response the PM wanted to be able to turn off social media during times of crisis. "this is the level of discussion we have..."

We can be outraged or simply confused why anyone would think this was a good idea... but before we can have any real debates with politicians we need to understand why people outside this room (WDS12) are confused and worried and scared.

Without understanding the social context of the work we do, we can't do the really good stuff or answer criticism from people who don't like what we're doing.

Moore's Law – every year or so computing power doubles for the same price. We already know this, it's why our devices are rubbish as soon as we've bought them. What you might not be aware of is that this is the first time in the history of humanity we have had this phenomenon. Swords did not get twice as sharp, horses did not get twice as fast. We live under Moore's Law and this breaks many things; modifies politics and so on.

We also never used the same tools to make the next tool – swords did not make swords.

While we, developers, are used to a fast cadence, other people are not. Politicians are used to making laws to last 10, 20 years.

In ten years our phones will be radically more powerful; or the equivalent device will be 1/64th the cost. How do you come up with a policy that still makes sense in ten years' time, when you don't understand the iPhone in your pocket now?

We don't know what technology will even look like in 2040, but people try to predict what will be happening; and it's incredibly hard to predict.

We have careers other people don't understand. But every generation before had a simple career path with careers that lasted their whole lives and didn't change much.

This all adds up to the social undercurrent of what's going on right now.

"You may not have realised this, but you are all slightly rubbish cyborg." Our phones are our robot brain. We all have one, it's rarely more than three feet away and we love them. If you lost your laptop you'd be upset and looking forward to buying a new one; but if you lost your phone you would freak out. We don't remember stuff, we use our phone to remember things. Scientific studies show we are using part of our brain to monitor our phone – even if it's turned off.

Everyone feels phantom vibration, heavy social media users have been known to feel phantom vibration even while holding the phone in their hand.

It's become part of our emotional mind. If you have been on social networks long enough you have learned to monitor the general state of mind of people you connect with. If their pattern or style changes or stops, you will start to notice their absence and you will reach out.

This is all freaking out the Normals.

The internet also came into business like Godzilla into Tokyo. It destroyed things – the media industry, education... - and rebuilt them in its own image.

Some people may be thinking "fuck em, we won". We spent years being teased by the jocks and we have finally finally won after centuries of being beaten down. We just have to wait for the old guys to die out.

"Well yeah, but the thing is they're not dying out fast enough. Moore's Law also applies to medical technology..."

This is also a repeat of the renaissance, where a few thousand people in specific cities around the world – in many respects – control the culture. "You lot. You guys. And your friends and peers and heroes." This tiny group controls the culture of millions.

We've tried to do this. All the big buzzwordy technologies have been about getting into the psyches of people around the world. The nerds controlling the minds of millions. Making sites more sticky, campaigns and apps more viral... it's all about controlling how people deal with technology.

There comes a time in every expressive technology where people stop wondering how they were going to do things, and start wondering what they should be doing and why. Because if they build the wrong thing, they can really hurt the people who are affected.

"Smart cities... put this in your CV now!"

It turns out sensors are cheap and so is the technology to connect them to the internet. So now cities can be measured. People can use the data to optimise their lives. In many cities public transport has GPS giving people real time access to information about when the bus really will arrive. You can decide if you have time at the pub for another beer.

But this movement is being driven by technology companies who will be selling things. They optimise for things like an efficient commute. But the university city of Aarhus, "one fifth hipster", decided that was a problem. It's optimising for what IBM wants to sell. They wanted to optimise not for an efficient commute, but for the beauty and serendipity of the commute.

Every city in the world is now being pushed up against technology made somewhere else – somewhere entirely outside the social context of its use. The people who create hardware and software need to be mindful of what it's doing.

Software is political. Facebook is the Zuckerbergian vision of the world made real. It is designed to make everyone into Zuckerbergian thought monsters. It pushes the political ideal that there should not be any privacy, everything should be open. Every decision that is made at Facebook is influenced to strengthen Zuckerberg's vision. The same goes for Twitter, Microsoft, everything.

If we agree that we have become good at this stuff and the technology has become very intimate – it's in our daily lives – and we want people to use it all the time... then we have to admit to ourselves that our culture will be imposed on our users. The time is now to be mindful of this.

The world is still run by grey haired baby boomers who really don't understand what's going on. We have unfortunately elected people to drive our future who are very confused by our present.

If you can remember not having a phone, your social duty now is to translate. We need translators. We need people to guide the CEOs, the ministers, the senior professionals into the modern day. If we don't, they will continue to legislate against modernity. They will continue to enact legislation that is pointless or harmful and we will continue to have the same tired debates.

There are changes we've not remarked upon.

Ubiquity of reviewing is a huge change – we can now review anything we do. You no longer have to spend years becoming a restaurant critic, you just need to go to a review website and write something. There are sites to review everything. We have become used to the idea that we should be able to slag businesses off online and expect a quick response.

(reference to J Curve theory of revolution) the speed of change isn't as fast as people expect it to be.

We've also changed the speed of group forming – no matter what you're into you can find other people online who are into it too. It's become impossible to be alone if you don't want to be.

These are huge social drivers. We are coming to a point where we could bring about massive change for the better. But if we continue to make things blindly, maximising simply for income or hype, or continue to blindly consume things, then we'll never be able to use the revolutionary properties of the web to change the world for the good. But if you do use them, you are living in a great time to change the world.

It is an awesome responsibility and an awesome power.


Tom Coates: An animating spark

"I've decided to come up with a hash tag for this talk, here it is! #besttalkever"

Aw, I love @tomcoates, too. #wds12

Interested in how connecting things makes new possibilities, how a network of data and services can transform what we produce.

The possibility space for new products is defined by how many data sets are connected, mashed together... take multiple data sources and elements and you multiply the product possibility space.

New product possibility space: (all technologies + all data sources)^n

(n being the number of ways you can put them together)

Talking about mundane things receiving the spark of creativity, the spark of animation, similar to the spark of electricity bringing Frankenstein's monster to life. Of day to day objects receiving a small amount of intelligence and agency... and how you will feel about it? This is not a vision of the future – this is a vision of now.

How do we get things out of the labs and into the shops?

Tom has a theory that people exploring new technologies have one main goal... it's to articulate why the technology is interesting. We're trying to sell it and impress... which gets people excited but over-sells and sets up big expectations. Also while selling things the examples are exaggerated and crazy to get attention and prove the project should be funded. Especially as early technology can be relatively expensive – during development, Kinect-equivalent tech cost closer to $10,000 than something priced to attach to your television.

An uncharitable theory: the way we think about the future is betraying our present.

Tom wants us to think about tangible and real application of ubiquitous computing, what the "internet of things" – being the same internet after all – can really do. We can look at everyday devices in our homes and see what extra value we could add to them, considering how cheap it is to add network functionality to them.

LCD clocks were initially expensive but demand for them increased to such an extent they became cheap. Then LCD clocks were stuck all over the place, even in places they didn't really add value.

"What was once expensive is now trivial. This opens up opportunities."

Raspberry Pi is a fully functional computer that's as powerful as the computer Tom used through university... and you can buy it for about $25.

Kindle with permanently-on free worldwide 3G – a few years ago would have seemed impossible and yet there it is.

"Hundred dollar devices" with the ability to connect to the internet – what doors does that open?

Mundane Computing – a term by Chris Heathcote – most of the time life is routine; and Chris was interested in how technology can make some of those daily moments better. It's not bad, just think of it as not chasing a unicorn but instead doing something useful.

Example of the problem: washing machines that beep. Tom is "obsessed with machines that beep". When the machine is done it beeps at you. It doesn't respect the fact you're doing something else, it just keeps on beeping. And that's infuriating! It's like GPS devices that are disappointed in you when you take a wrong turn.

These devices should chill out! The problem could be fixed with a network connection and an app – devices could talk to an app that you could tie in to your more-polite-than-beeping notification systems.

Muji – sells lovely minimal, elegant things. Why aren't ubiquitous computing devices packaged so nicely? "Able to be invited into the home"... something which makes you want it in your life.

Mujicomp online taps into this idea.

Although "mundane computing" doesn't make a sexy buzzword it allows us to think about daily things. How can you take a "boring problem" and push it to the limit. The "Nest" thermostat is one example; Twitter could be thought of as SMS pushed to its limits.

Fighting the futurists... (want to get really meta?)... you'd be surprised how many network-enabled fridges are being pitched and made right now. It doesn't make sense to have Twitter on your fridge – who has an expensive fridge and DOESN'T have an iPad or some other more suitable device for reading twitter and playing music?

It's a misunderstanding of what "connected to the internet" means – it doesn't mean it has to have a browser. Don't add cost by adding a screen, for $5 you can network it without a screen. Increasing the cost is not helping things become useful.

The average life cycle of a fridge is over fifteen years. Imagine if you'd got an internet enabled fridge fifteen years ago... it'd have Windows 95 on it! So actually you really want as little as possible on the device. Otherwise your appliances become hopelessly out of date much faster than before.

(referring to the Corningware vision) Do we really want every surface around us to be a screen? How often do you refit your kitchen? Do you really want to be using your bathroom mirror to bash out email? Do you keep every surface that pristine clean? Actually it's a bit gross!

Stewart Brand's home shearing layers (from "How Buildings Learn") shows us that the structure changes far too slowly to put technology directly into them.

Matt Rolandson, Ammunition Group, made the point – use the network to amplify the tool's core purpose, not to be another web browser or Twitter client. The internet != web browser. A network enabled coffee machine should be better at being a coffee machine.

Ideas to make things more useful....

  1. Make it easy to set up
  2. Make sure it works when it's offline (it just works better when it's online)
  3. Put the bulk of the intelligence online, not in the device (helps the upgrade cycle)
  4. The interface for the device isn't embedded in the device, it's wherever you need it.
  5. The best way to enhance an object is to make it easier to control or understand.
  6. Devices should be polite (? not sure I got this point right)

Mundane computing: what if all devices over $100 had an API that said...

  • where are you
  • who do you belong to
  • what are you doing right now
  • how have you been used/usage and error log
  • how much power have you used/are you using
  • how well are you functioning

You should be able to...

  • control (safe) basic functions
  • receive alerts when there's a problem
  • receive alerts when a job is completed
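As a sketch, that whole wish-list could be a simple object per device. Every name here is invented for illustration – no such API exists, this is just Tom's idea made concrete:

```javascript
// Hypothetical "every device over $100" API as a plain object:
var washingMachine = {
    location: "laundry, at home",   // where are you
    owner: "tom",                   // who do you belong to
    currentActivity: "idle",        // what are you doing right now
    usageLog: [],                   // how you have been used, plus errors
    powerUseWatts: 2,               // how much power you're using (standby)
    health: "ok",                   // how well you are functioning

    // safe basic control:
    start: function () {
        this.currentActivity = "washing";
        this.usageLog.push("wash started");
    },

    // alerts go to wherever the user is, not a beep on the device:
    onProblem: function (callback) { this.problemHandler = callback; },
    onJobComplete: function (callback) { this.doneHandler = callback; }
};
```

The point of the sketch: the interface lives in the data and the network, not on the appliance itself.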

@houseofcoates is a feed of everything in Tom's house.

Anthropomorphising things can make them fun. Race your Scooba against other people's Scoobas. When we engage with things this way we intuit motives – your device doesn't need charging any more, it's feeling tired.

"We always overestimate the change that will occur in the next two years, and underestimate the change that will occur in the next ten years." - Bill Gates

Having a history of a device changes purchasing, maintenance, even the way we buy or rent them in the first place. Being able to track ownership and location – what does that mean for ownership and theft?

The infrastructure is there, if devices are tapping into it, we're already bootstrapped and the spark required is creativity. Ideas for things that can happen when things are brought into the domain of the internet. People who manufacture things often have no idea about the internet – this is where our responsibility comes in as designers and developers.

Raymond Loewy thought the goal of industrial design was MAYA: Most Advanced Yet Acceptable. This is a good way to think of mundane computing.

We also have a responsibility to bring the general public with us into the future. Make it friendly to everyone.

(books at the end: Bruce Sterling's "Shaping Things", Adam Greenfield "Everyware", Mike Kuniavsky "Smart Things". Tom also calls out Aussie startup, Ninja Blocks.)


Lea Verou – More CSS tips

Screenshot from slides showing background-attachment local

Following the 2011 "10 secrets talk" (worth reviewing) this is another ten secrets.

  1. shadow that gradually appears – used in Google Reader via JS but can be done with pure CSS. Background-attachment: local; … combined with multiple gradients you can make a shadow that only appears when you scroll down.
  2. fixed width, fluid background – using calc on a parent can remove the need to add wrapper divs for fixed width designs
  3. using transitions to avoid js for things like lightbox effects (also showed using a big box shadow instead of a blanket div)
  4. lined background that snaps to text using a linear gradient and background size. Uses background-origin:content-box so the background doesn't move away when the padding changes. You can also use the technique to create zebra striping.
  5. Opposite transitions to keep a child element positioned normally while the parent is rotated or skewed etc. you can create an image slider/comparison with CSS transforms.
  6. Clockwise animation of an element, using keyframes and a cancelling animation to keep the element horizontal (smiley face kept upright but moving in a circle)... however it does require an extra element. Better way from Aryeh Gregor uses the frame-by-frame nature of transforms to get the same effect as attaching two origins to a single element.
  7. Alternative to box-shadow proposed by Adobe, "filter" to use SVG filters on all elements. Gives box shadow effect on speech bubbles created with pseudo elements. Currently only in Chrome; and obviously the name "filter" is a problem for IE.
  8. Pseudo element set to absolute position zero all 'round and copy of background image... you can hack together a background blur effect. It's hacky and messy and it works.
  9. Hyphens:auto – good but still patchy support
  10. frame-by-frame animation using a sprite and keyframes and steps(n) to prevent smooth transitions which aren't giving us animation. This gives us access to alpha transparency, which animated gifs don't do. Credit: IE9+

(Plug for a vendor-neutral documentation platform, to bring together the currently-separate efforts being made by all the browser companies.)


Douglas Crockford – Programming Style & Your Brain

The two topics:

  1. programming style, the stuff which is ignored by the compiler/parser and why it's still important
  2. your brain, the big squishy thing you carry around and how it is linked to your code

There is an idea in economics that people work towards their own interests... but it is in fact not true. Humans do not always work in their own interests.

We have two systems: head and gut. Your gut is fast and you can't turn it off. Your brain can lie to you! Look at Edward H. Adelson's checkerboard illusion – your brain can be tricked.

Advertising knows this! They have been crafting messages that work on the gut for years. The head knows $199 is not a lot less than $200 but our gut says differently.

This is all important because our head and gut are in play when we make code. Code, programming, is some of the most complex stuff humans make... and we are still doing it entirely by hand, because AI isn't advanced enough to do it for us. We can't express what we want in small enough pieces for a computer to calculate it all for us.

Our primary tool is the programming language. We write the language, the computer can transform it into something that can be executed by the machine.

The thing that makes programming so hard is that it requires perfection, because if it's not perfect, the machine has license to do the worst thing possible – not work correctly. But we can't actually achieve perfection; and even if we did achieve it we have no way to recognise it. We can't hold software back until it's perfect, we'd never release it. That's why we release things early, so we can find problems quickly.

We have the brains of hunters and gatherers. Nothing in our evolution has prepared us for writing programs.

Programming makes use of head and gut. There are tradeoffs. Doug "I have no evidence at all that gut is involved... but my gut says it's true!"... but the gut doesn't really have the data it needs to make rational decisions, that's the head.

Good and bad code... naturally the examples are in JS.

JS has good and bad parts; which is why Doug wrote JSLint, to tell him when the code is using a bad part.

WARNING: JSLINT WILL HURT YOUR FEELINGS! Doug "It really does, it hurts me too!"

People have an emotional reaction to the results of JSLint. Rationally they went to a code quality tool and asked for advice; but they don't like the advice.

Braces on the same line ("on the right") or on a new line ("on the left"):

// on the right
function foo() {
    ...
}

// on the left
function foo()
{
    ...
}

If someone is told to go between the systems, they get upset. The gut dislikes it so the head tries to rationalise it.

In JS though, you can actually break stuff due to automatic semicolon insertion:

return
{
    ok: false
};

SILENT ERROR! Because ASI has put a semicolon after "return", the function silently returns undefined and the object literal below is never reached. With the brace on the right it works as intended:

return {
    ok: true
};

So in JS there is a very good reason to say put the brace on the right. So you should prefer forms that are error resistant.

Switch statement fallthrough hazard. It kind of persists the problems of goto. While there is a problem, DC initially decided not to create a warning as it would lose some elegance to avoid an error that hardly ever happens. But then the next day it was proved he had that error in JSLint!

"I had a moment of enlightenment." Elegance for its own sake is worthless.

Also saying "that hardly ever happens" is the same as saying "it happens".

Code style should not be about personal preference, it should be about making more robust code, something closer to perfection than we could do otherwise.

While the Romans WROTEINUPPERCASEWITHNOPUNCTUATIONORSPACES people who had to copy it later introduced lowercase, word breaks and punctuation because it reduced the error rate, it removed ambiguity.

Programs must communicate clearly to people, not just to the compiler. People are important. Use elements of good composition where applicable. For example, use a space after a comma, not before.

We want people to focus on the substance and not the formatting.

Good rule: no space between a function name and the first bracket; one space everywhere else. But then you get to Immediately Invocable Function Expressions...

(function () {
    ...
})();

...but then you have the extra brackets floating around in the middle of nowhere. DC feels this is more readable – wrap the invocation brackets inside:

(function () {
    ...
}());

Never rely on ASI! This breaks:

x = y
(function () {
    ...
}());

...because without a semicolon the parser reads it as x = y(...) – y gets called as a function. So put the semicolon in!

x = y;
(function () {
    ...
}());

Don't use the with statement. It's useful but never not confusing. Confusion must be avoided, it creates bugs!

Always use === as the result of == is hard to anticipate. "It turns out you don't need double equal...". If there is a feature of a language that is sometimes problematic and another feature that's reliable, use the reliable form.
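A few concrete cases make the point. These examples are mine rather than from the talk, but they are standard illustrations of why == is hard to anticipate:

```javascript
// Type coercion makes == behave in ways the gut won't predict:
var results = {
    zeroVsEmptyString: 0 == "",         // true – "" coerces to 0
    zeroVsStringZero: 0 == "0",         // true – "0" coerces to 0
    emptyVsStringZero: "" == "0",       // false – so == isn't even transitive!
    nullVsUndefined: null == undefined  // true, yet null !== undefined
};

// === compares type and value with no coercion, so it's predictable:
var strict = (0 === "");                // false
```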

Multiline string literal example – breaks when there's a space after the \ at the end of the line. You can't see the problem, but it's there.
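A sketch of the hazard (my reconstruction, not DC's exact slide): the backslash must be the very last character on the line.

```javascript
// The line continuation works only while the backslash is the final
// character before the newline:
var ok = "first line \
second line";            // "first line second line"

// Add one invisible trailing space after the backslash and it becomes
// a syntax error you cannot see on screen:
//
// var broken = "first line \ 
// second line";
```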

Make your programs look like what they do!

Declare all your vars at the top; declare all functions before you call them. Because of hoisting.

for (var i = 0; i < n; i += 1) {
    ...
}

...i is not scoped to the loop, it gets hoisted to the top of the function too!

let is coming and that will have block scope. When that's available DC's advice will change. Unless of course you need to support IE in which case stick to var.
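A sketch of the contrast, assuming an engine with let support (which has long since arrived):

```javascript
// var is function-scoped and hoisted; let is block-scoped.
function scopes() {
    for (var i = 0; i < 3; i += 1) {}
    // i is still visible here because var hoists to the function:
    var afterVar = i;        // 3

    for (let j = 0; j < 3; j += 1) {}
    // j is NOT visible here – uncommenting the next line throws:
    // var afterLet = j;     // ReferenceError

    return afterVar;
}
```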

Global vars are evil but sometimes necessary; they should be rare and stick out like a sore thumb so DC's advice is to write them in UPPERCASE.

new prefix – forgetting new when calling a constructor can clobber globals. To help avoid problems, name constructor functions with TitleCase so a missing new stands out.
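A sketch of the hazard with a hypothetical Point constructor (the try/catch is only there so the snippet behaves in both sloppy and strict mode):

```javascript
function Point(x, y) {
    this.x = x;
    this.y = y;
}

// With new, everything is fine:
var q = new Point(3, 4);    // q.x === 3, q.y === 4

// Forget new and there's no new object: in sloppy mode `this` is the
// global object (x and y silently become globals); in strict mode it
// throws. Either way you don't get your Point back:
var forgot;
try {
    forgot = Point(3, 4);   // returns undefined in sloppy mode
} catch (e) {
    forgot = undefined;     // TypeError in strict mode
}
```

The TitleCase name is the visual convention warning you that new is required.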

Write in a way that clearly communicates your intent.

++ … "this will be controversial. Controversial does not equal wrong."

The value of ++ is to reduce large amounts of code to one line; but that's a bad trade as it becomes incomprehensible. DC eventually just stopped using it. When he has to add one to something he adds one.

++ vs. x += 1

You can get subtle off by one errors if you use ++ and DC has even seen two instances of ++x when x += 2 would have worked.
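The old-value/new-value distinction is exactly where those subtle errors hide (a standard illustration, not from the slides):

```javascript
var x = 3;
var post = x++;    // post gets the OLD value: 3 (x is now 4)
var pre = ++x;     // pre gets the NEW value: 5 (x is now 5)

// DC's alternative has no old/new ambiguity at all:
var y = 3;
y += 1;            // y is simply 4
```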

Who are the bad stylists? DC identifies four groups:

  • Under educated – just picked up bad habits
  • Old school – came to JS from other languages, would rather be writing A Man's Language and there's no way I'm going to learn JS properly!
  • Thrill seeker – the ones who are just willing to take chances.
  • Exhibitionist – they have read the spec and found all the weirdest ways to write the code and show how smart they are

Designing a programming style is not about selecting features because they are liked, disliked or pretty. It's about avoiding the abyss. It's about avoiding broken code, avoiding debugging.

Devs can do this because we are optimists – we know we can go down into the abyss and come out with a fix. It's why we can't schedule (estimate) for CRAP.

If we want to be more efficient, it's not about saving keystrokes it's about avoiding time spent in the abyss of debugging.

Forms that can hide defects are considered defective.

Language subsetting - "only a madman would use all of C++" - liberate yourself from the parts that don't work, that will trip you up and cause you pain.

"There will be bugs. Do what you can to move the odds to your favor."

Good style is good for your gut.


CoffeeScript – good to learn from, exposes good parts and not too many bad parts; but has bad parts of its own and not great tooling yet... not recommended for production.

What does DC think about the comma first style? "I think it's stupid." Main benefit appears to be saving a few keystrokes, are people truly that feeble?

Chaals – Beyond HTML5

Where are we?

  • HTML5 "plan 2014" - it's gunna be ready soon(er than 2022)!
    • ...and they did that by saying anything that wasn't ready didn't go into 5.0, it went to 5.1
  • W3C is working on HTML 5.1
  • WHAT-WG keeps going on their living standard
  • The web is more than HTML

HTML5 is in feature freeze (though they still want to patch WHATWG bugs); controversial things are being postponed into extension specs. That's problematic because accessibility features are being deemed controversial if they don't work 100% of the time – and most accessibility stuff doesn't work 100% of the time. HTML5 also has to go through a Last Call 2, for patent issues.


The living standard is huge – 6MB of HTML is truly gigantic. In fact it's too big to read. Almost nobody has read it end to end; not even the editors.

While WHAT-WG is problematic it is still the biggest source of ideas at the moment.


Some of the contentious, extension-spec topics right now:
  • Encrypted media (DRM)
  • Media Source (adaptive streams)
  • Responsive Images
  • Headers/Outlines
  • <maincontent>
  • @longdesc

If you want to influence these decisions, get involved and give feedback. Tell the editors what works for you as a developer. It turns out the people in the room have a lot more say than those who aren't saying anything; and things move faster when the editors aren't guessing.

Outside HTML:

  • hypervideo – allowing videos to link to other videos more akin to html
  • Audio API – commonly demonstrated
  • Speech API – not so commonly demonstrated, people do talk to their devices!
  • Sensors, Near Field Communications, gamepad
  • getUserMedia()
  • Push API, RTC


  • Web Intents – revolutionary, came from Google, going to work out how to do plugins on the web. This time, instead of actual software plugged in, you'll use a service on the web to do what that downloaded plugin would have done.
  • Web components – ability to make your own markup, much like XHTML used to be extensible, except this time to make it actually work. Ability to collapse common features down to custom markup that shorthands whole sets of CSS and JS.
  • Web Animation
  • Fullscreen
  • Clipboard
  • 3D (and printing)

Getting input:

  • pointer events
  • gesture
  • speech
  • IMEs

DOM 3 events almost done.

Where does all this lead? To making an Operating System! The web can be an operating system, and that's where all this work is heading.

But there are a few curious issues – eg. The web has no concept of a file system. Things the web runs on have file systems but the web itself does not.

Building a network hits i18n issues like multilingual web – even just getting keyboard shortcuts to map to different layouts turns out to be very very poorly done and flaky.

Identity on the web is still a pain point – we don't have a good, single solution. Currently people mostly just re-use a specific service login (Facebook, Twitter). The useful case is wanting to vote online – you need some way to be reasonably sure each vote really came from one person.

"What we do with Moore's Law is what we do with freeways – we build more of them!" We download much higher-resolution images of cats. We forget that we will fill the capacity doing the same thing with bigger files.

Ecommerce is curious – you can almost but not quite get a whole home loan online; you can do small payments, but not a genuine micropayment of less than $1 without an ongoing relationship. In developing countries with low incomes this is a really big problem. The ability to transfer phone credit between phones has led people in Kenya to think of a mobile-phone based bank.

When lots of money starts getting passed around in a visible form in such a nation, the government can – often for the first time – start to tax their population's cash flow. Which suddenly puts the government into people's lives, when previously they were largely decoupled. Then, people start reacting to the government... and when that happens, you can get genuine revolution.

That's the exciting – and scary – part of where technology is taking us... it can change the world.

[My writeup doesn't do justice to the conclusion. You had to be there.]


Josh Clark – Buttons are a hack

"I hate the ipad back button with the fire of a million suns"

Fitts's Law – roughly, the smaller and further away a target is, the harder it is to hit. Also we have issues with accessibility and discoverability... not everyone can even use a touch screen (he met a cabbie with a hook instead of a hand...) and even if they can, how do they know what to do?

Gestures are the keyboard shortcuts of touch. They are patterns that are more forgiving, they can be done anywhere on the screen; they don't require as much precision.

Buttons are a hack – an inspired hack, but still a hack. Even in the physical world they're a hack – a button over there for a light up there is not intuitive.

The internet is not the browser – we have been reminded many times here at WDS12.

Photo of Tom Coates on stage, with speech bubble - internet does not mean browser

The web (browser) is inside of every application instead of every application being inside the web (browser). - Luke Wroblewski

You can do some touch stuff in browsers; but a lot of gestures have already been reserved by the browser itself. So for the time being the greatest/richest level of gestural interface design is going on in native apps.

App demo – quick prototyping with Adobe Proto. Wouldn't work all that nicely on desktop but it's perfect for an iPad.

Also showed Clear – really good exploration of touch driven interface, inspired by playing a musical instrument.


How do you find what you can't see? Especially when there's no prior familiarity to fall back on? How do we get to the stage we're at for keyboards – muscle memory.

"There's no such thing as an intuitive interface. Everything is learned, there's no such thing as intuitive."

UIs are social conventions, we can't truly rely on them. Many can become solid conventions; but you can design to remove uncertainty. Example: salt and pepper shakers – the very best is the one with glass that lets you see what's inside. The content itself tells you what's happening.

Design the content as the interface. We may finally get to a point where the message is actually the medium.

Many apps still give you a complete set of instructions when you start them, which asks people to become experts before they're novices. Front-loading the instructions makes an app seem complex even if it's not.

Besides, nobody reads the manual. We all have incomplete knowledge of the tools we use because we don't read the manuals. It drives us nuts when our users don't read the manual, but we know they never will.

But people watch TV, maybe a screencast will work? Example video: Al Gore introducing an app, but it's a very dry, boring video.

So what else? We don't have to give people an instruction book.

Nature doesn't have instructions, even though it has a pretty complex interface. We all spent years learning the interface to the world. We've got it now, which is great, but it took work.

Some people use skeuomorphism to tap into prior knowledge, but if you don't follow the metaphor all the way through you make a much worse situation. Eg. Apple Calendar is skeuomorphic but for the first 18 months you couldn't "turn the page".

When you're teaching things to users, think of the patience and tolerance you would show to children. They don't know things yet and we accept that. When an interface is unintuitable, why expect people to get things instantly?

To learn great ways to teach people how to use a UI... play more games! Games use...

  • coaching
  • levelling up
  • power-ups

...all to great effect.

Some sites and apps use little popups/inline dialogs... but we are haunted by the ghost of Clippy past, where the terrible content and persistence despite dismissal drove us nuts.

If you add a hint or callout, make sure you stop showing it once the user has actually done it – when they know, you don't need to tell them. If they've done the gesture or said ok, stop hassling them.

Provide visual cues for custom gestures. A suitcase without a handle is useless; a gesture with no affordance is useless.

The best time to give a hint is when people need it – games may pause and tell you how to block when you're getting heavily attacked.

Apple took it a step further and forced people to learn the new scrolling in OS X Lion – you had to perform the scroll before you got a continue button, it was just that important.

Power ups in games give you super powers – they let you shortcut things and reward you for effort. They can be used by anyone but are especially effective when used by experts.

If people are doing a slow version of an interaction, after the tenth time you can offer a hint about a quicker way. You can even require them to do it to proceed.
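That "offer a hint after the tenth slow interaction, and stop once they've learned it" idea can be sketched as a small piece of state. Everything here – the class name, the threshold of ten – is a hypothetical illustration, not anything shown in the talk:

```python
class ShortcutCoach:
    """Tracks uses of a slow interaction and decides when to suggest
    the faster shortcut – then stops for good once the user has used it,
    so the hint never nags someone who already knows (no ghost of Clippy).
    """

    def __init__(self, threshold=10):
        self.threshold = threshold  # slow uses before we offer the hint
        self.slow_uses = 0
        self.learned = False

    def record_slow_use(self):
        self.slow_uses += 1

    def record_shortcut_use(self):
        # The user has done it themselves; hide the hint permanently.
        self.learned = True

    def should_show_hint(self):
        return not self.learned and self.slow_uses >= self.threshold
```

The key design choice is the `learned` flag: a hint that has been acted on is retired forever, matching the earlier point that once users have done the gesture, you stop telling them about it.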

Even so this all shows we don't have conventions. We don't have enough commonality to form standards. So talk to each other, be generous. Ask people why they built something the way they did.


Jon Kolko: A means to an end

Two parts to the talk – Means, and Ends.


(mesmerising time lapse of Jon making a clay pot)

Jon learned how to make ceramics from a very young age, and it took him a long time to appreciate craftsmanship – that the goal is to get things right, not to do them fast.


source: slide deck (pdf)

Craft is about engagement and quality.

What happens when you gain expertise? You learn what the medium can and can't do; you gain patience; you slowly gain understanding. You learn what the medium wants to do, how it wants to work.

"Art resides in the quality of doing, process is not magic." - Charles Eames.

There is a message to creation, there is a wider reason for making things in the first place.

Where do we choose to aim our mastery of craft, material, process and voice?

What problem do we solve?

Ask these three questions...

  • Should I make things?
  • What things should I make?
  • For whom should I make these things?

Your selection of subject matter is a complex political decision which always has consequences.

If you do a wonderful job of designing a McDonald's website, did you cause the rise in diabetes? Probably not. But did you amplify something that was already happening?


Ethnography around homelessness in Austin, TX...

His students set up a sign at a gathering saying "hi can we ask you a question?" and they asked "what do you want to have happen by the end of the day?"

They realised homeless people are a lot more like them than they realised. They have phones, they have Facebook accounts... they just don't have homes. They realised self-actualisation was important to them. What if they were teaching something?

So they set up HourSchool, an online platform for people to teach things. "We believe that when people teach, they gain self-worth; and that empowers them to take control and change their situation." (inexact transcription)

Notion of a social entrepreneur – someone who takes on risk and reaps a reward in a social context. They can still make money, they can still be financially successful, without sacrificing positive social impact.

Design is now getting into the boardroom; it's no longer a service to the business, it's part of the entire strategy.

Three ways to control the subject matter:

  1. build a theory of change
  2. become a social entrepreneur
  3. run the show – get into the boardroom



(Once again, the end of Web Directions has arrived so fast your head spins.)