A11y Camp 2019 was held at the ICC in Darling Harbour, Sydney.


Disclaimer

  • These notes were hammered out very quickly and paraphrase what speakers were saying. They are very rough and will include missed words and typos. If you need an exact quote, use a definitive source such as a recording or slide deck.
  • Photo credits as per the social media embeds.

Day One

Angel Dixon – Attitudes and design

Angel does many things but currently focuses on design and, accidentally, modelling.

She had designed a range of walking sticks and wanted to see them in more mainstream imagery – which is how she ended up becoming a model, using her walking stick in the photos. She is often the first model with a disability that people have worked with, which can be scary for people booking her. But it works out well.

Angel is also the advocacy manager at Starting With Julius, who work to have disability included within the context of diversity. They seek to address the prevailing negative attitudes to disability in Australia.

We consume media more than ever before and media informs how we see the world. Things are starting to change, but there are still very few people with disabilities on our screens.

Biggest project at the moment is a documentary series called Perspective Shift. It recently aired on SBS, which was very exciting!

So how do these things relate to design? Someone recently described Angel as working in a niche… they were an optometrist! Design is not a niche, everything around us is designed – whether well or poorly.

(Story about trying to get into Subway in a wheelchair, being blocked by a large step, and ending up yelling her order to them across the room just to get some food! Uncomfortable for everyone.)

The design of our environment is particularly important. Physical exclusion creates literal barriers, and changes representation. It shapes who we think belongs and where.

The most common understanding of disability is the medical model – where disabled people are reduced to diagnoses and treated as needing to be 'fixed', an approach that ignores lived experience and tends to throw band-aid solutions at problems.

The disabled access button is a classic example – where you have to hit a great big button just to get a door to open. It’s got a disability symbol on it so it automatically implies everyone who needs the door opened is disabled; and it also makes assumptions that people who need to use the button can see, find and use it. A better solution is an automatic door, which benefits pretty much everyone who needs to go through a door.

The medical model of disability gives people a negative lens: they assume Angel's quality of life is lower, and they come up to her and touch her, trying to 'heal' her or pray for her. Because that's not weird…

A more contemporary definition: a long-term physical, intellectual or sensory impairment, which may hinder participation in society on an equal basis with others.

In a tech and design environment, bringing up accessibility often leads to negative attitudes being expressed – that a11y is the death of good design. In reality, a11y is good design. Inclusion is the driving force behind a11y.

Article: New library is a $41.5m masterpiece, but about those stairs

The article is an interesting read. It goes into how the library was hailed as an architectural masterpiece, but in reality the stairs in the children's area have been closed as 'too dangerous for children'; and there's always a massive queue for the building's one lift. The architect defends it, saying inaccessibility is “a wrinkle”.

The people excluded by such bad design are having their human rights denied.

When you look at the media landscape, while captions are common, audio descriptions are not available on free-to-air. For Perspective Shift, they were able to negotiate a deal to play the audio-described version side by side on SBS On Demand. As is so common with a11y, creativity and determination finds a way.

Video: trailer for Perspective Shift

The trailer includes audio description in a way that simply works, making it an integral part of the trailer.

What you can do:

  • go to an a11y conference (“yay you!”)
  • observe your own biases
  • include people with disability in design

You can’t know what you don’t know. This is important to remember if you feel ashamed of a11y mistakes in the work you’ve been involved with, or you’re scared that you don’t know where to start. Pay attention to your biases and you can begin to challenge them.

Angel’s work is underpinned by equity, understanding different as part of the human experience. When Angel conceptualised it this way, it changed her attitude towards accessibility and design in general.

Santiago Velasquez – Disability: the avenue to solutions

Santiago starts with an exercise getting everyone to close their eyes – then he says “I’m holding something in my hand, you have one minute to work out what it is…” (people start asking questions about the form, size, capability… it was an iPhone)

In electrical engineering lectures, Santiago regularly hears “you can see in this graph…”, so he has to find other ways to interact with things and learn about them. He can’t see it so can he feel it, hold it?

Santiago has his guide dog with him. Peter Singer recently said we should redirect all the money from guide dogs into curing blindness – which assumes Santiago is broken and needs to be fixed. He isn’t broken, he feels pretty good! He’s done all kinds of things including the Kokoda trail – a tough endurance challenge for anyone at all.

Blindness requires adaptability – slide showing the different ways for a blind person to judge if a cup is full of cold or hot water. Cold water you can pop a finger inside the lip of the cup; but for hot water you have to rely on feeling the temperature change and listening to the sound.

Santiago’s family has never really driven their approach to life from blindness. In Colombia there was no real support for disability so they just worked out solutions. When they moved to Australia they chose his school based on being nearby and close to family… only to have them say they couldn’t take him because he was blind.

So that’s his experience for the past ten years – people saying in effect “you’re blind, why are you attempting this…”

Recently he spoke at the UN convention on the rights of persons with disabilities… at the event he found people saying “let’s build accessible technology” but people were not saying “let’s encourage people with disabilities to pursue STEM careers”.

Example: the new trend of lifts not having buttons, instead you use a touch screen to select your floor before getting in. These have a fairly poor accessibility workaround which slowly and annoyingly announces everything. A better solution would have been to include a keypad and a braille description.

People with these lived experiences can provide great input into the design of systems, processes and products – that benefit everyone, not just people with disabilities.

Santiago is currently working on two projects:

1. a tactile display
2. a solution for hailing public transport like buses (you can’t see the bus to hail it)

With public transport, lots of people would like an alternative way to hail the bus – blind, low mobility, parents with a pram…

We live in a world that was not designed to be accessible. So every day disabled people are solving problems. You’d better believe they have amazing problem solving skills.

Aimee Maree Forstrom - Accessibility API 101

Aimee is talking about how the accessibility API works, giving an overview of its capabilities.

The a11y API has its roots in the 90s, when browsers and the Document Object Model (DOM) arrived; the a11y API came along shortly afterwards. Since then we've had both operating system and browser APIs, and the two map together so you can have a seamless experience across them.

The way it works: markup is interpreted into the DOM for display in the browser, and into the accessibility tree for assistive technology. The two have a common language – HTML elements. Authors declare things like a search input in a way that both APIs can understand (names, roles, values).

role is particularly important for the accessibility API as it exposes behaviours and actions.

Using native HTML elements is essentially magic for a11y, as you immediately declare all kinds of behaviours – it’s focusable, it can be activated and so on.
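As an illustrative sketch (not from the talk), compare a native button with a neutral element that has to have everything added back by hand:

```html
<!-- Native element: role, accessible name, focusability and
     keyboard activation all come for free -->
<button type="button">Save</button>

<!-- Neutral element: the role and focusability have to be re-added
     via ARIA and tabindex, and Enter/Space still need scripting -->
<div role="button" tabindex="0">Save</div>
```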

SPAN and DIV are ignored! The a11y API is essentially a subset of the DOM – stripping information down to a leaner model. If we use neutral elements we have to add back all the rich description that the a11y API needs.

There is no direct link between JavaScript and the a11y API – that's where ARIA comes in, to bridge the gap. ARIA talks to the a11y API – it's the only thing that does in this context! This is how we make accessible JavaScript applications: JS updates the DOM and ARIA propagates those changes to the a11y API.
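A minimal sketch of that bridge – a hypothetical disclosure button where each JavaScript state change is mirrored into ARIA so it reaches the a11y API:

```html
<button id="menu-btn" aria-expanded="false" aria-controls="menu">Menu</button>
<ul id="menu" hidden>
  <li><a href="/home">Home</a></li>
</ul>

<script>
  // JS updates the DOM; the aria-expanded change is what
  // propagates the new state through to the accessibility API.
  const btn = document.getElementById('menu-btn');
  const menu = document.getElementById('menu');
  btn.addEventListener('click', () => {
    const open = btn.getAttribute('aria-expanded') === 'true';
    btn.setAttribute('aria-expanded', String(!open));
    menu.hidden = open;
  });
</script>
```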

The internet is built on top of the old internet. We build everything over the top of what we built first. We added the a11y API over the DOM; and we’ve added ARIA onto that as well.

Currently ARIA only works for screen readers; there is a lot of assistive tech that still relies on plain old semantic HTML. Work is happening to make that better, but in the meantime try to use native HTML as much as possible.

Take aways:

  • prefer native HTML
  • test your site with a keyboard
  • provide accessible names, descriptions and links in HTML
  • communicate state and state changes in ARIA
  • check DOM output
  • check accessibility tree output

To ARIA or not to ARIA? It’s aimed at screen readers, but we need to cater for other things as well. Don’t over-use ARIA, use a light touch to avoid creating new problems and unexpected behaviours.

Be aware that testing screen readers has to be done across browsers as they all have their own implementations and extensions to account for.

Go forth and make things more accessible!

Aimee’s slides – DOM and Accessibility API Communication

Rick Griner – Accessible development with modern frameworks

Accessible development is hard – but a lot of things are hard! Engineering has particular skills you need to learn. If you had someone turn up to build your house and they said “windows are hard” and just left them out, you’d be pretty upset! We need to expect the same of web development – that people do the whole job and not just the easy bits.

It’s perhaps more of an issue now because it’s become easier than ever to create inaccessible code. The focus of React, Angular and Vue is not really the HTML, it’s on the JavaScript. The libraries are full of accessibility antipatterns and the documentation doesn’t help either.

Example: HTML select. If we use the native element we can be confident it will work across devices and into assistive tech. If we use angular-ui-select you ultimately get a ul and a div; and if you pass ARIA attributes you can't immediately be sure they will be passed through. Then it gets worse: when you google for implementations, the top result may be a poor implementation!
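For comparison, a sketch of the native version Rick is contrasting against – keyboard support and assistive tech exposure come built in:

```html
<label for="fruit">Favourite fruit</label>
<select id="fruit" name="fruit">
  <option value="apple">Apple</option>
  <option value="pear">Pear</option>
</select>
```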

Demo: the most popular implementation example of the custom select, which doesn’t work with keyboard at all. It’s very popular, 3000+ stars on github, 160+ contributors have worked on it. But it’s not accessible.

There are tools to help us automate testing:

  • linting tools, e.g. eslint-plugin-jsx-a11y
  • APIs, e.g. axe-core and tenon.io
  • browser tools

To go back to the basics, look at native elements. HTML is the web. Understand the semantics of HTML.

A trap of modern frameworks is you can often change the HTML elements in the component and it will continue to work for sighted users, even though it has stopped working for other people.

Be aware of the HTML5 landmark elements like main and nav, they are powerful additions to your HTML as they declare what the element is doing.
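For example (illustrative, not from the slides), a typical landmark skeleton:

```html
<header>…site banner…</header>
<nav aria-label="Primary">…main navigation…</nav>
<main>…the page's unique content…</main>
<footer>…site info…</footer>
```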

Building custom elements is a great feature of modern frameworks – it’s a powerful thing. But it takes a lot more code. Also the concerns about styling native elements are over-stated, there are lots of good reset techniques and examples out there.

Example: a wizard-style stepped form. Labels, inputs and buttons. When you move between steps in the wizard, what do you do with focus? If you re-focus the button with the same label people don’t know the page has changed. A better way to do this is to return focus to the main element that contains the updated content. You can build this into your page component or router. Remember to update HTML title as well.
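A rough sketch of that focus-management step, assuming a hypothetical onStepChange hook in your router or page component:

```html
<!-- tabindex="-1" makes the container programmatically focusable -->
<main id="main" tabindex="-1">…current wizard step…</main>

<script>
  // Hypothetical router hook: after the new step renders, update the
  // document title and move focus to the container so assistive tech
  // users hear that the view changed.
  function onStepChange(stepTitle) {
    document.title = stepTitle + ' – Checkout';
    document.getElementById('main').focus();
  }
</script>
```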

Example: modal/overlay. In the example the React component code largely hides what the actual HTML will be. It means we have to inspect output, inspect behaviour, test focus, test keyboard interactions including escape key closing the modal. We need to check that focus stays inside the modal while it’s open (focus trap to the active content, we may also need to hide the rest of the content from assistive tech). A new way may be coming – the HTML5 inert attribute is being re-proposed to make this scenario easier to deal with.
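A sketch of the markup side, using the (at the time, re-proposed) inert attribute; without it you'd need a scripted focus trap instead:

```html
<!-- While the dialog is open, everything behind it is made inert:
     unfocusable and hidden from assistive tech -->
<main inert>…page content…</main>

<div role="dialog" aria-modal="true" aria-labelledby="dialog-title">
  <h2 id="dialog-title">Confirm deletion</h2>
  <button type="button">Cancel</button>
  <button type="button">Delete</button>
</div>
```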

Example: alerts/messages. ARIA live regions come into play – role="alert" and aria-live="polite". Must be present in the DOM on first load.
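A minimal sketch – the live region containers exist (empty) from first load, and injecting text later is what triggers the announcement:

```html
<div role="alert" id="form-errors"></div>
<div aria-live="polite" id="status"></div>

<script>
  // Setting the text content later is what gets announced; regions
  // added to the DOM after load are often missed by screen readers.
  document.getElementById('status').textContent = 'Draft saved';
</script>
```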

Take aways:

  • inspect the HTML being rendered by the plugins you are using
  • use native HTML where possible
  • use landmarks, ARIA and appropriate headings
  • test!
  • share good examples

Rick has a course coming on accessible single-page apps – let him know if you are interested

Janelle Arita, Jenny Lanier, Steven Raden – Tactile wireframing

IBM build a lot of complex applications. When you view a configuration page for IBM Cloud it’s not so bad, but in edit mode some fields require multiple inputs or complex interactions. This is awful when it’s vocalised in screen readers.

Video: a simple Slack message via JAWS – it's… a lot of talking. For a few lines in Slack. This was a shock to Jenny and Janelle!

They worked with a blind coworker (Randy Horwitz) to learn, in essence, what does it mean to make something accessible? This is important to IBM on many levels. In terms of simple humanity they want to include everyone; from a business perspective they need to save time and money in development and ensure they can sell products to as many customers as possible.

The challenge they ran into was that they needed to code up a solution before Randy could give feedback about how it worked in JAWS. That meant there was a lot of friction to change things later – in fact by then it was almost too late to fix things, which encouraged band-aid solutions.

IBM design thinking:

  • focus on users
  • treat everything as a prototype
  • co-create with diverse teams

They use an observe → reflect → make cycle.

So they created a problem statement about including accessibility into their design process. They created personas for dev manager (who is blind), designer and developer – the solution had to work for all of them.

The baseline workflow was a lot of work happening before a blind user was included in the process (post coding). There were a lot of pain points for all the personas as well, around re-work.

The idea was – how could a blind user engage with wireframes, rather than waiting for the build?

They looked at lots of technological solutions – automated testing, tools to create clickable demos, they looked at 3D printing and embossers… they discovered the Raised Line Drawing Kit, which allows blind people to sketch. They knew they would need to go forward with a physical medium.

Phase 1 – material exploration. They literally started with a visit to a huge arts and crafts supply store, feeling lots of supplies with their eyes closed. They tried lots of things like textured tape, felt pads and pipe cleaners to add tactile aspects to wireframes, and tested these ideas with Randy.

Findings – foam shapes, hot glue, felt and tape etc were easy to work with and communicated larger scale designs. The raised line kit was good for showing detailed micro-interactions.

Phase 2 – creating a pattern library of tactile materials. They linked this with their design system, Carbon – tactile methods of showing common content and inputs.

Findings – the pattern library made it really easy to make things quickly and consistently.

Phase 3 – making a scalable, reusable kit. The phase 2 kit was quick to work with, but you couldn't re-use pieces because they were held together with a lot of glue. They looked for ways to use magnets instead, tried balsa components, and so on. They effectively created small models of inputs, which were magnetised so they could be put on a whiteboard, and used braille labels for them. They could then run design sessions with both sighted and non-sighted users in the room.

I feel like I’m finally able to give useful feedback. – Randy

Findings – empowered Randy to give feedback much earlier, the kit was scalable and reusable.

Where to next? Back to the loop – they enter observe. Test with other teams, keep adding to the kit, get feedback from more visually impaired people.

Thomas Kuzma – A tutorial level for life

Thomas is an advocate for autism, mental health and video games – yes video games!

We live lives that are quite similar to the games we play. Thomas grew up in Lithgow, in a loving and caring Polish family. His mum and dad had different views about whether Thomas was neurotypical – his mum was aware, his dad was insistent he was normal.

Meanwhile he was building up an NPC (non-playable character) library in his head. You also have quests, resources and combat situations in life. But all that was thrown out the window when he moved from Lithgow to a much bigger suburban area in the Blue Mountains. There were things he just didn't understand, and he had no connection with his peers.

This led to depression – and treatment. He was also playing games like Ratchet and Clank, which gave him the mental model of having an arsenal of tools to deal with new situations. Jak and Daxter taught him ideas about friendship.

Crazy notion – what if people with autism, ADHD and so on don’t have a disability, they are just neurodivergent? Not disabled, just different – their minds work differently. The world is built in a way that disables us.

In his work Thomas has found people on the spectrum need social stories and visual tutorials, to help them understand what’s happening and what to expect in new situations.

Sad story time – when he entered work life, he worked for a video streaming company… and was sent into situations nobody on the spectrum should have to deal with. A boxing match under strobe lights was bad, but the worst was covering a huge pub crawl on the south coast… in the middle of it he was told “you are young, you should be enjoying this!”. Thomas couldn't deal with it all and had a mental breakdown.

For a few years he had known and worked with people from Aspect; he’d been going to autism conferences. He learned we are the sum of our experiences, both the good and bad. We have to embrace all of it, as these moments define who we are. Meltdowns can be a way to learn about your own stresses.

He started a blog that was noticed by Aspect, who ultimately hired him. Now he mentors people, talks on TV and overseas.

So what can we take from all this?

Open up your NPC library. Accept those you don’t talk to. It’s important to open those doors and learn from them.
Keep your coping inventory close by. People tend to forget to use the inventory when stressed, anxious or depressed.

I find it very difficult to read faces. It has taken me many years to work on this skill, to adapt. But if you put me in a room with ten black Mercedes I could tell you the years they were produced and everything that makes them tick. I am OK with my social difficulties, because I know my strengths lie elsewhere. – John Elder Robison

Aravind Thulasidaran – Accessibility in Salesforce

“It’s working as per the design!” ...if he had a dollar for every time he’d heard this… and he has said it himself in the past. But the wakeup call was when a friend pointed out the design sucks!

Over time he has realised a11y is not about doing favours for any particular group, it’s better for everyone. It’s not doing something extra, it’s just fixing problems. It’s the right thing to do, but we also know a11y is good for business.

How do you fit this into a CRM like Salesforce? While SF is making a lot of steps to make their product more accessible, SF devs still need to do things right to make things accessible for customers. You need to know where and how to customise it.

SF comes in two versions: classic and Lightning. Lightning is newer and better for accessibility. The tools available to you include Visualforce (apex) and Lightning Component framework; HTML; and ARIA. You can use these to override the default UI; to extend or customise functionality. It works very much like common single page app frameworks.

Fun analogy for ARIA tags – a coffee shop that won't serve you coffee! Don't use the wrong role… Or a sign saying 'open' when it's actually closed… this will make people angry.

You should use the Lightning design system. You can do this via Aura components or the new Lightning web components. You should use the lightning namespace to get the more accessible options. Be careful to use camelCase props for ARIA to ensure they are rendered the way you expect.

Due to the structure of some components, like lightning lists, you will need to reproduce native functionality as you can’t output valid HTML.

For designers, UX and content roles –

  • you can control brand colour and SF will apply an accessible set of complementary colours based on that; while there is an option to override it, be really careful to test the values you apply.
  • it is possible to override tab order if required
  • it’s a good idea to test with high zoom to catch responsive issues
  • you can provide skip links
  • writers – use plain language, ensure alt text is applied when authoring, source captions with video
  • Salesforce has a lot of keyboard shortcuts, hit ctrl+/ to see the list. It’s not a bad idea to share this with your users.

Leadership –

  • promote inclusive culture
  • don’t treat a11y as an optional addon!

Salesforce do have support specifically for accessibility; and known issues in Lightning.

Maz Hermon – Enabling people to care about a11y

This is a story of slow but sure cultural change.

Imagine you turn up to work one day and there’s a massive boulder on your desk. Where did this thing come from? Why isn’t there a big group of people at your desk asking where it came from and what we’re going to do about it?

Now imagine the boulder is something that only you can see; and only you want to change it. This is the analogy for Maz’s experience getting accessibility going in his organisation.

His company has not been leading the way, it’s not a story of an easy road or shining success… it’s about being in the trenches. They’ve been chipping away for the last five years.

The goal has been to make accessibility simply “part of how we do things here”.

You'll go back to work after a11y camp with lots of enthusiasm and energy, but people won't embrace that overnight. Research suggests it takes about 3-5 years to substantially change culture.

Steps…

  1. Go from complainer to a11y champion – don’t complain that people don’t care, make it possible for people to start caring. If you can see how things can be better, you are aware of the boulder. You have an opportunity to make it better.
  2. Don’t wait for permission – you don’t need it, just do your job properly.
  3. Support a groundswell – you can lead from any position. Start things, share knowledge, be the catalyst. If you can relate a11y to company values and commercial opportunities, so much the better. Share widely, don’t be shy. Find some small wins and build momentum.
  4. Go from groundswell to company-wide interdisciplinary support. If you don’t have comms channels in place by this point you really need to get that going. Sometimes chipping away at the boulder reveals hidden gems, achievements that make people happy. Remember in this phase you will have to repeat messages to new people who haven’t heard it before. Find your toughest internal audience and work out what will convince them to get on board. Be prepared for pushback.
  5. Involve users – engage them as early as possible. Don’t rush, but also don’t wait too long thinking you’re not ready to start.
  6. Support people in different roles – think about what other people need to do their jobs. Think outside your usual peers and across the business. It’s your job as a champion to show other people the possibilities and enable them to reach them.
  7. Move accessibility from follow-up work to accessibility being part of daily life. Celebrate wins along the way, reflect on long term progress to stay motivated.

In the analogy, the boulder has been broken up and turned into a bridge over a barrier.

Colin Allen – The importance of digital accessibility for deaf people and sign language users

When we talk about access, what do we think about? Right now there are bushfires across the country, and people tend to rely on audible alerts. There has been an outcry from the deaf community through these fires, because no sign language interpreter was provided on TV during alerts. Only last Tuesday did the deaf community get an interpreter to fully understand the depth of the disaster.

So are the human rights of deaf people really being met? And can’t they just use captions? Well the captions were alerting about a major fire in Campsie. It was actually in Kempsey.

A profile of the deaf community:

  • deaf – commonly born deaf, and profoundly deaf – no hearing at all. Most use sign language
  • hard of hearing – they have some hearing, but commonly rely on devices or implants more than sign language
  • late-deafened – unlikely to use sign language as their deafness came late in life
  • people with hearing loss/hearing impaired – different levels of hearing loss. This term is unpopular as it emphasises the idea of a deficit.

In Australia the preferred terms are “deaf” and “hard of hearing”; or “sign language users” in some contexts. If in doubt simply ask the person.

Convention on the Rights of Persons with Disabilities – “the convention” for this talk. It's an international treaty to advance the rights of people with disabilities and their inclusion in society. It has led to a shift in emphasis from a medical model to one of human rights.

A question to the audience – do we really think the convention has been implemented in Australia?

Article 9 of the convention talks about state and federal government responsibilities regarding implementation of the convention. Colin is focusing on deaf people and sign language users.

Slide: covering a range of world associations related to the deaf communities.

Imagery – there are many icons to indicate deafness, many are quite negative, like the ear with a cross through it. The suggested icon is two hands crossed in a way that indicates sign language. You can get the icon from Deaf Australia. The idea is that the primary method of access for deaf people is sign language.

Sign language is a beautiful, expressive language – spoken and written language seems oddly linear and flat in comparison.

We need to be careful not to forget deaf people in our thinking. A lot of accessibility discussion focuses on people who can hear. Very very few websites include access to sign language. We need to ensure equality for the deaf community, just as we do for people who can hear.

Access to information through ICT (information and communication technologies) is as important to deaf people as it is for anyone else. This is what “universal design” should mean; and it’s critical that universal design is considered from the very beginning of product development. It’s always harder to add things back in later.

Involve people with different disabilities – each is an expert regarding their disability.

Remember there are 1 billion people in the world with a disability; and at least 70 million deaf. 1 in 5 people in Australia have a hearing loss; and there are 30k sign language users. It shouldn’t be about numbers though, it should simply be based on inclusion.

Back to emergency services. Deaf people work hard and pay tax like everyone else – the government should provide services to meet their needs. That includes accounting for technical changes into the future like AI.

International organisations have been lobbying for access to a wide variety of audiovisual content and information. That’s all services – public and private; TV; cinemas; streaming services; everywhere.

We should have sign language interpreters provided at conferences; and if interpretation is provided on TV, the interpreter needs to be on screen and large enough to actually read. It's not a tick box, it's important! Also the interpreter must be accredited – famously, during Nelson Mandela's funeral, there was a fake interpreter! They were signing total gibberish.

Colin has talked a lot about access. But looking at new technologies, he’s aware that people in this room will be involved in new technology. We need to include deaf people in our thinking as that happens.

Signing avatars do exist – but they do not replace real interpreters. Signing avatars are incredibly robotic, they lose all the emotion and inflection of real interpreters. Think of the flattest, most robotic voice you’ve ever heard – that’s what signing avatars sound like. They are ok for very functional things like announcing timetables, but they aren’t good for everything.

Innovations in speech recognition software are great, but they’re not really there yet. It’s very obviously artificial machinery… and it seems few deaf users are actually involved in testing them.

The last thought – deaf people and sign language users have the right to access information. We need qualified interpreters; and representation in the development of new technologies. Please take on this responsibility as part of your jobs.

Sarah Richards – Accessibility is usability

Accessibility is usability, you cannot have one without the other. Also we should consider the language we use – while “disabled people” is a term recommended to Sarah, in truth we disable people with our choices.

The Microsoft Inclusive Design toolkit is used and reused a lot because it's a really good way to visualise inclusivity.

There are 3.9m disabled people in Australia – you all know that, your organisations know that. It’s a big audience with a lot of spending power – it’s crazy to ignore it. We can use these numbers to open up discussions about accessibility.

We know there is a lot of assistive technology, but a lot of people still think it's only screen readers. Also consider:

1. people who don’t know the tech is there
2. deaf people – and don’t assume deaf people can read

Tool: NoCoffee simulator – while not scientific it’s good to get a rough idea of visual impairments.

Example: trying to read some very boring, waffly information about Brexit. For a sighted user it’s merely annoying but for someone with glaucoma it’s intensely hard work.

You can help by writing shorter sentences. Keep sentences below 14 words and everyone can read them more easily; over 29 words you're losing just about everyone. Government tends to write exceptionally long sentences.
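Those thresholds are easy to check mechanically. A naive sketch (splitting on . ! ? – not a real sentence parser, just an illustration of the 14/29-word guideline):

```javascript
// Word counts per sentence, using the rough thresholds from the talk:
// under 14 words reads easily; over 29 loses just about everyone.
function sentenceLengths(text) {
  return text
    .split(/[.!?]+/)           // naive sentence split
    .map(s => s.trim())
    .filter(Boolean)
    .map(s => s.split(/\s+/).length);
}

function flagSentences(text) {
  return sentenceLengths(text).map(words =>
    words < 14 ? 'easy' : words <= 29 ? 'harder' : 'losing readers'
  );
}
```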

There is a fundamental difference between writing content and content design. You have just a few seconds to get someone’s attention and then to keep it. Value their time.

Inclusivity is finding the right information at the right time. There is a persistent myth that if you build a website, people will come… it's still not true.

It’s worth thinking about the way the brain works. There are phases in a simple search – thinking of a question, thinking of words, searching for them, picking from the results…

Traffic is a vanity metric, it’s not a measure of how useful your content is. It’s not a measure of how easy it is to consume. Measure the intention of your content. What are you trying to accomplish and for whom?

You should have a search results strategy, not just an SEO strategy. Help people get information from the results. Design for the human, let Google handle the SEO. Don’t spend your time freaking out about search algorithm changes. If you design for people, they trust you, they remember you, they like you and they might come back.

Work on your content and the rest follows.

...

Hidden access needs (trigger warning: abuse)

Some access needs are hidden. A university created an app, with the assumption all students have phones… people have phones, but the phone may not work; or it may have run out of memory; and it’s only a phone if it has data – otherwise it’s a brick. Students who are impoverished will feel shame; they won’t tell you they can’t use your app because they can’t afford the data.

Abusers control access to information, and in particular to health care – if you work in health care, get information into search results. Get contact numbers into the first 11 words of Google results.

...

gov.uk/info – you can view the metrics for any GOV.UK page/URL. It reveals that often nobody at all is reading the information, even though they should. A lot of this unread information is very important and simply not published anywhere else. Basically it’s a problem of using the wrong channel – excluding people from information by putting it in the wrong place.

When people arrive on your page, you need to show the ‘edges’ of the content. Give an idea of the shape of the content. It gives people confidence because you’ve very quickly outlined what information is actually there.

Next you should orient people on whether they should keep reading. Use good headings so people scan the page. If they shouldn’t be reading the page, let them get out of there quickly.

A good exercise is to remove all the body copy from a page and see if it makes sense just from the headings.
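
The headings-only exercise can even be run quickly against a real page: strip everything except the h1–h6 text and read what’s left as an outline. A minimal sketch using Python’s standard-library parser (the class and function names are mine):

```python
from html.parser import HTMLParser

class HeadingExtractor(HTMLParser):
    """Collect the text of h1-h6 elements, ignoring everything else."""
    HEADING_TAGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

    def __init__(self):
        super().__init__()
        self.headings = []
        self._in_heading = None  # tag name while inside a heading

    def handle_starttag(self, tag, attrs):
        if tag in self.HEADING_TAGS:
            self._in_heading = tag
            self.headings.append((tag, ""))

    def handle_endtag(self, tag):
        if tag == self._in_heading:
            self._in_heading = None

    def handle_data(self, data):
        if self._in_heading:
            tag, text = self.headings[-1]
            self.headings[-1] = (tag, text + data)

def outline(html):
    """Return the page's heading outline as (tag, text) pairs."""
    parser = HeadingExtractor()
    parser.feed(html)
    return [(tag, text.strip()) for tag, text in parser.headings]
```

If the outline alone doesn’t tell a reader what the page offers, the headings need work.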

Video: provide captions and a transcript for those who can’t watch but can read. Don’t just have a video “for engagement”, have a video if the content needs it, if the video will be more usable than other content.

Generally people can read faster than you can speak. Screen readers in particular can go incredibly fast.

Who loves jargon? Who loves arguing about jargon? People may not understand jargon and you will alienate them. Jargon, slang and idioms are difficult for anyone who doesn’t know the terms, for people reading in a second language, and for many autistic people.

You may have lots of really funny, clever language… consider how many people you might lose, just to be funny.

Resource: Readability Guidelines. This has lots of research to back up the guidelines.

Clear and easy content is better for everyone. It’s faster and easier to consume. The impact is magnified for anyone with a cognitive challenge. Clear content and headings are better for people with motor and visual impairments.

Generally, people want to understand – not marvel at your language skills.

You do not need to be boring to be clear. Nike runs on three words – “just do it”. Brand recognition is massive. You’ve heard those words all your life, not just in the ads – it’s a phrase even small children pick up quickly.

VW’s “Think Small” campaign is one of the most successful campaigns in history.

Hashtags – use #TitleCase to make it easier to read them.

You don’t have to use formatting or tricks to stand out. You don’t even have to use bold.

Change your sentence length and punctuation. When you read, you’re looking for a rhythm – this is a cross-cultural thing. Our brains settle into a rhythm. If you stop that, you stop people reading.

Use space. It’s ok to give people time to think.

Struggling through content is not a success metric. So if you measure people spending a really long time reading your page… why?

Accessibility is more than just code and colours. Change your content too. You won’t lose people by communicating well.

Rules of inclusivity:

1. be relevant
2. be where your audience is
3. value the time and pain, not numbers
4. consider the whole spectrum of needs

It’s not dumbing down.

It’s opening up.

Day Two

Ed Santow – Future tech and how to make it accessible

New technologies such as AI can improve our lives – they can foster economic development; they can create greater inclusivity. Of course there is a “but” coming… for every positive example there is a negative example. These new technologies can just as easily exclude people with disabilities as help them.

To expand on some positives before dealing with the negatives…

Self-driving cars and AI assistants are changing the world – some of the rhetoric is true. Self-driving cars are a particularly obvious example, as they can give increased autonomy to people who can’t drive; and in-home assistants can make life much easier for people with disabilities.

These things are absolutely positive, but it doesn’t change the fact they often disadvantage and exclude some people. It’s not ok to ignore human rights in the name of ‘progress’ or ‘innovation’.

An imperfect intro to AI – Ed has been trying to find an innocuous example! The example is Dutch artist Anna Ridler’s AI tulip artwork – generated tulips. It uses a training data set – lots of photos of tulips – to ‘teach’ the AI what a tulip is.

This can highlight weaknesses of the AI process:

  • there is a selection process involved in choosing the training images (eg. only tulips from Europe)
  • the images were labelled as ‘white’ tulips, but that is subjective and some look cream or even pale yellow

So the AI isn’t impartial. It picks up any assumption or bias in the training data. It also isn’t perfect, all training sets are finite.

So when you apply AI to something serious like facial recognition you get some real issues. This is a hot topic and many governments plan to use it in some way, at some scale. China is planning to use it at a massive scale, as part of its controversial social credit system.

The London Metropolitan Police used AI to identify people suspected of committing crimes. This was used for real; and the good news is they found 104 matches… the bad news is 102 of the matches were wrong – a 98% failure rate. This was corrected “in due course”, but that’s an alarming phrase – “due course” can include being arrested, detained; all kinds of coercive actions can be taken.

A big problem to call out here is that inaccuracy is not spread evenly across the community. MIT’s ‘Gender Shades’ research showed most AI is much more accurate specifically for middle-aged, white male faces, and less accurate for people with dark skin. It was tested with a simple ‘male or female’ classification of headshots: when the person had dark skin, the predictions became significantly less accurate.

Similarly people with disabilities are much more likely to receive inaccurate results from facial recognition.

It’s concerning that people have been so willing to suspend their disbelief about this new technology. It’s new and still very inaccurate; when it fails it impacts people with disabilities heavily; and when used for high-stakes purposes it’s often impacting human rights without normal checks and balances.

As the technology improves we will see benefits for people with disabilities – eg. greater visual processing capabilities have a lot of applications to assist people with vision impairments. But these positives don’t justify the harm being done in the meantime.

Issues arose with the Commonwealth Bank’s ‘Albert’ machine, which uses ‘pin on glass’ technology that was not accessible. It is possible to create inclusive pin-on-glass implementations; Albert just didn’t. It’s a reminder that design really matters.

At the Human Rights Commission, they focus on ‘human rights by design’, using the term as broadly as possible. There is a lot of fantastic work going on – it’s not all doom and gloom!

A discussion paper will be released soon, on the impacts of technology on human rights and how we can apply design-led principles to the problem space.

Slide: the classic Design Thinking hexagons (empathise, define, ideate, prototype, test)

This is a well-known methodology, but we need to apply the inclusivity lens to it. Beware of the tech habit of using Minimum Viable Products – it’s ok when the stakes are low, but when they are high MVPs can violate human rights in serious ways.

This project has been running for 1.5 years; key publications can be found at tech.humanrights.gov.au

Shilpi Kapoor – Accessibility: the people first approach

Shilpi starts with the story of why she does what she does – in ’93 she got into her first internet chat room, and then got into hacking. She got an internship with an American company and worked on keeping their servers secure in India’s time zone. A couple of years into the job, the person who had hired her was on the same shift… and Shilpi caught a hacker her mentor didn’t. She was puzzled: how could this happen? It turned out her (remote working) mentor was paralysed and used a sip-and-puff device, which was slower. But the experience shattered her previous assumptions about the limits of what disabled people could do; and ultimately led to her founding Barrier Break.

Many people start out not knowing, or not understanding, accessibility; then have some kind of experience that turns them around and they start asking: why isn’t this accessible?

There are 1 billion people with disabilities in the world (WHO & World Bank). Regional breakdowns can make the number of people affected look smaller than the global figure.

Accessibility: allowing people to use any thing, product or service in the way they want, using the tool they want. It’s about choice. It’s easy to become distracted by the technology, the products or services… and forget to focus on people.

How would you define people? We might take a well-evolved view that it’s anyone who identifies themselves as human. But most people tend to categorise and group. There is in fact a European standard, EN 301 549, which defines groups of people with disabilities… but the terminology doesn’t match that of WCAG and Section 508. We need these terms to be harmonised globally so everyone can work to the same standards and understandings.

Now think about your own teams – how many have team members that fall into one of these groups? (many people have one or two, but most don’t have any in their team) ...this is a challenge. We work in accessibility, but do we regularly interact with the people it impacts?

Challenge: sit with a regular screen reader user and see how much of the content you can follow. See how they use it compared with our own inexpert ways of using it. We are motivated but we are not actually users of assistive tech.

So what do we do with these user groups?

  • put people first
  • experience how people access it
  • include people as part of the process

Regular users of a tool will use it differently than someone who is not a primary user of it. This includes things like knowing the right keystrokes to navigate a PDF; or the way screen reader users extract the headings and read those first. It’s a very different mode of usage than a sighted user running a screen reader.

So can you shift the paradigm? The people-first approach – who, what, how? Who can you add to your team, what can they contribute, what areas could you work on? Barrier Break has a testing routine that includes people with different abilities, using different technology, and the combined results are extremely effective.

Reference: NewzHook

You need to include people at every stage: different people + different tools + different devices

Strategies

  • 60-20-10-10 – 60% visually impaired, 20% cognitively impaired, 10% deaf, 10% mobility impaired
  • design to launch – bring people into the process much earlier, during design not just before launch

Julie Grundy & Chris Pidgen – Accessibility testing strategies

Every testing strategy is a custom strategy – everyone has to find what works for them.

Intopia test process:

  • planning
  • create test strategy
  • perform testing
  • iterating on the plan

Many teams find that it’s good to start with a smaller project or feature, to establish a testing process. They recommend testing to WCAG AA as a realistic baseline, then up to AAA as possible. If your audience has an unusually high proportion of disabled users then naturally AAA compliance will be more urgent.

There is a huge range of testing tools out there… a good subset is recommended. Get as much coverage as you can.

  • Browsers: Chrome and Firefox both have good a11y inspectors
  • Browser plugins: axe, ARC Toolkit, Accessibility Insights for Web
  • Colour Contrast Analyser (from Paciello Group) – much more accurate than plugins, but manual
  • Desktop screen readers: VoiceOver, NVDA and JAWS are the top three. Refer to WebAIM’s screen reader surveys for stats.
  • Mobile screen readers: VoiceOver on iOS has a high share; TalkBack has less
  • Other inputs: Dragon Naturally Speaking
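
The contrast checks these tools perform are based on the WCAG 2.x contrast-ratio formula, which is simple enough to sketch directly (the formula is from the spec; the function names are mine). WCAG AA requires at least 4.5:1 for normal text and 3:1 for large text:

```python
def _linearise(channel):
    """Convert one sRGB channel (0-255) to linear light, per WCAG 2.x."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an (r, g, b) colour, 0.0 (black) to 1.0 (white)."""
    r, g, b = (_linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical) to 21:1 (black on white)."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

The hard part for automated tools isn’t this maths; it’s working out the effective foreground and background colours on a real page (overlays, images, transparency), which is one reason a manual eyedropper tool can be more accurate than a plugin.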

Document these choices in your process; and invest more heavily in critical path features in your application.

  • To create test cases you need to know the WCAG criteria. While learning this Chris created a high-level checklist to create something easy to work with.
  • Ensure each test case covers one thing that can clearly pass or fail; and it can help to differentiate best practice vs specific compliance (in the name of the test).
  • Group tests in a way that’s logical and makes them easy to find, eg. by page and component.
  • A traceability matrix can help track your tests back to the criteria they cover.
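
A traceability matrix can be as simple as a mapping from test case to the success criteria it covers, which also makes gaps easy to find. A toy sketch (the test names and criteria here are illustrative, not Intopia’s actual set):

```python
# Hypothetical test cases mapped to the WCAG success criteria they cover.
test_matrix = {
    "login form: inputs have programmatic labels": ["1.3.1", "4.1.2"],
    "login form: errors identified in text": ["3.3.1"],
    "main nav: reachable by keyboard": ["2.1.1"],
}

def coverage(matrix, required):
    """Split the required criteria into (covered, missing)."""
    covered = {c for criteria in matrix.values() for c in criteria}
    return sorted(set(required) & covered), sorted(set(required) - covered)
```

Running `coverage(test_matrix, ["1.3.1", "2.1.1", "2.4.7", "3.3.1"])` would reveal that 2.4.7 (Focus Visible) has no test yet.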

A good first step is to run the Accessibility Insights plugin, which includes axe testing.

While you’re planning this out, it’s a good idea to start on change management – set expectations with people, communicate when things will start, the amount of work involved and what support is available.

If you are really lucky, people will start asking for things like checklists – it’s a good sign of enthusiasm. Keep them short, stick to actionable tasks, try to group to the person’s role.

Be ready for the fact testing will take more time at first, you’ll be slowed down for a while. Over time you speed up.

Writing bug reports – be consistent, make it very clear how to reproduce the issue, expected vs actual results. For a11y issues it may be worth adding “why it matters” information to help developers understand the impact of the bug. If you know the specific problem, say so – eg. when something doesn’t have a programmatic label.

Iteration is the most important part. The chances of getting it right on the first try are very low. Just start and iterate! It can help to gather metrics as you go, around the number of issues raised and resolved. Give praise to people who are making good efforts, it helps! Be ready to communicate the value of what you are doing. When things need to be adjusted – ask whether you need to change the strategy or the goals.

Pay attention to reports from real users – complaints will show the maximum pain points.

Also remember to actually use your process yourself. Don’t define a test process for other people without using it yourself! Initially Chris thought a huge test plan was the most valuable thing; but in the end a set of JIRA issue templates was more useful! Don’t be afraid to throw out things that aren’t working.

Alison Ennis – To automation and beyond!

This talk is about accessibility regression testing on the NAB banking app.

A11y at NAB

  • it’s mandatory
  • four key commitments: practice inclusive and universal design, embed an inclusive culture, listen to customers and act on feedback, create an environment that seamlessly accommodates diversity
  • also bound by the banking code of practice which includes a11y
  • NAB also has an internal neurodiversity program and NABility to help people who need assistance with accessibility
  • “Better Together”
  • Aware that 4.3m people in Australia live with a disability; almost everyone has a temporary disability at some point; and we have an ageing population

The NAB banking app has about 1.6m users on iOS and 930k on Android. Applying the 20% rule, that means roughly half a million people with disabilities may be using the app.

Example considerations for the app

  • deaf – no reliance on audible alerts
  • limited mobility – large tap areas, no complex gestures
  • enable voice activation through the OS
  • cognitive – simple and clear to read, minimal amounts of animation or movement, colour and iconography used to aid understanding of text
  • vision – good colour contrast, no use of colour alone, screen readers hear what sighted users can see, specifically designed for large text

The important thing is we stick together. – Buzz Lightyear, Toy Story

How? Dedicated a11y team, about 10 people – two on the mobile platform. Working to WCAG 2.1 AA compliance. They have a lot of teams to support so they have to use strategies that can scale.

Aim is to include a11y testing in all stages of development – it’s in specs, it’s part of the definition of ‘ready’. Developers test a11y as part of their normal duties; it’s part of the dev role.

Regression testing issues with the app

The goal of regression testing for the app is to ensure the app has not regressed to a less accessible state. It’s not testing initial compliance, it’s testing it hasn’t been broken by new changes.

They used to do this manually, which meant it was happening very late in the cycle and it was limited to core flows. Basically it didn’t scale; and a11y issues frequently had to be addressed after deployment.

They needed to test more screens, more quickly, on more devices, more often.

What they’ve automated

They already had a visual comparison test suite, using Applitools and Perfecto. So they applied it to a11y testing.

This lets you test changes to:

  • Large text mode
  • Contrast
  • Structure
  • Content

You establish a baseline set of screenshots, then any change to that will fail the test. It does detect px-perfect level changes and displays a highlighted difference, which can then be reviewed and either accepted or rejected. You can exclude dynamic content like dates that will always be different.
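
At its core this is a pixel comparison with masked regions. Applitools does far more sophisticated matching, but the idea can be sketched over flat pixel buffers (all names here are mine, not part of any tool):

```python
def diff_pixels(baseline, candidate, width, exclusions=()):
    """Return (x, y) positions where two flat pixel buffers differ,
    ignoring pixels inside any exclusion rectangle (x, y, w, h)."""
    def excluded(x, y):
        return any(ex <= x < ex + ew and ey <= y < ey + eh
                   for ex, ey, ew, eh in exclusions)

    changed = []
    for i, (a, b) in enumerate(zip(baseline, candidate)):
        x, y = i % width, i // width
        if a != b and not excluded(x, y):
            changed.append((x, y))
    return changed
```

An empty result means the screen matches the baseline; anything else gets highlighted for a human to accept (new baseline) or reject (regression).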

This is hugely reducing stress; and it takes just an hour to run instead of the old three-day manual testing. They can do more devices and platforms; and scope of testing is expanding out from core flows. Issues are identified much earlier, so they can be fixed prior to launch.

Limits – dynamic content (including loading screens); long screens don’t work very well; screen readers aren’t integrated yet.

Next steps

  • increase scope – more screens, more devices
  • cover more accessibility concerns
  • add visible output (screen mode) from screen readers in visual testing, as it will detect changes

To automation, and beyond!

Matthew Brennan – Accessibility QA at scale

Matt is a Technical Program Manager for accessibility at Facebook. To keep the talk to time, it covers just the mobile app.

Facebook has about 10k engineers, working on an ever-increasing range of products with a weekly release cycle. This requires a lot of testing. They have 500+ QA testers!

The upside of this scale is they can surface a lot of a11y issues before release – the majority are basic labelling issues, like missing roles or unlabelled buttons. The downside of this scale is that most of their testers are not a11y experts, in fact they have no real knowledge or training in a11y. So their testing did not reflect the real experience; and fixes had much less impact than they should have.

They definitely couldn’t do deep-dive audits before release – there just wouldn’t be time and a lot of effort would be wasted. What they did do is review the QA process:

  • how were test plans created
  • how were the tests being performed
  • tester knowledge
  • how was quality of QA being measured

There is no one team that ‘makes Facebook’; there are hundreds of product teams making features. The test plans weren’t owned or updated; and bad test plans produce low-quality bugs. The action was to create new test plans and keep them updated.

They couldn’t individually train 500+ testers, so instead they dived through the bug backlog. A requirement for bugs is a video of how to reproduce the problem, so they could see how the tests were being performed. They were using a screen reader, but they were tapping the screen and not swiping or flipping. Action was to provide training on these specific techniques.

Lack of knowledge was a big pain point. Testers were just following instructions they’d been given; they didn’t really understand what they were about. That meant they didn’t really have much empathy for, or understanding of, the impact of the bugs. Action was to create a11y champions and empower them.

Measuring quality of QA is hard – what are the signals of quality? The team was really just tracking operations like raising and closing issues, not the impact or quality of those issues. Action was to define metrics that did indicate test quality, so the right thing could be measured.

Putting this into action…

They started by setting up the champions. They got leadership buy-in, found the right people and empowered them to be the owners of accessibility quality. They made sure this was a real role, freed them up from existing duties so it was not done on a volunteer basis.

Train-the-trainer strategy. They trained the champions on a11y testing; how to identify high-quality, high-impact bugs and fixes; etc. Started with ~50 champions within QA. The champions then train their own teams and the program can scale. They created pathways for people to transition from functional tester to ax tester.

Sidenote: Facebook says “ax” for accessibility – too many people were confused by a11y!

A major tip regarding training – ensure there is assessment to check people have really understood the training.

They categorise accessibility issues into groups like label, focus, interaction. This helps keep things manageable.

They made sure it was clear where to go to get help, ask questions and escalate difficult issues.

Test plans are all assigned an owner; then it’s a reasonably straightforward process of identifying core flows, how to test them and what other requirements exist. They also note what’s covered by automated tests, so manual tests don’t duplicate that effort.

Final part was measuring quality of QA:

  • impact not ops – high priority, verified fix rate
  • find quality signals – actionable tasks, changes to the priority of the issue over time, categorised issue types allow measurement of different projects
  • improvements beyond QA – tasks reopened (indication of bugs closed without really being fixed), average task age, closed without verification
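
Metrics like these can usually be computed straight from an issue tracker export. A minimal sketch under assumed field names (none of this reflects Facebook’s actual schema):

```python
def quality_metrics(issues):
    """Summarise impact-oriented signals from a list of issue records."""
    fixed = [i for i in issues if i["fixed"]]
    high = [i for i in issues if i["priority"] == "high"]
    return {
        # of the bugs marked fixed, how many were actually verified
        "verified_fix_rate": sum(i["verified"] for i in fixed) / len(fixed),
        # high-priority bugs that got fixed, not just raised
        "high_priority_fixed": sum(i["fixed"] for i in high),
        # reopened tasks suggest bugs closed without really being fixed
        "reopen_rate": sum(i["reopened"] > 0 for i in issues) / len(issues),
    }
```

A rising verified-fix rate and a falling reopen rate say far more about QA quality than raw counts of tasks opened and closed.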

Measurement was the hardest!

Impact of all this work? Took about a year to implement everything.

  • doubled the amount of bugs being surfaced
  • tripled resolved bugs
  • 5x more high priority issues being closed
  • doubled the amount of focus bugs being surfaced
  • QA training scales as the team grows
  • QA leading quality programs themselves
  • QA adds value beyond testing

...

A couple of tools Facebook is working on that they hope to open source:

  • internal web app testing tool that highlights accessibility issues on development environments – it literally blocks any other testing in dev environments until basic errors are resolved!
  • plugin on top of Flipper (fbflipper.com) which is for debugging iOS and Android that will do the same thing for mobile apps

These are in development; a release date isn’t set, but the goal is to release them to the public so other people can use them.

Ben Buchanan – Clickable DIVs and other icebergs

Naturally I can’t summarise my own talks from an audience point of view… :)

You can see the slide deck from Clickable DIVs and other icebergs, noting the animated GIFs did not survive upload.

Reminder that most of the CSS isn’t useful, it just matches Chrome (Windows) default design. In realistic scenarios you will be creating a custom design. In that custom design you need to handle static, focus, hover, active and disabled states; and set cursor and user-select.

The beginner links for those who asked: learn.shayhowe.com and webaim.org/articles/

Rico Minten – An intro to a11y in web components

Rico works at Deloitte, where they decided to use Web Components for the new iteration of their component library. They chose web components to avoid having to recreate components in every new framework.

So how do you make accessible web components? Same as any other thing on the web – use the correct tags, test against WCAG, and so on. The web components specification has changed recently, so browser support can be a bit tricky.

Key concepts about web components:

  • they are encapsulated in the shadow DOM – that is, your element (the shadow host) has a shadow tree below it in the DOM
  • they allow child elements via the slot mechanism
  • you can nest them to any depth required
  • you surface them as <custom-tags></custom-tags>

Google uses web components in their Polymer framework, so it’s a good thing to dig through.

Rico’s team hit a lot of issues with nested components. The first one was focus management – when they composed expanders into an accordion, you couldn’t tab between each item. What’s going on? The browser sees your custom element as a single DIV. So tabbing inside them just didn’t work as expected. The solution is to manage focus internally in your component.

Something to be aware of: web components are rendered in full on first DOM load, then style and behaviour come in after that. While normal for DOM scripting, the effect can be quite noticeable and surprising. You can hide the component until it renders, but that doesn’t really resolve the issue. You can render them server-side for first load, much as that seems odd; it’s the same pattern as SPA hydration.

Global CSS can’t access the shadow root, but CSS custom properties can cross the boundary. If you reset a global custom property inside the web component, the new scope is the component. Also media queries cross the boundary.

Because nothing leaks out of web components, you have to be careful when debugging; but it does mean you can safely use IDs inside your component without worrying about clashes – very important to know when using ARIA. Note, however, that the name property is not scoped to the shadow boundary, which seems weird but you do need it to capture data from forms.

Russ Weakley – Accessible states in design systems

What is a ‘state’ anyway? Common states that can be applied to components – eg. visited, unvisited, checked, hover, disabled…

During user tests, Russ has seen some interesting results of badly written states.

The players…

  • Valerie – 85, retired, rarely uses the web, “technically challenged”
  • Mark – 28, sustained a head injury in an accident, giving him short-term memory issues and lowered emotional control
  • Judy – 65, has cerebral palsy, uses a head wand and sticky keys (using the numpad as a directional control)
  • Pavel – 39, red-green colourblind
  • Diya – 35, highly technical user

The tests…

Valerie was testing a site where the links had no underlines and very subtle colours; and she couldn’t complete the process.

Mark was testing a site with no visited link state; it sent him off to do other processes, but every time he came back to the main page he couldn’t tell or remember where he was up to, and he’d go back to the same information.

Focus should be applied to any interactive element. Judy was trying to read a news website via tab keystrokes, but the focus style had been removed from many of the elements in the page and she got totally lost – she just had no idea where she was up to in the page. Russ uses a really strong, reversed design so focus contrast is really obvious. Sure they’re ugly, but only keyboard users need to see them (you can use JavaScript for this).

Reference: “Removing that ugly focus ring and keeping it too” (good article and javascript for handling focus)

Hover… Judy was testing a scenario that required her to click something in the middle of the screen. But the links didn’t look like links.

Active is much the same as hover. Judy was so relieved when she finally clicked something and got an active state – confirming the thing was a link and had been activated – that she gave a small sigh.

Disabled state means you can’t interact with the element. Judy was testing a form that had disabled inputs that weirdly still had a focus state, so she kept trying to edit them. Very frustrating! Make disabled elements really look disabled. This is a rare case where you don’t need high contrast; in fact you should treat the elements like they aren’t there at all.

Invalid – this is where the user has entered something that is ‘wrong’ in some way. Classic problem here is using colour alone to indicate the problem state. Pavel was testing a form but was confusing red error states with green success states. So add an icon or otherwise change the design in addition to any colour cues.

Checked… Diya was filling out a form for someone and there was a checkbox so minuscule, she had to lean in and squint to see if it was checked. Make them big and obvious; make sure it’s easy to tell whether it’s checked or not.

Beware of combination states:

  • checked + focused
  • pressed + focused

...Valerie was trying to fill in a form and she couldn’t tell the difference between focused and checked states for some radio buttons. The focus style looked selected. You need to test these possible combinations for some inputs.

One last thing to know is adding behaviour and states to non-native controls. Example of creating an accordion – it needs static, active, open and closed states.

So how to systemise all this?

Russ creates a big grid of the designs, so you can compare them all at once.

russmaxdesign.github.io/states – built for Illustrator; it’s a design tool!

It’s massive but it captures all the possible controls and states. Includes native and non-native components.

This will quickly surface inconsistencies between common states – eg. it’s very common for different elements to have wildly different hover states, because each was designed in isolation.

Consistency is next to godliness!

Once you have this in place, you can add new components and maintain cross-component consistency.

What are the take aways?

  • design all these states, especially for non-native widgets
  • try to systematise all of these things
  • test with users

Russ’s slides – Accessible states in design systems

Sean Fitzgerald – Smart homes, environmental control, AI and beyond

Sean presenting via video chat, having had his train from Canberra cancelled due to bushfires!

Assistive technology is an umbrella term for any device or system that allows individuals to perform tasks they could not otherwise do; or to make it easier for them to perform those tasks. This impacts housing, mobility and technology.

Typical housing modifications are ramps, wider doorways, wider corridors, hard floors (easier for wheels), kitchen modifications, and bathroom modifications like hoists. Kitchen modifications vary hugely depending on the individual’s specific abilities, and interest level in cooking…! Bathrooms make a huge difference, a morning routine can take 2-3 hours for someone with a significant disability.

But what if you can’t open a door or turn a light on?

Video: showing door openers, curtain and blind controls, fans and aircon, appliance controls, etc.

You need an environmental control unit. ECUs come in several forms – wired, wireless, infrared – plus the human interface. Wired was expensive to install and best done when the house is built; wireless and infrared tech came along around 2000 and started to make the physical installation a bit easier. But none of it matters if the interface isn’t right.

So the holy grail is a universal interface that truly works for everyone. In the meantime, what consultants like Sean do is find solutions that work for the specific person. This could be a particular style of switch – proximity, sip-and-puff, a foot switch, voice control, and so on. Then there are computers and mobile devices, with face tracking, touch and voice controls built in (or added with supplementary software like Dragon NaturallySpeaking).

These interfaces are often quite ugly and uninviting, but they work well even with devices as simple as a single switch (the highlight moves down the rows of controls until the first press, then along the row until the second press selects an item). But surely we can do better?
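The single-switch row/column scanning just described can be sketched as: the system steps the highlight through the rows; the first switch press locks in a row, then the highlight steps along that row's items until a second press selects one. (A minimal simulation with hypothetical home-control labels; real scanners add dwell timing, auditory cues and escape behaviour.)

```python
from typing import Sequence

def row_column_scan(grid: Sequence[Sequence[str]], presses: Sequence[int]) -> str:
    """Simulate single-switch row/column scanning.

    `grid` is a 2D layout of controls. `presses` gives, for each of the
    two switch presses, how many scan steps elapsed before the user
    pressed the switch. First press picks a row, second picks an item.
    """
    row_steps, col_steps = presses
    row = grid[row_steps % len(grid)]      # first press: lock in a row
    return row[col_steps % len(row)]       # second press: pick the item

# Hypothetical environmental-control layout
grid = [
    ["lights on", "lights off"],
    ["door open", "door close"],
    ["tv on", "tv off"],
]

# Wait through 1 row highlight, press; then press on the first item.
print(row_column_scan(grid, [1, 0]))  # -> "door open"
```

The cost of this simplicity is speed: every selection takes two passes of waiting, which is exactly why richer interfaces are worth pursuing.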

Then the smart home revolution comes along. Driven in part by rich people demanding things that were one step better than everyone else, we got homes with deep levels of automation and nice interfaces.

A key innovation is that the devices in the house form a mesh network, so they're less susceptible to wifi signal issues. Slow wifi is annoying, but if you are trying to do something as fundamental as opening a door it becomes a serious issue. Mesh networks create a strong signal throughout the house; it's not perfect, but in most real situations it's very reliable.

So how do people control all this? Mobile phones and tablets are great for a lot of people, but by no means everyone. Voice control works for a very wide range of people; it has improved to the point where it works well even for people who can't speak very clearly, and for environmental control you often only need to tune 20-30 commands. Google Home and Amazon Alexa are popular options because they are so cheap and capable – it's quite amazing what can be done with a $50 device.

But still, there has to be something better. There are systems on the way that bridge the gaps and become more holistic than voice control. A lot of human communication uses facial expression and body language. That will take things much further than voice-only.

Video: showing progress of AI and digital assistants (soulmachines.com)

Mark Sagar is the man behind Soul Machines; he won awards for his animation work on Avatar and King Kong, and then had the opportunity to work with people with disabilities. The avatars are getting pretty good.

Of course you need to ask: is this a universal interface? What if people don't like the avatar, or don't actually like talking face to face? And will we just reproduce real-world problems, like blind people being unable to see the expressions of the people they are dealing with?

The NDIS is working on a digital assistant, Nadia. It's a very big project with IBM involved, a team of engineers and scientists, and even Cate Blanchett as its voice.

Video: NDIS promo about the Nadia project, including the consultation with people with disabilities

James Thurston – Redefining smart: creating more inclusive cities

James starts by talking about a friend of his – an urban planner, tech enthusiast and someone with a disability. His friend says he loves smart cities, but since he has a disability they don’t love him back.

While smart cities are a major trend around the world, they are not inclusive to all. A global initiative called Smart Cities For All has been created to address this. It’s a very broad mandate!

Cities are our future. Around 90% of Australians live in urban areas. Around the world around 50% of all people live in cities; and it is predicted to reach 70% by 2050. Ten of the 11 fastest growing cities in the world are in Africa.

Along with this urbanisation we are seeing a digital transformation in just about every aspect of our lives. Not just personal but also education, health care, transportation, mobility (eg. rental ebikes and scooters), public safety and even justice. Younger generations are genuine digital natives who will expect to live their lives in a connected way.

Cities are on the leading edge in some ways; and it’s huge – a $2.5 trillion market. We live, work and play in cities and technology plays a role in all of those things; and it’s not just large cities becoming Smart Cities.

A smart city uses ICTs to enhance livability, workability and sustainability. – Smart Cities Council

Consider all the services cities provide to the people who live there. Technology, done well, can improve just about all of them; but that also means they must all be accessible.

A lot of work comes up around digital payments, including mobile and web payments. It’s common to move these online instead of in-person service centres, so they must also be accessible.

A big part of a successful smart city movement is ensuring information can be crowdsourced. There are some really successful examples of systems that simply let people report things that need maintenance – like potholes.

So with a11y being critical to all this, how are we doing? Tech definitely isn’t accessible enough.

40% of countries that have signed the UN Convention on the Rights of Persons with Disabilities have accessible government websites. Just 18% have accessible sites among their top ten private-sector websites. So what can we expect for technology newer and more complex than websites?

Currently, smart cities are making the digital divide worse, not better. And confidence reflects that – in a survey James' team ran, 60% of respondents felt smart cities were failing people with disabilities.

We can do better. Smart Cities For All are providing resources to help:

  • guide to the standards
  • guide to procurement
  • advice on making the case for digital inclusion
  • database of solutions
  • maturity model for cities
  • inclusive innovation playbook

Five pillars of inclusive smart cities

  • strategic intent – have a digital inclusion strategy, a business case, leadership, and budget. How will you get people beyond a risk-avoidance basis for accessibility to a more mature approach?
  • culture – innovation, citizen engagement, diversity, transparency
  • governance and process – org structure, procurement, measurement, partnerships
  • technology – asset deployment, global standards, solutions development, address the digital divide
  • data – address the data divide, data solutions

The challenge to Australian cities is to be both smarter and more inclusive.

Other resources