The Private Definition of Accessible
For the following post, I need to make the distinction between two definitions of the word “accessibility”. I’ll be doing this with a little parenthetical after the word. Accessibility (Universal) will refer to the first definition I will give. Accessibility (Developer) will refer to the second definition. If it’s placed in quotes, I’m referring to the string “accessibility” itself.
Accessibility: The Word
“Accessible” is an adjective that I think most software engineers would be confident in describing. There would be variation in wording if you ran a poll, but I’m sure you’d see most definitions derive from something like the following:
Accessible (Universal): a thing that anyone is capable of using, regardless of physical or mental disability.
I think this captures the goals of interface design quite well, even if it’s rather lofty. It would actually be a good jumping-off point as one of a few guiding principles of accessible (Universal) software development, in the same vein as the Agile Manifesto’s principles function for building software projects. However, I don’t believe this is the most common understanding of “accessibility” when put into practice. When software developers are placed under constraints that require them to prove their work, their definition can shift subtly. My experience has shown it normally morphs into something resembling the following:
Accessible (Developer): a thing that conforms to visual restrictions necessary for the visually impaired, and features machine-readable information and keyboard interactions for use by the blind.
This is a narrow and, as far as I know, private, developers-only definition of “accessibility.” It first names two groups of users, which I think is an important mutation. The visually impaired and legally blind are fully distinct from the “normal” cohort of users in this definition, as compared to the universal definition, which spoke of “anyone”. This may initially seem prudent! Developers work with requirements, stories, and personas. These are the personas attached to our definition, which acts as a user story.
The two personas conceptualized by the accessibility (Developer) definition are extremely specific. Someone who is visually impaired uses an increased font size and requires contrast ratios to exceed a certain minimum in order to perceive the interface. Someone who is blind is using assistive software to read aloud or otherwise parse the HTML, and expects there to be as much information present in that space as possible. These sorts of mental images are pretty common in discussions about accessibility (Developer). While people who align with these personas exist, I don’t believe this concrete image of disability is constructive for creating interfaces.
Concrete examples of disability are inadequate personas.
You may hear people contextualize why accessibility (Developer) is important by invoking a disability name. An example I’ve heard before is “What if one of our customers is blind! They won’t know what this image without an alt tag is.” This is not an untrue statement. If machine-readable information isn’t provided to a user who uses a machine to read aloud the content of the screen, they will not be presented with any information about the image. A blind person is likely to use this sort of tool.
The only place where this well-intentioned thinking falls short for the blind user in question is the solution. The implication my imaginary strawperson developer is making here is that machine-readable information solves this issue; in this case, by adding an HTML `alt` attribute to an image tag. This is true in simple cases; however, the web applications I have worked on have yet to be simple. Does this blind user hear the text you’re including in the `alt` attribute in multiple other places? Are you making the interface itself more confusing? Do we imagine this `alt` attribute as part of a wider interface to support this medium, or simply as a way to “increase” accessibility (Developer)?
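To make this concrete, here is a minimal sketch (the product card and its contents are invented for illustration) of how a dutifully added `alt` can create noise rather than information:

```html
<!-- Hypothetical product card: the alt text duplicates the visible
     heading, so a screen reader announces "Blue widget" twice. -->
<article>
  <img src="blue-widget.jpg" alt="Blue widget">
  <h2>Blue widget</h2>
</article>

<!-- Since the heading already carries the information, one option is
     an empty alt, which tells assistive software to skip the image. -->
<article>
  <img src="blue-widget.jpg" alt="">
  <h2>Blue widget</h2>
</article>
```

Neither option is universally right; the point is that the attribute is part of a designed interface, not a checkbox.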
At many points in your life, for a temporary, indeterminate, or permanent amount of time, you will find yourself disabled. This can be due to injury or age, but situation is a much more common cause of disability. Someone wearing gloves in the cold could be thought of as temporarily disabled, as they have extremely low dexterity and sensation. Comparing this to our example of a blind user, you are both lacking a sense that we designed around the assumption you had. I use the word disability not to dilute that word of its emotional meaning, but to use it for its overly-literal meaning: unable to perform some specific action. You may not conceptualize disability at this level, but I think it’s worth conceptualizing “ability” without its associated lack-of-ailment for a moment.
How might we talk about what needs to be done to make an accessible (Universal) product, then? Imagine instead the range of options your users have to interact with your product. This, to me, is a better way to visualize our personas. **Rather than assigning these bundles of disabilities a name, we can instead target ways that people may interact with our product.** I like to think of these as layers on top of what you're building. The range of interaction options I was mentioning before can be conceptualized as the depth of these layers.
All of these interfaces exist on top of one another. By default, someone will be exposed to the top layer, the one we’re familiar with as “the interface”. This is normally designed by someone who considered the medium in which it’s being displayed. A website is expected to respond to touch, for example, as many devices that can use websites support touch. A designer will carefully consider the size of touch targets to make things easy to tap with a finger, and prefer swipe gestures over requiring accurate finger-pecking.
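As a rough sketch of that kind of consideration (the class names are invented, and the 44px figure loosely follows Apple’s Human Interface Guidelines suggestion of 44×44 points for touch targets):

```html
<!-- Keep tap targets comfortably large so fingers,
     not just mouse pointers, can hit them reliably. -->
<style>
  .toolbar button {
    min-width: 44px;
    min-height: 44px;
  }
</style>
<nav class="toolbar">
  <button>Back</button>
  <button>Share</button>
</nav>
```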
If an interaction is impossible or difficult on someone’s device, however, they instinctively look for the next layer. I call this a layer because it’s another designed interface, co-located with the first, using a different set of expectations about the interaction medium. A website is expected to be navigable by keyboard, for example, without requiring the use of another human interface device. That layer is dedicated to the keyboard. Our example user with cold hands in gloves will be able to use the Next/Tab key on their phone’s keyboard to jump to the next field they’re filling in. They aren’t staying in the second layer, but nothing is lost by switching to it. We have provided an alternative interaction medium which allowed them to use our product.
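Native form controls buy you most of this layer for free. A minimal sketch (field names invented):

```html
<!-- Real inputs inside a form are reachable with Tab, or the Next key
     on a mobile keyboard, with no extra work on our part. -->
<form>
  <label for="email">Email</label>
  <input id="email" type="email" autocomplete="email">

  <label for="phone">Phone</label>
  <input id="phone" type="tel" autocomplete="tel">

  <button type="submit">Continue</button>
</form>
```

A `<div>` wired up with click handlers provides none of this; the keyboard layer would have to be rebuilt by hand.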
Another layer, oft-misunderstood or mixed into the keyboard layer, is the machine-announced interface. Devtools in your browser of choice give you a peek into this layer, normally under an “Accessibility” tab. This is a separate interface of UI with descriptions, commonly announced by a text-to-speech program, but not always. Mixing this into the keyboard layer saddles the user with the responsibility to use quite complex software just to use a keyboard. We should aim to keep the definition of each interface layer as small as possible, so users can pick the one they can use most efficiently. A machine-announced interface layer should not require a touch screen or keyboard to be usable, for example. Should the user want to use any of these interface layers, they can, without losing function. A VoiceOver user will regularly make use of tab order in the keyboard interface layer, for example.
You might have other layers depending on your business. An example might be restaurants that have a kitchen display system. These normally must work with bumpbars, a kind of hotkey macro pad. The important thing to conceptualize here is that you’re designing multiple co-located interfaces, not one interface with accessible (Developer) sprinkles added that free you of any responsibility.
There is a Human
If our top layer interface (the one your designer considered every pixel, swipe, and click for) used a different visual style of text box for every single input in the app, I think most people would agree that it’s difficult to use. You might have gotten into this situation because each input was subtly different: one is for an email, another is for a password, another is for a name which must be at least one character long, and the final one is used to feign some simulacrum of signing your name for an antiquated legal process. Just because they are different doesn’t mean their interaction is different; they all require me to type something. This is a distinction you already innately understand, and you probably find this paragraph tedious to read through.
You might not feel the same about this same obvious issue in other interface layers you don’t use. How many forms have you seen online with an asterisk to mark a required field? This is a simple example, but it creates a bizarre world to navigate at the machine-announced layer. Imagine hearing an input described as “First name star, edit text”, as it would be by VoiceOver, Apple’s very popular screen reader. Even if you include a `required` attribute (surely accruing accessibility (Developer) cred in your PR), this will still be “First name star required, edit text” to your users making use of this layer. I can’t easily recreate this experience for a user who does not use assistive software, but let me give you an example of what this sloppiness looks like in our top layer:
> star required
The medium we’re emulating can only communicate one thing at a time, so it’s on the user to hold onto context as it goes past. What is star? Is it required? Is star an entity I have to provide in this form? This would be considered sloppy design in the visual realm (a unique splatter of colour on some input for some cryptic reason would be a better analogy, one that doesn’t rely on me trying to recreate the experience here), but when discussing a machine-announced interface we assume “they will get it.” You may have even had that thought while reading this example. People who use these tools have come across this common quirk before, so surely they must have picked up on it by now. And that’s probably true! But I think we should aim to craft interfaces to the same level of quality, no matter which interaction layer a user is on.
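For reference, here is a minimal sketch of the kind of markup that produces this experience (exact announcements vary by screen reader and browser version):

```html
<!-- The visible asterisk is read aloud as "star", layered on top of
     the "required" state the input already announces. -->
<label for="first-name">First name *</label>
<input id="first-name" type="text" required>
```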
These configurations, and the resulting information communicated to your users, are an interface. Many people experience their online world either partially or fully in this layer. There is a high chance that you will have to as well, at least once in your life. Consider the fact that a human is always present, no matter the medium. Clear communication and sensible labels will go a long way towards removing barriers, because the human being can make inferences. Adding more noise for the machine to read is not always the correct choice.
The unexplained “star” is an example of a fix you could make by removing something from the machine-announced layer. `role=presentation` on the asterisk, paired with the `required` attribute on the input, is worth a try in VoiceOver to see the difference it can make. Consider designing this interface layer so it makes sense, not simply assuming maximum machine information will tend towards sense.
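A sketch of that suggestion (announcements vary across screen readers; if the star is still read aloud, `aria-hidden="true"` on the span is a common alternative that removes the text from the accessibility tree entirely):

```html
<!-- Mark the decorative asterisk as presentational and let the
     required attribute carry the semantics. -->
<label for="first-name">
  First name <span role="presentation">*</span>
</label>
<input id="first-name" type="text" required>
```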
Curb Cut Effect
A colour contrast minimum rule is a favourite of the automated accessibility (Developer) test crowd. It’s not without its merit, either! The famous WebAIM contrast checker very easily visualizes how slight changes in colour can render UI basically incomprehensible. The issue is not with the rule here at all. It’s actually to do with the narrow scope in which it’s applied.
As with the other interface layers I gave examples of, the mouse or touchscreen interface layer needs consideration for its own medium. Information is delivered to users two-dimensionally, out of a screen with some limited range of brightness, colour, and size. Ensuring contrast is perceivable is important, and it benefits anyone using a screen. You’ve probably come across interfaces with transparent, unreadable text boxes, or featureless square buttons that blend into the background, and they frustrated you. If you haven’t, congratulations! But I can show you a multitude of Hacker News posts with examples of valid complaints from many other people. Strict application of a contrast minimum would’ve at least forced someone to consider why they chose the frustrating option.
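To put rough numbers on it (class names invented; ratios approximate): against white, #999999 text sits around 2.8:1, failing WCAG’s 4.5:1 minimum for normal text, while #767676 lands at roughly 4.5:1.

```html
<style>
  .too-faint   { color: #999999; background: #ffffff; } /* ~2.8:1, fails AA */
  .just-passes { color: #767676; background: #ffffff; } /* ~4.5:1, passes AA */
</style>
<p class="too-faint">This caption is easy to lose on a bright screen.</p>
<p class="just-passes">This one clears the WCAG AA minimum for normal text.</p>
```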
And yet, I still see readability and contrast being relegated to this accessibility (Developer) “best effort” group of tasks, bemoaned by some minority of designers as an attack on creative freedom at the behest of the visually impaired. But if we strip away this weird private definition of the word “accessibility” and try to find a new explanation of why this matters, I think we would come to the conclusion that it’s simply good user experience design. The end result is a better, easier-to-use product.
The prime example of this effect is the curb cut: an angled slice made into the curb on the side of a road, to make access onto and off of the road possible by rolling instead of stepping. In your mind, you’re imagining wheelchairs. Someone who cannot walk onto the curb must be physically disabled, and therefore uses a wheelchair. This is so cemented in our collective consciousness that we use a symbol of a person in a wheelchair to represent the concept of “disability”. A parking spot with that symbol displayed won’t be far from a curb cut.
However, this 1:1 reasoning is clearly not the full story. If you’ve ever had to use a mover’s dolly, you’ll know how useful these curb cuts are. The same goes for someone on a bike, who generally values their pelvis not being blasted into shards going over a curb on thin road bike tires. The built environment was improved for everyone by adding an interface option (rolling) that we understood as being for only one specific kind of person.
Considering My Interfaces While Avoiding the Allure of Min/Maxing My Accessibility (Developer) Score
We developers are an interesting bunch. We take large problems, break them down into small, describable mini-problems that we can then automate, and provide an abstraction to make our solution easy to use. While this process does work well for many problems, it’s not universal. Sometimes large “bird’s eye” solutions are necessary. Designers and artists are very familiar with this concept; holding steadfast to strict rules won’t always create a cohesive work, or even a work that’s fit for purpose. Instead, stepping back and considering the way someone uses the thing you’re making gives you a direction towards the impact and outcome you want to have, rather than trying to take the shortcut of following tips and tricks to save yourself the effort of exploration.
This doesn’t mean standards for interface design are bad, far from it actually, but I don’t think we can assume that following an accessibility (Developer) “best practice” standard means the result is accessible (Universal). That requires a promise of respect for your users: not pity for some list of disabilities that someone else predetermined as the “important ones”, but respect for your users to use tools how they see fit. Boiling this goal down to “it passes the automated a11y test” doesn’t hold up our side of this promise often enough.