The first week

Here's what we're doing today:

  1. Going over the syllabus
  2. Theory
    1. Accessibility terms
    2. Usability terms
  3. Praxis
    1. Heuristic evaluation (usability)
    2. User task scenario (usability)
    3. Static HTML accessibility

Syllabus

Let's take a look at the syllabus =>

Hi! Let's talk about things.

What better place to start in any discipline than knowing the jargon? It makes googling your problems so much easier! Let's jump into accessibility and usability by going over some of the common terms for the fields' general concepts.

accessibility (the idea)

Lowercase 'a'. Accessibility as a broad term means recognizing what kind of barriers people might have to using your product or service and doing what you can to remove those barriers where possible, or, if it's not possible, providing equivalent alternatives.

I want to emphasize that accessibility is not something that you should expect to 'finish'. It's not that kind of thing. It is a way of thinking; it is a set of questions you should always be asking. There are 7 billion people out there, and it is not possible to make something perfectly accessible. That's not your responsibility. What is your responsibility, I think, is to always look for opportunities to make things better, to be humble and recognize that people might have barriers that you've never thought of, and to be open to criticism and willing to take it as an opportunity to improve. That's not an easy thing to do - accepting that the things you make will always be imperfect, and that you won't always have the resources to do right by everyone. But try your best, and, odds are, that means you are a good person.

  • What kind of barriers does your product or service have?
  • Can you remove them?
  • Failing that, can you provide ways around them?

Accessibility (the law)

...with a capital 'A' takes all of the guesswork out of it. Capital A accessibility means complying with the legal guidelines set out by the government for making your product or service usable by people with broadly defined physical challenges. So, we often talk about visual impairments, because we're working in a primarily visual medium. Hopefully your website doesn't have a lot of audio cues, but it's common to work with video that has an audio component, so we also talk about people with hearing issues. And, of course, our products and services generally have some kind of physical interface, so we talk about mobility issues. These are the big three categories of barrier that are addressed by capital 'A' accessibility.

  • This is the law.
  • (This is also what gets you hired!)
  • Primarily addresses visual, hearing and mobility impairments.

AODA

These guidelines are codified in the AODA (Accessibility for Ontarians with Disabilities Act) and its regulations.

Link: https://www.ontario.ca/laws/regulation/110191#BK14

(4) Designated public sector organizations and large organizations for their internet websites shall meet the requirements of this section

2. By January 1, 2021, all internet websites and web content must conform with WCAG 2.0 Level AA, other than,

i. success criteria 1.2.4 Captions (Live), and

ii. success criteria 1.2.5 Audio Descriptions (Pre-recorded). O. Reg. 191/11, s. 14 (4).

WCAG

Which brings us to WCAG - the Web Content Accessibility Guidelines. These are the recommendations written by the World Wide Web Consortium (W3C), the organization that creates standards around things like HTML and CSS. The vast majority of capital A accessibility regulations (like the AODA) defer to these specifications and their conformance levels (A, AA, and AAA).

WAI-ARIA

is also produced by the W3C, and stands for Web Accessibility Initiative - Accessible Rich Internet Applications. It is a set of specifications for techniques to make dynamic content accessible (which helps it comply with the WCAG). It's pretty simple - it's essentially just a specification for a set of attributes in your markup language. The purpose is to fill in the visibility gap created when a scripting language turns static markup into dynamic content: assistive technology can't tell what a script has changed unless the markup says so.
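
For example, here's a minimal sketch of a hypothetical expand/collapse widget (the ids and text are invented for illustration): the ARIA attributes tell assistive technology what the button controls, whether it's currently open, and where to announce updates that a script makes later.

  <!-- The button announces its state and what it controls. A script would
       toggle aria-expanded and the hidden attribute together. -->
  <button aria-expanded="false" aria-controls="details-panel">
    Show details
  </button>

  <div id="details-panel" hidden>
    <p>Extra content revealed by the script.</p>
  </div>

  <!-- A live region: when a script writes a message into it after the page
       has loaded, screen readers announce the new text. -->
  <div id="status-message" aria-live="polite"></div>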

Usability

Government of Canada Standard on Web Usability

The extent to which specified users can find, understand and use information and services online. Web usability can be measured through the effectiveness and efficiency with which users can complete defined tasks online.

Let's differentiate usability from some similar (and sometimes overlapping) terms.

Usability, as we said, is how effectively and efficiently a person can use a product to get something done - it's the part you can measure.

User experience is how people felt about their experience of using the product.

Customer experience is how people felt about the company that the product represents.

The user interface is the mechanism by which the person uses the product.

Human-Computer Interaction is the study of the usability of user interfaces. It is an old (40+ years!) and primarily academic discipline that covers cognitive science, ergonomics, and all kinds of wonderful stuff.

Interaction design is the functional design of the interface, while UI design is the appearance of the interface. We're really starting to split hairs here. If you're a UI designer, interaction design is not a "not my job" type of thing.

Human-centred design and user-centred design are often used interchangeably, BUT (and, again, we're splitting hairs), human-centred design is a design process that takes the end user (i.e. humanity) into consideration at each stage of design, whereas UCD takes into consideration A SPECIFIC AUDIENCE.

So, if we were to apply these terms to, and I'm dating myself here, an iPod...

  • Usability === steps to listen to a song.
  • UX === people like using the iPod.
  • CX === people like Apple.
  • UI is both the hardware and software that provides the user with input or output (as opposed to, say, the internal database retrieval software, or internal hardware).
  • HCI studies the totality of the interactions, including the charging port.
  • IxD === how the wheel affects the display.
  • UI design === what the wheel looks like, what font the display uses.
  • HCD means, from the beginning of the design process, someone was thinking about what it would be like to hold in your hand.
  • UCD means, from the beginning of the design process, someone was thinking about the hands of people who would spend $399 on a portable music player.

Okay, so now that we know what usability is (and isn't), let's talk about how we test it.

Heuristics

For whatever reason, 'heuristic' is one of those words I have to look up every six months; I always forget the definition, probably because it's a little hard to define.

A heuristic is a good-enough solution, an approximation, a rule of thumb. In usability, a heuristic evaluation is basically a review of your interface against a checklist of good usability principles. You don't need to watch a live test subject to know that you need good error messages or system feedback - "visibility of system status", for example, is one of Nielsen's classic heuristics: does the interface tell people what's going on?

Personas

Personas are a tool for guiding a design. A persona is a fake person you use to represent a major user group for your product. Let's say you're a library, and you're creating a form to get people to sign up for an online account. You would create personas based on your current users, plus any group that you want to target as a user (maybe you've got an ad campaign to drive new users to the website, for example).

Based on the information you have, you might create personas for an elderly person, a parent with children, and a student.

Personas generally include the following key pieces of information:

  • Persona Group (i.e. web manager)
  • Fictional name
  • Job titles and major responsibilities
  • Demographics such as age, education, ethnicity, and family status
  • The goals and tasks they are trying to complete using the site
  • Their physical, social, and technological environment
  • A quote that sums up what matters most to the persona as it relates to your site
  • Casual pictures representing that user group


Source: https://www.usability.gov/how-to-and-tools/methods/personas.html

The purpose of making up a fake person, rather than simply pointing to your analytics data, is to facilitate design discussions (particularly with people who don't deal with analytics on the regular).

Scenarios

Now that you've got an idea of who is using your site, you can go a step further and write down WHAT they're using the site for - these are called Scenarios. A Scenario is a set of tasks required to accomplish a user's goal. For our library example, a scenario might be: "You just moved to the neighbourhood and want to borrow e-books - sign up for an online account and place a hold on a book."

Further reading: https://www.nngroup.com/articles/task-scenarios-usability-testing/

So, with your personas to guide you, you've developed a prototype application. Time to see if your fake people translate to real people - it's time for usability testing! In usability tests, you take your scenarios and ask people to try to accomplish the goal of each scenario. Then, you spy on them like a total creep! But it's fine, because you have their consent and you're probably giving them $5 and some donuts. When you run your scenario, there are a number of ways that you can get data from the test.

Qualitative and quantitative data

Before we get into methods, let's define the two categories of data that we report: quantitative and qualitative, aka quant and qual.

Quantitative data is, naturally, data you can quantify. How long does it take to complete the task? How many errors does a user generate? And so on.

Qualitative data is observational, and, generally, subjective. Is the user struggling to find a UI element? Is there a part of the interface that they seem to enjoy fiddling with?

Quantitative research requires different conditions than qual, for a few reasons. First of all, to get good quantitative data, you need a large data set. You'll be running a lot of people through a well-defined and tightly controlled task set. Qualitative data, on the other hand, is gathered in a much looser and more adaptable way. Quantitative research is less common, but provides a business with some all-important numbers. Qualitative research is more subjective, as it's meant to be exploratory. Of course, when you're dealing with an organization, they'll still want numbers to supplement all your lovely written summaries and reports.

That's where PURE comes in - Pragmatic Usability Rating by Experts. This is just a rubric for your subjective findings, so that your hunches and intuition get turned into numbers. You split your scenario into tasks and your tasks into steps, and give each step a score out of three for how difficult it would be for your target user, 1 being easy, 3 being difficult.

Examples of PURE evaluations.
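
To make that concrete, here's a hypothetical scoring sheet for the library sign-up scenario from earlier (the tasks, steps, and numbers are all invented for illustration; one common way to roll PURE up into numbers is to sum the step scores into a task score):

  Task 1: Find the sign-up form
    Step 1: Spot "Get a library card" in the navigation ... 1
    Step 2: Pick "Online account" from the options ........ 2
    Task score: 3

  Task 2: Fill out the form
    Step 1: Enter your name and address .................... 1
    Step 2: Create a password that meets the rules ......... 3
    Task score: 4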

Now, there's a whole variety of types of user tests you can do, and if you're a usability expert or UX researcher, part of your job is having your own catalog of tests and deciding which ones are appropriate.

That's all the definitions!

Now for the fun stuff.

  • Get into groups, and evaluate a site based on https://www.nngroup.com/articles/ten-usability-heuristics/
  • Review https://www.nngroup.com/articles/task-scenarios-usability-testing/ and brainstorm task scenarios for a website (e.g. Amazon)

Accessibility basics

See also: https://developer.mozilla.org/en-US/docs/Learn/Accessibility/HTML (there's a short markup sketch after the list below)

  • Semantics: Can you use a button?
  • Content: Can the button describe its function?
  • Focus: If it's not a button, can you trigger it anyway?
  • Tables: scope and colspan
  • Inputs: label for id.
  • Images: alt text
  • Source order: JavaScript, but also CSS (e.g. flexbox order)
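
To ground those bullet points, here's a minimal markup sketch (the names, file paths, and text are made up for illustration - see the MDN link above for the full treatment):

  <!-- Semantics and content: a real button is keyboard-focusable, announced as
       a button, and its text says what it does. -->
  <button type="button">Renew my books</button>

  <!-- Focus: if it can't be a <button>, it needs a role and a tabindex so
       keyboard users can reach it (plus a script to handle Enter and Space). -->
  <div role="button" tabindex="0">Renew my books</div>

  <!-- Inputs: the label's "for" matches the input's "id", so clicking the label
       focuses the field and screen readers read the two together. -->
  <label for="email">Email address</label>
  <input type="email" id="email" name="email">

  <!-- Images: alt text describes the image's function, not just its appearance. -->
  <img src="logo.png" alt="Springfield Public Library home">

  <!-- Tables: scope tells assistive technology which cells a header describes. -->
  <table>
    <tr>
      <th scope="col">Branch</th>
      <th scope="col">Hours</th>
    </tr>
    <tr>
      <td>Main</td>
      <td>9 to 5</td>
    </tr>
  </table>

  <!-- Source order: CSS can reorder things visually, but screen readers read
       things in markup order, and keyboard focus follows markup order too. -->
  <ul style="display: flex">
    <li style="order: 2">First in the markup, second on screen</li>
    <li style="order: 1">Second in the markup, first on screen</li>
  </ul>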

Thank you!