The second week

Here's what we're doing today

  1. Review
  2. Discuss reading
  3. Reviewing a test plan template
  4. Theory
    1. End user
    2. The open web
    3. Subjectivity
    4. Test script
  5. Praxis
    1. Reviewing Krug
    2. Auditing accessibility

Last Week

  1. Accessibility is a good idea, and also it's the law.
  2. Usability is not the same as a bunch of other stuff (but it's close)
  3. 'Heuristic evaluation' is just a fancy way of saying 'checklist of good ideas'
  4. Personas and Scenarios are tools for usability design and evaluation
  5. Two kinds of data: quantitative and qualitative
  6. Basic accessibility is basic

What did you think of the reading?

What goes into a test plan?

A test plan is what you submit to get approval for the test and/or buy-in from your team. You can always make adjustments to go above and beyond! Remember, qualitative data is exploratory.

A comprehensive template is available at https://www.usability.gov/how-to-and-tools/resources/templates/usability-test-plan-template.html

Test plan template - the basics

  1. Objective/Scope/Questions: what are you going to accomplish? What are you NOT going to accomplish? What are you trying to find out in order to accomplish your objective?
  2. Participants: who is going to take the test (and why have you selected them)?
  3. Methods: how are you going to generate data to answer your questions?
  4. Metrics: how are you going to measure your data?

Test plan template - not basic, but good to have

  1. Executive summary: the whole plan, but in a couple paragraphs.
  2. Schedule and location
  3. Equipment
  4. Scenarios
  5. Roles: who's doing what?

Let's break some of these down

Objective/Scope

You're probably not going to do a comprehensive usability test on a large website. You can test a set of features, or you can test for a set of issues. Define what these are and why you're addressing them.

Questions

Can users find X? Does A work better than B? Do users learn the interface in under Y amount of time?

Participants: Some magic numbers

Jakob Nielsen reports ROI thresholds for the number of participants in different usability testing methods:

  • Quantitative testing: >20 participants
  • Card sorting: >15 participants
  • Eyetracking: 39 participants
  • Heuristic evaluation: 4 experts
  • Qualitative testing: 5 participants
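
The low qualitative threshold falls out of Nielsen and Landauer's problem-discovery model: each additional user finds a shrinking share of new problems. A minimal sketch (the ~31% per-user detection rate is Nielsen's published average; your study's rate will vary):

```python
# Expected share of usability problems found by n test users, per
# Nielsen & Landauer's model: 1 - (1 - p)^n, where p is the probability
# that a single user encounters a given problem (~0.31 on average).
def problems_found(n, p=0.31):
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 15):
    print(f"{n:>2} users: {problems_found(n):.0%}")
```

Five users already uncover roughly 85% of problems, which is why small qualitative rounds are such good value.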

Methods

Define your procedure, testing type, tools, and tasks.

Testing type may be heuristics (don't forget to say which ones!), eyetracking, discovery (finding problems), benchmark (evaluating solutions), moderated, remote, system usability scale (survey), 5-second, First-click, etc., etc., etc.
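
Of these, the System Usability Scale has a fixed scoring rule worth knowing: ten statements rated 1–5, odd (positively worded) items scored as rating − 1, even (negatively worded) items as 5 − rating, and the raw sum scaled by 2.5 to give 0–100. A minimal sketch:

```python
def sus_score(ratings):
    """Score the 10-item System Usability Scale.

    ratings: ten 1-5 agreement ratings for the standard SUS statements,
    in order (odd items positively worded, even items negatively worded).
    """
    if len(ratings) != 10:
        raise ValueError("SUS needs exactly ten ratings")
    total = sum(
        (r - 1) if i % 2 == 1 else (5 - r)
        for i, r in enumerate(ratings, start=1)
    )
    return total * 2.5  # scales the 0-40 raw sum to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

Note that a SUS score is not a percentage: 68 is roughly average, not a failing grade.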

Metrics

What is the data you'll be collecting? How should it be measured? How can you be sure it fully answers your questions?

This is probably the most important part of your plan.

Metrics - examples

  • Successful Task Completion
  • Critical Errors
  • Non-Critical Errors
  • Error-Free Rate: the percentage of test participants who complete the task without any errors (critical or non-critical).
  • Time On Task
  • Subjective Measures: self-reported participant ratings for satisfaction, ease of use, ease of finding information, etc., where participants rate the measure on a 5- to 7-point Likert scale.
  • Likes, Dislikes and Recommendations: participants report what they liked most about the site, what they liked least, and their recommendations for improving it.

Source: https://www.usability.gov/how-to-and-tools/methods/planning-usability-testing.html
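
To make those metrics concrete, here's a sketch of computing completion rate, error-free rate, and mean time on task from per-participant records (the session data is invented for illustration):

```python
# Hypothetical per-participant records for one task (data is invented).
sessions = [
    {"completed": True,  "critical": 0, "noncritical": 0, "seconds": 48},
    {"completed": True,  "critical": 0, "noncritical": 2, "seconds": 61},
    {"completed": False, "critical": 1, "noncritical": 0, "seconds": 90},
    {"completed": True,  "critical": 0, "noncritical": 0, "seconds": 52},
]

n = len(sessions)
completion_rate = sum(s["completed"] for s in sessions) / n
error_free_rate = sum(
    s["critical"] == 0 and s["noncritical"] == 0 for s in sessions
) / n
mean_time_on_task = sum(s["seconds"] for s in sessions) / n

print(f"completed: {completion_rate:.0%}, error-free: {error_free_rate:.0%}, "
      f"mean time: {mean_time_on_task:.1f}s")
```

Even this toy data shows why you report several metrics together: 75% of participants finished, but only half finished cleanly.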

An activity!

Who is the user?

You are in the strange position of making things that strangers will use. Some of these strangers may take it apart and put it back together. Some of these strangers will be malicious, some will be children, some will be clueless. Some of these strangers won't be human.

SEO is just usability for bots. Design patterns, OOP and code comments are usability for other programmers (or yourself, later).

The internet is open

We work on an open platform. The beautiful thing about the internet is that it is a (nearly) global public space built on open access (e.g. net neutrality), open standards, and specifications. With that openness comes a responsibility: you are creating services meant to be consumed by clients that haven't even been invented yet.

You will work with (and by 'with', I mean 'for') people who do not understand the difference between accessing a PDF and opening a webpage.

An open-spec markup language is the backbone of the information age.

Firefox gets most of its money from...

Google

Wait, why?

Google's business model depends on the open web. It has made a multi-billion dollar bet that the best place to be is not in a "walled garden", but as the concierge to the innovations afforded by a democratized space.

You will need to advocate for answering hard questions

For those people who do not understand HTML vs PDF, you will need to impress on them that the internet is a place with a multitude of users, and that's not changing anytime soon. You will need to advocate for creating a service rather than an interactive paper flyer.

This distinction is important because it acknowledges your many types of end user - from the dev/content/QA team, to Google's web crawler, to hackers, to archive.org, to Pocket and Instapaper, to screen readers, to everybody.

Users at cross-purposes

I’m dyslexic, and one of the recommendations for reducing visual stress that I’ve found tremendously helpful is low contrast between text and background color. This, though, often means failing to meet accessibility requirements for people who are visually impaired... Consider:
  • Designing for one-handed mobile use raises problems because right-handedness is the default—but 10 percent of the population is left-handed.
  • Giving users a magnified detailed view on hover can create a mobile hover trap that obscures other content.
  • Links must use something other than color to denote their “linkyness.” Underlines are used most often and are easily understood, but they can interfere with descenders and make it harder for people to recognize word shapes.

Eleanor Ratliff, http://alistapart.com/article/accessibility-whack-a-mole

Your job is not to write code - it is to solve problems

All the easy code has been written already. Don't worry: there are, for the time being, still plenty of jobs writing boring code. But that is less true than it was ten years ago, and it will be less true in another ten. You need to think of yourself as someone who can solve the problem of 'How can we best make this work for everyone who needs it?'

Last week we contrasted accessibility with usability by saying that usability is accessibility for a set audience. But even accessibility legislation limits its audience by defining it. You should always be open to expanding your definition of the end user.

YouTube stumbled on an audience

A few years ago, the YouTube dev team gave themselves a page-weight budget of 800kb. They optimized every conceivable aspect of the site and predicted that they could get the average load time below 1s. When they released their new, lightning-fast code, their average load times tripled.

There were millions of users in the global south who suddenly were able to watch videos without the page timing out. The team had stumbled onto a massive audience they didn't know they had simply by taking best practices seriously.

Do what you can for those you're aware of, and keep looking for those you aren't.

You are biased (don't worry, everybody is)

You are not aware of everyone, and worse yet, you are not aware of yourself. You are not objective. That's ok.

In “Usability Problem Description and the Evaluator Effect in Usability Testing,” a study at Virginia Tech, Miranda G. Capra found that the evaluations of 44 usability practitioners reported problems that overlapped by only 22 percent.

Confirmation bias is likely going to be your biggest issue to watch out for, in yourself and other members of your team. It can occur in planning (how you phrase your questions), testing (how you interact with a subject), and analysis (how you interpret the data).

You are less biased when you work together

Capra found that '[a]dding a second evaluator results in a 30-43% increase in problem detection'.

Your assignments are not group work, but your results will be better if you trade responsibilities.

You are less biased when you stick to the script

A key reason we write our test plan is for transparency. We need to make our work reviewable so that our unconscious biases can be recognized. The more we can script, the better.

Steve Krug is one of the big names in usability, and he has given us a wonderful sample script to work from. Let's take a look at it: https://www.sensible.com/downloads/test-script.pdf.

Auditing accessibility

The W3C provides us with a wonderful list of accessibility auditing tools, in a wide variety of languages: https://www.w3.org/WAI/ER/tools/?q=command-line-tool.
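
As a taste of what these auditors do under the hood, here's a minimal sketch of one check—flagging <img> tags with no alt attribute (WCAG's text-alternatives requirement)—using only Python's standard library. Real tools run dozens of checks like this:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Toy auditor: count <img> tags that lack an alt attribute.

    An empty alt="" is allowed (it marks an image as decorative);
    a missing alt attribute is the accessibility problem.
    """
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

checker = AltChecker()
checker.feed('<p><img src="a.png" alt="logo"><img src="b.png"></p>')
print(checker.missing_alt)  # one image lacks alt text
```

The hard part of auditing isn't mechanics like this—it's the checks that need human judgment, like whether the alt text is actually meaningful.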

The most commonly used one is the WAVE browser extension.

Chrome also has an accessibility audit built in: open DevTools > Audits > Perform an audit, and deselect everything except 'Accessibility'.

Using either of these tools, run an audit and raise your hand when you find something you don't understand.