Criteria | Presentation | Written |
---|---|---|
Speaking to the goals | 5 | 10 |
Talking about the methods | 5 | 10 |
Walking through the results | 10 | 5 |
Strong conclusions | 5 | 10 |
Reasonable recommendations | 10 | 5 |
Documentation | N/A | 20 |
Responding to questions | 10 | N/A |
Total | 40% | 60% |
But first - something I was reminded of.
I was reading some writings by people with disabilities, and a point that kept coming up was this - no one is disabled.
Which is to say, there is nothing about any one person that creates disability. Disability is created when society makes an assumption, and then hardcodes that into design.
It's in the way we design our buildings and our cities, our school system, our language, and more.
I think this is important, because you need to understand that when you make something, you are at risk of manufacturing disability.
You engaged with the reading assignments more than I expected. More than one of you said you would have liked more in-class discussion.
Accessibility is both a philosophy and a legal requirement.
Philosophy: removing barriers and creating equity.
Law: The Accessibility for Ontarians with Disabilities Act (AODA), in keeping with the Canadian Charter of Rights and Freedoms, mandates a level of WCAG compliance.
WCAG compliance mandates semantics, content, proper source order, text alternatives to visual information, accommodations for colour-blindness and low visual acuity, keyboard-only functionality, and the use of the WAI-ARIA specification.
Semantics in HTML refers to using native elements and attributes for their defined purpose.
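As a sketch, using native elements for their defined purpose might look like this (the page content here is invented for illustration):

```html
<!-- Semantic elements describe their own purpose; no extra ARIA is needed. -->
<header>
  <nav aria-label="Main">
    <ul>
      <li><a href="/courses">Courses</a></li>
      <li><a href="/grades">Grades</a></li>
    </ul>
  </nav>
</header>
<main>
  <h1>Assignment Rubric</h1>
  <!-- A native button is focusable and keyboard-operable by default,
       unlike a <div> with a click handler attached. -->
  <button type="submit">Submit assignment</button>
</main>
```

Each of these elements also announces itself to assistive technology: `nav`, `main` and `button` carry built-in roles that a generic `div` does not.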
A correct source order means that the visual flow of the document matches the source code.
Text alternatives are provided when information is presented visually - this includes images, graphs, charts and tables, but also cases where elements have implicit functions based on the visual design, e.g. buttons within a form.
We can provide alternatives to this visual information with `alt` attributes for graphic content, `scope` and `caption` for tables, and `label`s or screen reader-specific content for inputs, buttons and links.
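A minimal sketch of those alternatives (the file names, figures, and field names are placeholders):

```html
<!-- alt conveys the image's information to non-visual users. -->
<img src="enrolment-chart.png"
     alt="Bar chart: enrolment grew from 120 to 210 students between 2019 and 2023.">

<!-- caption gives the table an accessible name; scope ties headers to columns. -->
<table>
  <caption>Enrolment by year</caption>
  <tr><th scope="col">Year</th><th scope="col">Students</th></tr>
  <tr><td>2019</td><td>120</td></tr>
  <tr><td>2023</td><td>210</td></tr>
</table>

<!-- label associates visible text with the input it describes. -->
<label for="student-email">Email address</label>
<input type="email" id="student-email" name="email">
```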
We must also keep in mind that our elements are often used as navigational landmarks for people using non-visual clients, including headers for document structure.
Accessible content means...
Because not all visual impairments require the use of a screenreader, we provide alternatives to colour cues for people with colour-blindness (e.g. focus outlines, proper labelling), and for people with reduced visual acuity we check our colour contrast ratios and test our websites at up to 200% zoom.
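As one sketch of what this looks like in practice - the selectors, colours and error copy below are placeholders, not a recommended palette:

```html
<style>
  /* A visible focus outline supports keyboard users and anyone who
     cannot rely on a subtle colour change alone. */
  a:focus, button:focus, input:focus {
    outline: 3px solid #1a1a1a;
    outline-offset: 2px;
  }
  /* Errors are marked with a border, an icon and text - not colour alone. */
  .error { border: 2px solid #b00020; }
</style>

<label for="dob">Date of birth</label>
<input id="dob" class="error" aria-describedby="dob-error">
<p id="dob-error">⚠ Error: please use the format YYYY-MM-DD.</p>
```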
For people with motor-impairment issues, we ensure that we do not disable the native functionality of our proper semantic HTML, and where native functionality is not available, we create keyboard events to supplement our click events.
We also maintain our tab order by having our source order properly represented.
On those rare occasions where our visual order and source order are in conflict, we can control focus with Javascript.
Where we want to make an element focusable that does not have native focusability, we can use the `tabindex="0"` attribute value.
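The keyboard supplements described above might be sketched like this. The widget is hypothetical - in real code we would reach for a native `button` first and only fall back to this pattern when we cannot:

```html
<!-- A custom toggle made focusable with tabindex="0". -->
<div id="toggle" tabindex="0" role="button" aria-pressed="false">Dark mode</div>
<script>
  const toggle = document.getElementById('toggle');

  function activate() {
    const pressed = toggle.getAttribute('aria-pressed') === 'true';
    toggle.setAttribute('aria-pressed', String(!pressed));
  }

  toggle.addEventListener('click', activate);

  // Supplement the click event with a keyboard event so keyboard-only
  // users can operate the widget, matching native button behaviour
  // (both Enter and Space activate it).
  toggle.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault(); // stop Space from scrolling the page
      activate();
    }
  });
</script>
```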
The WAI-ARIA specification allows us to supplement our HTML elements, widgets and document presentation by defining roles and properties for screenreader users.
This is especially important when updating content on the page without a location change or page refresh, which is a common practice when using modern front-end javascript frameworks, such as Angular, React, and Vue.
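For those in-page updates, a live region is one common pattern; this sketch uses invented content, and a framework would typically manage the DOM update itself:

```html
<!-- aria-live="polite" tells screenreaders to announce changes to this
     region when the user is idle, without moving focus. -->
<div id="status" role="status" aria-live="polite"></div>
<script>
  // After content updates without a page refresh, announce the result.
  document.getElementById('status').textContent = '3 new results loaded.';
</script>
```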
Of course we keep in mind that all these standards apply whether on a desktop or mobile device, with a few special considerations for mobile, including...
In order to enforce accessibility compliance and best practices, we need to help our teammates understand these techniques. We can encourage our team to use manual testing tools like the WAVE browser plugin, incorporate linting into our build process, and add auditing to our deployment process or site monitoring.
Additionally, because some accessibility techniques, especially ARIA, are context-dependent, automated enforcement can be a challenge. Manual code review is an essential part of making our whole team stronger and our product better.
Usability is not UAT, UX, UI, CX, HCI, IxD, or HCD (although they may overlap).
Usability is the extent to which specified users can find, understand and use information and services.
To start with a solid foundation of usability, we develop personas and scenarios.
To assess usability, we test.
In testing, we use a variety of methods, designed to reduce our biases, to collect data on how real people perceive and interact with the design and function of the products we make.
We do this iteratively throughout the design and production process (including after our product has launched), testing our ideas and improvements, and comparing solutions.
During the initial design phase, we develop personas, composite identities of our audience, in order to design our product for people to use.
Based on those personas, since websites tend to be non-linear tools, we develop scenarios. Scenarios help us keep track of the different journeys that people in our audience may take in using our product.
Personas and scenarios are tools for guiding design and implementation discussions, particularly with less technical team members.
Once we begin to test our design and development, we use tests to produce both qualitative and quantitative data.
Depending on what kind of data we are generating, there are industry standard sample sizes for each test. Qualitative testing, being more exploratory, tends to have smaller sample sizes.
To facilitate discussion and decision-making, we can apply quantitative metrics to qualitative data. One example of this is the PURE rubric, for assigning values to the ease with which a task is accomplished.
This should not overshadow the fact that the most valuable information gained in qualitative testing is usually individual, and best reported in the words of the participant or observer.
Before we test, we plan. After we test, we report.
A good test plan can be a proposal (when seeking approval) that provides a clear picture of the time and costs involved, and makes a case for the value of the data that will be generated.
A good test plan might also be, assuming that approvals and funding are secured, simple documentation for your team, in order to keep sight of the scope, definitions, and goals.
A good report is readable, credible, and makes clear recommendations for how to action the findings.
A friend of mine who does usability testing says that their team has a library of possible tests. They get hired to choose the right test.
Usability tests include, but are not limited to:
When performing a test, we try to capture data that is clean and unbiased. We can do this by using careful language, so as not to influence our test participants.
We also do this by having strong sample sizes, and by reducing the number of variables between our tests, e.g. by maintaining a consistent hardware and software environment.
Once our test data is collected, we format it, and discuss it within our team.
Collaborating on our recommendations reduces individual bias.
We then weigh our recommendations based on issue severity and the resources required to implement.
Whenever possible, we try to provide more than one option for each issue, so the client can decide what is best for them - without putting them in the position of rejecting our work.
A report should be easy to read and understand. Choosing the right format for presenting data is essential to interpreting it.
If you have been rigorous in your testing, then your report can be presented in very plain language.
Reporting these results is often done in-person. It's important that you understand everything that went into the test and the recommendations, and anticipate questions that the client may raise.