Why is it important
In the previous chapter, we covered the role of automated accessibility tools and went over their shortcomings. To complement the findings from automated tools, we need to perform manual testing. This way, we gain a fuller picture of the state of accessibility on a project.
Through manual testing, we bridge the gap left by automated tools and fill in their blind spots by testing real-world scenarios and user flows. Only with manual testing can we determine whether user flows are understandable and logical to all of our users. If automated testing ensures technical conformance, manual testing ensures a usable human experience. This way, the documentation that results from the audit is more complete. Additionally, we (or the clients) can prioritise issues based on user impact, not only on rule violations.
In addition to what we cover in this chapter, you can check out the web manual testing chapter in the QA part of this handbook.
What to cover
From a technical perspective, manual testing can be split into keyboard navigation and screen reader tests. That doesn’t necessarily mean you need to strictly separate them into these two categories; find your own way of incorporating both into your workflow. Each part will help you uncover issues or validate correct behaviour.
From a content perspective, we should cover a wide range of topics. Manual testing becomes easier and more effective as the tester gains experience. Once you are more familiar with the WCAG criteria and with what matters for accessibility, you will have a better sense of what to look for. In general, though, pay attention to the following topics.
Keyboard accessibility and focus states
We should ensure that the entire page, with all of its possible flows and interactive elements, is accessible without using a mouse. This covers not only links, buttons, and inputs, but also menus, dropdowns, expandable fields, and other interactive elements.
We also need to make sure that the tab order of focusable elements is logical and intuitive. In principle, the tab order should go from top to bottom and left to right (in languages that follow this general direction), and it should follow the intended task flow. There should be no sudden jumps between unrelated sections of the page, and focus should never land on an invisible element. This also applies to modals and dialogs, where focus should stay trapped within the element until it is closed. On closing, focus should return to the element that triggered the modal or, if the modal was not triggered by a user interaction, to the user’s previous location. We also need to make sure that non-interactive elements are not part of the tab order, so that keyboard navigation is not bloated with unnecessary stops.
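As a reference for what correct behaviour can look like under the hood, here is a minimal TypeScript sketch of a modal focus trap. The openModal helper, the FOCUSABLE selector, and the cleanup callback are hypothetical names used only to illustrate the pattern of trapping focus and returning it to the trigger.

```typescript
// Minimal sketch of a focus trap (openModal and FOCUSABLE are hypothetical names).
const FOCUSABLE =
  'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])';

function openModal(modal: HTMLElement, trigger: HTMLElement): () => void {
  const focusable = Array.from(modal.querySelectorAll<HTMLElement>(FOCUSABLE));
  focusable[0]?.focus();

  const onKeydown = (event: KeyboardEvent) => {
    if (event.key !== 'Tab' || focusable.length === 0) return;
    const first = focusable[0];
    const last = focusable[focusable.length - 1];

    // Wrap focus so Tab and Shift + Tab never leave the modal while it is open.
    if (event.shiftKey && document.activeElement === first) {
      event.preventDefault();
      last.focus();
    } else if (!event.shiftKey && document.activeElement === last) {
      event.preventDefault();
      first.focus();
    }
  };

  modal.addEventListener('keydown', onKeydown);

  // The returned cleanup runs when the modal closes and sends focus
  // back to the element that opened it.
  return () => {
    modal.removeEventListener('keydown', onKeydown);
    trigger.focus();
  };
}
```

A real implementation would also need to handle the Escape key, content that changes while the modal is open, and the case where the modal was not opened by a user interaction.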
An important part of this is also making sure that block bypass is implemented correctly. This allows the user to skip repeated blocks (such as the header and navigation) through a “Skip to main content” link.
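A minimal sketch of such a skip link, assuming a hypothetical #main-content landmark, could look like this:

```typescript
// Hypothetical sketch of a "Skip to main content" link placed as the
// first focusable element on the page (#main-content is an assumption).
document.body.insertAdjacentHTML(
  'afterbegin',
  '<a class="skip-link" href="#main-content">Skip to main content</a>'
);

// Focusing the target explicitly makes the behaviour consistent,
// since same-page links do not always move keyboard focus reliably.
document.querySelector<HTMLAnchorElement>('.skip-link')?.addEventListener('click', () => {
  const main = document.querySelector<HTMLElement>('#main-content');
  main?.setAttribute('tabindex', '-1');
  main?.focus();
});
```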
Here we use:
- Tab (moving the focus forward)
- Shift + Tab (moving the focus backwards)
- Enter / Space (confirm, submit, “click” actions)
- Arrow keys (moving through dropdown options, tabs)
Screen reader navigation
This point is closely coupled with the previous one, but there are some additional things to pay attention to.
We need to check that the semantic structure of the page is set up correctly. This means paying attention to heading elements and making sure they are used correctly, and not simply to apply a certain style to text. We check form fields and make sure that the labels are correctly associated with them. Buttons and links should all have descriptive accessible names. Pay special attention to icon-only buttons, as they have no visible text to fall back on.
Here we can also check ARIA landmarks to make sure that the page is structured correctly.
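As an aid during testing, a small snippet like the hypothetical sketch below can be run in the browser’s developer tools to get a quick overview of the heading outline, the landmarks, and any buttons that appear to have no accessible name:

```typescript
// Hypothetical console helper for a quick semantic-structure overview.

// 1. Heading outline: levels should not skip and should reflect the content hierarchy.
document.querySelectorAll<HTMLElement>('h1, h2, h3, h4, h5, h6').forEach((heading) => {
  console.log(`${heading.tagName}: ${heading.textContent?.trim()}`);
});

// 2. Landmarks: header, nav, main, aside, footer and explicit ARIA roles.
document
  .querySelectorAll<HTMLElement>('header, nav, main, aside, footer, [role]')
  .forEach((landmark) => {
    const role = landmark.getAttribute('role') ?? '(implicit)';
    console.log(`landmark: <${landmark.tagName.toLowerCase()}> role=${role}`);
  });

// 3. Buttons without an obvious accessible name (e.g. icon-only buttons).
document.querySelectorAll<HTMLButtonElement>('button').forEach((button) => {
  const name =
    button.textContent?.trim() ||
    button.getAttribute('aria-label') ||
    button.getAttribute('aria-labelledby') ||
    button.getAttribute('title');
  if (!name) {
    console.warn('Button with no apparent accessible name:', button);
  }
});
```

This does not replace listening to the page with a screen reader, but it helps spot obvious structural problems before you start.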
Forms
Forms are a large part of user interaction and have their own set of accessibility rules. We need to make sure that users can navigate through forms with ease. They have to be aware of what is expected of them and should be clearly notified about errors when submitting the form.
Each field needs a visible label that is also programmatically associated with it (either through correct semantic structure, such as a label element, or through aria-label, for instance).
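A minimal sketch of the two common association patterns, using hypothetical ids, names, and form selector, could look like this:

```typescript
// Sketch of two common ways to associate a label with a field
// (ids, names, and the form selector are hypothetical).

// Option 1: a visible <label> element linked to the field via for/id.
const emailLabel = document.createElement('label');
emailLabel.htmlFor = 'email';
emailLabel.textContent = 'Email address';

const emailInput = document.createElement('input');
emailInput.type = 'email';
emailInput.id = 'email';

// Option 2: aria-label, for cases where the visible label is provided
// by surrounding context (for example, a search field next to a "Search" button).
const searchInput = document.createElement('input');
searchInput.type = 'search';
searchInput.setAttribute('aria-label', 'Search articles');

document.querySelector('form')?.append(emailLabel, emailInput, searchInput);
```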
Instructions on what is expected from the user for a specific field need to be positioned before the field. If that field is mandatory, it should be clearly labelled as such.
Any validations or error messages should be specific, helpful, and announced to screen readers.
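One possible way to wire this up, sketched with a hypothetical showFieldError helper, looks like this:

```typescript
// Sketch of an error message that is both tied to the field and announced
// (the showFieldError helper, field id, and message text are hypothetical).
function showFieldError(field: HTMLInputElement, message: string): void {
  const errorId = `${field.id}-error`;

  let error = document.getElementById(errorId);
  if (!error) {
    error = document.createElement('p');
    error.id = errorId;
    // role="alert" makes most screen readers announce the message when it appears.
    error.setAttribute('role', 'alert');
    field.insertAdjacentElement('afterend', error);
  }

  error.textContent = message;

  // Mark the field invalid and point assistive technology at the message.
  field.setAttribute('aria-invalid', 'true');
  field.setAttribute('aria-describedby', errorId);
}
```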
Dynamic Content
We need to identify the flows that trigger any dynamic content, such as modals, dialogs, or other pop-up elements. Their appearance needs to be announced by screen readers, and ARIA live regions should be used correctly.
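As an illustration, a minimal sketch of a polite live region, with a hypothetical announce helper, could look like this:

```typescript
// Minimal sketch of a polite live region for announcing dynamic updates
// (the live region element and announce() helper are hypothetical).
const liveRegion = document.createElement('div');
liveRegion.setAttribute('aria-live', 'polite'); // read when the user is idle
liveRegion.setAttribute('aria-atomic', 'true'); // read the whole message, not just the change
document.body.appendChild(liveRegion);

function announce(message: string): void {
  // Updating the text content of an existing live region is what
  // triggers the screen reader announcement.
  liveRegion.textContent = message;
}

// Example: announcing that new results were loaded after a filter change.
announce('12 results loaded');
```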
Another important aspect of dynamic content is reliability and a stable user experience. The page should not behave unexpectedly or erratically. Showing and hiding content should be kept to a minimum, and it should always happen in an expected way that is correctly announced to assistive technology.
We often overlook tooltips, which should be accessible as well (through the elements that trigger them).
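One possible pattern, sketched with hypothetical #info-button and #info-tooltip elements, associates the tooltip with its trigger and shows it on keyboard focus as well as hover:

```typescript
// Sketch of a tooltip associated with its trigger (ids are hypothetical).
const trigger = document.querySelector<HTMLButtonElement>('#info-button');
const tooltip = document.querySelector<HTMLElement>('#info-tooltip');

if (trigger && tooltip) {
  tooltip.setAttribute('role', 'tooltip');
  tooltip.hidden = true;

  // aria-describedby lets screen readers read the tooltip text
  // together with the trigger's accessible name.
  trigger.setAttribute('aria-describedby', 'info-tooltip');

  const show = () => { tooltip.hidden = false; };
  const hide = () => { tooltip.hidden = true; };

  // Show on keyboard focus as well as on hover, and hide on blur / mouse leave.
  trigger.addEventListener('focus', show);
  trigger.addEventListener('mouseenter', show);
  trigger.addEventListener('blur', hide);
  trigger.addEventListener('mouseleave', hide);
}
```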
Media Accessibility
If there are audio or video recordings on the page, we must make sure that they are accessible. We need to check that videos have captions and that audio recordings come with transcripts. Media players have to support keyboard controls that allow the user to navigate through the recordings: pause, rewind, skip forward, switch recordings, mute, change the volume, enter and exit full screen, and so on.
Responsive design
We often focus solely on analysing the “normal” state of the page. We must remember to also check its responsiveness, which is one of the core parts of accessibility. The page and its layout should remain usable and understandable at different zoom levels and screen sizes.
Validating findings from automated tests
If we haven’t done so previously, we should validate some parts of the automated test results. Here we should focus on the alt attributes of images and make sure not only that they are present, but also that they are meaningful and well written.
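A small snippet like the hypothetical sketch below can speed up this review by flagging images whose alt text is missing or looks suspicious:

```typescript
// Hypothetical console helper that flags images worth reviewing manually.
document.querySelectorAll<HTMLImageElement>('img').forEach((img) => {
  const alt = img.getAttribute('alt');

  if (alt === null) {
    console.warn('Missing alt attribute:', img.src);
  } else if (alt.trim() === '') {
    // Empty alt is valid, but only for purely decorative images.
    console.info('Decorative (empty alt), confirm this is intentional:', img.src);
  } else if (/\.(png|jpe?g|gif|svg|webp)$/i.test(alt)) {
    // File names are rarely a meaningful description.
    console.warn('Alt text looks like a file name:', img.src, alt);
  }
});
```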
We should also pay attention to colour contrast. There is no need to manually check each element; pay attention during general interaction with the page and note any parts that could cause problems. Otherwise, we can rely on the automated tools for this check.
If needed, you can use WebAIM Contrast Checker to check the contrast ratios. If you are interested in simulating various types of colour blindness, you can check out the Color Oracle software.
A note on screen readers (VoiceOver)
Using screen readers is a skill that we hone through use. It is normal to struggle with it at first, as even people with disabilities need time to learn how to use it effectively. We are not expected to be experts in VoiceOver, but we do have to understand the basic controls well enough to navigate web pages and test the main user flows.
This is one of the many reasons why including people with disabilities, who have first-hand knowledge and experience with assistive technology, is so important for understanding and fully testing product accessibility.
If you haven’t had the opportunity to try out screen readers, we recommend first going through the VoiceOver (Apple) tutorial. For more information, you can consult the official VoiceOver documentation.
The built-in VoiceOver tutorial can be accessed by:
- choosing Apple menu > System Settings > Accessibility (in the sidebar) > VoiceOver, then selecting Open VoiceOver Tutorial.
If you want more help learning how to work with VoiceOver, you can check the following tutorials on YouTube:
- How To Use VoiceOver on Mac (Screen Reader Tutorial) Pt. 1 by Unsightly Opinions
- VoiceOver Screen Reader Tutorial Part 2: File Management and Organization by Unsightly Opinions
- Basic Navigation Using Voice Over - the iOS Screen Reader by Life After Sight Loss
A note on writing audit documentation
Manual testing often involves user flows, which can make describing an issue difficult. Our current way of working is to record short videos of the flow where issues were detected. This way, the description in the documentation is easier to understand, and our clients have enough context to grasp the reported issue. Remember to establish a unified file naming convention for the audit project; it will make referencing the videos in the documentation more consistent and easier to follow.
If an issue, or the user flow it appears in, is not particularly complex, a screenshot can suffice.
When describing an issue, list the steps in the flow needed to reproduce it, and explain what the desired behaviour should be. Referencing specific WCAG criteria or other documentation is good to have, but not mandatory.