This document describes the Publications Office's approach to assessing the level of compliance of its websites and content with the Web Content Accessibility Guidelines (WCAG) recommendations. The analysis takes into consideration not only the user's ease of access to content, but also their level of success in accomplishing the tasks the websites were designed for.
The accessibility assessment is based on three main actions:
- evaluating the degree of accessibility of the main parts of the website and the different documents present therein, along with any difficulty experienced in accomplishing tasks on the website;
- detecting specific accessibility barriers;
- proposing concrete solutions and feasible actions.
It is essential to see accessibility as a process, not a fixed status. Accessibility should not be implemented once and then forgotten. As a website evolves over time and more and more documents are added, its level of accessibility often decreases. It is therefore important to set up control mechanisms that guarantee a minimum level of accessibility over time.
Please note that accessibility evaluation by means of user testing is outside the scope of the current procedure.
The procedure is aimed at four main target audiences:
- webmasters managing websites and intranets;
- web editors responsible for producing content;
- IT project managers in charge of controlling deliverables such as websites and online tools;
- publication production agents, who check the quality of the publications and other documents produced by service providers.
The outcome of the accessibility analysis and assessment is a report that summarises the process of auditing a website and/or the different types of documents published on it. This report is based on the World Wide Web Consortium (W3C) conformance evaluation method as described in its Evaluating Website Accessibility Overview.
The audit process provides for the evaluation of each website against the complete WCAG 2.1 list of success criteria, using the Web Accessibility in Mind (WebAIM) WCAG 2.0 checklist as a foundation.
The report highlights the level of conformance of each evaluated element and proposes the appropriate corrective measures where necessary. It lists accessibility failures, classified by their level of severity (blocking, critical, major, minor and trivial) and presents short-term, mid-term and long-term recommendations to improve the accessibility of the website and its documents.
The report defines the scope of the analysis (the entire website, a section, a website functionality or a type of content). It specifies the target conformance level for each element analysed (A, AA or AAA) and the context of its use (whether it is publicly available or in a closed environment).
The Publications Office follows the principles, recommendations and guidelines laid down by the European Commission for accessibility audits and evaluation tools.
The evaluation method proposed by the Publications Office is based on automatic and manual checks.
Automatic checks are executed by software, browser add-ons or online services to help determine if a website meets the accessibility requirements. Although they can significantly reduce the time and effort needed to evaluate websites, no tool can automatically determine whether a website or publication is fully accessible or fully compliant with the WCAG.
In some cases, automatic tools produce false or misleading results, such as failing to identify incorrect code or flagging correct code as an error. The results from these automatic evaluation tools should not be used to determine conformance levels unless they are operated by experienced evaluators who understand their capabilities and limitations.
The essential checks that should be performed using automatic evaluation tools are: code validation, checking broken links and colour contrast analysis (this should also be manually checked). See Section 2 ‘Audit Tools’ of the annex for the lists of tools proposed by the Publications Office to carry out these types of tests.
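One of these essential checks, colour contrast, follows a precise formula defined in WCAG 2.x: the relative luminance of each colour is derived from its sRGB channels, and the contrast ratio is (L1 + 0.05) / (L2 + 0.05), where L1 is the lighter luminance. The Python sketch below implements that formula; the function names are our own, not part of any tool listed in the annex.

```python
def relative_luminance(rgb):
    """Relative luminance per WCAG 2.x, from 8-bit sRGB channels."""
    def linearise(c):
        c = c / 255
        # Piecewise sRGB-to-linear transfer function from the WCAG definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (L1 + 0.05) / (L2 + 0.05), lighter colour first."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg, bg, large_text=False):
    """Level AA requires at least 4.5:1 for normal text and 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

For example, black text on a white background yields the maximum ratio of 21:1, while a mid-grey such as rgb(119, 119, 119) on white falls just below the 4.5:1 threshold for normal text at level AA.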
Manual checks (heuristic evaluation)
The majority of accessibility checks require human judgement and must be manually evaluated using different techniques. For complex websites a manual evaluation cannot be performed on all pages, and therefore proper sampling covering all types of pages, processes and functionalities is crucial.
It is important to define the success criteria in advance according to the target level and, when evaluating processes, to check the conformity of all the steps from the start to the completion of the task.
It should be possible to navigate the website and interact with the interface elements (links, buttons, fields for inputting text, etc.) by using a keyboard, without the help of a mouse. If certain elements are skipped during the tabbing or they do not receive focus in a logical order, the page should be corrected using proper markup (reviewing the underlying HTML structure, using Cascading Style Sheets (CSS) to control the visual presentation of the elements or adding ‘tabindex’ attributes and Accessible Rich Internet Applications (ARIA) roles where necessary).
Sighted keyboard users must be provided with a visual indicator of the element that currently has keyboard focus.
Tabbing through lengthy navigation may be particularly demanding for users with motor disabilities. Long lists of links or other navigable items may pose a burden for keyboard-only users.
A screen reader is a text-to-speech technology that converts the text displayed on a computer or mobile device's screen into synthesised speech. It enables people with visual impairments to access websites and other digital media using only a keyboard.
A few of the most popular screen readers are Job Access With Speech (JAWS), Non-Visual Desktop Access (NVDA), ZoomText, Narrator for Windows and VoiceOver for Apple products.
A screen reader reads content aloud at the user's request, pausing at punctuation such as full stops, commas and exclamation marks. The software also reads the titles of web pages and the alternative text of any non-textual elements on the page. The user can also choose to repeat a given word or passage, search for a given character or string of characters, or adjust settings such as the speed and volume of the speech.
High-contrast mode is an accessibility feature that alters the colours used by the operating system, apps and websites to maximise legibility. It's popular among people with low vision, colour blindness or photosensitivity. To test high-contrast mode, we simply need access to an operating system that supports it.
On Windows, we can enable high-contrast mode by pressing Left Alt + Left Shift + Print Screen or by opening the Settings app and navigating to Ease of Access > High contrast. On macOS we can enable it using Control + Option + Command + 8.
Once enabled, what should we look for to test high-contrast mode?
- Content icons or other indicators that rely on colour alone to convey information.
- Elements that rely on background colours alone to make them distinguishable from other elements, such as buttons or modals.
- Keyboard-focus styles or error states that rely on colour alone, for example a change of colour or the showing and hiding of a coloured border.
- Transparent images or Scalable Vector Graphics (SVG) images that aren't visible against black or white backgrounds.