Transcript - Accessibility using cross-platform app-development - Accessible publishing

Cross-platform app-development

[Narrator:] Developing for iOS and Android can require a lot of time. Native iOS apps are typically programmed using Objective-C or Swift as the programming languages and Xcode as the development environment. Native Android apps are typically developed using Java or Kotlin as the programming languages and Google’s Android Studio as the development environment. A developer has to learn how to use different programming languages, different programming frameworks for each operating system and different development environments. Once the app is developed, it needs to be maintained and kept up to date on both operating systems. This is an expensive approach, requiring a lot of time and, therefore, financial resources.

More and more developers are looking for a “write once, run anywhere” approach. This solution, once promised by the slogan of the Java programming language, is still a worthwhile goal for developers. Cross-platform frameworks and development environments promise to deliver this.

As part of this chapter, we will look at two examples following two different approaches, and their influence on accessibility. Following the proposed example scenarios, we can produce apps that can be distributed via their respective app stores, the Apple App Store and the Google Play Store.

Apache Cordova

For our first example, we will look at Apache Cordova. Apache Cordova is an open-source mobile development framework. It allows you to use standard web technologies, including HTML, style sheets and JavaScript programs, for cross-platform development. Apps execute within wrappers targeted at each platform using programming-interface bindings to access each device’s capabilities such as the accelerometer, the camera, geolocation and networking.

To understand Apache Cordova, we are going to have a brief look at its architecture. At the base is the operating system, which in our scenario is iOS or Android. A Cordova app runs natively on both operating systems. Each application offers the required frameworks for hosting the web app.

The web app itself consists of web pages, style sheets and JavaScript programs. Images and media data can be integrated as resources too. A special configuration file defines app parameters like icons, plug-ins and preferences.
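In a Cordova project, this configuration file is called config.xml. A minimal sketch might look like this (the app identifier, name, icon path and plug-in entry are placeholders, not taken from the example app):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- config.xml: defines app parameters such as icons, plug-ins and preferences -->
<widget id="com.example.recipes" version="1.0.0"
        xmlns="http://www.w3.org/ns/widgets"
        xmlns:cdv="http://cordova.apache.org/ns/1.0">
    <name>Recipes Demo</name>
    <description>A small recipe-rating demo app</description>
    <content src="index.html" />
    <icon src="res/icon.png" />
    <preference name="Orientation" value="portrait" />
    <!-- plug-ins can be declared here as well -->
    <plugin name="cordova-plugin-geolocation" />
</widget>
```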

The WebView rendering engine provides the web browser in which the app runs. In fact, it hosts the entire user interface of the app. Plug-ins are an integral part of the Cordova ecosystem. They provide an interface that allows Cordova and native components to communicate with each other. This enables a programmer to invoke native code from JavaScript.

The Apache Cordova project maintains a set of plug-ins called core plug-ins. These core plug-ins provide access to device-specific functions such as the accelerometer, the camera, geolocation, contacts and so on. In addition to the core plug-ins, there are several third-party plug-ins, which provide additional bindings for features not necessarily available on all platforms. You can also develop your own plug-ins. Plug-ins can extend the functionality through additional features; for example, a database interface. As app developers, we only have to worry about the components that are part of the web app. For most applications, we do not need to modify any of the other components.

Apache Cordova example

Here we have a little demo app. It shows a table of recipes. Each cell shows an image, a title and an evaluation indicator represented by a set of hearts. When we click on a cell, we see the name of the recipe and a slider. We can use the slider to evaluate how much we like a recipe. Let’s have a look at the code. Apache Cordova builds the interface with a WebView, but it has no user-interface elements of its own. It is therefore recommended that you use one of the many JavaScript user-interface libraries. They greatly facilitate the building of a modern responsive user interface. For our example we used the Onsen UI library, but we could have used any other JavaScript user-interface library. Please note that the fact that we used this library does not mean we recommend it.

To keep the example small, all code is embedded in a single file. The code offers templates for two types of pages: one for the recipe list and another one for displaying the recipe data. A toolbar on top of both pages helps us to build the navigation between the two pages. For demo purposes, we created an array of recipes as a data source with images, titles, labels and default values for the evaluation.

The init event listener is called before a page gets displayed. For our purposes, we build a list of recipes and add it to the code of the first page. Note that every list item contains a pushPage operation that guides the user to the second page. The pushPage operation sends the array index of the selected item to the second page. When the second page is generated, the index is used to fill the page with recipe-related data.

We start VoiceOver on the iPhone to see how the app works for a visually disabled user.

[Screen reader:] Curry rice. 3 orange heart emojis. Meal curry rice png, image. Hamburger. Orange heart emoji. Orange heart emoji.

[Narrator:] The number of hearts indicates how much we like a recipe, but VoiceOver cannot translate the meaning of the number of hearts into words. VoiceOver detects the image, but it cannot find an alternative description, so it says the filename instead.

[Screen reader:] Meal hamburger png, image. Meal hamburger png, image.

[Narrator:] On the second page, the screen reader reads out the image’s filename once more.

[Screen reader:] How do you like this recipe? Heading level 1. 2: 25%. Adjustable.

[Narrator:] VoiceOver detects content for the thumb icons next to the slider, but cannot offer a description. These icons do not add any value for a visually disabled user. The functionality of the slider is not easy to understand. We need to fix this. Basically, this is HTML and JavaScript, so the Web Content Accessibility Guidelines should be used to fix the problems. We will not explain every code modification. If you want to learn how to make HTML code accessible, you can find more information in the Introduction to web standards chapter.

We start by giving the image in the recipes list an alt attribute in order to add an alternative description. We will use the label data from our array of recipes as the value of the alt attribute. The heart symbols are not very expressive. Therefore, we hide them using the aria-hidden attribute. As a replacement, we add a description field, which will only be visible to screen readers.
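In HTML terms, the changes described here might look like the following fragment (file names, class name and description text are illustrative; sr-only is a common convention for a style that hides text visually while leaving it available to screen readers):

```html
<!-- Alternative description for the recipe image -->
<img src="img/meal_curry_rice.png" alt="A plate of curry rice with carrots and peas">

<!-- Decorative hearts hidden from assistive technology -->
<span aria-hidden="true">🧡🧡🧡</span>

<!-- Screen-reader-only replacement for the hearts -->
<span class="sr-only">So-so</span>
```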

The describeEvaluation function turns our evaluation into a description. Of course, we want to use this description whenever the user changes the slider value. We continue with the second page. The image needs an alternative description too. We will fill it out later. The thumb icons offer no additional information, so we just hide them from the screen readers using the aria-hidden attribute.
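The describeEvaluation function itself is not shown in this transcript; a minimal sketch of such a function might look like this (the thresholds and wording are assumptions based on the screen-reader output quoted later in the demo):

```javascript
// Hypothetical mapping from a 0-100 slider value to a spoken description.
function describeEvaluation(value) {
    if (value <= 30) {
        return 'Bad';
    }
    if (value <= 70) {
        return 'So-so';
    }
    return 'Excellent';
}
```

The returned text can then be written into the screen-reader-only description field whenever the slider value changes.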

We would like to add more information to our slider, but the Onsen UI macro ons-range does not allow us to add accessibility attributes to an input element. This is a limitation of this particular JavaScript library. But we can use a workaround. The macro does nothing more than embed a set of styled HTML elements into the DOM tree of the web page. Instead of using the macro, we can embed these elements ourselves.

Now we can add an aria-label to the input element. Since we are not using the macro anymore, we need to adapt the code that addresses these elements.
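Expanded, the workaround might look like the following fragment; the exact markup and class names generated by ons-range depend on the Onsen UI version, so treat these as placeholders:

```html
<!-- Hand-written replacement for the ons-range macro,
     so that ARIA attributes can be added to the input element -->
<div class="range">
  <input type="range" class="range__input" id="evaluation-slider"
         min="0" max="100" value="25"
         aria-label="Select the percentage to indicate how much you like this recipe">
</div>
```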

Finally, we update the alternative text of the image. Let’s see if the code changes improve accessibility.

[Screen reader:] Curry rice. So-so. A plate of curry rice with carrots and peas, image. Hamburger. Bad. A hamburger with tomato, cheese and salad, image. Hotdog. Excellent. Hotdog. A hamburger with tomato, cheese and salad, image. A hamburger with tomato, cheese and salad, image. How do you like this recipe? Heading level 1. Select the percentage to indicate how much you like this recipe. 2: 25%. Adjustable. 3: 50%. 4: 75%. Recipes. Recipes Demo.

[Narrator:] I think this is much better now. Many user-interface libraries have built-in support for the Web Content Accessibility Guidelines and Accessible Rich Internet Applications standards. When you select your library, you should check the documentation for accessibility support. Does the documentation mention accessibility and present demos on how to use the library to create accessible web interfaces? Does the generated code support ARIA roles?

We do not recommend any particular library because there are so many options, with new developments coming out very often, and because requirements differ from one application to another.

This brief example demonstrates that an Apache Cordova app uses normal web mechanisms to build an app. Therefore, we can use the Web Content Accessibility Guidelines to improve accessibility. If you do not understand the different steps to make web pages accessible, then please refer to the HTML coding chapter. The techniques used in our example are introduced there. For more information about Apache Cordova development, please refer to the website of the project.

Microsoft Xamarin

The second example of a cross-platform development environment is Microsoft Xamarin. Xamarin is a development architecture made up of tools, programming languages and libraries that is used to build many different types of apps. Xamarin extends Microsoft’s .NET developer platform with tools and libraries made specifically to build apps for Android, iOS, tvOS, watchOS, macOS and Windows. The preferred development environment for developing apps with Xamarin is Microsoft Visual Studio. Even though it is a commercial product, it can be downloaded and tested for free. Smaller companies can create apps without paying licence fees.

Microsoft’s Xamarin enables a programmer to write an app using a single source code running natively on iOS and Android. It does this by extending the .NET platform with tools and libraries. iOS and Android offer unique operating-system and platform application programming interfaces, APIs for short. Even though Xamarin offers access to these APIs via the common programming language C#, the APIs are different on each platform. This means developers have to learn to use different APIs to access platform-specific features. With Xamarin.Essentials, developers have a single, cross-platform API, which is compatible with any iOS or Android application, and which can be accessed through shared code no matter how the user interface is created.


Xamarin.Forms is an open-source cross-platform framework for building cross-platform apps with .NET from a single shared codebase. Using Xamarin.Forms, a developer can declare a user interface using Extensible Application Markup Language, XAML for short, which will be rendered into native user-interface elements on each operating system. For example, a programmer defines the tab-bar menu of an app only once in XAML, but it will appear on iOS at the bottom of the screen, and at the top on Android.

This native approach is defined for a multitude of user-interface elements. These might be basic user-interface elements like labels, buttons or checkboxes, or they could be more complex structures like web views, maps or carousels. Xamarin offers a rich library of existing cross-platform extensions, which can be loaded and reused by a programmer. The library covers many different application scenarios. Therefore, a programmer rarely needs to develop platform-specific extensions of their own.

Visual Studio

The Visual Studio development environment provides an editor with syntax highlighting, code completion, user-interface design and other functionalities conceived specifically for developing mobile apps. Visual Studio requires the platform-specific development environments Xcode and Android Studio to compile, link and distribute an app. Even though these platform-specific environments must be installed, it is rarely necessary to leave Visual Studio as they are integrated into the Xamarin workflow.

Xamarin.Forms accessibility

Making a Xamarin.Forms application accessible means thinking about the layout and design of many user-interface elements. Many accessibility concerns, such as large fonts and suitable colour and contrast settings, can be addressed by Xamarin.Forms APIs. Xamarin.Forms does not currently have built-in support for all of the accessibility APIs available on each of the underlying platforms. However, it does include the option to set automation properties on user-interface elements to support screen-reader and navigation-assistance tools, via the following attached properties.

The IsInAccessibleTree property is a Boolean that determines whether the element is accessible, and hence visible, to screen readers. It must be set to “true” to use the other attached accessibility properties.

The value of the Name property is a short, descriptive text string that a screen reader uses to announce an element. This property should be set for elements whose meaning is important for understanding the content or interacting with the user interface.

The HelpText property is set for text that describes the user-interface element; it can be thought of as tooltip text associated with the element.

The LabeledBy property allows another element to define accessibility information for the current element. For example, a label next to an entry can be used to describe what the entry represents. These attached properties set native accessibility values so that a screen reader can explain the element.

The different automation properties can be set in XAML as well as in the source code. Here is an example in XAML. The same effect could be achieved by setting the Boolean value in C# source code.
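The on-screen example is not reproduced in this transcript; a representative sketch might look like this (the element, image name and description text are placeholders):

```xml
<!-- XAML: make the image visible to screen readers and give it a description -->
<Image Source="meal_curry_rice.png"
       AutomationProperties.IsInAccessibleTree="true"
       AutomationProperties.Name="A plate of curry rice with carrots and peas" />
```

The same effect in C# source code:

```csharp
// C#: set the same automation properties in code.
var image = new Image { Source = "meal_curry_rice.png" };
AutomationProperties.SetIsInAccessibleTree(image, true);
AutomationProperties.SetName(image, "A plate of curry rice with carrots and peas");
```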

Xamarin.Forms example

Here we have a little demo app. Just like the previous example, it shows a table of recipes. Each cell shows an image, a title and an evaluation indicator represented by a set of hearts. Once we click on a cell, we see the name of the recipe and a slider. We can use the slider to evaluate how much we like a recipe. Let’s look at the source code. We have a recipe class that offers an image, a title, a label and an evaluation value for the recipe. If the evaluation is changed, a notification procedure is triggered, which handles the updating of table content. For demo purposes, we create all recipe objects and collect them in an ObservableCollection in the recipes class.

To display a recipe, we have a XAML file, defining the layout, and the associated C# file handling the data binding of a single recipe object. The home screen uses a XAML file for the layout presenting the list of recipes. The associated C# file handles the data binding of the list of recipes. Since we do not want to show the evaluation as a figure, we convert it into a number of hearts. The higher the evaluation, the more hearts there are.

We start VoiceOver on the iPhone to see how the app works for a visually disabled user.

[Screen reader:] Curry rice. 3 orange hearts emojis. Hamburger. Orange heart emoji. Orange heart emoji. Hotdog. 5 orange hearts emojis.

[Narrator:] VoiceOver cannot detect a single image. The number of hearts indicates how much we like a recipe, but VoiceOver does not understand this.

[Screen reader:] Hamburger. Orange heart emoji. Orange heart emoji. How do you like this recipe? Thumbs-down emoji. 25%. Adjustable. 35%. 50%. Thumbs-up emoji.

[Narrator:] There is no description for the large image. The purpose of the slider is not easy to understand. We need to fix this. We start with the recipe list. We want the screen reader to detect the image and its description. This can be achieved by setting the IsInAccessibleTree and Name automation properties.

We are using a converter function to display the number of hearts to express the evaluation. The new function will convert the evaluation into text, which can be read by a screen reader. We are adding this function to our C# code.
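The converter itself is not shown in this transcript; a sketch of what such a Xamarin.Forms value converter might look like (class name, thresholds and wording are our own):

```csharp
using System;
using System.Globalization;
using Xamarin.Forms;

// Hypothetical IValueConverter turning an evaluation value into spoken text.
public class EvaluationToTextConverter : IValueConverter
{
    public object Convert(object value, Type targetType,
                          object parameter, CultureInfo culture)
    {
        var evaluation = (double)value;
        if (evaluation <= 30) return "Bad";
        if (evaluation <= 70) return "So-so";
        return "Excellent";
    }

    public object ConvertBack(object value, Type targetType,
                              object parameter, CultureInfo culture)
        => throw new NotImplementedException();
}
```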

We continue adapting the code for the recipe page. The image needs to be detectable by a screen reader. Additionally, we add a description for it. The thumb icons do not add any useful information to the function of the slider, so we hide them from the screen reader. The slider does not only get a name, but also additional help text, which helps the user to understand its function.
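In XAML, the changes described here might look like the following fragment (image names, binding path and texts are illustrative):

```xml
<!-- Decorative thumb icons hidden from the screen reader -->
<Image Source="thumb_down.png"
       AutomationProperties.IsInAccessibleTree="false" />
<Slider Minimum="0" Maximum="100" Value="{Binding Evaluation}"
        AutomationProperties.IsInAccessibleTree="true"
        AutomationProperties.Name="Evaluate how much you like this recipe"
        AutomationProperties.HelpText="Select the percentage to indicate how much you like this recipe" />
<Image Source="thumb_up.png"
       AutomationProperties.IsInAccessibleTree="false" />
```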

Let’s see how this changes the behaviour of the app.

[Screen reader:] Curry rice. So-so. A plate of curry rice with carrots and peas, image. Hamburger. Bad. A hamburger with tomato, cheese and salad, image. Hotdog. Excellent. A hamburger with tomato, cheese and salad, image. A hamburger with tomato, cheese and salad, image. Hamburger. How do you like this recipe? Evaluate how much you like this recipe. 25%. Adjustable. Select the percentage to indicate how much you like this recipe. Swipe up or down with one finger to adjust the value. 35%. 50%. 60%. 75%. Demo Back button. Demo. Demo, heading.

[Narrator:] This looks much better now. One last thing. Please note how we are using names to define the font sizes; for example, body, header, title or subtitle. Please do not use numeric values to define the font size. If you use names to define the font size, you support the automatic text-resizing function of the native operating systems, whereas if you use numeric values, you block this accessibility function.

This was a very short introduction to making a Xamarin.Forms app more accessible. For a detailed introduction, please refer to the original documentation.

Testing a cross-platform app

Accessibility features are always dependent on operating-system-specific features. Therefore, you always have the option to use the test tools of the native development platform. For example, on macOS, you can use the Accessibility Inspector, bundled with Xcode, to test your app created with Xamarin in the simulator. To learn more about how to test accessibility in the iOS simulator, please refer to the chapter Apps ‒ iOS.

Of course, you should always test your app on real devices of all target platforms for which you develop your app. This is the only way you can evaluate all the different settings and operations your app offers as the users will experience them. It is the only way of testing whether your app can access and use device-specific functions like the accelerometer or geolocation. For these tests, you need to know how the accessibility features of the operating system work. If you are not sure, then please refer to the Assistive Technology ‒ iOS or Assistive Technology ‒ Android chapters. If you do not know how to operate the screen reader, please consider viewing the Screen readers ‒ iOS or Screen readers ‒ Android chapters.

Where to continue?

You have now had a general introduction to the different ways to improve accessibility using cross-platform app-development strategies.

Depending on your personal interests, you could continue with the following chapter:

  • Apps ‒ iOS

  • Apps ‒ Android

  • Apps ‒ Review

[Automated voice:] Accessibility. For more information visit:
