People can interact with systems in different ways; with software, interaction mainly involves reading, speaking, or listening. Deciding how users will interact with a piece of software has implications that depend on the software's purpose and its target demographic of users.
Reading is the most common way for people to interact with software, and it is generally more advantageous than being spoken to. People typically read information faster than they can listen to the same information being spoken aloud. Text also allows users to re-read a passage if they have doubts about the information.
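The reading-versus-listening gap can be made concrete with a rough calculation. The rates below are illustrative assumptions, not figures from the text: silent reading is often cited at roughly 200–300 words per minute, while conversational speech runs closer to 150.

```python
# Rough comparison of the time needed to take in a 300-word passage.
# Both rates are illustrative assumptions for this sketch.
READING_WPM = 250   # assumed silent-reading rate
SPEAKING_WPM = 150  # assumed speaking rate

def minutes_to_consume(word_count: int, words_per_minute: int) -> float:
    """Return the time in minutes to read or hear `word_count` words."""
    return word_count / words_per_minute

passage_words = 300
print(f"Reading:   {minutes_to_consume(passage_words, READING_WPM):.1f} min")
print(f"Listening: {minutes_to_consume(passage_words, SPEAKING_WPM):.1f} min")
```

Under these assumed rates, listening to the passage takes nearly twice as long as reading it, which is one reason text remains the default mode for most software.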
However, there are implications to take note of when designing a system based around reading. Some people have difficulty reading, so zoom functions should be available for those who find the text too small. People with dyslexia may also struggle with text, so other forms of interaction, such as listening, should be considered if the design needs to cater to them.
Having users speak to software has recently become more popular with smartphone applications such as Apple's Siri. However, voice recognition is still not always accurate, especially when users speak with different accents. One implication is that speech-based menus should be kept to a minimum, both in the number of spoken options and in the length of commands. People generally find it hard to follow and remember menus with too many spoken options, and a long voice command with many words increases the chance that some of them are misrecognized, resulting in a failed command.
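The guidelines above could be enforced as design-time checks on a proposed menu. This is a minimal sketch under assumed limits: the option ceiling and word ceiling below are hypothetical values chosen only to illustrate the "keep it small" principle, not thresholds from the text.

```python
# Design-time checks for a spoken menu; both limits are assumptions.
MAX_SPOKEN_OPTIONS = 4   # assumed ceiling users can hold in memory
MAX_COMMAND_WORDS = 3    # assumed ceiling before recognition errors grow

def check_speech_menu(options: list[str]) -> list[str]:
    """Return a list of design warnings for a proposed spoken menu."""
    warnings = []
    if len(options) > MAX_SPOKEN_OPTIONS:
        warnings.append(
            f"{len(options)} spoken options; users may struggle to "
            f"remember more than {MAX_SPOKEN_OPTIONS}."
        )
    for option in options:
        word_count = len(option.split())
        if word_count > MAX_COMMAND_WORDS:
            warnings.append(
                f"Command '{option}' has {word_count} words; longer "
                f"phrases raise the chance of misrecognition."
            )
    return warnings

# A menu that violates both guidelines:
menu = ["check balance", "transfer funds to another account",
        "pay bill", "open account", "close account"]
for warning in check_speech_menu(menu):
    print(warning)
```

Running the check on this five-option menu flags both the number of options and the one command that is too long to be recognized reliably.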
Listening generally requires less cognitive effort than reading, which is why children usually prefer being read stories to reading them themselves. Software that targets children as its demographic therefore typically interacts with users by speaking to them. Spoken text is also useful in software that teaches people new languages. Design implications for spoken text include speaking it slowly enough for users to fully understand it, and accentuating intonation in artificially synthesized speech, as users may find synthetic voices harder to understand than human ones.
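One way to keep synthesized speech easy to follow is to break instructional text into short segments with a pause after each, so a text-to-speech engine never speaks a long run at once. The segmenting rule and pause length below are assumptions for illustration; any real TTS engine would consume these segments one at a time.

```python
import re

PAUSE_SECONDS = 0.8  # assumed pause between segments to aid comprehension

def segment_for_speech(text: str) -> list[str]:
    """Split text into sentence-sized segments that a TTS engine can
    speak one at a time, pausing after each to let learners process it."""
    return [s.strip()
            for s in re.split(r"(?<=[.!?])\s+", text)
            if s.strip()]

lesson = "Bonjour means hello. Merci means thank you. Au revoir means goodbye."
for segment in segment_for_speech(lesson):
    print(segment)              # a real engine would speak this slowly...
    # time.sleep(PAUSE_SECONDS) # ...then pause before the next segment
```

Segmenting in the application, rather than relying on the engine's defaults, keeps the pacing under the designer's control regardless of which synthesis backend is used.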