Monthly Archives: October 2012

Fitts’ Law: Appreciating Size, Space and Speed.



PlayStation would have been a failure if its controller was like THAT. Period.

Fitts’ Law, in interaction design, describes the time taken to point at a target based on the size of, and the distance to, the target. It is governed by the following equation:


T = k log2 (D/S + 1.0)


T = time to move the pointer to the target

D = distance between the pointer and the target

S = size of the target

k = a constant of approximately 200 ms/bit

Essentially, the aim of using Fitts’ Law is to help designers determine the location and size of buttons, as well as the spacing between them, so as to enhance the user experience. This is especially important where space is limited, for example on mobile devices. A trade-off always exists among the size of the target, the distance to the target, and the speed and accuracy of reaching it; the following video sums this up (embedding disabled, kindly click on the link):
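To make the trade-off concrete, here is a minimal Python sketch of the formula above, using the ~200 ms/bit constant from the post. The specific distances and sizes are made-up illustrative values, not measurements:

```python
import math

def fitts_time(distance, size, k=0.2):
    """Predicted movement time in seconds, per Fitts' Law:
    T = k * log2(D/S + 1.0), with k = 0.2 s/bit (~200 ms/bit)."""
    return k * math.log2(distance / size + 1.0)

# A large, nearby button is predicted to be faster to hit
# than a small, distant one (units are arbitrary, e.g. pixels).
near_large = fitts_time(distance=100, size=50)
far_small = fitts_time(distance=800, size=10)
print(near_large, far_small)
```

Running this shows the small, far-away target costing several times the movement time of the large, nearby one, which is exactly why cramped mobile layouts are punishing.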


The importance of Fitts’ Law should not be ignored, even more so when the time to physically locate a target is critical to the task at hand – pressing the “shoot” button to score the championship-winning goal in the final minute of a soccer match in FIFA 13, after a mazy dribble through the opponent’s entire team. Now that’s critical!

And back to my PlayStation…


Reading, Speaking, Listening

People can interact with systems in different ways; when it comes to software, it is mainly by reading, speaking, or listening. Deciding how users will interact with software has implications that depend on the software’s purpose and its target demographic of users.


Reading is the most common way for people to interact with software, and it is generally more advantageous than being spoken to. People typically read information faster than they can take in the same information spoken aloud, and text allows users to re-read if they have doubts about the information.

However, there are implications to take note of when designing a system built around reading. Some people have difficulty reading small text, so zooming functions should be available. Dyslexic users may also struggle with text, so other forms of interaction, such as listening, should be considered if the design needs to cater to them.

Most web browsers have zooming capabilities.


Having users speak to software has recently become more popular with smartphone applications like Apple’s Siri. However, voice recognition is still often inaccurate, especially when users speak with different accents. One implication is that speech-based menus should be kept to a minimum, both in the number of spoken options and in the length of commands. People generally find it hard to follow and remember menus with too many spoken options, and a long voice command with many words increases the chance that some words are recognized inaccurately, resulting in a failed command.


Listening generally requires less cognitive effort than reading, which is why children usually prefer being read stories to reading them themselves. Software that targets children as its demographic therefore typically interacts with users by speaking to them. Spoken text is also useful in software that teaches people new languages. Design implications for spoken text include speaking it slowly enough for users to fully understand it, and accentuating intonation in artificially synthesized speech, since users may find synthesized voices harder to understand than human ones.

Language-learning software that aids users by speaking to them

Presenting this week!

Our group will be doing a presentation on Scenarios during lecture this week. Stay tuned!