Monday, August 16, 2010

Affordances 1

Affordances and physicality

An object's physical form determines people's expectations about the possible actions that can be carried out with it. “The physical form of an interface fundamentally shapes the kinds of interactions that users can and will perform” (Benford et al., 2005, p. 7). This idea draws on the Gibsonian notion of affordances (Gibson, 1977, 1979). Affordances are “properties of the world that make possible some actions to an organism equipped to act in a certain way” (Gaver, 1991, p. 80).

Norman (1999) described two different kinds of affordances: physical (or real) affordances and perceived affordances. They may, but do not necessarily, overlap. Physical affordances are built into the subcomponents of the computational artifact (keyboard, display screen, pointing device, and selection buttons such as mouse buttons), which afford pointing, touching, looking, and clicking (Norman, 1999). They exist independently of being perceived; they are relationships between the real world and the acting organism. Perceived affordances refer to the class of affordances for which there is perceptual information about an existing [real] affordance (Gaver, 1991). They are the things that are visible on the screen (e.g., an icon); they are “visual feedback that advertise the affordances” (Norman, 1999, p. 40). Perceived affordances play an important role in the world of screen-based products, as do cultural conventions, which are socially established practices. The changing form of a cursor, specifying what actions are possible at a given moment with a given icon, is an example of a cultural convention in interface design. Conventions work only if they are known to the user, that is, if they have been learned, unlike affordances, which do not involve memory or inference (Norman, 1999).
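
To make the cursor example more concrete, here is a minimal sketch of my own (not from the literature cited above), assuming Python's built-in Tkinter; the “Open settings” label and its handler are hypothetical. The hand cursor and the link-like styling are learned conventions that advertise “this text can be clicked”, although the pixels themselves only afford looking.

# A minimal, illustrative sketch using Python's built-in Tkinter.
import tkinter as tk

root = tk.Tk()
root.title("Perceived affordances and conventions")

def open_settings(event):
    # Hypothetical action behind the "link"; here it only prints a message.
    print("Label clicked - the link convention was understood.")

# Plain text: no perceptual information suggests that it is clickable.
plain = tk.Label(root, text="Just a label", padx=10, pady=5)
plain.pack()

# The same kind of widget, styled as a link: blue, underlined, and the
# cursor changes to a hand on hover - visual feedback advertising the action.
link = tk.Label(root, text="Open settings", fg="blue",
                font=("TkDefaultFont", 10, "underline"),
                cursor="hand2", padx=10, pady=5)
link.bind("<Button-1>", open_settings)
link.pack()

root.mainloop()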

When putting affordances into the world (the interface), designers can make mistakes. Gaver (1991) wrote, “Distinguishing affordances from perceptual information about them is useful in understanding ease of use” (p. 80). He showed that, with respect to whether an affordance exists and whether perceptual information about it is available, there are three classes of affordances: (1) perceptible affordances (perceptual information is available for an existing affordance), (2) hidden affordances (no information is available for an existing affordance, which must therefore be inferred from other evidence), and (3) false affordances (the information suggests a nonexistent affordance). The fourth class is that of correct rejections: people will usually not think of a given action when there is no affordance for it and no perceptual information suggesting it (p. 80). They may still think of it if they desire a certain action (see the differentiation between expected, sensed, and desired movements below).

Figure 1. Separating affordances from the information available about them (Gaver, 1991, p. 80)
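
As a compact restatement of the two dimensions in Figure 1 (this is my own illustrative Python sketch, not code from Gaver's paper), the four combinations can be written as a small decision function:

# Gaver's (1991) four combinations, restated as a tiny decision function.
def classify(affordance_exists: bool, information_available: bool) -> str:
    if affordance_exists and information_available:
        return "perceptible affordance"
    if affordance_exists and not information_available:
        return "hidden affordance"
    if not affordance_exists and information_available:
        return "false affordance"
    return "correct rejection"

# e.g. a flat, unlabeled touch area that can in fact be pressed:
print(classify(affordance_exists=True, information_available=False))  # hidden affordance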

The notion of affordance is very promising for interface design because it implies that if the affordance exists and is perceived, then a person acting on that interface will know how to use it without relying on memory (retrieving cultural conventions) or thinking (inferring information). Real affordances are in the world; they rely on the physical shape of objects. Therefore, it is important to bring physicality back by designing devices based on perceived physical affordances rather than on cultural conventions.

Physicality is a new trend in user interfaces, considered by Norman (2007) to be the next important UI breakthrough. The main idea of physicality is the use of physical controls and devices, where “we control things by physical body movement, by turning, moving, and manipulating appropriate mechanical devices” (Norman, 2007, p. 43), e.g., tuning a radio by turning a knob instead of acting on a complex graphical interface. It is not a reversion to mechanical controls; it is a shift to physical devices that are “coupled with intelligent, embedded processors and communication” (Norman, 2007, p. 47).
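
As an illustration of what such coupling might look like, here is a hedged, hardware-free Python sketch; the encoder step size, the FM band limits, and the function names are my own assumptions, not anything prescribed by Norman (2007).

# A physical knob (here simulated as a stream of rotary-encoder steps)
# coupled with a small piece of embedded logic that maps turning into tuning.
STEP_MHZ = 0.1          # one encoder detent changes the frequency by 0.1 MHz
FM_MIN, FM_MAX = 87.5, 108.0

def tune(frequency_mhz: float, encoder_steps: int) -> float:
    """Turn the knob by `encoder_steps` detents (+ = clockwise) and
    return the new frequency, clamped to the FM band."""
    new_frequency = frequency_mhz + encoder_steps * STEP_MHZ
    return max(FM_MIN, min(FM_MAX, new_frequency))

if __name__ == "__main__":
    frequency = 99.5
    # A short, simulated interaction: three turns of the physical knob.
    for steps in (+5, -2, +30):
        frequency = tune(frequency, steps)
        print(f"knob turned {steps:+d} detents -> {frequency:.1f} MHz")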


References

Gaver, W. (1991). Technology affordances. Paper presented at the Conference on Human Factors in Computing Systems, New Orleans, Louisiana, United States.

Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, Acting, and Knowing: Toward an Ecological Psychology (pp. 62-82). Hillsdale, NJ: Erlbaum.

Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.

Norman, D. A. (1999). Affordances, conventions and design. interactions, 6(3), 38-43.

Norman, D. A. (2007). The Next UI Breakthrough, Part 2: Physicality. interactions, 14(4), 46-47.

Thursday, September 10, 2009

User Centered Design in Interface Design: Assumptions and Consequences of Considering that Users Count

Being involved in the development process of a Tangible User Interface for conceptual design, I have been trying for a year now to put some order in my mind about what interface design really is and what User Centered Design (UCD) and some other approaches (Activity Centered Design, Usability Engineering, and so on) have to do with each other. I must confess that I am still confused. There are a lot of models and approaches, like Contextual Design, Participatory Design, and Usability Engineering. I had some trouble understanding how they connect to each other, and why something new is needed is not always easy to grasp. Anyway, my biggest problem is not a historical one. I am rather preoccupied with some consequences of applying UCD in the development process.

Before talking about anything else, let us see what UCD actually is. UCD is defined by one of its creators and main promoters as “a philosophy based on the needs and interests of the user, with an emphasis on making products usable and understandable” (D. A. Norman, 2001, p. 188). The big idea behind UCD is quite easy to get: place prospective users at the center of the development process. However, I am preoccupied with how this philosophy shapes the development process. Therefore, I am going to talk about some important assumptions of UCD and their consequences.

The first assumption I want to talk about is: users know what they need and want, with its consequence: therefore, following their suggestions is the best way to create a well-designed interface. A harsh criticism of this assumption comes from the field of interaction design. Cooper (1999) warns about the danger of blindly following users’ suggestions. In this way the design process takes the form of a “customer-driven death spiral” (Cooper, 1999). One of the most important problems of customer-driven products is the lack of coherence in design. Their development process is reduced to mutation from one release to another, instead of “growing in an orderly manner” (Cooper, 1999, p. 221).

Anyway, let us place the users at the center of the design process, assume that they know what they want, and see what happens. The next question arising in my mind is: which users should the interface designer consider more important?

There are various classifications, though they are not always that different from one another. As an example, I propose Eason’s classification. According to him, three different categories of users have to be taken into account in the development process: primary users (frequent hands-on users of the proposed system), secondary users (occasional users or those who use the system through an intermediary), and tertiary users (those affected by the introduction of the system or who will influence its purchase, but who are unlikely to be hands-on users) (Eason, 1988). Their wishes and needs may not always coincide. So once again, whose wishes should the interface designer take into account? I propose caring more for the primary users, as they are the ones who will eventually have to adapt to your interface, even if the tertiary users are the ones going to purchase it. Happily for us, sometimes these categories of users coincide.

A second problematic assumption is: users know what they are talking about. Is that true, or are users’ reports usually subjective, seldom containing all the relevant information? I claim that it is not enough to ask users what they need, and that it is difficult to let them analyze their own activity. User analysis should reveal the users’ mental models (D. A. Norman, 1986) of the task, much of which consists of users’ tacit knowledge about the way they fulfill their tasks. This means that the main concern of the analyst will be the users’ procedural knowledge concerning the most familiar and obvious part of the job they are trying to accomplish. Users are asked to explain why and how they do certain things. In order to do this, they have to become aware of their own behavior. Nevertheless, the retrospective description of events is not always very accurate, and users tend to leave out the ordinary activities, concentrating on particularly exciting or boring ones. How true is the old saying “out of sight, out of mind”? What is the solution? I argue for: get some objective data!
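
What could “objective data” look like in practice? One option, sketched below purely as an illustration (the event names, record fields, and log file are hypothetical, not a recommendation from the cited literature), is to instrument the interface so that every user action is time-stamped and logged, instead of relying only on retrospective self-reports.

# A minimal, illustrative event logger for observed user actions.
import json
import time

LOG_FILE = "interaction_log.jsonl"

def log_event(user_id: str, action: str, target: str) -> None:
    """Append one observed user action to a line-oriented JSON log."""
    record = {
        "timestamp": time.time(),
        "user": user_id,
        "action": action,   # e.g. "click", "menu_open", "undo"
        "target": target,   # e.g. "send_button", "format_menu"
    }
    with open(LOG_FILE, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

# Example: these calls would sit inside the interface's event handlers.
log_event("p01", "click", "send_button")
log_event("p01", "undo", "message_body")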

Let us say that we have elicited requirements for an interface (a mailing program, whatever), developed a mock-up, and started a user test session. What can go wrong? We just did what the users wanted, and we hope that the elicitation process is over. No, it is not always that easy. Users can “change their minds once they see the possibilities more clearly, and discoveries made during later phases may also force retrofitting requirements” (Goguen & Linde, 1993, p. 152). Once again, we remind ourselves that users do not always really know what they need, and there is no guarantee that by following their suggestions the product is going to be a success. The development process is an ongoing one; the possibility of deciding on different courses of action has to remain open. User tests have to be performed, so that the elicited requirements can be checked and … changed. The consequence of this assumption is the development of interfaces that correspond to the users' wishes, but that nobody wants to buy.

The last assumption I would like to talk about is: mental models, skills, and preferences are stable over time. In the UCD approach it is already taken for granted that considering users in the development process is very important: the product being developed has to be adapted to their experience and mental models, and has to take account of their skills and preferences. This way the acceptance of a new product increases in a participatory design process, and deployment problems can be avoided. I am not going to argue that these things are not important or do not exist. I just want to ask: are mental models, skills, and preferences stable over time? And I argue that they are not, by asking some more questions. Should one think the same way about operating systems like Windows as he or she did ten years ago? Are the skills from the first day of interacting with an interface equal to those gained after one month of experience? Should we let users change their preferences?

Users may wish for ease of learning when they start working with a new interface, and for efficiency, defined as the speed (with accuracy) with which they can complete their tasks, once they have got used to it. Some fans of software upgrades could argue that the next release of the software is going to solve this problem. The question is: for whom, for beginners or for experts? I think that most of the time some functions are added, but they are not going to solve the problem. Not all users start at the same time or learn at the same speed, and even if they did, they cannot learn at the speed of software upgrades. The solution is to be aware of these changes and to make the software more personalized. Let users choose when they have reached the next level. Do not give them thousands of functions from the beginning.
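
As a rough sketch of this idea (the levels and command lists are made-up examples, not taken from any real program), the interface could expose only the commands registered at or below the proficiency level the user has chosen:

# Illustrative progressive disclosure driven by a user-chosen level.
COMMANDS_BY_LEVEL = {
    "beginner":     ["new message", "reply", "delete"],
    "intermediate": ["forward", "search", "folders"],
    "expert":       ["filters", "keyboard shortcuts", "templates"],
}
LEVEL_ORDER = ["beginner", "intermediate", "expert"]

def visible_commands(user_level: str) -> list[str]:
    """Return every command available at the user's chosen level or below."""
    commands = []
    for level in LEVEL_ORDER:
        commands += COMMANDS_BY_LEVEL[level]
        if level == user_level:
            break
    return commands

print(visible_commands("beginner"))       # a short, uncluttered menu
print(visible_commands("intermediate"))   # more commands, chosen by the user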

The consequence of this assumption is the development of complicated programs, with so many functions and so many ways of doing the same thing that no user can possibly know them all: programs that give the beginner no chance of doing his job and turn the experts into proficient clickers. By proficient clickers I mean those users clicking away to get rid of pop-up windows that constantly try to teach them how to do what they are already doing, or to prevent them from making “beginners’” mistakes.

I think that it is time to make sure that we understand all the assumptions of the design philosophy we promote, and their consequences for the development process. What do you think?

Some books ....

Cooper, A. (1999). The Inmates are Running the Asylum. SAMS/Macmillan Computer Publishing.
Eason, K. (1988). Information Technology and Organisational Change. Taylor & Francis.
Goguen, J. A., & Linde, C. (1993). Techniques for requirements elicitation. Paper presented at Requirements Engineering '93.
Norman, D. A. (1986). Cognitive Engineering. In D. Norman & S. Draper (Eds.), User Centered System Design: New Perspectives on Human-Computer Interaction. Lawrence Erlbaum Associates.
Norman, D. A. (2001). The Design of Everyday Things (fourth printing). MIT Press.