2.3.2 Step 2: Collect Task Data


Task data are collected by observing and talking with multiple users.
The overall objective is to see the world through the eyes of the person
the system is being designed for, and to develop empathy for
the challenges, demands, and responsibilities they face. This empathy
helps focus attention on the details of the system that matter to
the user. These details might be very different from those that might
be noticed by the engineer implementing the design. The particular
way to understand users’ tasks depends on the information
required for the analysis. Ideally, human factors specialists observe
and question users as they perform tasks in the place where they
typically perform those tasks. This is not always possible, and it
may be more cost-effective to collect some information with other
techniques, such as surveys or questionnaires. Task data collection
techniques include:

1. Observations and questions of people as they use an existing
system

2. Retrospective and prospective verbal protocol analysis

3. Unstructured and structured interviews including focus groups

4. Surveys and questionnaires

5. Automatic data recording

Observation involves watching users as they interact with existing
versions of the product or system. This is one of the most useful
data collection methods. If we were interested in car design, we
would find drivers who represent the different types of people the
car is to be designed for and then observe how they use their cars.
People are asked to perform the activities under a variety of typical
scenarios, and the analyst observes the work, asking questions as
needed. Observation should take place in the environment in which
the person normally accomplishes the task (see Table 2.1).

Table 2.1 Observing people to guide design: probe questions for the
Master/Apprentice observation approach.

1. Who and what is needed to perform the task?
2. What happens before, and what after?
3. What does the task change, and how is this detected?
4. What has to be remembered?
5. What is the consequence of failure to complete the task?
6. When in the day, and when relative to other events, is the task performed?
7. How do people communicate and coordinate their activity?

The meaning behind users’ tasks is often revealed in their
thoughts, goals, and intentions, and so observations of physical
activity may not be sufficient to understand the tasks. This is particularly
true with primarily cognitive tasks that may generate little
observable activity. In such cases, it can be useful for people to
think out loud as they perform various tasks. One approach is to
adopt a master-apprentice relationship, where the observer acts
as an apprentice trying to learn how the user performs tasks [39].
Adopting this relationship makes it easy for observers to ask
questions that help users describe their underlying goals, strategies,
and decisions.

Retrospective and prospective protocol analysis address important
limits of direct observation: observations can disrupt ongoing
activity, and they can fail to capture rarely occurring situations.
For example, trying to understand how people deal with
being lost as they drive would be difficult to observe because talking
to the driver could be distracting and the analyst would have to
ride with the driver for many trips to observe the rare case that they
get lost. Talking about tasks is termed a verbal protocol: retrospective
verbal protocols require that people describe past events,
whereas prospective verbal protocols require that people imagine
how they would act in future situations.

Video recordings of users’ activity are an effective way to prompt
retrospective protocols. The human factors specialist and the user
can watch the video together, and the user can describe what they
were thinking as they performed the tasks. The human factors specialist
can pause the playback and ask probe questions. Because
users do not have to perform the task and talk about it at the same
time, retrospective protocols can be easier on the user. Retrospective
protocols can even yield more information than concurrent
protocols.

Structured and unstructured interviews involve the human
factors specialist asking the user to describe their tasks. Structured
interviews use a standard set of questions that ensure the interview
captures specific information for all interviewees. Unstructured
interviews use questions that are adjusted during the interview
according to the situation. The analyst might ask about how the
users go about the activities and also about their preferences and
strategies. Analysts should also note points where users fail to
achieve their goals, make errors, show lack of understanding, and
seem frustrated or uncomfortable.

Unstructured interviews use probe questions similar to those
used for direct observation. These questions address when, how,
and why a particular task is performed, as well as the consequences
of not performing the task. The critical incident technique is a particularly
useful approach for understanding how people respond to
accident and near-accident situations in high-risk systems. Because
accidents are rare, direct observation is not feasible. With
the critical incident technique, the analyst asks users to recall the
details of specific situations and relive the event. By reliving the
event with the user, the analyst can get insights similar to those
from direct observation [42].

Critical incident technique
makes it possible to “observe”
past events.

Focus groups are interviews with small groups of users, rather
than individuals [43, 44]. Focus groups typically consist of between
six and ten users led by a facilitator familiar with the task and
system. The facilitator should be neutral with respect to the outcome
of the discussion and not lead the discussion to a particular
outcome. One benefit of focus groups is that they cost less than individual interviews because they require less time for the analyst.
Also, discussion among users often draws out more information
because the conversation reminds them of things they would not
otherwise remember.

Observations are typically more valuable than interviews or
focus groups because what people say does not always match what
they do. In addition, people may omit critical details of their work,
they may find it difficult to imagine new technology, and they may
distort their description to avoid appearing incompetent. It is often
difficult for people to describe how they would perform a given
task without actually doing it—describe how you tie your shoes
without being able to touch and see your shoes.

Surveys and questionnaires are typically used after designers
have obtained preliminary descriptions of activities or basic tasks.
Questionnaires are often used to affirm the accuracy of the information,
determine the frequency with which various groups of
users perform the tasks, and identify any user preferences or biases.
These data help designers prioritize different design functions or
features. See Chapter 3 for a more complete discussion of surveys
and their limits.
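
As a minimal sketch of how such questionnaire data might be summarized,
the example below tallies self-reported task frequency by user group so
that designers can compare groups when prioritizing features. The groups,
tasks, and 1-5 rating scale are invented for illustration.

    from collections import defaultdict

    # Hypothetical questionnaire responses:
    # (user group, task, reported frequency on a 1-5 scale)
    responses = [
        ("commuter", "adjust navigation", 5),
        ("commuter", "pair phone", 2),
        ("delivery driver", "adjust navigation", 4),
        ("delivery driver", "pair phone", 5),
    ]

    # Collect ratings per (group, task) pair, then report the average
    ratings_by_group_task = defaultdict(list)
    for group, task, rating in responses:
        ratings_by_group_task[(group, task)].append(rating)

    for (group, task), ratings in sorted(ratings_by_group_task.items()):
        print(f"{group:16s} {task:18s} {sum(ratings) / len(ratings):.1f}")

A tabulation like this only summarizes what people report; as noted
above, it should be checked against observation before it drives design
decisions.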

Automatic data recording uses smartphones and activity monitors
to record people’s activities unobtrusively. An example of such
a system is a data logging device that records the position of the
car, its speed, and any hard braking or steering events. Such a
device can show where drivers go, when they choose to travel, and
whether they travel safely. Such data has the benefit of providing a
detailed and objective record, but it lacks information regarding the
purpose behind the activity. This limitation might be avoided by
using automatically recorded data to prompt a retrospective verbal
protocol.
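
As a rough illustration of how logged driving data might be screened,
the sketch below flags hard-braking events from a series of time-stamped
speed samples. The record format, sampling interval, and the 3 m/s^2
deceleration threshold are assumptions made for illustration, not
properties of any particular logging device.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SpeedSample:
        """One logged record: timestamp in seconds and speed in m/s
        (format assumed for illustration)."""
        t: float
        speed: float

    def hard_braking_events(samples: List[SpeedSample],
                            decel_threshold: float = 3.0) -> List[float]:
        """Return timestamps at which deceleration exceeds the threshold
        (m/s^2). The 3.0 default is an assumed cut-off, not a standard."""
        events = []
        for prev, curr in zip(samples, samples[1:]):
            dt = curr.t - prev.t
            if dt <= 0:
                continue  # skip malformed or duplicated timestamps
            decel = (prev.speed - curr.speed) / dt
            if decel > decel_threshold:
                events.append(curr.t)
        return events

    # Example: steady cruising followed by an abrupt slowdown
    log = [SpeedSample(0.0, 20.0), SpeedSample(1.0, 19.5),
           SpeedSample(2.0, 19.0), SpeedSample(3.0, 12.0)]
    print(hard_braking_events(log))  # [3.0]

Events flagged this way could then be used to prompt a retrospective
verbal protocol, pairing the objective record with the driver's account
of what was happening at the time.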

Limitations of data collection techniques. All methods have
limits, but combinations of the methods can compensate. A more
general limit is that all of these methods document existing behavior.
Designing to support existing behavior means that new
controls, displays, or other performance aids might simply enable
people to do the same tasks better, but might not produce dramatic
innovation. Innovation requires the analysis to focus on underlying
goals and needs, and identify different ways of accomplishing
these goals.

One way to go beyond describing existing tasks is to evaluate
the underlying characteristics of the environment and the control
requirements of the system. In driving, this would mean examining
the reason why people get into their cars. Often, such an analysis
reveals new ways of doing things that might not be discovered by
talking with people in a focus group.

Innovation based on data requires
analysts to go beyond
current activities and identify
better ways to achieve users’
goals.