
Many of the decision tasks studied in human factors require diagnosis, which is the process of inferring the underlying or “true” state of a system. Examples of inferential diagnosis include medical diagnosis, fault diagnosis of a mechanical or electrical system, inference of weather conditions based on measurement values or displays, and so on. Sometimes this diagnosis is of the current state, and sometimes it is of the predicted or forecast state, as in weather forecasting or economic projections.

The cognitive processes of problem solving and troubleshooting are closely linked because they have many overlapping elements. Both start with a difference between an initial “state” and a final “goal state” and typically require a number of cognitive operations to reach the latter. The identity of those operations is often not immediately apparent to the person engaged in problem-solving behavior. Troubleshooting is often embedded within problem solving in that it is sometimes necessary to understand the identity of a problem before solving it. Thus, we may need to understand why our car engine does not start (troubleshooting) before trying to implement a solution (problem solving). Although troubleshooting may often be a step within a problem-solving sequence, problem solving may occur without troubleshooting if the problem is solved through trial and error or if a solution is encountered through serendipity. While both problem solving and troubleshooting involve attaining a state of knowledge, both also typically involve performing specific actions. Thus, troubleshooting usually requires a series of tests whose outcomes are used to diagnose the problem, whereas problem solving usually involves actions to implement the solution. Both are iterative processes of perceptual, cognitive, and response-related activities.

Both problem solving and troubleshooting impose heavy cognitive demands, which limit human performance [461, 462]. Many of these limits are manifest in the heuristics and biases discussed earlier in the chapter in the context of decision making. In troubleshooting, for example, people usually maintain no more than two or three active hypotheses in working memory as to the possible source of a problem [463]. More than this number overloads the limited capacity of working memory, since each hypothesis is complex enough to form more than a single chunk. Furthermore, when testing hypotheses, there is a tendency to focus on only one hypothesis at a time in order to confirm or reject it. Thus, in troubleshooting our car we will probably assume one problem and perform tests to confirm that it is the problem.

Naturally, troubleshooting success depends on attending to the appropriate cues and test outcomes. This dependency makes troubleshooting susceptible to attentional and perceptual biases. The operator may attend selectively to very salient outcomes (bottom-up processing) or to outcomes that are anticipated (top-down processing). As we consider the first of these potential biases, it is important to realize that the least salient stimulus or event is the nonevent. People do not easily notice the absence of something [433]. Yet the absence of a symptom can often be a very valuable and diagnostic tool in troubleshooting, allowing faulty hypotheses of what might be wrong to be eliminated. For example, the fact that a particular warning light is not on could eliminate a number of competing hypotheses from consideration.

 

7.6.1 Principles for Improving Problem Solving and Troubleshooting

The systematic errors associated with troubleshooting suggest several design principles.

1. Present alternative hypotheses. An important bias in troubleshooting, resulting from top-down or expectancy-driven processing, is often referred to as cognitive tunneling, or confirmation bias [464, 407]. In troubleshooting, this is the tendency to stay fixated on a particular hypothesis (the one chosen for testing), look for cues to confirm it (top-down expectancy guiding attention allocation), and interpret ambiguous evidence as supportive (top-down expectancy guiding perception). In problem solving, the corresponding phenomenon is to become fixated on a particular solution and stay with it even when it appears not to be working. Decision aids can challenge the person's hypothesis and highlight disconfirming evidence.

2. Create displays that can act as an external mental model. These cognitive biases are more likely to manifest when two features characterize the system under investigation. First, high system complexity (the number of system components and their degree of coupling or links) makes troubleshooting more difficult [465]. Complex systems are more likely to produce incorrect or “buggy” mental models [466], which can hinder the selection of appropriate tests or the correct interpretation of test outcomes. Second, intermittent failures of a given system component turn out to be particularly difficult to troubleshoot [462]. A display that shows the underlying system structure, such as the flow through the network of pipes in a refinery, can remove the burden of remembering that information.

3. Create systems that encourage alternative hypotheses. People generate a limited number of hypotheses because of working memory limitations [390]. Thus, people will bring in somewhere between one and four hypotheses for evaluation. Because of this, people often fail to consider all relevant hypotheses [351]. Under time stress, decision makers often consider only a single hypothesis [467]. This degrades the decision quality of novices far more than that of experts: the first option considered by an expert is likely to be reasonable, but the first option considered by a novice often is not. Systems that make it easy for people to suggest many alternative hypotheses make it more likely that a complete set of hypotheses will be considered.

7.5 Situation Awareness


The diagnosis error made by Amy, the medical specialist in our vignette, can be examined more thoroughly using the concept of situation awareness (SA). Situation awareness characterizes people's awareness and understanding of dynamic changes in their environment [447, 448, 449]. A pilot loses SA whenever he or she suffers a catastrophic controlled flight into terrain [450, 229], and as we shall see in Chapter 16, control room operators at the Three Mile Island nuclear power plant lost SA when they believed the water level in the plant to be too high rather than too low, a misdiagnosis that led to a catastrophic release of radioactive material [395].

SA is “the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future” [383] (p. 36). These three levels (perception and selective attention, understanding, and prediction) must be applied to a specific situation. Thus, a person cannot be said to have SA without specifying what that awareness is (or should be) about. A car driver might have good awareness of navigational information and time (where I am and how much time it will take me to drive to my destination) but poor awareness of the vehicle ahead that is merging onto the highway. Improving situation awareness for navigation and for the merging vehicle would require very different designs. Note that SA neither defines nor incorporates action; action concerns the decisions made from one's awareness or assessment of the situation.

Many elements of microcognition support SA and were covered in the previous chapter. Selective attention is necessary for the first level, while the second level of understanding depends very much on both working memory and long-term memory. The third level, projection and prediction, has not yet been discussed but will be considered in more detail in the planning and scheduling section. In addition, mental models guide SA development by defining what information people pursue and how that information is interpreted. For example, Amy's mental model of operating room procedures might guide her to ask a nurse for the estimated completion time of the perforated viscus procedure. She asks only about this procedure because her mental model of the other procedures gives her a good sense of when they will be done, so she needs information only about the procedure with an uncertain completion time.

As noted above, situation awareness is not the same as performance. One can have good performance (a lucky decision outcome that was correct) without good awareness. Correspondingly, the pilot of an out-of-control aircraft may have very good situation awareness of the loss of stability, but be unable to perform the necessary actions to recover.

 

7.5.1 Measuring Situation Awareness

The importance of SA is often realized after an accident by inferring that the loss of SA was partially responsible. In controlled-flight-into-terrain accidents it is almost always assumed that the pilot lost awareness of the aircraft's altitude over the terrain [450]. However, “measuring” SA after the fact by assuming its absence is not the same as measuring how well a particular system or operator maintains SA in the absence of an unexpected event [451].

A popular technique for SA measurement is the SA global assessment technique (SAGAT) [452], in which the operator is briefly interrupted in the performance of a dynamic task and asked questions about it: for example, asking a driver to identify the location of other road traffic [453], asking an anesthesiologist about the patient's state [454], or asking a pilot to identify the direction to the nearest hazardous terrain [455]. Sometimes the display is blanked after the question to ensure that the answer reflects information stored in memory. One can then assess the accuracy of the answers to such questions. Alternatively, one can assess the time required to retrieve the correct answer from a display that remains visible, a technique called SPAM (situation present assessment method) [456]. SA can sometimes be measured by subjective evaluation (“rate your SA on a scale of 1 to 10” [457]), an approach embodied in a widely used measurement tool called SART (situation awareness rating technique) [458]. However, a concern about the validity of such self-rating techniques is that people are not always aware of what they are not aware of. This issue of metacognition is addressed at the end of this chapter.

SA can be an important tool for accident analysis, helping analysts understand when its loss was a contributing factor [450]. To the extent that accidents may be caused by SA loss, an added implication is that systems should be designed and, when appropriate, certified to support SA. This becomes important when federal regulators are responsible for certification, as is the case with new aircraft or nuclear power plants. Although situation awareness is most commonly applied to individuals, distributed situation awareness merits consideration when multiple people work together [459]. Distributed situation awareness refers to the SA that the members of a team jointly hold. Distributed SA, like the concept of a team mental model, can guide design when the focus shifts from individual to team performance. We cover these issues in more depth in Chapter 18.
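To make the two measurement approaches concrete, here is a minimal sketch of how SAGAT and SPAM results might be summarized: SAGAT as the proportion of correct answers to probes posed while the display is blanked, and SPAM as the response time for correct answers retrieved from a still-visible display. The probe data and field names below are illustrative assumptions, not part of either published method.

```python
# Minimal sketch (illustrative data, hypothetical field names) of summarizing
# SAGAT probe accuracy and SPAM correct-response latency.
from statistics import mean

sagat_probes = [
    {"query": "location of nearest traffic", "correct": True},
    {"query": "current patient heart rate", "correct": False},
    {"query": "direction to hazardous terrain", "correct": True},
]
spam_probes = [
    {"query": "which aircraft lands next", "correct": True, "response_time_s": 2.1},
    {"query": "current sector count", "correct": True, "response_time_s": 3.4},
    {"query": "next conflict pair", "correct": False, "response_time_s": 5.0},
]

sagat_accuracy = mean(1.0 if p["correct"] else 0.0 for p in sagat_probes)
# SPAM emphasizes how quickly a correct answer is retrieved, so average only correct probes.
spam_latency = mean(p["response_time_s"] for p in spam_probes if p["correct"])

print(f"SAGAT accuracy: {sagat_accuracy:.2f}")                       # e.g., 0.67
print(f"SPAM mean correct-response latency: {spam_latency:.1f} s")   # e.g., 2.8 s
```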

7.5.2 Principles for Improving Situation Awareness

Specific principles that follow from these considerations and from a recent review [449] include:

1. Create displays that help people notice changes (level 1 SA). Particularly in multitasking situations with dynamic systems, displays should highlight changes to make them easy for people to notice. Chapter 8 addresses issues of display layout to support SA.

2. Make the situation easy to understand (level 2 SA). Present information about the state of the system relative to the person's goals rather than requiring people to interpret, mentally combine, and transform information. This might also mean bringing together several display elements that might otherwise be placed in different locations.

3. Keep the operator somewhat “in the loop.” This issue will be addressed in more detail in Chapter 11 (Automation). The critical concept introduced here is related to the generation effect: people are more likely to remember actions, and the consequences of actions, if they themselves generated the action than if they watched another agent generate the same action. Manufacturers of self-driving cars are struggling to find ways of keeping the driver somewhat in the loop (e.g., hands on the wheel), even as automation steers the car, in order to preserve SA should the automation fail.

4. Help people project the state of the system into the future (level 3 SA). This is particularly important when the system responds slowly, like a supertanker, an industrial oven, or the air traffic system. Here, a display that shows the future state, such as the predictive displays we discuss in Chapter 8, relieves the person of mentally simulating and projecting future states (a minimal projection sketch follows this list).

5. Organize information around goals. Rather than arbitrary or technology-oriented placement of information, displays should cluster information according to the goals the person is trying to achieve.

6. Display to broaden attention. Recognizing that SA may be most critical for dealing with unexpected situations, displays should avoid narrowing people's attention to a limited array of information that is specific to a particular task or limited to routine situations. Supporting SA when unexpected things happen typically means adding information to the display, and this information must be carefully integrated to avoid clutter.

7. Train for SA. When training for SA, it is important to realize that training for routine performance may conflict with training to maintain SA [460]. The former focuses on the information needed for the task as it was intended to be performed; the latter should focus on what is often a broader scope of selective attention, to remain aware of the state of the world should the system fail. Many of the biases relevant to diagnosis, discussed above, are paralleled by biases in situation awareness, for example the confirmation bias or anchoring. Hence, debiasing training can be effective here.
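As a concrete illustration of principle 4, a predictive display typically extrapolates the current state forward using its rate of change (and sometimes its acceleration). The sketch below shows one simple way such a projection might be computed; the function name, variable names, and the second-order extrapolation are illustrative assumptions rather than a prescribed algorithm.

```python
def project_state(value: float, rate: float, accel: float, lead_time_s: float) -> float:
    """Extrapolate a slowly responding variable (e.g., oven temperature, ship heading)
    forward by lead_time_s seconds using a simple second-order projection."""
    return value + rate * lead_time_s + 0.5 * accel * lead_time_s ** 2

# Example: an industrial oven at 180 C, rising 0.4 C/s and slowly decelerating.
predicted = project_state(value=180.0, rate=0.4, accel=-0.002, lead_time_s=60.0)
print(f"Predicted temperature in 60 s: {predicted:.1f} C")  # ~200.4 C
```

Showing such a projected value on the display spares the operator from doing this extrapolation mentally.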

[End of section 7.4]


As we will discuss in the following chapter, sources of information that need to be integrated in diagnosis should be made available simultaneously (not sequentially, to mitigate anchoring) and in close display proximity so that all can be accessed with minimal effort. Emergent features of object displays can sometimes facilitate the integration process in diagnosis [442, 443, 444].

Automation and decision support tools. Finally, automation and expert systems offer promise in supporting human decision making. This is described in much more detail in Chapter 11, but to provide a link here, such support can be roughly categorized into front-end (diagnosis and situation assessment) and back-end (treatment, choice, and course-of-action recommendations) support. This dichotomy is well illustrated in the two major classes of medical decision aids [445, 446]. Because automation is so closely bound to decision support tools and expert-system decision advisors, we postpone further discussion of this topic until Chapter 11, where the entire chapter is devoted to human-automation interaction.

[Continuation of 7.4, from the Framing bias item to the middle of page 228]

In contrast, suppose you are late for a job interview across town. You can speed, with a high chance of getting to the appointment on time but also the risk of getting caught by the police, being fined, and being very late for the appointment. Alternatively, you can choose to drive the speed limit and certainly be slightly late. Here the choice is between two negatives, a risky one and a sure thing. You are “caught between a rock and a hard place,” and under such circumstances people tend to be risk seeking [413, 410] (a small worked example follows below). The second of these contexts, the negative frame of choice, is often characteristic of real-life decisions. For example, in addition to the speeding choice above, consider a company with major safety violations in its plant. Management can choose to invest heavy funding in addressing them through new equipment, hiring safety consultants, and pulling workers off the line for safety training, thus incurring the sure loss of time and money. Alternatively, they can choose to take the risk that there will be neither a serious injury nor a surprise inspection from federal safety inspectors. All too often, the framing bias will lead to an inclination toward the second option, at the expense of worker safety.

A direct expression of this form of the framing bias is known as the sunk cost bias [414, 415]. This bias affects individual investors who hesitate to sell losing stocks (a certain loss) but tend to sell winning stocks to lock in a gain. Likewise, when you have invested a lot of money in a project that has “gone sour,” there is a tendency to keep it going in the hope that it will turn around. Similarly, managers and engineers tend to avoid admitting a certain cost when replacing obsolete equipment. The sunk cost bias describes the tendency to choose the risky loss over the sure one, even when the rational, expected-value choice would be to abandon the project. Because people tend to incur greater risk in situations involving losses, decisions should be framed in terms of gains to counteract this tendency. Sunk cost bias also makes it difficult for you to make money in the stock market.

7. Default heuristic. Faced with uncertainty regarding what choice to make, people often adopt the default alternative [416]. Most countries use their drivers' licenses to allow people to specify whether or not to donate their organs in the event of a fatal crash. Countries differ according to whether people need to opt in and decide to donate, or opt out and decide not to donate. Over 70% of people follow the default and let the designers of the form decide for them. A similarly large effect is seen for people choosing to enroll in a retirement savings plan rather than having to opt out: defaulting people into a retirement plan increased participation from about 50% to about 90% [417, 418].
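To see why the “sure loss versus risky loss” frame matters, the sketch below works through an illustrative version of the speeding example. The probabilities and delays are made-up assumptions; the point is that even when the gamble has a worse expected loss than the sure thing, a loss frame tends to push people toward the gamble.

```python
# Illustrative numbers only: expected loss (minutes late) for the two options
# in the job-interview example.
def expected_loss(outcomes):
    """outcomes: list of (probability, minutes_late) pairs."""
    return sum(p * loss for p, loss in outcomes)

drive_limit = expected_loss([(1.0, 10)])        # sure thing: certainly 10 minutes late
speed = expected_loss([(0.8, 0), (0.2, 60)])    # gamble: usually on time, sometimes ticketed and 60 minutes late

print(f"Sure option expected loss:  {drive_limit:.0f} min")  # 10 min
print(f"Risky option expected loss: {speed:.0f} min")        # 12 min
# The risky option is worse in expected value, yet under a loss frame
# people tend to prefer it (risk seeking for losses).
```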

7.4.2 Benefits of Heuristics and the Cost of Biases

The long list of decision-making biases and heuristics above may suggest that people are not very effective decision makers in everyday situations, and that human contributions to decision making are a problem that should be fixed. However, this perspective neglects the fact that most people make good decisions most of the time and have the flexibility to deal with situations that cannot be reduced to an equation. The list of biases accounts for the infrequent circumstances, like those faced by the decision makers at the Three Mile Island nuclear plant, when decisions produce bad outcomes. One reason that most decisions are good is that heuristics are accurate most of the time. A second reason is that people have a profile of resources: information-processing capabilities, experiences, and decision aids (e.g., a decision matrix) that they can adapt to the situations they face. Experts are proficient in adjusting their decision strategies. To the extent that people have sufficient resources and can adapt them, they make good decisions. When people are not able to adapt, such as when they have little experience with the situation, poor decisions can result [357]. The focus can be either on the generally high quality of most decisions or on the errors due to biases associated with heuristics. Both of these perspectives are valid, but focusing on the errors supports the search for human factors solutions to eliminate, or at least mitigate, the biases that do show up. It is to this that we now turn.

7.4.3 Principles for Improving Decision Making

Decision making is often an iterative cycle in which decision makers are adaptive, adjusting their responses according to their experience, the task situation, their cognitive ability, and the available decision-making aids. It is important to understand this adaptive decision process because system design, training, and decision aids need to support it; attempts to improve decision making without understanding this process tend to fail. In this section, we briefly discuss some possibilities for improving human decision making: task redesign, including choice architecture and procedures; training; displays; and automated decision support systems.

Task redesign. We often jump to the conclusion that poor performance in decision making means we must do something “to the person” to make him or her a better decision maker. However, sometimes a change in the system can support better decision making, eliminating the need for the person to change. As described in Chapter 1, decision making may be improved by task design, and changing the system should be considered before changing the person through training or even providing a computer-based decision aid. For example, consider the situation in which the removal of a few control rods led to a runaway nuclear reaction, which resulted in 3 deaths and 23 cases of exposure to high levels of radioactivity. Learning from this experience, reactor designers now create reactors that remain stable even when several control rods are removed [227]. Creating systems with greater stability leaves a greater margin for error in decisions and can also make it easier to develop accurate mental models.

Choice architecture. The structure of the interaction influences choice in much the same way that the architecture of a building influences the movement of people through it [18].
Choice architects influence decisions by recognizing the natural cognitive tendencies we have discussed and presenting people with information and options that take advantage of these tendencies to generate good decisions. The following principles show how choice architecture can nudge people toward better decisions [419].

1. Limit the number of options. Because too many options place a high burden on the decision maker, the number of options should be limited to the fewest that still encourage exploration. Although the appropriate number depends on the specifics of the decision maker and the situation, four to five options, none of which is better on all dimensions, is often appropriate. Fewer options should be offered if decision makers are less capable, such as older people, those in a time-pressured situation, or less numerate decision makers faced with numerical options [420, 419].

2. Select useful defaults. The effect of defaults on organ donation rates demonstrates the power of defaults: people often choose the default option. Options for designing defaults include a random default, a uniform choice for all users, forced choice, a persistent default where the system remembers previous settings, and a predictive default where the system picks based on user characteristics (a small sketch of this logic follows this list). If there is no time pressure and the choice is important, then active (forced) choice should be used. If there is an obvious benefit to a particular choice, then a uniform default for all users should be used, such as when organizations select double-sided printing as the default [421]. As laptops, tablets, and desktop computers, as well as phones, TVs, and cars, become more integrated, predictive defaults become more feasible and valuable.

3. Make choices concrete. People focus on concrete, immediate outcomes and tend to be overly optimistic about the future regarding available time and money. To counteract people's tendency to neglect the abstract future situation, a limited window of opportunity can focus their attention, as in “offer ends midnight tonight.” Another approach is to translate abstract future-value choices into immediate, salient consequences. For example, showing people their future self can lead them to invest for that future self [422]: people who saw realistic computer renderings of older versions of themselves invested more.

4. Create linear, comparable relationships. People tend to struggle with complex transformations and non-linear relationships. Transforming variables into their concrete, linear equivalents promotes better decisions. For example, describing interest rates in terms of the number of payments needed to eliminate debt in three years is more effective than expecting people to calculate the non-linear, compounding effect of interest. Likewise, presenting fuel economy data in terms of gallons per 100 miles rather than miles per gallon eliminates the mental transformation that is needed to compare vehicles [423] (see the numeric sketch below). The units presented should be those directly relevant to the decision.
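Returning to principle 2, the sketch below is a hedged illustration of how the default-design options listed there might be resolved in software. The function, policy names, and fallback ordering are assumptions for illustration, not a standard API.

```python
# Hedged sketch of the default-setting options discussed above (names and policy
# ordering are illustrative assumptions).
from typing import Optional

def resolve_default(policy: str,
                    remembered: Optional[str] = None,
                    predicted: Optional[str] = None,
                    uniform: str = "double-sided") -> Optional[str]:
    """Return the pre-selected option, or None to force an active choice."""
    if policy == "forced_choice":
        return None                  # important, low-time-pressure decisions: make the user choose
    if policy == "persistent" and remembered is not None:
        return remembered            # reuse the user's previous setting
    if policy == "predictive" and predicted is not None:
        return predicted             # infer from user characteristics or other devices
    return uniform                   # one sensible default for everyone (e.g., double-sided printing)

print(resolve_default("persistent", remembered="single-sided"))  # single-sided
print(resolve_default("predictive", predicted="double-sided"))   # double-sided
print(resolve_default("forced_choice"))                          # None -> prompt the user
```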
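As a small numeric illustration of principle 4 (the fuel-economy figures here are assumed, not taken from the cited study), converting miles per gallon to gallons per 100 miles makes differences in fuel use directly comparable:

```python
# Illustrative numbers: fuel economy expressed two ways. Gallons per 100 miles
# is linear in fuel used, so differences can be compared directly.
vehicles = {"compact": 34.0, "sedan": 28.0, "SUV": 17.0}  # miles per gallon (assumed values)

for name, mpg in vehicles.items():
    gal_per_100mi = 100.0 / mpg
    print(f"{name:8s} {mpg:5.1f} mpg = {gal_per_100mi:4.1f} gal/100 mi")

# Going from 17 to 28 mpg saves about 2.3 gallons per 100 miles, while going from
# 28 to 34 mpg saves only about 0.6 gallons, a difference that is hard to see
# from the mpg numbers alone.
```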

 

Consistent with our previous discussion of skill-, rule-, and knowledge-based performance, how people make decisions depends on the situation. People tend to make decisions in one of three ways: intuitive skill-based processing, heuristic rule-based processing, and analytical knowledge-based processing. Making decisions as described by the normative models is an example of analytical decision making, and using satisficing heuristics is an example of rule-based decision making. Intuitive decision making occurs when people recognize the required response without thinking. As we learned in the context of Figure 7.2, people with a high degree of expertise often approach decision making in a fairly automatic, pattern-matching style, just as Amy did with her first diagnosis. Recognition-primed decision making (RPD) describes this process in detail [374]. In most instances, experts simply recognize a pattern of cues and recall a single course of action, which is then implemented.

In spite of the prevalence of rapid pattern-recognition decisions, there are cases where decision makers will use analytical methods, such as when the decision maker is unsure of the appropriate course of action. The decision maker resolves the uncertainty by imagining the consequences of what might happen if a course of action is adopted: a mental simulation, where the decision maker thinks, “if I do this, what is likely to happen?” [375]. Mental simulation can help assess the alternative, action, or plan under consideration [376]. In this process, the mental simulation plays out possible solutions based on information from the environment and the decision maker's mental model. Mental simulation shows which options are the most promising and also generates expectations for other cues not previously considered [377]. Also, if uncertainty exists and time is adequate, decision makers will spend time evaluating the current situation assessment, modifying the retrieved action plan, or generating alternative actions [356].

Experts adapt their decision-making strategy to the situation. Table 7.2 summarizes some of the factors that lead to intuitive rule-based decision making and those that lead to analytical knowledge-based decision making. These characteristics of the person, task, and technology influence the use of heuristics as well as the prevalence of the biases that sometimes accompany those heuristics, which we discuss in detail in the next section.

7.4.1 Vulnerabilities of Heuristics: Biases

Cognitive heuristics are rules of thumb that provide easy ways of making decisions. Heuristics are usually very powerful and efficient [378], but they do not always guarantee the best solution [354, 379]. Unfortunately, because they represent simplifications, heuristics occasionally lead to systematic flaws and errors. These systematic flaws represent deviations from the normative model and are sometimes referred to as biases. Experts tend to avoid these biases because they draw from a large set of experiences and are vigilant for small changes in the pattern of cues that might suggest the heuristic is inappropriate. To the extent that a situation departs from these experiences, even experts will fall prey to the biases associated with various heuristics. Although the list of heuristics is large (as many as 37 [380]), the following presents some of the most notorious ones.

Acquire and Integrate Cues: Heuristics and Biases. The first stage of the decision process begins with attending to information and integrating it to understand the situation or form a situation assessment (e.g., to support stage 2).

1. Attention to a limited number of cues. Due to working memory limitations, people can use only a relatively small number of cues to develop a picture of the world or system. This is one reason why configural displays, which visually integrate several variables or factors into one display, are useful (see Chapter 8 for a description).

2. Anchoring and cue primacy. When people receive cues over a period of time, there are certain trends or biases in the use of that information. The first few cues receive greater weight than subsequent information, an effect called cue primacy [381]. It often leads people to “anchor” on initial evidence and is therefore sometimes called the anchoring heuristic [354], characterizing the familiar phenomenon that first impressions are lasting. Amy anchored on the cues supporting her initial diagnosis and gave little processing to the additional information available in the phone call from the patient 24 hours later. Importantly, when assessing a dynamic, changing situation, the anchoring bias can be truly detrimental because older information becomes progressively less reliable, even though that older information was, by definition, encountered first and hence served as the anchor. The order of information has an effect because people use the information to construct plausible stories or mental models of the world or system, and these models differ depending on which information is used first [382]. The key point is that information processed early is often most influential.

3. Cue salience. Perceptually salient cues are more likely to capture attention and be given more weight [383, 11]; see also Chapter 6. As you would expect, salient cues in displays are things such as information at the top of a display, the loudest alarm, the largest display element, the loudest and most confident-sounding voice in the room, and so forth. Unfortunately, the most salient cue is not necessarily the most diagnostic, and sometimes very subtle cues, such as the faint discoloration observed by Amy, are not given much weight.

4. Overweighting of unreliable cues. Not all cues are equally reliable. In a trial, for example, some witnesses will always tell the truth, others might have faulty memories, and still others might intentionally lie. However, when integrating cues, people often simplify the process by treating all cues as if they were equally valid and reliable. The result is that people tend to give too much weight to unreliable information [384, 385] (a small weighting sketch follows this list).
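As a minimal illustration of why treating all cues as equally reliable is a simplification, the sketch below contrasts an equal-weight combination of cues with one weighted by each cue's assumed reliability. The cue values, reliabilities, and weighting scheme are illustrative assumptions, not a model taken from the cited studies.

```python
# Illustrative sketch: combining diagnostic cues with and without accounting
# for cue reliability.
cues = [
    # (evidence for the hypothesis on a -1..+1 scale, reliability 0..1)
    (+0.9, 0.2),   # unreliable but strongly supportive report
    (+0.8, 0.3),   # another unreliable supportive report
    (-0.5, 0.9),   # highly reliable cue that contradicts the hypothesis
]

equal_weight = sum(v for v, _ in cues) / len(cues)
reliability_weighted = sum(v * r for v, r in cues) / sum(r for _, r in cues)

print(f"Equal weighting:       {equal_weight:+.2f}")           # +0.40, looks supportive
print(f"Reliability weighting: {reliability_weighted:+.2f}")   # -0.02, evidence is actually weak
```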

Interpret and Assess: Heuristics and Biases. After a limited set of cues is processed in working memory, the decision maker generates and interprets the information, often by retrieving similar situations from long-term memory. These similar situations represent hypotheses about how the current situation relates to past situations. A number of heuristics and biases affect this process:

1. Availability. The availability heuristic reflects people's tendency to make certain types of judgments or assessments, for example estimates of frequency, by assessing how easily the state or event is brought to mind [386, 387, 388]. People more easily retrieve hypotheses that have been considered recently and that are hence more available in memory. The implication is that although people try to generate the most likely hypotheses, the reality is that if something comes to mind relatively easily, they assume it is common and therefore a good hypothesis. As an example, if a physician readily thinks of a hypothesis, such as acute appendicitis, he or she will assume it is relatively common, leading to the judgment that it is a likely cause of the current set of symptoms. Unusual illnesses tend not to be the first things that come to a physician's mind; Amy did not think of the less likely condition. In actuality, availability in memory may not be a reliable basis for estimating frequency.

2. Representativeness. Sometimes people diagnose a situation because the pattern of cues “looks like,” or is representative of, the prototypical example of that situation. This is the representativeness heuristic [353, 389], and it usually works well; however, it can bias decisions when the perceived situation is slightly different from the prototypical example even though the pattern of cues is similar or representative.

3. Overconfidence. People are often biased in their confidence with respect to the hypotheses they have brought into working memory [390, 351], believing that they are correct more often than they actually are and reflecting the more general tendency toward overconfidence in metacognitive processes, as described in Chapter 6 [391]. Such overconfidence appears to grow when judgments are predictions about the future (rather than assessments of the current state) and when predictions become more difficult [11]. As a consequence, people are less likely to seek out evidence for alternative hypotheses or to prepare for the circumstance that they may be wrong. Less skilled people are more likely to overestimate their ability, even when they understand their relative ability [392].

4. Cognitive tunneling. As we have noted earlier in the context of anchoring, once a hypothesis has been generated or chosen, people tend to underutilize subsequent cues. We remain stuck on our initial hypothesis, a process introduced in the previous chapter as cognitive tunneling [393]. Examples of cognitive tunneling abound in complex systems [394]. Consider the Three Mile Island disaster, in which a relief valve failed and caused some of the displays to indicate a rise in the level of coolant [395]. Operators mistakenly thought that emergency coolant flow should be reduced and persisted in this hypothesis for over two hours; only when a supervisor arrived with a fresh perspective did the course of action get reversed. Notice that cognitive tunneling is different from primacy, which occurs when the decision maker is first generating hypotheses.