
Continuation of 7.4, from the Framing bias item to the middle of page 228

In contrast, suppose you are late for a job interview across town. You can speed, with a high chance of getting to the appointment on time, but also incur the risk of being caught by the police, fined, and made very late for the appointment. Alternatively, you can choose to drive the speed limit and certainly be slightly late. Here the choice is between two negatives, a risky one and a sure thing. You are "caught between a rock and a hard place", and under such circumstances people tend to be risk-seeking [413, 410]. The second of these contexts, the negative frame of choice, is often characteristic of real-life decisions. For example, in addition to the speeding choice above, consider a company with major safety violations in its plant. Management can choose to invest heavy funding in addressing them through new equipment, hiring safety consultants, and pulling workers off the line for safety training, thus incurring the sure loss of time and money. Alternatively, they can choose to take the risk that there will be neither a serious injury nor a surprise inspection from federal safety inspectors. All too often, the framing bias will lead to an inclination toward the second option, at the expense of worker safety.

A direct expression of this form of the framing bias is known as the sunk cost bias [414, 415]. This bias affects individual investors who hesitate to sell losing stocks (a certain loss), but tend to sell winning stocks to lock in a gain. Likewise, when you have invested a lot of money in a project that has "gone sour", there is a tendency to keep it in the hope that it will turn around. Similarly, managers and engineers tend to avoid admitting a certain cost when replacing obsolete equipment. The sunk cost bias describes the tendency to choose the risky loss over the sure one, even when the rational, expected value choice would be to abandon the project. Because people tend to incur greater risk in situations involving losses, decisions should be framed in terms of gains to counteract this tendency. Sunk cost bias makes it difficult for you to make money in the stock market.

7. Default heuristic. Faced with uncertainty regarding what choice to make, people often adopt the default alternative [416]. Most countries use their drivers' licenses to allow people to specify whether or not to donate their organs in the event of a fatal crash. Countries differ according to whether people need to opt in and decide to donate, or opt out and decide not to donate. Over 70% of people follow the default and let the designers of the form decide for them. A similarly large effect is seen for people choosing to enroll in a retirement savings plan or having to opt out: defaulting people into a retirement plan increased participation from about 50% to about 90% [417, 418].

7.4.2 Benefits of Heuristics and the Cost of Biases

The long list of decision-making biases and heuristics above may suggest that people are not very effective decision makers in everyday situations, and might suggest that human contributions to decision making are a problem that should be fixed. However, this perspective neglects the fact that most people make good decisions most of the time and have the flexibility to deal with situations that cannot be reduced to an equation. The list of biases accounts for the infrequent circumstances, like the decision makers in the Three Mile Island nuclear plant, when decisions produce bad outcomes. One reason that most decisions are good is that heuristics are accurate most of the time. A second reason is that people have a profile of resources: information-processing capabilities, experiences, and decision aids (e.g., a decision matrix) that they can adapt to the situations they face. Experts are proficient in adjusting their decision strategies. To the extent that people have sufficient resources and can adapt them to the situation, they make good decisions. When people are not able to adapt, such as when they have little experience with the situation, poor decisions can result [357]. The focus can be either on the general high quality of most decisions, or on the errors due to biases associated with heuristics. Both of these approaches are equally valid, but focusing on the errors supports the search for human factors solutions to eliminate, or at least mitigate, those biases that do arise. It is to this that we now turn.

7.4.3 Principles for Improving Decision Making

Decision making is often an iterative cycle in which decision makers adapt, adjusting their responses according to their experience, the task situation, cognitive ability, and the available decision-making aids. It is important to understand this adaptive decision process because system design, training, and decision aids need to support it. Attempts to improve decision making without understanding this process tend to fail. In this section, we briefly discuss some possibilities for improving human decision making: task redesign, including choice architecture and procedures; training; displays; and automated decision support systems.

Task redesign. We often jump to the conclusion that poor performance in decision making means we must do something "to the person" to make him or her a better decision maker. However, sometimes a change in the system can support better decision making, eliminating the need for the person to change. As described in Chapter 1, decision making may be improved by task design. Changing the system should be considered before changing the person through training or even providing a computer-based decision aid. For example, consider the situation in which the removal of a few control rods led to a runaway nuclear reaction, which resulted in 3 deaths and 23 cases of exposure to high levels of radioactivity. Learning from this experience, reactor designers now create reactors that remain stable even when several control rods are removed [227]. Creating systems with greater stability leaves a greater margin for error in decisions and can also make it easier to develop accurate mental models.

Choice architecture. The structure of the interaction influences choice in much the same way the architecture of a building influences the movement of people through it [18].
Choice architects influence decisions by recognizing the natural cognitive tendencies we have discussed and presenting people with information and options that take advantage of these tendencies to generate good decisions. The following principles show how choice architecture can nudge people toward better decisions [419].

1. Limit the number of options. Because too many options place a high burden on the decision maker, the number of options should be limited to the fewest that will still encourage exploration of the options. Although the appropriate number depends on the specific elements of the decision maker and situation, a set of four to five options, none of which is better on all dimensions, is often appropriate. Fewer options should be offered if decision makers are less capable, such as older people, those in a time-pressured situation, or less numerate decision makers faced with numerical options [420, 419].

2. Select useful defaults. The effect of defaults on organ donation rates demonstrates the power of defaults: people often choose the default option. Options for designing defaults include a random default, a uniform choice for all users, forced choice, a persistent default where the system remembers previous settings, and a predictive default where the system picks based on user characteristics. If there is no time pressure and the choice is important, then active choice should be used. If there is an obvious benefit to a particular choice, then a uniform default for all users should be used, such as when organizations select double-sided printing as the default [421]. As laptops, tablets, and desktop computers, as well as phones, TVs, and cars, become more integrated, predictive defaults become more feasible and valuable.

3. Make choices concrete. People focus on concrete, immediate outcomes and tend to be overly optimistic about the future regarding available time and money. To counteract people's tendency to neglect the abstract future situation, a limited window of opportunity can focus their attention, as in "offer ends midnight tonight." Another approach is to translate abstract future-value choices into immediate, salient consequences. For example, show people their future self so they can invest for that future self [422]: people who saw realistic computer renderings of older versions of themselves invested more.

4. Create linear, comparable relationships. People tend to struggle to consider complex transformations and non-linear relationships. Transforming variables to their concrete, linear equivalent promotes better decisions. For example, describing interest rates in terms of the number of payments needed to eliminate debt in three years is more effective than expecting people to calculate the non-linear, compounding effect of interest. Likewise, presenting fuel economy data in terms of gallons per 100 miles rather than miles per gallon eliminates the mental transformation that is needed to compare vehicles [423]. The units presented should be those directly relevant to the decision (see the sketch below).
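As an illustration of this principle (not from the original text), the following Python sketch converts miles per gallon into gallons per 100 miles. The linear unit makes the comparison obvious: an upgrade from 12 to 14 MPG saves more fuel over the same distance than an upgrade from 30 to 40 MPG, which is hard to see from the MPG numbers alone. The specific vehicle figures are hypothetical.

```python
def gallons_per_100_miles(mpg: float) -> float:
    """Convert miles per gallon to gallons consumed per 100 miles."""
    return 100.0 / mpg

# Hypothetical comparison: which upgrade saves more fuel over 100 miles?
upgrades = [("12 -> 14 MPG", 12, 14), ("30 -> 40 MPG", 30, 40)]

for label, old_mpg, new_mpg in upgrades:
    saved = gallons_per_100_miles(old_mpg) - gallons_per_100_miles(new_mpg)
    print(f"{label}: saves {saved:.2f} gallons per 100 miles")

# Output:
# 12 -> 14 MPG: saves 1.19 gallons per 100 miles
# 30 -> 40 MPG: saves 0.83 gallons per 100 miles
```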

 

Consistent with our previous discussion of skill-, rule-, and knowledge-based performance, how people make decisions depends on the situation. People tend to make decisions in one of three ways: intuitive skill-based processing, heuristic rule-based processing, and analytical knowledge-based processing. Making decisions as described by the normative models is an example of analytic decision making, and using satisficing heuristics is an example of rule-based decision making. Intuitive decision making occurs when people recognize the required response without thinking. As we learned in the context of Figure 7.2, people with a high degree of expertise often approach decision making in a fairly automatic pattern-matching style, just as Amy did with her first diagnosis. Recognition-primed decision making (RPD) describes this process in detail [374]. In most instances, experts simply recognize a pattern of cues and recall a single course of action, which is then implemented. In spite of the prevalence of rapid pattern-recognition decisions, there are cases where decision makers will use analytical methods, such as when the decision maker is unsure of the appropriate course of action. The decision maker resolves the uncertainty by imagining the consequences of what might happen if a course of action is adopted: a mental simulation, where the decision maker thinks, "If I do this, what is likely to happen?" [375]. Mental simulation can help assess the alternatives, action, or plan under consideration [376]. In this process, the mental simulation can play out possible solutions based on information from the environment and the decision maker's mental model. Mental simulation shows which options are the most promising, and also generates expectations for other cues not previously considered [377]. Also, if uncertainty exists and time is adequate, decision makers will spend time evaluating the current situation assessment, modifying the retrieved action plan, or generating alternative actions [356]. Experts adapt their decision-making strategy to the situation. Table 7.2 summarizes some of the factors that lead to intuitive rule-based decision making and those that lead to analytical knowledge-based decision making. These characteristics of the person, task, and technology influence the use of heuristics as well as the prevalence of biases that sometimes accompany those heuristics, which we discuss in detail in the next section.

7.4.1 Vulnerabilities of Heuristics: Biases

Cognitive heuristics are rules of thumb that provide easy ways of making decisions. Heuristics are usually very powerful and efficient [378], but they do not always guarantee the best solution [354, 379]. Unfortunately, because they represent simplifications, heuristics occasionally lead to systematic flaws and errors. These systematic flaws represent deviations from the normative model and are sometimes referred to as biases. Experts tend to avoid these biases because they draw from a large set of experiences and are vigilant to small changes in the pattern of cues that might suggest the heuristic is inappropriate. To the extent a situation departs from these experiences, even experts will fall prey to the biases associated with various heuristics. Although the list of heuristics is large (as many as 37 [380]), the following presents some of the most notorious ones.

Acquire and Integrate Cues: Heuristics and Biases. The first stage of the decision process begins with attending to information and integrating it to understand the situation or form a situation assessment (e.g., to support stage 2).

1. Attention to a limited number of cues. Due to working memory limitations, people can use only a relatively small number of cues to develop a picture of the world or system. This is one reason why configural displays that visually integrate several variables or factors into one display are useful (see Chapter 8 for a description).

2. Anchoring and cue primacy. When people receive cues over a period of time, there are certain trends or biases in the use of that information. The first few cues receive greater weight than subsequent information (cue primacy) [381]. This often leads people to "anchor" on initial evidence and is therefore sometimes called the anchoring heuristic [354], characterizing the familiar phenomenon that first impressions are lasting. Amy anchored on the cues supporting her initial diagnosis and gave little processing to additional information available in the phone call from the patient 24 hours later. Importantly, when assessing a dynamic, changing situation, the anchoring bias can be truly detrimental because older information becomes progressively less reliable, even though the older information was, by definition, the first encountered and hence served as the anchor. The order of information has an effect because people use the information to construct plausible stories or mental models of the world or system. These models differ depending on which information is used first [382]. The key point is that information processed early is often the most influential.

3. Cue salience. Perceptually salient cues are more likely to capture attention and be given more weight [383, 11]; see also Chapter 6. As you would expect, salient cues in displays are things such as information at the top of a display, the loudest alarm, the largest display, or the loudest, most confident-sounding voice in the room. Unfortunately, the most salient cue is not necessarily the most diagnostic, and sometimes very subtle cues, such as the faint discoloration observed by Amy, are not given much weight.

4. Overweighting of unreliable cues. Not all cues are equally reliable. In a trial, for example, some witnesses will always tell the truth, others might have faulty memories, and still others might intentionally lie. However, when integrating cues, people often simplify the process by treating all cues as if they were equally valid and reliable. The result is that people tend to give too much weight to unreliable information [384, 385].

Interpret and Assess: Heuristics and Biases. After a limited set of cues is processed in working memory, the decision maker generates and interprets the information, often by retrieving similar situations from long-term memory. These similar situations represent hypotheses about how the current situation relates to past situations. There are a number of heuristics and biases that affect this process:

1. Availability. The availability heuristic reflects people's tendency to make certain types of judgments or assessments, for example, estimates of frequency, by assessing how easily the state or event is brought to mind [386, 387, 388]. People more easily retrieve hypotheses that have been considered recently and are hence more available to memory. The implication is that although people try to generate the most likely hypotheses, the reality is that if something comes to mind relatively easily, they assume it is common and therefore a good hypothesis. As an example, if a physician readily thinks of a hypothesis, such as acute appendicitis, he or she will assume it is relatively common, leading to the judgment that it is a likely cause of the current set of symptoms. Unusual illnesses tend not to be the first things that come to a physician's mind. Amy did not think of the less likely condition. In actuality, availability to memory may not be a reliable basis for estimating frequency.

2. Representativeness. Sometimes people diagnose a situation because the pattern of cues "looks like" or is representative of the prototypical example of that situation. This is the representativeness heuristic [353, 389], which usually works well; however, the heuristic can bias decisions when a perceived situation is slightly different from the prototypical example even though the pattern of cues is similar or representative.

3. Overconfidence. People are often biased in their confidence with respect to the hypotheses they have brought into working memory [390, 351], believing that they are correct more often than they actually are, reflecting the more general tendency for overconfidence in metacognitive processes described in Chapter 6 [391]. Such overconfidence appears to grow when judgments predict the future (rather than assess the current state) and when predictions become more difficult [11]. As a consequence, people are less likely to seek out evidence for alternative hypotheses or to prepare for the possibility that they may be wrong. Less skilled people are more likely to overestimate their ability, even when they understand their relative ability [392].

4. Cognitive tunneling. As we noted earlier in the context of anchoring, once a hypothesis has been generated or chosen, people tend to underutilize subsequent cues. We remain stuck on our initial hypothesis, a process introduced in the previous chapter as cognitive tunneling [393]. Examples of cognitive tunneling abound in complex systems [394]. Consider the example of the Three Mile Island disaster, in which a relief valve failed and caused some of the displays to indicate a rise in the level of coolant [395]. Operators mistakenly thought that emergency coolant flow should be reduced and persisted in this hypothesis for over two hours. Only when a supervisor arrived with a fresh perspective did the course of action get reversed. Notice that cognitive tunneling is different from primacy, which occurs when the decision maker is first generating hypotheses.

 

Continuation of 7.3


Figure 7.5 shows the analysis of four different options, where the options are different cars that a student might purchase. Each car is described by five attributes. These attributes might include the sound quality of the stereo, fuel economy, insurance costs, color, and maintenance costs. The utility of each attribute reflects its importance to the student. For example, the student cannot afford frequent and expensive repairs, so the utility or importance of the fifth attribute (maintenance costs) is quite high (8), whereas the student does not care as much about the sound quality of the stereo (4) or the fourth attribute (color), whose utility is quite low (1). The cells in the decision table show the magnitude of each attribute for each option. For this example, higher values reflect a more desirable situation. For example, the third car has a poor stereo but low maintenance costs. In contrast, the first car has a slightly better stereo but high maintenance costs. Combining the magnitudes of all the attributes shows that the third car (option 3) is the most appealing or "optimal" choice and that the first car (option 1) is the least appealing.

 

Figure 7.5 Multi-attribute utility analysis combines information from multiple attributes of each of several options to identify the optimal decision.
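To make the combination rule concrete, the following Python sketch reproduces the spirit of Figure 7.5 with hypothetical numbers: the figure's actual magnitudes are not given in this excerpt, so the values below are assumptions chosen only so that option 3 scores highest and option 1 lowest. Each option's score is the sum of attribute magnitude times attribute utility.

```python
# Attribute utilities (importance to the student); stereo = 4, color = 1,
# and maintenance = 8 come from the text; the other two are assumptions.
utilities = {"stereo": 4, "fuel": 6, "insurance": 5, "color": 1, "maintenance": 8}

# Hypothetical attribute magnitudes for four cars (higher = more desirable).
options = {
    "car 1": {"stereo": 7, "fuel": 5, "insurance": 4, "color": 8, "maintenance": 2},
    "car 2": {"stereo": 5, "fuel": 6, "insurance": 5, "color": 5, "maintenance": 5},
    "car 3": {"stereo": 2, "fuel": 7, "insurance": 6, "color": 3, "maintenance": 9},
    "car 4": {"stereo": 6, "fuel": 4, "insurance": 5, "color": 6, "maintenance": 4},
}

def overall_utility(magnitudes: dict, weights: dict) -> float:
    """Multiattribute utility: sum of magnitude x utility over all attributes."""
    return sum(magnitudes[a] * weights[a] for a in weights)

scores = {name: overall_utility(attrs, utilities) for name, attrs in options.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(name, score)
# car 3 scores highest (155) and car 1 lowest (102) with these assumed values.
```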

 

Multi-attribute utility theory, shown in Figure 7.5, assumes that all outcomes are certain. However, life is uncertain, and probabilities often define the likelihood of various outcomes (e.g., you cannot predict maintenance costs precisely). Another example of a normative model is expected value theory, which addresses uncertainty. This theory replaces the concept of utility in the previous context with that of expected value. The theory applies to "gamble" types of decisions, where each choice has one or more outcomes and each outcome has a worth and a probability. For example, a person might be offered a choice between:

1. Winning $50 with a probability of 1.0 (a guaranteed win), or
2. Winning $200 with a probability of 0.30.

Expected value theory assumes that the overall value of a choice (shown in the equation below) is the sum of the worth of each outcome multiplied by its probability, where E(v) is the expected value of the choice, p(i) is the probability of the ith outcome, and v(i) is the value of the ith outcome.

 

E(v) = Σ p(i) × v(i)   (summed over all outcomes i)

 

The expected value of the first choice for the example is $50×1.0, or $50, meaning a certain win of $50. The expected value of the second choice is $200 × 0.30, or $60, meaning that if the choice were selected many times, one would expect an average gain of $60, which is a higher expected value than $50. Therefore, the normative decision maker should always choose the second gamble. In reality, people tend to avoid risk and go with the sure thing [372].
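As a minimal sketch using only the numbers from the example above, the expected value computation can be written as a short function that sums probability times value over a gamble's outcomes (the 0.70 probability of winning nothing in the risky gamble is implied by the 0.30 chance of winning $200):

```python
def expected_value(outcomes):
    """Expected value of a gamble: sum of probability * value over its outcomes."""
    return sum(p * v for p, v in outcomes)

# Each gamble is a list of (probability, value) pairs.
sure_thing = [(1.0, 50)]                  # win $50 for certain
risky_gamble = [(0.30, 200), (0.70, 0)]   # win $200 with p = 0.30, else nothing

print(expected_value(sure_thing))    # 50.0
print(expected_value(risky_gamble))  # 60.0 -> higher EV, yet many people take the sure $50
```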

Figure 7.6 shows two states of the world, 1 and 2, which are generated from situation assessment. Each has a probability, P1 and P2, respectively. The two choice options, A and B, may have four different outcomes, as shown in the four cells to the right. Each option may also have different utilities (U), which could be positive or negative, contingent upon the existing state of the world. The normative view of decision making dictates that the chosen option should be the one with the highest (most positive) sum of the probability-utility products across the two states.

Descriptive decision making accounts for how people actually make decisions. People can depart from the optimal, normative, expected utility model. First, people do not always try to maximize expected value, nor should they, because other decision criteria beyond expected value can be more important. Second, people often shortcut the time- and effort-consuming steps of the normative approach. They do this because time and resources are not adequate to "do things right" according to the normative model, or because they have expertise that directly points them to the right decision. Third, these shortcuts sometimes result in errors and poor decisions. Each of these represents an increasingly large departure from normative decision making.
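Before turning to those departures, here is a minimal sketch of the normative rule for a Figure 7.6 style decision matrix. The two state probabilities and the four utilities are hypothetical placeholders (the excerpt does not give the figure's actual numbers); the point is only the mechanics of weighting each option's utilities by the state probabilities and choosing the option with the highest sum.

```python
# Hypothetical two-state decision matrix in the style of Figure 7.6.
p_states = {"state 1": 0.8, "state 2": 0.2}   # P1, P2 (assumed values)

# Utility of each option under each state (assumed values; may be negative).
utilities = {
    "option A": {"state 1": 10, "state 2": -50},
    "option B": {"state 1": 4,  "state 2": 5},
}

def expected_utility(option: str) -> float:
    """Sum of P(state) * U(option, state) over the states of the world."""
    return sum(p_states[s] * u for s, u in utilities[option].items())

for option in utilities:
    print(option, expected_utility(option))
# option A: 0.8*10 + 0.2*(-50) = -2.0
# option B: 0.8*4  + 0.2*5     =  4.2  -> the normative choice is B
```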

As an example of using a decision criterion different from maximizing expected utility, people may choose instead to minimize the possibility of suffering the maximum loss. This certainly could be considered rational, particularly if one's resources to deal with the loss were limited. This explains why people purchase insurance, even though such a purchase decision does not maximize their expected gain. If it did, the insurance companies would soon be out of business! The importance of using different decision criteria reflects the mismatch between the simplifying assumptions of expected utility and the reality of actual situations. Not many people have the ability to absorb a $100,000 medical bill that might accompany a severe health problem.

Most decisions involve shortcuts relative to the normative approach. Simon [373] argued that people do not usually follow a goal of making the absolutely best or optimal decision. Instead, they opt for a choice that is "good enough" for their purposes, something satisfactory. This shortcut method of decision making is termed satisficing. In satisficing, the decision maker generates and evaluates choices only until one is found that is acceptable rather than one that is optimal. Going beyond this choice to identify something that is better is not worth the effort. Satisficing is a very reasonable approach given that people have limited cognitive capacities and limited time. Indeed, if minimizing the time (or effort) to make a decision is itself considered to be an attribute of the decision process, then satisficing or other shortcutting heuristics can sometimes be said to be optimal, for example, when a decision must be made before a deadline or all is lost. In the case of our car choice example, a satisficing approach would be to take the first car that gets the job done rather than doing the laborious comparisons to find the best one. Satisficing and other shortcuts are often quite effective [366], but they can also lead to biases and poor decisions, as we will discuss below (a brief sketch contrasting satisficing with the optimal choice appears at the end of this passage).

Our third characteristic of descriptive decision making concerns human limits that contribute to decision errors. A general source of errors is the failure of people to recognize when shortcuts are inappropriate for the situation and to adopt the more laborious decision processes. Because this area is so important, and its analysis generates a number of design solutions, we dedicate the next section to this topic.
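The following is a minimal sketch of how satisficing differs from optimizing, using hypothetical car data and "good enough" thresholds that are assumptions for illustration: the satisficer accepts the first car that clears a few aspiration levels, rather than scoring every option.

```python
# Hypothetical car options, listed in the order the student encounters them.
cars = [
    {"name": "car 1", "fuel": 5, "maintenance": 2, "price_ok": True},
    {"name": "car 2", "fuel": 6, "maintenance": 5, "price_ok": True},
    {"name": "car 3", "fuel": 7, "maintenance": 9, "price_ok": True},
]

def is_good_enough(car: dict) -> bool:
    """Satisficing rule: accept any car that clears the aspiration-level thresholds."""
    return car["price_ok"] and car["fuel"] >= 5 and car["maintenance"] >= 4

# Satisficing stops at the first acceptable option (here, car 2), even though an
# exhaustive multiattribute comparison would show car 3 scoring higher.
choice = next(car for car in cars if is_good_enough(car))
print(choice["name"])  # car 2
```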

7.3 Decision Making

What is decision making? Generally, it is a task in which (a) a person must select one option from several alternatives, (b) a person must interpret information for the alternatives, (c) the timeframe is relatively long (longer than a second), and (d) the choice includes uncertainty; that is, it is not necessarily clear which is the best alternative. By definition, decision making involves risk: there is a consequence to picking the wrong alternative, and so a good decision maker effectively assesses the risks associated with each alternative. The decisions discussed in this chapter range from those involving a slow, deliberative process, such as how to allocate resources, to those that are quite rapid and involve few alternatives, like the decision to speed up or apply the brakes when seeing a yellow traffic light, or whether to open a suspicious e-mail [362]. Decision making can generally be represented by four stages, as depicted in Figure 7.4: (1) acquiring and integrating information relevant for the decision, (2) interpreting and assessing the meaning of this information, (3) planning and choosing the best course of action after considering the costs and values of different outcomes, and (4) monitoring and correcting the chosen course of action. People typically cycle through the four stages in a single decision.

 

Figure 7.4 The four basic stages of decision making that draw upon limited attention resources and metacognition.

 

1. Acquire and integrate a number of cues, or pieces of information, which are received from the environment and go into working memory. For example, an engineer trying to identify the problem in a manufacturing process might receive a number of cues, including unusual vibrations, particularly rapid tool wear, and strange noises. The cues must be selectively attended, interpreted, and somehow integrated with respect to one another. The cues may also be incomplete, fuzzy, or erroneous; that is, they may be associated with some amount of uncertainty.

2. Interpret and assess cues and then use this interpretation to generate one or more situation assessments, diagnoses, or inferences as to what the cues mean. This is accomplished by retrieving information from long-term memory. For example, an engineer might hypothesize that the set of cues described previously is caused by a worn bearing. Situation assessment is supported by maintaining good situation awareness, a topic we discuss later in the chapter. The difference is that maintaining SA refers to a continuous process, whereas making a situation assessment involves a one-time, discrete action with the goal of supporting a particular decision.

3. Plan and choose one of the alternative actions generated by retrieving possibilities from long-term memory. Depending on the time available, one or more of the alternatives are generated and considered. To choose an action, the decision maker might evaluate information such as the possible outcomes of each action (where there may be multiple possible outcomes for each action), the likelihood of each outcome, and the negative and positive factors associated with each outcome. This can be done formally in the context of a decision matrix, in which actions are crossed against the diagnosed possible states of the world that could occur, and which could have different consequences depending on the action selected.

4. Monitor and correct the effects of decisions. The monitoring process is a particularly critical part of decision making and can serve two general purposes. First, one can revise the current decision as needed. For example, if the outcomes of a decision to prescribe a particular treatment are not as expected, as was the case when Amy's patient got worse rather than better, then the treatment can be adjusted, halted, or changed. Second, one can revise the general decision process if that process is found wanting and ineffective, as Amy also did. For example, if heuristics are producing errors, one can learn to abandon them in a particular situation and instead adopt the more analytical approach shown to the left of Figure 7.2. In this way, monitoring serves as an input for the troubleshooting element of macrocognition. Monitoring, of course, provides feedback on the decision process. Unfortunately, in decision making that feedback is often poor, degraded, delayed, or non-existent, all features that undermine effective learning [11]. It is for this reason that consistent experience in decision making does not necessarily lead to improved performance [363, 357].

Figure 7.4 also depicts the two influences of attentional resources and metacognition. Many of the processes used to make ideal or "optimal" decisions impose intensive demands on perception and selective attention (for stage 1), and particularly on the working memory used to entertain hypotheses in stage 2 and to evaluate outcomes in stage 4. If these resources are scarce, as in a multitasking environment, decision making can suffer.
Furthermore, because humans are effort conserving, we often tend to adopt mental shortcuts or heuristics that can make decision making easier and faster, but may sacrifice its accuracy. Metacognition describes our monitoring of all of the processes by which we make decisions, and hence is closely related to stage 4. We use such processes, for example, to assess whether we are confident enough in a diagnosis (stage 2) to launch an action (stage 3) without seeking more information. We describe metacognition in more detail near the end of the chapter.

7.3.1 Normative and Descriptive Decision Making

Decision making has, for centuries, been studied in terms of how people should make optimal decisions: those likely to produce the best outcomes in the long run [364, 365]. This is called normative decision making. Within the last half century, however, decision scientists have highlighted that humans often do not, in practice, adhere to such optimal norms for a variety of reasons, and so their decisions can be described in ways classified as descriptive decision making. We now discuss both the normative and descriptive frameworks.

Normative decision making considers the four stages of decision making in terms of an idealized situation in which the correct decision can be made by calculating the mathematically optimal choice. Normative models specify what people should do; they do not necessarily describe how people actually perform decision-making tasks. Importantly, these normative models make many assumptions that incorrectly simplify and limit their application to the decisions people actually face [366]. Normative models are important because they form the basis for many computer-based decision aids, and they are used to justify (often wrongly) removing humans' fallible judgment from the decision process [367]. Although such normative models often outperform people in situations where their assumptions hold, many real-life decisions cannot be reduced to a simple formula [368].

Normative decision making revolves around the central concept of utility: the overall value of a choice, or how much each outcome is "worth" to the decision maker. This model has application in engineering decisions as well as decisions in personal life. Choosing between different corporate investments, materials for a product, jobs, or even cars are all examples of choices that can be modeled using multiattribute utility theory. The decision matrix described in Chapter 2 is an example of how multiattribute utility theory can be used to guide engineering design decisions. Similarly, it has been used to resolve conflicting objectives, to guide environmental cleanup of contaminated sites [369], to support operators of flexible manufacturing systems [370], and even to select a marriage partner [371]. The number of potential options, the number of attributes or features that describe each option, and the challenge of comparing alternatives on very different dimensions make decisions complicated. Multiattribute utility theory addresses this complexity, using a utility function to translate the multidimensional space of attributes into a single dimension that reflects the overall utility or value of each option. In theory, this makes it possible to compare apples and oranges and pick the best one.
Multiattribute utility theory assumes that the overall value of a decision option is the sum of the magnitude of each attribute multiplied by the utility of each attribute (Equation 7.1), where U(v) is the overall utility of an option, a(i) is the magnitude of the option on the ith attribute, u(i) is the utility (goodness or importance) of the ith attribute, and n is the number of attributes.

U(v) = Σ a(i) × u(i)   (summed over attributes i = 1 to n)
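As a minimal sketch of Equation 7.1, with made-up attribute values echoing the apples-and-oranges comparison above, the overall utility of an option is simply the weighted sum of its attribute magnitudes:

```python
def overall_utility(magnitudes, weights):
    """Equation 7.1: U(v) = sum over i of a(i) * u(i)."""
    return sum(a * u for a, u in zip(magnitudes, weights))

# Hypothetical attributes: [sweetness, shelf life, price value], each rated 1-10,
# with assumed utilities (importance weights) of [3, 2, 5].
weights = [3, 2, 5]
print(overall_utility([8, 4, 6], weights))  # apple:  8*3 + 4*2 + 6*5 = 62
print(overall_utility([7, 6, 5], weights))  # orange: 7*3 + 6*2 + 5*5 = 58
```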

In understanding decision making over the last 50 years, there have been a variety of approaches to analyzing the skill or proficiency in reasoning that develops as the decision maker gains expertise. These are shown in Figure 7.2. To some degree, all of these approaches are related, but each represents a different facet of decision making and macrocognition. These approaches provide a framework for many of the sections to follow. In the first row of Figure 7.2, Rasmussen [347] has proposed a three-level categorization of behavior. These levels evolve as the person develops progressively more skill or as the problems become progressively less complex. The progression from knowledge-based to rule-based to the more automatic skill-based behavior parallels the development of automaticity described in the previous chapter. Closely paralleling this, in the second row, is the distinction between careful analytic processing (describing all the options and factors that should enter into a choice) and the more "gut level" intuitive processing, which is often less accessible to conscious awareness [348]. Here, as with Rasmussen's levels of behavior, more intuitive decisions are more likely to emerge with greater skill and simpler problems.

The third row shows different cognitive systems that underlie how people make decisions [349, 350, 351]. System 2, like analytical judgments and knowledge-based reasoning, is considered to serve a deliberative function that involves resource-intensive, effortful processes. In contrast, System 1, like intuitive judgments and skill-based reasoning, engages relatively automatic "gut-feel" snap judgments. System 1 is guided by what is easy, effort-free, and feels good or bad; that is, the emotional component of decision making. In partial contrast with skill-based behavior and intuitive judgments, however, engaging System 1 does not necessarily represent greater expertise than engaging System 2. Instead, the two systems operate in parallel in any given decision, with System 1 offering a snap decision of what to do, and System 2, if time and cognitive resources or effort are available, overseeing and checking the result of System 1 to assure its correctness. System 1 also aids System 2 by focusing attention and filtering options; without it we would struggle to make a decision [352].

In the fourth row, we show two different "schools" of decision research that will be the focus of much of our discussion below. The "heuristics and biases" approach, developed by Kahneman and Tversky [353, 354], has focused on the kinds of decision shortcuts made because of the limits of reasoning, and hence the kinds of biases that often lead to decision errors. These biases identify "what's wrong" with decision making and what requires human factors interventions. In contrast, the naturalistic decision making school, proposed by Klein [355, 356], examines the decision making of experts, many of whose choices share features of skill-based behavior and intuitive decision making and are strongly influenced by System 1. That is, such decisions are often quick, relatively effort-free, and typically correct. While these two approaches are often set in contrast, it is certainly plausible to see both as being correct but applicable in different circumstances, and hence more complementary than competitive [357]. Heuristics and intuitive decision making work well for experienced people in familiar circumstances, but biases undermine the performance of novices, or of experts in unfamiliar circumstances.

In the final row, we describe a characteristic of metacognition that generally appears to emerge with greater skill. That is, it becomes increasingly adaptive, with the human better able to select the appropriate tools, styles, types, and systems, given the circumstances. In other words, with expertise, people develop a larger cognitive toolkit, as well as the wisdom regarding which tools to apply when.

The first row of Figure 7.2 shows that skill-, rule-, and knowledge-based (SRK) behavior depends on people's expertise and the situation [358, 347, 359]. High levels of experience with analog representations promote relatively effortless skill-based behavior (e.g., riding a bicycle), whereas little experience with numeric and textual information will lead to knowledge-based behavior (e.g., selecting an apartment using a spreadsheet). In between, a decision like whether to bring a raincoat on a bike ride follows rule-based behavior: "if the forecast chance of rain is greater than 30%, then bring it." These SRK distinctions also describe types of human errors [360], which we discuss in Chapter 16.
These distinctions are particularly important because we can improve decision making and reduce errors by supporting skill-, rule-, and knowledge-based behavior. Figure 7.3 shows the SRK process for responding to sensory input that enters at the lower left. This input can be interpreted at one of three levels, depending on the operator's degree of experience with the particular situation and how the information is represented [358, 348]. The right side shows an example of sensory input: a meter that an operator has to monitor. The figure shows that the same meter is interpreted differently depending on the level of behavior engaged: as a signal for skill-based behavior, as a sign for rule-based behavior, and as a symbol for knowledge-based behavior.

Signals and skill-based behavior. People who are extremely experienced with a task tend to process the input at the skill-based level, reacting to the perceptual elements at an automatic, subconscious level. They do not have to interpret and integrate the cues or think of possible actions, but only respond to cues as signals that guide responses. Because the behavior is automatic, the demand on the attentional resources described in Chapter 6 is minimal. For example, an operator might turn a valve in a continuous manner to counteract changes in flow shown on a meter (see bottom left of Figure 7.3). Designs that enable skill-based behavior are "intuitive".

Signs and rule-based behavior. When people are familiar with the task but do not have extensive experience, they process input and perform at the rule-based level. The input is recognized in relation to typical system states, termed signs, which trigger rules based on accumulated knowledge. This accumulated knowledge can be in the person's head or written down in formal procedures. Following a recipe to bake bread is an example of rule-based behavior. The rules are "if-then" associations between cue sets and the appropriate actions. For example, Figure 7.3 shows how the operator might interpret the meter reading as a sign. Given that the procedure is to reduce the flow if the meter is above a set point, the operator then reduces the flow.

Symbols and knowledge-based behavior. When the situation is new, people do not have any rules stored from previous experience to call upon, and do not have a written procedure to follow. They have to operate at the knowledge-based level, which is essentially analytical processing using conceptual information. After the person assigns meaning to the cues and integrates them to identify what is happening, he or she processes the cues as symbols that relate to the goals and decides on an action plan. Figure 7.3 shows how the operator might reason about the low meter reading and think about what might be the reason for the low flow, such as a leak.

It is important to note that the same sensory input, the meter in Figure 7.3, for example, can be interpreted as a signal, sign, or symbol. The relative role of skill-, rule-, and knowledge-based behavior depends on characteristics of the person, the technology, and the situation [354, 361]. Characteristics of the person include experience and training.
As we will see, people can be trained to perform better in all elements of macrocognition; however, as with most human factors interventions, changing the task and tools is more effective. In the following sections, we first discuss the cognitive processes in decision making: how it too can be described by stages, the normative approach to decision making (how it "should" be done to produce the best outcomes), and the reasons why people often do not follow normative decision-making processes. Two important departures from normative decision making receive detailed treatment: naturalistic decision making, and heuristics and biases. Because decision errors produced by heuristics and biases can be considered human factors challenges, we complete our treatment of decision making by describing several human factors solutions to mitigate decision errors. Finally, the chapter concludes by describing four "close cousins" of decision making within the family of macrocognitive processes: situation awareness, troubleshooting, planning, and metacognition.