
Continuation of 7.3


Figure 7.5 shows the analysis of four different options, where the options are different cars that a student might purchase. Each car is described by five attributes, such as the sound quality of the stereo, fuel economy, insurance costs, color, and maintenance costs. The utility of each attribute reflects its importance to the student. For example, the student cannot afford frequent and expensive repairs, so the utility or importance of the fifth attribute (maintenance costs) is quite high (8), whereas the student cares less about the sound quality of the stereo (4) and hardly at all about the fourth attribute (color), whose utility is quite low (1). The cells in the decision table show the magnitude of each attribute for each option; for this example, higher values reflect a more desirable situation. For example, the third car has a poor stereo but low maintenance costs, whereas the first car has a slightly better stereo but high maintenance costs. Combining the magnitudes of all the attributes shows that the third car (option 3) is the most appealing, or "optimal," choice and that the first car (option 1) is the least appealing.
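To make the arithmetic concrete, here is a minimal Python sketch of this style of analysis. The utilities of 4 (stereo), 1 (color), and 8 (maintenance) come from the text; the other utilities and all of the option magnitudes are illustrative placeholders, since Figure 7.5's actual numbers are not reproduced here.

```python
# Minimal sketch of a multi-attribute utility analysis in the style of
# Figure 7.5. Utilities of 4 (stereo), 1 (color), and 8 (maintenance) come
# from the text; the other utilities and all option magnitudes are
# illustrative placeholders, not the figure's actual values.

attributes = ["stereo", "fuel economy", "insurance", "color", "maintenance"]
utilities  = [4, 6, 5, 1, 8]   # importance of each attribute to the student

# Each option (car) has a magnitude on every attribute; higher = better.
options = {
    "car 1": [5, 4, 3, 6, 2],  # slightly better stereo, high maintenance costs
    "car 2": [6, 5, 4, 3, 5],
    "car 3": [2, 6, 5, 4, 9],  # poor stereo, low maintenance costs
    "car 4": [4, 3, 6, 5, 4],
}

def overall_utility(magnitudes, utilities):
    # Equation 7.1: sum of attribute magnitude x attribute utility.
    return sum(a * u for a, u in zip(magnitudes, utilities))

for name, magnitudes in options.items():
    print(name, overall_utility(magnitudes, utilities))

best = max(options, key=lambda name: overall_utility(options[name], utilities))
print("Optimal choice:", best)   # car 3 with these numbers; car 1 scores lowest
```

Run with these placeholder numbers, car 3 comes out highest and car 1 lowest, matching the conclusion above.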

 

Figure 7.5 Multi-attribute utility analysis combines information from multiple attributes of each of several options to identify the optimal decision.

 

Multi-attribute utility theory, shown in Figure 7.5, assumes that all outcomes are certain. However, life is uncertain, and probabilities often define the likelihood of various outcomes (e.g., you cannot predict maintenance costs precisely). Another normative model, expected value theory, addresses this uncertainty by replacing the concept of utility with that of expected value. The theory applies to any "gamble" type of decision, where each choice has one or more outcomes and each outcome has a worth and a probability. For example, a person might be offered a choice between: (1) winning $50 with a probability of 1.0 (a guaranteed win), or (2) winning $200 with a probability of 0.30. Expected value theory assumes that the overall value of a choice (Equation 7.2) is the sum of the worth of each outcome multiplied by its probability, where E(v) is the expected value of the choice, p(i) is the probability of the ith outcome, and v(i) is the value of the ith outcome.

 

$$E(v) = \sum_{i} p(i)\,v(i) \qquad \text{(Equation 7.2)}$$

 

The expected value of the first choice for the example is $50×1.0, or $50, meaning a certain win of $50. The expected value of the second choice is $200 × 0.30, or $60, meaning that if the choice were selected many times, one would expect an average gain of $60, which is a higher expected value than $50. Therefore, the normative decision maker should always choose the second gamble. In reality, people tend to avoid risk and go with the sure thing [372].
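A hedged one-function sketch of the same calculation in Python, using the two gambles above:

```python
# Expected value of a gamble (Equation 7.2): sum of probability x value
# over all outcomes of a choice.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one choice."""
    return sum(p * v for p, v in outcomes)

choice_1 = [(1.0, 50)]    # win $50 for certain
choice_2 = [(0.30, 200)]  # win $200 with probability 0.30 (else nothing)

print(expected_value(choice_1))  # 50.0
print(expected_value(choice_2))  # 60.0, the normatively better gamble
```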

Figure 7.6 shows two states of the world, 1 and 2, which are generated from situation assessment and have probabilities P1 and P2, respectively. The two choice options, A and B, may have four different outcomes, as shown in the four cells to the right. Each option may also have different utilities (U), which could be positive or negative, contingent upon the existing state of the world. The normative view of decision making dictates that the chosen option should be the one with the highest (most positive) sum of the products across the two states. Descriptive decision making accounts for how people actually make decisions, and people can depart from the optimum, normative, expected utility model in three ways. First, people do not always try to maximize expected value, nor should they, because other decision criteria beyond expected value can be more important. Second, people often shortcut the time- and effort-consuming steps of the normative approach, either because time and resources are not adequate to "do things right" according to the normative model, or because they have expertise that points them directly to the right decision. Third, these shortcuts sometimes result in errors and poor decisions. Each of these represents an increasingly large departure from normative decision making.
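In the notation of Figure 7.6 (writing, as an assumption about the figure's labels, $U_{A1}$ for the utility of option A under state 1), the normative rule is to compute

$$E(U_A) = P_1 U_{A1} + P_2 U_{A2}, \qquad E(U_B) = P_1 U_{B1} + P_2 U_{B2}$$

and choose whichever option has the larger expected utility.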

As an example of using a decision criterion different from maximizing expected utility, people may choose instead to minimize the possibility of suffering the maximum loss. This certainly could be considered rational, particularly if one's resources to deal with the loss were limited. This explains why people purchase insurance, even though such a purchase does not maximize their expected gain; if it did, the insurance companies would soon be out of business! The importance of using different decision criteria reflects the mismatch between the simplifying assumptions of expected utility and the reality of actual situations: not many people can absorb a $100,000 medical bill that might accompany a severe health problem.

Most decisions involve shortcuts relative to the normative approach. Simon [373] argued that people do not usually pursue the absolutely best or optimal decision. Instead, they opt for a choice that is "good enough" for their purposes, something satisfactory. This shortcut method of decision making is termed satisficing. In satisficing, the decision maker generates and evaluates choices only until one is found that is acceptable, rather than one that is optimal; going beyond this choice to identify something better is not worth the effort. Satisficing is a very reasonable approach given that people have limited cognitive capacities and limited time. Indeed, if minimizing the time (or effort) to make a decision is itself considered an attribute of the decision process, then satisficing or other shortcutting heuristics can sometimes be said to be optimal, for example, when a decision must be made before a deadline, or all is lost. In the case of our car choice example, a satisficing approach would be to take the first car that gets the job done rather than doing the laborious comparisons to find the best (see the sketch below). Satisficing and other shortcuts are often quite effective [366], but they can also lead to biases and poor decisions, as we discuss below.

Our third characteristic of descriptive decision making concerns human limits that contribute to decision errors. A general source of errors is the failure of people to recognize when shortcuts are inappropriate for the situation and to adopt the more laborious decision processes. Because this area is so important, and its analysis generates a number of design solutions, we dedicate the next section to this topic.
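A minimal sketch of the contrast between the two strategies, with `score` and `threshold` as hypothetical stand-ins for the decision maker's evaluation of an option and aspiration level:

```python
# Maximizing vs. satisficing, as a minimal sketch.
# `score` is a hypothetical evaluation function (e.g., the overall utility
# from multi-attribute utility analysis); `threshold` is "good enough".

def maximize(options, score):
    """Normative strategy: evaluate every option and return the best."""
    return max(options, key=score)

def satisfice(options, score, threshold):
    """Simon's satisficing: return the first option that is good enough."""
    for option in options:        # options examined in the order encountered
        if score(option) >= threshold:
            return option         # stop searching as soon as one is acceptable
    return None                   # no option met the aspiration level

# Example with the placeholder car utilities from the earlier sketch.
cars = {"car 1": 81, "car 2": 117, "car 3": 145, "car 4": 101}
print(maximize(cars, cars.get))        # car 3 (best, but all four evaluated)
print(satisfice(cars, cars.get, 100))  # car 2 (first acceptable one found)
```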

7.3 Decision Making

What is decision making? Generally, it is a task in which (a) a person must select one option from several alternatives, (b) a person must interpret information about the alternatives, (c) the timeframe is relatively long (longer than a second), and (d) the choice involves uncertainty; that is, it is not necessarily clear which is the best alternative. By definition, decision making involves risk, because there is a consequence to picking the wrong alternative, and so a good decision maker effectively assesses the risks associated with each alternative. The decisions discussed in this chapter range from those involving a slow, deliberative process, such as how to allocate resources, to those that are quite rapid, with few alternatives, like the decision to speed up or apply the brakes when seeing a yellow traffic light, or whether to open a suspicious e-mail [362]. Decision making can generally be represented by four stages, as depicted in Figure 7.4: (1) acquiring and integrating information relevant to the decision, (2) interpreting and assessing the meaning of this information, (3) planning and choosing the best course of action after considering the costs and values of different outcomes, and (4) monitoring and correcting the chosen course of action. People typically cycle through the four stages in a single decision.

 

Figure 7.4 The four basic stages of decision making that draw upon limited attention resources and metacognition.

 

1. Acquire and integrate a number of cues, or pieces of information, which are received from the environment and go into working memory. For example, an engineer trying to identify the problem in a manufacturing process might receive a number of cues, including unusual vibrations, particularly rapid tool wear, and strange noises. The cues must be selectively attended, interpreted, and somehow integrated with respect to one another. The cues may also be incomplete, fuzzy, or erroneous; that is, they may be associated with some amount of uncertainty.

2. Interpret and assess cues, and then use this interpretation to generate one or more situation assessments, diagnoses, or inferences as to what the cues mean. This is accomplished by retrieving information from long-term memory. For example, an engineer might hypothesize that the set of cues described previously is caused by a worn bearing. Situation assessment is supported by maintaining good situation awareness, a topic we discuss later in the chapter. The difference is that maintaining SA refers to a continuous process, whereas making a situation assessment is a one-time, discrete action with the goal of supporting a particular decision.

3. Plan and choose one of several alternative actions generated by retrieving possibilities from long-term memory. Depending on the time available, one or more of the alternatives are generated and considered. To choose an action, the decision maker might evaluate information such as the possible outcomes of each action (where there may be multiple possible outcomes for each action), the likelihood of each outcome, and the negative and positive factors associated with each outcome. This can be done formally in the context of a decision matrix, in which actions are crossed against the diagnosed possible states of the world, each action having different consequences depending on which state occurs.

4. Monitor and correct the effects of decisions. The monitoring process is a particularly critical part of decision making and can serve two general purposes. First, one can revise the current decision as needed. For example, if the outcomes of a decision to prescribe a particular treatment are not as expected, as was the case when Amy's patient got worse rather than better, then the treatment can be adjusted, halted, or changed. Second, one can revise the general decision process if that process is found wanting and ineffective, as Amy also did. For example, if heuristics are producing errors, one can learn to abandon them in a particular situation and instead adopt the more analytical approach shown to the left of Figure 7.2. In this way, monitoring serves as an input for the troubleshooting element of macrocognition. Monitoring, of course, provides feedback on the decision process. Unfortunately, in decision making that feedback is often poor, degraded, delayed, or non-existent, all features that undermine effective learning [11]. It is for this reason that consistent experience in decision making does not necessarily lead to improved performance [363, 357].

Figure 7.4 also depicts the two influences of attentional resources and metacognition. Many of the processes used to make ideal or "optimal" decisions impose intensive demands on perception and selective attention (for stage 1), and particularly on the working memory used to entertain hypotheses in stage 2 and to evaluate outcomes in stage 4. If these resources are scarce, as in a multitasking environment, decision making can suffer.
Furthermore, because humans are effort conserving, we often tend to adopt mental shortcuts or heuristics that can make decision making easier and faster, but may sacrifice its accuracy. Metacognition describes our monitoring of all of the processes by which we make decisions, and hence is closely related to stage 4. We use such processes, for example, to assess whether we are confident enough in a diagnosis (stage 2) to launch an action (stage 3) without seeking more information. We describe metacognition in more detail near the end of the chapter.

7.3.1 Normative and Descriptive Decision Making

Decision making has, for centuries, been studied in terms of how people should make optimal decisions: those likely to produce the best outcomes in the long run [364, 365]. This is called normative decision making. Within the last half century, however, decision scientists have highlighted that humans often do not, in practice, adhere to such optimal norms for a variety of reasons, and so their decisions can be described in ways classified as descriptive decision making. We now discuss both frameworks.

Normative decision making considers the four stages of decision making in terms of an idealized situation in which the correct decision can be made by calculating the mathematically optimal choice. Normative models specify what people should do; they do not necessarily describe how people actually perform decision-making tasks. Importantly, these models make many assumptions that incorrectly simplify the decisions people actually face and so limit their application [366]. Normative models are important because they form the basis for many computer-based decision aids and are used to justify (often wrongly) removing humans' fallible judgment from the decision process [367]. Although such models often outperform people in situations where their assumptions hold, many real-life decisions cannot be reduced to a simple formula [368].

Normative decision making revolves around the central concept of utility: the overall value of a choice, or how much each outcome is "worth" to the decision maker. This model applies to engineering decisions as well as decisions in personal life. Choosing between different corporate investments, materials for a product, jobs, or even cars are all examples of choices that can be modeled using multiattribute utility theory. The decision matrix described in Chapter 2 is an example of how multiattribute utility theory can be used to guide engineering design decisions. Similarly, it has been used to resolve conflicting objectives, to guide environmental cleanup of contaminated sites [369], to support operators of flexible manufacturing systems [370], and even to select a marriage partner [371]. The number of potential options, the number of attributes or features that describe each option, and the challenge of comparing alternatives on very different dimensions make decisions complicated. Multiattribute utility theory addresses this complexity by using a utility function to translate the multidimensional space of attributes into a single dimension that reflects the overall utility or value of each option. In theory, this makes it possible to compare apples and oranges and pick the best one.
Multiattribute utility theory assumes that the overall value of a decision option is the sum of the magnitude of each attribute multiplied by the utility of each attribute (Equation 7.1), where U(v) is the overall utility of an option, a(i) is the magnitude of the option on the ith attribute, u(i) is the utility (goodness or importance) of the ith attribute, and n is the number of attributes.

$$U(v) = \sum_{i=1}^{n} a(i)\,u(i) \qquad \text{(Equation 7.1)}$$

In studying decision making over the last 50 years, researchers have taken a variety of approaches to analyzing the skill or proficiency in reasoning that develops as the decision maker gains expertise. These approaches are shown in Figure 7.2. To some degree they are all related, but each represents a different facet of decision making and macrocognition, and together they provide a framework for many of the sections to follow. In the first row of Figure 7.2, Rasmussen [347] proposed a three-level categorization of behavior. These levels evolve as the person develops progressively more skill or as the problems become progressively less complex. The progression from knowledge-based to rule-based to the more automatic skill-based behavior parallels the development of automaticity described in the previous chapter. Closely paralleling this, in the second row, is the distinction between careful analytic processing (describing all the options and factors that should enter into a choice) and the more "gut level" intuitive processing, which is often less accessible to conscious awareness [348]. Here, as with Rasmussen's levels of behavior, more intuitive decisions are more likely to emerge with greater skill and simpler problems.

The third row shows different cognitive systems that underlie how people make decisions [349, 350, 351]. System 2, like analytical judgments and knowledge-based reasoning, is considered to serve a deliberative function that involves resource-intensive, effortful processes. In contrast, System 1, like intuitive judgments and skill-based reasoning, engages relatively automatic "gut-feel" snap judgments. System 1 is guided by what is easy, effort-free, and feels good or bad; that is, the emotional component of decision making. In partial contrast with skill-based behavior and intuitive judgments, however, engaging System 1 does not necessarily represent greater expertise than engaging System 2. Instead, the two systems operate in parallel in any given decision, with System 1 offering a snap decision of what to do and System 2, if time and cognitive resources or effort are available, overseeing and checking the result of System 1 to assure its correctness. System 1 also aids System 2 by focusing attention and filtering options; without it we would struggle to make a decision [352].

In the fourth row, we show two different "schools" of decision research that will be the focus of much of our discussion below. The "heuristics and biases" approach, developed by Kahneman and Tversky [353, 354], has focused on the kinds of decision shortcuts made because of the limits of reasoning, and hence the kinds of biases that often lead to decision errors. These biases identify "what's wrong" with decision making and what requires human factors interventions. In contrast, the naturalistic decision making school, proposed by Klein [355, 356], examines the decision making of the expert, many of whose choices share features of skill-based behavior and intuitive decision making and are strongly influenced by System 1. That is, such decisions are often quick, relatively effort-free, and typically correct. While these two approaches are often set in contrast, it is certainly plausible to see both as being correct but applicable in different circumstances, and hence more complementary than competitive [357]: heuristics and intuitive decision making work well for experienced people in familiar circumstances, but biases undermine the performance of novices, or of experts in unfamiliar circumstances.

In the final row, we describe a characteristic of metacognition that generally emerges with greater skill: it becomes increasingly adaptive, with the human better able to select the appropriate tools, styles, types, and systems given the circumstances. That is, with expertise, people develop a larger cognitive toolkit, as well as the wisdom regarding which tools to apply when.

The first row of Figure 7.2 shows that skill-, rule-, and knowledge-based (SRK) behavior depends on people's expertise and the situation [358, 347, 359]. High levels of experience with analog representations promote relatively effortless skill-based behavior (e.g., riding a bicycle), whereas little experience with numeric and textual information leads to knowledge-based behavior (e.g., selecting an apartment using a spreadsheet). In between, a decision like whether to bring a raincoat on a bike ride follows rule-based behavior: "if the forecast chance of rain is greater than 30%, then bring it." These SRK distinctions also describe types of human errors [360], which we discuss in Chapter 16.
These distinctions are particularly important because we can improve decision making and reduce errors by supporting skill-, rule-, and knowledge-based behavior. Figure 7.3 shows the SRK process for responding to sensory input that enters at the lower left. This input can be interpreted at one of three levels, depending on the operator's degree of experience with the particular situation and how the information is represented [358, 348]. The right side shows an example of sensory input: a meter that an operator has to monitor. The figure shows that the same meter is interpreted differently depending on the level of behavior engaged: as a signal for skill-based behavior, as a sign for rule-based behavior, and as a symbol for knowledge-based behavior.

Signals and skill-based behavior. People who are extremely experienced with a task tend to process the input at the skill-based level, reacting to the perceptual elements at an automatic, subconscious level. They do not have to interpret and integrate the cues or think of possible actions, but only respond to cues as signals that guide responses. Because the behavior is automatic, the demand on the attentional resources described in Chapter 6 is minimal. For example, an operator might turn a valve in a continuous manner to counteract changes in flow shown on a meter (see bottom left of Figure 7.3). Designs that enable skill-based behavior are "intuitive".

Signs and rule-based behavior. When people are familiar with the task but do not have extensive experience, they process input and perform at the rule-based level. The input is recognized in relation to typical system states, termed signs, which trigger rules based on accumulated knowledge. This accumulated knowledge can be in the person's head or written down in formal procedures. Following a recipe to bake bread is an example of rule-based behavior. The rules are "if-then" associations between cue sets and the appropriate actions (see the sketch below). For example, Figure 7.3 shows how the operator might interpret the meter reading as a sign. Given that the procedure is to reduce the flow if the meter is above a set point, the operator then reduces the flow.

Symbols and knowledge-based behavior. When the situation is new, people do not have any rules stored from previous experience to call upon, and do not have a written procedure to follow. They have to operate at the knowledge-based level, which is essentially analytical processing using conceptual information. After the person assigns meaning to the cues and integrates them to identify what is happening, he or she processes the cues as symbols that relate to the goals and decides on an action plan. Figure 7.3 shows how the operator might reason about the low meter reading and think about what might be the reason for the low flow, such as a leak.

It is important to note that the same sensory input, the meter in Figure 7.3, for example, can be interpreted as a signal, sign, or symbol. The relative roles of skill-, rule-, and knowledge-based behavior depend on characteristics of the person, the technology, and the situation [354, 361]. Characteristics of the person include experience and training.
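As a toy illustration of such an if-then rule (the set point value is hypothetical; the figure does not specify one):

```python
# Rule-based behavior as an "if-then" association between a sign and an action.
# The meter reading is treated as a sign; the rule comes from a procedure.

SET_POINT = 75.0  # hypothetical set point; Figure 7.3 does not specify one

def rule_based_response(meter_reading: float) -> str:
    # Procedure from the text: if the meter is above the set point, reduce flow.
    if meter_reading > SET_POINT:
        return "reduce flow"
    return "no action"

print(rule_based_response(82.0))  # reduce flow
```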
As we will see, people can be trained to perform better in all elements of macrocognition; however, as with most human factors interventions, changing the task and tools is more effective. In the following sections, we first discuss the cognitive processes in decision making: how it, too, can be described by stages; the normative approach to decision making (how it "should" be done to produce the best outcomes); and the reasons why people often do not follow normative decision-making processes. Two important departures from normative decision making receive detailed treatment: naturalistic decision making, and heuristics and biases. Because the decision errors produced by heuristics and biases can be considered human factors challenges, we complete our treatment of decision making by describing several human factors solutions to mitigate decision errors. Finally, the chapter concludes by describing four "close cousins" of decision making within the family of macrocognitive processes: situation awareness, troubleshooting, planning, and metacognition.

7.1 Macrocognitive Environment

The cognitive environment governs how characteristics of microcognition, such as the limits of working memory, influence human performance. In a similar way, the environment governs how the characteristics of macrocognition influence performance, and when it is important to consider macrocognition.

Features of situations where macrocognition matters (Table 7.1):

Ill-structured problems with ambiguous goals. Example: There is no single "best" way of responding to a set of a patient's symptoms.

Uncertain, dynamic environments. Example: The situation at Amy's hospital is continually changing, presenting new decisions and considerations.

Information-rich environments. Example: There is information on status boards, in electronic patient records, and through talking with others.

Iterative perception-action feedback loops. Example: Any decision regarding treatment, particularly after an initial misdiagnosis, is monitored and used to decide what to do next.

Time pressure. Example: Decisions often need to be made quickly because delays can jeopardize the outcome of a procedure.

High-risk situations. Example: Loss of life can result from a poor decision.

Multiple shifting and competing individual and organizational goals. Example: As the day evolves, the goals may shift from minimizing delays for routine procedures to responding to a major emergency. Also, what might be the top priority for a physician might not be the same for a nurse or patient.

Interactions with multiple people. Example: Many people contribute information and perspectives to decisions; patients and nurses negotiate with Amy.

People often make decisions in dynamic, changing environments, like those confronting the internal medicine specialist, Amy, described at the outset of the chapter [344, 345, 346]. Amy faced incomplete, complex, and dynamically changing information; time stress; interactions with others; high risk; and uncertain outcomes, each with different costs and benefits. Not every situation is so complicated, but those that include these elements indicate a need to consider the processes of macrocognition discussed in this chapter. Table 7.1 summarizes the features of the cognitive environment that make it important to consider macrocognition. These features cause us to adopt different decision processes. Sometimes, particularly in high-risk situations, we carefully calculate and evaluate alternatives, but in many cases we just interpret the situation to the best of our ability and make educated guesses about what to do. Some decisions are so routine that we might not even consider them to be decisions. Unlike the situations that influence microcognition, critical features associated with macrocognition include poorly defined goals that might not be shared by all involved. As in Amy's situation, concepts of macrocognition are particularly important in situations where multiple people interact in an evolving situation and decisions and plans are made and then revised over time. In many cases, these features make decision making and problem solving difficult and error-prone. This makes macrocognition a central concern for human factors specialists working in complex systems, such as military operations, hospitals, aircraft cockpits, and process control plants.

We begin this chapter by describing the overall nature of skill and expertise in macrocognition, and how they change with practice and experience. We present three types of behavior that have implications for all elements of macrocognition, and then consider these behaviors with respect to decision making. Decision making highlights the challenges of engaging analytic thinking, the power of heuristics, and the pitfalls of the associated biases. Principles to improve decision making are described in terms of task design, decision support systems, displays, and training. The final sections of the chapter address four closely related areas of macrocognition: situation awareness, troubleshooting, planning, and metacognition.

 

7. Introduction


Amy, a relatively new internal medicine specialist, treated a patient who exhibited a set of symptoms typical of a fairly common condition: rash, reported localized mild pain, headache, 102 °F temperature, and chills. A localized skin discoloration near the rash was not considered exceptional or unusual ("just a bruise from a bump"), and a quick glance at the chart of the patient's history revealed nothing exceptional. Amy, already behind on her appointments, quickly and confidently decided, "that's flambitis" (a condition that was the subject of a recent invited medical seminar at the hospital), prescribed the standard antibiotics, and dismissed the patient. A day later the patient phoned the nurse to complain that the symptoms had not disappeared, but Amy, reading the message, instructed the nurse to call back and say that it would take some time for the medicine to take effect, and not to worry. Yet another 24 hours later, the patient appeared at the ER with a temperature now of 104 °F and more intense pain. Amy was called in, and a careful inspection revealed that the slight discoloration had darkened and that a prior condition in the medical chart had been overlooked in Amy's quick scan. These two newly appreciated symptoms, or cues, suggested that flambitis was not the cause, and led Amy to do a rapid, but intense and thoughtful, search of the medical literature to obtain reasonable evidence that the condition was a much less prevalent one called stabulitus. This was consistent with an earlier report in the patient's medical record that Amy, in her quick glance, had overlooked. Further research suggested a very different medication. After making that prescription, Amy monitored the patient closely and frequently, until she observed that, indeed, the symptoms were diminishing. Following this close call of a misdiagnosis and the resulting poor treatment decision, the first serious decision error since her licensing, Amy vowed to double-check her immediate instincts, no matter how much the symptoms looked like a common condition, to check the medical history more thoroughly, and to follow up on the patient's condition after the initial treatment.

Although this scenario happened to occur in the medical domain, each day people make many decisions in situations that range from piloting an aircraft and voting for a candidate to financial planning and shopping. Some of these decisions have life-and-death implications; other times a poor choice is just a minor annoyance. Generally, these decisions depend on understanding the situation by integrating multiple sources of information, determining what the information represents, and selecting the best course of action. This course of action might be simply dropping an item into your shopping cart, or it might require a plan that coordinates other activities and people.

This chapter builds on the previous chapter's description of cognition. The elemental information processing stages of selective attention, perception, working memory, long-term memory, and mental workload all contribute to decision making. These concepts form the building blocks of cognition and can be thought of as elements of microcognition. In contrast, this chapter describes decision making in the context of macrocognition, or the high-level mental processes that build on the stages of information processing, which include situation awareness, decision making, problem solving, and metacognition.
Macrocognition is defined by high-level processes that help people negotiate complex situations that are characterized by ambiguous goals, interactions over time, coordination with multiple people, and imperfect feedback.

 

Figure 7.1 highlights five elements of macrocognition, arrayed in a circle roughly in the order they might occur; in reality, the process is more complex, with every process linked to every other process and occurring in a repeated cycle. At the center is metacognition, thinking about one's own thinking, which guides the individual macrocognitive processes. Microcognition and macrocognition offer complementary perspectives that suggest different ways to enhance safety, performance, and satisfaction.