Kunda (1990) defines motivation as "any wish, desire, or preference that concerns the outcome of a given reasoning task." In this article, she is concerned with accuracy motives and directional motives. When an individual has an accuracy motive, that individual wishes "to arrive at an accurate conclusion." When an individual has a directional motive, that individual desires to "arrive at a particular, directional conclusion." In other words, when an individual has a directional motive, they are reasoning in order to arrive at a conclusion with specific, predetermined characteristics. For example, they may want a conclusion that helps them to feel good about themselves and their behavior. Kruglanski, Ajzen, Klar, Chaiken, Liberman, Eagly, Pyszczynski, and Greenberg have all produced research supporting this distinction between accuracy motives and directional motives.
For example, when people are responding to accuracy motives they tend to a) "expend more cognitive effort on issue-related reasoning," b) "attend to relevant information more carefully," and c) "process it more deeply, often using more complex rules." An experimenter can increase a participant's accuracy goals "by increasing the stakes involved in making a wrong judgment or in drawing the wrong conclusion, without increasing the attractiveness of any particular conclusion." Tetlock demonstrated that accuracy goals only affect information processing if they are induced before that processing occurs. If they are induced after information exposure but before participants are asked to make a judgment, they have no observable effect.
"Tetlock and Kim (1987) showed that subjects motivated to be accurate (because they expected to justify their beliefs to others) wrote more cognitively complex descriptions of persons whose responses to a personality test they had seen: They considered more alternatives and evaluated the persons from more perspectives, and they drew more connections among characteristics.
Partly as a result of this increased complexity of processing, subjects motivated to be accurate were in fact more accurate than others in predicting the persons' responses on additional personality measures and were less overconfident about the correctness of their predictions.
In a similar vein, Harkness, DeBono, and Borgida (1985) showed that subjects motivated to be accurate in their assessment of which factors affected a male target person's decisions about whether to date women (because the subjects expected to date him later and therefore presumably wanted to know what he was like) used more accurate and complex covariation detection strategies than did subjects who did not expect to date him."
Kruglanski and Freund (1983; Freund, Kruglanski, & Shpitzajzen, 1985) "showed that subjects motivated to be accurate . . . showed less of a primacy effect in impression formation, less tendency to use ethnic stereotypes in their evaluations of essay quality, and less anchoring when making probability judgments." One concern with Kruglanski and Freund's work, however, was that participants may simply have been making less extreme judgments, rather than increasing their processing of the information. The less biased judgment, in these experiments, was always the less extreme one. However, a 1985 study by Tetlock partially put this concern to rest. He examined the fundamental attribution error and demonstrated that "[i]n comparison with other subjects, accuracy-motivated subjects made less extreme dispositional attributions about a target person when they knew that the target person had little choice in deciding whether to engage in the observed behavior, but not when they knew that the target person had a high degree of choice." In other words, participants still made extreme judgments when it was appropriate to do so.
Although accuracy motives can lead to reduced bias in some cases, in others they can increase biased reasoning. Subjects who "expected to justify their judgments to others" were "more susceptible than other subjects to the dilution effect--that is, were more likely to moderate their predictions about a target when given nondiagnostic information about that target--and this tendency appeared to have resulted from more complex processing of information (Tetlock & Boettger, 1989)."
Kunda argues that participants with directional motives want to "arrive at a particular conclusion" with "a justification of their desired conclusion that would persuade a dispassionate" person. In order to do this, they "search memory for those beliefs and rules that could support their desired conclusion. They may also creatively combine accessed knowledge to construct new beliefs that could logically support the desired conclusion." Both processes are "biased by directional goals (cf. Greenwald, 1980)," in that "they are accessing only a subset of their relevant knowledge." Evidence suggests that directional goals can lead people to "endorse different attitudes (Salancik & Conway, 1975; Snyder, 1982), express different self-concepts (Fazio, Effrein, & Falender, 1981), make different social judgments (Higgins & King, 1981), and use different statistical rules (Kunda & Nisbett, 1986; Nisbett, Krantz, Jepson, & Kunda, 1983)."
Kunda presents evidence that directional goals induce a biased recall of existing information by analyzing various dissonance studies. For example, she argues that "[d]issonance clearly would be most effectively reduced if one were able to espouse an attitude that corresponds perfectly to one's behavior. Yet this is not always the case. In many dissonance experiments, the attitudes after performing the counterattitudinal behavior remain in opposition to the behavior. For example, after endorsing a law limiting free speech, subjects were less opposed to the law than were control subjects, but they remained opposed to it (Linder, Cooper, & Jones, 1967). Similarly, after endorsing police brutality on campus, subjects were less opposed to such brutality than were control subjects but they remained opposed to it (Greenbaum & Zemach, 1972)."
Similar patterns appear elsewhere: "[i]nduced compliance studies in which subjects are led to describe boring tasks as enjoyable often do produce shifts from negative to positive task evaluations, but in these studies, initial attitudes are not very negative (e.g., -0.45 on a scale whose highest negative value was -5 in Festinger & Carlsmith's classic 1959 study), and postdissonance attitudes still seem considerably less positive than subjects' descriptions of the task. For example, after they were induced to describe a task as "very enjoyable . . . a lot of fun . . . very interesting . . . intriguing . . . exciting," subjects rated the task as 1.35 on a scale whose highest positive value was 5 (Festinger & Carlsmith, 1959)."
Biased Memory Search
The existence of a biased memory search is one of the key features of Kunda's model, and one that her own research has supported. Kunda and Sanitioso's 1989 experiment "showed that subjects induced to theorize that a given trait (extraversion or introversion) was conducive to academic success came to view themselves as characterized by higher levels of that trait than did other subjects, presumably because they were motivated to view themselves as possessing success-promoting attributes. These changes in self-concepts were constrained by prior self-knowledge: The subjects, who were predominantly extraverted to begin with, viewed themselves as less extraverted when they believed introversion to be more desirable, but they still viewed themselves as extraverted." Other experiments, such as Dunning, Story, and Tan's study praising social skills versus task skills, have demonstrated the same pattern.
Further demonstrating this memory search, "[i]n another study, subjects led to view introversion as desirable were faster to generate autobiographical memories reflecting introversion and slower to generate memories reflecting extraversion than were subjects led to view extraversion as desirable. These studies both indicate that the accessibility of autobiographical memories reflecting a desired trait was enhanced, which suggests that the search for relevant memories was biased."
"Additional direct evidence for biased memory search was obtained by Markus and Kunda (1986). Subjects were made to feel extremely unique or extremely ordinary Both extremes were perceived as unpleasant by subjects, who could therefore be assumed to have been motivated to see themselves in the opposite light. Subjects were relatively faster to endorse as selfdescriptive those adjectives reflecting the dimension opposite to the one that they had just experienced. Apparently they had accessed this knowledge in an attempt to convince themselves that they possessed the desired trait and, having accessed it, were able to endorse it more quickly."
Biased Knowledge-Construction
Directional goals can bias the beliefs people construct, but only within the constraints of prior information. For example, people tend to see themselves as farther above average on "ambiguous traits that are open to multiple construals" than they do on "unambiguous traits." For ambiguous traits, this tendency can be reduced "when people are asked to use specific definitions of each trait in their judgments."
Where there is little past information, directional goals can lead to very biased processing. "McGuire (1960) showed that the perceived desirability of events may be biased by motivation. Subjects who were persuaded that some events were more likely to occur came to view these events as more desirable, presumably because they were motivated to view the future as pleasant. Subjects also enhanced the desirability of logically related beliefs that had not been specifically addressed by the manipulation, which suggests that they were attempting to construct a logically coherent pattern of beliefs."
Accuracy goals can increase processing while directional goals bias that processing. Several studies indicate that people tend to see others as more likable if they expect to interact with them. In one study by Berscheid, Graziano, Monson, and Dermer (1976), "subjects liked the person whom they expected to date better than they liked the other persons. Ratings of the three persons' personalities were affected as well: Subjects rated their expected dates more extremely and positively on traits and were more confident of their ratings. Subjects also awarded more attention to their prospective dates and recalled more information about them than about other target persons, but the enhanced liking and trait ratings were not due to differential attention." Here, subjects seemed to have "both a directional goal and an accuracy goal: They wanted their date to be nice so that the expected interactions would be pleasant, and they wanted to get a good idea of what the date was like so that they could better control and predict the interaction. The accuracy goal led to more intense processing, and the directional goal created bias."
Biased Use of Heuristics
"In two studies, researchers examined directly whether people with different directional goals use different statistical heuristics spontaneously. A study by Ginossar and Trope (1987) suggested that goals may affect the use of base rate information. Subjects read Kahneman and Tversky's (1972b) cab problem, in which a witness contends that the car involved in a hit-and-run accident was green. They received information about the likelihood that the witness was correct and about the prior probability that the car would be green (the base rate) and were asked to estimate the likelihood that the car was green. Subjects typically ignore the base rate when making such estimates. However, when asked to answer as though they were the lawyer for the green car company (a manipulation presumed to motivate subjects to conclude that the car was not green), subjects did use the base rate information and made considerably lower estimates when the base rate was low. The finding that only motivated subjects used base rate information suggests that motivated subjects conducted a biased search for an inferential rule that would yield their desired conclusion. It is also possible, however, that those subjects conducted a more intense but essentially objective search for rules. This is because the pattern of results and the design do not permit assessment of whether all subjects pretending to be lawyers used the base rate or whether only subjects for whom use of base rate could promote their goals (i.e, subjects told that the prior probability was low) used it."
Biased Evaluation of Evidence
"Of importance is that all subjects were also quite responsive to the differential strength of different aspects of the method, which suggests that they were processing the evidence in depth. Threatened subjects did not deny that some aspects were strong, but they did not consider them to be as strong as did nonthreatened subjects. Thus bias was constrained by plausibility."
"Kunda (1987) found . . . attempted to control for the possibility that the effects were mediated by prior beliefs rather than by motivation. Subjects read an article claiming that caffeine was risky for women. Women who were heavy caffeine consumers were less convinced by the article than were women who were low caffeine consumers. No such effects were found for men, who may be presumed to hold the same prior beliefs about caffeine held by women, and even women showed this pattern only when the health risks were said to be serious"
"Lord, Ross, and Lepper (1979) . . . preselected subjects who were for or against capital punishment and exposed them to two studies with different methodologies, one supporting and one opposing the conclusion that capital punishment deterred crime. Subjects were more critical of the research methods used in the study that disconfirmed their initial beliefs than they were of methods used in the study that confirmed their initial beliefs. The criticisms of the disconfirming study were based on reasons such as insufficient sample size, nonrandom sample selection, or absence of control for important variables; this suggests that subjects' differential evaluations of the two studies were based on what seemed to them a rational use of statistical heuristics but that the use of these heuristics was in fact dependent on the conclusions of the research, not on its methods. Having discounted the disconfirming study and embraced the confirming one, their attitudes, after exposure to the mixed evidence, became more polarized. Because subjects were given methodological criticisms and counterarguments, however, the study did not address whether people would spontaneously access differential heuristics. In fact, after exposure to a single study but before receiving the list of criticisms, all subjects were swayed by its conclusions, regardless of their initial attitudes. This suggests further that people attempt to be rational: They will believe undesirable evidence if they cannot refute it, but they will refute it if they can. Also, although the differential evaluation of research obtained in this study may have been due to subjects' motivation to maintain their desired beliefs, it may also have been due to the fact that one of these studies may have seemed less plausible to them because of their prior beliefs.""
"B. R. Sherman and Kunda (1989) used a similar paradigm to gain insight into the process mediating differential evaluation of scientific evidence. Subjects read a detailed description of a study showing that caffeine either facilitated or hindered the progress of a serious disease. Subjects motivated to disbelieve the article (high caffeine consumers who read that caffeine facilitated disease, low caffeine consumers who read that caffeine hindered disease) were less persuaded by it. This effect seemed to be mediated by biased evaluation of the methods employed in the study because, when asked to list the methodological strengths of the research, threatened subjects spontaneously listed fewer such strengths than did nonthreatened subjects. Threatened subjects also rated the various methodological aspects of the study as less sound than did nonthreatened subjects. These included aspects pertaining to inferential rules such as those relating sample size to predictability, as well as to beliefs about issues such as the validity of self-reports or the prestige of research institutions."
Alternative Explanations
"For example, the memory-listing findings may reflect a response bias rather than enhanced memory accessibility. Thus the enhanced tendency to report autobiographical memories that are consistent with currently desired self-concepts may have resulted from a desire to present oneself as possessing these self-concepts. And the reaction time findings may have resulted from affective interference with speed of processing rather than from altered accessibility. However, neither of these alternative accounts provides a satisfactory explanation of the full range of findings. Thus self-presentational accounts do not provide a good explanation of reaction time findings, which are less likely to be under volitional control. And affective interference with speed of processing does not provide a good explanation for changes in overall levels of recall."
"In a similar vein, the tendency to evaluate research supporting counterattitudinal positions more harshly (Lord et al, 1979) is not affected when subjects are told to be objective and unbiased, but it is eliminated when subjects are asked to consider what judgments they would have made had the study yielded opposite results (Lord et al, 1984). Taken together, these findings imply that people are more likely to search spontaneously for hypothesis-consistent evidence than for inconsistent evidence. This seems to be the mechanism underlying hypothesis confirmation because hypothesis confirmation is eliminated when people are led to consider inconsistent evidence. It seems either that people are not aware of their tendency to favor hypothesis-consistent evidence of that, upon reflection, they judge this strategy to be acceptable, because accuracy goals alone do not reduce this bias. Thus the tendency to confirm hypotheses appears to be due to a process of biased memory search that is comparable with the process instigated by directional goals. This parallel lends support to the notion that directional goals may affect reasoning by giving rise to directional hypotheses, which are then confirmed; if the motivation to arrive at particular conclusions leads people to ask themselves whether their desired conclusions are true, normal strategies of hypothesis-testing will favor confirmation of these desired conclusions in many cases. One implication of this account is that motivation will cause bias, but cognitive factors such as the available beliefs and rules will determine the magnitude of the bias."
To What Extent Is the Process Motivated?
"In its strongest form, this account removes the motivational "engine" from the process of motivated reasoning, in that motivation is assumed to lead only to the posing of directional questions and to have no further effect on the process through which these questions are answered. If this were true, many of the distinctions between cognitive, expectancy-driven processes and motivated processes would break down. Both directional goals and "cold" expectancies may have their effects through the same hypothesis-confirmation process. The process through which the hypotheses embedded in the questions "Is my desired conclusion true?" and "Is my expected conclusion true?" are confirmed may be functionally equivalent. Indeed, there are some interesting parallels between motivated reasoning and expectancy confirmation that lend support to this notion. For example, B. R. Sherman and Kunda (1989) found that implausible evidence (namely, that caffeine may by good for one's health) was subjected to elaborate and critical scrutiny that was comparable to the scrutiny triggered by threatening evidence. Similarly, Neuberg and Fiske (1987) found that evidence inconsistent with expectations received increased attention comparable in magnitude to the increase caused by outcome dependency. Finally, Bem's (1972) findings that the beliefs of observers mirrored those of actors in dissonance experiments suggest that observers' expectations and actors' motivations may lead to similar processes."
"For example, when motivation is involved, one may persist in asking one directional question after another (e.g, "Is the method used in this research faulty?" "Is the researcher incompetent?" "Are the results weak?"), thus exploring all possible avenues that may allow one to endorse the desired conclusion. It is also possible that in addition to posing directional questions, motivation leads to more intense searches for hypothesis confirming evidence and, perhaps, to suppression of disconfirming evidence (cf. Pyszczynski & Greenberg, 1987). If motivation does have these additional effects on reasoning, the parallels between motivated reasoning and expectancy confirmations gain new meaning. They suggest that seemingly "cold" expectancies may in fact be imbued with motivation. The prospect of altering one's beliefs, especially those that are well established and long held, may be every bit as unpleasant as the undesired cognitions typically viewed as "hot : It was this intuition that led Festinger (1957) to describe the hypothetical situation of standing in the rain without getting wet as dissonance arousing."