Saturday, October 29, 2011

Constraints: Trade-Off Reasoning

Tetlock, P. E. (2000). Coping with trade-offs: Psychological constraints and political implications. In A. Lupia, M. D. McCubbins, & S. Popkin (Eds.), Elements of reason: Cognition, choice, and the bounds of rationality (pp. 239–263). Cambridge, UK: Cambridge University Press.

Philip Tetlock’s research has long explored trade-off reasoning. He has identified three major categories of constraints on trade-off reasoning:

Incommensurability: People have difficulty making a trade-off when it involves outcomes that are valued in qualitatively distinct ways. People don’t come with ready-made scales for comparing, for example, the opportunity to further their education versus the cost of paying off student loan debt. When asked to make these decisions, people’s choices are influenced by how the decision is presented and fail to display transitivity or consistent preferences.

Emotional responses: Emotion-laden trade-offs, where a highly valued outcome is surrendered or a highly disvalued outcome accepted, often lead people to “spread the alternatives” by emphasizing the worth of the outcome that they chose and derogating the worth of the outcome that they surrendered. In part because losses can seem more important than gains, coping with losses can lead to a denial of losses.

It is not always possible to spread the alternatives. In this case, it may feel extremely uncomfortable, even disgusting, to acknowledge that one has made a trade-off. This “constitutive incommensurability” occurs when one would have to, for example, find a way to decide between two immoral outcomes. One may want to choose a crude metric by which to compare the immorality of these outcomes, the number of lives lost, for example, but feel extremely uncomfortable doing so.

Fear of criticism: We know that other people will be less motivated to “spread the alternatives” in a way that favors our decision than we are. In fact, if other people are aware that an individual is making moral trade-offs, they may judge him to be immoral or insane, feel anger, contempt, and disgust, and seek to punish or ostracize the individual.

It should be noted that people may happily make a trade-off when the fact of the trade-off is disguised, or when the trade-off is between maintaining the status quo and pursuing a positive, but costly, outcome.

It should also be noted that people may sometimes admire an individual who is willing to make hard trade-offs. In America, for example, the “thoughtful statesman” script values a politician who is able to balance equally valid interests and make the best, albeit difficult, choice. In America, however, the “opportunistic vacillator” script also exists, in which a politician is thought to “flip flop” on an issue just to win votes. Likewise, a leader who refuses to compromise and displays a rigid mindset may be portrayed as either a demagogue or a principled leader (Tetlock, 2000). In one experiment by Tetlock et al., only participants who were primed with the “thoughtful statesman” script favored speeches that acknowledged trade-offs. Participants receiving neutral primes and participants receiving an anti-trade-off prime responded with similar negativity to speeches acknowledging trade-offs.

However, where a debate is highly polarized, acknowledging trade-offs may lead to negative responses regardless of the position taken. In a study on attitudes towards abortion, Tetlock et al. found that supporters of the fictional politician’s position were galled that he acknowledged the legitimacy of an alternate perspective and the opponents of his position were galled that he could acknowledge the legitimacy of their views and take an opposite stance.

For an application of this theory, consider responses to President Obama's speech seeking to detail common ground between proponents and opponents of abortion access.

Sunday, October 9, 2011

Worldview and Epistemic Motives


Golec and Van Bergh (2007) present evidence that the Need for Closure moderates political conservatism but that the relationship is mediated by adherence to a traditional or modern personal worldview. The authors argue that both types of worldview can satisfy needs for closure by presenting values “as absolute rather than relative . . . [and assuming] a definite rather than approximate nature of truth” (Golec & Van Bergh, 2007, p. 587). The traditional worldview, in this schema, “is based on a belief in a single, unshakeable truth of a transcendental, nonhuman character, not susceptible to rational verification or evaluation” (Golec & Van Bergh, 2007, p. 590).

While one might quibble with their emphasis on a single “nonhuman character,” given the preponderance of polytheist, agnostic, or atheist belief systems that assert absolute values and absolute truth “not susceptible to rational verification,” their distinction remains useful. They are able to delineate a “traditional” worldview from a “modern” worldview, where truth is absolute but “verifiable and legitimized by rational, scientific means” (Golec & Van Bergh, 2007, p. 590), and from a “postmodern” worldview, where “‘Truths’ are perceived as fragmentary and partial” (p. 591).

In their study, Polish participants who adhered to a traditional worldview tended, when experiencing a need for closure, to be more conservative—as measured by questions used in a Polish public opinion poll, questions which the authors claim are highly reliable. Those who adhered to a modern worldview tended to be less conservative. Golec and Van Bergh (2007) argue that the need for closure could lead to a preference for traditional and modern worldviews over postmodern ones. The Worldview Model argues that both traditional and modern worldviews could moderate need for closure.

Before the discussion continues, it should be noted that one need not claim that traditionalists are necessarily conservative in the modern political sense. Golec and Van Bergh’s Polish traditionalist participants were conservative, but some traditionalists may be radical in their social goals, seeking to establish a society based on values that include tolerance of religious differences and intolerance of inequality.  Even when this is the case, however, the Worldview Model suggests that a traditional worldview should increase the need for closure. 

Traditional worldviews could motivate a need for cognitive closure by suggesting that uncertainty is unreasonable, surprising, and must be quickly resolved, at least with regard to transcendental truths and social values. The traditional worldview provides many heuristics by which such closure could be achieved, including a simple surrendering of judgment to an authority with esoteric knowledge of transcendental truths. However, the Worldview Model predicts that a traditional worldview would, under certain circumstances, provide more complex arguments that could lead to a resolution of uncertainty.

Some circumstances demand that closure be achieved through elaboration—the careful consideration of information in order to achieve stable understanding (Petty, Cacioppo, Strathman, & Priester, 2004). Elaboration occurs when people are able to grapple with new information and are motivated to do so. One source of motivation is ambivalence, explicit or implicit (Johnson, Petty, & Brinol, 2011). Explicit ambivalence refers to a conflict between consciously held beliefs; implicit ambivalence refers to a conflict between automatic evaluations and consciously held beliefs. Implicit ambivalence can occur, for example, when an individual has many automatic prejudiced attitudes but explicitly endorses non-prejudiced ones. Despite the traditional worldview’s emphases on resolution and a transcendental order, implicit ambivalence may occur, for example, when implicit death anxiety (Jost et al., 2003a) conflicts with an explicit belief that death is merely an end to earthly life and a transition to eternal paradise. Implicit ambivalence could also occur when explicit sympathy for the victim of a crime conflicts with implicit needs to believe in a stable, moral universe (Furnham, 2003).

The last two examples of conflict could take the form of implicit ambivalence, but they could also take the form of explicit ambivalence. In either case, they should motivate elaboration. Traditional worldviews may offer a transcendental truth, but the very fact of that truth can be in conflict with individual or even community experience. Disease, natural disasters, invasions by other cultural groups, and disagreements within the community all may prove inherently threatening to the idea of a natural, ontologically based order.

In these cases, although closure needs are high (and needs to avoid closure are very low), the task demands are such that integrative complexity should result. Integrative complexity can result from conflict between two equally strong values (Tetlock, 1986). While liberals, in general, demonstrate greater integrative complexity than conservatives, they may show less integrative complexity on certain issues, such as gay marriage, where little value conflict is perceived (Critcher, Huber, Ho, & Koleva, 2009; Jost et al., 2003b; Joseph, Graham, & Haidt, 2009).

Integrative complexity could result from those situations in which traditionalists who are high in need for closure cannot find closure by relying on the first relevant set of values that comes to mind—for example when an automatic attitude conflicts with the traditional worldview. Depending on the strength of the need for closure, however, traditionalists may employ various strategies for avoiding having to produce complex responses—including categorizing issues in terms of single value dimensions whenever possible and automatically shifting their stated values in response to situational influences in a way that quickly resolves feelings of conflict and uncertainty (Critcher et al. 2009).

The extent to which these strategies would be effective would vary with the situation. Of the previously considered scenarios that could encourage elaboration and integrative complexity, conflict within the community may be both the most common and the most likely to produce a response higher in integrative complexity. People who are high in need for closure tend to seek conformity and adhere to cultural norms, but only when they are uncertain, as Fu, Morris, Lee, and Hong (2007) demonstrated with American and Chinese participants. Otherwise, they tend to fall back on preexisting beliefs, especially beliefs made more accessible through priming (Fu et al., 2007; Kruglanski, Webster, & Klem, 1993).

Where community conflicts are generated by fundamental disagreement, resolving them would require either the repression of dissent or novel thinking. Agreeing to disagree would not be an option for traditionalists who are members of the same community, although communities might divide into different sects. Once conflicts are resolved, however, the new perspective should become standard and be used to resolve or avoid future conflicts. This perspective may even become the basis of the next generation of closure-granting automatic associations and heuristics.
Last, it is possible that traditionalists could conflict in practice but not in philosophy.

While some worldviews may suggest that there is only one right way of doing anything, others may allow for variation as long as it does not challenge a fundamental order. Traditionalists, then, should vary depending on their culture in the extent to which their absolute truth moderates a nonspecific need for closure—leading to a general resistance to change and uncertainty—and the extent to which it only moderates a specific need for closure on what the absolute truth is and what absolute values should be held. The latter traditionalists may have the same level of integrative complexity as modernists when it comes to justifying their particular actions. The Worldview Model would predict that revolutionary traditionalists, traditionalists who seek to change the social order, come from cultures that tolerate political uncertainty while being intolerant, for example, of religious uncertainty.

Interestingly, when high in need for closure, Critcher et al.’s (2009) conservative participants were more likely to change their self-reported attitudes towards abortion in response to an issue-relevant explicit prime—writing an essay either on the value of life or the value of choice. Self-reported liberal participants were less responsive to priming when high in need for closure. This suggests that worldview differences between liberals and conservatives influence how individuals respond to need for closure. One source of these differences could be variation between the traditional and modern worldviews, if we assume that, in this sample, liberalism correlates with modernism and conservatism with traditionalism.

Relative to the modern worldview, the traditional worldview tolerates more mystery. Truths are transcendental and an individual does not expect to have specific answers to all of life’s perplexities. In the modernist’s world, each individual expects to be able to achieve closure only after an examination of the evidence—a process that could be cognitively intensive. While closure is desired, it is not, necessarily, expected. Where it is demanded, the modernist must study her environment, elaborating information until stable understanding can be reached. Alternatively, the modernist might rely on experts—individuals who share the same goals but are better able to achieve them. In that case, however, she must still take responsibility for choosing her experts wisely. The liberal participants in Critcher et al.’s (2009) study were just as high in need for closure but were less susceptible to automatic, primed evaluations, perhaps because they habitually regulate against resolving uncertainty based on a gut instinct.

An alternate explanation could be that these liberal participants held on to previously established beliefs in the face of new evidence because those beliefs were the result of a more elaborative process and, perhaps, were higher in integrative complexity. Not only is the modern world inherently more uncertain, with that uncertainty being an obstacle rather than a necessity, it is also amoral. Moral judgments are supposed to be the products of deliberate reasoning and not, at least in Critcher et al.’s (2009) American sample, based on gut instinct.

At the same time, the modernist’s behavior may not always conform to his ideal. For example, he may act on instincts, which he expects to be perfectible and thus potentially imperfect, when he is under cognitive load. Employing moral foundations theory, Joseph et al. discussed a study by Skitka, Mullen, Griffin, Hutchinson, and Chamberlin (2002), in which liberals were asked to decide whether to allocate funds to subsidize the medical treatment of AIDS patients who were believed to be responsible for their illness. Skitka et al (2002) found that liberals tended to deny funding when under cognitive load.

Cognitive load should, as discussed previously, contribute to closure needs (Kruglanski, 2004, p. 74). In this case, Skitka et al. (2002) argue that their participants were more likely to demonstrate the fundamental attribution error when under cognitive load. Western participants have been shown to make more individual-based attributions (and Eastern participants more situation-based attributions) when experiencing an elevated need for closure (Kruglanski, 2004, p. 73). Joseph et al. (2009) argue that liberals who were not under cognitive load sought a resolution between implicit concerns for purity and explicit concerns for harm, elaborating in the face of implicit ambivalence. Under cognitive load, automatic concerns for purity may have dominated decision-making.

In this case, a modernist participant may act similarly to a traditionalist participant, but with different justifications.  The traditionalist may be more motivated to incorporate automatic reactions into her worldview and not to see conflict between automatic and deliberate judgments than the modernist, but the modernist may also decide to rationalize his judgments in order to think of himself as a rational person. The traditionalist may, alternatively, reject her judgment, considering it to be morally flawed. The modernist may reject his judgment, considering it to be irrational.

The Worldview Model cannot predict the exact conditions under which a modernist participant would reject her automatic evaluation and a traditionalist participant would accept his. The modernist’s realization that she is automatically stereotyping, for example, can trigger self-regulatory effort if she does not explicitly endorse these stereotypes (Richeson & Trawalter, 2005). Over time, automatic activation of stereotyping can be inhibited by the automatic activation of competing goals (Moskowitz & Li, 2011), and her automatic attitudes can grow closer to her explicitly endorsed ones. She can, in other words, reject her automatic reaction and seek to become more “perfect.” Under high need for closure, she may be more likely to stereotype (Kruglanski, 2004, p. 83). However, this failure to adhere to explicit norms may later motivate self-regulation.

Beyond stereotyping, modernists may also have to contend with automatically-activated traditional beliefs, beliefs which are common in most societies for historical and, potentially, social cognitive reasons. Work by Kay and colleagues suggests that traditionalists and modernists will both deny randomness and assert control. Their research, as well as that of Landau and colleagues (2010), suggests that in more modernist societies individuals may prefer to believe that the world is structured and non-random even if they do not have specific evidence for these beliefs.

Two strong sources of perceptions of order or randomness in the world are beliefs in supernatural forces and beliefs in the government. In one experiment, Kay, Shepherd, Blatz, Chua, and Galinsky (2010) manipulated perceptions of the stability of government in both Malaysian and Canadian participants and found that perceived instability produced a subsequent increase in belief in a controlling God. Affirming government decreased belief in a controlling God among Canadian participants but did not affect whether participants reported that the concept of God helped them to find meaning in their own lives. Using a representative sample recruited online, Kay and colleagues then presented articles suggesting that cutting-edge physics indicated either that science could explain all events or that there were some events that might be influenced by a supernatural force. Participants in the first condition showed relatively more support for the political system than participants in the second condition.

It would be more remarkable if belief in God could be manipulated among participants who absolutely did or absolutely did not believe. However, many of the participants in Kay et al.’s (2010) study could probably draw on both more traditionalist and more modernist beliefs. Some participants may have endorsed modernist beliefs more strongly, but beliefs in God may have been both accessible and considered potentially viable. The modern worldview, after all, does not necessarily deny God; it only claims that the existence of God can be rationally evaluated. Many people may take tentative stances that are responsive to situational motivations, at least in the short-term.

A modernist who absolutely did not believe in God may have responded to the need to perceive nonrandomness and stability in his world by supporting the government. It should be noted that, in another study by Kay (Kay, Moskovitch, & Laurin, 2010), participants who were able to misattribute the arousal caused by being primed with reminders of randomness to an herbal supplement showed no significant change in their belief in supernatural sources of control. This is further evidence that the arousal produced by perceived randomness must be interpreted by the individual before it affects worldview. Both a traditional and a modern worldview may equally well satisfy the motivation to perceive non-randomness, depending on situational information of the sort that Kay and colleagues provided their participants.

The relationship between the need for closure and Kay and colleagues’ other, potentially related, need to perceive non-randomness is not absolutely clear. However, it is possible that the motivation to perceive non-randomness constitutes a distinct source of the need for closure. The need for closure scale, which is divided into five facets, each of which is defined as a potential source of need for closure, includes three facets that could relate to the motivation to perceive non-randomness—preference for order, preference for predictability, and discomfort with ambiguity (Kruglanski, Atash, DeGrada, Mannetti, Pierro, & Webster, 1997).

Under high need for closure that originates from cognitive load, participants who are motivated to perceive non-randomness may increase their belief in a controlling God or their belief in a strong government, whichever was more accessible. This interaction of needs could explain why some conservatives come to accept the status quo, even when that status quo includes beliefs with modernist origins (Jost et al., 2009). Conservatives who are relatively higher in nonspecific needs for closure may accept any belief that satisfies their need to perceive non-randomness.

The Need for Closure and Political Ideology


The need for cognitive closure, as measured by the Need for Closure Scale:
  • Has been extensively researched, in both political and non-political contexts (Kruglanski, 2004).
  • Is moderated by the environment (Kruglanski, 2004; Orehek, Fishman, Dechesne, Doosje, Kruglanski, Cole, Saddler, & Jackson, 2010).
  • Moderates politically-relevant beliefs and behaviors, with this moderation being mediated by cultural context (Jost, Napier, Thorisdottir, Gosling, Palfai, & Ostafin, 2007; Orehek et al., 2010; Chirumbolo & Leone, 2007; Jost, Krochik, Gaucher, & Hennes, 2009; Schoel, Bluemke, Mueller, & Stahlberg, 2011; Peterson, Smith, Tannebaum, & Shaw, 2009; Kossowska & Van Hiel, 2003; Stalder, 2007; Fu, Morris, Lee, Chao, Chiu, & Hong, 2007; De Zavala, Cislak, & Wesolowska, 2010).

The need for cognitive closure can fulfill these multiple roles because it is fundamental to task achievement, allowing the need for closure to permeate both individual and political life. 

Webster and Kruglanski’s Need for Closure scale measures two epistemic motives using a bipolar scale. These motives are the need for cognitive closure and the need to avoid cognitive closure. The need to avoid cognitive closure can be described as a motivation to believe that the world is ambiguous and complex. Ambiguity and complexity are desired because they provide further opportunities for thought and reflection. People who are high in a need to avoid closure may find certainty constraining or generally aversive and may prefer to stop thought processes before they reach any definite conclusions (Kruglanski, 2004, pp. 6–13).

The need for closure, in contrast, is a need to live in a world that is unambiguous and more easily understood. People who are high in a need for cognitive closure typically attend to information that can easily be incorporated into existing schemas. They typically seek to confirm existing beliefs and may continue their analysis until they have supported these beliefs. Under the right conditions, then, a person who is high in need for cognitive closure may process more information than a person who is low in need for cognitive closure. However, conclusions will tend to be biased in favor of the status quo. Over time, people who are habitually motivated to seek cognitive closure may come to rely on more general, less individuated schemas to which more information can be easily assimilated. Also, a person who is high in need for closure may sometimes reject weakly held attitudes more quickly if those attitudes hinder their sense-making process (Kruglanski, 2004, pp. 14–20).
Both the need to avoid closure and the need for closure may occur as specific, goal-directed needs.

These needs, however, can be somewhat difficult to pinpoint. For example, if a self-identified liberal reads the 2006 Pew Research Center report “Are We Happy Yet?” and learns that Republicans tend to report being happier than Democrats, that person may experience a range of specific responses that represent motives at both ends of the Need for Closure scale. For example, she may wish to affirm the methodology employed by the researchers (a need for cognitive closure with regard to the methodology) but argue that reality is more complicated than data can capture (a need to avoid closure because of the aversive implications of the results). If another self-identified liberal were to read that study and wished to challenge the results, he might begin analyzing the report in search of methodological flaws. Because this search has a specific goal—a belief that the methodology is flawed and that the results can be rejected—this search would be in response to a specific need for closure.

These needs, then, can shift as information processing continues. The first liberal might later be dissatisfied with her avoidance of closure and instead want to be able to specifically challenge the results. The latter liberal may find his need for closure satisfied by the discovery that the researchers employed an outdated or less precise statistical measure but then later require even stronger evidence against the results. If the results turn out to be well-supported, he may wish to avoid closure on the issue the same way that the first liberal initially did, arguing that methodology is sound but the results fail to capture the big picture. Alternatively, he may become ambivalent, wishing to reject the results but, for the time being, being unable to do so. 

While specific needs may vary considerably over the course of a thought-process, nonspecific needs to avoid closure and nonspecific needs for closure may vary as well. These general epistemic styles do not target the ambiguity or lack of ambiguity in specific situations but represent the approach or avoidance of ambiguity in general. If a person is under cognitive load, for example, she may be high in a nonspecific need for closure, seeking to draw rapid conclusions and make quick decisions about her environment with little need to reach specific conclusions or decisions (Kruglanski, 2004, p. 74).

Some people may habitually experience a high nonspecific need for closure. For example, following a traumatic event, such as a terrorist attack, a medium-term elevation in need for closure may be correlated with other psychological changes, both more general and more topical, including “enhanc[ed] ingroup identification; interdependence with others; outgroup derogation; and support for tough and decisive counterterrorism policies and for leaders likely to carry out such policies” (Orehek et al., 2010).

This last example, with its mix of more general and more specific psychological outcomes of an elevated need for closure, draws attention to the difficulty of distinguishing specific from non-specific closure needs. The best evidence of non-specific closure needs would be evidence for the need to avoid or approach closure across a variety of domains. For example, if a person experiencing an elevated need for closure due to trauma became less tolerant of disorganization in his home and became less tolerant of his boss’s confusing instructions one could infer that his need for closure was nonspecific. Even if all of the evidence for need for closure comes from a single domain, the need to understand a situation, regardless of whether that situation is understood to be attractive or aversive, would also be evidence for a nonspecific need for closure.

The needs (whether specific or nonspecific) to approach or avoid closure are correlated with a variety of measures of political orientation, including measures of cultural conservatism, economic conservatism, party loyalty, and attitudes towards specific policies (Jost et al., 2003a). Both specific and non-specific needs for closure may interact with cultural context (Fu et al., 2007) and with salient, task-relevant primes (Golec & Van Bergh, 2007). Culture, potentially, can moderate need for closure, although there is limited experimental evidence to date.

Cognitive Constraints: Consistency Motives


Nisbett and Wilson, in their 1977 article, explore the discrepancy between the explanations that psychologists give for participant behavior and the explanations that participants themselves give.  For example, in a standard cognitive dissonance experiment participants adjust their attitudes in line with their recent behaviors but tend to be unaware that they have done so.  I don’t know to what degree this effect would vary depending on attitude strength.   In any case, the standard psychological explanation, based on a program of research involving multiple researchers and myriad experiments, is that participants, faced with behavior that is discordant with their previous attitudes, change their attitudes.  By changing their attitudes participants resolve this discord.  If participants can attribute their behavior to an experimental manipulation—i.e. the experimenter makes them do it—they will tend not to experience attitude change.  Attitude change, in other words, seems to arise as a self-focused process, a way of squaring the sense of self with the actions of that self. 

Unfortunately, as Nisbett and Wilson illustrate using Goethals and Reckman’s conceptually-related 1973 experiment on attitudes towards bussing in pursuit of school integration, participants rarely report being aware of this change.  Reviewing the original article, I note that the pro- and anti-bussing participants were, on average, equally confident in their attitudes.  Both groups of participants changed their attitudes in the direction of the opposite position and, when asked, both groups tended to underestimate the extremity of their previous support or opposition.  All of the participants were high school students and the article supplies no data on the personal relevance of this issue to these students.  Whatever the initial strength of these attitudes, this experiment, among many others, is interesting because the recall of past attitudes is biased by present attitudes. What could explain this bias?

It is likely that these attitude-behavior consistency and, perhaps, self-concept consistency motives operate implicitly.  The existence of implicit goals is well-established.  They can be primed (Bargh & Chartrand, 2000) and they can shield a participant from undesirable conscious experiences, such as being or acting prejudiced (Moskowitz & Li, 2011).  It is also likely that these implicit processes could operate independently of lay theories of the self and motivation, unless a conscious effort is made to act in accordance with these theories.   Given that teasing apart these implicit processes has taken thousands of man-hours of controlled-laboratory experiments on the part of psychologists, it is not surprising that lay theories do not always incorporate specific analogs to what we, as psychologists, believe is really going on. 

Beyond this distinction, however, as Cohen discussed in his 2003 article, citing among others Pronin, Lin, and Ross’s research on bias blindspots, people are more consistent and more objective when judging others than when judging themselves.   Even if psychologists were to introduce implicit processes into lay epistemology, as has, of course, occurred to a certain degree, those very processes may disrupt the accurate application of the epistemology when a person is judging himself.   

However, it is still, to my knowledge, an open empirical question as to whether these implicit motives could be manipulated in the laboratory, through self-regulation, or through larger, natural-experiment-type patterns of cultural regulation.  

Epistemic Motives

Posts tagged with the "Epistemic Motives" label concern state and trait motivations that influence cognition.

Belief in a Just World as Cognitive Constraint

Lerner and Miller (1978) discuss Belief in a Just World (BJW) in terms of
  • The belief that one lives “in a world where people generally get what they deserve.” 
  • The belief that the “physical and social environment . . . [are] stable and orderly.” (pp. 1030–1031). 
Here, some people may have beliefs about causality that make them feel, generally, that they can influence their personal and social environment (Rotter's Locus of Control Scale). Other people perceive greater randomness in this psychosocial environment. Unfortunately, the Locus of Control Scale seems to conflate perceiving randomness in the environment with perceiving the difficulty of attaining positive outcomes and avoiding negative ones, rather than being a simple measure of perceived environmental uncertainty. If it emphasized the possibility of good luck, to what degree would its correlation with BJW be reduced (Furnham, 2003)? Luck is mentioned in the scale, but consistently in terms of lack of control over whether valued outcomes are achieved rather than their spontaneous attainment: "Without the right breaks, one cannot be an effective leader" and "Getting a good job depends mainly on being in the right place at the right time," for example.

Scales measuring Belief in a Just World examine beliefs that life, in some domains, is generally just or unjust. Justice and injustice can be immanent (present events are just or unjust) and ultimate (justice or injustice is an ultimate outcome of world events). Measured domains include the personal, interpersonal, and sociopolitical and beliefs in a just or unjust world can vary across each domain. Many of the benefits of belief in a just world are within the personal and interpersonal domain.

Belief in a Just World is correlated with greater long-term goal striving, a decreased sense of personal risk, and greater subjective and objective well-being. It seems that believing that one’s actions will generally be rewarded increases one’s willingness to commit resources. Or, alternatively, believing that one’s actions will sometimes go without reward decreases willingness to invest resources. I haven’t yet read research distinguishing between these perspectives. It also seems that those who are high in Belief in a Just World tend to have a decreased sense of environmental threat (moderated, I assume, by personal evaluations of virtue). This decreased threat could encourage a promotion focus, encouraging greater risk-taking. Risk-taking can be beneficial, maximizing outcomes, at least in environments that have a certain degree of safety. Last, personal well-being could be influenced not only by the greater rewards that come from greater resource commitment, but by a worldview that decreases the likelihood of ruminating on those negative outcomes that one cannot control and that encourages seeing these negative outcomes as in some way less negative or more positive.

Belief in a Just World is alternatively praised for being correlated with subjective measures of personal well-being and criticized for being associated with victim blaming and derogation. Both of these practices are moderated by several variables, in both the short and longer terms, and the relationship between BJW and political orientation has proved subtle (Furnham, 2003).

Cognitive Constraints: Worldview

Research on one epistemic motive, the need for cognitive closure (and the related need to avoid cognitive closure), suggests that Jost et al. (2003a) are remiss in ignoring the potential influence of culture on epistemic and other motives. It should be noted that Jost et al. (2003a) do not claim to have a complete model, and that the coauthors themselves have presented variations on this model (Jost, Federico, & Napier, 2009; Kruglanski, 2004). I argue that a more complete model would take as its starting point personal worldview, as defined by Golec and Van Bergh (2007). The authors divide personal worldviews into three categories: the traditionalist, the modernist, and the postmodernist.

The traditionalist should believe in an absolute, transcendental truth that cannot be known through rational inquiry. The modernist should believe in an absolute truth that can only be known through rational inquiry. The postmodernist should believe that no absolute truth can be known, even in theory.

One assumption of the Worldview Model is that cultures and individuals will have a strong orientation towards a single worldview. This assumption has not been empirically tested but appears logical, since each worldview fundamentally contradicts the others. However, the model also suggests that individuals may actively choose between the structuring logics of each worldview when defending existing beliefs or forming new ones. When making these choices, individuals may defend beliefs using a worldview different from the one they generally prefer. They should be especially likely to ignore the conflict between worldviews when their need for closure is elevated.

Finally, the Worldview model predicts that worldview will be inherited and that participating in any particular culture will socialize individuals into a particular, dominant, worldview.


The Worldview Model predicts a moderating effect of worldview on the need for cognitive closure:
  • Individuals in traditionalist cultures should have higher needs for closure when considering information that could challenge their transcendental, absolute truth.
  • Individual cultures should vary as to what kinds of information could challenge this truth.
  • Individuals in modernist cultures should have higher closure needs across a variety of tasks, with modernist individuals believing that they have a personal responsibility to discover the truth.  Modernist individuals may, however, when faced with individual uncertainty, have elevated needs to avoid closure. 
  • In societies where individuals may have traditionalist or modernist orientations, both traditionalists and modernists will offer alternate interpretations of events in a variety of domains, challenging each other’s worldviews and elevating each other’s needs for specific closure.
  • Individuals may temporarily shift from traditionalist to modernist orientations when doing so fulfills strong epistemic needs.  If these epistemic needs are consistently elevated, longer-term changes in worldview-orientation could result.
  • Some beliefs may be so widely held in a society that individuals refuse to challenge them, even if they are based in a rejected worldview. Traditionalists, for example, may embrace the concept of free will as a transcendental truth, while modernists may accept the concept without examining it closely, or may examine it closely, reject it, and yet choose to act as if they believe in it. Both modernists and traditionalists should have elevated needs for closure about such foundational beliefs. Modernists should seek to avoid closure when presented with arguments against free will that, based in the modernist worldview, would normally be highly acceptable.
  • Individuals in a postmodernist society should have an elevated need to avoid closure, except when defending themselves against modernist or traditionalist authorities.

Empirical tests of this model have not been conducted.  

Cognitive Constraints: Moral Convictions

Moral convictions may be correlated with particularly rigid mindsets that give the individuals who hold them a sense of objectivity. Tetlock et al. (2000), for example, "found that people resisted consideration of counterfactual reasoning with respect to their moral beliefs but were willing to engage in this kind of reasoning in amoral contexts" (Skitka et al., 2005).

Cognitive Constraints: The Unthinkable and the Undesirable


The following entries, and any future entries tagged with the phrase “cognitive constraints,” concern beliefs that make certain thoughts so undesirable or so nonsensical as to be unthinkable. 

Friday, April 29, 2011

Cognitive Constraints: Stereotyping Increases and Decreases Persuasion Even For Stereotype-Irrelevant Messaging

Messages attributed to stigmatized sources—here defined as members of out-groups that, historically, have been the targets of prejudice—have different persuasive outcomes than those attributed to non-stigmatized sources (Mackie, Worth, & Asuncion, 1990; Petty, Fleming, & White, 1999, 2005; Livingston & Sinclair, 2008). These outcomes have real-world consequences for politics, juror responses, and decision-making dynamics in the workplace. When judging legal cases (Sargent & Bradfield, 2004) or when considering a coworker's contribution to workforce planning (De Dreu & West, 2001), even individuals who report being low-prejudiced will treat a message attributed to a stigmatized source differently from a message attributed to an in-group source. In some cases, the message recipient may think more about the message and take its content more seriously when it is attributed to a stigmatized source (Petty, Fleming, & White, 1999, 2005). In other cases, however, the message recipient may, knowingly or not, ignore or counter-argue that message (Mackie, Worth, & Asuncion, 1990). Individual belief about personal level of prejudice, then, is an insufficient predictor of prejudiced behaviors.

Persuasion researchers have not yet directly examined the potential moderating effects of motivation to act without prejudice on the processing of a message attributed to a stigmatized source. Researchers have, however, examined the different moderating effects of levels of explicit prejudice and, very recently, of implicit and explicit prejudice on the processing of these messages (Petty & Brinol, 2009; Petty, Brinol, & Johnson, 2010). Level of prejudice influences the extent to which an individual is motivated to think about and respond to the message, with lower-prejudiced individuals often being more motivated to process a message attributed to a stigmatized versus a non-stigmatized source (Petty, Fleming, & White, 1999, 2005). However, this effect varies by the personal relevance of the message and the degree to which it is counterattitudinal (Livingston & Sinclair, 2008). When a message is both relevant and threatening, even a low-prejudiced recipient may reject the arguments of a stigmatized versus a non-stigmatized source.

Plant and Devine’s Internal Motivation to Respond Without Prejudice Scale (IMS) and External Motivation to Respond Without Prejudice Scale (EMS) (Plant and Devine, 1998) are well-established and versatile measures of motivations to act without prejudice (Butz & Plant, 2009). Correlated differently with implicit and explicit prejudices (Devine, Amodio, Harmon-Jones, & Vance, 2002) as well as different self-regulation strategies (Legault, Green-Demers, Grant, & Chung, 2007), these measures are powerful indicators of implicit and explicit prejudiced attitudes and of different responses to these attitudes. No research to date has directly tested whether IMS and EMS levels moderate processing of a message attributed to a stigmatized source, despite the fact that existing research on these scales may imply a number of moderating effects.

Current research illustrates several ways in which a stigmatized message source can affect message processing. For example, Livingston and Sinclair (2008) found that European Canadian research participants derogated the source of a threatening message more if that source was a member of a stigmatized out-group, in this case an Aboriginal Canadian. In that study, European Canadian students listened to a message in favor of the implementation of senior comprehensive exams at their university. These exams would be implemented prior to graduation and this message, for some students, was highly threatening.

When an Aboriginal Canadian speaker delivered the message, participants derogated the source and his message, expressing less confidence in his "competency, professionalism, intelligence, and overall impression" (Livingston & Sinclair, 2008). When a European Canadian speaker delivered the message, neither the source nor the message was as negatively evaluated, and attitudes were comparatively more favorable toward the senior comprehensive exam policy. When the message was not personally relevant to the students (the proposed exams were to be implemented at another university), this derogation did not take place. This derogation of the source and message, then, occurred only when the message was threatening. Surprisingly, both European Canadians who reported being low in prejudice towards Aboriginals and those who reported being higher in prejudice derogated the Aboriginal source.

Other, potentially contradictory, studies have shown that low-prejudiced individuals actively avoid prejudiced behaviors. Petty, Fleming, and White's Watchdog Hypothesis argues that low-prejudiced individuals understand that they and others may have prejudiced reactions and will thus think more about the message in order to guard against letting these prejudices influence their behavior (Petty, Fleming, & White, 1999, 2005). In their studies, Petty, Fleming, and White have shown that low-prejudiced individuals think more about messages when they are attributed to a stigmatized source, such as a black or homosexual speaker. They presented white, heterosexual research participants with either strong or weak messages in favor of senior comprehensive exams. Unlike in Livingston and Sinclair's study, however, the personal relevance of these messages was left ambiguous. Petty, Fleming, and White found that low-prejudiced individuals thought more about the message than did high-prejudiced individuals. If low-prejudiced participants were presented a strong message, their thoughts in response were more positive and they were ultimately more persuaded by the message. If they were presented a weak message, their thoughts were more negative and they were not persuaded. Higher-prejudiced individuals had less polarized reactions to the message, regardless of whether it was strong or weak, suggesting that they lacked motivation to think carefully about it. In subsequent research, they demonstrated that a message about an African American target led to a similar pattern of responses among low-prejudiced individuals.

When Petty, Fleming, and White's research participants generated topic-relevant thoughts in response to a message, they elaborated that message. Elaboration, as defined by Petty and Cacioppo (Petty, Cacioppo, Strathman, & Priester, 2005), is a state in which individuals are motivated and able to pay attention to a message, respond to its arguments, and integrate it into their own attitudes. According to the Elaboration Likelihood Model of persuasion (ELM), if individuals elaborate the message and then accept or reject its arguments, the resulting attitude will be more stable and lasting than if the reasons for rejection or acceptance are heuristic. Heuristics are rules of thumb, such as the expertise of a source, shared group membership with a source, or even the length of a message, which are characteristic of message processing when an individual is low in motivation or ability.

The lines of research discussed suggest that personal relevancy, group identity, the Watchdog effect, and personal threat can all play separate roles in the persuasive context. The effects of each variable are not, however, clear. In the Livingston and Sinclair study, individuals may have elaborated the message because it was personally relevant and then derogated the source and the message in order to avoid a threatening situation. Interestingly, source derogation was minimized when accuracy goals were introduced, although criticisms of the message remained unchanged (Livingston and Sinclair 2008). In Petty, Fleming, and White’s work (1999, 2005), individuals actively elaborated messages of ambiguous personal relevance attributed to stigmatized sources with the presumed goal of objective processing. If similar motivations were shared by Livingston and Sinclair’s low-prejudiced participants, these participants obviously failed to effectively pursue their egalitarian goals.

Both Livingston and Sinclair (2008) and Petty, White, and Fleming (1999, 2005) examine self-reported, explicit prejudiced attitudes. All prejudiced attitudes can be defined as associations between an attitude object—in this case an out-group member, an out-group, or a defining feature of the out-group—and a negative evaluation. Individuals could negatively evaluate an African American based on her group membership, negatively evaluate the entire category of African Americans, or react to a specific feature associated with that category, such as dark skin color. All evaluations, prejudiced or non-prejudiced, can be described as valenced—positive or negative—and prejudiced attitudes include, by definition, negative evaluations.

Even when an individual does not explicitly endorse prejudiced attitudes, if those attitudes are culturally common she is still likely to be aware of them (Plant & Devine, 1998). Repeated exposure to these prejudiced attitudes in the social environment can make them highly accessible, leading to, at minimum, automatic prejudiced associations. These associations are known as implicit attitudes. The existing literature is unclear as to whether implicit attitudes can operate unconsciously, at a level inaccessible to individual awareness (Petty & Brinol, 2010; Fazio & Olson, 2003; Gawronski, Hofmann, & Wilbur, 2006). However, it is generally agreed that individuals have both imperfect awareness of these attitudes and motivations not to express them, and that self-report measures may differ consistently from other measures (Hofmann, Gschwendner, Nosek, & Schmitt, 2005; Hofmann, Gschwendner, & Schmitt, 2005).

Neither Livingston and Sinclair nor Petty, White, and Fleming measured implicit prejudice. Implicit attitudes can be measured in numerous ways: by observing decision-making under time pressure or cognitive load, by subliminally priming an individual with an attitude-relevant word, such as "African American," and recording prime-activated behaviors, or by having participants take the Implicit Association Test, which measures the speed with which individuals can categorize positive or negative words with certain attitude objects, such as words or images associated with an out-group (Fazio & Olson, 2003).
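The reaction-time logic behind measures like the Implicit Association Test can be sketched in a few lines. This is an illustrative simplification, not the published scoring algorithm (the standard procedure adds error penalties, trial trimming, and block-specific standard deviations), and all latency values below are hypothetical:

```python
import statistics

def iat_score(compatible_ms, incompatible_ms):
    """Simplified IAT-style index: the slowdown in the 'incompatible'
    block (e.g., pairing out-group images with positive words) relative
    to the 'compatible' block, scaled by the pooled variability of all
    response latencies. Larger positive values suggest a stronger
    implicit association in the 'compatible' direction."""
    mean_diff = statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return mean_diff / pooled_sd

# Hypothetical categorization latencies (milliseconds) for one participant
compatible = [620, 580, 650, 600, 640]
incompatible = [780, 820, 760, 800, 840]
print(round(iat_score(compatible, incompatible), 2))  # → 1.82
```

A participant who responds equally quickly in both blocks would score near zero; dividing by the pooled variability makes scores comparable across participants with faster or slower overall responding.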

Both implicit and explicit prejudice, it has recently been demonstrated, moderate the persuasive power of messages by or about stigmatized persons (Petty, Brinol, & Johnson, 2010). Petty, Brinol, and Fleming (Petty & Brinol, 2009) demonstrated that participants who were low in both explicit and implicit prejudice elaborated a message about an African American person more than those who were high in both. They further demonstrated that individuals high in implicit prejudice and low in explicit prejudice, or vice versa, elaborated messages even more than individuals who scored low on both types of measure.

Petty, Brinol, and Johnson (2010), in a manuscript in press, have demonstrated that discrepancies between a message recipient's implicit and explicit attitudes are positively correlated with elaboration. They tested the watchdog effect's influence on the processing of a message attributed to a stigmatized source by presenting recipients with either a strong or weak message advocating the use of phosphate detergents and attributing that message to either a White or Black source. They then examined the moderating effect of the discrepancy between implicit and explicit prejudice—which they term implicit ambivalence—on whether message recipients elaborated the message. Petty and colleagues found that message recipients elaborated more as this discrepancy increased.

Interestingly, some participants were low in implicit and high in explicit prejudice. It is possible that these participants were prejudiced for ideological reasons and that those reasons were not positively correlated with prejudiced associations. When elaboration was analyzed, however, these participants were indistinguishable from participants with high levels of implicit and low levels of explicit prejudice. This suggests that ambivalence itself moderates elaboration. Research suggests that implicit ambivalence is positively correlated with discomfort, and that discomfort may motivate an increased need to resolve ambivalence through greater processing (Petty & Brinol, 2009; Rydell & Mackie, 2008). Elaboration, then, may be motivated by the need to resolve discomfort, not by the direct pursuit of egalitarian goals.
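The "implicit ambivalence" moderator described above is, at bottom, a discrepancy score. One plausible way to compute such a score is sketched below; the z-scoring and absolute-difference approach is an assumption for illustration, not necessarily Petty and colleagues' exact procedure, and all data are hypothetical:

```python
import statistics

def z_scores(values):
    """Standardize raw scores to mean 0, SD 1, so that implicit and
    explicit measures on different scales can be compared."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def implicit_ambivalence(implicit_raw, explicit_raw):
    """Absolute discrepancy between each participant's standardized
    implicit and explicit prejudice scores; larger values are the
    ones predicted to accompany greater elaboration."""
    return [abs(i - e)
            for i, e in zip(z_scores(implicit_raw), z_scores(explicit_raw))]

# Hypothetical scores for five participants
implicit = [0.2, 0.8, 0.5, 0.9, 0.1]   # e.g., reaction-time-based indices
explicit = [2.0, 2.5, 6.0, 1.5, 5.5]   # e.g., 1-7 self-report ratings
scores = implicit_ambivalence(implicit, explicit)
# The last two participants (high implicit / low explicit, and the
# reverse) diverge most and so receive the largest ambivalence scores.
print([round(s, 2) for s in scores])
```

Note that the score is symmetric: a participant low in implicit but high in explicit prejudice receives the same ambivalence score as the reverse pattern, matching the finding that both groups elaborated comparably.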

Monday, March 14, 2011

Cognitive Constraints: Political Identity


"If a moral issue is tied to one’s political identity (e.g., pro-choice vs. pro-life) so that defensive motivations are at work (Chaiken, Giner-Sorolla, & Chen, 1996) the initial preference may not be reversible by any possible evidence or failure to find evidence. Try, for example, to make the case for the position you oppose on abortion, eugenics, or the use of torture against your nation’s enemies. In each case there are utilitarian reasons available on both sides, but many people find it difficult or even painful to state reasons on the other side. Motivated reasoning is ubiquitous in the moral domain (for a review, see Ditto, Pizarro, Tannenbaum, 1009)." (Haidt & Kesebir, 2009 pg. 15-16)

One reason that a moral belief associated with a social identity is more resistant to change is that self-relevant information is processed more extensively, but not more objectively (Baumeister 2010, p. 146).  

Geoffrey L. Cohen's 2003 study demonstrates that identification with a group that supports a particular policy can have a greater influence on judgments of that policy than the policy itself. In this case, Cohen presented research participants who identified with the Democratic or Republican parties with a proposed welfare policy that was either less generous or more generous than any existing in the U.S. Then, in a series of manipulations, he indicated to Democratic participants that Democrats supported either of the two policies, and to Republican participants that Republicans supported either of the two policies. Of key interest, simply indicating that the leadership of the political party with which they identified supported the policy changed whether participants themselves supported it, independently of the policy itself.

Participants described in Geoffrey Cohen's "Party Over Policy: The Dominating Impact of Group Influence on Political Beliefs" changed their attitudes based not on the explicit content of a message but instead on their beliefs about the source of a message. This is not, in itself, unusual. What is somewhat more unusual is that these participants were elaborating the message at the time. Elaboration, the careful consideration of a message, is often measured by examining the number of topic-relevant thoughts that an individual generates in response to a message. These thoughts, it is presumed, reflect the participant's attempt to accurately interpret a message and form an objective opinion in response to it. This motivation to be objective is part of the motivation to elaborate and, like elaboration, can be limited in numerous ways by ability.

In Cohen's study, the central question is not whether individuals are motivated to elaborate. All of the participants elaborated, although differences in elaboration may have been obscured by the time limit imposed on the thought listing or by the demand characteristics of writing an editorial. However, participants may vary in their ability to be objective. What are the potential sources of this variation in ability, and how can we distinguish them? Cohen suggests that information about the source is incorporated into impressions of the message. He defends this notion by detailing differences in the content and construal (not the valence) of the thought listings (and the editorial assignment).

What is not clear from Cohen's account is how this information becomes incorporated. There is one hint: individuals in one experiment were more persuaded by a normally counter-attitudinal message attributed to representatives of their party. This could suggest that individuals were surprised by the position taken by their party and, in response to this surprise, were motivated to justify that position. Elaboration, here, may have occurred both for the message itself and for the source of the message. As soon as the policies were apprehended (and one hopes that the participants could recognize at least the stringent policy as somewhat extreme), the message may have been elaborated in a motivated way. Once the question of why the Democrats were supporting the stringent policy, or the Republicans the generous one, was answered, as objectively as information and time constraints allowed, elaboration of the personal response could begin. It is not clear why this was not reflected in the thought listing.

A potentially complementary, potentially conflicting hypothesis is that, from the first reading, knowing the source imposed a filter on the message. This hypothesis is closer to Cohen's own. Here, participants selectively ignore information that would seem counter-attitudinal for the source, constructing ad hoc justifications for the position being taken using easily accessible knowledge about what the party would believe. This is an effortful process, but one that draws selectively on heuristic cues.

Another hypothesis is that group members are defending the position taken by other group members or group leaders. Here, they are engaged in socially motivated reasoning and have given little thought to objectivity in an impersonal sense. Indeed, as Cohen suggests, group-based cognitive dissonance could be, at least in part, at fault, depending on the centrality of the Democratic or Republican identity.

Tuesday, February 1, 2011

Epistemic Motives: Motivated Reasoning

Kunda (1990) defines motivation as "any wish, desire, or preference that concerns the outcome of a given reasoning task." In this article, she is concerned with accuracy motives and directional motives. When an individual has an accuracy motive, that individual wishes "to arrive at an accurate conclusion." When an individual has a directional motive, that individual desires to "arrive at a particular, directional conclusion." In other words, an individual with a directional motive is reasoning in order to arrive at a conclusion with specific, predetermined characteristics. For example, they may want a conclusion that helps them feel good about themselves and their behavior. Kruglanski, Ajzen, Klar, Chaiken, Liberman, Eagly, Pyszczynski, and Greenberg have all produced research supporting this distinction between accuracy motives and directional motives.

For example, when people are responding to accuracy motives they tend to (a) "expend more cognitive effort on issue-related reasoning," (b) "attend to relevant information more carefully," and (c) "process it more deeply, often using more complex rules." An experimenter can increase a participant's accuracy goals "by increasing the stakes involved in making a wrong judgment or in drawing the wrong conclusion, without increasing the attractiveness of any particular conclusion." Tetlock demonstrated that accuracy goals only affect information processing if they are induced before that processing occurs. If they are induced after information exposure but before participants are asked to make a judgment, they have no observable effect.

"Tetlock and Kim (1987) showed that subjects motivated to be accurate (because they expected to justify their beliefs to others) wrote more cognitively complex descriptions of persons whose responses to a personality test they had seen: They considered more alternatives and evaluated the persons from more perspectives, and they drew more connections among characteristics.

Partly as a result of this increased complexity of processing, subjects motivated to be accurate were in fact more accurate than others in predicting the persons' responses on additional personality measures and were less overconfident about the correctness of their predictions.

In a similar vein, Harkness, DeBono, and Borgida (1985) showed that subjects motivated to be accurate in their assessment of which factors affected a male target person's decisions about whether to date women (because the subjects expected to date him later and therefore presumably wanted to know what he was like) used more accurate and complex covariation detection strategies than did subjects who did not expect to date him."

Kruglanski and Freund (1983; Freund, Kruglanski, & Shpitzajzen, 1985) "showed that subjects motivated to be accurate . . . showed less of a primacy effect in impression formation, less tendency to use ethnic stereotypes in their evaluations of essay quality, and less anchoring when making probability judgments." One concern with Kruglanski and Freund's work, however, was that participants may simply have been making less extreme judgments, rather than increasing their processing of the information. The less biased judgment, in these experiments, was always the less extreme one. However, a 1985 study by Tetlock put this concern, partially, to rest. He examined the fundamental attribution error and demonstrated that "[i]n comparison with other subjects, accuracy-motivated subjects made less extreme dispositional attributions about a target person when they knew that the target person had little choice in deciding whether to engage in the observed behavior, but not when they knew that the target person had a high degree of choice." In other words, participants still made extreme judgments when it was appropriate to do so.

Although accuracy motives can lead to reduced bias in some cases, in others they can increase biased reasoning. Subjects who expected to justify their judgments to others "were more susceptible than other subjects to the dilution effect--that is, were more likely to moderate their predictions about a target when given nondiagnostic information about that target---and this tendency appeared to have resulted from more complex processing of information (Tetlock & Boettger, 1989)."

Kunda argues that participants with directional motives want to "arrive at a particular conclusion" with "a justification of their desired conclusion that would persuade a dispassionate" person. In order to do this, "they search memory for those beliefs and rules that could support their desired conclusion. They may also creatively combine accessed knowledge to construct new beliefs that could logically support the desired conclusion." Both processes are "biased by directional goals (cf. Greenwald, 1980)" and "they are accessing only a subset of their relevant knowledge." Evidence suggests that directional goals can lead people to "endorse different attitudes (Salancik & Conway, 1975; Snyder, 1982), express different self-concepts (Fazio, Effrein, & Falender, 1981), make different social judgments (Higgins & King, 1981), and use different statistical rules (Kunda & Nisbett, 1986; Nisbett, Krantz, Jepson, & Kunda, 1983)."

Kunda presents evidence that directional goals induce a biased recall of existing information by analyzing various dissonance studies. For example, she argues that "[d]issonance clearly would be most effectively reduced if one were able to espouse an attitude that corresponds perfectly to one's behavior. Yet this is not always the case. In many dissonance experiments, the attitudes after performing the counterattitudinal behavior remain in opposition to the behavior. For example, after endorsing a law limiting free speech, subjects were less opposed to the law than were control subjects, but they remained opposed to it (Linder, Cooper, & Jones, 1967). Similarly, after endorsing police brutality on campus, subjects were less opposed to such brutality than were control subjects but they remained opposed to it (Greenbaum & Zemach, 1972)."

Kunda notes similar results: "[i]nduced compliance studies in which subjects are led to describe boring tasks as enjoyable often do produce shifts from negative to positive task evaluations, but in these studies, initial attitudes are not very negative (e.g., -0.45 on a scale whose highest negative value was -5 in Festinger & Carlsmith's classic 1959 study), and postdissonance attitudes still seem considerably less positive than subjects' descriptions of the task. For example, after they were induced to describe a task as 'very enjoyable . . . a lot of fun . . . very interesting . . . intriguing . . . exciting,' subjects rated the task as 1.35 on a scale whose highest positive value was 5 (Festinger & Carlsmith, 1959)."

Biased Memory Search

The existence of a biased memory search is one of the key features of Kunda's model, and one that her own research has supported. Kunda and Sanitioso's 1989 experiment "showed that subjects induced to theorize that a given trait (extraversion or introversion) was conducive to academic success came to view themselves as characterized by higher levels of that trait than did other subjects, presumably because they were motivated to view themselves as possessing success-promoting attributes. These changes in self-concepts were constrained by prior self-knowledge: The subjects, who were predominantly extraverted to begin with, viewed themselves as less extraverted when they believed introversion to be more desirable, but they still viewed themselves as extraverted." Related studies, such as Dunning, Story, and Tan's experiment praising social skills versus task skills, have demonstrated similar patterns.

Further evidence for a biased memory search comes from "another study, [in which] subjects led to view introversion as desirable were faster to generate autobiographical memories reflecting introversion and slower to generate memories reflecting extraversion than were subjects led to view extraversion as desirable. These studies both indicate that the accessibility of autobiographical memories reflecting a desired trait was enhanced, which suggests that the search for relevant memories was biased."

"Additional direct evidence for biased memory search was obtained by Markus and Kunda (1986). Subjects were made to feel extremely unique or extremely ordinary. Both extremes were perceived as unpleasant by subjects, who could therefore be assumed to have been motivated to see themselves in the opposite light. Subjects were relatively faster to endorse as self-descriptive those adjectives reflecting the dimension opposite to the one that they had just experienced. Apparently they had accessed this knowledge in an attempt to convince themselves that they possessed the desired trait and, having accessed it, were able to endorse it more quickly."

Biased Knowledge-Construction

Directional goals can bias the beliefs people construct, but only within the constraints of prior information. For example, people tend to see themselves as farther above average on "ambiguous traits that are open to multiple construals" than on "unambiguous traits." For ambiguous traits, this tendency can be reduced "when people are asked to use specific definitions of each trait in their judgments."

Where there is little past information, directional goals can lead to very biased processing. "McGuire (1960) showed that the perceived desirability of events may be biased by motivation. Subjects who were persuaded that some events were more likely to occur came to view these events as more desirable, presumably because they were motivated to view the future as pleasant. Subjects also enhanced the desirability of logically related beliefs that had not been specifically addressed by the manipulation, which suggests that they were attempting to construct a logically coherent pattern of beliefs."

Accuracy goals can increase processing while directional goals bias that processing. Several studies indicate that people tend to see others as more likable if they expect to interact with them. In one study by Berscheid, Graziano, Monson, and Dermer (1976), "subjects liked the person whom they expected to date better than they liked the other persons. Ratings of the three persons' personalities were affected as well: Subjects rated their expected dates more extremely and positively on traits and were more confident of their ratings. Subjects also awarded more attention to their prospective dates and recalled more information about them than about other target persons, but the enhanced liking and trait ratings were not due to differential attention." Here, subjects seemed to have "both a directional goal and an accuracy goal: They wanted their date to be nice so that the expected interactions would be pleasant, and they wanted to get a good idea of what the date was like so that they could better control and predict the interaction. The accuracy goal led to more intense processing, and the directional goal created bias."

Biased Use of Heuristics

"In two studies, researchers examined directly whether people with different directional goals use different statistical heuristics spontaneously. A study by Ginossar and Trope (1987) suggested that goals may affect the use of base rate information. Subjects read Kahneman and Tversky's (1972b) cab problem, in which a witness contends that the car involved in a hit-and-run accident was green. They received information about the likelihood that the witness was correct and about the prior probability that the car would be green (the base rate) and were asked to estimate the likelihood that the car was green. Subjects typically ignore the base rate when making such estimates. However, when asked to answer as though they were the lawyer for the green car company (a manipulation presumed to motivate subjects to conclude that the car was not green), subjects did use the base rate information and made considerably lower estimates when the base rate was low. The finding that only motivated subjects used base rate information suggests that motivated subjects conducted a biased search for an inferential rule that would yield their desired conclusion. It is also possible, however, that those subjects conducted a more intense but essentially objective search for rules. This is because the pattern of results and the design do not permit assessment of whether all subjects pretending to be lawyers used the base rate or whether only subjects for whom use of the base rate could promote their goals (i.e., subjects told that the prior probability was low) used it."
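To see why using the base rate pulls the estimate down so sharply, it helps to work through Bayes' rule. The numbers below are the classic Kahneman and Tversky parameters (an 80%-reliable witness and a 15% vs. 85% base rate); Ginossar and Trope's version may have used different values, so treat this as an illustrative sketch rather than the study's actual stimuli.

```python
# Bayes' rule applied to the cab problem: how much should the base rate
# (prior probability that the car is green) temper the witness's report?
# Parameter values are the classic ones, used here purely for illustration.

def posterior_green(base_rate_green, witness_accuracy):
    """P(car is green | witness says green), by Bayes' rule."""
    numerator = base_rate_green * witness_accuracy
    # If the car was not green, the witness misidentifies it as green
    # with probability (1 - accuracy).
    denominator = numerator + (1 - base_rate_green) * (1 - witness_accuracy)
    return numerator / denominator

# Low base rate: only 15% of cars are green, witness is 80% reliable.
low = posterior_green(0.15, 0.80)
# High base rate: 85% of cars are green.
high = posterior_green(0.85, 0.80)

print(round(low, 2), round(high, 2))  # → 0.41 0.96
```

With the low base rate, the witness's "green" report yields only about a 41% posterior probability that the car was green, which is exactly the "considerably lower estimate" a motivated subject could reach by invoking the base rate.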

Biased Evaluation of Evidence
Summarizing the evidence-evaluation work of B. R. Sherman and Kunda (described in more detail below), Kunda writes: "Of importance is that all subjects were also quite responsive to the differential strength of different aspects of the method, which suggests that they were processing the evidence in depth. Threatened subjects did not deny that some aspects were strong, but they did not consider them to be as strong as did nonthreatened subjects. Thus bias was constrained by plausibility."

"Kunda (1987) . . . attempted to control for the possibility that the effects were mediated by prior beliefs rather than by motivation. Subjects read an article claiming that caffeine was risky for women. Women who were heavy caffeine consumers were less convinced by the article than were women who were low caffeine consumers. No such effects were found for men, who may be presumed to hold the same prior beliefs about caffeine held by women, and even women showed this pattern only when the health risks were said to be serious."

"Lord, Ross, and Lepper (1979) . . . preselected subjects who were for or against capital punishment and exposed them to two studies with different methodologies, one supporting and one opposing the conclusion that capital punishment deterred crime. Subjects were more critical of the research methods used in the study that disconfirmed their initial beliefs than they were of methods used in the study that confirmed their initial beliefs. The criticisms of the disconfirming study were based on reasons such as insufficient sample size, nonrandom sample selection, or absence of control for important variables; this suggests that subjects' differential evaluations of the two studies were based on what seemed to them a rational use of statistical heuristics but that the use of these heuristics was in fact dependent on the conclusions of the research, not on its methods. Having discounted the disconfirming study and embraced the confirming one, their attitudes, after exposure to the mixed evidence, became more polarized. Because subjects were given methodological criticisms and counterarguments, however, the study did not address whether people would spontaneously access differential heuristics. In fact, after exposure to a single study but before receiving the list of criticisms, all subjects were swayed by its conclusions, regardless of their initial attitudes. This suggests further that people attempt to be rational: They will believe undesirable evidence if they cannot refute it, but they will refute it if they can. Also, although the differential evaluation of research obtained in this study may have been due to subjects' motivation to maintain their desired beliefs, it may also have been due to the fact that one of these studies may have seemed less plausible to them because of their prior beliefs."

"B. R. Sherman and Kunda (1989) used a similar paradigm to gain insight into the process mediating differential evaluation of scientific evidence. Subjects read a detailed description of a study showing that caffeine either facilitated or hindered the progress of a serious disease. Subjects motivated to disbelieve the article (high caffeine consumers who read that caffeine facilitated disease, low caffeine consumers who read that caffeine hindered disease) were less persuaded by it. This effect seemed to be mediated by biased evaluation of the methods employed in the study because, when asked to list the methodological strengths of the research, threatened subjects spontaneously listed fewer such strengths than did nonthreatened subjects. Threatened subjects also rated the various methodological aspects of the study as less sound than did nonthreatened subjects. These included aspects pertaining to inferential rules such as those relating sample size to predictability, as well as to beliefs about issues such as the validity of self-reports or the prestige of research institutions."

Alternative Explanations

"For example, the memory-listing findings may reflect a response bias rather than enhanced memory accessibility. Thus the enhanced tendency to report autobiographical memories that are consistent with currently desired self-concepts may have resulted from a desire to present oneself as possessing these self-concepts. And the reaction time findings may have resulted from affective interference with speed of processing rather than from altered accessibility. However, neither of these alternative accounts provides a satisfactory explanation of the full range of findings. Thus self-presentational accounts do not provide a good explanation of reaction time findings, which are less likely to be under volitional control. And affective interference with speed of processing does not provide a good explanation for changes in overall levels of recall."

"In a similar vein, the tendency to evaluate research supporting counterattitudinal positions more harshly (Lord et al., 1979) is not affected when subjects are told to be objective and unbiased, but it is eliminated when subjects are asked to consider what judgments they would have made had the study yielded opposite results (Lord et al., 1984). Taken together, these findings imply that people are more likely to search spontaneously for hypothesis-consistent evidence than for inconsistent evidence. This seems to be the mechanism underlying hypothesis confirmation because hypothesis confirmation is eliminated when people are led to consider inconsistent evidence. It seems either that people are not aware of their tendency to favor hypothesis-consistent evidence or that, upon reflection, they judge this strategy to be acceptable, because accuracy goals alone do not reduce this bias. Thus the tendency to confirm hypotheses appears to be due to a process of biased memory search that is comparable with the process instigated by directional goals. This parallel lends support to the notion that directional goals may affect reasoning by giving rise to directional hypotheses, which are then confirmed; if the motivation to arrive at particular conclusions leads people to ask themselves whether their desired conclusions are true, normal strategies of hypothesis-testing will favor confirmation of these desired conclusions in many cases. One implication of this account is that motivation will cause bias, but cognitive factors such as the available beliefs and rules will determine the magnitude of the bias."

To What Extent is the Process Motivated?

"In its strongest form, this account removes the motivational "engine" from the process of motivated reasoning, in that motivation is assumed to lead only to the posing of directional questions and to have no further effect on the process through which these questions are answered. If this were true, many of the distinctions between cognitive, expectancy-driven processes and motivated processes would break down. Both directional goals and "cold" expectancies may have their effects through the same hypothesis-confirmation process. The process through which the hypotheses embedded in the questions "Is my desired conclusion true?" and "Is my expected conclusion true?" are confirmed may be functionally equivalent. Indeed, there are some interesting parallels between motivated reasoning and expectancy confirmation that lend support to this notion. For example, B. R. Sherman and Kunda (1989) found that implausible evidence (namely, that caffeine may be good for one's health) was subjected to elaborate and critical scrutiny that was comparable to the scrutiny triggered by threatening evidence. Similarly, Neuberg and Fiske (1987) found that evidence inconsistent with expectations received increased attention comparable in magnitude to the increase caused by outcome dependency. Finally, Bem's (1972) findings that the beliefs of observers mirrored those of actors in dissonance experiments suggest that observers' expectations and actors' motivations may lead to similar processes."

"For example, when motivation is involved, one may persist in asking one directional question after another (e.g., "Is the method used in this research faulty?" "Is the researcher incompetent?" "Are the results weak?"), thus exploring all possible avenues that may allow one to endorse the desired conclusion. It is also possible that in addition to posing directional questions, motivation leads to more intense searches for hypothesis-confirming evidence and, perhaps, to suppression of disconfirming evidence (cf. Pyszczynski & Greenberg, 1987). If motivation does have these additional effects on reasoning, the parallels between motivated reasoning and expectancy confirmation gain new meaning. They suggest that seemingly "cold" expectancies may in fact be imbued with motivation. The prospect of altering one's beliefs, especially those that are well established and long held, may be every bit as unpleasant as the undesired cognitions typically viewed as "hot": It was this intuition that led Festinger (1957) to describe the hypothetical situation of standing in the rain without getting wet as dissonance arousing."

Saturday, January 22, 2011

The Elaboration Likelihood Model of Persuasion

Richard E. Petty, Derek D. Rucker, George Y. Bizer, and John T. Cacioppo discuss the Elaboration Likelihood Model (ELM) of persuasion.  According to this model, there are two basic responses to a new message: thinking more about the message and relying on heuristics to respond to it quickly.  Heuristics are rules of thumb, cognitive shortcuts that can be as simple as "the author is an expert, I should listen to her."  I will note that I use the term 'cognitive' to refer to those processes that are concerned with truth and falsity.   Heuristics are primarily identified by their generality and simplicity.  According to the ELM, message recipients who are high in motivation and ability are more likely to think about a message, elaborating on it and increasing the likelihood that attitudes formed for or against the message will be lasting.

A chart detailing the central (more elaboration) and peripheral (more use of heuristics) routes to persuasion can be found on the third page of the PDF document that I link to below.  The article also discusses empirical research surrounding the different postulates of the ELM: correctness, an elaboration continuum, multiple roles for variables, objective processing, biased processing, the trade-off between central and peripheral processing, and attitude strength.