Since freshman year of college, I’ve felt that motivated reasoning is a common and negative influence on our politics and our capacity for moral behavior in general. Since then, my concerns about motivated reasoning have mostly only grown. I take a first stab at discussing motivated reasoning here, and I think I am relatively successful. I discuss what it is, and one of the most common ways it is used to justify (sometimes immoral) actions. I end by describing a broad outline of what seems to be required at the individual and social level to deal with the problem posed by motivated and bad reasoning, as well as our relationship with scientific institutions.
Discussions of the plausible evolutionary benefits of motivated reasoning, the usefulness of the deliberative context, and the discussion of our stance towards the possibility of totally stable factual truth are heavily inspired by similar discussions within Hélène Landemore’s book, Democratic Reason. The social importance of motivated reasoning has been particularly informed by work done by Black radicals and critical race theorists—particularly the first part of Cedric J. Robinson’s Black Marxism, and an interview I listened to with Ian Haney Lopez by Jane Coaston on Vox’s The Weeds.
§1 – Introduction
(1) “You shouldn’t break your promises all willy-nilly, but I’m Mr. Big Man and I don’t have to follow all the rules that all of you shitheads do.”
(2) “It’s probably the moral thing to go vegan, but I simply couldn’t do it myself because I like the taste of meat and cheese too much.”
(3) “Generally, people ought not to steal, but in this case, the person’s kinda an asshole, and I would find joy in having their sweater for myself.”
In the Groundwork, Kant describes how people cannot bring themselves to reasonably will the opposite of moral laws, and so, to get around this, they often simply treat themselves as exceptions to the moral rule for the purpose of that one action. That sort of behavior is represented in the first example above. There are other ways people excuse themselves from moral behavior, too. In the second example, we see a person deny that they have the free will to choose what they eat or to change their eating habits. In the third example, the person claims that the specifics of the situation are exceptional, such that their action—usually impermissible—is in this situation permissible. There are, of course, more than just three ways people skirt their moral duties, but I thought it would be effective to focus mostly on these three. I particularly want to focus on the third style of exception.
The first two styles of excusing oneself from moral duty are pretty easy for onlookers to spot as bad reasoning. But the third style—claiming that the situation is exceptional—is the most pernicious because it’s not as obviously wrong. It really is the case that there are many situations in which your immediate personal benefit aligns with what is morally permissible. It really is the case that some situations contain morally relevant details that make them ‘exceptions’ to the more common instantiations of similar moral principles. For example, the principle that ‘You ought to steal from others simply because it is to your personal gain’ seems clearly undesirable to have as a universal moral principle which everyone follows. But on the other hand, (4) the principle that ‘You should steal bread from Walmart if that’s the only way to feed your child, because you shouldn’t let your child starve’ appears to be morally permissible.
This Walmart case can seem like an exception to the general rule against stealing, in the sense that it permits the same action of ‘stealing’ in a situation in which it is to your personal gain to do so. However, the Walmart case is subtly different because it doesn’t permit stealing simply out of personal gain—instead, there are more complicated motivations and situational restrictions that make stealing morally permissible. Even though cases like (4) are not true exceptions to moral rules, I will from now on refer to these sorts of cases as ‘exceptional cases’ for succinctness.
People can justify an action that is usually impermissible by claiming that there are some morally relevant aspects of their situation which make the action morally permissible. Sometimes, as it is in (4), the situation really is exceptional in that way. But many other times, the situation is not exceptional, and the person is simply trying to rationalize an argument that it is. Sometimes, their arguments are bad, and sometimes their arguments are very clever and rhetorically convincing—albeit invalid or unsound.[1]
§2 – Motivated Reasoning and Its Discontents
Motivated reasoning is when a person begins with a desire to take a specific action or hold a specific belief, and that desire drives them to rack their brains for a justification and to argue for it in a biased manner. Motivated reasoning can lead people to except themselves from moral principles in any of the ways I described in (1-3), as well as through other post-rationalized and insufficiently reasoned/informed methods. I want to focus on how motivated reasoning can influence moral decisionmaking.
In my experience, motivated reasoning toward the third style of exception is the most common style of motivated reasoning. It can be so common because real-life situations are usually vague and complicated enough that people can cherry-pick specific aspects of the situation to argue that it is exceptional.
Notably, however, motivated reasoning toward the third style of exception might be a common occurrence because it can create real value in our lives, and it likely had significant value in the small societies in which humans evolved for millennia. Looking again at the Walmart case, we might have needed the help of motivated reasoning to push us to discover and strengthen an uncommon-yet-sound moral argument. Because there is some real value in motivated reasoning to find these nuanced justifications, evolution may have selected for motivated reasoning to be hardwired into our brains. Unfortunately, the way we ingest information today is very different from the way information flowed in the small societies in which our ancestors evolved. It is much easier for people to fall into ideological echo chambers and information silos by selectively choosing which communities to join and which news sources to follow. Motivated reasoning can often cross the line from being helpful into making it difficult for people to properly update their beliefs in the face of sound arguments against their position. This unhelpful sort of motivated reasoning I call ‘overly-motivated reasoning’.
If the situation seems clearly and simply defined from the outset, it can be difficult to argue that the situation is exceptional. For example, in the case of the non-vegan in (2), the speaker might not be able to shake their beliefs that animals can feel pain and ought not to be made to suffer. Because they can’t get themselves to believe that the situation is exceptional, the overly-motivated reasoner of (2) instead thinks a bit more and lands on the different excuse that they lack the free will necessary to control their own actions.
If that excuse becomes untenable, overly-motivated reasoning might lead a person to an even more irrational excuse. They might say that they are an exception to the rule, as in (1). Or, they might just make a belligerent joke and pretend the moral problem no longer exists. Motivated reasoning and cognitive dissonance can be very strong when they protect a person’s long-held habits.
Following the garden path of overly-motivated reasoning instead of updating one’s beliefs can have immediate psychological rewards. It can justify the continuation of a habit one finds immediate pleasure in. It can also feel important to one’s mental security not to have to come to terms with the idea that one has been acting immorally in the past. But succumbing to overly-motivated reasoning clearly has negative long-term effects for living a moral life.
§3 – Outside of a Properly Deliberative Context, Good Epistemic Norms Alone Are More Effective Than Motivated Reasoning Alone
Something else is unique about motivated reasoning toward the third style of excuse. We have already discussed how—in exceptional situations like (4)—morality can almost surprisingly align with a person’s immediate personal inclinations. Motivated reasoning can be helpful for discovering that you are in an exceptional situation. Another important observation about motivated reasoning in the third style is that whether or not you actually are in an exceptional situation is primarily a matter of empirical fact. Because it is an empirical matter, the negative effects of overly-motivated reasoning can be largely prevented by following good epistemic norms. ‘Epistemic norms’ are the heuristics and practices which help us reliably gain knowledge. Unfortunately, a large number of people don’t possess all of these good epistemic norms—a couple of common failures here are having a poor information diet and falling victim to misinformation and disinformation.
Evolutionary natural selection seems to have satisfied itself with the type of motivated reasoning that is most effective in socially-close and somewhat cognitively-diverse groups. Our epistemic norms for empirically observing the world have advanced to become more accurate than motivated reasoning—at least when the motivated reasoning takes place outside of the deliberative context of a respectful and cognitively diverse group.
In a properly respectful and cognitively diverse deliberative context, the motivated reasoning of opposing sides will lead people to quickly home in on good epistemic norms. In fact, this adversarial reason-giving and argument-making is likely how we came to socially generate good epistemic norms in the first place, and it helps us today to continually improve those norms. Deliberation can discover and generate good epistemic norms in a generalized manner. This can make inclusive deliberation better than relying on a small group of experts, defined here as people who hold many good epistemic norms from the outset. At some point, the experts will likely encounter a situation in which they lack the cognitive diversity to generate a new and useful epistemic norm, and consequently, they will make a worse decision than a more cognitively-diverse deliberative process would.
But let’s place discussions about the value of deliberation aside for now. The takeaway from this section is this: Motivated reasoning in the third case of excusing oneself from morality requires people to convince themselves of a certain state of the world which they take to justify their desired action—regardless of whether or not that state of the world is actually aligned with reality. In the next section, we will look into exactly how we are able to convince ourselves of an empirically untrue belief.
§4 – A Common Way That We Convince Ourselves of Empirical Untruths
Sometimes our motivated reasoning leads us to trick ourselves into believing that we live in a false state of affairs—because in that state of affairs, our desired action appears to be morally justified. How do we trick ourselves like this?
We manage to do this by emphasizing some aspects of the situation and overlooking or minimizing other aspects. First, we come up with the aspects of the situation that can justify our behavior. Pleased with that, we might end the mental search before we think too hard about the other aspects of the situation which might compel us to act to the contrary. Prematurely ending the mental search is made especially easy when we aren’t habituated in the patterns of thought that would bring the contrary aspects of the situation to mind—it can sometimes take real effort to discover these nuances on our own, and we often have to rely on others to point them out to us. This is difficult when we find ourselves in information silos and cognitively similar social groups.[2]
On top of this, we often turn a more critical eye to the aspects of the situation which might be used to argue against taking our desired actions. We do not apply the same critical standards to those aspects which we think favor our desired action. We try to find ways to dismiss the ‘contrary aspects’ as morally irrelevant, and we overemphasize the ‘promoting aspects’ as being particularly morally relevant.
By thinking in this motivated fashion, people can represent a situation to themselves and others in a biased way that makes their desired actions appear to be morally permissible. On a society-wide level, if a broad group of people want to justify their shared immoral actions (e.g. slaveholding), a narrative that justifies their actions can quickly spread as a memetic idea within that group (e.g. a narrative that claims that enslaved people aren’t actually fully people, and may therefore be treated as work animals).
§5 – Proposed Solutions
Here are a few proposed solutions which aim to help us avoid the negative consequences of motivated reasoning. This section discusses motivated reasoning within individuals, motivated reasoning within institutional decisionmaking bodies, and the value of continuously open and rigorous research about factual truths.
For individuals, it is of particular moral necessity to promote the adoption of mental heuristics that help us avoid excessive motivated reasoning and other misleading reasoning patterns.[3] But this is not simply an individual responsibility; we should also strengthen social institutions to promote reasonable epistemic norms. Education and therapy, of course, can help people realize the systematic mistakes in their reasoning, but there are other important institutions we can build.
For example, we should have institutions that reduce scarcity and abolish institutions which allow for top-down hierarchical violence. People are less likely to steal from, or otherwise mistreat, others when they feel materially and socially safe and secure. People also learn patterns of violence from what has been done to them, and will reenact that violence onto others over whom they have influence.[4] All this institutionally encouraged violence is already a terrible thing, and it gets even worse when we look at how it affects our capacity to reason well. Regularly engaging in the mistreatment of others requires some justification, and this requires overly-motivated reasoning or other bad styles of reasoning. Regular engagement in these bad reasoning styles makes people more susceptible to applying poor reasoning elsewhere. Poor reasoning habits thus seep into the rest of our lives, making us all even worse off.
To help combat society-wide biases, we need strong institutions of science and philosophy to help us home in on factual truth. We also need strong, institutionalized epistemic norms in our decisionmaking bodies, so that our institutional decisions are as informed by science as reasonably possible. We want to prevent institutional actors from simply selecting the data points that support the policies they like, and we want to avoid unjust institutional biases. As discussed in §3, more accurate and precise epistemic norms are generated through properly deliberative systems. Our institutions of science and philosophy have a decent amount of productive deliberation within them—our decisionmaking bodies would seem to benefit from being properly deliberative as well. In these ways, false narratives that might shape our society can be readily falsified or discredited, and prevented from influencing our political and institutional decisions.
Science, aided by philosophy, is a reliable process for homing in on accurate descriptions of causal mechanisms and of the world—that is why we use science to inform our moral decisions. But while science and philosophy are useful and necessary, we shouldn’t overshoot and assume that these institutions can provide us with pure and stable facts. There is always a chance that what seems scientifically or philosophically justified will change. Perhaps there was a causal mechanism we missed. Perhaps the concepts and definitions we once thought pragmatically corresponded to the world will shift in consequential ways. Perhaps the guiding norms for how to perform science will change, revealing that some of our beliefs are not as scientifically secure as we once thought they were. Perhaps there were systematic biases and blindspots within our institutions’ epistemic norms which skewed our research outcomes.
The space of factual truth therefore needs to be continuously open to rigorous scientific inquiry and philosophical debate. There is real danger in cases in which we incorrectly believe that science, history, or philosophical arguments are providing us with a totally stable and reliable truth or interpretation of data. In those cases, untruths might become accepted as scientific fact, historical fact, or logical necessity. This would lead us to make systematically uninformed moral decisions, but with the confidence that we are acting with factual backing. False factual assertions—backed by rhetorical veneers of science, history, and philosophical arguments—have been used to justify racism, sexism, monarchies, fascism, and now also, I think, capitalism.
Stay vigilant, all!—motivated reasoning comes for us when we least realize it![5]
Footnotes and Extra Bits
[1] Some of the more clever arguments make unfalsifiable but otherwise rhetorically effective assertions about the world, which the arguer then claims justify their action. Unfalsifiable-but-otherwise-rigorous justifications can be fine, but they should be met with great suspicion when they are being used to justify the poor treatment of others.
[2] Note that we might be motivated to seek out cognitively similar groups because doing so can help us strengthen our own arguments—and this is something that motivated reasoning encourages us to do. As usual, there is a point where too many people doing this too much becomes a net negative for society.
[3] Mental heuristics that raise alarm bells when one is engaging in motivated reasoning seem like a good thing to write a blog post about. An easy Buzzfeed-style listicle!
[4] Overly-motivated reasoning also tends to systematically affect people with hierarchical social influence, who feel a need to justify their position in often immoral hierarchies.
[5] (In fact, I have likely engaged in motivated reasoning in this very blog post!)