Do Not Trust Your Arguments


How wonderfully constituted is the human mind! How it resists, as long as it can, all efforts made to reclaim it from error!

—Angelina Grimké

In last month’s post on Chinese attitudes towards Hong Kong I had cause to mention Dan Sperber and Hugo Mercier’s book The Enigma of Reason. At some point I probably ought to devote an entire post to a review of the book; its ideas are subtler and its logic more involved than one-paragraph glosses (or even most of its newspaper reviews) can do justice to. A running theme throughout the book is that, in experimental settings and in “real life,” the human capacity for reason is not optimized for the pursuit of abstract truth. Mercier and Sperber suggest that this is because reason did not evolve for that end. Reasoning’s role is essentially a social one—at the level of the individual it is not about deciding what to do, but about justifying what we do.

What we decide to do is for the most part entirely intuitive. As Mercier and Sperber see it, our decisions are the products of mental subsystems as opaque to us as the mysterious mechanisms that classify what we see as “beautiful” or “ugly,” determine what we are doing as “boring” or “fun,” and judge what others are doing as “admirable” or “disgusting.” Although the brain will supply the child with reasons for why he likes to watch Star Wars, the teenager with reasons for why she favors purple eyeliner, and the lover with reasons for doting upon his beloved, these thoughts are not the actual cause of the behavior in question. They are justifications. They are seized upon and articulated by the brain not to make us aware of why we make our decisions, but to make it possible for us to justify and explain our behavior to others.

Sperber and Mercier support this general thesis with dozens of experiments and a great deal of clever thinking. They explore the implications of these ideas further (“wait,” you might ask, “what is happening then when someone else’s ideas change our mind?”), but as they have limited relevance to the point I am going to make here, I will reserve their discussion for a different post. The main idea we are to run with here is this: to reason is to justify. Arguments, especially arguments over policy, are not about finding the truth. Both sides come with the truth already decided. In this context, reasoning and arguments are tools of persuasion. We reason to convince third parties that we are right.[1]

That is the evolutionary utility of reasoning. It makes you sound a heck of a lot more convincing than the guy who (more honestly) says, “I believe we should do this because it feels right.” The jerks who wander across the internet yelling “facts don’t care about your feelings” have missed the mark. We assemble facts and data precisely because of our feelings. Facts are byproducts of a quest to justify our feelings.

None of this means that arguments and reasoning are vain deceptions. At the individual level reasoning is mostly about justifying propositions already held. At the level of the group this is not true. Groups must decide which chain of reasoning (of the many presented) is best. They tend to do this more skillfully than any individual can. The individual is always biased towards her own case. She has a much easier time assessing the merits of an argument someone else has made.

Political psychologist Philip Tetlock came to a conclusion complementary to Sperber and Mercier’s, though through different means. Tetlock made his name with experiments designed to judge the effectiveness of political expertise. He gathered together experts of all stripes (economists, political scientists, intelligence analysts, think tankers, financial gurus, and more) and of all political persuasions. He then asked them to make predictions about the future. He tracked these predictions over time, finding that most experts did worse than chance, and none better than simple algorithms that follow easy rules. That is all interesting. For our purposes, however, what is most interesting is not that the experts were wrong, but their inability to recognize it.

Tetlock followed up with his cadre of experts and asked them to assess the strength of their own predictions. What did he find? Most of the experts believed they had predicted events correctly, even when they had not. When presented with evidence that they had not done as well as they imagined, these same experts did not abandon the tools that had delivered them into error. Instead they justified. They argued. They reasoned. With brilliant ingenuity they showed how the world would have tilted just as their methods said it must, had just this or just that been different.[2]

I do not blame the experts for getting the future wrong. Humans are not very good at this. I also cannot blame them for thinking they got it right. Humans are extremely good at that. Getting the future wrong while thinking we got it right is a human specialty. To hate this is to hate humanity. We cannot get rid of such a human set of tendencies. The best we can do is find ways of limiting their ill effects.

Tetlock’s experiments suggest one way to do so: have experts commit their predictions to writing. If you place fine-grained predictions somewhere everyone can see them, you will have provided the world—and more importantly, yourself—with an objective way to measure the utility of your reasoning. Others have suggested a more costly twist on this same logic, asking forecasters to put money down on all of their predictions. In both cases there is now an easily identified yardstick by which to judge a conviction.

But what of convictions that are not predictive? Not all arguments, after all, are about the future. Arguments over the past can be quite violent. No prediction market will ever be able to solve those arguments. Nor are predictions especially useful for settling debates over questions like “How much of Trump’s support comes from racist sentiment?” or “Is it wise to continue a trade war with China?”

What to do in face of such thorny problems?

Here is my personal solution: Identify a major political or ethical conviction you have. Then write out what you would need to know or have proven to you in order for you to change your position. Your position may be wrong or it may be right. But if you are not willing to sketch out what would make it wrong, you will not recognize when it has been proven so! Instead you will shift your arguments. You will shift the goal posts. You will move your target. You will do all of that—and barely be aware that you are doing so.

I am not dogmatically against moving targets. Occasionally, new aspects of a debate will arise that you had never considered before. Perhaps your argument about whether to go into Iraq was focused entirely on how easy it might be to control or contain a nuclear armed Saddam. Perhaps it had never occurred to you to think over the problems of reconstruction in a land violently divided. Perhaps that is actually the better thing to focus on. Shifting goal posts is sometimes necessary. But if you do not bother sketching out where your current posts are moored, you will not know when you have moved them.

I have found this entire exercise very useful for deciding which arguments to engage in and which to avoid. I recently had an acquaintance contact me, a very intelligent fellow who works on Australian defense issues. While he is not quite Hugh White, this man believes America is on its way out of the Asia-Pacific. He makes the case that Australia must prepare for a world without American support. Knowing me to be the hawkish China type, he asked for an assessment of his argument. I replied with a question:

“You believe that America is not committed enough to its allies in the Pacific, and will abandon them within fifteen years?”

“Yes.”

“What would America need to do to change your mind? What could America do to prove to you that it was here for the long haul? What could it do to demonstrate the sort of commitment you think it lacks?”

His answer: “Nothing.”

That was my signal to politely excuse myself from the argument. Perhaps my friend is right. Perhaps America is on its way out of the region; perhaps America can do nothing to demonstrate her commitment to Australia and the other democracies of the Western Pacific. But if nothing could convince him, what exactly is the point of continuing this argument?

It is too easy to be attached to a conviction. I am not more intelligent than most of those who disagree with me. Nothing about my biography privileges my beliefs. The only things that privilege my arguments are my own biases. I think it wise to fight against them. That means acknowledging from the beginning what knowledge or event should force me to change what I believe. Doing so makes it easier to actually change my beliefs when that moment comes.

Increasingly, I ask others to do the same thing. This is an easy test to separate those of good faith from those of bad. The person willing to acknowledge at the beginning that they may be wrong, and then lay out the conditions under which they would be wrong, is a person worth your time. A person unwilling to admit the possibility of error is not. That person simply is not interested in truth. They are interested in winning. The more they squirm to avoid explaining what might make them change their mind, the less you should trust them. Propagandists do not confess the possibility of blunder.

That is my take. The first step in any serious argument, any argument with real consequences, should be this. Each side should lay out their case—and then explain, in equal detail, what would make them determine that their case is wrong. When that moment comes they still might cling to their original conviction. They might change the grounds of argument to protect their beliefs. But in that day, everyone, including themselves, will know that this is exactly what they are doing.

Edit 5 September 2019: Slightly edited the exchange with my Aussie friend for the sake of contextual clarity.

—————————————————————————————
If you found this post to your liking, you might also like other posts on similar themes: “The Limits of Expertise,” “Reason is For Stabbing,” “On Words and Weapons,” and “Chinese Are Partisan Too.” To get updates on new posts published at the Scholar’s Stage, you can join the Scholar’s Stage mailing list, follow my Twitter feed, or support my writing through Patreon. Your support makes this blog possible.
—————————————————————————————


[1] Hugo Mercier and Dan Sperber, The Enigma of Reason (Cambridge, MA: Harvard University Press, 2017).

[2] Philip Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton, NJ: Princeton University Press, 2005).


6 Comments

The person willing to acknowledge at the beginning that they may be wrong, and then lay out the conditions under which they would be wrong, is a person worth your time. A person unwilling to admit the possibility of error is not. That person simply is not interested in truth. They are interested in winning.

I agree with this to some extent, but I feel like there are still cases where reasonable people should answer "nothing could change my mind here".

For a silly example, I've lived almost my whole life in Finland. Now suppose that someone came up to me and said, "you only think that you've lived your whole life in Finland, but in fact, you've lived your whole life in the USA, and have only moved to Finland recently". When I reject this claim, the person asks me what would change my mind about it.

Now I could in principle try to come up with some elaborate scenario where, if I woke up in a hospital and it was demonstrated to me that all of my memories had been implanted using sci-fi technology and I was actually living in the Matrix and… but this too would be such an astonishingly implausible scenario (as far as I can tell), that I might as well just say "nothing could change my mind about that".

Likewise, I feel like there are scenarios where a person can be interested in the truth, yet feel like the evidence is so overwhelmingly in favor of their claim that nothing even remotely plausible could happen to show them to be wrong. It's not because they would be unwilling to consider contrary evidence if it was presented to them. It's because, when they query their brain for what would count as such evidence, they can't think of anything. They know that even if their theory wouldn't predict some X happening, if X did happen, the overall probability that they put on their theory would just cause them to dismiss X as some kind of a measurement error… and given their state of knowledge, that might be a reasonable position to be in.

If they really tried, they probably could produce some scenario in which they admitted their error, but that scenario would require so many implausible-to-them assumptions that it would feel like the equivalent of "well if I woke up outside the Matrix and found out that everything that I knew about the world was wrong", so they don't bother going that far and just round the answer to "it's impossible".

Anything can be taken to an extreme. I don't recommend doing this at the beginning of every conversation. But if you are about to begin a big debate with someone, it is a useful place to start.

There are some topics where that question, put to me, would get the answer

"Here are my reasons, Proving them wrong would make me reconsider"

There are also times when an opinion has to be based on probabilities and uncertain information in which case better information is always welcome.

Good teaser for Sperber & Mercier, and I look forward to a fuller discussion. The Enigma of Reason can profitably be paired with Kahneman's Thinking, Fast and Slow: both are profound challenges to the Enlightenment/classroom framing of articulated logical argument as the normative form of how and why we think, choose among alternatives, and decide.

What struck me about the response to both is how many reviewers tried to fit both books into that framing, assessing them as self-help guides — "7 Common Errors in Thinking and How to Avoid Them". No: Sperber & Mercier are telling us that natural / cognitive / cultural evolution had little reason to adapt us for "exchange of views in search of truth" and every reason to adapt us for "See/do it my way." And Kahneman is telling us that even if the former were our telos, we're trying to model a natural and social universe in three pounds of jelly: we have no choice but to operate most of the time on habit, stereotype, and shaky generalization, because actual evidence-gathering and articulated logical argument are recently acquired, slow, and "costly" functions that we can't possibly apply to more than a small fraction of our actions, choices, and formation of views.

This reminds me of one of my favorite quotations, attributed to Oliver Cromwell. Infuriated by a recalcitrant Parliament, he said "Gentlemen! I beseech you, in the bowels of Christ, think it possible that you may be mistaken."

"these thoughts are not the actual cause of the behavior in question. They are justifications."

The reasons (ha!) that Sperber and Mercier come to this conclusion often involve one of the following two mistakes:
1) the assumption that the unconscious cannot rationally decide things and
2) the assumption that it should be easy/automatic for the consciousness to elicit the underlying reasoning of unconscious decisions.

If we acknowledge that we can take rational decisions unconsciously but that it takes effort to consciously retrace the unconscious reasoning, including some dead ends, that led to a decision, then there is absolutely no reason for the fatalistic idea that reasoning is only about justification.

Reasoning can also be the rediscovery of the actual underlying, possibly rational, truth discovery process.

Of course we cannot easily rule out post-hoc Orwellian and Stalinesque revisionism and we have to assume that we *also* engage in those.