Can You Pass an Ideological Turing Test?
[We’re making phenomenal progress on the Kickstarter! We’re a little over halfway to the deadline, and we’ve raised a little more than 2/3 of the goal. MANY THANKS to all who have donated so far! I can’t tell you how much it means to have your support. But there’s still that last third to go, which means we still need donations! If you appreciate me branching out into issues that reach beyond the walls of music, if you think I have insightful things to say about social/political issues and discourse online, please consider kicking a few dollars my way — even $5 really does help, and truly is appreciated.]
I get into a lot of fights on the internet. This is not surprising: I have Strong Opinions about many things, and, having changed my own views after reading certain arguments on numerous occasions, I’m a big believer in the power of discussion to change hearts and minds. The internet and people both being what they are, these fights aren’t always the most civil affairs, and sometimes the rhetorical intensity escalates alarmingly. When it does, there’s one thing I try to do that frequently seems to steer things back towards calmer waters: I try to pass the pertinent Ideological Turing Test.
I first learned of this concept back in 2011 when Leah Libresco, inspired by a tiff between economists Paul Krugman and Bryan Caplan, ran an Ideological Turing Test between Christians and atheists. As with the original Turing Test, which tests whether a human observer can tell the difference between text-based interactions with a machine intelligence and those with another human, the premise of an Ideological Turing Test is quite simple: Can you argue for a belief you don’t actually hold in a way that someone who does hold that belief would look at your argument and go “yes, this was written by someone who agrees with me”? So in the case of the one Libresco ran: Could atheists successfully differentiate explanations/defenses of atheism written by atheists from those written by Christians and vice versa? Swap in different issues/positions as necessary.
That’s the strong version of such a test. The weaker — and less logistically complicated — one doesn’t require anonymized submissions from a panel of potential mimics; it just involves summarizing your debate opponent’s position such that they say “yes, that’s what I’m arguing and why I’m arguing for it.”
I think this is important for a few reasons. The first is outright persuasion. If you want to convince someone of something, it’s really helpful to know specifically why they don’t agree with it. To take an example that will hopefully be emotionally inert enough that it doesn’t drive anyone away: I put all of my sheet music into the computer with a program called LilyPond. This makes me something of an anomaly — almost every other composer I know uses either Finale or Sibelius — and periodically people try to convert me to their preferred notation program. Far and away the most common way of doing this is to point out how much faster (they assume) Finale/Sibelius is than LilyPond. I don’t actually know if this is true. I’m not as fluent in any other program as I am in LilyPond, so it’s hard to evaluate, but for sake of argument let’s say that LilyPond really is slower. I don’t care. I don’t use LilyPond because it’s blazingly quick, I use it because it produces beautiful notation.
So if you want to convince me to switch, then, once you understand my position, you’ll realize you have two basic choices: 1) Convince me that LilyPond doesn’t actually make more beautiful sheet music than the alternatives, or 2) Convince me that I should care about speed (or whatever) more than I currently do. (And then, importantly, proceed to convince me that LilyPond is actually worse by that metric.) Stated more generally, you can either try to convince me that my position isn’t as well supported by my values as I think it is, or you can try to convince me that my values are different than what they should be. (The easiest way to do this is usually to find a contradiction between two values, preferably a deep one that you want me to keep and a surface one that you want me to discard.)
If you can’t put your finger on the crux of the disagreement, odds are high that you and your debate opponent are going to wind up talking past each other, effectively having two separate conversations that tend towards vitriol. No one wants that.
(Note also that this kind of exercise can lead to developing completely different arguments for the same thing. If I’m advocating LilyPond to a fellow composer, I might play up the “beautiful notation will help convince performers that your music is worth taking seriously” angle; if I’m pitching it to someone whose interests are more archival, I might play up the open source angle, emphasizing the decreased (eliminated?) risk that the sheet music in question might someday become inaccessible due to the whims of a corporation or its unexpected collapse. Not everyone who holds the same position takes the same path to it, and that’s an important thing to be aware of.)
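(For readers who have never seen LilyPond, part of that archival appeal is that its input is ordinary plain text — readable in any editor, versionable, and independent of any one vendor. A minimal sketch of what a tune looks like as LilyPond source — the `\version` number here is an assumption, not a requirement of the example:

```lilypond
\version "2.24.0"

% Three pickup-free bars of 3/4 in G major:
% three quarter notes, then a dotted half note.
\relative c' {
  \key g \major
  \time 3/4
  g4 a b | c2. \bar "|."
}
```

Running this file through the `lilypond` command engraves it as a PDF of typeset notation.)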
But, as suggested by the second choice above, there will also be times where trying to summarize the other side reveals that the debate you’re having isn’t the one you need to be having. Sticking with the same example, if you think my priorities are out of order when it comes to evaluating notation software, we can’t have a productive debate about “Which notation software should you use?” Instead, we have to deal with “Which qualities are the most important in a notation program?” or even “How should composers prioritize their brief time on earth?” These are obviously related questions, but they’re not the same question, and you can’t really answer the former until you have at least a basic agreement on the latter.
And it may be that fundamentally, you can’t come to a basic agreement on the deeper questions. Music notation programs are low–stakes enough that that possibility here seems as silly as it is unlikely, but it’s easy to imagine such ideological conflicts when it comes to higher–stakes questions about the morality of abortion, the optimal limits on government power, how to structure the maximally just society, and the like. Acknowledging this isn’t to give in to relativism, to advance the claim that there are no absolute truths, merely differing perspectives and opinions — it’s just admitting that changing people’s minds is hard, and won’t always be possible in the confines of a single internet debate (which, let’s be real, is not necessarily the best forum for persuasion). The fact that two people can’t agree on something doesn’t mean that both of them are right.
Still, there are times where it’s useful to know that agreement is not in the cards. My time on earth is preciously limited, and engaging in debates destined to go nowhere is definitely not the best use of what little I have. (This isn’t to say that I don’t sometimes get sucked into such debates, of course. We often fail at what we aspire to be.) If I have an hour to kill and I see someone online who generally shares a lot of philosophical ground with me espousing something I think isn’t actually factually supported, I’ll probably dive in and throw a few citations their way, since that’s probably a productive conversation we can have. If, on the other hand, in that same hour I see someone who’s coming at an issue I care about a lot from a place of totally different values — perhaps they don’t think societies have a moral obligation to provide a basic standard of living for their members, or perhaps they think that acting morally requires following (with appropriate interpretation) the edicts of their divinity of choice — I’m probably not going to engage, not because I don’t care about a universal basic income or about abortion or whatever, but because constructing a thorough defense of “societies collectively have obligations to their members” or agnostic morality takes a huge amount of time and effort, and is likely to be met with stiffer resistance too, since what’s being challenged is not a factual matter but a deep–seated, possibly identity–shaping value.
This last point could stand elaboration. It’s flattering to think of myself as a rational being, to think that my philosophical outlook rests on firmly established first principles that I can mount a vigorous argument for given a few basic axioms, but I don’t think that’s really true. Sure, I can offer defenses of why Autonomy/Self–Determination is important and a good thing to value, but I don’t value Autonomy because of those arguments — I construct those arguments because I value Autonomy. That’s just a fact about me. I value Autonomy. It’s not the only thing I value, or even the thing I value most, but it undergirds a wide swath of my more surface–level beliefs, from my positions on trans issues to my support for a universal basic income and even my belief in democracy as a system of government. There’s more to my sense of self than a set of philosophical positions, of course, but said positions are a non-negligible chunk of my self–conception, which means that it’s easy to interpret attacks on the abstract value of Autonomy as attacks on me as a person instead. And that, in turn, means my defenses of Autonomy-as-a-value will gain some extra urgency from being tinged with an edge of self–preservation somewhere deep in my psyche.
Ultimately, this is the biggest reason I care about truly grokking the positions of people who disagree with me. My beliefs make sense to me. I believe them. Of course I do — they’re my beliefs. When it comes to the beliefs I’ve had for the longest and that most of the people around me share, it’s easy to start thinking of them as self–evidently obvious, as truths you would have to be irrational to deny. If people do deny these truths, it must be because they (often grouped under a tidy label like “Republicans” or “Socialists”) haven’t thought about them, or because they’re willfully deluding themselves, or because they’re outright malicious. Whatever the reason, such people are either baffling or full of ill intent. They probably can’t be reasoned with — after all, if they could, they would already agree with you! Their opinions can be dismissed out of hand.
This is dehumanization. It’s a subtle form of it, sure — I’m not claiming that political Facebook posts that don’t attempt to grok people who disagree are on a par with all other forms of dehumanization and will lead inevitably to ideology–based genocidal totalitarianism — but it’s there all the same.
People don’t pick their beliefs at random out of the ether. Our beliefs reflect our underlying values. Sometimes the way they do that is complicated and contradictory and messy and strange, but they are nevertheless sincere. This kind of complexity is very deeply human. People who disagree with you, even on deep, fundamental issues, aren’t doing it out of spite or malice or thoughtlessness or delusion. Assuming that they’re arguing in good faith, they’re doing it because their understanding of the world and way of valuing things in it is different — perhaps radically different — than yours.
I want to emphasize that this post is not a Guide To All Interactions Online Ever, or even for All Interactions That Involve Some Kind Of Disagreement. There is a time for calmly reasoned debate that seeks to fully and fairly summarize the other side, but that time is not All The Time, Constantly, Always. Sincerely held beliefs that reflect deep personal values can still be bigoted, callous, or otherwise destructive, and it is legitimate — necessary, even — to call them out as such with language that matches their repugnance. I have written about this in the past, as have others. Ideas that are beyond the pale do not magically become less so simply by dint of being sincerely held.
This post also isn’t an argument about how people should cultivate their Facebook pages in terms of ideological diversity. As much as I think it’s important to be exposed to people who disagree with you, I don’t think that that exposure needs to happen at every moment of every day, nor that everyone needs to count staunch ideological opponents among their closest friends and confidantes. For some people, Facebook is a semi-public forum where they can brush shoulders with all kinds of passing acquaintances; for others, it’s a closely guarded refuge for de-stressing with only the most meaningful people in their lives. (And, of course, some people aren’t even on Facebook at all.) I don’t think either of these approaches is inherently superior to the other, and I definitely don’t agree that members of oppressed groups are doing something wrong if they seek to avoid those who actively argue that they shouldn’t have rights. Cultivating an understanding of the intellectual diversity in the world doesn’t require constantly bombarding yourself with calls, however coded, for your destruction.
This isn’t a post about any of that. This is merely a post about when you are trying to change someone’s mind by engaging them in a calm, reasoned debate. If that’s your goal, then it pays to take the time to really understand what they’re arguing and why they’re arguing it. It’s not an easy task, but it’s important, and it helps keep the flame wars at bay.