Psychology for Socialists, Part 3: Know your enemy.

by Jonathan K

“Psychology for Socialists” is a multi-part series designed to introduce people to findings and theories in psychology that are relevant to socialism and activism. The things I will be presenting aren’t exclusively relevant to those topics. In fact, they apply to almost every facet of our lives. What I will be doing is presenting them in relation to the work we do as socialists.

Let me start with a couple of disclaimers. The first is that findings in psychology are (almost) never absolute. We can capture general patterns or describe the most likely behaviors or reactions, but there will always be exceptions. So, for everything I’m about to describe, remember that it doesn’t apply to everyone or every situation. The second disclaimer is that psychology is an imperfect science. One way psychology is imperfect is that, like many sciences right now, it is struggling with a replication crisis. The findings I present will either be ones I have confidence in, or I will be clear that they are still unsettled. However, even for the ones I have confidence in, the ideas behind them could be overturned at some point in the future.

In another sense, psychology is imperfect because, like many sciences, it has suffered from a lack of diverse perspectives, and more than other sciences it has suffered from a lack of diverse data. Many of the findings I will be discussing are based on studies of mostly upper-middle-class, mostly white college students, conducted by mostly white researchers (though somewhat less overwhelmingly cis-male than in other fields). In the last two decades the field has become more aware of this and has made efforts to self-correct, but it will take some time before we can be confident that these findings apply to all of humanity.

In parts 1 and 2 of this series, I focused on ourselves and our organization, and how we could do our work better. Now I’m going to turn to a less pleasant topic: knowing the threats to our organization. The topics I cover here will do double duty. As it turns out, some of the psychological phenomena that are likely to tear our organization apart are closely related to the propaganda engines of the fascist right. The tools that fascists use to stoke fear, sow division, and demand obedience exploit aspects of the human mind that, without any malicious intent or really any control at all, can shatter friendships, split organizations, and produce the kinds of toxic behavior we see in leftist organizations.

First, I’ll discuss intergroup conflict, which, of all the topics in this series, poses the greatest threat to DSA. Then I’ll discuss the strange phenomenon of “loss aversion”, and the dangers of authority and conformism. Each of these appears most obviously in fascist messaging and practice, but each also has some relevance within DSA, mostly as a lesson in what to avoid.

I. Intergroup conflict

Both historically and in the present, fascism depends deeply on defining an “other”, making it a feared enemy, and demanding obedience in order to destroy it. This should not be news to any of you. It also shouldn’t be surprising to many of you that much of DSA’s internal strife comes from the exact same features of the human mind.

Humanity is, for whatever reason, extremely good at breaking the world down into “us” and “them.” It seems to be a nearly universal tendency.

When this happens, the consequences are very predictable. You will treat “us”, in technical terms your “in-group”, very well. You will be more generous, more trusting, and generally think of in-group members more as individual people with thoughts, feelings, and diverse opinions. For someone in another group, an “out-group”, you will do pretty much the exact opposite: give less, be less sympathetic, and think of out-group members as being “all the same,”1 both in the form of stereotypes and in ascribing the opinions or actions of one member of the group to the group as a whole.

The most extreme and visible form of these group effects is bigotry. However, notice that there are many different forms of bigotry that use completely different ways of defining in-groups and out-groups, some of which seem to be built on largely cultural constructs (for example: race, religion, national origin) rather than any kind of objective or intrinsic trait. This hints at the bigger problem: Humans can make in-groups and out-groups out of anything, no matter how trivial, and once those groups have formed any trivial disagreement can become a major conflict.

One of the classic demonstrations of this is Muzafer Sherif’s “Robbers Cave” experiment. This was an experiment conducted in 1954 with 20 white middle-class assumed-male 12-year-olds in the US. It took place at a summer camp named Robbers Cave. The children didn’t know each other, and before they arrived, they were randomly assigned to one of two groups, neither group knowing that the other even existed. The groups were allowed to pick their own names (they chose the “Eagles” and the “Rattlers”). For the first week, they did cooperative team-building exercises, and again, neither group knew that the other group existed. Then, in the second week, they were told that the other group existed, and the two groups were put into light competition with each other. Immediately, they started throwing insults (including some extremely racist language), fighting, raiding each other’s cabins, and basically putting on the best imitation of war that 20 twelve-year-old boys at a summer camp could produce. (The third week was spent breaking these groups down and trying to undo this hostility.)

The basic conclusion is that you can get all the behaviors of bigotry and hostility with any arbitrary grouping. Nobody has done a study exactly like this since Sherif (for a number of reasons), and because of the extremely narrow sample it’s more of a good example than a revolutionary insight into human nature, but psychologists have found similar effects pretty much everywhere in the world (though some studies report less out-group hostility in certain cultures). Even without the self-defined identity and cooperative team-building (which really strongly build up group identity), you can get versions of the same in-group/out-group effects just by giving people different color t-shirts. This is described as a “minimal group”, a grouping defined by exactly one completely arbitrary trait. Minimal groups still give you pretty strong in-group and out-group effects. If you want a quick summary of the entire literature, this comic pretty much nails it.

Fascist propaganda constantly exploits this feature of human nature. They actually use a kind of triple-whammy to make these effects terrifyingly strong. First, they clearly define an in-group and an out-group, from emphasizing white cis-male flag-waving “American” identity, to defining out-groups by whatever terms happen to be convenient for their purposes at the time (“blacks”, “illegals”, “the libs”). Second, they push the idea that the in-group is in competition with the out-group(s) (“stealing our jobs”, “threatening our way of life”). Third, they use the language of disgust. I could write a whole separate article on the dangers of disgust, but the short version is that disgust is an incredibly visceral and foundational human emotion that has moral weight. If something is disgusting people will treat it as immoral. However, with a few exceptions, what is disgusting varies between cultures2, and you can deliberately make something (or someone) disgusting without too much difficulty. Furthermore, if you want to bring about widespread hostility and even outright genocide, the fastest way to get people on board is to make the out-group disgusting. The Nazis famously described the Jews as being “smelly” or “dirty” in children’s books. Currently, in the US, you see the same language constantly leveled against marginalized racial, sexual, and economic groups from the right-wing media.

In organizations like DSA, these intergroup conflicts take a different form. First of all, people typically call it sectarianism. Second of all, everything else is exactly the same.

We define a lot of little in-groups in DSA: caucuses, working groups, and so on. Most of the time it’s not an issue, but the moment there is even the tiniest amount of competition or disagreement that can be framed in terms of groups, things get ugly very quickly. This problem ties back to things I discussed in my last article, most notably attribution and saving face. People are more likely to make dispositional attributions about out-groups (“that’s just how they are”).

Again, that only makes the problem worse: it’s much harder to assume good intentions in someone who is from an out-group, and easier to assume good intentions in someone who is in your in-group. In some cases this works in our favor. It’s how DSA as a whole holds together, at least in theory. But, when group identities within DSA come into play, and those identities become stronger than the broader group identity of being in DSA, we have trouble working together. You also save face on behalf of other members of your group, which can turn an individual disagreement or simply a mistake into a group-wide conflict.

Avoiding this kind of sectarian conflict isn’t easy. There’s a lot more that I could say about the intergroup conflicts that have arisen in Boston DSA alone over the last couple of years, but that’s a whole separate article unto itself. Even so, there are a few things we can do that will stop intergroup conflict from damaging our work.

One thing you may have noticed is that group identity is flexible and multi-faceted. Everyone belongs to many different groups at once. When it comes to group conflict, the issue is typically which of those group identities is highlighted at any given time. The effects of even a temporary group identity can be quite dramatic. Creating a minimal group with t-shirts can temporarily override racist biases, at least towards people wearing the same color of t-shirt. Marx and his successors understood this to a degree, highlighting the identity of the working class over and above any other group identity. In terms of how to counteract this aspect of right-wing propaganda, the approach is clear: make people conscious of their class identity, and who that class is really in competition with.

For conflicts within DSA, we have a convenient pre-made unifying group in DSA itself. Highlighting our shared membership and shared goals over other labels will support more respectful discussion and productive interactions. To be clear, I don’t think we need to disband the caucuses, and I do think they serve a positive purpose for their own members. That said, we must be extremely vigilant that we avoid framing any discussion as pitting one group against another.

Our best defense against this is to think of each other as comrades first and above all. It sounds cheesy, but it’s simply the truth. If you think about people in terms of their caucus or their working group or some other subdivision of DSA, you will be less likely to think of them as individuals, and more likely to treat them as an “other.” Conversely, when you present ideas that are your own, make clear that you are presenting them as an individual rather than as a member of any group to which you belong. If a disclaimer is not provided, we should make a habit of asking whether something is an individual position or a group or caucus position. There is nothing to be gained from ambiguity, and people will assume hostility given the chance.

II. Loss aversion

Exactly one psychologist has ever won a Nobel Memorial Prize: Daniel Kahneman. He won it in economics, for a simple insight that mainstream economists found revolutionary: people are irrational, and they are irrational in predictable and quantifiable ways.

One of the phenomena Kahneman discovered is called “loss aversion”, and it is brutally simple. Say someone offers you a mug for sale and asks what you would pay for it; let’s say $3.50. Then, later, someone gives you a mug; you own it, it is yours. Now someone asks how much they would have to pay you to give up that mug. Most people will say about $7.

That’s loss aversion in its simplest form: You value something roughly twice as much if you think of it as yours. That’s without changing anything about what the thing is, what it can do, how it looks, whatever. As soon as you think of it as belonging to you, you value it more. It is a deeply irrational bias.

Loss aversion shows up in all kinds of interesting forms. People will take bigger risks to acquire something than they will take with something they already own. There’s an intuitive way to think about this: you might be willing to buy a lottery ticket with a one-in-millions chance of giving you a billion dollars, but if you had a billion dollars, you’d never stake it on something with a one-in-millions chance of letting you keep it. That’s an extreme example, but the basic idea works anywhere. The entire idea of “opportunity cost,” the price of not doing something, is an attempt to apply the power of loss aversion to things that we would typically think of as gains, because loss aversion motivates people so strongly.

A lot of right-wing messaging uses loss aversion to drive up various forms of bigotry. As I mentioned in the last section, one of the classic anti-immigrant arguments of the right is “they’ll take our jobs”. Note that this message always uses the verb “take.” That’s because it implies that you will lose something. If you look at any right-wing propaganda, you’ll find something phrased in terms of “loss,” and that’s explicitly to make people treat the group “taking” something as the enemy, and valuing whatever it is they might “lose.”

Loss aversion plays a big role in right-wing economic messaging too. The right has successfully framed taxation as “taking” something that would otherwise belong to you. They frame social programs as “taking your tax dollars” and giving them to someone else, trying to give you a sense of ownership over money that was never yours to begin with. It works, too: people are less willing to pay taxes when it feels like a loss.

Even within DSA, loss aversion will sometimes rear its head. Any time an action is framed as taking something away, there is resistance to it. It’s a tried-and-true way to spin up opposition to anything, and we should be careful to ask when something is truly a loss and when it’s just being framed that way. Listen carefully the next time a contentious issue comes up for debate, and you will likely hear someone suggest it means losing something or taking something away from the organization, whether resources, character, or something else. That’s not because they’re being disingenuous, by the way. More likely than not it’s an honest assessment of how they view the issue, and why they feel strongly about it. It’s just that whether something is a loss or not can often be a matter of perspective or opinion rather than an objective fact.

The upside of loss aversion is that it means that some gains for economic justice are almost impossible to reverse once implemented. The Affordable Care Act might have been unpopular when it first showed up, but at the first whisper that you might “lose” your health insurance, public opinion almost completely flipped. In any country that has universal healthcare, trying to undo it is politically implausible without extreme antidemocratic efforts. The few places that have some form of universal basic income? Same deal. It’s a fight to create a universal basic income system anywhere that doesn’t have one, but anywhere that has UBI will fight tooth and nail against anyone who tries to take it away.

In terms of counteracting right-wing messaging, there are a few ways to approach the problem. As far as I know, you can’t beat loss aversion outright; there’s no way to “turn it off” that anyone has published. Psychologists have some ideas about when it doesn’t show up, but no generally applicable way to use that information.

One strategy is to use loss aversion in our own messaging, whenever we can. Yes, it’s a propaganda technique, but it’s one that preys on how people subjectively value things rather than changing an objective truth. If you can honestly frame something in terms of a loss, that’s no less accurate than framing it as a gain, and it will resonate more with people. Another area to look for chances to use it is in policy proposals. That article I linked earlier found that a tax structure that deducts money before people ever see it and is guaranteed to give a refund is much more welcome, and creates much more compliance, than one in which people have to pay more out of their pockets when they’re doing their annual taxes. Keep that in mind if we ever find ourselves in a position to make policy.

In internal discussions, the primary resources we stand to lose are decision-making power and the work we have invested in various projects. Each takes a different form. Decision-making power is the most obvious: if we are faced with a proposal that would reduce our ability to influence the decisions our chapter or working group or organization makes, our first impulse will be to push back on it. At first glance it might sound like I’m talking about things that are simply anti-democratic, in which case the loss aversion is good, but it’s more complicated than that.

Think of it from the perspective of the members of any DSA chapter prior to summer 2016. The members at that time were used to accounting for huge percentages of any vote. If your general meeting needs only ten attendees to make quorum, each person accounts for a full 10% of the decision-making power of the entire chapter. Then the chapters grew to have hundreds of members, any general meeting that made quorum needed triple-digit attendance, and each person in attendance accounted for less than 1% of the decision-making power. That’s a form of loss. In some chapters (thankfully not so much in Boston) we saw leadership committees that were increasingly reluctant to cede power to their membership, partly because of that loss of power.

More generally, we need to recognize in ourselves when our reaction to something is governed by our own loss aversion, and ask whether that reaction is appropriate or not. To create a socialist world, we’re all going to end up giving up something. We have to be willing to look at ourselves and ask what we’re really willing to lose, and when the time comes to lose it, we must be ready for how strongly we will want to resist it.

III. Authority and conformism

Following World War II, psychology as a field turned a lot of attention to figuring out how the civilians of Nazi Germany could become servants of fascism and commit some of history’s greatest atrocities, and most of all whether humans in general could be driven to the same extremes. By the 1960s, the answer was clearly that it wasn’t a unique occurrence. The Nazis had exploited some very straightforward features of the human psyche that could be found anywhere. Any country in the world can fall under the sway of a fascist regime. Some of the tools required I’ve already covered, but when it comes to fascism there are two other necessary pieces: The psychological power of authority and conformity.

Psychology is such a new and rapidly developing field (compared to other sciences at least) that it’s relatively rare to find work from the mid-20th century that holds up today. However, two studies in particular have held up, and are guaranteed to show up in every introductory course: Milgram’s work on authority, and Asch’s work on conformity.

The Milgram Experiments are so (in)famous that the Wikipedia entry for them is actually a reasonable source. The setup was simple: The participants — middle-class white people from New Haven — came into the lab and were told they would be doing a task with another subject. The other subject was actually what’s called a “confederate,” an actor employed by the experimenter. The subject would read word-pair memory questions to the confederate, who would be in a different room and could only be heard via intercom, and the subject would deliver progressively stronger electric shocks every time the confederate made a mistake. The subject got to experience a low-level version of this shock themselves, and it was quite painful.

During the experiment, the confederate would make several pre-arranged mistakes, and make increasing noises of agony with the increasing power of the shock (they were acting; the confederate was never actually shocked, but the participant didn’t know that until after the experiment ended). Eventually the confederate would mention having a heart condition, then plead for mercy, and eventually just go silent. If the subject asked to stop, an experimenter in their room, wearing a white lab coat and holding a clipboard, would first say “Please continue.” The second time the subject asked to stop, they would say “The experiment requires that you continue.” The third time they asked, “It is absolutely essential that you continue.” The fourth, “You have no other choice, you must go on.” If, at that point, the subject insisted on stopping, the experiment would stop.

Only 14 of 40 subjects insisted. The rest continued to the end.3

The point of Milgram’s experiments was that authority, manifested as a white lab coat and direct commands, is an incredibly powerful thing. Almost two-thirds of that group were willing to apparently kill someone, just because they were told to do so by a man in a white lab coat holding nothing more threatening than a clipboard.

The power of authority is a necessary tool of fascism and authoritarianism. There are no limits on what someone with absolute power can get others to do even without explicit threats. Note that it isn’t just authority to some great leader, either. The Milgram result works on a very small scale, with a very specific and narrow kind of authority. Police take advantage of this all the time. Their threat comes in part from force, in part from the law, and mostly from the simple fact that their uniform represents both. A police officer can order someone to do almost anything, and merely because it’s coming from someone with a particular uniform, they’ll often do it.

DSA’s structure is resistant to developing this kind of authoritarian power within itself because the membership holds the primary authority and can collectively overrule its leadership at any time. However, that doesn’t mean it’s not at risk. The Danny Fetonte incident was a close call with authoritarian power, most visibly in the Austin chapter, where he effectively single-handedly took over a general meeting, but also, to a degree, on the NPC. The management of the DSA weekly blog recently got shuffled around because of a similar episode involving a member of the NPC. It takes some effort to contradict someone yelling orders in any context, and not everyone is willing to do it.

It is something we must always be vigilant against. Never obey a command simply because it is a command.

In fighting the power of authority in the broader world, it’s important to know what does and does not work. Follow-up studies found that seeing someone else defy authority actually doesn’t make people more likely to defy it themselves, even if there are no consequences for doing so. However, people who feel more agency in themselves, and people who feel more empathy towards others, are both more likely to resist orders to hurt someone else. It’s not easy to make people feel more agency or more empathy, but DSA actually excels at it. We emphasize doing things to make the world a better place, and we emphasize comradeship and compassion. That’s actually one of the big reasons I joined DSA: It really has the tools to stand up to the effects of authoritarianism.

The other half of fascist power is conformity, and again there is an experiment that has stood the test of time and is so famous that Wikipedia is a reasonable source. Solomon Asch, in 1951, put a group of white middle-class men in a room. Only one of them was a subject; the rest were confederates. The group was asked to do a task in which they matched line lengths: they were given a sample line and asked to say which of three other lines matched it. On certain trials, every confederate in the group gave the same obviously wrong answer. About a third of the time, the subject would go along with the group.

There’s an important upside in Asch’s work: Two-thirds of the time, people were willing to defy the group. Simple, objective facts are not that easy to distort. However, 75% of his participants did conform at least once. That’s more of a problem. These are obviously extreme and Orwellian examples, but the power of conformity is far stronger for things that are not objective facts.

Authoritarian regimes rely heavily on conformity to induce obedience without even needing to use the power of authority, and again this is not a surprise to anyone. Combating it is usually a matter of being the voice of dissent. If there is no unified opinion within whatever you see as your group, then it’s easier to defy a majority view. If everyone who dissents is removed from your group, conformity becomes harder to resist. The modern GOP is a nearly perfect case study of this: Inasmuch as there were ever “moderate” Republicans in our lifetime, they were chased out in favor of a homogeneous whole that marches in lockstep with the directives of a sole leader.

For DSA, what this means is that debate is good, differences of opinion (discussed respectfully) are good, and we should make sure we continue to be a multi-tendency organization. That’s not to say that consensus is bad, by any stretch, but there is a difference between having some clear points of unity and enforcing a conformity of opinion from our members (which certain other socialist organizations do explicitly). The ability to support internal debate keeps us from falling into the trap Asch found: We will not change objective facts to conform for its own sake, and we should make sure that never happens.

IV. Conclusions and reflections

That brings us to the end of Psychology for Socialists, for now. In these three articles, I’ve tried to give a simple introduction to some ideas that I think are truly essential to our work. The goal of this, more than anything, was to make all of us more aware of the nature of our own minds, the biases that we are prone to, and the mistakes we can make as a result. There’s so much more I could have talked about, and maybe I will at some point in the future. The human mind is a complex piece of work, and psychology as a science is still in its infancy. However, the nature of democracy is that, in order to succeed, it must be built on an understanding of how people think, both individually and collectively. Without that, democratic socialism will suffer the same fate as Esperanto: a nice idea, but implemented in a human-incompatible way.

  1. Judd, C. M., & Park, B. (1988). Out-group homogeneity: Judgments of variability at the individual and group levels. Journal of Personality and Social Psychology, 54(5), 778-788.
  2. Rottman, J., & Kelemen, D. (2012). Aliens behaving badly: Children’s acquisition of novel purity-based morals. Cognition, 124, 356-360.
  3. (All participants were debriefed at the end and shown that the confederate was unharmed, and later work tracked down the participants of these experiments and found that none of them suffered any identifiable long-term distress from being in the study, but there has been a lively debate about the ethics of these experiments ever since they were published. Most of Milgram’s experiments on authority could not be run today because of research oversight mechanisms that were put in place partly because of these experiments.)