
Why do good people do terrible things?
When Hannah Arendt sat in that Jerusalem courtroom watching Adolf Eichmann—a man who appeared utterly ordinary, even banal—she confronted a disturbing truth that still haunts us today: the capacity for evil doesn't require monsters. It requires only ordinary people placed in extraordinary circumstances. From Stanley Milgram's shocking obedience experiments to the recent Wells Fargo account fraud scandal, we've witnessed countless examples of fundamentally decent individuals participating in terrible acts. This paradox forces us to question everything we think we know about human nature and moral character.
The urgency of understanding this phenomenon extends far beyond academic philosophy. In our current era of corporate scandals, political extremism, and social media mob behavior, we witness daily examples of seemingly good people engaging in harmful actions. The 2008 financial crisis saw respected bankers engage in predatory lending practices[1]. The January 6, 2021 Capitol riots involved teachers, police officers, and military veterans[2]. Social media platforms designed to connect people have become vehicles for harassment and misinformation, often operated by individuals who genuinely believe they're improving human communication.
The Situational Forces That Override Character
Philip Zimbardo's Stanford Prison Experiment in 1971 remains one of the most famous demonstrations of how situational forces can transform good people into perpetrators of cruelty[3]. College students randomly assigned to play prison guards began exhibiting sadistic behavior toward their "prisoners" within days, forcing Zimbardo to end the planned two-week experiment after just six days. The guards, who had no prior history of violence or cruelty, began stripping prisoners naked, forcing them to perform degrading acts, and denying them basic necessities.
Stanley Milgram's obedience experiments, begun at Yale University in 1961, revealed similar patterns[4]. Participants, believing they were helping with a learning experiment, administered what they thought were increasingly painful electric shocks to another person when instructed by an authority figure. Despite hearing screams and pleas for mercy, 65% of participants continued to the maximum 450-volt level when prompted by the experimenter's calm insistence that "the experiment requires that you continue."
These experiments identified several key situational factors that can override individual moral judgment. Authority structures create what Milgram termed "agentic shift"—the psychological transition from feeling personally responsible for one's actions to feeling like an agent carrying out someone else's will. Role expectations, as seen in Zimbardo's prison study, can fundamentally alter behavior as people conform to the perceived requirements of their position. Social proof and group pressure create environments where harmful actions become normalized through collective participation.
The gradual nature of moral compromise, which researchers call the "slippery slope effect," allows good people to cross ethical boundaries incrementally. Albert Bandura's research on moral disengagement identified how people rationalize harmful behavior through euphemistic labeling (calling torture "enhanced interrogation"), advantageous comparison ("at least we're not as bad as them"), and displacement of responsibility ("I was just following orders")[5].
The Psychology of Moral Disengagement
Bandura's theory of moral disengagement explains how individuals selectively disengage their moral standards to avoid self-condemnation while engaging in harmful conduct. This process involves eight psychological mechanisms that allow people to maintain their self-image as moral agents while participating in immoral acts.
Moral justification involves reconstructing harmful behavior as serving a worthy purpose. The 9/11 hijackers viewed their actions as religious duty, while corporate executives justify massive layoffs as necessary for shareholder value. Euphemistic labeling sanitizes harmful conduct through language—"collateral damage" instead of civilian deaths, "rightsizing" instead of mass firings, "enhanced interrogation" instead of torture.
Advantageous comparison minimizes harmful acts by comparing them to worse behaviors. A manager engaging in workplace harassment might justify their actions by pointing to more severe cases of abuse. Displacement of responsibility shifts accountability to authority figures or institutional pressures, while diffusion of responsibility spreads blame across group members, making individual accountability unclear.
Distortion of consequences involves minimizing or ignoring the harm caused by one's actions. Dehumanization strips victims of human qualities, making it easier to harm them—referring to people as "animals," "savages," or reducing them to statistics. Finally, attribution of blame holds victims responsible for their suffering, suggesting they brought harm upon themselves through their actions or characteristics.
Research has shown that these mechanisms often operate unconsciously, allowing people to maintain positive self-regard while engaging in harmful behavior[6]. The gradual activation of these mechanisms helps explain how ordinary individuals can become complicit in systematic harm without experiencing overwhelming guilt or moral distress.
System-Level Pressures and Institutional Corruption
Beyond individual psychology, systemic factors create environments where good people are pressured into harmful actions. Organizational cultures can normalize unethical behavior through reward structures, role expectations, and institutional pressures that make moral resistance extremely difficult.
The Wells Fargo account fraud scandal (2011-2016) illustrates how institutional pressure can corrupt individual behavior[7]. Employees—many described by colleagues as honest and hardworking—created millions of unauthorized customer accounts to meet aggressive sales quotas. The bank's culture of "Eight is Great" (selling eight products per customer) created impossible targets that could only be met through fraud. Employees who raised ethical concerns were fired or demoted, while those who met targets through questionable means were promoted and celebrated.
Muel Kaptein's research on ethical climates in organizations identified how structural factors shape moral behavior[8]. Organizations with strong performance pressures, weak oversight mechanisms, and cultures that prioritize results over methods foster what researchers call "ethical fading": the gradual erosion of moral awareness as people focus on achieving goals rather than on the means used to achieve them.
The 2008 financial crisis provides another example of systemic corruption involving fundamentally decent people. Mortgage brokers, bank executives, and rating agency analysts—many of whom were respected community members with no history of criminal behavior—participated in predatory lending practices and the creation of toxic financial instruments. The system's complexity created moral distance between individual actions and their ultimate consequences, while competitive pressures and financial incentives overwhelmed ethical considerations.
Military organizations present unique challenges in understanding how good people can commit atrocities. The My Lai Massacre during the Vietnam War involved soldiers who, under different circumstances, might never have harmed civilians[9]. Lieutenant William Calley, who led the massacre, had no prior history of violence and was described by those who knew him as quiet and unremarkable. The combination of combat stress, dehumanizing training, unclear rules of engagement, and group dynamics created conditions where moral boundaries collapsed.
The Role of Cognitive Biases in Moral Failure
Cognitive biases—systematic errors in thinking that affect decisions and judgments—play a crucial role in how good people rationalize harmful actions. These mental shortcuts, which normally help us navigate complex environments efficiently, can lead to serious moral failures when they override ethical reasoning.
The fundamental attribution error leads people to attribute others' harmful behavior to character flaws while explaining their own harmful behavior as a response to situational pressures. This asymmetry helps explain why individuals can maintain positive self-images while engaging in harmful conduct: they view their actions as responses to circumstances rather than reflections of their character.
Confirmation bias leads people to seek information that supports their existing beliefs while ignoring contradictory evidence. In moral contexts, this can prevent individuals from recognizing the harm their actions cause or considering alternative courses of action. The tobacco industry executives who suppressed research linking smoking to cancer weren't necessarily evil—many genuinely convinced themselves that the evidence was inconclusive because they selectively attended to studies that supported their position[10].
System justification theory, developed by John Jost, explains how people are motivated to defend and rationalize the status quo, even when it disadvantages them or others[11]. This bias can lead good people to participate in or defend harmful systems because changing them seems impossible or threatens their worldview. The widespread passivity of ordinary Germans during the Holocaust partly reflected this tendency: many convinced themselves that resistance was futile and that the system, however flawed, was better than chaos.
Moral licensing occurs when past good behavior gives people psychological permission to act less ethically in the future. Research by Benoit Monin and Dale Miller found that people who establish their moral credentials through symbolic gestures often feel entitled to act in ways that contradict those values[12]. A CEO who champions diversity initiatives might feel licensed to engage in discriminatory hiring practices, believing their public commitment to equality absolves them of scrutiny.
Historical Case Studies: From Ordinary Citizens to Perpetrators
Christopher Browning's study of Reserve Police Battalion 101 provides one of the most detailed examinations of how ordinary men became mass killers during the Holocaust[13]. The battalion consisted of middle-aged German policemen, many too old for military service and with civilian careers as shopkeepers, clerks, and laborers. When ordered to participate in the mass execution of Jewish civilians in Poland, these men—who had no particular Nazi ideology or history of violence—initially showed reluctance and distress.
However, through a combination of peer pressure, gradual habituation, and the normalization of violence within their unit, most eventually participated in systematic killing. Major Wilhelm Trapp, the battalion commander, gave his men the option to excuse themselves from the executions, yet only about a dozen initially stepped forward. Those who continued often justified their participation by claiming they did not want to abandon their comrades or appear weak.
The Rwandan genocide of 1994 presents another case where ordinary citizens became perpetrators of mass violence. Jean Hatzfeld's interviews with Rwandan killers revealed that many participants were farmers, teachers, and shopkeepers who had lived peacefully alongside their Tutsi neighbors for years[14]. The transformation occurred through a combination of state propaganda, social pressure, and the breakdown of normal social constraints during the genocide period.
Alphonse, a farmer interviewed by Hatzfeld, explained: "On the first day, I had to drink urwagwa [banana beer] before going out to kill. The second day, I drank it during the killings. The third day, I no longer needed to drink." This progression illustrates how quickly moral inhibitions can erode under extreme circumstances, even among people who previously showed no signs of violent tendencies.
The Abu Ghraib prisoner abuse scandal provides a more recent example of how situational factors can corrupt moral behavior. The soldiers involved—including Lynndie England, Charles Graner, and Sabrina Harman—had no prior history of sadistic behavior[15]. England, in particular, was described by those who knew her as quiet and unremarkable. The combination of inadequate training, unclear guidelines, high stress, and a culture that dehumanized prisoners created conditions where abuse became normalized and even encouraged.
Corporate Malfeasance and the Banality of Financial Evil
The corporate world provides numerous examples of how organizational structures and incentives can lead good people to engage in harmful behavior. The Enron scandal involved thousands of employees who participated in or overlooked fraudulent accounting practices, many of whom were respected professionals with no prior history of unethical behavior[16].
Jeffrey Skilling, Enron's CEO, was described by colleagues as brilliant and initially committed to innovation in energy markets. However, the company's culture of extreme performance pressure, combined with financial incentives tied to short-term results, created an environment where fraudulent practices became routine. Employees who questioned these practices were marginalized or fired, while those who participated were rewarded with promotions and bonuses.
The Theranos fraud case illustrates how charismatic leadership and organizational culture can override individual moral judgment[17]. Many employees, including accomplished scientists and engineers, participated in or remained silent about the company's fraudulent blood testing claims. Elizabeth Holmes created a culture of secrecy and intimidation that made it extremely difficult for employees to raise ethical concerns or question the company's practices.
Erika Cheung, a former Theranos lab associate who eventually became a whistleblower, described the psychological pressure: "You're constantly told that you're not smart enough, that you don't understand the technology, that you're not seeing the big picture." This kind of psychological manipulation can undermine individuals' confidence in their own moral judgment, making them more likely to comply with unethical directives.
The 2008 financial crisis, discussed above, showed the same dynamics at industry scale. Mortgage brokers, bank executives, and rating agency analysts participated in practices they knew or suspected were harmful to customers and the broader economy, yet many were respected community members who would never have considered themselves capable of such behavior[18].
Social Media and the Digital Transformation of Moral Behavior
The rise of social media has created new contexts for understanding how good people can engage in harmful behavior. Online environments often strip away the social cues and empathetic connections that normally constrain harmful behavior, while providing psychological distance from the consequences of one's actions.
The phenomenon of cyberbullying involves individuals who would rarely engage in face-to-face harassment but feel empowered to attack others online. Research by Sameer Hinduja and Justin Patchin found that many cyberbullies are otherwise well-adjusted individuals who don't view their online behavior as "real" bullying[19]. The anonymity and physical distance provided by digital platforms can disinhibit normal moral constraints.
The spread of misinformation on social media platforms often involves well-meaning individuals who believe they are sharing important information. Research by Soroush Vosoughi and colleagues at MIT found that false news stories spread six times faster than true stories on Twitter, often shared by people with no malicious intent[20]. The psychological satisfaction of sharing information that confirms one's beliefs, combined with the ease of forwarding content without verification, creates conditions where good people can participate in harmful misinformation campaigns.
Online mob behavior represents another form of digital moral failure. The case of Justine Sacco, a PR executive who posted an ill-conceived joke before boarding a flight to Africa, illustrates how social media can transform ordinary users into participants in devastating harassment campaigns[21]. Many of the thousands of people who shared, commented on, and amplified criticism of her tweet were likely well-meaning individuals who saw themselves as fighting racism, not destroying someone's life.
The design of social media platforms themselves contributes to these dynamics. Features like "likes," "shares," and algorithmic amplification create psychological rewards for engaging content, regardless of its accuracy or potential harm. Former Facebook executive Sean Parker has acknowledged that the platform was designed to exploit psychological vulnerabilities and create addictive usage patterns[22].
Philosophical Perspectives on Moral Character and Situational Ethics
The empirical evidence about good people doing terrible things has profound implications for moral philosophy, particularly debates about virtue ethics versus situational approaches to understanding moral behavior. Traditional virtue ethics, dating back to Aristotle, assumes that moral character is relatively stable and that virtuous people will generally act virtuously across different situations.
However, the situationist challenge, articulated by philosophers like Gilbert Harman and John Doris, argues that empirical research undermines the existence of stable character traits[23]. If ordinary people can become perpetrators of great harm under certain conditions, this suggests that moral behavior is primarily determined by situational factors rather than individual character.
Aristotelian virtue ethicists have responded by arguing that true virtue involves the development of practical wisdom (phronesis) that enables individuals to navigate complex moral situations successfully. Julia Driver and other contemporary virtue ethicists contend that the situationist experiments don't disprove virtue ethics but rather highlight the importance of moral education and the cultivation of robust character traits that can resist situational pressures[24].
Hannah Arendt's concept of the "banality of evil," developed in response to Adolf Eichmann's trial, offers another perspective on this question[25]. Arendt argued that Eichmann represented a new kind of evil—not the demonic evil of traditional villains, but the thoughtless evil of someone who failed to think critically about the consequences of his actions. This "banality" doesn't excuse the harm caused but suggests that much evil results from moral thoughtlessness rather than malicious intent.
Alasdair MacIntyre's critique of modern moral discourse suggests that our fragmented ethical frameworks make it difficult for individuals to develop coherent moral identities that can resist situational pressures[26]. Without shared moral traditions and practices, people become vulnerable to the kind of moral drift that enables participation in harmful systems.
Psychological Mechanisms of Moral Resistance
While much research focuses on why good people do terrible things, it's equally important to understand why some individuals resist situational pressures and maintain their moral integrity under extreme circumstances. Studies of moral exemplars—people who risked their lives to save others during genocides, whistleblowers who exposed corporate corruption, and individuals who refused to participate in harmful group behavior—reveal several key characteristics.
Research by Samuel and Pearl Oliner on rescuers during the Holocaust identified several factors that distinguished those who helped Jewish victims from those who remained passive[27]. Rescuers were more likely to have been raised with strong moral principles, had diverse social networks that included people from different backgrounds, and possessed what the Oliners termed "extensivity"—the ability to see moral obligations extending beyond their immediate in-group.
Kristen Monroe's research on altruistic behavior found that moral exemplars often possessed a different cognitive framework that made them see helping others as a natural response rather than a heroic choice[28]. These individuals didn't view themselves as heroes but simply as people doing what needed to be done. This suggests that moral resistance may depend partly on how individuals conceptualize their relationship to others and their moral obligations.
The concept of moral courage, developed by researchers like Rushworth Kidder, identifies specific skills and attitudes that enable individuals to act on their moral convictions despite social pressure, personal risk, or institutional opposition[29]. These include the ability to recognize moral issues, the willingness to contemplate action, the courage to initiate moral behavior, and the persistence to follow through despite obstacles.
Studies of successful whistleblowers reveal similar patterns. C. Fred Alford's research found that effective whistleblowers often possessed strong professional identities that made them feel obligated to speak out when they witnessed wrongdoing[30]. They also typically had support networks outside their immediate workplace that provided emotional and practical assistance during the difficult process of challenging institutional wrongdoing.
Cultural and Social Factors in Moral Behavior
Cross-cultural research reveals significant variations in how different societies structure moral behavior and create resistance to harmful actions. Geert Hofstede's research on cultural dimensions identified how factors like power distance, individualism versus collectivism, and uncertainty avoidance influence moral behavior across cultures[31].
Societies with high power distance—where hierarchical relationships are strongly respected—may be more vulnerable to authority-based moral failures like those seen in Milgram's experiments. However, these same societies may also have stronger social sanctions against individual wrongdoing and clearer moral expectations for different social roles.
Richard Shweder's cross-cultural research on morality identified three broad ethics (autonomy, community, and divinity) that different societies emphasize to varying degrees[32]. These differing moral priorities can lead to conflicts in which behavior that seems obviously wrong from one cultural perspective appears justified or even obligatory from another.
The concept of social capital, developed by Robert Putnam and James Coleman, helps explain how community connections and social trust can serve as bulwarks against moral failure[33]. Communities with high social capital—characterized by dense networks of reciprocal relationships, shared norms, and civic engagement—may be better able to resist the kind of moral breakdown that enables systematic harm.
Research on the role of religion in moral behavior reveals complex patterns. While religious communities can provide strong moral frameworks and social support for ethical behavior, they can also create in-group/out-group dynamics that facilitate harm against those outside the community. The history of religious violence illustrates how sacred values can be used to justify terrible actions against those deemed heretical or threatening to the faith community.
The Character Counterargument
Rather than "good people" being corrupted by circumstances, we may be witnessing the unmasking of pre-existing moral vulnerabilities that were simply well-concealed or socially acceptable in different contexts. The banker who commits fraud and the neighbor who turns informant may have always possessed the psychological traits that enabled these behaviors, traits that appeared virtuous when channeled through socially approved activities like "competitive drive" or "community vigilance."
The focus on situational factors as explanations for moral failure may inadvertently obscure the countless individuals who face identical pressures yet choose differently, suggesting that personal moral development and character play a larger role than environmental determinism implies. If circumstances were truly decisive, we would expect far more uniform responses to moral tests—yet history is filled with examples of people who maintained their principles under the most extreme conditions.
Key Takeaways
- Situational factors often override individual character in determining moral behavior, as demonstrated by classic experiments like Milgram's obedience studies and Zimbardo's Stanford Prison Experiment
- Moral disengagement mechanisms allow people to maintain positive self-images while participating in harmful actions through psychological processes like euphemistic labeling, displacement of responsibility, and dehumanization
- Organizational systems and cultures can create environments where good people are pressured into unethical behavior through performance incentives, role expectations, and the normalization of harmful practices
- Cognitive biases such as fundamental attribution error, confirmation bias, and moral licensing contribute to moral failures by distorting ethical reasoning and decision-making
- Historical cases from the Holocaust to corporate scandals demonstrate how ordinary individuals can become perpetrators of significant harm under specific circumstances
- Digital environments create new contexts for moral failure by providing anonymity, psychological distance, and reduced empathetic connection
- Moral resistance depends on factors like diverse social networks, strong professional identities, moral courage, and supportive community structures
- Cultural factors significantly influence moral behavior, with different societies emphasizing different moral foundations and creating varying levels of resistance to authority-based wrongdoing
References
- Financial Crisis Inquiry Commission. The Financial Crisis Inquiry Report. PublicAffairs, 2011.
- Sedition Hunters. "January 6th Capitol Attack Database." Sedition Hunters, 2021.
- Zimbardo, Philip. The Lucifer Effect: Understanding How Good People Turn Evil. Random House, 2007.
- Milgram, Stanley. Obedience to Authority: An Experimental View. Harper & Row, 1974.
- Bandura, Albert. "Moral Disengagement in the Perpetration of Inhumanities." Personality and Social Psychology Review, 1999.
- Milgram, Stanley. "Behavioral Study of Obedience." Journal of Abnormal and Social Psychology, 1963.
- Independent Directors of the Board of Wells Fargo. Sales Practices Investigation Report. Wells Fargo & Company, 2017.
- Kaptein, Muel. Ethics Management: Auditing and Developing the Ethical Content of Organizations. Springer, 2008.
- Bilton, Michael and Kevin Sim. Four Hours in My Lai. Viking Press, 1992.
- Brandt, Allan. The Cigarette Century. Basic Books, 2007.
- Jost, John and Mahzarin Banaji. "The Role of Stereotyping in System-Justification and the Production of False Consciousness." British Journal of Social Psychology, 1994.
- Monin, Benoit and Dale Miller. "Moral Credentials and the Expression of Prejudice." Journal of Personality and Social Psychology, 2001.
- Browning, Christopher. Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland. HarperCollins, 1992.
- Hatzfeld, Jean. Machete Season: The Killers in Rwanda Speak. Farrar, Straus and Giroux, 2005.
- Gourevitch, Philip and Errol Morris. Standard Operating Procedure. Penguin Press, 2008.
- McLean, Bethany and Peter Elkind. The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron. Portfolio, 2003.
- Carreyrou, John. Bad Blood: Secrets and Lies in a Silicon Valley Startup. Knopf, 2018.
- Lewis, Michael. The Big Short: Inside the Doomsday Machine. W. W. Norton, 2010.
- Hinduja, Sameer and Justin Patchin. Cyberbullying: Identification, Prevention, and Response. Cyberbullying Research Center, 2020.
- Vosoughi, Soroush, Deb Roy, and Sinan Aral. "The spread of true and false news online." Science, 2018.
- Ronson, Jon. So You've Been Publicly Shamed. Riverhead Books, 2015.
- Allen, Mike. "Sean Parker unloads on Facebook." Axios, November 9, 2017.
- Doris, John. Lack of Character: Personality and Moral Behavior. Cambridge University Press, 2002.
- Driver, Julia. Uneasy Virtue. Cambridge University Press, 2001.
- Arendt, Hannah. Eichmann in Jerusalem: A Report on the Banality of Evil. Viking Press, 1963.
- MacIntyre, Alasdair. After Virtue. University of Notre Dame Press, 1984.
- Oliner, Samuel and Pearl Oliner. The Altruistic Personality: Rescuers of Jews in Nazi Europe. Free Press, 1988.
- Monroe, Kristen. The Heart of Altruism. Princeton University Press, 1996.
- Kidder, Rushworth. Moral Courage. William Morrow, 2005.
- Alford, C. Fred. Whistleblowers: Broken Lives and Organizational Power. Cornell University Press, 2001.
- Hofstede, Geert. Culture's Consequences: Comparing Values, Behaviors, Institutions and Organizations Across Nations. Sage Publications, 2001.
- Shweder, Richard. "The 'Big Three' of Morality and the 'Big Three' Explanations of Suffering." Morality and Health, 1997.
- Putnam, Robert. Bowling Alone: The Collapse and Revival of American Community. Simon & Schuster, 2000.


