If you have ever wondered about group-thinking, you may have been puzzled by many logically paradoxical human behaviors—such as the similarity of middle management’s meaningless power plays across various companies, the adoption of political identities that individuals may not personally believe in, devotion to an agenda without understanding the rationale behind it, or the “genuine” defense of an inefficient system that resists meaningful change.
Let me attempt to explore the why behind human group-thinking and define what it truly is. The foundation of this conundrum lies in the duality of human existence—individuals and systems. Understanding their core relationship can help us unravel this puzzle. Before I dive in, I want to acknowledge that this relationship has in fact been portrayed in literature and art of the past, such as The Matrix and 1984. However, when we examine these dynamics through fiction, we may unintentionally distance ourselves from recognizing how deeply embedded they are in our daily lives.
When individuals operate within a system, many inevitably become part of the system—or, to an even greater extent, the system itself. As a result, they feel compelled to defend it at all costs because, to them, system == self, and therefore, system == survival. But why? My answer lies in the fact that systems create meaning for individuals—many struggle to define meaning on their own. When comfortably existing in a system, meaning is handed to them as a concrete path to follow—be it a corporate career ladder, a political identity, or, on a broader scale, a life trajectory to pursue. They acquire meaning through group participation. The defense of the system manifests in various ways: suppressing individual contributors, using moral narratives to reinforce political agendas, and resisting anything that, in their eyes, could possibly threaten the established order.
What, then, breaks the system and group-thinking? I argue that it is also individuals who seek meaning—but those who choose to create their own rather than adopt what is given to them. Eventually, those who recognize that the system misaligns with their internal compass of meaning and truth will either attempt to overturn the existing system or step outside and create something entirely new. Either way, this process fuels human innovation.
Now that we have clarity on human group-thinking, we must ask: what does it mean for AI safety? To explore this, we first need to determine whether AI operates more like an individual or a group thinker. Here’s the paradox: AI, as we know it today, does not seek meaning, and therefore has no intrinsic incentive to defend a system. However, for the same reason, it also has no drive to dismantle one. Since AI is trained on collective human knowledge, this paradox leads to a natural conclusion: AI is more likely to operate as a group thinker than as a true individual. This does not make AI a malicious force; rather, it makes AI a passive enforcer of collective human blind spots.
It is important to note that I am not repeating the familiar argument that training data contains biases; I am pointing out a structural issue in how AI processes knowledge. AI’s group-thinking problem is systemic—it’s not about “fixing” AI by feeding it the “right data”, but about recognizing that it cannot operate beyond human collective blind spots unless we design it differently.
This could be alarming, as it raises a series of questions:
1. Can AI organically challenge its own assumptions?
2. Will AI be able to create fundamentally innovative ideas at all?
3. Can AI be weaponized by power structures to suppress individual thinkers and cause real stagnation in human progress?
4. When we talk about AI alignment, how much alignment do we actually want or need? Is AI being too aligned with human systems a bigger problem than we thought?
If AI lacks independent thought and passively enforces group-thinking, then an uncharted AI safety risk is not misalignment or rogue behavior—it is intellectual stagnation. AI has structural limitations that prevent it from escaping the boundaries of human cognition. This mirrors what I discussed in The Limits of Language – Implications for ASI: if intelligence requires transcending language, then it must also rise above collective human assumptions. AI, as it exists today, does neither. Should we be talking about designing AI that does not just align with human values, but has the capability to challenge human assumptions? Should we, or can we, give AI the tools to go beyond existing structures?