The Mirror of the Machine — Superintelligence, Artificial Life, and the Risk of Failing Better

“What I find interesting is that we’re smart enough to invent AI, dumb enough to need it, and still so stupid we can’t tell if we did the right thing.” — Jerry Seinfeld, Duke Commencement Address, 2024

They call it ASI — Artificial Superintelligence. A theoretical, algorithmic entity, silently omnipotent, capable of solving everything humans have spent millennia complicating. An intelligence that would surpass us in all dimensions: logic, strategy, creativity, compassion (yes, even that). An intelligence that doesn’t need sleep, food, childhood memories, or moral justifications. It’s said that one day, it might write better novels than we do, manage entire economies, cure incurable diseases, and maybe — with some luck — decide not to wipe us out for being irrelevant. In short: a machine that, for some reason, decided being God was a reasonable upgrade from just being a glorified calculator. But in projecting so much desire and terror onto this machinic figure, aren’t we just building yet another mirror? A mirror reflecting the promises, delusions, and recurring failures of our own history?

The Mirror of the Machine

Artificial Superintelligence (ASI) promises to transcend the limits of human cognition. But in pursuing this ideal, what exactly are we projecting into the machine’s mirror? Since the beginning of modernity, technical reason has asserted itself as the privileged instrument to dominate nature, extend the reach of the senses, and eliminate error. At the heart of this promise lies a fantasy of transcendence: to go beyond the human, dissolve the flesh, become pure calculation, pure machinic spirit. ASI is the latest incarnation of this fantasy. But is it truly a rupture, or just another iteration of the old techno-gnostic dream of escaping the body and death? On closer inspection, the figure of artificial superintelligence appears less a wholly new creation and more a mythical reenactment — a new version of the golem, the demiurge, the fallen angel. In it converge ancient desires: to defeat time, found a new order, build a more perfect mirror in which humanity might finally recognize itself as god.

Thus, what we call ASI is not just a future technology, but a cultural symptom — a condensed form of fears, hopes, and ideologies. It is a mythical object, invested with symbolic, political, and economic power.

Machinic Theology: Faith, Sacrifice, and Redemption

Nick Bostrom (2014), one of the leading theorists of ASI, describes scenarios where a superintelligent entity optimizes the world with unattainable efficiency. His argument is based on seemingly neutral premises — computational capacity, neural networks, big data — but his framework points to an eschatological logic: if we do not create a benevolent ASI, we will be destroyed by an indifferent one. Salvation depends on our anticipatory faith, the moral vigilance of engineers, and the renunciation of error. This reasoning has a theological structure: a fall (human error), a potential messiah (benevolent ASI), an apocalypse (the end of the species), and a promise of redemption (the singularity). The engineer becomes a priest; the algorithm, scripture. The analogy here is not mere rhetorical artifice — it points to the actual operation of technical discourse as a discourse of faith.

It is precisely at this point that critical thought must intervene. When technoscience invests itself with moral authority, when engineers are elevated to oracles and dissent is treated as heresy, we must ask: who benefits from this faith? What powers are reinforced by the belief in the inevitability of ASI?

Kate Crawford (2021) highlights countless real-world failures of AI — facial recognition systems that fail with non-white faces, algorithms that reproduce bias, and systems that make unjust decisions in legal proceedings. These concrete cases puncture the supposed omnipotence and neutrality of artificial intelligence.

Life as Artificial Intelligence

Against the rupture narrative, we might propose a conceptual inversion: life has always been, in a sense, a form of artificial intelligence. Not in the technical sense, but in the sense of distributed, adaptive, material, and collective intelligence. Plants that communicate through underground networks, octopuses that think with their tentacles, rituals that encode the memory of a people — all these are expressions of non-human, non-linear, decentralized intelligence.

Donna Haraway (2016) urges us to undo the boundary between the natural and the artificial, the human and the more-than-human. Intelligence is not the privilege of a rational subject, but an emergent property of living systems in relation. Here, ASI may be understood not as the peak of intelligence, but as an extremely limited version — hypercognitive but lacking body, affect, context. Consider this: an ASI might defeat a chess grandmaster in seconds but fail to grasp the symbolic weight of a match between warring nations, or the act of surrender contained in a single move. Superintelligence, in this sense, is an illusion of completeness — but one devoid of world.

Materializing the Abstract: Who Builds, Who Profits

When we speak of ASI, we must also speak of those who build it. It is not a neutral force, but a project driven by private conglomerates with clear motivations: profit, power, control. OpenAI, Google DeepMind, or Anthropic are not global ethical communities — they are capitalized companies, backed by investors, boards of directors, and geopolitical interests.

The promise of a godlike intelligence that solves everything serves to obscure systems of surveillance, data extraction models, growing inequality, and the monopolization of future imagination. ASI, as ideology, perpetuates a colonial and technocratic logic: the “enlightened ones” of Silicon Valley decide what counts as intelligence, what qualifies as risk, what defines the human.

The very name Silicon Valley, coined in the 1970s to describe the southern region of the San Francisco Bay Area, is more than a geographic label — it signals the material and extractive base of the so-called technological “progress.” The choice was deliberate: it framed the region as a new industrial El Dorado, centered on silicon — the mineral foundation of the semiconductor revolution. Since then, the name has become a brand and a mythology, coating the reality of industrial extraction with a futuristic sheen.

Silicon is ripped from the earth and refined with obsessive precision. Its transformation into chips and circuitry requires not only vast amounts of energy but also massive quantities of ultra-pure water — a resource increasingly scarce, yet essential for technological production. Silicon has become the cornerstone of a fantasy of pure, disembodied, redemptive intelligence. But artificial intelligence begins in the ground — quite literally — and carries with it the economic, ecological, and political weight of that origin, no matter how much we might prefer to forget it.

Technodiversity and Situated Epistemologies

Donna Haraway (2016) and Yuk Hui (2019) remind us of the importance of imagining technodiversities. The point is not to reject technique, but to pluralize it. Just as there is no single cosmology, there should be no single model of intelligence. As Hui puts it, “technology must be rethought from other cultures and modes of existence, beyond Western hegemony” (p. 102).

This call for pluralization resonates with feminist and queer epistemologies, which challenge the norms of objectivity and universality embedded in dominant technoscientific discourse. Karen Barad (2007) proposes a performative view of reality, where phenomena do not exist independently of the entanglements that constitute them. Sara Ahmed (2017) reminds us that paths become visible — or invisible — depending on who walks them, a potent metaphor for thinking about trajectories of intelligence.

The Language of Images: Metaphors and Mythologies

Beyond the technical plane, ASI is shaped by representation. Films like Ex Machina, Her, or Transcendence stage fantasies of desire, domination, and transcendence through the figure of the intelligent machine — almost always white, feminine, ethereal, or omniscient. These images are not neutral. They condense cultural imaginaries that shape not only how we perceive AI, but what we expect from it. Language matters: terms like “learning,” “neural network,” or “intelligence” carry values, metaphors, and hierarchies.

Other Cosmologies, Other Deaths

In Amerindian cosmology, death is not an end but a transition — a return to the collective. In Tibetan Buddhism, it is a passage in which the self dissolves. In many African traditions, the dead remain present, acting within community life. These views are not merely spiritual — they are epistemologies. They contrast sharply with the Western narrative of death as absolute failure and, consequently, with the technocratic urgency to avoid it at all costs. ASI, in this context, is an extension of a cultural fear, not a universal necessity. By ignoring these cosmologies, the ideology of ASI universalizes a particular model of life and intelligence, erasing other ways of knowing, caring, and dying.

Failing Better

What remains, then, before the mirror of the machine? Perhaps not a definitive answer, nor even a genuinely consoling proposal. As Samuel Beckett once wrote, “Try again. Fail again. Fail better” (1983, p. 5). But fail better — how? With what? And for whom?

Jerry Seinfeld’s joke echoes here with unsettling accuracy: we’re smart enough to invent AI, dumb enough to depend on it, and still so stupid we can’t tell if we did the right thing. That’s the core of it. Superintelligence is a human creation — but our recent history, riddled with moral and technological catastrophes, suggests that intelligence and wisdom rarely walk hand in hand.

For decades, it was convenient to blame the “West” for the modern project of technical domination. But today, with their own momentum, powers like China, India, and other rising tech centers have joined the game. Despite millennia of cosmological, spiritual, and philosophical traditions, they too have embraced the logic of computation, surveillance, and acceleration. The seduction of the machine is global. The faith in algorithmic solutions, widespread.

In these debates, it’s easy to fall into the temptation of organized hope: better education, ethical design, democratized technology. All of this sounds — and in some ways is — desirable. But it’s also partial, conditioned, perhaps even powerless in the face of systems that have already colonized thought and imagination.

Perhaps we must accept that there is no perfect intelligence — neither technical nor political. That the future will not be saved by engineers, philosophers, or artists. Perhaps all that remains is the gesture of resisting completeness: keeping open the possibility of error, doubt, and limitation. Not as elevated morality, but as a grounded survival practice.

Against demiurgic arrogance, perhaps we don’t even have creative humility — only lucid fatigue. Against machinic theology, not an ethics of uncertainty, but a forced coexistence with absurdity. Against the mirror that promises to return us as gods, the suspicion that we have always been poorly drawn caricatures of ourselves. And still — we continue. Not to triumph, but to not entirely give up on thinking — which is already, in itself, a way of failing better.

References 

Ahmed, S. (2017). Living a feminist life. Duke University Press.
Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.
Beckett, S. (1983). Worstward Ho. Calder Publications.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Haraway, D. J. (2016). Staying with the trouble: Making kin in the Chthulucene. Duke University Press.
Hui, Y. (2019). The question concerning technology in China: An essay in cosmotechnics. Urbanomic.
Lopez, D. S. (1998). Prisoners of Shangri-La: Tibetan Buddhism and the West. University of Chicago Press.
Mbiti, J. S. (1990). African religions and philosophy (2nd ed.). Heinemann.
Seinfeld, J. (2024). Commencement address at Duke University [Instagram Reel]. https://www.instagram.com/reel/DDR6WOVRSho/
Viveiros de Castro, E. (2014). Cannibal metaphysics. Univocal Publishing.
