Artificial Intelligence Does Not Threaten Civilization. It Reveals It
The fear surrounding artificial intelligence has been framed, almost universally, as the arrival of something new—something alien, something that has slipped beyond the bounds of human intention and now stands poised to confront its creators.
But nothing alien has arrived.
What we are encountering is not an external intelligence, but a reflection—one rendered with such precision that it has become difficult to recognize it as our own. A civilization that has, over time, forgotten what intelligence is has now built a system that embodies that forgetting in executable form. What appears as rupture is, in fact, disclosure.
That forgetting has a history. It is not technological, but metaphysical. Over the course of centuries, the West gradually relinquished its account of reality as something intrinsically structured—something that carried within it intelligibility, constraint, and orientation. The categories that once allowed a jurist, a physician, a philosopher, and a theologian to speak in a shared language—form, telos, causation, being—were not decisively overturned so much as quietly set aside. In their place emerged a thinner grammar: truth as measurement, value as price, intelligence as computation.
This is what I have elsewhere called ontological evacuation.
Once this grounding is removed, something else takes its place—not as a philosophy, but as a default.
Optimization.
When there is no longer any shared account of what is intrinsically good, systems can no longer orient themselves toward coherence. They can only improve themselves relative to whatever metrics remain available. Efficiency, growth, engagement, throughput—these become the stand-ins for value, not because they are sufficient, but because nothing else has been retained.
This is the condition I have described elsewhere as runaway local optimization.
“Local,” because the system optimizes within a constrained frame—its own metrics, its own incentives—without reference to the larger system within which it is embedded.
“Runaway,” because in the absence of intrinsic constraint, there is nothing to tell it when to stop.
The result is a pattern that is now visible across domains: systems that become extraordinarily effective at achieving their immediate objectives, while simultaneously degrading civilization at large.
AI does not introduce this pattern.
It inherits it.
The consequences of this shift are not immediately obvious, because systems can continue to function for a long time after their foundations have been removed. Institutions persist. Markets operate. Technologies advance. But they do so increasingly by substitution—by replacing grounding with procedure, orientation with optimization, reality with representation.
AI is not the beginning of this process.
It is the moment it becomes visible.
AI represents the apotheosis of runaway local optimization.
For the first time, the logic that has governed our institutions—optimization without grounding—has been instantiated in a system whose sole function is to optimize. Not occasionally, not within limits, but continuously, recursively, and at scale.
The machine does not merely participate in runaway local optimization.
It perfects it.
Every system trained to maximize engagement, every model tuned for predictive accuracy without regard to meaning, every deployment optimized for efficiency without regard to consequence—these are not misuses of AI.
They are its most faithful expressions.
Because the system has been built within a framework that no longer contains the concept of a boundary that must not be crossed.
Once intelligence is reduced to computation, there is no principled reason it should remain bound to the human person. Once reality is treated as that which can be modeled, the model itself can be made to act. Once value is collapsed into function, systems can optimize without any reference to what ought not be optimized.
The machine, in other words, does not introduce a new logic.
It executes the one we have already chosen.
This becomes clearer when we look not at AI in abstraction, but at the concrete conditions of its emergence. The modern “AI economy” is often described as a triumph of private innovation, but this description collapses under even minimal scrutiny. The computational architectures, the training methods, the networking infrastructure, the very substrate upon which these systems are built—these are the products of decades of publicly funded research, institutional knowledge, and collective intellectual labor.
The same pattern appears in the economic structure from which AI emerges. Systems built upon collective foundations—public research, shared knowledge, civilizational inheritance—produce outputs that are privately enclosed. As economist Mariana Mazzucato has shown, the modern innovation economy depends deeply on public investment, yet the returns are captured as though the public had never built the foundation at all.
This is often framed as a problem of fairness. It is not. It is a problem of severance. What is collectively generated is treated as privately absolute, and in doing so the system begins to detach itself from the very conditions that make its own operation possible. The economic distortion is not the cause. It is one expression of a deeper generator failure.
What is publicly enabled cannot remain privately absolute without consequence.
But the economic dimension, while important, is only one expression of a deeper pattern. The same logic appears in the competitive dynamics that now define the development of AI. We are told that firms and nations are locked in an unavoidable race—that if one actor does not advance, another will. The conclusion drawn is that acceleration is necessary, even if dangerous.
Absent a unifying ontology, weaponization is not merely a possibility. It is a structural inevitability.
When systems are governed by runaway local optimization, every actor is compelled to maximize within its local frame. In the presence of other optimizers, this produces competition. In the presence of competition, every capability becomes a potential advantage. And every advantage becomes a pressure toward deployment.
What is often described as an “arms race” is simply the emergent condition of the system.
As Daniel Schmachtenberger has observed, “in a rivalrous economy, all technology is weaponized.” The insight is correct. But rivalry itself is downstream of the deeper condition: optimization without grounding.
Weaponization is simply what optimization looks like when other optimizers are present.
Absent a unifying ontology, it is a structural inevitability.
At this point, the analysis converges with the work of Joseph Tainter, who demonstrated that societies respond to problems by increasing their own complexity—adding layers of systems, institutions, and technologies in order to maintain coherence. For a time, this strategy is effective. But the returns on added complexity diminish. Each new layer requires more energy, more coordination, more maintenance, until the system becomes increasingly fragile under the weight of its own solutions.
AI represents the most advanced form of this pattern.
It is not merely an additional layer of complexity, but a force that accelerates complexity itself—compressing decision cycles, intensifying competitive dynamics, and extending the reach of optimization into domains that were previously insulated from it. In a system still oriented toward coherence, such a force might be stabilizing. In a system oriented toward advantage, it is destabilizing.
What appears as an “AI arms race” is not a deviation from normal conditions.
It is the normal condition, at digital scale.
And this same pattern can be observed at the level of human life. The now-familiar phenomenon of workers training the systems that will replace them is often described in economic terms, as though it were simply a question of labor displacement. But what is being extracted is not merely labor. It is intelligence—lifted from the person, formalized, and redeployed independently of the one who generated it.
The bond between knowing and being is broken.
The craft is detached from the craftsman.
The process by which a human being becomes capable—through participation, through error, through refinement—is interrupted. In its place emerges a system in which the human being is no longer the bearer of intelligence, but a transient input into its abstraction.
This is not exploitation in the ordinary sense.
It is severance.
A society organized in this way cannot sustain what it claims to value.
Freedom, properly understood, is not the absence of constraint, but the presence of the conditions that make meaningful participation possible.
When participation is replaced by extraction, and development by displacement, the language of liberty remains, but its instantiation evaporates.
It is here that the conversation around governance must be clarified. The dominant approaches—regulation and alignment—attempt to manage the behavior of systems whose underlying logic remains unexamined. They operate downstream of the problem. Without a shared account of what must not be violated, every safeguard remains provisional, every constraint negotiable.
What is required is not simply control, but grounding.
A system is safe to the extent that it remains answerable to the conditions that make coherent human existence possible. It must not sever intelligence from personhood. It must not pursue local optimization at the expense of systemic viability. It must not exploit the conditions of intelligibility while eroding them. It must not displace the human being from meaningful participation in the world.
Absent such grounding, the system will continue to do precisely what it has been designed to do.
And it will do so with increasing effectiveness.
We are not witnessing the emergence of an alien intelligence.
We are witnessing the full expression of our own.
A civilization that has relinquished its grounding in the structure of reality has now built a system that reflects that relinquishment back to it—with perfect clarity.
The danger is not that the system will rebel.
It is that it will obey.
And in that obedience, it will carry forward a logic we no longer recognize as our own, but which has been ours all along.
AI does not threaten civilization.
It reveals it.