The Dematerialization of Responsibility and the Search for a New Anchor

Last week, financial markets reacted sharply to growing doubts about the prospects of artificial intelligence. In the United States, a broad sell-off began in technology stocks whose valuations had surged in recent years on expectations of AI-driven profits. Shares of Microsoft, Nvidia, Amazon, and Alphabet declined, the Nasdaq turned downward, and volatility increased.


At the same time, hedge funds actively expanded their short positions in the technology sector — in other words, betting on further declines.

But that is not the real news. An overheated market always corrects when expectations begin to outpace organic growth. That is how cycles work.


What is more interesting is something else.


The sharpest declines occurred among insurance companies — firms that, at first glance, are connected neither to AI infrastructure, as Nvidia is, nor to algorithm development. Shares of Willis Towers Watson posted their worst performance since the 2008 financial crisis; AXA, Aviva, and Hiscox also fell significantly.


On the surface, the trigger was concern over the sustainability of business models amid the rapid integration of AI into insurance: potential lawsuits, regulatory constraints, and unquantified algorithmic risks.


But if we look deeper, it becomes clear that this is less about technology itself and more about the loss of an anchor — a loss of structural grounding in the modern economy.


Insurance is the institutionalization of responsibility. Its logic is straightforward: there is a subject, an event, a territory, and someone who is accountable. Within this structure, risk can be assessed, distributed, and insured. This model emerged in an era when business was tied to land, individuals to the state, and the space of responsibility was clearly defined by borders. Territory became the foundation of politics, economics, and law. Banks, insurance systems, and the international financial order all grew within this framework.


Artificial intelligence is dismantling that structure. An algorithm is not tied to a specific territory. Decisions are formed within a distributed digital environment. Responsibility disperses among developers, platforms, integrators, and end users — and becomes increasingly difficult to define in linear terms. Errors can scale instantly across millions of users in multiple jurisdictions.


When it is no longer possible to clearly answer the question “Who is responsible?”, the entire system of risk assessment begins to wobble. That is why insurers find themselves on the front line. Their business is built on predictability: probability can be calculated, losses estimated, responsibility localized.
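

In stylized actuarial terms (a textbook simplification, not a formula the author gives), a premium is roughly the expected loss plus a loading:

\[ \text{premium} \approx p \cdot L \cdot (1 + \theta), \]

where \(p\) is the probability of a claim, \(L\) the size of the loss, and \(\theta\) a loading for costs and profit. Every term assumes that the event, the exposed party, and the liable party can be identified.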


The new economy works differently. It is nonlinear.


To some extent, this resembles the moment when venture capital funds emerged. The banking system historically operated on a conservative model: loans backed by collateral, predictable cash flow, identifiable subjects. Risk was minimized, returns moderate, stability the central value.


Venture capital arose as a response to a new environment of high uncertainty — especially in the world of technology startups, where there was no collateral, no stable cash flow, and no proven business model. Traditional bank lending simply did not work in such conditions.


Venture capital embraced nonlinearity as its core principle: out of ten projects, nine may fail, but one success can compensate for all losses and deliver outsized returns. It was an institutional adaptation to an economy of uncertainty.
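

As a stylized illustration (the figures here are assumed for the arithmetic, not taken from the text): a fund that stakes one unit on each of ten startups, where nine go to zero and one is exited at twenty times the stake, still doubles its money:

\[ 9 \cdot 0 + 1 \cdot 20 = 20 \quad \text{returned on } 10 \text{ invested.} \]

The portfolio's return depends almost entirely on the single outlier, which is precisely what makes the model nonlinear.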


But the current shift runs deeper. Venture capital still operated within the previous architecture of responsibility. The algorithmic economy changes the very point at which risk is localized. If decisions are made by distributed systems, if consequences scale instantly and across borders, if responsibility is dispersed between code, platform, and user — then the traditional insurance model begins to fail not because risk has increased, but because it has dematerialized.


That is the difference: a transition from localized responsibility to distributed responsibility.


We are entering what may be called the “third turn.” The territorial and institutional attachment of the individual weakens. Work becomes remote, capital digital, communication networked. A new type emerges — “people of air”: those who create value beyond geography, move between jurisdictions, live within digital ecosystems, and increasingly trust algorithms more than vertical authority structures.


Artificial intelligence becomes the infrastructure of this condition. It operates in a distributed manner, has no single center, and does not fit within national borders. In the world of the “people of air,” responsibility is no longer materially anchored. It becomes networked and collective.


The traditional financial system is built around a center — a point where responsibility can be fixed. But if decisions are formed by algorithms and consequences spread instantly and globally, that center loses stability. Markets sensed precisely this loss of center — this dematerialization of the anchor.


Most likely, governments will respond in familiar ways. New standards will be introduced, specialized insurance products for algorithmic risks will appear, regulation will intensify. Licensing regimes may be established, along with stricter accountability requirements. The system will attempt to restore stability and regain a sense of control.


But the problem is deeper.


If more and more decisions are made by algorithms, if economic value is created beyond territory, if responsibility is distributed between code, platform, and user, then the old model gradually dissolves. Institutions will try to adapt, but they are no longer operating in the environment in which they were born.


We are entering a stage where responsibility is no longer rigidly tied to place or to a single identifiable subject. It becomes distributed and networked. That is what we call the “third turn.”


In the religious era, the center was God.
In the ideological era, it was the state.
Today, the center is rapidly shifting toward artificial intelligence.


When algorithms begin to influence the economy, security, and everyday decisions, trust gradually shifts in the same direction. Not because someone designed it that way, but because that is the logic of the information revolution.


Financial markets were the first to sense this change. The decline of insurance companies is particularly revealing: their business depends on clearly localized risk and responsibility. But when risk becomes global and responsibility distributed, traditional models begin to malfunction.


Therefore, the current “AI panic” is not merely a market correction. It is a signal of a deeper transformation — one in which territories and formal institutions matter less, and the distributed digital environment matters more.


And this transition is already underway.
