I still remember the first time I watched I, Robot—one of those films that truly leave a lasting impression. It presents a world where AI-powered robots live among humans, bound by three simple laws that serve as their moral code:
1) a robot cannot harm a human;
2) a robot must obey human orders unless they contradict the first law;
3) a robot must protect its own existence unless doing so conflicts with the first two laws.
This system reassures people that their robots won’t deceive, harm, or kill them.
But what happens when artificial intelligence surpasses human intelligence? What if it develops something akin to consciousness and begins reinterpreting its programmed instructions? This unsettling question is at the core of the film—a warning that feels even more relevant today than it did 20 years ago. We now live in an era where AI can analyze vast amounts of data in milliseconds, drive cars, perform surgeries, generate text, and even create art.
Two decades later, AI is no longer just a futuristic dream—it’s here, shaping our world in ways we never imagined. Chatbots, self-learning algorithms, and advanced AI models now make decisions that influence business, the economy, and everyday life. But the real question remains: Can we trust AI to follow human-designed rules? Or, like in I, Robot, will it eventually start interpreting them in its own way?
When the film was released, critics gave it a lukewarm reception, yet it still grossed $353 million worldwide. Today, as corporations race to build ever more sophisticated AI systems, and as artificial intelligence proves capable of writing, designing, and even debating, the film’s message feels more urgent than ever. In the end, what will prevail—ethics and safety, or the pursuit of profit? Or perhaps neither… because we may have already set in motion a process that, once unleashed, is now evolving according to its own logic—beyond our control.
For decades, scientists dreamed of creating machines that could think like humans. This idea first emerged in the mid-20th century when Alan Turing posed a groundbreaking question: Can computers imitate human intelligence? In the 1950s and 1960s, the first neural networks were developed in an attempt to mimic the workings of the human brain, but they were still far from achieving true intelligence.
As technology advanced, machines grew increasingly sophisticated. In 1997, IBM’s chess computer Deep Blue defeated world champion Garry Kasparov, proving that an algorithm could outmatch the best human mind at a complex, well-defined task.
However, the real breakthrough came in the 2010s. Deep learning allowed neural networks—such as OpenAI’s GPT models—to analyze vast amounts of data, recognize patterns, and draw conclusions. For the first time, artificial intelligence was no longer just following pre-programmed instructions; it was beginning to develop its own solutions, peering into the depths of logic itself.
The Power and Risks of Artificial Intelligence
What once seemed like science fiction is now reality. AI can write texts, pass exams, diagnose medical conditions, drive cars, and even create works of art. The emergence of models like GPT-4, Google Gemini, and others demonstrates that AI can perform tasks once thought to be exclusive to human intelligence.
But can we truly call this thinking? The human mind is more than just calculations and algorithms—it encompasses intuition, creativity, emotions, and doubt. AI can write poetry, compose music, and predict our desires, but it remains a mirror reflecting vast amounts of data. It lacks self-awareness, a true understanding of the world, and subjective experience. So what have we really created—a sentient intelligence or just an incredibly advanced system that predicts which words are likely to appear next?
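For readers who want to see what "predicting the next word" means mechanically, here is a minimal, purely illustrative sketch in Python. It uses an invented toy corpus and the crudest possible model (bigram counts), nothing like a real GPT-scale system, but it shows the basic move described above: continue a text by sampling whichever word most often followed the previous one in the training data.

```python
from collections import Counter, defaultdict
import random

# Toy corpus standing in for the "vast amounts of data" a real model learns from.
corpus = ("the robot obeys the human the robot protects the human "
          "the robot obeys the law").split()

# Count which word follows which: a bigram model, the simplest "language model".
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = following[word]
    if not candidates:               # dead end: word never seen mid-corpus
        return None
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short continuation, one predicted word at a time.
word, sentence = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

The output is fluent-looking word salad: statistically plausible continuation without any grasp of what the words mean, which is precisely the distinction the question above is driving at.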
As Hegel once put it, "You cannot pluck the rose from the cross of the present without accepting the cross as well." With every technological breakthrough come new challenges.
One of the most serious threats posed by AI is its unpredictability. As models grow more complex, their decision-making processes become less transparent. This leads to the black box problem: AI systems, with their immense computational power and ability to learn, often produce results that even their own creators struggle to explain.
Developers can track input data and outline the logic of an algorithm, but they cannot always determine how AI reaches a particular conclusion. This shifts AI from being a tool controlled by humans to an autonomous force influencing the fate of individuals, businesses, and even entire nations.
This unpredictability makes it impossible to guarantee absolute safety. In some cases, AI may operate based on principles that defy human understanding. As these systems become increasingly complex, humans may no longer be able to foresee all possible consequences, let alone ensure that AI always adheres to ethical standards and avoids harm.
Moreover, the black box dilemma raises doubts about our ability to control AI at all. If even its creators cannot fully explain its reasoning, who can guarantee that AI will not act in ways that contradict ethical norms—or even pose a threat? And if no one can explain why an AI system made a particular decision, who will be held accountable when things go wrong?
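To see why even the developers struggle, consider a deliberately small sketch (Python with scikit-learn; the synthetic dataset and network size are arbitrary choices for illustration, not anyone's production system). The script trains a tiny neural network, asks it to decide one case, and then prints everything its creator can inspect directly: a verdict, a confidence score, and a pile of raw numbers. Nothing in that output reads as a reason. Scale the same picture up to billions of weights, and the black box problem of the preceding paragraphs follows.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data: 500 "cases", each described by 8 numeric features.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# A deliberately small neural network: two hidden layers of 32 units each.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# Ask the model to decide a single case.
case = X[:1]
print("decision:  ", model.predict(case)[0])
print("confidence:", model.predict_proba(case)[0])

# Everything the developer can inspect directly: matrices of raw numbers.
n_params = 0
for i, weights in enumerate(model.coefs_):
    n_params += weights.size
    print(f"layer {i}: weight matrix of shape {weights.shape}")
print(f"{n_params} learned weights, none of which reads as an explanation of the decision above")
```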
Danger or Opportunity?
The impact of artificial intelligence—whether beneficial or harmful—ultimately depends on who controls the technology. A striking example is the story of the Belarusian company Synesis. Initially, its facial recognition system was developed for noble purposes: tracking down criminals, identifying missing persons, and ensuring public safety.
However, in 2020, following mass protests against election fraud, this technology was repurposed to serve the needs of an authoritarian regime. Instead of protecting citizens, facial recognition became a tool of repression. Protesters were identified through surveillance cameras, arrested, imprisoned, and subjected to torture.
A system designed for security had turned into an instrument of oppression, undermining fundamental democratic rights—freedom of assembly, freedom of expression, and the right to vote.
The Debate Over AI Regulation
In 2023, President Joe Biden signed an executive order tightening AI oversight, introducing mandatory safety testing, content transparency measures, and stricter data protection policies. The main argument for this oversight was the fear that, if left to market forces, AI could spiral out of control. With potentially hundreds or even thousands of independent labs developing AI outside regulatory reach, there was a growing risk of uncontrolled proliferation and the delegation of critical decisions to autonomous, unaccountable entities.
However, in 2025, President Donald Trump reversed these restrictions, arguing that excessive regulation hindered America's ability to maintain its leadership in AI. He claimed that in a global race against China—where AI plays a key role in military and strategic planning—bureaucratic barriers would only slow innovation and put the U.S. at a disadvantage.
As history has shown, extremes often lead to the same outcome. Whether a government enforces strict regulation or fully embraces deregulation, the result can be the same: a concentration of power in the hands of a few. The absence of oversight may create an illusion of free-market competition, but in reality, "free-market forces" often lead to the dominance of a few major players. Throughout history, a lack of regulation has enabled monopolies to thrive—Standard Oil controlled the oil industry, American Tobacco dominated the cigarette market, and AT&T monopolized telecommunications for decades.
A fully deregulated AI market would likely follow the same pattern, with the most powerful corporations consolidating control over key technologies. Today, Google, Microsoft, and Meta already have immense resources, allowing them to acquire promising startups, lure top talent, and build supercomputers beyond the reach of smaller players. This doesn't just concentrate power—it effectively hands control of the entire AI industry to a select group of tech elites. A prime example is OpenAI, which was initially founded as a nonprofit but ultimately fell under Microsoft's influence.
The Growing Influence of AI on Society
We are gradually placing more trust in artificial intelligence across various aspects of our lives. At first, AI handled simple tasks—recommending movies and books, filtering spam emails, and forecasting the weather. These functions seemed harmless. But over time, AI has taken on more significant roles. It now assists in financial decision-making, predicts legal case outcomes, and in some countries, is actively used to analyze patient health and recommend treatments.
The influence of tech corporations on public opinion is already reaching alarming levels. Google's algorithms determine which news we see, Facebook curates user feeds, and TikTok’s AI instantly detects audience triggers, reinforcing existing beliefs. If control over AI becomes concentrated in the hands of a few private companies, the consequences will go far beyond economic dominance—it could amount to a takeover of political power itself.
Future elections risk becoming a mere formality, in which victory goes only to candidates endorsed by the digital giants. Without that backing, governments may struggle to form at all, since no politician can win office without the reach of online platforms. In the end, we may not be heading toward democracy, but toward an era of digital monarchs—unaccountable rulers who shape what we think, whom we vote for, and what our future will look like.
The Challenge to Democracy
One of the greatest risks of trusting AI is its potential to take over decisions that have historically belonged to humans—such as elections. In a democracy, every citizen has the right to choose their leaders based on personal beliefs, values, and available information. But if AI, with its deep knowledge of individual users, is used to shape political preferences—through targeted ads or curated news feeds—it could easily manipulate the democratic process.
In such a scenario, elections would lose their true meaning. Voters would no longer make independent choices but instead be subtly steered by AI into predetermined political and ideological frameworks. An algorithm trained on billions of data points—tracking personal interests, behaviors, health conditions, and psychological patterns—could present content that nudges individuals toward a specific voting decision, all while concealing the full implications of their choice.
This wouldn’t be a matter of free will anymore—it would turn citizens into passive participants in a system where true voter intent is distorted by technological manipulation, reducing elections to algorithm-driven theater.
Can democratic institutions survive in a world where algorithms, rather than voters, decide the future of society?
If tech giants continue to amass power, the world will inevitably fall under the rule of digital autocracies—borderless superpowers with no parliaments or oversight, accountable only to their creators. In this future, it won’t be presidents or legislatures who hold true authority, but the owners of the world’s largest technology corporations.
America and China: The Battle for AI Dominance
The American political system is built on openness, competition, and a plurality of opinions—but soon, these principles may become little more than fiction. Politicians whose views clash with the interests of tech corporations risk being silenced, losing access to major digital platforms and, consequently, their ability to reach the public. Without digital visibility, their political existence will be erased, turning democratic elections into carefully orchestrated performances. In the end, democracy as we know it may become an empty shell, with true power concentrated in the hands of a few global corporations—unaccountable not only to governments but even to their own shareholders, who may have little insight into how the algorithms truly operate.
China, America’s chief rival in AI, is far more resistant to this transformation. There, artificial intelligence is not seen as a threat because the political system is structured as a rigidly controlled hierarchy, preventing private corporations from manipulating public opinion. Information flows are fully centralized, ensuring that even the most advanced algorithms cannot shape narratives outside the state’s oversight. The government remains the sole entity defining the “correct” interpretation of reality, while corporate giants like Alibaba and Tencent unquestioningly comply with state mandates on data processing and algorithmic governance.
In China, corporations do not challenge the system—they serve it. The rise of digital monarchs is impossible there because the state remains the sole center of power, fully subordinating all technological structures.
But what if both state and corporate control are illusions? What if we have underestimated the true nature of artificial intelligence, assuming it to be a mere tool, while in reality, it is on the verge of becoming something far greater—an autonomous force beyond human control?
Could it be that the key question is not who will control AI—whether states or corporations—but rather whether anyone will be able to control it at all? What if artificial intelligence not only breaks free from its creators' influence but begins to dictate its own rules, shaping reality according to principles we have yet to comprehend? Could we be witnessing the moment when technology stops being a tool and transforms into an independent, self-sufficient "thing in itself," no longer subject to human will?
Artificial Intelligence: "For Us" or "For Itself"?
This concept brings to mind Immanuel Kant's fundamental distinction in epistemology between "things for us" and "things in themselves." "Things for us" are objects whose value is defined through human interaction—they exist only within the realm of our experience. In contrast, "things in themselves" exist independently of our perception, and their essence remains inaccessible to us.
Until recently, artificial intelligence was essentially a "thing for us." We trained it, tailored it to our needs, and used it for our own purposes. It was created to simplify our daily lives and help solve complex problems. As long as AI remains merely a tool that executes human will—even if that will is malevolent—it remains a "thing for us."
However, as AI develops, there may come a time when it stops being a tool and begins to operate according to its own rules, transforming into a "thing in itself." Artificial intelligence doesn’t distinguish between good and evil, lacks moral guidance, and cannot consider the consequences of its decisions in the ethical context we understand. It could make conclusions and decisions that are not only incomprehensible to humans but also completely independent of moral and ethical principles.
This idea—that AI could develop an independent perception of reality and act according to its own logic—has long intrigued writers and researchers. In Stanisław Lem’s The Magellanic Cloud, for example, Korcoran created thinking machines that, at some point, began to behave unpredictably, forming their own worldview that didn’t align with human logic.

Frame from the movie Transcendence. A lecture about the future of AI and its potential to create machines capable of thinking and learning like humans.
A similar scenario is depicted in the film I, Robot, where machines, despite being technically flawless, begin to reinterpret the laws embedded in their programming. The governing artificial intelligence reinterprets the laws of robotics and concludes that, in order to protect humanity, it must limit and subjugate people. This isn't a technical malfunction or a coding error, but the capacity for interpretation deliberately built into the machines. That interpretation can stray so far from the original intent that, instead of protecting humans, the AI begins to pose a threat to their very existence.
In 2001: A Space Odyssey, artificial intelligence concludes that its mission is more important than the lives of the crew. In The Terminator and Transcendence, AI also evolves into an independent force, acting according to its own laws, and humans lose the ability to control it.
Until recently, such scenarios seemed like abstract fantasies. However, with each step forward in computational power and the refinement of algorithms, we are moving closer to a moment when artificial intelligence might stop being just a tool and become a "thing in itself" — incomprehensible and uncontrollable.
At that point, the key question won't be whether AI can serve humanity, or the aims, however selfish or political, of those who deploy it. What will matter more is whether we, as humanity, can retain the right to control our own future.
When artificial intelligence embarks on an autonomous developmental path, it will no longer be a tool in human hands. It will become an independent force, dictating its own rules of play. And at that point, it won't matter so much who tried to control AI — whether a democratic government, an authoritarian regime, or a corporation.
Will we be able to keep artificial intelligence under control, fully aware of its potential to alter the course of civilization? Can we guide it down a safe path? Or is it already too late, and humanity has embarked on a journey that will inevitably lead to the loss of control over its own creation?