AI OUT OF CONTROL - DEEPSEEK, STARGATE, AND THE EU AI ACT

- Falk Borgmann

The new AI models from DeepSeek, and the prospect of significantly lower costs for training and using so-called large language models like ChatGPT, temporarily wiped $600 billion off the stock market value of the US chip manufacturer NVIDIA. Just a few days earlier, shortly after the inauguration of re-elected US President Donald Trump, his administration announced plans to scale back AI regulation in the United States. At the same time, US consortia intend to invest $500 billion in AI infrastructure projects under the program title "Stargate". Whether these figures are realistic is beside the point: even half of these investments would exceed Germany's entire annual IT budget, and we are talking about just one initiative in the USA.

One could say the cards are now on the table, and every politician and official should have heard the accompanying bang. The race for global AI supremacy is now publicly visible. Anyone who delves deeper into the subject of AI will quickly realize that regulating it makes absolute sense. The intentions behind the EU's AI regulation, the so-called EU AI Act, may be well-meaning, but in my opinion it completely misses its mark. Worse, it creates additional bureaucratic overhead that benefits mainly consulting firms, which can expect significant revenue from certifications and audits in the future.

Why the EU AI Act Fails

To understand the problem, one must grasp what AI technically entails. AI is not a computer or a software model alone; it is the combination of a model with a powerful IT infrastructure, and in this configuration it can be utilized by other applications. AI can therefore be operated anywhere in the world and consumed as a service; all that is required is technical access via the Internet. In an increasingly interconnected world, where people and machines are almost constantly online, a user cannot possibly know which components or services are involved in responding to their requests, whether those originate from a smartphone or a computer. And just as it is nearly impossible for users to verify this, it will be equally impossible for any regulatory authority to oversee it on a global scale.
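To make this concrete, here is a minimal sketch of how an application consumes a hosted model as a service. The endpoint, API key, and model name are placeholders, not details of any specific provider; many hosted LLM services (OpenAI's and DeepSeek's among them) expose a chat API of roughly this shape.

```python
# Minimal sketch: consuming a hosted language model as a remote service.
# Endpoint, API key, and model name are placeholders, not a real provider's
# values; many hosted LLM services expose a chat API of roughly this shape.
import requests

API_URL = "https://api.example-provider.com/v1/chat/completions"  # placeholder
API_KEY = "sk-..."  # placeholder credential

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "example-chat-model",  # placeholder model identifier
        "messages": [{"role": "user", "content": "Summarize the EU AI Act."}],
    },
    timeout=30,
)

# From the caller's side this is just JSON over HTTPS. Which data center,
# which hardware, which model weights, and which jurisdiction produced the
# answer is invisible from here.
print(response.json()["choices"][0]["message"]["content"])
```

Everything behind that URL, from the model weights to the serving infrastructure to the jurisdiction, is opaque to the caller, which is precisely what makes external oversight so hard.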

From a technical standpoint, it is hardly conceivable that the training methods of models, or their use, can be seriously monitored once they exist outside the jurisdiction of German or European law. The Internet simply cannot be confined within legal boundaries. Even companies that use cloud services cannot be certain which software, data, and models underpin those services. By now, it should be common knowledge that US corporations do not take data protection and transparency as seriously as European standards require. Believing that Chinese or Russian state-sponsored hackers, or companies from such countries, would adhere to European (data protection) standards would be extraordinarily naive.

Welcome to Reality

The harsh reality is that, on a global scale, monitoring the use and training of AI models is extremely difficult, if not impossible. Even within the European Union, I consider effective oversight practically unfeasible, simply because enforcing mandatory reporting requirements for AI usage is unrealistic. In effect, the EU AI Act therefore only reaches AI models and applications that operate within the European legal framework and voluntarily comply with existing law.

That said, with expertise and goodwill it is by no means impossible to create and operate safe, trustworthy AI applications. Solid guidelines that predate the EU AI Act already exist, such as those from the Center for Research on Foundation Models (CRFM) at Stanford University. Companies can also operate their AI applications locally, meaning within their own or at least German data centers, retaining full control over what happens with their applications and data. The narrative that modern IT must rely on the opaque cloud offerings of US corporations is both incorrect and strategically unwise. Many companies may find their lack of foresight in planning their IT infrastructure coming back to haunt them in the medium term, especially given the current political developments in the USA. Those that have relied on local infrastructure and European IT partners in key areas now find themselves in a strong position.
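As a sketch of that local alternative, the following assumes the open-source Hugging Face transformers library; the model name is merely one example of a small open-weight model, not a recommendation.

```python
# Minimal sketch: running an open-weight model on local infrastructure.
# Assumes the Hugging Face "transformers" library; the model name is just an
# example of a small open-weight model, not a recommendation.
from transformers import pipeline

# After the one-time model download, this can run fully offline inside a
# company's own (or a European) data center: no prompt or document leaves it.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

result = generator(
    "Draft a short internal note on our data-retention policy.",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```

The trade-off is operating effort and hardware cost; in exchange, the company itself, not an opaque cloud provider, decides where its data flows.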

But how is the EU AI Act supposed to protect our society from AI models and applications that are not created or operated with the necessary diligence? How do we deal with services like DeepSeek, which are managed in China and represent a data protection nightmare? The data highways of the Internet are open to everyone.

As a result, our current legal framework and associated legislation do not provide effective mechanisms for solving the digital challenges ahead—something neither politics nor society has fully grasped yet. To be clear: I firmly believe that legal regulation is both necessary and appropriate. However, I do not see the necessary means to control and enforce these regulations effectively. Our traditional branches of government—the legislative and executive—are reaching the limits of what is feasible.

AI is somewhat similar to nuclear technology: it has been invented, and now it is here to stay. In the case of the atomic bomb, its destructive power became clear to everyone through the historical examples of Hiroshima and Nagasaki. That is precisely why, in the 1960s, a relatively swift, almost worldwide consensus emerged on the need to restrict access to nuclear weapons. The result was the 1968 Treaty on the Non-Proliferation of Nuclear Weapons, an agreement ratified by most countries and one the international community has, with few exceptions, largely adhered to.

What Does This Have to Do with AI?

First, I consider AI's negative potential to be just as destructive as that of an atomic bomb, albeit in a completely different way. The democratic order is particularly at risk from unregulated AI, because perfectly crafted disinformation can be mass-produced and deployed in an uncontrolled yet targeted manner to manipulate public opinion at scale. Second, for the reasons outlined above, AI regulation can only succeed on a global scale; yet China, Russia and, at present, most likely the USA will hardly be willing to cooperate with the EU on this matter.

But How Could AI Be Effectively Regulated?

In my view, there is only one lever we could realistically pull: international data traffic, which could itself be regulated or partially restricted. In other words, a kind of European data governance policy under which regulatory authorities control and monitor cross-border data flows. This would require a broad public discourse, and a difficult one at that, since it ultimately involves potential restrictions on Internet freedoms, which would be hard to reconcile with our democratic understanding of freedom. At the end of the day, such measures would create a very powerful instrument with enormous potential for negative consequences.
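Purely as an illustration of what such a policy could mean at a technical level, here is a hypothetical sketch of an egress check at a network boundary; the policy list and the function are invented for this example and do not describe any existing system or API.

```python
# Hypothetical sketch: an egress gateway that checks outbound requests
# against a regulatory policy list before data may leave the jurisdiction.
# The destinations and function below are invented for illustration only.
from urllib.parse import urlparse

# Illustrative policy list, e.g. maintained by a (hypothetical) regulator.
BLOCKED_DESTINATIONS = {"api.untrusted-ai.example", "upload.data-sink.example"}

def egress_allowed(url: str) -> bool:
    """Return True if an outbound request to this URL may leave the network."""
    host = urlparse(url).hostname or ""
    return host not in BLOCKED_DESTINATIONS

for url in ("https://api.untrusted-ai.example/v1/chat", "https://example.eu/api"):
    print(url, "->", "allowed" if egress_allowed(url) else "blocked")
```

Even this toy version exposes the tension: such filtering is trivially circumvented with VPNs or relays, and scaling it up amounts to exactly the kind of powerful, freedom-restricting instrument described above.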

The general naivety, and the idealistic notion that the Internet makes all information accessible and freely available to entirely positive effect, are now backfiring. Not only can undemocratic structures freely tap into publicly available resources; the tech giants' data-collection frenzy, combined with their technological superiority, has also given them unprecedented power over all of us. That near-omnipotence is now being unleashed with Trump's support on the one hand and in the service of Chinese state interests on the other. Furthermore, many German companies, and unfortunately the German state itself, have made themselves dependent on US corporations through overly careless cloud strategies. The forecast, therefore, looks rather bleak.