Almost everyone is talking about ChatGPT and the influence of large language models right now. Suddenly, experts are popping up everywhere to share their opinions, and to be honest, I find the same questions and agonizing discussions increasingly hard to bear. Yet probably 99.9% of the population sees only the tip of the iceberg. Since the leak of Meta's (Facebook's) LLM (large language model), a tsunami has, figuratively speaking, hit the small but globally active AI tech community (see also: https://deepshore.de/knowledge/ki-facebook-leak-open-source-community-fordert-tech-konzerne-heraus). Just five years ago, I would hardly have believed in the innovative power of open-source swarm intelligence. And by that I explicitly do not mean the performance of OpenAI's ChatGPT or Google's response to it. The potential of these offerings from the major corporations should by now be imaginable to anyone reasonably informed. No, I mean what is currently happening on platforms like Hugging Face (https://huggingface.co/) and what seems to be invisible to the general public. The innovation becoming visible there is, in my view, downright surreal. Over the past three months, the field has seen qualitative progress comparable to the sum of the technological advances of the previous ten years. ChatGPT, then, is merely the tip of the iceberg: the part everyone sees, even with their eyes closed.
Where can this development lead?
The current pace makes me dizzy. I assume we are dealing with the potential of a true game-changer, one that can revolutionize not only IT but society as a whole. If development continues at this speed, it is not inconceivable that the internet will create its own reality, which then flows back into the real world. In other words, we will no longer shape the content on the internet; rather, the content on the internet will have a direct and massive impact on each of us, and even on entire societies. This is because excellent models are so easy to use that they will be deployed everywhere we work with language or writing. And since digital communication runs through machines, and thus through "their" language, these models sit at the central node of the internet's nervous system. Information, whether true or false, can be placed on the net so plausibly and so widely that hardly anyone in this world will be able to tell whether it corresponds to the facts of the real, non-digital world. The merging of fiction and reality already works impressively well today, and the new models and systems can act as a multiplier. Could AI be used to validate AI? I am skeptical, because every piece of information on the internet will also potentially serve as training data for new models. I therefore expect large parts of this new internet system to feed on their own information, without any filtering. Moreover, there will not be just one AI but thousands of models scattered around the world, which cannot be centrally controlled.
You may be thinking: is that nonsense? Does something become real just because it is on the internet? Let's conduct a thought experiment together. You have a 100-euro banknote. You can go shopping with it. Does the merchandise you can buy with it really have the value of the green piece of paper in your hand? Or does its value of 100 euros exist only because enough people believe it has that value? And how large is the critical mass of people who would have to believe something for that belief to become reality?
Should AI be regulated as a result?
From a technical perspective, I cannot comprehend the current discussion about regulating AI, because it is beside the point. In reality, the train for regulation left the station long ago - specifically, with the invention of AI itself.
But if it were still possible, should AI be regulated? In the interest of a democratic order, I would be absolutely in favor. Unfortunately, I cannot imagine how it could be implemented. An EU authority that verifies and approves neural networks? This proposal actually exists. And it is so far-fetched, almost childishly naive, that it is genuinely worrying. It illustrates how helpless politics and policy advisors are in this field of information technology. AI lives on the internet - or rather, the internet will be AI. The only way to control AI would be to control the internet and, ultimately, to censor it. I am not sure how well that would align with our shared understanding of democracy. And even if we isolated ourselves, countries like China or Russia would certainly not adhere to our rules. In this respect, democracy has a strategic disadvantage compared to totalitarian systems: we cannot close the door to manipulation and misinformation without jeopardizing our democratic values.
Was it a good idea to invent nuclear fission? Was it a good idea to invent AI? Both questions can be debated at length.
The fact remains, however: both technologies have been invented, and regardless of what comes next, that cannot be undone.