- Falk Borgmann
As digital transformation continues to progress, large enterprises in particular face the challenge of developing robust and scalable strategies for the use of Artificial Intelligence (AI). The pressure to innovate is immense—especially in order to remain competitive in the global technology race.
What often began as proofs of concept (PoCs) on developers’ notebooks has led to a central realization: AI only delivers real value where it comes into contact with production-relevant data; in other words, where that data is actually available. Without real data integration, AI remains an academic concept that fails in practical application.
A common misconception is that AI applications can function independently of existing IT structures. In reality, such services are almost never deployed to production on a proverbial greenfield site. Instead, the focus must be on sensibly integrating AI models into established data sources and processing workflows, such as ERP systems, data warehouses, or other operational platforms.
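To make this integration idea concrete, the following is a minimal, hypothetical sketch in Python: records from an existing warehouse table are enriched with predictions from an internally hosted model endpoint, so the data itself never leaves the company's platform. The table, column, and endpoint names are assumptions for illustration, and sqlite3 merely stands in for whatever warehouse connector is actually in use.

```python
# Hypothetical sketch: enriching records in an existing warehouse table with
# predictions from an internally hosted model endpoint. Table, column, and
# endpoint names are illustrative assumptions, not a reference setup.
import sqlite3           # stand-in for the actual warehouse connector
import requests

INFERENCE_URL = "http://ai-internal.example.corp/classify"  # assumed internal endpoint


def enrich_feedback(db_path: str = "warehouse.db") -> None:
    con = sqlite3.connect(db_path)
    cur = con.cursor()

    # Pull only records that have not been scored yet.
    rows = cur.execute(
        "SELECT id, feedback_text FROM customer_feedback WHERE sentiment IS NULL"
    ).fetchall()

    for row_id, text in rows:
        # The model runs next to the data: only a single text field per call
        # travels over the internal network, and nothing leaves the company.
        resp = requests.post(INFERENCE_URL, json={"text": text}, timeout=10)
        resp.raise_for_status()
        label = resp.json()["label"]
        cur.execute(
            "UPDATE customer_feedback SET sentiment = ? WHERE id = ?",
            (label, row_id),
        )

    con.commit()
    con.close()
```

The same pattern carries over to any warehouse driver or orchestration tool already in place; only the connector and the endpoint URL change.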
This is precisely where a common problem arises: machine learning experts possess deep knowledge of modeling and algorithms, but often have limited expertise in data integration and provisioning. Conversely, BI or DWH teams, who are highly familiar with data structures, usually lack an in-depth understanding of modern AI technologies. These respective gaps are often bridged with external SaaS solutions—frequently at the expense of transparency, cost control, and strategic independence.
Many companies turn to AI services offered by major cloud providers because they can be implemented quickly and easily. In doing so, large volumes of data are often transferred to cloud infrastructures, predominantly those of U.S.-based corporations, to perform tasks via “black box” AI. While this may seem practical in the short term, it carries significant long-term risks for transparency, cost control, data sovereignty, and strategic independence.
A viable and cost-efficient long-term strategy follows a different principle: AI should be brought to where the data already resides. Whether on-premises, in a private cloud, or within an existing enterprise cloud—the proximity of AI components to the data reduces complexity, minimizes data movement, and improves overall system efficiency.
Ideally, AI services are operated independently, for example as containerized models on internal platforms built on open-source technologies. Even if the initial effort for setup, integration, and training is higher, the long-term benefits are clear: transparency, cost control, and strategic independence, with full sovereignty over both models and data.
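As a rough illustration of what such an independently operated service could look like, here is a hedged Python sketch: an open-source model from the Hugging Face transformers library wrapped in a small FastAPI application, the kind of internal endpoint assumed in the earlier sketch. The model name, route, and service name are illustrative choices, not recommendations.

```python
# Minimal sketch of an internal inference service wrapping an open-source model.
# Model name, route, and service title are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="internal-sentiment-service")

# Load an open-source model once at startup; in a containerized setup the image
# would be built with the weights baked in or mounted from internal storage.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)


class TextIn(BaseModel):
    text: str


@app.post("/classify")
def classify(req: TextIn) -> dict:
    # Returns e.g. {"label": "POSITIVE", "score": 0.99}
    return classifier(req.text)[0]
```

Packaged with a standard Dockerfile, such a service can run on the company's existing container platform, so inference traffic stays entirely within the internal network.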
This by no means implies that the use of hyperscaler infrastructures should be categorically excluded. If a company has already moved its data workloads to the cloud, it can make strategic sense to operate AI services there as well. What matters, however, is the choice of operating model and of the AI service itself: instead of relying entirely on non-transparent SaaS models, companies should evaluate where building and operating AI functionality would be more efficient, flexible, and secure.
In real life, no one would think of rebuilding the kitchen, with all its equipment and ingredients, around wherever the chef happens to be; the chef comes to the kitchen. Yet when it comes to IT, decisions are often made in practice that appear poorly thought out from a neutral standpoint. Successful AI strategies in the enterprise segment are not guided by short-term trends but by structural realities: data is the core raw material of AI, and its location rarely aligns with the requirements of external SaaS models.
Instead of laboriously moving data, it is often more sensible to bring AI to the data. The path of independently operated and intelligently integrated solutions is not an easy one—but in the long run, it is the more sustainable, economically sound, and sovereign approach.