EU launches strategy to foster ‘sovereign’ AI ecosystem
Is it worth redirecting €1 billion of existing funds towards AI adoption?
Welcome back!
Today, the Commission will launch a new AI strategy, branded as a push to reduce reliance on the US and China, and backed by €1 billion from existing financing programmes. We got our hands on the document a bit early and will share our take with you below.
Another AI strategy?
Yes. The strategy is the anticipated Apply AI Strategy, which we’ve mentioned since the launch of this newsletter. It now comes with more sovereignty language and all the latest buzzwords (AI agents, AGI, digital twins). AI is framed as necessary for European productivity and competitiveness, as well as technological sovereignty. The strategy is the adoption component of the EU AI Continent Action Plan we wrote about in April. Alongside it comes an AI in Science Strategy, focused on putting the EU at the forefront of AI-driven research and scientific innovation. There will also be a Data Union Strategy later this month.
What the strategy gets right: Europe’s structural dependence
The strategy explicitly identifies “external dependencies of the AI stack.” This is notable, since such a clear-eyed analysis was missing from the Commission’s narrative on AI a year ago. It’s a step towards a grand strategy aimed at boosting European sovereignty in times of geostrategic tension, while trying not to antagonize a belligerent US administration. However, the strategy itself remains slightly underwhelming.
The Commission’s theory of change seems to be that flagship projects in key sectors (e.g. autonomous cars, AI for drug development), a push for AI adoption with an emphasis on open source and European “frontier models,” new institutional coordination, and an AI-first approach in the public sector will together strengthen the EU’s presence across the layers of the AI stack (while driving business success).
The strategy’s blind spot: Is it worth redirecting €1 billion towards AI?
Underpinning the strategy is a deafening silence on the question of added value.
The strategy is notably supply-side driven: producers of AI want to push their technologies into existing processes. The push to become “AI-native” risks wrapping outcomes around the technology instead of the other way around.
The key question is: does Europe need €1 billion to push AI adoption across all sectors, or would a more targeted and strategic investment in European digital sovereignty be more effective, all while investing in real solutions to pressing societal problems? Since the money comes from the existing budget, it will mean substantial trade-offs that need to be explained or justified.
No focus on chips, cloud, or “Buy European”:
The strategy fails to address the part of the AI stack where Europe is most dependent on US tech (and where most of the money is being made): chips and cloud. We have to assume this will be addressed in the upcoming Chips and Cloud Development Act (if it materializes at all, given delicate US trade relations). In the absence of an update to procurement rules (such as “Buy European”), aggressively boosting AI adoption may inadvertently increase, rather than decrease, EU dependency on US tech. The strategy places a lot of emphasis on open source, but open source alone is no silver bullet for the problems stemming from concentrated market power in AI.
AGI and further intensification of the large-scale AI paradigm:
Notably, the Commission continues to index on the AI arms race narrative. The focus of AI adoption is primarily on generative AI and “frontier AI,” with an entire section dedicated to Artificial General Intelligence and AI agents. As we’ve argued many times before, it is within the paradigm of large-scale AI (and the race to build ever larger models using ever more computational resources and data) that dominant firms have a competitive advantage. Moreover, plateauing capabilities suggest diminishing returns to scale within the current paradigm.
Scattered industry wishlist, the usual suspects:
All in all, the strategy is quite a bewildering read, with its different sections showing various interest groups seizing the moment and reframing their operations in AI-friendly terms to become beneficiaries of the current political momentum around AI.
The eleven flagship projects section in particular reads like a scattered wishlist from key European industries, organized around the obstacles commonly named as barriers to AI adoption in Europe: fragmented data sharing, interoperability problems, copyright concerns, lack of skills and trust, and (of course) regulatory concerns. The usual suspects loom in the background (telcos, the car industry, and specific defence proposals have a clear footprint). All of this makes for a heavy read in which the AI agenda is woven into myriad existing policy processes.
What a truly ambitious EU AI strategy could look like:
A more ambitious sovereignty strategy would lay out a plan to reclaim agency over the future of AI in Europe (not just more AI, but better AI), while decentering AI as the solution to complex societal challenges. It would think beyond the current trajectory, which predominantly benefits the incumbent, hyperscaler logics of large-scale AI.
Sectoral flagship initiatives
Among the strategy’s eleven sectoral flagship projects, covering fields from agri-food to mobility, two deserve particular scrutiny: defence and internal security, and public sector AI adoption.
Focus on: defence and internal security
[correction: the leaked document contained a section on a European Defence Transformation Roadmap, which was missing from the final document]
Race to become the platform on which everything else runs: There’s a race among European startups to become the dominant operating system for European militaries, mirroring the success of companies like Anduril and Palantir in the US. For venture-backed startups, becoming the platform on which hardware from different providers operates is highly attractive, offering scalability and the kind of infrastructural lock-in that secures future cash flow. Competing integration platforms are emerging in the EU, pointing to a platform race to conquer the market, akin to those seen in digital industries. However, the open-source experience in Ukraine might frustrate these commercial ambitions.
Drone Wall - Good for VCs, but does it make sense? The idea of a “drone wall” has gained traction in the strategy, advocated by startups as a way to generate sustainable demand and ensure growth after the war in Ukraine eventually ends. Some defence analysts, however, have questioned the idea on grounds of feasibility and cost-effectiveness.
Dedicated Military Compute: Finally, the plan suggests dedicated AI Gigafactory computing capacity for military AI, mirroring the recent French announcement of Asgard.
The strategy also includes ominous statements about fostering the “development and uptake of AI solutions for internal security purposes.” While the strategy is light on details, this shows a more securitized approach to artificial intelligence.
Focus on: public sector AI
Public sector AI as a strategic asset: In this section, we also notice a change in tone compared to how the Commission talked about AI a year ago. AI adoption in the public sector is no longer just supposed to increase efficiency, reduce administrative burden, and cut red tape, but public sector procurement should also help AI startups grow (the emphasis here is on European-made open-source AI).
Barriers and solutions to AI adoption: The strategy identifies fragmented public sector data sources and the limited accessibility of trustworthy AI-based tools as the key barriers to AI adoption in the public sector, and points to addressing potential biases, investing in infrastructure and skills, and ensuring transparency and trust as the key solutions.
Tools and resources to boost AI adoption: The proposal includes a shared repository of open-source, reusable tools (architecture models, standards, specifications for data and AI, and registries of LLMs) for the public sector, including the judiciary and law enforcement, to help administrations use AI and to support AI interoperability.
Finally, the European Interoperability Framework will be revised to “incorporate guidance on AI-first policies” within European public administration. There’s no further detail on what this may entail.
In need of independent experts for governance and accountability
The strategy is complex (we counted at least 32 separate initiatives). That’s why the establishment of a single governing mechanism in the form of a coordination forum for “Apply AI” stakeholders and policymakers, called the Apply AI Alliance, may look like a positive signal. This multistakeholder initiative, however, could easily be captured by industry interests, especially at the current political juncture in the European Union.
Despite a colorful graph, it remains unclear how exactly the various stakeholder initiatives will work together. There’s the AI Office’s annual gathering, a newly announced AI Observatory, and the Apply AI Alliance, as well as the existing AI Board. These substantial overlaps are likely to lead to accusations of over-bureaucratization of European policymaking. Is there enough capacity in civil society and academia to effectively feed into these forums with truly independent expertise?
Most of these initiatives are tasked with monitoring the progress of AI adoption or external trends in AI development. None of these governance mechanisms is currently tasked with measuring whether the €1 billion investment will have the desired impact on the European economy, its public sector, and key sectors such as education, health, and energy.