The EU has the opportunity to shape how the world approaches AI and data governance. AI News spoke with Resham Kotecha, global policy director at the Open Data Institute (ODI).
ODI’s European Data and AI Policy Manifesto sets out six principles for policymakers, calling for strong governance, a comprehensive ecosystem, and public participation to guide AI development.
Setting the standard for AI and data
“The EU has a unique opportunity to shape the global benchmark of digital governance that puts people first,” Kotecha said. The first principle of the manifesto states that innovation and competitiveness must be built on regulations that protect people and strengthen trust.
The Common European Data Spaces and Gaia-X are early examples of how the EU is building the foundations for AI development while protecting rights. These initiatives aim to create shared infrastructure that allows governments, businesses and researchers to pool data without giving up control. If they succeed, Europe can combine large-scale data use with strong privacy and security protections.
Privacy-enhancing technologies (PETs) are another part of the puzzle. These tools allow organisations to analyse or share insights from sensitive datasets without exposing the raw data itself. Horizon Europe and Digital Europe already support PET research and deployment. Kotecha argued that what is needed now is consistency: ensuring that PETs move out of pilots and into mainstream use. This shift would allow businesses to use more data responsibly and show citizens that their rights are taken seriously.
Trust also depends on oversight. Independent organisations provide the checks and balances needed for trustworthy AI, Kotecha said. “They provide equitable scrutiny, build public trust and hold both government and industry accountable.” ODI’s own data agency programme provides guidance on how these organisations can be structured and supported.
Data as the foundation of the EU’s AI
The manifesto calls open data the foundation of responsible AI, but many companies are cautious about sharing. Concerns range from commercial risks and legal uncertainties to worries about quality and format. Even when data is published, it is often difficult to use because it is unstructured or inconsistent.
Kotecha argued that the EU should reduce the costs facing organizations when collecting, using and sharing AI data. “The EU should investigate a variety of interventions, including legislative frameworks, financial incentives, capacity building and data infrastructure development,” she said. By reducing obstacles, Europe can encourage private organizations to share more data responsibly, generating both public and economic benefits.
ODI research shows that clear communication is important. Senior decision makers need to see the tangible business benefits of data sharing, alongside broader public debate. At the same time, sensitivities around commercial data must be addressed.
The Data Space Support Centre (DSSC) and the International Data Spaces Association (IDSA) are building technical frameworks that make sharing safer and easier. Updates to the Data Governance Act (DGA) and GDPR have also clarified the rules for responsible reuse.
Regulatory sandboxes can build on this foundation. By allowing businesses to test new approaches in a controlled environment, sandboxes can demonstrate that public interest and commercial value are not in conflict. Privacy-enhancing technologies add another layer of security by allowing organisations to share sensitive data without putting individuals at risk.
Building trust and a cross-border AI ecosystem across the EU
One of Europe’s biggest hurdles is making data work across its member states. Legal uncertainty, divergent national standards and inconsistent governance fragment the system.
The Data Governance Act is central to the EU’s plan to create a trustworthy cross-border AI ecosystem. But the law alone will not solve the problem. “The real test is how member states implement it, and how much support is given to organisations that want to participate,” Kotecha said. If Europe can align standards with implementation, it can strengthen its AI ecosystem.
That requires more than technical fixes. Building trust between governments, businesses and civil society is equally important. For Kotecha, the solution is “an open and trustworthy data ecosystem in which collaboration helps maximise the value of data.”
Independence through funding and governance
Overseeing AI systems requires sustainable structures. Without long-term funding, independent organisations risk becoming project-based consultancies rather than consistent watchdogs. “Civil society and independent organisations need commitments to long-term, strategic funding streams, not just project-based support, to carry out sustained oversight,” Kotecha said.
ODI’s data agency programme has investigated governance models that allow organisations to be managed responsibly while remaining independent. “Independence relies on more than money. It requires transparency, ethical oversight, inclusion in political decision-making, and an accountability structure that locks the organisation into the public interest,” Kotecha said.
Embedding such principles into EU funding models could ensure that watchdogs remain both independent and effective. Strong governance should include ethical oversight, risk management, transparency, and clear roles, handled by board subcommittees for ethics, audit and remuneration.
Making data work for startups
Access to valuable datasets is often limited to major technology companies. Smaller players struggle with the cost and complexity of obtaining valuable data. This is where initiatives like AI factories and data labs come into play. Designed to lower barriers, they provide startups with curated datasets, tools and expertise.
The model has worked before. Projects like Data Pitch matched small and medium-sized businesses and startups with data from large organisations, unlocking previously closed datasets. Over three years, it supported 47 startups from 13 countries, helped create over 100 new jobs, and generated 18 million euros in sales and investment.
ODI’s OpenActive initiative showed similar impacts in the fitness and health sector. At the European level, DSSC pilots and new sector-specific data spaces in areas such as mobility and health are beginning to create similar opportunities. For Kotecha, these schemes are about ensuring that “innovative products or services can be built on high-value data to truly reduce the barriers for small players.”
Bringing communities into the conversation
The manifesto also emphasises that the EU’s AI ecosystem will only succeed if public understanding and participation are embedded in it. Kotecha argued that engagement cannot be top-down or tokenistic. “Participatory data initiatives enable people to play an active role in the data ecosystem,” she said.
ODI’s 2024 report, “What Makes Participatory Data Initiatives Successful?”, maps how communities are directly involved in data collection, sharing and governance. It found that local participation enhances ownership and reaches underrepresented groups.
In practice, this means community-driven health data projects like those ODI has supported, or open standards embedded in everyday tools such as activity finders and social prescribing platforms. These approaches raise awareness and give people agency.
Effective participation requires training and resources, so that communities can understand and shape how data is used. Representation should reflect the diversity of the community itself, using culturally relevant methods and trusted local champions. Technology should be accessible, including low-tech or offline options, with clarity about how data is protected.
“If the EU wants to reach an underrepresented group, we need to support a participatory approach that starts with local priorities, uses trustworthy intermediaries, and builds transparency from the start,” Kotecha said. “That’s how we turn data literacy into a real impact.”
Why trust could become the EU’s competitive advantage in AI
The manifesto argues that Europe has an opportunity here. “The EU has a unique opportunity to prove that trust is a competitive advantage in AI,” Kotecha said. By putting open data, independent oversight, inclusive ecosystems, and data skills development at the heart of its AI strategy, the EU can show that responsible governance and innovation go hand in hand.
This position contrasts with other digital powers. In the US, regulation remains fragmented. In China, a state-led model raises surveillance and human rights concerns. By setting clear and principled rules for responsible AI, the EU can turn regulation into soft power and export governance models that others adopt.
For Kotecha, this is about shaping the future, not just rules. “Europe can be positioned not only as a rule maker, but as a global standard setter for trustworthy AI.”
(Photo by Christian Lu)
See: Agent AI: Promises, Scepticism, and Its Implications for Southeast Asia