
CTO and CIO key insights

March 5, 2025

Emre Kazim, co-CEO of Holistic AI, is an expert in AI ethics and governance and holds a PhD in Philosophy from King’s College London.


The European Union (EU) AI Act reached its first compliance deadline in February 2025, signaling a shift in global AI governance. For technology leaders, the stakes are high and the rules cannot be ignored. The Act affects any business that uses AI and sells products or services in the EU, including systems developed in-house and those purchased from third-party software suppliers.

Non-compliance carries fines of up to 35 million euros or 7% of global annual revenue, whichever is higher, as well as reputational and operational risk.
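As a rough illustration of that exposure, here is a minimal sketch in Python; the 35 million euro cap and the 7% figure are the numbers cited above, while the revenue value in the example is hypothetical.

```python
# Minimal sketch: the headline penalty is the higher of a fixed cap and a
# percentage of global annual revenue. The revenue figure below is hypothetical.

def max_fine_exposure_eur(global_annual_revenue_eur: float) -> float:
    """Return the larger of EUR 35 million and 7% of global annual revenue."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# Example: a company with EUR 2 billion in global annual revenue.
print(f"{max_fine_exposure_eur(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR
```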

In force since August 2024, the EU AI Act introduces a risk-based framework that classifies AI systems as prohibited, high, limited, or minimal risk. For CTOs and CIOs, understanding what the law means in practice is essential to navigating its impact on innovation and compliance.
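A minimal sketch of how that four-tier classification might be represented in an internal tool follows; the tier names track the framework just described, while the short annotations are informal paraphrases rather than legal definitions.

```python
from enum import Enum

class AIRiskTier(Enum):
    """The EU AI Act's four risk tiers, from most to least restrictive."""
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH = "high risk: strict obligations apply"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: largely unregulated"

# Example: label a system and read back its informal description.
tier = AIRiskTier.HIGH
print(tier.name, "-", tier.value)
```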

At its core, the Act draws red lines for AI use in the EU: systems that pose “unacceptable risks” or contradict EU values such as human dignity, freedom, and privacy are off limits. For CTOs and CIOs who shape AI development and deployment strategies, meeting these requirements and tracking the enforcement deadlines must be a top priority.

EU AI Act Origins

My involvement in the EU AI Act began through work with the Organisation for Economic Co-operation and Development (OECD). Early discussions focused on the EU’s ambition to set a global standard for AI governance, similar to the General Data Protection Regulation (GDPR), but this time with a dual focus: fostering trust and enabling innovation. While the GDPR is primarily concerned with the protection of personal data, the EU AI Act takes on the far more complicated challenge of regulating AI systems themselves.

Throughout the drafting process, it became clear how difficult it is to regulate technologies that evolve faster than legislation can keep pace. Unlike data privacy, AI governance is more than just policy; it demands a deep technical understanding of how AI models work, how they make decisions, and where risks emerge.

Common misconceptions

In my experience working with customers, three enduring misconceptions shape the way companies approach compliance with the Act.

1. “Our legal team can handle this.” Many assume that AI compliance can simply be handed to legal teams, as with GDPR. However, unlike data privacy law, the Act requires detailed technical analysis of AI models, their risks, and their behavior.

2. “Just extend your cyber or privacy solution.” Traditional governance tools built for cybersecurity or data privacy are not equipped to assess AI-specific risks such as bias, explainability, and robustness. AI requires a governance framework designed around its own lifecycle.

3. “Compliance slows us down.” Companies that embed AI governance into their development cycle actually accelerate deployment. A clear risk assessment and compliance framework removes obstacles and makes it easier to scale AI safely and with confidence; compliance with the Act then comes as an added benefit.

Prohibited AI practices

The EU AI Act prohibits eight AI practices because of their potential for harm, regardless of whether an organization develops, deploys, or merely uses them.

• Manipulative or deceptive AI: Systems that subtly influence human behavior, such as by embedding undetectable cues in content.

• Exploitation of vulnerable groups: AI that targets children, financially struggling individuals, or other groups at particular risk of manipulation.

• Social scoring and behavior-based classification: AI that categorizes individuals based on personality or behavior in ways that lead to unfair treatment (e.g., employment decisions based on social media activity).

• AI-driven predictive policing: Profiling-based prediction of criminal behavior without human oversight.

• Untargeted facial recognition data collection: Scraping biometric data from sources such as CCTV and online platforms, a prohibition that mirrors GDPR protections.

• Emotion recognition in workplaces and education: AI systems that infer emotions in workplaces and schools are restricted, except for health and safety applications.

• Biometric categorization of sensitive traits: AI may not use biometric data to infer race, political beliefs, or sexual orientation, except under strict legal conditions.

• Real-time biometric identification in public spaces: Live facial recognition by law enforcement is largely prohibited, with narrow exceptions that require prior approval and oversight.

CTOs and CIOs should conduct detailed risk assessments to ensure compliance, particularly as enforcement deadlines approach.
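As a minimal sketch of a first-pass screen against those eight prohibitions (the practice names mirror the list above; the helper function and the example system are illustrative assumptions, not legal advice):

```python
# First-pass screen of an AI system against the Act's eight prohibited
# practices. Any real determination requires a detailed, case-by-case
# risk assessment; the flags below are illustrative only.
PROHIBITED_PRACTICES = [
    "manipulative or deceptive techniques",
    "exploitation of vulnerable groups",
    "social scoring / behavior-based classification",
    "predictive policing based on profiling",
    "untargeted scraping of facial images",
    "emotion recognition in workplaces or education",
    "biometric categorization of sensitive traits",
    "real-time remote biometric identification in public spaces",
]

def screen_system(flags: dict) -> list:
    """Return the prohibited practices a system has been flagged for."""
    return [p for p in PROHIBITED_PRACTICES if flags.get(p, False)]

# Hypothetical example: a resume-ranking tool flagged for one practice.
hits = screen_system({"social scoring / behavior-based classification": True})
print(hits if hits else "no prohibited practices flagged")
```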

Key steps for CTOs and CIOs in 2025

Our team is currently helping customers prepare for the EU AI Act. On our calls, we advise them to take the following steps:

• Perform a comprehensive AI audit: Identify all AI-powered software used internally or supplied by third-party vendors to map potential compliance risks (a minimal sketch of such an inventory follows this list).

• Implement AI governance protocols: For example, Unilever (one of our customers) has established standardized policies on transparency, fairness, and bias mitigation to meet EU regulatory standards.

• Engage legal and compliance teams: Evaluate whether AI models comply with EU regulations and whether case-by-case exceptions apply, such as biometric identification for law enforcement and security applications.

• Verify vendor compliance: Require compliance assurances from AI vendors before deploying their services, in order to mitigate third-party risk.
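A minimal sketch of what the audit inventory referenced above might record (the field names, the dataclass, and the example entries are illustrative assumptions; the risk-tier labels follow the Act's framework described earlier):

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a company-wide AI inventory (illustrative fields)."""
    name: str
    owner: str                # internal team accountable for the system
    source: str               # "in-house" or the supplying vendor
    risk_tier: str            # "prohibited", "high", "limited", or "minimal"
    vendor_attestation: bool  # compliance assurance received from the vendor

# Hypothetical entries covering both in-house and third-party systems.
inventory = [
    AISystemRecord("resume screening model", "HR Tech", "in-house", "high", False),
    AISystemRecord("support chatbot", "Customer Ops", "AcmeBot (hypothetical vendor)", "limited", False),
]

# Surface one audit gap: third-party systems deployed without vendor assurance.
gaps = [r.name for r in inventory if r.source != "in-house" and not r.vendor_attestation]
print("Missing vendor attestation:", gaps)
```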

Future preparation: Why should CTOs and CIOs take action now?

Enforcement of the prohibition on banned AI practices takes effect first, followed by codes of practice for general-purpose AI systems such as foundation models and LLMs, and then the high-risk AI requirements.

To move forward, CTOs and CIOs need to establish a robust governance framework that ensures compliance, minimizes risk, and drives responsible AI adoption. A standardized approach streamlines AI projects, strengthens trust, and positions organizations as AI leaders. Consider implementing an AI governance software platform to manage not only regulatory compliance activities but all AI use cases across your organization, with attention to AI safety, ROI, and effectiveness.

As the EU continues to take a leadership position in AI regulation, technology leaders need to ensure that innovation and accountability go hand in hand.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
