
TELUS Digital’s Jeff Brown discusses borderless AI compliance and how businesses can meet global AI regulations. This article was originally published on Insight Jam, the enterprise IT community that enables human conversation on AI.

Whether it’s human resources, supply chain, workforce management or customer support, organizations are increasingly incorporating artificial intelligence into their operations. New AI use cases emerge daily across industries and geographies, and as adoption accelerates worldwide, uncertainty grows about how to safely, ethically, and responsibly develop, deploy, and monitor AI. Understanding global regulations and monitoring new legal obligations is essential to your AI strategy and implementation, as well as to protecting your brand.
If a company fails to take notice and act, the downsides of non-compliance are significant, including reputational damage and hefty fines. Currently, the European Union’s (EU) AI rules impose stiff penalties for violations, while the U.S. AI Action Plan favors lighter guardrails and industry-driven autonomy. Other countries are rolling out voluntary codes, drafting new laws, and collaborating through global initiatives such as the G7 Hiroshima AI Process.
Why is AI regulation a global business risk?
There is no doubt that the era of “light-touch” AI oversight is over. That approach made sense before 2015, when AI systems were still narrow in scope and not yet considered socially or economically disruptive. At the time, regulatory attention was largely indirect, falling under broader privacy or cybersecurity laws.
AI is now being incorporated across industries and geographies. As our understanding of the risks and real-world negative impacts of failing to “get AI right” deepens, the need to ensure transparency and accountability for how these technologies are created and used is increasingly guiding the way we approach AI development.
AI regulation is both a critical business obligation and a challenge, especially for global companies, given the fragmented web of legal definitions, expectations, and uncertainties they must navigate. In the 2025 Global Industry Report, which surveyed more than 800 business leaders, 44% cited “compliance with government regulations” as one of their biggest challenges in maintaining customer trust. Additionally, nearly half of respondents said data breaches and cyberattacks are the biggest threat to maintaining a safe and secure digital environment for their customers. The introduction of AI will exacerbate these risks.
Why is global AI regulatory compliance so difficult?
Today’s legal and governance teams face seemingly contradictory constraints. In a world with no global consensus on what constitutes an AI system, what counts as “high risk,” or even which safety measures are essential, these teams must still deliver agile, adaptive cross-market strategies. In other words, compliance is a moving target. Moreover, regulations vary widely from one geography to the next.
Let’s take a look at how some major regions of the world are currently defining and implementing AI oversight.
Europe: First binding AI law
The EU AI Act, adopted in July 2024, is the first comprehensive, binding law focused solely on artificial intelligence. It introduced a tiered system that categorizes AI tools according to their risk level, from minimal to unacceptable. Depending on the violation, non-compliance can result in fines of up to €35 million (about US$40 million) or 7% of a company’s global annual revenue, whichever is higher.
Article 5 of the EU AI Act prohibits applications deemed to pose unacceptable risk, including social scoring (ranking people based on behaviors or characteristics in ways that can lead to unfair treatment) and manipulative AI (systems that exploit user vulnerabilities or bypass consent). Strict requirements also apply to AI used in more sensitive areas such as healthcare, law enforcement, and recruitment.
To help organizations determine whether their tools fall within the scope of the EU AI Act, the European Commission has published guidelines on the definition of an AI system. Notably, the Act is not limited to machine learning; it also covers logic-based software tools that can make or influence decisions. For global organizations, aligning with the most restrictive applicable regime is often the safest standard.
United States: Sector-based oversight, shift toward deregulation
The U.S. AI Action Plan is a framework introduced by President Trump in July 2025 to replace Executive Order 14110 on AI, President Biden’s order that the Trump administration rescinded. The 2025 AI Action Plan rejects centralized AI regulation, favors sector-specific oversight, and prioritizes global AI competitiveness. Government agencies have been directed to review all existing AI regulations and “eliminate policies that limit growth, restrict speech, or require bias audits.” References to topics such as misinformation, diversity, and climate change are also slated for removal from federal AI guidance.
The U.S. government continues to promote guidance like the NIST AI Risk Management Framework, a voluntary guide released by the National Institute of Standards and Technology in January 2023 to help companies identify and manage risks throughout the AI lifecycle. The framework offers practical steps, along with an accompanying playbook, to improve transparency, safety, and accountability.
Canada: Voluntary standards, pending legislation
Canada’s Voluntary Code of Conduct for Advanced Generative AI Systems was published in September 2023 and focuses on six core principles: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness. The Artificial Intelligence and Data Act (AIDA), part of Bill C‑27, was introduced in June 2022 to regulate high-impact AI systems, but it has stalled in Parliament and has not become law.
If passed, AIDA is expected to align with EU standards across categories such as healthcare, employment, biometric identification, and public services. It is designed to mandate transparency, risk assessments, and incident reporting.
AI rules are taking shape in Brazil and Singapore
Meanwhile, the Brazilian government is considering the Brazilian AI Bill (Bill No. 2338/2023). The bill proposes a three-tier, risk-based framework that classifies AI systems as excessive risk (prohibited), high risk (regulated), or low risk (minimal obligations), and includes requirements for fairness, transparency, and human rights protections.
Singapore has not yet introduced binding AI legislation, but its Model Artificial Intelligence Governance Framework provides practical guidance on topics such as accountability, data quality, security and human oversight. In 2024, Singapore released a Generative AI addendum to the framework to address emerging risks and foster responsible innovation across sectors.
G7 Hiroshima AI Process: Global voluntary oversight
At the G7 Summit held in Hiroshima in May 2023, leaders launched the Hiroshima AI Process, the world’s first international framework for advanced AI governance. This voluntary process covers issues such as risk mitigation, public reporting of systems’ capabilities and limitations, and the development of tools to identify AI-generated content.
What is the role of legal counsel in navigating AI regulatory compliance?
Multinational companies often anchor themselves to the most prescriptive global regulations, such as the EU AI Act, while navigating the politically volatile U.S. regulatory environment.
Given that AI spans multiple sectors, lawyers will play a critical role in ensuring that governance accommodates innovation. This means working directly with the technology while translating new rules into business-ready policies. In my own organization, gaining hands-on experience with our proprietary GenAI platform has made our legal team more credible, more effective in advising product teams and anticipating risk, and more confident in regulatory conversations. When lawyers understand how these systems work not just in theory but in practice, they can better guide responsible development.
Additionally, the most effective legal teams:
Translate regulations into policies that align with diverse global rules and frameworks.
Advise cross-functional teams, including product, engineering, and data, to ensure transparency and accountability are built into systems from the beginning.
Stay ahead of change by establishing mechanisms to monitor and respond to local legal changes in real time, including a centralized knowledge base that helps your team meet requirements and deadlines.
Shape the regulatory environment by participating in industry associations and public consultations, providing feedback that informs how AI legislation is drafted.
Support ethics oversight through key roles in AI ethics committees and working groups to assess legal and reputational risks during the development process.
In short, the role of legal departments is to move beyond reactive reviews toward proactive partnerships that anticipate risk and build trust in AI with all stakeholders, including regulators, employees, and customers.
8 practical guidelines for building resilient AI governance
While legal teams are essential to compliance, building strong AI governance is a team sport and requires consistent attention from departments across the organization. Companies can strengthen their approach in the following ways:
Develop an internal AI governance playbook that aligns with recognized frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001. The playbook should include definitions and guidance around risk tiers and accountability roles to ensure all teams adhere to best practices.
Use a tiering system to classify AI tools by risk level based on each system’s role, decision-making authority, the data it uses, and how likely it is to raise legal or ethical concerns; higher risk warrants increased safeguards and reviews. (A minimal sketch of such a tiering, paired with an audit trail, follows this list.)
Build AI systems that are transparent by design, keeping clear records of usage, adding human checks, and creating an audit trail. All high-impact decisions should be tracked and reviewable.
Participate in voluntary governance frameworks such as the G7 Hiroshima Process and national voluntary codes of conduct. This demonstrates responsible intent, even in markets where binding regulations are still evolving.
Vet external partners by confirming that their AI tools and services adhere to your company’s core values and guiding principles, and that they meet your security, legal, and ethical standards for AI use.
Track evolving global norms and new international initiatives such as the UN’s Global Digital Compact and the OECD’s AI Principles. Even where these are not yet legally binding, they signal the direction future laws are likely to take.
Define clear escalation paths and internal processes for flagging concerns about the use of AI systems. Beyond legal issues, reputational and ethical risks can surface over time.
Train internal stakeholders on AI compliance and incorporate it into team onboarding, especially for data, product, and procurement teams, so that responsible practices develop from the ground up.
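To make the tiering and audit-trail items above concrete, here is a minimal Python sketch. The tier labels, classification criteria, and record fields are illustrative assumptions rather than requirements drawn from any specific regulation; actual classifications belong with counsel and your governance committee.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json

# Illustrative tier labels loosely inspired by risk-based frameworks such as
# the EU AI Act; the criteria below are assumptions for demonstration only.
@dataclass
class AIToolProfile:
    name: str
    makes_automated_decisions: bool  # decides outcomes vs. merely assists
    affects_individuals: bool        # e.g., hiring, credit, healthcare
    uses_sensitive_data: bool        # e.g., biometric, health, financial data

def classify_risk_tier(tool: AIToolProfile) -> str:
    """Map a tool profile to a governance tier; stricter tiers mean more review."""
    if tool.makes_automated_decisions and tool.affects_individuals:
        return "high"
    if tool.affects_individuals or tool.uses_sensitive_data:
        return "limited"
    return "minimal"

def audit_record(tool: AIToolProfile, decision: str, reviewer: str) -> str:
    """Create a JSON audit entry for a high-impact decision."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool.name,
        "risk_tier": classify_risk_tier(tool),
        "decision": decision,
        "human_reviewer": reviewer,  # the human-in-the-loop check
    })

# Example: a hypothetical resume-screening assistant
screener = AIToolProfile("resume-screener", True, True, False)
print(classify_risk_tier(screener))  # -> "high"
print(audit_record(screener, "candidate advanced to interview", "j.doe"))
```

In practice, teams would persist such records to an append-only store so that reviewers and auditors can reconstruct how any high-impact decision was made.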
Future-proof your company for emerging regulations
Even in markets without legal mandates, investing early in internal AI governance positions companies to adapt quickly as new rules emerge. We recommend staying audit-ready and compliance-ready through clear documentation, third-party reviews, and risk assessments.
Consider a central repository of region-specific obligations to help keep your legal and product teams in sync as laws change; a minimal sketch of one such repository follows below. Whenever possible, legal teams should run scenario planning for upcoming changes, such as the passage of AIDA in Canada or federal AI legislation in the United States. Even in unregulated spaces, demonstrating responsible intent through voluntary frameworks and transparent systems can reduce risk and strengthen trust.
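As one way to implement such a repository, here is a minimal Python sketch. The regions, instruments, requirements, and dates are illustrative placeholders (including the example EU date), not an authoritative compliance matrix; real entries must come from counsel’s review of each instrument in each jurisdiction.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Obligation:
    regulation: str        # source instrument
    requirement: str       # what the team must do
    deadline: date | None  # None = voluntary or no fixed date
    binding: bool

# Illustrative placeholder entries only.
OBLIGATIONS: dict[str, list[Obligation]] = {
    "EU": [Obligation("EU AI Act", "Classify systems by risk tier and document compliance",
                      date(2026, 8, 2), True)],
    "US": [Obligation("NIST AI RMF", "Map, measure, and manage AI risks", None, False)],
    "CA": [Obligation("Voluntary Code of Conduct", "Publish accountability and safety measures",
                      None, False)],
}

def open_binding_items(region: str, today: date) -> list[Obligation]:
    """Return binding obligations for a region that are undated or still upcoming."""
    return [
        o for o in OBLIGATIONS.get(region, [])
        if o.binding and (o.deadline is None or o.deadline >= today)
    ]

for o in open_binding_items("EU", date(2025, 1, 1)):
    print(f"{o.regulation}: {o.requirement} (due {o.deadline})")
```

Keeping this structure in a shared, version-controlled location gives legal and product teams a single source of truth that can be updated as each jurisdiction’s rules change.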
Legal teams are more than advisors ticking compliance boxes after an AI product launches. In the age of AI, they play a central role as builders of trust, global responsiveness, and responsible innovation. When business leaders bring legal advisors to the table early, they are better equipped to handle complexity, respond to changing norms, and maintain the strong relationships that drive loyalty, revenue, and customer trust.

