In January 2025, the UK government announced the AI Cybersecurity Code of Practice (1), aimed at protecting artificial intelligence (AI) systems across the digital economy from cybersecurity risks.
This voluntary code of practice and its accompanying implementation guide follow feedback from global stakeholders, including the National Cyber Security Centre, and will form the basis of a new global standard for secure AI through the European Telecommunications Standards Institute.
Last year, the UK AI sector generated £14.2 billion in revenue, and the code's 13 principles aim to sustain this growth while protecting critical infrastructure from cyberattacks. The principles emphasize protecting AI systems from hacking and sabotage, and guide developers in creating secure, resilient products.
Feryal Clark MP said:
“This will not only give businesses the confidence to seize the opportunities AI offers to thrive while staying protected, but will also drive growth, improve public services and deliver the cutting-edge AI products that keep the UK at the forefront of the world’s AI economy.” (2)
A BCI study (3) shows that nearly 75% of organizations experienced a cyberattack in 2024, and cyberattacks remain the top-ranked future risk for organizations (4). Adoption of AI tools has been relatively slow, with many organizations preferring a “wait and see” approach, but progress is being made in integrating AI across the sector. Research shows that AI is currently used mainly for everyday time-saving tasks such as transcribing online meetings, though some practitioners report using it to create realistic, emotionally engaging training scenarios far faster than they could manually. Others use AI to check their organization’s policies against relevant legislation, significantly reducing human error and the time spent on repetitive cross-checking tasks.
The results of the BCI’s annual global survey for its Resilience Report 2024 reflect increasing interest in AI adoption across the sector. A larger share of organizations than in the previous year expect AI to play a key role in their operations, to varying degrees, throughout 2025, indicating growing interest in the technology.
Tackling the dangers of AI
While AI can enhance resilience, it carries risks that resilience professionals must address. For example, entering sensitive corporate data into a public AI system can lead to data leaks, giving attackers information that could enable a breach. To mitigate this, practitioners might establish AI governance groups to assess the risks of prospective tools and develop organizational policies on safe use and restrictions. They could also implement AI awareness training so that staff gain the efficiency benefits while keeping data secure.
Furthermore, one of the code’s principles is to identify, track and protect assets, including their interdependencies and connectivity. Practitioners can work towards this by fostering close collaboration with third-party suppliers and breaking down silos between organizational departments.
The AI Cybersecurity Code of Practice has been released to help developers and end users implement AI securely without stifling innovation. Although it is UK guidance, alongside other regulation in the pipeline, such as the European Union AI Act, Canada’s voluntary code of conduct, and Brazil’s first AI regulations, it can serve as global good practice for enhancing resilience. (5)