Human considerations
While technical requirements for protecting AI systems are important, many organisations have found it pays to take a human-centred approach to the challenge.
At TBWA, an international advertising agency, Lucio Ribeiro, chief AI and innovation officer for Australia, said ethics and safety were front of mind for management from the earliest stages of the company’s AI journey. The company has established a collective AI framework that provides structure around the governance, risk and transparency of AI investments, and which has proven essential in assessing their suitability and security.
“We turned down a lot of tools that didn’t meet our standards, no matter how good they looked on social media,” Ribeiro said.
“Many of the things people publish online, such as GenAI videos and ads, workflow automations, image generators, or AI-generated case studies, may seem impressive, but many of those tools cannot be used responsibly in an enterprise setting.
“They often lack clear IP rights, data protection or commercial terms. We simply can’t afford to experiment irresponsibly, and our clients can’t afford for us to be AI cowboys.”

Ribeiro said TBWA has taken steps to ensure its commitment to embracing AI safely does not prevent it from innovating with the technology, creating “safe” environments with clear boundaries that support AI-based experimentation.
“If trust is compromised, so is creativity,” Ribeiro said.
“So our principles remain: build fast, test safely, and scale only what’s safe.”
Securing the foundations of AI
Security has been embedded as a fundamental pillar of the transformation programme under way at the Australian National University, where the adoption of AI technology has introduced additional security dimensions.

According to Sajid Hasan, director of digital infrastructure and information security at the university, the concentration of computational power in AI workloads required the development of a secure computing environment with isolated processing capabilities for sensitive research.
“Compliance with evolving AI regulations and guidelines has been a moving target that requires constant attention,” Hasan said.
“We had to carefully balance the openness needed for research collaboration with the protection of intellectual property, especially as AI models themselves became valuable research outcomes.
“These considerations have led us to develop specific governance frameworks for AI research beyond traditional IT security measures.”
Investment in AI has also driven the evolution of the university’s ethical frameworks, leading to the creation of comprehensive guidelines for responsible AI use in research that address issues from algorithmic bias to transparency in AI-driven decision-making.
“We have been working to ensure that AI innovation aligns with university values.
“This includes extensive consultations with researchers, ethicists and the broader university community to develop a framework that enables innovation while maintaining ethical standards.”
The university’s data governance framework has been strengthened to address privacy concerns inherent in AI applications, particularly regarding the use of personal data in research. The university has also implemented transparency requirements for AI-driven decision-making that impacts students or staff, ensuring human oversight and the ability to understand and challenge automated decisions at all times.