AI Legislation

Generative AI’s terrorist potential ‘purely theoretical’

By versatileai | July 17, 2025

According to the UK’s independent reviewer of terrorism legislation, generative artificial intelligence (GenAI) systems could allow terrorists to spread propaganda and help them prepare attacks, but the threat remains “purely theoretical” in the absence of any evidence that the technology is actually being used this way.

In his latest annual report, Jonathan Hall, the government’s independent reviewer of terrorism legislation, warned that GenAI systems could be exploited by terrorists, and said that how effective the technology would be in this context, and what to do about it, are for now “open questions”.

Commenting on the possibility that GenAI could be deployed to aid terrorist groups’ propaganda activities, for example, Hall explained how it could be used to significantly speed up production and amplify distribution, allowing terrorists to create images, stories and messages in easily shareable formats with far fewer resources or constraints.

However, he also said that giving terrorists the ability to “flood” the information environment with AI-generated content could change how their output is received, since the possibility that material is AI-generated risks undermining the message.

“Depending on the importance of reliability, the possibility that a text or image was generated by AI can undermine the message. Spam-like propaganda communications can prove a turn-off,” he said.

“Conversely, it may be a boon for conspiracy theorists who frequent extreme right-wing forums, anti-Semites and the creatively awkward.”

Similarly, on the possibility of the technology being used in attack planning, Hall said that while it could be an aid, it remains an open question how much practical use generative AI systems would actually be to terrorists.

“In principle, GenAI could be used to research significant events and locations as potential targets, suggest ways to bypass security, and provide tradecraft on the use or adaptation of weapons and on terrorist cell structures,” he said.

“Access to a suitable chatbot may remove the need to download instructional material online, and could make complex instructions more accessible.”

However, he added that any “gains may be incremental rather than dramatic”, and that they are more likely to be relevant to lone attackers than to organized groups.

Hall added that while GenAI could be used to “extend attack methodology”, for example by helping with the identification and synthesis of harmful biological or chemical agents, this would still require attackers to have prior expertise, skills, and access to labs or equipment.

“The effectiveness of GenAI here is questionable,” he said.

A similar point was made in the first International AI Safety Report, compiled by a global cohort of almost 100 artificial intelligence experts in the wake of the UK government’s inaugural AI Safety Summit at Bletchley Park in 2023.

That report found that while new AI models can create step-by-step guides for producing pathogens and toxins that exceed PhD-level expertise, potentially lowering the barriers to developing biological or chemical weapons, doing so would remain a “technologically complex” process.

An additional risk identified by Hall is the use of AI chatbots in online radicalization. He said that one-to-one interactions between humans and machines could “create a closed loop of terrorist radicalization”.

However, he said that even if a model has no guardrails and has been trained on data that “sympathizes with the terrorist narrative”, its output depends heavily on what users ask of it.

Potential solution?

On legal solutions, Hall emphasized the difficulty of preventing GenAI from being used to support terrorism, noting the limits of imposing “upstream liability” on those involved in developing these systems.

Instead, he proposed introducing “tool-based liability”, which would target AI tools specifically designed to support terrorist activity.

Hall said the government should consider criminalizing the creation or possession of computer programs designed to stir up racial or religious hatred, but acknowledged it would be difficult to prove that a program was specifically designed for this purpose.

He said developers who actually create a terrorist-specific AI model or chatbot could be prosecuted under UK terrorism law, but that “it appears unlikely that GenAI tools will be created specifically to generate new forms of terrorist propaganda. It is much more likely that the capabilities of powerful general models will be harnessed.”

“Great challenges can be anticipated in proving that a chatbot (or GenAI model) was designed to produce narrowly terrorist content. A better course is to focus on computer programs specifically designed to stir up hatred on grounds of race, religion or sexuality.”

Reflecting on the issue, Hall acknowledged that it is not yet clear exactly how AI will be used by terrorists, and that the situation remains “purely theoretical”.

“Plausibly, some will say there is nothing new to see here: GenAI is just a form of technology and, like vans, will on occasion be exploited by terrorists,” he said. “Without evidence that the current legislative framework is insufficient, there is no basis for adapting or extending it to address purely theoretical use cases. Indeed, the absence of a GenAI-enabled attack could suggest that the whole problem is exaggerated.”

Hall added that even if some form of regulation is needed to stave off future harms, it is debatable whether criminal liability is the most appropriate option, especially given the political imperative to harness AI as a force for economic growth and other public goods.

“Alternatives to criminal liability include transparency reporting, voluntary industry standards, third-party audits, suspicious activity reporting, licensing, bespoke solutions such as AI watermarking, advertising restrictions, forms of civil liability, and regulatory obligations,” he said.

Although Hall expressed uncertainty about the extent to which terrorist groups will adopt generative AI, he concluded that the technology’s most likely effect is a general “societal degradation” facilitated by the spread of online disinformation.

“While far removed from bombs, gunfire or blunt-force attacks, toxic misrepresentations of government motives or target demographics can lay the foundations for polarization, hostility and, ultimately, real-world terrorist violence,” he said. “But there is no role for terrorism legislation here, because the relationship between GenAI-related content and eventual terrorism is too indirect.”

Although not covered in the report, Hall acknowledged that GenAI could have further “indirect effects” on terrorism, for example if it leads to widespread unemployment and creates an unstable social environment that is more conducive to terrorism.
