Versa AI hub

AI Legislation

Generative AI’s terrorist potential remains ‘purely theoretical’

By versatileai | July 17, 2025

According to the UK’s terrorism legislation watchdog, generative artificial intelligence (GenAI) systems could allow terrorists to spread propaganda and help them prepare attacks, but the level of threat remains “purely theoretical”, with no evidence that the technology is actually being used in this way.

In his latest annual report, Jonathan Hall, the government’s independent reviewer of terrorism legislation, warned that GenAI systems could be exploited by terrorists, and said that how effective the technology is in this context, and what to do about it, are now “open questions”.

Commenting on the possibility that GenAI could be deployed to aid terrorist groups’ propaganda activities, for example, Hall explained how it could be used to significantly speed up production and amplify distribution, allowing terrorists to easily create images, stories and messages in readily shareable formats with far fewer resources or constraints.

However, he also said that the ability to “flood” the information environment with AI-generated content is not an unalloyed gain for terrorists, as the possibility that material was machine-generated could undermine the message and change how a group’s output is received.

“Depending on the importance of credibility, the possibility that a text or image was generated by AI can undermine the message. Propaganda communications that resemble spam can prove a turn-off,” he said.

“Conversely, it may prove a boon for the conspiracy theorists and anti-Semites who inhabit extreme right-wing forums, and for those drawn to creative strangeness.”

Similarly, on the possibility of the technology being used in attack planning, Hall said that while it could be an aid, it is an open question how much current generative AI systems would actually serve terrorist groups in practice.

“In principle, GenAI could research significant events and locations for targeting, suggest ways of bypassing security, and provide tradecraft on the use or adaptation of weapons and on terrorist cell structures,” he said.

“Access to a suitable chatbot may remove the need to download online instructional material and could make complex instructions more accessible.”

However, he added that “gains may be incremental rather than dramatic”, and that they are more likely to matter to lone attackers than to organized groups.

Hall added that while GenAI could be used to “extend attack methodology”, for example through the identification and synthesis of harmful biological or chemical agents, this would still require attackers to have prior expertise, skills and access to laboratories or equipment.

“The effectiveness of GenAI here is questionable,” he said.

A similar point was made in the first International AI Safety Report, produced by a global cohort of almost 100 artificial intelligence experts following the UK government’s inaugural AI Safety Summit at Bletchley Park in 2023.

That report found that while new AI models can create step-by-step guides to producing pathogens and toxins that exceed PhD-level expertise, and could therefore lower the barriers to biological or chemical weapon development, doing so would remain a “technologically complex” process.

An additional risk identified by Hall is the use of AI in online radicalization via chatbots. He said that one-to-one interactions between humans and machines could “create a closed loop of terrorist radicalization”.

However, he said that even if a model has no guardrails and has been trained on data that “sympathizes with the terrorist narrative”, the output relies heavily on what users request.

Potential solutions?

From the perspective of legal solutions, Hall emphasized the difficulty of preventing GenAI from being used to support terrorism, noting that the “upstream responsibility” of those involved in developing these systems is limited.

Instead, he proposed introducing “tool-based liability”, which would target AI tools specifically designed to support terrorist activities.

Hall said the government should consider criminalizing the creation or possession of computer programs designed to stir up racial or religious hatred, though he acknowledged it would be difficult to prove that a program was specifically designed for this purpose.

He said that developers could be prosecuted under UK terrorism law if they actually created a terrorist-specific AI model or chatbot, but that “it appears unlikely that GenAI tools will be created specifically to generate new forms of terrorist propaganda. It is much more likely that the capabilities of powerful general models will be harnessed.”

“Great challenges can be predicted in proving that a chatbot (or GenAI model) was designed to produce narrowly terrorist content. A better course would be to target computer programs specifically designed to stir up hatred on grounds of race, religion or sexuality.”

Reflecting on the issue, Hall acknowledged that he has not seen exactly how AI is being used by terrorists, and that the situation remains “purely theoretical”.

“Plausibly, some will say there is nothing new to see here. GenAI is just another form of technology, and so it will be exploited by terrorists, just like vans,” he said. “Without evidence that the current legislative framework is insufficient, there is no basis for adapting or extending it to address purely theoretical use cases. Indeed, the absence of a GenAI-enabled attack could suggest that the whole problem is exaggerated.”

Hall added that even if some form of regulation is needed to avoid future harm, it is arguable whether criminal liability is the most appropriate option, especially given the political imperative to harness AI as a force for economic growth and other public goods.

“Alternatives to criminal liability include transparency reporting, voluntary industry standards, third-party audits, suspicious activity reporting, licensing, bespoke solutions such as AI watermarking, advertising restrictions, forms of civil liability and regulatory obligations,” he said.

Although Hall expressed uncertainty about the extent to which terrorist groups will adopt generative AI, he concluded that the most likely effect of the technology is a general “social degradation” facilitated by the spread of online disinformation.

“While we’re a long way from bombs, gunfire or blunt-force attacks, toxic misrepresentations of government motives or of targeted demographics can lay the foundations for polarization, hostility and, ultimately, real-world terrorist violence,” he said. “But there is no role for terrorism legislation here, because the relationship between GenAI-related content and eventual terrorism is too indirect.”

Although not covered in the report, Hall acknowledged that GenAI could have more “indirect effects” on terrorism, as it could lead to widespread unemployment and create an unstable social environment that is more conducive to terrorism.

© 2025 Versa AI Hub. All Rights Reserved.
