Versa AI hub

GhostGPT provides AI coding and phishing support for cybercriminals

January 24, 2025 (Updated: February 13, 2025)

A generative AI (GenAI) tool called GhostGPT is being offered to cybercriminals, helping them create malware code and phishing emails.

GhostGPT is sold as an “uncensored AI”, and Abnormal Security researchers write that it is likely a wrapper around a jailbroken version of ChatGPT or an open-source GenAI model.

The tool offers several features attractive to cybercriminals: it advertises a “strict no-log policy” guaranteeing that no record of conversations is kept, and it provides convenient access through a Telegram bot.

“Although its promotional materials mention ‘cybersecurity’ as a possible use, this claim is hard to believe given the tool’s availability on cybercrime forums and its focus on BEC (business email compromise) scams,” the Abnormal Security blog said. “Such disclaimers seem to be a weak attempt to dodge legal accountability. There is nothing new in the cybercrime world.”

The researchers tested GhostGPT’s capabilities by requesting a phishing email impersonating DocuSign, and the chatbot responded with a convincing message urging the recipient to review a document.

GhostGPT can also be used for coding, and the blog post notes that its marketing focuses on malware creation and development. Malware authors are increasingly turning to AI coding assistance, and tools like GhostGPT, which lack the typical guardrails of other large language models (LLMs), save criminals the time they would otherwise spend jailbreaking mainstream tools like ChatGPT.

GhostGPT ads on cybercrime forums have gained thousands of views and some traction, according to Abnormal Security. A previous Abnormal report found that “dark AI” has grown popular on such forums, with entire sections dedicated to jailbreak techniques and malicious chatbots.

“Attackers use tools such as GhostGPT to create malicious emails that appear completely legitimate. Because these messages often slip past conventional filters, AI-powered security solutions are the only effective way to detect and block them,” the researchers wrote.
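To illustrate why fluent, AI-written lures evade conventional filters, here is a minimal sketch of a naive keyword-based blocklist of the kind such messages can slip past. The keyword list and both sample emails are hypothetical, invented purely for illustration; real email security products use far more sophisticated signals.

```python
# Illustrative only: a naive keyword blocklist filter. The keywords and
# sample messages below are hypothetical examples, not real product rules.

SUSPICIOUS_KEYWORDS = {"lottery", "winner", "wire transfer", "urgent!!!"}

def naive_filter_flags(email_text: str) -> bool:
    """Return True if the email trips the keyword blocklist."""
    lowered = email_text.lower()
    return any(kw in lowered for kw in SUSPICIOUS_KEYWORDS)

# A crude, old-style phishing email trips the blocklist...
crude = "URGENT!!! You are a lottery WINNER, send a wire transfer fee now"
assert naive_filter_flags(crude)

# ...but a fluent, DocuSign-style lure of the kind GhostGPT produces
# contains no blocklisted terms, so it passes despite being malicious.
fluent = ("Hi Jordan, the Q3 services agreement is ready for your signature. "
          "Please review and sign the document at your earliest convenience.")
assert not naive_filter_flags(fluent)
```

The point of the sketch is that well-written AI-generated text carries none of the surface tells that legacy rules key on, which is why the researchers argue for detection based on behavioral and contextual signals instead.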

Malicious LLMs have been advertised since at least mid-2023, when tools such as WormGPT, focused on malware and phishing, gained attention for lowering the bar for low-skilled attackers to mount more sophisticated attacks.

Attackers also attempt to bolster their cybercriminal activity with legitimate tools such as ChatGPT; last year this led to the disruption of activity by malware developers and government-sponsored actors.

Signs of AI-assisted coding have been observed in recent ransomware campaigns, including by ransomware gangs and their affiliates, but AI-assisted phishing and BEC campaigns remain the most common uses of GenAI by cybercriminals.

An October 2024 report from Egress found that 75% of phishing kits offered on the dark web advertised AI capabilities, while Vipre Security Group estimated in August that 40% of BEC emails in the second quarter of 2024 were AI-generated.

Meanwhile, Pillar Security’s 2024 State of Attacks on GenAI report found that LLM jailbreak attempts succeed about 20% of the time and take an average of just 42 seconds to complete.
