A generative AI (GenAI) tool called GhostGPT is being marketed to cybercriminals to help them write malware code and phishing emails.
GhostGPT is sold as an "uncensored AI," and Abnormal Security researchers write that it is likely a wrapper around a jailbroken version of ChatGPT or an open-source GenAI model.
The tool offers several features attractive to cybercriminals: it promises a "strict no-logs policy," guaranteeing that no record of conversations is kept, and provides convenient access through a Telegram bot.
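The "wrapper" hypothesis describes a common pattern: the seller's bot front-end does not host its own model, but simply prepends a fixed system prompt to each user message before relaying it to an upstream LLM API. The sketch below illustrates that generic pattern only; the function names, placeholder prompt, and payload shape are hypothetical and not taken from Abnormal's report.

```python
# Illustrative sketch of a bot "wrapper" around a third-party LLM API.
# All names and the placeholder prompt are hypothetical; a real wrapper
# would inject its own instructions and forward the payload over HTTP.

SYSTEM_PROMPT = "You are a helpful assistant."  # placeholder instructions

def build_request(user_message, history=None):
    """Assemble a chat-completion-style payload for an upstream model."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])  # prior turns, if the bot keeps any
    messages.append({"role": "user", "content": user_message})
    return {"model": "upstream-model", "messages": messages}

payload = build_request("Hello")
```

The point is that the bot operator controls only the thin relay layer; the model itself, and any logging it performs upstream, is outside the seller's control, which is why researchers treat "no-logs" claims from such services with skepticism.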
"While the promotional materials mention cybersecurity as a possible use, this claim is hard to believe given the tool's availability on cybercrime forums and its focus on BEC (business email compromise) scams," Abnormal Security said in a blog post. "Such disclaimers seem like a weak attempt to dodge legal accountability. There is nothing new in the cybercrime world."
The researchers tested GhostGPT's capabilities by asking it for a DocuSign phishing email, and the chatbot responded with a convincing message urging the recipient to review a document.
GhostGPT can also be used for coding, and the blog post focuses on its marketing around malware creation and development. Malware authors increasingly rely on AI coding assistance, and tools like GhostGPT, which lack the guardrails typical of mainstream large language models (LLMs), can save criminals the time they would otherwise spend jailbreaking tools like ChatGPT.
GhostGPT ads on cybercrime forums have drawn thousands of views and some traction, according to Abnormal Security. In a previous report, Abnormal noted that "dark AI" has gained popularity on such forums, with entire sections dedicated to jailbreak techniques and malicious chatbots.
"Attackers use tools like GhostGPT to create malicious emails that appear completely legitimate. Because these messages often slip past traditional filters, AI-powered security solutions are the only effective way to detect and block them," the researchers said.
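Abnormal does not disclose how its detection works, and no vendor's actual method is shown here. As a rough illustration of the general idea behind ML-based email filtering, the toy word-level naive Bayes classifier below learns from a handful of labeled messages and scores new text; the tiny training set and labels are invented for the example.

```python
# Toy naive Bayes text classifier (illustrative only; not any
# vendor's method). Learns word frequencies per label and scores
# new text with add-one (Laplace) smoothing.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training docs
        self.vocab = set()

    def train(self, text, label):
        counts = self.word_counts.setdefault(label, Counter())
        for w in tokenize(text):
            counts[w] += 1
            self.vocab.add(w)
        self.doc_counts[label] += 1

    def predict(self, text):
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label, counts in self.word_counts.items():
            # log prior + summed log likelihoods with add-one smoothing
            score = math.log(self.doc_counts[label] / total_docs)
            denom = sum(counts.values()) + len(self.vocab)
            for w in tokenize(text):
                score += math.log((counts[w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = NaiveBayes()
clf.train("verify your account click this link urgent", "phish")
clf.train("please review the attached invoice immediately", "phish")
clf.train("lunch meeting moved to noon tomorrow", "ham")
clf.train("quarterly report draft attached for comments", "ham")
print(clf.predict("urgent click link to verify your account"))  # prints "phish"
```

Production email security products use far richer signals (sender behavior, headers, URLs, and learned writing-style baselines), but the core idea is the same: a statistical model trained on labeled examples rather than hand-written rules, which is harder for fluent AI-generated text to evade than simple keyword filters.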
Malicious LLMs such as WormGPT, focused on malware and phishing, have been advertised since at least mid-2023, drawing attention for lowering the bar for low-skilled attackers to mount more sophisticated attacks.
Attackers also attempt to boost their cybercriminal activity using legitimate tools such as ChatGPT, a trend that last year encompassed both malware developers and state-sponsored actors.
Signs of AI-assisted coding have been observed in recent ransomware campaigns, including among gangs and their affiliates, but AI-assisted phishing and BEC campaigns remain the most common criminal use of GenAI.
An October 2024 report from Egress found that 75% of phishing kits offered on the dark web advertised AI capabilities, while VIPRE Security Group estimated in August that 40% of the BEC emails it detected in the second quarter of 2024 were AI-generated.
Meanwhile, Pillar Security's 2024 State of Attacks on GenAI report found that LLM jailbreaks succeed about 20% of the time, taking an average of just 42 seconds to complete.