Cybersecurity researchers have uncovered a sophisticated malware campaign that uses fake AI video-generation software to steal sensitive data from Windows and Mac users, creating new risks as companies rush to adopt artificial intelligence tools.
Security experts warn that the campaign, first reported by BleepingComputer, uses stolen code-signing certificates and professional-looking websites. This represents a new threat vector as organizations adopt AI content tools. Victims are advised to immediately reset compromised credentials and enable multi-factor authentication on sensitive accounts.
“The recent rise of fake AI video generation tools is an alarming development that shows how cybercriminals are capitalizing on emerging trends,” Ed Gaudet, CEO and founder of Censinet, told PYMNTS. “As AI video creation becomes more prevalent, companies need to validate their tools, set security protocols, and take steps to protect their creative teams from fraud.”
The surge in AI-related fraud threatens to undermine consumer trust in legitimate e-commerce platforms that sell artificial intelligence content tools, potentially slowing adoption among online shoppers and sellers. Small businesses and content creators who fall victim to these scams face significant disruption to their online operations, as compromised payment credentials and authentication tokens can lead to fraudulent transactions and account takeovers on major e-commerce platforms.
Fake video
The scam revolves around a fraudulent video-editing application called “EditProAI,” promoted through deepfake political videos on social media. Downloading the software installs information-stealing malware that collects passwords, cryptocurrency wallets, and authentication tokens, giving attackers a potential entry point into corporate networks.
Fraudsters promote the malicious software through targeted social media ads featuring high-profile deepfake content, such as fake videos of politicians, that link to convincing copycat websites. These sites imitate legitimate artificial intelligence platforms, complete with standard elements such as cookie consent banners and professional design, making them difficult to distinguish from genuine services.
When a victim clicks “Get it now,” malware tailored to the operating system (Lumma Stealer for Windows and AMOS for macOS) is downloaded. These programs covertly harvest stored browser data while masquerading as AI video-editing software. Attackers aggregate the stolen data through control panels, then sell it on cybercrime markets or use it to infiltrate corporate networks.
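Because the campaign delivers different binaries per platform, indicators of compromise differ by operating system. As a minimal defensive sketch (the digest below is a placeholder, not a real indicator from this campaign), a script can compare a downloaded installer's SHA-256 hash against a blocklist published by a threat-intelligence feed:

```python
import hashlib

# Placeholder blocklist -- substitute real SHA-256 indicators of compromise
# from a threat-intelligence feed; this entry is the digest of an empty file,
# used here purely for illustration.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in 64 KB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_bad(path: str) -> bool:
    """True if the file's digest appears in the blocklist."""
    return sha256_of(path) in KNOWN_BAD_SHA256
```

Hash matching only catches already-catalogued samples, which is why the experts quoted below also stress behavioral endpoint detection and user training.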
A new type of cybercrime
AI-generated video scams that deliver malware are becoming more sophisticated and dangerous. For example, cybercriminals have created YouTube tutorials promising free access to popular software such as Photoshop and Premiere Pro. These videos contain links to malicious programs such as Vidar, RedLine, and Raccoon that steal personal information, including passwords and payment data. One example involved malware disguised as a cracked version of software, which infected thousands of devices and extracted sensitive information from unsuspecting users. These tutorials are often professionally produced and exploit user trust by mimicking legitimate content, making the malware campaigns difficult to detect and counter.
“Downloading niche software exposes users to risks such as ransomware, information stealers, and cryptocurrency miners, which were top of mind for security experts years ago,” Tirath Ramdas, founder and CEO of Chamomile.ai, told PYMNTS. “But protection has really improved, so I don’t think these problems will come back to the same extent as before.”
Ramdas said endpoint detection software has improved. Antivirus solutions now use artificial intelligence to sharpen their detection capabilities, and browsers are better at blocking the installation of potentially unwanted applications (PUAs).
“Mac and Windows operating systems are hardened by default,” he added. “And for businesses, moving to a zero trust architecture means that even if someone in the marketing department is tricked into installing malware, the impact is better isolated than before.”
Gaudet said when deadlines are tight, creative teams are more susceptible to scams that promise quick results.
“To combat this, companies need to provide cybersecurity awareness training specific to the unique challenges of creative teams,” he said. “Educating employees to be aware of phishing attempts and software reliability and to report suspicious activity is critical.”