OpenAI has published its annual report on the malicious uses of AI.
By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we have been able to detect, disrupt, and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations, and scams.
These operations originated in many parts of the world, acted in many different ways, and focused on many different targets. A significant number appeared to originate in China: four of the ten cases in this report, spanning social engineering, covert influence operations, and cyber threats, likely had a Chinese origin. But abuses from many other countries have been disrupted, too. The report includes case studies of a likely task scam from Cambodia, apparent comment spamming from the Philippines, covert influence attempts potentially linked to Russia and Iran, and deceptive employment schemes.
These reports give a brief window into the ways malicious actors around the world are using AI. I say "brief" because last year the models weren't good enough for these sorts of things, and next year threat actors will run their AI models locally, so we won't have this kind of visibility.
Wall Street Journal article (also here). Slashdot thread.
*** This is a Security Bloggers Network syndicated blog from Schneier on Security authored by Bruce Schneier. Read the original post at https://www.schneier.com/blog/archives/2025/06/report-on-the-malicious-uses-of-ai.html