According to Microsoft’s latest Cyber Signals report, AI-powered scams are evolving rapidly as cybercriminals use new technologies to target victims.
Over the past year, the tech giant has thwarted $4 billion in scam attempts and rejected around 1.6 million bot sign-up attempts per hour, indicating the scale of this growing threat.
The ninth edition of Microsoft’s Cyber Signals report, titled “AI-powered deception: Emerging fraud threats and countermeasures,” reveals how artificial intelligence has lowered the technical barriers for cybercriminals, allowing even low-skilled actors to generate sophisticated scams with minimal effort.
What previously took scammers days or weeks to create can now be accomplished in minutes.
This democratisation of fraud capability represents a shift in the criminal landscape that affects consumers and businesses worldwide.
The evolution of AI-enhanced cyber fraud
Microsoft’s report highlights how AI tools scan and scrape the web for company information, helping cybercriminals build detailed profiles of potential targets and enabling highly convincing social engineering attacks.
Bad actors can lure victims into complex fraud schemes using fake AI-enhanced product reviews and AI-generated storefronts.
The volume of these threats continues to grow, according to Kelly Bissell, corporate vice president of Anti-Fraud and Product Abuse at Microsoft Security. “Cybercrime is a trillion-dollar problem, and it has gone up every year for the past 30 years,” he says in the report.
“I think we have an opportunity today to adopt AI faster so we can detect and close exposure gaps quickly. Now we can make a huge difference at scale and build security and fraud protections into our products faster.”
Microsoft’s anti-fraud team reports that AI-powered fraud attacks occur globally, with significant activity originating from China and Europe, particularly Germany, owing in part to Germany’s status as one of the largest e-commerce markets in the European Union.
The report notes that the larger a digital marketplace, the more likely it is to see a proportionate degree of attempted fraud.
E-commerce and employment fraud lead the way
Two areas of particular concern for AI-enhanced fraud are e-commerce and employment recruitment. In the e-commerce space, fraudulent websites can now be created in minutes using AI tools and minimal technical knowledge.
These sites often mimic legitimate businesses, using AI-generated product descriptions, images, and customer reviews to deceive consumers into believing they are interacting with real merchants.
AI-powered customer service chatbots add another layer of deception: they can interact with customers persuasively, stall chargebacks with scripted excuses, and respond to complaints with AI-generated answers that make scam sites appear professional.
Job seekers are equally at risk. According to the report, generative AI has made it significantly easier for scammers to create fake listings on employment platforms. Criminals generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-driven email campaigns to phish job seekers.
AI-powered interviews and automated emails make these scams more credible and harder to identify. “Fraudsters often ask for personal information, like resumes or even bank account details, under the guise of verifying the applicant’s information,” the report states.
Red flags include unsolicited job offers, requests for payment, and communication via informal platforms such as text messages or WhatsApp.
Microsoft’s countermeasures against AI scams
To combat these emerging threats, Microsoft says it has implemented a multi-pronged approach across its products and services. Microsoft Defender for Cloud provides threat protection for Azure resources, while Microsoft Edge, like other major browsers, offers website typo protection and domain impersonation protection. The report also highlights Edge’s use of deep-learning technology to help users steer clear of fraudulent websites.
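The typo-protection idea Edge applies can be illustrated with a simple heuristic (this is an illustrative sketch, not Microsoft’s actual implementation): compare a visited domain against a list of well-known domains using Levenshtein edit distance, and flag near-misses as likely typosquats.

```python
from typing import Optional


def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]


# Illustrative allow-list; a real system would use a far larger one.
KNOWN_DOMAINS = ["microsoft.com", "paypal.com", "amazon.com"]


def looks_like_typosquat(domain: str, max_distance: int = 2) -> Optional[str]:
    """Return the known domain this one appears to imitate, or None."""
    for known in KNOWN_DOMAINS:
        d = edit_distance(domain.lower(), known)
        if 0 < d <= max_distance:       # close to, but not exactly, a known domain
            return known
    return None
```

For example, `looks_like_typosquat("micros0ft.com")` would flag the domain as imitating `microsoft.com`, while the genuine `microsoft.com` passes through unflagged. Production systems combine this with homoglyph detection and reputation data rather than edit distance alone.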
The company has also enhanced Windows Quick Assist with strengthened warning messages that alert users to possible tech support scams before they grant access to anyone claiming to be from IT support. Microsoft now blocks an average of 4,415 suspicious Quick Assist connection attempts daily.
Microsoft has also introduced a new anti-fraud policy as part of its Secure Future Initiative (SFI). As of January 2025, Microsoft product teams must perform fraud prevention assessments and implement fraud controls as part of their design process, ensuring products are “fraud-resistant by design.”
As AI-driven scams continue to evolve, consumer awareness remains important. Microsoft advises users to be wary of urgency tactics, verify the legitimacy of websites before making purchases, and never provide personal or financial information to unverified sources.
For enterprises, implementing multi-factor authentication and deploying deepfake detection algorithms can help mitigate risk.