Aaron Bugal, Field CISO, APJ, Sophos
Artificial intelligence (AI) is gaining significant airtime as a silver bullet for the gaps in cybersecurity. Across Malaysia, businesses are rushing to deploy AI-powered tools that promise faster detection, automated response, and decision-making at machine speed. It’s a seductive promise: solve cyber risk with smart technology.
But here’s the problem. Cybercriminals are adopting AI just as quickly, and they are not constrained by compliance, ethics, or the need to protect a bottom line.
Cybercrime levels up
In the current scam epidemic in Malaysia, attackers are combining QR code scams, deepfake voices, and fake banking apps with creative and ruthless social engineering to evade detection and fool even the most experienced professionals. The technology is convincing, but the psychology behind it is what cuts deepest.
Imagine a voicemail that mimics a CFO approving a fund transfer, or a chatbot that looks like it’s from a bank but is siphoning data in real time. These are not hypothetical scenarios; they are happening right now.
By September 2025, Malaysia had recorded about 48,000 online fraud cases, with cumulative losses of nearly RM2 billion. In 2024, fraud alone cost Malaysian organizations RM2.45 billion. If AI protection were sufficient on its own, those numbers would be moving in the opposite direction.
Automation is not a get-out-of-jail-free card
To be clear, AI is not infallible. It only knows what it has been trained on. Cyber attackers, on the other hand, seek novelty. They twist, turn, and weaponize the unknown.
Even the best AI can be tricked through data poisoning, model manipulation, or simple exploitation of human error. And human error is the one constant that AI still cannot patch. Millions of ringgit spent on cybersecurity can be wasted in an instant if a distracted employee clicks on a fraudulent link.
The machine still needs an operator
So what’s the answer? It’s not about replacing humans with machines. It’s about empowering humans with the right tools. Managed detection and response (MDR) is one way to do this.
Let AI scrutinize data and report anomalies, allowing human analysts to interpret gray areas and take decisive action.
AI brings speed. Humans bring context. You need both if you want to be resilient, not just compliant.
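To make that division of labor concrete, here is a minimal sketch of the idea, assuming a simple statistical baseline per user. The thresholds, field names, and response actions are hypothetical, not a description of any particular MDR service: the model does the fast, repetitive scoring, and anything ambiguous is routed to a person with the context to decide.

```python
# Illustrative sketch only (not any specific MDR product): AI scores events for
# anomalies at machine speed, and the gray areas are escalated to a human analyst.
# The thresholds, field names, and actions here are hypothetical.
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class Event:
    user: str
    bytes_out: int  # outbound data volume for this session


def anomaly_score(event: Event, baseline: list[int]) -> float:
    """Z-score of outbound traffic against the user's own historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return 0.0 if sigma == 0 else (event.bytes_out - mu) / sigma


def triage(event: Event, baseline: list[int]) -> str:
    score = anomaly_score(event, baseline)
    if score > 6:
        return "auto-contain"         # unmistakable outlier: respond at machine speed
    if score > 3:
        return "escalate-to-analyst"  # gray area: a human interprets the context
    return "log-only"                 # normal variation: keep the record, move on


# A moderate spike lands in the analyst queue rather than triggering an automatic block.
history = [120, 150, 130, 140, 135, 145, 138, 142]
print(triage(Event(user="finance-01", bytes_out=180), history))  # escalate-to-analyst
```

In practice the scoring is far richer than a z-score, but the hand-off is the point: the machine supplies speed and consistency, the analyst supplies judgment.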
Continuously train your employees
Cybersecurity training is not a one-time induction session. Threats evolve rapidly, and your people must evolve with them.
From spotting deepfake content to recognizing AI-powered fraud, regular, realistic training is your best defense. That includes simulated attacks that don’t feel like obviously staged exercises.
Build a security strategy that doesn’t assume AI will save you
AI tools are just one layer. You need protection across all dimensions: endpoint, network, identity, cloud, email, data. Each must be part of an integrated, layered security framework, and that framework needs to be constantly reviewed and improved.
This is not about outspending the attacker. It’s about outsmarting them.
Trust is your currency. Don’t let it lose value
Malaysia is pressing ahead with its digital economy ambitions. But here’s the kicker: if you’re betting everything on automation, you’re also betting customer trust on the moment that automation fails.
Regulators and customers alike don’t care how “sophisticated” the AI is if it can’t stop a breach.
Final words: AI alone won’t save you, but in the right hands, it might
There is no doubt that AI is powerful. But cybersecurity is, and always will be, a human story.
It’s about wise decisions, keen awareness, and the kind of judgment that machines can’t reproduce. The future of cyber defense in Malaysia will not be fully automated; it will be powered by trained, skilled people steering the ship, not just watching the dials.
In the end, it’s not AI that will save Malaysian businesses. It’s the people who know how to use it wisely with their eyes wide open.

