Over the next 12 months, we will continue to see the remarkable benefits that AI can bring. At the same time, we predict that 2025 will be the year AI fundamentally disrupts both the role of developers and the security posture of companies. While advances in AI will drive positive change in the long run, organizations will endure growing pains in three key areas: security risk management, developer roles, and regulation.
Record levels of risk generated by AI
In 2025, we will see an explosion of brand-new software vulnerabilities, with AI-written code emerging faster than ever. AI-generated code is trained on open source libraries and can reproduce their unpatched vulnerabilities, compounding the problem. Worryingly, the adoption of GenAI for writing code is roughly two years ahead of the adoption of the tooling needed to mitigate the associated security risks. Just as developers leverage AI as part of their development process, organizations need to leverage AI to make finding and fixing software flaws faster, easier, and more manageable. Vulnerabilities left unaddressed for more than a year become "security debt," and without AI intervention this mountain of security debt will become increasingly difficult to tackle.
A new kind of developer is born
AI has already transformed developer roles significantly and will continue to do so over the coming months and years. AI-assisted coding has leveled the playing field for many recent graduates trying to break into the industry. But as the technology continues to learn and become smarter, the skills gap between what companies need and what job seekers can offer will widen. This is especially true for the entry-level roles through which new developers aim to gain experience.
The skills required to be a great developer will change. In fact, some of today's software development roles will be transformed into entirely different jobs. For example, as AI is integrated into the DevOps process, prompt engineering skills will become increasingly important for both developer and security teams. The traditional systems engineering role will evolve to incorporate prompt engineering, and demand for this type of training will rise accordingly.
At the same time, developers will increasingly rely on AI to repair flaws automatically, improving their day-to-day experience. This progression is similar to making a phone call: decades ago we had to memorize someone's number to reach them, but today all we need to do is tap a contact on our phone. For developers, the equivalent is writing secure code without having learned secure coding from scratch. Instead, they adopt a process that automatically identifies, tests, and fixes vulnerabilities.
In short, it is essential that both developers and security teams learn how to make the most of AI from 2025 onwards. However, this is not the only skill that matters. Interpersonal "human" skills, such as business insight and process management, will remain crucial for security and developer teams to function well.
Keeping up with regulations
AI and cybersecurity regulations are evolving at very different rates across regions. In Europe, for example, the European Union's Digital Operational Resilience Act (DORA), which applies from January 17, 2025, aims to strengthen the IT security of financial organizations.
Organizations should also consider AI regulation. Existing AI laws focus primarily on ethical guidelines, bias, safety, and disinformation rather than security. Over the next year, however, AI governance will become a key concern for both cybersecurity experts and regulators, particularly as U.S. regulators draft criteria for this ever-evolving technology.
Keeping up with this regulatory complexity is a challenge for businesses and software providers alike. Ensuring compliance with multiple frameworks demands significant time and resources. When the landscape becomes too complicated, many businesses risk spending their time on compliance box-ticking rather than actually improving their security posture and ensuring the right controls are in place.
A year of change ahead
In my decades working in cybersecurity, I have learned that it is difficult to predict exactly what will come when technology advances at such a fast pace. Another groundbreaking development like generative AI could push things in a direction entirely different from what businesses have planned for.
For these predictions, however, I have stuck to what I know and to the trends I have observed over the past year. Generative AI will continue to help developers write code at a record-breaking pace, but our focus must shift to making sure that code is secure. This is why regulations such as the EU AI Act and DORA are so necessary: they add an "or else" to what have always been security best practices.
The many changes AI brings mean that security teams will need to adapt to a new reality in their roles. If AI is handled correctly, however, the pros will far outweigh the cons, resulting in safer software and stronger teams. We are excited to see what 2025 holds and how it will surprise us.