Most industry analysts expect organizations to accelerate their efforts to leverage generative artificial intelligence (GenAI) and large language models (LLMs) for a variety of use cases over the next year.
Typical examples include customer support, fraud detection, content creation, data analysis, knowledge management, and, increasingly, software development. In a recent survey of 1,700 IT professionals conducted by Centient on behalf of OutSystems, 81% of respondents said they currently use GenAI to assist with coding and software development, and almost three-quarters (74%) plan to build 10 or more apps over the next 12 months using AI-powered development approaches.
While these use cases promise to bring significant efficiency and productivity gains to organizations, they also introduce new privacy, governance, and security risks. Here are six AI-related security issues that industry experts say IT and security leaders should pay attention to over the next 12 months.
Risks increase as AI coding assistants become mainstream
The use of AI-based coding assistants such as GitHub Copilot, Amazon CodeWhisperer, and OpenAI Codex will move from experimental and early-adopter status to mainstream use, especially among startups. The touted benefits of such tools include increased developer productivity, automation of repetitive tasks, fewer errors, and faster development times. However, as with all new technologies, they have shortcomings as well. From a security perspective, these include automated coding responses that propagate vulnerable code, leak data, and encourage insecure coding practices.
"While AI-based code assistants undoubtedly offer powerful benefits in terms of autocompletion, code generation, reuse, and making coding more accessible to non-engineering audiences, they are not without risks," said Derek Holt, CEO of Digital.ai. The biggest factor is that an AI model is only as good as the code it was trained on, and early users have seen coding errors, security anti-patterns, and code sprawl when using AI. "Enterprise users should continue to scan for known vulnerabilities with dynamic application security testing (DAST) and static application security testing (SAST) tools, and harden code against reverse-engineering attempts, to limit negative impacts and ensure the expected productivity gains are realized," Holt said.
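To make that advice concrete, the sketch below shows how a CI step might gate merges on SAST findings so AI-assisted code gets the same scrutiny as hand-written code. It is a minimal, illustrative example, not a prescription from Holt or Digital.ai; it assumes the open-source Semgrep scanner is installed and that the code under review lives in a hypothetical `src/` directory.

```python
# Minimal sketch: run a SAST scan (Semgrep, assumed installed) over code that
# includes AI-generated changes, and fail the build if findings are reported.
# The "src/" path and fail-on-any-finding policy are illustrative assumptions.
import json
import subprocess
import sys

def run_sast_scan(target_dir: str = "src") -> list[dict]:
    """Run Semgrep with its default registry rules and return the findings."""
    result = subprocess.run(
        ["semgrep", "scan", "--config", "auto", "--json", target_dir],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout)
    return report.get("results", [])

if __name__ == "__main__":
    findings = run_sast_scan()
    for f in findings:
        print(f"{f['path']}:{f['start']['line']}  {f['check_id']}")
    # Non-zero exit fails the CI job, blocking the merge until findings are fixed.
    sys.exit(1 if findings else 0)
```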
AI accelerates adoption of xOps practices
As more organizations look to embed AI capabilities into their software, the hope is that DevSecOps, DataOps, and ModelOps (the practice of managing and monitoring AI models in production) will be integrated into a broader, more comprehensive xOps management approach, Holt said. Driven by AI-enabled software, the lines between traditional declarative apps, which follow predefined rules to achieve specific outcomes, and LLM and GenAI apps, which dynamically generate responses based on patterns learned from training data, are becoming increasingly blurred, he says. He points out that this trend will put new pressure on operations, support, and QA teams and drive xOps adoption.
"xOps is a new term that outlines the DevOps requirements for creating applications that leverage in-house or open-source models trained on a company's own data," he says. "This new approach integrates and synchronizes traditional DevSecOps processes with DataOps, MLOps, and ModelOps processes into a unified, end-to-end life cycle for delivering mobile or web applications that leverage AI models." Holt believes this emerging set of best practices will be critical to ensuring companies can deliver high-quality, secure, and supportable AI-enhanced applications.
Shadow AI: A big security problem
The easy availability of a rapidly expanding range of GenAI tools is facilitating unauthorized use of the technology in many organizations and creating a new set of challenges for already overburdened security teams. One example is the rapidly proliferating, and often uncontrolled, use of AI chatbots by employees for various purposes. The trend has heightened concerns in many organizations about sensitive data being inadvertently exposed.
Nicole Carignan, vice president of strategic cyber AI at Darktrace, expects security teams to see a surge in the unsanctioned use of such tools next year. "We will see an explosion of tools that use AI and generative AI within enterprises and on the devices employees use, leading to a rise in shadow AI," Carignan says. "If unchecked, this raises serious questions and concerns about data loss prevention, as well as compliance concerns as new regulations such as the EU AI Act start to take effect." She expects chief information officers (CIOs) and chief information security officers (CISOs) to come under increasing pressure to implement capabilities to detect, track, and root out unauthorized use of AI tools in their environments.
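One simple way to start building that kind of visibility is to flag outbound traffic to known GenAI services in web proxy logs. The Python sketch below is purely illustrative and not a Darktrace recommendation; the log file name, column names, and domain list are all assumptions and deliberately incomplete.

```python
# Minimal sketch: flag potential shadow-AI use by matching proxy log entries
# against a short, illustrative list of GenAI service domains.
# Assumes a CSV log named "proxy.csv" with "user" and "destination" columns.
import csv
from collections import defaultdict

GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def find_shadow_ai(log_path: str = "proxy.csv") -> dict[str, set[str]]:
    """Return a mapping of user -> GenAI destinations that user contacted."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            dest = row["destination"].lower()
            if any(dest == d or dest.endswith("." + d) for d in GENAI_DOMAINS):
                hits[row["user"]].add(dest)
    return hits

if __name__ == "__main__":
    for user, destinations in sorted(find_shadow_ai().items()):
        print(f"{user}: {', '.join(sorted(destinations))}")
```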
AI will augment human skills, not replace them
AI is good at processing huge volumes of threat data and identifying patterns in that data. But for some time to come, it will remain at best an augmentation tool, adept at handling repetitive tasks and automating basic threat detection functions. According to Stephen Kowski, field CTO at SlashNext Email Security+, the most successful security programs over the coming year will be those that continue to combine AI's processing power with human creativity.
Many organizations will continue to need human expertise to identify and respond to real-world attacks that evolve beyond the historical patterns AI systems are trained on, he says. Effective threat hunting will still rely on human intuition and skill to spot subtle anomalies and connect seemingly unrelated indicators. "The key is achieving the right balance, where AI handles the high volume of routine detections while skilled analysts investigate emerging attack patterns and determine strategic responses."
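The balance Kowski describes can be pictured as a simple routing rule: let automation close out high-confidence matches against known patterns, and queue anything novel or ambiguous for an analyst. The Python sketch below is illustrative only; the alert fields and the 0.9 threshold are assumptions, not part of any vendor's product.

```python
# Minimal sketch of AI/human alert triage: auto-handle high-confidence matches
# to known patterns, escalate anything novel or low-confidence to an analyst.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    score: float          # model confidence in the detection (0.0 - 1.0)
    known_pattern: bool   # True if it matches a previously seen attack pattern

def triage(alert: Alert) -> str:
    """Decide whether automation or a human analyst should own this alert."""
    if alert.known_pattern and alert.score >= 0.9:
        return "auto-remediate"      # routine, high-volume detections
    return "analyst-queue"           # novel or ambiguous: human judgment needed

if __name__ == "__main__":
    alerts = [
        Alert("email-gateway", 0.97, True),   # classic phishing template
        Alert("endpoint", 0.55, False),       # odd process chain, never seen before
    ]
    for a in alerts:
        print(a.source, "->", triage(a))
```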
The ability of AI to quickly analyze large data sets will increase the need for cybersecurity professionals to hone their own data analysis skills, adds Julian Davies, vice president of advanced services at Bugcrowd. "The ability to interpret AI-generated insights will be essential for detecting anomalies, predicting threats, and improving overall security posture." Engineering skills will also become increasingly useful for organizations looking to get the most value from their AI investments, he added.
Attackers leverage AI to exploit open source vulnerabilities
Venky Raju, field CTO at ColorTokens, expects attackers to leverage AI tools to exploit vulnerabilities in open source software and to automatically generate exploit code. "Even closed-source software is not immune, because AI-based fuzzing tools can identify vulnerabilities without access to the original source code. That is a matter of grave concern," Raju said.
In a report earlier this year, CrowdStrike cited AI-enabled ransomware as an example of how attackers are using AI to hone their malicious capabilities. Attackers can also use AI to probe targets, identify system vulnerabilities, encrypt data, and easily adapt and modify ransomware to evade endpoint detection and remediation mechanisms.
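For readers unfamiliar with the black-box fuzzing Raju refers to, the sketch below shows the basic loop in Python: mutate an input and watch the target for crashes, with no need for source code. Real AI-assisted fuzzers layer learned input models and coverage feedback on top of this loop; the target binary name and seed bytes here are hypothetical.

```python
# Minimal black-box mutation-fuzzing sketch (illustrative only).
# Assumes a hypothetical local binary "./parse_image" that takes a file path.
import os
import random
import subprocess
import tempfile

SEED = b"\x89PNG\r\n\x1a\n" + b"\x00" * 64  # hypothetical seed input

def mutate(data: bytes) -> bytes:
    """Flip a handful of random bytes in the seed input."""
    buf = bytearray(data)
    for _ in range(random.randint(1, 8)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def crashes(payload: bytes) -> bool:
    """Return True if the target is killed by a signal on this input."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(payload)
        path = f.name
    try:
        result = subprocess.run(["./parse_image", path], capture_output=True)
        return result.returncode < 0  # negative => terminated by a signal
    finally:
        os.unlink(path)

if __name__ == "__main__":
    for i in range(1000):
        if crashes(mutate(SEED)):
            print(f"Crash found on iteration {i}")
            break
```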
Verification and human oversight become critical
Organizations will continue to find it difficult to trust AI fully and implicitly to do the right thing. A recent Qlik survey of 4,200 executives and AI decision-makers found that most respondents overwhelmingly favor the use of AI for a variety of applications. At the same time, 37% said their senior executives lack trust in AI, 42% of mid-level executives expressed a similar view, and about 21% reported that their customers also distrust AI.
"Trust in AI will continue to be a complex balance of benefits versus risks, as current research shows that eliminating bias and hallucinations may be counterproductive, or simply impossible," says SlashNext's Kowski. "While industry agreements provide some ethical framework, the subjective nature of ethics means that different organizations and cultures will continue to interpret and implement AI guidelines differently." "A practical approach is to implement robust verification systems and maintain human oversight, rather than seeking complete reliability," he says.
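The verification-plus-oversight pattern Kowski describes can be as simple as wrapping model output in automated checks and routing anything that fails, or that the model is unsure about, to a human reviewer. The Python sketch below is an illustrative assumption of how such a gate might look, not a reference to any specific product; the PII regex and confidence threshold are placeholders.

```python
# Minimal sketch of an output-verification gate with human oversight:
# automated checks run first, and anything low-confidence or policy-violating
# is routed to a human reviewer instead of being trusted implicitly.
import re

PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., US SSN-like strings

def verify_output(text: str, model_confidence: float) -> str:
    """Return 'approved' or 'needs-human-review' for a model response."""
    if PII_PATTERN.search(text):
        return "needs-human-review"   # possible sensitive-data leak
    if model_confidence < 0.8:
        return "needs-human-review"   # the model itself is unsure
    return "approved"

if __name__ == "__main__":
    samples = [
        ("Your ticket has been escalated to tier 2 support.", 0.95),
        ("Customer SSN is 123-45-6789, please confirm.", 0.99),
        ("The outage was probably caused by a DNS change.", 0.60),
    ]
    for text, conf in samples:
        print(verify_output(text, conf), "-", text)
```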
Bugcrowd’s Davies says there is already a growing need for experts who can deal with the ethical implications of AI. Their role is to ensure privacy, prevent bias, and maintain transparency in AI-driven decision-making. “The ability to test AI for unique security and safety use cases is becoming important,” he says.