Security management in the age of AI

March 9, 2025

Senior Gartner analysts opened the firm's Security and Risk Management Summit in Sydney this week amid hype surrounding artificial intelligence (AI) and cybersecurity, but acknowledged that "hype often contains a core of truth."

In cybersecurity, for example, it is fortunate that the hype has executives paying attention, Gartner vice-president Richard Addiscott said in the event's opening address. Gartner research found that more technology executives plan to increase their cybersecurity funding than plan to increase their AI funding.

Christine Lee, vice-president of research at Gartner, urged security teams to help executives take intelligent risks, noting that there is no such thing as complete protection. Higher levels of protection reduce risk but increase cost, while lower levels of protection increase risk while reducing cost. Recognizing this allows the organization to agree on an appropriate level of protection, she added.

To put this into operation, Addiscott recommended protection-level agreements (PLAs), which are analogous to service-level agreements, combined with outcome-driven metrics (ODMs).

Addiscott said PLAs help security teams manage stakeholders at various levels, because focusing on what has been agreed with the business deters technology-acquisition or technology-only thinking and keeps ODM discussions on the same page.

Of course, AI is another heavily hyped area. To capitalize on that hype, Addiscott proposed three steps: cultivate AI literacy, promote experimentation, and back AI initiatives with versatile security.

Approaching AI with a "beginner's mind" can help identify viable use cases, but that is not an argument for ignorance. Addiscott pointed to the development of internal AI champions as a way to improve AI literacy within the organization; that literacy in turn promotes responsible and productive use of AI.

Experimentation is good, but organizations need to use AI intentionally. Lee cited the development of an internal chatbot for cybersecurity coaching as an example: it answers employees' questions, freeing security staff to focus on higher-value tasks.

That said, rather than focusing narrowly on productivity, Addiscott recommended asking the question, "What can AI do for my use case?"

Tackling shadow AI

Chief information security officers (CISOs) cannot sit on their hands, as 98% of organizations surveyed say they are already using or plan to adopt generative AI.

Lee suggested using discovery tools to uncover instances of shadow AI, which can help identify and remediate security risks such as data leakage. Blocking such applications should be reserved for situations where the risk outweighs the benefits and no alternative product exists. Where a previously unapproved tool is granted permission, there is an opportunity to consider why approval was not sought in the first place and to improve the process.
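As a simple illustration of the discovery idea (a generic sketch, not a specific product named at the summit), the snippet below scans web proxy logs for requests to well-known generative AI services; the log format and domain list are assumptions.

```python
"""Minimal sketch: flag potential shadow-AI use by scanning web proxy logs for
well-known generative AI domains. The log format (timestamp, user, domain, URL
per line) and the domain list are illustrative assumptions; commercial discovery
tools cover far more services and signals."""
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com",
              "claude.ai", "copilot.microsoft.com", "perplexity.ai"}

def scan_proxy_log(path: str) -> Counter:
    hits: Counter = Counter()
    with open(path) as log:
        for line in log:
            # assumed format: "<timestamp> <user> <domain> <url>"
            parts = line.split()
            if len(parts) >= 3 and parts[2] in AI_DOMAINS:
                hits[(parts[1], parts[2])] += 1
    return hits

if __name__ == "__main__":
    # print the ten heaviest user/service combinations for review
    for (user, domain), count in scan_proxy_log("proxy.log").most_common(10):
        print(f"{user:20} {domain:28} {count} requests")
```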

Gartner vice-president Pete Shoard nominated shadow AI as one of the biggest generative AI issues facing CISOs and their teams, with a wide range of generative AI web products in use and AI increasingly embedded in everyday products. While these tools help people become more efficient, they carry many risks, including data leakage, oversharing, undetected inaccuracies, and even the use of actively malicious apps.

To mitigate the risks of shadow AI, Shoard proposed first identifying the various roles within the organization, such as content creation, and determining the key risks for each, such as brand damage. Second, appropriate AI tools can be allowed for those roles, along with acceptable use cases such as the production of non-technical content.

Establishing such a policy is not enough; it must be enforced. That means having a way to monitor anomalous use, using measures such as endpoint security products and role-based access control. Importantly, this can only be done at scale with automation and good exception management, so organizations need to evaluate AI usage-control tools with an eye on ease of deployment, depth of control, and the long-term viability of the vendor.
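To make the role-to-tool mapping Shoard described concrete, here is a minimal sketch of a role-based allowlist check; the roles, tools and use cases are invented for the example, and a real deployment would enforce this through identity and endpoint controls rather than application code.

```python
"""Minimal sketch of a role-based allowlist for AI tools: each role maps to the
tools and use cases it is permitted, and anything else is flagged for exception
handling. Roles, tools and use cases are illustrative only."""

POLICY = {
    "content_creation": {
        "allowed_tools": {"approved_text_generator"},
        "allowed_use_cases": {"non_technical_content"},
    },
    "software_engineering": {
        "allowed_tools": {"approved_code_assistant"},
        "allowed_use_cases": {"code_review", "test_generation"},
    },
}

def is_permitted(role: str, tool: str, use_case: str) -> bool:
    """Return True only if the role, tool and use case all match policy."""
    rule = POLICY.get(role)
    return bool(rule) and tool in rule["allowed_tools"] and use_case in rule["allowed_use_cases"]

# A marketer drafting non-technical content is allowed; the same tool used for
# an unlisted use case falls outside policy and should go to exception handling.
print(is_permitted("content_creation", "approved_text_generator", "non_technical_content"))   # True
print(is_permitted("content_creation", "approved_text_generator", "technical_documentation")) # False
```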

If an organization is building its own AI applications, Shoard suggested that security teams work with the AI team from the early stages of the project to ensure adequate attention to privacy, security and compliance.

Furthermore, AI applications raise more concerns than traditional applications because of issues such as bias and fairness. Gartner recommends an AI trust, risk and security management (AI TRiSM) program to ensure governance issues are addressed from the start.

AI security issues

Short-term AI issues worthy of CISOs' attention include attacks that use AI, such as deepfakes for fraud and spear phishing, as well as attacks on AI systems themselves, according to Gartner distinguished vice-president analyst and fellow Leigh McMullen.

"The best control we can introduce is human control," McMullen said. This includes setting up a secure channel for communicating with a real person whenever there is the slightest suspicion of a deepfake.

Examples of AI models being attacked directly include feeding them "mal-information" that affects their output. One group of artists was able to embed information at the sub-pixel level of images, and supplying just three such images to a large language model (LLM) was found to be enough to make it generate a picture of a cat when asked for one of a dog.

"It's not going to break our business, but what if someone did that to what we underwrite?" McMullen asked.

Another novel attack embedded "mal-information" in video subtitles positioned off-screen, so that an AI trying to understand what was going on in the video would ingest it, while it remained invisible to anyone watching. "If AI can change what the world believes, you will change that mental model," McMullen said.

A particularly despicable approach is to engineer situations in which an AI hallucinates software packages that do not actually exist, and to amplify those hallucinations until they appear in responses to other people's prompts. The attacker then plants malware under the guise of the hallucinated tool, which is duly downloaded by the victim.

"It's like the search engine optimization attacks we've seen… on a very, very different level," he said. Examples have been found in R and bioinformatics libraries.
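One practical counter to hallucinated-package attacks is to verify a suggested dependency before installing it. The sketch below is an illustration rather than anything Gartner recommended: it queries the public PyPI JSON API to confirm that a package name proposed by an AI assistant actually exists and has visible metadata; the same idea applies to R or any other ecosystem.

```python
"""Minimal sketch: sanity-check a package name suggested by an AI assistant
before installing it, to reduce the risk of hallucinated or typosquatted
packages. Assumes the public PyPI JSON API; the default package name below is
purely illustrative."""
import sys
import requests  # pip install requests

def check_package(name: str) -> None:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"'{name}' does not exist on PyPI - possibly hallucinated. Do not install.")
        return
    resp.raise_for_status()
    info = resp.json()["info"]
    # A real, established project usually has a summary, homepage and history;
    # a freshly registered name with none of these deserves manual review.
    print(f"name:     {info['name']}")
    print(f"version:  {info['version']}")
    print(f"summary:  {info['summary'] or '(none)'}")
    print(f"homepage: {info['home_page'] or info['project_urls'] or '(none)'}")

if __name__ == "__main__":
    check_package(sys.argv[1] if len(sys.argv) > 1 else "requests")
```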

During a panel discussion, Christopher Johnson, group technology director at real estate company Charter Hall, warned that "people don't think there's a risk" when using AI tools. Microsoft Copilot implements restrictions to stop people looking up their colleagues' pay, for example, but the company still needs to protect information held in confidence.

Allens CIO Bill Tanner said the law firm had already completed an enterprise search project, so permissions were largely in order. This allowed it to adopt Copilot more quickly, although anything with open permissions was still reviewed and reset so that only those who needed access had it. The firm was also well placed because it had adopted the enterprise version of ChatGPT when it first appeared.

Generative AI is important to the legal profession because law is all about language. But it is important to educate people about risk and to find opportunities to gain value from both generic and custom tools. Allens has a steering committee to ensure that AI projects align with the firm's strategic direction.

Johnson warned that individuals may be convinced that what they are doing with generative AI is safe when it is not, so they should be instructed that actions such as pasting sensitive information into Copilot are not permitted. The firm uses AI-generated summaries of activity but does not inspect the prompts employees give to Copilot, Tanner said.

Cybersecurity AI

There are some quick wins for AI within the cybersecurity function, Lee said, including security testing of the company's own code, runtime controls, and data masking. Strategic priorities should include data governance, developers, and incident response playbooks.
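As a concrete illustration of the data-masking idea (not a Gartner prescription), the sketch below redacts obvious identifiers before text is sent to an external AI tool; the patterns are deliberately crude, and a real deployment would use a dedicated DLP or masking product.

```python
"""Minimal sketch of pre-prompt data masking: redact obvious identifiers before
text is sent to an external AI service. Patterns are illustrative only."""
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # crude card-number shape
    "PHONE": re.compile(r"\b(?:\d[ -]?){9,12}\d\b"),  # crude phone-number shape
}

def mask(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarise the complaint from jane.doe@example.com about card 4111 1111 1111 1111."
    print(mask(prompt))
```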

Shoard said LLMs are not intelligent, but generative AI does have a role within security operations. Uses include summarizing alerts, interactive threat intelligence, generating an overview of attack risk, and documenting mitigations. However, such assistants should be evaluated on their ability to improve performance against existing security metrics rather than ad hoc productivity metrics.
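To make the alert-summarization use case concrete, here is a minimal sketch (my illustration, not something presented at the summit) that asks a hosted LLM to turn a raw alert into a short analyst briefing. It assumes the OpenAI Python SDK, an OPENAI_API_KEY environment variable, and an illustrative model name.

```python
"""Minimal sketch: summarize a security alert for a SOC analyst with an LLM.
Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in
the environment; the model name and alert are illustrative."""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ALERT = """\
rule=possible_credential_stuffing src=203.0.113.7 user=svc_backup
failed_logins=312 window=10m geo=unexpected asn=AS64500"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever model your organization has approved
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Summarize the alert in three bullet points: "
                    "what happened, why it matters, and a suggested first response step."},
        {"role": "user", "content": ALERT},
    ],
)

print(response.choices[0].message.content)
```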

On the topic of metrics, Gartner distinguished vice-president analyst Tom Scholz suggested two tests to apply before adopting a new metric. The first is: "Do you have the resources needed to take the measurements, put them in context, and maintain the metric?" Those resources might be better used in other ways.

The second is: "What will the executive audience do with the information?" This includes determining which decisions the metric informs, how often it is used, and whether it is used periodically or ad hoc.

Scholz recommended regularly providing board-level executives with four to six input-centric metrics, rather than operational and tactical detail.

Non-technical threats and quantum impacts

During the closing keynote, Gartner vice-president John Watts warned that of the non-technical threats, user error was the biggest. "We're really not good at creating security controls. When someone creates a security control that can't be used, (they) bypass it."

In a Gartner survey, 74% of business technologists said they had bypassed security guidance to get the job done. This means organizations need controls that employees see as enabling themselves and their organization, rather than as obstacles, he said.

A related issue is the sudden growth in employee activism where personal values do not match those of the organization. This has been seen at Google and Amazon, he said, where employees staged disruptive sit-ins because they did not want the company to work with the military. "It's funny to think about," he said.

On the technical side, he pointed to the possibility that quantum computers could break commonly used encryption algorithms. This would be a problem because previously secure communications could be decrypted, digital signatures would no longer guarantee non-repudiation, and changes would be required to identity and access management systems.

It is not clear when practical quantum computing will become a reality, but it is time for organizations to build a cryptographic inventory and replace weak algorithms with quantum-safe methods over the coming years. "2030 seems like a pretty good goal" for completing such a project, he suggested.
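A cryptographic inventory usually starts with something as mundane as finding out where vulnerable algorithms are referenced at all. The naive sketch below (an illustration only, and far from a complete inventory) greps a source tree for mentions of quantum-vulnerable algorithms as a first pass; a real inventory also has to cover certificates, TLS configurations, firmware and vendor products.

```python
"""Naive sketch of a first-pass cryptographic inventory: search a source tree
for references to algorithms considered vulnerable to quantum attack. Purely
illustrative; the suffix list and keyword list are assumptions."""
from pathlib import Path
import re

QUANTUM_VULNERABLE = re.compile(r"\b(RSA|ECDSA|ECDH|DSA|DiffieHellman)\b", re.IGNORECASE)
SOURCE_SUFFIXES = {".py", ".java", ".go", ".ts", ".c", ".cpp", ".tf", ".yaml", ".yml"}

def scan(root: str) -> None:
    """Print every line in the tree that mentions a quantum-vulnerable algorithm."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in SOURCE_SUFFIXES:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if QUANTUM_VULNERABLE.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    scan(".")  # start from the current directory
```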
