Organizations across industries manage vast amounts of data: customer records, financial data, sales figures, reference numbers, and more. Data is one of the most valuable assets a company has, and keeping it safe is the responsibility of the entire organization, from IT managers to individual employees.
However, the rapid adoption of generative AI tools demands a greater focus on security and data protection. For most organizations, using generative AI is no longer a question of if but when; adopting it is essential to staying competitive and innovative.
Throughout my career, I have experienced first-hand the impact of many new trends and technologies. The influx of AI is different. For companies like Smartsheet, the challenge is two-pronged: we are customers of companies that are embedding AI into the services we use, and we are also a company that builds and deploys AI capabilities in its own products.
To keep your organization safe in the era of generative AI, we recommend that CISOs focus on three areas:
1. Be transparent about how GenAI is trained, how it works, and how it is used with customers.
2. Build strong partnerships with vendors.
3. Educate employees on the importance of AI security and the risks associated with it.
Transparency
One of the first questions I ask when talking to vendors is about the transparency of their AI systems: which publicly available models do they use, and how is customer data protected? Vendors should be prepared to disclose how your data is kept separate from other companies' data.
Vendors should also be clear about how they train the AI capabilities within their products and when and how those capabilities are used with customers. As a customer, if you feel your concerns and feedback aren't being taken seriously, it may be a sign that your security isn't being taken seriously either.
If you are a security leader innovating with AI, transparency should be a cornerstone of your responsible AI principles. Publicly share your AI principles and document how your AI systems work, just as you would expect from a vendor. An important and often overlooked part of this is anticipating how things will change. AI will inevitably continue to evolve and improve over time, so CISOs should be proactive in sharing how they expect that evolution to change their use of AI and the steps they will take to further protect customer data.
Partnership
Building and innovating with AI often means relying on providers who have done the heavy and expensive work of developing the underlying AI systems. When working with these providers, customers shouldn't have to hunt for what is being kept from them. Instead, providers should strive to be proactive and forthright.
Finding a reliable partner requires more than a contract. The right partner will deeply understand your needs and strive to meet them. Working with a trusted partner means you can focus on what AI-powered technology can do to drive value for your business.
For example, in my current role, my team evaluated and selected several partners to build AI into our products in the way we felt was safest, most responsible, and most effective. Building AI solutions natively can be time-consuming and expensive, and the result may still fall short of your security requirements, so leveraging a partner with AI expertise can help your business maintain the data protection your organization requires while reducing the time it takes to realize value.
By working with trusted partners, CISOs and security teams can not only deliver innovative AI solutions to their customers faster, but also help their organizations keep pace with the rapid iteration of AI technology and adapt to evolving data protection needs.
Education
To keep your organization safe, it's important that all employees understand the importance of AI security and the risks associated with the technology. This includes ongoing training that teaches employees how to use AI properly at work and in their personal lives, and that helps them recognize and report emerging security threats.
Phishing emails are a prime example of a common threat employees face every week. The standard advice for spotting a phishing email used to be to look for typos. Now that AI tools are so readily available, attackers are far more capable: the clear, obvious signs we used to train employees to watch for are disappearing, and we're seeing more sophisticated attacks.
Ongoing training on seemingly simple skills like spotting a phishing email will need to evolve as generative AI and the broader security landscape change. Leaders can also go a step further and run a series of mock phishing campaigns to test employees' knowledge as new tactics emerge.
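As a rough illustration of that last point, here is a minimal sketch of how a security team might generate a batch of mock phishing emails for a simulation. All names, addresses, and the email template are hypothetical, and a real program (such as one Smartsheet might use) would integrate with mail and reporting systems; the idea shown is simply giving each recipient a unique tracking token so that clicks can be attributed and follow-up training targeted.

```python
import random
import secrets

# Hypothetical employee directory (illustrative only).
EMPLOYEES = [
    "alice@example.com",
    "bob@example.com",
    "carol@example.com",
    "dave@example.com",
]

# A deliberately plausible-looking lure. Modern AI-written phishing emails
# rarely contain the typos employees were once trained to spot.
TEMPLATE = (
    "Subject: Action required: confirm your benefits enrollment\n\n"
    "Hi,\n\n"
    "Please review your enrollment before Friday:\n"
    "https://training.example.com/track/{token}\n"
)

def build_campaign(employees, sample_size, seed=None):
    """Pick a random sample of employees and render one mock phishing
    email per recipient, each with a unique tracking token."""
    rng = random.Random(seed)
    targets = rng.sample(employees, sample_size)
    campaign = []
    for address in targets:
        token = secrets.token_urlsafe(8)  # unique per recipient
        campaign.append({
            "to": address,
            "token": token,
            "body": TEMPLATE.format(token=token),
        })
    return campaign

if __name__ == "__main__":
    for message in build_campaign(EMPLOYEES, 2, seed=42):
        print(message["to"], message["token"])
```

Because each token appears in exactly one recipient's email, whoever clicks a tracked link can be matched back to an individual, letting training focus on the people who need it most.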
Keeping your organization safe in the age of generative AI is no easy task. As technology evolves, threats become more sophisticated. But the good news is that no company faces these threats alone.
By collaborating, sharing knowledge, and focusing on transparency, partnership, and education, CISOs can make great strides in securing their data, customers, and communities.
About the author
Chris Peake is the Chief Information Security Officer (CISO) and Senior Vice President of Security at Smartsheet. Since joining the company in September 2020, he has been responsible for leading its security program, with a focus on customer enablement, a passion for building great teams, and a commitment to continuously improving security to better protect customers and the company in an ever-changing cyber environment. Chris holds a PhD in cloud security and trust and has over 20 years of experience in cybersecurity, during which he has supported organizations such as NASA, DARPA, the Department of Defense, and ServiceNow. He enjoys biking, boating, and cheering on Auburn football.
Sign up for the free insideAI News newsletter.