Danielle Sheer is Chief Legal and Trust Officer at Commvault. The views are the author’s own.
As the artificial intelligence boom fuels a frenzy for data, some of the world’s largest technology companies are hunting for new sources of information. Their goal: to use your data to train large, rapidly growing language models that can generate the best answer to any question.
But their quest has raised concerns about privacy, bias and the use of consumer data.
Some data security companies have already begun using AI to glean insights from customer data and sell those insights back to customers as a service. Some have signaled a willingness to partner with Big Tech to mine the vast stores of information these security companies are supposed to be protecting for organizations in every sector, from startups to corporate giants.
These ventures may seem worthwhile. Unlocking decades of anonymized and encrypted medical research and healthcare information to AI algorithms, for example, could lead to treatments and cures for diseases through pattern matching beyond human ability. Modeling global weather patterns and agricultural practices, along with demographics, distribution systems and economic policies, could drive progress toward eradicating hunger and malnutrition.
However, this type of data mining poses an existential threat to the integrity of the data security industry.
Our industry exists to keep critical records fully protected and to help organizations get back up and running quickly in the event of a breach. Should we hand over customer data to the latest AI project just because we have it?
Take health care. To comply with regulations, organizations typically contract with third-party vendors to back up all of their records, including patients’ medical histories. If that data is shared with another vendor to train an AI model, the information is put at risk. A single cybersecurity breach, like the one Change Healthcare suffered earlier this year, can compromise the personal information of 100 million people.
Then there is the issue of consent. Patients who sign forms permitting their records to be shared with doctors and insurance companies probably don’t imagine they are handing their medical histories to AI researchers. If a healthcare provider feeds that information, even anonymized, into an AI algorithm, is that a violation of privacy? Does it require separate consent?
This is where regulation needs to catch up, requiring companies and institutions to be transparent with consumers about exactly how their data will be used and giving them the opportunity to opt out.
Consider how safety regulations arose in other essential industries. In the late 1800s, the use of electricity spread rapidly, and so did building fires. A group of industry experts created the first National Electrical Code, which made electrical systems safer through standardized guidelines for wiring methods and materials.
Less than a decade later, the unsanitary practices of the meatpacking industry were exposed, famously described in Upton Sinclair’s The Jungle, which helped spur passage of the Pure Food and Drug Act of 1906 and led to the creation of the Food and Drug Administration. And in the 1950s, two commercial planes collided in midair, killing everyone on board, prompting the creation of the Federal Aviation Administration to oversee civil aviation safety.
We have never done the same for software, even though it is as much a part of our lives as electricity, food and air travel. Software is deployed around the world, powering everything from how we work and get our groceries to the safety and functioning of our critical infrastructure. Yet we have no comprehensive, peer-reviewed set of rules for how it should behave. We need one.
Perhaps a quasi-governmental body made up of global privacy experts and technology leaders – activists, regulators and practitioners – could work toward creating safeguards for software, as happened in the electrical, food and aviation industries.
We face many pressing problems that technology can help solve. But AI is still cutting-edge technology; we need to understand it and build in the appropriate safeguards.
Rather than waiting for a crisis or a shocking revelation, let’s learn from history and get serious about data protection now.