Ask a compliance officer to name their top concerns about artificial intelligence, and chances are the answers will relate to privacy in some way. That doesn't just tell us what AI risk is; it also hints at how companies might try to manage those risks.
In fact, for a compliance officer who wants a thoughtful overview of how AI and privacy overlap, one good place to start is a recent report from the U.S. House of Representatives. Published in December, the report catalogs the public policy implications of AI and recommends goals that lawmakers should keep in mind as they consider possible AI legislation.
Whether we will actually see such laws is anyone's guess. Regardless, the privacy sections of the report flag important points that any compliance officer worried about AI should consider. They explain how data supplies the fuel for AI's growth, which in turn reveals some of the possible compliance risks.
For example, AI "learns" by consuming vast amounts of data. Some well-known AI tools (ChatGPT and its consumer-facing brethren come to mind) learn by scraping the internet for whatever data they can find. Other companies build their own AI systems trained on data they already control. Nobody tracks those homegrown efforts, however, so it's impossible to know how many in-house AI solutions companies have developed.
Consider the compliance risks lurking beneath these AI efforts:
Your company might not secure consent from customers and business partners before feeding their data into AI.
You might believe your company has secured consent by amending its privacy policy or user agreements, but the Federal Trade Commission or other regulators could deem subtle changes to those policies a deceptive practice.
Your IT team might purchase training data from outside providers without confirming that the data was properly sourced.
The data you collect might come with all the right permissions, but the AI is so smart that it infers private information anyway. (Famously, in 2012 a large retailer's marketing system inferred that a teenage customer was pregnant and mailed maternity promotions to her home, tipping off her father before she had told him.)
There's more. A company could try to train its AI systems on "synthetic data"; because the data isn't real, the consent issue goes away, but AI trained on synthetic data may not work as well. That simply trades a compliance risk for the operational risk of an AI making worse decisions. It can even create other compliance risks, such as an AI that discriminates against customers or that isn't properly trained to keep inappropriate content away from minors.
We could go on, but you get the picture: privacy issues cannot be separated from AI risk. So that's where compliance officers need to start to understand the compliance issues that artificial intelligence creates.
Questions to ask as you start
If we view AI compliance risks through that privacy lens, several questions should be on the compliance officer's mind as the company begins to use artificial intelligence.
Who is in charge of AI in your organization? AI adoption might be managed through the technology department. Or perhaps nobody is in charge of AI, and various teams are experimenting with it in their own way.
Neither answer is good. Managing artificial intelligence and its attendant risks is a team-based effort, and the compliance officer should be part of that team. So should the technology, legal, cybersecurity, and finance functions. All of you need to collaborate to define AI's risks and decide how to address them.
Are you making appropriate disclosures to users through your privacy policy? Talk with legal, privacy, or regulatory experts to understand how the use of user data (personal or otherwise) for AI training and decision-making should be disclosed in your privacy policy or user agreements, and update those documents as necessary.
Remember that different regulators may take different views on what constitutes clear and sufficient disclosure. You'll need to worry about the Federal Trade Commission, European privacy regulators working under the EU's General Data Protection Regulation, regulators at the state level, and probably others too.
Are you procuring training data from appropriate, reliable sources? As usual, third-party risk is never far away. Companies need mechanisms (contract management, due diligence, data verification testing, and the like) to assure that third parties assisting with AI efforts don't supply data that shouldn't be there.
How do you test AI output to confirm the system works as intended? This is one of the new frontiers of artificial intelligence. Your AI might learn to behave in ways you don't want; it might draw bad conclusions from bad data, or take actions you didn't anticipate. Either way, the output of AI systems needs close and regular scrutiny so that their behavior doesn't create compliance risk.
Prepare now
Compliance officers can't stop their employers' embrace of AI. But they can be (and ideally should be) an indispensable part of adopting AI wisely.
That requires redeploying the core strengths of compliance officers: risk assessment, regulatory change management, training, third-party risk management, and reporting. Consider the tools and processes you'll need to do this efficiently and at scale (including, quite possibly, using AI to manage AI).
Just as important, consider the relationships you'll need to cultivate to assure that AI adoption works. That includes the IT, internal audit, legal, and security teams, and probably other parts of the enterprise too. Above all, senior management needs to support the idea that compliance should be involved in AI plans from the very beginning.
For additional insights on managing the new frontier of AI, subscribe to our risk and compliance updates, and see our other articles on AI at the links below.
This article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.