AI is being developed at lightning speed, and many of these applications are probabilistic. If AI models produce false results, inaccuracies, or hallucinations that are not easily identified as such, the risk of liability and reputational damage increases. The Swiss Financial Market Supervisory Authority (FINMA) identified these concerns in its 2023 Risk Monitor. Combined with the reduced transparency of AI application outcomes, controlling and attributing responsibility for AI-driven actions becomes more complex. As a result, there is an increased risk that errors will go unnoticed and accountability will be blurred, especially in complex, enterprise-wide processes where in-house expertise is lacking.
Poor-quality GenAI data raises the risk of bias and discrimination
If AI relies on incomplete or biased data sets, discriminatory results are a concern when AI is used to make decisions affecting consumers. Even with complete data sets, AI used in consumer finance can exacerbate bias or push consumers towards predatory products, effectively subjecting communities to "digital redlining," as highlighted in a December 2024 report from the U.S. Department of State. Financial services firms that use chatbots to interface with customers also need to be aware of potential liability and reputational risks resulting from inaccurate, inconsistent, or incomplete answers to customer questions or concerns.
Modern AI is often probabilistic, lacks explainability, and results in opaque decision-making
AI can be deterministic or probabilistic. Deterministic AI follows strict rules and renders explainable results. Much of modern AI, however, is probabilistic: even with the same input, the AI can produce different outputs based on probabilities and their weights. This makes the output of probabilistic AI difficult to predict or explain. Some laws and guidelines require organizations to explain why an adverse decision was made, such as a credit decision or insurance outcome, and organizations that cannot explain the results of an AI model may be exposed to significant liability.
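The distinction can be seen in a minimal sketch (the scores, thresholds, and field names below are hypothetical placeholders, not drawn from any regulation or real model): a rule-based function always returns the same, explainable outcome for a given input, while a sampled outcome can vary across calls with identical inputs.

```python
import random

def deterministic_decision(income: float, debt: float) -> str:
    # Rule-based: the same input always yields the same, explainable outcome.
    return "approve" if income - debt > 20_000 else "deny"

def probabilistic_decision(income: float, debt: float, temperature: float = 1.0) -> str:
    # Model-like: the outcome is sampled from an approval probability, so
    # repeated calls with identical inputs can return different outcomes.
    score = (income - debt) / 100_000            # stand-in for a model score
    p_approve = min(max(0.5 + score / temperature, 0.0), 1.0)
    return "approve" if random.random() < p_approve else "deny"

applicant = {"income": 55_000.0, "debt": 40_000.0}
print([deterministic_decision(**applicant) for _ in range(5)])  # identical results
print([probabilistic_decision(**applicant) for _ in range(5)])  # results may vary
```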
Regulators, including ESMA, have identified concerns about the potential impact on transparency and the quality of consumer interactions, particularly when AI is deployed in client-facing tools such as virtual assistants and robo-advisors. Because service providers own the algorithms and models, users often have limited visibility into the sources of the data used to train the AI. If errors in that data produce inaccurate results, those inaccurate outputs will in turn be carried into any AI system trained on them.
Organizations in the financial industry face concentration risks
Depending on the AI system and how it is used by financial institutions, AI tools can be considered information and communication technology (ICT) assets. This could bring them into the scope of the EU's Digital Operational Resilience Act (DORA), the new EU cybersecurity regulation applicable to the financial sector, which begins to apply on 17 January 2025. DORA's new cybersecurity management, reporting, testing, and information-sharing requirements may therefore affect the AI tools used in the financial industry. DORA also requires financial institutions to assess concentration risk. The growing use of third-party AI could increase that concentration risk, as AI models are concentrated among relatively few suppliers.
AI and emerging technologies inherently involve data privacy and cybersecurity risks
Because AI systems sometimes rely on processing personal information, these tools may already be subject to existing data privacy laws. For example, some US privacy laws require organizations to allow individuals to opt out of automated decision-making tools used to make significant decisions, such as those concerning finances and lending, insurance, housing, education, employment, criminal justice, or access to basic necessities. US privacy laws may also require organizations to (a) provide individuals with transparency notices before their personal information is used in connection with the development or deployment of AI, and (b) give individuals the rights to access, delete, correct, and opt out of specific processing when their personal information is used in AI.
Similar to US privacy law, the EU/UK General Data Protection Regulation (GDPR) creates strict requirements, including transparency and privacy rights, when an individual is subject to decisions based solely on automated processing, including profiling, that have a legal or similarly significant effect. The GDPR also requires companies to document the "lawful basis" for using individuals' personal data in connection with AI. Complying with these requirements can be unduly difficult for certain AI systems, such as those relying on probabilistic decision-making tools.
Additionally, widespread use of AI may increase cybersecurity risks. GenAI can be used to craft convincing and sophisticated phishing attempts that lack the usual markers of unsophisticated attacks, such as grammar, translation, and related language errors. In particular, fraudulent password reset requests and other spoofing and social engineering techniques used to gain access to systems can become more difficult to detect, regardless of the attacker's level of sophistication. The benefits of AI-enhanced software development and other cyber capabilities can also accrue to the most sophisticated threat actors, including nation-state actors with the resources to exploit a rapidly changing technological environment, for whom the financial sector is an immediately attractive target.
How an enterprise risk mindset approach can reduce AI-related risks
An enterprise risk mindset approach to AI and other emerging technologies requires certain best practices.
Increase awareness within your organization
AI is a complex technology, but organizations need to ensure that employees develop a basic understanding of how and where AI is being used in the organization, how to spot potential shortcomings, risks, and inaccuracies in AI systems, and which uses of AI are prohibited. Organizations should also identify individuals who can answer AI-related questions and to whom employees can bring concerns.
Create a diverse multidisciplinary team dedicated to addressing AI risks
Managing the risks and opportunities associated with AI is too monumental a task for any one person or department within an organization. Instead, organizations should assemble a dedicated AI team that includes stakeholders and employees with skill sets spanning law, data privacy, intellectual property, information technology and security, human resources, marketing and communications, and procurement. Drawing on internal and external experts and resources, this AI team should create, implement, and maintain a trustworthy AI governance program and, among other things, review AI-related tools (including those developed by third parties), processes, and decisions.
Incorporate governance guardrails
Organizations should take steps to implement, and communicate to all employees, policies regarding the development or use of AI. These guardrails should reflect the most significant risks associated with the development and use of AI. Additionally, specific departments or functions within an organization may require specialized or more focused training and guardrails. For example, organizations may instruct employees not to input personal data or sensitive business information into AI tools, or to use only company-approved AI systems that have appropriate contractual protections for company data. A technical counterpart to such a policy is sketched below.
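As an illustration only, one way to back such a policy with a technical control is a pre-submission screen that rejects prompts bound for unapproved tools or containing patterns the organization treats as sensitive. The endpoint, patterns, and function below are hypothetical placeholders, not a reference to any specific product or regulation.

```python
import re

# Hypothetical allow-list of company-approved AI endpoints (placeholder value).
APPROVED_AI_TOOLS = {"https://ai.internal.example.com/v1/chat"}

# Simple placeholder patterns for data that policy says must not leave the company.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US Social Security number format
    re.compile(r"\b\d{13,19}\b"),                # possible payment card number
    re.compile(r"confidential", re.IGNORECASE),  # documents labelled confidential
]

def screen_prompt(prompt: str, endpoint: str) -> tuple[bool, str]:
    """Return (allowed, reason) before a prompt is sent to an AI tool."""
    if endpoint not in APPROVED_AI_TOOLS:
        return False, "AI tool is not on the company-approved list"
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            return False, f"prompt matches restricted pattern: {pattern.pattern}"
    return True, "ok"

print(screen_prompt("Summarize our public press release", "https://ai.internal.example.com/v1/chat"))
print(screen_prompt("Customer SSN is 123-45-6789", "https://ai.internal.example.com/v1/chat"))
```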
Regulations set different obligations depending on the role of the organization and the level of risk of the AI system (a risk-based approach). Organizations should determine the level of risk posed by each AI system and the organization's role in relation to it (e.g., developer versus deployer), and assess each AI system to ensure that the legal obligations specific to that role are met and that risks are appropriately mitigated. Organizations should also document an AI impact assessment demonstrating that the development or deployment of the AI is justified in light of the risk mitigation measures in place.
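For illustration, an AI system inventory entry capturing role, risk level, and mitigations might be recorded as follows. The risk tiers, roles, and fields are hypothetical placeholders, not categories taken from any specific regulation.

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):          # the organization's relationship to the AI system
    DEVELOPER = "developer"
    DEPLOYER = "deployer"

class RiskLevel(Enum):     # placeholder tiers for a risk-based classification
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AIImpactAssessment:
    system_name: str
    role: Role
    risk_level: RiskLevel
    intended_use: str
    mitigations: list[str] = field(default_factory=list)
    justification: str = ""

assessment = AIImpactAssessment(
    system_name="credit-scoring-assistant",
    role=Role.DEPLOYER,
    risk_level=RiskLevel.HIGH,
    intended_use="draft recommendations reviewed by a human underwriter",
    mitigations=["human review of every adverse decision", "quarterly bias testing"],
    justification="deployment justified given the listed mitigation measures",
)
```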