According to a recent study from McKinsey, AI adoption made significant strides in 2024, with 72% of organizations reporting adoption in at least one business function. For the previous six years, this figure had remained relatively stagnant at around 50%.
New technologies are the catalyst for unprecedented business transformation. By harnessing the power of innovation, companies can streamline operations, improve customer experiences, advance key initiatives, and develop breakthrough products and services. This rapid evolution not only drives growth, but also creates new opportunities for competitive differentiation and market leadership.
However, implementing new technology is not without risk. New tools may be unproven, have unintended consequences, face regulatory scrutiny, and contain a variety of vulnerabilities that can cause significant harm to an organization. Companies across all industries are turning to law firms for advice on how to properly assess and mitigate these risks so they can safely take advantage of the benefits promised by new technologies.
New technologies are not static. They evolve and adapt rapidly as the market evaluates, selects, and deploys tools from different providers. Who is best placed to guide that evolution? Early adopters. Law firms at the forefront of evaluating and deploying new technologies can impose specific requirements on providers before rewarding the successful ones with their business, ensuring that these products reach their potential while meeting high legal and ethical standards. No other industry is better equipped to produce results like this.
AI in human resources functions
A great example is the use of AI in human resources departments. This area is receiving significant attention as AI tools are deployed into all aspects of business operations. According to a report from LinkedIn, 80% of global HR professionals believe that AI will become a tool to help them do their jobs in the next five years. So what are the benefits of AI that should excite HR professionals?
Improving efficiency and fairness
AI-powered talent tools deliver unparalleled efficiency by automating time-consuming tasks and providing more sophisticated information for human-driven decision-making. In recruitment, these tools quickly process vast amounts of data and identify the best candidates based on predefined criteria. AI automation not only speeds up the hiring process but also reduces the administrative burden on HR departments.
AI also has the potential to reduce human bias in employment decisions that would otherwise go unchecked. Even amid shifts in diversity, equity, and inclusion efforts, leaders can leverage data and technology to continue building inclusive and just environments. Traditionally, talent decisions have often relied on subjective judgments based on limited data, which can inadvertently perpetuate bias. If properly designed and monitored, AI algorithms can expand the amount of information considered in talent processes, focusing on skills and qualifications that predict performance rather than subjective factors that do not correlate with job success, thereby improving the fairness of decision-making.
Additionally, AI tools can help ensure consistency in the evaluation process. By applying standardized criteria uniformly, AI can make talent processes and decisions more consistent and therefore fairer. This level of consistency is difficult to achieve with manual methods, where personal bias, subjectivity, and varying levels of scrutiny can lead to unfair results.
While integrating AI into human resources processes promises to improve efficiency and objectivity, it also presents challenges. One of the main concerns is the potential for AI algorithms to perpetuate existing biases in the data they are trained on, which could lead to discriminatory outcomes against protected groups.
Additionally, the complex nature of AI systems often makes their decision-making processes difficult to understand and explain, raising issues of transparency and accountability. In response to these concerns, the regulatory landscape is evolving rapidly, placing new compliance burdens on organizations. Organizations considering deploying these tools should carefully scrutinize the provider to ensure that these risks are adequately addressed and that both the accuracy and the potential adverse impact of the AI models and their results are tested regularly. At the same time, AI tools should always be used to augment human decision-making, rather than replace it.
To achieve the best results, it is paramount that human agency remains the driving force behind talent decisions. By requiring appropriate safeguards both in the tools themselves and in the processes they support, law firms are in a unique position to support the responsible adoption of AI and reduce the inherent risks associated with its use.
Navigating the regulatory environment
The regulatory environment for AI in the human resources sector is complex and evolving. The US and EU are enacting new laws and guidelines to ensure the ethical use of AI in talent decisions. These regulations aim to address issues such as data privacy, algorithmic transparency, and bias mitigation.
For example, the European Union’s AI Act, which entered into force on August 1, 2024, contains specific provisions on the use of AI in employment, highlighting the importance of fairness, transparency, and accountability and emphasizing continued human involvement in the hiring process. Although most of its provisions will not take effect until 2026, companies developing or using AI-based technologies should prepare to comply now.
New York City’s AI bias law, Local Law 144, went into effect in the summer of 2023, mandating independent audits to ensure that AI tools used in hiring decisions do not promote bias. Employers remain responsible for decisions made with AI, even when the tools are provided by a third-party vendor.
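The audits required under Local Law 144 center on a simple statistic: the impact ratio, derived from each demographic category’s selection rate (candidates selected divided by candidates assessed) and compared against the category with the highest rate. A minimal sketch of that calculation follows; the group names and counts are hypothetical, and a real audit involves additional requirements beyond this arithmetic:

```python
def selection_rates(outcomes):
    """Selection rate per category: selected / total assessed."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Impact ratio: each category's selection rate divided by the
    highest selection rate among all categories."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit data: category -> (candidates selected, total assessed)
audit = {
    "group_a": (48, 120),  # 40% selection rate
    "group_b": (30, 100),  # 30% selection rate
    "group_c": (12, 60),   # 20% selection rate
}

for group, ratio in impact_ratios(audit).items():
    # Ratios well below 1.0 flag potential adverse impact for review
    print(f"{group}: impact ratio {ratio:.2f}")
```

With these illustrative numbers, group_a anchors the comparison at 1.00, while group_c’s ratio of 0.50 would warrant scrutiny of how the tool scores that category.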
As more AI-related laws emerge, law firms should play an active role in shaping these regulations by advising policymakers and advocating for standards that promote fairness and accountability. Needless to say, the fragmented nature of these regulations poses challenges for businesses. Law firms have the expertise to navigate complex legal situations and are well positioned to interpret these regulations, require providers to comply with them, and promote widespread compliance across industries. By staying ahead of regulatory changes, law firms can shape the development and implementation of AI tools in ways that align with ethical standards and legal requirements.
Law firms also need to stay abreast of international regulatory developments. Companies operating globally may face compliance requirements across multiple jurisdictions, each with its own laws and expectations. By proactively engaging with regulators, not only as advisors but also as users of these products, law firms can influence the development of AI regulation while supporting innovation and meeting client expectations for modern, ethical AI solutions.
The legal profession is at a crossroads. The transformative potential of technology and AI across all business functions is immense, but decisions made today regarding its use and governance will have far-reaching implications for the future of the industry.
Matt Spencer is the co-founder and CEO of Suited, an AI-powered, reputation-driven recruiting network for professional services firms. A former chief human resources officer at Houlihan Lokey, Spencer’s vision is to leverage technology in industry-relevant ways to solve talent acquisition and retention challenges.