Melanie Ronen is a partner at Stradley Ronon in Southern California, where she focuses her practice on employment law.
Artificial intelligence has been used in hiring for several years, automating tasks such as candidate sourcing and screening, and even predicting a candidate's success and cultural fit. More recently, its widespread adoption in recruitment has become part of a broader trend in which human resources departments increasingly rely on AI to improve predictive accuracy, streamline processes and minimize bias in decision-making.
But the growing use of AI in recruitment also raises significant concerns, including the potential to perpetuate biases and inequalities, eliminate jobs and create ethical dilemmas.
To mitigate those concerns, lawmakers in the Golden State and throughout the country are putting AI laws and regulations at the forefront of their policy agendas. California was unsuccessful in getting AB 2930 on the books this year, legislation that would have required employers to assess the impact of certain AI tools annually, but momentum continues elsewhere. The California Civil Rights Council is weighing AI-focused regulations, and the U.S. Equal Employment Opportunity Commission recently secured the first-ever AI-based discrimination settlement.
While these efforts are still taking shape, it is critical for HR professionals to develop best practices for integrating AI into the hiring process.
How employers unknowingly engage in algorithmic discrimination
Many employers are unaware of algorithmic bias — systematic and repeatable errors in computer systems that lead to unfair outcomes.
This bias can arise from skewed or limited input data, unfair algorithms or exclusionary practices during AI development, such as design choices that leave certain groups unfairly excluded or disadvantaged.
For example, unintentional discrimination can occur early in the job-seeking process when AI powers job-matching platforms or targeted advertising. If a position or industry has historically been male-dominated, machine learning may, over time, show a new job posting only to men. During resume screening, an AI tool may weed out candidates who live beyond a certain distance from the job site. That seems like a logical screening criterion, but what if certain racial groups tend to live outside the search area the AI has defined? Without human oversight, the AI's decision has become discriminatory.
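To see how a facially neutral rule can skew outcomes, consider a minimal sketch of that distance screen. The applicant data, group labels and 15-mile threshold below are all invented for illustration; they are not drawn from any real tool or employer.

```python
# Hypothetical illustration: a facially neutral distance cutoff can
# screen out one group at a much higher rate if where people live
# correlates with a protected characteristic. All data are invented.

applicants = [
    # (group, miles_from_job_site)
    ("group_a", 5), ("group_a", 12), ("group_a", 8), ("group_a", 20),
    ("group_b", 25), ("group_b", 32), ("group_b", 9), ("group_b", 28),
]

MAX_COMMUTE_MILES = 15  # the seemingly "logical" screening criterion

def pass_rate(group):
    """Fraction of a group's applicants who survive the distance screen."""
    distances = [d for g, d in applicants if g == group]
    passed = sum(1 for d in distances if d <= MAX_COMMUTE_MILES)
    return passed / len(distances)

for group in ("group_a", "group_b"):
    print(f"{group}: {pass_rate(group):.0%} pass the distance screen")
# Output: group_a passes at 75%, group_b at 25%.
```

The criterion never mentions race, yet in this toy data it screens out one group at three times the rate of the other, which is exactly why ongoing human review of AI-driven outcomes matters.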
Similar concerns arise when AI is used to analyze video interviews or when chatbots interact with candidates during the application process.
Humans are capable of discrimination without AI, of course. Human bias is a genuine concern in hiring, and technology can play a role in addressing it. The problem arises when employers depend on AI without regularly scrutinizing the decisions the algorithm produces.
Best practices for AI use in hiring
Although no federal law specifically addresses the use of AI in the employment context (yet), discrimination is already prohibited by Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act and a host of other federal, state and local laws.
As such, preventing algorithmic bias is essential regardless of the outcome of pending AI legislation.
While we wait for California and others, HR leaders can take proactive steps to establish organizational standards and processes for using AI in hiring:
- Assess how AI is currently used in your organization and how it guides decision-making processes, specifically regarding hiring.
- Perform adverse impact assessments to confirm that AI tools used in employment processes are not operating to favor or exclude particular groups (one common test is sketched after this list). Assessments should be performed when a tool is implemented and throughout its lifecycle to catch adverse impacts that emerge as the technology evolves.
- Review and update vendor contracts to ensure they keep pace with the latest AI-related standards. Regular check-ins and compliance audits help ensure that all parties adhere to current regulatory requirements and best practices.
- Develop notices to advise applicants and employees when AI tools are used to make consequential decisions. Be prepared to adapt the language of notices as regulations become final.
- Consider feasible ways to provide alternative selection processes or accommodations for individuals who wish to opt out of AI screening.
- Finally, stay informed about legislative updates and engage legal counsel to navigate the evolving regulatory landscape.
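To make the adverse impact assessment in the second bullet concrete, the sketch below applies the four-fifths (80%) rule from the EEOC's Uniform Guidelines on Employee Selection Procedures, a common first-pass test: if any group's selection rate falls below 80% of the highest group's rate, the tool warrants closer review. The group names and counts are hypothetical.

```python
# A minimal sketch of the EEOC "four-fifths rule": a group's selection
# rate below 80% of the highest group's rate is a common red flag.
# Group names and applicant counts here are hypothetical.

def impact_ratios(groups):
    """groups maps a group name to (selected, total_applicants).
    Returns each group's selection rate divided by the highest rate."""
    rates = {name: sel / total for name, (sel, total) in groups.items()}
    benchmark = max(rates.values())
    return {name: rate / benchmark for name, rate in rates.items()}

# Hypothetical outcomes from one cycle of an AI resume screen
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in impact_ratios(outcomes).items():
    flag = "below 0.8 -- review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A ratio below 0.8 is a screening signal, not a legal conclusion; results should be reviewed with counsel, and the analysis repeated as the tool and the applicant pool change.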
Above all, fostering a culture of transparency and ethical AI use within the organization reduces the risk of algorithmic discrimination, improves employee trust and helps ensure compliance. As this landscape evolves, staying proactive in implementing fair AI practices and continuously educating your HR team on these issues will be essential.