What HR pros need to know about AI in the workplace

Published July 26, 2024

Corey Gildart is managing director and Joe Knight is senior managing director at FTI Consulting. 

Long before the explosive rise of ChatGPT adoption, 75% of U.S. companies were already using some form of artificial intelligence in the employment lifecycle. The use of large language models as a component of the employment decision-making tool kit is a sophisticated, albeit less transparent, step forward that extends beyond traditional resume screening.

In emerging iterations, the technology permeates nearly every stage of the employment process, from hiring and compensation to promotion and termination. Regulation has responded accordingly, increasing at local, state, national and global levels. In this environment of technological and regulatory change, it’s imperative that companies examine how AI tools, particularly opaque systems with hidden layers, interact with employment data to mitigate risks and promote beneficial outcomes.

Many organizations are not aware of the extent and nature of AI use across their business. This can be problematic, as algorithms are only as reliable as their training and integrated data allow. Whether intentional or inadvertent, the biases of software engineers can (and do) find their way into the code itself, leading to negative outcomes (e.g., employee homogeneity). Regardless of who developed or sold a program, or whether a company understands how a program does what it does, it is the company employing AI that will be held responsible for any misuse.

Real-world risks

The harm caused by AI is no longer a future concern. It’s a reality today.

A long-standing hiring test established by the EEOC, known as the “four-fifths rule” and also referred to as the Adverse Impact Ratio, continues to set the standard for determining adverse impact in hiring practices.

By comparing the selection rate of each group against that of the most-favored group (the one with the highest selection rate), this rule can also be applied to review discrimination within AI. For example, if a job application asks whether the candidate can lift a certain amount of weight, an AI review of applications could blindly rule out candidates of a certain age or gender. In this example, the four-fifths rule would surface any disparity in hiring rates across age or gender groups.
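As a concrete illustration, the following minimal sketch (in Python, using made-up group names and counts) computes each group’s selection rate, divides it by the highest group’s rate and flags any ratio that falls below the four-fifths (0.8) threshold.

```python
# Minimal sketch of a four-fifths rule (adverse impact ratio) check.
# Group names, applicant counts and hire counts are hypothetical.

applicants = {"Group A": 200, "Group B": 150}  # candidates screened per group
hires = {"Group A": 60, "Group B": 27}         # candidates advanced per group

# Selection rate = hires / applicants for each group
rates = {group: hires[group] / applicants[group] for group in applicants}

# The benchmark is the group with the highest selection rate
benchmark = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    status = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({status})")
```

The same calculation can be run on the output of an AI screening tool before it is deployed, then rerun periodically as the applicant pool changes.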

Similarly, the EEOC has flagged concerns over AI automatically rejecting candidates with disabilities or individuals with employment gaps. More recently, the agency issued guidance regarding employer use of AI in any aspect of the employee selection process, including hiring, promotion and termination.

Though AI is mathematically sound, when fed empirical inputs, it may become overly reliant on (i.e., “overfit” to) the data on which it was trained, failing to accommodate new data and in turn producing homogeneous results (e.g., a biased candidate pool). To avoid this, organizations must ensure their training data is sufficiently diverse to allow the model to identify a diverse hiring pool. If a company is using a black-box tool, the system could begin incorporating its own decision-making, compounding the overfitting issue without anyone knowing it is happening.
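One simple way to catch overfitting is to hold out data the model never saw during training and compare its performance on the training set versus the held-out set; a large gap suggests the model has memorized its training data rather than learned patterns that generalize to new candidates. The sketch below is illustrative only, using scikit-learn on synthetic data rather than any real hiring dataset.

```python
# Illustrative overfitting check: compare training vs. held-out performance.
# Synthetic data only; a real audit would use the model's actual inputs and outcomes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A deliberately flexible model, to make the training/held-out gap visible
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
print(f"training accuracy: {train_acc:.2f}, held-out accuracy: {test_acc:.2f}")

# A large gap (e.g., near-perfect training accuracy but much lower held-out
# accuracy) suggests the model has memorized its training data.
if train_acc - test_acc > 0.10:
    print("Warning: possible overfitting -- review training data diversity.")
```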

As another example, a recruiting team might believe it has eliminated all possible points of discrimination by removing race and other overt characteristics from a model’s inputs. However, if a factor such as ZIP code is introduced and correlates with race, race may be encoded by proxy. There can also be several hidden layers in an AI system: the software has a known, foundational knowledge base, yet, unknown to the user, it may continue creating its own variables and seeking out real-time data from other sources to achieve its primary goal of producing an outcome.
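A common way to test for this kind of proxy encoding is to check how accurately the supposedly neutral features can predict the protected characteristic that was removed. The sketch below is purely illustrative: the feature names (such as a ZIP code region) are hypothetical and the data is synthetic.

```python
# Illustrative proxy check: can the remaining features predict a protected
# attribute that was removed from the model's inputs? Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical protected attribute (not used by the hiring model)
protected = rng.integers(0, 2, size=n)

# Hypothetical "neutral" features: zip_code_region correlates with the
# protected attribute, years_experience does not.
zip_code_region = protected * 3 + rng.integers(0, 3, size=n)
years_experience = rng.integers(0, 20, size=n)
X = np.column_stack([zip_code_region, years_experience])

# If these features predict the protected attribute well above chance (~0.5),
# the attribute is likely encoded by proxy even though it was "removed."
scores = cross_val_score(LogisticRegression(max_iter=1000), X, protected, cv=5)
print(f"proxy prediction accuracy: {scores.mean():.2f}")
```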

There are clear regulatory risks associated with discriminatory and unfair employment practices. Moreover, a business can also be damaged over the long term through a decline in diversity, lack of visibility into a wider talent pool, diminished company culture and public scrutiny for AI governance failures.

Mitigating actions

An AI system must be designed and implemented to avoid bias, treat individuals equitably and operate with as much transparency as possible. Governance should leverage existing structures and be supported by a committee that can guide AI usage within risk management frameworks. Upon that foundation, there are several steps organizations can take to help realize the benefits of AI and mitigate misuse:

  • Craft a purpose statement. What is the company working to achieve at the highest level?
  • Define organizational objectives that support the overarching goal.
  • Develop an AI program framework or policy and rules-based metrics. Ask questions like, “What does this mean for the organization over time?”
  • Consider outcomes in a meaningful way, i.e., whether approved use cases are aligned with the risk portfolio and purposes.
  • Adapt the framework as needed to remain aligned with company purpose. There must be ongoing human oversight, i.e., checking whether approved use cases are implemented and maintained in accordance with approved governance structures.
  • Address major changes in company structure and data inputs that may impact AI models and governance frameworks (such as during M&A activity).

Legal, compliance and HR teams should be prepared to provide a clear explanation of how AI software is being used and confirmation that its use aligns with regulatory requirements and data protection rules everywhere the company operates. Proactive, consistent testing must be maintained to continuously assess and address instances of bias.

The regulatory environment will continue to evolve. The EU AI Act is only the beginning. President Biden issued an executive order on AI that delegates action to certain agencies, with different regulators tasked with various aspects of assessment and enforcement. New York City launched its own AI action plan in late 2023 and several U.S. states have introduced legislation. More will follow.

In a 2023 Gartner survey of HR leaders, more than 60% said they are engaged in enterprise-wide discussions around the use of generative AI.

More than half are doing so in collaboration with IT leaders, yet fewer than half (45%) are working with legal and compliance teams. It’s critical for legal and compliance to be involved in these decisions early and throughout the implementation of new tools to ensure adoption is responsible and compliant with company policy and regulatory guidance. Through strong governance and the continued oversight of human discernment, AI can be a powerful tool for HR teams and other functions within the business.

