Generative AI still absent from most employer policies — even as executives fear being sued for using it

HR Chartables is HR Dive’s series about the numbers that drive HR decisions. And, well, making those numbers a little easier to look at on paper (er, screen?). In any case, we hope you enjoy this goofy, semi-regular data visualization showcase.

Artificial intelligence. Two words that have bedeviled HR departments at an ever-increasing rate since the widespread introduction of generative AI models a few short years ago. You don’t have to spend too much time using your AI-assisted search engine of choice to find reporting on employers’ frustrations. Shoot, we have a few in-house headlines of our own.

Last year, Victoria Lipnic, former chair of the U.S. Equal Employment Opportunity Commission, said that AI-related issues had hit HR from "everywhere all at once," a nod to a popular film. Algorithmic bias is one of organizations' chief concerns, so much so that it has held up widespread adoption, our sister publication CIO Dive reported in February.

In terms of what corporate decision-makers really think about AI — about its risks, its potential to boost efficiency and its very makeup — HR may not have that much insight. Sure, executives may feel intrigued by the idea of one day replacing wide swaths of human employees with technology, and some have even said as much! But we figured a deeper dive into some research on the issue would be of use to our readers.

I can’t think of a better topic for this latest experiment in HR Dive’s storytelling: Welcome to HR Chartables!

Where’s the fine print?

Right, so let’s start with some key findings from law firm Littler Mendelson, which just published the latest edition of its AI C-suite Survey Report. The survey asked 336 U.S.-based C-suite executives a range of AI-related questions.

First up, the most typical of HR topics: policy. Littler found that more than half of organizations did not have a policy in place governing employees' use of generative AI for work purposes, and 12% were not even considering such a thing.

Most employers still don’t have a generative AI policy

% of U.S.-based C-suite executives by whether their organization has a policy in place for employees’ generative AI use for work purposes

That's not the whole story, however. Last year's edition of Littler's survey — slightly different in scope, given that the firm interviewed a mix of HR staff, in-house counsel and other professionals — found that only 37% of respondents had AI policies or guidance in place. If nothing else, the year-over-year uptick shows that more and more employers are thinking through how to address AI.

Enterprising employees are steaming ahead regardless; a McKinsey & Co. report from August found that 91% of respondents were using AI for work through employers’ own tools or through externally available ones.

Human-shaped robot Ameca of British manufacturer Engineered Arts interacts with visitors on July 6, 2023, in Geneva, Switzerland. I can’t confirm this, but Ameca’s face here certainly seems to be responding to a “Will AI get me sued?” type of question.

Johannes Simon via Getty Images


Will your company get sued because of AI?

Employers are getting a slightly clearer picture of AI's regulatory implications, thanks in part to a few jurisdictions that have passed laws regulating the use of AI to perform tasks like sorting through job applications. But that landscape is still evolving.

Most executives at least moderately worried about legal risks of AI in HR

% of U.S.-based C-suite executives by how concerned their organization is about litigation related to the use of AI in HR functions

Littler found that C-suites are well aware of the risks, with more than half stating that they were concerned about AI-related litigation to either a “moderate” or “large” extent.

We’re already seeing some rather high-profile names become the subject of AI lawsuits in the HR context. IBM is facing a lawsuit from two former HR professionals who alleged the company fired them due to their age and planned to replace them with chatbots and AI tools. A separate suit involving HR vendor Workday alleged that the company’s AI screening software discriminated on the basis of an applicant’s race, age and disability status.

Regulatory concerns prompt organizations to decrease AI use for HR

% of U.S.-based C-suite executives by degree to which regulatory uncertainty decreased their organizations’ use of AI to assist with HR functions

Federal regulators have their own reservations about AI, and that wasn't lost on Littler's survey-takers either: A fair number said they've already decreased AI use in HR because of regulatory uncertainty. The EEOC has been particularly vocal, giving AI and machine learning top billing in its Strategic Enforcement Plan for 2024 to 2028. The agency joined other executive agencies in a 2023 statement that discussed the potential for AI to "perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes."

Just a second, please. I’m trying to wrap my head around the millions of future lawsuits focused on AI-generated job postings, hiring algorithms, performance management tools and other untold applications.

OK, enough of that.

How is HR using AI anyway?

Here’s a fun one. Where exactly is HR using AI here and now?

Document creation the leading organizational HR use case for AI

How U.S.-based C-suite executives said AI is being used to assist with HR and talent acquisition processes, either at an enterprise level or by individual employees

According to Littler, a full one-third of respondents said their organizations have no HR use cases for AI whatsoever. Document creation, for purposes like recruitment and onboarding, took the top spots, and employers also reported using AI for sourcing candidates, talent development and other tasks.

These results have some similarities with those of an earlier 2024 Littler survey. That report, which polled a mix of executives, in-house counsel and HR professionals, similarly found that creation of HR-related materials was the top use case for AI. It also found that 51% of employers were not using AI for HR or talent acquisition.

Even as HR departments begin to experiment with AI for such tasks, sources who previously spoke to HR Dive have said that the industry is still in its earliest stages of working with AI.

The public’s perspective

Let’s end with some food for thought. We know a bit more about what executives think of AI’s HR applications, but what of employees? There are several public opinion polls on AI, but a recent survey by Gallup and Bentley University asked U.S. adults specifically about business AI use.

Given the narrative around AI’s power and potential to reshape humanity as we know it — and some HR departments’ eagerness to experiment with AI — does the public trust employers to use the technology responsibly?

Skepticism about business AI use persists

% of U.S. adults by how much trust they had in businesses to use AI responsibly, 2023 to 2024

Ouch.

Hm. Well, I suppose it's a good time to remind readers and C-suites alike that HR is often left out of AI plans, per a June report by consulting firm McLean & Co., despite the profession's role in establishing organizational culture and values. Relatedly, employee resource groups can play a part in addressing workers' challenges, including the disruption that something like AI seems likely to cause.

There’s clearly a role for HR to play in all this, especially if AI has yet to truly show up in most employers’ formal policies. But it will be up to the profession to seize that opportunity.