AI can help workplace investigations but needs human oversight, attorney says

Employers are beginning to incorporate generative artificial intelligence into their investigation processes, particularly during the planning, preparation and analysis phases, according to Leighton Henderson, senior counsel at Liebert Cassidy Whitmore. But as with other AI-in-HR use cases, there are caveats.

Effective internal investigations of alleged misconduct and similar issues begin at the preliminary stage, when employers outline the key issues, witnesses and questions involved.

AI may assist with each of these tasks, from identifying the people employers should speak with to brainstorming potential interview questions, Henderson said in an interview. It can also help catch blind spots the investigative team may have.

Additionally, AI’s transcription capabilities are relevant to a common point of debate: “If you want to start a fight with a group of investigators, ask them how to best document,” Henderson said.

The two most common methods for documenting witness interviews, for example, are to either record them or manually take notes. Recording helps cut down on the hard work of manual notetaking but can prove expensive, as audio transcription services can cost hundreds of dollars per interview.

Current AI models are not yet reliably accurate enough to transcribe witness interviews word-for-word, per Henderson, but the technology could potentially reach that level of accuracy in the future and thereby eliminate a major cost for investigators.

Until that point, AI transcriptions can nonetheless be used to double-check notes. The tech is also helpful for assembling a timeline for investigations that involve many events over an extended period, Henderson said, assuming employers verify the results for accuracy.
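To illustrate the verification step Henderson describes, here is a minimal, hypothetical Python sketch: it sorts AI-extracted events into a chronological timeline and sets aside any entry without a confirmed date for human review. The Event record and its field names are assumptions made for illustration, not features of any particular tool.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical event records pulled from interview notes and documents.
@dataclass
class Event:
    occurred_on: date | None  # None when no date could be confirmed
    source: str               # e.g., "Witness A interview", "Email, Exhibit 3"
    summary: str

def build_timeline(events: list[Event]) -> tuple[list[Event], list[Event]]:
    """Sort dated events chronologically; set aside undated ones for human review."""
    dated = sorted((e for e in events if e.occurred_on), key=lambda e: e.occurred_on)
    needs_review = [e for e in events if e.occurred_on is None]
    return dated, needs_review

events = [
    Event(date(2025, 3, 14), "Witness A interview", "Complainant reported comment to supervisor"),
    Event(None, "Witness B interview", "Alleged incident in break room, date uncertain"),
    Event(date(2025, 3, 2), "Email, Exhibit 3", "Complainant emailed HR"),
]

timeline, unverified = build_timeline(events)
for e in timeline:
    print(e.occurred_on, "|", e.source, "|", e.summary)
print(f"{len(unverified)} event(s) need a human to confirm dates")
```

The design point is the second return value: anything the AI could not anchor to a confirmed date goes back to a person rather than silently landing on the timeline.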

Privacy, accuracy pose obstacles

When it comes to risks posed by AI in the investigations context, “the two biggest ones are privacy and accuracy, without a doubt,” Henderson said.

Publicly accessible AI models, such as the free version of ChatGPT, could leave sensitive information exposed to the general public. Workplace investigations very often involve personal data and information that is not public record, Henderson noted, meaning even small tasks such as asking a chatbot to generate questions for a specific witness are problematic.

To address this concern, employers may want to pay for an enterprise license, which creates more of a “closed universe” when using a given AI tool. But even then, they should upload documentation only after removing personally identifiable information and fully anonymizing the data, Henderson said.
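As an illustration of that pre-upload step, here is a minimal, hypothetical Python sketch that scrubs obvious identifiers from notes before they touch any AI tool. The regex patterns and placeholder labels are assumptions for illustration; genuine anonymization of investigation records requires a far more thorough process than pattern matching.

```python
import re

# Hypothetical pre-upload scrubber: strips obvious identifiers before any
# text is sent to an AI tool. Real anonymization needs more than regexes;
# this only illustrates the "remove PII first" step described above.

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def scrub(text: str, known_names: list[str]) -> str:
    """Replace emails, phone numbers, and known names with neutral placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    for i, name in enumerate(known_names, start=1):
        text = re.sub(re.escape(name), f"[PERSON {i}]", text, flags=re.IGNORECASE)
    return text

notes = "Jane Doe (jane.doe@example.com, 555-867-5309) said she saw the incident."
print(scrub(notes, known_names=["Jane Doe"]))
# -> "[PERSON 1] ([EMAIL], [PHONE]) said she saw the incident."
```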

“Whoever created that chatbot, that company still has access to this closed universe,” she added. “What if they get subpoenaed and need to turn over everything that an employer has been using the chatbot for? There’s no guarantee that everything is going to remain private.”

HR teams are likely aware of generative AI’s tendency to produce hallucinations: fabricated output that appears authentic. That’s a particular risk in transcription, where AI might confuse two common but opposite responses, “nuh uh” and “uh huh,” Henderson said. “That could have detrimental effects on your fact finding.”
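One mechanical safeguard, sketched below in hypothetical Python, is to flag transcript lines containing these easily confused responses so a human can verify them against the recording. The token list and function are illustrative assumptions, not a feature of any transcription product.

```python
# Hypothetical spot-check: flag transcript lines containing easily confused
# affirmations and negations so a human verifies them against the audio.
AMBIGUOUS = ("uh huh", "uh-huh", "nuh uh", "nuh-uh", "mm-hmm", "mm-mm")

def flag_for_review(transcript_lines: list[str]) -> list[tuple[int, str]]:
    """Return (line number, text) pairs a human should check against the recording."""
    return [
        (i, line)
        for i, line in enumerate(transcript_lines, start=1)
        if any(token in line.lower() for token in AMBIGUOUS)
    ]

lines = [
    "Investigator: Did you see the email before the meeting?",
    "Witness: Uh huh.",
    "Investigator: And you never forwarded it?",
    "Witness: Nuh uh.",
]
for n, text in flag_for_review(lines):
    print(f"Verify line {n} against audio: {text}")
```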

Expect state governments to have their say

Though federal courts have begun to field employment law claims involving AI, the tech’s sudden rise hasn’t left much time for case law to develop with respect to its use in workplace investigations. Henderson nonetheless said she anticipates such cases will show up eventually.

Meanwhile, state governments have passed laws and issued regulations that could affect investigation-related AI deployment.

In California, rules prohibiting automated decision-making systems from discriminating on the basis of employees’ protected characteristics took effect Oct. 1. Separately, the state issued consumer privacy regulations that place requirements on the same kinds of automated systems if they are used in employment decisions such as demotion, suspension and termination.

A handful of other jurisdictions, most notably Colorado and New York City, have similarly regulated AI’s use in employment decisions. Henderson said she expects regulators to address AI’s use in workplace investigations at some future point.

Parting thought: Don’t forget the human

It may be tempting for employers to automate all or most of the investigative process, but Henderson advised against letting the tech make judgment calls or draw conclusions on behalf of HR departments.

AI, for instance, can’t judge a witness’s credibility. It also may not be able to identify a red flag or a red herring in a statement or piece of evidence. As such, Henderson said, it should not be asked to reach the ultimate conclusion that a workplace investigator is charged with making.

“I just want to emphasize that this should be a tool,” she said. “At the end of the day, the individual investigator is the decision maker, and that’s a role with great responsibility. That’s a role that can’t be dictated to the chatbot.”