State laws regulating AI take effect in the new year. Here’s what HR needs to know.

Artificial intelligence use at work is increasingly a fact of life, especially for knowledge workers. For better or for worse, that includes HR: AI is being adopted in recruiting, compensation and even performance management.

But that also means it’s increasingly a point of contention, sometimes resulting in legal action. Take the ongoing collective action lawsuit filed against Workday, for example. 

Federal and state policymakers are scrambling to keep up, and the result is a tension between the two — with employers caught in the middle.

Feds, states race to set the tone

Recent White House policy has been aimed at making the U.S. a global AI leader. Efforts include executive orders President Donald Trump signed in July, along with an AI Action Plan issued this summer.

In Congress, a bipartisan bill introduced in November by Sens. Josh Hawley, R-Mo., and Mark Warner, D-Va., seeks to mandate that employers report AI-related layoffs.

And then there’s the ever-growing patchwork of state AI employment laws.

New York City’s AI in hiring law, for example, has been in effect since 2023, with other states and municipalities following suit. This summer, California added an AI at work provision to its pre-existing Fair Employment and Housing Act; the provision took effect Oct. 1 and applies to hiring as well as promotions and training in the state.

Similarly, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) was signed into law this past summer and will take effect Jan. 1, 2026. Illinois’ AI in hiring law also takes effect Jan. 1, and Colorado’s AI in hiring law will take effect in June 2026.

“There is, to some degree, some tension between the messaging from a federal perspective and what we’re seeing on a state-by-state basis,” Jenn Betts of Ogletree Deakins, co-chair of the firm’s technology practice, told HR Dive.

“A federal framework that preempts the state-level patchwork would be ideal, but appears unlikely,” Niloy Ray, an attorney at Littler who is a part of the firm’s eDiscovery practice and often litigates cases involving AI, said via email.

Because of this dissonance between federal attitudes toward AI and state laws on the books, Betts observed that many employers are “setting up internal governance programs and strategies that make sense for their organization.” This is the case for Ogletree Deakins itself: Betts is also co-chair of the Innovation Council, an internal governance group managing how the firm integrates technology into its workflow.

State requirements vary widely

Both Betts and Ray said HR should be thinking about AI-related laws on the books in California, Colorado, Illinois, Maryland, New York City and Texas — as well as in the European Union.

The requirements can vary widely and have both direct and indirect effects on HR. The California law that took effect in October will not affect HR operations directly, Ray said, as it’s focused on the models themselves. However, SB 53 “will help ensure that AI models are generally safer for use, and will extend whistle-blower protections to employees who raise issues concerning [their] safety.”

Meanwhile, Ray called TRAIGA “a concerning shift” when it comes to employee protections.

The law “largely exempts AI from its requirements and regulations when used in employment or commercial contexts,” Ray said; it only requires “that AI not be intended to cause physical harm or abet criminal activity.”

Moreover, TRAIGA explicitly states that “disparate impact is not sufficient by itself to demonstrate an intent to discriminate,” Ray said. He called this “a significant shift from the past 50 [plus] years of federal and state law, which have held that adverse, i.e., discriminatory, impact is grounds for liability even if a policy is facially neutral in intent.”

But overall, Ray’s advice, given the legal patchwork, is that employers “need to comply with the HCF or highest common factor when setting up their AI disclosure, risk-assessment, opt-out, appeal and record-retention processes.”

And when it comes to the creation of an internal governance system or policy, HR should consider several different variables, Betts said. For example, employers should consider the size of the company; its industry; how often employees are using AI and for which tasks; where the employer has operations; and what level of risk tolerance it has.

“There’s not necessarily a one-size-fits-all approach that organizations are taking to managing the risks that are relatively evolving here, but there are a couple of common threads,” Betts said.

Flexibility and pragmatism rule the day

Above all, Betts’ main advice was to be thoughtful: Use a “robust vetting process” before rollout, draft workplace AI use policies, offer AI-related training, audit all tools and send notices to employees and applicants where relevant.

HR professionals also must remain flexible, Betts said. “This is an evolving area, and will continue to be an evolving area. And so you’ve got to recognize that what makes sense today may not necessarily be what makes sense a year from now,” she explained. 

Ray said his best advice for employers navigating this current fragmented regulatory landscape is to do so “with resolute pragmatism.” 

“Limit AI deployment to high ROI uses, budget for compliance,” he said, “and discern when to be the early bird and when the second mouse.”