Pay practices are quietly moving up the list of HR functions set to be affected by artificial intelligence.
Experimentation with AI in the area of pay, benefits and total rewards increased from 2025 into the early months of 2026, according to a February Korn Ferry survey of HR and total rewards professionals. However, the firm found that most organizations remain in the early stages of adoption, with 57% of respondents stating they had not even begun to experiment with AI in total rewards.
That’s reflective of the experiences of consultants like Gordon Frost, global rewards solution leader at Mercer, who told HR Dive that the use of AI to set pay has been slow due to concerns about risk. But it’s apparent that the tech has carved out a role in many organizations, he said, because it can augment the work of compensation professionals.
[Chart: AI use in total rewards increased last year, but most have not begun experimenting. Percentage of respondents to Korn Ferry's February 2025 and February 2026 Global Total Rewards Pulse Survey, by current state of organizational maturity with respect to use of AI in total rewards processes.]
Setting pay involves synthesizing a number of data sources, both external and internal, and AI is being tasked with making that data faster to collect and easier to analyze in a more consistent manner. “We’re seeing people start to use it from that perspective,” Frost said.
So far at least, it’s generally not the case that employers are using large language models like OpenAI’s GPT or Anthropic’s Claude to assign specific dollar figures to a role or individual worker, Jamie Eisner, attorney at Offit Kurman, said in an email. Instead, she said, she more frequently sees employers deploy AI as a “system that shapes the data, assumptions and decision-making frameworks that ultimately influence pay outcomes.”
Employers also seek to make the tech part of a holistic, broader compensation strategy as they mull which factors they want to consider in setting pay, said Britney Torres, senior counsel at Littler Mendelson. There’s also hope that AI can improve consistency for pay practices across organizations, especially those with large workforces.
“Ultimately, improving the accuracy of your wage and compensation setting program is done to accurately reward the employee,” Torres said. “The draw of AI in this context is to improve the quality of that output.”
But it’s not something HR departments can adopt without doing some legwork. Frost said there’s a level of foundational work that has to happen before AI is integrated into any HR process, and a big part of that work is gathering all of the organization’s relevant data into one place — and ensuring that data is free of errors and unintended biases.
“That work has been going on for years and years, but it’s almost a foundational precursor to doing all of this stuff,” Frost said. A number of complexities can emerge, he added, especially when an organization has gone through several acquisitions of firms that employed different job titles or codes or had multiple employees doing similar jobs. “All of that needs to be cleaned up, simplified and aligned before you can use AI or do sophisticated data analysis.”
And that’s without getting into the nation’s increasingly complicated AI compliance picture.
Uncertain legal landscape presents an obstacle
Using AI to set pay could implicate several federal laws, including the Fair Labor Standards Act. Eisner said employers may violate the FLSA if their AI-influenced pay structures lead to employee misclassification, minimum wage violations or overtime pay errors.
AI systems trained on historical pay or performance data also may unintentionally replicate or amplify existing pay disparities, such as those that occur along gender or race lines, and this can create liabilities under federal equal employment opportunity laws, Eisner continued.
Several states, meanwhile, have passed laws restricting AI’s use in hiring, and some of these statutes contain provisions that could affect systems used in the pay context, Torres said.
For instance, California privacy regulations require certain employers to issue pre-use notices informing consumers about their use of automated tools in areas such as compensation. Other rules specify that employers must conduct risk assessments of their tools or retain data pertaining to AI use.
States have been slower to adopt laws that specifically address AI’s role in setting wages, Torres added, though some proposals have been floated by local legislators. While it’s too early to predict which laws or frameworks will emerge first, Torres said, employers can identify certain “big-picture concepts” to help prepare for future requirements.
One key provision in several pieces of AI-related legislation concerns anti-bias assessment requirements. Torres explained that such clauses are meant to ensure that an automated tool makes determinations that are both free of discrimination and legitimately based on factors pertinent to the employee's role.
Other parts of state AI laws can affect compensation strategy. Maryland’s AI law addresses the use of facial recognition tech during the hiring process, and laws of this sort may impose consent or use restrictions depending on how a tool functions, Eisner said.
“The key point is that AI does not shift legal responsibility away from the employer,” she added. “Employers remain accountable for wage outcomes, regardless of whether a human or an algorithm makes the recommendation.”
Which inputs should be avoided?
Avoiding discriminatory AI outputs isn’t as simple as prompting a tool to ignore protected characteristics, Eisner said. Instead, employers have to be intentional about how an AI is governed, keeping in mind that the tech is only as good as the information an employer provides.
“Employers should clearly define what inputs are permissible and explicitly exclude inputs that may lead to biased outputs or pay disparities,” Eisner continued.
Even seemingly neutral factors — like an employee’s pay history, assumptions about an employee’s career path, location data and performance history — can skew results in ways that disadvantage certain groups, Eisner said. Employers also create risk by relying on “overly blunt” metrics, she added, such as keystrokes and mouse movement.
Employers must ask themselves what factors may be considered a proxy for bias, Torres said, emphasizing that pay determinations should be based on nondiscriminatory, nonretaliatory reasons. Performance can be a sensible way to approach this topic, but even there, HR has to be thoughtful about how performance is measured.
For instance, Torres noted that if performance is based on audio or visual surveillance data, that could implicate protected characteristics such as an employee’s national origin, race, ethnicity or disability.
“It’s not just about configuring the tool to comply with the law,” Torres said. “You also want to plan for that meaningful human oversight and regular anti-bias assessments to protect against discriminatory patterns that may develop as a proxy.”
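One simple form such a "regular anti-bias assessment" could take is a disparity check on the tool's recommendations, grouped by a demographic attribute that is held out of the model but retained for auditing. The sketch below is illustrative only — the threshold, field names and data shape are assumptions, and a real audit would involve counsel and proper statistical testing:

```python
# Illustrative sketch (not legal or statistical guidance): flag AI pay
# recommendations for human review when group averages diverge.
# Field names ("recommended_pay") and the 0.95 threshold are assumptions.

from statistics import mean

def pay_disparity_ratio(records, group_key, pay_key="recommended_pay"):
    """Return min(group mean) / max(group mean); 1.0 means parity."""
    groups = {}
    for record in records:
        groups.setdefault(record[group_key], []).append(record[pay_key])
    group_means = [mean(values) for values in groups.values()]
    return min(group_means) / max(group_means)

def flag_for_review(records, group_key, threshold=0.95):
    """True when group means diverge enough to warrant human review."""
    return pay_disparity_ratio(records, group_key) < threshold
```

A check like this does not prove or disprove discrimination; it only routes suspicious patterns to the human oversight Torres describes.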
Data security a ‘major’ consideration
Employers also may be wary of sharing sensitive HR data with AI tools. Mercer’s Frost said organizations are generally hesitant to begin using AI in the pay context unless they are certain that the tool is completely compliant with their organizational firewall and designed for private, internal use.
“That’s one reason why we’re not seeing widespread use of [AI in pay] yet,” Frost continued, adding that HR teams should know never to feed employees’ pay information into a publicly available AI model. “Just knowing what the security parameters are and what the privacy guardrails are is super important.”
It’s important for compensation professionals to ensure that the data they use is anonymized and to think through how legal privilege might apply to any decisions made based on AI’s contributions, Torres said. This is the case even if an AI’s recommendations are not used to implement a pay decision.
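A basic step toward the anonymization Torres mentions is pseudonymizing records before they reach any external tool. The sketch below is a rough illustration under stated assumptions — the field names are invented, and real de-identification requires more than this (for example, guarding against re-identification through rare combinations of attributes):

```python
# Rough sketch: strip direct identifiers from a pay record and replace the
# employee ID with a salted hash before the record leaves HR systems.
# Field names are hypothetical; this is not a complete anonymization scheme.

import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "employee_id"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Return a copy with identifiers removed and a stable pseudonymous key."""
    token = hashlib.sha256(
        (salt + str(record["employee_id"])).encode("utf-8")
    ).hexdigest()[:16]
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["pseudo_id"] = token  # stable for a given salt, not reversible
    return cleaned
```

Because the hash is stable for a given salt, analysts can still join records across datasets without ever handling names or raw IDs.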
AI tools also draw from data sources beyond pay data alone, including human resources information system software that tracks employee data, performance, engagement and other indicators, Eisner said. Employers, she added, should be transparent with employees about the types of data used, how that data is used and why collection is necessary.
Any AI deployment is likely to involve a vendor, and vendor relationships also present HR risks worth considering. This includes vendor handling of data and any associated contract language around data ownership and security safeguards, Eisner said.
“A guiding principle here should be data minimization: don’t collect or process data you don’t actually need because unnecessary data collection may create legal exposure without delivering corresponding value,” Eisner continued.
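The data-minimization principle Eisner describes can be enforced mechanically with an explicit allowlist: anything a pay tool has not been approved to receive is dropped before the record leaves HR systems. The field names below are illustrative assumptions, not a recommended schema:

```python
# Minimal sketch of data minimization: an explicit allowlist of fields an
# AI pay tool may receive; everything else is dropped. Field names are
# illustrative only — each organization would define its own approved set.

ALLOWED_FIELDS = {"job_code", "location_tier", "years_in_role", "performance_rating"}

def minimize(record: dict) -> dict:
    """Keep only allowlisted fields; silently drop everything else."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

An allowlist (rather than a blocklist) fails safe: a newly added HRIS field is excluded by default until someone decides the tool actually needs it.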
HR must maintain human oversight
It’s key for HR professionals to review AI outputs and make the final determinations on all compensation decisions, Frost said. He noted that using an AI tool is, in this respect, no different from using Microsoft Excel: a human must approve the output of a given platform using their own knowledge and experience. “We’re not just completely giving it to AI and absolving ourselves of responsibility.”
This point reinforces the need for good governance procedures, Torres said, as it can be difficult even for humans overseeing an AI’s output to check for bias without knowing what to look for. Training can help, but on a systemic level, employers have to budget for and employ a range of measures such as audits and risk assessments to ensure they have a compliant pay process, she added.
Before an employer chooses to adopt AI in its compensation practices, Eisner said it might first articulate what it is trying to accomplish by introducing the tech, what data it will need to ensure successful implementation and what values it wishes to be reflected in its pay practices.
“AI should be treated as a decision-support tool, not as an autonomous decision-maker,” Eisner said.