As more employers adopt artificial intelligence tools as part of the everyday workflow, AI also is creeping into a key part of employees’ lives: performance reviews.
Companies that have adopted the technology seek to make the process easier, particularly for managers who might otherwise have to track down disparate bits of information from different sources.
But it’s not as easy as telling managers and workers that it’s OK to start using ChatGPT.
“While AI promises efficiency and data-driven insights, its use in evaluating employees could subject organizations to discrimination claims as well as substantial compliance obligations,” said Peter Cassat, partner at CM Law.
Early adopters
Banks, in particular, have been fairly early adopters of AI tools for performance evaluations.
Starting in June, Citi employees who already had access to the company’s AI tools have been able to use them to draft their self-evaluations, Citi’s head of total rewards confirmed to HR Dive. Then in October, the company launched an AI tool called Citi Performance Assist to help managers draft evaluations for their direct reports.
“Performance reviews are a critical part of meritocracy, but they can also be time consuming to complete. As part of our broader use of AI across the firm, we see a great opportunity to use AI to make things simpler, especially for our managers,” Patricia Gould, head of total rewards at Citi, said in an email.
Traditionally, managers would have to track down information from different systems and files and read inputs about their direct reports to write an evaluation, she said. “Now, Citi Performance Assist automatically gathers all of that information directly from our systems – in real time – and creates a first draft of the manager’s evaluation for them to work from. Our managers have been delighted to get a leg up on this important responsibility.”
Citi Performance Assist’s output is “just a starting point — a first draft,” she said. “Managers are still fully responsible for reviewing, adjusting and finalizing their evaluation to ensure it accurately reflects their team members’ performance.” Workers can also opt out of their manager using AI for reviews. So far, less than one percent have done so, she said, “which suggests a high level of comfortability.”
According to the Financial Times, JP Morgan is similarly allowing AI to assist in writing performance reviews. The bank did not respond to a request for comment.
Proper application
AI is already being used for HR-related tasks, according to recent research.
Ideally, employers can guide managers who use AI for performance reviews in a way that addresses “the problem that typical performance reviews are limited, arbitrary and biased,” said Betsy Summers, principal analyst at Forrester.
AI could be used to bring more accuracy, perspective and data to the process. “This can be as easy as providing managers with a template and best practices to help them write a less-biased review, using AI,” said Summers.
For example, a manager can paste their review into a secure, internal large language model while taking out any names or company details, Summers explained. Then the manager can ask the AI to evaluate the review for fairness, tone and effectiveness. “That’s an easy way to use AI to improve the experience and outcomes of performance reviews,” she said.
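To make that concrete, here is a minimal Python sketch of the workflow Summers describes, assuming a hypothetical internal LLM gateway. The endpoint URL, model name, response shape and redaction list are placeholders rather than any vendor’s actual API, and the manager still owns the final text.

```python
import requests  # assumes a simple HTTP gateway to an internal, approved LLM

# Hypothetical endpoint and model name; substitute the organization's own.
INTERNAL_LLM_URL = "https://llm.internal.example.com/v1/chat"
MODEL = "internal-review-assistant"


def redact(text: str, terms: list[str]) -> str:
    """Crude de-identification: strip names and company details before the draft is sent anywhere."""
    for term in terms:
        text = text.replace(term, "[REDACTED]")
    return text


def check_review(draft: str, terms_to_remove: list[str]) -> str:
    """Ask the internal model to critique a draft review for fairness, tone and effectiveness."""
    prompt = (
        "You are reviewing a draft performance evaluation. "
        "Assess it for fairness, tone and effectiveness, flag vague or potentially "
        "biased language, and suggest specific, behavior-based rewording.\n\n"
        + redact(draft, terms_to_remove)
    )
    resp = requests.post(
        INTERNAL_LLM_URL,
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    # Response shape assumed to mirror a chat-completions-style gateway.
    return resp.json()["choices"][0]["message"]["content"]


# Example use: the manager reads the critique and remains responsible for the final text.
# feedback = check_review(open("draft_review.txt").read(), ["Jordan Lee", "Acme Corp"])
```

The redaction step mirrors Summers’ point that names and company details should be stripped out before anything is sent to the model.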
AI can also be used to check managers, she said, such as when an AI agent pulls information from an individual’s past performance to evaluate a manager’s feedback for bias.
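A rough sketch of that manager-check pattern, again with hypothetical data sources and helper names, might look like the following; a real implementation would pull history from the employer’s own HR system of record and route any flags to a human reviewer rather than acting on them automatically.

```python
# Minimal sketch of the "check the manager" idea described above.
# fetch_past_reviews and the prompt wording are hypothetical; a real system
# would read from the employer's HRIS and use its approved internal model.


def fetch_past_reviews(employee_id: str) -> list[dict]:
    """Placeholder for a call into the HR system of record."""
    return [
        {"cycle": "2023", "rating": 4, "summary": "Consistently exceeded delivery goals."},
        {"cycle": "2024", "rating": 4, "summary": "Strong collaboration across teams."},
    ]


def build_bias_check_prompt(employee_id: str, new_feedback: str) -> str:
    """Combine review history with new manager feedback into a single bias-check prompt."""
    history = "\n".join(
        f"- {r['cycle']}: rating {r['rating']}, {r['summary']}"
        for r in fetch_past_reviews(employee_id)
    )
    return (
        "Compare the new manager feedback below with the employee's review history. "
        "Note any unexplained drop in tone or rating, personality-focused rather than "
        "behavior-focused language, or other signs of potential bias.\n\n"
        f"History:\n{history}\n\nNew feedback:\n{new_feedback}"
    )


# The resulting prompt would be sent to the same internal model as in the previous
# sketch, with the output treated as a flag for human review, not a verdict.
```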
AI pitfalls
For AI to work, it has to be used well. But that’s not always the case.
Instead, it’s more commonly used “as an ignorant ghost-writer assistant, writing reviews for individuals based on their basic prompts,” said Summers. “This kind of AI slop is incredibly dangerous from a morale perspective because it can sound authoritative and accurate but lacks the examples and detail that an individual requires for behavior change.”
There are also “serious legal and compliance questions that HR leaders cannot afford to ignore,” Cassat said.
That’s because AI systems learn from historical data, and if past reviews favored people from certain demographics, the algorithms may replicate or even amplify those biases.
“This exposes employers to Title VII, ADA and Age Discrimination in Employment Act claims,” he said. “Even ‘neutral’ algorithms can create disparate impact if they disadvantage protected groups.” He added that using a third-party vendor’s AI tool does not shield an employer from liability.
Using AI improperly could also run afoul of state and local laws. New York City and California, for example, have passed laws creating obligations for employers that use AI in hiring and promotion decisions, while other states have passed or proposed laws targeting algorithmic bias in high-risk employment decisions, which include performance reviews.
“Compliance with these laws will require implementation of risk management and disclosure processes, creating significant administrative burdens for HR teams,” Cassat said.
Data privacy should be a concern as well: “AI tools designed to enhance performance reviews may rely on the use of personal employee information and any misuse could subject the organization to privacy related claims, including those arising from information security breaches,” he said. He added that “employees may have rights to know how their data is used in the context of employer conducted performance reviews.”