A year after generative AI burst onto the market and into the workplace, employees with experience using the technology report new anxieties fueled by a lack of guidance from leaders, according to survey results from Ernst & Young.
About two-thirds (65%) of the 1,000 office and desk workers surveyed said they're anxious about not knowing how to use AI ethically. More than 3 in 4 are concerned about legal risks, and a similar share are anxious about cybersecurity risks, according to the findings, released Dec. 6.
Respondents said they would be more comfortable if workers at all levels were involved in their company's adoption of AI and if senior leadership promoted responsible and ethical use of the tech, the findings showed. "Employees play a crucial role in the successful integration of new technologies, so leaders must prioritize alleviating fear-based obstacles for their organization to harness the full potential of AI," EY said in a statement.
Despite their concerns, employees are open to AI, the survey found: 4 in 5 see its value at work and believe it will make them more efficient, more productive and able to focus on higher-value work.
But organizational policy has yet to catch up, research shows. Only 26% of organizations have a workforce policy related to generative AI, and roughly one-quarter are currently working toward establishing one, according to Conference Board survey results published in September.
Without clear guidelines, there are risks to adopting generative AI, including privacy and security concerns that come with feeding the tech sensitive data, experts have said. Policies should give employees guidance on the input that can go into the model, how outputs should be used and the ethical implications of that use, they recommended.
Employers also should be guided by workers’ needs when deciding how and where they use automation and AI, according to a March recommendation by consulting firm Bain & Co. For example, employees may be in the best position to know which processes could be improved with AI and could be encouraged to reimagine how work gets done, the firm said.
AI can also help talent acquisition teams reach untapped talent, such as people with disabilities, those without degrees or individuals who were formerly incarcerated, an executive with the Society for Human Resource Management recently told HR Dive.
AI can do this without creating the bias against job candidates from marginalized backgrounds that many, including the U.S. Equal Employment Opportunity Commission, have warned about, the executive said. Anonymizing job applications is one way it could help: as employers move toward skills-based hiring, algorithms can surface candidates with the relevant skills while the human reviewer knows nothing about them except that they have those skills, the exec explained.
As regulators and C-suite leaders struggle to keep up with generative AI, “it’s causing a sense of discontinuity, confusion, and even a loss of control among employees,” Dan Diasio, EY Global Artificial Intelligence Consulting Leader, said in a statement. “Leaders must keep employees at the center and help overcome fear-based barriers to usher in a new era of productivity and growth,” Diasio urged.