How companies communicate their AI strategies can influence public opinion

A majority of Americans are still hesitant about using artificial intelligence in business, either expressing neutral opinions or saying it does more harm than good, according to an Aug. 28 report from Bentley University and Gallup.

At the same time, some strategic business practices could help, the survey respondents said, such as providing transparency about how AI is used and clarity around data privacy and security concerns.

“Although fewer Americans today see AI as harmful than did so a year ago, they are still firmly wary of how it is being used in a multitude of settings, including the workplace,” a press release about the report said. “While Americans tend to be less skeptical of AI when they know more about it, providing transparency and being clear — rather than offering education — may benefit businesses more if they are trying to convince the public that they can use AI responsibly.”

In a survey of 5,835 U.S. adults, 56% said they believe AI has a net neutral effect, while 31% believe it’s more harmful than helpful and 13% believe it’s more beneficial. In the past year, the percentage of those who believe AI is more harmful has dropped from 40%, with most of the shift occurring among ages 30 and older.

However, major concerns remain prevalent. About three-quarters said AI will cut jobs in the U.S. during the next 10 years, on par with last year's poll. Likewise, 77% said they don't trust businesses to use AI responsibly, including nearly 70% of those who described themselves as extremely knowledgeable about AI.

In addition, 85% of adults said they are concerned about using AI for hiring decisions.

To address these concerns, 57% of survey respondents said companies should be transparent about how AI is being used in business practices, the only strategy selected by more than 34% of respondents, the report said.

Beyond that, about a quarter of respondents said businesses should report the number of jobs lost or created due to AI and communicate the actions being taken to reduce AI-related risks.

So far, employers remain split on using generative AI tools for HR practices, particularly given the legal risks, according to a Littler Mendelson survey. Most of the respondents, including in-house lawyers, HR pros and executives, expressed concerns about complying with state, federal and international data protection and information security laws, as well as with employment law enforcement more generally.

The Department of Labor has said in guidance documents that employers should implement AI tools with transparency and incorporate worker input. As part of this, DOL recommended that AI be designed, developed and trained in a way that protects workers and that employers have “clear governance systems, procedures, human oversight and evaluation processes” in place.