AI output will require increasing oversight, workers report

Dive Brief:

  • Only 17% of U.S. adults said workplace artificial intelligence is reliable without human oversight, according to the recently released Connext Global 2026 AI Oversight Report. Thirty-five percent said reliability came from “AI plus light review” and an additional 35% said reliability came from “AI plus dedicated oversight.” 
  • The need for human review is increasing, respondents predicted. Nearly two-thirds of those surveyed said human review of AI will increase at least somewhat. Additionally, 28% said AI needs attention almost every time, while 54% said “sometimes.”
  • AI output needs frequent fixes, the report found, with only 37% of respondents saying AI is right without fixes “most of the time.” Forty-five percent said AI was right only sometimes, 16% said AI is rarely right and 2% said AI is almost never right.

Dive Insight:

The Connext report found that the time it takes to fix AI mistakes eats into whatever time was saved by using AI. When AI output needs to be corrected, nearly half of respondents said it takes about the same time as doing a task manually and 11% said it actually takes more time. 

A January report from enterprise AI platform Workday also found that almost 40% of what appeared to be AI productivity gains in the workplace were being lost to rework and low-quality output, with the highest levels of AI-related rework (38%) reported by human resource workers.

“AI can be a powerful accelerator, but this research shows most teams are still doing the hard part, making output accurate, complete and ready for real-world use,” Tim Mobley, president and CEO of Connext Global, said in the company’s report. “The opportunity is not just adopting AI, it is building the oversight habits that keep quality high while speed increases.”

The survey was conducted via third-party platform Pollfish in January, and collected responses from 1,000 U.S. adults aged 18 and over who said they used AI in their day-to-day work. According to Connext, the study’s goal was to “clarify the operational reality of workplace AI, especially the human work that surrounds it,” such as oversight, editing, review and recovery.

There’s a clear shift toward what Connext called “AI with a human safety net,” per the report, with only 4% of respondents saying they rarely do follow-up work after using AI. The most common follow-up tasks were editing or fixing AI output (42%) and review or approval (34%).

Meanwhile, respondents said that AI sometimes left out important details or context (42%) or created extra work requiring fixes or rework (32%). Another 32% said AI sounded confident in its responses but turned out to be wrong. In addition, 19% said AI made a customer situation worse.