- AI implementation will expand enterprise skill sets in some areas and reshape job roles elsewhere, according to a report conducted by Coleman Parkes and commissioned by Dynatrace.
- More than 3 in 5 organizations have already changed the job roles and skills they recruit for as AI reshapes productivity and enterprise operations, according to the survey of 1,300 CIOs, CTOs and other senior technology leaders.
- Nearly 9 in 10 technology leaders expect AI to expand data analytics access to non-tech employees via natural language queries.
Enterprises are heading into the new year with clear goals related to generative AI: limit risks and bring value. Frenzy around the technology has made getting back to basics essential.
“In many cases, people don’t look at the development of AI products and services holistically, so there can be many shortcomings and breakdowns from when a product is procured to implemented, rolled out, through training, observation and use,” said Jodi Forlizzi, faculty lead for the Responsible AI initiative at Carnegie Mellon University’s Block Center for Technology and Society. “There’s a lot of places [where] things can break down.”
The onus is on enterprise technology leaders to work toward solutions that prevent mishaps and promote safety. Organizations across industries have identified skilling up and acquiring AI talent as keys to unlocking tangible value for the business and bridging the gap for employees.
Required employee training can allay fears of AI-generated code infringing on intellectual property rights and mitigate other risks technology leaders have identified.
There are concerns that those early in their careers, or non-subject matter experts, won’t have the skills to discern when a model is factually incorrect. As AI democratizes analytics, the ability to fact-check outputs will need to expand, too.
Nearly all — 98% — of technology leaders are concerned about generative AI’s susceptibility to unintentional bias, error and misinformation, according to Dynatrace’s research.
“You need strong skeptical skills,” said Amanda Stent, professor and director of Colby College’s Davis Institute for AI. “You need to not trust anything that you read or see or hear. Always verify.”