What happens when millions of employees all need to acquire a new skill at the same time?
When ChatGPT arrived in late 2022, we got our answer.
Generative AI is a disruptive technology that organizations large and small, across a wide range of industries, are prioritizing. A McKinsey survey from earlier this year found that 65% of organizations are regularly using genAI, and 75% of respondents expect genAI to lead to disruptive change in their industries.
When a new technology debuts, we may assume that we have to start from zero when building our skill levels. The truth is more complicated: many of the skills required to derive value from genAI are transferable from other technologies and knowledge areas.
Workera is an AI-driven platform for skills verification and development. During the first three quarters of 2024, we collected 8,605 initial skills assessments from 2,945 distinct users representing 24 organizations. That data paints a fascinating picture of employees’ ability levels when approaching genAI for the first time: what can they accomplish without any training, and where should they focus their learning to become proficient as quickly as possible?
The assessment data reveals some stark differences in employee capabilities. Companies must bear these differences in mind as they develop their long-term strategies for genAI.
Let’s look at the numbers.
-
1. Most employees can communicate effectively about generative AI
Workera assesses users across domains: broad categories that each encompass dozens of individual skills. The dataset for this research included 20 distinct domains; in each assessed domain, users receive a score that classifies them as Beginning, Developing, or Accomplished. A user who is Accomplished in a non-technical domain (such as “Communicating about AI”) can recognize, recall, and understand key concepts within that domain; in a more technical domain (for example, “Deep Learning” or “TensorFlow”), an Accomplished user can design, build, and engineer solutions using that domain knowledge.
| Domain | Accomplished | Developing | Beginning |
| --- | --- | --- | --- |
| Communicating about AI | 64% | 30% | 6% |
Our initial assessment data found that more than three in five users (64%) are already Accomplished in less technical domains like Communicating about AI. The “Communicating about AI” domain breaks down into 22 individual skills, ranging from “Explain how machine learning algorithms learn from data” to “Classify AI applications into high- and low-risk categories.” This demonstrates the value of transferable soft skills in learning and communication: employees who can communicate effectively shouldn’t have much trouble communicating about AI.
-
2. Prompt engineering requires special attention
But while users demonstrated an immediate ability to communicate about AI, they fell short in domains more specific to genAI as a tool. Like “Communicating about AI,” “Prompt Engineering” is a non-technical domain; unlike it, however, it requires knowledge that users are unlikely to have acquired elsewhere. Our initial assessment data found that just 33% of users were Accomplished in Prompt Engineering.
| Domain | Accomplished | Developing | Beginning |
| --- | --- | --- | --- |
| Prompt Engineering | 33% | 25% | 41% |
It’s one thing to communicate about AI; it’s another to use it. Organizations need to focus their upskilling on the domains that help employees use genAI tools and derive value from them. Prompt engineering is essential because it is how users interface with tools like ChatGPT; employees without this skill will struggle to get the most out of genAI.
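To make the skill concrete, consider a minimal sketch (illustrative only, not part of Workera’s assessment) of what prompt engineering adds to a bare request. The `build_prompt` helper and its parameter names are hypothetical; the point is that an engineered prompt pins down persona, context, and output format where a vague prompt leaves the model guessing:

```python
def build_prompt(task: str, role: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt: persona, background, task, and format."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond as {output_format}."
    )

# A bare prompt specifies only the task...
vague = "Summarize this quarterly report."

# ...while an engineered prompt also fixes audience, context, and format.
engineered = build_prompt(
    task="Summarize this quarterly report.",
    role="a financial analyst writing for non-technical executives",
    context="The report covers Q3 revenue, churn, and hiring.",
    output_format="three plain-language bullet points",
)
```

The same task string produces very different outputs depending on the scaffolding around it, which is precisely the knowledge most first-time users lack.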
-
3. Employees lag behind in AI accountability
One of the most widespread concerns around the rise of AI centers on accountability. Are people accountable for the AI systems they’re building and using? Can they explain the impact of AI models in their industries and design features with the appropriate disclosures built-in? Our “AI Accountability” domain assesses these abilities — and the initial assessment data is concerning.
| Domain | Accomplished | Developing | Beginning |
| --- | --- | --- | --- |
| AI Accountability | 17% | 62% | 21% |
Just 17% of users are Accomplished in AI Accountability on first assessment, demonstrating that work needs to be done before employees can safely build and use AI systems at scale. The risks associated with genAI should be taken seriously: a recent analysis in Harvard Business Review classifies them as misuse, misapplication, misrepresentation, and misadventure. Organizations must ensure their employees have a firm handle on AI Accountability before turning them loose with AI tools.
-
4. Technical capabilities are less predictable
It makes sense that employees arrive with existing ability in soft, non-technical skills. But what can we expect in the highly technical skills that apply to AI? The results vary widely, and the variation is particularly pronounced in newer domains: the more novel a skill, the less likely an employee is to have already achieved competence.
| Domain | Accomplished | Developing | Beginning |
| --- | --- | --- | --- |
| MLOps Culture | 31% | 25% | 45% |
“MLOps Culture” assesses the user’s ability to reliably integrate machine learning throughout the engineering process. Our data shows that 31% of users are already Accomplished in their initial assessment — demonstrating their ability with individual skills like “Identify potential data leakage in ML models” and “Optimize ML models for scalability in production environments.” It’s worth noting that this domain has a smaller sample size — users are more likely to assess themselves in a broad domain like “ChatGPT” than they are in a highly specific, technical domain.
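To give a flavor of one of those skills, here is an illustrative sketch (not Workera code) of the classic data-leakage mistake that “Identify potential data leakage in ML models” tests for: computing normalization statistics on the full dataset before splitting, so information from held-out rows leaks into the training features. All names here are invented for the example:

```python
def mean(xs):
    return sum(xs) / len(xs)

data = [1.0, 2.0, 3.0, 100.0]   # the last value is the held-out "test" row
train, test = data[:3], data[3:]

# Leaky: training features are centered using a mean that saw the test row.
leaky_mu = mean(data)
leaky_train = [x - leaky_mu for x in train]

# Correct: normalization statistics come from the training split only.
safe_mu = mean(train)
safe_train = [x - safe_mu for x in train]

# The leaky mean (26.5) is dragged far from the training mean (2.0) by the
# test outlier, silently changing what the model trains on.
```

An Accomplished user spots that the first version evaluates the model on data that already influenced its inputs.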
| Domain | Accomplished | Developing | Beginning |
| --- | --- | --- | --- |
| Machine Learning | 15% | 33% | 52% |
| AI Explainability | 10% | 34% | 55% |
Other technical domains demonstrate an elevated need for targeted upskilling. In “Machine Learning,” just 15% of users are Accomplished on initial assessment, while more than half (52%) are classified as Beginning. For “AI Explainability,” which goes far beyond communicating about AI to assess the user’s ability to explain how AI systems work in detail, only 10% of users are Accomplished on initial assessment; 55% are Beginning.
Overall, these results reflect significant skills gaps across technical domains. There is some good news, however: many of these skills are strongly correlated, and an employee who is Accomplished in MLOps today will likely have an easier time upskilling in related domains like AI Explainability and Machine Learning.
What can business and talent leaders take away from this data? Our initial assessments offer clues about where organizations should invest their training time for maximum impact. Do some employees need to develop foundational skills in ChatGPT and Communicating about AI? Absolutely. But the majority already have a handle on the basics.
Organizations can accelerate their AI transformation by focusing on the areas where they need the most help. Employees may waste time by studying information they already know; organizations need to identify the domains in which they need to rapidly upskill their workforce. Helping employees to catch up in these key areas will lead to a broad-based capability across the organization — maximizing the value these companies derive from their AI investments.