What are the key challenges businesses face when using generative AI?



Part of the series: Navigating generative AI’s challenges and how responsible AI can help.

There are countless use cases for generative AI, from automating high-volume tasks to enhancing personalization and supporting the creation of marketing content. After the release of ChatGPT in November 2022, major tech companies like Google, IBM, Microsoft and Cisco quickly incorporated generative AI into their product stacks, while Dell joined forces with Nvidia to create Project Helix, an initiative enabling businesses to build and deploy on-premises generative AI applications.

Cisco has also used generative AI to make meetings more productive and improve the work environment, introducing automatic meeting summaries and adding new AI features to its Security Cloud.

Aside from the major tech giants, we’ve recently seen companies like Canva add AI-assisted tools and a ChatGPT plugin to their armory, while Getty Images teamed up with Nvidia to launch Generative AI by Getty Images, a tool that allows customers to generate new images using Getty’s extensive library of photos. The list of businesses adopting generative AI is growing rapidly, and there is plenty of buzz around future possibilities. But these technologies come with new risks and challenges, ones that businesses will need to overcome if they’re to ride the generative AI wave successfully.

Things to consider when using generative AI in the workplace

To understand how businesses can implement generative AI responsibly, it’s necessary to first understand the potential risks. The most obvious, and most publicized, is that generative AI models are trained on massive datasets of text and code, which can reflect the biases that exist in the real world. This can lead to generative AI models producing biased or discriminatory content.

“Despite concerns around the use of generative AI, the risks of not using AI tools to increase productivity and streamline workflows could arguably be worse.”


Leaders may also encounter anxiety within teams around generative AI’s potential to automate tasks currently performed by humans, creating a fear that this innovation could lead to job displacement and unemployment. Despite concerns around how and when generative AI should be used in a business, the risks to leadership of not using AI tools to increase productivity and streamline workflows could arguably be worse.

As leaders begin to integrate generative AI into their workflows, these are some of the key risks to consider.

1. Biased or misleading content 

Like conventional AI, generative AI displays bias, largely because of bias in its training data. If not used carefully, AI tools can produce misinformation or biased content, which is particularly problematic when that content is published under a business’s name.

2. Indecipherable black boxes

If you’re using a third-party AI model, it’s unlikely that your teams will have access to its inner workings. This means leaders may find it difficult to explain why the model reached a specific conclusion or delivered a particular output.

3. Models have been created using data without consent

We may not know the source of the data underlying a generative AI model. It could, for example, reproduce copyrighted images, text or code. There’s a chance that this could leave businesses in hot water from a legal perspective.

4. Accidentally sharing IP

With the mainstream democratization of AI tools, leaks of confidential information are becoming more common. This scenario already happened at Samsung in 2023, where workers accidentally leaked sensitive data while using ChatGPT to help with their tasks.
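One way teams reduce this risk is to screen prompts before they ever reach a third-party model. The sketch below is a minimal, illustrative example in Python, not a production control: the patterns, the redact helper, and the send_to_llm placeholder are all assumptions made for the sake of the example; real deployments would lean on dedicated secret scanners and data loss prevention tooling.

```python
import re

# Illustrative patterns only; real deployments would use far more robust
# detection (secret scanners, DLP tooling, named-entity recognition, etc.).
SENSITIVE_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),           # strings shaped like API keys
    re.compile(r"\b(?:internal|confidential)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-like numbers
]

def redact(prompt: str) -> tuple[str, bool]:
    """Replace anything that matches a sensitive pattern with [REDACTED]."""
    flagged = False
    for pattern in SENSITIVE_PATTERNS:
        prompt, count = pattern.subn("[REDACTED]", prompt)
        flagged = flagged or count > 0
    return prompt, flagged

def send_to_llm(prompt: str) -> str:
    # Placeholder for whichever third-party model call a team actually uses.
    raise NotImplementedError

def safe_completion(prompt: str) -> str:
    clean_prompt, flagged = redact(prompt)
    if flagged:
        # Log or block rather than silently forwarding sensitive material.
        print("Warning: prompt contained material that looked sensitive; it was redacted.")
    return send_to_llm(clean_prompt)
```

Even a naive filter like this makes the failure mode visible: the risk lies less in the model itself and more in what employees paste into it.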

5. New cyber risks

Unsurprisingly, new cyber risks are also becoming more prevalent with the advent of generative AI. Nefarious actors are now using these technologies to create deepfakes and new forms of cyber attack at speed and scale.

6. AI hallucinations 

AI hallucinations occur when a model confidently generates content that has no basis in fact or in its source material. The problem is that these outputs often sound plausible and authoritative, so they’re not easy to spot. This can result in the unintentional spread of misinformation or inaccurate information, which could damage a business’s reputation.
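Because hallucinations read as confident prose, catching them usually means verifying outputs against something external before they are published. The sketch below is one naive illustration in Python: the allow list of approved domains and the example draft are hypothetical, and checking cited URLs covers only a small slice of the problem.

```python
import re
from urllib.parse import urlparse

# Hypothetical allow list of sources a business trusts its content to cite.
APPROVED_DOMAINS = {"example.com", "docs.example.com"}

URL_PATTERN = re.compile(r"https?://[^\s)\"']+")

def flag_unverified_citations(generated_text: str) -> list[str]:
    """Return any cited URL whose domain is not on the approved list.

    A flagged URL is not proof of a hallucination, only a cue that a human
    should verify the claim before the text is published.
    """
    flagged = []
    for url in URL_PATTERN.findall(generated_text):
        domain = urlparse(url).netloc.lower()
        if domain not in APPROVED_DOMAINS:
            flagged.append(url)
    return flagged

draft = "Our study (see https://made-up-source.org/report) shows a 40% gain."
for url in flag_unverified_citations(draft):
    print(f"Verify before publishing: {url}")
```

A check like this does not judge whether a claim is true; it simply forces a human review step wherever the model cites something outside a known, trusted set of sources.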

To manage the risks of generative AI, whether they arise within the company itself or from malicious actors outside the organization, leaders need “responsible” AI practices. But what exactly are these?

In our next article, we will discuss how businesses can use generative AI responsibly. 
