
What Does Responsible Generative AI Mean for Today’s Organizations?

Over the past year, businesses large and small have looked for ways to enhance their operations with generative AI. From software developers integrating GitHub Copilot into their workflows to creatives using ChatGPT or DALL-E to generate text and images, generative AI is opening up new opportunities for efficiency and scale. Recent research from McKinsey states that “current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70 percent of employees’ time today.”

But as businesses begin to expand and standardize their use of generative AI, they must also evaluate the unintended consequences of these new technologies. AI-backed tools are evolving rapidly, and they carry significant risks to both businesses and society as a whole if they aren’t deployed and managed carefully.

Employees must understand the ethical considerations involved in using generative AI, as well as any regulations that could apply to their work. Organizations, in turn, need to develop and implement a strategy for “responsible AI.” But what does that mean in practice today, particularly when the technology is changing on a monthly basis?

Here’s how businesses should think about generative AI risks and how to develop a future-proof strategy for responsible AI.

Generative AI risks


To understand what it means for businesses to practice responsible AI, we first need to understand the potential risks. Generative AI models are trained on massive datasets of text and code, and these datasets can frequently reflect the biases that exist in the real world — leading to content that is biased at best and discriminatory at worst.

Leaders may also find their employees anxious about using generative AI because of the potential for AI to automate tasks currently performed by humans. Is it ethical to begin using a technology that could lead to job displacement and unemployment?

These fears are legitimate. But despite these concerns, the risks of not using AI tools to increase productivity and streamline workflows could be even worse. A company that fails to integrate AI could soon find itself overtaken by a competitor and put out of business.

As leaders consider how to implement generative AI responsibly, here are some of the key risks to consider:

  • Biased/misleading content: If the data underlying an AI model is biased, then the model’s outputs will be biased as well. This can be particularly dangerous for organizations if content is generated under the company’s brand identity. 
  • Black boxes: Organizations using third-party AI tools will likely be unable to access or inspect the inner workings of the model. Leaders may find it difficult to explain — to customers, regulators, or other stakeholders — how the business reached a specific conclusion or delivered a particular output.
  • Models trained on data used without consent: It may be impossible to know what data was used to train a particular model. If a model was trained on copyrighted or stolen material, it could put the organization at legal risk. In late 2023, the New York Times sued Microsoft and OpenAI, alleging copyright infringement over the use of Times articles to train their chatbots; we should expect more of these cases to emerge before regulators catch up.
  • Accidentally sharing confidential information: As generative AI tools become more widespread, we’re also seeing leaks of confidential information. In 2023, Samsung employees accidentally leaked sensitive data while using ChatGPT — leading the company to ban the use of generative AI tools entirely.
  • AI hallucinations: Whether or not the underlying data supports it, generative AI will produce a seemingly authoritative answer. This can result in the unintentional spread of misinformation, which reflects badly on a company’s reputation. In the best case, an organization may be lightly mocked online for an entertaining hallucination; in the worst case, the misinformation could lead to real-world consequences and potential legal liability.


Responsible AI in practice


Even as employees increase their use of generative AI tools, many lack the skills and training to use those tools responsibly. Workera users who completed assessments in our Generative AI and ChatGPT domains were more likely to demonstrate strong skills in subdomains like “ChatGPT Uses and Applications” (66% strong skills) than in “Ethical Considerations” (54% strong skills).

To improve the responsible AI capabilities of their workforce, organizations must develop a solid framework that encompasses the many facets of responsible, ethical AI use. These are some of the most important areas to address when applying responsible AI in a business setting:

  • Safety: To protect itself from potential legal liability, a business must set strong standards for data provenance and management. Whenever possible, organizations should train their generative AI tools on zero-party and first-party data, which helps ensure the data is accurate, original, and trusted. Organizations can also mitigate risk by keeping data up to date and well labeled: models may produce biased or inaccurate results if the underlying data is outdated or skewed, and ongoing practices to review and refine datasets help ensure the outputs can be trusted.

  • Security: An organization’s ultimate responsibility is to protect its sensitive data. Companies should take a proactive approach to cybersecurity, privacy, and data governance to ensure that sensitive data doesn’t find its way into generative AI prompts or training data (see the sketch after this list).

  • Transparency and explainability: Responsible AI use must be founded on trust and transparency. This includes gaining consent to use data in generative AI models and ensuring that customers know when they’re interacting with an AI-backed system. Organizations should also take responsibility for identifying when AI has been used to generate content, and businesses must be able to explain to customers how their generative AI-backed products and services work. This level of transparency is expected to be required by future AI regulations.

  • Fairness: Many of the most widely used large language models have been trained on data scraped from the open internet, which means the underlying datasets are already infused with human bias. Responsible AI requires users to be aware of and responsive to this bias. Data science teams must work proactively to recognize the challenges presented by biased AI tools, manage the resulting risks, and retrain solutions to minimize bias and other ethical issues.

  • Sustainability: Today’s generative AI tools consume massive amounts of energy and water; according to the Guardian, early iterations of ChatGPT would use 500 milliliters of water to respond to just 20 prompts. When developing their own models, organizations should work to minimize model size while maximizing accuracy with large amounts of high-quality data. Reducing the sprawl and size of a generative AI tool will also reduce its carbon footprint.
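
To make the “Security” point above concrete, here is a minimal sketch in Python (standard library only) of how an organization might screen prompts for obviously sensitive strings before they are sent to an external generative AI service. The pattern names and the redact_prompt helper are illustrative assumptions rather than any specific product’s API; a production deployment would rely on vetted data-loss-prevention tooling and policies tuned to the organization, not a handful of regular expressions.

    import re

    # Illustrative patterns only -- real deployments should use vetted
    # data-loss-prevention tooling and organization-specific policies.
    SENSITIVE_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9_]{16,}\b"),
        "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact_prompt(prompt: str) -> tuple[str, list[str]]:
        """Replace likely-sensitive substrings before a prompt leaves the company.

        Returns the redacted prompt and the names of the patterns that matched,
        so the event can be logged and reviewed by a governance team.
        """
        findings = []
        for name, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(prompt):
                findings.append(name)
                prompt = pattern.sub(f"[REDACTED {name.upper()}]", prompt)
        return prompt, findings

    if __name__ == "__main__":
        raw = "Summarize this ticket from jane.doe@example.com; the key is sk_live_abcdef1234567890."
        safe, flags = redact_prompt(raw)
        print(safe)   # sensitive substrings replaced with placeholders
        print(flags)  # ['email', 'api_key'] -- worth logging for review

Even a lightweight gate like this, placed in front of approved generative AI tools, gives security teams a record of near-misses and helps prevent the kind of accidental disclosure described in the risks above.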


The future of AI regulation


As companies implement long-term strategies for responsible AI, it’s worth noting that their efforts will eventually be shaped by regulation. In the European Union, the Artificial Intelligence Act (AI Act) aims to classify and regulate AI applications according to their potential to cause harm. These classifications range from minimal risk (e.g., AI used in video games) to unacceptable risk (e.g., AI used for social scoring, ranking people based on personal characteristics).

We are likely to see other regulations emerge in response to generative AI and other AI applications. In October 2022, the White House published its “Blueprint for an AI Bill of Rights,” which aims to “protect the American public in the age of artificial intelligence.” While this vision has yet to come to fruition as legislation, we should expect the federal government and state regulators to issue new guidance in the months and years to come.

For organizations implementing generative AI, the arrival of regulation is all the more reason to take a proactive approach to responsible AI. Companies that already maintain good habits around data hygiene, cybersecurity, and AI fairness will find it much easier to adapt to new rules than those that have taken a relaxed approach to their AI experiments.


To learn more about how Workera helps organizations develop responsible AI skills among their employees, request a demo of our Skills Intelligence Platform.
