How can businesses use generative AI responsibly?
Part of a series: Navigating generative AI’s challenges and how responsible AI can help.
As might be expected, each business has its own definition of “responsible” AI usage. For us at Workera, responsible AI is a methodology for designing, developing, and deploying AI in a trusted and ethical way. Our mission to empower leaders and learners across the world depends on our data being as accurate and unbiased as possible, which means we need internal practices to guide our use cases and external legislation to set the bar for both our own employees and our clients.
The good news is that generative AI is very much in its infancy, which means companies are in a good position to build it into their policies right now by introducing responsible AI practices at this early stage in the technology’s evolution. In doing so, leaders can ensure that fundamental principles are baked into the design process from the start. Businesses not only benefit sooner, but also save themselves from hurriedly closing gaps later, once systems are fully up and running.
How to create and implement responsible generative AI practices
Implementing responsible AI requires a solid framework that the entire business can buy into. If a business already has a strong responsible AI programme, its leaders have likely noted the challenges that generative AI brings. However, there are still some essential areas to consider and key steps to take when applying responsible AI in a business.
Leaders may also encounter anxiety within teams, as generative AI models have the potential to automate tasks currently performed by humans, creating a fear that this innovation could lead to job displacement and unemployment. Despite concerns around how and when generative AI will be used in a business, the risks of not using AI tools to increase productivity and streamline workflows could arguably be worse.
Safety
Successful safety initiatives tend to come down to strong data provenance and management. Where possible, it makes sense to train generative AI tools on zero-party and first-party data, which helps ensure models are accurate, original and trusted.
It’s also important to keep data up to date and to check that it is well labeled. Models may produce inaccurate outputs or propagate bias if the underlying data is outdated, inaccurate or biased in the first place.
To mitigate safety risks in this area, leaders can review the datasets and documents that are being used to train models. They can also implement ongoing practices for removing biased, false or inaccurate information.
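As one illustration of what such an ongoing review practice might look like, the minimal sketch below flags training records that are unlabeled or haven’t been checked recently, so a person can review them before they reach a model. The record fields and the one-year review window are assumptions made for the example, not a prescribed schema.

```python
from datetime import datetime, timedelta

# Hypothetical training records; the field names ("text", "label",
# "last_updated") are illustrative, not a prescribed schema.
records = [
    {"text": "Course A covers Python basics.", "label": "curriculum", "last_updated": "2024-03-01"},
    {"text": "Outdated pricing information.", "label": None, "last_updated": "2019-06-15"},
]

MAX_AGE = timedelta(days=365)  # flag anything not reviewed in the last year


def needs_review(record, now=None):
    """Flag records that are unlabeled or stale so a human can check them."""
    now = now or datetime.now()
    if not record.get("label"):
        return True  # missing or empty label
    age = now - datetime.fromisoformat(record["last_updated"])
    return age > MAX_AGE  # data is older than the review window


flagged = [r for r in records if needs_review(r)]
print(f"{len(flagged)} of {len(records)} records need review before training")
```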
Security
To avoid harm, it’s vital to protect the privacy of any information in a company’s data that could identify individuals at the organization, since this data could end up being used for generative AI training.
Updating cyber security, privacy protocols and data governance will also help to reduce the risk of malicious actors using generative AI to unearth identities or carry out cyber attacks.
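As a minimal sketch of the kind of safeguard this implies, the example below strips obvious personal identifiers from text before it is reused for training. The regex patterns are deliberately simple illustrations; a production pipeline would normally pair this with dedicated PII-detection tooling.

```python
import re

# Simple regex-based redaction of obvious identifiers before text is reused
# for model training. The patterns are intentionally minimal examples.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def redact(text: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text


print(redact("Contact jane.doe@example.com or +1 (555) 123-4567 for access."))
# -> Contact [EMAIL] or [PHONE] for access.
```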
Transparency and Explainability
While there’s a balance to strike between being open and maintaining security, transparency is critical to any responsible AI policy. This includes gaining consent to use data in generative AI models, and ensuring that customers know when they’re interacting with an AI system.
Similarly, when AI has been used to generate content, it’s important that this can be identified easily. There are several ways to do this, including watermarks and clear customer messaging.
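One lightweight way to make that identification explicit, sketched below with assumed field names and notice wording, is to attach a visible disclosure and basic provenance metadata to anything a model produces; watermarking and content-credential schemes apply the same principle more robustly.

```python
import json
from datetime import datetime, timezone


def label_ai_content(text: str, model_name: str) -> dict:
    """Wrap generated text with a visible notice and a simple provenance record.

    The field names and notice wording are illustrative assumptions,
    not a standard.
    """
    return {
        "content": text,
        "notice": "This content was generated with the help of AI.",
        "provenance": {
            "generated_by": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }


print(json.dumps(label_ai_content("Draft summary of Q3 results.", "example-llm-v1"), indent=2))
```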
Alongside an open dialogue, businesses should be in a position to explain to customers how their generative AI service or product works. This level of transparency is something the EU AI Act and future regulation will require from all businesses, so it’s a worthwhile investment at this stage.
Fairness
Bias is built into generative AI, as most LLMs have been trained on data from the open internet. This data is already infused with human bias: for example, there is more content in English than in any other language, a legacy of where the technology has been developed.
Being aware of and responsive to this bias will be an essential part of any responsible AI policy. At the same time, data science teams need time to recognise these challenges, manage risk and retrain solutions to reduce the risk of bias and similar ethical issues.
Sustainability
Most businesses today have policies around sustainability and environmental action. Using generative AI tools has an environmental impact, and leaders may need to ensure these impacts are incorporated into an organization’s sustainability policies.
Today’s LLMs have hundreds of billions of parameters, and training them consumes a substantial amount of energy and water. According to a recent article in The Guardian, researchers estimated that the training of GPT-3 led to emissions of more than 550 tonnes of carbon dioxide equivalent, about the same as 550 return flights between New York and San Francisco.
The same report also states that the early iterations of ChatGPT would use 500ml of water – the same as a standard-sized consumer water bottle – in responding to just 20 prompts.
When developing their own models, organizations should consider minimizing model size, while maximizing accuracy with large amounts of precise, high-quality data. When it comes to delivering a great output, larger doesn’t always mean better.
Similarly, if companies are using third-party models, leaders might encourage team members to consider when and how they’re using generative AI. Important questions need to be asked: Does AI need to be used for the process or job? For example, you can use ChatGPT’s new Code Interpreter to create GIFs or generate QR codes – but there are perfectly good non-AI tools that deliver the same results.
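To make that last point concrete, here is a minimal non-AI alternative for one of the examples above: generating a QR code with the open-source qrcode package (assuming it is installed with Pillow support) rather than with a generative model.

```python
# Non-AI QR code generation with the open-source "qrcode" package
# (install with `pip install "qrcode[pil]"`); no model inference involved.
import qrcode

img = qrcode.make("https://workera.ai")  # build the QR code for a URL
img.save("workera_qr.png")               # write it out as a PNG
```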
Our next article highlights the significance of legal adherence and upskilling.
Be sure to sign up so you won't miss it.