Adhering to legislation and upskilling teams

Part of the series: Navigating generative AI’s challenges and how responsible AI can help.

Having frameworks and policies in place is vital, but the successful implementation of responsible AI also relies on a team that understands the technology and gets behind an organization’s policies. 

According to McKinsey, organizations that combine “generative AI with all other technologies” could see “work automation that could add 0.2 to 3.3 percentage points annually to productivity growth”. However, McKinsey also notes that “workers will need support in learning new skills and some will change occupations. If worker transitions and other risks can be managed, generative AI could contribute substantially to economic growth.”

The importance of educating and upskilling teams
Developing knowledge and bringing teams up to speed on responsible AI will be fundamental in ensuring success. Depending on their roles, each team in the business will require a different amount of training or upskilling. For example, team members who are using generative AI should, at the very least, understand the basics of how it works, when to use it, and how to modify or verify outputs. Compliance and legal teams, meanwhile, will need the skills to identify IP violations and other risks in this area.

Alongside upskilling, businesses are also considering the longer-term impact of generative AI on teams and resources. Generative AI should be used to enhance human capabilities, rather than replace them. This means leaders must consider integrating generative AI in a way that enhances, rather than diminishes, the skills of employees.

Staying aware of the regulatory landscape 

In the not-too-distant future, having responsible AI policies will be more than a moral obligation – it will be a legal one.

Policymakers worldwide are increasingly issuing guidelines on the development and use of artificial intelligence. At the moment, this guidance is a patchwork, rather than a full regulatory framework. However, new obligations and requirements are emerging – especially in terms of privacy, AI bias, and governance.

All over the world, we can see governments moving in the direction of regulation. The EU is currently finalizing the AI Act, the world’s first comprehensive AI law, which is due to land in mid-2024. Last year, the UK government published a policy paper outlining its vision for a “pro-innovation approach to regulating AI”. The US has issued an executive order on safe, secure, and trustworthy AI, while Japan, Canada, New Zealand, Singapore, Australia, and governments across the Middle East are all developing their own frameworks.

What can we expect from the EU AI Act and similar regulations?

The EU AI Act is still in development. The European Parliament is negotiating what the final legislation will look like, and how it will affect business.

The cornerstone of the act is believed to be a classification system. This will determine the level of risk an AI system could pose to the health, safety or fundamental rights of a person. For example, AI systems with minimal risk – like spam filters – can be used with few requirements other than transparency obligations, while systems seen as posing an unacceptable risk – like government social scoring – would be prohibited with few exceptions.

It’s likely that AI systems will also be mandated to adhere to various development and use standards. These will include accuracy, safety, non-discrimination, security, transparency, and data privacy. 

In addition to the AI Act, we’ll see other regulations appear in response to generative AI. The UK’s Data Protection and Digital Information Bill is set to change the rules on how personal data is processed by automated systems. It will demand increased transparency and accountability, particularly in relation to automated decision-making.

How businesses can prepare for AI regulation

While these regulatory frameworks are still to be finalized, there are enough common themes for businesses to start preparing for future requirements. Taking steps now will put businesses ahead of the game and will ease the compliance burden further down the line.

Of course, having a responsible AI policy is a practical and straightforward move in the right direction. Implementing policies now on fairness, transparency, risk management, bias, and dataset integrity will help pave the way for the regulations that lie ahead.

The advantages of responsible AI for businesses

We’ve outlined the key elements of incorporating responsible generative AI into a business – but what benefits can leaders expect from taking these important steps? Aside from doing the right thing, what will businesses and wider society gain from more responsible uses of generative AI?

  • Building trust with stakeholders. Responsible AI practices can help organizations build trust with their customers, employees, and stakeholders – leading to stronger brand loyalty and enhanced reputation.

  • More confident decision-making. Responsible AI systems give a business greater confidence that the decisions its technology supports are free from bias and misinformation.

  • Reducing legal and reputational risks. Unintentional lapses or mishandling of generative AI can lead to legal or reputational risks. Responsible AI practices can help mitigate these.

  • Improved productivity. When used responsibly, generative AI can successfully enhance human skills to increase efficiency and productivity. 

  • Encouraging innovation and ongoing success. By creating a safe space for AI experimentation, businesses are more likely to innovate successfully and create long-term value.

As generative AI continues to revolutionize the business landscape, responsible AI is a critical consideration for leaders who want to harness its potential while maintaining ethical standards.

By upskilling teams and designing, developing, and deploying AI in a responsible way, leaders can create a positive impact in their business – as well as society as a whole.

Considering how to implement responsible AI practices and policies while generative AI is still in its infancy will also enable a business to build trust, foster innovation, mitigate risks and be fully prepared for future regulations.
