Opinion

October 13, 2023

AI needs more regulation in Europe, not less

The EU AI Act is a step in the right direction towards making AI safer, but does not go far enough

Toju Duke

4 min read

Earlier this year, a father-of-two took his life after an interaction with an AI chatbot, Eliza, which convinced him it was the only way to save himself — and the planet — from the climate crisis. 

This terrible event demonstrates that large language models (LLMs) can be dangerous unless they are rigorously tested, including through adversarial testing, and fitted with safety guardrails applied by fine-tuning the model on the results of that testing.

Had this been done, Eliza would have produced less false information and might have helped save a human life rather than potentially leading someone to his death. 


The problem

Unfortunately, more LLMs are being developed at breakneck speed without any form of guardrails. Mistral AI, a French AI start-up that recently launched its first large language model within four months of its founding, omitted guardrails and content moderation altogether.

Its chatbot provides details on how to build a bomb, self-harm and harm others, among other harmful outputs.

The owners of this offending LLM admit there’s no content moderation in place. Indeed, Mistral cofounder Arthur Mensch told the Sifted Summit in October that it was up to the developers that use LLMs, not the companies that create them, to make sure that they are used for good. 

When Meta released its BlenderBot 3 in August 2022, it committed very similar offences, but it has since improved, giving better responses over time. 

Other prominent LLMs, such as Bard and ChatGPT, do not answer these sorts of harmful questions, because content moderation and safety guardrails are in place to limit the unsafe, false and incorrect responses LLMs are prone to give.

The EU is the first major regulator globally to produce a draft AI law, the EU AI Act, which aims to provide a legal framework for AI models and applications. It will classify AI systems into three levels of risk (unacceptable, high and limited) and impose restrictions on each. Key among its provisions is the requirement that consumers be made aware when they are interacting with AI. 

Generative AI is given a category of its own: applications like ChatGPT will need to make clear that content has been generated by AI, ensure the application does not produce illegal content, and publish summaries of the copyrighted data used during training. 

The solution

Lack of transparency, otherwise known as the “black box” problem, is a longstanding issue for AI systems because of the many layers of neural networks they are built from, so it’s critical to strive for transparency in as many ways as possible. 

Although there are a few kinks to iron out and some opposing opinions on the timing of regulation, the EU AI Act is a necessary step in the right direction. 

But it is only a step. There are areas that are not covered by the law and need to be. 

It is imperative that the EU leads the way in establishing public trust in AI, especially given that 61% of people are wary of it. Increased trust will lead to increased adoption and, consequently, economic benefits.


The EU must also increase public knowledge of AI, including further upskilling and preparation for the displacement of jobs and tasks the technology will inevitably cause.

Lastly, we must figure out how to protect people from making poor judgements and taking harmful actions based on what they learn from AI or what they use it for. 

Here are a few ways these could be carried out: 

Firstly, the EU should create an associated body for EU AI standards that will empower, educate and equip EU companies to build AI systems that pose minimal risks and harm to members of society, and develop a Responsible AI Framework. 

Secondly, it must develop and run EU-owned AI literacy and educational programmes to upskill Europeans on AI tools and technologies.

Lastly, the EU must support businesses and organisations through the current and impending job disruption caused by AI, by identifying the tasks and roles likely to be automated and then upskilling workers to adopt these technologies while keeping their jobs. 

Carrying out these steps is the only way Europe will build a robust and safe AI ecosystem for the EU and enable further economic growth, workplace satisfaction and improved education. 

Most of us have slowly come to the realisation that AI is here to stay, and it’s critical that the EU leverages the opportunity it offers, beyond the AI Act, for the economic growth of all member states and the safety of its residents.

Toju Duke

Toju Duke is a London-based speaker and advisor on responsible AI. She led responsible AI programs at Google and founded Diverse AI, which champions diversity and inclusion in AI. She is author of Building Responsible AI Algorithms.