Opinion

February 25, 2019

"AI ethics should be a forethought, not an afterthought"

OpenAI's new "dangerous" text-writing tool epitomises the "invent first, worry later" approach.


Carly Minsky


Artificial intelligence research group OpenAI stoked controversy last week by unveiling a text-writing tool that, it says, is too dangerous to release. The group, co-founded by US entrepreneur Elon Musk, warned that the tool could be used for generating fake news, impersonating people, or automating comments on social media.

The tool, known as GPT-2, triggered a debate in Europe and elsewhere over responsible advances in artificial intelligence, and the AI community on Twitter argued fiercely over whether OpenAI had made the right call.


But this isn’t the conversation that needs to be had. Amidst all the hoopla, there was little discussion of a more pressing and fundamental question: how did it get to the point that OpenAI invented something that was, by its own admission, too dangerous to release? What does this say about the technolibertarian wild west of AI research in the US, that the potential abuses of OpenAI’s tool were an afterthought, left to the community at large to deal with?


I can’t imagine the same situation happening in Europe. Here, the cultural attitude to AI and to responsible innovation places far more value on intervening early to preemptively minimise risks. Soon, Europe won’t rely on culture and values alone to protect society from AI risks; formal guidance in this vein is about to be finalised by the European Commission, potentially paving the way for EU regulation. If we learn anything from OpenAI’s very public refusal to release its tool, it’s that the European way could have avoided the situation entirely.

A draft of the European Commission’s Ethics Guidelines for Trustworthy AI emphasises that AI companies must identify the ethical implications of their products before or during development, assess the potential risks and put in place a plan to mitigate them. The authors advocate “AI ethics by design”, as opposed to the more common attitude epitomised by OpenAI: invent first, worry later.

This seems eminently sensible. The responsibility should be on the innovators and inventors, the startups and research groups, to consider potential misuses from the very start of the product proposal and design process.

No one wants to hold back the arc of progress. There is danger in all advancement. But a fundamental goal should be to give all stakeholders, whether that’s businesses, governments or members of the public, the best chance to manage the risks.

AI responsibility is, in a way, analogous to cybersecurity. No one expects that all cyber attacks can be prevented and all the risks nullified, and so a key priority is ensuring that vulnerable parties understand and can manage the risks that do exist.


At any rate, GPT-2 has now been invented. OpenAI says the decision not to release it gives the AI community at large “more time to have a discussion about the implications”. But this is already too late. The tool will no doubt come out eventually; the toothpaste, as they say, is already out of the tube.

The challenge now is for startups and researchers to treat AI responsibility as a forethought, not an afterthought. This has to include engaging directly with those who are most vulnerable to potential misuses of new tools. AI developers need to work with business partners, governments and members of the public to design new tools responsibly and to ensure all stakeholders understand new and increased risks. More channels for community engagement are beginning to surface, including a new AI Sustainability Center in Stockholm and the Ada Lovelace Institute in the UK. All startups with AI ambitions should be proactively engaging with them, not leaving these organisations to clean up their mess.