\Deeptech Opinion/ “AI ethics should be a forethought, not an afterthought”

OpenAI's new "dangerous" text-writing tool epitomises the "invent first, worry later" approach.

By Carly Minsky
Monday 25 February 2019

Artificial intelligence research group OpenAI stoked controversy last week by creating a text-writing tool that is, they say, too dangerous to release. The group, co-founded by US entrepreneur Elon Musk, warned that it could be used for generating fake news, impersonating people, or automating comments on social media.

The tool — known as GPT2 — triggered a debate in Europe and elsewhere over responsible advances in artificial intelligence, and the AI community on Twitter hotly debated whether OpenAI had made the right call.

But this isn’t the conversation that needs to be had. Amidst all the hoopla, there was little discussion of the pressing and fundamental concerns about how it got to the point that OpenAI invented something that was, by their own admission, too dangerous to release.

What does this say about the technolibertarian wild west of AI research in the US, that the potential abuses of OpenAI’s tool were an afterthought left to the community at large to deal with?

I can’t imagine the same situation happening in Europe. The cultural attitude to AI and to responsible innovation places far more value on earlier interventions to proactively and preemptively minimise the risks.

Soon, Europe won’t just rely on its culture and values to protect society from AI risks; formal guidance in this vein is about to be finalised by the European Commission, potentially paving the way for EU regulation. If we learn anything from OpenAI’s very public refusal to release its tool, it’s that the European way could have avoided the situation entirely.

A draft of the European Commission’s Ethics guidelines for trustworthy AI emphasises that AI companies must identify the ethical implications of their products before or during development, assess the potential risks and put in place a plan to mitigate them. The authors advocate for “AI ethics by design” as opposed to the more common attitude epitomised by OpenAI’s “design first, worry later” approach.

This seems eminently sensible. The responsibility should be on the innovators and inventors — startups and research groups — to consider potential misuses from the very start of a product proposal and design process.

No one wants to hold back the arc of progress. There is danger in all advancement. But a fundamental goal should be to give all stakeholders, whether that’s businesses, governments or members of the public, the best chance to manage the risks.

AI responsibility is, in a way, analogous to cybersecurity. No one expects that all cyber attacks can be prevented and all the risks nullified, and so a key priority is ensuring that vulnerable parties understand and can manage the risks that do exist.

At any rate, today, the GPT2 tool has been invented.
OpenAI says the decision not to release it gives the AI community at large “more time to have a discussion about the implications”. But this is already too late. The tool will no doubt come out eventually — the toothpaste is already out of the tube, as they say.

The challenge now is for startups and researchers to treat AI responsibility as a forethought, not an afterthought. This has to include engaging directly with those who are most vulnerable to potential misuses of new tools. AI developers need to engage with business partners, governments and members of the public to collaborate on designing new tools responsibly and to ensure all stakeholders understand new and increased risks.

More channels for community engagement are beginning to surface — including a new AI Sustainability Center in Stockholm and The Ada Lovelace Institute in the UK. All startups with AI ambitions should be proactively engaging with them, not leaving it to these organisations to clean up their mess.