
Europe wants to police AI. Here’s how startups can prepare

Any startup operating in Europe can't ignore the European Commission's proposed AI regulation.

Margrethe Vestager, European commissioner for competition. Seb Daly/Web Summit via Sportsfile

By Diana Spehar

Many young businesses consider future regulation to be a luxury problem. If you’ve got the government on your heels, you’re probably doing something right. 

But the European Commission’s draft legislation on artificial intelligence, published in April, is anything but luxury. It will apply to any AI system whose recommendation can influence an EU citizen, be that a customer or an employee. Compliance could be complex — and costly. Startups, you should not underestimate this. 

Yes, the regulation is still in its infancy, awaiting approval from the European Parliament and the Council of the EU. And yes, the proposed mechanisms for AI oversight and auditing are largely unknown. Timelines are fluid.

But startups should seize the chance to get involved in the consultation process and prepare early. 

These preparations should be overseen by the AI regulation or compliance expert on the team or — if that role doesn’t exist yet — informed by input from a multidisciplinary group of experts. 

1/ Understand the basics 

Start by reading the proposal itself. Review the proposed obligations and understand the main areas of concern; these will stay consistent as the draft undergoes further amendments. Where does your company fit in? Understanding complementary and preceding regulations, such as GDPR or sectoral product safety legislation, is equally important. 

2/ Understand the risk taxonomy 

In order to distinguish harmful from mundane AI applications, the commission has proposed a risk taxonomy. Businesses should carefully assess which risk bracket they belong to and whether there is a risk of future crossover.

Nearly 35% of all sectors in the EU (by value), equal to €3.4tn of economic activity, are predicted to fall into the "high risk" category. These companies will face the largest administrative and compliance burden. That includes startups building services like biometric identification and categorisation of individuals, creditworthiness assessments, recruitment and performance evaluations, and safety components of critical infrastructure such as the supply of utilities.

3/ Make a strategy 

Once you’ve wrapped your head around the idea itself, it’s time to build some internal processes (Harvard Business Review and McKinsey delve into these suggestions further):

  • Establish a risk-management strategy that clearly outlines any risks posed by your AI system and contingency plans; 
  • Develop a governance framework and dedicated governance committee of subject matter experts (AI industry pros, lawyers, ethicists, product developers, security officers, etc);
  • Commit to continuous AI auditing and review to address evolving risks;
  • Implement data-privacy and cybersecurity risk-management protocols to enable the processing of sensitive personal data.

4/ If you don't already operate in Europe, consider your go-to-market strategy

Implementation and oversight responsibilities will be delegated to EU member states. That means startups not yet in the market should think carefully about the relative global importance of the EU as a launchpad for their products. Startups may benefit from setting up shop in EU countries with familiar and innovation-friendly regulatory regimes, like Sweden, Finland or Denmark.

5/ Budget accordingly (and start early)

Different studies offer different views on regulation-induced costs. Startups can reduce future costs by beginning to phase in the necessary processes early.

One study funded by the commission suggests total compliance costs of up to €400,000 for one high-risk AI product. The commission disputed this estimate, but Allied for Start-ups warns that Brussels could be underestimating costs. Hiring AI compliance experts will not come cheap, as their roles require a very specific and technical skillset.

6/ Join advocacy groups 

Startups shouldn't be afraid to jump in and engage directly with the EU institutions as the draft undergoes further iterations. Organisations such as Startups for AI, Allied for Start-ups, COADEC and the European Digital SME Alliance have developed their own manifestos and offer support to European AI businesses by way of representation, access to community and policy advice.

Certain European founders are already closely monitoring the latest developments in Brussels. Growing European cybersecurity and AI company Oxyde Technologies, for example, has started documenting regulatory inconsistencies in anticipation of meetings with the European Artificial Intelligence Board.

Another way to get involved is to work alongside regulators on sandboxing schemes. These can be resource intensive, but they help startups open the dialogue early.

What’s next

Arguably, most AI-focused businesses do not wish to take part in a regulatory experiment aimed at reining in a technology so far-reaching and quickly evolving that it has yet to be defined. On the bright side, businesses that do participate in the creation and implementation of the new AI law will be publicly recognised not only as innovation drivers but also as protectors of digital human rights.

Startups can approach the proposed regulation with a high dose of scepticism or with a sense of urgency.

Given that this regulation may well represent an inflection point in Europe's digital future, they would do better to choose urgency.

Diana Spehar is data ethics lead at Sky. 
