Zareen Ali

Opinion

November 14, 2023

I learned to navigate the world as a neurodivergent founder. We can teach AI to do the same

Mitigating the risks in an AI system is no different from managing other governance issues like data protection, cybersecurity or ESG 

When I was growing up, I found people impossible to understand. What they did, what they said and how they expected me to respond were constantly confusing. 

I now know that this disconnect is something that autistic people experience every day. But at the time, like many undiagnosed autistic girls, I assumed I was the problem and so I was the one who needed to change. It felt like everyone else was speaking the same language apart from me; I just had to learn it.

So I did. My response to not understanding the people around me was to study them intensely, which is where my fascination with human behaviour began. 

By watching and mimicking, I found I could fit in anywhere, largely avoiding the bullying that so many autistic children experience at school. This strategy, called masking, is used by autistic people, especially women and girls, to protect themselves in spaces where being different can be dangerous.

My fascination with the way people behaved led me to teenage interests in subjects that tried to unpick the human psyche, like philosophy and psychology. 

But it was economics that really called to me, with its promise that all human behaviour could be reduced to intersecting curves and mathematical functions. Studying it for my undergraduate degree at Oxford, I quickly realised that this wasn’t the case: no real person acted the way the algorithms assumed they did.

This assumption of homogeneity, that there is one “normal” way to behave, causes damage to society that goes far beyond poor economic predictions. 

I’ve seen this myself, working in schools with neurodivergent children and teenagers — where different ways of communicating are interpreted as bad behaviour, low intelligence or something that needs to be “treated”.

It is too easy for AI, often unintentionally, to amplify the worst traits of human behaviour — racism, gender discrimination, ableism and many more — and embed them into code. 

I’ve now moved from working in education to building AI solutions, and these are my reflections on how we can avoid this…

By recognising that AI is neutral: humans are the problem

Despite ChatGPT’s best efforts to make us think otherwise, AI is not sentient and therefore can’t be “good” or “bad”. It’s the human-defined application of the technology that determines an AI system’s moral impact on society. 

Humans control the system, from deciding what problems to apply it to, to the data we feed it, to how we evaluate its success in the real world. This control means that it is completely in our power to design and build AI systems that are fair, responsible and ethical.

Building responsible AI is good business governance

For an AI founder, or anyone using AI as a core part of their business, understanding and mitigating the risks in an AI system is no different from managing other governance issues like data protection, cybersecurity or ESG. 

Risk management isn’t sexy, but it’s integral to running a good business and creating safeguards to protect your company from costly mistakes. Applying your existing risk processes to AI development is an easy way to start thinking about the potential harms in your system, without having to get too deep into the debates currently surrounding AI ethics.
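
To make that concrete, here is a minimal sketch, in Python, of what an AI entry in an ordinary risk register might look like. The fields and the example risk are illustrative assumptions rather than a standard template:

# A hypothetical AI entry in an ordinary risk register.
# Field names and values are illustrative, not a standard.
ai_risk_register = [
    {
        "risk": "Model misreads atypical communication styles as low ability",
        "who_is_affected": "Neurodivergent users",
        "likelihood": "Medium",
        "impact": "High",
        "mitigation": "Include affected users in testing; review flagged cases by hand",
        "owner": "Product lead",
        "next_review": "Quarterly",
    },
]

for entry in ai_risk_register:
    print(f"{entry['risk']} | impact: {entry['impact']} | owner: {entry['owner']}")

The point is not the tooling but the habit: the same columns you already use for data protection or cyber risks (owner, likelihood, impact, mitigation) work just as well for AI harms.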

Representation is key, but so is personal responsibility

Diverse teams outperform monocultural ones, from engineering teams to Fortune 500 companies. Having people with different views and life experiences collaborate and question each other leads to more critical thinking in the workplace. 

This is exactly the mindset business, product and engineering teams need when considering the trade-offs between benefits and harms whilst developing an AI system.

It’s important to have people on the team with lived experience of the problem the AI system is solving, and to actively seek feedback on the system’s benefits and potential harms during discovery and user research. 

However, care must be taken not to shift accountability for identifying and calling out harm exclusively onto those with lived experience. Understanding the impact of the system you’re creating together is a collective team responsibility. If you’re leading that team, more of that responsibility sits with you.

Practical tools and strategies exist, with a little digging

The Google quagmire of search terms — “AI ethics”, “responsible AI”, “data justice”, “algorithmic fairness” — can be overwhelming when trying to understand how to practically build an AI system that solves a problem whilst not causing harm. The debates and arguments in this space feel very distant from the reality of the system running on your servers.

Healthcare and medicine are good areas to look to for inspiration, with a history of translating ethics into practice. AI applications in healthcare are developed in the knowledge that the system will be functioning in a medically regulated environment. The rewards of AI in healthcare can be huge, but so is the potential for harm, as we have already seen in the US. This has led to the development of risk frameworks and technical methods to keep patients safe.
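
To give a flavour of what one of those technical methods can look like in practice, here is a minimal sketch of a demographic parity check: does the system flag people from different groups at similar rates? The groups and predictions below are made-up numbers for illustration only.

from collections import defaultdict

def selection_rates(predictions, groups):
    # Share of positive predictions ("flagged") for each group.
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    return {g: round(flagged / total, 2) for g, (flagged, total) in counts.items()}

# Hypothetical model outputs (1 = flagged for follow-up) and patient groups
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["group_a", "group_a", "group_a", "group_b", "group_b", "group_b", "group_b", "group_b"]

rates = selection_rates(predictions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'group_a': 0.67, 'group_b': 0.4}
print(f"Demographic parity gap: {gap:.2f}")  # a large gap is a prompt to investigate, not a verdict

A check like this won’t tell you why a gap exists, but it turns an abstract fairness debate into a number a team can track, question and act on.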

As we take stock of the recent UK AI Summit, the incoming European AI regulations and the AI discussions in the US, one big challenge for AI innovators stands out: people still don’t fully trust AI. If AI is used unethically or in a harmful way, this fear could be valid. However, this lack of trust could also slow down the development of life-changing AI innovations. 

One way to build trust is by creating responsible AI and communicating this process transparently. Being an early adopter of these responsible practices is not just the right thing to do; it’s a smart move. It will save AI businesses time, energy and money in the long run, by getting ahead of any incoming AI regulation.

From an investment point of view, I believe the best VC firms, i.e. the ones expecting the highest governance standards, will begin to look for startups that adopt responsible AI practices.

As someone who has raised funds for an AI-powered social impact startup, this is something I’ve already started to hear in conversations with ESG-focused funders.

Ultimately, what I hope for is a world where responsible AI is seen not as a nice-to-have, but as a fundamental business practice that creates a more inclusive, trustworthy and innovative future for all.

Zareen Ali is CEO and cofounder of Cogs AI.