Robyn Scott

Opinion

February 9, 2024

What governments could learn from startups on AI

Governments have been slow to capitalise on the power of digital technologies to create more efficient organisations and better public services

If you’d presented tech founders with a free association exercise on the words “AI and government” in 2023, most would have answered “regulation”. In 2024, an unprecedented year for democracy — with four billion people eligible to vote in elections — “disinformation” is likely top of mind.

But both answers represent only a small part of the story of AI in government, which presents huge opportunities for public services, the people they serve and the tech companies that interface with states. 

The risk of disinformation to democracy is rightly getting a lot of attention. Examples already abound of audio deepfakes (last year’s Slovak election) and video deepfakes (from the US to Venezuela). But solutions, if not always quick ones, do exist.

Finland, which has long contended with Russian disinformation attempts, has shown that it’s possible to build a society comparatively resilient to fake news. Agile, AI-fluent startups could make an outsize positive impact on democracies by tackling disinformation.

There are two big incentives for governments to get AI right when it comes to democracy. Along with avoiding the potential harms, there are benefits to democracy to be gained from AI — for example, the ability to hold large-scale public deliberations and affordable voter education. 

The tide and pressure of generative AI will not recede

However, despite a clear desire among civil servants to embrace the technology, a growing AI knowledge and confidence gap between the public and private sectors risks governments failing to find the solutions they need during a critical period for democracy and the evolving institutions of government.

Closing this AI literacy gap will help shore up our democracies and enable governments to capitalise on the potential benefits of AI to the operations of government — a sector which collectively represents one of the world’s largest workforces (200m people globally) and which directly controls around 40% of GDP in most countries. 

In GenAI, governments have a technology with the potential to help them deliver significantly better returns on investment for every taxpayer dollar. They will also have to manage what might be a bumpy workforce transition: as with any sector, some government jobs will fall away, with new ones emerging and many existing ones being enhanced with the smart use of AI. 

You might be thinking that governments will snatch defeat from the jaws of victory with AI. 

It’s true that many governments have been slow to capitalise on the power of digital technologies to create more efficient organisations and better public services. Governments can be particularly weak when it comes to harnessing innovations from startups, which struggle to navigate procurement systems still geared to giant contractors.

This is a challenge we regularly experience at Apolitical with our government customers around the world, and a shared headache for most innovators working with governments, which have to date kept a $13tn annual procurement market largely out of reach for startups and scaleups.

But two things about the age of GenAI are different and striking, and they make me optimistic about governments’ response. The first is attitude: Apolitical’s polling shows that a sizeable majority of public servants are more optimistic than pessimistic about the potential of AI in government.

The second is the speed of adoption, with 60% of public servants already experimenting with GenAI in their work. 

This is an astonishingly rapid uptake for a cautious workforce, with many experimenting before official guidance has even been released and adoption rates comparable to those in the private sector. This reflects genuine curiosity and openness to innovation, and it’s a worldwide trend. 

Bottom-up adoption is also helped, in most cases, by relatively swift top-down support. Governments and departments are rapidly releasing guidance, most of which supports at least some experimentation, with the leaders setting a genuinely impressive pace. Estonia had implemented more than 80 AI use cases by August 2022, back in the pre-GPT mists of time.

To accommodate the pace and reach of AI, governments will be forced to become more agile organisations

A world in which governments smartly harness AI is an exciting one. Much of the bread and butter work of public servants involves research and writing, and already we’re seeing 22% use GenAI to support research and 19% to support writing. 

Then there is the vast range of sector-specific applications. In the US, the Internal Revenue Service is experimenting with AI to help detect complex tax frauds by law firms and hedge funds. In Canada, AI is accelerating planning and permitting to address the housing crisis.

And there are already numerous chatbots that vastly improve and speed up the interface between government and the people it serves, a step up from the first generation of government chatbots, which have a chequered history.

Translation is another area for rapid gains. The already good, and swiftly improving, translation abilities of GenAI will enable governments to provide more inclusive services — for example, to new immigrants. And the low cost and high wattage of tools like ChatGPT and Bard could enable stretched civil services in low-income countries to improve their output.

There are risks to be avoided and managed with the widespread adoption of AI, of course.  

Governments need to improve their data infrastructure to manage the garbage in, garbage out risk, ensure protections for sensitive data and put humans in the loop at the right points — to mention just a few critical measures. 

And to do this, they need to rapidly upskill their workforces to understand how to safely and effectively harness the technology. Governments like Singapore are leading the way on this. Since 2019, more than 90k Singaporean public officers have completed a Data and AI Literacy Primer course. That’s 60% of the entire workforce.

Individual leadership is required too. Leaders need to start using GenAI technologies themselves. Anecdotally, while 60% of the wider government workforce has used GenAI at work, this drops to around 10-20% among leaders. 

This is a problem. It’s hard to understand the power and potential of GenAI in the abstract. Only by using the tools themselves, rather than looking over their children’s shoulders, will leaders be able to move from a defensive crouch to embracing the massive opportunities for productivity and delivery.

Boston Consulting Group (BCG), the consultancy, estimates that the productivity gains from GenAI in the public sector will be worth $1.75tn per year by 2033.

Increased government speed and flexibility stand to be significant indirect benefits. To accommodate the pace and reach of AI, governments will be forced to become more agile organisations. Covid-19 demonstrated this was possible, with numerous stubbornly analogue services digitised in weeks and tortuous workflows streamlined overnight. But the tide of Covid receded, and old habits quickly returned. The tide and pressure of GenAI will not recede. 

This matters for the tech sector: more agile governments are friendlier to startups and scaleups, better able to create enabling environments and more willing to spend on products and services from innovators.

AI might, in other words, at last force governments to become flexible enough to work with startups, unlocking a $13tn market.

Robyn Scott is co-founder and CEO of Apolitical – a London-based tech company that partners with governments across the world to prepare civil servants for working with new technologies.