May 30, 2023

What the rise of AI means for the climate fight

GenAI tools in climate tech and sustainability present both massive potential and huge risks

Aruni Sunil

6 min read

AI is expected to massively expand the capabilities of climate tech — AI tools have already been developed for remote-sensing plastic detection, decarbonising agriculture and detecting future climate disasters. 

But there are also challenges such as the possibility of hallucinations, ethical concerns, limited availability of solutions and a lack of confidence in AI data and analysis. 

So what are the opportunities in using AI in the climate fight — and what are its challenges and shortcomings? We asked the experts.


An ocean of potential

87% of global public and private-sector climate and AI leaders believe that AI is a helpful tool in the fight against climate change, according to a BCG report. Stefan Gross-Selbeck, senior partner and managing director at BCG X, Boston Consulting Group’s tech build and design unit, echoes this sentiment.

“It's almost impossible to think about any new venture or a new business without an AI component to it today,” he says. Gross-Selbeck also leads BCG’s global climate tech practice, heading its efforts to build businesses for a net-zero economy.

He adds that the real power of AI lies in its ability to analyse large bodies of data, whether for reducing carbon emissions, carbon removal or space technology. 

“Our ability to collect data from space continues to grow exponentially and we can combine that with the analytical power coming from AI tools, giving us insight into things we didn't understand until now,” he says. “Examples include weather forecasting, carbon monitoring, measuring greenhouse gas emissions, carbon dioxide emissions, methane emissions — and we can do this in a very precise manner, all the way down to the actual source.”

Research shows that the sheer volume of data that must be assessed to develop sustainable pathways to net zero means human decision-making needs to be augmented with AI. 

“AI can help in understanding changes in sea level rise and managing specific crises, but also in strengthening the infrastructure and understanding what it means for certain populations and certain parts of the world,” Gross-Selbeck says. He says it can help us track deforestation, land degradation and urbanisation in real time, which can help initiate timely action, such as assisting emergency responders in assessing damage or coordinating relief efforts.

Michal Nachmany, founder of Climate Policy Radar, a startup that uses AI to map and analyse global climate policies and laws, says AI also helps connect the dots between issues such as emissions, biodiversity, food, energy and transport systems.

She gives the example of startup Open Climate Fix which “uses AI to forecast cloud coverage over solar panels so that you know exactly when you're going to have coverage and when you're not, resulting in accurate prediction of renewable electricity generation and better planning”.
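The planning logic behind that kind of forecast can be sketched in a few lines. This is a deliberately toy illustration, not Open Climate Fix's actual model: the linear cloud response, the 10% diffuse-light floor and all the numbers are invented assumptions.

```python
# Toy sketch: turning a cloud-cover forecast into a solar generation plan.
# The linear response and the ~10% diffuse floor are illustrative assumptions.

def pv_output_kw(capacity_kw: float, cloud_cover: float) -> float:
    """Rough PV output for a given fractional cloud cover (0 = clear, 1 = overcast).

    Assumes output falls roughly linearly with cloud cover, down to a
    ~10% diffuse-light floor under full overcast -- a common simplification.
    """
    if not 0.0 <= cloud_cover <= 1.0:
        raise ValueError("cloud_cover must be between 0 and 1")
    diffuse_floor = 0.10
    return capacity_kw * (diffuse_floor + (1 - diffuse_floor) * (1 - cloud_cover))

# Forecast cloud cover for the next three hours -> planned generation (kW)
forecast = [0.2, 0.6, 0.9]
plan = [pv_output_kw(100.0, c) for c in forecast]
print(plan)  # clearer hours mean more planned solar generation
```

Grid operators would then fill the gap between this plan and demand with other sources, which is the "better planning" Nachmany refers to.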

Cutting carbon emissions

Business and AI leaders see the most business value in using AI to measure and reduce emissions. 

AI can help companies analyse their data to understand emissions, make data-driven decisions and optimise processes, says Gross-Selbeck. He adds that companies should also use AI tools to understand how emissions can be reduced at every level of the company, from customers and suppliers to internal operations.

However, Kasia Tokarska, a climate data scientist working on climate risk modelling, notes that it’s crucial for companies to be transparent about the methods they use to measure emissions, so that results can be checked for accuracy. Results can vary depending on which algorithm or method companies use, and available input data is limited too: emissions disclosure and reporting standards vary from region to region, and disclosure is often required only of certain types of companies.

When it comes to actually reducing emissions, Gross-Selbeck says AI tools can help companies transition to renewable energy sources (for instance, by using forecast models to decide when to draw on solar power) or drive efficiency across the organisation, thereby cutting energy consumption. 

Nachmany emphasises the power of AI in analysing the huge and complex body of climate laws and policies for better policy making and risk modelling by companies, banks, investors and insurers: “Risk modelling relies on understanding the regulatory environment, for which analysts need to make sense of a large number of complex documents that come in many languages and formats.”

Ethics and hallucinations

General AI concerns, such as bias in the data LLMs are trained on and hallucinations (model errors in which an AI gives a factually inaccurate response), also apply to climate tech; a discipline known as responsible AI is now working to address them. These risks may be even more critical in climate tech than in other areas, given the scale and urgency of climate disasters.

Tokarska says that a possible solution in climate science is “physically constrained machine learning (ML), which is a model that obeys some sort of physical constraints. For instance, if you have a model of the climate system that obeys, for example, conservation of carbon, energy and water, then you would potentially trust its results more, as opposed to a purely black box approach where you put in data and get some results.”
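One common way to build in such a constraint is to penalise predictions that violate a conservation law alongside the usual data-fit loss. The sketch below is a minimal, hypothetical illustration of that idea (the fluxes, the `weight` parameter and the whole setup are invented for the example), not a real climate model:

```python
# Minimal sketch of physically constrained ML: add a penalty for
# predictions that break a conservation law (here, carbon balance).
import numpy as np

def constrained_loss(pred, target, inflow, outflow, weight=10.0):
    """Data misfit plus a penalty for violating carbon conservation.

    pred/target: predicted and observed change in stored carbon.
    inflow/outflow: fluxes that must balance that change
    (conservation: stored change == inflow - outflow).
    """
    data_term = np.mean((pred - target) ** 2)
    physics_term = np.mean((pred - (inflow - outflow)) ** 2)
    return data_term + weight * physics_term

# A prediction that breaks the balance is penalised heavily
inflow, outflow = np.array([5.0, 4.0]), np.array([2.0, 1.0])
target = np.array([3.0, 3.0])
consistent = np.array([3.0, 3.0])    # matches inflow - outflow
inconsistent = np.array([3.0, 1.0])  # violates the carbon balance
print(constrained_loss(consistent, target, inflow, outflow))    # -> 0.0
print(constrained_loss(inconsistent, target, inflow, outflow))  # -> 22.0
```

Minimising a loss of this shape steers the model towards physically plausible outputs, which is why, as Tokarska notes, its results may be easier to trust than those of a pure black box.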

She adds that when it comes to predicting events over the next 10 to 20 years, additional data and evidence need to be fed into ML models for reliable results; otherwise, projections are based only on past observations, which don't account for new developments such as changing climate policies and how humans might respond to them. 

There’s also bias in data availability and privacy concerns: “Historically, you have a lot more data in the northern hemisphere than in the southern hemisphere — and in terms of emissions, most of the emissions happened in the northern hemisphere as well. So it's key to be aware of these data limitations,” Tokarska says. 

“When it comes to satellite data, especially in monitoring carbon offsets, data coverage is improving to be more global, but that also comes with other data privacy constraints, especially if the offsets are over some regions like where indigenous nations are, for example.”

Gross-Selbeck says issues like hallucinations, bias and emerging capabilities will all evolve as the AI sector grows and improves. “It's going to be very interesting to see how these capabilities evolve,” he says. “Right now, it's very important that there's always some sort of human oversight and clear governance recommendations which are put in place.”

BCG’s report also shows that 67% of climate and AI leaders want governments to do more to support the use of AI to combat climate change. 

The EU has proposed an AI Act to reduce the risks of LLMs, but startups in Europe are worried the legislation will be a serious blow to companies working with generative models if it is ratified as is.

“Responsible AI is the responsibility of every company — every company has the opportunity to establish a set of governance rules with regards to AI-based decision making,” Gross-Selbeck adds.

“We need to think about AI as a very potent drug — drugs can cure disease, and they can kill,” says Nachmany. “Similarly, AI can help solve humanity's biggest problems, and it can lead to chaos and dystopia. So all those mechanisms that we would use for drugs such as independent governing bodies, clinical trials and clear labelling, should be applied to AI.”

Aruni Sunil

Aruni Sunil is a writer at Sifted. Follow her on Twitter and LinkedIn