Extreme personalisation: could AI get us out of lockdown?

Big Tech-style prediction engines should be used to give people highly personalised health risk information as Covid-19 lockdowns are eased.

By Maija Palmer, 12 May 2020

Governments should be using artificial intelligence to create personalised advice for each person or household on how to live and work during coronavirus, according to a group of academics. The group from INSEAD, Queen’s University and UnionBank of the Philippines said that individuals could be given advice about social distancing and returning to work via their phone, based on their clinical risk.

The proposal is designed to allow those at low risk to live more or less normally, helping to bring economies across the world back online. It would also help to build herd immunity with lower mortality rates, while at the same time allowing better allocation of scarce medical equipment (e.g. masks, test kits, hospital beds), according to the academics.

All this would be made possible thanks to recent advances in AI technology, with the group pointing out that companies — from insurance to the likes of Netflix — make personalised recommendations using predictive data all the time. While acknowledging problems of regulation, data privacy and modelling, the group — writing in Harvard Business Review — said that there was a “great deal that modern data science and AI could do to mitigate the fallout from this pandemic”.
To discuss these ideas further, Sifted caught up with the authors — Theos Evgeniou of INSEAD, David Hardoon of UnionBank of the Philippines (and formerly of the Monetary Authority of Singapore) and Anton Ovchinnikov of Queen’s University and INSEAD. The answers below were emailed on behalf of the three academics.

1) Hang on, AI hasn’t played much of a role so far in the pandemic. Certainly it didn’t predict the pandemic. Isn’t this a technology that has been overhyped?

Of course AI is hyped — this is the usual “hype curve”. See the valuation of Amazon since 1997 or so (its share price went from $10 in the 90s to $80 in 2000, back to $10 in 2001, and has now passed $2,300). The key is to separate hype from value, and there is value in AI. It has, for example, been used to speed up the search for Covid-19 treatments and could be used to expedite testing.

AI could not predict the pandemic, nor its spread early on, as there were no data. Part of our argument in the article is that this is a problem which requires that we prepare otherwise for future pandemics. Such preparation should include efforts at two levels. At the national level, countries need to collect standardised, detailed health-related data on each resident — well-organised electronic health records are a natural place to start. At the international level, countries need to adopt common standards and protocols to ensure learning across countries, from the epicentre nation to subsequent ones, rather than repeating the nearly identical infection pattern we saw with Covid-19. Both are known to be herculean tasks, but the Covid-19 experience shows that exceptional times — with potentially existential threats from pandemics — require exceptional actions. In general, AI works only if you have (relevant) data.
Researchers have devised special methods, such as “multi-task learning”, “transfer learning” and “federated learning”, to allow AI systems to transfer learning from one dataset, or region, to another — that is why we advocate for the development of global standards and protocols for health data. The underlying health data — the vitals of the human body — is essentially identical for all humans, but left to their own devices countries may record and manage these data in ways that would make cross-country learning difficult.

Theos Evgeniou of INSEAD.

2) You make a case for the use of AI to model and predict the likelihood of Covid-19 cases going forward — but do we yet have enough data to do this?

Indeed, we are learning every day about how Sars-Cov-2 affects infected people. We already had some indications early on, for example from studies by the WHO in China (e.g. page 12 regarding diabetes, hypertension, comorbidities, etc).

Interestingly, as we also discuss in a Covid-19 research article, because the percentage of people who, if infected, will get severe symptoms is in the low single digits, this makes personalisation easier — it is (statistically) easier to classify cases when, say, 99% of them are in one group than when it is 50-50. In a sense this works to our advantage: we need to predict those who will *not* get the severe symptoms, and those are the vast majority.

The main challenge is to actually get the data together, which is more of a regulatory and IT challenge. However, even an “ok” prediction model is better than no prediction model in terms of saving lives. For example, we can probably already do better than using just age to differentiate between cases. Unfortunately, for Covid-19 the world is not prepared to leverage these technologies enough, given the data challenges outlined above, and may need to rely largely on simple models and expert rules.
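To make the idea of a “simple model” concrete, here is a minimal sketch of the kind of clinical risk predictor being described: a logistic regression on synthetic patient records with a scaled age and a comorbidity flag, where severe outcomes are deliberately the rare class, mirroring the imbalance discussed above. The features, coefficients and data here are entirely hypothetical and for illustration only; this is not the authors’ model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic-regression risk model by gradient descent on the log-loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)  # average gradient of the log-loss
    return w

# Synthetic cohort: features are [intercept, scaled age, comorbidity flag].
# Severe outcomes are rare (a few percent), as in the class-imbalance argument.
rng = np.random.default_rng(42)
n = 5000
age = rng.uniform(0.0, 1.0, n)             # age rescaled to [0, 1]
com = rng.integers(0, 2, n).astype(float)  # has a known comorbidity?
score = 2.0 * age + 1.5 * com + rng.normal(0.0, 0.3, n)
severe = (score > 3.3).astype(float)       # only the highest scores are severe
X = np.column_stack([np.ones(n), age, com])

w = fit_logistic(X, severe)

# Personalised advice would compare individual risk estimates like these:
older_comorbid = sigmoid(np.array([1.0, 0.9, 1.0]) @ w)
young_healthy = sigmoid(np.array([1.0, 0.2, 0.0]) @ w)
```

Even a toy model like this separates a high-risk profile from a low-risk one; the point is that far richer versions of the same idea, trained on real, standardised health records, are what the proposal would require.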
Unfortunately, this will be a very costly lesson for everyone as we move forward.

David Hardoon of UnionBank of the Philippines.

3) How big a change in data policies would be needed to do this? How do you balance this with concerns about data privacy and state surveillance?

There are essentially two kinds of data policy issues. One is about global standards and coordination to allow for faster model development and the transfer of knowledge. Beyond that, the change is minor: one needs to introduce a risk tolerance for de-anonymising an individual, and to allow data — or, specifically, model — sharing for specified purposes. In other words, it is about introducing flexibility.

The concerns with data privacy should be alleviated by knowing that data privacy is still required, and that there are clear and specified situations in which it may need to be “violated”, such as imminent harm to an individual (this is no different from existing police powers). Secondly, regarding state surveillance: while this needs to be addressed through wider engagement to build trust with the state, as suggested in the paper such provisions would only be enacted in a state of “war” declared by specified bodies (such as the UN, WHO, etc), reverting back to “normal” when the situation resolves itself. However, privacy should still be maintained — just with a tolerance/threshold (i.e. flexibility) introduced. Finally, there are modern machine learning methods that can help preserve privacy.

Anton Ovchinnikov of Queen’s University and INSEAD.

4) Is current AI ready to handle this kind of system? Is it a case of adopting something like Netflix movie predictions or would we be looking to something more complex?

Absolutely. We can start from some models, using relatively “simple” data (such as the ones above, regarding comorbidities).
Over time, these data can be enhanced with other, say, big data about genetic information, lifestyle or other medical records, to improve the clinical risk prediction models.

Note, “Netflix”-like models are not “simple” at all — they are way more sophisticated than what would initially be used here, primarily because Big Tech spent years, billions and the best talent on collecting, managing and analysing mountains of data. But we can definitely start from relatively simple models and improve from there (much like Big Tech did in their infancy). At least we should consider this path — in the end, of course, the data will tell us what is feasible and what is challenging.

5) Do you think governments would be receptive to this kind of approach, given that there can still be some mistrust of adopting AI in some quarters? What does the industry have to do to build trust and acceptance?

Data/machine learning/AI is the main tech innovation of at least the last two decades. We use it to get advice on how to go from A to B when driving, on which movies to watch and on what news to read — all of which are far from “crucial” compared to, say, being able to breathe. Thus, it seems that the question needs to be turned around — can we afford not to?

Our view is that governments would find it challenging to deal with the political fallout of not having explored well-established and proven technology innovations that might manage the situation better, as well as save lives. Being the main innovation of the last few decades, data science, machine learning and AI are natural candidates for that.