The NHS, open data, and AI

This session, on the “complicated” subject of healthcare, AI and licensing, was led by John Kellas, an expert in community development in healthcare. He asked participants to share anything they felt was important, with a view to making recommendations to policymakers.

“In 2017, I helped run a series of webinars on AI in healthcare,” he said, “and on the back of that I was asked to be part of the Academic Health Science Network core AI advisory group and support the development of a national survey on AI in healthcare.

“I was already interested in open data and open source, so I asked for a small question on licensing to be included in this survey. What we found was that about 38% of it was proprietary, and much less was open source, although there were a lot of ‘don’t want to say’ or ‘don’t know’ responses.

“Since then, we’ve had a £250 million pot for AI in the NHS, and some vague talk about a value return. But I think there is room for something stronger. Because it’s clear that the data for AI is very valuable, and it’s reasonable to think that patients should get some return for it.

“And at the moment, it is not really clear whether the NHS is going to procure AI or develop it, and how we are going to secure that value.”

The session considered where AI might be used in the NHS: in research; in proprietary systems to support workflow through laboratories; in clinical decision making (which may take AI into the realm of medical devices); or in helping patients navigate the system.

So, one participant argued: “We should not be thinking of this as one problem. It is going to be lots of different problems.” And, another added, there will be plenty of other problems that are not related to the AI itself, but to sharing information around the system and enabling patients to have access to it.

Even so, the discussion suggested, there might be some basic principles to apply: being open about where data is coming from and what use is being made of it, and enabling patients to see and use their own data.

If that happened, Kellas argued, there might be scope for the public to ask questions: for example, whether the tagging of medical images, done to create an algorithm that helps clinicians read those images, should be proprietary or not.

Other people raised further questions: who should regulate the AI produced, and what the process of redress would be for anyone harmed by a process involving an AI.

“I feel like in five years or so there should be an open data set for any contract,” Kellas argued. Another participant suggested that, since the NHS should already be publishing its contracts, there might be a role for audit, to see what is being done. Another suggested there should be more research into the performance of AIs, and training for clinicians in how to use them.

Overall, the conclusion was that openness, if not open data, is going to be critical to riding the AI wave and avoiding unintended consequences.
