Artificial intelligence and dementia: Q&A with Dr David Llewellyn

Dr David Llewellyn is a Senior Research Fellow in Clinical Epidemiology

Artificial Intelligence (AI), human intelligence exhibited by machines, is developing rapidly in healthcare, with many applications currently in use or in development in the UK and worldwide.

We speak to Dr David Llewellyn, a Senior Research Fellow in Clinical Epidemiology at the University of Exeter Medical School, about the impact of AI on dementia diagnosis. David’s research focuses on how data science and AI can improve the way in which we conceptualise neurocognitive disorders in order to improve diagnosis, treatment and prevention.

How do you define AI as it relates to healthcare and what are some of the biggest transformations that it will bring to the field?
In its broadest sense, artificial intelligence is the creation of generalisable intelligence. At the moment the majority of progress is being made with machine learning, where we're teaching machines to learn patterns in real clinical data. We're taking techniques that have been developed for a wide range of purposes, for example self-driving cars and search engines, and applying them to real clinical data. This gives us a massive advantage in that we can handle a much richer range of data than we could before with traditional statistical methods. We're developing pieces of software which can be used by clinicians or patients to improve healthcare efficiency, patient safety and patient outcomes.

How close are we to a world where AI is used to diagnose and treat patients?
I think that AI is already used to diagnose and treat patients, but in limited ways. For example, before patients come and see their GP, they're increasingly using the internet. They're using AI through search engines to work out what their symptoms might mean. Doctors are also increasingly using various forms of AI, and we're seeing the growth of decision-making aids. It's very much the case that the doctor is still in control, but they're getting more targeted information about individual patients.

Do you foresee a future where AI technologies can operate autonomously in healthcare?
We’re much further away from systems that can actually make decisions autonomously without the doctor and without any clinical oversight. If we think about autonomous cars as an analogy, we’ve got cruise control. Similarly with AI in healthcare, we’ve got aids to decision-making. What we don’t have are ‘robot doctors’ that can diagnose and treat patients without any human oversight. I think that will come, but we’re a long way from that.

The biggest question at the moment is how we are going to regulate that process. If it's an aid for doctors but the doctor is still in control, then you can regulate it as a medical device. But if it's autonomous, then actually what it's doing is practising medicine, not supporting a doctor who practises medicine. Medical societies regulate people who practise medicine, but who exactly is going to regulate machines that have the capacity to practise medicine? I don't think we're anywhere near reaching a solution for that, and there is certainly no way in which we can effectively regulate it at the moment.

Are there any common misconceptions or general misunderstandings about AI that you believe could use some clarity?
When we think about how AI can influence medicine, there's often the misconception that it's going to deskill the workforce and put people out of a job. However, when you bear in mind the immense pressures that the NHS is under, I think AI technologies in healthcare should be seen as a massive opportunity to improve patient outcomes and to make the jobs themselves better for clinicians. Routine tasks in particular can be taken off a clinician's workload. It will become less about whether AI will replace clinicians and more about how clinicians will use the technology to enhance their own abilities. That's a tremendous opportunity if you can empower clinicians to think in that way. It will allow them to focus on the human side of medicine, which for most medical professionals is the most interesting bit!

DECODE dementia enables GPs to identify patients with dementia more effectively

Identifying people with dementia is clinically challenging given the non-specific pattern of symptoms associated with it. You’ve recently developed a computerised decision support system called DECODE to help address this. Can you tell us more about it?
It's a very difficult clinical challenge: you're assessing patients whom you may not know well, who are concerned about their memory and thinking, and you're trying to work out whether they are just ageing normally. No two cases of dementia are exactly alike, so if you're a non-specialist you may not have seen a patient with a particular combination of signs and symptoms before. One of the advantages of DECODE, a machine learning-driven system, is that it can learn to recognise patterns in hundreds, thousands, potentially millions of dementia cases and work out what needs to happen clinically to benefit that patient. The idea is that it doesn't get tired or distracted, and it's very consistent. It's not a completely objective system, though, as it captures the human expert decision-making that we used to train it in the first place.
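For readers curious what this kind of pattern learning looks like in practice, here is a minimal, purely illustrative sketch in Python. It is not the DECODE code itself: the features, the synthetic data and the choice of model are all hypothetical stand-ins for the clinician-labelled cases David describes, and are only meant to show the general shape of a classifier that learns expert judgement from past cases and returns a consistent probability for a new patient.

```python
# Illustrative sketch only; not the DECODE implementation.
# A classifier is trained on (hypothetical) clinician-labelled cases so that,
# for a new patient, it returns a consistent probability that dementia is likely.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 2000

# Hypothetical per-patient features: age, cognitive screening score,
# informant-reported decline, word-finding difficulty, disorientation.
X = np.column_stack([
    rng.normal(75, 8, n_patients),     # age (years)
    rng.normal(25, 4, n_patients),     # screening test score
    rng.integers(0, 2, n_patients),    # informant-reported decline (0/1)
    rng.integers(0, 2, n_patients),    # word-finding difficulty (0/1)
    rng.integers(0, 2, n_patients),    # disorientation (0/1)
])

# Synthetic labels standing in for expert clinical judgement ("dementia likely"),
# which is what the interview says the system learns from.
logits = (0.05 * (X[:, 0] - 75) - 0.3 * (X[:, 1] - 25)
          + 1.2 * X[:, 2] + 0.8 * X[:, 3] + 1.0 * X[:, 4])
y = (rng.random(n_patients) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# The output is a repeatable probability that supports, not replaces, the GP's decision.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print("held-out AUC:", round(auc, 2))
```

The point of the sketch is the workflow rather than the particular model: past cases labelled by expert decisions go in, and a consistent, tireless pattern recogniser comes out, which is exactly why such a system can only be as objective as the expert judgements used to train it.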

To find out more about the DECODE project, follow David (@DrDJLlewellyn) on Twitter. To read more about dementia research at Exeter, please visit our website, or follow #ExeterDementia.
