Ethics and innovation in medical AI: a conversation with Dr. Paul Campbell.

October 8, 2024

"If you torture the data long enough, it will confess to anything" said Ronald Coase. Certainly, the advent of AI has created some spectacular progress and failures. In the realm of patient care, AI tools can have a powerful impact, and there is little room for error. How do professionals in the Medical Device and Medical Software space prepare their solutions for the market?

In the latest episode of SecureTalk, Justin is joined by Dr. Paul Campbell, who serves as the Head of Software and AI at the UK's Medicines and Healthcare products Regulatory Agency (MHRA). Dr. Campbell discusses his journey from pharmacy to becoming a prominent figure in healthcare IT and regulated software. The conversation covers the development of AI in healthcare, the global standardization of regulations, and the MHRA's innovative initiatives, such as the AI Airlock, which are driving progress in medical technology. The discussion also delves into the vital role of data representation, ethical considerations in AI, and the complexities of implementing advanced technologies in real-world medical settings.

Full transcript

SecureTalk: Dr. Paul Campbell

Justin Beals: Like many of you, I have been involved and engaged in the development of AI tools for many, many years. When I first started working on data science, we called it algorithmic programming, and most of the work we did was in service of effective search algorithms, the ability to locate and find information quickly. That morphed over time into predictors: systems that could make a prediction.

Anything from, in my world, the correlation between types of content, the ability to understand large data ontologies and classification systems, through natural language processing and the synthesis of audio to text, and even, later on, the ability to analyze large amounts of unstructured data for correlation with existing population cohorts. Data science, AI, and machine learning have been a part of my computer science journey since very early in my career, but there are certain situations where I've used AI and been very concerned about the applicability and the efficacy of what we were doing.

Certainly, one of the most sensitive areas for AI utilization is in medical care. Today we see a lot of opportunity, and a lot of nervousness, about the ability of AI tools to be included in software as a medical device or software as part of our medical care solutions. It's for that reason that we wanted to interview Dr. Paul Campbell.

Dr. Paul Campbell is the Head of Software and AI at the Medicines and Healthcare products Regulatory Agency in the UK. He is an experienced medical executive with over two decades of clinical experience in various disciplines. Paul has held senior roles across the NHS, government, and in strategy and regulation.

He is a trusted advisor in digital health, a graduate of the founding cohort of the NHS Digital Academy, and a recent Executive MBA graduate. Additionally, he is an expert in medical and digital technologies, specializing in their development, assessment, and delivery. In the field of medical device regulation, he is particularly knowledgeable about software as a medical device and AI as a medical device.

I hope you'll enjoy this episode with Dr. Paul Campbell. We get to delve into the use of AI in a very specific aspect of our humanity, health care, and also to understand how governments are starting to approach the regulatory issue of AI tools in critical care. I hope you enjoy the episode.

Thanks for joining us on SecureTalk. 

Hello, everyone, and welcome to SecureTalk. Super excited to have you with us this week. Today we're really excited to talk with Dr. Paul Campbell. Paul, welcome to SecureTalk today. 

Paul Campbell: Thanks very much, Justin. Very kind of you to invite me onto your podcast.

Justin Beals: Maybe I make people nervous sometimes, but no, it's very kind of you to join us. We're ecstatic for you to share some of your story, your expertise, and your perspective on the work that you do. One of the things we love about SecureTalk is that we get to meet people in the field who may have an impact on effective security, privacy, or other aspects of the broader marketplace we operate in.

Could you tell us a little bit about your origin story? When you look back over your career, what were some of those early milestones or interests or passions that you brought to this work?

Paul Campbell: So, a few bits to that. The simple background to the story is my career background, which has been rather long and winding.

I'm not as young as I used to be. My background really is in healthcare. I started off initially in pharmacy: community pharmacy, industry, hospital pharmacy. But pretty soon after that, I went into medicine. So I became a doctor, worked as a doctor in the NHS in Scotland, and I've worked across various disciplines, mostly centered around anaesthetics and critical care type stuff, and I did that for a couple of decades.

So my background really is as a clinician. However, the evolution of the clinical world brought me to have to deal with, basically, IT. And as is often the way, some of the things that frustrate you, you start to get involved in trying to make better. So I got involved in healthcare IT, and within my NHS roles I had a number of jobs in health systems at local, regional, national, and government level. At one point, I got involved with the Scottish Health Technologies Group, doing health technology assessment of digital tools. And along that journey, my path also mirrored the increased use of software, and software dropping itself into the land of regulated healthcare software.

So I got involved at that stage, which led to me having some contact with the MHRA. Through that, I was invited to be one of the expert advisors; the MHRA has some expert advisory groups, so I joined them as an expert advisor. A couple of years ago, I joined the MHRA on a secondment, originally from the NHS, to be a clinical advisor. And as it turned out, not long after that, I ended up as the Head of Software and AI within Innovative Devices at the MHRA. So that's been the transition of the career: from clinician, to healthcare IT, and then to regulated software, and increasingly that software now becomes AI-driven software as well.

Justin Beals: A long and winding road, to be sure. But there's something to the intersectionality of these experiences. I found, too, that I worked as a software engineer, then I gained some expertise in enterprise education technology and software, and it was the aggregate of the two areas of expertise that allowed me to find some really interesting impact opportunities.

I'm curious about one thing. I certainly have been one of the developers of machine learning and data science feature sets, trying to deliver that as product, and I always think about the consumers on the other side. You know, I think of medicine as being highly scientific, and the work of developing a treatment having a lot to do with what is effective.

And certainly, as a pharmacist, you must have been reading about what certain types of drugs may do. And even as a practitioner and anesthesiologist, you must have been thinking about what new innovations were coming out and what you thought might be useful.

What did you think about the state of the information you were being provided in making those decisions? Were there improvements that you saw, things you wanted to see?

Paul Campbell: So, I mean, my clinical practice goes back a good number of years, and in the beginning much of it was driven by paper, paper-based stuff. Back at the start point, it would be printed-off X-ray films and things like that. So I've seen the evolution from the purely paper-based, non-digital world to all results provided digitally: X-rays, images, blood tests, et cetera. And now, increasingly, that moves into a world of advanced analysis using data, leading to clinical decision support, et cetera.

So that has grown through my clinical career. I don't currently work frontline clinically anymore, but I've seen that evolution go through, and yeah, that data is important. That information is important. It becomes the basis upon which people make decisions.

And yeah, it's very important that that information is correct; I suppose that's the simplest thing to say.

Justin Beals: Yeah. And I think one of the interesting parts is the communication. I assume, although actually I have no idea, so you'll have to tell me, Paul, that there's some amount of training in the medical field about how to consume that information and analyze it.

For example, I once built a feature set that was a prediction of a human outcome in a job. We would, of course, predict along a probability: the probability that this person will achieve in this way looks like this percentage. And I was always stunned by how hard the conversation around even simple probabilities was with lay people who may not have dealt with them. Do you find in the medical field that people are coming out of training and schooling with a good tool set for consuming that information?

Paul Campbell: Again, I'm thinking about what was taught and what I learned, where I began and how that progressed, and what I saw others do along the way. How that has changed in the undergraduate curriculum is a really important topic, and so is what information goes into it.

There are lots of challenges. There's too much information; you can't teach everything to everyone. So it becomes really important. But it's pretty standard for all medical courses to have some element of statistics in them, some way of analyzing data.

So that is pretty standard. And usually, through people's clinical careers, they all maintain a certain amount of awareness of that, even through clinical professional development. Some people become fairly highly expert in the world of evidence production, and some people are more likely the consumers of that.

I suppose the simplest things were more or less bound around the concepts of sensitivity and specificity, certainly in my day (there's a short illustrative sketch of those two measures below). And of course there's a certain amount of probability within that. But as data systems and digital technology have evolved, there's now a lot more information coming out of AI, and a lot more need to analyze probabilities: where they came from, and what they mean. Again, I'm not heavily involved in the development of the undergraduate curriculum, certainly not globally, but even within the UK I know there's a lot of discussion about what we need to improve or change within undergraduate curricula to improve the ability to understand and consume data, data science, data knowledge, and probability.

If I can say one final thing on that: that's for clinical practitioners, but we're now at the stage where there's also a call globally that this is important even for consumers, for patients. There's a need for everyone to understand this probability better.
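(Aside: since sensitivity and specificity anchor this part of the conversation, here is a minimal sketch of the two measures. The patients and test results are hypothetical, invented for illustration; they are not from the episode.)

```python
# Sensitivity: of the patients who truly have the condition, what share
# does the test catch? Specificity: of the healthy patients, what share
# does the test correctly clear? All labels below are hypothetical.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # missed cases
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# 1 = condition present, 0 = absent; ten hypothetical patients.
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
test_says = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

sens, spec = sensitivity_specificity(actual, test_says)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
# -> sensitivity = 0.75, specificity = 0.83
```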

Justin Beals: Yeah, that resonates with me. In the United States, of course, we have this interesting situation where manufacturers of drugs or devices or software can advertise on TV, and I do get concerned that viewers don't have the expertise to weigh the pros and cons of a certain treatment plan the way a doctor might. In the U.S., we've decided that we're okay with advertising the drugs, but then (this is a lay person's perspective) you can't get them except with a doctor's prescription. I also wonder about undue influence, right? A patient seeing something that feels like it would solve their problem, wanting that passionately, and a doctor trying to get through their day, perhaps just trying to muddle through the afternoon.

Paul Campbell: So, the system you've described there isn't a system we have in the UK; we don't have that sort of direct advertising, so it's not something we're used to. But it's not unusual for people to speak to their clinical practitioners with knowledge of a product: a friend told them about it, they found something on a website, et cetera.

So that's pretty commonplace now, and I think most clinicians are pretty comfortable and relaxed with it. You ground it in the simple principles of history, investigation, analysis, examination, diagnosis; you ground it in all of that, and you've got this background in science, and there are some things that carry more or less weight. Some bits of information are more pertinent to certain diagnoses than others.

So people coming with a story may often have picked something that's not quite the right fit, or wanted it to be the right fit, and I think most clinicians are usually in a good position to explain if that isn't the right diagnosis or treatment, and they're usually comfortable with that. Or, alternatively, they're in the right position to say, actually, that's a good spot; you've probably hit the nail on the head with that one, and that might be something worth trying. So in my experience, I think most clinicians are relatively comfortable with that.

Justin Beals: Good. I think that's great to hear. I am a little curious about AI broadly: what's being developed, what's trending, where the areas of research might be. When I first started working with data science type work, or machine learning, it was what has now, I think, been termed AI.

I'm loath to use that term sometimes because, you know, I remember the old HAL 9000 computer; to me, that was true AI, and I don't think we're there yet. But it was interesting to me, definitely, in developing models, how sometimes we might use a very esoteric modeling technique, but then we might go back to very simple statistical analysis and get the same level of accuracy out of it (there's a short sketch of this point after the next question).

And certainly, we've seen a lot of growth in natural language processing and LLMs, although I think of image analysis as something there's been a lot of innovation around in the medical field. Are there other areas, when you look at some of the broad strokes of AI tools, maybe beyond statistical analysis, where you're seeing some of these tool sets having a bigger impact on health care?
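(Aside: Justin's earlier point, that a simple statistical model can often match a more esoteric one, can be sketched quickly. The data set below is synthetic and the comparison illustrative; the scikit-learn model names are standard, but the scenario is assumed, not from the episode.)

```python
# Compare a simple baseline (logistic regression) against a more complex
# model (gradient boosting) on a synthetic, roughly linear problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
fancy = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("logistic regression accuracy:", round(simple.score(X_te, y_te), 3))
print("gradient boosting accuracy:  ", round(fancy.score(X_te, y_te), 3))
# On problems like this, the two scores often land within a point or two
# of each other; the simpler model is easier to validate and explain.
```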

Paul Campbell: You know, innovation and evolution in health care never stop. It's always changing and improving, sometimes not as fast as some people would like, but it certainly always innovates. In the land of AI tools, or machine learning and all the different variants and versions of that, as you indicate, there are lots of different things; the single phrase "AI" does not really do justice to all the techniques that are available. But if we stick with that broad brush term, with that understanding: the mainstay in healthcare at the moment, globally, is that a lot of these tools in the AI space are really, as you said, around imaging. So imaging modalities: X-rays, CT scans, analysis, and beyond.

I think, to the best of my knowledge, probably 70 to 80 per cent or so of the AI tools in healthcare at the moment are in that imaging space. That's changing, and I suppose it's changing in two ways. It's still imaging, but it's maybe going broader into other aspects of imaging: parts of clinical practice that weren't really capable of using imaging, or of getting much out of it, maybe now can. So it's still imaging, but a different version. And then there are other developments within the AI space that are not using imaging. They're using a more sophisticated analysis of data: for example, collections of clinical profiles, phenotypic profiles related to the genetic profile of the patient, combined with medicines, combined with other laboratory results. It's a mixed bag of data, in that sense, that can come together to form part of clinical decision support, even overlaying that with analysis of patient symptoms as well. So that's all becoming more prevalent, I would say. And what's not entirely clear at this point is how fast that rate of change will be. Will it always be predominantly imaging, even if that imaging is more expansive? I don't think that's clear yet.

It's a changing world.

Justin Beals: Yeah, of course. And I can think of the major levers, right? We might develop new modeling techniques or statistical analysis with existing data. We may collect more precise data, better refined, more detailed. Or a broader data set from more people; we can see that perhaps in DNA work, where we've been able to dramatically improve the databases of DNA information.

So now I'm a little curious to switch gears a bit to the MHRA and some of the work that you do there. One of the things I'm always curious about, and maybe I'm diving into the deep end of the pool too quickly, is this: you're working for the Medicines and Healthcare products Regulatory Agency in the UK, but there's a real international nature to medical innovation, across the United States and other countries. COVID, I think, was a great example of this; we saw different solutions popping up from different countries. How do you manage the international nature of life sciences? And you may need to give us the fifth-grader perspective of how someone brings an innovation from another country to the UK.

Paul Campbell: Okay. Right. You've given me a broad question there. So just briefly, to start to position it: yes, I do indeed work for the MHRA, the Medicines and Healthcare products Regulatory Agency, in the UK. We're the UK regulator for medicines, medical devices, and blood components.

And we effectively administer those regulations. These are pretty well-established regulatory mechanisms; most countries have slightly different versions, but they're all based on the same sort of themes. So these are well-established, recognized regulatory concepts and guidance.

So that's the background. Taking it onto the innovation and the international sides of it: my area of expertise is medical devices, maybe more than medicines or blood, so if I can caveat it that way, in the space of medical devices, where do we go with that in terms of innovation?

Maybe just some useful background: we're in the process of changing our regulations in the UK at the moment. We have regulations that have been in place for a number of decades now; we currently run the regulations known as the UK MDR, the UK Medical Devices Regulations, which actually date from 2002.

So we're already in the realms of updating those regulations, against the background of a public consultation that was carried out a few years ago, and we're bringing in some new changes this year. For example, we're bringing in new requirements for post-market surveillance, which is the part of the life cycle of your medical device where you make sure you can always ensure safety: always monitoring it and checking if anything's going wrong. And then we have future plans for regulations going in next year to further enhance aspects of the safety and the quality of medical devices, aligned with this new post-market surveillance. So we're in the process of a regulatory reform program.

So that's where we are right now. As part of that, turning to the innovation side of it: the whole global regulatory community is always trying to think, how can we improve innovation? How can we make sure that we're keeping things safe, but also ensuring access to the latest relevant technologies? You have to get that balance of safety and access, as we call it, right at all times. So we have a number of programs in the MHRA that we're working on.

One of them is actually in another part of the agency: the IDAP, the Innovative Device Access Pathway. It's been up and running for over a year now, I think, and it's trying to bring in innovative products that are fulfilling some unmet clinical need within the NHS. It's a tightly monitored program, and that's one aspect of how we can bring innovative products to the market safely but quickly. Other countries are doing similar types of programs, but that's the one we've got.

We also recently launched a thing known as the AI Airlock. That's a regulatory sandbox; people may have heard of this concept. That's where you're bringing regulators, manufacturers, innovators, developers, and other parties into a theoretical space to try out some of these new technologies.

And for us, it's specifically medical devices. We recently opened up the application process; in fact, this week we launched the application process for the AI Airlock. Have a look on the website, please apply.

Justin Beals: I did not apply, though, but I did read it.

Paul Campbell: Good. So again, within that, what we're trying to do is understand what regulatory challenges may exist, because some of these technologies may take the regulations to the edge of what the regulations were set up to do. We need to understand that and work out what changes are required, whether that's to the regulations or, in fact, just the guidance itself. Or we may expose that it's not really the regulations that are the problem, but some other piece of knowledge that's been missed and that people need to understand. So we've got a couple of those programs in that sort of innovative space within the regulatory reform. The last bit of your question, if I remember, was: how does all that play out in the international dimension?

Justin Beals: Yeah.

Paul Campbell: So, I've mentioned that many regulators are trying to do some of these innovative things, but they generally do that against the backdrop of: what's the global picture here?

The UK MHRA is one of the members of a thing known as the International Medical Device Regulators Forum, the IMDRF. That's a global medical device forum aimed at trying to promote convergence towards good, effective, efficient regulation, and it has a whole series of guidance documents to use as a baseline for all the global regulators. The chair moves over the years; this year it was sponsored by the FDA, and the UK will be taking their turn soon. There are working groups within it across the different areas of medical devices, and we are the co-chairs of the AI working group within IMDRF at the moment.

And we're doing that as a co-chair alongside the US FDA, and we're working through a process: usually it's the production of a guidance document that goes out for draft consultation. I think that consultation process has recently closed. We're doing an IMDRF version of what's known as GMLP, the Good Machine Learning Practice principles.

These are principles that we have already published alongside the FDA and Health Canada, but they've now been elevated to IMDRF level, and we are the co-chairs of that. That's the general mechanism people follow to try and make sure there's global harmonization, global alignment. Everyone has slightly different regulatory laws and rules, but there's definitely an aim to try and improve global harmonization.

Justin Beals: Well, first off, I love the rigor with which this has been developed. It's very obvious to me (and now that I consider it, it should have been obvious a long time ago) that medicine as a practice is steeped in ethics. Centuries ago there were already discussions of ethics in medicine and how we practice it well.

And the profession has hung on to that as we think about the industrialization of medicine, in a way, and then back to personalization, where we have specific devices and treatments being delivered. I have long complained that in the computer science community, broadly, we don't have a concept of ethics.

I think many of us just started playing with code (that was my background), and it led to developing software that was impactful to our fellow human beings, sometimes quite negatively. Especially in my work in education and some human capital management work, I wish we had had deep guidelines as practitioners, and as AI became prevalent in the work we were doing, it was even more critical, because we didn't consider safety over access. I think we considered access over all other problems, because we felt that if we could deliver material to everyone, that negated the concern about safety. And I think we've learned that that's just not true; it's a painful outcome. So, I did have a question about the AI Airlock. It sounds like it is a working group, but is there a technology component to it as well?

Paul Campbell: Just to clarify, what do you mean?

Justin Beals: I have participated sometimes in large working groups where, for example, if we were working together to develop a research project and there were going to be findings afterwards, we might pool our data into a single database for what we were working on.

And that database may have been set up by a cybersecurity expert as something everyone was comfortable working on, and we'd have to negotiate how we were keeping private information protected, or what fields we would ask for or not ask for, so that we could operate from a single data set.

It sounds, from your question back to me, like the AI Airlock might be more interested in what you're working on first, and then deciding if there's some componentry to that.

Paul Campbell: Okay, maybe the best way to take it is what we're trying to do with it, what sorts of things we'll be looking at, and what mechanisms we use to do that. That might be a helpful way to explain this. So first, as I said, we're trying to understand challenges that people may or may not be having within the regulations.

The reason I say "may or may not" is because we do have some experience of people saying that regulations are problems, are barriers. I think that's a misperception at times; it's not necessarily the case. The barrier isn't the regulation; the barrier is that there's some piece of information that you don't know.

That's one thing; that's the general concept. And then we do wish to stress those regulations: work out where the bounds are, and work out whether anything needs to be changed. So there will be a combination of outcomes here: things people will learn from, things we all need to know, things where we need to improve guidance, and possibly changes to regulations.

These are general concepts for all regulatory sandboxes. So what we're looking for is, yes, for people to apply in the AI medical device space who are having, from their side of things as innovators, developers, or manufacturers, a problem, a challenge, something that they're not managing to deal with.

We'll take the best portfolio collection of those and bring them into the two or three different areas that we're looking at. One of them is really looking at that early space, people at more of a concept level: they may not have any access to real-life data or anything yet. They just have an AI tool that they want to develop, and they're trying to work it out. They've already hit a barrier, but they don't even know where to go with it.

In some ways, that's like a tabletop, desktop, accessible scenario to talk through; we can work through that. Then you're getting on to the next bunch of people, who are maybe mid-process in their development program and already have access to the data you're talking about. They're trying to deal with and manage that data, and they may be finding some problems about how to handle their product with this data in this regulated environment. And we will take a collective review of that.

And when I say collective review: we will have a fairly significant bunch of parties involved in this. So ourselves, the MHRA, as regulator; but also, in the UK, medical devices are technically approved by independent bodies. The lower-risk ones are sort of self-certified, but the higher-risk ones are approved by what we call approved bodies.

These are independent entities that do the analysis of the safety of these products. They have a collective group known as Team-AB, so they're in there with us as well. We've also got some pretty hardcore data experts involved from some of our other services in the MHRA and associated bodies.

We've got some academic experts as well, particularly in the AI space, and people who have previously been innovators. So we've got a range of people who are going to help with the applicants who bring the problem: they've got access to data, they're trying to manage it, and they're struggling to work out how this will satisfy or provide what they need for regulatory status.

So that would be the other camp. And then, I suppose, the third one. These are broad-brush camps; this is not to say it's exact. We see this as a three-part mechanism, and we'll be prepared to flex between them depending on what the answer is, but we're looking at these three stages. The third is people who've got a much more advanced product, maybe a product that's already on the market, but who want to extend the functionality of their product, maybe move from being something in a workflow with humans to something automated. And how do we do that within the realms of regulations, within the realms of safety and surveillance?

So in our heads, we've got everything from the start of a product all the way through. We see those as three broad categories, and we're going to work with that and make sure that we share the outputs of the learning from it, as best we can, for all parties.

Justin Beals: Yeah, I think I understand much better now. What you're doing is a kind of fast but effective, staged analysis for some of these organizations that don't fit the mold of the regulatory process, so that you can invite them into a regulatory process that still brings deep scrutiny, but also brings in experts from the regulatory agency to analyze how the regulatory process is functioning for them.

Paul Campbell: Yeah, you made an interesting statement there: "don't fit the mold of the regulatory process." The regulatory process is a fairly broad, principles-based, scientifically driven process aimed at providing safety and quality, and for many people within the medical device world, and the medicines world you mentioned earlier, these are well-established processes.

However, we are now, in fact, dealing with people who've come more from the software world into the healthcare world, and they may not be used to that. You're welcome; please, you're helping us solve problems. But for people coming into that world, it's like they land in this new regulated world. I would argue that one of the very frequent problems we have is software developers in the healthcare space who are literally starting the journey not even knowing that they're in a regulated space.

They don't recognize that the software tools they're developing may qualify as medical devices. That's probably, in fact, one of the most frequent areas that we have to cover.

Justin Beals: Of course. In my work, we help companies with compliance, sometimes regulatory, some form of legal compliance, and it's funny: I have come across a couple of fast-moving, VC-funded startups that are building a device or software that impacts health. And I agree; I think they're excited about what they're building. Writing the code is easy in a lot of ways, but the impact of what we do as a field (again, a complaint I level at ourselves as an innovation industry) is something we rarely think about; we rarely think about the ethics. I have personally been involved in projects that I regret, and looking back, I wish I had had some even broad ethical guidelines about what to do and how to participate. As a matter of fact, you mentioned something that I found really brilliant: the Good Machine Learning Practice for medical device development that you all have been working on. I think there are ten principles, is that right, Paul? I read through them, and I thought, man, I wish all of us working on AI would adopt all of these principles. Even if we weren't building a medicine or a medical device or medical software, they're applicable. So I thought, if you're open to it, we might highlight a couple that I think are really valuable. One that stood out to me, and I think it's a mistake a lot of the software developers turned data scientists I meet make, is stated for medical as: clinical study participants and data sets are representative of the intended patient population.

Tell us a little bit, maybe from a medical or regulatory perspective, about how you perceive this, or even why it might be valuable. What's your perspective on this particular guideline?

Paul Campbell: So, yeah, thanks for mentioning the GMLP principles. As I said, these were published a couple of years ago, between ourselves, Health Canada, and the FDA, and I have to give credit to the colleagues who did them; it was before my time, to be honest. We are now working with the IMDRF; there's been a public consultation on the draft, because we want to make sure they're completely up to date and aligned with this global harmonization aim, because we feel that they are very good, useful, valid principles. You touched on one: clinical study participants, and the phrasing "intended patient population." There's a phrase in medical device regulation quoted as either "intended use" or "intended purpose"; the two are used fairly interchangeably in different countries.

The way I like to think about it is this: I imagine a world where people say, I've got this amazing product. It does this. Great stuff. It cures problems in health. That's fantastic. But what exactly does it cure? What does it fix? And who exactly does it do that for?

Because there's no grand panacea, no "take this one drop of liquid in your drink each day and we'll all be super healthy all of the time." Generally, the way healthcare products work is that they're directed at a specific problem in a specific group of patients.

So, on intended purpose: in fact, we also have guidance about that on the MHRA web pages. We call it an intended purpose, and it's usually broken down across a number of things: the age of the population, even the ethnicity of the population. Bias can be really important, particularly in AI products.

I'm straying a little bit from the GMLP principles here, but staying with this concept of intended purpose: what exactly does it do? Who are the patients it's used on? That's the patient population. And what's the environment it's used in as well? This wider context of intended use or purpose is to make people think: you're developing a product, and you're going to have to prove what it does and for which particular group of people, because then you've got evidence to say, well, we're assured that it can do these things in that group of people. And I think experience has told us that many, many IT projects, healthcare software projects, and AI projects start off with a very broad brush of "oh, this is going to cure X for everyone," and the bottom line is that that's really hard to prove. So that's one of the reasons for that principle. It's well established from the world of medicines that you mentioned earlier on as well.
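(Aside: one minimal, hypothetical way a developer might check the "representative of the intended patient population" principle is to compare the demographic mix of an enrolled study cohort against the intended population. This is a sketch, not MHRA guidance; the age bands, shares, and counts are all invented.)

```python
# Compare a study cohort's age mix against the intended patient population
# using a chi-square goodness-of-fit test. All numbers are hypothetical.
from scipy.stats import chisquare

# Intended population shares (e.g. from national statistics; hypothetical).
intended_share = {"18-40": 0.25, "41-65": 0.45, "66+": 0.30}

# Participants actually enrolled in the study (hypothetical counts).
enrolled = {"18-40": 180, "41-65": 230, "66+": 90}

total = sum(enrolled.values())
observed = [enrolled[band] for band in intended_share]
expected = [share * total for share in intended_share.values()]

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.2g}")
for band, share in intended_share.items():
    print(f"{band}: study {enrolled[band] / total:.0%} vs intended {share:.0%}")
# A tiny p-value flags a mismatch: here the 66+ group, 30% of the intended
# population, is only 18% of the study, so evidence for them is weaker.
```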

Justin Beals: Yeah. Certainly on the technology development side, in the pursuit of billions of dollars of value, we as founders or technology company leaders are enticed to build for what we call the largest total addressable market, because, I think, the business folks believe there's a relationship between how valuable a company can be and the number of people you can provide that service or solution for. But it infects things in the wrong way.

And I'll give you a personal example that I've seen. We were building some hiring models at a prior company, and I was really rigorous about the accuracy measurement on those models, because we were very new to it. The data was very new, and I was unsure about the applicability of a single model. One of the really interesting things we found is that we could have two different banks that were very similar: very large, multinational banks.

We were looking at the same job title, and we would build a model for prediction on that same job title at both banks. But we found dramatic accuracy differences when we looked at the two different populations, because all of a sudden we realized that the treatment of the people this bank wanted to hire was unique compared to the other bank.

And that was dangerous. People who should have gotten a shot didn't get a job because we were building these predictions.
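(Aside: a minimal sketch of the effect Justin describes, under assumed conditions: same features, same job title, but two populations whose underlying hiring "rule" differs, so a model fit to one cohort loses accuracy on the other. Both rules and all data below are synthetic and purely illustrative.)

```python
# Train on cohort A, then evaluate on both cohorts. The weight vectors
# below stand in for two banks' differing (hypothetical) hiring rules.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(weights, n=2000):
    X = rng.normal(size=(n, 5))                  # candidate features
    noise = rng.normal(scale=0.5, size=n)
    y = (X @ weights + noise > 0).astype(int)    # hired / not hired
    return X, y

w_bank_a = np.array([1.0, -0.5, 0.8, 0.0, 0.3])   # hypothetical rule A
w_bank_b = np.array([-0.3, 0.0, 0.8, 1.0, -0.5])  # hypothetical rule B

X_a, y_a = make_cohort(w_bank_a)
X_b, y_b = make_cohort(w_bank_b)

model = LogisticRegression(max_iter=1000).fit(X_a[:1500], y_a[:1500])
print("accuracy on bank A holdout:", model.score(X_a[1500:], y_a[1500:]))
print("accuracy on bank B cohort: ", model.score(X_b, y_b))
# The second number drops toward chance: a single pooled model hides the
# fact that it is wrong for one population. Evaluate per intended group.
```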

Paul Campbell: Yeah, I think you've just touched on a point there, Justin, which is that many people in the field know and recognize that there can be quite a brittleness or fragility to the data science world. You can prove a thing in a certain set of circumstances, and be pretty robust about that, but it can often take not very much of a change in one element to get quite different outcomes. So that remains, to my mind, a problem for us in terms of medical devices and quality and safety.

But I think it's also a data science problem, one for data scientists, such as yourself and our data science experts, to try to solve as well.

Justin Beals: Well, I think we forget the science part, which is that we need to test, you know? I get frustrated when we assume, and certainly I assume sometimes, but I wish we would think about it more as a thesis than a fact: something we imagine might be possible, something to test, requiring accurate information for a result. There was another one that I thought was really good, which is: testing demonstrates device performance during clinically relevant conditions.

Paul Campbell: So that's particularly pertinent. Healthcare is provided in a number of different ways, in a number of different settings, and on a number of different timelines as well. People are probably fairly familiar with the family doctor, the GP, primary care, the general practitioner. Then there's secondary care, which is more advanced hospital care; then there's even tertiary-level care, high-level specialist care; and then there's time-critical care, emergency department care and trauma.

All these environments make a difference, and when your software gives answers, you have to think through the relevance of that information, the level of accuracy required, and the interjection of that information at the right time, in a time-critical or context-sensitive manner.

The last group is that, increasingly, we've got care being provided directly to patients or consumers. And again, we touched on this back at the start: the level of knowledge you need to understand the probabilistic answers you may be getting. What does that mean? What do I do with it? Giving an ECG to a professor of cardiology versus giving an ECG to a primary school child is going to get two very different interpretations, if you know what I mean.

 Justin Beals: Yeah. 

Paul Campbell: So that context of the environment is really, really important. And it has to be thought through.

Sometimes medical devices are in risk categories, and one of the features taken into consideration is the criticality, the timely nature of this. So you're building your tool, and we ask people: what does it do? What's its intended use? What's the purpose? What's the intended population? And what's the intended environment you want to use it in? Because there's no point in proving that you can use something perfectly in a very sterile benchtop test environment and then trying to transpose that into a very busy emergency department with multiple streams of information coming at you. I think many people will recognize this: you go from that benchtop lab environment, you put it in the real world, and suddenly your model doesn't work as you thought it was intended to.

So that's why, as a principle, you need to try and prove it in the right environment.

Justin Beals: Again, this is something that resonates with me broadly from computer science work and some of the work on data science. I've seen us develop models, and they're oftentimes developed in a very sterile environment.

We have a data set; we're going to train a model on it; we're going to test that model with a held-out segment of the data set to see how accurate the results are. Sometimes we don't even do that, if we feel the results are qualitative as opposed to quantitative. That's very frustrating to me, because I think there are ways to drive at quality. If you had an AI tool that's a large language model, a natural language processing model, and you said, "write a paragraph about that thing," there are ways to get to quantitative analysis of its effectiveness, right? I could bring in, you know, 25 English teachers to grade its ability to write that paragraph and receive some quantitative analysis of the accuracy of the response (a short sketch of this idea follows below).

But too often, I think, we don't care about both that accuracy and how accurate it is in the environment, because we think it works the same way when we deploy it. But if you've got someone consuming that information, let's say it's an essay-grading tool: if your recommendation was to use the high-level value (did they get an A, a B, or a C) to help guide the reading of the work, but you see teachers just plowing right through it, A, B, A, B, without working in the context, you've got to remove that overarching value, because in the context of a teacher trying to grade something, they're going to high-center on it and not do the rest of the analysis.

And I don't think data scientists in the software world think about that impact: the contextualization of how the predictions we make get consumed.
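(Aside: a minimal sketch of Justin's teacher-grading idea, turning qualitative judgments about model output into a quantitative score by aggregating several human graders. The paragraphs, graders, and scores below are hypothetical.)

```python
# Several graders score each model-written paragraph on a 1-5 scale;
# the mean becomes a trackable quality metric, and the spread shows
# how much the graders agree. All scores are hypothetical.
from statistics import mean, stdev

grades = [
    [4, 5, 4, 4, 5],   # paragraph 1, scored by five graders
    [2, 3, 2, 2, 3],   # paragraph 2
    [5, 4, 4, 5, 4],   # paragraph 3
]

for i, scores in enumerate(grades, start=1):
    print(f"paragraph {i}: mean = {mean(scores):.2f}, spread = {stdev(scores):.2f}")

overall = mean(score for scores in grades for score in scores)
print(f"overall quality score: {overall:.2f} / 5")
# Low spread means graders agree, so the mean is a defensible quantitative
# proxy you can compare across model versions.
```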

Paul Campbell: I mean, AI and data science have been around for decades; it's not new. For a number of reasons, it's becoming much more prevalent, and it, in and of itself, as a science, is advancing rapidly. I think that's part of the learning. The amazing information that can be garnered out of collections of data by good data science techniques can be hugely beneficial to healthcare; there's no real debate about that.

But that's on its own journey of evolution towards: okay, so how do we use that data? How do we make sure it's contextualized? How do we make sure it's appropriate for the environment? How do we make sure it's used in the right way on the right people? I think that's part of the evolution of the whole data science journey, which, as you say, is grounded in those GMLP, Good Machine Learning Practice, principles.

Justin Beals: Yeah. Well, Paul, I want to thank you for coming into this particular conversation with your background. And I also want to thank you for the amazing work at the MHRA that you do in ensuring that we have safe and effective practices, and also an opportunity to bring innovation to the broader marketplace.

Also, in just reading about the organization and yourself and some of your work, there's a deep transparency to the work you're doing, and I think that drives a lot of trust, both in practitioners and consumers, so I'm grateful for it. And, of course, for SecureTalk, we're really grateful for you joining and sharing your expertise and perspective on these things.

I think it's very helpful, and I hope that, broadly, those of us who work in the computer science space will adopt better ethics in the work that we do. That was part of my interest in chatting with you today.

Paul Campbell: Thanks very much, Justin. It's been a pleasure to speak to yourself and colleagues on SecureTalk, and I hope that's been useful. It's really good to hear that you guys are interested in safe and effective products to help us in healthcare, so thank you very much.

Justin Beals: We're always improving, right? Well, thank you.

 

About our guest

Dr. Paul Campbell
Head of Software and AI, Innovative Devices Division, MHRA (Medicines and Healthcare products Regulatory Agency)

Dr. Paul Campbell is the Head of Software and AI at the Medicines and Healthcare products Regulatory Agency in the UK. He is an experienced medical executive with over two decades of clinical experience in various disciplines. Paul has held senior roles across the NHS, government, and in strategy and regulation.

He is a trusted advisor in digital health, a graduate of the founding cohort of the NHS Digital Academy, and a recent Executive MBA graduate. Additionally, he is an expert in medical and digital technologies, specializing in their development, assessment, and delivery. In the field of medical device regulation, he is particularly knowledgeable about software as a medical device and AI as a medical device.
