
The Evolution of AI: Embracing Agency

READY 2025 Keynote

Join Professor Aldo Faisal, a leading expert from Imperial College London and Director of UK government-funded AI centers, as he explores the transformative power of AI in healthcare. This keynote takes you through the evolution of AI, revealing how unconventional data sources like shopping patterns and timestamped patient records can enable earlier, more accurate diagnoses. Discover how AI is enhancing clinical decision-making, optimizing treatment strategies (like in sepsis management), and the critical need for human oversight and continuous learning in AI systems. Professor Faisal also introduces the groundbreaking Nightingale AI initiative, aiming to build a large-scale health model that thinks like a clinician.

Presented by:
Aldo Faisal, Professor of AI & Neuroscience, Director, School of Convergence Science in Human & Artificial Intelligence, Director, UKRI Centres in AI for Health, Imperial College London

Video Transcript

Below is the full transcript of the READY 2025 Healthcare Solutions Keynote featuring Professor Aldo Faisal.

[0:00] Good morning, everyone. To your big surprise, I’m going to deliver a keynote on AI. Some may roll their eyes, as I did 10 or 20 years ago when I started working in machine learning, thinking that was the future. Meanwhile, another group was doing “AI,” the old-school thing, while machine learning was the real thing to do. And it goes back to a long story about how we create innovation. And if you think about it, we can go all the way back to when our ancestors learned to make fire. And since that moment until today, the way we innovated was: you had a few smart people sit down, tinker, and build the solution.

[0:48] Then came a time around the late ’80s and early ’90s when people started to think: can we build systems that learn to solve problems for which humans need intelligence? But instead of building the systems themselves, we enable them to learn from data to discover the solution on their own. That’s the approach of machine learning, and it has become the dominant way by which we are designing and building AI systems.

[1:17] And so if you ask me to this day, “Do you do AI?” I say, “No, I don’t do AI. AI is what you do with PowerPoints. I do machine learning, and that’s what you do with Python.”

[1:29] And coming from that perspective, I want to tell you a bit about the journey that we’ve been on over the past 10 years, when we started working on AI for healthcare and started to embrace agency in a systematic manner. I’m a professor at Imperial College London and director of the two UKRI centers in AI for healthcare and AI for digital health. They constitute the largest single investment by the UK government in science research for artificial intelligence in general, and we’re delighted that we were successful in getting that focused on healthcare.

[2:06] Our partners are not just the National Health Service, but crucially – and fairly uniquely – we brought in the UK healthcare regulators as partners in our center, as well as over 40 industry partners. And over the past few years, we’ve managed to realize a number of things.

[2:29] But let me take you to London – not as it is now, but maybe London as it was 150 years ago – to a gentleman called John Snow, generally considered the father of epidemiology. John Snow had to deal with a cholera epidemic. And in those days, we didn’t know how cholera spread. Did it come from bad air? There was something called the miasma theory. And some people thought maybe it was actually some water-borne agent spread through drinking water, for example.

[3:04] So John Snow looked at this problem and, being a British empiricist, he decided: let’s look at the data. People didn’t speak about “data” in that way then. So what he did was take the street map of central London and then decided: for each case of cholera, I draw a little rectangle; if there are more people suffering from cholera, the rectangle gets higher. You can see how that looks there. And so he invented the bar chart to visualize the data, and that’s literally, I’m told, the first bar chart ever drawn.

[3:35] Then he wanted to test another hypothesis: is cholera linked in some way to the spread of water? And he looked at the data, and it seemed like these cases of cholera were all clustered around Broad Street, the little green dot you see there. And he identified there was also a water pump that you can still see to this day in London. I took this picture a few weeks ago. And so now he had the idea of going into a medical intervention. He had the authorities literally shut down the pump, and through that causal intervention, he could see that no new cholera cases arose, and the cholera epidemic was stopped.

[4:21] And so in one go, he established public health and epidemiology in a very data-driven way. He effectively started data science and data visualization, and he used causal interventions to manage public health at a systemic level.

[4:35] So, how have we been faring on this journey? This is a question for you as an audience: where are we in the journey to AI and digital healthcare? I’m sure most of you will be familiar with these video games that have appeared over the past few decades. I’m going to make my way over here.

[4:53] If you think about this: where is digital health now, and where can it be in the future? I’d like you to raise your hand when you think I’m close to where we are. Are we in the Pong age of digital health? Raise your hands if you think it is. Are we at Atari levels? Are we doing CGA, which was the first proper color graphics? Are we at EGA? Then we’re getting to better stuff: VGA. Now we’re in the 2010s. And now we’re in the 2020s.

[5:30] Thank you for that data-collection effort with me. I agree with the majority of you that we are at the EGA level. We are just here, and we can still make it all the way there. And I hope we can make it as fast as this 1980s technology can do it today, or faster. Why do we need to be fast? We are under severe strain in our healthcare systems – be that in staff, be that in need for health and care, or be that in costs.

[6:06] What we’re seeing is that over the past few years, the demand for health and care has been outstripping the supply that we can provide. And my hope – and the goal of my career – is to push the boundaries, so we can make more and more of this unmet demand addressable by artificial intelligence.

[6:28] And of course, here we believe that AI is the solution. Most of us over the past decades have thought about AI as a way of obtaining and analyzing data – that’s all of data science and all of the engineering. But really, if you start thinking about the true value of AI, it’s that AI can do stuff for you that you don’t have to do anymore. And so we’re now talking about agency.

[6:54] And of course, we’ve all used ChatGPT, and now we’re seeing the wonders of this language technology being rolled out to facilitate life in the professions of healthcare through ambient intelligence. That’s generative AI. And we’re going to see in the future more agentic AI – AI systems that go out, you send them on an errand, and they come back and do stuff.

[7:19] Now, that’s nice if you’re thinking about building AI systems as a problem where we’re looking at things that we’ve been doing inefficiently, and we’re doing them now more efficiently with AI. But what I want to push you on – and what the purpose of my talk is – is to think not about how we can just do the old better, but how we can do things we couldn’t imagine before by unlocking the power of AI. And so that’s what I call developing patient-ready AI and really realizing the full potential.

[7:52] People say the full potential realized means we can increase life expectancy by one and a half years by catching preventable deaths. We can dramatically reduce burnout of clinicians, we can reduce costs, and we can improve the satisfaction of healthcare systems and our populations. But there are a number of gaps that we need to address here.

[8:17] And the first gap I want to talk about is the observability gap. Just because you're standing in front of a clinician doesn't mean they have full insight into how you're doing. The human body and the human context and experience are too rich for that. And yet, in many domains, we're still operating medicine the way we did in the 19th century, in the days of John Snow.

[8:48] Let me explain. A few years ago, we started thinking about whether we could use GenAI methods to try to manage public health at the level of a country. Of course, the United Kingdom is composed of four countries – Scotland, England, Northern Ireland, and Wales – and I’ve drawn up here the country of Wales: 4 million inhabitants. Wales is fairly unique because they very early on started to digitize their data. So we have the complete linked patient records from 2009 to today for all primary care and all secondary care data. And we can do that because within the NHS patients are in one system. We have every GP visit, every hospital visit, and every single prescription picked up in a pharmacy.

[9:30] And so we asked ourselves: can we now use that information to do something useful – for example, catching or predicting preventable life-death situations? The way we wanted to do that was simple: use the data we have available. Over the past few years, there have been beautiful studies – many of them in top-notch US hospitals – showing that if you capture someone's blood results and regular scans at high intensity, you can fairly accurately predict if someone is going to suddenly get worse in the future. But can we do that at an affordable level, where you don't have to come in for a weekly blood test, using other data – cheap data – data that's not collected by a clinician or professional but collected by a secretary?

[10:17] So we’re looking at timestamps in the administrative aspects of patient records. The idea is that there may be a pattern in that information. The challenge, people will tell you, is that if you look at primary care data, you don’t go to the doctor every week. You go for six, seven months without going, and then maybe you go more frequently if you have a problem. So the gaps were the issue, and so we started to look at the timestamps of these records, and what you're seeing here are these rhythmic – or arrhythmic – patterns of interactions between patients and the healthcare system. Once you have millions of patient records, you can examine these patterns.

[11:03] To our surprise, we were able to predict with 80% accuracy – four out of five cases – whether someone would need to unexpectedly go to the hospital in the next three months purely from the timestamps and their age, gender, and postcode. That puts us in a very interesting position. We now have a tool that can be used to potentially inform patients (although you may not want to do that with 80% accuracy), or to alert their general practitioners that a patient may need an earlier visit. You can notify hospitals that an uptick in admissions is likely, or inform public health authorities.
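
To make the "Morse code of health" idea concrete, here is a minimal sketch in Python, the speaker's stated tool of choice. It turns raw visit timestamps into simple rhythm features and trains an off-the-shelf classifier. The feature set, the model choice, and the synthetic data are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def interval_features(visit_days):
    """Summarize the 'Morse code' of a contact history: the gaps (in days)
    between consecutive healthcare interactions."""
    gaps = np.diff(np.sort(visit_days))
    if gaps.size == 0:
        gaps = np.array([0.0])
    return [
        len(visit_days),            # how often the patient appears at all
        gaps.mean(), gaps.std(),    # average rhythm and its irregularity
        gaps.min(), gaps.max(),     # shortest burst, longest silence
        float((gaps < 14).sum()),   # clusters of visits within two weeks
    ]

# Synthetic stand-ins: each "patient" is (visit day offsets, age); label = 1
# would mean an unplanned admission followed within three months.
patients = [(np.sort(rng.integers(0, 365, rng.integers(2, 30))), rng.integers(18, 95))
            for _ in range(500)]
X = np.array([interval_features(visits) + [age] for visits, age in patients])
y = rng.integers(0, 2, len(patients))   # placeholder labels for the demo

model = GradientBoostingClassifier().fit(X, y)
print("training accuracy:", model.score(X, y))
```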

[11:45] We created a solution that suddenly prompted the question: how should we deploy it now? This was previously thought impossible because no one imagined that the simple rhythms of healthcare interaction data could carry so much information about a person’s health state. There is no other information being used. We called this the “Morse code of health.” This shows how administrative data in healthcare records can start opening up this observability problem that we have in patients. Now let’s look at other kinds of data collected in other systems.

[12:18] Shopping data. We all shop, and most people use some kind of loyalty card that tracks purchases. This is well-known. But we wanted to see if shopping data could help with early diagnostics. We focused on ovarian cancer, studying women in London who were treated at the Royal Marsden Hospital, part of the Imperial College Healthcare Trust. We partnered with Tesco, the major supermarket chain in the UK. Customers there shop very consistently, so their data is rich.

[12:50] We recruited 150 women diagnosed with ovarian cancer, and through Tesco we obtained their shopping data. Then we essentially did what supermarkets did decades ago: developed machine-learning methods to analyze shopping baskets.

[13:08] To our surprise, my collaborator James Flannagan and PhD student Kevin found a marker suggesting that changes in shopping baskets eight months before diagnosis were systematically indicative of the future onset of ovarian cancer. Why? It's a complex machine-learning pattern that the machine recognizes. But the simple explanation is this: ovarian cancer causes bloating. In Britain, baked beans disappear from the basket. You stop buying cabbage. You buy something for irritable bowel syndrome, and so forth. So this gives you an idea of why the shopping basket now carries a healthcare signal – something I don't think most clinicians would have considered a viable way of diagnosing.
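
To illustrate the basket-analysis idea, here is a minimal sketch: it compares a shopper's recent purchase mix against their own historical baseline and classifies on the change. The category list, the logistic-regression model, and the toy data are assumptions for illustration only, not the published method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

CATEGORIES = ["baked_beans", "cabbage", "ibs_remedy", "bread", "tea"]

def basket_shift(history, recent):
    """history/recent: dicts mapping category -> purchases per month.
    Returns each category's change from the shopper's own baseline."""
    return np.array([recent.get(c, 0) - history.get(c, 0) for c in CATEGORIES])

# Toy example mirroring the bloating story: beans and cabbage drop out,
# an IBS remedy appears.
case    = basket_shift({"baked_beans": 4, "cabbage": 2}, {"ibs_remedy": 3})
control = basket_shift({"baked_beans": 4, "cabbage": 2},
                       {"baked_beans": 4, "cabbage": 2})

X = np.vstack([case, control])
y = np.array([1, 0])            # 1 = later diagnosed, 0 = control
model = LogisticRegression().fit(X, y)

# A new shopper whose basket shifted the same way scores as higher risk.
new_shopper = basket_shift({"baked_beans": 3, "cabbage": 1}, {"ibs_remedy": 2})
print(model.predict_proba([new_shopper])[0, 1])
```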

[14:23] And if you went to a clinician and showed them your shopping receipt, it would be guesswork: "I guess you have this." But once you have data available at scale – and we're now expanding this collaboration with Tesco to look at other diseases – we can really start opening up completely new ways of thinking about medically relevant information that's out there. And that's another step toward closing this observability gap.

[14:48] Another important challenge is: what do we do the whole day? I remember when my grandmother developed Parkinson’s. I could notice she was acting differently. There were subtle changes. It was very hard to put a finger on that, and it wasn’t the tremor at the time. It was changes in the routine.

[15:07] So what we did around 2010 was set up what we now call living labs. We basically converted the whole corridor of our laboratories into a flat and invited people to live there for days on end. And we collected all sorts of data. This is a 10-year-old video that you’re seeing here. We collected the movement of the skeleton, what people were seeing, measured their eye movements. Everything was annotated. And we basically built a database.

[15:38] So it’s a bit like when you had genes and you did genomics to sequence the human genome. We’ve done that for behavior. And ethology is the science of behavior. So we collected the human ethome, and we’ve collected thousands of data points that way. And now, once you have loads of data about people’s movement, you can start using that for diagnostic purposes.

[16:04] Here on the left, you see children with Duchenne muscular dystrophy. This is a boy aged seven at the time, going through a standardized clinical assessment: standing on one leg, walking for six minutes. And it’s a fairly cruel way of measuring mobility in children who are going to become paralyzed and die by the age of 20–24. It’s a brutal test.

[16:30] If you’re a company developing a disease-modifying treatment, you need to show that in all these assessments, the children have not substantially decreased in performance. These assessments are so crude that there are anecdotes of children having a bad day in school, then going to the clinic for assessment – they walked 50 meters less, or they stood for two seconds less on one leg. These are very crude ways of assessing disease. They’re 19th-century methods of assessing disease.

[17:03] And on the right, you see the same boys simply playing at home, where we've captured their motion data doing whatever they want to do – in their context, assessing what's important to them. So we then took all this data collected from our human ethome project and did effectively what people building large language models do: you harvest all the data – all the English-language text on the internet – throw it into a machine, and the machine learns something about the structure and meaning of human language so that it can reproduce text in one way or another. And so we did the same thing for human movements.

[17:48] And so into this model we could then feed data about kids doing the clinical assessment in the clinic, or kids playing at home, because for the machine, it didn't actually matter. It doesn't matter to ChatGPT if you put in an X message or a Shakespeare sonnet. It can digest and understand both.

[18:07] And lo and behold, we were able to discover equivalents of what you would call tokens in language models – tokens for behavior that we can interpret and examine clinically. What you can then do is digitize human behavior and basically create a string of tokens that a machine can start digesting and understanding. And so what's very important here is that we can now predict the disease course for every single kid with twice the precision that's possible with the FDA-approved gold-standard methods of disease assessment.
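
Here is a hedged sketch of what "tokens for behavior" could look like in practice: continuous motion recordings are sliced into short windows and quantized against a learned codebook, yielding a discrete token stream a sequence model can digest, much as a language model digests text. The window length, codebook size, and random stand-in data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
motion = rng.normal(size=(10_000, 3))   # stand-in for joint-angle recordings

WINDOW = 50                             # one short movement fragment
n_frames = len(motion) // WINDOW * WINDOW
windows = motion[:n_frames].reshape(-1, WINDOW * 3)

# Learn a 64-entry codebook of movement fragments, then map each window
# to its nearest codebook entry: the behavioral "text".
codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(windows)
tokens = codebook.predict(windows)
print(tokens[:20])                      # a string of behavioral token IDs
```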

[18:46] What's more important – if you're developing pharmaceuticals – is that you need to run clinical trials to determine whether your drug works. With this technology, we can do that in half the time, because the method is more sensitive to change and requires a fraction of the population.

[19:05] Here you see the curve for how many individuals you need to get a certain precision. At the moment, the FDA-approved minimum is 60 patients for a trial in a rare disease, where it’s hard to recruit kids. We can get the same precision with nine kids – so the trial is a fifth of the size. All of a sudden, by making trials faster and less expensive, you’re taking risk out. We can hope that we can see more disease-modifying treatments emerging.
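
The scaling behind smaller trials can be illustrated with the standard two-arm sample-size formula: the patients needed grow with the square of the outcome noise relative to the effect you want to detect, so a more sensitive endpoint shrinks the trial quadratically. The numbers below are made up for illustration; they are not the study's actual calculation behind the 60-versus-9 figure.

```python
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.9):
    """Patients per arm needed to detect effect `delta` given outcome noise `sigma`."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

# If a digital endpoint halves the noise relative to a crude clinical score,
# the required sample size drops by roughly a factor of four.
print(n_per_arm(delta=1.0, sigma=2.0))  # crude clinical score: ~84 per arm
print(n_per_arm(delta=1.0, sigma=1.0))  # more sensitive endpoint: ~21 per arm
```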

[19:37] The method was so effective that we thought: can we apply it to other domains, not just children's muscular diseases? So we started working on FRDA – Friedreich's ataxia – a disease that affects mitochondria. It's a genetic disease, which means the way your genes produce proteins is upset, protein levels vary, and that's what makes you sick. If you want to study or monitor patients, you typically need a blood test and then molecular biology to measure how much of that protein is in a person at that moment. That's a slow and expensive process.

[20:12] We took the exact same pipeline and fed it through the system. Lo and behold, the identical pipeline worked. Remember, it’s just like language models – it doesn’t matter whether it’s an English comedy or an American drama. It can understand the text. We could use that to effectively reconstruct the rate of gene expression that a patient had on a given day, equivalent to a blood test, by just using wearable data that collected their motion.
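
A sketch of this "digital blood test" idea, under the assumption that behavioral token frequencies derived from wearable motion data can be regressed onto a molecular readout. The token histograms and protein values below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
N_PATIENTS, VOCAB = 80, 64        # patients x behavioral-token vocabulary size

# Each row: how often each behavioral token appeared in a patient's recording.
token_histograms = rng.poisson(3.0, size=(N_PATIENTS, VOCAB))
protein_level = rng.normal(50.0, 10.0, N_PATIENTS)   # stand-in lab values

model = Ridge(alpha=1.0).fit(token_histograms, protein_level)
estimate = model.predict(token_histograms[:1])       # wearable-only "blood test"
print(f"estimated level: {estimate[0]:.1f}")
```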

[20:44] To my chagrin, the FDA-approved biomarkers could not reconstruct the gene-expression level, so it's not a trivial problem. All of a sudden, we can use variables that measure how you are doing today – not comparing you to some average after a clinic visit – to be as sensitive as measuring the activity of single genes in your body. That's the power we have by using AI systematically to unlock information about our health that's already there. That's how we can close – or start closing – the observability gap.

[21:20] So now we know what you have. We know how you’re doing. So let’s do something about it. That’s where the clinical work really starts – you want to make patients better once you know what they have. This has been slightly overlooked in the AI-for-healthcare domain, of course, because all the quick wins were in diagnosis, especially in radiology. The data was there, it was labeled, and you could just deep-learn the problem to its end.

[21:48] There are, of course, challenges when you think about AI for treatment. Most of the problems AI has solved so far are problems of perception. If you're a radiologist looking at a scan, it's a problem of perception. If you're putting a stethoscope to the chest to hear if the lungs are clear, it's a problem of audition. These are all perceptual problems.

[22:13] But once we start thinking about treating patients, it’s a problem of cognition because you need to plan, adjust, and manage what you’re doing. So if we want to push AI technology here, we need to think about AI systems that are agentic in nature – systems that can explore different paths without having to execute them and then pick the best path to take.

[22:39] That's really what agentic AI is – giving a system agency to do something. Now, you may say, it's great if a system goes out and explores what it can do for a patient, but that could actually be rather dangerous, right? That's how we're training self-driving cars – first in the simulator and then in the real world. But how can we do it better in healthcare?

[23:04] Here, we started working around 2015 on methods applied to the challenge of sepsis. Sepsis, of course, is one of the biggest killers in hospitals. We know how to treat sepsis – you can give patients antibiotics. The problem is that the cardiovascular system collapses and shuts down before the antibiotics can work. So you need to prop up the cardiovascular system, and you can do that by managing different dosages of drugs – vasopressors and IV fluids – to keep the heart pumping and the fluids flowing.

[23:34] What we did was walk into Imperial College Healthcare Trust, recruit 45 intensive care clinicians, pull 30 patients' worth of records out of the system, present each patient record frozen at a certain moment in time, and ask the clinician, "What would you prescribe?" What you get is a spread of over 500% in the dosage of one drug across 45 clinicians. For the vasopressors, the spread is narrower, but there's roughly a 50/50 chance of whether the drug is prescribed at all.

[24:12] I'm not a medic, and I know these are very difficult decisions. The clinicians sit there thinking for minutes, sweat on their brows, deciding what dosage to give. What we're seeing here is the challenge: it's very hard to define what a good treatment is, and not all of these treatments can actually be good.

[24:32] So what we developed were AI methods that learned from operational historical data of how the patient was doing and what treatment was given to systematically find better strategies for treating patients. You can imagine it a bit like a chess game – if you don’t know about chess, you can just observe chess players for a long time, and you will learn the rules. You will see that they make some moves, and at some point, you realize maybe another sequence of moves would have been better.

[25:02] We’ve done this by mining 80,000 healthcare records of ICU patients and building a system that has been called the AI Clinician. This has been a 10-year journey, and it’s been rolled out into a number of hospitals in London, where it’s treating patients. It’s de facto the first semi-autonomous system for treating patients that was entirely learned from data. There was no tweaking by biomedical engineers. We’re now taking it to other domains of medicine, pediatrics, and so forth.
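
For intuition, here is a minimal, purely illustrative sketch of learning a treatment policy offline, from logged care alone – the chess-watching idea in code. The published AI Clinician discretizes patient states and a grid of vasopressor/fluid doses and solves the resulting decision problem; the toy state, action, and reward encodings below are assumptions for this sketch, not the deployed system.

```python
import numpy as np

N_STATES, N_ACTIONS = 100, 25    # clustered patient states x dose combinations
GAMMA, LR = 0.99, 0.1

rng = np.random.default_rng(0)
# Logged transitions (state, action, reward, next_state) harvested from
# historical records; here, random stand-ins. A reward could be +1 for
# survival and -1 for death at the end of an ICU stay.
logged = [(rng.integers(N_STATES), rng.integers(N_ACTIONS),
           rng.choice([0.0, 1.0, -1.0]), rng.integers(N_STATES))
          for _ in range(50_000)]

# Tabular Q-learning over the log: no new patients are ever experimented on.
Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(10):                       # sweep the historical log repeatedly
    for s, a, r, s2 in logged:
        target = r + GAMMA * Q[s2].max()  # bootstrap from the next state
        Q[s, a] += LR * (target - Q[s, a])

policy = Q.argmax(axis=1)   # recommended dose bin for each patient state
print(policy[:10])
```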

[25:29] Now that’s nice and that’s history. Why is it important? Why am I showing you this now? Because if I want to see the impact of this, publishing papers isn’t enough – that doesn’t mean we’ve helped patients. We needed to take it into practice. This is when I became interested and active in the regulation of AI for healthcare and ultimately in policy and regulation. Because if you want to effect change, you need regulation to be proactive with the changes you have – not just reacting to what’s happening.

[26:06] We need to address the translation gap. One of the first things we did was make a quantitative assessment of how AI impacts the system. We took an intensive care ward in Imperial College St. Mary's Hospital and converted it into a fully sensorized space. We know how to do that – I showed you earlier how we did it a decade ago.

[26:28] What you’re seeing now is a clinician who came in, visited the intensive care ward, and interacted with a patient. We devised a protocol to measure how big the impact of AI was on clinical decisions. We show the clinician the patient’s information and ask, “What would you dose this patient?” Then we show what the AI would have done and ask, “Do you want to change your decision on the drug dosages?”

[26:56] If they don’t change at all, they trust themselves more than the AI. If they change a bit, they trust the AI a little. If they change a lot, they trust the AI a lot. We call this “trust shift” – a systematic way to measure and evaluate the impact of AI technologies in the clinical room.
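
One way to formalize "trust shift" is as the fraction of the distance a clinician moves their initial dose toward the AI's suggestion after seeing it. The normalization below is our assumption for illustration, not necessarily the study's published metric.

```python
def trust_shift(initial_dose, revised_dose, ai_dose):
    """0 = clinician ignored the AI entirely; 1 = adopted the AI's dose outright."""
    if ai_dose == initial_dose:
        return 0.0          # no disagreement, nothing to shift toward
    return (revised_dose - initial_dose) / (ai_dose - initial_dose)

print(trust_shift(initial_dose=10.0, revised_dose=10.0, ai_dose=20.0))  # 0.0
print(trust_shift(initial_dose=10.0, revised_dose=15.0, ai_dose=20.0))  # 0.5
print(trust_shift(initial_dose=10.0, revised_dose=20.0, ai_dose=20.0))  # 1.0
```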

[27:17] We also ask whether they would stop the AI system because it seems crazy or something like that. Now I’m going to show you the same video again, but with extra data superimposed. We did eye tracking on the clinicians, so you’re now literally seeing where they’re looking every second. The heat maps show where they pay more attention while ingesting information about the patient.

[27:44] And that's now the AI – essentially just the screen telling you, "This is what you should prescribe." These are four different ways of presenting the explanation. And here's where it gets interesting. The first finding was that 80% of the time, clinicians adjusted their decisions toward the AI. That was independent of whether they were skeptical of AI or very enthusiastic about it. We assessed that using sociological questionnaires, and it didn't matter. The second finding was that clinicians did not pay much attention to the AI's explanation, which was surprising because everyone claims explainability is crucial. They didn't care that much. In fact, we found cases where, during interviews afterward, clinicians said, "I think the explanation really convinced me." But when we checked the eye-tracking data, they never looked at the explanation.

[28:50] In humans, we call that confabulation: adjusting your conscious or subconscious narrative to match what you did. We learn this in college. But it’s interesting because people complain about hallucination in generative AI – and yet humans do something very similar. I’m here to advocate for the AI, of course, but it’s worth noting.

[29:12] Another interesting factor involves human-AI cooperation. The nurse you saw – actually, my PhD student, who is the clinician – would flip a coin. Whenever the doctor made a decision, and without the doctor knowing about the coin flip, he would simply ask, "Are you sure, doctor?" That's all he said. And in 20% of cases, the clinician switched their decision. This human-to-human nudge sometimes had a bigger impact than the AI recommendation. Which means we need to study not just the interactions between the AI and the human doctor, but also the human-to-human interactions that happen as part of the human-AI system.

[30:05] This is known in the clinical literature, but because I'm an engineer – a computer scientist – I actually have to measure it, so I can see how my AI system compares to that. So we've begun opening up a number of these questions. In a nutshell, the main conclusion for clinical decision support systems – and potentially for future fully autonomous medical treatment systems – is that clinicians want options of what to do. It's nice to have an explanation, but it's the options that they want. I compare this to a situation room: the president asks the generals, "What should we do?" and wants the options, not the backstory and history behind each one.

[30:56] And this has implications all the way back to the mathematics behind these models. If you need to provide options, not just a single best solution, you must redesign a number of components in the system.

[31:12] There’s another treatment‑related gap I foresee, and it has to do with how we train AI systems and let them into the real world. At the moment, if you’re a clinician, you go to medical school, pass exams, get board certified, and then you go into the real world – and you keep learning your whole life, becoming a better doctor through experience and interaction.

[31:37] That's great. But what happens when you have a software-as-a-medical-device that is an AI? The moment you're certified by the regulator, we freeze your brain and forbid you to learn anything new, because we don't want you to do anything beyond what you've been designed for. I get it. My pacemaker should not do a samba; it should pace at a steady rhythm.

[32:04] But AI is built to learn, and you want AI systems to learn from interactions with clinicians and patients. You may want them to learn about new diseases as they come up and adapt accordingly. You also want them to learn about the specifics and quirks of the deployed environment. Are you in a rural setting? A central city hospital? Which treatments are working better in which context?

[32:29] There are ways to think about that, to make it workable. For example, you could send information back about what the AI is doing to the manufacturer, who could then recertify or update the AI – but that requires careful handling of information governance. At least in the UK, these conversations are happening about enabling lifelong learning for AI systems.

[32:52] Still, the default answer globally is the equivalent of "freezing the brain" of junior doctors the moment they enter a hospital, so they can only ever practice what they knew when they received their degree. That's what we're currently doing with AI.

[33:05] With that, I've given you a number of flavors of how we can address the changes required to achieve the full potential. My hope is that at some point in the future, we will have a programmable, software-defined healthcare system – one where we can configure, deploy, and evaluate approaches and technologies, whether human, pharmaceutical, or other forms of intervention, in an integrated way, as simply as we configure systems today, and at the speed and scale needed.

[33:47] It’s also clear that this requires flexible, proactive data governance, not reactive, and a flexible data infrastructure that allows us to integrate it all. That’s why I’m grateful to speak with you about this.

[34:07] When thinking about how AI in healthcare is going, we need to think about partnerships and collaboration – not just across institutions but across countries. You may think the 65-70 million patient records in the NHS being integrated into a national data library is a large amount of data. It's not – compare it to generative AI in language, which became powerful only after absorbing all the English text on the internet, books, and more.

[34:45] We need to work together across boundaries and borders to bring data together. We need to think about making AI sustainable – not just because there's a tremendous cost in hardware. Certain companies bought two-thirds of the available GPUs last year to push their agenda forward. How can we make sure that other domains – especially healthcare – are served by appropriate AI technology that can transform data into a meaningful product and solution? We need to talk about energy and the cost of energy. I'm not too concerned about that, because if you think about the power we're unlocking with AI, it will compensate for the cost.

[35:20] But ultimately, we need to think about the sociology of the problem. And what I find – and I've been working in Switzerland, in Germany, in many different countries doing healthcare AI research – is that it's seldom the law that's the biggest hurdle. It's not the technology, and it's not the clinical side that is the limitation. It's the mentality of people – how they're approaching it. And so my hope is that, with the methods I showed you, in the near future we can really start thinking about holistic health.

[35:47] And I don’t mean it in an esoteric way. I mean integrating all forms of data that we’re already creating, rendering that useful, and using an AI system to help guide interventions – at the clinical level, influencing your behavior, or even your environment with ambient interventions. We can even think about ambient therapeutics in the future.

[36:19] So really, we want an AI that guides us to the summit of what AI can do. As part of that journey, we launched the Nightingale AI Initiative two months ago. The idea is to render the world’s healthcare data useful to build the world’s first true large health model. You’ve seen large language models in health – that’s a bit like an English major learning to read, then reading medical texts and reasoning from them. We want a system that reasons like a clinician.

[36:44] We aim to train it on the entire NHS healthcare data, all available scientific databases, and the literature. We've formed a large consortium of institutions across the UK and Europe to support us, and as of last month, we've onboarded two California institutions. We are working very hard toward that goal.

[37:04] So, to come to an end, what lessons can we learn? Very often, people talk about changing the system – planning how to change the system. In my experience, the normative power of creating a factual solution that does something – like we did in Wales, showing people, "This is the tool. This is what it tells you." – is far more powerful in effecting change than just talking about it.

[37:35] We need to think systematically about using machine learning to uncover things we can't even think of, because we're limited in our human ways. We shouldn't just think like librarians collecting data, but more like people building factories and tools powered by data – it's the application that matters.

[37:56] I hope I’ve convinced you that healthcare impact is not just an MD business, not just a medical business. We can use shopping data, for example, to bring things together. We need to think much wider. And thanks to AI, with the right ecosystem, we can bring this data together.

[38:13] And last, but not least, you need to consider people and change management from day one. And that's coming from a computer scientist who, in his youth, preferred spending time with computers over people. I think that's the most fundamental limitation we face when thinking about the future of AI.

[38:33] With that, I’ll leave you with a beautiful piece of art showing a volleyball player. Very often, in data-driven domains, we tidy everything up and have this nice collection of things, but the beauty of the whole is lost. My hope is that with AI and a powerful data ecosystem, we can put these individual bits back together into one whole, beautiful picture. Thank you.
