Join Don Woodlock and other InterSystems experts on a captivating journey through the future of healthcare AI. This keynote explores how trusted health data forms the essential foundation for all AI applications, leveraging an enterprise master person index (EMPI) for accuracy. Discover how actionable analytics power population health management and see cutting-edge demonstrations of generative and agentic AI streamlining everything from clinical workflows and prior authorizations to patient support.
Presented by:
- Don Woodlock, Head of Healthcare Solutions, InterSystems
with:
- Sean Kennedy, Head of Product Management, HealthShare, InterSystems
- Alex MacLeod, Director, Healthcare Solution Innovation
- Erica Song, Sales Engineer & James Derrickson, Senior Product Manager
- Julie Smith, Senior Manager, Product Management
- Kristen Nemes, Product Manager & Varun Saxena, Product Manager
- Dimitri Fane, Director, Product Management & Jonathan Teich, MD, Chief Medical Officer and Director of Clinical Innovation
- Judy Charamand, UX Designer
Video Transcript
Below is the full transcript of the READY2025 Healthcare Solutions Keynote featuring Don Woodlock and Sean Kennedy.
Healthcare Solutions: Vision and Roadmap
[0:00] Don: Hello, everyone. We in Healthcare Solutions – like many of you – get the pleasure of building on our awesome platform, and that’s what we’ll talk about today, which is what we’re doing for some of our solutions in healthcare.
[0:23] I represent – and the team will join me in a moment – our Healthcare Solutions product line, which is HealthShare. It’s a platform for clinical data integration and for financial and claims data operations, one that enables integration workflows across organizations, unifies the data, and more. We also have our international EHR business, TrakCare, and our new EHR, IntelliCare. We’ll be talking mostly about HealthShare today, and a little about IntelliCare as we go through this next hour and 15 minutes.
[01:00] I’m going to bring Sean Kennedy to the stage. Sean is new to InterSystems. He’ll be our protagonist in this story. He’s Head of HealthShare Product and joined us from Salesforce, where he was Head of Healthcare Solutions. Before that, he was with the Mass eHealth Collaborative, working on data exchange across a community. Prior to that at Mass General Brigham, he led an interesting partnership with the Boston Red Sox to support our veteran community, and before that, he was in the military delivering health IT solutions. Please welcome to the stage, Sean Kennedy.
[01:45]
Sean: Thank you very much for that, Don. Imagine yourself as a healthcare CIO. You’ve got disconnected systems, and you’re looking for trusted data to use. You’ve got all these innovative technologies like generative AI that you want to be able to use, but you’re not quite sure how. It’s almost like you’re caught up in a tornado.
[02:11] [Tornado images]
[02:52]
Sean: Where am I?
Witches: You’re at the gates of AI City. This is the home of the Great Wizard of AI. This is where AI has solved all of healthcare’s problems.
[03:03]
Sean: Oh wow, that’s fantastic. I’m a CIO at a hospital, and I’ve got so many clunky workflows and disconnected systems. I know AI can help. I just don’t know how.
Witches: Well, you’ve come to the right place.
Sean: Can I come in?
Witches: No. You’re not ready.
[03:23]
Sean: What are you talking about? Of course, I’m ready. There are “Ready” signs all over this place. That’s got to count for something!
Witches: Well, first you’ve got to learn about the foundations of a good data and AI strategy. That’s what’s most important. If you learn all that, and you beg us a little bit, then maybe you can come in.
Sean: Okay, I got it. I’m game. Where do I begin?
Witches: You’re going to start your learning in the forest, where you’ll hear all about the importance of trusted health data.
Sean: Trusted health data. Got it. I’m off.
[03:56]
Witches: Don’t you think it was a little mean to send our CIO on that journey?
Witches: No, we’re witches. We’re supposed to torture people, right?
Witches: So, if we’re witches, then should we cackle?
Witches: What an idea. You start.
Witches: [Cackle]
Witches: What was that? Okay, let me try.
Witches: [Cackle]
Witches: We’ve got to practice. Let’s go practice.
Witches: Embarrassing.
Trusted Health Data and AI
[04:32]
Lion: Hi there. I’m the lion of trusted health data.
Sean: Well, the witches sent me your way to learn about trusted health data, but I want to learn about AI. What does that have to do with AI anyway?
Lion: A lot, actually. How can you trust your AI if you can’t trust the data underlying it?
Sean: What do you mean?
Lion: Let’s go back to basics. In healthcare, it’s important to know which patient or member is which, right?
Sean: Of course. Fundamental.
[04:56]
Lion: Let’s say I have a patient, Margaret Hamilton, coming in for an appointment. I’m going to search for her, and we’ll see if you can tell me which record is hers. Oh no, I’ve got a couple of different records here. Looks like they’re all in Wichita. They could be different people. Let’s try clicking into them. Maybe the clinical data is the same. Okay, you’ve got some diagnoses, an allergy, and a couple of meds. Great, let’s go back to that list. Go to that second one over there. This one has lab results, but none of the diagnoses, meds, or allergies that we had in the other record. So, it’s entirely possible these belong to totally different individuals.
[06:03] All right, Sean, let’s go back to this list here. You tell me which record is our Margaret Hamilton.
Sean: Oh, I don’t know. I’m scared I’ll guess wrong.
Lion: That’s my point. If you trusted your data, you wouldn’t have to guess. You’d just know, and so would your downstream AI applications. I can’t tell you how many people have come to me and said that they’re bringing their data together when they’re not actually integrating it in any meaningful way. They’re just storing records side by side. If you’re going to have data from multiple sources, you need an enterprise master person index, or EMPI solution. If we’re going to understand what’s happening here, let’s take a look at our InterSystems EMPI.
[06:44] This is the worklist. Each of these entries represents a pair of records that may or may not belong to the same individual and need to be matched by hand. Look, Sean, here are all those Margaret Hamiltons we saw earlier. No wonder we had so many different search results.
Sean: Wow, that’s a lot of record pairs to match by hand.
Lion: Don’t worry. I’m not going to make you match them all by hand. With InterSystems EMPI, you can use our combination of probabilistic algorithms and machine learning, and, for records that don’t have enough data in your system to make a determination, you can leverage a curated external reference database of identities, addresses, and similar information. That’s called referential matching.
[07:22] Let’s apply some of those tools to our system here and see what happens. Back to that worklist. Let’s refresh it. All right, Sean. How’s that look?
Sean: Wow, that’s much better.
Lion: Let’s try searching for Margaret Hamilton again in our clinical viewer. Ah, there she is. Let’s click in. Look at all that data there.
[08:00] In case you missed it, let me explain what just happened. We had five different records for Margaret Hamilton – Marge, Margarita Hamilton. Using all of those tools in InterSystems EMPI, we were able to decide that they all belonged to the same individual, and we then brought all that data together in one unified record of care. That was done automatically. Now you don’t have to worry about long, scary worklists, and you can trust that all your data for Margaret Hamilton is right here when you need it.
[08:27] Sean, trusted data is the foundation. You can never use AI in healthcare if you can’t trust that your health data is complete and accurate. Imagine trying to tell AI to create a care plan for Margaret, and you have a record that doesn’t have her allergies, current meds, or any lab results. Or even just trying to predict if she’ll show up to this appointment, without a list of her other appointments and attendance history. I can’t think of a single application of AI that doesn’t involve good, clean, trusted data. An InterSystems EMPI gives you the full picture you need, so you can have confidence in the data you’re using and have the courage to take action.
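As an aside, the weighted matching the lion describes can be sketched in a few lines. This is a generic Fellegi–Sunter-style scorer with made-up field weights and thresholds – not InterSystems’ actual algorithm – and it omits the machine-learning and referential-matching layers entirely:

```python
import math

# Illustrative per-field probabilities (assumed values, not tuned parameters):
# (m, u) = P(fields agree | same person), P(fields agree | different people)
FIELD_WEIGHTS = {
    "last_name":  (0.95, 0.01),
    "first_name": (0.90, 0.05),
    "dob":        (0.97, 0.001),
    "zip":        (0.85, 0.10),
}

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Sum log2 likelihood ratios over the fields both records carry."""
    score = 0.0
    for field, (m, u) in FIELD_WEIGHTS.items():
        if field not in rec_a or field not in rec_b:
            continue  # missing data contributes no evidence either way
        if rec_a[field].strip().lower() == rec_b[field].strip().lower():
            score += math.log2(m / u)              # agreement adds weight
        else:
            score += math.log2((1 - m) / (1 - u))  # disagreement subtracts it
    return score

def classify(score: float, upper: float = 10.0, lower: float = 0.0) -> str:
    """Auto-link above `upper`, auto-separate below `lower`, else queue for review."""
    if score >= upper:
        return "match"
    if score <= lower:
        return "non-match"
    return "review"
```

Pairs that land in the middle band are what fill the worklist; the referential matching the lion mentions is one way to push pairs out of that band by adding external evidence.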
[09:20] Now, do you understand why the witches sent you here to me?
Sean: I totally get it now. Trusted health data is super foundational. But now that I have trusted health data, what can I do with it?
Lion: All kinds of things. I hear that scarecrow in the cornfield down there is doing some cool stuff with actionable analytics. Why don’t you go over there and see for yourself?
[09:38]
Sean: Okay, great. Thank you.
Lion: Thank you.
Actionable Analytics
[09:45]
Sean: Trusted health data – super foundational. Let’s see what we’ve got for analytics.
[09:50]
Sean: Hey there, Mr. Scarecrow. I just learned about trusted health data, and the lion said to come over to you to learn about actionable analytics.
Scarecrow: Welcome. You’re in the right cornfield. I’ve been working on something to utilize all this data in a meaningful way. We know we can trust our data, but we need to take the next step and make it actionable. I’d love to pick your brain on how it’s coming along.
[10:16] Recently, I learned some alarming statistics around hypertension, and that reminded me to get my blood pressure checked. At my doctor’s office, I picked up some literature on managing hypertension. Turns out, one in two adults in the US has hypertension, but only one in four are managing their condition effectively. At the same time, hypertension is the number one modifiable risk factor for things like heart disease, kidney failure, and stroke. It’s costly. It’s deadly. It’s very common. But more importantly, it’s fixable. And this isn’t just a clinical issue – it’s a population health opportunity worth solving. Effectively managing hypertension can yield three times the ROI in avoided hospitalizations and complications.
[11:04] Let’s take a look at what I’ve come up with. This dashboard pulls data from Health Insight, performing calculations and aggregations to derive meaningful insights. We now have information on medications, lab tests, visits, etc. – all in an easy-to-use, intuitive dashboard. What do you think about this, Sean?
[11:24]
Sean: I think it looks great, but what does it tell us?
Scarecrow: We can identify patterns, highlight gaps in care, and support targeted interventions that improve outcomes, not just for individuals, but across entire communities. Let’s dive in and start by looking at the distribution of patients by hypertension status. You can see that this data is across multiple facilities. This is fairly in line with expectations — nothing out of the ordinary. But let’s choose one particular facility that stood out, and not in a good way.
[12:00] This facility isn’t doing so great. I wonder what could be the reason behind such a high proportion of Stage 2 hypertension patients here. I think we should investigate further and look at the population characteristics for these patients.
[12:10] Here we have the demographic data for all patients associated with Oscare. Looking at their age distribution, you’ll see they serve an older population, and that might explain why hypertension is more prevalent. This area also has a higher-than-average poverty rate, which can impact lifestyle decisions around diet, exercise, and stress. All of these factors can impact hypertension. If you look at the map over there, you can see this is an urban area. They tend to have better healthcare access, but despite all of this, patient outcomes are not improving.
[13:03] Now, I know I’ve shared a lot of information so far. Let’s try to make this actionable. Here’s a list of all patients associated with Oscare. I feel confident this is the right group to target through patient outreach programs. Wouldn’t you agree?
[13:24]
Sean: Yeah, makes sense. But all those tornadoes can’t be good for their hypertension.
Scarecrow: Good point. Having all this data and being able to leverage it is truly putting your data to use. It’s a way to give your health system a brain.
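The facility-level view the scarecrow walks through boils down to a grouped aggregation. Here is a minimal sketch, using toy records in place of the Health Insight feed (the facility names and stage values are invented for illustration):

```python
from collections import Counter

# Toy records standing in for the Health Insight feed.
patients = [
    {"facility": "Oscare", "stage": "Stage 2"},
    {"facility": "Oscare", "stage": "Stage 2"},
    {"facility": "Oscare", "stage": "Normal"},
    {"facility": "Emerald General", "stage": "Stage 1"},
    {"facility": "Emerald General", "stage": "Normal"},
]

def stage2_rate_by_facility(records: list[dict]) -> dict[str, float]:
    """Share of each facility's patients at Stage 2 hypertension."""
    totals, stage2 = Counter(), Counter()
    for r in records:
        totals[r["facility"]] += 1
        if r["stage"] == "Stage 2":
            stage2[r["facility"]] += 1
    return {f: stage2[f] / totals[f] for f in totals}
```

An outlier in this per-facility rate is exactly the kind of signal that prompts the drill-down into demographics and social factors described above.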
Sean: This is great. I understand now how you can take action on data through dashboards. This is super helpful. But what else can I do with it?
Scarecrow: I heard there’s some cool AI stuff going on over at ISC University. Ask for the professor.
Sean: Perfect. Thank you.
[14:12]
Witches: I wonder how Sean’s doing on his journey.
Witches: I’m sure he’s having a grand time. Who doesn’t love meandering through the forest learning about data and AI?
Witches: Should we check the crystal ball to see how he’s doing?
Witches: I don’t have a crystal ball. Did you bring the crystal ball?
Witches: I don’t have a crystal ball. Oh, but I did get this snowglobe at the Orlando airport.
Witches: We’re poor man’s witches. All right, let’s give it a shake.
Witches: I think I see Sean in there. Or, is that a dolphin?
Witches: No, not Sean. I see his red shoes.
Witches: Looks like he’s learned about trusted health data and actionable analytics. That checks everything off for me. Should we let him into AI City now?
Witches: Still no. Aren’t you supposed to be the Wicked Witch? What happened to that?
Witches: I forgot.
Witches: And my popcorn’s not empty. Let’s have him learn more. Let’s teach him how to put AI into an application and how to use AI to connect a community.
Witches: Excellent plan.
Witches: Hey, how are you getting home?
Witches: I just got a new broom, but you know how it is – one minute you’re soaring through the sky and the next minute, bam, there’s a barn.
Witches: A broom? That’s so 80s of you. I want a giant pink bubble. I think that’d be cool.
Witches: That’d be good for you.
End To End Applications
[15:50]
Sean: Hi there. Are you the professor?
Professor: I am the owl professor.
Sean: Fantastic. I was told you could teach me about putting generative AI into practice.
Professor: No question, so let’s take a look. We’re in Margaret’s chart, as we’re all familiar with, and we have a great opportunity to review her chart and see all her data. But what if we could do more than look at it from a category-by-category perspective, and really jump into what AI has to offer?
[16:21] I’m moving here from the standard chart to the AI Assistant. In the AI Assistant, I see prompt options right away. I can also ask questions and get targeted answers. But let’s start at the beginning with medications. With medications, I get back a list similar to what the viewer shows today.
[16:55] I can see what medications she’s on, but I’m not getting a lot of value yet. It’s pretty simple. Let’s go a step further and look at labs. In the viewer, we know that labs appear most recent first and then work their way down. But I want the AI Assistant to do more. I ask the AI Assistant to group it by categories, and in the categories, I can see hematology and chemistry — the way I think about them clinically. That gets me further on this path, but it doesn’t go all the way.
[17:32] Let’s keep going. Let’s pick conditions. Conditions, clinically, are a combination of problems and diagnoses, so now we’re mixing data together. I’m also asking the AI Assistant to group them by categories – by body systems such as respiratory and cardiovascular – and I could keep scrolling. In addition, I’ve asked it to infuse more information by giving me SNOMED and ICD-10 codes, and you can see them listed.
[18:08] Now, that’s great, but what if this still isn’t giving me everything I want to know? We know Margaret has high blood pressure, right? She’s shown up, and we want to be able to act on that. Let me see what her most recent blood pressure is.
[18:31] Let’s see if it can extract that. Her most recent blood pressure is 150 over 90, so she’s still high-risk. I think I need to consider Margaret’s profile in a way that really assesses her full cardiac risk. Let me see if it can put that together for me.
[18:56] One of the great things about a live demo is that you get to see all my typos – and the AI Assistant can manage that. Here you can see it pulling together in the AI Assistant diagnoses, medications, lab results – the things relevant to a cardiac profile.
[19:17] When I think about who might use this, I don’t think it’s just for me. Do you think others could, too?
Sean: Oh, absolutely. My doctors would love this.
[19:30]
Professor: So let’s create a new prompt to make this available to all my users. To do that, I’ll go in as an administrator. Now I’m moving away from the end-user application and into the system settings themselves. I have a lot of controls here – deployments, prompts, roles, data. There’s a lot of configuration opportunity for defining what my end users actually have access to.
[19:56] Let’s start with roles, because it’s incredibly important that when we think about the capabilities of AI – that we know we can do it with the controls needed, applicable to the right users at the right time. Here, I can configure which roles see which data, what messaging they receive, what prompts they receive themselves, and what they have access to. And because this is built on top of HealthShare itself, it already has the consent rules and engine that allow for data control.
[20:33] Now that’s the role-configuration piece. If we dive further into data, you can see I can also control, down to the data-element level, what data is available to my users. This defines what I can feed into an LLM to return answers. This is one piece of the larger conversation around how data is processed as we move from our core products into an LLM and back, and about all the controls that shape the appropriate response.
[21:00] If I choose, for example, to include prescription status, I can simply save it, and from then on, that data element is available. But what I’m here for is prompt configuration – to create that cardiac-risk assessment in a way my users can access.
[21:28] I also want to know how often they access it, so I need a change log of every action taken in the system that I can review and visualize. When I do this, I can see all the entries I’ve made and all the data and responses that were returned. I can even drill down further and have a record of every single piece. If I want to analyze this, I simply generate an export file and run additional metrics.
[22:04] Now, let’s go ahead and create that prompt. When I go into Prompts, you can see there are already preconfigured available options – some of which I clicked earlier to show you what’s possible. But let’s create a net-new one. In this particular case, what I want to build is a cardiac-risk assessment. To do that, I’ll simply type it in.
[22:40] To prevent you from having to watch me type, I’ll copy and paste the rest. Here, I can actually define it further. In addition to simply naming it for what my users will see, I can also configure date-range lookbacks, and I have controls to define what data is fed in. For speed today, I’ll just select everything by default. But if I wanted to include only subsets – maybe problems, diagnoses, medications, labs – and leave out others to account for better performance and token utilization, I have the ability to do so.
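Conceptually, a prompt definition like the one being built here bundles a display name, the prompt text, a lookback window, the data categories to feed in, and the roles allowed to use it. The shape below is a hypothetical illustration of that idea, not HealthShare’s actual configuration schema:

```python
# Hypothetical prompt-configuration record; field names are illustrative,
# not HealthShare's actual schema.
cardiac_risk_prompt = {
    "name": "Cardiac Risk Assessment",
    "prompt": ("Assess this patient's cardiac risk using common screening "
               "tools, pertinent labs, and pertinent medications."),
    "lookback_days": 365,
    # Subsetting categories trades completeness for performance and tokens.
    "data_categories": ["problems", "diagnoses", "medications", "labs"],
    "roles": ["primary-care-provider"],
}

def visible_to(config: dict, role: str) -> bool:
    """Role gating: only listed roles see the prompt in their assistant."""
    return role in config["roles"]
```

Restricting `data_categories` is the lever mentioned above for balancing answer quality against performance and token utilization.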
[23:29] It’s also important to say who can access this. Front-office staff don’t need a cardiac-risk assessment. I’m thinking about offering this to my PCPs – my primary care providers – so that’s what I’ll do. How long do you think that took me, Sean?
Sean: Oh, not much time.
[23:40]
Professor: Exactly. With no coding – a low-code, really no-code, experience – just clinical expertise, I can create this prompt and make it available to my users. Let’s see how well it did. As I pull it up, you can see it appears immediately. Let’s see how well it returns information. What I fed into the prompt was guidance around some of the common screening tools for cardiac risk, as well as guidance around which labs and medications are pertinent, in order for it to give me a more robust response.
[24:27] Now this, I think, is incredibly helpful and gives a broader, more comprehensive answer than when I just free-text it in – which speaks to the prompt engineering needed to really get a comprehensive answer. This is really helpful for my primary care providers, but I bet it could be useful for other specialties as well.
Sean: Absolutely. My docs would love this.
[24:46]
Professor: And when they love it, they might want it configured by specialty. You can create something that says, “This is for my pediatricians, this is for my geriatricians,” or any number of people, and effectively create your own viewer. I think that’s a really great opportunity.
[25:04] But when I looked at Margaret’s assessments, I have to say she has cardiac risk, and clinically I would consider further testing. In fact, I might consider a stress test. Stress tests, you know, Sean, I’ve got to tell you, are a challenge. I don’t know if you want to go through all that pain, but before you consider it, what do you think of the prompt builder?
Sean: I think it’s incredible. Our clinicians would love those pre-built prompts and the prompt builder. Unbelievable. We could use this today.
[25:37]
Professor: Excellent. I’m so glad to hear you say that. But I have to wish you well on your journey for prior authorizations, because I really think that stress test is necessary, and I think Jim might have some ideas over there in the forest.
Sean: Wow, fantastic. Thank you. This was great.
Connected Ecosystem
[26:02]
Sean: Hey there. Are you Jim?
Jim: Yes, I am.
Sean: Oh, great – nice tie.
Jim: Yeah, it’s Tin Man silver.
Sean: Very appropriate for the forest. Jim, I was told you have a cool generative AI tool that’s going to save me a ton of time.
Jim: Indeed, I do. In the last few days, we talked a lot about prior authorizations being a bottleneck in healthcare. Much like the Tin Man, prior authorizations have lacked heart, preventing payers and providers from truly connecting.
[26:47] With generative AI, InterSystems is helping accelerate connectivity between payers and providers. The prior-authorization process today starts like this: it’s form-driven, frequently done on paper, and a PDF is faxed and sent to a provider. This PDF is disconnected from the EHR and requires the provider to do duplicate work, even though all the information is already in the EHR.
[27:12] Using generative AI, we’re able to change that. We built an AI questionnaire generator that can take a PDF or other form – scanned or HTML – ingest it, identify intent and structure, and create a FHIR-compliant questionnaire that is machine-readable, fully structured, and in a digitized format that can be exchanged between payers and providers, making their lives far easier.
[27:35] If we look at the FHIR JSON created using GenAI, we can see how it aligns with the PDF form. Focusing on one section – the chest-pain or suspected angina checklist – we can see now it’s been translated into codified values with descriptions.
[27:54] Having the FHIR questionnaire in a structured-data format means it can be launched from the EHR using a SMART on FHIR app, querying all that information using FHIR – accelerating the prior authorization process. Not only that, we can take this form within our questionnaire builder, verify it, and edit it to make it more usable. For instance, we can take a drop-down item – NPI – and change it to myocardial perfusion imaging, making it more user-friendly. Once verified, we can exchange it using our electronic FHIR authorization solution, and it can then be launched from the EHR as a SMART on FHIR app, pulling forward all of Margaret’s information from the EHR – her registration, coverage, prior exams, and pre-existing conditions – into the prior authorization form. This prevents errors, reduces work for providers, and streamlines the submission process.
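For readers unfamiliar with the format Jim describes, a FHIR R4 Questionnaire is structured JSON. The sketch below shows the general shape a generated questionnaire section might take; the linkIds, wording, and checklist entries are illustrative, not the demo’s actual output:

```python
import json

def checklist_item(link_id: str, text: str) -> dict:
    """One boolean checklist entry as a FHIR R4 Questionnaire item."""
    return {"linkId": link_id, "text": text, "type": "boolean"}

# Hypothetical reconstruction of the chest-pain checklist section.
questionnaire = {
    "resourceType": "Questionnaire",
    "status": "draft",
    "title": "Prior Authorization: Myocardial Perfusion Imaging",
    "item": [
        {
            "linkId": "chest-pain",
            "text": "Chest pain / suspected angina checklist",
            "type": "group",
            "item": [
                checklist_item("chest-pain.1", "Chest pain at rest"),
                checklist_item("chest-pain.2", "Pain radiating to arm or jaw"),
                checklist_item("chest-pain.3", "Relieved by nitroglycerin"),
            ],
        }
    ],
}

print(json.dumps(questionnaire, indent=2))
```

Because this is machine-readable structure rather than a faxed PDF, a SMART on FHIR app can pre-populate the answers from the EHR, which is the workflow described above.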
[29:03] InterSystems electronic prior authorization is better connecting providers to payers, making them compliant with regulations, and making them ready for automation. We’re making the prior authorization process faster, smarter, better connected, and far less painful for everyone.
[29:21]
Sean: Wow. That’s just incredible. You’re connecting the payer and provider ecosystem with a headless workflow and using generative AI to generate and populate a form. My prior authorization folks are going to love this.
Jim: Yes, you could say it’s almost wizardry.
[29:46]
Sean: Well, thank you. That was great.
[30:00]
Sean: Well, I’m back. That was quite the insightful journey you sent me on. I learned a lot.
Witches: What did you learn?
Sean: Oh my goodness, I learned so much. It starts with making sure you’ve got trusted data, and that starts with EMPI and referential matching, where duplicate records are resolved and just fall off your worklist, letting the patient profile grow. Then there’s machine learning, which can further reduce your worklist. Most importantly, I learned that once you trust your data, you have the courage to take action on it in downstream apps, analytics, and AI – and with InterSystems EMPI, you can be confident in your data.
[30:43] I also learned how to take data and make it actionable through dashboards. In healthcare, we have so much data – multiple patients, members, providers – and we can bring it all together through a dashboard pulling from Health Insight. That’s your brain, where you can analyze and find meaningful insights in the data.
[31:09] Then I learned about generative AI, how you can use it with prompts to give voice to your data. Unbelievable. This is powered by simple-to-complex workflows with pre-built prompts that deliver summaries of patients. And then, I learned about prompt builder. You pick the data you want to ask questions about, and it returns natural-language responses in the context of the patient.
[31:43] And then lastly – and I don’t know if it’s most important, but another one – I learned how we can connect the payer and provider ecosystems around electronic prior authorizations in an API-based, headless workflow. Incredible. And you can take that further using generative AI again to take a paper prior authorization, turn it into an electronic prior authorization in questionnaire format, and workflow it to completion between payers and providers until it is done. Unbelievable. I have learned so much on my journey. So I now humbly request entry into AI City to meet this wizard.
[32:32]
Witches: Fine.
[32:48]
Wizard: I am the almighty wizard of AI. You have entered the great green palace of AI City. Who goes there?
Sean: Is that the wizard?
Witches: I think that’s just Don with a new fancy microphone. This wannabe wizard needs to be cut short.
Wizard: This is the great palace. Only folks who understand how the foundations of data and AI work together may enter.
[33:20]
Witches: Hey, we see you, you know.
Wizard: Oh no. Hey, Sean.
Sean: Hey. So you’re the wizard? I thought you might be taller.
Wizard: I am indeed. I’m 5'8 and ¾. I think that’s fine.
Sean: Oh my goodness.
Wizard: I am indeed the wizard. Yes.
Sean: I’ve heard you’re super powerful and super intelligent.
Wizard: I am super intelligent. I almost passed InterSystems’ hiring test.
Witches: Don, I think you got like a four.
Wizard: I got four and a half. But I am super intelligent and powerful only because of what’s behind the curtain. I leverage AI agents. They help me run AI City: schedule the flying monkeys, maintain my hot-air balloon, plan celebrations, that kind of thing.
[34:17]
Sean: Well, that’s great. But I’m just a CIO at a hospital. I want to use AI agents to superpower my organization and have my users be aided by intelligence.
Wizard: You can actually do that in healthcare. Let me explain what agentic AI is briefly, and then we’ll show you the impact it can have in healthcare.
[34:38] We’ve all had a good run with ChatGPT the last few years. And GenAI – for all the wonderful things it’s done – has basically been focused on writing. You ask it a question, it writes a response. You give it an audio file, it might write a summary of what it’s hearing. It’s been writing. The idea behind agentic AI is that we add two verbs to what we ask an LLM to do.
[35:03] The first is to call tools. An LLM can directly schedule an appointment, book a bed, send an email, call an API, or do something specific — get something done. LLMs can start to do things, not just write.
[35:26] And the second added verb – the third overall, in a sense – is that they can plan. They can plan a whole multi-step workflow: given this situation or request, here are the four things needed, and depending on what happens at step three, here’s what comes next. Agentic AI is really taking LLMs from simply writing to being able to do more impactful things – and it will make a difference in healthcare as well.
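Stripped of the LLM itself, the "plan, then call tools" pattern described here can be sketched as a minimal control loop. The planner below is a canned stand-in for the model, and the tool names and arguments are invented for illustration:

```python
# Stub tools the agent can call; in a real system these would hit
# scheduling and messaging APIs.
def schedule_appointment(patient: str, slot: str) -> str:
    return f"Appointment booked for {patient} at {slot}"

def send_email(to: str, body: str) -> str:
    return f"Email sent to {to}"

TOOLS = {"schedule_appointment": schedule_appointment, "send_email": send_email}

def plan(request: str) -> list[tuple[str, dict]]:
    """Stand-in for the LLM's planning step: turn a request into tool calls."""
    return [
        ("schedule_appointment",
         {"patient": "Margaret Hamilton", "slot": "Tue 9:00"}),
        ("send_email",
         {"to": "margaret@example.com", "body": "See you Tuesday."}),
    ]

def run_agent(request: str) -> list[str]:
    """Execute the plan step by step, collecting each tool's result."""
    results = []
    for tool_name, args in plan(request):
        results.append(TOOLS[tool_name](**args))
    return results
```

A production agent would re-plan between steps based on each tool's result (and keep a human in the loop); the point here is only the shift from writing text to executing actions.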
[35:52] To give you an example of that, I’m going to bring to the stage my friend Dimitri Fane, and he’ll show us a demo of what it looks like in healthcare.
Sean: Great. Thank you.
[36:03]
Dimitri: Thanks, Don. Hi, Sean.
Sean: Hi there.
Dimitri: So, Don’s given us a great explanation of agentic AI. Here in the EHR business, we’ve been experimenting with agentic AI combined with real-time voice analysis to see if we can build a truly transformational user experience in the EHR. What I thought we’d do today is take you through a demo of a prototype system of what our EHR of the future could look like, and then I’ll come back and explain a little bit about what we’ve seen.
[36:34] Our patient Margaret Hamilton has arrived at the emergency department. She’s unfortunately a bit sick and in respiratory distress. Luckily, we’ve got a very good doctor with us – our own chief medical officer, Dr. Jonathan Teich. I’ll bring Jonathan to the stage now. Jonathan is – in addition to being our CMO – also an emergency doctor in real life. So Jonathan, I’ve loaded up the EHR, brought Margaret’s record up, and you’re welcome to take a look. Just tell me what you need me to do.
[37:18]
Jonathan: Sure. Thanks, Dimitri. You know, this patient looks pretty sick. I think I need to go see her right away. The problem is that I don’t know her. She just came in; I’ve not seen her before. I can review her record, but her record’s full of data. You’ve seen all the data that’s accumulated there. I can review, but even in the best of EHRs, it’ll take me five or six minutes to go through everything I need to know.
[37:31] But what I do have is an AI assistant and a really great prompt. That prompt is able to suss out what her chief complaint is and filter everything else out so I get just the information that’s relevant to what I need for this patient. So in 10 seconds instead of five or six minutes, I have what I need.
[37:43] So let’s get clinical here. Her heart rate is up. Her respiratory rate is up. This doesn’t sound very good. She was seen in the hospital about six months ago for the same thing. I’ve got her medications – even a bit of extra assessment. This is pretty much everything I needed to know out of that whole chart, and the AI did that for me. So let’s go see her. I think it’s time to examine her. Sean, could you bring our patient over?
[38:25]
Patient: Hi, Sean.
Sean: Hi there.
[38:31]
Jonathan: Hi. I’m sorry you’re not feeling well. Let’s talk a little bit. I’ll ask a couple of questions, do a quick exam, and then we’ll start treating you right away. So I see you’re having an asthma attack. When did that begin?
Patient: It started about two days ago, and at first, my inhaler helped, but now it’s definitely not enough.
Jonathan: I see. Have you had any fevers with this or any coughing?
Patient: I haven’t really taken my temperature, but I’ve been coughing a lot.
Jonathan: Does any junk come up when you cough? Anything greenish or brownish?
Patient: Yeah, it’s kind of rusty-colored, I would say.
Jonathan: Any chest pain with all of this?
Patient: Only when I cough.
[39:13]
Jonathan: According to your record, you’re on a couple of medications. You’re on Singulair and you’re on a salbutamol inhaler. Is that right?
Patient: Yes, that sounds right.
Jonathan: And you’re still smoking. Is that correct?
Patient: Yes. Unfortunately, I do smoke about a pack a day, and I know it’s really bad for asthma. I’m really trying to quit.
Jonathan: It’ll make a big difference if you can quit once and for all. Now, I understand you took a flight from Boston to Florida recently. Did you have any leg pains or anything else on the flight?
Patient: It was a long flight, but no leg pain. I honestly felt okay until I got here with the crazy humidity.
[39:49] Jonathan: Let me listen to your lungs. I’m hearing wheezing all over your lungs, which is typical for a severe asthma attack, and reduced sounds in the left lower lobe, which makes me concerned about a possible infection. I’m going to order a chest X-ray to see if you have an infection and start treatment. We’ll begin with a Combivent nebulizer, then a salbutamol nebulizer, 2.5 mg every 20 minutes for an hour. Since you’re having pain when you cough, I’ll also order ibuprofen, 600 mg. Hopefully, that should help several of your different symptoms.
Patient: Yes, thank you, doctor.
[40:41] Jonathan: Let’s get your X-ray now.
[40:46]
Dimitri: Let me explain a little bit about what we’ve just seen. This is truly agentic AI in action in real time, operating as they’re speaking. If you look on the right-hand side of the screen, that’s not a screen we typically show to end users, but because this is a technical audience, we thought we’d share the agentic dashboard to give you a sense of what’s going on behind the scenes.
[41:04] You can see a number of highly specialized agents. For example, the line-annotation agent is looking for clinical concepts in the text stream as it comes through. You can see those highlighted and color-coded. There’s also an actions agent that takes those clinical concepts, matches them in the database, and tees them up for Jonathan to execute, with a human staying in control of the record. This is agentic AI. We then thought: what if we could take this one step further? What if we could humanize the agents in a way that makes the interaction even better?
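The two agents Dimitri points out on the dashboard can be sketched as a minimal pipeline. The lexicon, catalog, and function names below are hypothetical stand-ins; the point is the shape: one agent extracts clinical concepts from the text stream, a second matches them to orderable items and queues them, unexecuted, for clinician approval:

```python
# Hypothetical concept lexicon and order catalog for illustration only.
CONCEPT_LEXICON = {"combivent": "medication", "salbutamol": "medication",
                   "chest x-ray": "imaging", "ibuprofen": "medication"}

ORDER_CATALOG = {"salbutamol": "ORD-SALB-NEB", "chest x-ray": "ORD-CXR",
                 "ibuprofen": "ORD-IBU-600"}

def annotation_agent(utterance: str):
    """Find known clinical concepts in a stream of transcribed speech."""
    text = utterance.lower()
    return [concept for concept in CONCEPT_LEXICON if concept in text]

def actions_agent(concepts, pending: list):
    """Match concepts to catalog orders and queue them unapproved:
    nothing executes without explicit clinician sign-off."""
    for concept in concepts:
        if concept in ORDER_CATALOG:
            pending.append({"order": ORDER_CATALOG[concept],
                            "approved": False})
    return pending

pending = actions_agent(
    annotation_agent("I'm going to order a chest X-ray and start salbutamol"),
    [])
```

A real system would use a clinical NLP model rather than substring matching, but the human-approval gate at the end is the part the demo emphasizes.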
[41:47] Jonathan, this is a pretty complex case. Do you think you could use a little help with it?
Jonathan: Yeah, she’s not getting better, so I think we need some help. Maybe one of our AIs can help with that as well.
Dimitri: Let me introduce you to our AI agent avatars. You can go ahead and ask them anything.
[42:07]
Jonathan: Hey, AI medical expert. I’ve been treating this patient with nebulizers, but she’s not getting any better. Is there something else we can do?
AI Medical Expert: Let me check the patient’s file to see what else is needed. You’re likely to need more than just salbutamol. Here are the ACP asthma guidelines recommending magnesium sulfate and methylprednisolone. Do you approve?
Jonathan: Yeah, absolutely. Let’s give her both of those. That should help.
[42:44]
AI Medical Expert: On screen, you can see the new results that came in a minute ago, showing pneumonia in the left lower lobe. Given these findings, along with fever and low blood pressure, sepsis is likely. IDSA guidelines recommend starting vancomycin and cefepime. Do you approve?
Jonathan: Yes, let's give her both of those. You know, I hadn't been considering sepsis, but that's a great idea. Let's add both those antibiotics.
[43:18]
AI Medical Expert: Lastly, I see there’s a drug alert. Let me transfer you now to the pharmacy agent.
Pharmacy Agent: I noticed ibuprofen was ordered for her pain, but she has an allergy to it. I’ve recommended substituting paracetamol 500 mg instead. Do you approve?
Jonathan: Oh, yes, very much. Let’s do that. Sorry, I didn’t have a chance to check the allergies, but that’s much better.
[43:45]
Pharmacy Agent: All medication orders are complete. Do you need anything else?
Jonathan: Yes. This patient needs to be admitted to medicine. Can you handle that as well?
Admissions Agent: I see you’d like to admit her. Let me check availability and prepare the admission plan.
[44:14]
Admissions Agent: I’ve arranged everything: booked a bed in general medicine, created the admission request, and prepared notifications for the ward. Do you confirm?
Jonathan: Yes, absolutely. That’s amazing. This is like 10 minutes of work you’ve just taken from me. That’s wonderful.
[44:36]
Documentation Agent: Since you’re admitting her, I’ll create the documents you usually prefer for these cases. Please give me a moment.
[44:45]
Documentation Agent: The documentation is ready. Would you like to sign the visit now?
Jonathan: Sure. Those notes look good. I’ll have a chance to review them also. Thanks for doing all of that for me as well.
Documentation Agent: I’ve completed and signed off on the visit documentation.
[45:02]
Dimitri: Let me jump in one more time. What you’re seeing here are AI-generated video avatars connected to our agents. The avatars themselves are not the agents; they’re humanoid representations doing real-time speech-to-text and text-to-speech, connected behind the scenes to the AI agents. You’ve seen specialized agents for both clinical and administrative purposes. And I think a great example, Sean, is the admissions agent, which had to carry out a complex multi-step workflow: find a free bed, call an API to assign the patient, notify the ward, send a message, and get everything ready. That would have saved you an enormous amount of time.
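The avatar plumbing Dimitri describes (speech-to-text into an agent, the agent’s reply out through text-to-speech) can be sketched as a single conversational turn. The STT/TTS functions below are toy stand-ins for real streaming services, and the one-rule pharmacy logic is a hypothetical example, not the actual agent:

```python
def speech_to_text(audio: bytes) -> str:
    # Stand-in for a real streaming speech-recognition service.
    return audio.decode("utf-8")

def text_to_speech(reply: str) -> bytes:
    # Stand-in for a real text-to-speech service driving the avatar.
    return reply.encode("utf-8")

def pharmacy_agent(utterance: str) -> str:
    # Toy single-rule logic standing in for the real drug-alert check.
    if "ibuprofen" in utterance.lower():
        return "Allergy on file for ibuprofen. Substitute paracetamol 500 mg?"
    return "No drug alerts."

def avatar_turn(audio_in: bytes) -> bytes:
    """One conversational turn: hear, think, speak."""
    return text_to_speech(pharmacy_agent(speech_to_text(audio_in)))

spoken = avatar_turn(b"Order ibuprofen 600 mg for pain")
```

Swapping the avatar for plain text, as Dimitri suggests later, just means replacing the text-to-speech stage with a chat window; the agent layer is unchanged.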
[45:28]
Sean: It’s wild. It knows the entire sequence of an emergency visit and the tasks involved. Just having these specialized agents is great.
[45:46]
Dimitri: When we implement this for real, I’m not sure whether we’ll use the video avatars. You can turn them off at the top if you like. You might choose text only. We’d like to give users some choice. And you and I both found them quite creepy at first, right? But having worked with them preparing for today, they grow on you, don’t they?
Jonathan: They do. I don’t know if I’d stick with the video, but honestly, having someone speak to you is faster and more efficient — even faster than a great chat window. It’s like having a consultant over your shoulder in a real ER saying, “Don’t do this — do that instead.” Having that kind of normal conversation was actually pretty good.
Dimitri: Multiple modes of communication tend to wake you up and drive your thought process. It’s just one more option, and we think it’s quite fascinating. Sean, what do you think?
[46:55]
Sean: Oh, I think it's remarkable. And I tell you, I think a lot of our staff would really appreciate the video avatar. To your point, doctor, it would really humanize things. And some people may be turned off by it and choose to use the regular agentic AI instead. Either way, it's saving me time.
Dimitri: Do you think your users in the hospital would like the time savings? Jonathan mentioned this saved him at least 10–15 minutes.
[47:11]
Sean: Yes. Everyone wants time back.
Dimitri: Well, certainly here in AI City, we all want time back. Thank you very much.
[47:26] We have one last avatar to show you before I leave. And it's one that I actually think might be the first one that gets used. We’ve also generated a patient support avatar. So, Margaret here has been admitted to the hospital with pneumonia. She's not feeling comfortable. Margaret, we've programmed this agent to understand the demographics in our system. It also has full access to your clinical record and everything going on with you. You can speak to her naturally, however you like, and ask her any questions.
[48:01]
Margaret: Is she programmed to look like me? Is that part of the service?
[48:05]
Dimitri: I don’t really see that, but she does look a lot like my colleague Judy. It’s funny. Anyway, go ahead and ask her anything you like.
Margaret: [French]
Patient Support Avatar: [French]
Margaret: Oh, I'm so sorry. I was speaking in my native French. Do you mind repeating that in English for everyone who might not speak French?
[48:34]
Patient Support Avatar: You’re receiving nebulizer treatments every 20 minutes to help your breathing. This treatment uses a machine to turn your asthma medication into a mist, making it easier to inhale deeply into your lungs. This can quickly open your airways and relieve your symptoms. Would you like me to explain your treatments further, or do you have another question?
Margaret: What nebulizer am I taking?
Patient Support Avatar: You're currently receiving salbutamol through the nebulizer. Salbutamol is a medication that helps open your airways, making it easier to breathe during an asthma attack. It works by relaxing the muscles around your airways so air can flow more freely. Would you like to know more about this treatment, or do you have another question?
[49:25]
Margaret: Yes, actually. Does it have any side effects?
[49:31]
Patient Support Avatar: It’s normal to feel worried. I’m here to support you. Salbutamol can sometimes cause mild side effects. The most common ones include shakiness or tremors in your hands, feeling your heartbeat faster or stronger, palpitations, headache, mild nervousness, or restlessness. These effects usually go away soon after the treatment. If you notice anything unusual or feel worse, let your care team know right away. Would you like to know more about your treatments, or do you have another question?
Margaret: No, thank you. I’m all set.
[50:14]
Dimitri: Well, Sean, I hope you enjoyed our tour of the ER of the future. Thank you very much.
[50:29]
Wizard: All right, so what did you think of this journey today?
Sean: Well, I tell you, that was wonderful. I learned so much about how to build a comprehensive data and AI strategy, and we could totally use it to transform our hospital at home.
[50:40]
Wizard: That’s great. That was the idea. What you learned, what we tried to teach you, aligns with our roadmap. It all starts with a connected ecosystem on the bottom. You need to take your community and your data silos and pull them together, building on trusted health data. You can’t just colocate the data. You have to bring it together – normalize it, match patients, and that sort of thing.
[50:59] Then you put it to use. Analytics is a good use case – like we saw with hypertension. That’s a good example where population health activities can make a positive difference in a community. And then we saw the AI assistant digging through a big chart, building your own viewing experience, tailoring it in a low-code or no-code manner – your own way to interact with your data. And finally, we gave you a little picture of the future: what AI agents could be all about. Taking on bigger tasks. Not just “write me this,” but get this patient admitted, or do the follow-up for this appointment, or complete all the documentation.
[51:37]
Wizard: That’s the idea behind AI agents: taking GenAI up to that next level. So, I hope that was all educational for you.
Sean: It was. And I tell you – I feel like I’m ready to put this to use.
Wizard: You feel like you’re ready now, huh?
Sean: I do.
[52:07]
Wizard: Well, I have a little secret for you. Since your hospital had already been using InterSystems technologies, you were already ready. We just wanted you to learn that for yourself. Plus, the Good Witch here thought it’d be a good idea to send you on an hour-long journey.
Sean: Oh. Thanks.
Witches: You’re welcome.
[52:24]
Sean: But I think I’m ready to get back to Kansas now and put this to use.
Presenter: Ah, Kansas. Very nice place. I’ve seen it in the movies.
Sean: Yes, indeed. That’s where my hospital is, and I want to take this back and help them transform. So how do we get out of here anyway?
[52:35]
Witches: Oh, we can take my broom. I’ve gotten so much better at flying it. I haven’t had an accident in hours.
Sean: I think I’ll pass on that one.
Witches: We can use my brand-new pink bubble. I parked it over there.
Sean: Fantastic. Let’s do that.
Wizard: Very good. Bye.
[52:54] The end. I hope you enjoyed that journey through our vision and roadmap for healthcare solutions.