Dive into the next five years of health technology with our expert panel from READY2025. This discussion explores the transformative role of AI in healthcare, from enhancing patient engagement and streamlining administrative tasks to potentially reshaping medical education and insurance models. Discover how AI can reduce provider burnout, create financial incentives for healthier behaviors, and fundamentally change how patients interact with their care. The panelists also tackle the challenges of AI adoption and share their "big ideas" for crucial AI applications.
Panelists:
- Don Woodlock, Head of Healthcare Solutions, InterSystems
- Scott Gnau, Head of Data Platforms, InterSystems
- Timothy Ferris, MD, Healthcare Practice President, Red Cell Partners
- Jennifer Eaton, Research Director, IDC
Moderator:
- Christina Farr, Managing Director at Manatt, Editor-in-Chief of Second Opinion and GP at Scrub Capital
Video Transcript
Below is the full transcript of the READY2025 Healthcare Solutions Keynote featuring a panel discussion with Don Woodlock, Scott Gnau, Timothy Ferris, and Jennifer Eaton, moderated by Christina Farr.
[0:24]
Christina: I am so excited about this panel because we’re going to be talking about the next five years in health technology. When we talk about AI, people have said – and I think it’s right – that what we’re going to see in the next five years is incredible progress, but probably the biggest gains will come in the next 10, 15, even 20 years. But we’re still bracing ourselves for some big changes ahead.
[0:50] To that point, I’d love to kick off a question to the panel, and we’ll start with Jennifer on my right, about what you think we’ll see in the next five years that could actually feasibly happen in five – not 10, not 15 – that will have a very real impact on the system, including providers and patients.
[1:14]
Jennifer: Within my crystal ball, based on some survey data, I have a strong feeling that we’re going to see true engagement with AI, particularly around patient engagement and enablement. We see that time and time again as one of the top use cases in our survey data. That is very timely. We heard this morning a lot about data overload – what are we going to do with all this data? How are we going to empower patients and providers to glean insights that can produce higher-quality care and a more satisfied, engaged, and empowered patient?
We’re seeing use cases everywhere – from AI bots and symptom checkers related to call centers to hyper-personalized care plans driven by care coordination and care managers. I think we’re going to lean heavily into that, knowing it can be a long-term driver – hopefully, if done correctly – of cost savings and improved overall health for patient populations.
Here in the US, the aging population is growing, so there’s a big emphasis on aging in place and care at home. Technologies outside of AI – such as hybrid care, remote health monitoring, and care-anywhere initiatives – will also help address access issues while keeping individuals engaged with their healthcare provider.
[3:17]
Christina: I want to push back a bit on something you said about call centers and patient engagement. Whenever I call customer service, the first thing I do is press zero to reach a human being quickly. Sometimes it feels maddening to navigate the system – constantly repeating “representative, representative.” Do you think the AI in the next five years will be good enough that people won’t feel the need to get to a human immediately?
[3:56]
Jennifer: I don’t think we’ll be replacing these individuals in five years. I think what we’re going to be doing is augmenting workflows and empowering the people answering phones with patient information. No longer will it be a cold call with no context about Ms. Jones. A call center representative may have an AI-generated summary that helps them understand, for example, that Ms. Jones hasn’t had her annual wellness visit yet – and they can help schedule it. This approach addresses needs proactively while maintaining the human touch and empathy.
[4:46]
Christina: Hopefully, there will still be a representative when you press zero. Tim, what about you? Any five-year predictions that could change very soon?
[5:05]
Tim: I think with very high confidence that interactions with the healthcare system – the purely transactional part – will be almost entirely converted to AI. There are so many simple things that we do now. Care management and call centers – that is not a simple thing; that is quite complex. I think it will take a while longer before we have seamless AI bots with which you can have a complex conversation, but we have to remember that 30-40% of healthcare transactions are just very task-oriented: you have a task to perform and just want to get it done. OpenTable, scheduling – all of that has been coming for a while, and now it’s just accelerating.
The experience now for people with their doctor is the doctor sitting, typing, and then there’s the ambient – the computer is listening. That is fantastic and transformational, as the wonderful video we saw showed – less pajama time. But that system is not actually at the core of what doctors do, which is to diagnose and treat. Aldo’s presentation started to get into that a little, in very narrow use cases.
I’m absolutely convinced that within five years, when you go to the doctor, there will be a third cognitive entity in the room with you. A lot of the conversation will be: “What does that intelligent AI say?” Importantly, it won’t be a conversation between you and the intelligent AI as you were talking about. It will be a conversation where there is an agent making recommendations that will enter the conversation between the physician or nurse and the patient. I am quite sure that when you go to a doctor in five years, that will be occurring.
[7:44]
Christina: Do you think that on either side – the patient side or the provider side – we will see some period of resistance?
Tim: Oh, absolutely.
Christina: Especially on the provider side. Not every provider, not every doctor graduated at the top of their medical school class and is the absolute best in the field. But it’s hard to say that to somebody, and yet we as patients want evenness in the experience we get, and we want the best doctor. There’s ego in that. So how do you then say, “You’re going to have to work with this AI input”? That’s essentially what you mean. And then how does a patient start adapting to seeing that the diagnosis and treatment recommendation is a product of these dual inputs?
[8:28]
Tim: Yeah. So, actually, fantastic – if you’re interested, this past Wednesday there was a real shot across the bow published in the New England Journal of Medicine on the downsides of AI for doctors, walking through the argument in much more detail.
It was legal – about employment law, really – and it goes into quite a bit of detail about what it means if there is a computer in the room with the doctor and the patient, recommending diagnoses and therapies, and at the same time measuring whether or not what the doctor did was in agreement – concordant – with what the computer said. That information has never before been available from the patient encounter itself.
And what does that mean for a doctor? That is, I will just say as a doctor, deeply disturbing. I think in the next five years there is going to be a lot of work to figure out exactly what that means. Who owns that data will become quite important in that conversation.
But I want to stress that I used to be worried that this would be so difficult for doctors that they wouldn’t adopt it. I’m no longer worried about that. The access to care issues that are growing worldwide mean patients are going to insist that their doctors use this.
So, there are going to be two kinds of doctors in the world – the doctors who resist this and the doctors who embrace it. Over time, there will be a movement. It’s going to take time. As Terry said this morning, this is about culture and changing culture, which is very difficult, and there will be a movement toward supplementing a physician’s judgment with choices presented by a decision support tool.
[10:53]
Christina: Just a quick straw poll in the audience – how many folks here have used ChatGPT or one of the LLMs to type in symptoms and see if you can get a diagnosis? That looks like about two-thirds of the room, if not more. So, to your point, it’s already happening; it’s upon us. The next question is: what do we do about it? I’d love to throw the question over to the folks from InterSystems. Scott, Don, this future is here.
[11:19]
Don: If I could contradict him a little bit: you mentioned that Gen AI – ambient – hasn’t been applied to the core of what a doctor does, but I think that’s okay so far, because there are a lot of non-core things that doctors do on the administrative side.
If we can, at least for the next couple of years – the next three of the five years – really take a bite out of the administrative aspects of being a clinician, I think that would be huge. Ambient is nice to just get a record of the physician's office visit and document it.
If we can get that to drive orders, medications, follow-up visits, and complete the whole administrative flow of next steps, I think that would be terrific from a workforce point of view – burnout, all that good stuff. And I think it will be trust-building. Then we can go to the next step on the ladder, which is helping physicians make better clinical choices and getting to the core.
I’m not as frustrated as you that we haven’t quite gotten to the core. I am as optimistic as you, but I think the administrative stuff is easier – easier tasks, perhaps safer tasks, and more trust-building. If AI is walking up a 12-step ladder to the second floor, I think this is a good first step – deploy that widely and then get into clinical decision-making. That’s the way I would think about it.
Tim: Absolutely agree.
[13:09]
Scott: Since I’m sitting between them, I won’t cause controversy, but on the administrative side, as Don said and as we saw this morning, there’s more demand than supply for care. The real opportunity in the next three to five years is expanding beyond core clinical and diagnostic data into other lifestyle data that is available or that patients make available.
In retail and financial services, they talk about Customer 360 – it’s kind of the same thing. How do I have a relevant conversation with a patient when they’re not sick, not in an acute situation, and use that as a way to create better lifestyle management?
Hopefully, the accuracy leveraged by AI will prevent you from repeatedly hitting “operator, operator, operator” when you call, because messages will be relevant and encouraging.
[14:17]
Christina: The presentation we had earlier from Aldo Faisal looked at that with the ovarian cancer example and the grocery cart. I was just googling over the break to learn more about this, and it sounds like you could combine that with pharmacy information as well, because you may be picking up more pain medication while having abdominal symptoms – an early warning sign.
But to that point – what do you then do next? We talk a lot about the signal we’re collecting and the value in that signal. But what about the next step – how do we then speak to a patient about getting a scan, taking a preventative measure, or, more complex, behavior modification? If it’s something like pre-diabetes, how do we talk to a patient using AI, or not using AI, about the right next step so they can prevent that health outcome? What do you think is possible in the next five years on that front?
[15:30]
Don: I would say that cold engagement – a cold reach-out to patients – is a difficult way to create engagement. The thing I’ve seen work with many of our customers is meeting patients when they’re already in the system, already dialoguing about something.
If you have an AI system with preventative alerts or recommendations, those could benefit a case management program. Those alerts are presented to the physician or patient when they’re in the system for whatever reason – an annual visit, a radiology test, something like that. The uptake and engagement are so much higher.
I’ve heard some stats that a cold reach out to a patient is about 15% effective, but 85% effective if they’re already in the system, already thinking about healthcare, already dialoguing with somebody they trust. Having those nudges in the right place at the right time – when the patient is open to it, or the clinician is talking to the patient and taking care of them – really improves effectiveness.
[16:50]
Tim: I’ll take what Don said even further. I don’t know if Carrie is still here from the Blue Cross Blue Shield Association, but the central challenge behind health behavior change is that we do not have aligned incentives. Healthcare insurance does not provide a financial advantage if a person wants to improve their health.
We pay for health insurance – it’s use-it-or-lose-it, on an annual basis. That doesn’t make sense, especially when 70-80% of healthcare costs, in any individual in any year, are completely predictable. Why do we create risk-based capital pools when we already know what most of that will be spent on? That’s not a reason for insurance at all.
What excites me is the potential for AI to completely remake health insurance – even the way we conceptualize it – so that we can create financial incentives for people where it’s personally advantageous to take better care of themselves. That would be remarkable. The reason our current insurance is structured the way it is is not because people don’t know its limitations – it’s because executing is so complicated. AI allows you to do exactly that.
[18:54]
Christina: Jennifer, I’d love your thoughts specifically around prevention. I have a newsletter where I write about healthcare topics. One big area recently has been longevity. Many people call it “prevention 2.0” – really a form of primordial prevention, except with aspirational branding and more focus on consumer experience.
I think this is spot on as an argument. I’ve tried a bunch of these things myself. Take the Neko scan I recently did in Sweden – the Spotify CEO’s new healthcare company. They have a waitlist of 100,000+ people trying to get this scan. And yet, in this country, people aren’t going to primary care visits that are free and covered by insurance, while the scan costs around £300-400 out of pocket. How do we create aspirational experiences on the consumer side that people actually use – essentially prevention – and have an insurance system that supports it, rather than limiting access to these tools?
I think AI plays a huge role in longevity because we’ll be able to take all this data and do a lot with it.
[20:20]
Jennifer: Absolutely. Everything you’re talking about related to efficiency gains and utilization of AI makes me think of a concept in healthcare called elevation to top-of-license work. If you remove lower-level drudgery – like three phone calls to reach a patient, or dealing with no-shows – and elevate that work, you have a much more engaged workforce. A workforce that can really lean into understanding each individual patient’s needs.
For so long, we’ve thought about the shift from fee-for-service to value-based care to population health. Now we’re figuring out that understanding the population isn’t enough. We need to think about the individual – the patient of one – and address things holistically: medical, behavioral, social.
Understanding social determinants of health and potential barriers is key. If a patient is unreachable, missing visits, or skipping screenings, what if our data shows transportation issues? Or that they’re in a 5G desert without reliable cell service? If we understand that information, we can proactively reach out and offer access to care in different ways – promoting individualized engagement, empowerment, and education.
This is a huge step toward an engaged, loyal patient who understands the relevance of healthcare screenings. It’s also a shift in mindset – patients inherently want to be healthy, for themselves and their families, and to be productive members of society. However, education and engagement levels vary.
The first step is taking away the stigma that missing a visit means a patient is uninterested in being empowered or educated. That’s not true. It’s on the healthcare system to find better ways to reach out and garner engagement. Part of that is understanding the data around that individual patient to empower every individual interacting with that patient, whether that’s in acute care, rehab, or understanding benefits and insurance enrollment.
[23:45]
Don: There’s a dichotomy in your question, because all of us care about our health. Healthcare is the most googled topic. Yet some patients don’t show up for GP appointments or mammograms. Why? Two reasons: one, it can be difficult to access the annual visit, and two, patients don’t see value in the experiences delivered by health systems. Both need to be addressed.
You had some good comments around social determinants, population health, and increasing access – those are all part of the solution. Education, nudges, and engagement help patients understand that even if an office visit or mammogram shows “nothing happened,” it’s an important part of keeping them healthy. Tools, engagement materials, and technology allow patients to connect the value of healthcare experiences with their own health.
[25:00]
Christina: To that point, you mentioned the back-office, administrative applications of AI. Right now, provider offices are bogged down in it – the constant calls to find out what kind of insurance the patient has, the prior authorization that was discussed, the endless pajama time, and documentation.
That leaves very little time to spend with patients. I think the average now is somewhere between 6 and 8 minutes for a visit. Do you think that if we took all that away – made some of the administration that much easier – we’d get a system where we could be more like longevity companies and spend much more time with patients? Or would doctors just take on a bigger patient panel and still have six- to eight-minute visits with more patients?
[26:00]
Don: I would hope shrinking the administrative aspects of healthcare – making it easier to interoperate data, manage prior authorizations – would benefit patients. It may not be enough, but regarding the workforce shortage, part of it is that people don’t necessarily want to become physicians anymore because it’s not a great job, not what they signed up for. Physicians aren’t recommending medical school to their kids. If we can make that job better, more fulfilling, and more patient-focused, maybe we’ll uplift the workforce as well. It may not be sufficient, but it’s a necessary step.
[26:58]
Scott: It’s definitely part of the product. I agree with Don. There’ll be net benefits: more patients can be cared for, and less burnout for physicians. It feels like a win-win.
[27:18]
Tim: My middle daughter will start medical school in six weeks. I want to say I’m not one of those doctors who wouldn’t recommend it. The honor and privilege of being trusted to relieve suffering is potentially one of the most satisfying and impactful things you can do with your life. That won’t change because of AI.
How you spend your time in the job will change dramatically. I’m a proponent – the cognitive load, reading every chart before seeing each patient, stopped being possible about 20 years ago. The burden and anxiety of missing key information have risen in my professional life.
I’m excited for the potential to reduce cognitive load and focus on interpersonal interactions with patients, applying experience with sickness on a human-to-human basis. I can’t see that going away.
[29:44]
Christina: Let me add a follow-up question. Right now, we mostly want medical students who come from biochemistry or STEM backgrounds, because the job used to be about memorization and skills grounded in math and science. If AI handles summarization and research at a speed no human could, the job becomes more about human interaction. Why not recruit medical students from philosophy, history, or other backgrounds that develop empathy and resilience? How do you see that shift happening in the next five years?
[30:46]
Tim: There has already been a shift in medical school requirements. It’s been positive. It hasn’t moved very far, but it has started moving in that direction. My daughter was a comparative literature major and took basically no science courses during her undergraduate years. She had to make up the coursework after college in order to get into medical school. I think things are moving in that direction. There will still be an important cohort of medical students who are medical scientists, trained in science and translating it to the bedside. The two paths, largely overlapping now, might become more distinctive.
[31:51]
Christina: That’s fascinating. Jennifer, any thoughts?
[31:57]
Jennifer: Related to your previous question about more time with patients, I think we’re going to see a shift in how we collaborate with patients. It won’t just be within the four walls of the office or hospital. Hopefully, with empathetic and informed conversations, we’ll have more engaged patients to help drive better outcomes and care.
Regarding medical schools, as a nurse, I’m also seeing that same struggle: “This is not what I signed up for” or “This isn’t the working environment I was walking into.”
Younger generations are hungry for technology, particularly AI, to support functionality like summarization and query searches. That means not spending 20 minutes looking through a flow sheet when technology can surface the most recent and important lab results, empowering them to do the best job possible.
I think what we’ll start to see is organizations that are investing in these tools and technologies to pull in the smartest new graduates. Those new grads will anticipate and expect this type of technology to support their work. It could become an attribute that draws those new graduates to those employers.
Employers and organizations not embracing that technology may experience even more difficulty in shoring up their labor shortages.
[33:48]
Christina: I want to shift to what you’re seeing in the field. I’ve been in this industry for about 15 years and have seen cycles where technology is built to solve a problem that nobody really has – and fail. On the other side, health systems and plans have made it extremely difficult for newcomers, with long sales cycles, free pilots, and setups that set them up to fail.
With this new wave of technology – which in many cases is expensive, and not every organization has the resources to pay for vendors and tools – I’d love to hear what you’re seeing in the field. How do we ensure that as AI develops to do all the things discussed on this panel, it actually gets into systems, works, is paid for, and creates generational companies? We have the potential to create the next Google, Facebook, or Microsoft – but how do we ensure these companies are built successfully, so it’s not just Google, Facebook, and Microsoft?
Tim, maybe start with you, since you’re doing a lot of work on the investment side with AI and health, then we’ll move to others on the panel.
[35:25]
Tim: There are a couple of things. With any new technology, the price curve and affordability usually come down. I think we’ll see the same with AI deployment. Early on, everyone is excited and experimenting with the technology, but then comes what Gartner calls the trough of disillusionment: “I didn’t get any ROI, but I spent all this money.”
I think AI is transformative enough that we’ll get through that gap quickly. A couple of things are happening. A few months ago, DeepSeek announced a new model that they built for a fraction of the cost of OpenAI’s, and the accuracy is really good. People were surprised by that – and they shouldn’t have been – because that’s what happens when you’re in a heavy cycle of innovation. I think the cost side will start to normalize a little bit better.
I think there's also an opportunity for AI to help make AI better – and what I mean by that is, we heard this morning, and everybody's worried about hallucinations. So getting clean, quality, and lots of data into these models to reduce hallucinations is extremely important.
One of the hardest things to do – and it’s been the same for 50 years – is get access to data and understand that data. That generally takes human beings, which are expensive and time-consuming. Why not use AI to evaluate all of these different data silos you have available and automate that process, where AI is actually driving better accuracy for AI? I think you'll see some of those things happening as well.
[37:13]
Christina: Scott, where do you think we are in that hype – you mentioned the hype cycle. Do you think we're in the disillusionment now?
[37:20]
Scott: Yes, according to the people who manage the hype cycle, we're still kind of at the peak of inflated expectations. It is the first technology ever that debuted at the peak – that’s how fast it's happening. I think it is just that transformative.
[37:42]
Don: Yes, I think the trick to AI success is that it’s not just about AI itself. There’s going to be a ton of innovation there – but there are other legs to the stool. I’ll give you a quiz question: Professor Faisal’s first case was in Wales, where, using administrative events plus zip code, gender, and age, you could predict admissions. Why was that study done in Wales, and could it have been done in another country?
The answer is a more centralized health system. All the data was together, so it couldn’t be done in other places. The ovarian cancer shopping case: those 150 women were loyal to one grocery store, so all the data was together. If they had used multiple grocery stores, that AI model wouldn’t have worked – neither of those models.
The reality is, you really need a good data foundation – bringing your information together to get AI to work. A lot of people gloss over that in the magic of AI by itself. And then there’s trust building – and integrating with the workflow. Small startups that create a cool tool but don’t make it easily accessible to the user, integrated into their experience, will struggle to succeed.
So, innovation in AI needs to be coupled with good approaches and innovation in these other areas to make the whole package successful.
[39:19]
Christina: And Tim, I’d love your take as a physician. You’re also doing a lot of investing in this field. How are you seeing things go from an in-the-field perspective?
And maybe to add to that – I have seen some companies experiment with, “Hey, let's just see which of the physicians and nurses in this cohort are very likely to use AI and start with them.” Maybe that’s a lower-cost entry point before expanding to the whole system. I’d love to hear about that approach and other innovative approaches you’ve seen to get adoption going.
[40:04]
Tim: You pointed out in the first question two things I think are really worth underlining. Most providers are underwater. A third of health systems in the US are in the red. That’s sort of perennially true. It’s not always the same ones every year, but they’re mostly in the red.
Internationally, government health system expenses are breaking budgets, so a huge problem. The NHS capital investments have been running at 50% of the OECD norm for healthcare technology. They’re way behind in capital expense requirements to invest in technology.
Why do they do that? It’s not that they don’t want to – it’s that expenses keep piling up. The same is true for providers in the US. Then you get to the question of what health system leaders think about investing in.
The problem is, when you’re choosing between cool stuff that will be the future someday and repairing the garage that’s falling down, or the building, the park, or operating rooms that haven’t been updated, it’s almost always safer for the CEO to invest in redoing what needs updating. There’s a lot of waiting on the sidelines: “Yeah, we’re at the top of the hype curve, I don’t necessarily want to be first in – I’ll just wait and see what’s coming.”
So the key is finding the people who are willing to do it. What I find exciting is going around and talking to health system CEOs and health ministers in various countries – there are a few who want to be the first, and that’s the key.
If you’re familiar with Everett Rogers – he wrote Diffusion of Innovations – the famous innovation curve. You have to find the early adopters and work with them, whether that’s at the CEO/health system level, the health minister level, or on the wards.
It’s the nurses you engage who are avid for change – they arrive on the floor and say, “There’s probably a better way of doing this.” They’re not so ingrained in routine that they’re not open to change. Getting that first cohort started – and then it becomes the standard. The Mark Twain “painting the fence” thing – it’s just, “Okay, this is clearly better than that.” And telling the stories and backing it up with data – that’s how we move along the adoption curve.
[43:25]
Christina: And to that point, I’ve definitely seen, just in 15 years of conversations with founders, that they used to just say yes to free pilots and essentially do whatever the potential customer wanted. Now I see more pushback – maybe not free, maybe a lower-priced, lower-tier pilot for six weeks, with constant daily check-ins on progress. You get the experience of selling into that customer, which makes the bigger sale much more feasible and seamless when it happens.
I think that’s another thing that could really change now that AI tools are ready to go. What are you seeing in the field regarding the long-standing challenge of getting this innovation adopted?
[44:19]
Jennifer: For us, speaking to technology buyers and survey data, the last mile is so difficult – moving from proof of concept to a scalable, sustainable product. A lot of that ties back to understanding the problem you’re trying to solve, understanding how to incentivize and proposition this new innovation or technology to the individuals who will be using it, and then understanding what success is going to look like. And that success may be a winding road, in that we cannot continue to do the same process we did before. We have to think about it holistically and involve each individual along the way to ensure we are truly solving the right problem. If we’re not, adoption won’t happen, and the innovation will fall by the wayside.
Personally, I’m reading about a lot of innovation within some US academic medical centers – known to be some of the leaders, with the dollars to invest, the right strategic ecosystem, and partnerships to create playbooks for the rest of us. To say, “A, B, and C were successful – here’s how they did it.” Then others can replicate that, and our chances of success may be higher.
[46:00]
Christina: I love that. I think you’re spot on. I think it’s changing at the health system level, with more self-awareness – we did make it hard before, and now we shouldn’t. How do we partner with these companies? They’re not trying to disrupt us – they want to collaborate and help.
In our final few minutes, one thing I like to do with panels is put you all on the spot – no prep. I’d love to hear your one big idea for healthcare and AI – something that, if done successfully, you’d back tomorrow with your own money. You’ve got a smart developer audience here – someone could actually build it. Who wants to go first?
[47:06]
Scott: I’ll start with an easy one as a patient – using AI to automate the scheduling process and make it easy to get an appointment.
[47:31]
Don: I’d say the thing that bothers me most is the prior authorization process – the administrative complexity. Patients, families, providers, payers – everyone’s caught in the middle. If we can use AI to eliminate that or reduce it to a few exceptions, that’d be tremendous.
Christina: Can you get specific? Build it for providers, payers, or both?
Don: I think payers have a debatable role in this whole thing. So, ignoring that debate, I think it takes two to tango. You’d have to build something that really satisfies the needs of both parties, and it’s difficult to get everybody on the same page – there’s complexity, decision-making, interoperability, and data. It has a lot of challenges, but I think these are challenges we can solve.
Tim: Very specifically, prior authorization is conceptually about appropriateness. There are criteria for appropriateness, so prior authorization is about applying appropriateness criteria to data that exists. But right now, the process of applying those criteria to that data is this arcane series of steps. In fact, there is no reason why, with the touch of a button, you couldn’t apply the appropriate criteria to the appropriate data and get the answer in milliseconds.
[49:29]
Christina: Last big idea?
Jennifer: I think selfishly because I just went through this with my parents. When you access test results through a portal, they’re not typically in layman’s terms. So what do we do? We go to ChatGPT or Dr. Google to help us understand: Am I going to live? Am I not? Are my new medications influencing my lab results? I would love to see some type of interpretation of outcomes even ahead of the opportunity to speak with the provider, which can take days – to put the patient at ease, make it reliable, make it secure, make it interactive, make it holistic so it understands my medications, all my diagnoses, and my results – to promote a better experience for the patient.
Christina: Incredible. So, you heard four very good ideas, and I think the panelists would welcome speaking with any of you who want to build them. I just want to say thank you to this incredible panel. I had a great time, and I hope all of you did too.