In this keynote session from READY 2026, InterSystems Senior Vice President Scott Gnau explores one of the most pressing challenges of the AI era: the explosive demand for power, infrastructure, and intelligent software needed to deliver on AI's promise.
Scott draws on historical technology parallels to make the case that solving AI's resource demands will require not just better hardware, but smarter software and more efficient data management.
Presented by Scott Gnau
Video Transcript
Below is the full transcript of the READY 2026 keynote.
[0:00] Thank you all for being here. Before the break, we're going to spend some time talking about, and showing you, some of the innovations that we are building in the data platforms organization. And if I were to give you one guess, what do you think I'm going to talk about first? Anyone? AI, maybe? Kind of important.
[0:29] So I want to talk about what is driving us and what we think about in how we're investing, optimizing, and building software inside of the data platform. Because all of the great and amazing things that we've seen, the applications and demos and drinks last night, the other keynote speakers, all of those applications are really driven by that digital backbone that Don mentioned. And making those outcomes relevant is our mission inside of the data platform team.
[1:05] So if you turn on CNBC, or any news program, or read your local paper, you might be seeing some of the headlines that are here. AI is great, but it's using a lot of power, and its demands on infrastructure and natural resources, not just power, but even water for cooling, are extreme. You can see here from the World Economic Forum: before AI, we had this trend of using natural resources for compute power. Now, with the advent of AI and the extreme demand that creates, you can just see that we're skyrocketing in terms of the amount of resources required to deliver on the promise. And this scale only goes to 2035, which is really not that far away.
[2:07] In fact, my hometown is very close to the nuclear reactor at Three Mile Island, which you may remember was shut down after a big safety issue. It has been reopened solely to power a data center for one of the hyperscalers. So these demands are real, and keeping up is going to be difficult.
[2:25] Well, I've seen this kind of curve before, and I think it leads to the point I want to make: AI is going to demand not just hardware, not just power, but really intelligent software and better data management.
[2:44] So if you remember the Pentium processor, which was game-changing in so many ways, it started out powering desktop and laptop computers. Of course, the demands of new applications, things like Windows operating systems and so on, required more and more processing power. So Intel added transistors to make things run faster, and also started to increase the clock speed of these processors to give us more of what we demanded.
[3:25] And that worked really well, because as you increased the clock speed, you simply got more done more quickly. Applications ran faster; you were able to build more sophisticated things on the desktop. That was all really good, except that it started to get really hot. Those early desktops had big fans in them to cool the chips down. And if you look at the trajectory of what technology would provide and what applications would demand, that heat curve became extremely unsustainable, to the point that, by the year 2000, it would have made chips hotter than the surface of the Sun. Kind of hard to package that in a laptop, right?
[4:08] So of course that was not sustainable, kind of like the power curve I showed on the previous chart. But what happened was an innovation called multi-core. Instead of having the clock go faster, Intel just gave us more cores. And that solved the heat problem while still delivering the processing cycles required by the software.
[4:36] But it wasn't just the hardware. In this environment, software had to be very sophisticated: the question was no longer how fast a single thread could run, but how to distribute all of that work across multiple threads to take advantage of those transistors. And many software vendors at the time struggled to get any benefit out of multi-core. So it required great hardware innovation and great software innovation to take advantage of what was happening.
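Stepping outside the transcript for a moment: a minimal sketch of the kind of restructuring multi-core demanded. The names and the workload are illustrative, not from the talk; the point is that the same job, split into independent chunks, can use all cores at once instead of one.

```python
# Illustrative sketch: one job, restructured from single-core to multi-core.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """CPU-bound work on one independent slice of the input."""
    return sum(x * x for x in chunk)

def single_core(data):
    # One thread, one core: faster only if the clock gets faster.
    return sum(x * x for x in data)

def multi_core(data, workers=4):
    # Split the work into independent chunks, fan them out to a pool of
    # worker processes (one per core), then combine the partial results.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert single_core(data) == multi_core(data)
```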
[5:08] Another place we've seen this play before, where demand keeps increasing and we have to be really smart about how we deliver on it, is smartphones. We all have them. I had to put mine on the table. Think about the early days of cell phones: if I could get an hour out of the battery, that was great. There was all this demand, just like the demand for AI.
[5:38] So what happened? Well, they put in bigger batteries. But in the smartphone age, with these big screens, we had refresh rates to contend with. Displays started out at 60 Hz. But of course we wanted to see streaming video and didn't want it to skip around, so we went to 120 Hz. Well, what happened? The battery only lasted half as long. Not a good outcome.
[6:05] So of course, in the current generations of phones that most of us in this room have today, something very sophisticated happened: the display only updates the pixels that need to change, at the time that they need to change, instead of doing a full screen refresh. And voilà, the battery lasts the better part of a day. Better performance, more sophisticated, because you're only touching the data that needs to be touched at the time that data needs to be touched.
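Again as an editor's aside: a small sketch of the partial-update idea Scott describes, in illustrative Python rather than any real display driver. The frame diff touches only the pixels that changed, so the work done per refresh is proportional to the change, not to the screen size.

```python
# Illustrative sketch: update only what changed since the last frame.
def changed_pixels(previous, current):
    """Yield (x, y, value) for each pixel that differs between frames."""
    for y, (old_row, new_row) in enumerate(zip(previous, current)):
        for x, (old, new) in enumerate(zip(old_row, new_row)):
            if old != new:
                yield x, y, new

def refresh(display, previous, current):
    updates = list(changed_pixels(previous, current))
    for x, y, value in updates:
        display[y][x] = value   # touch only the data that needs touching
    return len(updates)         # work is proportional to the change

prev = [[0, 0], [0, 0]]
curr = [[0, 1], [0, 0]]
screen = [row[:] for row in prev]
assert refresh(screen, prev, curr) == 1  # one pixel written, not four
```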
[6:39] So again, we solved the problem of demand by making the processing more efficient. My point behind this history lesson is not to bore you with how old I am and how these trends have played out, but to say: the AI demand is a play that we've seen before. And to really solve this, it is going to require innovation in hardware, and of course we all celebrate the latest releases from the chip manufacturers, and that's great. But it's also going to require smart software that can take advantage of these processors. And it's going to require very sophisticated, smart data access: only touch the data that is required, and don't scan the entire dataset to get an answer every time. Move the data as infrequently as possible to help solve the overall demand on natural resources coming from AI.
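To make the "only touch the data that is required" idea concrete, here is a small illustrative sketch (mine, not InterSystems code): a full scan reads every record to answer a query, while an index goes straight to the matching rows, so the work no longer grows with the size of the dataset.

```python
# Illustrative sketch: full scan vs. indexed access.
from collections import defaultdict

records = [{"id": i, "region": "east" if i % 2 else "west", "value": i}
           for i in range(100_000)]

def full_scan(region):
    # Reads every row, every time, regardless of how few match.
    return [r for r in records if r["region"] == region]

# Build the index once; lookups afterwards touch only the matching rows.
by_region = defaultdict(list)
for r in records:
    by_region[r["region"]].append(r)

def index_lookup(region):
    return by_region[region]

assert full_scan("east") == index_lookup("east")
```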
[7:47] So in the end, the way we think about it in IRIS Data Platform is: how do we really enable better models, cleaner data, and optimized processing to truly deliver on that promise for AI?
[8:05] What you'll see is that we really focus on four key areas inside of IRIS, the "different by design" approach that I talked with you all about last year.
First: We really do try to move the processing to the data. Remember, data movement takes energy, it creates latency, and sometimes the data doesn't arrive properly, so there's also a cleanliness factor. And of course there's the notion of privacy and security. So our software is designed with the intent of enabling you to move the processing to your data, instead of having to move and maintain multiple copies of data everywhere.
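As an illustration of this first principle (using sqlite3 as a stand-in database, not InterSystems IRIS): pushing an aggregation into the database ships one number back to the client, whereas pulling the rows out first ships all of them.

```python
# Illustrative sketch: move the processing to the data, not the data
# to the processing. sqlite3 stands in for any database here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("s1", float(i)) for i in range(100_000)])

def move_data_to_processing():
    # Ships 100,000 rows to the client, then averages there.
    rows = conn.execute(
        "SELECT value FROM readings WHERE sensor = 's1'").fetchall()
    return sum(v for (v,) in rows) / len(rows)

def move_processing_to_data():
    # Ships one number: the aggregation runs where the data lives.
    return conn.execute(
        "SELECT AVG(value) FROM readings WHERE sensor = 's1'").fetchone()[0]

assert move_data_to_processing() == move_processing_to_data()
```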
[8:47] Second: We support any kind of data and any analytics. You're not restricted in the kinds of analysis you want to do. Of course, AI is a very sophisticated tool that we're all seeing the benefit of in our daily lives today — but it's not the only tool. Machine learning algorithms are still extremely valuable. Deep learning algorithms are extremely valuable. And many of our organizations still depend on traditional business intelligence. So being able to do any data, any structure, and any analytic in one place is foundational to what we offer.
[9:21] Third: We deliver limitless scale, in terms of both the size of the data and the size of the application clusters you deploy. We'll talk a lot more about some of those new advances in the software in the afternoon breakout sessions.
[9:44] Fourth: We provide an unparalleled, sophisticated interoperability engine built directly into the product. It's not a bolt-on, and it's not something you have to think about separately; it's guaranteed delivery, and it's highly efficient.
[9:57] And so in these ways, what we've seen when we look at our deployments for analytics is that you can deploy InterSystems IRIS with a much smaller carbon footprint, with much less power, heating, and cooling required. Because you're moving the processing to the data, you don't have to have multiple setups or multiple copies of the data, you don't introduce all that latency, and you can scale to whatever level you need.
[10:21] So this is what drives us. You're going to actually see, later this morning before the break, a couple of examples.