Video: The Interoperability Inflection Point: What Healthcare Data Leaders are doing in 2026 | Duration: 2832s | Summary: The Interoperability Inflection Point: What Healthcare Data Leaders are doing in 2026 | Chapters: Welcome and Introductions (3.84s), Industry Report Origins (164.385s), Defining Interoperability (360.905s), Interoperability Layers (643.025s), Operational Efficiency Metrics (954.86s), Community Health Intelligence (1248.94s), AI-Powered Interoperability (1421.08s), AI Implementation Readiness (1631.36s), AI Implementation Challenges (1889.98s), Implementation Challenges (2166.685s), Getting Started Advice (2271.085s), Implementation Guidance (2379.825s), Q&A Session (2527.57s), Closing Remarks (2712.28s)
Transcript for "The Interoperability Inflection Point: What Healthcare Data Leaders are doing in 2026": Alright. Hello, everybody. Welcome to today's webinar. We're going to spend about half an hour together today, maybe a little bit longer, so I'll just go ahead and get started right away. My name is Victor Wilson. I work for Hakkoda, which is an IBM company, a systems integrator focused on helping hospitals and other health care and life sciences companies build out modern platforms with Snowflake at the center. And I'm really happy to get to chat today as your emcee and also to work with some pillars of the industry here. We have Murali and Sharan here as well. I'll let them go ahead and introduce themselves, and then I'll go straight to the agenda. Murali, after you. Alright. Hi, everyone. I'm the Global Regulated Industries CTO with Snowflake. My team and I are focused on bringing our technology to truly help solve problems in the health care, life sciences, public sector, and public health agencies industry segments. Super excited to be in this webinar, and looking forward to the conversation with both Sharan and Victor. Thank you very much. Pleasure to be here. I wanna tell the group here a little bit about New York City Health and Hospitals first, and then I'll talk about what I do there. New York City Health and Hospitals is the largest public health system in the United States. We have 11 acute care hospitals, five post-acute care skilled nursing facilities, and 70-plus primary care delivery locations across the five boroughs of New York. We serve over a million in- and outpatients every single year. Now, the thing that's very different about us is the mix of patients that we serve, in terms of the complexity, in terms of their ability to pay, and that is the core of our mission.
We serve all New Yorkers without exception, and that goes to the heart of our mission: being able to provide that exceptional care from a whole-health perspective, from primary care to specialty services, across a very complex population. Now, with that background in mind, what I do here is serve as the deputy chief data officer, and my specific focus is the enterprise data platform, which is the foundation for analytics and AI across the enterprise, making sure that when we build things, we make them available in a way that's easy to consume across the enterprise, from a quality perspective, an ease-of-use perspective, and a management perspective. So that's a little bit about me. Thanks, Sharan. Yeah. And the reason we have both of these folks on today is, clearly, they're at the forefront of figuring out how to store data, make it interoperable, and then start using some frontier tooling to really interact with it. So that's really the topic of today's session. And then, just very briefly about Hakkoda, an IBM company: we have partnered with Snowflake, producing a lot of content and research, including the topics we're going to be discussing today and some of the figures you will see. We're also very fortunate to be recognized as Snowflake's two-time health care and life sciences partner of the year, and with Snowflake Summit coming up at the beginning of next month, we're going for the three-peat. So I'm really looking forward to having really interesting perspectives, both from the breadth that Murali has, interacting with dozens and dozens of providers and other health care operators, and from Sharan, who's one of the leaders nationally in figuring out how to deal with disparate data, store it effectively, make it interoperable, and then use AI on top of it. So okay. Let's go straight into maybe the origins of this topic today, which is really this report that was produced.
So as mentioned, Hakkoda partnered with Snowflake to interview, I think, 185 or so different actors across the industry and talk about some key metrics and key questions: how they're actually navigating some of these challenges today. So, Murali, maybe you can go into a bit more detail on the origins of the report that we produced. Yes. Certainly, Victor. Our intention was to truly understand where things stand. Interoperability from a health care standpoint has been in existence for a while, and we now see the rise of AI as a technology that is really catching a wave within our own industry. So we really wanted to take a pulse of what different organizations are feeling about the progress they are making from an interoperability standpoint, but at the same time, how are they trying to balance it with new upcoming technologies like AI? Is it interdependent? Is one connected to the other? Where do we see the maturity spectrum, and where do we particularly see the priorities? And one thing we really wanted to make sure of as we were conducting the surveys is that the survey represents all corners of the health care and life sciences ecosystem. So we have folks from payers to providers to life sciences and pharma companies, even to the public sector companies and agencies. They all participated. So we intended to hear from all aspects of the ecosystem, because that's super important: different sections of this ecosystem are at different levels of maturity and have different priorities in terms of what they are focused on within this ecosystem. So we really wanted to make sure that that's out there.
But truly, at the end of the day, it's essentially taking a pulse of where organizations are and where they see their priorities, particularly around these two technologies and the opportunities. Got it. Got it. Yes. And just for everyone's awareness, a link to the full ebook will be sent out after this session. I think there's also a QR code in this presentation where you can go ahead and download it. So if you haven't read it, it's chock-full of really top-line figures, but then a lot of details and a lot of stories and anecdotes that are aligned with some of the opportunities, some of the challenges, and then some of the ways that folks are using technology to solve those challenges. So, let's go ahead and dive into maybe some of those key numbers here. And here's kind of the big one that stood out to us at first when we did this research: 84.7% of health care decision makers are rating interoperability even higher than they were a couple of years ago. As we've grown more connected in terms of the technology being able to make these kinds of connections, it kind of makes sense when you think about interoperability, even though it's been, for those of us who have been working in health care for a long time, one of the key challenges. How do we get the data to talk to each other when the organizations are disparate or we have to do data sharing? So, Murali, I'll start with you. Help define interoperability for the group in plain terms, and maybe why it's been so difficult to achieve historically; you have that great view, sitting and looking across how many different organizations are trying to solve it today. Yeah. It's a great question to baseline our conversation today in this webinar. See, if you look at the health care and life sciences ecosystem, it's a network of entities.
We have payers, providers, life sciences and pharma companies, PBMs, and they all interact with one another. They all interact with the regulatory agencies and government bodies. And what that means is we really need a robust way to exchange data about the common denominator. The common denominator here is the patient. At the end of the day, everybody serves to make the lives of patients better. Now, with that in context, the way interoperability opened up is: as the patient is moving from one care setting to the other, interacting with one entity and then the other, they are basically leaving their digital footprint in terms of their data. How do we make sure that as the patient is going from one setting to the other, their data is shared with the other organization or institution? That way, the organization has a much better view of the patient. Right? So that's essentially the core of the interoperability framework from a health care perspective. The key point to note there is it focuses more on what actually just happened to the patient. So it becomes transactional in terms of, hey, there is an encounter event, and that encounter event is something that I wanna share with the other health care provider or the payer; or the payer says, hey, here is the most recent information that I have for a patient. But there is also another angle we need to look at, particularly when you talk about value-based care arrangements: providers are signing up for the risk, and thereby they truly want to understand the longitudinal health history of the patient population for which they wanna take ownership of health outcomes, thereby bearing the risk. In that case, they are really interested in understanding the longitudinal health information of the patient, because we need both. We need to know what really happened to the patient over a period of years.
You also need to know what is currently happening to the patient. The combination of the two is essentially how we will understand the patient as a whole. Now, there is one more dimension that is getting added, which is patient-generated data. Today, we are in the age where wearables dominate how we go about doing things. Right? All of us have wearables, and those wearables are capturing tons of information about the health and wellness of the patient. How do we bring that also into the equation? So interoperability, in my opinion, looking at it in futuristic terms, is putting the patient at the center and truly trying to understand what happened to the patient, what's currently happening, and what we know about the patient in terms of health and wellness through wearables integration. All of this put together, in my opinion, is how we should broadly define interoperability. Yep. Yep. Love it. And then, Sharan, for you, I'm kind of drilling into your perspective as a leader at one of the premier hospital systems nationally. Does this number resonate with what you're hearing from your other executives, what people on the front line are talking about? I mean, serving a very large population in New York City, obviously dealing with other organizations on the periphery of NYC H and H. So does this number kinda make sense? Have these kinds of conversations picked up in the last year or two? Yeah. Absolutely. I think that's what we're seeing in the survey, and there's a reason for that. A big reason, of course, is the interest around AI.
But to add a little bit to what Murali was saying, which was a great way to think about it, which is the patient at the center and how we serve the patient through health care services, through health care operations, and everything around that: from an insurance standpoint, from a medication prescription perspective, from a whole-health management perspective. So there are all these different perspectives that we wanna use to serve the patient, but the way that I think about it is really three levels of interoperability, each building on top of the other. The first one is the standards and data exchange protocols that have existed for a very long time, and that's really around the business of running health care, if you will. Right? So, speaking from a health care provider perspective, and even a payer perspective, there's the business of making sure that we can provide the services: being able to look up benefits, being able to schedule between doctors' offices, being able to approve prior authorizations, being able to pay claims. Right? All those standard types of things to be able to do the business of health care, essentially. It's operational interoperability, if you will: what does it take to run the business? And then the second layer, which has also been going on for some time, is analytical interoperability, which is around how we share and exchange data for research and observational outcomes. How do we do that from a population health risk stratification and management perspective? Of course, there are quality measures and reporting that we have to submit to CMS as part of being a healthcare organization. And then there are things that we also have to do from a compliance perspective for value-based care and accountable care organizations and participating in those frameworks.
So that's the analytical approach, and basically what that allows organizations to do is ensure that when we wanna run a million-patient cohort analysis, for example, and see what's happened longitudinally over time with those patients from a quality improvement perspective: are we all speaking the same language? Can we run those analytics? Right? So that's the second layer. And the third layer is really AI interoperability, which everyone's trying to figure out right now. Of course, some organizations are more advanced than others, but it also builds on top of the previous standards. Right? So you've got data exchange in terms of health care operations. You've got the data models: are you using similar constructs like the OMOP data model or the i2b2 data model, so that you can share data in ways where you don't actually have to figure out every single time how you store patient information and encounter information and share it with participating organizations? Now, the third thing, AI interoperability, has multiple layers as well. Right? Again, it builds on the FHIR standards and HL7 standards. It also requires model context protocols: how will AI agents read from the data and connect tools to it, and what's the agent-to-agent communication? Then you need governance protocols and frameworks around that. And then you also need the semantic layer and the context layer, which are different things. The semantic layer being things like: if we're gonna count a readmission, how do we count that readmission? With the way that we define our logic, we can't allow AI, from a probabilistic standpoint, to figure that out automatically by itself. We have to provide those guardrails and join relationships and things like that.
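The semantic-layer point here, that metric logic like "how do we count a readmission" has to be pinned down deterministically rather than left to a probabilistic model, can be sketched as a small, explicit definition that an AI agent is only allowed to call, never reinvent. This is a minimal illustration under stated assumptions: the 30-day window, the `Encounter` fields, and the function name are invented for the example, not NYC Health and Hospitals' actual logic.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative encounter record; the field names are assumptions,
# not a real hospital schema.
@dataclass
class Encounter:
    patient_id: str
    admit_date: date
    discharge_date: date

def count_30day_readmissions(encounters: list[Encounter]) -> int:
    """Deterministic metric definition: an admission counts as a
    readmission if it begins within 30 days of the same patient's
    previous discharge. An agent calls this; it never re-derives
    the rule on its own."""
    by_patient: dict[str, list[Encounter]] = {}
    for e in encounters:
        by_patient.setdefault(e.patient_id, []).append(e)
    readmissions = 0
    for stays in by_patient.values():
        stays.sort(key=lambda e: e.admit_date)
        for prev, nxt in zip(stays, stays[1:]):
            if (nxt.admit_date - prev.discharge_date) <= timedelta(days=30):
                readmissions += 1
    return readmissions
```

The point of the design is that the business rule lives in versioned, reviewable code (the semantic layer), and the model is only given the function, not the freedom to improvise the definition.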
And then the context layer provides additional information around the fact that if you were to have a human being actually look at the data and provide a service to the patient, they're not just looking at data from the database. They're also looking at knowledge bases. They're looking at emails. They're looking at chats. So how do you put a context layer around that for the AI agent to execute like a human would? Right? So getting to that AI agent layer and that AI automation and prediction at scale, enterprise scale, and potentially even industry scale, as we think about how we can benefit what we're doing for the patient not just in our system but beyond it, really increases the priority and the importance of interoperability, again building on each of these layers that I spoke about. So that's why I think everyone is rating this so highly now compared to what it was even two, three years ago. Yeah. Totally. And we'll get to the meat of the content here, which is kind of prepping for the AI future and how we can leverage some of the stuff we're building with interoperability to build all of those layers; a system within a system is, I think, what you're describing. But just drilling into maybe some of the operational and analytical metrics that were also in the report, on the next slide, please, Andre. We have operational efficiency, patient experience, and VBC, value-based care. And so maybe, Sharan, staying with you here at the micro level versus zooming out to the entire industry: do you have any specific examples at NYC H and H where planning for operational interoperability, leveraging standards, doing these kinds of things, has moved the needle and progressed how you want to deliver care? Yeah. There are a couple of examples that I think are relevant here.
There are more than that, but a couple that I'll highlight just for the purposes of our discussion today. So, a number of years ago, we were running an initiative, a program called Special Populations. And the objective of that program was to provide holistic health care, including support for medical and mental health, as well as support for income and transportation, basically aiding with the social risk factors that the homeless population in New York experiences. So as you can imagine: complex population, complex needs, data is not always consistent. And this is a program that was running across multiple agencies and required coordination between CMS, the Department of Health, the Department of Housing, the Department of Transportation, obviously the payers, and us in the health system, being in the middle of all of this. So the objective was to provide holistic health care to people experiencing homelessness in New York City. And in order to do that, we need data. We need to understand which patients are actually in the shelters. We need to know what their current situation is in terms of their illnesses and any sort of treatments that they're currently undergoing or have undergone in the past. And then, how do we put those folks in touch with the right care coordinators so that they can provide the next level of care that's required to support them in the most appropriate way possible? So, bringing it down to a tactical point now: classic data problem. We have insurance companies, payers. At the beginning, they get the information about the members. The members are enrolled in these programs that actually ensure that they are getting the right insurance coverage and so on.
And then from that, in Health and Hospitals, we have to make sure that we receive that membership enrollment information and then pass it on to our care coordination and managed care teams so that they can take the next best actions in terms of making sure all of the holistic care services are able to be provided to those members: knowing who the members are, what their situation is, and what we have to do next, essentially. So this process, just the member enrollment and data transmission process to New York City Health and Hospitals, used to take somewhere between five to seven days. And the reason for that was it was a manual process. We used to receive these large membership files. There are different histories, different grains, different logic, different columns between the different payers that are sending us essentially similar information for the same population. Right? And so this used to take between five to seven days. We went through the process of standardization on what the payers would be sending. Actually, we took the opposite approach: we didn't ask the payers to change anything on their side, because every organization is super busy. They've got multiple priorities, so we decided to take it on ourselves to standardize in the middle. We built a data vault modeling approach. We built a standardization approach in terms of what the dataset is that all of that membership information gets stored in, and then how it gets processed. But long story short, in terms of moving the needle, as you kinda pointed out, this was an operational efficiency play. Of course, there's patient benefit, like once we connect patients to providers more quickly.
But since you asked about operational efficiency: the five to seven days became five minutes, and it runs automated end to end in terms of member data enrollment processing within H and H, so we can take the next step in terms of care coordination. So that's one example. The second example, in terms of improved patient experience, is actually something that we're working on right now, and it's working with our HIEs to get information about the patients in our community. So, for example, in Brooklyn, we need to have a better understanding of who the patients in that community are, what their health needs are, what they are going to the doctors for, what they are coming to our hospitals for, and whether there are care gaps that are not getting addressed, in terms of looking at their quality measures and metrics, and what the last follow-up with the doctor was, and what their medication adherence is, and so on and so forth. So we started getting that information from the HIEs, but the objective essentially is: how do we know, in our hospital, if in our community there are more patients with, let's say, chronic asthma and chronic diabetes, where are they actually going to receive the care? Are they coming to a New York City Health and Hospitals facility, or are they going somewhere else? And are they going somewhere else because we're not providing the services, or because our services maybe have some quality issues? How do we get that sense? That interoperability and data exchange is so critical to make sure that when we have a hospital where we're investing, of course, millions and millions of dollars to make sure that we have the right specialty care, the right primary care, the right set of staff and doctors and nursing capacity, we actually have that intelligence about what our community needs.
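The "standardize in the middle" pattern described above for the member-enrollment files, accepting each payer's feed as-is and mapping it onto one shared shape on receipt, might look roughly like this in miniature. The payer names, column mappings, and target schema here are invented for illustration; the real pipeline used a data vault model and handles differences in grain, history, and logic, not just column names.

```python
import csv
import io

# Hypothetical per-payer column mappings from each payer's file
# layout onto one shared schema. Real feeds differ in far more
# than naming, but the principle is the same: the burden of
# translation sits in the middle, not on the payers.
PAYER_COLUMN_MAPS = {
    "payer_a": {"MBR_ID": "member_id", "FNAME": "first_name",
                "ENROLL_DT": "enrolled_on"},
    "payer_b": {"MemberNumber": "member_id", "GivenName": "first_name",
                "EffectiveDate": "enrolled_on"},
}

def normalize_membership_file(payer: str, raw_csv: str) -> list[dict]:
    """Map one payer's membership file onto the shared schema so
    downstream care-coordination steps see a single format."""
    mapping = PAYER_COLUMN_MAPS[payer]
    rows = []
    for record in csv.DictReader(io.StringIO(raw_csv)):
        rows.append({target: record[source]
                     for source, target in mapping.items()})
    return rows
```

Once every feed lands in the same shape, the downstream enrollment processing can run unattended, which is what turns a five-to-seven-day manual cycle into a minutes-long automated one.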
And so that is something that will, of course, as you can imagine, improve the patient experience. If someone's living in a particular ZIP code and they're traversing New York, just a couple of miles can take you forty or fifty minutes. But knowing where people are generally (we can't get to exact locations because of patient de-identification and patient privacy requirements), just generally knowing where people are going will help us create services and create campaigns and create communication that lets the patient know that this is a facility that is closer, that provides these services: I can go and get my regular care and specialty care there as well. And that allows them to have convenience and better access to care. And then, of course, their experience as a result of not having to wait and not having to travel that much, and also being supported by social services, transportation, and housing as needed, based on what their background, income levels, and social situation is, can be a huge benefit. So that's an initiative that we're actively working on right now, which is enabled by the data exchange between the HIE and us. Love it. Yeah. And that's one of the dreams of the HIE, right, that you can seamlessly share data across organizations. And as a patient, someone that goes from one clinic to another, how wonderful it is to have that continuum of care. Murali, coming to you at the macro level: do you think this is unique to NYC H and H, or do you see other providers in the industry nationally leveraging the same macro and micro abilities, whether it's technology they own or technology that's making it easier to share? And then I'm sure another thing that people are very interested in is payer-provider.
We don't have to get too deep into that today, the natural tension of sharing or not sharing data. But maybe if you've heard of any quick wins of payers and providers leveraging interoperability standards and technology to make the life of the patient or member better, that would be wonderful to share as well. No, certainly, it is a trend that we have seen across the board, whether you look at the provider side of the house or the payer side of the house. I think there is more willingness to share data as much as possible, within the use case boundaries that they have contracted for. To me, the biggest opportunity today is the combination of interoperability plus AI. Right? So think of something like prior auth. As we are approaching the prior auth mandate, where prior authorization requests are supposed to be routed through FHIR, the FHIR interoperability framework, there is a beautiful thing happening. Right? On the provider side, before even that FHIR request is sent out, there's an opportunity to embed an agentic AI process that truly does a pre-service medical eligibility criteria check against those payer-specific rules and any clinical documentation or clinical guidelines, to make sure that the prior authorization that's being sent has a higher chance or probability of being approved.
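A minimal sketch of the pre-submission check described here: screening a draft prior-auth request against payer-specific eligibility rules before the FHIR request goes out. The rule table, procedure code, and field names below are hypothetical; real payer criteria are clinical policy documents, which is why the passage points to agentic AI over clinical notes rather than simple lookups like this.

```python
# Hypothetical payer-specific eligibility rules. Real criteria are
# clinical policies, not key/value checks; this only illustrates the
# "check before you submit" shape of the workflow.
PAYER_RULES = {
    "payer_x": {
        "MRI_LUMBAR": {"required_docs": {"conservative_therapy_note"},
                       "min_symptom_days": 42},
    },
}

def precheck_prior_auth(payer: str, procedure: str,
                        symptom_days: int,
                        attached_docs: set[str]) -> list[str]:
    """Return problems to fix before sending the FHIR prior-auth
    request; an empty list means the draft passes the payer's
    (illustrative) criteria."""
    rule = PAYER_RULES.get(payer, {}).get(procedure)
    if rule is None:
        return [f"no rule on file for {payer}/{procedure}; "
                "expect manual review"]
    problems = []
    missing = rule["required_docs"] - attached_docs
    if missing:
        problems.append(f"missing documentation: {sorted(missing)}")
    if symptom_days < rule["min_symptom_days"]:
        problems.append(f"symptom duration {symptom_days}d below "
                        f"threshold {rule['min_symptom_days']}d")
    return problems
```

The agentic version of this would assemble `attached_docs` and `symptom_days` from the clinical notes itself; the deterministic gate at the end stays the same.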
Now, on the other side, as the requests are being received in FHIR format, there is a much better automated opportunity to intake this FHIR data as the information is being submitted, even run it through AI to unpack the clinical evidence from the clinical notes, and go through the additional documentation that was provided by the provider to come to the PA decision faster, a decision which is more grounded in evidence, more grounded in our ability to share data with one another in the fastest way possible. Right? So this is the type of transformation that we are seeing, and it truly aligns: look at a case like this. This is about delivering a better patient experience; actually, not just patient experience, it's the patient and provider experience. And it's also about improving operational efficiency. But everything is now moving toward: if we can get the prior authorization done quickly, that means we are gonna treat the patient faster and thereby drive toward value-based care. So it's all interrelated, and so core. The timing of this discussion couldn't be more apt. Yeah. Totally agree. And now we'll kind of get into the meat of it, because I think what ends up happening, for those of us that have been in the trenches and worked with data in a health system, is that sometimes the process needs some time to catch up to the technology. And the technology is moving very quickly. You can see here that 64.5% of health care organizations are already starting to adopt, play with POCs, experiment, or plan to implement agentic AI, because the barrier to entry coming down with tools like Snowflake is making it easier and easier to access what previously might have required hiring a bunch of PhDs or standing up a big data center or something like that.
And this number is probably even higher since this report came out, to be honest, as some of the big frontier model providers, and Snowflake, are building more and more agentic AI infrastructure within their own platforms; this number is really increasing. And so I think now we're at this crossroads where, you know, we talked about operational interoperability, analytical interoperability, and prepping all of the contextual stuff that you need for an autonomous agent to operate on that data in an essentially deterministic way. Right? Instead of random output that it generates each and every time you call it, we wanna get to the point where we give enough context and enough structure and enough standardization on the data to make sure that we get as close to a deterministic output for the same input every single time. So we kinda have this tension where more and more organizations really want to adopt this technology, but it may not be the case that they're ready. And there's a number in here. Burying the lede: only 33% feel like they're actually ready to do it, meaning they have their data organized, their processes dialed in, their governance, etcetera. So that's the key tension that we're at right now in 2026. So, Murali, over to you first. What did I miss there? What foundational elements need to be in place at the data level and the process level, from what you've seen, to actually be successful in the first steps using autonomous or agentic AI? Yeah, Victor, you just touched upon all of those aspects. Right? See, the most important question when it comes to AI, in my opinion, is this: AI is not a technology-first conversation. It is a business-first conversation, meaning what we should be asking is, what workflow do we wanna disrupt with AI? What business outcomes do we wanna drive with AI? That's where we clearly start.
Is it an operational use case we are trying to solve, or is it a true patient-touching clinical use of AI? Because these two use cases require two different levels of governance, security, and access controls. Now, when you start with that, it's much easier to then boil down to: okay, if I want to safely deploy AI for this particular type of use case, what do I need from a governance standpoint, in terms of access controls, and in terms of the security that I need to bring to that particular use case with this technology? And then one layer down is: do I have all the data that is needed for AI to make that right deterministic insight that you talked about, Victor? Does it have the right context? Does it have the right data? And it then boils down to: do we have all the data about the patient in a way that is easily accessible in a solid data foundation? Right? So think of these as the building-block layers: the data foundation, the governance and security, and the use case for which we wanna deploy these AI agents. And it doesn't start bottom-up. It starts from the use case: what process do we wanna really disrupt with this technology? Yep. Or what problem are we trying to solve, really? Working backwards sometimes can be helpful. Sharan, you talk about building those layers. I'm certain that NYC H and H is deep into the 77%, heavily invested in AI, exploring opportunities to serve your patient populations better and better all the time. What are some of the challenges that you've seen as your teams have started to roll out some of this tech? Have they been able to leverage some of those layers you all have built and some of those wins you had before? What have you seen as the biggest challenges or impediments so far? Yeah.
Just being honest, where we are right now as a society in terms of the AI capabilities we see used every single day, I don't think most of us could have imagined it four years ago. Right? Of course, people who were deep in the space working on these frontier models have been seeing it and talking about it for a very long time. We know AI as a concept, and the technologies around it, started around the 1950s, and of course we've had a lot of technology around recommendation systems and machine learning and so on. But the tipping point was when AI became accessible at our fingertips through a chat window, where we could ask any question and get an answer. Not always the right answer, but at least answers that sounded really correct and precise. So now we're at that point, and we're seeing new breakthroughs every single day in terms of what these technologies can do. First, it was basically just chatting, getting answers, being able to do really complex research in a very short period of time and having access to all this information quickly. And then you started getting reasoning models. Right? Reasoning models that can think about things more deeply and actually problem-solve. So it's not just asking a question and getting an answer, which in terms of feel and experience is like the next level of search, curated search, if you will, where you don't have to navigate through links but get something back as if you were speaking to a person. But now reasoning and problem solving, in terms of building strategy, building campaigns, running differential diagnoses on top of patient data, those are the capabilities that have already come into play.
And now we're getting agentic AI, where we start thinking about, well, we have a human being doing this job, and, without getting into the societal and economic implications of all that for the time being, thinking about what the value-added work a human being actually does is, versus the things that are more commodity, low value-add, super manual, and super repetitive. Why don't we give that work to AI agents, so that we as human beings can operate on the things that really require human discretion and human judgment and human capability? So when we think about it at that level, all of that now really seems within grasp. Right? And we see these frontier companies doing incredible things. You see news every single day, at least statements from different technology leaders, about how much of the coding is being done by AI. I think the latest is 30% at Google, for example. I was talking to a leader at Bain Capital the other day who said they have a directive, a policy, that says no coding will be done by a developer to start with. Right? Absolutely none. So basically you're getting to the point of, okay, start with AI, then bring the human expertise on top of that. So when you think about all those things, then absolutely, speaking for myself, I sometimes think we're playing catch-up, but I think everyone is kind of feeling like that. Right? And that's why there's so much interest and so much priority on being able to do more with this technology that seems like it's already here and just up for the taking.
Now, as far as what's been difficult: just because we know it's possible right now doesn't mean we can actually do it. Right? It goes back to building on those layers. Did we have the proper discipline for operational interoperability? Do we have the proper discipline for analytical interoperability in terms of the taxonomies and standards and terminologies and data models? And now we're getting to AI interoperability. What is it going to take to build a semantic layer and a context layer? And some of those require different technologies to be procured. Speaking from a public health system perspective, we cannot move as fast as commercial organizations in terms of our budget cycles and procurement cycles. Sometimes that's because there's a real need to do an extra level of due diligence, since we get into longer-term contracts. Sometimes it's just the way things work in government, a little bit slower, generally speaking, because our focus is a little bit different. But what that means is that we have to be more thoughtful. In some ways we have to be more cautious, of course, from a health implication and risk management and bias management perspective, especially when it comes to AI. So putting all those things in place requires time. Right? And by the way, while we're doing these things, which are in a way foundational, we can't stop and tell our customers and stakeholders, the consumers of data and AI solutions and products, "Hey, guys, let us build a foundation, and then we'll get back to you on meeting the needs you have around intelligence and how to run your operations and deliver better patient care." That is not an option. Right? So these things need to happen in parallel, and I think that's really where the hardest part starts to come in. And totally agree.
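On the "semantic layer" Sharan mentions: in practice this often starts as little more than a governed mapping from business terms to physical columns, filters, and units, so that an AI agent (or a human analyst) resolves a term the same way every time. A minimal sketch, with an illustrative structure of my own (the table and column names are hypothetical; LOINC 4548-4 is the standard code for hemoglobin A1c):

```python
# Minimal sketch of a semantic-layer entry: one governed definition per
# business term, so every consumer resolves "A1c" to the same physical data.
# Table/column names here are assumptions for illustration.
SEMANTIC_LAYER = {
    "a1c": {
        "table": "lab_results",
        "column": "result_value",
        "filter": {"loinc_code": "4548-4"},  # Hemoglobin A1c per LOINC
        "unit": "%",
    },
}

def resolve(term: str) -> dict:
    """Look up a business term; fail loudly on unmapped terms rather than
    letting an agent guess at a column."""
    entry = SEMANTIC_LAYER.get(term.lower())
    if entry is None:
        raise KeyError(f"unmapped business term: {term}")
    return entry
```

Failing loudly on unmapped terms is the point: an agent that cannot resolve a term should stop, not improvise.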
And talking about the patient at the center, these are big, impactful things that folks in this industry do every day, that we do need to be diligent and thoughtful about and do with a lot of great care. Not to mention the back-office staff, the administrative staff, and the clinical staff, all the people whose workflows this will change, how they operate, and potentially what their job looks like in the future. So let's wrap up with maybe just a piece of advice. Sharan, I'll start with you. Given all that context about how to navigate an industry that tends to move a little bit slower technologically, for the myriad of reasons you laid out: what's maybe one thing folks can do? Maybe they've been thinking about this but haven't taken that first step. What is the first thing, from your perspective, they should consider to get started or to give themselves a baseline? And then, Murali, I'll kick it to you after. Yeah. I think, just in a few words: stop thinking of interoperability as a compliance project. Right? There are laws and mandates from ONC and CMS that we all have to comply with as health care organizations. But to get the true value unlock from AI, we have to build these layers. We have to do the hard work of assessing where things are, figure out where the gaps are, and then create essentially foundational investments and a strategy to support that, while at the same time continuing business operations. So, basic advice: identify what it's going to take to do this, explain the benefits to executive stakeholders, get the buy-in, and then start working on this in parallel while also continuing to support business operations. Yeah. I love that.
And then, Murali, for you, seeing across the entire industry, what's one piece of advice you'd give a senior director or VP of analytics who wants to get started and do this kind of stuff? Yeah, I'll add on top of what Sharan shared, which, by the way, was extremely well put, in terms of building the right business case. It goes back to the point of what problem do we really want to solve for, and why does it matter? It's all about the return on investment on a technology like this. I'll leave the audience with a couple of thoughts. One, from an interoperability standpoint, interoperability is getting much broader, and putting the patient at the center of interoperability is super key. That means we really want to go after the transactional encounter, clinical, and claims data as it happens to the patient, through very robust FHIR-based interoperability. How do we intake the data? How do we put that data side by side with the other data we have within an organization? But at the same time, embrace longitudinal data-sharing capabilities: if I have to understand the three-year longitudinal health history of a particular patient, how do I enable data sharing across clouds? How do we tap into open formats like Iceberg? Leverage that technology to get access to the data as quickly as possible, within the business boundaries. And the reason that needs to be part of this equation is because that's when you have a robust data foundation, and you're really moving towards that whole-patient concept we talked about. And then, on the AI side, it's all about the ROI, and ROI means: are we identifying the right workflow, the right business process, where we are inserting this AI, why it needs to be inserted, and what value it is driving? Right?
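The FHIR-based intake Murali describes usually ends with resources being flattened into analytic tables so they can sit "side by side" with other data. A minimal sketch of that step, assuming the input is a FHIR R4 Bundle of Observation resources already parsed from JSON (field paths follow the FHIR spec; the output column names are my own):

```python
# Minimal sketch: flattening a FHIR R4 Bundle's Observation resources into
# tabular rows for analytic storage. Input is a plain dict, e.g. the result
# of json.loads() on a Bundle fetched from a FHIR server.
def flatten_observations(bundle: dict) -> list[dict]:
    rows = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("resourceType") != "Observation":
            continue  # skip Patients, Encounters, etc. in a mixed bundle
        coding = (res.get("code", {}).get("coding") or [{}])[0]
        qty = res.get("valueQuantity", {})
        rows.append({
            "observation_id": res.get("id"),
            "patient_ref": res.get("subject", {}).get("reference"),
            "loinc_code": coding.get("code"),
            "display": coding.get("display"),
            "value": qty.get("value"),
            "unit": qty.get("unit"),
            "effective": res.get("effectiveDateTime"),
        })
    return rows
```

A real pipeline would also handle component observations, coded values, and missing quantities; this shows only the shape of the transactional-to-analytic hop.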
So that integration, thinking from the context of a business process and how we're really looking to disrupt that business. It all boils down to the integration and the applicability of AI to disrupt that particular workflow. Understand that, combine it with the power of a robust data foundation with the necessary governance and security guardrails, and you can see the magic flowing. Yeah. Totally agree. Extremely well put, Murali. So we're at the end of our webinar here. We did want to save a little bit of time for Q&A. I saw one question come into the chat that maybe, Murali, you can help answer quickly, because you manage and work with technology that solves this problem in some ways for folks. Richard asked: how can health care orgs overcome the governance and security challenge inherent in sending data to an LLM, or putting an LLM in context with their data? What are some ways that folks that have Snowflake are doing that today? Yeah. You can think of the security and governance here as a multilayer cake. Right? At the core of it is what data is going to be exposed to the humans or to the AI. Now, AI is not going to be used in the sense of AI independently accessing the data. At the end of the day, the AI is either part of a business-process workflow or it is interacting with the end user to give the necessary information. At that point, the end persona's access is what the AI gets access to. So how do you narrow that down? You truly restrict based on who's actually making this AI call, or on whose behalf the AI is making that data request, and put the necessary data access controls in place.
With Snowflake, you have the ability to bake in row-based access controls and column-level data masking. Everything is based on the roles of the end users and the personas they are attached to. Thereby, you have a natural way to trickle the access controls down to the data level. Right? So that's one layer. The other layer is, as this AI is being called, who has access to call the AI? Does this person even have access to call the AI? So having that access control on who can call AI, and who can use it for what purpose, is that second layer. And the third layer is the way AI is called and invoked in terms of getting the necessary data: are we really looking at it from the workflow perspective and carrying the security access controls all the way down? Right? So think of this as a multilayer cake where you trickle the access controls and the governance all the way down, with the necessary guidelines. When agents are being built, are we really restricting them to what they are supposed to do, in their role, and are we really providing the right guardrails from that perspective? All of that forms those layers, and that's how many of our customers who are early adopters of AI have approached this. Yeah. And that's why we love partnering with Snowflake, which has baked in that security and RBAC and colocation-of-data layer, because the more you can operate within your own construct and within your own realm of control, the better.
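To make the first layer of that cake concrete, here is the idea of role-driven column masking expressed in plain Python rather than in Snowflake's actual policy syntax. This is an illustrative sketch only; the role names, column names, and mask token are assumptions, and in Snowflake itself this would be a masking policy attached to the column so every query path, including an AI agent acting on a user's behalf, inherits it automatically.

```python
# Illustrative sketch (not Snowflake's API): the caller's role decides
# whether sensitive columns come back masked. Role and column names are
# hypothetical examples.
MASKED_COLUMNS = {"ssn", "mrn"}       # columns covered by a masking policy
UNMASKED_ROLES = {"PHI_ANALYST"}      # roles permitted to see clear values

def apply_masking(row: dict, role: str) -> dict:
    """Return a copy of the row, masking protected columns unless the
    caller's role is explicitly allowed to see them."""
    if role in UNMASKED_ROLES:
        return dict(row)
    return {k: ("***MASKED***" if k in MASKED_COLUMNS else v)
            for k, v in row.items()}
```

The design point carries over: the mask lives with the data, keyed to the role, so there is no separate "AI code path" to audit.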
So, you know, Snowflake having frontier models available to use, where your data doesn't have to leave your own ecosystem, is quite handy for mitigating some of those concerns about sending data externally, especially if we're not doing HIEs and things like that. So, yeah, it's one of the reasons we feel Snowflake is well positioned for these health organizations to start building this kind of stuff, because the colocation and security of your patient data and your PHI is really what matters more than anything else. Okay, so I'm going to wrap up with some quick blocking-and-tackling items. We've got a QR code on the screen here. Feel free to scan that and download the report. It will also be sent along with the recording of this webinar, for those who may have missed it or want to rewatch it, and it will be available on Hakoda's website as well. And the last thing before thanking these guys: Snowflake Summit is coming up. Hakoda has a booth there, Booth 2106. Come find us if you want to chat about interoperability, AI, the combination of those pillars, and maybe how we're helping organizations build and stack wins along the way as they work towards using AI. And the last thing I want to say is thank you to Sharan and thank you to Murali for joining us today. Your insight is really valuable. I know it because I talk to organizations every day, and hearing this kind of stuff is going to help a lot of people. So thank you both for presenting today. That's it. Thanks for having us. Thank you for having us. It's been a pleasure. Alright, everybody. Have a wonderful day, and talk to everyone soon. Take care. Bye.