Sarrica Fink · 8 min read · December 1, 2022 · Webinar

Medcase Roundtable [Transcript]: Healthcare Data Privacy & Interoperability

Where do healthcare data interoperability and privacy currently stand in today’s market? Why does adopting new methods still pose a challenge for many organizations?

Sarrica Fink: My name is Sarrica, I’m the marketing manager at Medcase, and I’m very honored to host these two amazing experts in their field.

So first, we have John Kalafut, he is the co-founder and CEO of Asher Orion Group. It’s a professional services organization activating Medical AI for improved outcomes. And we also have Brian Goad, and Brian has over 25 years of experience in the Healthcare Information Technology arena as a manager and developer and as a firm believer in team structures. 

So, you’re going to learn so much more about them; those introductions don’t even cover it. But our topic today is “Healthcare Privacy and Data Interoperability.” Where does that currently stand in today’s market? And why is adopting new methods such a challenge for many organizations?

So, I’m going to let you guys take it away, I’m very excited to hear what you have to say. And we are going to have a live Q&A at the end of this so if anybody watching has any questions, feel free to ask, and we will get to them if we can, and that’ll be it. Take it away, guys!

Brian Goad: Let’s see. John, do you want me to go first?

John Kalafut: Yes, sir. 

Brian Goad: Okay. Yeah, next slide, please. Okay, so quickly, we’re just going to talk about interoperability. I mean, I know most of us know what interoperability is, or at least believe we know what it is, but there’s a difference between data moving between two systems and interoperability.

Interoperability is the exchange of data where, once the exchange happens, that data can then be used and recognized by the downstream system, right? So, it’s immediately put to use. It’s not a matter of just FTPing or dropping some data across from one system to the other. So, in terms of interoperability, that’s what we’re talking about. You know, I want to kind of set the stage with that aspect.

The other thing is you’re going to hear the term “engine” from me, and the engine is basically an application that allows the data to be transformed, or massaged, if you will, into a usable format. So, it comes out of one system as X, and we transform it into 1, 2, 3 for the receiving system, and it can go to multiple receiving systems for that matter. The engine handles that transformation aspect.
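
To make Brian’s “engine” concept concrete, here is a minimal sketch, in Python, of the kind of transformation an interface engine performs. The field positions follow HL7 v2 conventions, but the code mapping and message content are hypothetical; real engines (Mirth Connect, Rhapsody, and the like) do this with far more robustness.

```python
# Minimal sketch of an interface-engine transformation: the engine receives
# a pipe-delimited HL7 v2 message from system A, remaps one coded value, and
# re-emits the message for system B. The code map and message are invented
# examples, not a real site's configuration.

# Hypothetical: the sender emits sex as "M"/"F"; the receiver expects "1"/"2".
SEX_CODE_MAP = {"M": "1", "F": "2", "U": "9"}

def transform(message: str) -> str:
    segments = [seg.split("|") for seg in message.strip().split("\r")]
    for fields in segments:
        if fields[0] == "PID" and len(fields) > 8:
            # PID-8 is administrative sex in HL7 v2.x.
            fields[8] = SEX_CODE_MAP.get(fields[8], fields[8])
    return "\r".join("|".join(fields) for fields in segments)

adt = ("MSH|^~\\&|SYS_A|HOSP|SYS_B|HOSP|202212010800||ADT^A01|123|P|2.5\r"
       "PID|1||MRN001||DOE^JANE||19700101|F")
print(transform(adt))  # same message, PID-8 now "2" for the downstream system
```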

So, the history of interoperability: the HL7 committee (HL7 is Health Level Seven) was founded and formed in 1987. And the first iteration of HL7 that was released was version 2. By the way, that’s still in use today in many facilities, and it’s kind of the workhorse, the backbone of interoperability today. There have been some adjustments to that over the years, from the 2.1 standard up to, I think we’re right now at 2.7, I believe, as the version we’re still seeing a lot of today. But the first version was released in October of 1987.

Now, what’s interesting to note is that ONC came on board around 2004, and they kind of pushed the meaningful use aspects and things of that nature. That pushed a lot of people into interoperability, doing things they weren’t used to doing, or things they had wanted to accomplish and for which there were now monies available.

So, after that, you saw version 3 come out. It wasn’t as widely accepted as everyone thought it would be. And then here lately, and I’ll say here lately even though 2012 was its first introduction, there’s FHIR. And I know John’s going to talk a lot about FHIR, because that’s the future. So, currently, where we are on the historical side of things is that FHIR and HL7 v2 are working together at this point in time. We’ll talk about the future shortly, but for right now, just know that FHIR is pivotal; we’re at a pivotal point in interoperability.

So, we’re going to talk about what’s good and what’s bad in interoperability. Let me say this, and this is just from my experience: bad interoperability requires hospitals and clinicians to change their process, to change the way they treat patients. It’s more red tape. It adds more overhead to what they’re trying to do. Good interoperability honestly reduces human error, it complements those workflows, it’s timely, it lowers the cost of care, it improves the outcome for the patient, and it improves patient satisfaction.

So, good interoperability is very important to have in that aspect. And then there’s the last little point I want to make before we talk about the slide you see here, because this one’s kind of important to me, and I think John and I might differ just a little bit on this opinion, we’ll see shortly: KISS. Keep It Simple, Stupid, right? That’s what we always thought “KISS” meant, right? Not the rock band, but keep it simple. That’s the approach we’ve always taken. Our interfaces are designed to do what they’re supposed to do and are as simplified as possible so that they can be supported once we’ve completed them. We’ve seen and followed people that build the most complex interfaces that no one really understands except them. So, I just want to say, “Keep your interfaces as simple as possible. Don’t try to make too much happen in one little code segment.”

So, the slide on the screen. So, point to point, years ago before engines– and we discussed what an engine was—a lot of interfaces were point-to-point. So, each individual system would have an API per se, and we’re going to say API for now. But they would have a methodology that they communicated from system A to system B, and then from C to D, E to F, right? And it’s just basically spaghetti at that point, right? So, it’s all over the place. 

Now, I see that a lot of the vendors we’re dealing with right now are writing these FHIR interfaces and doing interconnectivity internally, right? So, the two systems that they sell communicate internally via FHIR. It’s great, it’s perfect! That’s exactly what should be happening within each individual vendor’s suite.

Now, when we’re dealing with systems inside the walls of a hospital, though, I think you should still consider, and this is where we’ll go to the next slide, John, please, running everything through your engine. With the engine, you basically create a broker, a traffic cop per se, that gives you the ability, if you receive a FHIR request or response from a particular system, to actually route that response to the system that requested it.

But you can also capture that response for a system that doesn’t have FHIR capabilities. You could transform it into an ORU (result) message or an ADT (admit/discharge/transfer) message, or even an ORM (order) message, or some sort of scheduling transaction, right? So, you can still utilize it within your enterprise. Doing that with a point-to-point interface is very difficult.
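
As an illustration of the broker idea Brian describes, here is a hedged sketch of an engine taking a FHIR Observation (JSON) and rewriting it as a skeletal HL7 v2 result segment for a downstream system that doesn’t speak FHIR. The mapping is simplified and hypothetical, not a complete ORU^R01 message build.

```python
# Sketch of the "traffic cop" pattern: a FHIR Observation resource arrives as
# JSON, and the engine also renders it as a bare-bones HL7 v2 OBX segment for
# a legacy consumer. Simplified, hypothetical mapping for illustration only.

fhir_observation = {
    "resourceType": "Observation",
    "code": {"coding": [{"system": "http://loinc.org", "code": "718-7",
                         "display": "Hemoglobin"}]},
    "valueQuantity": {"value": 13.2, "unit": "g/dL"},
}

def observation_to_obx(obs: dict, set_id: int = 1) -> str:
    coding = obs["code"]["coding"][0]
    value = obs["valueQuantity"]
    # OBX|set-id|value-type|observation-id|sub-id|value|units|...
    return "|".join([
        "OBX", str(set_id), "NM",
        f"{coding['code']}^{coding['display']}^LN",
        "", str(value["value"]), value["unit"],
    ])

print(observation_to_obx(fhir_observation))
# OBX|1|NM|718-7^Hemoglobin^LN||13.2|g/dL
```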

The point-to-point interface methodology works beautifully if nothing ever changes, right? In other words, you don’t have any OS updates that could cause an impact, and you don’t have version updates taking place within those systems that are communicating directly with each other. If those updates do happen and the two systems haven’t been worked out to communicate together again, then you’re going to have difficulties with your interoperability at that point. Whereas with the engine, you still have everything coming into the engine; you adjust one side of that interfacing and everything keeps moving along.

So, those are the biggest pieces I wanted to put out. I know John’s got quite a bit he wants to talk about, and I just wanted to get in real quick and give a history of what we’ve seen, what’s good and what’s bad, as well as cover where we see interoperability going at this point in time and some of the bad things we see happening. Well, I’m interested to see what John’s got to say. So, Mr. Kalafut.

John Kalafut: Well, thank you, Brian, and we’ll also have some back and forth here, so I figure we’ll definitely pick up some of those topics. And also, for the audience, I think it gives some context as to where we’re coming from. So, again, thanks to my friends at Medcase for the invitation and the opportunity to come on today.

As Sarrica mentioned, I’m John. I’ve spent 20-plus years doing every type of product development in medical devices, and then got involved in larger-scale healthcare informatics a little later in my career. But fundamentally, at heart, I’m an engineer and a scientist. Algorithmic work was really the orientation of my dissertation: how to do numerical computations to make a particular type of radiologic procedure much more precise and personalized.

I had the unique advantage of doing my dissertation work while I worked full time leading an advanced development and research team at a midcap medical device company. That was about the mid-to-late 2000s, and what I saw, both professionally and academically and also really as a patient, was that at that time in imaging (radiology, medical imaging, and also cardiology) there was more widespread adoption of digital management of images. Prior to 2000, I would say, and Brian probably has a better sense, maybe 30% of facilities, hospitals in North America, had their radiology images digitized.

Around 2003 to 2006, we kind of went from that 30% to almost 100%. One of the reasons for that was some technology and scientific breakthroughs in how we do CT scanning and also MRI, which made it physically impossible for radiologists, you know, the physicians who interpret and read medical images, to do their job on film. So, it necessitated the need for data to be transformed from their native state in radiology, because up until that point you would have X-rays and maybe other types of what we call imaging modalities done, and those data could be converted into digital form. But that was expensive; film was a nice, easy analogue format. Imaging modalities like ultrasound, computed tomography (CT), and magnetic resonance imaging (MRI), though, are actually natively digital to begin with, right? You cannot do a CT scan or MRI scan without doing numerics. Under the hood, it’s all numbers.

But what happened in 2003, 2005 is that those imaging technologies became so advanced that you can generate thousands of images for one small part of the anatomy. So, it became physically impossible for a radiologist or cardiologist or other “ologist” dependent on medical imaging to do their job without using software. Thus, we saw this widespread adoption of digital networks.

There was also availability at the time, if you think about what was happening in the world broadly, from an externality perspective, of, I wouldn’t quite say commoditized, but mostly commoditized routers and switches and Ethernet, along with the ability to store data on hard drives. So, you had those capabilities coupled with the medical need and the impact on clinical care, all converging to really spring this digitalization of at least one very big segment of diagnostic medicine, that being medical imaging.

Why am I going through all that? Well, some of it comes from my own personal journey. At the time, as I mentioned, I was developing prognostic and predictive analytics algorithms to do better CT scanning. But by doing that algorithmic work, I also realized: well, you’ve got the folks working to build those digital storage image management systems, what we now call PACS, picture archiving and communication systems, in radiology. You had the folks working on the image acquisition systems, ultrasound, CT, and MRI. And you had departmental information systems, right, for scheduling and so on.

And remember, at the time, around the 2000s, it was not widely adopted, or really ever the case, that you would have a centralized electronic patient management system in the healthcare facility. So, a lot of these departmental systems in radiology and cardiology had to be built understanding that, okay, you’re not necessarily going to have, say, digital orders coming in. You’re not going to have schedules. There are going to be a lot of paper-driven processes. So, you had to work within this hybrid environment.

Brian Goad: Well, best of breed, right?

John Kalafut: Being... Oh, sorry, what’s that?

Brian Goad: That’s what best of breed is. That’s what it was, right? It was best of breed back then, so, yeah.

John Kalafut: Yeah, right, so you would have coding systems and billing systems, right? There’s always the joke in US healthcare IT that EMR systems are a billing database with some clinical stuff tacked onto it, right? And that’s not too far from the truth, because in the US fee-for-service market, it’s always been “how do you ensure you get paid.” So, portions of the electronic medical record system were digitized, but most of the clinical workflows were not. Now, the exception is the little islands, usually the weirdos in the basement, right? Radiology, nuclear medicine, those guys were doing digital stuff.

The realization that I had, both in terms of value and doing innovation, was: well, even within that weirdo portion of the hospital in the basement, there are not a lot of people thinking about the interconnectivity, the data flows that happen from the patient appearing for a scan, or even from the physician deciding to order a study, right? That information, and the systems and checks and balances that happen from that point to getting a patient into an MRI or a CT scanner. Once you’re in the scanner, okay, yes, there are software systems inside of the big giant scanners, and those images are digitized and sent over to a reading system, but there’s not a lot of understanding of what happens in between.

So, that was really, I’ll say, where my career, both academically from an informatics perspective and then in innovating, took on a different flavor. How can we use this information flow, these data streams? In “ology” land, in radiology and cardiology, we had, and still have, a very robust, very well-articulated and architected set of standards that allowed us to do interoperability with the imaging data. That then also allowed secondary uses, secondary workflows; you could build upon that for your imaging innovation, if you will.

So, fast forward maybe 10 years or so, and all of a sudden we now have this other externality, and Brian mentioned this: the meaningful use initiative, at least in the US system, which was a motivation to digitize the records, primarily the clinical information. The administrative information for patients, again, thinking back to the coding and what you are billing your insurers and payers for, was in most countries, and particularly in the US, already in some sort of digital format. There was even a specific standard: Brian mentioned HL7, which evolved for sending information among different systems.

Now, something you might have inferred from Brian’s slides is that the HL7 version 2 standard is agnostic to how that information actually moves from point to point. So, it was not, and unfortunately still is not, that uncommon to see messages encoded, written, according to how the HL7 2.x standard says: “If you are scheduling a procedure, you should put this kind of information in these fields.” And there’s not necessarily a prescriptiveness to which sequence, which order, those information pieces go in. There are some things where, yeah, you’ve got to have the patient ID and a patient medical record number. But you could, and it’s legal in terms of an HL7 transaction, develop that order, what we call an “order message,” as a text file, and have it emailed from one system to another, or FTPed, or dropped on a file store, right?
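
For readers who haven’t seen one, here is an invented, minimal example of what an HL7 v2 order (ORM) message might look like as plain text. The identifiers and values are made up; the point is John’s: the standard defines the segments and fields (MSH, PID, ORC, OBR), while the transport, whether file drop, FTP, TCP, or even email, historically was not prescribed.

```
MSH|^~\&|ORDERSYS|CLINIC|LABSYS|LAB|202212011030||ORM^O01|MSG0001|P|2.5
PID|1||MRN001||DOE^JANE||19700101|F
ORC|NW|ORD123
OBR|1|ORD123||718-7^Hemoglobin^LN
```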

So, what happened in the later 2000s and 2010s, when meaningful use said, “Hey, we are going to now force institutions to start digitizing the clinical records”? You had very few larger-scale systems out there that could enable the digitization of clinical workflows. And you had some of these standards, like HL7 2.x, that were very ingrained and entrenched but not necessarily developed with “how are we going to enable clinical decision making to happen among different actors, across institutions, across regions?” in mind.

A lot of what was in the original HL7 type of standards was about “how do we facilitate the operations of the healthcare environment.” When a patient shows up, when they’re admitted, how do you notify another system that they’re now transferring from the ED to another ward? How do you alert the laboratory: “Hey, you’re getting an order to do some lab work. Wait for the specimens to show up, and then when you do the tests, send back another message that has the data in there,” right? So, they were there to make these information flows happen, but not necessarily designed the way an informaticist or computer scientist would ideally make these things happen, right?

So, from the 2010s onward, I would say the rest of healthcare has in certain ways caught up to what radiology and cardiology had to figure out and deal with, to a certain extent, in the late ’90s and 2000s. And that brings us to today: in most Western countries, the electronic healthcare record system is a known thing, populated with information that’s captured at the point of care by a physician or by somebody else and put into a system. Now again, because of this rush, especially in the US, to get to digitization of electronic medical records, the existing infrastructure, systems, and technologies weren’t, as I said, necessarily architected or constructed in a way that would maximize the utility of this information flowing to and among a whole bunch of different actors so you can start making decisions.

Again, it’s really not anybody’s fault necessarily, because of the market demand pre-2000. I always have a joke: I think the first HIMSS meeting I went to had maybe 3,000 people in attendance, because the electronic medical record, the CMIOs, information management in healthcare, it was this little subsection of the nerds in healthcare saying, “Hey, yeah, we’re going to digitize stuff. Really, we’re going to get there.” And with the systems that existed, the market wasn’t going to bear spending $500,000 on some fancy space-age information system if the physicians weren’t asking for it, if the insurers weren’t asking for it, right?

So, in a certain way, we got what we got. But now fast forward to 2022. One of the big points I wanted to stress and talk about here is how we address a lot of these acute issues. And I know, especially for a lot of you on the line who are providers or in healthcare organizations, you really acutely understand and have lived through the pain and suffering of maybe less well-architected and articulated solutions for bringing in digital workflows.

We are now, and again I’m largely speaking here from the US perspective, seeing current events and more externalities that are really moving us away from the pains and sufferings of talking about just the plumbing: that we have to get the pipes working, whether it’s using an HL7 message or a JSON payload, or how we get one electronic medical record system to present information in a way that another one can read and interpret. We have been moving away from that, thankfully, and largely because of a very impactful law in the US that, by some miracle, was a bipartisan-supported initiative back in 2015 and became enacted in December of 2016: the 21st Century Cures Act.

And in certain regards, the health information aspects of this, I don’t want to say [inaudible 23:18], were a second thought or secondary. A lot of the original intentionality, and why there were 1,400 lobbyists involved in the initial drafting and ratification of the Cures Act, was that there was a lot in the 21st Century Cures Act about how to modernize and streamline evidence collection for pharma and for medtech. So, a lot of the excitement and interest that all of us have seen in the zeitgeist the last two or three years about real-world evidence and real-world data, a lot of that was coming out of the initial intent of this 21st Century Cures Act.

How do we embrace and use other forms of data for either speeding through or, this is probably a better way of saying it, optimizing drugs and devices? But as is the case with most large legislative initiatives, a lot of other items got packed in there. And if you think about it fundamentally, if you want to modernize the way clinical data are captured from the real world, so-called real-world evidence, let’s pause on that for a second. What that means is: historically, any clinical study, clinical trial, or clinical investigation, at least for drugs and what we call Class III medical devices, is done in a very tightly scripted, prospective, very clearly delineated and protocolized approach. Meaning that we are going to ask for these types of data from these types of physicians at these types of institutions; they’re going to be on this kind of data form or in this electronic system; they’re going to go here.

So, it’s a very well-orchestrated and coordinated set of information and data. Because again, in your job as a biostatistician trying to show that a therapy or a diagnostic or a tool is making an impact and is safe and efficacious, you need to rule out any sort of bias or other things that can go wrong, right? Contrast that, though, with the reality of clinical data, which, for the historical reasons I just talked about, haven’t been in structured formats. Even the units that are used vary from system to system.

So, there’s the question of embracing and using data and outcome information collected for patients to now make regulatory decisions, without there being some guardrails or some way of forcing the industry and the community to say: okay, look, we collectively, whether right or wrong, the healthcare establishment, think that collecting and using evidence data as devices and therapies are delivered to market, and using those for the continuation of claims, is worthwhile. Well, then we’re going to need ways of collecting those data, right?

So thus, in many ways, in this Cures Act, that is why data sharing and data interoperability matter, and one of the most important pieces that has happened in the last few years in the US as this has come to fruition is the prohibition on vendors and healthcare systems blocking validated use cases from getting access to information that is trapped inside of the electronic medical record system, maybe not out of proprietary design. It wasn’t necessarily that the electronic health record vendors were sitting around 20 years ago saying, “Hey, how can we make all the data here as proprietary as we can so nobody can get access to it?”

You know, granted, there’s probably some of that in the health system. If you look at the history of finance, it took maybe 30 or 40 years for credit cards and credit bureaus to break banks’ grasp on “this is our customer data.” Yes, sure, there’s a certain amount of that, but largely, a lot of the information hoarding, or information being trapped, was technological to a certain extent, architectural, and of an informatics nature.

So, regardless of the reason for it, one of the most important aspects from a healthcare delivery perspective, whether you’re a researcher, provider, or innovator, is that now the electronic health record system and the healthcare environment itself must allow patients access to this information, and also, as I mentioned, allow access for other valid uses. And for what you see here, I thought it might be interesting to actually look at the legislation itself, because people hear about the Cures Act all the time. These are actual elements of the legislation itself.

So, these four areas of Title IV are really critical now for getting back to what I see, and I think generally speaking, as moving away from “Hey, we’ve been struggling and fighting to get the various actors involved in the healthcare ecosystem to agree upon how we are going to move data back and forth to share them,” at least in the US market.

There is now this externality that says, “Look, if you’re going to get money from CMS, the single payer for Medicare and Medicaid in the US, and if you’re going to accept other federal funds, you have to abide by these mandates,” which have now been turned into administrative statutory requirements by Health and Human Services.

And then there are organizations like ONC, the Office of the National Coordinator, the organization that was initially funded and started during the meaningful use era, 2008, 2009, to say, “Hey, let’s define a system and a process for getting information systems in healthcare to understand each other, and we’re going to incentivize the use of these.” But that Office of the National Coordinator has now taken on much more responsibility and influence, in recognizing: well, how do we realize this definition here? This is what’s in the legislation. And so the Office of the National Coordinator, Health and Human Services, and the FDA have now been forced to say, “How do we reckon with this? How do we now put in executive functions and processes to enable this law?” And you can see it in here, right? If you ever struggle with “Well, what does interoperability mean?”, here’s a good example.

I highlighted some of the areas here that I think are really important. Especially in (A), the idea, the notion, the wisdom in the legislation was: okay, let’s not allow the vendors or implementers of these technologies to say, “Yes, we enable exchange and interoperability, but you’re going to have to deploy 16 different systems, and you have to pay two and a half million each year for every system, and this is written in hexadecimal and you have to assemble it yourself,” right? So, the whole idea was: okay, look, guys, no, you have to make it easy for the end users, the consumers, the researchers, the medical device community, pharmaceutical companies, and patients, to enable access to this information.

And then another piece, in sub-definition (B), is that this also does not mean that everybody everywhere can get access to healthcare data. There is the need to be aware of other existing statutory requirements, whether they be privacy-related, like HIPAA in the US, or the general data privacy regulations and frameworks which guide at least a lot of the rest of the world. You need to be able to transact under those frameworks and that guidance, while also recognizing that you need to open up the pipes, so to speak.

And this picture, I think, is typical of what exists inside a lot of healthcare institutions: over the years, different systems have been implemented, and different integrators have come in and written different pieces of code. When one system was changed: oh, we’ll keep that in there. Oh, we have to make some kind of web service now around all these FTP transactions that happen; how do we do that? So, there has been, and there still is, don’t get me wrong, a lot of this plumbing that needs to be fixed.

But what is happening, as a side effect, and I’d actually say a direct effect, of the 21st Century Cures Act and the prohibitions on blocking access to the information in the electronic health record systems, is that to allow that to happen, the executive functions, again, Health and Human Services and the Office of the National Coordinator, need to define methods and mechanisms to detangle this mess. We’re not going to let everybody define what diameter of pipe they should use, where one vendor marks the hot water pipe with purple and another one uses red.

So, how do we align upon some way of enabling the legislation and the law, but also doing this in a way that’s sustainable? And that’s going to allow us to recognize these other pieces of the initiatives behind the 21st Century Cures Act, and going back also to meaningful use, which is: when we do digitally describe healthcare processes, how do we capitalize on that? How do we now use this to make some important decisions?

So, where we are is: okay, look, there’s now this prohibition. Your EHR system vendor can no longer say, “Yeah, we’ll get to that maybe someday,” or “an open API is going to mean that your data will be at risk.” So really, the landscape today is: okay, nope, look, you have to provide that. Now, this doesn’t mean it comes for free, and it also doesn’t happen immediately. If you’re a healthcare operator or a healthcare organization, or the same thing for a vendor, I think there are a lot of newer entrants, especially in digital health, who, when they start working on the provider side of healthcare, come to the realization: “Oh, shoot, there’s a whole way of working in this space.”

Because fundamentally, healthcare operations, on the provider side in the hospital setting and on the payer side, is enterprise IT, right? It’s not your mobile phone working on a network with some downloadable apps, right? So, there are still constraints, and there are still challenges. You still need expert guidance to work here, and it’s not automatic at most institutions that you can walk in with an application you designed that’s going to run in the cloud and have full access to patients’ clinical history. In some instances that will happen, and in other places not. So, you still need that coaching and guidance, understanding what really happens under the hood, if you will.

Another important aspect of the Cures Act is the notion that, well, it’s one thing to have information accessible if you are a patient in health system one. Great, all my doctors and care teams there can see it. Well, the likelihood is you’re going to get care somewhere else, right? You’re likely going to move across the country or across the world, or you’re a citizen of another country coming to the US.

So, another important piece, again thinking about interoperability at a higher level: it’s one thing to have the digitalization, the understanding of information, where they reside and where they lie, within one health system. But now, how do we capitalize on that en masse, if you will? And I think, as a takeaway for this part of the 21st Century Cures Act, anybody that’s been around healthcare IT the last 15, 20 years has seen the various fits and starts called HIEs, Healthcare Information Exchanges. There have been various attempts from legislators, states, and the federal government to make some of these things happen. They’ve largely been cobbled together, and the economics of who pays for them have been a bit all over the board, to say the least.

But now, with this landscape, with this new legislation and the accompanying federal statutory requirements, we’re finally seeing, I think, a harmonized approach to how we are going to really ensure that systems can interchange and share data among themselves, and not just in one geographic area. And again, this is not immediate; it’s not going to happen overnight. There are still lots and lots of informatics and software and IT challenges. In the US in particular, the fact that we don’t have a unified patient identifier makes seemingly simple tasks a thousand times more complicated.

So, you still have issues like that, but nonetheless, we are now at the point of: yes, we are going to have this ability to aggregate. And thinking about it from an innovation perspective, you can now consent, and this third point is extremely important. It’s one thing to have information shared and to have that accessibility. It is implicit in your relationship with the healthcare provider, whether that provider is a physician or a healthcare organization, a system, or a payer, that there’s a rationale, a reason these data are going to be shared: it’s for the betterment of the care, it’s for the betterment of potential public health. Things get a lot weirder, right, when you start talking about, “Well, now, what if somebody wants to monetize this information?” What does that really mean? What exactly is that?

And another issue we have in healthcare, and this is my last soapbox rant here, I’ll come back to it, is that one of the biggest differences in the biomedical industry, especially as it relates to data innovation and machine learning and analytics, is that in biomedicine you are also working with representations of a human being. It is by statute and by ethical decree that if you’re working with a specimen or data from a human, that needs to be treated with the same respect as you would treat that patient if you were doing a procedure. So, you have to recognize the different layers of protection for human subjects and incorporate them into what you’re going to do with the data and why. That is an important piece, and it too is underscored throughout the 21st Century Cures Act. So, again, I’ll riff on that a little later.

A few important takeaways here, now, for the innovators, the folks building stuff out there, some of the tools of the trade, so to speak. The USCDI: this is now becoming the de rigueur, if you will, means of naming stuff. Now, interoperability, as I had on my first slide, is not just about the pipes, it’s not just about the plumbing, and it’s not just that System A can dock into System B. The informatics capabilities and implications of interoperability become much more challenging, to an extent, once you’ve agreed upon how you move data back and forth. Now you start shifting the conversation from “Can we connect?” to “Okay, we’ve connected.”

Now, what do these data mean? One concept in one system, in one part of the country or the world, might mean something else somewhere else. One institution is using one set of standards for their lab medicine calibration; something as simple as units may differ from laboratory to laboratory. So, the USCDI is an important part of the semantic effort, if you will. How do you establish common meaning and understanding of data that are moving over defined APIs? And so, FHIR, the Fast Healthcare Interoperability...

Brian Goad: Resources.

John Kalafut: ...Resources, thanks. FHIR itself is not a data model, right? And “data model” means something if you have a computer science or informatics background, right? FHIR is a way of ensuring consistent connectivity. It does call out that there are certain elements and certain things you have to have in certain places, but you do not get a data model just by saying “we’re using FHIR,” right? So, there needs to be this other, what we call semantic, understanding of interoperability, which is an ongoing effort.
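
To ground John’s distinction: FHIR standardizes RESTful resources and their shape on the wire, but how you model and store the data internally is still up to you. Here is a minimal sketch of a FHIR read interaction in Python; the base URL and patient ID are hypothetical placeholders, not a real server.

```python
# Minimal sketch of a FHIR read interaction. FHIR standardizes the REST
# endpoint shape (GET [base]/Patient/[id]) and the JSON resource structure,
# but not your downstream data model. Base URL and ID are hypothetical.
import requests

FHIR_BASE = "https://fhir.example.org/r4"  # hypothetical test server

resp = requests.get(
    f"{FHIR_BASE}/Patient/123",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
patient = resp.json()

# The resource tells you its type and content; what you do with it next
# (your internal schema, your warehouse model) is a separate design problem.
print(patient.get("resourceType"), patient.get("birthDate"))
```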

So, there’s having common vocabularies or ontologies that can be shared, and this is where there are opportunities for innovation too. For those of you who have been involved in general IT, there was an aborted effort, maybe in the mid-2000s, called the Semantic Web. There was this whole effort around “how do we bring meaning to the World Wide Web?”

Lo and behold, it just became easier to apply statistical language processing than to try to define some massive set of ontologies applied to every web transaction. Now, in healthcare, it’s a different story, right? We’re not dealing with everything from ordering your Halloween costume to getting a new vacuum cleaner. So, again, there’s still a lot of space for having ontologies and understanding in healthcare.

SMART on FHIR: so, interoperability is not just about the data and the meaning, it’s also about applications. We’ve struggled for decades in healthcare to get one application to acknowledge another one. Now, in the FHIR world, they are also embracing open standards and standards from the general IT community. So, yes, the notions of single sign-on and reliable tokenization and authentication of sessions, cool, they’re in there. Now, the holy grail is: how do you define a set of standard primitives to allow your application, which may not be from your EMR vendor, to be launched on the same desktop?

And so, you can launch into another application to do some funky, cool thing, but you don’t have to have the user log into another system, or have a whole other set of user interfaces. What happens when you interrupt that workflow for that physician? You have to log back in, right? 
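
As a hedged illustration of what SMART on FHIR standardizes here, this sketch shows the first leg of the SMART App Launch flow in Python: the EHR hands the app an `iss` (the FHIR server) and an opaque `launch` token, and the app redirects the browser to the server’s OAuth2 authorization endpoint. The client ID, redirect URI, and endpoints are hypothetical placeholders.

```python
# Sketch of the first leg of a SMART App Launch (EHR launch flow): the EHR
# opens the app with "iss" and a "launch" token, and the app redirects the
# browser to the server's OAuth2 authorization endpoint. Client ID, redirect
# URI, and endpoints here are hypothetical.
from urllib.parse import urlencode

def build_authorize_url(iss: str, launch: str, authorize_endpoint: str) -> str:
    params = {
        "response_type": "code",
        "client_id": "my-smart-app",               # hypothetical registration
        "redirect_uri": "https://app.example.org/callback",
        "scope": "launch openid fhirUser patient/*.read",
        "aud": iss,          # ties the grant to this FHIR server
        "launch": launch,    # opaque context token handed over by the EHR
        "state": "af0ifjsldkj",  # anti-CSRF value; generate randomly in practice
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

print(build_authorize_url(
    iss="https://ehr.example.org/fhir",
    launch="xyz123",
    authorize_endpoint="https://ehr.example.org/oauth2/authorize",
))
```

The payoff is the scenario John describes: the app inherits the user session and patient context from the EHR desktop, so the physician is never asked to log in a second time.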

So, this notion of having clean ways of application interoperability is very important. FHIRcast, again, is an evolving set of standards; it’s a very active community. It may end up not being something that gets widely deployed, but it is extremely important and needed. For those of you who are veterans, you may remember a horrible acronym, CCOW, from the earlier days of application interoperability, which had good intent but suffered in its implementation and in a lot of the assumptions that were made about real workflows in healthcare. FHIRcast, though, is trying to learn from those lessons and bring some commonality to how applications can interchange.

I mentioned this here, and I get back to that point about the semantics: it’s not enough just to have data elements and information in a standard form. You need some way of also defining their meanings. And so, something that is broadly embraced by the medical informatics research community, and also by outcomes research, observational research, and a lot of institutions building research or enterprise data warehouses, is what’s called the OMOP Common Data Model. HL7 3.x and 4.x, which is FHIR, fit within this common data model. It’s a way of saying and enabling: “Hey, there are different healthcare systems built for different tasks, so we shouldn’t try to manage everything in one big holistic, monolithic data model. Let’s recognize there are many differences among these different systems, but how do we allow them to interoperate and at least have some common understanding across them?”
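
A tiny, hedged illustration of what that “common understanding” buys you in an OMOP-style model: local lab codes from two source systems map to one standard concept, so a single query works across both. The concept IDs and rows below are illustrative, not pulled from a real vocabulary release.

```python
# Illustrative OMOP-style harmonization: two source systems use different
# local lab codes, but both map to one standard concept in the MEASUREMENT
# table, so analytics can query a single concept_id. IDs are illustrative.

LOCAL_TO_STANDARD = {
    ("sys_a", "HGB"):   3000963,  # -> standard concept for Hemoglobin
    ("sys_b", "718-7"): 3000963,  # same concept, different source code
}

source_rows = [
    {"system": "sys_a", "code": "HGB",   "value": 13.2, "unit": "g/dL"},
    {"system": "sys_b", "code": "718-7", "value": 8.2,  "unit": "mmol/L"},
]

measurements = [
    {
        "measurement_concept_id": LOCAL_TO_STANDARD[(r["system"], r["code"])],
        "value_as_number": r["value"],
        "unit_source_value": r["unit"],  # unit harmonization is its own step
    }
    for r in source_rows
]

# One question, asked once, across both source systems:
hgb = [m for m in measurements if m["measurement_concept_id"] == 3000963]
print(len(hgb), "hemoglobin results")
```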

So, the last piece here is, again, the “so what” of all this interoperability, data meaning, and applications: there is the hope, the promise, that you get these data, you get understanding, you get metrics, and you can now start informing and making better innovations that help make diagnostic decisions more accurate and therapeutic recommendations more specific.

However, we have to temper that enthusiasm with the realization that in biomedical research, as I mentioned before, you are dealing with a human at the end of the day. With an algorithm that is going to give some idea about the propensity of a patient to have cancer, the negative ramification of it being wrong is a heck of a lot worse than a bad recommendation in your Netflix queue.

And so one thing that I’ve seen, and Brian, I’m sure you’ve seen this too, is that often when technology-driven folks get into healthcare, they think, “Hey, all we need is data; data’s the new oil.” To quote the mathematician Judea Pearl, “data are profoundly dumb.” And in healthcare, it’s even worse, because you’re never going to get 100% clarity of data. If we did, we would live to be a thousand years old, right? So, medicine is still very much an art along with the science, and you can’t replace the inferential decisions, the inferential choices, made by nurses and physicians based on what they’ve seen in the past. It’s not just all probabilistic and statistical, right? If it were, we would just replace every physician with a set of decision trees.

And so again, it’s not just about having all these data. You need to consider: what is the hypothesis you’re testing? Does it make sense to use these data? How do you bring them together for meaningful use, pun intended, but in a way that improves the outcome? So, that is my little spiel; let me shut up there, and Brian, I’ll give you a chance to react to some of that. There are a few items we could also talk about.

Brian Goad: Sure, definitely, let’s go back, because absolutely, best of breed. I mean, best-of-breed software, and you hit on it quite a bit in there. When I think of what you were describing within the model today, and let’s just talk about the US: radiology systems are purchased by those departments to complement the way they operate, right?

So, it’s not a forced measure that you will use this radiology system, that this is the “one” we’re going to use, or this lab system or whatever. And that’s where I think our models and all of our interoperability, all of those particular pieces, those are the things that make interoperability more difficult, right? We have different ways of operating throughout the healthcare industry, and if you’ve seen one implementation of a particular piece of software, you’ve seen one, because everyone operates differently. So, that’s one of the things, and I definitely agree with you on that. Then there’s the patient ID across the enterprise; we both know what PIX is, I mean, everyone knows what PIX means and things of that nature, but then there’s going outside of the walls, to the HIE models and things of that nature.

So, stepping out into not just our communities, but our states and countries for that matter, to share this information, the longitudinal view. What if I’m on vacation? It’s one thing for me to go to multiple hospitals in my community with my data following me, but it’s another thing for me to be on vacation on the West Coast, something happens, and they can see all of my records without break-glass, right? We didn’t talk about all of that stuff, but those kinds of things are where I see the future going. And honestly, in my lifetime anyway, I don’t think HL7 version 2 will be gone. I think it’s so ingrained, and the reason being: if it works today, why would anyone change it, kind of thing?

John Kalafut: Sure, and well, again, that gets back to the intent, right? They’re event-based transactional messages, right?

Brian Goad: Yeah, and that’s their purpose, right? You’re absolutely right. That was the purpose. “Hey, dietary, Brian’s in the hospital, he’s got a nut allergy, right? Make sure you don’t send nuts into his room,” kind of thing, or whatever the case may be. That was the purpose so that everyone knew that I was there and what I was there for so that they could properly attend to me, so anyway.

John Kalafut: Yep, all right. No, that’s great. Actually, you know what, there are a few good questions coming in here. So, one is about best practices for managing security and breaches. So, again, if we have data being interoperated and moving around, what are some best practices to prevent, or at least mitigate, the damage from security breaches if they happen? I’ll give a quick run at that.

So, one answer is that there is a definite advantage to using public cloud infrastructure, because it’s one thing to have maybe 10 security professionals on a vendor’s team dealing with a hosted deployment versus 2,000 from AWS or Azure. So, we’re starting to see that as kind of a best practice, in a sense.

However, that’s not going to be feasible for all places, in all deployment scenarios, and definitely not in all parts of the world. The economics don’t always make sense, as you can attest, Brian; you’re not necessarily going to replace all of your on-premises or hosted infrastructure with a cloud provider, if only because of the cost of your internet bill.

So, something I would really recommend looking at in terms of the state of the art is this notion called privacy-enhancing computing technologies. How can you encrypt everything at rest, in transit, and even as you operate on it? So, if you do have a ransomware attack, even if that attack were somehow to hit your edges, even as data are going to the cloud, you’re still operating on everything in an encrypted state.
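
As a minimal sketch of the “encrypted at rest” piece John mentions, assuming the third-party Python `cryptography` package: records are encrypted before they ever touch disk or an object store, so an exfiltrated copy is unreadable without the key. Key management, the genuinely hard part, is deliberately out of scope here.

```python
# Sketch of "encrypted at rest" using Fernet (AES-based authenticated
# symmetric encryption) from the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: held in a KMS/HSM, never beside the data
cipher = Fernet(key)

record = b'{"mrn": "MRN001", "hgb": 13.2}'
blob = cipher.encrypt(record)          # what lands on disk / in the bucket
assert cipher.decrypt(blob) == record  # only the key holder can read it
print(blob[:16], "...")                # ciphertext is opaque to an attacker
```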

So, even if those data get sucked up in a ransomware attack or a leak, well, you can say, “Hey, fine, nobody’s going to see those, because they’re all encrypted at rest. And by the way, we have a redundant path and redundant system for you.” So, Brian, especially from working in the trenches every day, what do you see in terms of best practices for security? And this also relates to our question about privacy regulation.

Brian Goad: Yeah, definitely. I mean, we’re seeing quite a bit more of the certificate-level interfaces, right? So, certificates on both ends of the communication; the pipe’s still there. As far as the privacy aspects, I mean, that’s a huge one, right? Because the thing about it is, you can opt in or opt out at the patient level, but you still have patient data flowing across all of these systems. And so securing privacy, I think, is something that needs more attention with all this data flowing across, something we haven’t really touched on a whole lot. And not just us, John; what I’m saying is the industry. What’s the next big thing?

We’ve got this new, and I’m saying 2012 “new,” methodology to communicate that’s added onto a very solid foundation of interoperability. But where are we going next? You know, something needs to improve the ability to protect that privacy as well as the security aspect. Certificates have helped tremendously, but I don’t think they’re the be-all and end-all; I think it needs to evolve more, kind of thing.

John Kalafut: Yeah, especially with a lot of the known limitations. I mean, these are being addressed in some of the web standards, right? But there’s been a slate of certificate spoofing and other approaches, especially if a penetrator gets into the network, right? So, yeah, I think largely in healthcare, a lot of the same principles as in general IT apply, right? You have to assume there’s going to be some penetration; somebody’s going to get in there.

But how do you then contain and minimize the issues? I think from the privacy perspective, it’s interesting: if you look, at least in the US, at HIPAA breaches and where there have been unintended disclosures of information, the majority of them are not technical, cyber-attack related, right? The majority of privacy leaks are somebody leaving a folder, and this is a real story, a folder of 5,000 patient records, on the Boston T, on the subway.

Now, ransomware attacks, which have really been hitting the healthcare enterprise environment especially, are less about the disclosure of all the information in there; they’re more about bringing the operations to their knees, bringing them down. But I think there is this fundamental notion that the data being used for the treatment of your disease and your day-to-day care need a second, more important level of governance around how they are going to be used and for what purposes.

From a technology perspective, this is why I’d say most public cloud platforms, hosted platforms, and electronic medical records have invested in various consenting modules and forms, which make it feasible in reality to do an audit trail, to understand, and to allow patients to opt in.

I think one of the biggest challenges I see in the US and in a lot of healthcare systems, especially when it comes to data-based research, is that patients consent that, “Hey, their data’s going to be used, when it’s de-identified, for potential secondary use” or whatever, and often that’s buried within a seven-page disclosure document. So, it’s one thing to say, “Oh, yeah, look, we disclosed to the patient, and they consented that we can use their information for various purposes,” but it’s another thing: do we really understand that?

So, how do you ensure that? Again, there are technical ways of making that happen, and from the governance perspective, the US federal government and the states, and the same thing in different parts of the world, are definitely there to force it. But how do you do that from an operational perspective? Maybe that’s not so easy.

Brian Goad: Right? Well, and that’s a very good point. So, we’ve actually worked on quite a few projects where we’ve taken data for research purposes through quite a bit of de-identification. Now, here’s the key to it, because here’s what’s always the ask afterwards, right? We have all this de-identified data that we’ve run through our engine and sent down to a database system that researchers can utilize and pull all this great information out of. But then, at some point, somebody wants to put it back together with names, kind of thing, to make sure that it is accurate.

You know, for the research we’ve done: is the outcome accurate? Is this basically the same patient that we’re looking at in the same model? So, that’s been a difficult challenge. We’ve actually built a method to do that. It’s probably not the greatest method in the world, to be honest with you, but it does work. But securing data for research, the only way I can even foresee doing that is anonymization.

John Kalafut: Yep, great. Yeah, and I’ll just give one last thought; I mentioned this a little bit. So, Julia asked a question here about how we enable that data sharing for research. These privacy-enhancing computing technologies really are some of the more state-of-the-art ways of doing this. So, multi-party computation; what we call federated learning from an analytics perspective, where you bring algorithm development to the data pools, keeping the data at rest; homomorphic encryption.

So, there’s a whole emerging science, and set of capabilities, for thinking differently about how you actually do your research activities without compromising the data corpus. Because the other thing that’s tricky, and I lived through this in my career, is that if you completely anonymize a lot of data sets, it makes them useless in certain applications for research and development, right? So, you do need ways of doing meaningful, what we call pseudonymization, that are also mindful of privacy laws, and that’s again what some of these technologies engender.
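
A hedged sketch of the pseudonymization John contrasts with full anonymization, which also speaks to Brian’s earlier point about re-linking records: a keyed HMAC over the MRN yields a stable research ID, so the same patient links across data sets, while only the holder of the secret key (for example, an honest broker inside the covered entity) can re-identify. The key and field names are hypothetical.

```python
# Sketch of keyed pseudonymization (as opposed to irreversible anonymization):
# an HMAC over the MRN yields a stable research ID, so records for the same
# patient still link across data sets, while re-identification requires the
# secret key. Key and field names are hypothetical placeholders.
import hmac, hashlib

SECRET_KEY = b"held-by-the-honest-broker-only"  # hypothetical; keep in a vault

def pseudonymize(mrn: str) -> str:
    return hmac.new(SECRET_KEY, mrn.encode(), hashlib.sha256).hexdigest()[:16]

row = {"mrn": "MRN001", "hgb": 13.2}
research_row = {"subject_id": pseudonymize(row["mrn"]), "hgb": row["hgb"]}
print(research_row)  # the same MRN always maps to the same subject_id
```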

But again, I would say, look at what’s state of the art out there. Likewise, there was a good question from LinkedIn here about intergovernmental discussions around standardization; that’s maybe a whole other 45-minute discussion. But I think what’s fortunate, at least now, about this common data element effort is that initiatives like OMOP and OHDSI are getting cross-regional, cross-geographic buy-in and understanding. And so different groups, the EC in particular, are looking at a lot of these, at how to enable secure privacy computing for research across the EU zone. A lot of these topics are now being brought up and thought about. But again, that’s a very good observation and a topic that needs a lot of help. So, I think, yeah... So, no, again, thanks for the opportunity here for us to ramble on a bit and talk about the state of things.

Sarrica Fink: I feel like I’m [inaudible 57:00] expert. Thank you guys so much! This was amazing. So, just a very formal shout-out: you are experts, John and Brian; thank you so much for being here. And thank you so much to our guests who stayed with us and asked questions; we love the interaction with you guys.

And yeah, there are definitely a lot of talking points on this topic, so I’m sure we could go to part 10 of videos like this. But if anyone here would like to discuss it further, or if you have any suggestions or feedback about future roundtable topics, we would love to hear your opinion. You can head over to our website, www.medcase.health, and drop us a message there.

John and Brian, I don’t know if there are ways people can reach out to you if they have any questions. You can always add them on LinkedIn.

Brian Goad: Definitely.

Sarrica Fink: But that’s it for us.

Brian Goad: All right, thank you.

Sarrica Fink: Thank you so much, everyone.

John Kalafut: All right.

Sarrica Fink: Bye!

John Kalafut: Thank you, everyone. All right, take care. Bye!
