Making Digital Health and AI Safer for Patients - Insights from Experts on Healthcare Provider Technologies and SAFER
Published: Apr 2, 2025
Check out this panel discussion with Dean Sittig, co-author of the SAFER Guides, advisor to the ASTP, and professor of Biomedical Informatics at the University of Texas; Arvind Kumar, Digital Health Leader at EisnerAmper; and Reese Gomez, Founder and CEO of SalesSparx, LLC.
Discover how to capture the benefits and mitigate the risks of AI and other digital health technologies to drive better patient outcomes and streamlined operations. Watch the full video to gain valuable knowledge from industry experts on leveraging technology for a safer and more efficient healthcare system.
Transcript
EisnerAmper:
At EisnerAmper, we are creative problem-solvers that take a 360-degree approach focusing on you. We're an award-winning firm with decades of experience. EisnerAmper, let's get you ready.
Reese Gomez:
Thanks for taking the time to have this discussion. In talking to you both as experts in the field, what we're seeing is that while a lot of this technology, whether it's electronic health records or AI, is being implemented to make health systems safer, sometimes there's also increased risk. What we want to talk about is where you see those increases in risk occurring and what organizations can do to mitigate them. So could you give us a little bit about your background and where you honed your craft in the safety area?
Dean Sittig:
I just retired from the University of Texas, where I was a professor of Biomedical Informatics, and I'd been doing that at Texas for about the last 15 years or so, working a lot with Hardeep Singh at Baylor College of Medicine, and so we started really focusing on how the EHR affected clinical care. And it was right about that time when the ARRA stimulus package was passed, and we were talking to some people in the government, saying, the U.S. government is spending $35 billion to really push these EHRs out into healthcare systems, we're seeing some issues with them, they're not as safe as they should be, and we think the government should have some sort of guidance for how people could develop these things safely.
Reese Gomez:
That's great. Well, thank you very much. Arvind, can you give us a little bit of your background in the safety area and how you sort of look at it from your perspective?
Arvind Kumar:
Sure. Happy to do that. So I started out as a process engineer and pivoted to working with electronic health record deployments, and I had an opportunity to work with some very large-scale transformations, where an entire country or state was looking to move to electronic health records, such as in the UK, in Malta, and some other geographies. And at that time, it became very clear that while electronic health records provide a lot of options around efficiency and safety, the real crux of using them safely came to light when I got an opportunity to work in the Harvard ecosystem at CRICO, which is a med mal carrier, or captive, but had the collaboration of the different organizations to look at data collectively across about 23 entities in the Harvard ecosystem, including Partners HealthCare, which is now MGB, and Boston Children's. So we built some models.
And at that time, we also had a chance to intersect with Dean, trying to reverse-engineer how some of these safety events and claims could have been avoided, or how the right guardrails could have been provided if the EHR had been configured right. So that became the starting point, and that was back in 2014, and that allowed us to then take that to very large organizations like Mayo, BJC, and Kaiser Permanente, to name a few, and build up our library of knowledge around specific vendor configurations that could help with safety. With the work that Hardeep and Dean did back in 2014, and then the ONC being able to provide some guidance through the SAFER Guides around the EHR, we very quickly helped other organizations of different sizes to look at safety through the use of the electronic health record.
Reese Gomez:
That's really helpful. If you think at the highest level, we've just spent billions of dollars in capital to try and make healthcare safer, so where are the areas where it falls down? Where are the risk areas that you've seen, from your perspective, where in spite of all of that attention, effort, and capital, things just fall through the cracks in unintended ways?
Dean Sittig:
Right. I've been a medical malpractice expert witness for cases, a lot of the time for the defense, but sometimes for the plaintiffs as well, and one of the things that always strikes me in those cases is what we call the Swiss cheese model of safety: if you get a big lump of Swiss cheese, it looks like a solid thing, but if you get a slice, it has all kinds of holes in it. And the idea is that those holes don't line up, and so you can't stick your finger all the way through the cheese. It turns out, in those medical malpractice cases, the holes do line up, and you wouldn't believe how many different holes can line up. So you're perfectly covered unless something goes wrong.
And so it's like that in everyday life too. You are perfectly capable of walking down your steps, but then all of a sudden, it's night, the lights are out, and someone's left a toy on the step, and you step on the toy, slip and twist your ankle, fall, and hit your head, and you die. And you're like, "How could you die falling down the steps? I've been walking down the steps every day for my entire life." And you're like, "Well ..."
And there have been toys on the steps every day, but it hasn't been dark, and it hasn't been ... whatever. And so these things just line up. And I just can't ... It's shocking.
They'll tell me what's going on in the case, and I'll say, "Well, what about the doctor?" "Oh, he was on vacation." "Well, what about the patient?" "That patient has died now. We can't talk to him."
And so we just have things happen that you would never expect. In one of my cases, outside of Boston, two patients with the same first name and last name, and dates of birth within a few years of each other, had the same radiology procedure done on the same day at the same hospital, and the doctor, instead of looking up Mr. A's record, looked at Mr. B's. Mr. B had a cancer on his kidney, and Mr. A got surgery for cancer on his kidney. And you think, "How could that possibly have happened?" We've got so many safeguards in the hospital about looking at the X-rays before the surgery, and people signing off, and it turned out that someone had misconfigured the firewall on the computers in the OR, so they couldn't get out to the outside hospital to see the X-rays. And the doctor said, "Don't worry, I saw this kidney."
"It has this huge mass on it. We can go ahead and do it. We don't need to look at the kidney." And it went on and on, and they took the kidney out, and the guy said, "Well, it doesn't feel like there's a tumor here." And they took it to the pathology, and the pathology came back by 20 minutes and said, "There is nothing wrong with this kidney."
And there were two surgeons in the room, and the secondary surgeon looked it up on the computer right then, and he goes, "Oh my God, there were two patients with the same name. We took it out of the wrong patient." And you're thinking, there was one mistake at the beginning: he looked at the wrong record, and then after that, he just kept documenting in all these places and checking things off. There was a kidney, it had cancer, and people were like, "Kidney and cancer," and they did their time-out, and they said, "What are we doing? We're doing a surgery to remove this cancerous kidney," and they checked everything.
They did all this stuff right, except there was this one little mistake that just kept going right through all those holes. So you think you're safe, and you are safe, unless something happens that you just weren't expecting. And if two or three or four things happen, then you really are in trouble. It really tests you.
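Dean's Swiss cheese point can be put roughly in numbers. The sketch below is purely illustrative, not from the discussion: it assumes four independent safeguards, each with a hypothetical 5% chance of missing an error.

```python
# Rough illustration of the Swiss cheese model: harm reaches the
# patient only when every layer's "holes" line up. All layer names
# and failure probabilities here are illustrative assumptions.

layers = {
    "pre-op imaging review": 0.05,
    "chart verification": 0.05,
    "surgical time-out": 0.05,
    "second-surgeon check": 0.05,
}

p_all_fail = 1.0
for p_miss in layers.values():
    p_all_fail *= p_miss  # assumes the layers fail independently

print(f"P(all {len(layers)} layers fail together) = {p_all_fail:.2e}")
# ~6.25e-06: rare per encounter, but across millions of encounters
# a year, the holes will occasionally line up.
```

Note that the independence assumption is exactly what breaks down in cases like the kidney story: once the wrong record was pulled, every downstream check verified the same wrong information, so the layers failed together far more often than independence would predict.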
Reese Gomez:
I think the thing that we run into, and what's really interesting, Dean, reading some of the work you've done, is that you're trying to put more reliable systems in place, systems that have fewer holes, systems whose holes are less likely to line up. And I think the attitude of some of the organizations we've seen is that they look at this as a series of unfortunate events, that no matter what you do, you just can't prevent it. So they look at it as the cost of doing business: "X amount of problems are going to happen no matter what I do."
And I think what we are saying, and what you're saying, is that there is something you can do; we can control this. So think back to the organizations you've worked in: you have a CEO who's just allocated a billion dollars' worth of capital to do this, and yes, it's going to make the organization better. What would you say to that CEO to really look out for?
Dean Sittig:
People talk about five 9s of reliability, and usually that means five 9s to the right of the decimal place. In healthcare, we're like one 9; we're at about 90%. And so when you're making 10% mistakes, think about how many things you do to a patient in the hospital every day: maybe 20 different medication administrations in a day, several procedures, and lots of nursing tasks. Think about how many decisions and how many things are made, and if 10% of those were mistakes, that's going to be several mistakes on every patient every day in the hospital.
And when we do large studies, we find that. Several mistakes on every patient every day in the hospital. Now, the thing that keeps us from just killing everyone in the world, if you think about it, all of those mistakes could have led to a death, is we've got some of the most highly trained and highly caring people. And so now, we're making a different kind of mistake, but we don't have that sort of a safety net that we used to have. The computer can act as a little bit of a safety net, but it can't do all the things that the people were doing.
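To make that arithmetic concrete, here is a quick back-of-the-envelope sketch. The 20 medication administrations and the 90% ("one 9") figure come from Dean's remarks; the other task counts are illustrative assumptions.

```python
# Back-of-the-envelope: expected errors per patient per day at
# different reliability levels. Only the medication count and the
# 90% figure are Dean's; the other counts are hypothetical.

daily_tasks = {
    "medication administrations": 20,  # Dean's figure
    "procedures": 3,                   # illustrative assumption
    "nursing tasks": 30,               # illustrative assumption
}

for reliability in (0.90, 0.99999):    # "one 9" vs. "five 9s"
    errors = sum(daily_tasks.values()) * (1 - reliability)
    print(f"at {reliability:.5f} reliability: "
          f"~{errors:.4f} expected errors per patient per day")

# At 90%, 53 daily tasks yield ~5.3 errors per patient per day,
# consistent with "several mistakes on every patient every day."
# At five 9s, the same workload yields ~0.0005 errors per day.
```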
And so we're in this transition where the computer's not good enough to do it all by itself, and we can't afford to have all the people that we had before. We're in what I'll call a hybrid situation. I thought this hybrid situation wasn't going to last as long as it has, but we've gotten to the point where almost every hospital is using the EHR almost fully throughout the hospital, and there are still some things the computer's not very good at, and we haven't figured out how to do those yet, and that's causing us some mistakes, and so these abnormal test results continue to fall through the cracks. And the abnormal test result is one example of the kind of process where you want what I call a closed loop: I want to do something, it goes to someone, they send it back to me, and I know what happened.
So we're going to close that loop. It could be that I've sent an order for some medications to the pharmacy, the pharmacy's got to deliver that medication to the bedside, and we've got to give that medication to the patient. All along that process, we can lose track of it: first it's an order, then it's a medication, then it's a barcode. And the question is, how do we make sure that we're closing all the loops that we're supposed to? That can happen with test results, with orders, with referrals, with procedures that we've scheduled, and things like that.
And all of those things, we can't get that loop to close adequately. A lot of the time, we used to have a lot of people involved: a doctor would write something on a piece of paper, someone would transcribe that to another piece of paper, they would send it somewhere, those people would look at it and write it on another piece of paper, and then they'd send it back and someone else would check it. Now we have these systems where a doctor types it in, it goes to the pharmacy, it goes right to that automated fill machine that fills the pills into the bottle, and someone just gives a cursory glance at those pills to make sure they're the right ones.
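As a minimal sketch of what closed-loop tracking could look like in software, here is a simple in-memory registry that opens a loop for each order, test, or referral and surfaces anything left open too long. All names here are hypothetical, not any vendor's actual API.

```python
# Sketch of "closed-loop" tracking: every order, test, or referral
# opens a loop that must be explicitly closed, and anything left
# open past a deadline is surfaced for follow-up.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Loop:
    kind: str                       # e.g., "lab result", "referral"
    patient_id: str
    opened_at: datetime
    closed_at: datetime | None = None

    def close(self) -> None:
        self.closed_at = datetime.now()

@dataclass
class LoopRegistry:
    loops: list[Loop] = field(default_factory=list)

    def open(self, kind: str, patient_id: str) -> Loop:
        loop = Loop(kind, patient_id, datetime.now())
        self.loops.append(loop)
        return loop

    def overdue(self, max_age: timedelta) -> list[Loop]:
        """Loops still open past their deadline -- the ones that
        'fall through the cracks' without active surveillance."""
        cutoff = datetime.now() - max_age
        return [l for l in self.loops
                if l.closed_at is None and l.opened_at < cutoff]

# Usage: open a loop when the order is placed; close it only when
# the result is acknowledged by the ordering clinician.
registry = LoopRegistry()
order = registry.open("lab result", patient_id="A123")
# ... result comes back, clinician acknowledges it ...
order.close()
print(registry.overdue(timedelta(hours=48)))  # anything unacknowledged
```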
Reese Gomez:
Well, the way I was looking at it, and maybe Arvind, you can weigh in a little bit on this, and I'm definitely oversimplifying, but when a parent has a new child, you go through the house and you cover up the outlets, and you lock the doors, and you do all these other things. It feels to me like there's an approach you can bring to the table, where you know where those risks are, just like what Dean said, and you understand where those are, and you could understand what the organization has implemented, from an EHR perspective, artificial intelligence, whatever it is, and say, "These are probably the most likely risks," based on what you just said, Dean, and all the extensive work, and you could go through that. Could you give us a little bit of a sense, Arvind, of what you do in that initial upfront process to help understand where those risk areas are?
Arvind Kumar:
So I think Dean hit on a lot of areas in his experience and how they've built the framework for safety, not only for the use of the EHR but for what I call digital technologies broadly. I think the first point of entry is that second area he hit on, which is the use of the technology. That's a huge implementation lift, and most organizations pay attention to it more from the standpoint of making sure patient throughput is good and they can get to billing and collections. There's less of a focus on some of those guardrails, such as in medication administration or in closing the loop, because all these configurations are very much available in the software as it comes from the market. It's a question of paying attention to some of that detail, where you are able to project both your workflow and your staffing allocation, knowing that you will be driving some efficiencies to get to your ROI, and making sure that you are able to monitor and close the loop in the case of, say, referral management.
But in order to do that, given that you have squeezed out some of the staffing that you had prior to having an automated mechanism through the EHR, you have to make sure you've got the right points at which you are surveilling, which goes to the third point Dean mentioned: how do you monitor that these processes are working as intended through the electronic health record, through the use of technology? And that's where, and you and I have discussed this, Dean, we have brought forth the concept of controls from the financial industry. What are some of the control points, including some that are preventative?
Preventative, meaning, "What can I force-function?" So, for example, you remember back in the day, with some homegrown EMRs in the Boston area, you could basically close an encounter without doing a history and physical. That led to a few med mal claims popping up.
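A minimal sketch of a preventive control, or force function, in the sense Arvind describes: the system refuses to complete an action until required documentation exists, rather than auditing it after the fact. Field and function names here are hypothetical, not taken from any actual EMR.

```python
# Preventive control ("force function"): block closing an encounter
# until required documentation, e.g. a history and physical, exists.
# All names are hypothetical, not any vendor's actual data model.

from dataclasses import dataclass, field

REQUIRED_NOTES = {"history_and_physical"}  # illustrative requirement

@dataclass
class Encounter:
    patient_id: str
    notes: set[str] = field(default_factory=set)
    closed: bool = False

def close_encounter(encounter: Encounter) -> None:
    missing = REQUIRED_NOTES - encounter.notes
    if missing:
        # Preventive control: block the action up front rather than
        # catching the omission in a retrospective audit.
        raise ValueError(f"Cannot close encounter: missing {sorted(missing)}")
    encounter.closed = True

enc = Encounter(patient_id="A123")
try:
    close_encounter(enc)          # blocked: no H&P documented yet
except ValueError as e:
    print(e)

enc.notes.add("history_and_physical")
close_encounter(enc)              # now allowed
```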
Reese Gomez:
So in your experience, when a malpractice award is made, and you may not have visibility into this, it feels to me like you would go back and figure out, "Well, let's make sure this doesn't happen again," and at least fix the issue that caused that malpractice event. But I'm curious: do you know, even anecdotally, whether people are working backwards from that?
Arvind Kumar:
You know, Dean, on average, when a med mal claim is asserted against a health system, nationally it costs about $465,000. And that's a pretty steep number for many organizations, because like you said, a lot of them don't go to trial, and the plaintiff is often shooting for the policy limit as a settlement. So with that in mind, there's an expense, and there's a morale impact from a physician or provider burnout perspective. The effect is very, very negative on individuals and on the institution, even if there's not a payout. But isn't that compelling enough for organizations and leaders to say, "I've got to start looking at ways to make my organization safer," not just from the standpoint of avoiding med mal claims, which is the ultimate ROI, but even for some of the safety events like you mentioned, where you've done an intervention incorrectly or you've extended the length of stay?
That extended length of stay or utilization is really a burden back on the institution, given the reimbursement mechanism. But it still doesn't seem to catch attention from the standpoint of, "I now have a very structured framework and lens."
Reese Gomez:
Well, gents, I know we're past our time here by about 10 minutes. We really appreciate, Dean, you taking the time to meet with us. Any final words that you guys want to talk about or make sure we cover before we sign off here?
Dean Sittig:
I think I'm good. This is a complex challenge. It's not easy, it's not labor-free or cost-free, but it makes people's lives better. And I think that most people, they don't think it can happen to their hospital, and they don't think it can happen to them. They don't think they're going to get sick.
The doctors don't think it's going to happen to them. Most doctors don't have a malpractice case against them in their whole career, or if they do, it's one or two. And so they change after it happens, but it only happens once. And so you'd think we could learn from others, and that's what we need to do, is learn from others and fix this stuff because you're going to be in the hospital, and you're going to have medications given to you, and you're going to have diagnostic procedures, and you don't want a mistake to happen to you.
Reese Gomez:
Well, gentlemen, thank you so much for your time, and great, great discussion.
Dean Sittig:
Thank you so much.
Arvind Kumar:
Thank you.