Event ID: 3053674
Event Started: 9/28/2016 8:14:42 AM ET
[Please stand by for real time captions] .

Good morning, everyone. Maybe we can take a seat and we can get started.

Good morning. It is wonderful to see you all here this morning. I am Andy Bindman, Director of AHRQ. We are thrilled to welcome you to our first Research Summit hosted here at AHRQ. This is an exciting day for us. It's also a bit of an experiment for us. This is the first time we are hosting a meeting like this in our new building. It's a bit of an experiment that we hope will go well. Based on what we hear from all of you and the experience we share together, we will assess that and make the determination about scaling it up over time. That's what we do here at AHRQ. We like to implement. We like to learn from our experiences and grow from that. We are excited about this topic today on diagnosis and we are really pleased to have you here. This is kind of a new space for us. We are still breaking in our new home here at AHRQ. It's a delightful place. It has a few kinks, as some of you may have experienced this morning. We want to thank you for your patience as you had an opportunity to interact with some parts of our home, and thank you, very much, for your patience related to that. As I say, I go by Andy. I am new in this position since May of this year. I am new to some of you. I do not know all of you. Although I've spent my career being a health services researcher, the focus on patient safety and diagnostic error is something of a new, exciting topic area for me. I'm really excited to engage with all of you today and to learn about your work in this area and how AHRQ can be a part of it. Aside from all of you in the room today, I should say we have as many as 400 people participating in the meeting via WebEx. We're excited to have such a large community focused on this. My own career has been as a primary care physician, so I am very familiar with the issues that can come up with diagnosis and with diagnostic error.
I practiced for about 30 years prior to coming here, at San Francisco General Hospital, and I always viewed that one of the things that was so exciting and important about practice was to both learn from practice and to take that knowledge, what is learned from practice, and feed it back into practice. That is very much the strategy I hope we will bring to learning about diagnostic error as we work on it today and going forward. For those who are not completely familiar with AHRQ, I think many of you have touched AHRQ in different ways. AHRQ's mission is very much about producing evidence to make healthcare safer, higher quality, more accessible, equitable and affordable. We do that work with partners within Health and Human Services, CMS, NIH and others. We also do it with external partners, including many of yourselves. Thank you, very much, for that. We see going forward as a partnership. Our goal is to not only generate evidence but to see that evidence applied to improve health care quality and to make it safer. In terms of a very brief history of AHRQ's work in patient safety, it was in 1999 that AHRQ was designated as the federal lead in the area of patient safety. Much like today, that work was launched in 2000 with a national Summit on medical errors and patient safety, and it has now been just about 15 years since AHRQ first started giving grants in the area of patient safety. Of course, the field has grown tremendously based on AHRQ's work with researchers and those who can use the evidence to make health care safer. How does AHRQ do its work? As I say, we do invest in grants. We are also involved in synthesizing research findings into evidence to form a consensus about what the best way is to move forward. We then take that material and develop tools and trainings to work with end users, health systems, frontline providers, and health professionals to ensure the evidence is actually applied at the frontlines of care.
Then finally, it's critical for us to have data systems that allow us to monitor the progress we're making toward our goal of improving patient safety. This is what that looks like in terms of connecting up the different pieces. We do not do them in isolation. We do not do our research and evidence and say, maybe someone else will get involved in tools or maybe some of these things will happen. We are very specific about trying to create the connection between evidence and moving it forward into tools. You have seen many of these if you are involved in any way in our work in patient safety. We have done this, for example, in very similar work related to central line infections, in work related to urinary tract catheters, and [Indiscernible] to reduce infections in hospital settings. We then try to monitor the impact of this work by making use of our critical databases here at AHRQ, things such as [Indiscernible] that collects and monitors data from the states, [Indiscernible] claims, and Emergency Department settings. We also have data from MEPS to monitor. You will probably see this slide more than once today, but this is an example of how we try to close the feedback loop to understand how AHRQ's work is [Indiscernible] in patient safety. Between 2010 and 2014, our work contributed to a 17% reduction in hospital-acquired conditions, which saved over 80,000 lives, avoided over 2 million patient harms, and produced close to $20 billion in savings over this 4-year period. That's quite a remarkable return on investment. I'm wondering if you all know what the size of that investment is. From AHRQ's perspective, I will have you reflect a bit on what you think AHRQ's investment is in the patient safety portfolio in this current fiscal year. Was it under $100 million? Between $100 million and $200 million? What do you say?

$76 million is the amount we had, and I will make light of that. I will say that, given the scale of the issues we're dealing with, patient safety issues and errors in medical care may be one of the leading causes of death in the United States. If you compare the [Indiscernible] in the area of patient safety with other leading causes such as cardiovascular disease, cancer, and so forth, our investment pales in comparison to that. We are able to leverage the funds that we have, but I think there are opportunities where we will need to be able to get additional investments to continue to make great progress toward making health care safer. Our focus today, and we will hear in a couple of moments in great depth, is a fantastic report produced by the National Academy of Medicine. This was published in September 2015, called Improving Diagnosis in Health Care. It was part of a series of reports that the National Academy of Medicine has produced. This is part of the Quality Chasm series, which has been highly influential in charting a course for AHRQ's work in this area. AHRQ has continued a partnership with the National Academy of Medicine, helping to cosponsor the development of this incredibly important report, which we will be talking about today as a guide as we think about going forward with a research agenda in the area of improving diagnosis in health care. Some of you may think many times these are not such difficult problems, diagnosing the different types of problems that come to the office. For example, this physician has astutely identified the patient as having a fairly dramatic ringing in the ears. If it were always this simple, of course, we would not need to have the kind of discussion that we are going to have here today. In fact, health care is much more complex than this in many cases. Even when things seem quite simple they can be much more complex than you think.
As I reflected on my own experience today, coming into this meeting, I thought back to a patient I had many years ago who I thought just had essential hypertension, which means I thought the only problem he had was high blood pressure. It took me, as a young physician, more than a year or 2 caring for the individual before I realized his high blood pressure was caused by a tumor called a [Indiscernible]. I had missed, in a sense, an important condition that this individual had. I think it's always an important lesson for us to realize the things that are common and seem obvious are sometimes complex under the surface. I think that is the issue with regard to diagnostic error. That's why we need to be very thoughtful in how we approach this issue here today. What are our goals for today in gathering you together as a broad community with interest in this area?

We want to make sure that we are clear about what diagnostic safety and diagnostic error are so that we are all talking about a common problem in a similar kind of way. We want to begin to know how to measure diagnostic error, because if we are not capable of measuring it, we're not going to be able to make an impact upon it, and demonstrate our impact on it, in the same way that we have in other areas of patient safety. We're also going to start to explore potential solutions: opportunities related to new technologies that can improve our ability to do diagnosis, and the organizational factors in how healthcare is organized that may have an impact on how we approach the issue of diagnosis. First and foremost, we're going to commit ourselves today at AHRQ, and I hope as a community, to a research agenda to make improvement in diagnosis possible going forward, and for us to make consistent progress toward this goal. We have an absolutely fantastic program of speakers here today. I will be introducing them as they come to speak, but I wanted you to be aware of some of the fantastic individuals and to give you a sense of the structure of our program today. We're going to have some wonderful introductory comments that will highlight for us some of the key issues of the National Academy of Medicine report, and highlight how the National Academy of Medicine has worked with AHRQ in the past to promote and catalyze an agenda in the area of patient safety. We will hear from AHRQ's lead in the area of patient safety, Dr. Jeffrey Brady, who helped to lead these efforts for us. We will then hear about physician and patient perspectives from Dr. Gordon Schiff. We'll hear about milestones from Dr. Mark Graber. We will then share views on what the issues are about.
We will have breakout panels, which are opportunities for you to participate in 2 of the breakout panels at different times during the day, focused around the areas of use of data in measurement, Health IT's roles and responsibilities in this issue, and organizational factors that come into play with regard to diagnostic error. Finally, we will reconvene at the end of the day as a group to talk about what we discussed in these panel discussions and to consider next steps going forward as a community. Ultimately, I want to emphasize this is not an activity AHRQ can do alone. We need your partnership and we want to do it together with you. We can only make a difference in improving diagnosis if we do it along with you. That means you as researchers, you as leaders of health systems, you as health professionals participating in the delivery of care, anyone with an interest in this issue. We need your engagement and involvement. We very much look forward to your comments and your participation today. That includes all of you here in the room, as well as those of you participating through WebEx. Thank you, very much, for coming today, and thank you for stepping up to this important challenge. I want to now turn to -- thank you very much [Applause] -- I want to now turn to our speakers and to launch this program in greater depth. Our first speaker this morning is Dr. Victor Dzau. He is the President of the National Academy of Medicine, formerly known as the Institute of Medicine. Dr. Victor Dzau has an extraordinarily rich resume, and I recommend you look at all of the background materials on our speakers, which are available on our website for this program, so that you can read about that more in-depth. I do want to say that Dr. Victor Dzau, in addition to leading the National Academy of Medicine, serves as the Chair in health and [Indiscernible] of the National Academies of Sciences, Engineering, and Medicine and is Vice Chair of the National Research Council. Dr.
Victor Dzau has made significant impact on medicine through his seminal research in cardiovascular medicine and genetics, and his leadership in health care innovation. He is truly one of the world's preeminent health leaders, and Dr. Victor Dzau advises governments, corporations and universities worldwide. We're deeply honored by his presence here today to help in passing the baton from the work of the report that the National Academy of Medicine did on improving diagnosis, handing the baton, in a sense, to us here at AHRQ, where we are, as I say, committed to building the research capacity to take on this challenge. Thank you, very much. Please welcome to the podium, Victor Dzau.

[Applause]

[Indiscernible] is a man of many skills.

Do not speak too early. I have not pulled it off yet.

[Laughter]

Good morning, everyone. Andy, thank you, really so much. I was saying to Andy earlier about his great leadership, and I think it shows. Of course, getting all of us together to look at what is the next big challenge is what leadership at AHRQ is about. I took some notes on his comments, and he said: partnership, relationship with AHRQ, [Indiscernible] promote and catalyze, and [Indiscernible] work together. That's what I am here for. When I was asked to speak, I was preparing to talk about improving diagnosis, but when I told Andy, he said to tell you more about the [Indiscernible] journey in health safety, particularly our relationship with AHRQ. In fact, I think you've got better slides than I did, but I will [Indiscernible] about AHRQ. I am his voice piece, if you will, and will talk about where we are going. That is what I will be doing this morning. Many of you know, just a quick primer, because people are somewhat confused as to what NAM and IOM are. [Indiscernible] Congress [Indiscernible] chartered and founded an independent organization called the National Academy of Sciences. It was to advise the government during the Civil War, with experts and scholars, about science [Indiscernible] issues. 46 years ago, the [Indiscernible] decided it should have a health arm of NAS called the Institute of Medicine. In those 46 years we have blossomed and made major impacts, including functioning as an Academy. On July 1, 2015, a little bit over 1 year ago, we were renamed the National Academy of Medicine. You heard what Andy said: we [Indiscernible] together in the National Academies of Sciences, Engineering, and Medicine. It's a good thing because we have an opportunity to [Indiscernible] and work across disciplines, the whole idea of converging sciences today. That being said, I know people somewhere said, what happened to IOM? What about our reports? From now on they will be from the National Academies of Sciences, Engineering, and Medicine.
Because it [Indiscernible] so much, [Indiscernible] to health and engineering and natural sciences, we should have uniform branding all together. That's a little bit of advertisement. I would say that we have been involved in and interested in quality for a long time. In 1990 we wrote a report for Medicare, as you can see, a strategy for quality assurance. Then the Council of the IOM in 1994 wrote a white paper to say that America's health is in transition and we have to look back and improve quality. That led to a lot of work that, ultimately, became this [Indiscernible] report 17 years ago called To Err is Human. This was followed by the Quality [Indiscernible], which is how we start moving quality [Indiscernible] line of care. As you know, I think this report, as we talk about improving diagnosis, is [Indiscernible] and captured public attention. [Indiscernible] quality and care and safety, but we said up to 98,000 preventable deaths occur in hospitals. I am a physician and that's unheard of. I was thinking, as a young man practicing medicine, etc., we think we're doing good all the time, not realizing this, but also the cost at that time of $17 billion to $29 billion a year. This was 17 years ago. You can see that I wrote at the bottom, more recent statistics show that the numbers may be even higher. [Indiscernible] treadmill, if you will, given the fact patients are getting older, [Indiscernible] disease, and there is a lot more complexity. You can imagine that, in fact, this is a significant problem. In addition, the Quality Chasm report set out six aims: safe, effective, patient-centered, timely, efficient and equitable. I suppose you know all of this by now. I think this is a very important [Indiscernible]. In the Chasm report we said, look at the bottom square: we have to reengineer the care process. That was 17 years ago.
We have to have effective use of information systems and technology, and we have to have knowledge and skills management, develop teams, and coordinate across patient conditions, services and care over time. We're still talking about this. We have not quite gotten there yet. That was 17 years ago. When I heard this I was a young man. I am now much grayer. Over the years, we have done, as you heard from Andy, a quality series. Here is not all of the series, and I have also added a few that are not in the series but are relevant to quality. You can see for yourself it spans many different issues. I will select a few. Early in our discussion in the Quality Chasm, as I mentioned, we talked about the need for a national commitment to build an information infrastructure. Today, with electronic health records, we have amazing data. I think the question is, are we building the right infrastructure for quality and patient safety? We also talked about the need to have metrics, and in the infrastructure to have data standards, and also the ability to [Indiscernible] data interchange, and a comprehensive patient safety program in healthcare organizations. Importantly, this resulted in, as the slides show, a Congressional mandate in the Medicare Prescription Drug, Improvement, and Modernization Act for three reports that talk about performance measurement, quality improvement, and also pay for performance. I think that's the beginning of lots of thinking, that we understand that, obviously, reimbursement policy has great influence and can, in fact, emphasize the quality aspect. Still on IT, we reported in 2011 that, given the pervasive use of Health IT, we need to be sure that we address safety concerns and have much better oversight between the public and private sectors to protect Americans from medical errors caused by Health IT.
As you can see, when we look at the whole issue, I will talk later on about a focus for AHRQ and the great partnership we have developed. We also talked about professional education. In this case, we really talked about core competencies for health professionals: patient-centered care, interdisciplinary teams, evidence-based practice, quality improvement, and informatics. The report recommends a mix of approaches to help education improvement, looking at a variety including the training environment, research and public [Indiscernible] and leadership. Our report emphasizes the positive role of leadership. Everyone in this room, I know our leaders care about this area, and you have to lead by example on how important quality and safety are. Of course, collaborate and work not only within the organization but outside the organization, within health care but also the health sciences, to bring in the tools to really improve patient care. The other area we have emphasized is the work environment. This report focused on the work environment of nurses because we know how important it is, and they really are the frontline. If I were to tell you my own experience previously as a physician in a [Indiscernible] system, nurses play a critically important role in patient satisfaction, and safety and quality are very much dependent on them. Yet [Indiscernible] work in an environment that is not supportive, and that is why we need to look at work conditions. If I were to summarize our work, I would say that embedded in the first report and the second, Quality Chasm, are the principles on which we have done this series throughout the years: enhance knowledge about safety, identify and learn from errors, measurement and transparency and accountability, the use of information technology, preparing the workforce, creating safety systems within health care organizations, and engaging patients and families. With this lens, I want to talk about the great work AHRQ is doing.
Andy asked me to tell you how we think the IOM or NAM impacted the whole safety and quality movement in this country. We don't want to take all of the credit, because a lot of people have been working on this, but we may have started the movement. This slide shows you a roadmap from the time of the report to what has happened today. I think you can highlight, for example, the critical [Indiscernible] 580, the Healthcare Research and Quality Act of 1999 signed by Bill Clinton [Indiscernible]. The Agency for Health Care Policy and Research was renamed to what we know today as AHRQ, putting emphasis on quality. This was followed by an Executive memo from [Indiscernible] to say quickly improve quality and protect patient safety [Indiscernible] recommendations. You can see the journey through these steps and many other activities; suffice it to say George W. Bush signed the Patient Safety and Quality Improvement Act in 2005. Of course, we all know about the Affordable Care Act of 2010. As we look at this journey, I would say one thing is for sure: [Indiscernible] come to some significant quality and safety problems [Indiscernible] predecessor and [Indiscernible - low audio]. I think the culture has changed. The focus on quality and safety has now become pervasive, not only throughout the U.S. but everywhere else. I think that you can see the ability of people to speak up, to create centers and departments on quality improvement; to look at hospital [Indiscernible] programs; public reporting, as well as systematic processes for problem-solving; guidelines, education and patient engagement. We've come a long, long way, but we are not there yet. I think this, in fact, is why we are here today. I want to really commend the great work AHRQ is doing. In being renamed and reenergized, [Indiscernible] to say, Andy, you have the job to conduct and support research and build public-private partnerships to look at identification of causes, develop [Indiscernible] strategies and disseminate such effective work.
This is where I'm going to advertise for AHRQ for a few minutes. In fact, they really followed the mandate, as well as being [Indiscernible] a great partner and a catalyst. As Andy said: implement. I want to be sure that we don't take all of the credit. Believe me, a lot of people had a hand in this. In fact, AHRQ has initiated a lot of studies, taking the direction of the report in 2000 to deal with patient safety. There was the federal action plan to reduce medical errors, [Indiscernible] a roadmap of 100 or more activities to look at reducing errors and improving safety. [Indiscernible] said that AHRQ should [Indiscernible] patient safety and set national goals, a federal research agenda, and research [Indiscernible] develop methods and [Indiscernible] dissemination and communication. Jeffrey Brady now has the job of being the head of the Center. It's a big job. In fact, I think we can say today that that happened. Of course, I will talk a little bit about a few things. AHRQ -- this is a map I put together which is only a selection of what AHRQ has done. It has made an enormous impact. I will talk about a few things. As I said, I will do it through the lens of IOM or NAM to show you what they have done in the dissemination of knowledge, transparency, accountability, use of IT, workforce, systems, and engaging patients and families. First of all, the research agenda: you saw the slide from Andy that the national Summit was called in 2000. Then in 2001 they formed the research agenda, and then, working with [Indiscernible], put forward this publication called AHRQ Evidence Report 43, Making Health Care Safer. With that, they started funding nearly 100 programs to move this work forward for many of you and others, to start looking at how we make a movement, how we move things forward. Of course, importantly, [Indiscernible] created and launched a resource of patient safety information that helped leaders in health care organizations look at how they are doing.
This led to 200 projects in 48 states, with 20 different hospital-level and seven regional-level measures as patient safety indicators, as I am showing here, being able to begin to measure in health care the many things they are doing. It launched, as part of the national report to Congress, a comprehensive overview of quality and disparities that identifies trends and weaknesses and [Indiscernible] priorities of the quality strategy. On information technology, it launched the National Resource Center for Health IT, which led to 200 projects in 40 states, the full spectrum of planning, implementation, demonstration and evaluation. It also worked on the workforce, creating and releasing tools, the workforce [Indiscernible] of the Patient Safety Improvement Corps and training programs in 2003, and also a [Indiscernible] for nurses from the [Indiscernible] foundation [Indiscernible - low audio]. In creating systems in healthcare organizations, it created the hospital survey on patient safety culture. That is critically important, with 600-some organizations sharing data and comparing against national benchmarks to be able to know [Indiscernible] where things are. It also reengineered hospital discharge programs and even looked at designing hospitals using evidence-based design principles and [Indiscernible] innovation. Also important was looking at issues of hand hygiene dispensers, safety [Indiscernible], lifting and so on and so forth. Finally, engaging patients: working with the Ad Council to launch a series of public service advertisements to tell patients they should be [Indiscernible] with their health care providers. I think you owe me a dinner since I have just given you such great advertisement. I want to talk about where we are and where things are going. Of course, I think an important impact is the Affordable Care Act. Put right square in the middle are the quality provisions.
You can see all of them: the National Quality Strategy, [Indiscernible], central quality improvement. I think you know all of those things, except to say, or suffice it to say, that new payment and delivery models are being authorized. That's critically important, changing [Indiscernible] and hospitals, etc., the early idea of pay-for-performance. In fact, we all know that Secretary [Indiscernible] launched this Better Care, Smarter Spending, Healthier People framework to look at payment categories, in which we are moving into category 2 or 3, fee for service linked to quality, but also [Indiscernible] payment models and moving towards population-based payment. It is a very bold and ambitious agenda. I think we all have to work very hard and quickly, because by 2018, 90% will be linked to quality, of which 50% will be [Indiscernible] payment. I say all of this because, as Andy says, have we done any better? This report in 2013 says 1.3 million fewer patient harms. You saw a much more beautiful slide: in 2014, AHRQ reported that the hard work of all of you, with leadership and others, has [Indiscernible] a 17% decline in hospital-acquired conditions, 2.1 million fewer hospital-acquired conditions and almost 87,000 [Indiscernible] patients, with tremendous savings. You can see on the bottom the kinds of numbers in all of these areas that have gone downward. I think we're on the right track, but nobody believes we can go to sleep at night comfortably. We have a long way to go, especially as things are getting more complex. I will talk a little bit about that at the end. It is so complex that people have a tremendous amount of work to do, working with IT and complex patient [Indiscernible]. -- [Captioner has lost audio connection].

[Please stand by while re-establishing audio connection] .

[The event is experiencing technical difficulties] .

[Please stand by while re-establishing connection] .

In the comparative population we are about twice as high in these numbers, and the trends are going the wrong direction. If you look at depression and suicide, it is alarmingly high among physicians but also among nurses. This study shows Post Traumatic Stress Disorder symptoms for nurses working in the ICU, and there is similar data on nurses in general practice. With that, with the merging of [Indiscernible] at [Indiscernible] and [Indiscernible] at ACGME, Jim [Indiscernible] and others -- [Captioner has lost audio].

[Please stand by while re-establishing audio] .

[The event is experiencing technical difficulties]

It is up to all of you and, of course, AHRQ to take what this report put forward, as well as [Indiscernible] ideas, to move forward, implement and make a difference. I'm so glad to be part of it with you. Thank you, very much.

[Applause]

That was so exciting to hear. It's amazing to have a learned guest who can educate all of us about the connections between AHRQ and the National Academy of Medicine. We are so fortunate to have Victor Dzau as a partner with us. Thank you for the advertisement and the education. I'm relatively new here, and seeing the incredible milestones AHRQ has been involved in, and how it has interwoven its development with the National Academy of Medicine, is really exciting. It gives me great hope for what we're talking about as we move forward on this activity today. Thank you, very much, for that. What I would like to do now is welcome to the stage Dr. Jeffrey Brady, who is going to speak. He is our leader here at AHRQ for work on patient safety and heads the Center for this. You can see more details in Jeff's CV on our website. Jeff is deeply committed to these issues and is a fantastic ambassador for AHRQ, for connecting with key partners and for synthesizing the importance of those relationships as he leads our efforts here at AHRQ. Thank you, very much, Jeff, for your involvement with us. Do you have some slides? I will find those. There we go.

[Silence] .

Good morning, everybody. It is really a pleasure to be here with you today. I would like to begin with this slide that Andy and Victor Dzau talked about. As you have all heard, this is a big event for us. We certainly know a lot about how to put on a meeting, but for a topic of this importance, we have taken to heart the responsibility we have today. Thank you all for not only being here but for helping us carry that responsibility. We have some people here who have already played key roles in improving diagnosis. These pioneers will no doubt continue to help advance this frontier. I would like to recognize a few of those people today. Certainly members of the IOM itself, the National Academy of Medicine committee on [Indiscernible], and the [Indiscernible] staff that helped bring the report into being. Mark Graber, who is sharing the panel with me. Paul Epner is in the audience. They have led 2 [Indiscernible] and, more recently, the Coalition to Improve Diagnosis. I can tell you the importance of those groups has been phenomenal.

I can see Helen Haskell in the audience. Patient representatives are so important in advancing diagnostic safety, and Helen has done that exceedingly well over the years. It has been a pleasure to work with her. Our own Dr. Kerm Henriksen is around somewhere. Kerm, raise your hand. He is our AHRQ expert not only on this topic but others. He is not exactly a one-man show, but it seems like it on some days, and it's always a performance I enjoy watching, being a part of, and benefiting from for our programs' benefit. Finally, our federal partners. I have seen several in the audience. Dr. Pam from CMS, and others -- it's just been great to have this kind of support. I could go on and on -- members of the [Indiscernible] community. I have seen a few of those in the audience who have had a hand in pushing this forward. Thanks for the broad-based effort. I think our collective actions are gaining traction, as you have heard. In the next very few minutes, I will be touching on some topics relevant to the summit today. We want to first highlight and go deeper into the national progress you heard about in hospital patient safety over the past few years. We hope this success, and patient safety more generally, will serve as a model for improving diagnosis in healthcare. I will next describe a bit more about AHRQ's work in research and implementation in the areas of patient safety generally, and talk about our role in improving diagnosis in healthcare specifically. Finally, I'll cover some housekeeping items for today's agenda. This is another depiction of the data that you already heard about. We have been spreading this good news where we can. I think it's definitely worthy of that kind of a message. To put some finer points on it, between 2010 and 2014, we -- and the "we" is a broad combination of partners -- the data measured unprecedented improvements in patient safety. This chart shows those.
These are the unfortunate common causes of patient harms that still occur in hospitals. It's a list you know very well already: adverse drug events, catheter-associated urinary tract infections, central line-associated bloodstream infections, pressure ulcers, and the list goes on and on. This is the [Indiscernible] hospital patient safety. We do not yet have this fully laid out for diagnostic safety. We have some early insight into that, but we are still building the science base that will allow us to do this very thing for the specific area of diagnostic error. Again, we're on that path but there is more work to be done. As we worked with our federal partners to accomplish and compile these results, we appreciated at least 2 things worth celebrating. First and most important is the improvement itself. Hospital patients experienced fewer harms over this period. These are real people, real harms that didn't happen, families that did not have more to worry about, costs we did not incur. That improvement was phenomenal. Secondly, and of equal interest to us because we understand the importance of this point, we were able to produce a valid, reliable method of measuring hospital safety across the country. That was no small feat. I think we can all appreciate that. The important role this capability has had in our ability to not only track but report progress has been immense. We have seen it throughout the life of the Partnership for Patients, a huge national initiative. I want to underscore the importance of that measurement work. As you have already heard, it has also been important and useful to apply the best information we have available to understand what makes this measurement relevant. What is the personal impact of these improvements? You see we measured a 17% reduction. 87,000 lives were saved as a result of this national improvement.
That resulted from 2.1 million patient harms that were avoided, and while not the primary purpose of this effort, there were cost savings associated. You can read about the details underlying these estimates. Again, the best science was brought to bear to produce this report. Again, a broad-based effort, a public-private partnership, lots of activity, and most importantly, lots of impact and improvement. Even with the unprecedented reduction, there are still far too many patients harmed in the course of receiving care. 121 events per 1,000 discharges is a lot of adverse events. This is a substantial impact. We are encouraged by the impact we have made and the improvements you have heard about. One way to think about the importance of this improvement is to compare the impact to other sources of morbidity and mortality. Take cancer, for example: 87,000 patient lives saved from safer care equates to more than one entire year with no deaths from breast cancer. It's pretty incredible to cancel out for a year, or take a break from, a disease that significant. That's essentially what we are talking about here. It's certainly not thought about that way regularly, but we think about it every day, and thinking about it in this light and with that comparison has helped us to clarify the purpose and the goal that we have. You heard about how we make a difference. I won't go into much more detail about those, but simply say that patient safety has followed this playbook throughout the past decade and a half. We think a lot of the success we have seen nationally is attributable to this approach -- the consistency we have had and the persistence in continuing to follow this path. This is a little bit more complicated depiction of how we do our work, what we're thinking about on a regular day-to-day basis. It summarizes the framework we have applied to many patient safety problems. You will see that we always consider the context of the research we support. Dr.
Victor Dzau talked about clinician burnout. This is sort of an omnipresent fact we are aware of now. In simple terms, what that means for us is that even when we produce, with our researchers and the work that we support, valid patient safety practices that we know have produced safer care, it's important that those are as lean as possible, because we don't want to contribute to practitioner burnout. We want them to have control in their environment. We think that's a major contribution and always in our mind as we conduct patient safety research. You can see in the second column our primary focus, [Indiscernible] of risks, hazards in healthcare. This includes clinical [Indiscernible] such as risk assessment. This is a [Indiscernible] scientific challenge where we are delving into the medicine and that research frontier, but we are also considering it in a broader context. The solutions our investigators are producing need to fit within that context, or they are almost useless. That comprehensive, broad-based perspective, we think, has been critical to the success that we have been able to have nationally. I mentioned safe practices. That's the term used to represent the actual changes in care that have been tested to a place where they work. Implementation considerations are critically important -- barriers and facilitators such as patient safety culture. As you heard from Dr. Victor Dzau, we have a way to measure that, not just talk about what it is but to understand what the components of safety culture are, and to understand how they link up and match up with the things we're trying to encourage the system to do. And critically important in general, but also for diagnostic error, is patient engagement. All of these create situations in which safe care practices are feasible and they actually get done, most important. The ultimate goal is to reduce patient safety risk and the harm that follows.
I mentioned already that valid, reliable measurement is critically important, so much so that it really supports every stage of this process. Our plan is to apply that same model -- we have already applied it as a model -- to this new problem of diagnostic error. I think we have already gotten some traction, which I will tell you about right now. These are a few examples of studies that helped advance our understanding of the problem. Those who worked on these are in the room today. We have all of the stars in this topic area -- most of them, at least [Indiscernible] in the fall, but we have managed to attract a lot of folks. Dr. Hardeep Singh and his [Indiscernible] involved errors. Dr. Gordon Schiff, who you will hear from soon, [Indiscernible] their attitudes about diagnostic error and the phase of care in which those have been observed. Looking at diagnostic pathways based on [Indiscernible] data to study abdominal pain, specifically. A few examples. AHRQ has produced several tools, and you have heard about some of them already. This is our catalog of sorts. This and other materials are available just outside on the table. We have a really good price going on these today: they are all free. Take as many as you want, and if we run out, certainly, more are available. Again, these are intentionally simple so they remain easy to implement in practice. Some of the tools, such as the ones you see here, are quite applicable to the challenges of improving diagnosis in healthcare. That is a theme I picked up in reading the report: a lot of the competencies we have established in patient safety and quality in general are applicable. [Indiscernible] but clearly we don't have to start from square one to really get some more progress in this area.
We can thank the IOM Committee, the National Academy Committee, for elevating the important role of the patient to the highest level of the report, by including failure to communicate to the patient in the overall definition of diagnostic error. You have already seen that from Dr. Victor Dzau. The essential resources listed on the right put the patient front and center -- helping them ask basic questions. Not so simple when you consider the barriers to asking questions: which questions are most important? What does the clinician do when they are faced with 20 questions? All of those practical issues are addressed in these tools. There is sound science underlying all of them. What you do not see here worth mentioning, and I think it has already been mentioned briefly, is the [Indiscernible] program based on the work of Peter [Indiscernible]. Two things are important about that. First, it was initially applied to central line-associated bloodstream infections. Similar to the national results that I have shared, the improvement it produced was significant. It saved thousands of lives nationally, and that alone is significant enough. Second, this approach, this project, also changed the way we think about safety, so that problems like central line-associated bloodstream infections are not an inevitable consequence of care that we just accept as part of the cost of doing business. That paradigm shift, I think, we are still in the midst of, but it's well under way. I would like to see that for diagnostic error too, and I think that is our goal for today. Finally, to start to wrap up here: this pipeline of tool production exists to get information into the field and to achieve change. This is a specific toolkit that is focused on diagnostic error.
Thankfully, we are supporting the development of at least one toolkit specifically tailored to address the office testing process -- laboratory testing -- which we know is an important source of diagnostic error. The flow of this pipeline is already moving. We'd like to open it up more and have more flow and more improvement. Now I want to shift gears. We know that we have much more work to do, and today is an important part of this work. Just to go a bit deeper into our plan for today: we're going to continue the next few minutes as a full group, as we are here in the plenary session. I can tell you, you are in for a real treat with the 2 speakers that follow me. Those 2 sessions and what you have heard today are intended to set the stage for the rest of the day. We want you to get ready now because you will be engaged. We want you to be engaged. If you are not, we have failed in our goal today. Please get into that mindset. There is time built into the Breakout Sessions to hear from you. Maybe I shouldn't tell you it will be a quick day if you do not do that -- we're not going to let you out early. Seriously, I would like you to be engaged, so start to think about that. The three Breakout Sessions you can see in your agenda, and during those three Breakout Sessions we will also be addressing some important crosscutting topics of patient and family engagement and education and training. Expect to be prompted within the overall Breakout Sessions on those specific topics. At the end of the day, just as we think about a total system of care, we're going to reassemble ourselves, bring everything back together, and consider it all together. We have planned for engaging wrapup sessions. Folks are ready to do that. Just a couple of other mundane housekeeping notes that I am supposed to tell you. We really are happy to have you here, despite how you may have felt coming through security. I am imagining some of you might have expected to be boarding an airplane after you got through that.
There is no airplane. Your seats are not flotation devices.

[Laughter] Did I miss the meeting? Are we leaving already? We ask for your help to keep us on track. The Breakout Sessions -- the way this will work is this room divides into three parts. The middle section will be the measurement and data Breakout Session, which you will see marked where that's going to break. To my right is the organizational factors session, and the health technology session will be on the other side. What we would like to ask folks to do is to stay put for the initial Breakout Session this morning. Please try to stay where you are for the first Breakout Session. If you are not satisfied with your random assignment, certainly, you can move; we have a little space. We know groups came together, and we want to support that, but also broader coverage. That will help us be more efficient when we transition to the breakout. We do have to follow the local fire codes, so if we have to ask people to shift, we will have to do that, but I think we will be fine. Luckily, we will all be coming back together. The intent is for everybody to hear everything discussed, even though you may not hear everything in its first round. We'll have a second round this afternoon, and it's likely you will be able to get your first choice, because people will be coming back from lunch and can go to the session that they would like. I want to make sure folks know where the restrooms are. They're to the sides -- out the back and to the side. What else am I supposed to say, Jeremy? We would like for visitors to remain on this floor. The other thing, as Dr. Andy Bindman said, is we have folks participating via the Web. We will have questions coming from them, and we want to have them equally engaged in the discussion. I am out of time, so I am going to step aside. Thank you, very much, for being here.

[Applause]

Fantastic. I thought my seat was a flotation device, so that's news to me. I'm now going to welcome Dr. Mark Graber to the podium. You can look on the website to get more details. I just want to say that Dr. Mark Graber is a Senior Fellow at RTI International and Professor Emeritus of Medicine at the State University of New York at Stony Brook. Probably most relevant in this context, he is also President of the Society to Improve Diagnosis in Medicine. I also want to draw attention to the fact that Dr. Graber in 2014 received the John Eisenberg Award from the Joint Commission and the National Quality Forum, recognizing individual achievement in advancing patient safety. Those of you who have had a long-standing relationship with AHRQ will recognize the name of Dr. Eisenberg. He was Director of AHRQ when the name changed in the way that Dr. Victor Dzau referenced. He was a real leading light for this Agency in terms of the principles that Dr. Jeffrey Brady talked about -- connecting research with implementation and monitoring of data. We're still influenced very much by Dr. Eisenberg's brilliant way of thinking about putting research to work to bring about changes for improvement in health care. It's a very high honor to have an award in John's name. Mark, congratulations to you for that. Let me see if I can bring up your slides.

Dr. Graber, thank you.

Thank you, Andy. It's a real pleasure to be here today. To see the interest that's growing in diagnosis and in addressing diagnostic error is phenomenal. Thank you for that. [Indiscernible] on behalf of the Society to Improve Diagnosis in Medicine. We are a non-profit group, the only [Indiscernible] organization focused exclusively on the problem of diagnostic error. Our vision is that someday we will have a world where no patients are harmed by diagnostic error. I have been asked to talk about some milestones in our journey, and I would like to show you how we compare to some of the other patient safety journeys, and safety journeys in general. Of course, our model is aviation safety. Where did they start out? The first 24 [Indiscernible] pilots died in airplane accidents. Your risk today is less than one in 10 million. Tremendous progress there, and we have much to learn. We can also learn in our own field of medicine if we look at laboratory safety. In the 1950s, half of laboratory tests were believed to not be credible, which led in 1967 to [Indiscernible]. At the present time, in measuring your creatinine and CBC, the error rate is on the order of 1 in 100,000 -- tremendous progress. You heard about the progress that has been made in patient safety in a much shorter time, just from 1999: at least 87,000 lives saved. The number is almost surely much higher than that. How about diagnostic safety? Best estimates at the present time are 40,000 to 80,000 deaths a year from diagnostic error. How much progress have we made? I know of one patient whose life was saved because of our work. I hope to come back and hear a much higher number in the future. These are the milestones we have recognized in our own organization. Our first conference on diagnostic error was held in 2008. Every single one of our conferences has been supported by AHRQ. They have been a tremendous partner. In 2011 we formed our non-profit society. In 2015 you saw the issuance of the IOM report. Just in the last year, because of the work of Paul Epner.
Primarily through his efforts, the coalition to improve diagnosis was formed, which I will talk more about. This is our 5-year birthday party. Who else gets to have a 5-year birthday party? Very importantly, the biggest milestone in the journey so far is the IOM report. I would like Erin Bauer to stand up.

[Applause]

Erin dedicated 2 years of her life to the content. Thank you for that. It's the one-year anniversary; the report came out almost exactly a year ago. I would like to review the progress that has been made. The last time I checked, the report had been downloaded 15,000 times, and leading agencies -- AHRQ, the Centers for Disease Control -- started to talk about what they could do to address the problem. This meeting today, I think, is a direct result of that report. Other evidence of progress: lots of news media attention; papers around the country publicized the IOM report. There have been grand rounds held nationwide, TV media coverage. I was giving rounds in Susquehanna; it was the fourth grand rounds on diagnostic error there in the past year. It's starting to catch on across the country. Further evidence of progress: there are a couple of examples. [Indiscernible], thanks to the efforts of Sue Sheridan, is no longer going to be misdiagnosed in this country. Every newborn will be tested for it before they leave the hospital. Sepsis: dramatic progress, [Indiscernible] thanks to the sepsis coalition work. We are starting to hear from healthcare organizations about their interest in doing something. They want to know what they can do, what they should do. This is a starter list. In the last couple of months we heard of 20 or 30 organizations putting committees together to address this. It's fantastic. We hope to hear from every healthcare organization over the next five years. Lots of progress in terms of education. There are new textbooks. Our society started a new fellowship program for clinicians who want to go into the field of diagnostic safety. We're producing six new modules that will be delivered on the [Indiscernible] platform that every medical student encounters in their training, focused for the first time on how to recognize diagnostic error and how to reduce the risk of diagnostic error.
We are glad to see the AAMC will have, for the first time, [Indiscernible] on diagnostic error at its meeting this year, and the AAMC is the newest member of our coalition to improve diagnosis in medicine. International recognition of the problem: Paul just returned from a meeting at the World Health Organization where diagnostic error was discussed as one of the major patient safety problems that confronts us. We got back last month from the first international diagnostic error meeting, which was held in Holland, in Rotterdam. Next year we will meet in Austria. The John [Indiscernible] checklists you can download from the website have been translated into Turkish and [Indiscernible], and they are working on Chinese. There are diagnostic interest groups in Romania, Japan and China. This is the coalition to improve diagnosis, which is the group our organization helped bring into being. We are a small group. We cannot go out into the world in general and have a tremendous impact on our own, so we are calling on other professional organizations to work with us. The response has been fantastic. I think at the present time we have 26 organizations; you can see a starter list here. Each of these has committed to doing something individually, in their own domain with their own members, to address diagnostic error, and they have all agreed to do something collectively. One of those collective things is to promote research on diagnostic error. Very nice to see. We're here to discuss what can be done in the domain of research. It is an area that needs everybody's attention. There is so much that needs to be done. One of the most beneficial things to come out of the IOM report is pointing out the gaps between what we know and what we would like to know. There are so many gaps, and they are so large. We have the suggestions of the IOM report on what needs to be researched. Our own organization has been talking about this for five years. Today I would like to give my suggestions on what needs to be researched.
For the rest of the day, we would love to hear your suggestions. Here is what was in the IOM report. There were 41 research recommendations made. You can see the individual domains to which they were assigned. I will not go over each one of these, but it's a rich source for ideas about what could be tackled and what should be tackled. For example, the new definition of diagnostic error: there are at least three very important research topics here. What does accurate mean? Is it enough to say somebody's got lung cancer? Do we need to know something more specific, like the genotype? What does timely mean? We have never used that concept in diagnosis. How long should it take to diagnose cancer? How long does it take to diagnose anemia? There was a study on how long it takes to establish the correct diagnosis if you have iron-deficiency anemia. How long does it take to diagnose asthma? [Indiscernible], way too long. Communication: if you study patient safety, communication is the number one problem, and the exact same thing is true if you study diagnostic errors specifically. Communication breakdown is the number one problem and needs a lot more study. How effectively are we communicating? How can we do a better job with that? You saw the framework presented in the IOM report. There is an interesting history here. This started out with work from Pascale Carayon -- hold up your hand -- which was adapted into the [Indiscernible] diagnosis framework and brought into the IOM report. It's obvious but important to point out, because healthcare organizations know about process improvement. They know how to break things down and find the weak link that needs to be studied. Thanks to the work of Gordon Schiff, we know where diagnostic errors occur at each step of the process. That's the good news. The bad news is every step is a problem. We do not do any of the steps well. There are problems to address at every step of the diagnostic process.
Thanks to Pascale Carayon and Hardeep, the concepts of [Indiscernible] in work systems are important and lead us to think about what we need to learn about human factors and what could be addressed in that domain. What could we do to make diagnosis easier? Right now it's hard -- very hard in the emergency room, for example. What is the impact of distractions? What is the impact of time pressure? How much time did you have when you saw your physician last? I had 15 minutes for my physician to get to know me. I'm very healthy. [Indiscernible]. What's the impact of culture on all of us? Huge impact. This is what our own organization has been doing in terms of trying to define research priorities. In 2011 we formed a research committee. It was initially chaired by David Newman-Toker, who you will be hearing from. It has since been taken over by Laura Zwaan and [Indiscernible]. At every one of our diagnostic error meetings since 2012, we have had a research summit where we have had similar discussions to this one, considering what needs to be studied and prioritized. Most recently, through the coalition to improve diagnosis, we have pushed for research funding. Getting more research funding to AHRQ so that we can make progress is one of the priorities of our coalition. This is the publication that resulted from those summits I mentioned. It was one publication, in [Indiscernible] Quality and Safety. David Newman-Toker also compiled many of these suggestions and submitted them to the IOM as they put the report together. Many of them ended up in the final report.

[Captioners transitioning] [ Captioner reconnecting ] There are 2 aspects of that: affective biases are very important in determining how successful we are in assigning a diagnosis. I did one on interventions. The bottom line is there are a lot of ideas, but we don't know which ones work. We don't know which ones work better: decision support, second opinions, working in teams, engaging patients. We don't know which ones work. When organizations ask me what they should start with, I don't know what to tell them. We need research to tell them: this is where you start, this is what will be most impactful. I would like to know the cost of diagnostic error. At this point in the United States, where we know the cost of everything, and where the rising cost of healthcare is so crippling and so important, we do not know the cost of diagnostic error. Nobody has ventured a guess. The reason we need that number is because there are 3 players who are not at our table right now: policymakers, payers, and leaders of healthcare organizations. We need to make a business case for safety to get them to the table, to get them engaged. This is a listing of the top causes of death in our country. Diagnostic error would be on that list. This is a list of the research funding that is spent on those top causes of death. For diagnostic error, you cannot even see the bar, it's so small. It's a couple million dollars, and we are very grateful for those few million dollars -- thank you, Andy. We would like to see that grow. At the same time we brought Bob, who I think you are familiar with -- he's the patient [Indiscernible]. In some ways I thought that they both surprised us. Arthur said, instead of saying we're going to have to retrain doctors to anticipate and counter bias -- humans are wired to make these biases -- we are probably going to solve this more with system protections to prevent these errors. We have medication error systems that reduced drug-related overdose errors by 55%.
He said those solutions will work with diagnostic error. We have to deal with the way that people deal with diagnosis. Each pointed the opposite way from the other, and to me the takeaway is not just the complexity, but also the fact that these system and cognitive issues are very intertwined. Picture yourself seeing a patient in the emergency room, trying to think about what is going on -- these are very interconnected. We think that health IT is part of the solution. We tried to compile a taxonomy, or list, that I'll quickly mention. You can have tools to assist with information gathering: reminders to ask all the right questions, or, since I only have 15 minutes, maybe the patient needs to do something before they come in -- where they work, possible exposures, things that I may have overlooked by not thinking about them. Cognition facilitation through enhanced organization and display of information: the current EMRs are a recipe for us to lose track of things. Things are hidden behind tabs and not visually organized, unlike the old-fashioned paper flowsheet, or even a computerized one, where something stood out and was red. There are programs out there that are starting on this. Other programs -- these are tools that we have never really figured out how to use most effectively, but making the diagnosis and making sure it gets in the chart is important. I shouldn't be relying on my memory. Support for selection of diagnostic tests: how do I make sure that I order the right test or sequence when somebody gets a positive HIV test? Access to diagnostic information. Tools to facilitate reliable follow-up to make sure that people don't get lost -- I think health IT is very critical to that. Tools to support screening for early detection, and tools to facilitate diagnostic collaboration: if only I could push a button to show a dermatologist across the country what this lesion is and ask what I need to do with it. And feedback -- systems today actually fail to facilitate getting feedback.
We are walking around blindly. One example of this is clinical documentation, which is what a lot of doctors spend their time doing -- upwards of 30% of people's time now. It's mostly viewed as CYA and billing: people have to put the stuff in the chart. But we have been promoting a different version: it should be a canvas for your assessment, a place to pause and think out loud so other people can understand what your thinking process is. Weighing the likelihoods. What's the etiology? If somebody has anemia, what do I think is causing it? The degree of certainty. The last concept I want to raise here is the concept of the diagnostic pitfall. We have work funded by our malpractice insurer where we are looking for clinical situations or patterns where there is a vulnerability. We're trying to get very specific related to these diagnoses, and to aggregate these cases into generic pitfalls. Bipolar disorder is mislabeled as regular depression. Failure to appreciate the limitations of a test: a woman has a breast lump and the mammogram is normal; that is going to lead to a missed diagnosis of breast cancer. Atypical presentation: something presents unusually, like hyperthyroidism with vomiting. You don't tend to think about that, but that's another pitfall we have tried to aggregate. Somebody has lung cancer, but because they are having shortness of breath it's mislabeled as COPD. Other environmental causes: these are factors in making that diagnosis correctly and in a timely way. It's fairly rare, but still important to consider, to take the person off the drug if needed. That's the last slide: the Institute of Medicine, now the National Academy of Medicine. The figure that grabbed the headlines -- and I would say this about the people that wrote the report -- is that once per lifetime you're going to have a diagnostic error. It's the least evidence-based figure in this report. I would even say that it is wrong, because it underestimates the frequency. So now, hearing from you in the Q&A.
I would say, think about your own adverse events — but let me tell you mine personally. I presented with diarrhea and lower abdominal pain as a college student: Salmonella food poisoning from a sub shop in Harvard Square. I presented with chest pain as a medical student — the diagnosis, obviously, medical student anxiety syndrome [ laughter ]. The correct diagnosis: a 40% pneumothorax of the left lung. Fever and shortness of breath, abnormal chest x-ray: I was diagnosed with bacterial pneumonia and admitted to the hospital. What's the real diagnosis? Cryptogenic organizing pneumonia — it's a weird diagnosis, something infiltrating my lungs, and it's called COP. The final one is post-exercise faintness. After riding uphill on my bike or exercising in the gym, I would get pre-faintness — I called them my white-out spells. I was having an arrhythmia; my heart rate went up to over 200 during fibrillation. So, I have had four of these that are real. The question is, with your experience — we can throw open the question-and-answer session now. In the examples people raised, I did not realize that everyone would bring an example that happened in the last several weeks. The Institute of Medicine — I think their estimate is quite conservative if we think about this more retrospectively and deeply, thinking about the root cause for each problem: did somebody make a mistake? They were certainly consequential. So this might be what you're hoping — we can open it up for the next few minutes.

I would like to say that nobody needs to violate HIPAA. This is such a prevalent issue, and I think in some ways what Gordie has helped to underscore in his comments is that many of us have personal experience with this. We provided a lot of information this morning in an organized way, but what can motivate many of us is feeling that connection, so we certainly want to provide opportunities for that. I'd like to have you join me in thanking Dr. Gordon Schiff. We would welcome hearing about the issue we've been talking about — diagnostic error — why it's something you are motivated about, or something that has had an impact on your life. The insight that I take away from your comments, aside from the fact that you are a medical wonder, is that, as you say, four times you have had these issues with misdiagnosis and so forth, with consequences. You're welcome to use the microphone to comment on anything that you heard this morning, but I think it would be particularly nice to hear your personal connection to this issue. For those of you participating through the WebEx, please type in any of your own questions or comments, because we are interested to hear those ideas. So tell us your name, where you're from, and share your comments with us.

I am an emergency medicine physician. I really like the bracelets — they look like hospital armbands; it makes you want to share your medical issues. Thank you for the wonderful presentation and for bringing attention to these issues. As for personal anecdotes: one of my children had an injury, head trauma during sports. She went to the hospital emergency department with severe headache and vomiting, and she came home with a diagnosis of strain after injury. It's interesting, because this is an area of research that I perform. The issue was that she ended up having a concussion, and nobody had screened her for it, or even thought of it. I think not considering the diagnosis is also important when talking about bias; part of it is things that are not in our training — the potential to miss a diagnosis and what effect that has. I think it's important to include those things in our curriculum. I wanted to see what your thoughts were.

You can't draw a blood test or do an x-ray — it's a clinical diagnosis, and there is a lot of disagreement about it. That's number one. But what does it mean? I have a long list of differential diagnoses. I think that's a great example illustrating some of those difficulties. Here's another paradigm for this workshop: we could take all these pitfalls that are occurring — malpractice claims cluster around them — if we went into those databases to see.

George has a paper on "why did I miss the diagnosis." The number one reason is "I just did not think of it." That's been accepted throughout the history of medicine, but it's certainly an area that can be addressed. I think with the rare diseases it's more difficult.

For those on the WebEx, if you have questions, please email them in to our Research Summit address at ahrq.hhs.gov.

I have an example of fixating on a differential prematurely. I woke up one day with a swollen toe, and it was diagnosed as gout. After one month of being treated for gout, there was no change in my foot. The rheumatologist said, you need an MRI. I said, are you choosing wisely? [ laughter ] But I got the MRI, and in fact I had a broken toe, not gout.

I am a dermatologist at the University of Rochester and the CEO of VisualDx. After training, I had the gift of practicing in a rural, underserved community. I noticed that in primary care there is a lot of diagnostic error, but I think most of the attention goes to hospital-based error. Frequently in practice I will see the simplest diagnoses being missed — usually variations of the common: you'll see dermatitis treated as scabies and vice versa. We see about a 30% diagnostic error rate. So outpatient medicine really should get a focus in the diagnostic error community.

Since I have been working on the outpatient program, the scope has been expanding. In 2015 we ramped up even more, extending patient safety to all healthcare settings — the improvement we had seen in safety, we wanted to see broadly across the system, including the problems that exist across systems. A simple statement is that our scope is inclusive of all healthcare settings. That gives you the broader scope and how we're thinking about the importance of this problem and the role it plays — in particular for outpatient care. We're talking about these admissions: the patient has "bilateral cellulitis," and almost immediately I can guess that this is not the right diagnosis. If we could have support — for example, for situational awareness about what they were stepping into. And by the way, these turn into "refractory" cases: you see them and treat them with antibiotics, and none of them seem to be helping, so they're labeled refractory. So you can see, using the lens of a single diagnosis, how some of the things we're talking about in terms of decision support and situational awareness could probably help and save lives, reduce antibiotic resistance, et cetera.

We think of the emergency room as the petri dish for diagnostic error. But in the study of malpractice claims, the majority — 56% — originated in ambulatory care.

I recently experienced diagnostic error with my three-year-old child, who was diagnosed with a broken arm when in fact it wasn't broken. She was given a cast, which made her cry. Luckily we went to another doctor and the cast was removed, but it was a lot of pain for her. In any case, the reason I was interested in this topic: I led a study that looked at the effect of the ten-year recertification requirement, using healthcare medical claims to do so. Looking at the data, it appeared that participating in MOC was improving the diagnostic abilities of physicians. So it got me curious. As a follow-up, we have been interviewing doctors about what they learned by participating in MOC. It's early, but there seems to be a recurring theme about diagnosis — it's something that they are learning about. The goal was to map how changes in medical knowledge related to diagnosis might compare to other parts of medicine.

Good question. I think one thing that has become dogma in patient safety is that it's usually not lack of medical knowledge. I'm not sure that I agree with that. The early studies suggested it wasn't that people didn't know things — it was other causes or types of errors. So it's probably worth further study. If somebody comes in with chest pain, it's not that I don't know heart attacks can present with chest pain. I can give you a personal example. There is a role for getting people up to date. I think you're probably onto something: even if physicians simply became more aware that this is a common problem, that bit of knowledge might be valuable.

That is where we think these pitfalls can fit in. We would like to see every textbook have a section on common pitfalls to consider when you're treating gout or cellulitis.

One thing I have learned watching expert debaters is that you don't always have to answer the question asked — you can answer another question. So: there is a huge research need on how to evaluate competency. Beyond that, we don't want doctors who are just confident; we want them to be calibrated. How do we measure that, and promote that skill as well?
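To make the calibration point concrete, here is a minimal sketch — with entirely hypothetical data, not a validated clinical instrument — of how one might quantify it: pair a clinician's stated confidence in a diagnosis with whether the diagnosis turned out to be correct, then compute a Brier score and an overconfidence gap.

```python
# Sketch: measuring diagnostic calibration (hypothetical data).
# Each record pairs a clinician's stated confidence (0-1) that the
# working diagnosis is correct with whether it actually was (1/0).

def brier_score(cases):
    """Mean squared gap between confidence and outcome (0 = perfect)."""
    return sum((conf - correct) ** 2 for conf, correct in cases) / len(cases)

def calibration_gap(cases):
    """Mean confidence minus actual accuracy; > 0 means overconfidence."""
    mean_conf = sum(conf for conf, _ in cases) / len(cases)
    accuracy = sum(correct for _, correct in cases) / len(cases)
    return mean_conf - accuracy

# Hypothetical audit: clinicians are quite confident, but only half right.
cases = [(0.9, 1), (0.9, 0), (0.8, 1), (0.8, 0), (0.7, 0), (0.95, 1)]
print(round(brier_score(cases), 3))      # prints 0.332
print(round(calibration_gap(cases), 3))  # prints 0.342 (overconfident)
```

The point of a measure like this is that "confidence" alone tells you nothing; only the gap between confidence and accuracy does.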

This is a comment related to something you said, Mark. You said communication is the number one cause of diagnostic error. Is that patient-to-clinician, or can you talk about patient-to-clinician versus clinician-to-clinician communication breakdowns?

We can talk about communication breakdowns, and Gordon probably knows more about it. It's everything. We think we are doing a good job communicating, but if you look at surveys of patients who went to the hospital, they rate it at 60 to 70%. One of the most promising things I've heard about is a tool on your phone: when you go to see the doctor, you can record what is said and listen to it later. When you are in the middle of that encounter, you're so worried about your own condition that you just aren't even sure what's going on. There is a lot that can be done to improve communication.

One source of data that I point people to — there are people from [ Indiscernible ] here — is a report in which the answer to that question was: they're equally weighted. It's clinician-to-patient and professional-to-professional — the nurse and the doctor not communicating, the radiologist and the doctor not communicating. They are both contributing equally, and in some cases both.

TeamSTEPPS was one of our resources — if you don't know it, it's team-based training for healthcare teams in healthcare settings — and I think the key element of that resource is communication. There are pretty sound underlying communication theories about how you construct, deliver, and receive messages in the clinical environment. It equates a lot to the analogy that Gordon talked about: this canvas. Does each provider working on a team have their own canvas, making their own picture of what they're trying to construct, or is it a commonly shared mental model of what is happening? I think that's one way to think about communication: a common space for solving a problem — in this case, diagnosis.

I have a question about the definition and the rate of diagnostic errors. There was an example given that 5% of adults have a diagnostic error. Is that what it was?

My question is this. I was going through the report and I saw terms like diagnostic errors, preventable diagnostic errors, adverse events — and I feel like the simple definition does not have the word preventability in it. In the patient safety language there are adverse events, and sometimes I feel like we're conflating these with adverse events. Say a teenager has melancholy, the doctor misses it, and the teenager commits suicide, and nobody could have caught it — is that a diagnostic error or an adverse event? I'm on a patient safety committee; some of these are preventable and some are not. They changed that composite measure to the patient safety adverse event measure. I'm afraid we're going to describe it in a way that's confusing for other areas, and maybe that should be clearer. Just wondering what you think about that.


My first research study on diagnostic errors was a collection of 100 cases. I wanted to study where and why they occurred. One criterion was preventability; I abandoned that.

The point is — I know we are getting away from using pain scales — but given the chronic pain we're dealing with, and pain and opiate usage, we need to consider ways to analyze diseases and pain scales.

Every test has a sensitivity and specificity; there are false positives. We have people come in saying they are dying, they are in agony, and a visit like that can be a false positive. This one was a false negative: the pain doesn't really hurt too much, so the doctor says we won't worry about it. We need to think about each test. A symptom is a test; so is a question or a physical finding. We need to be aware of that. Maybe young men are deniers on the pain scale — whatever the stereotypes are, we need to be able to calibrate.
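The point that a symptom or finding is itself a "test" can be made concrete with Bayes' rule. A minimal sketch — the sensitivity, specificity, and prevalence numbers below are purely illustrative, not clinical estimates:

```python
# Sketch: treating a symptom, question, or exam finding as a "test"
# with its own sensitivity and specificity, and asking what a positive
# or negative result actually implies given disease prevalence.

def post_test_probs(sens, spec, prev):
    """Return (PPV, NPV): probability of disease after a positive
    finding, and probability of no disease after a negative one."""
    tp = sens * prev                  # true positives
    fp = (1 - spec) * (1 - prev)     # false positives
    fn = (1 - sens) * prev           # false negatives
    tn = spec * (1 - prev)           # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# An illustrative "classic" symptom: 80% sensitive, 90% specific,
# for a disease with 2% prevalence in this population.
ppv, npv = post_test_probs(0.80, 0.90, 0.02)
print(round(ppv, 3))   # prints 0.14 — most positives are false positives
print(round(npv, 4))   # prints 0.9955
```

Even a fairly accurate finding yields mostly false positives when the disease is rare, which is exactly why calibration against base rates matters.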

I started to make my list, and I didn't want to compete with Gordie. A delayed diagnosis of appendicitis at 10 years old — hopefully it was handled appropriately, just an abnormal presentation. An atypical July ER visit where I had an apparently broken elbow with a normal x-ray; a new resident wanted to do a fancy view that showed the fracture. But the point was not that there was an error — it's that there are times of the year and certain situations where errors will occur more often. The last one I haven't heard you talk about: my own error on myself. I misdiagnosed myself and developed significant harm later. With all of that said, being an outpatient doctor, I think we need to think about the opportunity costs of getting too worried about error prevention — this has to be done efficiently. If we spend too much time chasing our tail trying to prevent all errors, we hurt other patients in the practice.

That's the most important research question in our whole field. If you tell everyone they need to be more comprehensive and make differential diagnoses, it's going to lead to more tests. So we really need to find the sweet spot, and for that we need research.

I can give a long list of errors in myself and my family; I'll just give one. My son presented with classical appendicitis, and his teacher said he was faking it and made him sit through a long test. Then he was sent out from the ER — the opposite of Gordy — with some kind of gastroenteritis, and he perforated. So he had another layer of complexity. I think there are a lot of issues in diagnosis related to populations that need to be brought into this. Geriatrics is one layer. Other layers are people who don't speak English or have limited English proficiency and can't express their pain; people with disability or cognitive impairment. Also our own biases as health practitioners, in terms of stereotyping how people present. Not to make a complex problem worse — I just wanted to raise the issue and see what the panelists think about that.

When you think of stereotyping, that's sort of what diagnosis is. A 60-year-old man with a history of smoking and hypertension — we make the stereotyped judgment of high risk of a cardiac problem with chest pain; a young woman with chest pain who has no risk factors, the opposite. We're making stereotyped judgments all the time, and we have to figure out ways of guarding ourselves against this. One thing is taking people seriously. I think we really need that kind of skill, and some of it is taking away the stigma and prejudice that we carry with us, and being able to rethink diagnosis and our hypotheses about the patients who present.

I will say, regarding your comment that every historical question has a sensitivity and specificity: the complexity that a clinician faces in processing all of that reminds me of something that comes out of the human factors area of expertise. That is, we need to look for more opportunities to build knowledge into the system and not pile it on the backs of clinicians. That's easy to say, but actually making it work is anything but easy — making sure there's integration of the clinicians and the things they bring to the table. What do clinicians do best, what does the human brain do best, and what does the computer do best? In any case, I think building knowledge into the system is the thrust that speaks to me, and research — implementation research, in this case — on how to do that is what is needed.

I have good news and bad news. The good news is, we have fantastic panelists and a very engaged audience. We also have a scheduled break — and if we don't break, the ceiling is going to come down on all of us to divide the room. I love the fact that so many of you are in line, but we're going to have breakout sessions allowing a lot of this ongoing interaction, so thank you for joining in.

I love that a lot of you are standing in line. We are going into breakout sessions, where you will be able to ask more questions. Let's take a short break.

[ Meeting is on a break - Captioner is standing by ]

Good morning again. I think we are off to a good start this morning. Just to make sure that everyone is in the right place: you're in the measurement breakout, one of three. Of course you will have an opportunity to go to one of the other two in the afternoon breakout sessions, and we will bring it all back together this afternoon. I have not had a chance to say hello to Dr. Shari Ling; hopefully she'll be in the room soon. Dr. Ling is the deputy chief medical officer for the Centers for Medicare & Medicaid Services, and she is playing an important role today: she's doing our report-out at the end of the day. We've got some top-notch notetakers who will try to capture our discussion today and support Shari in that role. Again, we want to make sure we capture all of your input and ideas. Before we dive in, let's make sure that we are connected to those on the webcast — hello to all of you out there who are not able to be here today. We will be turning to questions from both audiences, here and on the web; use the same email address mentioned earlier if you'd like to email questions into the session. With that, our plan for today: welcome to the session. We're going to dive into this topic, and we want you to be thinking about specific questions, points, and other things you may have to say about measurement and data on diagnostic error. One thing I want to do is let you know who's with me. I am joined by two leaders in the field, Dr. David Newman-Toker and Dr. Singh — two renowned experts in this area, so please know that you're in good hands. This is our plan for the session: we'll start with some introductory points specifically on measurement to get things rolling, and then we'll kick off the discussion with what you might consider canned questions. We will move as quickly as possible to get to you and get your feedback.
We are all on the hook to cover the crosscutting topics of provider education and training, so you will hear those topics intermingled with the questions that we have. As I mentioned, Dr. Shari Ling will be joining us.

This breakout session is on measurement. We'll revisit this slide — it uses some pieces that we have in this stacked bar graph, the one that you may envision for diagnostic safety — but there is much more to do, and we have talked about the importance of measuring and tracking impact.

We don't want to ignore the importance of that. A lot of what we have already done, and will do in the future, draws from the successful approaches that we've all had. I'll touch on these broad points. Diagnostic error problems are notoriously difficult to [ Indiscernible ]; as much as any aspect of healthcare, the diagnostic process involves many different places, stages, and types of information, as we've talked about — getting the right information to the right people. As we think about the complex facets of the problem, we also want to start thinking about measurement: are there measurement opportunities, is it feasible, and is it a high priority? The other challenge is that the underlying diagnostic process is not one single event — this is not whether an infection occurred or did not. It involves complex stories that we often don't know the answer to, and I think measurement around that is exceedingly hard. I like the fact that the Academy of Medicine highlighted feedback, and the lack of feedback, to clinicians. That certainly addresses all of our problems: clinicians need to know when things have not been done correctly. Building that into the session — the definition you have seen already; when we talk about measurement it puts it in a new light. What are all the different facets that could be measured? I think they did a good job of parsing out the individual components: what's the standard — the gold standard — for our expectations? There are a lot of opportunities for measurement. Measurement in particular is the focus of the fourth recommendation in the report. We didn't go into a lot of detail this morning, but this one has particular relevance for measurement: it focuses on the development of approaches to identify, learn from, and improve on diagnostic errors — that is, in fact, measurement. This whole set of recommendations is about measurement overall.
It includes mention of the role of accrediting organizations, and it also highlights the role of healthcare organizations in identifying diagnostic errors as part of their routine work. It's an already busy, overtaxed environment, and clinicians need to think about this, so capacity will be an issue. Beyond just doing something, I like the focus on systematic feedback — not just for clinicians but for various stakeholders — in an effort to encourage and guide improvement. Postmortem examinations are mentioned in particular; I would like the group's thoughts on that. Is this a place you all agree efforts should be focused? This recommendation also charges healthcare professional societies with identifying priority areas in their specialties. The next set deals with developing a reporting environment and a medical liability system that facilitates learning from diagnosis. To the learning point: it doesn't necessarily highlight the importance of learning as part of this organizational program, but that's in fact what I consider to be the primary purpose — learning through reporting. So keep that in mind. I think the liability-system piece is responding to some challenges within the specific set of issues for diagnostic care. The report in this set of recommendations suggests voluntary, not mandatory, reporting — an interesting point to keep in mind as we're thinking about measurement opportunities: voluntary versus mandatory. It also recommends we evaluate our own PSO program. Furthermore, it delves into the details of measurement around diagnostic error: modifying our formats to include diagnostic errors and near misses. This is not explicitly included in the format right now, but the scope includes a specific module on diagnostic error. Finally, the promotion of a legal environment that facilitates timely identification, disclosure, and learning from diagnostic errors, including things like the adoption of communication-and-resolution programs. We have supported quite a bit of work in that area.
We have a toolkit out that you may have heard of; its acronym is CANDOR. There are other approaches, such as safe-harbor policies, that we can get into if there's interest in this group. Finally, the report highlights the role of insurance carriers and captive insurers, and encourages their collaboration with healthcare professionals to improve diagnosis in healthcare. Before we transition into the discussion, I want to touch briefly on measurement for different purposes. We've observed in our program that research measurement does not always translate to operational purposes — there are some things that translate right over, but the kind of measurement we do in research often carries a heavier burden. Thinking about the purposes of measurement, these areas are research and medical liability — I think we've learned quite a bit from the latter; with the activity they have had, we've learned quite a bit from their analyses of claims in particular. Quality improvement is where our sweet spot is, and I want to spend a moment on an example of how some existing databases that weren't designed for enlightening us about diagnostic error and safety can actually be applied in this way: looking at aggregate numbers of events that could indicate potential diagnostic errors or conditions, and simply monitoring progress. This touches on getting us thinking about what we're talking about specifically. Just for a couple of minutes: this is our recent study using a big database — Claudia Steiner is probably in the room and knows all about this. There isn't time to get into all the details, but the main context is that we have national assets already in a form that we can apply to diagnostic error. This project looks at related admissions after discharge from an ED for chest symptoms.
Folks come to the ED with some chest complaint; they come back later and are admitted to the hospital for a related diagnosis. Could this suggest a misdiagnosis, or a missed opportunity, at their first visit to the ED? That's the general construct of this study. I should mention that this is part of the evolution of the resource — we have an emergency department database, and through patient linking and common identifiers we're able to connect these databases. This produced the finding that nearly 1.9 million ED visits from about 1,000 community hospitals fell into this category of discharge after chest symptoms from the ED. They came back with the things that you see here. This high-level finding gives us some sense of the size of the problem. The analysis suggests that only 0.2% — about 3,800 patients — were admitted for myocardial infarction within 30 days after being seen in the ED. If you're wondering about the message: this is viewed very much as a hypothesis-generating study. It's easy to do with this large data set; it may help focus attention on areas, give us a sense of the scope of the problem, and inform how we spend more resources to go deeper and look at the problem more specifically. It should not surprise us much that this is not a huge problem. Just in the course of my medical career, the past 25 to 30 years, the way we diagnose and detect MI in the hospital has changed. When I was in training, you had to wait around for a while; the research community has since produced better testing methods. This is a problem that still exists for a small number of patients, potentially, where further study is needed — but we can prioritize, and maybe this is not a huge burden on the system. Let's quickly talk about the findings. It's not a perfect system: some patients slip through the cracks and are misdiagnosed in the ED. That's based on the regression analysis that was done and the odds ratios for various factors.
Some interesting findings were suggested that will help us target further investigation. Some signals included patient factors, geographic characteristics, and the availability of a catheterization lab, which reduced the odds ratio — probably not too surprising. But this problem with diagnostic error is a systems problem: having a cath lab, and the weekend-versus-weekday ED visit, affect the odds. Is anybody surprised by that — an ED on the weekend? The availability of services is probably not so much about the ED itself but about the people doing the admitting. The complexity starts to unfold as we get into the study. There are also payer factors, such as readmission for the uninsured and whether they were covered by a public payer. So again, we're trying to shed some light on the problem and focus our attention where we need to focus it for further study. I'll get into the discussion quickly, but keep in mind the conceptual framework that the National Academy of Medicine gave us if you're thinking about measurement. The right side of this graphic, talking about outcomes, is where we ultimately want to get to — no doubt some measures we want to think about. We want to try to think comprehensively. So, this slide, as we shift to the discussion, lists some specific things to spur your thinking about the questions. Please keep these considerations in mind. What I was trying to do was lump them generally: what are we trying to measure, and what things are most important? I think Mark made the point very well that cost can help drive change, given the amount of resources we are spending on healthcare in general — putting some cost to this problem in particular. Other things you see there are common things and rare things. And on the right side: how and why to measure the things that we decide to measure — various methods, the role of reporting, population-based surveillance.
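The linkage construct behind this study — discharge from the ED with chest symptoms, then an MI admission within 30 days — can be sketched as a simple record-linkage pass over two visit tables keyed on a shared patient identifier. The tables, field layout, and counts below are made up for illustration, not the actual HCUP data:

```python
# Sketch: flag candidate missed opportunities by linking ED chest-symptom
# discharges to inpatient MI admissions within a 30-day window.
from datetime import date

ed_visits = [  # (patient_id, ED discharge date) — discharged home with chest symptoms
    ("p1", date(2014, 3, 1)),
    ("p2", date(2014, 3, 5)),
    ("p3", date(2014, 4, 10)),
]
admissions = [  # (patient_id, admit date, principal diagnosis)
    ("p1", date(2014, 3, 20), "acute MI"),   # 19 days later -> flagged
    ("p2", date(2014, 6, 1), "acute MI"),    # 88 days later -> outside window
    ("p4", date(2014, 3, 2), "pneumonia"),   # no prior ED chest-symptom visit
]

def flag_returns(ed_visits, admissions, window_days=30, dx="acute MI"):
    """IDs of patients admitted with `dx` within `window_days` of an
    ED chest-symptom discharge — candidate missed opportunities."""
    flagged = set()
    for pid, ed_date in ed_visits:
        for apid, adm_date, adx in admissions:
            if apid == pid and adx == dx and 0 < (adm_date - ed_date).days <= window_days:
                flagged.add(pid)
    return flagged

flagged = flag_returns(ed_visits, admissions)
print(sorted(flagged))                          # prints ['p1']
print(round(len(flagged) / len(ed_visits), 3))  # crude per-visit rate: 0.333
```

A crude flag like this is only hypothesis-generating, as the speaker notes: a 30-day return can reflect a new event rather than a missed one, which is why the real study layered regression and control comparisons on top.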
Large databases — what are the sources, and do we have the right data streams and data flows to support this? I touched on the purpose of measurement, and who the audience is — who will be acting on these data and these measures? It's probably important to think, not at the end but up front, about whether we are producing information that can drive change, with a change agent who can act on it. With that, let's turn to any quick questions.

Do we look at folks who did not, or who did, get admitted when there was not a diagnosis?

Are the slides going to be available?

Yes, they are.

What ED database did you use, and how did you combine them?

What were some of the patient characteristics associated with potential diagnostic issues in that same study? Payer was a big one — being uninsured. Some of it was the characteristics of the visit: whether you came in on the weekend or during the week, and the follow-up that followed. There were probably some other characteristics. The paper was just published, and I'm happy to give you the link.

We've been successful there with the kinds of things that you think about in detail. I want to shift so we can hear from our experts. For the sake of efficiency, some topics are conducive to having visual aids, so each speaker can answer with their slide presentation. How would you describe the current landscape, in terms of strengths and limitations, for addressing diagnostic error? What are some realistic expectations for progress in this area? Is there some low-hanging fruit — data that is more readily available and can be gathered with relatively modest investment? With that, David is going first.

I'd like to thank the staff for inviting us here today; it's an honor to be here. I'll also thank Jeff for lobbing out such an easy question. I am going to limit my remarks to a narrow area within the topic of measurement. We can measure things related to the causes of diagnostic errors and the potential solutions, as well as the overall burden; and then there are measurements related to the methods themselves. There are many different methods one can use to measure these parameters — I don't have time to talk about all of them, but I'm going to highlight one and give you an overview of the landscape and a framework for thinking about this. The one I'm highlighting is related to burden; it's very similar to the concept that Jeff put up about heart attack. Start by framing things as numerator-only methods and numerator-plus-denominator methods. From my perspective, the numerator-only methods fall into two big buckets: patient complaints and legal actions. These allow us to identify new types of errors, but they don't tell us much about the actual incidence of error or the overall burden we're dealing with. I think these are important measures, but they are not going to drive the bus for tracking the impact of solutions on reducing the problem — getting back to that stacked bar graph that Jeff just put up and said was our goal. Then there are two slides here describing four buckets' worth of numerator-and-denominator methods. These buckets are ones that I work with. Calibration procedures: you can in theory do this in clinical care situations with multiple symptoms; it's been done in a research setting, and I don't know whether these are things that can truly be operationalized. It's a little bit tougher, and overall I think they have limited scope beyond visual diagnosis. The next is probably the biggest bucket, the thing we've been doing a lot of: some kind of chart audit — conceptually, reviewing whatever tangible artifacts are available from the event.
It could be an entire videotape of an encounter in a clinical space, but usually it's reviewing the chart as an audit. The problem with that is that there is a lot of missing data with respect to the key piece of information that would have led to the diagnosis in the first place; if it had been there in the chart, you'd probably not have missed the diagnosis in the first place. Those are a little bit tricky and also effortful. Then there are these last two categories. There's systematic diagnostic ascertainment. These measures aren't perfect, but they would be considered the gold standard for certain types of diagnoses, and there are other types of gold-standard measures that one can bring to bear. Systematic patient follow-up falls into this same category. I think this last category is the one that offers a lot of promise as far as measuring the overall burden of diagnostic error: using artifacts of the interactions with patients and identifying performance indicators from large data sets. I'll show you these graphs; these are two separate studies done in partnership with AHRQ. One is what we've called the look-back approach, and the other, done by the UCSF group, is something we've now replicated through Kaiser Permanente data. We are looking at a disease of interest, in this case stroke, and looking back to see whether somebody had been seen in an emergency department with certain characteristic symptoms; and in a look-forward approach, if somebody had characteristic symptoms like dizziness or headache, looking forward to see if they come back for a stroke admission. You can see that there is a clustering of events that's not evenly distributed over time. So with the look-back approach, when someone has been seen with a stroke, they are most likely to have been seen in the emergency department shortly before, which is why this curve looks sort of exponential. On the right-hand side, the rate of return for stroke after discharge is increased for those first 30 to 60 days.
I think this signal is the one that's going to be relatively easy for us to turn into actual measures of diagnostic errors, or in this case, harm associated with diagnostic errors. I think this is a generalizable method. With control groups, as you see here, I think we can have some degree of confidence that this is real data. So in summary, the overview of the current landscape: there is no single measurement effort that will reveal the whole picture of diagnostic errors. We are going to need a palette of methods to get the big picture. Barriers include such things as lack of chief-complaint recording. They're going to limit our ability to do some things, but there are still things that we can glean from the data that I think are important, as well as from the other methods. The unsystematic methods that are available offer an incomplete picture and will be useful for identifying problems that we don't already know exist, or for delving into root causes. The measures that rely on gold standards are going to be restricted to research uses, and the electronic surveillance-type methods are going to be the ones that are inexpensive and promising. The work has to be done carefully to make sure that you're not just measuring noise or junk.
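The core query behind the look-forward surveillance idea described above is simple: for each patient seen in the ED with a characteristic symptom, check whether a disease-of-interest admission follows within a short window, then compare the rate against a control symptom or a later period. The sketch below is only an illustration, not the actual study pipeline; the record layout, codes, and the 30-day window are assumptions.

```python
from datetime import date, timedelta

# Hypothetical visit records: (patient_id, date, setting, code)
visits = [
    ("p1", date(2016, 3, 1), "ED", "dizziness"),
    ("p1", date(2016, 3, 18), "inpatient", "stroke"),   # return within 30 days
    ("p2", date(2016, 5, 2), "ED", "dizziness"),
    ("p2", date(2016, 11, 20), "inpatient", "stroke"),  # outside the window
    ("p3", date(2016, 6, 7), "ED", "headache"),
]

def look_forward(visits, symptom, outcome, window_days=30):
    """Flag ED symptom visits followed by an outcome admission within the window.

    A cluster of such pairs shortly after discharge, relative to a control
    symptom or a later time window, is the 'signal' of possible missed
    diagnoses described in the talk.
    """
    flagged = []
    for pid, d, setting, code in visits:
        if setting == "ED" and code == symptom:
            for pid2, d2, setting2, code2 in visits:
                if (pid2 == pid and setting2 == "inpatient" and code2 == outcome
                        and timedelta(0) < d2 - d <= timedelta(days=window_days)):
                    flagged.append((pid, d, d2))
    return flagged

print(look_forward(visits, "dizziness", "stroke"))
```

In a real claims database the same join would be expressed in SQL over millions of rows, but the logic is no more complicated than this.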

They asked about 10 years, and within 10 years I think it should be possible to have routine surveillance of misdiagnosis, particularly for acute disorders. I think these methods can be adapted similarly to diagnostic delay for cancer or other disorders. I look forward to seeing that over the next few years. Thank you very much. [ Applause ]

Good morning everyone, and thank you for the opportunity of bringing us here. Measurement really is the first step to improvement, but I would say there are two major challenges to why we have made very little progress over the last few decades. The first is that diagnosis and diagnostic error rely on the confluence of several disciplines: human factors research, the art of medicine, the knowledge of medicine. You really need an understanding from all of these disciplines to get it right, to get the understanding that you need. The second issue is that it is hard to measure what you can't define. We are still arguing about the definition of diagnosis; we changed the definition of sepsis last year. You have concepts that are vague, and diagnosis evolves over time, with uncertainties involved. There are a lot of things that look black and white but really are not. These are the reasons why we have made very little progress. So, everybody has their own framework; you've probably heard that most models are wrong but some are more useful than others. This is our model, and the reason we like it is that it reflects the clinical practice of medicine. If you look at the left-hand side of the screen, it encompasses the diagnostic process; this is where some of the concepts come in, because we are practicing within a complex work system. This measurement framework helps put in perspective that measurement should be reliable, starting retrospectively. And the goal of all these measurements, the reason we need to measure all of this, is for organizational learning, feedback, and improvement. We like this because it's the first step. You could think about this as the first step towards a better definition; that's what leads to the tools and measurements of where we need to go. So what is the low-hanging fruit? I think the time is right for retrospective measurement. You need a lot of clinical data, which is absent from many administrative databases.
So I would say that the signals are a bit weaker there, even though we have a lot of work from administrative data. There are stronger signals we can use: for instance, we looked at patients who presented with cancer, and almost one-third of those patients had missed opportunities in their care. About 10% of patients with abnormal test results lack timely follow-up. We've done a lot of work with triggers, so instead of reviewing 1,000 records we just review 100 that are at high risk for problems with diagnosis. We generally don't report diagnostic errors, and there's no good format to catch them even if we did. There is a paper coming out in a quality-and-safety journal where we took an approach to measurement related to diagnostic safety, the process and the outcome, to develop some measures, I would say measurement concepts; in fact there's a table which says these are measurement concepts that need to be discussed further. There needs to be a lot of discussion about what these measurement concepts are and what they should look like in the future. I'm a bit of an optimistic person, but at times I am reminded about how tough this area is. It is not even close to ready for any type of public reporting, and we need to be careful with what we do with all this measurement. We need to engage not just patients but also providers. They are not engaged in this area, and most of them are just seeing patients 8 to 5 and doing EMR work from 7 to 11. Essentially we need to engage providers in this field. We need to help other educational institutions step up. It cannot all come from the VA, Hopkins, and other institutions that have been doing this; we need other institutions to do this work, because ultimately we need better definitions, better standards, and better data on diagnostic error. I talked about measuring errors and harm, but I think we need to go to a measurement of diagnostic reliability as well as uncertainty. We are really struggling with uncertainty: how do we measure it and define it? And all of this data that we're going to collect --
If it is not used for feedback and learning, what good is it going to be? I'd like to thank you all; my time is up. [ Applause ]
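The trigger idea mentioned above, reviewing 100 high-risk records instead of 1,000, can be sketched as an electronic filter run before any human chart review. Everything in this sketch is hypothetical: the field names, the 60-day threshold, and the records themselves are illustrations, not the actual trigger definitions.

```python
from datetime import date

# Each record: an abnormal test result date plus any follow-up action date (or None).
records = [
    {"id": 1, "abnormal_result": "2016-01-05", "follow_up": "2016-01-12"},
    {"id": 2, "abnormal_result": "2016-01-05", "follow_up": None},
    {"id": 3, "abnormal_result": "2016-02-01", "follow_up": "2016-05-01"},
]

def days_between(a, b):
    """Days between two ISO-format date strings."""
    ya, ma, da = map(int, a.split("-"))
    yb, mb, db = map(int, b.split("-"))
    return (date(yb, mb, db) - date(ya, ma, da)).days

def trigger(record, max_days=60):
    """True if the record looks high risk for delayed follow-up and
    should go to manual chart review; otherwise it is skipped."""
    fu = record["follow_up"]
    return fu is None or days_between(record["abnormal_result"], fu) > max_days

flagged = [r["id"] for r in records if trigger(r)]
print(flagged)  # records 2 and 3 go to chart review; record 1 is skipped
```

The human reviewers then confirm or refute each flagged record, which is how the trigger's positive predictive value gets measured.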

As I said, they are both experts in this area, and I'd like to give those of you here the opportunity to ask some broad-based questions or clarifications for David or Hardeep Singh. -- Thank you everyone for this presentation; my name is Rob. One thing that I've been trying to do as a medical student is, everything that doesn't make sense to me, or everything that is happening in haphazard ways, I try to write it down and find a solution before it becomes something that I cope with and becomes a problem. You made that point, number six, at the end of your slides: trying to find a way to use the electronic record to get better follow-up, finding a way to communicate with patients to make sure that as outpatients you can track and manage their health care. One thing that I don't see a lot of, and I don't see the healthcare community utilizing, is electronic health records as a source of communication with the patient. We are starting to see breakthroughs with patient portals and different physician-based apps, but it seems to be lagging behind technology that exists and is available to us. It's using these electronic communication tools as a way to not only follow up with results and with the healthcare and quality improvement that we're delivering, but utilizing that as a method of giving patients the access and the ability to take charge of their own health care, so they have a tool they can use to follow up and ensure, once they leave the hospital, that they're not disappearing on us. There is the ability of smartphones: if I'm with a patient and I just text them, they will understand what I said a lot better. It gives them a source of information to make sure that we get the information across.

There was this idea that you needed to follow up people out of network. One problem with administrative data within a single hospital is that we often don't find out what happens to a patient: when they leave Johns Hopkins they are not in our health record anymore.

[ Captioners transitioning ] It might be the easiest way. You still have to be able to deal with the reality that the ones you cannot reach may be unreachable for the specific reason at the heart of your question, whether they suffered a diagnostic error. If they died because of a diagnostic error, they will be disproportionately missing relative to the patients you do reach. You have to be careful if you are not getting complete capture and you are losing a lot of your events in that kind of technology --

Find a diagnosis change and improve their communication.

You asked about patient-provider communication. I just want to give you an example of the reality. Of all the healthcare systems that have invested a lot of work in engaging patients through [ indiscernible ], only about half the patients really end up using the portal despite very aggressive recruitment efforts. Of that half, only about half of those, 25% or so, are actually using it, looking at the data, or transmitting data. We have a big divide, and it is not just about technology. We do have some technology to make it happen, but it is more of a divide in the culture of adopting technology. Doctors do not feel very satisfied when they are spending 30 minutes writing an email to patients, and I say that because I know somebody who does.

I think I saw Paul's hand first. -- That last comment from Hardeep tied into the whole conversation about patient-provider communication. Paul Abner. I saw the word 'loop.' Thank you. It led me to think about the OpenNotes initiative and where that fits in: are we systematically mining the editing, corrections, and additions in the OpenNotes initiative to find clues to errors or near misses in diagnostic care?

I just want to make sure there is nobody from OpenNotes here.

Okay.

I can answer, or I can defer -- there's Dr. Schiff.

-- Dr. Schiff: I think that is a bull's-eye in terms of the question. The patients are the ultimate feedback loop on outcomes, so they should be assessing our notes. We put an assessment down on this patient that we just talked about; how does that ring with them? They look at it and they say, my headache started weeks before I even started on the drugs, so I don't think it was due to this exposure. There should be an assessment of our assessment. These are the things we have traditionally hidden from them, which means we will have to write in a way they can understand. And then, number two, the outcome: so four weeks later, four months later, this headache that was nothing turned out to be a brain tumor. Only one out of four patients is engaged now; what are we going to do about that? We put in a grant to AHRQ to do that, and sadly they did not like it. Hopefully this is really -- if we don't hook up the patient and get them engaged as part of this process, we are really losing out.

They did do a study where they asked patients to correct the notes, and some of these were diagnostic-related things. It was published in the Joint Commission Journal, but it was more of a qualitative look at the types of things that patients corrected. I believe they have more work under review on the same topic.

Jennifer, I think Eric had a question first.

Let me say something quickly; I keep holding you up. There is a whole other set of work that is not necessarily classically in the diagnostic error realm, but I think it is detected in the original question about patients managing their own illness, or at least understanding their own illness. It is easy to connect that up with the definition we now have of diagnostic error and the failure to communicate; to me that signals an understanding. So there are these platforms like PatientsLikeMe; Marcus has supported work in this general area. It comes to mind in our center; we are certainly aware of it, patients with colitis and the challenges those patients face. I think the fact that they can not only learn more about the condition from their provider but from other patients -- I think there are many other opportunities here, and I think a portion of that actually does connect very well to the work we are talking about in diagnostic error and diagnostic safety. It is a bit messy right now, but there are tools and resources that are sort of coming out. I think our challenge is to find out how we most efficiently use those, and which patients they match up with best -- the point about the digital divide, health literacy, lots of challenges. But again, in general I think it is a positive thrust, and the technology is sort of finding its way in matching up with solutions.

Eric Marx -- this is a pragmatic comment, because when you are talking about data, and it will probably come up in the organizational discussion: as a practicing clinician and someone who has been doing patient safety for the last 15 years, the easy stuff has been things like pneumonia. What we are talking about here is considerably deeper, and the problem that I see among the practitioners all the time is: where is the time to do this? So it is not just a matter of collecting the data. If you work in a typical clinical practice, there is never any time. You have to figure out a way to integrate this into something that people are already doing, and if one hour of clinical time is now two hours of documentation time -- that is what I haven't really heard about: the challenge of not just collecting the data but how to format the data so that I don't miss somebody who had a creatinine recorded in the laboratory within the normal range five years ago and who only comes to me within a consult and is now at 1.3 instead of 1.2. That is a diagnostic error if you go through two cycles and you've doubled the creatinine, but if it is still in the normal range I did not see it. Most people read their [ indiscernible ] by what comes up and what is highlighted. How are you thinking about putting what you learned into a format that allows me to use it?

Just to -- I can't resist processing the question. I think what I read into your question is: what is the purpose of measurement if you are only talking about operational quality improvement or patient safety? I think capacity, clearly, is what you are focused on.

So I guess what I will say is this: I think that is a question that does not just deal with the issue of measurement; it deals with the broader issue of how we are going to address this diagnostic problem. There are not enough hours in a day and we have so little time; can we do this in a way that does not create a lot of extra burden? In terms of the individual clinician getting feedback, I think there are ways to do that. For instance in Maryland, the CRISP regional health information exchange has the capability for people to sign their patients up into their own set of profiles, and they get a report sent to them automatically when a patient of theirs gets admitted to another facility with some new problem, and it gives them at least minimal information. That feedback can follow, which is push rather than pull. It is not burdenless, but it is much less effortful than going out of your way, and we will refine it. We are working with CRISP to make that report more specific. I think ultimately that kind of thing will improve the clinician's ability to measure these things without adding a lot of burden. Really, data visualization is a part of that, which you were describing: having [ indiscernible ] values and having systems that make it easier for you to see things. I think they are critical to some extent; that is what the administrative-data-type things are. I agree with you that we cannot construct solutions to this problem, whether they are measurement solutions or things we try to do to tackle diagnosis, unless ultimately they are things that reduce burden and see use in clinical practice. For us, on the issue of stroke misdiagnosis, we are using telemedicine now to try to bring specialty care to the bedside in the emergency room in a way that is workflow sensitive. I think that will not only give us better diagnosis but it will also teach us a lot about the diagnostic [ indiscernible ]

I think we are going to be capturing a lot of quantitative data, and I think it is possible to use technologies in ways that reduce the burden and still get us the data we need to attack these problems.

I will start with technology. We have a huge electronic health record database. We found exactly the patients who were falling through the cracks of the healthcare system, and we pulled the emails of the providers whose patients had enrolled; we knew the data would be transmitted. Despite all our efforts of communicating this information electronically, and calling and emailing, it was a challenge to get them to take follow-up action on their own patients. They are really burnt out; they have no time. I think we need to step back and think as to why, and how we change their work practices so they can take up some of our innovations much more easily than in the ways we have tried. In the VA we had many studies on test result communication; that is an influence of people like David and me. The other approach is to develop tools, so we have created several tools online that provide guidance on use, including management of test results and management of in-basket notifications, which are a huge problem in the electronic health record; providers can't follow up on many of these patients. I think the emphasis is different, but I think your underlying point is really valid, because we need to create the time and space for the providers in their daily [ indiscernible ] to address the safety challenges and make this a part of their seeing patients. This quality improvement, safety improvement exercise has to become a part of what they do. That is where the challenges are.

I had a question; someone had brought up a slide about sources of data. I'm from the Leapfrog Group; my name is Melissa Danforth. The data show opportunities to use low-resource, big sets of data to identify at least trends and opportunities. I wonder if that includes pharmacy data? It seems like some things could be identified in pharmacy claims data, which are really rich in [ indiscernible ] detail: if I was prescribed an antibiotic because someone thought I had pneumonia, and in someone's example a week later it was cancelled and I got steroids, it seems that there could be some markers in the pharmacy claim to identify trends, or particular providers or groups of doctors who have more of these kinds of errors. That seems like a low-resource opportunity to get at some of this. I think going into the EHR for some of these things is very complex, though it gives you much richer data. There may be systematic problems in trying to use those data sets for measures where pharmacy data could be an opportunity.

Really great point.

We have not done that per se, but there is [ indiscernible ] a paper on indication-based prescribing, where essentially we are trying to catch people who should not be getting a medication because it doesn't tie well with the diagnosis. We actually tried to do a study using those prescriptions; I don't know if there are some data issues going on, or if it will get published. Do you want to reflect on your paper?

It is a nice piece putting together how you can use the pharmacy data and that information.

I guess we should think of diagnosis as sort of always tied to treatment; arguably we should not be trying to diagnose people for whom there is no treatment. We should prioritize things as a way of trying to [ indiscernible ]. It still offers a way to explore. I think it may be limited in applicability: you can pick up some of this stuff, but other things are not going to be in there.

We'll have to validate the data.

I will make one quick comment, which I think harkens back to Richard Donaldson's point this morning that diagnosis is the foundation for treatment. One of the things that we have completely not measured is the fact that all of our quality metrics for correct treatment of given diseases, for people who are carrying those diagnostic labels, have not accounted for the percentage of those problems that are actually diagnostic errors; they don't capture that at all. So the right treatment for the wrong diagnosis is not quality care, and I think this general theme is one that we should be exploring a lot more, to consider that there is probably a lot more harm out there than we know of from treatments given to people carrying labels that they don't actually need.

Okay, I see one hand. -- Hello, I am Lisa Sanders from Yale. I love the way you set up, David, measuring improvement by using that approach to identify the cohort of people who are at very high risk of having the wrong diagnosis. If we are going to actually make interventions, we need baseline rates for a lot of diagnoses, and it seems like those are really hard to come by. We would never get that beautiful graph that Dr. Brady seemed so pleased with, which is very impressive to me because it shows these trends changing, if we don't even have the baseline rate of how often things happen. David, your stuff is great for this one kind of problem, but how will you be able to see whether the training people are getting, and identifying these people better, brings that curve down? How are we going to do that? How will we know if it has worked -- how are we going to do that for a bigger set of questions?

Well, I will just say two things briefly about that. The first is around a model system. The concept of stroke in the emergency department is a metaphor for the misdiagnosis of any acute condition. What we have been doing is fairly thoroughly exploring that model space so we can understand how to parameterize and generalize it to other issues. We have already shown that we can do the same thing for headaches, and we are hopeful that you can do the same thing in primary care; you'll be seeing some of those data ahead at the diagnostic medicine conference. The Kaiser people, who have great claims records, were able to construct these curves from the data in a matter of a few days -- literally -- no problem at all. You can actually imagine doing this across a broad range of problems. I do think that if we are going to have metrics that are measured over time, the one thing that we have to really explore from a research standpoint is how big a sample you need and how big a time window in order to have a stable measure for these kinds of data, and there are some statistical challenges around that. But beyond that issue, if you are just asking the question regarding other different diagnoses, the answer would be yes.
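On the question of how big a sample and time window are needed for a stable measure, a back-of-the-envelope version using the normal approximation for a binomial proportion might look like this. The 1% event rate and the margins below are illustrative assumptions, not figures from the talk.

```python
import math

def n_for_margin(p, margin, z=1.96):
    """Smallest sample size so that the 95% confidence-interval half-width
    around an observed event rate p is at most `margin` (normal approximation).

    The rarer the event and the tighter the margin, the more patient-visits
    (and hence the longer the time window) a stable metric requires.
    """
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# A misdiagnosis-harm rate around 1%, measured to within ±0.25 percentage points:
print(n_for_margin(0.01, 0.0025))
# A common-event rate (worst case p = 0.5) to within ±5 points:
print(n_for_margin(0.5, 0.05))
```

For clustered or autocorrelated data the required sample grows further, which is part of the statistical challenge mentioned above.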

Hello. So, shifting the conversation from outcome measurement to process: it seems like one of the challenges is the lack of a gold standard for what the next diagnostic step is for every sort of clinical presentation, across the board. When we talk about the basic science of diagnostic safety, it feels like a missing piece is that gold-standard diagnostic process. David, your work still makes a ton of sense to me, but I think one of the gaps is: has the patient's dizziness been documented well in the EHR? Did they have [ indiscernible ] or some of the other symptoms; did they have headaches or not? It seems like by constructing a set of what the gold standard is -- what would an expert do at every step of this case -- you then have a reference point, and I think with NLP and other technologies we can start to automate the process to be able to understand where there is a deviation from the expert model. It is complex because of the vast number of chronic conditions and the vast number of presentations of them.

So that's a great question, and I think we have been doing some of this work. We have a small seed-grant collaboration between Kaiser and Hopkins where we are creating a large diagnostic performance dashboard, so that outcome measure we showed you is just one thing we are putting on the dashboard. I agree with you completely: that outcome measure is what you want to influence, but in order to influence it you need to be able to see the process as well. What we are trying is looking for keywords, 'positional,' things like that, that may be surrogates, or a full-on NLP approach delving into what the process is. You can identify mismatches relatively easily. For instance, just using administrative data, if someone walks out with a diagnosis of benign vertigo but has neuroimaging as part of that same visit, either there was a process failure or a wrong diagnosis, or both, because benign vertigo should be diagnosed at the bedside; you should not be doing neuroimaging on that patient. I think you can take those kinds of things and start monitoring. You are right, though; I do think ultimately some of these things will have to be tackled piecemeal, a whittling away at the battle, because each of them will be highly tailored and specific to a particular problem, and as a result it feels like something that is hard to do. But the truth is, if you took 50 smart people and they each tackled one of these problems, we would knock out three-quarters of diagnostic errors, or figure out how to solve or how to measure these problems, over the next five or ten years.

-- I'm not going to be very optimistic about that. [ laughter ] I don't know how many of you have been researching diagnostic errors, but when you just look at the medical records and you talk to people and you look at the story, it is just not possible to do the types of things that you are talking about. So while yes, it could be done in the future, you've got to have a lot of data to try to come up [ indiscernible ]

We talk about data that would be available to us in many, many years; it is good to think about all these things. We do not have gold standards right now. We do not have a gold standard on what the diagnostic output is; we cannot agree. I am not sure we can decide on all of the steps that should precede the diagnosis of pneumonia or MI. So I think we are far away from doing some of these things, but I would encourage everyone to start looking at their own records and talking to your patients and talking to providers about diagnosis, and you realize it is really tough to do these things. If you put 50 people in one room you will get about 120 different answers.

By the time I retire.

I think I am able to reconcile both comments: it is a very difficult problem, and I wanted to add quickly that, as much as any other aspect of safety and quality, the diagnostic process is so intertwined with the rest of healthcare that, whether it is a missing standard process or what the expected timeline is for high quality, it is really hard to tease that out. That is just a basic observation.

Paul, I know we are running out of time, so I will try to be brief, but I want to go back quickly because I am really trying to understand the issue that Melissa brought up and Gordy commented on, and that the panelists described as this weak signal: the disconnect between treatment and diagnosis, looking for this discordance as a potential diagnostic error. On Monday I had the honor to present in Italy at the WHO patient safety meeting on the issue of diagnostic error, and I got some pushback from a physician that accuracy of diagnosis is not the goal; getting people better is the goal. If you can do it with an inaccurate diagnosis but the right presumptive treatment or trial of therapy, isn't that all we care about? I did push back a little and said yes, as long as it is not an excuse for not doing the work. But my question is: if we look for discordance between treatment and diagnosis -- and we know physicians do trials of therapy or presumptive treatment because it is the most likely thing, and weighing the harms and cost of more diagnostic testing against just going to a treatment trial may favor the latter -- is that an area we need to be looking into? How do we know what the baseline of presumptive treatments is, and how do we measure the effectiveness of this? Because I have started hearing a number of times that that is a valid way to treat a patient, to manage care, but we need to figure out when it is appropriate and when it is inappropriate, and I have not heard anyone talk about that measure.

We have done some work on measuring the mismatch between treatment and diagnosis. For instance, if you look at what actually happens in the emergency department with the average dizzy patient nationally, based on national data, the vast majority of patients who leave with some sort of dizziness are given the medication. It doesn't matter which diagnosis they get; it doesn't matter whether they get a benign vertigo diagnosis, and only the [ indiscernible ] diagnosis should get the medication, but there is no variation across different diagnoses. I think that is a marker that this is just a generic solution to a problem: okay, you are dizzy, we give you a pill. I do think somewhere in there there is the potential to find that kind of signal. It is not to say that these therapies are never appropriate; it is just that what should be happening in this situation is people should be coding more specific diagnoses and giving therapies accordingly, and it should not look the same across all diagnoses. I think there is some signal that is potentially in that data.
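The "no variation across diagnoses" signal described above could, in principle, be computed from linked pharmacy and diagnosis claims as in the sketch below. The diagnosis labels, the claim layout, and the idea of flagging near-zero spread are all hypothetical illustrations of the point, not the actual analysis.

```python
from collections import defaultdict

# Hypothetical joined claims: (diagnosis_code, symptomatic_drug_prescribed)
claims = [
    ("benign_positional_vertigo", True),
    ("benign_positional_vertigo", True),
    ("vestibular_neuritis", True),
    ("vestibular_neuritis", True),
    ("stroke_tia", True),
    ("stroke_tia", True),
]

def prescribing_rates(claims):
    """Rate of the symptomatic drug per diagnosis.

    Near-identical rates across diagnoses that should be treated differently
    suggest one-size-fits-all 'you're dizzy, here's a pill' care rather than
    diagnosis-specific treatment.
    """
    counts = defaultdict(lambda: [0, 0])  # diagnosis -> [prescribed, total]
    for dx, prescribed in claims:
        counts[dx][1] += 1
        counts[dx][0] += int(prescribed)
    return {dx: p / t for dx, (p, t) in counts.items()}

rates = prescribing_rates(claims)
spread = max(rates.values()) - min(rates.values())
print(rates, "spread:", spread)  # zero spread across diagnoses: possible generic-treatment signal
```

A real analysis would adjust for case mix and require minimum counts per diagnosis before interpreting a small spread.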

We can come back to other questions from the audience, but I want to do what we said we would do and shift a little bit. We have touched on issues related to professional education and training, both in the first session today and earlier when maintenance of certification and feedback to clinicians were mentioned. I'm going to throw this question out to everybody in the audience, in particular those of you who are interested in professional education and training issues: what are some measurement challenges at the early career level, for example residency, as contrasted with those that might occur for professionals who are more established, maybe mid or later career? I'm just going to add more into the question: technology aptitudes may be different, the way they were trained, and so on. Again, training issues, specifically with respect to diagnostic error: what measurement challenges in particular?

Frank Thompson -- I think the educational through-line is very approachable. The basic notions go like this: diagnostic error has largely been a concern in the patient care environment, and rightly so. But I am convinced that a root cause of diagnostic error is the medical education process. In 2015, medical education does not have a codified, evidence-based approach to training toward or assessing diagnostic capabilities. We have no idea whether or not a medical school has the capacity to report performance on rotation exams -- what percentage of the 500 most common diseases can the average medical student correctly diagnose? Same thing goes for residency training programs: of the 150 most commonly reported diseases in the program, what percentage of typical and atypical cases can these residents correctly diagnose? Licensure boards? No meaningful feedback from licensure boards in terms of a thing called diagnostic confidence. So we have a captive audience. There is no reason why we cannot begin to support medical training programs at the undergraduate or residency level. What are the common or important differentials for the patient problems? What are the typical and somewhat less typical presentations of these diseases? We can begin to create a standard by which we can, at least from a psychometric perspective, reliably and validly measure a student's capability for these common or important problems. Is this something that organizations are willing to support? Certainly very low-hanging fruit, a very meaningful and measurable outcome.

There are concepts that we really need to teach future trainees, and two of them come to mind; some may be derived from the cases you are talking about. One is calibration, which is essentially alignment between your confidence and your accuracy, and you have shown that primary care physicians are overconfident. We are always sort of struggling between underdiagnosis and overdiagnosis. We often don't speak up when we need to. If calibration could be better measured and improved through some of these cases, that would be one way to go. The other concept, I think, is uncertainty. We really don't do a good job of teaching how to deal with uncertainty better through education. I think going through some of these cases over and over again will make people realize how important the concept is.
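The calibration concept mentioned here -- the gap between stated confidence and actual accuracy -- can be made concrete with a small sketch. The case data below is invented purely for illustration; real calibration studies bin many cases per clinician and per confidence level.

```python
# A minimal sketch of "calibration": how well a clinician's stated confidence
# in a diagnosis aligns with how often that diagnosis turns out to be correct.
# Case data here is hypothetical.
cases = [
    # (stated confidence in diagnosis, diagnosis was correct)
    (0.9, True), (0.9, True), (0.9, False), (0.9, False),
    (0.7, True), (0.7, False), (0.7, False),
    (0.5, True), (0.5, False),
]

# Group cases by confidence level and compare confidence to observed accuracy.
by_conf = {}
for conf, correct in cases:
    by_conf.setdefault(conf, []).append(correct)

for conf in sorted(by_conf, reverse=True):
    outcomes = by_conf[conf]
    accuracy = sum(outcomes) / len(outcomes)
    gap = conf - accuracy  # positive gap = overconfidence
    print(f"confidence {conf:.0%}: accuracy {accuracy:.0%}, gap {gap:+.0%}")
```

In this toy data, the 90%-confidence cases are only right half the time, which is exactly the overconfidence pattern the speaker describes.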

I would just make two quick comments, one about skilling and one about de-skilling. When we train people to do diagnosis, I don't think we are following, as Frank suggested, evidence-based principles. I am a believer in the Jeff Norman school: really, what we often need to do from a diagnostic standpoint is learn from cases that are numerous and varied. In order to do that in a real world where there are practical constraints on what you can actually see during the course of any fixed period of training, you have to simulate the experiences. You cannot do it with one high-fidelity, high-stakes simulated patient at a time; you have to do it with medium-fidelity, screen-based case simulations where you can see 100 different variations of the dizzy patient, and you start with the ones that are really obvious and work your way toward the ones in the middle that are nuanced problems, differentiating the benign from the dangerous. I think those are things that could lend themselves not only to education but to measurement. On the de-skilling side, in terms of training and residency, I think we do have a problem with the pace of the medical care we have been giving, and the way throughput-driven medical care in this country basically says: look, you have to get the patient out of the hospital fast, so the best thing to do is take a shotgun approach, order every imaginable test on day one, and sort it out, because that is how we get them out of the hospital, rather than taking a staged, in-series, reasoned approach. When that is done for residents, it instills a shoot-first, ask-questions-later mentality: let's see what that neuroimaging shows, and then we will think about the case.
So I think we need to be thoughtful about that, and maybe some of the payment reforms will enable us to get back into that space; in the state of Maryland now, with global budgeting, they may actually be able to drive better care and better education.

That's great. I just want to add quickly to the point -- we are obviously, and understandably, talking about more cognitive issues with respect to training. This is a personal bias that I have: the theoretical underpinnings of arriving at the right diagnosis aside [ indiscernible ], this is truly a systems problem. Clinicians are often the leaders in their organizations, and their awareness of this broader systems view, in addition to what is happening [ indiscernible ], matters -- so I just wanted to plug for that. Let me shift this again; we have covered that fairly well. Is this on the same point? Okay. Go ahead.

I'm going to throw this out there. All of the questions on our exams are coded by things like diagnosis, by definition; there are supposed to be scenarios related to diagnosis, and we have done that for years. It seems like a potentially good data source to start to examine changes in knowledge about diagnosis, how knowledge might be related to some of these issues, or whether it is related at all -- maybe even when they know what they should do, because of time constraints, they don't act correctly. That is just another source of data.

That's a great point.

So, just about the knowledge issue: I think not all diagnostic errors are created equal. When I think about some of the tragic misdiagnoses that I have seen, there was one where a patient who was 75 years old came to her primary care physician for four weeks with headaches. A simple diagnostic reminder would have done the trick. She did go blind; the physician just was not thinking about it. A diagnostic reminder that cued her regarding headaches in older patients would have helped. The dizziness-and-stroke problem is a totally different issue. Every emergency physician on the planet knows about and is worried about stroke in a dizzy patient. The problem is not that they are forgetting to think about stroke; it is that they do not know how to differentiate stroke from other diseases. That's a totally different animal. I think knowledge gaps are really important in some diagnostic problems and are ones we really need to be dealing with, but unfortunately you have to tailor your solutions to each of these problems, to where the actual defects are, if you are really going to make things better.

To describe it quickly: knowledge gaps are probably underemphasized, and they are coming up very strong in our work. We are seeing cases over and over again -- we have now seen so many cases of neck pain with clear red flags, fever, numbness, being sent home from the emergency room time and again, at different institutions, not even just one. I think there is definitely a knowledge issue. We are sort of in this let's-cut-down-on-MOC mode -- we need to minimize all of that -- but we probably need to rethink how we institute and instill knowledge again. There could be other ways as well.

Okay. I think we have addressed pretty well the importance of patient engagement; that has come up pretty frequently. One aspect of patient engagement that we have not touched on is disparities, and I want to make sure we give it its due attention. This is being thrown out to the whole group: what data and measurement tools would be most useful to address disparities? Among the things we think about, we have the disparities report, and clearly we try to think about -- welcome to our new home.

Again: data and measurement, diagnostic errors, and particularly disparities. Out to the audience.

We have done some work on disparities using large data sets, in conjunction with AHRQ as a collaborative. I think large data sets are good for evaluating these sorts of disparities; they take away some of the variance around differences between individual cases. What we have seen, which has been surprising, is that women and men in their 20s are more likely to be misdiagnosed. I think that will be most easily measured using larger administrative data sets.

I could add that this is where triangulation from different data sources can be really useful. We had one study, for instance, a single-site record review study on colorectal cancer patients, where we found more missed opportunities in the elderly and in African-Americans, and just recently a BMJ Quality & Safety study looked at lung cancer, a different cancer, and showed the exact same thing. So, different data sources, but triangulating: there is some truth to the fact that elderly patients as well as minorities are probably being misdiagnosed more often.

I want to comment on the elderly issue. For strokes, it is young people that are more likely to be misdiagnosed. It is an issue of expectation and probability: young patients are seven times more likely to be misdiagnosed when it comes to stroke. Some of these disparity issues can be reversed in funny ways depending on our prior expectations.

I think around the topic of disparities, a lot of the focus of disseminating, assessing, and implementing these kinds of quality improvements has been on things that are tried and true, which is primarily adult populations who are hospitalized. How are you planning to broaden this scope to include children, and obviously the underserved populations? They are not included; they have not been the focus of a lot of this work.

You were just talking about funding a project on pediatric diagnostic errors, so I think children are actually part of the group; it is just a matter of broadening it. It is really hard to do in the outpatient setting, though I will say that we have been able to engage several practices -- I think one of yours is a really good site as well. But you are right. We need to scale beyond the inpatient setting and adults.

In terms of how we do it, this will be a boring bureaucratic answer, but we emphasize in our funding opportunities that priority populations must be addressed. Just like with every other problem we are addressing, we need to prioritize: what are the most important issues for priority populations, based on what the data show us. That is one of the reasons we included that study at the beginning -- just to show that some of the early signals, with relatively not a lot of work, can suggest prioritizations. [ indiscernible ]

This is perhaps a more specific question. The concerns that I see from gathering patients' stories are about the truly vulnerable populations -- the mentally ill, the homeless, the overweight, and the elderly, which you have already talked about -- people who are much more prone to prejudice than others. I was just wondering what is being done to address that?

I want to plug one other thing that is applicable to disparities but also to this topic overall, and to patient safety and quality overall: AHRQ has had long-standing support for research in simulation, and the applicability of this methodology cannot be overstated. It just so happens the same expert who carries most of the water for us on diagnostic error also carries a lot of the water on simulation, but clearly we have a whole team that helps support that. I think there are many more opportunities to apply simulation to diagnostic error and other safety and quality problems, and we are open for business on that point. There is a grant mechanism for those of you who know the grant world; please make use of it. We are at time. Real quick -- Karen?

We focus a lot on the ability of people to respond to surveys and on technology, and the differences in the technology that people use. I wanted to tie this disparities issue to a comment regarding electronic methodology. There are big differences in the ability to find, use, and have access to patient portals and other electronically based methods of tracking patients, and I think those differences are especially evident in the elderly. I think there will need to be other methods developed to track priority patients.

So with that, I will give some information I think you'll want to hear. It has to do with lunch. Lunch is going to be located to the right of the conference center, toward the windows. I think that is in that direction.

Not outside; you will be protected. In fact, I should also remind you -- it is not my rule, but remember, if you leave the building, you will have to go through security again. We have lunch here if you pre-ordered it, and you are also able to use the cafeteria. If you pre-ordered lunch, I think that is available, and then hopefully you all will be able to go to a different breakout session; when you come back to this one, it will be a real success. Can I ask Shari -- do you mind raising your hand? Again, Shari will be reporting out on what we have done here and what we do in the second session. We look forward to that this afternoon. Thank you for your help.

[ Lunch ]

This tracks with many of the Institute of Medicine's reports over the last 15 years. We should design work systems to support patients, families, and healthcare professionals in the diagnostic process and ensure effective and timely communication with diagnostic testing specialists such as radiologists and pathologists. Recommendation six, which tracks with the goal of developing a reporting environment and medical liability system that facilitates diagnosis and learning: the Agency for Healthcare Research and Quality and others should encourage and facilitate the voluntary reporting of diagnostic errors and evaluate the effectiveness of patient safety organizations as a major mechanism for voluntary reporting. States should also promote a legal environment that facilitates timely identification, disclosure, and learning from diagnostic errors: adoption of communication-and-resolution programs, demonstration projects of alternative approaches to resolution [ indiscernible ], and professional liability insurers should collaborate with healthcare professionals on education. Recommendation seven, which tracks to the goal of a payment system that supports the diagnostic process: CMS and other payers should provide coverage for evaluation and management activities, including time spent by radiologists and others advising clinicians on diagnostic testing. Some 30,000 tests, 10,000 of which are molecular diagnostics. [ indiscernible - multiple speakers ] CMS [ indiscernible - multiple speakers ] [ music playing alongside speaker. Unable to hear ]

[ music ] There we go. Welcome back, everybody. Welcome back to the meeting. I hope you had an enjoyable lunch. I will say, for those in the audience who did not follow the first measurement group, it was pretty engaging; we had a lot of good information, and I am expecting the same thing from you all. Was the first session you all went to interesting? Sort of a mix? How many were at IT? How many at organizational factors?

So, all right. Well, our plan -- let me cover logistics first, just to make sure. We do still have folks on the webcast, I think. Around the room -- hopefully you all are able to hear us okay. And we did not get to web questions in the first round of the measurement breakout.

We did not get any.

I guess there were not any.

Please know that if you are on the web, we want to hear from you. We have plenty of engagement here in the room -- we will definitely have enough to get us through the session -- but please let us know if you have questions or comments. Let's see. I think that is it in terms of housekeeping. So what we're going to do right now: we are going to get to discussion quickly, but first I will just set the stage. I think one of the reasons we had such an engaging discussion in round one of the measurement breakout was that we have these two experts with us on the panel today. I am joined by two folks who are leaders in the field of diagnostic error research: the first, Dr. David Newman-Toker from Johns Hopkins, close by in Baltimore, and Dr. Hardeep Singh. They have been focused on diagnostic errors, and they have been steadily drawing attention to the subject, both to the field and to our ability to address the problem and do the work and research -- I would add, specifically, measurement. That is what we are focused on in this session. Here is our plan for the session overall. I'm going to get out of the way pretty quickly, but I will give a brief introduction to hopefully spur some engaged discussion. Next we will kick off the discussion with some canned questions, and both David and Hardeep have slides to elaborate on their answers; I think that will help us with the discussion following, but we want to move to open discussion pretty quickly. I just want to remind everybody that we are on the hook to include a focus on two crosscutting issues: patient and family engagement, and training and education. We were able to get to those pretty well in the first round, so we will certainly take the temperature of the room in terms of where we want to focus our discussion with you. Also be thinking about things that you would particularly like us to address.
And then finally -- I don't know if she is back here yet; we had to work over lunch a little bit, trying to pull things together -- Dr. Shari Ling, whom I think I mentioned in the first session, is Deputy Chief Medical Officer at CMS, and she is helping us out. Dr. Ling will report out. She has the tall task of representing what we discussed in the first round of the breakout and in the session here, so try to help support her and make our points clear and specific; again, we are trying to capture everything for use later. Okay. So -- we have seen this slide already in the plenary session. I am putting it back up here because I want to underscore the importance of a potential goal, if you will, for diagnostic error. It would be nice if we had the stacked bars for diagnostic error. We don't yet, although we do have some insight, and we will hear a little bit about that as we kick off the session. The other thing is the trend: being able to track what actually is happening, and the role that has in driving further improvement, is really critical. I think we are starting to get more insight into how to measure more efficiently -- that is another point that has come up, feasibility, and I think that kind of tension was present in the first discussion; we will talk a little bit more about it. Not only for patient engagement, the impacts that matter are the things we should be focusing on. There are lots of things we could measure; we need to be able to draw the line from them to impacts like these, or others, that actually matter to patients.
These happen to be the ones for general patient safety in the hospital, but they are things you don't have to be an expert to understand. Clearly there are things that do require expert focus behind the scenes, in the science -- that is what produces these kinds of impacts -- but again, we want to keep these in our sights, because ultimately that is what helps us gain more support for this work and makes sure we are doing things that matter to folks. This audience probably would not be here with us today if you did not appreciate most of what is on this slide, but let me just touch on it, because I think it helps set the stage for measurement specifically. So why is it so difficult to address diagnostic error? Clearly, problems with measurement and definitions -- the National Academies report is pushing us further down the path in that regard, but we still have challenges. As much as any aspect of healthcare, the diagnostic process involves many different people, places, stages, types of information, and information exchanges between people and different organizations, and these and other factors complicate not only our improvement efforts but our ability to define standard processes, which are often not there. And then, of course, what are the points along those processes that represent opportunities for measurement? That complexity is what we are up against, whether we are talking about measurement or improvement. Compare that to a problem like a central line infection: a fairly discrete event in both place and time, within a fairly circumscribed amount of time, where the risks that set up the event are pretty proximate to the event itself.
I think diagnostic error has many more levels of complexity and confusion in comparison to a problem like central line infections, but again, there has been some progress in spite of that challenge. And then finally, a lack of feedback and uncertain ownership of the problem are definitely barriers to our progress, not only in measurement but also in improvement. So, we do have the definition; that has already been touched on. I think Mark Graber did a really great job of teasing out what is behind almost every word in the definition. Certainly keep that in mind as we talk about measurement specifically here. So now we will move to this level of detail. Hopefully you have had a chance to delve into the report a bit -- the National Academies report. [ indiscernible ] I have pulled out two sets of recommendations that are particularly relevant for measurement. The first group is recommendation set four, and it really is right on the bull's-eye of measurement issues. Simply stated, it is a focus on the development and deployment of approaches to identify and learn from diagnostic errors and near misses in clinical practice. So, things like a mention of the roles of accreditation organizations, the Medicare program's conditions of participation, public reporting -- as I go through these, you get a sense that this is not one segment of healthcare's responsibility. It is actually across the whole system: every stakeholder you can pick has some connection to diagnosis and the problem of diagnostic error. Clearly payers and accreditors; and the recommendations here also highly signal healthcare organizations themselves to address diagnostic errors as part of their routine operations. So I think quality improvement -- that is a lot of what we spend time on.
Discovering what is a safe practice, and how it gets integrated and added to routine operations -- that is not the case now, which is what we are hearing. There is not much of a systematic focus on diagnostic error, but there are some exceptions -- good exceptions and some bright spots. Personally, I really like the focus on systematic feedback for different stakeholders, in an effort to encourage and guide improvements. I remember when the report was released, I think they did a really good job of underscoring this point: providers don't really get feedback about patients they never see again, where a mistake or error may have occurred. There is no mechanism for that -- a lost learning opportunity. So is that an opportunity for measurement also? Finally, with respect to recommendations, the National Academies charged healthcare professional societies with identifying opportunities to reduce diagnostic error within their specialties. I want to quickly move to a second set of recommendations that is also important for measurement, and that is developing a reporting environment and a medical liability system that facilitates improved diagnosis and learning from near misses. A couple of things in this category that the report recommends: the report takes the stand, at least where we stand right now, for voluntary reporting. It does not think we are ready for mandatory reporting, considering the state of measurement. [ audio lost ]

[Captioners transitioning]

These are a few examples of activities that have measured diagnostic error. One of the things that came out of the first discussion pretty clearly is that we need to be clear about the purpose. In light of the fact that we are still in voluntary mode, for uses that suggest connections to payment and reimbursement, and potentially even accreditation, we may not be ready. However, are there opportunities for quality improvement? Clearly yes, would be our answer, but we need to be more specific about what the research suggests. Again: what is the purpose of measurement? I have mentioned medical liability claims as a source of information. It is clearly in the interest of the medical liability community to understand diagnostic errors, which are a big contributor to safety issues, and I think that is one of the key reasons we have learned so much from that community. Clearly, it is a different purpose for measurement, though there are some overlapping interests and objectives there. And finally, if we are talking about research: be aware that the burden of measurement that often comes with a research project is not something there is capacity for in an operational environment. What is done in research projects is informative and helps expand our understanding, but it often does not translate, from a capacity standpoint, into an actual healthcare delivery situation.

What I want to do now is quickly run through an example, just to get you thinking about some methodologic issues and specific opportunities that exist with a big data set: the Healthcare Cost and Utilization Project (HCUP), which hopefully you're familiar with. That is a data resource we have supported for several years now, with lots of interesting information, and this is one example of a study that used it. This particular work was undertaken with a focus on emergency department discharges for chest symptoms. It came out of the HCUP inpatient data set; HCUP has expanded over the years in a very good way, and it also has an emergency department data set. What is even more interesting, patients can be connected across these two data sets, and the learning that provides is powerful. This is a good example of that. The study sample consisted of patients treated and released from the emergency department with a primary diagnosis of nonspecific chest pain, coronary atherosclerosis, or other heart disease, who were then admitted to the hospital within 30 days following that visit. The idea is: was something potentially missed in that visit? The cases that turned up might be suggestive of a diagnostic error. That yielded about 1.9 million emergency department visits from 1,000 community hospitals in this particular population. Here is one high-level finding, and it gives some sense of the size of the problem: this analysis suggested that only about 0.2% of the 1.9 million, or about 3,800 patients, were admitted for myocardial infarction within 30 days after being seen for what could have been a related problem. Clearly, this is a hypothesis-generating study. It definitely can focus us on the areas where we need to look further.
Where are there opportunities, by diagnosis or situation, where we can delve deeper and maybe get more specific information to enlighten us about what is really going on here? Should it surprise us that this is not that big of a problem? The evaluation and detection of heart attack has received enormous attention; in an earlier era these patients were kept in the hospital a couple of days to rule that out, and tests now have better sensitivity. Thankfully, while this problem is not completely solved, we get a sense of how big it is in comparison to other things. So, quickly, the kinds of information that came from the regression analysis done as part of the study: the odds ratios for various factors suggested some very interesting findings that would help us target further investigation. Some of those signals were things like patient factors and geographic characteristics. Others were things like availability of a catheterization laboratory, which reduced the odds of admission -- so we are talking about structural measures, in this case the microsystem and what resources are available for a particular patient. If you don't already, you start to appreciate the complexity of the questions we are asking with diagnostic error. Other factors: weekend emergency department visits, not surprisingly, increased the odds of admission for other cardiovascular conditions, and the odds of readmission for uninsured folks or those covered by a public payer were two to three times higher than for those with a private payer. So again, early insight into problems, and this is an example that could be replicated for many different clinical conditions and scenarios. This is the kind of thinking we are trying to spur, and we want your thoughts about this approach to measurement.
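The linkage logic behind this kind of study can be sketched in a few lines. This is a toy illustration of the look-forward join -- treat-and-release ED chest-symptom visits matched to MI admissions within 30 days; the records, field layout, and diagnosis strings are hypothetical stand-ins, not HCUP's actual format.

```python
# Toy sketch of linking a treat-and-release ED cohort to subsequent
# inpatient admissions within a 30-day window, to flag possible missed MIs.
# All records and field names below are hypothetical.
from datetime import date

ed_visits = [  # (patient_id, visit_date, primary_dx)
    (1, date(2014, 3, 1), "nonspecific chest pain"),
    (2, date(2014, 3, 2), "nonspecific chest pain"),
    (3, date(2014, 3, 5), "coronary atherosclerosis"),
]
admissions = [  # (patient_id, admit_date, primary_dx)
    (2, date(2014, 3, 20), "acute myocardial infarction"),
    (3, date(2014, 5, 1), "pneumonia"),  # outside window / unrelated
]

flagged = 0
for pid, vdate, _ in ed_visits:
    for apid, adate, adx in admissions:
        if (apid == pid and adx == "acute myocardial infarction"
                and 0 < (adate - vdate).days <= 30):
            flagged += 1
            break

rate = flagged / len(ed_visits)
print(f"possible missed MI rate: {rate:.1%}")
```

At HCUP scale this same join runs over millions of visits, and the resulting rate (about 0.2% in the study described above) is the high-level burden signal, with regression on patient and hospital factors layered on afterward.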

Obviously we will keep in mind the conceptual frame for the problem of diagnostic error overall. The right side of this graphic is outcomes; we can refer back to this if we need to. But where does measurement pull out information that will help us understand the problem? The final slide here, just to get us thinking in a more concrete, specific way: when you really boil it down, we will ultimately be talking about what things we are going to measure and where we are going to measure them -- settings, timing, lots of factors. Where do we want to focus our attention with respect to clinical conditions? What are the various contributing factors? If this group is interested in cost, we have questions we can delve into there; if that is where your interests lie, we can accommodate that. Once we determine what to measure, we consider how and why we are going to measure it. Again: what is the purpose of measurement? Which methods will we use? Is there a role for reporting? Reporting is not unhelpful overall, but clearly underreporting is an issue, and that would likely be the case for diagnostic error. Those are the kinds of things we would like you to be thinking about and asking us about as well. With that, I am going to stop and ask for any basic questions from the audience before we turn to the first question for our experts.

Hearing none, I'm going to ask the first question.

The first question is quite general, actually: how would you describe the current landscape in terms of strengths and limitations of various measurement approaches for assessing diagnostic error? What are realistic expectations for progress in this area in the next 10 years? Is there some low-hanging fruit -- data that is more readily available, that can be gathered with relatively modest investments -- that would help make progress and stimulate interest in reducing diagnostic errors? Where are we right now?

As I said in the first session, a nice, easy softball over the middle of the plate: three minutes to describe all measurement in diagnostic error. Let's give it a shot. I think Jeff alluded to this in his talk, this idea that we've got the burden of diagnostic error, causes, solutions, and methods; you can talk about measurement related to any of these. I will restrict my discussion largely to the issue of burden. Again, different methods might be used to measure these different concepts. I am going to highlight and illustrate one in particular: using large administrative data sets to look for unplanned events. But before I do that, let me give you a little bit of a picture of how I frame this problem. I have carved it up into two big buckets: numerator-only measures, and numerator-and-denominator measures. Under numerator-only measures, to me, there are two medium-size buckets: incident reports by providers, and patient complaints or legal action -- the things that bubble up. I think these are in common usage and probably make up the bulk of what we see as measurement related to diagnostic error. The limitation of these numerator-only measures is that you can't measure frequency or incidence, and you can't easily use them to track the impact of your interventions to reduce diagnostic errors. These are going to be good measures to alert you to a problem, or to do root cause analysis -- to delve deeply into a particular kind of problem you know exists -- but they are not going to help us track diagnostic errors over time. They are important for those uses but not others.

I have two slides on numerator-and-denominator methods. I have four medium-size buckets, two of which are on this slide and two on the next. First, calibration procedures. This is most often used in the diagnostic laboratory, in the clinical lab, where people use standardized [Indiscernible] and they run a standard [Indiscernible] on the machine to make sure it's calibrated properly. In theory you can do the same thing, and some places do, with respect to radiology, and to a lesser extent probably with pathology. For anything with a visual diagnosis component, you can have a standard you run through your process of diagnosing things and see how you assess that standard. You can do the same thing with standardized patients, but it is harder and more expensive. Overall these end up being of limited scope; you can't do it for everything. Second, independent review and verification. This is where most people end up operating in terms of the research base, thinking about delving a little more deeply into finding errors. That includes anything from direct observation all the way to chart audits. The more you get into direct observation and videotaping encounters, the more labor-intensive it becomes. The problem with charts, of course, is that they are missing data. Often they're missing a biased subsample of data, such as the key data you need, and the data are missing in a lot of cases. Those approaches have their limitations. These are going to be potentially helpful methods, especially using techniques that were suggested to trigger and enrich the sample of charts you're looking at, but I think that overall attention may be better spent elsewhere in terms of moving the larger issue of measurement forward. So here are the other two medium-size buckets. One is systematic diagnostic ascertainment. This is something that has been proposed by a number of people: radiographic autopsies, routine autopsies. That will be good for diseases with a structural component.
It will not be a gold standard for cardiac arrhythmia diagnoses, but anything with structural elements can potentially have a gold standard. These are going to be expensive and difficult, and there are issues of labor force availability for these kinds of measures. In general, gold standard measures, whether autopsy or other things that are gold standards for any given disease, are going to be things that are more often done in a very specific scenario, in a situation where there is a very specific need for a particular diagnostic problem that requires a rigorous gold standard in order for us to ascertain where we stand. There may be simpler methods of reaching out to patients and systematically calling them, not gold standards, but systematically trying to seek out the answers for when diagnostic errors have occurred. Those are pretty much unstudied. Finally, the category I am going to illustrate is the electronic performance monitoring idea. This is along the lines of what Jeff was describing about looking at large administrative data sets, usually billing data. These are large samples, and we can analyze them in a thoughtful way. You can find signals that seem clearly linked to the issue of diagnostic error. I'm going to show you this one example. We have been working on this one for quite some time: missed stroke and benign dizziness. There is the look-back approach and the look-forward approach, a strong temporal association between being told you had benign inner ear problems and then showing up with a stroke. What you can see in the way these curves look is that in the look-forward analysis you're starting with patients who were sent home and told they had benign inner ear problems or benign dizziness. You look at the rate of return for stroke, and there is a control comparison group there, which was heart attack, and the red hatched area that I've outlined is the excess rate of return with stroke in a short timeframe.
That's a biologically plausible association with the known risk window for major stroke following minor stroke or TIA. Patients who come back with major stroke after having had minor stroke can therefore reflect harms associated with diagnostic error. The rate is actually higher than that, by as much as sixfold or so. These kinds of measures work the same way looking backwards, which is starting with all the strokes, looking back, and seeing the strong temporal association with having recently been seen in the emergency department, compared to being seen in the emergency department and told you had benign abdominal pain or back pain. The one on the left was a study done with the same group that did the work that Jeff presented to you.

These kinds of measures are going to help us. They can be measured now, more or less in real time, using available administrative-type data. It does take thought to figure out how to analyze it so you're not just making noise and nonsense, but I think this is a promising approach that will be fruitful in the future. My closing slide on the overview of the landscape: no single measurement method will address the full spectrum of diagnostic error issues. You're not going to be able to use administrative data to do root cause analyses of what was causing those problems. That will be an outcome measure but not a cause measure.
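The look-forward analysis described here can be sketched in code. This is a minimal illustration with invented toy records, not the actual study's method or data; the diagnosis labels, dates, and the 30-day window are all assumptions for demonstration.

```python
from datetime import date

# Toy administrative records (hypothetical): each tuple is
# (patient_id, index_diagnosis, index_date, return_diagnosis, return_date).
visits = [
    (1, "benign dizziness", date(2015, 1, 3), "stroke", date(2015, 1, 12)),
    (2, "benign dizziness", date(2015, 2, 1), None, None),
    (3, "benign dizziness", date(2015, 3, 5), "stroke", date(2015, 9, 1)),
    (4, "heart attack", date(2015, 1, 7), "stroke", date(2015, 6, 20)),
    (5, "heart attack", date(2015, 4, 2), None, None),
]

def return_rate(records, index_dx, window_days):
    """Fraction of patients discharged with index_dx who return with a
    stroke diagnosis within window_days of the index visit."""
    cohort = [r for r in records if r[1] == index_dx]
    hits = [r for r in cohort
            if r[3] == "stroke" and (r[4] - r[2]).days <= window_days]
    return len(hits) / len(cohort)

# Excess short-term stroke returns after "benign dizziness" relative to a
# control condition (heart attack) -- the red hatched area in the figure.
excess = (return_rate(visits, "benign dizziness", 30)
          - return_rate(visits, "heart attack", 30))
```

With these toy numbers, one of three dizziness patients returns with stroke within 30 days and no control patient does; the real analysis runs this same logic over millions of billing records, where the thought goes into choosing cohorts, windows, and controls.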

There are a lot of barriers to these types of measurements. Lack of chief complaint reporting and lack of routine follow-up are going to be in the way of any kind of measurement. As time goes on we will get better at doing those things. The unsystematic, numerator-only measures are readily available and are things that we should be looking at to identify problems, but they give us an incomplete picture of what's going on in practice. If we are trying to test everybody in every situation with a gold standard of some kind, those will be reserved for specific quality improvement projects or research projects that are focused on one narrow area, and they are not the broad solution unless we get patients to report diagnostic errors back to us. Ultimately the electronic surveillance ideas are inexpensive and promising, but they can't just be done casually. You have to think about things like making sure you have regional follow-up on patients and that you're not missing events.

Within 10 years, and I think perhaps within five years, we could have routine surveillance for certain types of misdiagnosis, especially dangerous disorders. What we have used for stroke can be applied to heart attack and aortic aneurysms and brain injuries and anything else you want to apply it to that has the same type of biologically plausible short-term risk window, and I think there are clever things people have been doing in the literature looking at adaptations for cancer and other things. From my perspective this is the most promising way to be looking at the burden in a large population. Thanks for your attention.

[Applause]

Thank you, everybody. Good afternoon and thank you for coming back. I am going to discuss why measurement of diagnostic error is the first step to improvement, but I will start by telling you why this is a difficult topic and why we've made very little progress over the last few decades. This lies at the intersection of many disciplines, including human factors, psychology, informatics, social science, communication science; I can go on and on. It is really hard. The second basic issue is we have trouble defining what diagnosis is. We're still refining definitions of diabetes and hypertension; we have changed the sepsis definition. When you talk about diagnostic error, it is harder than we think. There is a lot of uncertainty involved. Diagnosis also often evolves over time. All of that must be considered. All models are wrong; some are more useful than others, and this is the one we like because we developed it. This is what the Institute of Medicine used. The reason we like this is because it reflects real-world practice. If you look on the left side you see the dimensions of diagnosis. We need to think about all of those issues, not just the visit but also following up on lab results. We've got to think about all kinds of measures: valid, reliable, retrospective, prospective. You have to put all of this within the context of the whole system. We're not just measuring what's happening in clinicians' brains; we're measuring the way they're performing in a larger healthcare system. This is real-world practice and we use it a lot. Ultimately the goal is collective mindfulness, making sense of all the data and using them for learning and feedback. We have a long way to go. All of this information can inform better measurements, and that's why we need to start measuring now.

We were asked about low-hanging fruit. I think everything on this page is low-hanging fruit. For instance, I would say that the signal from administrative data is weaker because it does not contain clinical data. We have looked at a lot of records, and what we have tried to do is look at selected high-risk populations. In cancer cohorts, about one third of cancer patients have had missed opportunities in the diagnosis, and that's not just one system; multiple systems. Abnormal test results: about 10% of patients with abnormal blood work have missed follow-up. You improve the signal by selecting the records that you need to review. Instead of looking at 1,000 patients, I would look at only 100, or 50 or 60, that might be of interest to us. And then there are reports provided by patients. Not much is happening there; patient reporting is still very early.

Here's a paper that is just about to come out in the Journal of Patient Safety. We have changed the focus of measurement from diagnostic error to diagnostic safety. We used the structure-process-outcome model and gave examples of measurement concepts, not measures: measurement concepts that can be fully discussed as candidate measures going forward. And these are just examples; we can argue about them for a long time, but that is the idea, to shift the focus a little bit. Being realistic about future progress, I think measurement is ready for quality improvement. It is not ready for any type of public reporting or any kind of penalties. We shouldn't be using measurement for that at this stage for diagnostic error. I think a few things that are mentioned on the slide are important, and I want to highlight one: not just engaging patients but engaging providers. Very few providers are engaged in quality improvement and safety improvement measurement around diagnosis. They just don't get the time and effort, and they are burned out with things like electronic health records and administrative burdens. They are not able to meet what would be a nice requirement of quality improvement. I think healthcare institutions must step up. We have had early studies, and I'm not talking about malpractice, I'm talking about in general: very few institutions have looked at their own data to see what kind of diagnostic problems they have had. We need more definitions and better standards. We need to measure harm and safety but also think about different types of measures, such as reliability and uncertainty. We don't have a code for uncertainty. There is so much uncertainty and we can't really quantify it. And then lastly, the measurement needs to lead to improvement. It needs to lead to some type of feedback. This is a big research question: essentially, how do you turn that measured data into learning and improvement? Thank you.

[Applause]

Hopefully it's obvious to you now why we asked them to be part of this group today. We are going to start with an opportunity for additional questions, questions from the audience, or if we have any from the web. Let's get a microphone over to you. While we're doing that, I would like to point out that today's meeting would not have been possible without a lot of support behind the scenes, or right on the edge of the scenes, all around. There has been a large team of folks working at the agency to help bring you all here today. I know firsthand the kind of issues they have had to deal with. I just wanted to recognize that we have a lot of folks on our team and in our program, front row, back row, and throughout. With that, did you get a microphone? It's on its way.

Thank you. I have a question. You talked about outcome measures under development of some sort, which is great. What about structural measures or best practices? Are there any now that can be applied that say this is what a health system should have in place as a best practice?

Sure. Actually, I'll give one example. There are more structural measures than outcome measures. There's only one outcome measure because that's the one we could come to a consensus on. There are a lot of best practices that are already out there. For instance, we have developed [Indiscernible] best practices for both providers and patients. We have developed them with the electronic health systems. Proactively, institutions can assess how they are doing in terms of consumers and providers. I think if hospitals get the awareness they need and the collective mindfulness some are talking about, they will use these guides. They just need insight into what they look like.

Just briefly, I haven't seen all of what the measures are, but I think a lot of us have been bandying around the idea of even having someone in charge of dealing with this problem, a chief diagnostic officer or whatever you want to call it, or just that there is someone who is tasked with this problem. I think it's one idea that, among people who have been thinking about this a lot, has had some resonance and consensus around it.

There's a question in the back here.

[Indiscernible -- low volume]

Has there been any work around that as a proxy for prevention and closely associated with medical error?

The risk assessment. I think what you're talking about is more toward global safety risk assessments.

[Indiscernible -- low volume]

The paper I'm talking about is an early draft of what this could look like. I don't think we have such a measure in there, but we can make that list. It would become really long really quickly. Hospitals really hate doing anything that takes more than one page to look at. So with the checklist we were talking about, one of the problems is that it's detailed and it causes you to do more work. Anytime we talk about developing best practices or measures around best practice, a risk assessment is going to lead to a lot more work after you do the risk assessment. We need that evidence to inform the development of the checklist or the risk assessment tool you're talking about.

I don't know the specific data, but we have done some work in malpractice data sets on the issue of repeat offenders, so to speak. There is an increased risk of diagnostic error among people who have had one to begin with. There are potentially markers at the physician level and provider level. I think the most potent markers will be clinical variables in the future, but we could potentially explain only very small amounts of the variance that we see through that kind of stratification. I suspect we're going to get a lot more bang for our buck looking for specific high-risk clinical situations, this idea that there are some known traps that come up over and over again. I think if we focus on those, those are our high-yield places, rather than specific physician characteristics.

Do you really need all of those data streams we showed you on the slides to get the insight or the intelligence you need around diagnosis? If you are an administrator you need to be aware through various data sources so you can develop [Indiscernible].

David, you mentioned a lack of chief complaint reporting. Thinking about our reimbursement system, which is built around having an ICD-10 code rather than a problem or chief complaint code, I think the billing system encourages problems of anchoring and diagnostic error. My question for both of you is around how big a problem that is in doing your research, to really know what the chief complaint is. Is that going to be adequate, or do we really need to have a policy discussion about moving away from ICD-10 alone and encouraging in the record unexplained vomiting, unexplained [Indiscernible], so we're not encouraging premature closure?

That's a great question. I think there are two things about coding that have come up repeatedly. One is that the chief complaint disappears; the other is this idea of not coding uncertainty, so everything looks like it's certain when in fact it's not in actual practice. On the chief complaint coding point, the billing is driving us to make more specific diagnoses than maybe the situation warrants because of the financial incentive. It doesn't entirely impede our ability, because we hack the system. Each of the coding systems, like the CDC's data set, allows you to do some of that. You can sort of fake it. We can look at diagnostic errors in patients who were given a diagnosis of dizziness or vertigo as a symptom only. It would be much better if we could have the chief complaint data. The place where not having it hurts you is when you look at the process. You can look at the subset where we actually have the data, and I have a fairly high degree of confidence that when there is a mismatch between the diagnosis someone left with and returned with, that is probably reflective of the outcome and the harm problem. But when you start asking what I am looking at in the diagnostic process that got me to that point, you have to look at the whole palette of people who came in with a chief complaint of X.
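The "hack" described, using symptom-only discharge codes to define the cohort, can be outlined in a few lines. This is a minimal sketch with invented visit records; R42 ("dizziness and giddiness") and I63.9 (cerebral infarction) are real ICD-10 codes, but the record layout and the code list are assumptions for illustration, not a validated selection.

```python
# Symptom-level codes of interest (illustrative, not a validated list).
SYMPTOM_CODES = {"R42"}  # ICD-10 "dizziness and giddiness"

# Hypothetical discharge records: (visit_id, discharge_code)
visits = [
    (1, "R42"),    # sent home with a symptom-only code
    (2, "I63.9"),  # a disease diagnosis (cerebral infarction)
    (3, "R42"),
]

# Cohort for a look-forward analysis: visits discharged with a
# symptom-only code rather than a specific disease diagnosis.
symptom_only_visits = [vid for vid, code in visits if code in SYMPTOM_CODES]
```

The design choice here is exactly the speaker's point: because reason-for-visit data is often missing, the symptom-level final diagnosis code stands in for the chief complaint, which works for outcome measurement but not for auditing the full diagnostic process.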

Otherwise you are grossly biased in your assessment of the problem. You don't know what the neuroimaging read was in all those other patients. You don't know what the process defects were in the electronic health record that you didn't pick up. That's where it gets critically important. In terms of the NLP issue, Epic and other systems have chief complaint coding systems. They are not uniform and well structured. It would be great if we could make them more so, if we could just make people double-report a code for reason for visit and a final diagnosis code. I don't know that we need NLP; we would be able to get key pieces of information using simpler stuff, including keyword searches.

I think a couple of things. There is uncertainty in at least 7% of outpatient visits, and we don't have a diagnostic code for uncertainty. The other thing is, we did a study to see whether we could match the most commonly misdiagnosed conditions to some chief complaint, so we could say it is the [Indiscernible -- low volume]. There was no cross linkage. You can come into the clinic saying I'm here for a routine appointment and you can have a totally different course. I think we can't rely as much on chief complaints. This is a tough area. You just have to look at the entire scenario.

I've talked about this with you before, but it has become personal for me because I've recently had to manage my dad from afar. I'm in DC and he's in Connecticut. My dad has been having unexplained falls. He just falls. He doesn't lose consciousness, he doesn't feel dizzy, he just falls. I'm scared he is going to hit his head and have a real injury. What this is making me wonder about is the pace of care. Now I'm seeing it from a caregiver and family perspective and the patient perspective, versus the clinician perspective. When I worked in the ED we had this mantra: treat them and street them. Not really a good thing, but we had to move people out. We would treat people and move them out of the ED. The same thing I see taking place in doctors' offices. My dad will wait all week for his appointment, and then he gets in there, they check him out, and then they move him on. He is 79. He was vibrant and healthy. So I think measuring that pace of care, and putting that against not having a diagnosis, would be valuable. The other part of it is that as a family member I have been able to advocate for him and have him FaceTime me in the doctor's office so I can virtually go with him to his appointments; that slows folks down when I'm talking to them. But then they stop because there's an accountability barrier. So the specialist, the ENT who thought maybe it was his ears, doesn't want to go any further than that and refers him to a neurologist. The neurologist did a test but doesn't want to go further than that and refers him to a cardiologist. So that idea of having someone in charge, or even just looking at this, is a really interesting concept, or measuring that intangible. It's something I think would be helpful.

I think we used to have those people, and they were called primary care doctors, who used to listen. Primary care doctors don't listen now because most of them are basically driven by volume. They're going with volume and entering data in the chart. It's not the technology that's the problem; it's the sociotechnical work system that comes with the EHR. It's not just the chart causing the doctors to run through patients. I think cognition definitely gets involved; there is definitely an influence on cognition when you have less time to spend with the patient. I also think there's a bigger problem than just the factors affecting the doctor-patient encounter, and that's the ability to gather data. We used to be able to get data from the patients quickly and get to some type of diagnosis. Even when doctors have plenty of time to spend with the patients, we're still getting the diagnosis wrong. In one of the studies we gave them relatively easy cases, and the accuracy was only about 60%. There were no time restrictions. They still were not able to do a diagnostic workup. So I think the issues you bring up are important, and they are very core to the practice of medicine, and I think the people around you have influence on them, but certainly I think there's more to the story than just that. I think we're losing the knowledge and skills we used to have, the critical thinking we used to have. Something that needs to be studied much further.

I would like to make two comments in response to that question.

The first is to echo that there are some scenarios where time is not the issue. Time is the issue for a lot of things, and it clearly makes the problem worse, but there are some things where you could give people all day, and dizziness and stroke is one of them, just because the knowledge and skill base isn't out there. It's not that people don't know that stroke could be a cause of dizziness.

I want to comment from the specialist perspective on what you said. A lot of people in this field are very primary care oriented, and they bring a rich perspective that relates to that. I am a specialist, a neuro-otologist [Indiscernible]. The thing is, I think especially in situations like that, where there is an uncommon problem and it has failed the first couple of rounds of the common-things-are-common approach, what I would like to see is a better kind of specialty care than the kind that you've got. Not the kind where the ENT says, I am an ear guy and I am going to tell you whether it is the ear or not, and the neurologist says, I am going to tell you whether it is the brain or not, as if they have the capability to rule in and rule out every disease in their domain space. That is not possible. I think you need a different kind of specialist. You need someone who is a sudden-fall specialist, or an unexplained syncope and fall specialist, or something like that, who is cross-trained in these different disciplines, rather than having you get sent to 12 different people. In those situations where it has gone past what is going to work in primary care, I think you need a different kind of specialty care, and I would like to see more of it.

I think they call those geriatricians.

We will take a question from online right now.

[Indiscernible -- low volume] [Indiscernible -- audio cutting out]

Sometimes you want a leisurely workup. You want to wait and see if it will resolve over time. So how do we incorporate the need for timely diagnosis when it is urgent, but the need for a more leisurely, conservative approach when it is not? That's my first question. My second question, from the geriatric perspective, is when don't we want to make the diagnosis? How do we balance that?

I will reflect on that last bit. I've had lots of conversations with people about this issue. I think there are two schools of thought. Some people say you always want to know conceptually, and then you want to not overreact. You want to know that someone has cancer and explain their symptoms, but you don't want to overreact by treating them if the treatment harms outweigh their life expectancy.

I think we should think about whether we want to even pursue a diagnosis. I think a lot of that revolves around shared decision-making with the patient. Ultimately that is a dialogue between the physician and the patient and the family. Ultimately it is about what the prospects are that resolving further uncertainty about the diagnosis will lead to a health benefit. It's not always easy to do that; there are a lot of uncertainties. We are working with a group looking at shared decision models. I think there is some interesting conceptual space there, but I agree you don't always want to make the diagnosis.

It is important to communicate uncertainty. We want to project confidence so that patients will come back to us. But this is a measurement challenge, a huge measurement challenge, because if I start communicating uncertainty, how would I do that? When patients see doctors asking for help, they get less comfortable. And when you ask doctors why they don't communicate uncertainty to patients, they say patients will never come back if I tell them I don't know what's going on, or if I'm not sure what is going on with you. There's also calibration, the alignment between confidence and accuracy. We are not seeking as much information as we used to. We're not asking for help as much as we used to. Again, it's this concept that you want to be certain.

I want to reflect for a second on the issue of uncertainty. In communication with the patient, you've got this numeracy aspect, which is how you communicate probability to the patient. There is robust literature on how best to do that. Humans do better with absolute numbers rather than percentages. If you show them the cards with the little gray people and green people for good outcomes and bad outcomes, so they can visually get it, that can be done for diagnosis. In terms of uncertainty, there are multiple layers of uncertainty, almost qualitative types of uncertainty. There's the difference between probabilistic uncertainty, where you understand the problem formulation and there's a 60% chance of this and a 40% chance of the other thing, and then there's the ambiguity that comes from not understanding where you are: there is no evidence, or this person's problem is unique or different and doesn't fit in any categories, and you have a massive layer of uncertainty where you can't even offer the patient those kinds of numbers. I think there is a lot we have not done in the uncertainty space. There are people that study this area.
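The point about absolute numbers, showing "60 of 100 people" rather than "60%", can be made concrete with a tiny helper. This is a hypothetical illustration of the icon-array idea, not a tool from the literature.

```python
def icon_array_counts(probability, total=100):
    """Convert a probability into 'affected of total' counts, the way
    icon arrays show little colored people instead of percentages."""
    affected = round(probability * total)
    return affected, total - affected

# A 60% chance of a good outcome becomes "60 of 100 people", which
# most patients grasp more readily than the bare percentage.
good, other = icon_array_counts(0.60)
```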

My name is Natalia Farkas. The question I have relates to the way we're shifting the discussion to patient-provider communication issues. One thing that I noticed was there is a lack of measures that relate to patients and patient experience. It seems to me there's a huge body of literature around patient-centered medical homes, and there is a lot of progress in that area that promotes coordination, collaboration, and integration of care. I am wondering if there is any work that you're aware of that relates diagnostic error to the patient-centered medical home model.

In terms of measures, one of the programs within our Center for Quality Improvement and Patient Safety is our consumer survey program. A few of the questions really speak to me in general, but also as part of this problem. Diagnostic care is broad, and I think for the purpose of teasing out diagnostic error there are opportunities to go deeper. From the patient's perspective: did the provider explain things to me in a way I understand? Again, that is the patient view. There are many different facets of that question, and opportunities to go deeper for different functions across healthcare. We've talked about shared decision-making for diagnosis and how uncertainty is dealt with. Just to make the simple point, we have some role in healthcare; I know we struggle with understanding, and our role is to help with that understanding. Many times it is beyond the realm of the patient's understanding. I think we're making progress in general, not even specifically with diagnostic error, on how we communicate with patients and encourage their interaction with the healthcare system, or appropriately discourage not interaction but treatment, for their own benefit. My hope is that we will have opportunities to focus specifically on diagnostic error as well, at least in the survey instruments, thinking off the cuff. You might imagine a diagnostic error module for consumer surveys. Again, these need to be developed and tested, but that is the kind of thinking that we are looking for. It is the kind of work that we fund, developmental work. Not everything is a success, but many are. I think we have many opportunities.

I think this is a great concept. We had a paper about this, about medical homes and diagnosis. Essentially we said if you have the right teamwork and the right information management, those things can make so much of a difference to the diagnostic process. The issue is when you get to the practical application of those principles to the diagnostic issues. Here is an example with abnormal test results: can the nurse help the doctor with follow-up of abnormal test results? We have tried to do some of those things, but some of the nurses don't feel they should be the ones doing that follow-up. We have bigger issues we're working through, but we do have better measures, and we are watching whether the follow-up gets better over time. Hopefully in a year or two we may have a little bit of an answer, but as I said, there is huge opportunity in that area.

I'm Ted Winslow from Chicago. As I listened to the discussion, including Art and David, I'm wondering if the way we use the EMR, which is frequently used as a billing tool rather than for the other things that we would hope, might actually facilitate the development of diagnostic errors, because we are then forced into putting a real diagnosis down without the uncertainty that we have talked about. When you put something on paper, particularly if it is printed, it becomes much more meaningful and real. People stop questioning it. I like the idea of the chief complaint, or the problem, rather than an ICD-10 diagnosis that is a billable event. I think a lot of times charts get turned into billable events.

Yes, clearly. I would like to underscore the phrase from this morning: we really need a canvas for people to figure out problems. In an ideal world, the EHR would serve as a common shared canvas.

Especially emergency physicians have advocated for codes such as "not yet diagnosed." That is a clear expression that the diagnostic process is still in evolution, and in theory it could, should, and would prevent premature closure on getting a diagnosis in inappropriate form.

We're going to address these crosscutting issues. We haven't necessarily touched on disparities, so I'm going to try to weave that into this question. It is for both of you. Are there certain patient conditions for which cost-effective approaches to improved diagnostic accuracy and timeliness would likely have high impact potential? Maybe as we're thinking about that, we think about particular priority populations with conditions for which there is an uneven distribution of health burden. Cost-effectiveness.

So we've done some work in the space of cost-effectiveness. The place where cost-effectiveness plays into this equation is twofold. One is the issue of modeling, and the other is the issue of thinking about where we want to invest our energy in terms of balancing the trade-offs between overdiagnosis and underdiagnosis, if you will. On the modeling front, I think we have underutilized cost-effectiveness modeling for diagnosis problems. To do cost-effectiveness modeling you need certain amounts of data. Sometimes there is insufficient evidence, but for a lot of problems we have plenty of data to do it. And we probably chase after solutions without understanding the problem space. I was having a conversation the other day about wanting to close the loop on test results for lung nodules reported on a chest x-ray, and this person said, have you thought about modeling that out? I am not clear that at the end of the day, if you actually did that, you would help anybody. There are so many false positives, and you could be harming patients chasing after the false positive lung nodules. I think we should be thinking more along those lines. We should not simply insist that nothing ever be missed. I'm not suggesting missing test results is a good thing, but we should think carefully about which loops we're trying to close first. We should pick the ones where we're likely to have benefit. And then there are trade-offs between specificity and sensitivity. I actually think there are many instances where we have poor diagnostic accuracy in practice and we're missing on both sides. Dizziness and stroke is one example, but there are many others. We are overtesting a lot of people and we are undertesting a lot of people. We are undertesting a smaller number than we are overtesting.
If we can identify those situations where we are off on both sensitivity and specificity, then for those, on the receiver operating characteristic curve, we should push toward the top left corner rather than sliding back and forth along the same curve with our decision threshold. We can achieve highly cost-effective results for patients; we can improve quality and decrease costs at the same time. If you improve your sensitivity for dangerous problems, you improve quality. If you improve your specificity, you decrease your costs. We should be looking for those priority conditions and situations, and some of those might be priority populations. Wherever we are missing on both sides, we should be looking for those opportunities.

Many years ago it was said that the best diagnostician is the one who makes the accurate diagnosis using the least amount of resources. Essentially, that says it really well. You can keep investigating all you want; you might get the right diagnosis, but you will waste a lot of resources in between. David gave an example of chest x-rays. This is where using health information technology could actually play a role. We have done studies where we looked at thousands and thousands of medical records for chest x-rays that had been coded by the radiologist as abnormal, suspicious for malignancy. We looked for those codes and for follow-up actions, to see whether someone followed up on that chest x-ray within 30 days or not. We have had two papers on this. When we did that, we looked at a couple hundred thousand records and reviewed 100. It is one of the ways to make the process of looking at records more efficient, going after the high risk that is actually coded [Indiscernible].

I think we have time for one more question. We will have a break starting at 3:00, and then we will reconvene the full group at 3:15. It will be in the large room. I have been asked to ask you all to keep the aisles clear. These walls are going to disappear up into the ceiling, and we are going to try to get back in our seats by 3:15 so we can fit everybody in. We have a lot to squeeze into the final session. I'm looking forward to hearing what the other two groups did. We were pretty productive in both of our hours. We will reconvene then. During the break, there will be some light refreshments in the same place where you picked up your lunch. With that, let's see if we can get one more question in.

I do general consulting. I noticed that public health is not one of the partners listed in the sessions so far. I'm glad I waited until after the second question because it fits in with the health equity question as well. Public health has a lot of good information, particularly on wrong diagnoses. Think of syphilis or HIV: in outbreak areas where we have a profile and someone comes in presenting with something different, the profile says you should test this person for syphilis or whatever else is going on in the community. I think public health is better able now to get this information quickly. The problem is getting it into the physician's workflow at the correct time, and how to make that happen. I think that should be part of the research agenda you are looking at here. I don't see public health listed as a partner, so I wanted to bring that up, if anyone has comments on that. I think there are other opportunities with health equity as well.

I wanted to say quickly in response to your question: if you look broadly within healthcare at the payment reform we're in the midst of, it clearly raises population health to the top level, which for me as a public health physician is population health applied to healthcare medicine. Public health and the traditional healthcare that we have seen over the last few decades are very much on a convergent path. I think that's very obvious. It's not a question of if but when. In order to meet some of the goals that health reform and value-based purchasing are intentionally placing on the system, a public health, population health perspective will be needed. To step back into the realm of diagnostic error specifically, this is one set of problems that successful organizations are going to have to solve: figure out where do we expend limited resources for diagnosis and where do we not? Where are the impacts just not there? Where does diagnostic uncertainty not matter as much? Modeling has a role in that. So I think it's a great question, and I think it's not only going to happen, it is happening now. We need to provide the science to enable it to happen in a more robust way. Workflow was another question.

I'm going to reflect quickly on that. There were clearly a lot of public health folks there and they were interested. Maybe the two fields need to connect. CDC has been showing more interest in this area. They have specific sections with interest in this area. They need more funding and support for doing this. Regarding workflow, I completely agree with you. A lot of the innovations that we think will work don't work because they don't get the right information to the right people. The Ebola misdiagnosis that happened in Dallas has a lot of public health implications. The nurse who documented the travel history was essentially doing influenza screening; she was more focused on that. That information never made it to the clinicians, and the clinician never asked many questions about the travel history anyway. They also were using lots of templates, and they spent more time at the computer than talking to the patient. We wrote a paper describing the huge catastrophe that happened because of that misdiagnosis. I think the two fields need to connect a whole lot better.

Thank you for the question.

Thank you all for your participation. Please join me in thanking our experts.

We will see you back here at 3:15. Thank you.

[The event is on a 15 minute recess. The session will reconvene at 3:15 Eastern Standard Time. Captioner on stand by.]



If everyone could come in and have a seat, we could get started.

We are going to get started here with the report-outs from the different breakout sessions so that everyone has a chance to learn from the breakout groups. I really love the dynamic going on here. We separated you into three groups, so we have the good fortune of having three outstanding individuals who were kind enough to offer to go to one of the breakout sessions twice, and to take careful notes and reflect on what they were hearing so that they could share it back with you. You probably went to some of the same breakout sessions, but because of the way it was set up, you probably missed one, so we wanted to make sure you had a chance to take advantage of all the learning. I am going to very briefly introduce our panelists and give them a chance, 10 minutes each, to tell us about their breakout sessions, and after that we will have some reflections from Helen representing the patient point of view. On my most immediate right, our first speaker is going to be Dr. Shari Ling. Dr. Ling is a geriatrician and rheumatologist, and she is currently at CMS as the Deputy Chief Medical Officer, serving in the Center for Clinical Standards and Quality. She is responsible for assisting the CMS Chief Medical Officer in the pursuit of higher quality healthcare and healthier populations, and she, like all of our excellent panelists, has a more extensive resume that you can see on our website. Thank you. I will give you a chance in a moment. Sitting next to Dr. Ling is Dr. David Hunt, a surgeon who joined the Office of the National Coordinator for Health Information Technology back in October 2007. He currently serves as a medical director in the Office of Clinical Quality and Safety. And then next to David we will hear from Dr. Lucy Savitz. She is the assistant vice president of delivery system science and the director of research and education for the Institute for Healthcare Delivery Research at Intermountain Healthcare.
She is also a research professor in clinical epidemiology in the school of medicine and adjunct faculty in nursing, pediatrics, and family and preventive medicine at the University of Utah. I encourage all of you to take a look at the website if you want to read more details about each of our speakers. After these three folks report out for us, I will introduce Helen Haskell, who will also provide some comments from her experiences here today. Shari, let's begin with you.

Are you able to hear me now?

All right, wonderful. Thank you. As a former chief resident, you would think I could figure this out. Thank you, and good afternoon. I am delighted to be here and to be part of this. Thank you for including CMS as part of the program activities. I will start with just a couple of words of context. As you know, delivery system reform is underway, following the Secretary's call for paying for healthcare differently: paying for value as opposed to volume, changing the way that we practice from [Indiscernible] to care models, providing paths to alternative payment models, tying care to quality and payment to quality, as well as sharing and distributing information. I think the conversations today fit right in the middle of all of that, and so it is a tremendous opportunity to fill out the charge in the context of delivery system reform: driving toward better value care and improving outcomes for the beneficiaries we serve. The purpose of the session on the use of data and diagnostic safety was to explore what is known, what the current state is, an environmental scan of what exists in terms of measurement and data on diagnostic safety. We also had discussions about what's available in the short term to improve and reduce diagnostic errors. There was a great deal of discussion, given the topic. I will summarize the three main takeaway points. I will start where these conversations always should begin, which is the focus on the patient. There are assumptions we have to recognize, including that tests must be done. We need to rethink diagnostic testing from the start and ask ourselves how we use each test in the care and keeping of the patients we are serving. That is one point. And importantly, we need to see our healthcare system and its component parts, whether the outpatient setting, the inpatient setting, or whatever setting we are looking at, in the context of quality of care.
The one constant throughout all of those settings is the patient. So as we think about diagnostic testing, the accuracy that is needed, and the errors that come when we fail to provide accuracy, we need to think about diagnostic testing throughout the entire continuum and how better coordinated care can contribute to reducing diagnostic errors and to more continuous care. It also calls out the opportunity, and there was some discussion about this, that the patients we serve, the populations we are serving, are becoming increasingly complex. With the demographic shifts and the complexities of multiple chronic conditions before each and every one of us in the context of healthcare delivery, the question we ask ourselves is: how will we use the information from this test? There was some discussion about that and about the opportunities to meet the needs of this increasingly complex patient population. And it is a diverse population as well. We can only achieve improvement if we measure. We have to measure, and this is the context of the business case for measurement: how we use that measurement to improve. That brings me to the second major takeaway, which is that measurement is not for measurement's sake alone. There are metrics, there are measures, there are quality measures, and there are different definitions of all of these. The point being, these are all intended to permit us as a healthcare team to improve. So we have to be clear about what it is we mean when we talk about measurement, measures, and metrics. Not all should be used and implemented in the same way. There was discussion about metrics for quality improvement purposes. I think there was agreement that we can measure to improve.
We can put in place metrics for quality improvement purposes, but because improvement is a very local phenomenon, there is likely not going to be one measure that suits all purposes, nor will there be one use of a measure; likely there should be a palette of measures that can be compiled and used for different purposes. Another important point that was shared is that the point is to improve healthcare. When you look at a measure, know what is actionable about that measure and for whom. It also touches on attribution and accountability. It was also stated that as measurement evolves, the temptation will be to tie incentives or disincentives to that measurement. Going from quality improvement to pay for reporting, pay for performance, pay for value: that is a different evolution. It is an important evolution, and yet we want to unveil where errors are occurring. How do we enable this unveiling in the most earnest, honest, and forthcoming environment we can, so that we are all reading from the same sheet of music? So the third take-home point is that data and measurement must be actionable. Data is not necessarily information. It has to be informative. How do we make the information available, understandable, and comprehensible at the point of care, where care is happening, so that care providers can learn from their own performance and their own behavior, using data sources that preferably are integrated into the workflow, recognizing the challenges of reporting? I think those are the three main points. There was a crosscutting theme that included engaging patients as partners in their care: utilizing patient information (open notes were mentioned) to confirm the accuracy of information being passed from provider to provider, but also recognizing that patients actually live beyond where your clinic is.
Enabling patient-reported outcomes is another opportunity. The final crosscutting issue was that of education and training. How do we take the data and create information, information that permits us to learn from our own behaviors and practices? How we do this is also a matter of change in culture. We can provide technical assistance. We can provide tools. Those tools must be adopted, and they can only be adopted if there is a culture of safety, which I have been working for many years to improve, along with accuracy. And finally, thoughts on what the specific research priorities would be. We did not get to this part of the conversation, but I would put forward to you that there could be key outcomes that research and quality improvement can converge upon, and perhaps we should start with principles that could drive the prioritization of the work that could be forthcoming. Thank you.

[Applause]

I guarantee you, my kids will tell you that I am always heard. It's the content that's the issue. I was in the health IT session. You are going to hear a lot of the same themes and variations on those themes again and again, I suspect. Before I do a breakout of our session, I want to be a little opportunistic and thank AHRQ, not just for today but for their leadership in this area all around. As is typical of what I have seen, they run out ahead and then they allow the rest of us to catch up. This is just another great example of that. We were tasked to identify gaps and priorities in research. During the session we had three great speakers. We heard about AHRQ's portfolio, from consumer informatics and the design of recommendations, through the discussion of medications and treatments, all the way down to diagnostic errors and patient-reported outcomes. The portfolio they have is broad and deep. And then we heard two excellent presentations. Dr. Carreon was on point when, during the course of her presentation, she asked three questions, two of which are so imperative and relevant: health IT for whom, and health IT for what? Dr. Needleman discussed measures of diagnostic error and identified the highlights of the different modalities, from voluntary reporting all the way up to automated measures. He discussed in some detail his measure, which is actually the first patient safety measure in this space that has been approved by NQF. We can categorize this discussion in three big categories: the people involved, the structures, and the processes involved. The big question is what mixture of people and structures we can use to change the process. It should be no surprise that throughout this discussion we returned to the idea that this is a sociotechnical system with a lot of interplay among a lot of different actors. Each of them plays an important role. It's obvious that we have to engage, as AHRQ already has, with federal partners.
One recurring theme that came up time and time again is the role of the patient and their family or caregivers in this work. I think you could probably say, throughout all of the rest of the discussion that I'm presenting, that the question came up: what is the role of the patient and their family or caregivers, and how can we bring them into this process? That came up again, and again, and again. The diagnostic team must be considered much more than the physicians; the entire clinical staff and nonclinical staff have to be engaged with this. And then one good and very important point was that we have to include private industry. When we think about the structures that are involved, obviously there was consideration of the EHR. There was a point, and I think it is to some extent true, that right now our EHRs are really optimized for documentation, and they could be made better to provide cognitive support for the diagnostic process. That is one challenge and a great opportunity. Along those lines of what health IT can be, simulation came up, and not just clinical simulation: we also heard a little bit about cognitive tools that could help clinicians at all stages of their careers change and improve the way they make diagnoses. Another concept that came up that I was really intrigued by was the idea of real-time CME. That is the idea that your current portfolio of patients, over the week or month or whatever time course you choose, would trigger CME activities that you could then engage in, via discussion or web-based activities tied to the EHR, with immediate relevance between what you were seeing in the clinic and what you were actually studying and learning. There was a discussion again, obviously, of measures.
There was some concern that echoed the Rime of the Ancient Mariner: measures, measures everywhere, but not a drop to drink. The idea was that in this diagnostic area we have a bit of a measures desert, if you will. Even though there are thousands of measures, we do not have enough for our purposes right here, and to echo the earlier point, there are measures for quality improvement that do not need to be used for accountability. We need a portfolio of measures to get a good sense of what the epidemiology of this is. Other technology modalities need to be explored and used, from telehealth to registries to the use of video and tele-mentoring. Tele-mentoring is becoming a big thing even in my area of surgery: a surgeon across the country can look in over your shoulder and make recommendations during the course of your operation. Virtual care as well. And then when you look at the areas of process, the one thing that is obvious is that the process is dynamic. We really need to combine and collaborate with different sets of team members on a regular basis to learn how to mitigate errors, to see what teams should be brought to bear to help us with this process, and that means engaging all the team members in the process, and that has to include patients and their families. We have to think about the idea of priority patients, priority populations, and the underserved. That came out when we talked about variant presentations of diagnoses that may be common but may not present in the way we typically expect for the standard male patient. Throughout all of this is the idea of time. And, ironically, that's my two-minute warning. Time is a common thread throughout all of this, and we have to understand how time management impacts our ability to make or not make a particular diagnosis. Those are the big themes that ran throughout our group. There is obvious overlap between these different categories.
These are just artificial designations, but I will say the discussion was spirited and it was very good. The quality of the content that we were able to discuss, and that you have just heard here, is due to the excellence of the people who were in the sessions. Any errors, omissions, or defects are all my own.

[Applause]

I usually don't have trouble with my voice either. I raised two boys. I had the privilege of sitting in on a panel that was incredibly informative. I hope you will all agree with me what a stimulating day this has been, and that there are so many people on this journey together. Thank you for letting me be here with you. I will try to do justice to the panel. We began with an umbrella of context from the report, thinking about the organizational factors that can influence diagnostics in delivery system settings. We highlighted issues that are no-brainers, things we already know about. And then we went on to a deeper dive and talked about organizations and diagnostics and what that meant. Again, as David already alluded to, you will hear reflections across these groups. The power of context: think about shaping situations that allow you to have high diagnostic performance. Time pressures came up in a long discussion, particularly in the second session. We talked about people and their knowledge, and this notion of coproduction; thinking about how we engage, the notion of coproduction at the point of care is key. Also the information available, and meaningful measures, if you will, meaningful not just to the [Indiscernible] but to the patients and the families and caregivers they are serving. And then the third section, which was particularly interesting to me: organizational priorities and perspectives, and how do you get those folks engaged. We talked about the key things that system leaders are looking at. They are looking at safety. They are looking at regulatory requirements. They are looking at outcomes, and lastly pay for performance, the road that we are on, which is shaping the way we get paid and how that influences decisions made in the organization. To quote Chris, this is really, really, really, really hard work.
In terms of what I heard from my vantage point, there were several main takeaways. One was how do we engage all of the stakeholders, particularly senior leadership and payers, and in particular how do we make a business case. We talk about ROI. What is the financial cost, and what are the payment implications of this work? The second area I heard was the need for meaningful measures. Meaningful, as I already said, to patients, but also important in sorting out, given the overload of measures people are looking at (that was not me; maybe it was an important point [Laughter]), which are the important measures. How do we give that feedback to people? One of the interesting comments raised by a New York Times [Indiscernible] in the second session: if you think about it, a diagnosis is made across a continuum; we're not just looking at a hospital or ambulatory setting. For those providers who see the patient initially, how do they get the feedback on the journey through the continuum? Can we use the data effectively in that way? The third takeaway was the need for an organizational framework. I have had the same experience Chris noted: every time I go into a clinical setting, we are already at the max. One of the things that was interesting, particularly at MedStar, is that when they put out a simple framework, a strategy which we have all probably memorized, people started saying, I already do this or that. Work is already going on in the delivery system, so being able to leverage that is an important opportunity. This notion of intersecting sets of people, particularly as we move across the continuum in this process, is necessary. On the two crosscutting issues: one of our speakers talked about how we can use these structures for training. How can we train people as they are going through that process so they bring this perspective to the care delivery system?
And then secondarily, how do we use these organizations as learning laboratories [Indiscernible]. The other crosscutting issue has to do with patients: both the training and the patient experience as the engine that fuels the work we are doing in these areas. When you think about patients, we have heard all day long about the power of patient stories and patient experience, what that means and how it motivates people. You do not want to lose that intrinsic motivation to do the right thing. We also talked about coproduction. What is the diagnostic process in an organization? That is a fundamental first-line question to look at, so we can understand the roles and responsibilities. I don't like to use the term best practices, but how do we identify promising practices? And how do we understand what to measure so that we can tell where we're making improvements? As a secondary issue, what are the common pitfalls that people are experiencing? Are there organizational factors associated with those pitfalls? And then, coupled with that, how do we target for action? If we understand what the pitfalls are, how do we take action and know what we should be doing? A third area was cost. We talked about cost to the organization, and considering different modalities and different issues from different stakeholder perspectives, the question I always get asked is: cost to whom? Is it to society, or the organizations, or the payers? Who is saving? The downstream cost is also an important point. The fourth area is how we think about boards and the roles of senior leadership, their influence on the agenda and priority setting in this particular diagnostic area, and the extent to which we can use those influencers to drive change. And then lastly, could we develop a diagnostic error framework to really infuse and guide the way we are moving forward?

[Captioners transitioning]

I appreciate the fact that many of you stayed and enjoyed your experience with that. It is with that in mind that we have invited our next speaker, Ms. Helen Haskell. Ms. Haskell is president of a nonprofit patient organization called Mothers Against Medical Error and of Consumers Advancing Patient Safety, and she has a very powerful story to share. Tragically, a medical error resulted in the death of her young son in the year 2000, and she has devoted herself to advocacy for healthcare safety and quality in a variety of fields such as diagnostic error reduction, rapid response, and infection prevention, among others. Ms. Haskell has served on a number of national and international boards and organizations, and we were actually very fortunate that she was a member of our National Advisory Council prior to my time being here, but I know from my colleagues how highly her participation was valued.

I wanted to take a few minutes to give Helen the opportunity to share her reflections as someone who has advocated so strongly for patients, and for all of us, frankly, in the face of diagnostic errors. Thank you so much for coming here today.

Thank you.

[ Applause ]

Thank you very much. I am very glad to be here.

I just want to talk a little bit, not so much about what I heard today, although I heard a lot of very interesting things, but about the priorities among the patients that I interact with on a daily basis, the patients and patient advocates. I heard a lot of interesting things here. There was a lot of interest in patient engagement, but there were also a lot of things that weren't said, and those really are some of the things I want to talk about.

I want to start by telling you a little bit about how I came here. As Andy said, my son, Lewis, died in the year 2000, so I want to take you back to that time. It was shortly after the first conference on medical error, and it was really a different world. It was certainly a different world for us. We were not involved in healthcare at all except as consumers, but there was less awareness of the problems, and when I started on this, the problems seemed simpler, and they were simpler. We did not really think of it that way, but we were really living in a simpler world.

In that simpler world, we took our son, Lewis, to the teaching hospital for a procedure that was advertised as safe and minimally invasive, with an easy recovery, none of which turned out to be the case for the majority of the patients.

I'm not going to go into the details because that is not why we are here, but I want to take a little bit of a detour and talk about Lewis for a minute.

He was really an outstanding boy. He was one of the top students in our state. He was a scholar, a musician, an athlete, and actor. He was the reason we did everything we did in those days and he is the reason I do everything I have done since then.

In the hospital, Lewis suffered an adverse drug event, which led to a perforated ulcer, to peritonitis and hemorrhage. His condition was not recognized and he died, as is all too common, in a failure to rescue in the hospital.

Lewis's story, misdiagnosis in the hospital, is not what people usually think of first when they think about diagnostic error, but it is very common, and it is more difficult to correct because of the helplessness of the patient and the lack of information in the hospital.

I think his story also shows the complexity of diagnostic error. I have worked in a variety of fields in patient safety, and that is not by accident; it is because they all pertain to what happened to my son. The interrelationship of diagnostic error with all aspects of patient safety, I think, is really important. That is something we have got to work out. Medication error, failure to rescue, cognitive error, the role of inexperience, working conditions, deference to authority: these are all issues in and out of the hospital.

I think most importantly it shows the importance of the patient voice. This is one of the areas where medicine has thankfully improved, but in 2000, we had no one to call. We were totally dependent on our bedside nurse. I understood that my son was going into shock, but I was apparently the only one, and nothing came of that information.

I say we have improved in giving patients a voice, and we have, but we still have a long way to go. It is obviously one of the major factors in a correct diagnosis, and it is still, I would say, our greatest challenge.

Back in 2012, everyone was talking about patient engagement as the blockbuster drug of the century, and it is a blockbuster drug. I think the term was first applied to patient engagement, patient involvement, in preparation for surgery, which has certainly had a huge effect, but it is especially the case in diagnosis, where information has to come directly from the patient. As you are all too aware, this is not easy; that is why we are here. It is obviously very important to get it right.

In my opinion, getting it right involves much more than has traditionally been considered part of the patient-provider interaction, so we need to include the patient's ability to monitor his or her own condition, to interpret their condition through the lens of their own knowledge, not just of themselves, but of their condition. Patient access to information via the Internet has become the great disrupter of the patient-physician relationship, and it is even more important as practitioners' workloads become more compressed. In several complex cases that I know of, it is the patient who has made the diagnosis, and that is because patients have both the time and motivation to do research on their own cases at a depth that their healthcare providers just cannot. We cannot discount this.

When we talk about measurement, we need to be sure that patient input is included. Research shows emphatically that patient-reported outcomes are very different from those of their providers. They are generally more detailed. Patients see adverse events and side effects as more severe, and there are indications that patient-reported outcomes are more likely to correlate with patients' own physiological status than their providers' reports are.

We are beginning to capture this kind of information through patient-reported outcome measures, but we need to keep in mind the significance for diagnosis and the information that we can take from those measures, as well.

And similarly, as has been mentioned many times today, practitioners need feedback from their own patients, especially where diagnostic error has occurred.

Lisa Sanders mentioned in one of the breakouts today that people don't always see the significance of other people's mistakes. They don't see it as applying to them, but when they get feedback on their own mistakes, they understand how it happens. Again, we need to find ways to make this possible.

In all these areas we are moving rapidly beyond the concept of patient engagement to the concepts of coproduction and codesign. I still hear a lot of talk, and I am always surprised, especially in shared decision-making, about making sure that the patient really understands you. I don't think that is the most important point. The most important point is making sure that you understand the patient.

To make the patient-provider partnership work, patients need access not just to the general diagnostically helpful information that is available on the Internet or in a lot of decision-making tools. They also need access to their own information. They need their own medical records, their own test results, their own differential diagnoses. These topics are addressed in the IOM report and in a lot of other research by people in this room, but again, as practitioners' time becomes more limited, the need for an active patient-provider partnership becomes more urgent. We need to move faster. Patients and families are the main coordinators of their own care, but they are operating in the dark. If you think justice is blind, just have a look at patients. They are left to guess about some of the most important things in their own lives. Give them the tools to be full partners, to create a shared mental model, and then watch what happens. I think patients are the last great untapped resource, and they are a solution to a lot of your problems. Thank you.

[ Applause ]

[ Pause ]

That was extraordinary. Thank you so much, Helen, and thank you to each of our reporters. It really has been, in my mind, an extraordinary day. To have the input of so many different kinds of stakeholders in this meeting I think has made it as rich as it has been, and I will say, as sort of the host for AHRQ, it has been very exciting to have you all in our house. We're sorry a few things in our house did not work all the time. We have continuously tried to improve that, and I just want to say how powerful it is to have the different voices of you all coming here and reminding us about the work that we need to do at AHRQ and the role that we can play as a partner with each of you. Thank you so much for the time that you spent with us today. It has been really quite remarkable.

I want to also, prior to introducing our final speaker today, just say a few words about some of the things that have gone on here, and because of some of our technical problems, I wanted to acknowledge that. I don't know if the slides are here. I can perhaps advance --

They are opened [ Indiscernible - low volume ].




We will figure it out

There we go. Look at that. Stuff works.

Earlier today -- I wanted to apologize. I know Dr. Schiff gave a fantastic talk, and he had a beautiful picture of the definitions of diagnosis. One of the things we talked about today, over lunch and in different parts of the meeting, is what do we mean by diagnosis, and how are we going to figure out what an error in a diagnosis is, and Gordy had that beautiful picture of definitions that somehow melted into each other, but we reformatted it a bit. I just wanted to put it up here, without necessarily reading them all off, and say how many complex and different aspects of this there are for all of us to be thinking about as we pursue this work. I heard interesting questions and themes that were raised, and some interesting discussion this morning about does that mean all of these are preventable, or which ones of these are, and I think it is really an important area for us to figure out what we are talking about and what the research can do to help us understand, and to put to rest this notion that this is somehow something we have to accept about how healthcare is delivered, and to really make it a focus of something we are going to make a difference about. Thank you to Gordy and to all of our speakers today, who I thought were extraordinary, in this group as we came together, as well as in the breakout sessions and again here today. Thank you so much to all of you who served as speakers for us today.

[ Applause ]

I also -- given that I am doing a couple of thank-yous now, I really need to acknowledge some people here in the room who made this meeting possible. That includes our very own Jeff Brady, Tom Henriksen, Erin Grace, Richard Russell, and Jamie Zimmerman, and really a full family of folks beyond that, but this was really the team that pulled this off and did just an amazing job. If you could all be willing to stand up so we can acknowledge you, I would really appreciate that.

[ Applause ]

I think the next question for us all is where are we going to go from here. I think we are going to hear about that in a few minutes, but I want to -- Sherry, I know you mentioned that you have a commitment, so if you need to leave, we certainly understand and appreciate your contribution.

[ Applause ]

One of the things that I want to really highlight for you all is that this is not just an idea we are building toward but is an idea that we want to start working on with you today, and so I want to bring to your attention the fact that we already have a couple of funding announcements that I think are relevant for you to consider in the area of what we are talking about today. We have both the R18 and R01 mechanisms, and I recommend you go to our website to look at these. The R18 mechanism is our mechanism for thinking about how to turn evidence into action, the implementation side, and the R01 mechanism is the mechanism we have to build the evidence base for the field, as well. I think these would be very relevant areas for you to consider, and we would be very excited to see applications from you or from those you know who are doing exciting work in the field.

As a general matter -- you can see this on the website, but the information listed there says that we can give up to $350,000 per year or up to $1.75 million, and on the website you may see updates about those amounts over time. Again, just think about our framework of what we are trying to do, and I think we really showed that this morning: we want to build the evidence case. We then want to take that evidence and figure out how to effectively move it into health systems at the frontlines of care to bring about the improvements in patient care that we all care about, and we want to develop robust systems of measuring the impact of our work so that we can continuously improve over time. Those will be the kinds of proposals that I think we would be very excited to see from all of you coming forward.

I think we just heard tremendously valuable information from our reporters about the breakout sessions, so I don't want to spend a lot of time rehashing that, other than to say I think we have been given a lot of information. Of course, within this audience and watching through the WebEx are many of the staff at AHRQ, who I think have been informed by your contributions today, and we will be processing quite extensively after this meeting the comments and the things that we heard.

To help you process along those lines, I want you to know that all of the slides from today will be available on our website. I think it was mentioned -- I don't know if it will be tomorrow or two weeks from tomorrow, but it will be relatively soon -- and those will be available to you and I hope they will be helpful to you.

Our goal really is to help identify the priorities for a concrete research plan going forward, so that in time I would expect that not only would we have the funding announcements that I am sharing with you here today but probably some additional ones that will continue to pick up the themes talked about here and address some of the opportunities and priorities that were discussed as part of this meeting.

It is absolutely critical for us to have partners to be able to do this work going forward, so I'm speaking to all of you, and all that you represent in terms of how we all need to come together to work on this really complex issue.

I really want that kind of partnership, and it is in that light that we have asked our final speaker for today, Paul Epner, to come and really speak to us about what some of the next steps are. Mr. Epner is executive vice president of the Society to Improve Diagnosis in Medicine, which is a nonprofit organization that seeks to improve patient safety by making sure a diagnosis is made in an accurate and timely manner. He is the cofounder and a director of [ Indiscernible ], as well as the chair of the Coalition to Improve Diagnosis, a multi-organization collaboration, and as I think Paul will share with you, he is involved with leading a meeting tomorrow to broaden out the coalition of different stakeholders who will be engaged in the work of improving diagnosis.

We see the work at AHRQ as very much woven into the fabric of the community, and so we welcome Paul to the podium to share his reflections on what he heard today and how he sees it fitting into the work of his coalition. Thank you for coming.

[ Applause ]

[ Pause ]

I know I am the only one standing between you and departure, and it is an interesting challenge. You have already heard a wrapup from three people who did a great job synthesizing what happened in their meetings. You have heard some excellent reflections from Helen. I have been asked to give my own reflections and then also to talk about the path forward, and so that is really the agenda.

I do have a few slides, but on the reflections part, I don't. The good news in being asked to do sort of a wrapup is that you don't need to prepare slides. The bad news is that there is very little time in a one-day meeting to reflect and to produce reflections, so I apologize a little if there is some redundancy with the wrapup. I certainly have tried to organize it into a few themes, but I am reminded of Blaise Pascal in the 1600s, who apologized for a letter being so long because he did not have enough time to make it shorter, and so it may not be quite as efficient as I would normally like.

I was struck at the outset, with Victor and his opening comments, by the long successful track record we have had in patient safety, but here we are in 2016 celebrating the one-year anniversary of the IOM report on diagnostic error. Despite a wonderful set of accomplishments and the track record since To Err Is Human, the track record for diagnostic error is not quite [ Indiscernible ], even though it is really getting going now.

There is still so much to do. When you look at the data that was presented on hospital-acquired conditions, you have to ask yourself what that data would look like if the diagnostic error component was added into it across all the different conditions, and so by taking this very big subset of data and excluding it, are we really getting the picture that we need to prioritize things? Do we really have the business case that we need to have? For a country that spends so much time thinking about the financial impact and measuring everything, we heard this morning that there is no cost figure everyone can accept for the burden of diagnostic error. This is a major missing data point that is, again, a very important part of any data-driven business case that we will need to make.

We also heard throughout the day about disparities and their impact on diagnostic error, which we don't understand: the impact on pediatrics, the impact on geriatrics, the impact of language issues, whether it be literacy or foreign languages, the impact of homelessness and other socioeconomic deprivations. All of these things affect the rate and affect the solution, and yet we understand them to no great degree.

It is not all negative. Tremendous progress has been made. In the last year, since the IOM report, we have had a great many conversations; what were once rare conversations have become now more frequent conversations, and those more frequent conversations have led to some real actions. I love Chris's tips for making this real in a healthcare delivery system: tie to existing work, define the financial impact, use patient stories, make internal research relevant through ROI analysis and pilot studies. That seems like a wonderful formula for organizations to get involved even now.

I think we can also say that there are other organizations who are experimenting. Maine Medical, MultiCare Health System, and Advocate Health System of Chicago are just a few of the organizations that are trying and doing things, but these are relatively small numbers, and we also heard how often this is absent from the board rooms and from the priorities of organizations, presumably due to the complexity of everything.

We certainly have shown that we need crosscutting collaborations, and the Coalition to Improve Diagnosis, which I'll talk a little bit more about in a second, is an example of that. It still needs further expansion. It is already emblematic of a public-private partnership by involving the CDC and AHRQ and many different healthcare organizations, and even now it is extending further into the for-profit organizations. I think of [ Indiscernible ] that was here earlier, and all the help that we are getting from the malpractice community. We heard from AHRQ today about clinical decision support. So there are many partners out there, and it is going to take working with all of them. This is a huge problem.

But I also reflected on the organizations who are not sufficiently involved. I have to be careful about being judgmental because it is a bigger issue than all that, but I know we need to get accreditation organizations more involved, and we heard in one of the breakouts about getting states more involved. The states are where the practice of medicine is legislated and licensed. We can do a lot at the national level, but if we don't focus on getting state involvement, we have left alone a major lever that we cannot afford to ignore.

We heard a lot about measures, and the need for measures and definitions of those measures. One of the more intriguing issues is that of accuracy and timeliness, which Mark talked a little bit about this morning, and I am not sure how much we are going to fix all of those problems. On Monday I was at a WHO consultation in Italy presenting on diagnostic error, and I got pushback from a physician, who said: I don't like the word accuracy. The important thing is getting patients healthier, and if I can get them healthy without an accurate diagnosis because of presumptive treatment, that is what counts. To some degree, that is certainly a valid argument, if we can figure out the appropriateness of that presumptive treatment and do not use it as a reason not to improve.

The issue of accuracy and timeliness is important, but I am also reminded of Justice Potter Stewart, who, when asked about pornography in a Supreme Court case, said: I don't know how to define it, but I know it when I see it.

I think at the delivery level, at the frontlines, we know diagnostic error when we see it. We don't always say we can measure it carefully. We don't always have the great statistics, but we know when the delay was inappropriate, not in every case, but in enough cases that we can take action, and so I think the work on measures is going to continue. It is really important, but it is not a reason not to move forward, and we have to get much more innovative about measures. We need to try more things. We heard about patient-reported outcomes. The whole issue of a feedback loop with patients, a structured feedback loop, is what I think is really missing. We heard in some of the discussions today about some innovations around pharmacy data, which is pretty discrete and pretty accurate: using things like a prescription that should have been refilled based on the diagnosis but was not refilled, or a prescription that was canceled and replaced with another prescription in light of a diagnostic code. These are triggers that have potential as ways to find diagnostic error, and so we need to look for those kinds of things. We need to try simulations. At that WHO meeting, Carla [ Indiscernible ] from the University of Wisconsin-Madison was showing video on how they were using sensors embedded in a manikin to illustrate particular diagnostic skills and techniques and the variation among patients. We need to develop those kinds of manikins, some that are obese, some that have other kinds of physiques, but that have the ability to program in lower right quadrant abdominal pain, things like that, so we can start trying to do new things. These are all [ Indiscernible ], but they are things that are needed to see if we can find other ways.

The notion of using open notes, and looking at additions and deletions in open notes as a potential near miss of diagnostic error, is something that I think has to be explored further.

If we think about solutions, what I heard over and over again was that so much of the work we do and so much of our allocation of effort is in hospital settings, and yet where diagnosis happens so often is in ambulatory settings. Balancing that effort to ensure that we are getting the tools to where the need is, is really important, and developing tools as [ Indiscernible ]. Time came up over and over again today as being so limiting, and it is great for us to put out tools, but a tool that adds 20% to the time required for the encounter is just asking to be a tool that will not get used.

We have a responsibility to figure out how to use tools to shorten time requirement, not to expand it.

The cognitive burden, and trying to shift some of it to the systematic side, where we have more experience, where we are better and can use human factors, is an interesting opportunity, but I was amazed by the debate between [ Indiscernible ] on where we can do the greatest good. The issue is the effect of systematic choices and interventions versus cognitive [ Indiscernible ]. The answer is we don't know, and it does not matter. We just need to be doing things.

We need to, I think, reduce the cognitive burden, because physicians cannot deal with 10,000 diseases and 5,000 laboratory tests and other procedures. That is beyond anyone's capacity.

I loved Gordy's talk about diagnostic pitfalls as another cognitive aid, things to watch out for and to be ready for. You can imagine decision-support tools being used to enrich the use of those diagnostic pitfalls.

Let me conclude this reflections part and then shift over into the path forward with just a couple of thoughts.

At the WHO meeting on Monday -- it is a three-day meeting that ended today, but I had to come back here -- [ Indiscernible ] from Harvard led off the discussion, and it was interesting to me that he spoke from a skeptic's perspective, not necessarily [ Indiscernible ]. [ Indiscernible ], who was on that IOM committee for diagnostic error, is probably well known to many of you, but he said the business case for patient safety is still weak. He said if you look back at the billions that have been spent since To Err Is Human and look at the rate of adverse events and other things, except in very specific areas, you are not going to say that, frankly, we moved the needle a lot. It is not that we have not done the right thing or that the work is not important; it is just that the data is not very convincing for the people who want to see numbers.

He also said that safety is not the focus of most boards.

We heard exceptions today, but the people in this room today are leading-edge people.

Out in Middle America, it is not the topic of discussion at board meetings. We need to figure out how to change that.

And he specifically called out diagnostic error as a neglected area of work that the WHO needs to take on.

I think when I heard all of that, I was reminded of a phrase that I used to use when I was in the business world: constructive impatience.

We have a lot of work to do, but we cannot be too patient about it. The needs are too great, and the impact is too important. We don't want to be unrealistic, and we don't want to wear out our welcome by always pushing, but nor should we be satisfied with long timelines and a "realistic" path forward. We need to be aspirational, and we need to ensure that we have not accepted anything less than the best that we can do.

The second piece I was asked to talk about was the path forward. I'm going to do that; it looks long, but it is pretty short. The look back -- we have already talked about that. I don't think we need to talk about that anymore.

There are the organizations and the forums that are available to us as tools in how we move forward, and then finally, just a few steps on how we go forward.

If you are not familiar with the Society to Improve Diagnosis in Medicine -- Mark put up our vision earlier; it is creating a world where no patients are harmed by diagnostic error -- most importantly, what is new information for you is the mission.

The society is a small organization, but it is a focused one. We think we are the only organization in the world solely focused on this area. You are dealing with a crosscutting problem in diagnostic error, and when you are dealing with NIH, which has its various institutes and other areas that have treatment-oriented or organ-oriented taxonomies, being crosscutting and having a focus on this problem, which we believe is big enough to warrant it, is incredibly important. Yet we know that we as a small organization cannot be effective by ourselves, and so our goal is to catalyze and lead change to improve diagnosis and eliminate harm, in partnership. It is all about collaborations, and that brings us to the next slide, which is the Coalition to Improve Diagnosis.

We convened the coalition. Mark said 2016; actually it was August or September 2015, so toward the end of 2015.

We said our goal for this coalition was to bring much-needed attention, awareness and action to diagnostic error, and the members [ Indiscernible ]. What was really important was we said we are not looking for endorsement, we are looking for actions. I think we gave the impression that we are about action, and so we said you have got to do two things. You have got to figure out something to do by yourself within your own constituency, and you have to be willing to participate in collective actions with your colleagues, again because of the partnership, the need for collaboration.

We have just, in the last 30 or 60 days, come up with a plan of collective actions, and I will take you through those in a second, but I wanted to show you, not that you can necessarily read it all, but these are the 27 organizations, with the AAMC having just joined us last Friday, so we are now up to 27 organizations that are part of the coalition. They include everything from government partners to medical organizations, licensing boards, PSOs, and performance measurement organizations like Leapfrog. It is just a broad variety, and it is growing. There are more people interested in joining and working through it. Even our first care delivery system, in the form of Intermountain Healthcare, has joined us, as well. We are making progress, and this is not a static list. It is growing by about one or two a month, maybe three a month, from its initial start.

We set up three collective actions. The first one is research advocacy, which very much ties to the work we did today. This is sort of a quick summary of what the subcommittee of the coalition came up with and what the steering committee of the coalition approved. It is not only to develop a roadmap, but to develop an advocacy platform that supports the need for increased funding.

What we heard today were lots of ideas about all the research we could do, but then we also heard that $70 million is the total budget and AHRQ is the prime funder right now, and so the amount spent on diagnostic error is very small. We have many more good ideas, although I will say that AHRQ wishes they had more proposals that they could fund, and so there is a dual responsibility. We want to help them get more funding, but the community has to help by providing strong proposals that are going to advance the field, and we have not done our job on that, as well.

In this advocacy platform, we need to focus on the power of patient stories. We need to define the key gaps in measurement and accountability, and we need to come up with a cross-agency unified research agenda and the funding it requires. Those are the major objectives and deliverables, and this work is in fact starting. It starts today with this meeting.

The second one is to work on interventions: tool identification, evaluation and dissemination.

Identify and disseminate effective tools. You cannot have effective tools if you have not evaluated them. Does that mean submitting them to rigorous research? That takes proposals, money, and time, so probably not, but at least looking for evidence of effectiveness. One of the associations, a society of healthcare managers, said: give us the tools, we will try them all, and we will give you a report.

We can start there. Again, let's not let the perfect be the enemy of the good enough for starters. We are going to try to select good tools. We are going to do a survey and an environmental scan to create a taxonomy of those tools. We are going to try to come up with an evaluation rubric, and we are going to establish a dissemination portal.

The third major item is the awareness campaign. All of the work, whether it is research advocacy or tool development, depends on funding, and you cannot get the funding without people being aware of the problem. My dad was a salesman who never finished college, but he always said the way to sell is first to give someone a disease and then sell them the cure. He did not know how his words would be used today, but we have to give people the disease.

This is not simple, because on one hand we have got to engage patients. We've got to activate patients, but on the other hand we cannot do it by undermining their confidence in the physicians with whom they have a strong relationship, so we have to do it in a way that recognizes activation and engagement. It may take strong language, but it has to be balanced with positive direction and positive support. That is going to be very difficult, but it is the work we have to do, to try to balance those messages.

In year 1, we will be developing messages that we will test, and we will be developing tools and materials that we can use in the campaign.

I am pleased to announce that officially, as of about three or four hours ago, the board of [ Indiscernible ] -- they just got the request this morning -- approved the contract language given to us by the Gordon and Betty Moore Foundation. The Gordon and Betty Moore Foundation has awarded a [ Indiscernible ] grant to fund all of the year 1 duties of the coalition, so we are very appreciative. I know Jeff is here. I don't know if Janet is still here.

[ Applause ]

That is going to make all of this work possible, and so, Jeff, on behalf of the community, thank you to the Gordon and Betty Moore Foundation.

I think we have already talked about the World Health Organization. We need to extend this: it is not just a US problem. It is a global problem, and the World Health Organization -- this is the meeting I was at on Monday. According to [ Indiscernible ], with the patient safety [ Indiscernible ] to the secretary general of the WHO, this was their first meeting in seven or eight years, and what he heard from [ Indiscernible ], and what he heard from me when I gave my talk, was that diagnostic error needs to be on the agenda. It was interesting: I was on a panel with four people. We were each given 20 minutes to talk, and then there were 20 minutes of discussion. Of those 20 minutes of discussion, 15 minutes were directed to me and diagnostic error. It really got strong engagement from within the room, and I think we should feel very good that this is likely to become much more of a global initiative.

I think this is my last slide. Again, there is some overlap with what has already been said. These are some of the other directions in which we will be taking this forward. I see [ Indiscernible ] as sort of the operating arm of the world when it comes to diagnostic error. Certainly there is the coalition, but we are moving along multiple paths; the coalition is not the only vehicle for change.

We are clearly establishing additional partnerships with other organizations to try to make progress in this area.

Improving practice: our practice improvement committee, led by Karen Zimmer and Michael Kantor, is doing some really great initial work on how we bring this to the front lines right now, and, as we heard all day, making sure to focus on ambulatory care.

Engaging patients: I think we will talk about that, and how important it is.

We just changed our mission statement and strategic priorities to recognize that patients are not someone we are doing this for; they are someone we are doing this with, on their behalf. Sue Sheridan taught me the phrase "outcomes that matter to patients," and not all outcomes are patient outcomes, but the ones that matter to patients are what we are here for.

We need to figure out how to make them active partners in the process.

Education: we have not talked too much about it today, only a little bit, but we have had tremendous involvement and support from the boards, the medical boards, and from various specialty organizations. The specialty organizations themselves all focus on how to train new doctors and how we continue to train the clinicians who are already in practice, and that is going to be an important step forward. Our education committee is doing a number of things.

Starting tomorrow, in fact, we have a policy initiative. There will be 25 people meeting tomorrow about a mile from here to create a policy agenda that, again, will be crosscutting and will focus not only on research advocacy but also on other kinds of measures, whether it be payment reforms, although that is an area where we do not see immediate opportunities. Impact we certainly see as an important area, and there are other kinds of simple tools that we might be able to move onto the policy agenda. I am not going to say more about it today except that we have 25 people who are going to be working all day tomorrow on that.

And then finally, the need for innovation. We have talked about that. This is my final slide, to say there is a whole work product that is our conference, Diagnostic Error in Medicine, which is in Hollywood a month from now. It is an amazingly interactive session over several days, with a research summit and a patient summit; the coalition will be meeting there, and there are wonderful speakers. We certainly invite all of you to be there.

With that, am I supposed to just adjourn the meeting, or will you do it?

Thank you very much.

[ Applause ]

Absolutely fabulous wrap-up. Thank you so much, Paul, for that, and I just want to say again thank you to all of you. This has been such a stimulating day, and I want to express, on behalf of AHRQ, our commitment to working with all of you to focus on this issue of improving diagnosis. I think the case has been made very strongly. I think there is a great foundation in terms of the historical approaches that AHRQ has taken to work in patient safety, and we intend to continue to sharpen our lens in the area of diagnostic error and diagnostic safety. We look forward to continuing to engage with all of you and with the broader community. I just want to say thank you again, and the good news is the line going out is much shorter than the line coming in.

[ Laughter ]

Thank you very much and have a great rest of your day.

[ Event Concluded ]