Debriefing is an essential practice in healthcare simulation because it allows for collective reflection and learning following a simulation-based medical education (SBME) experience. To debrief effectively, those who conduct debriefings must have the quality of their own debriefing regularly assessed so they can improve their future performance. Developed by faculty at the Center for Advanced Pediatric and Perinatal Education (CAPE), the Debriefing Assessment in Real Time (DART) tool is a healthcare simulation instrument that uses quantitative measures to estimate the quality of a debriefing. This HealthySimulation.com article highlights recent research that explains how this healthcare simulation tool offers greater objectivity.

Various rating tools have previously attempted to assess clinical simulation debriefing, but these tools were thought to be limited by both complexity and subjectivity. The DART, by contrast, requires minimal training and is believed to better estimate the level of debriefer inclusivity and participant engagement. According to the research, titled “Pilot study of the DART tool – an objective healthcare simulation debriefing assessment instrument,” the DART is based on practices in clinical simulation and on debriefing in non-healthcare industries, and scores observable, sequential debriefing contributions. The tool uses a cumulative tally of instructor questions (IQ), instructor statements (IS), and trainee responses (TR).
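To make the tally scheme concrete, here is a minimal sketch of how such a running count might be kept during an observation. The event codes follow the article, but the data structure and helper function are illustrative assumptions, not part of the published tool.

```python
from collections import Counter

# Hypothetical running tally of DART events; the category codes
# (IQ, IS, TR) follow the article, but this structure is only a sketch.
tally = Counter()

def record(event: str) -> None:
    """Record one observed contribution: 'IQ', 'IS', or 'TR'."""
    if event not in {"IQ", "IS", "TR"}:
        raise ValueError(f"Unknown DART category: {event}")
    tally[event] += 1

# An invented debriefing reduced to its sequential contributions.
for event in ["IQ", "TR", "IS", "IQ", "TR", "TR", "IS"]:
    record(event)

print(tally)  # Counter({'TR': 3, 'IQ': 2, 'IS': 2})
```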

To determine the tool’s objectivity, authors Kaushik Baliga, Andrew Coggins, Sandra Warburton, Divya Mathias, Nicole K. Yamada, Janene H. Fuerch, and Louis P. Halamek asked experienced faculty from four geographically disparate university-affiliated simulation centers to rate video-based debriefings and a transcript using the DART. No specific exclusion criteria were set prior to subject selection, as this was an exploratory project assessing the generalizability of the DART tool.

The research was designed with the primary endpoint of assessing the estimated reliability of the DART tool. In addition, the researchers wanted to investigate the tool’s potential utility as an alternative approach to assessing debriefing quality. They noted that the small sample size confined the analysis to descriptive statistics and the coefficient of variation (CV%) as an estimate of reliability.
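For readers unfamiliar with the metric, the coefficient of variation simply expresses the spread of the raters’ scores as a percentage of their mean, so that lower values indicate more agreement. The sketch below shows the calculation; the eight IS tallies are made up for demonstration and do not come from the study.

```python
import statistics

def cv_percent(scores: list[float]) -> float:
    """Coefficient of variation: sample standard deviation as a % of the mean."""
    return statistics.stdev(scores) / statistics.mean(scores) * 100

# Hypothetical IS tallies from eight raters scoring the same debriefing.
is_scores = [14, 15, 13, 18, 14, 19, 13, 15]
print(f"CV% = {cv_percent(is_scores):.1f}")  # CV% = 14.8
```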

Within the DART study, the research reports, two pre-filmed video examples (Video A and Video B) of post-simulation debriefing were selected for the assessment. Using printed paper copies of the DART tool, subjects (n = 8) individually scored Video A and Video B in real time (in a single take) while watching, as per the instructions of the tool’s designer (LPH). The videos were viewed separately on desktop computers to ensure that subjects were blinded to each other’s scores.

Upon conclusion of the ratings, responses were collated and tabulated by a single investigator (KB), and subjects then discussed the reasoning behind their DART scores. The researchers reported that the higher CV% observed in IS and TR may be attributable to raters characterizing longer contributions as either lumped or split.

Further, lower variances in IQ and TR:[IQ + IS] suggested overall consistency, regardless of whether scores were lumped or split. (The researchers explained that “lumpers” were study subjects who tended to score a long statement as a single concept, while “splitters” were subjects who tended to score a longer statement as multiple concepts.)
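A small worked example suggests why the ratio can stay stable even when raw counts diverge: if a splitter divides both instructor statements and trainee responses into more units than a lumper does, the individual IS and TR tallies differ widely while TR:[IQ + IS] barely moves. The tallies below are invented for illustration and assume that splitting affects IS and TR roughly in proportion, which the paper does not quantify.

```python
def tr_ratio(iq: int, is_: int, tr: int) -> float:
    """Ratio of trainee responses to total instructor contributions, TR:[IQ + IS]."""
    return tr / (iq + is_)

# Invented tallies for the same debriefing scored two ways.
lumper   = {"iq": 12, "is_": 6,  "tr": 18}   # long turns counted once
splitter = {"iq": 12, "is_": 12, "tr": 24}   # long turns broken into parts

print(tr_ratio(**lumper))    # 1.0
print(tr_ratio(**splitter))  # 1.0 -- raw IS and TR counts diverge, the ratio does not
```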

In terms of specific problems with the DART tool, the authors identified more errors in IS scores. After discussion with each rater and review of the transcript, the authors believed this variation may be attributable to the “lumper/splitter” phenomenon: variation in each rater’s judgment of what constituted a single “statement” or “single concept” appeared to be problematic and may have led to the higher CV% observed for IS.

For example, when raters were asked about their scores, the authors believed that “lumpers” may have considered a multi-sentence contribution to be a single statement, giving a score of one, while “splitters” may have counted each sentence as a separate statement, giving a score of three. While implementing a standardized training protocol and calibration exercises may reduce these differences, the researchers emphasized that their intention is not to increase “cognitive load or overcomplicate the use of a tool that was designed to be easy to use.”

Overall, the researchers concluded from these results that the DART tool appeared to be reliable for recording data, which may be useful for informing feedback to debriefers. They added that future studies should assess reliability across a wider pool of debriefings and examine potential uses in faculty development.

More About Healthcare Simulation Debriefing

Healthcare simulation debriefing is a period of time following an experiential learning activity during which learners and teams reflect on, review, and discuss the activity with the goal of improving individual and team clinical skills and judgment. Following each scenario, a debriefing is conducted by one or more people, such as a healthcare simulation facilitator (considered a content expert on the scenario subject matter). These content experts should also be skilled in debriefing, as many would argue that debriefing is the most important component of a simulation experience.

Several factors affect the nature of a healthcare simulation debriefing. These include the objectives of the medical simulation, the complexity of the scenario, the experience level of the learners, the learners’ familiarity with the sim environment, the time available for the session, audiovisual recording systems, and the individual personalities of participants. Creating a safe learning space is a critical consideration, since participating in simulations can have a significant emotional impact on learners; establishing that space should begin with the orientation and continue in the prebriefing.

Opening debriefing questions often include basic “what” or “how” questions. These questions are open-ended and should always be non-judgmental. Participants should be encouraged and made to feel that their contributions are valued. Faculty often reflect learner statements back to reiterate points or to open up a discussion. Debriefing should occur immediately following the clinical simulation. Note that debriefing is about the learners, who should do most of the talking, not about the educator. Healthcare simulation and debriefing are used extensively to improve team communication, dynamics, and efficiency.

Read the Full Research Article Here

About the Author
Lance Baily, BA, EMT-B, is the Founder / CEO of HealthySimulation.com, which he started in 2010 while serving as the Director of the Nevada System of Higher Education’s Clinical Simulation Center of Las Vegas. Lance also founded SimGHOSTS.org, the world’s only non-profit organization dedicated to supporting professionals operating healthcare simulation technologies. His co-edited book, “Comprehensive Healthcare Simulation: Operations, Technology, and Innovative Practice,” is cited as a key source for professional certification in the industry. Lance’s background also includes serving as a Simulation Technology Specialist for the LA Community College District, EMS firefighting, Hollywood movie production, rescue diving, and global travel. He and his wife live with their two brilliant daughters and one crazy dachshund in Las Vegas, Nevada.