Evaluating Healthcare Simulation

The Evaluating Healthcare Simulation website was created by Kim Leighton, PhD, RN, CHSOS, CHSE-A, FSSH, ANEF, FAAN, and fellow researchers to provide healthcare simulation educators and researchers with freely available instruments. Originally housed on a Google site, these open access tools are now hosted exclusively by HealthySimulation.com, with the development team's permission, through the links below. The instruments were developed to evaluate different aspects of simulation-based education (SBE). Every instrument on this page has undergone psychometric testing and has been shown to be a valid and reliable evaluation method for healthcare simulation. HealthySimulation.com is proud to host this page as an extremely valuable resource for healthcare simulationists.

The researchers behind these healthcare simulation evaluation tools believe that evaluation of SBE as pedagogy must go well beyond satisfaction and confidence. Each instrument has undergone psychometric testing to establish its reliability and validity, providing sound measures of healthcare simulation outcomes. These instruments are freely available through the links on each page. From the inception of the initial website in 2018 through 2022, 8,420 unique tools were downloaded from 89 countries! This includes all 50 US states, all 10 Canadian provinces and 1 territory, and 87 other countries!

Comprehensive Simulation Program Evaluation

Click on each instrument to learn more about it: why it was developed, its validity and reliability, and suggestions on how best to use the tool.

Permission to Use FREELY: General use is already permitted by posting the following statement: I understand that I have been granted permission by the creators of the requested evaluation instrument to use it for academic, clinical and/or research purposes. I agree that I will use the evaluation instrument only for its intended use, and will not alter it in any way. I will share findings as well as publication references with the instrument creator(s). I am allowed to place the evaluation instrument into electronic format for data collection. If an official ‘Permission to Use’ letter is required, please contact the primary author and include the purpose of the official request (e.g., research or grant), the intended use of the tool, and the population it will be used with.

Actions, Communication, & Teaching in Simulation Tool (ACTS) was developed to provide an objective way to evaluate confederates’ contributions to simulation encounters. Given the need to be able to measure all aspects of simulation to make improvements and noting that confederates make errors that impact educational opportunities, Sanko and colleagues (2016) embarked on a quest to design and develop a tool that could measure confederates’ “performance” for the purposes of quality improvement. The ACTS tool is a single-factor, five-item measure using a seven-point behaviorally anchored scale scoring schema designed to objectively measure the performances and portrayal accuracy of confederates playing support roles in simulation scenarios.

Clinical Learning Environment Comparison Survey (CLECS) was developed by Leighton (2015) to evaluate how well learning needs are met in the traditional and simulation undergraduate clinical environments. The CLECS was used in the landmark NCSBN National Simulation Study (Hayden et al., 2014) and is available in both Chinese and Norwegian versions. The CLECS was modified by Leighton et al. (2021) to the CLECS 2.0 in response to the changes in simulation delivery during the COVID-19 pandemic, measuring students' perceptions of how well their learning needs were met in three environments: the traditional clinical environment, the face-to-face simulated clinical environment, and the screen-based simulation environment. The CLECS 2.0 has now replaced the original CLECS and can be used to compare any two or more clinical learning environments.

Facilitator Competency Rubric (FCR) was developed by Leighton, Mudra, and Gilbert (2018) based on the Healthcare Simulation Standards of Best Practice and Patricia Benner’s (1984) Novice to Expert Theory. The goal of this instrument is to differentiate the varying levels of competency of the healthcare simulation facilitator. There are five constructs: preparation, prebriefing, facilitation, debriefing, and evaluation. The FCR is also available in a German version.

ISBAR Interprofessional Communication Rubric (IICR) was developed by Foronda and Bauman (2015) to evaluate student nurses’ and nurses’ performance of nurse-to-physician communication using ISBAR. The researchers noted that students exhibited difficulty in phone communications with physicians during SBE. Because students may not be permitted to communicate with physicians and other healthcare providers in the traditional clinical setting, SBE may be their only opportunity to learn these required skills. This tool was developed so that educators can measure the level of communication performed for the purpose of feedback and instruction.

Quint Leveled Clinical Competency Tool (QLCCT) began when Quint observed weaknesses in the Lasater Clinical Judgment Rubric (LCJR). A group of researchers (Quint et al., 2017) collaborated to develop the QLCCT to address the LCJR’s negative language, particularly for novice learners, and its length as a measure of clinical judgment. The rubric measures clinical competence in either the simulation or the clinical environment.

Simulation Culture Organizational Readiness Survey (SCORS) was developed by Leighton, Foisy-Doll, and Gilbert (2018) to assist administrators in evaluating institutional and program readiness for simulation integration. The SCORS helps organizational leadership better understand the components that must be addressed PRIOR to purchasing simulation equipment, with the goal of increasing effective and efficient integration of simulation into the academic or organizational education curriculum.

Simulation Educator Needs Assessment Tool (SENAT) was developed by Britt, Xing, and Leighton (2023) to support needs assessment and gap analysis for simulation professional development, to provide data about simulation professionals’ needs and desire for improvement, and to assist in creating a professional development roadmap for simulation programs and/or individual simulation educators. The Healthcare Simulation Standard of Best Practice: Professional Development served as the foundation for this instrument. The SENAT was designed to assess educators’ needs to inform continuing education and orientation requirements.

Simulation Effectiveness Tool – Modified (SET-M) is a revision of the Simulation Effectiveness Tool (SET), developed in 2005 by CAE Healthcare (formerly known as METI) as part of the Program of Nursing Curriculum Integration. The SET-M is designed for the evaluation of clinical simulation scenarios. Leighton, Ravert, Mudra, and Macintosh (2015) updated the SET to incorporate simulation standards of best practice and current terminology, having determined that the tool needed to capture the desired outcome: learners’ perceptions of how well their learning needs were met in the simulation environment. The SET-M is available in Turkish and Spanish versions.

The Inter-Rater Reliability Guide, provided by Robert Morris University, can be used to establish inter-rater reliability when using an instrument to assess how learners perform in a patient care scenario.

About the Lead Organizer, Dr. Kim Leighton, PhD, RN, CHSOS, CHSE-A, ANEF, FSSH, FAAN: As the Executive Director of the ITQAN Clinical Simulation & Innovation Center at the Hamad Medical Corporation in Doha, Qatar, Dr. Leighton is respected as an educator, researcher, and mentor in the field of healthcare simulation. While president of the International Nursing Association for Clinical Simulation and Learning, she spearheaded development of the Standards of Best Practice: Simulation, endorsed by 20 interprofessional organizations and now in its 4th edition. This led to widespread dissemination of best practices to an interprofessional worldwide audience. Dr. Leighton’s research focuses on the development and psychometric analysis of evaluation tools for simulation education. The Clinical Learning Environment Comparison Survey (CLECS) was used in the National Council of State Boards of Nursing multi-site simulation study, which identified that 50% of clinical time could be replaced with simulation if certain conditions were met. Her other tools include the only comprehensive instrument to evaluate simulation facilitators and a survey designed to help leaders determine readiness to support simulation integration efforts. Her work supports comprehensive evaluation of simulation programs. Dr. Leighton is co-editor of the award-winning book Simulation Champions: Courage, Caring, and Connection. Dr. Leighton received her ASN from the University of Nebraska Medical Center, BSN and MSN from Nebraska Wesleyan University, and PhD in Educational Leadership in Higher Education from the University of Nebraska.

Special thanks to Dr. Kim Leighton for her help organizing these authors and their tools!


These simulation researchers have led the development of valid and reliable instruments that have been found useful for those working in education and clinical environments. These instruments are freely available through the links on each page to guide healthcare simulation professionals in evaluating all aspects of medical simulation.