Evaluating a healthcare simulation course is often an arduous task. The evaluation process can be left to the last minute or not considered as intentionally as it should be. Evaluations, particularly with a formalized tool, allow healthcare simulation faculty to review the clinical simulation-based education provided and check whether the faculty's interpretation of the performance matches the experience of clinical simulation participants. After checking out the newly migrated Evaluating Healthcare Simulation Tools from Leighton et al., now hosted exclusively on HealthySimulation.com, this article by Erin Carn-Bennett, RN, MSN, encourages clinical simulation faculty members to consider how clinical simulation courses are evaluated and how to emphasize the evaluative process of healthcare simulation-based education. Evaluating the courses provided is a useful way to take a clinical simulation program to the next level.
Post-Course Survey Evaluation Tool for Participants
The main way to evaluate a clinical simulation course is a post-course survey, which can be administered in several formats. The traditional format is a written survey completed with paper and pen. Online surveys are also useful, with many platforms available; two common options are Google Forms and SurveyMonkey. Online surveys are often preferred because they are easier for participants to complete and the data can be exported directly for analysis by clinical simulation staff members.
Encourage clinical simulation participants to be honest and state that their feedback will be anonymous. Let participants know their feedback will be reviewed to assist in future revisions of clinical simulation courses. If participants feel psychologically safe and that their learning experience within the clinical simulation scenario matters, honest and useful feedback on the experience is more likely.
Slide decks from PowerPoint or other didactic presentations should be reviewed after the clinical simulation course has been completed and before the next time the course is taught. The post-course evaluation survey provides both positive and critical review of the slide decks, presenters, and any clinical simulation from the participants' viewpoint. After the survey is reviewed, consider any alterations to the slide deck and course itinerary. Review the time schedule to determine where the course ran over or under time. Also consider feedback that identifies course segments participants wanted more time on than the schedule allowed.
A QR code that links to the survey can be created with a QR code generator, placed into a slide deck, and displayed in the debriefing room to remind simulation participants to complete the evaluation survey. This gives participants an easily accessible way to complete the post-course survey from the comfort of their cell phones before leaving the clinical simulation experience, and removes the need to continually remind or chase up participants by email. Through this process, the evaluation completion rate by clinical simulation participants will usually increase drastically.
Request quantitative data points on rating scales that can easily be translated into graphs once the survey responses are received. Also allow free-text options so that participants can share their opinions for consideration. Free-text feedback is the most helpful feedback for improving and adapting clinical simulation courses.
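As a minimal sketch of how scaled responses can be tallied for graphing, the short Python snippet below (using hypothetical 5-point Likert ratings for a single survey item; the item wording and data are illustrative, not from any real course) counts how many participants chose each point on the scale and computes a mean score that can be charted across course dates:

```python
from collections import Counter
from statistics import mean

# Hypothetical 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) to an illustrative survey item such as
# "The debriefing improved my clinical reasoning."
responses = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5]

# Tally how many participants chose each point on the scale;
# these counts map directly onto the bars of a bar chart.
tally = Counter(responses)
for point in range(1, 6):
    print(f"Rating {point}: {tally.get(point, 0)} participant(s)")

# A single mean score per item is easy to plot across course dates
# to watch for improvement after course revisions.
print(f"Mean rating: {mean(responses):.1f}")  # prints "Mean rating: 4.1"
```

The same tally can be fed into any charting tool; keeping one mean per item per course date makes trends over time straightforward to display.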
Tools for Healthcare Simulation Evaluation
Two examples of tools that can be used to evaluate healthcare simulation are the Simulation Effectiveness Tool – Modified (SET-M) (Leighton et al., 2015) and the Facilitator Competency Rubric (FCR) (Leighton et al., 2018). Both tools can be administered by clinical simulation programs or organizations in paper or electronic format to seek useful feedback on clinical simulation scenarios. The SET-M focuses on the participants' educational experience within the clinical simulation, whereas the FCR evaluates the facilitator with a standardized tool. Both tools have undergone psychometric testing for validity and reliability. Many other evaluation tools are available to simulationists; make sure to select a tool that evaluates the aspect of simulation you want to measure.
View the LEARN CE/CME Platform Webinar Evaluating Healthcare Simulation in the Days of COVID-19 and Beyond to learn more!
Utilization of Post-Course Evaluation Data
Host a post-course meeting to review how the course went from the clinical simulation faculty's perspective. The post-course survey can be reviewed by faculty, and alterations considered or proposed within the meeting. Any errors found in the slide deck, or discussions around the format of the clinical simulation course, can be raised in this meeting.
Sometimes there may be one individual or a few negative voices among clinical simulation participants in the post-course survey. Clinical simulation faculty should carefully consider whether this information is useful critical feedback that reflects an overall participant consensus, or an individual opinion that may have been influenced by factors external to the course. Based on this, decide whether the feedback is fair and representative of the clinical simulation course and whether the suggestion will improve future participants' experience.
The data collected in a post-course survey can be useful to display and share with funders of the clinical simulation program. Data can be shared about the number of participants who have received clinical simulation education, the number of courses conducted, and any noteworthy free-text feedback. Data can also be used for testimonials about the benefits of the simulation course if consent has been obtained from participants. Testimonials can be useful in advertisements of clinical simulation courses and within websites or intranet pages.
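As an illustration of the kind of headline figures a program might share with funders, this small sketch (the course names, dates, and participant counts are entirely hypothetical) totals courses delivered and participants educated from a simple list of course records:

```python
# Hypothetical course records: (course name, date, number of participants).
# In practice these rows would come from the survey platform's export.
courses = [
    ("Sepsis Simulation", "2024-03-05", 12),
    ("Airway Emergencies", "2024-04-12", 8),
    ("Sepsis Simulation", "2024-05-20", 10),
]

# Headline figures for a funder report.
total_courses = len(courses)
total_participants = sum(count for _, _, count in courses)

print(f"Courses delivered: {total_courses}")       # prints "Courses delivered: 3"
print(f"Participants educated: {total_participants}")  # prints "Participants educated: 30"
```

Even a summary this simple, refreshed after each course, gives funders a running picture of program reach alongside any consented testimonials.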
This article has reviewed the evaluation process for a clinical simulation course and points to consider. The ability to evaluate clinical simulation courses demonstrates a program's capacity to adapt and improve constantly. A culture of change is often present within clinical simulation programs and is highly necessary when striving for excellence. The discussion has centered on the post-course survey as the main tool for evaluating healthcare simulation, exploring survey selection, delivery, and structure to ensure the right survey is chosen to obtain the data needed to improve the simulation course.