A new tool for assessing short debriefings after immersive simulation: validity of the SHORT scale

Étienne Rivière, Étienne Aubin, Samuel-Lessard Tremblay, Gilles Lortie, Gilles Chiniara

BMC Med Educ. 2019



Simulation is increasingly used worldwide in healthcare education. However, it is costly in both financial and human resources. As a consequence, several institutions have designed programs offering multiple short immersive simulation sessions, each followed by a short debriefing. Although debriefing is recommended, no tool exists to assess the appropriateness of short debriefings after such simulation sessions. We developed the Simulation in Healthcare retrOaction Rating Tool (SHORT) to assess short debriefings and sought to provide validity evidence for its use.


We designed this scale based on our experience and previously published instruments, and tested it by assessing short debriefings of simulation sessions offered to emergency medicine residents at Laval University (Canada) from 2015 to 2016. Reliability and validity were analyzed following the Standards for Educational and Psychological Testing. Generalizability theory was used to examine internal-structure evidence of validity.


Two raters independently assessed 22 filmed short debriefings. Mean debriefing length was 10 min 35 s (range 7 min 21 s to 14 min 32 s). The calculated generalizability (reliability) coefficients were φ = 0.80 and φ-λ3 = 0.82. The generalizability coefficient for a single rater assessing three debriefings was φ = 0.84.
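The φ coefficients reported above are absolute-decision dependability indices from generalizability theory. As a minimal sketch, the standard φ formula for a fully crossed debriefings × raters design can be computed from estimated variance components; the components below are purely illustrative and are not the study's estimates:

```python
# Hypothetical sketch: phi (dependability) coefficient from
# generalizability theory for a debriefings x raters design.
# Variance components here are illustrative, NOT the study's estimates.

def phi_coefficient(var_d: float, var_r: float, var_dr_e: float,
                    n_raters: int) -> float:
    """phi = var_d / (var_d + (var_r + var_dr_e) / n_raters),
    where var_d is the object-of-measurement (debriefing) variance,
    var_r the rater variance, and var_dr_e the residual variance."""
    absolute_error = (var_r + var_dr_e) / n_raters
    return var_d / (var_d + absolute_error)

# Illustrative components: averaging over two raters.
phi = phi_coefficient(var_d=0.40, var_r=0.02, var_dr_e=0.18, n_raters=2)
print(round(phi, 2))  # averaging over more raters shrinks absolute error
```

A decision study (D study) such as the single-rater, three-debriefings scenario in the abstract simply re-evaluates this formula with the facet sample sizes of the intended measurement design.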


The G study yielded a high generalizability coefficient (φ ≥ 0.80), indicating high reliability. Response-process evidence indicated that no errors were associated with use of the instrument. Further studies should demonstrate the validity of the English version of the instrument and validate its use by novice raters trained in the use of the SHORT.
