Participants' Performance


Selected Abstracts


Simulation as a tool to improve the safety of pre-hospital anaesthesia: a pilot study

ANAESTHESIA, Issue 9 2009
A. J. Batchelder
Summary: We conducted a pilot study of the effects of simulation as a tool for teaching doctor–paramedic teams to deliver pre-hospital anaesthesia safely. Participants undertook a course that included 43 full-immersion, high-fidelity simulations. Twenty videos taken on day 4 and on days 9/10 of the course were reviewed by a panel of experienced pre-hospital practitioners, and participants' performance at the beginning and the end of the course was compared. The total time from arrival to inflation of the tracheal tube cuff was longer on days 9/10 than on day 4 (mean (SD) 14 min 52 s (2 min 6 s) vs 11 min 28 s (1 min 54 s), respectively; p = 0.005), while the number of safety-critical events per simulation was lower (median (IQR [range]) 1.0 (0–1.8 [0–2]) vs 3.5 (1.5–4.8 [0–8]), respectively; p = 0.011). Crew resource management behaviours also improved in later simulations. On a personal training-needs analysis, participants reported increased confidence after the course.


Intermethod Reliability of Real-time Versus Delayed Videotaped Evaluation of a High-fidelity Medical Simulation Septic Shock Scenario

ACADEMIC EMERGENCY MEDICINE, Issue 9 2009
Justin B. Williams MD
Abstract Objectives: High-fidelity medical simulation (HFMS) is increasingly utilized in resident education and evaluation, but no criterion standard for assessing performance currently exists. This study compared the intermethod reliability of real-time versus videotaped evaluation of HFMS participant performance. Methods: Twenty-five emergency medicine residents and one transitional resident participated in a septic shock HFMS scenario. Four evaluators assessed participants' performance on technical (26-item yes/no completion) and nontechnical (seven-item, five-point Likert scale) scorecards. Two evaluators provided assessment in real time, and two provided delayed videotape review. After 13 scenarios, evaluators crossed over and completed the remaining scenarios using the opposite method. Real-time evaluations were completed immediately at the end of the simulation; videotape reviewers were allowed to review the scenarios with no time limit. Agreement between raters was tested using the intraclass correlation coefficient (ICC), with Cronbach's alpha used to measure consistency among checklist items. Results: Bland-Altman plot analysis revealed substantial agreement between the real-time and videotaped review scores. The mean difference between the reviewers was 0.0 (95% confidence interval [CI] = −3.7 to 3.6) on the technical evaluation and −1.6 (95% CI = −11.4 to 8.2) on the nontechnical scorecard assessment. The videotape technical scorecard demonstrated a Cronbach's alpha of 0.914, with an ICC of 0.842 (95% CI = 0.679 to 0.926), and the real-time technical scorecard a Cronbach's alpha of 0.899, with an ICC of 0.817 (95% CI = 0.633 to 0.914), demonstrating excellent intermethod reliability.
The videotape nontechnical scorecard demonstrated a Cronbach's alpha of 0.888, with an ICC of 0.798 (95% CI = 0.600 to 0.904), and the real-time nontechnical scorecard a Cronbach's alpha of 0.833, with an ICC of 0.714 (95% CI = 0.457 to 0.861), demonstrating substantial interrater reliability. The raters were consistent in their agreement on performance within each level of training, as analysis of variance demonstrated no significant differences on either the technical scorecard (p = 0.176) or the nontechnical scorecard (p = 0.367). Conclusions: Real-time and videotape-based evaluations of resident performance of both technical and nontechnical skills during an HFMS septic shock scenario provided equally reliable methods of assessment.
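The checklist-consistency statistic reported in this abstract can be illustrated with a minimal sketch. The function below computes Cronbach's alpha from item-score columns using its standard variance formula; the scores are invented toy data, not the study's scorecards.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for k checklist items scored across n cases.

    items: list of k columns, each a list of n scores for one item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per case
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Invented example: two items whose scores rise together give high alpha.
print(round(cronbach_alpha([[1, 2, 3], [2, 4, 6]]), 3))  # → 0.889
```

Alpha near 1 indicates the items behave as a coherent scale; the 0.83–0.91 values reported above fall in the range conventionally read as good to excellent internal consistency.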


Using a Virtual Reality Temporal Bone Simulator to Assess Otolaryngology Trainees

THE LARYNGOSCOPE, Issue 2 2007
Molly Zirkle MD
Abstract Objective: The objective of this study was to determine the feasibility of computerized evaluation of resident performance using hand motion analysis on a virtual reality temporal bone (VR TB) simulator. We hypothesized that both computerized analysis and expert ratings would discriminate the performance of novices from that of experienced trainees, and that performance on the VR TB would differ according to previous drilling experience. Study Design: The authors conducted a randomized, blind assessment study. Methods: Nineteen volunteers from the Otolaryngology–Head and Neck Surgery training program at the University of Toronto drilled both a cadaveric TB and a simulated VR TB. Expert reviewers were asked to assess the operative readiness of each trainee based on a blind video review of their performance, and computerized hand motion analysis of each participant's performance was conducted. Results: Expert raters were able to discriminate novices from experienced trainees (P < .05) on cadaveric temporal bones, and there was a trend toward discrimination on VR TB performance. Hand motion analysis showed that experienced trainees had better movement economy than novices (P < .05) on the VR TB. Conclusion: Performance, as measured by hand motion analysis on the VR TB simulator, reflects trainees' previous drilling experience. This study suggests that otolaryngology trainees could accomplish initial temporal bone training on a VR TB simulator, which can provide feedback to the trainee and may reduce the need for constant faculty supervision and evaluation.
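The "movement economy" measure mentioned above is typically derived from tracked tool or hand positions. The sketch below computes total path length, one common economy metric; this is an illustrative assumption, since the abstract does not specify the study's exact hand-motion metrics.

```python
import math

def path_length(positions):
    """Total distance travelled through a sequence of 3-D hand positions.

    For the same completed task, a shorter path suggests better
    movement economy (fewer wasted motions).
    """
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

# Invented trace: a straight 3-4-5 move followed by a 1-unit adjustment.
trace = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (3.0, 4.0, 1.0)]
print(path_length(trace))  # → 6.0
```

In practice such traces are sampled from a motion tracker at a fixed rate, and path length is often combined with counts of distinct movements to score economy.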


The development and resulting performance impact of positive psychological capital

HUMAN RESOURCE DEVELOPMENT QUARTERLY, Issue 1 2010
Fred Luthans
Recently, theory and research have supported psychological capital (PsyCap) as an emerging core construct linked to positive outcomes at the individual and organizational levels. To date, however, little attention has been given to developing PsyCap through training interventions, nor have there been attempts to determine empirically whether such PsyCap development has a causal impact on participants' performance. To fill these gaps, we first conducted a pilot test of the PsyCap intervention (PCI) model with a randomized control group design. Next, we conducted a follow-up study with a cross-section of practicing managers to determine whether following the training guidelines of the PCI caused participants' performance to improve. The results provide initial empirical evidence that short training interventions such as the PCI not only can develop participants' psychological capital but can also lead to improvement in their on-the-job performance. The article concludes with the implications of these findings for human resource development and performance management.


The quality of a simulation examination using a high-fidelity child manikin

MEDICAL EDUCATION, Issue 2003
T-C Tsai
Purpose: Developing quality examinations that measure physicians' clinical performance in simulations is difficult. The goal of this study was to develop a quality simulation examination, using a high-fidelity child manikin, for evaluating paediatric residents' competence in managing critical cases in a simulated emergency room. Quality was determined by evidence of the reliability, validity and feasibility of the examination. In addition, the participants' responses regarding its realism, effectiveness and value are presented. Method: Scenario scripts and rating instruments were carefully developed in this study. Experts were used to validate the case scenarios and provide evidence of construct validity. Eighteen paediatric residents, 'working' as pairs, participated in a manikin-based simulation pre-test, a training session and a post-test. Three independent raters rated the participants' performance on task-specific technical skills, medications used and behaviours displayed. At the end of the simulation, the participants completed an evaluation questionnaire. Results: The manikin-based simulation examination was found to be a realistic, valid and reliable tool. Validity (i.e. face, content and construct) of the test instrument was evident, and the level of inter-rater concordance on participants' clinical performance was good to excellent. Item analysis showed good to excellent internal consistency on all the performance scores except the post-test technical score. Conclusions: With a carefully designed rating instrument and simulation operation, the manikin-based simulation examination was shown to be reliable and valid. However, further refinement of the test instrument will be required for higher-stakes examinations.
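Inter-rater concordance of the kind reported in this abstract is often summarized with pairwise correlations between raters' scores. The sketch below averages Pearson's r over all rater pairs; it is a generic illustration with invented scores, since the abstract does not state which concordance statistic the authors used.

```python
from itertools import combinations
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_pairwise_r(raters):
    """Average Pearson r over every pair of raters' score lists."""
    pairs = list(combinations(raters, 2))
    return sum(pearson_r(a, b) for a, b in pairs) / len(pairs)

# Invented scores from three raters for the same five examinees.
raters = [
    [70, 80, 90, 60, 85],
    [72, 79, 91, 58, 84],
    [68, 82, 88, 62, 86],
]
print(round(mean_pairwise_r(raters), 3))
```

Values near 1 correspond to the "good to excellent" concordance described above; for formal reporting, an intraclass correlation coefficient is usually preferred because it also penalizes systematic differences in rater leniency.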