- Definition and AHRQ Evidence Report
Medical simulation is defined as “a person, device, or set of conditions which attempts to present [education and] evaluation problems authentically. The student or trainee is required to respond to the problems as he or she would under natural circumstances. Frequently the trainee receives performance feedback as if he or she were in the real situation.”4 “Simulation procedures for evaluation and teaching have the following common characteristics:
- Trainees see cues and consequences very much like those in the real environment.
- Trainees can be placed in complex situations.
- Trainees act as they would in the real environment.
- The fidelity (exactness of duplication) of a simulation is never completely isomorphic with the real thing. The reasons are obvious: cost, [limits of] engineering technology, avoidance of danger, ethics, psychometric requirements, time constraints.
- Simulations can take many forms. For example, they can be static, as in an anatomical model [for task training]. Simulations can be automated, using advanced computer technology. Some are individual, prompting solitary performance while others are interactive, involving groups of people. Simulations can be playful or deadly serious. In personnel evaluation settings they can be used for high-stakes, low-stakes, or no-stakes decisions.”4
Medical simulations are located on a continuum of fidelity, ranging from detached, multiple-choice examination questions;5 to more engaging task trainers (eg, arms for phlebotomy practice); to full-body, computer-driven mannequins with sophisticated physiologic features that respond to pharmacologic and mechanical interventions.6 Simulations also include standardized patients: live persons trained and calibrated to portray patients with a variety of presenting complaints and pathologies. Decades of experience and research demonstrate that standardized patients are highly effective for medical education and evaluation.7 Standardized examinees (students) also have been used as a way to calibrate and improve clinical skills examinations.8, 9 Medical educators have recently combined these modalities, integrating standardized patients, inanimate models, and medical equipment to evaluate trainees' technical, communication, and other professional skills simultaneously.10
The AHRQ Evidence Report included a review of nine systematic reviews published between 1990 and 2006 that sought to evaluate the effectiveness of simulation methods in medical education outside of CME. The investigators abstracted data about study characteristics, educational objectives, learning outcomes, summary of results, conclusions, and quality of each review and graded the evidence of these articles according to each educational objective and outcome related to participant knowledge, attitudes, skills, practice behaviors, and clinical outcomes. The quality of each review was established using criteria derived from the Quality of Reporting of Meta-analyses statement,11 which is intended for use only as a guide for preparing reports on quantitative metaanalyses that include randomized controlled trials.
The AHRQ report has several limitations, the most important being its review methodology, which failed to find two eligible reports.12, 13 This and other issues make the report's findings about simulation difficult to interpret unequivocally. Nevertheless, the AHRQ report1 argues that the overall “direction of evidence points to the effectiveness of simulation training, especially for psychomotor skills (eg, procedures or physical examination techniques) and communication skills,” despite the low strength of the evidence “due to the small number of appropriate studies and scarcity of [reliable] quantitative data.” We add to these deficits the narrow focus of eight of the nine included reviews (ie, single medical specialty, single simulation method) and the weakness of most primary studies covered in the reviews. These limitations are attributed, in part, to the lack of consensus about standardized methods to quantify clinical competence, a persistent problem in medical education research. The AHRQ authors also speculate that other limitations may include the difficulty of establishing “clinical realism [high-fidelity] for participants” and “other features that may be responsible for inadequate quality of evidence in this field.”1
- BEME Review
One of the nine literature reviews cited in the AHRQ report but not explained in depth is a systematic review done under the auspices of the BEME collaboration.14 The collaboration “involves an international group of individuals, universities, and organizations (eg, AMEE [Association for Medical Education in Europe], AAMC [Association of American Medical Colleges], ABIM [American Board of Internal Medicine]) committed to moving the medical profession from opinion-based education to evidence-based education. The goal is to provide medical teachers and administrators with the latest findings from scientifically grounded educational research.”2
The broad scope of the BEME systematic review2 is distinct from the AHRQ report's narrower focus on the effectiveness of simulation compared with other educational techniques. This article addressed best educational practices, reviewing 670 journal articles published between 1969 and 2003. Despite the original intent to conduct a quantitative metaanalysis, the studies were so heterogeneous and methodologically weak that the investigators resorted to a qualitative, narrative synthesis. The primary outcome of the BEME review is an inventory of 10 features and uses of high-fidelity simulations that lead to effective learning. These features are listed in order of reported frequency (percent) among the final BEME pool of 109 articles (a sketch of this tally follows the list), and the report concluded that “the weight of the best available evidence suggests that high-fidelity medical simulations facilitate learning under the right conditions.”2 The 10 conditions2 are as follows:
- 1. Feedback is provided during learning experiences (47%).
- 2. Learners engage in repetitive practice (39%).
- 3. Simulation is integrated into an overall curriculum (25%).
- 4. Learners practice tasks with increasing levels of difficulty (14%).
- 5. Simulation is adaptable to multiple learning strategies (10%).
- 6. Clinical variation is built into simulation experiences (10%).
- 7. Simulation events occur in a controlled environment (9%).
- 8. Individualized learning is an option (9%).
- 9. Outcomes or benchmarks are clearly defined or measured (6%).
- 10. The simulation is a valid representation of clinical practice (3%).
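To make the frequency figures concrete: each article in the final BEME pool was coded for the conditions it reported, and a condition's percentage is its count divided by the 109-article pool. The Python sketch below illustrates that tally under those assumptions; the article codings shown are invented placeholders, not the actual BEME data.

```python
from collections import Counter

# Hypothetical codings: for each reviewed article, the set of learning
# conditions its authors reported (placeholder data, not BEME codings).
article_codings = [
    {"feedback", "repetitive practice"},
    {"feedback", "curriculum integration"},
    {"repetitive practice"},
    # ... one entry per article in the review pool
]

POOL_SIZE = 109  # final BEME pool of articles

counts = Counter()
for conditions in article_codings:
    counts.update(conditions)

# Report each condition as a percentage of the full pool,
# most frequently reported first (as in the list above).
for condition, n in counts.most_common():
    print(f"{condition}: {100 * n / POOL_SIZE:.0f}%")
```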
These findings, chiefly from UME and GME, are a guide for defining a research agenda on simulations as educational technologies. In contrast with the eight other literature reviews covered in the AHRQ Evidence Report, the BEME review spans a wide variety of medical specialties and simulation technologies across a long time frame. In addition, its emphasis on features and uses of medical simulation that lead to effective learning (eg, feedback, repetitive practice, curriculum integration), not just comparative effectiveness, sets a standard for understanding the benefits of simulation for medical education and personnel evaluation. The BEME review2 on high-fidelity medical simulations also included a call for increased rigor in original simulation-based medical education research and improved journal-reporting conventions. In particular, the authors suggested that journal editors insist that all reports of primary research include basic descriptive statistics (eg, means, SDs, effect sizes, number of cases per group) that will permit quantitative synthesis in subsequent metaanalyses.
Within the 109 articles, a subset of 31 journal articles reporting 32 research studies was found to contain enough empirical data to permit a quantitative metaanalysis. The studies were framed to address the question, “Is there an association between hours of simulation-based practice and standardized learning outcomes?”15 Measured outcomes from these studies were cast on a standardized metric termed average weighted effect size. Hours of simulation-based practice in each study were grouped into the following five categories: none reported, 0 to 1.0, 1.1 to 3.0, 3.1 to 8.0, and 8.1+. Data analysis revealed a highly significant “dose-response” relationship between practice and achievement, with more practice producing higher outcome gains. These results are presented in a subsequent report15 that demonstrated a direct relationship between hours of simulator practice and standardized learning outcomes; a minimal sketch of this kind of tabulation follows.
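The dose-response tabulation can be sketched as follows: bin the studies by hours of simulation-based practice using the five categories above, then compute a weighted average effect size within each bin. The study data below are invented placeholders, and weighting by sample size is one common convention; the published metaanalysis15 may weight differently (eg, by inverse variance).

```python
# Each study: (hours_of_practice, effect_size, sample_size);
# None means hours of practice were not reported.
studies = [
    (None, 0.30, 24),
    (0.5,  0.45, 40),
    (2.0,  0.60, 31),
    (5.0,  0.85, 52),
    (12.0, 1.10, 28),
]

# The five practice-dose categories used in the review.
BINS = [
    ("none reported", lambda h: h is None),
    ("0-1.0 h",       lambda h: h is not None and h <= 1.0),
    ("1.1-3.0 h",     lambda h: h is not None and 1.0 < h <= 3.0),
    ("3.1-8.0 h",     lambda h: h is not None and 3.0 < h <= 8.0),
    ("8.1+ h",        lambda h: h is not None and h > 8.0),
]

for label, in_bin in BINS:
    members = [(es, n) for h, es, n in studies if in_bin(h)]
    if not members:
        continue
    # Sample-size-weighted average effect size for this dose band.
    avg = sum(es * n for es, n in members) / sum(n for _, n in members)
    print(f"{label}: weighted effect size = {avg:.2f} ({len(members)} studies)")
```

A monotone increase in the weighted effect size across the dose bands is the “dose-response” pattern the report describes.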
- Lessons Learned About Best Educational Practices
The scholarship of the AHRQ report and BEME review is amplified by at least two other reviews12, 13 about simulation-based medical education that also can inform CME practices. One way to highlight advances in simulation-based UME and GME is to focus on exemplary education and research programs and identify their special features. Work completed by two medical simulation education and research programs16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27 is illustrative: it is thematic, sustained, and cumulative and is of special interest for chest physicians. Additional studies address academic standard setting28 and mastery learning of clinical skills in advanced cardiac life support29 and thoracentesis.30
A more recent report31 discussed the “scope of simulation-based healthcare education,” pointing out that the best simulation-based medical education is a multiplicative product of simulation technology (eg, devices, standardized patients), teachers prepared to use the technology to maximum educational advantage, and curriculum integration. It argued that the major flaws in current simulation-based medical education stem from a lack of prepared teachers and curriculum isolation, not from technological problems or deficits.
The design of educational activities useful to practicing physicians assumes that CME program directors are knowledgeable about “what works” from scholarly reviews2, 12, 13, 15 and from individual studies having strong research designs, such as randomized trials,24 mastery learning research,29, 30 and cohort studies.32 Program directors also should be informed about the latest scholarship on technology in medical education.33 The key lesson is that medical simulation and other educational technologies are best used to complement, not replace, education grounded in patient care. CME in any form should be based on scientific best evidence rather than on opinion or habit.14
We endorse the position that CME best practices reside in educational programs that have the following three features: mastery learning, deliberate practice, and recognition of and attention to cultural barriers within the medical profession that frustrate better CME programs. In particular, mastery learning and deliberate practice are ideally suited for simulation-based medical education. They also conform to accepted principles of adult education independent of teaching modalities.34
Essential elements of the mastery learning model have been described in earlier publications.35, 36, 37 In brief, mastery learning has the following seven complementary features:
- 1. Baseline, or diagnostic, testing;
- 2. Clear learning objectives, sequenced as units in increasing difficulty;
- 3. Engagement in educational activities (eg, skills practice, data interpretation, reading) focused on reaching the objectives;
- 4. A set minimum passing standard (eg, test score) for each educational unit;
- 5. Formative testing to gauge unit completion at a preset minimum passing standard for mastery;
- 6. Advancement to the next educational unit given measured achievement at or above the mastery standard; and
- 7. Continued practice or study on an educational unit until the mastery standard is reached.
The goal in mastery learning is to ensure that all learners accomplish all educational objectives with little or no variation in outcome. The amount of time needed to reach mastery standards for a unit's educational objectives varies among learners. To illustrate, in mastery learning studies on acquiring advanced cardiac life support29 and thoracentesis30 skills, approximately 20% of the internal medicine resident trainees needed time beyond the minimum allocation to reach mastery standards. The extra time needed was usually < 1 h. A schematic sketch of this train-test-advance loop appears below.
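Operationally, the seven features reduce to a simple loop: test at baseline, practice, retest, and advance only when the unit's minimum passing standard is met, so the outcome is held fixed while time varies. The Python sketch below is a schematic rendering of that logic, not an implementation from the cited studies; `units`, `assess`, and `train` are hypothetical stand-ins for a real curriculum's units, tests, and practice activities.

```python
def mastery_course(units, assess, train, max_rounds=20):
    """Advance a learner through units sequenced by difficulty (feature 2),
    repeating practice on each unit until its minimum passing standard
    (feature 4) is reached. `assess` and `train` are hypothetical callables."""
    history = []
    for unit in units:
        score = assess(unit)          # baseline/diagnostic test (feature 1)
        rounds = 0
        while score < unit["passing_standard"]:
            train(unit)               # educational activities (features 3 and 7)
            score = assess(unit)      # formative retest (feature 5)
            rounds += 1
            if rounds >= max_rounds:  # safety valve for the sketch, not part of the model
                raise RuntimeError(f"no mastery reached on {unit['name']}")
        # Advancement occurs only at or above the mastery standard (feature 6).
        history.append((unit["name"], rounds, score))
    return history
```

The fixed quantity here is the standard, and the free quantity is time, which is why roughly 20% of residents in the cited studies29, 30 needed extra practice rounds to reach the same outcome.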
The mastery learning model also includes other options in simulation-based education. For example, mastery learning can address learning objectives beyond skill acquisition to include knowledge gains; affective qualities, such as self-efficacy; or features of medical professionalism. Mastery learning requires a standardized curriculum for all learners, with uniform outcomes assessed by rigorous measurements and standards.28, 38, 39
- Implications for CME
Simulation technology as an educational tool could lead to significant changes in medical education, including a new emphasis on skill and knowledge acquisition and maintenance, integration of the technique into a comprehensive clinical curriculum that includes certification and recertification, adoption of mastery learning and deliberate practice, and increased competence and outcome measurement. Research should focus on valid and reliable tools for more systematic outcome measurements, with the ultimate goal of improving the quality of patient care. Policies that inform physician performance and govern the privilege to practice need not only to endorse the effective educational use of simulation technology, but also to tackle sources of cultural resistance to its adoption.
Simulation will never replace the situational context and complex interactions learned through interaction with real patients. Expert mentors will always be needed not only to verify trainee performance in real situations, but also to judge the simulators' in vivo fidelity. Nevertheless, cultural barriers should not hinder the adoption and informed use of simulation technology as a powerful and effective educational tool to maximize physician and other health professional training and, ultimately, to improve patient care.