Educación Médica · Vol. 26, No. 6 (November–December 2025)
Original article
Mixed methods evaluation of megacode simulation training to enhance clinical reasoning and decision making in critical care nursing
Rudena A. Madayag a,b,*, Christian Leandro Monieno a, Jonel Mallari a, Angela Apostol a, Evangeline Bautista a, Rei Angelo Mangibin c, Dennison Jose Punsalan a, Isabelito Nabong a

* Corresponding author at: Angeles University Foundation, Mac Arthur Hi-way, Angeles City 2009, Philippines. Email: madayag.rudena@auf.edu.ph
a College of Nursing, Angeles University Foundation, Philippines
b Graduate School, Angeles University Foundation, Philippines
c College of Nursing, Holy Angel University, Philippines
Abstract
Background

Effective clinical judgment is essential for critical care nursing, yet few studies have comprehensively examined how simulation training impacts both technical and cognitive development. This study evaluated the effectiveness of megacode simulation training, coupled with structured debriefing, in enhancing clinical reasoning and decision-making among fourth-year Bachelor of Science in Nursing (BSN) students.

Materials and methods

A convergent parallel mixed methods design was employed. Quantitatively, 150 BSN students completed pre- and post-tests using the Lasater Clinical Judgment Rubric (LCJR), analyzed with non-parametric tests. Qualitatively, ten focus groups and 20 interviews were thematically analyzed under a constructivist paradigm. Findings were integrated through data triangulation.

Results

Quantitative findings demonstrated significant improvements across all LCJR domains (p < .001), with large effect sizes (Cohen's d ≥ 1.01), particularly in prioritizing data and skillful intervention. Qualitative analysis revealed six themes, including growth from chaos to clarity, increased confidence under pressure, improved communication, and enhanced critical thinking. Students described transformative learning experiences that bridged the gap between theoretical knowledge and clinical application. Integration of findings confirmed that megacode simulations foster both technical proficiency and cognitive-emotional development.

Conclusion

Megacode simulation training significantly enhances students' clinical judgment, critical thinking, and teamwork. Despite limitations in sampling and short-term outcome evaluation, the findings affirm the value of experiential, reflective simulation learning. Systematic integration of simulation across nursing curricula and longitudinal follow-up studies are recommended to maximize and sustain clinical competence development.

Keywords:
Megacode simulation
Clinical competence
Clinical judgment
Decision making
Introduction

In critical care settings, clinical judgment is a fundamental competency that distinguishes expert nurses from novices.1 The ability to rapidly analyze complex patient data, prioritize interventions, and make sound decisions under pressure is vital to ensuring positive patient outcomes.2 As healthcare becomes increasingly sophisticated, nursing education must evolve to cultivate these essential judgment skills in future practitioners.

Megacode simulation training offers an innovative strategy for cultivating clinical judgment by immersing students in realistic, high-pressure emergencies. These simulations require rapid clinical reasoning and decision-making in dynamic environments,2 allowing learners to identify key cues, interpret evolving scenarios, and respond effectively.3,4

Evidence shows that high-fidelity simulations improve technical skills, critical thinking, and decision-making in emergency care.5 Reflective debriefing, a key component of simulation, further enhances learning by prompting students to evaluate their actions, internalize feedback, and connect practice to theory.6,7 Integrating structured reflection supports the development of both analytical and affective components of clinical judgment.8

Despite these benefits, the existing literature has often focused on isolated competencies or short-term outcomes, lacking holistic evaluations of simulations' long-term effects.9 Moreover, few studies explore students' subjective experiences, particularly within diverse cultural contexts, limiting understanding of how learners internalize and apply simulation-based training.10

To address these gaps, this study employs a convergent parallel mixed methods design to examine the effectiveness of megacode simulation on the clinical judgment of nursing students. By integrating quantitative assessments with qualitative insights from focus groups, interviews, and narrative reflections, the study offers a comprehensive view of how simulation enhances both decision-making and readiness for real-world critical care practice.

Methods and materials

Study design

A convergent parallel mixed methods design was used to simultaneously collect and integrate quantitative and qualitative data to evaluate the impact of the megacode simulation intervention on clinical judgment and student experiences. The study followed the Simulation-Based Research (SBR) reporting guidelines by Cheng et al. (2016) to ensure methodological transparency.11

Participants and setting

The study involved 150 fourth-year Bachelor of Science in Nursing (BSN) students enrolled in an intensive care nursing course at a university in Angeles City, Philippines.

All procedures involving human participants were conducted in accordance with our institution's ethical standards. Written informed consent was obtained from each participant prior to data collection, and ethical protocols were followed throughout. Inclusion criteria included current enrollment as fourth-year BSN students, participation in the simulation intervention, and willingness to complete all study components. Students who did not complete the intervention were excluded.

Sampling method

Convenience sampling was applied for the quantitative strand, selecting students who were available and willing to complete both the pre-test and post-test. For the qualitative component, purposive sampling identified participants best suited to share detailed insights on the simulation experience. This included nine focus group discussions (FGDs) with students and one FGD with instructors, each consisting of 5–6 members, as well as 20 individual interviews. While convenience sampling ensured efficient recruitment, purposive sampling enabled the capture of rich, in-depth perspectives on clinical judgment development.12

Intervention

The intervention consisted of structured megacode simulation training aligned with the American Heart Association's Advanced Cardiac Life Support (ACLS) guidelines,13 and designed in accordance with the simulation-based research framework of Cheng et al.11 The training included four key phases:

1. Pre-simulation assessment. Students' baseline clinical judgment was measured using the Lasater Clinical Judgment Rubric (LCJR). Before sessions, participants received orientations covering the study's confidentiality safeguards and researcher roles, and reflexivity was maintained through memo writing, reflective journaling, and team discussions to manage bias.

2. Simulation training. Students participated in two ACLS-based megacode scenarios developed from structured scripts replicating realistic critical care emergencies, conducted in laboratories equipped with manikins and emergency equipment. Each group underwent two simulation sessions lasting 20–30 min, facilitated by instructors with at least five years of critical care experience. Scenarios increased in complexity to challenge clinical reasoning and decision-making, and facilitators ensured consistency by adhering to standardized scripts and protocols.

3. Debriefing. Each session was followed by a three-phase debriefing covering emotional reaction, critical analysis, and synthesis. This facilitator-guided approach promoted open dialog, guided reflection, and identification of areas for improvement.14

4. Post-simulation assessment. The LCJR was re-administered to evaluate changes in clinical judgment following the intervention.

Data instrumentation and data collection

The Lasater Clinical Judgment Rubric (LCJR), grounded in Tanner's Clinical Judgment Model, which delineates the steps of noticing, interpreting, responding, and reflecting,1 was used to assess clinical judgment among fourth-year BSN students. The LCJR rates each domain on a 4-point scale from 1 (“Beginning”) to 4 (“Exemplary”).15 Validated in both simulated and clinical settings, the tool reliably measures student performance in AHA-aligned megacode simulations. Content validity was established through expert review by four faculty members, yielding item CVIs ranging from 0.75 to 1.0, an S-CVI/Ave of 0.90, and a universal agreement index of 1.0, confirming its rigor.16 The LCJR's theoretical grounding and strong psychometric properties make it an appropriate instrument for capturing changes in clinical reasoning and decision-making following simulation training.17
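To make these indices concrete, the minimal Python sketch below computes the item-level CVI (I-CVI), S-CVI/Ave, and universal agreement from a small matrix of expert relevance ratings. The four-expert panel and the rule that ratings of 3–4 count as relevant follow the standard CVI procedure described by Yusoff;16 the ratings themselves are invented for illustration and do not reproduce the study's review data.

import numpy as np

# Hypothetical relevance ratings (1-4) from a four-expert panel for three
# illustrative items; ratings of 3 or 4 count as "relevant" (Yusoff, 2019).
ratings = np.array([
    [4, 4, 3, 4],   # item 1 -> I-CVI = 1.00
    [4, 3, 4, 2],   # item 2 -> I-CVI = 0.75
    [4, 4, 4, 4],   # item 3 -> I-CVI = 1.00
])

relevant = ratings >= 3          # did each expert judge the item relevant?
i_cvi = relevant.mean(axis=1)    # item-level CVI: proportion of experts rating 3-4
s_cvi_ave = i_cvi.mean()         # scale-level CVI, averaging method
ua = (i_cvi == 1.0).mean()       # universal agreement: share of items all experts endorse

print(f"I-CVI per item: {i_cvi}  S-CVI/Ave: {s_cvi_ave:.2f}  UA: {ua:.2f}")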

The “Responding” domain of the LCJR, capturing a student's ability to prioritize and initiate timely nursing actions, served as the key quantitative indicator of improved decision-making. Pre- and post-test scores were compared to evaluate changes aligned with the study's core objective of enhancing clinical reasoning.

For the qualitative strand, data were drawn from multiple sources to ensure a holistic understanding. Clinical instructors documented structured field notes on students' expressions, communication, and timeliness during both initial and follow-up simulations. Brief feedback forms evaluated team cohesion and adherence to ACLS protocols. Within minutes after each simulation, audio-recorded focus group discussions (n = 10) and individual interviews (n = 20), conducted by researchers RAM, CLM, JM, AA and EB, explored emotional reactions, perceived growth, and skill application. Each session lasted 30–45 min. Students also submitted narrative reflections detailing insights, challenges, and learning outcomes. These triangulated sources enriched the interpretation of how simulation influenced clinical judgment.

Data analysis

Quantitative analysis

Data were analyzed using SPSS (version 29). Descriptive statistics were calculated for each domain of the Lasater Clinical Judgment Rubric (Noticing, Interpreting, Responding, and Reflecting) and for the overall clinical judgment score. Tests of normality, including the Kolmogorov–Smirnov and Shapiro–Wilk tests, indicated significant deviations from a normal distribution across all domains.18 For instance, the Focused Observation domain showed a Kolmogorov–Smirnov statistic of 0.363 and a Shapiro–Wilk statistic of 0.716, both with p-values less than 0.001. As all p-values were significant, non-parametric methods were employed. The Wilcoxon signed-rank test was used to compare pre- and post-intervention scores, while the Mann–Whitney U test examined differences by age and gender. Effect sizes were calculated using Cohen's d, with values ≥0.80 indicating large effects.19
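For transparency about the analytic sequence, the following minimal Python sketch mirrors the pipeline above outside SPSS, using scipy.stats on invented paired ratings (the study's actual data are not reproduced here): normality screening, the Wilcoxon signed-rank test for pre/post change, the Mann–Whitney U test for a subgroup comparison, and Cohen's d with pooled pre/post SDs as defined in the note to Table 2.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical paired LCJR ratings for one subdomain on the 4-point scale (n = 150).
pre = rng.integers(1, 4, size=150).astype(float)           # 1-3, like the observed pre-test range
post = np.clip(pre + rng.integers(0, 3, size=150), 1, 4)   # shifted upward, capped at 4

# Normality screening (Shapiro-Wilk); a significant p-value motivates non-parametric tests.
_, p_norm = stats.shapiro(pre)

# Paired pre/post comparison: Wilcoxon signed-rank test.
_, p_wil = stats.wilcoxon(pre, post)

# Between-group comparison (e.g., gender): Mann-Whitney U test.
group = rng.integers(0, 2, size=150)   # hypothetical 0/1 group labels
u, p_mwu = stats.mannwhitneyu(post[group == 0], post[group == 1])

# Cohen's d with pooled pre/post SDs (large effect if >= 0.80).
d = (post.mean() - pre.mean()) / np.sqrt(
    (pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2
)

print(f"Shapiro p={p_norm:.3g}  Wilcoxon p={p_wil:.3g}  "
      f"Mann-Whitney U={u:.1f} (p={p_mwu:.3g})  Cohen's d={d:.2f}")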

Qualitative analysis

Ten focus group discussions (nine with students, one with instructors, each with 5–6 participants) and 20 individual interviews were conducted until data saturation was achieved, indicated by the repetition of themes without new insights. A constructivist paradigm guided the qualitative strand, using Braun and Clarke's reflexive thematic analysis.20 This flexible framework enabled exploration of participants' subjective meanings and socially constructed experiences with simulation-based learning. Transcripts were analyzed through open, axial, and selective coding processes to identify recurring themes.

In addition to semi-structured interviews and focus groups, field notes, instructor feedback forms, and student narrative reflections were inductively coded. These supplementary data either supported or challenged emerging themes, contributing to a richer understanding of experiences.

To enhance credibility, multiple coders (RAM, CLM, RAM, DJP, IN) independently reviewed the data, and consensus meetings ensured intercoder reliability. Member checking with selected participants validated interpretations, while investigator triangulation promoted analytical rigor. Reflexivity was maintained through analytic memos and team discussions on positionality and bias.

To ensure transparency, the study followed the Consolidated Criteria for Reporting Qualitative Research (COREQ).21 The systematic coding and triangulation approaches strengthened the reliability of findings. Finally, qualitative insights were integrated with quantitative data to provide a comprehensive understanding of how megacode simulation influenced clinical judgment.

Results

Quantitative findings

Table 1 presents the demographic profile of 150 fourth-year BSN students. Most were female (70%), with males at 30%. Sixty percent were 21–22 years old, and 40% were older. All were enrolled in the fourth year of the BSN program.

Table 1.

Participant Demographics (n = 150).

Characteristic  Frequency  Percentage 
Gender
Female  105  70% 
Male  45  30% 
Age
21–22  90  60% 
>22  60  40% 
Program
Fourth Year BSN  150  100% 

Table 2 shows that all LCJR subdomains and overall clinical judgment scores improved markedly from pre-test to post-test. Pre-test means ranged from 1.57 (Being Skillful) to 2.15 (Recognizing Deviations), while post-test means clustered around 2.76–2.77, reflecting a consistent shift toward higher performance. Wilcoxon signed-rank tests revealed significant improvements for every subdomain (all |Z| ≥ 6.995, p < .001). Corresponding Cohen's d values, from 1.01 (Recognizing Deviations) up to 1.75 (Prioritizing Data), exceed 0.80, indicating large effect sizes. These findings demonstrate that megacode simulation training had a substantial, uniform impact on clinical judgment across the noticing, interpreting, responding, and reflecting domains.

Table 2.

Pre- and Post-Test Descriptive Statistics, Wilcoxon Signed-Rank Test, and Cohen's d for LCJR Subdomains (n = 150).

LCJR Subdomain  Pre-test M ± SD  Post-test M ± SD  Z  p  Cohen's d 
Noticing
Focused Observation  1.71 ± 0.55  2.76 ± 0.69  9.054  <0.001  1.68 
Recognizing Deviations  2.15 ± 0.50  2.76 ± 0.69  6.995  <0.001  1.01 
Interpreting
Information Seeking  1.89 ± 0.74  2.76 ± 0.69  7.948  <0.001  1.21 
Prioritizing Data  1.69 ± 0.52  2.76 ± 0.69  9.257  <0.001  1.75 
Making Sense of Data  1.79 ± 0.79  2.76 ± 0.69  8.360  <0.001  1.31 
Responding
Calm, Confident Manner  2.07 ± 0.62  2.76 ± 0.69  7.426  <0.001  1.05 
Clear Communication  1.77 ± 0.80  2.76 ± 0.69  8.389  <0.001  1.33 
Well-Planned Intervention/Flexibility  1.60 ± 0.84  2.76 ± 0.69  9.057  <0.001  1.51 
Being Skillful  1.57 ± 0.84  2.76 ± 0.69  9.048  <0.001  1.55 
Reflecting
Evaluation, Self-Analysis  1.67 ± 0.82  2.76 ± 0.69  8.797  <0.001  1.43 
Commitment to Improvement  1.79 ± 0.79  2.77 ± 0.69  8.353  <0.001  1.31 

Note. M = Mean; SD = Standard Deviation. Z values are based on negative ranks; p-values are two-tailed. Cohen's d calculated as: d = (M_post − M_pre) / √((SD_pre² + SD_post²) / 2). Values ≥ 0.80 denote large effects.
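As a worked check of this formula against Table 2, take Focused Observation: d = (2.76 − 1.71) / √((0.55² + 0.69²)/2) = 1.05 / √0.389 ≈ 1.05 / 0.624 ≈ 1.68, matching the tabulated value.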

Table 3 compares composite LCJR scores for each domain and overall between pre- and post-test (n = 150). All domains increased from pre-test (1.73–1.93) to post-test (approximately 2.76), indicating substantial gains. The ranges show that individual LCJR items spanned 1–3 at pre-test, while post-test scores shifted toward the upper end of the 4-point scale (2–4).

Table 3.

Pre- and Post-Test Composite LCJR Scores by Domain (n = 150).

Domain  Pre-test M ± SD (Range)  Post-test M ± SD (Range) 
Noticing  1.93 ± 0.53 (1.00–3.00)  2.76 ± 0.69 (2.00–4.00) 
Interpreting  1.79 ± 0.55 (1.00–3.00)  2.76 ± 0.69 (2.00–4.00) 
Responding  1.75 ± 0.56 (1.00–3.00)  2.76 ± 0.69 (2.00–4.00) 
Reflecting  1.73 ± 0.56 (1.00–3.00)  2.76 ± 0.69 (2.00–4.00) 
Overall  1.79 ± 0.55 (1.00–3.00)  2.76 ± 0.69 (2.00–4.00) 

Note. Composite scores were derived by averaging the relevant LCJR subdomain scores: Noticing (Focused Observation and Recognizing Deviations); Interpreting (Information Seeking, Prioritizing Data, and Making Sense of Data); Responding (Calm, Confident Manner; Clear Communication; Well-Planned Intervention/Flexibility; and Being Skillful); and Reflecting (Evaluation, Self-Analysis and Commitment to Improvement). Pre-test scores reflect observed item means ranging from approximately 1.57 to 2.15, while post-test scores cluster around 2.76; ranges are determined by the minimum and maximum observed scores on the individual items.
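As a worked check, the composite means in Table 3 follow directly from the subdomain means in Table 2: for example, the Noticing pre-test composite is (1.71 + 2.15)/2 = 1.93, the Interpreting composite is (1.89 + 1.69 + 1.79)/3 = 1.79, and the Responding composite is (2.07 + 1.77 + 1.60 + 1.57)/4 ≈ 1.75, each matching the tabulated values.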

Table 4 shows that female and male students had nearly identical baseline LCJR scores (Females: 1.79 ± 0.71; Males: 1.81 ± 0.69) and improved to similar post-test levels (Females: 2.76 ± 0.69; Males: 2.75 ± 0.71). Mann–Whitney U tests confirmed no gender differences at pre-test (U = 1754.50, p = .137) or post-test (U = 1959.50, p = .533). By contrast, younger students (21–22 years) scored lower at pre-test (1.75 ± 0.73) than older students (>22 years: 1.86 ± 0.65), a significant difference (U = 960.50, p < .001). However, post-test scores were comparable (younger: 2.74 ± 0.68; older: 2.80 ± 0.69), and the age difference was no longer significant (U = 2492.00, p = .736). This pattern indicates that the simulation closed the initial gap in clinical judgment between age groups, while gender remained unrelated to performance both before and after the intervention.

Table 4.

Comparison of LCJR Scores by Demographic Variables (n = 150).

Group  Subgroup  n  Pre-test M ± SD  Post-test M ± SD  U  Z  p 
Gender  Female  105  1.79 ± 0.71  2.76 ± 0.69  1754.50  1.486  0.137 
  Male  45  1.81 ± 0.69  2.75 ± 0.71  1959.50  0.623  0.533 
Age  21–22  90  1.75 ± 0.73  2.74 ± 0.68  960.50  6.423  <0.001 
  >22  60  1.86 ± 0.65  2.80 ± 0.69  2492.00  0.337  0.736 

Note. M = mean; SD = standard deviation; U = Mann–Whitney U statistic; Z = standardized test statistic; p = two-tailed significance. Gender codes: 1 = Female; 2 = Male. Age codes: 1 = 21–22 years; 2 = >22 years. For each demographic variable, the first row reports the pre-test between-group comparison (U, Z, p) and the second row the post-test comparison, consistent with the values cited in the text.

Qualitative results

Thematic analysis revealed six key themes reflecting students' progression from uncertainty to competence through repeated megacode simulations. Each theme is supported by student and instructor quotations (Table 5).

Table 5.

Themes and Subthemes Illustrating Students' Transformation Across Megacode Simulations.

Theme  Subtheme  Student Quotes  Instructor Quotes 
1. From Chaos to Clarity: Evolution Through Practice  1.1 The Struggle Begins: Initial Challenges  “It was as if we were thrown in the middle of the ocean, not knowing really what to do.” (P06)“We couldn't count the number of shocks accurately…what if it was a real patient?” (FGD 5)  “The megacode simulation is a valuable tool for reinforcing critical care knowledge.” (I-03) 
  1.2 Rising Above: Mastery After Training  “In the second simulation, we knew every rhythm and what medications to give. We were prepared.” (P12)“The initiative we showed during the second simulation was a big improvement.” (FGD 2)  “Students get hands on practice with essential critical care procedures through the simulation.” (I-06) 
2. Confidence Under Fire: Building Assurance Through Experience  2.1 Overwhelmed and Anxious: Initial Encounter  “Our emotions took over; we forgot to check for return of spontaneous circulation.” (FGD 6)“I could barely focus—my mind blanked.” (P10)  “The simulation helps shape students' mindset and approach to critical care situations.” (I-01) 
  2.2 Poised and Ready: Gaining Confidence After Practice  “After the training, we were more composed and able to perform better under pressure.” (FGD 1)“I felt more prepared and confident during the second simulation.” (P16)  “By mimicking real life scenarios, the megacode simulation prepares students for the challenges of critical care.” (I-05) 
3. Communication in Crisis: From Disarray to Cohesion  3.1 Lost in Translation: Initial Communication Barriers  “Our communication was unclear and led to mistakes.” (P20)“We realized that our lack of clear communication was a major issue.”(FGD 4)  “The megacode simulation provides valuable feedback for improving team communication.” (I-03) 
  3.2 Speaking as One: Enhanced Communication  “In the second simulation, we functioned as a unified team, each supporting the others.” (P01)“We used closed loop communication.” (FGD 5)  “Faculty can effectively assess students' skills in a high pressure environment using the simulation.”(I-02) 
4. From Theory to Practice: Bridging the Knowledge Gap  4.1 Theory Without Application: Initial Struggles  “Even though we knew the theory, we struggled to apply it under pressure.” (P11)“I knew protocols but froze when time pressure came.” (FGD 6)  “Faculty members gain insights into their own readiness through the simulation.” (I-06) 
  4.2 Learning by Doing: Improved Application  “By the second simulation, we bridged the gap between theory and practice, leading to better patient care.” (P04)“The lectures really helped us understand how to apply our theoretical knowledge.” (FGD 9)  “The megacode simulation provides valuable insights for curriculum improvement.” (I-02) 
5. Team Dynamics: From Individual Effort to Collective Success  5.1 Struggling Solo: Lack of Teamwork  “We focused on our individual roles rather than functioning as a cohesive unit.” (P17)“Each of us waited for someone else to start.” (FGD 1)  “Continued training and support are crucial to maximize the benefits of simulation based learning.” (I-04) 
  5.2 Unified Effort: Strengthened Team Dynamics  “During the second simulation, we proved we were a team with one goal—to save the patient.” (P03)“Roles were delegated quickly, and everyone knew tasks.” (FGD 8)  “Students need a strong foundation in basic nursing skills to fully benefit from megacode simulations.” (I-03) 
6. Critical Thinking in Action: From Hesitation to Decisiveness  6.1 Second Guessing: Indecision Initially  “Our hesitation and anxiety caused us to miss key steps, like placing the board for cardiac compression.” (FGD 5)“I hesitated at the shock sequence and was delayed.” (P13)  “Leveraging technology can enhance the realism and effectiveness of simulations.” (I-01) 
  6.2 Swift and Sure: Decisive Actions After Training  “After the training, we were more decisive and confident in our actions during the second simulation.” (P16)“The practice sessions helped us make quick, informed decisions.” (FGD 8)  “The simulation shapes students' mindset and approach to critical-care decision-making.” (I-03) 
From chaos to clarity

Students' first simulations triggered disorientation and anxiety. One participant confessed, “It was as if we were thrown in the middle of the ocean, not knowing really what to do” (P06), and groups admitted procedural lapses: “We couldn't count the number of shocks accurately… what if it was a real patient?” (FGD 5). Instructors recognized this phase as foundational: “The megacode simulation is a valuable tool for reinforcing critical care knowledge” (I-03). This theme underscores early debriefing's role in transforming confusion into actionable insight.

Confidence under fire

Early exercises induced panic: “Our emotions took over; we forgot to check for return of spontaneous circulation” (FGD 6) and “I could barely focus—my mind blanked” (P10). Over successive simulations, participants reported steadiness: “After the training, we were more composed and able to perform better under pressure” (FGD 1) and “I felt more prepared and confident during the second simulation” (P16). Faculty observed that practice “helps shape students' mindset and approach to critical care situations” (I-01). This theme highlights how controlled stress exposure builds resilience.

Communication in crisis

Initial teamwork suffered from unclear direction: “Our communication was unclear and led to mistakes” (P20) and “We realized our lack of clear communication was an issue” (FGD 4). Debriefings introduced closed-loop feedback: “In the second simulation, we functioned as a unified team supporting each other” (P01) and “We used closed loop communication” (FGD 5). Instructors noted, “The megacode simulation provides valuable feedback for improving team communication” (I-03) and faculty “can effectively assess students' skills in a high-pressure environment” (I-02). Mastery of dialog proved critical for error reduction.

From theory to practice

Many students “froze despite knowing protocols” (FGD 6) and noted “protocols made sense until the code began” (P11). Later, participants reported breakthroughs: “By the second simulation, we bridged the gap between theory and practice, improving patient care” (P04) and “The lectures helped us apply our theoretical knowledge” (FGD 9). Instructors affirmed, “Faculty members gain insights into their readiness through the simulation” (I-06) and that simulations “provide valuable insights for curriculum improvement” (I-02). This theme demonstrates hands-on learning's power to cement conceptual understanding.

Team dynamics

Early sessions revealed fragmented efforts: “We focused on our individual roles rather than functioning as a cohesive unit” (P17) and “Each of us waited for someone else to start” (FGD 1). As practice continued, learners described unified action: “During the second simulation, we proved we were a team with one goal, to save the patient” (P03) and “Roles were delegated quickly, and everyone knew tasks” (FGD 8). Faculty stressed, “Training and support are crucial to maximize benefits of simulation based learning” (I-04) and “students need a strong foundation in basic nursing skills” (I-03). This highlights how simulation fosters interprofessional coordination.

Critical thinking in action

Initial hesitation led to missed steps: “Our hesitation and anxiety caused us to miss key steps, like placing the board for cardiac compression” (FGD 5) and “I hesitated at the shock sequence and was delayed” (P13). Later, participants reported decisive interventions: “After the training, we were more decisive and confident in our actions during the second simulation” (P16) and “The practice sessions helped us make quick, informed decisions” (FGD 8). Instructors corroborated, “Leveraging technology can enhance the realism and effectiveness of simulations” (I-01) and “The simulation shapes students' mindset and approach to critical-care decision-making” (I-03). This confirms the simulation's capacity to accelerate clinical judgment.

Together, these six themes, substantiated by student and instructor accounts, demonstrate that megacode simulation fosters skill, emotional resilience, precise communication, cohesive teamwork, and autonomous critical judgment for effective critical care practice.

Discussion

The mixed-methods findings demonstrate that megacode simulation training markedly enhances clinical judgment in fourth-year BSN students preparing for critical care nursing.22 Quantitatively, all LCJR domains (Noticing, Interpreting, Responding, and Reflecting) improved significantly (Cohen's d = 1.01 to 1.75), with the largest gains in “Prioritizing Data” and “Being Skillful,” underscoring the intervention's impact on both analytic and technical performance under pressure.23,24 Notably, improvements were uniform across gender and closed initial age-group gaps, indicating the simulation's role in standardizing competency development among diverse learners.25

Qualitative themes illuminated students' emotional and cognitive journeys. Early sessions elicited confusion and overload (a “From Chaos to Clarity” transition), mirroring literature on novice cognitive strain in high-fidelity simulations.26 Through repeated scenarios and structured debriefings, learners reported greater composure, decisiveness, and alignment of theory with practice (“Confidence Under Fire” and “From Theory to Practice”).27,28 Structured team protocols fostered clearer closed-loop communication and cohesive role performance (“Communication in Crisis” and “Team Dynamics”), while “Critical Thinking in Action” captured their shift to rapid, evidence-based clinical decision-making.5,29

When quantitative and qualitative data were integrated, four meta-inference patterns emerged. Alignment appeared as narratives of composed, decisive action confirmed gains in the Responding domain. Explanation was evident when accounts of early cognitive overload clarified variable improvements in Noticing. Expansion manifested through emotional maturation and reflective insights that extend beyond rubric metrics. Discordance surfaced when some students reported lingering uncertainty despite high rubric scores, indicating areas for sustained reinforcement. These patterns affirm Tanner's multidimensional clinical judgment model, highlighting the interplay of cognitive, technical, emotional, and interpersonal development in simulation-based education.30

An additional notable finding was the accelerated skill acquisition demonstrated by students with limited prior clinical exposure, suggesting that well-structured simulations may compensate for constrained clinical placements and promote equity in learning opportunities. Moreover, the emergent theme of psychological safety, established through transparent debriefing and peer support, appeared critical for deep learning and risk-free exploration of clinical errors.17 These insights underscore the importance of optimizing the simulation environment to maximize educational impact and support student resilience.22

Despite these promising outcomes, several limitations warrant consideration. First, the single-site convenience sample limits generalizability to other institutions and student populations. Second, variations in simulation fidelity, equipment availability, and instructor facilitation styles may have influenced variability in learner experiences. Third, our evaluation relied on rubric-based and self-reported measures immediately post-intervention, which may not reflect actual performance under clinical conditions or longer-term retention.

To translate these findings into educational practice, nursing curricula should integrate recurring megacode simulations with structured reflective debriefings across critical care courses. Faculty development programs must emphasize consistent scenario facilitation and debriefing techniques to optimize learner outcomes. Longitudinal studies are needed to evaluate the durability of clinical judgment gains and their predictive validity for real-world competence. Assessment strategies should combine the LCJR with OSCEs or direct clinical observations to provide a holistic evaluation of student readiness. Finally, programs should explore augmented and virtual reality platforms to broaden access, realism, and scalability.

In conclusion, this study provides robust, multidimensional evidence that structured megacode simulations coupled with reflective debriefing significantly advance the cognitive, technical, and affective dimensions of critical care readiness. By fostering rapid skill acquisition, psychological safety, and collaborative competence, such simulations effectively prepare nursing students for the high-stakes demands of contemporary healthcare practice.

Funding source

Nothing to Disclose.

Ethics committee approval

Angeles University Foundation-Ethics Review Committee 2024-CON-Faculty-003.

Conflict of interest

The researchers declare that there are no conflicts of interest in this study.

Appendix A. Supplementary data

Supplementary material

References
[1]
C.A. Tanner.
Thinking like a nurse: a research-based model of clinical judgment in nursing.
J Nurs Educ, 45 (2006), pp. 204-211
[2]
M.D. Duprey, K.S. Dunker.
Megacode simulation training in undergraduate nursing education.
Nurs Educ Perspect, 42 (2021), pp. 193-194
[3]
C.M. Thomas, N. Barker.
Impact of simulation on undergraduate student outcomes.
Nurse Educ, 47 (2022), pp. E127-E131
[4]
M. Fawaz, Y. Alsalamah.
Clinical competence and self-efficacy of Lebanese and Saudi nursing students participating in simulation-based learning in nursing education.
Nurs Forum, 57 (2022), pp. 260-266
[5]
F.D. Alshehri, S. Jones, D. Harrison.
The effectiveness of high-fidelity simulation on undergraduate nursing students' clinical reasoning-related skills: a systematic review.
Nurse Educ Today, 121 (2023),
[6]
M.L. Kuszajewski.
Nursing simulation debriefing: useful tools.
Nurs Clin North Am, 56 (2021), pp. 441-448
[7]
K.J. Morse, M.K. Fey, S.G. Forneris.
Evidence-based debriefing.
Annu Rev Nurs Res, 39 (2020), pp. 129-148
[8]
L.M. Farrell, S. Buydens, G. Bourgeois-Law, G. Regehr.
Experiential learning, collaboration and reflection: key ingredients in longitudinal faculty development.
Can Med Educ J, 12 (2021), pp. 82-91
[9]
K.A. Theobald, N. Tutticci, J. Ramsbotham, S. Johnston.
Effectiveness of using simulation in the development of clinical reasoning in undergraduate nursing students: a systematic review.
[10]
J.J. Carrasco-Guirao, C. Leal-Costa, M.L.Á. Castaño-Molina, M.B. Conesa-Ferrer, A. Molina-Rodríguez, J.L. Díaz-Agea, et al.
Exploring how evidence-based practice, communication, and clinical simulation outcomes interact in nursing education: a cross-sectional study.
Nurs Rep (Pavia, Italy), 14 (2024), pp. 616-626
[11]
A. Cheng, D. Kessler, R. Mackinnon, T.P. Chang, V.M. Nadkarni, E.A. Hunt, et al.
Reporting guidelines for health care simulation research: extensions to the CONSORT and STROBE statements.
[12]
S. Dawadi, S. Shrestha, R.A. Giri.
Mixed-methods research: a discussion on its types, challenges, and criticisms.
J Pract Stud Educ, 2 (2021), pp. 25-36
[13]
American Heart Association. CPR & ECC guidelines: algorithms. American Heart Association.
[14]
T. Marshall, S. Keville, A. Cain, J.R. Adler.
Facilitating reflection: a review and synthesis of the factors enabling effective facilitation of reflective practice.
Reflective Pract, 23 (2022), pp. 483-496
[15]
K. Lasater.
Clinical judgment development: using simulation to create an assessment rubric.
J Nurs Educ, 46 (2007), pp. 496-503
[16]
M.S.B. Yusoff.
ABC of content validation and content validity index calculation.
Educ Med J, 11 (2019), pp. 49-54
[17]
R.A. Madayag, E.C. Bautista, J.P.C. Pineda, A.S. Geanga, R.M.S. Agustin, M.L. Roque, et al.
Refining clinical judgment competence in nursing education in the Philippines: a mixed-methods study on the impact of the Philips 66 brainstorming technique in case-based learning.
Belitung Nurs J, 10 (2024), pp. 680-694
[18]
P. Mishra, C.M. Pandey, U. Singh, A. Gupta, C. Sahu, A. Keshri.
Descriptive statistics and normality tests for statistical data.
Ann Card Anaesth, 22 (2019), pp. 67-72
[19]
M.L. Kumar, J. Stephen, R. George, G.L. Harikrishna, P.S. Anisha.
Use of effect size in medical research: a brief primer on its why and how.
Kerala J Psychiatry, 35 (2022),
[20]
V. Braun, V. Clarke.
Reflecting on reflexive thematic analysis.
Qual Res Sport Exerc Health, 11 (2019), pp. 589-597
[21]
A. Tong, P. Sainsbury, J. Craig.
Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups.
Int J Qual Health Care, 19 (2007), pp. 349-357
[22]
N. Alrashidi, E. Pasay An, M.S. Alrashedi, A.S. Alqarni, F. Gonzales, E.M. Bassuni, et al.
Effects of simulation in improving the self-confidence of student nurses in clinical practice: a systematic review.
BMC Med Educ, 23 (2023), pp. 815
[23]
F. Saghafi, N. Blakey, S. Guinea, T. Levett-Jones.
Effectiveness of simulation in nursing students' critical thinking scores: a pre-/post-test study.
[24]
A. Sterner, M. Skyvell Nilsson, A. Eklund.
The value of simulation-based education in developing preparedness for acute care situations: an interview study of new graduate nurses' perspectives.
[25]
S. Chevalier, M. Paquay, S. Krutzen, A. Ghuysen, S. Stipulante.
Learning technical skills in simulation: shared training for medical students and advanced practice nurses.
[26]
M.L. Tremblay, J.J. Rethans, D. Dolmans.
Task complexity and cognitive load in simulation-based education: a randomised trial.
Med Educ, 56 (2022), pp. 1191-1199
[27]
Y. Kong.
The role of experiential learning on students' motivation and classroom engagement.
[28]
T.Y. Khowong, N.N. Khamis.
Transformative learning, priming, and simulation timing: a randomized controlled pilot study among emergency medicine residents.
[29]
T.C. Sung, H.C. Hsu.
Improving critical care teamwork: simulation-based interprofessional training for enhanced communication and safety.
J Multidiscip Healthc, 18 (2025), pp. 355-367
[30]
S.E. Husebø, I.Å. Reierson, A. Hansen, M. Silvennoinen, S. Tveiten.
Post-simulation debriefing as a stepping stone to self-reflection and increased awareness: a qualitative study.
Copyright © 2025. The Author(s)