Medicina Clínica Práctica
Vol. 7, No. 2 (April–June 2024)
Review
Assessment of clinical competence of medical students: Future perspectives for Spanish Faculties
Stefan Lindgren (a), Jorge Luis Pales Argullos (b), Josep Roma Millan (c)
Corresponding author: jroma@ub.edu
(a) Lund University, Lund, Sweden
(b) Barcelona University, RAMC, Barcelona, Spain
(c) Barcelona University, MHPE, Barcelona, Spain
Abstract

Medical education should foster professionals with knowledge and skills, professional attitudes, and social responsibilities. Undergraduate and postgraduate education and continuing professional development as specialists each have unique possibilities and responsibilities.

The use of varied assessment formats assists progression towards the expected program outcomes. We analyse the main methods for assessing clinical competence. This will support students, provide confidence, and document the competence acquired, which is needed for safe practice, professional life-long learning, membership of the medical knowledge community, and acting as a catalyst for a health-promoting society.

Spanish medical schools have partially incorporated the Bologna Process, including the introduction of outcome-based medical education. With some exceptions, however, they continue with non-integrated curricula structured on independent subjects.

Instead, integrated and progressive competence assessment programs in horizontally and vertically integrated curricula should be established based on the programmatic assessment paradigm. This supports feedback and summative decisions for each student, to help them progress and reach the expected outcomes.

Keywords:
Assessment
Competence
Feedback
Outcome-based
Professionalism
Program
Introduction

We are living in an era of change in the paradigm of training future doctors, which conditions the competencies required of doctors. These changes should be reflected in the goals of medicine, which must be moderate and cautious, affordable and economically sustainable, fair and equitable, and focused on the person behind the illness (person-centered medicine). There are also changes in demography with aging populations, in socioeconomic factors, and in epidemiology, with new diseases (COVID-19), the disappearance of other diseases, the emergence of new therapeutic approaches, and the prevalence of chronic diseases. There are organizational changes as well, such as the use of information and communication technologies, artificial intelligence, the reduced time and cost of obtaining information, changes in the doctor–patient relationship, and better-informed patients. Finally, there are changes in resource management: economic resources are not unlimited, and it is necessary to prioritize. All these aspects must be part of the physician's learning process and translated into competencies.1

The process of defining competences and curricular goals is dynamic. Neither the goals that medical training seeks to cover nor the competencies that must be demonstrated at the end of each stage are immutable. Instead, they must be adapted to meet societal needs, thereby improving the quality of the healthcare received by patients.1

Regarding specific competencies and learning outcomes, advances in the field of molecular biomedicine and technology have provided understanding of human physiology, pathophysiology, and pathology, allowing better prevention, improved diagnosis and treatment of diseases, and sometimes individualized therapies. These advances are now part of clinical practice, and the incipient technologies derived from them show a horizon of future applications that, in the short term, will be implemented in healthcare. All these changes will have an impact on society. They will require that future medical professionals adapt and that their training throughout the continuum contemplates these changes, so that they acquire the competencies that will enable them to face their medical practice in the best interest of the health and safety of patients and of society.

Moreover, the recent health crisis caused by the COVID-19 pandemic has shown the need to revise our current curricula to include new competencies and to expand competences that currently receive little space, such as public health, epidemiology in a globalized world, clinical management, bioethics, telemedicine/digital medicine, individual and collective measures to protect the health of professionals, etc.2

Lately, we have also seen increased interest in generic or domain-independent skills, such as ethics and professionalism, communication skills, teaching and research, leadership, teamwork, and management of health resources. The acquisition of such competences is important for developing adequate clinical practice and a well-rounded doctor. Domain-independent skills, or generic competencies, are complex behaviors. Their correct acquisition requires longitudinal development throughout the training program,3 as well as the introduction of more adequate assessment tools.

Assessment strategies

The goal of medical education is to foster future professionals with an ability to practise medicine according to the highest standards and to contribute to the development of medicine and a healthy society. The expectations on future doctors are similar regardless of where in the world they are formulated.4–8

Medical education has 3 phases: undergraduate, postgraduate, and continuing professional development throughout professional life (Fig. 1). After graduation, the doctor must be able to practise safely during postgraduate training to become a specialist. Both patients and employers must have trust in the ability of the graduated doctor to fulfill entrustable professional activities,9 to gradually become more and more independent, and at the same time, a professional member of various teams, contributing to their performance.

Fig. 1. The 3 phases of medical education. The first 2 have specified learning objectives, while the long period of professional life-long learning is mostly unregulated.

Assessment plays a fundamental role in the training of doctors and students to ensure that each graduate has demonstrated all the expected competencies. During undergraduate and postgraduate education, assessment has 2 major roles: to support the development of each student to reach his or her full potential, mainly through continuing formative assessment and feedback, and to guarantee the quality of each individual diploma issued, through summative assessments covering all aspects of professional competence. To meet these expectations, assessment should focus on the expected competence outcomes of the whole program and support the gradual development of these competences, rather than assessing each course separately. An additional requirement is global trust in diplomas issued by individual universities, which requires international recognition of national or regional accrediting agencies.10

To assure real acquisition of the expected competencies along the continuum of medical education, a new paradigm has emerged: programmatic assessment.

A new paradigm in evaluation: programmatic assessment

In the last 2 decades, a new philosophy of understanding the evaluation of medical students and doctors has emerged around the idea of "programmatic assessment."11–15 First of all, it should be said that it is not a new instrument but rather a way of understanding evaluation; that is why we refer to it as a new paradigm. Beyond the classic parameters of validity and reliability, the objective is also to embrace others, such as fairness, trustworthiness, and dependability.

The evaluation practice that we generally carry out starts from the idea that competencies are stable and generic: that the way of measuring them is valid for all circumstances (place, time, type of student…) and that there is no relationship between these competencies. Therefore, each competence has its own measuring instrument. If, for example, we want to measure the ability to manage a myocardial infarction in a patient, we measure communication on a Likert scale of generic concepts (relationship, empathy…), and we measure decision-making based on questions generated from a clinical case. It does not occur to us that both competencies influence each other and that we could use both instruments to measure both competencies.

To achieve this, the evaluation must start from a program that holistically conceives the evaluative objective and frames it within the study plan or academic curriculum itself. Each competence must be measured with different instruments, so that we can perceive the relational aspects with other competences. And of course, the same instrument can be used for different competences. So, instead of a one-to-one relationship between instrument and competence, we will have a matrix with several rows and columns.
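
Purely as an illustration (not part of the original text), such an instrument-by-competence matrix can be sketched in a few lines of Python; the instrument names, competence names, and the mapping between them below are hypothetical examples, not a prescribed design.

```python
# Illustrative sketch of a programmatic assessment matrix: each competence
# is sampled by several instruments, and each instrument informs several
# competences. All names and mappings here are hypothetical.

INSTRUMENTS = ["MCQ", "Mini-CEX", "OSCE", "Portfolio"]
COMPETENCES = ["Knowledge", "Communication", "Decision-making", "Professionalism"]

# 1 = this instrument contributes evidence for this competence.
MATRIX = {
    "MCQ":       {"Knowledge": 1, "Decision-making": 1},
    "Mini-CEX":  {"Knowledge": 1, "Communication": 1, "Decision-making": 1},
    "OSCE":      {"Communication": 1, "Decision-making": 1},
    "Portfolio": {"Communication": 1, "Professionalism": 1},
}

def instruments_for(competence: str) -> list[str]:
    """Return all instruments that provide evidence for one competence."""
    return [i for i, cs in MATRIX.items() if competence in cs]

for c in COMPETENCES:
    print(f"{c}: assessed by {', '.join(instruments_for(c))}")
```

The point of the matrix form is simply that every competence is triangulated from several sources of evidence, rather than paired with a single test.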

We assume that when evaluating we carry out a measurement that aims to assess the learning of students; instead, we should use measurements that are useful for learning. This is consistent with the idea that, since the student is at the center of the learning system, the evaluation must also help students realize where they are in their learning, what their strengths are, and which aspects they need to improve. The final objective should be to contribute diagnostic information (what is the level of competence of each student), prognostic information (what will the future competence be), and treatment (proposals to improve the current situation). These different measures of each competence allow us to recognize patterns of behavior (diagnosis) in each individual student and to propose specific remediation activities (treatment).

But the most important aspect of the new paradigm lies in decision-making for each student. It involves the team of responsible teachers openly deliberating on the results measured in a system such as the one described above and reaching a consensual decision about each student's grade. Thus, the challenge of accommodating the variability of each student and the relationships between the different measurements is taken on: in short, it is about making a judgment that allows each student to be treated as an individual.

The process of decision is crucial in this paradigm. Students generate measures from standardized instruments based on quantitative inputs, and others based on more qualitative approaches. The team in charge of the evaluation must decide in each case the value of the assessment through a process of discussion and deliberation. To avoid possible bias, it is important to separate the teachers responsible for assessment from those responsible for training, feedback, and mentoring.

With this, it is easy to awaken the myth of subjectivity in decision-making. We must remember that what must be avoided is arbitrariness; subjectivity without bias, well reasoned and well founded, is a fairer way of measuring competencies, since it observes the individual across his or her entire performance, taking into account all the relationships and interactions that may exist.

This programmatic approach to competence development requires overarching leadership of the whole program and is less likely to be compatible with independent leadership of separate courses lacking a holistic perspective. In addition, systems that allow characterization, visualization, and support of the gradual development of individual students are needed. To achieve this, a multitude of assessment methods should be combined throughout the program and used in a way that allows evaluation of all aspects of professional competence and stimulates students to develop these competencies. Many different methods have been documented, all with varying strengths and weaknesses. No method alone meets all expectations. Medical schools should select a set of methods that are compatible with their needs and resources, develop expertise in using these methods, and evaluate their performance. Emphasis should be on methods that require students to show how they master a specific task, according to the competence pyramid of Miller16 (Fig. 2), and, at the same time, illustrate the level of support and supervision needed.

Knowledge tests alone are not sufficient, although knowledge is the fundamental basis for all aspects of professional performance. There is always a temptation to rely heavily on knowledge tests, since these are comparatively easy to construct and manage and comfortably allow grading of students based on the number of points achieved. The scores may even contribute to an impression of "objectivity." To avoid this, knowledge tests should be only the basis of the assessment system. One model is to use "progress testing," with a multitude of single-best-answer questions covering all domains of knowledge from the whole program and multiple sampling from each domain.17 This enables visualization of the progression of knowledge throughout the program while, at the same time, knowledge from previous parts of the program is kept alive. If the individual student follows his or her "development curve," there is no need for re-assessment due to a temporary dip, while further action must be taken if the development curve flattens. Thus, the main emphasis in the assessment system can be on demonstrated skills and professional performance and attitudes, with a basis in a portfolio system where the students themselves are primarily responsible for providing evidence of competence.18
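
As a minimal sketch of the "development curve" logic just described (not taken from the original text), the rule can be expressed as: act only when the trajectory of progress-test scores flattens, not on a single dip. The window size, gain threshold, and score sequences below are hypothetical.

```python
# Hypothetical sketch: flag a student's progress-test trajectory only when
# the development curve flattens, not on a single temporary dip.

def curve_flattens(scores: list[float], window: int = 3, min_gain: float = 1.0) -> bool:
    """Return True if the average gain over the last `window` tests
    falls below `min_gain` points per test (a flattening curve)."""
    if len(scores) <= window:
        return False  # too few data points to judge
    recent = scores[-(window + 1):]
    gains = [b - a for a, b in zip(recent, recent[1:])]
    return sum(gains) / len(gains) < min_gain

steady = [35, 42, 40, 48, 55, 61]   # temporary dip, then recovery: no action
flat   = [35, 42, 48, 49, 49, 48]   # progression has stalled: take action

print(curve_flattens(steady))  # False
print(curve_flattens(flat))    # True
```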

Fig. 2. Miller's competence pyramid. Assessment during undergraduate education should move upwards, while the later phases must not forget to include the theoretical and scientific basis of medical practice.
Tools available for assessment

Table 1 summarizes methods developed to assess clinical competence and how they can be used and combined. They are briefly described below. In the light of the development of artificial intelligence, oral assessments where the students reflect on "how," "why," "explain," and "describe" are necessary components of the assessment toolbox, to assure deep knowledge, understanding, and synthesis. The learning process related to knowledge, skills, and professional attitudes should reflect the assessment process and allow the students to prepare through active learning, supported by clear descriptions of the expected learning outcomes, all in line with "constructive alignment."19 Problem-based and case-based learning, simulations in skills labs, and clinical practice-based learning during long placements with defined student responsibilities and gradually increasing independence are examples of such methods.

Table 1.

Some methods to assess clinical competence.

Method                                                     Competence assessed     Duration       Resource demands
OSCE                                                       Practical               Hours          High
DOPS                                                       Practical               10–20 min      Low
Oral vivas                                                 Practical/theoretical   15–30 min      Low
Mini-CEX                                                   Practical/theoretical   10–20 min      Low
Simulated skills assessments                               Practical               15–30 min      Medium
Portfolio                                                  Multiple                Longitudinal   (a)
Structured observations during long clinical placements   Multiple                Longitudinal   Low

(a) Low regarding each entry; high regarding evaluation and feedback.

Objective Structured Clinical Examination (OSCE) is a hands-on method for demonstrating the ability to independently perform specified clinical skills.20 It is typically constructed as a rotation scheme with 10–15 stations, where the performance of each individual student during a specified short time is observed by an expert, according to a predefined scoring sheet or as a global evaluation. Each station ends with short feedback before the student moves on to the next station. After a full round, the performance is evaluated. Reliability is assured by requiring a completely new OSCE if the overall performance is evaluated as "failed."
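
As an illustration only (not a description of any school's actual procedure), per-station scoring-sheet results can be rolled up into the overall round decision as below; the station names, scores, and pass mark are hypothetical.

```python
# Hypothetical sketch of OSCE scoring: per-station checklist scores are
# aggregated into one overall result; an overall fail triggers a complete
# new OSCE. Station names, scores, and the pass mark are invented.

PASS_MARK = 0.6  # hypothetical overall pass mark

def osce_result(station_scores: dict[str, float]) -> str:
    """Average the per-station scores (each 0..1 from a scoring sheet)
    and return the overall decision for the full round."""
    overall = sum(station_scores.values()) / len(station_scores)
    return "pass" if overall >= PASS_MARK else "failed: repeat complete OSCE"

scores = {"History taking": 0.78, "Cardiac examination": 0.65,
          "Suturing": 0.55, "Breaking bad news": 0.70}
print(osce_result(scores))  # pass (average 0.67)
```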

Directly Observed Procedural Skills (DOPS) is a method in which an independent observer decides whether the student masters a specific mandatory skill in a satisfactory way after practising.21 The student decides when the DOPS should take place. If the required standard is not met, the student must continue practising before re-examination.

The Mini Clinical Evaluation Exercise (Mini-CEX) evaluates students as they manage clinical situations and provides feedback related to their performance.22 The students are observed by trained supervisors during a short (10 min) interaction with patients, and feedback is given in direct relation to this interaction. Several evaluations have demonstrated that the Mini-CEX is a reliable instrument for evaluating clinical competence and that the method has high acceptance among both students and teachers. Mini-CEX assessments should be repeated many times during the program (at least 10).

Oral tests can be used in different formats, with patient encounters or in simulated situations. The purpose is to assure, in oral dialogs, that the student has deep knowledge, understands complexity, and is capable of solving clinical problems and explaining various pathophysiological or psychopathological phenomena. Questions are centered around "how," "why," "explain," and "describe how."

Simulated skills assessments can replace situations with real patients, particularly when it comes to technical procedures. The student should demonstrate an ability to prepare, inform, and carry out the procedure in the same way as they have been instructed and have practised previously. Even more complex situations, where the student receives supplementary clinical information throughout the procedure, for example in heart–lung resuscitation, can be assessed in this way.

The portfolio is a student-owned logbook in which the student gathers evidence of his or her professional development. It contains prespecified elements, and each new item introduced into the portfolio should be verified by a teacher. The portfolio can be paper-based or, preferably, digital. A digital format also allows visualization of specific professional profiles and demonstrates where further development is needed. The main principle of the portfolio is that it is the responsibility of the student to complete it and demonstrate to the assessors that he or she meets all the requirements.18
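
A minimal sketch of the data structure implied here (student-owned entries, each needing teacher verification against prespecified elements) might look as follows; the element names and entries are hypothetical.

```python
# Hypothetical sketch of a digital portfolio check: the student owns the
# entries; each prespecified element must carry a teacher verification.
# Element names, entries, and verifiers are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Entry:
    element: str
    verified_by: Optional[str] = None  # teacher who verified the entry, if any

REQUIRED_ELEMENTS = {"Communication", "Teamwork", "Ethics", "Research"}

def missing_evidence(entries: list[Entry]) -> set[str]:
    """Return the prespecified elements still lacking verified evidence."""
    verified = {e.element for e in entries if e.verified_by}
    return REQUIRED_ELEMENTS - verified

portfolio = [
    Entry("Communication", verified_by="Dr. A"),
    Entry("Teamwork"),                     # entered but not yet verified
    Entry("Ethics", verified_by="Dr. B"),
]
print(missing_evidence(portfolio))  # {'Teamwork', 'Research'} (order may vary)
```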

Structured observations during long clinical placements. When planning clinical practice, one choice is between shorter placements in all the various clinical specialities and subspecialities, mainly as an observer, and longer placements (4–6 weeks), where the possibilities to practise professional roles in relation to patients and professionals of different categories are particularly good. Here, the focus is on the roles and responsibilities of the students as members of the healthcare teams.23 The goals for such long placements are formulated just as carefully as those for theoretical subjects.

Entrustable Professional Activities (EPAs) were developed to provide opportunities for frequent, time-efficient, feedback-oriented, workplace-based assessments in the course of the daily clinical workflow.9 More specifically, they represent units of professional practice that can be fully entrusted to an individual once this individual has demonstrated the necessary competence to execute them unsupervised. One example is the ability to prioritize a differential diagnosis following a clinical encounter.

Written tests. This format is still the most popular in universities. However, it is important to take into account recent research on the script concordance test as a means to assess clinical reasoning.24

Current situation in Spanish medical schools

In recent years, Spanish medical schools have partially incorporated into their curricula the requirements established by the European Higher Education Area, or Bologna Process,25 among them the introduction of outcome-based medical education.26 Moreover, specialization training programs in Spain have been oriented towards the outcome-based paradigm. This paradigm means going from assessing theoretical–practical content to assessing acquired competencies.27 Since the acquisition of competencies is a continuous process, the assessment of competencies must be continuous and progressive. Integrated and progressive competency assessment programs should be established, based on the programmatic assessment paradigm.12 The adoption of such a model works better in a horizontally and vertically integrated medical curriculum.

Unfortunately, in general, Spanish medical schools continue to work with non-integrated curricula, structured on independent, unconnected subjects or topics. Each subject is assessed independently, without connection among them. Furthermore, the concentration of assessments of different subjects at a few time points during the year leads students not to prepare continuously throughout the training program, but to do so intensively, even compulsively, as the exam date approaches, causing significant distortions in the normal follow-up of the course.8 Spanish medical schools have introduced different assessment tools, but multiple-choice question (MCQ) tests continue to be the most used, despite the fact that this tool is insufficient by itself to adequately assess the competencies of a medical student.8

To overcome these problems, Spanish medical schools have developed and implemented an objective structured clinical examination (OSCE), generally performed at the end of the supervised internship period in the last year of the degree (6th year). This OSCE serves as an assessment of the internship period. However, it is important to point out that the end-of-degree OSCE, which involves a major implementation effort with complex logistics and high economic and time costs, does not ensure the overall acquisition of competencies, nor can it replace the lack of true continuous assessment of competencies with sufficient feedback throughout the degree. There is no doubt about the educational impact of the OSCEs, but it cannot be the only tool on which the final verification of the competency status of our students is almost exclusively based. Furthermore, even setting aside the difficulties in the design, implementation, and evaluation of these tests in our context, we cannot, from a psychometric point of view, ensure the total validity and reliability of the test. Other weak points are the lack of remediation mechanisms for students who do not pass (a very low percentage) and the difficulty of giving feedback on a test performed at the end of the degree.8,28,29

In view of the considerations above, medical schools and their faculty should not be satisfied with the progress that the generalized implementation of the end-of-degree OSCE represents. It is necessary to advance in the development of a continuous and truly integrated assessment system and to apply it globally, not to individual subjects. It should be emphasized that achieving this objective is unfeasible without the necessary organizational, time, material, and human resources, and without the participation of experts in assessment, working within the framework of a stable academic structure of medical education (a medical education department) whose core function is to centralize and evaluate the assessment processes throughout the degree.

Real continuous assessment of the performance of students, with continuous feedback that helps them in their learning, leads us to the concept of continuous and integrated assessment. For this, powerful coordination mechanisms between the different subjects in the curriculum are essential, which is not the case in our medical schools. Continuous assessment linked to feedback is useful and necessary to monitor learning progression and to take summative decisions (pass/fail). This statement is valid both for the teacher who teaches a subject and for the faculty that certifies a degree.28,29 This continuous assessment must be embedded in a program of institutional assessment, or programmatic assessment, as referred to previously. The more important the decision to be made on the basis of the assessment results (e.g., certifying a degree), the greater the number and quality of the reports must be. Data and assessment activities should therefore be available throughout the evaluation period. Only enough assessment points, structured and organized within the institution's program, will make it possible to take the coherent and fair decisions that society demands.

In this assessment model, Spanish medical schools can use all available instruments, diversifying them and choosing those that are most valid, reliable, and feasible in their context. Among these tools, medical schools should consider some of those referred to previously for the different competencies. In this sense, we propose the following model:

1. Regarding the assessment of knowledge and its application, clinical reasoning, and decision-making, one option to consider is the "progress test." The progress test is a long-term, feedback-oriented educational assessment tool, used in different countries, to evaluate the development and sustainability of cognitive knowledge during a learning process.17

2. For the assessment of clinical procedures to be acquired by all students at different stages of the degree, the use of OSATS (Objective Structured Assessment of Technical Skills) tests in a simulated setting should be considered, by means of a tunnel of activities that would assess the basic skills and procedures that every student should know how to perform.30 These skills tunnels could be carried out 3 times throughout the curriculum, one of them in the final year. To this end, all medical schools should agree on the common skills and procedures that students are expected to achieve at each stage of the degree. When students have gained enough experience, or in residents, DOPS (Directly Observed Procedural Skills) is another valid assessment tool.21

3. To assess clinical performance, a minimum number of Mini-CEX encounters could be used during the clinical internship periods, to assess the acquisition of basic clinical competencies in a real professional context.22

4. Regarding the OSCEs, as previously mentioned, it does not seem appropriate to establish only a final OSCE at the end of the degree, given its complexity and cost. Several simpler OSCEs should be carried out throughout the degree, basically with a formative objective.20

5. Finally, the use of a portfolio should also be considered for the assessment of generic competencies.18

Concluding remarks

Medical schools should make an in-depth analysis of the structure of their current curricula, adopting real curricular integration and the most convenient assessment procedures, in order to progress towards a continuous assessment model in accordance with the programmatic assessment paradigm. This model should include the tools most easily applied in each school and will allow schools to support and stimulate the learning of students, helping them reach their full potential.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Ethical issues

The ethical statement is incorporated in the attached files. The authors agree with the ethical code of the editors.

References
[1.]
The Physician of the Future. Medical Education Foundation (FEM), (2009).
[2.]
J. Palés-Argullós, C. Gomar-Sancho.
Impact of COVID-19 on medical education: undergraduate training (II).
FEM, 23 (2020), pp. 161-166
[3.]
C.P.M. Van der Vleuten.
Competency-based education is beneficial for professional development.
Perspect Med Educ, 4 (2015), pp. 323-325
[4.]
Medical professionalism in the new millennium: a physician charter.
Ann Intern Med, 136 (2002), pp. 243-246
[5.]
S. Lindgren, D. Gordon.
The doctor we are educating for a future role in global health care.
Med Teach, 33 (2011), pp. 551-554
[7.]
CanMEDS: Better Standards, Better Physicians, Better Care. Royal College of Physicians and Surgeons of Canada. https://www.royalcollege.ca/ca/en/canmeds/canmeds-framework.html
[8.]
A. Martín-Zurro, A. Gual, J. Palés, M. Nolla, Fundación Educación Médica.
La evaluación de la formación de los médicos en España.
Consejo General de Colegios de Oficiales de Médicos (CGCOM), (2022),
[9.]
O. Ten Cate.
Nuts and bolts of entrustable professional activities.
J Grad Med Educ, 5 (2013), pp. 157-158
[11.]
C.P.M. Van der Vleuten, L.W.T. Schuwirth.
Assessing professional competence: from methods to programmes.
Med Educ, 39 (2005), pp. 309-317
[12.]
C.P.M. Van der Vleuten, L.W.T. Schuwirth, E.W. Driessen, J. Dijkstra, D. Tigelaar, L.K.J. Baartman, J. Van Tartwijk.
A model for programmatic assessment: fit for purpose.
Med Teach, 34 (2012), pp. 205-214
[13.]
C.P.M. Van der Vleuten.
Revisiting “Assessing professional competence: from methods to programmes”.
Med Educ, 50 (2016), pp. 885-888
[14.]
L.W.T. Schuwirth, C.P.M. Van der Vleuten.
Programmatic assessment: from assessment of learning to assessment for learning.
Med Teach, 33 (2011), pp. 478-485
[15.]
S. Heeneman, A. Oudkerk Pool, L.W.T. Schuwirth, C.P.M. Van der Vleuten, E.W. Driessen.
The impact of programmatic assessment on student learning: theory versus practice.
Med Educ, 49 (2015), pp. 487-498
[16.]
G.E. Miller.
The assessment of clinical skills/competence/performance.
Acad Med, 65 (1990), pp. S63-S67
[17.]
A. Freeman, C. van der Vleuten, Z. Nouns, C. Ricketts.
Progress testing internationally.
Med Teach, 32 (2010), pp. 451-455
[18.]
E. Walland, S. Shaw.
E-portfolios in teaching, learning and assessment: tensions in theory and practise.
Technol Pedagogy Educ, 31 (2022), pp. 363-379
[19.]
J. Biggs.
Constructive alignment in university teaching.
HERDSA Rev Higher Educ, 1 (2014), pp. 5-22
[20.]
R.M. Harden, F.A. Gleeson.
Assessment of clinical competence using an objective structured clinical examination.
Med Educ, 13 (1979), pp. 41-54
[21.]
N. Naeem.
Validity, reliability, feasibility, acceptability and educational impact of direct observation of procedural skills (DOPS).
J Coll Physicians Surg Pak, 23 (2013), pp. 77-82
[22.]
J.J. Norcini, L.L. Blank, F.D. Duffy, et al.
The mini-CEX, a method for assessing clinical skills.
Ann Intern Med, 138 (2003), pp. 476-481
[23.]
L. Schuwirth.
Is assessment of clinical reasoning still the holy grail?
Med Educ, 43 (2009), pp. 298-300
[24.]
B. Charlin, L. Roy, C. Brailovsky, F. Goulet, C. van der Vleuten.
The script concordance test: a tool to assess the reflective clinician.
Teach Learn Med, 12 (2000), pp. 189-195
[25.]
Bologna Declaration.
The European Higher Education Area. Joint Declaration of the European Ministers of Education (Bologna, 1999).
[26.]
R.M. Harden.
AMEE Guide No. 14: outcome-based education: Part 1 – An introduction to outcome-based education.
Med Teach, 21 (1999), pp. 7-14
[27.]
S.R. Smith.
AMEE Guide No. 14: outcome-based education: Part 2 – Planning, implementing and evaluating a competency-based curriculum.
Med Teach, 21 (1999), pp. 15-22
[28.]
K. Boursicot, S. Kemp, T. Wilkinson, A. Findyartini, C. Canning, F. Cilliers, et al.
Performance assessment: consensus statement and recommendations from the 2020 Ottawa Conference.
Med Teach, 43 (2021), pp. 58-67
[29.]
J. Palés-Argullós, A. Martín-Zurro.
Consenso de expertos sobre evaluación de estudiantes y residentes a partir de la Ottawa Conference 2020.
FEM, 24 (2021), pp. 1-3
[30.]
J.A. Martin, G. Regehr, R. Reznick, H. MacRae, J. Murnaghan, C. Hutchinson, et al.
Objective structured assessment of technical skill (OSATS) for surgical residents.
Br J Surg, 84 (1997), pp. 273-278
Copyright © 2023. The Author(s)