International Journal of Clinical and Health Psychology
Vol. 13. Issue 1. Pages 67-73 (January 2013)
The new multidimensional and user-driven higher education ranking concept of the European Union
Endika Bengoetxeaa, Gualberto Buela-Casalb
a Universidad del País Vasco, Spain. European Commission, DG Education and Culture, Unit C1-Erasmus and Higher Education, Brussels, Belgium
b Foro de Evaluación de la Calidad de la Investigación y de la Educación Superior (FECIES), Spain
Table 1. Indicators proposed in U-Multirank for each of the five dimensions that were rated potentially feasible, or feasible but requiring further refinement. CHERPA-Network (2011) and Van Vught and Ziegele (2012) provide more detailed information about each indicator.


Higher education rankings are an important but controversial topic, due to the methodologies applied in existing rankings and to the use of their results for purposes for which they were not designed. At present there is no international ranking that responds to the needs of all users and that is methodologically sound in considering the various missions of higher education institutions, mainly because of a narrow focus on research that downplays the other missions in which institutions can excel, such as teaching quality, knowledge transfer, international orientation and regional engagement. The European Commission is currently involved in the implementation of a new higher education ranking methodology, characterised by taking into account the diversity of missions and of existing higher education institutions. The final aim is to create a tool allowing users to choose the performance indicators of their interest and providing them with a personalised ranking accordingly. This paper describes the motivation for designing such a tool, the principles of the methodology proposed, and the steps foreseen to have it ready for end users by 2014.

Keywords:
Higher education; World university rankings; World-class education; Bibliometric analysis; Performance evaluation

Higher education is key to future growth in Europe, where higher education systems are expected to respond to the training demands of an increasingly diverse group of people (including lifelong learning). Higher education institutions are striving to update their educational offer to ensure that they meet the changing demand for skills and abilities required by the diverse jobs of a global labour market. In this context, most institutions increasingly need to diversify their profiles and to focus on their strengths.

The European Commission supports this diversification strategy as part of its agenda to modernise Europe's higher education (European Commission, 2011). Strengthening the various missions in education, research and innovation is a sine qua non for the success of the Europe 2020 strategy, which aims to establish Europe as a world leader in the knowledge economy. For example, the Youth on the Move flagship initiative (European Commission, 2010b) notes that increased global competition requires the modernisation of European higher education in several areas: excellence in higher education, a better match with labour market demands, and excellence in research and innovation. The economic crisis has reinforced this trend, since institutions are moving from a previous phase of direct competition with each other on many fronts towards identifying their strengths, focusing on them, and considering alliances in other areas.

Reforms such as those pursued by the Bologna Process have brought an increased diversity of higher education, and this has also resulted in a growing variety of higher education institutions with very different missions. While this diversity is considered one of the advantages of the European Higher Education Area (EHEA), transparency becomes a key element of the EU strategy for modernising higher education systems: students, employers and policy makers need more transparency on how institutions perform their duties and fulfil their missions. University managers are also direct beneficiaries of transparency, as it provides adequate information for strategic decision making. It is worth noting that many university rectors admit that they are not fully aware of some areas in which their institution excels (except in some cases, notably in research); see, for instance, Buela-Casal et al. (2011) and Buela-Casal et al. (2012).

In order to identify their strengths and weaknesses, higher education institutions must be able to benchmark themselves against similar institutions at national and international levels. Since the creation of the EHEA, cooperation between universities has become even more strategic. For example, many universities consider it necessary to establish student exchange programmes, joint research projects, joint degrees, recognition and validation of undergraduate and postgraduate degrees, teacher mobility, etc. But this strategy of alliances also requires knowing better how efficiently other institutions perform (Buela-Casal, Gutiérrez-Martínez, Bermúdez-Sánchez, & Vadillo-Muñoz, 2007).

Many tools have been created with the purpose of increasing transparency in higher education, and university rankings are among the best known. Rankings are transparency tools with high potential to allow institutions to position themselves better and to develop strategies for improving quality and performance. They also benefit stakeholders, in particular students, who can use the information to make more informed choices when comparing institutions or considering a study period or placement abroad; and they are a valuable tool for policy makers, helping to inform strategic choices about the overall design of higher education systems.

Unfortunately, existing rankings (such as the well-known Shanghai or Times Higher Education rankings) show important drawbacks:

• Most rankings tend to analyse a single higher education mission (or dimension) and focus almost exclusively on research, ignoring other areas such as quality of teaching, internationalisation, innovation and engagement with the institution's environment.

• Rankings apply to institutions as a whole rather than at the discipline or field level. A disciplinary approach would certainly be more useful to stakeholders such as students, as well as to institutional leaders planning alliances with other partners.

• They tend to favour anglophone institutions, as well as certain disciplines such as engineering (placing generalist universities at a disadvantage compared with more specialised, applied ones).

• They do not cover the diversity of higher education: in practice, their lists include no more than about 2-3% of higher education institutions worldwide.

Other criticisms of existing rankings (Salmi, 2009) also refer to the methodology adopted (for example, many rankings include aspects as subjective as 'institutional prestige') and to the type of indicators chosen (often focused on what can be measured rather than on what should be measured). Furthermore, since rankings are transparency tools, they are expected to be at least as transparent and comprehensive as is usually required of scientific publications (Hartley, 2012). However, certain rankings are not transparent enough in explaining the calculation behind the final league table that constitutes their main outcome, or in justifying the choice of indicators or the weighting assigned to each of them when aggregating them into the single figure that determines the rank. Another major criticism is that some indicator names do not refer to what they actually measure. For example, the Shanghai ranking claims to measure research quality while only measuring some research outputs. Furthermore, when referring to research excellence in publications, there is some controversy over why publishing in Science and Nature constitutes a criterion of its own, and over what justifies its weighting.
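To make the aggregation step that these criticisms target concrete, the following minimal sketch shows how a traditional league table collapses several normalised indicators into a single score through predefined weights. The indicator names, values and weights are invented for illustration; they are not taken from any actual ranking.

```python
# Minimal sketch of how a traditional league table aggregates indicators.
# Institution names, indicator values and weights are hypothetical.

institutions = {
    "University A": {"publications": 0.82, "citations": 0.75, "prestige": 0.90},
    "University B": {"publications": 0.91, "citations": 0.60, "prestige": 0.55},
    "University C": {"publications": 0.70, "citations": 0.88, "prestige": 0.40},
}

# Weights fixed in advance by the ranking producer, not by the user.
weights = {"publications": 0.5, "citations": 0.3, "prestige": 0.2}

def league_table(data, weights):
    """Collapse normalised indicator values into one score per institution."""
    scores = {
        name: sum(weights[k] * v for k, v in indicators.items())
        for name, indicators in data.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

for rank, (name, score) in enumerate(league_table(institutions, weights), start=1):
    print(f"{rank}. {name}: {score:.3f}")
```

The final order depends entirely on the producer's weights: changing the `weights` dictionary reorders the table, which is why unjustified weightings are such a recurring criticism.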

One fundamental methodological weakness of existing rankings is the unfair treatment of some institutions through blind comparisons of organisations that differ greatly in size or funding. For instance, rankings like the Shanghai ranking compare institutions such as Harvard University with others whose budgets are vastly smaller, despite the fact that Harvard, a relatively small university with more postgraduate than undergraduate students, has a budget larger than the sum of all university budgets of some EU countries, and even larger than that of some countries in the world. Budget differences definitely influence areas such as research production and teaching quality, and yet many rankings compare institutions regardless of these factors, even though it is known that budget differences have an important influence even when they are small (Buela-Casal, Bermúdez, Sierra, Quevedo-Blasco, Castro et al., 2012).

Regardless of the main criticisms of ranking methodologies, one of the biggest risks of rankings is that they are interpreted for purposes they were not designed for, with negative effects on higher education institutions. For example, certain rankings take into account the number of students or faculty members of an institution who have earned a Nobel prize. One of the best and most recent references analysing existing rankings in detail, and showing examples of their application for purposes not originally planned, with unexpected results, is the report of the European University Association (EUA, 2011). The reaction of some universities seeking a higher rank has been to hire Nobel laureates as professors, albeit part-time or for marginal duties. This is clearly not in line with the modernisation agendas of higher education, and it is unclear how students would obtain any direct benefit from such practices. It has been argued that it would be more reasonable to use impact factors or the dissemination impact of institutional researchers, for example by calculating the h-index (Buela-Casal, Olivas-Ávila, Musi-Lechuga, & Zych, 2011).
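For reference, the h-index mentioned above is straightforward to compute: a researcher has index h if h of his or her papers have received at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i          # the i most-cited papers all have >= i citations
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3,
# because three papers have at least three citations each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```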

The proposal for a new ranking methodology of the European Commission

Despite these criticisms, rankings are here to stay, and they are so far the only transparency tool that provides a comparison (albeit a biased one) of the efficiency of higher education institutions, both nationally and internationally. Moreover, some governments actively promote the establishment of national university rankings to determine key issues such as competitive funding. That is why the European Commission considered it important to develop a multidimensional methodological approach to rankings (i.e., not focused on a single dimension such as research), with global scope (not only European), based on an appropriate, robust and widely accepted methodology.

With the aim of improving the state of the art and defining a set of guidelines on which rankings should be based, the International Ranking Expert Group (IREG) met in 2006 and developed the so-called "Berlin Principles on Ranking of Higher Education Institutions" (IREG, 2006). These principles emphasise that each ranking is purpose-driven, with outcomes shaped by the assumptions and values built into its methods of comparison and calculation and by the indicators and weights selected according to this objective; the final aim of each ranking therefore has to be clearly stated in order for its results to be interpreted correctly. Later, in 2008, under the French Presidency of the European Union, the conference entitled "International comparison of education systems: a European model?" called for a new methodology for mapping the different missions (also called dimensions) of excellence of higher education and research institutions, in both the European and the international context (French Government, 2008).

In response to this situation and to specific requests from the Council of the European Union, the Commission tested between 2009 and 2011 the feasibility of new approaches reflecting the diversity of missions of university performance, with the feasibility study "Design and testing of a multidimensional global university ranking". This study, also known as U-Multirank, analysed the feasibility of creating a new ranking methodology that would address the growing diversity of missions and types of higher education institutions, ultimately allowing a fair comparison of their performance. The main novelty of this approach is that the final result would not be a single overall listing of universities (i.e., not a single league table aggregating indicators with predefined weights). Rather, users would have at their disposal a tool allowing them to create a personalised ranking tuned to their own preferences and objectives in the different areas of interest (dimensions). Ultimately, this would be a tool for creating personal rankings rather than a traditional ranking table.

A key aspect for the acceptance and credibility of any ranking is that relevant stakeholders are involved in its definition and development. That is why the U-Multirank study included an Advisory Board, chaired by the European Commission, whose opinion was crucial in the design and testing of U-Multirank. Stakeholders provided vital feedback on the relevance of the performance indicators and dimensions to be addressed, on the way of presenting the ranking results, and on the different implementation models for the new ranking methodology. The stakeholders that formed this Advisory Board (even if some of them participated only on an ad hoc basis) are listed in CHERPA-Network (2011), and included representatives of the Organisation for Economic Co-operation and Development (OECD), the World Bank, the European Students' Union (ESU), the EUA, and the International Observatory on Academic Ranking and Excellence (IREG). For the upcoming implementation of the ranking, it is planned to also include other relevant actors, such as the European Network for Quality Assurance (ENQA).

This ranking initiative is considered a priority by the European Commission: of the seven flagship initiatives of the Europe 2020 strategy, the Commission's main communications for its implementation, this higher education ranking initiative is mentioned in two. The Youth on the Move flagship initiative (European Commission, 2010b) stresses the need to focus on excellence in teaching, research and innovation, on widening access, etc. Commitment #2 of the Innovation Union Communication (European Commission, 2010a) underlines the need for a ranking tool to improve transparency. The European Commission's 2011 Communication on modernising Europe's higher education systems (European Commission, 2011) identifies the increasing need for institutions to diversify their profiles and to focus on their strengths as one of the main challenges facing European higher education systems.

Principles of the new ranking methodology proposed by the European Commission

The European Commission underlines the following principles for the new higher education ranking methodology:

• Multidimensional: Covering the diversity of missions of institutions, and not just excellence in research. The U-Multirank feasibility study identified the following five dimensions: 1) teaching and learning, 2) research, 3) internationalisation, 4) innovation and knowledge transfer, and 5) regional engagement.

• User-driven: The ranking tool should not produce a single overall listing of universities (no league table aggregating all performance indicators into a single figure). Rather, users will be offered a web interface to build a personalised 'smart ranking' of their own, tuned to their personal preferences and objectives in the different areas of interest. It must allow each user (e.g., students, teachers, researchers, policy makers, institutional managers) to choose the performance indicators of their choice and to combine them as they wish (see the sketch after this list).

• Global: The ranking aspires to cover all regions of the world, including institutions from both EU and non-EU countries.

• Profiling: The ranking must ensure that comparisons are made like with like, avoiding unfair comparisons between institutions with different missions and resources.

• Disciplines: The tool would measure performance at both institutional and discipline (or field) level.

• Independence: The implementing organisation should be independent (not run by states or by higher education institutions). Data on performance should be collected and published in an independent way.

• Sustainability: The implementation model should allow for self-sustainability in the longer term, without having recourse to charging students for its use.
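A minimal sketch of how such a user-driven, profile-aware tool could behave is shown below. The institution data, dimension names and grouping rule are invented for illustration and are not the actual U-Multirank data model; the point is that the user supplies the weights and that comparisons stay like with like.

```python
# Illustrative sketch of a user-driven, profile-aware ranking.
# All data, profiles and dimensions below are invented for the example.

institutions = [
    {"name": "Univ A", "profile": "research-intensive",
     "teaching": 0.7, "research": 0.9, "internationalisation": 0.6},
    {"name": "Univ B", "profile": "research-intensive",
     "teaching": 0.8, "research": 0.7, "internationalisation": 0.8},
    {"name": "Univ C", "profile": "regional college",
     "teaching": 0.9, "research": 0.3, "internationalisation": 0.4},
]

def personalised_ranking(data, user_weights, profile):
    """Rank only institutions with a similar profile (like with like),
    scoring them with the user's own choice of dimensions and weights."""
    peers = [inst for inst in data if inst["profile"] == profile]
    scored = [
        (inst["name"], sum(w * inst[dim] for dim, w in user_weights.items()))
        for inst in peers
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# A prospective student who cares mostly about teaching and mobility:
choice = {"teaching": 0.6, "internationalisation": 0.4}
print(personalised_ranking(institutions, choice, "research-intensive"))
```

Here a prospective student interested mainly in teaching and international orientation obtains a ranking restricted to research-intensive institutions; a policy maker could choose entirely different indicators and obtain a different ordering from the same data.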

Higher education rankings: Just one more transparency tool

The ultimate goal of the European Commission is to provide tools to facilitate transparency of higher education systems. Increased transparency of how higher education institutions perform their missions should make it easier for students and researchers to make an informed choice on where and what to study and where to work. Better information can also help students and staff to make choices when considering destinations for a mobility move as well as informing policy makers at institutional, national and European level. It would also increase the general level of transparency and enable stocktaking and benchmarking. Moreover, it would offer the various user groups a dynamic tool with which to monitor trends and assess the impact of policy measures (Bologna Process, EHEA, European Research Area, Modernization Agendas) on them over time. This would in turn help policymakers define new evidence-based policies. For institutions, offering more accessible and comparable information helps them better position themselves and improve their development strategies, quality and performance.

In this context it is important to note that this ranking initiative is regarded as complementary to others, such as the construction of the European quality assurance system. Rankings provide a first comparative picture which can serve as useful guidance to all stakeholders and users, but this information should later be complemented with other transparency instruments to ensure sufficient information for decision making. Other, more specific rankings can also be used to complete this information, as suggested for doctoral studies by Musi-Lechuga, Olivas-Ávila, and Castro (2011), Musi-Lechuga, Olivas-Ávila, Guillen-Riquelme, and Castro (2011), and Olivas-Ávila and Musi-Lechuga (2012). The popularity of rankings, due to their ease of use and interpretation, cannot replace other tools such as the valuable information provided by quality assurance agencies. On the other hand, existing quality assurance systems and agencies should do their best to communicate the outcomes of accreditation and quality assessment processes to the public in a user-friendly manner.

In addition to its support for the European Quality Assurance System, the European Commission promotes and/or participates in other transparency initiatives complementary to rankings:

1. Classification of higher education institutions in Europe - the U-Map project: Developed between 2005 and 2010, this project presents a classification model that categorises higher education institutions based on their activities (Van Vught, 2009; Van Vught, Kaiser, Bohmert, File, & Van der Wende, 2008). It consists of a web tool in which institutions are classified according to their various missions of teaching, research, innovation, regional engagement and internationalisation. This tool allows fair comparisons across institutions with similar characteristics, and can be used to ensure the relevance of such comparisons when building a ranking. The project is partly based on the Carnegie Classification in the U.S.

2. European Tertiary Education Register: This initiative aims to build a complete data collection system for European higher education institutions. Its implementation is based on the EUMIDA feasibility study (EUMIDA-consortium, 2010), which conducted a pilot data collection including basic data on all European universities and additional data from a sample of research-intensive universities. The ultimate goal is to create a coherent and comparable European statistical infrastructure with a transparent data collection process, providing policy makers and institutions with relevant information for comparative benchmarking and for assessing trends in strategic decisions, thus supporting evidence-based policy.

3. Assessment of Higher Education Learning Outcomes - AHELO (Organisation for Economic Co-operation and Development, OECD, 2008, 2010): The European Commission participates in this OECD feasibility study on assessing the learning outcomes of higher education, focused on generic and disciplinary skills in engineering and economics. Its aim is to assess whether such measurements are valid and comparable across countries. The final results are expected to be presented at a final conference in 2013, and they are likely to show that the assessment of learning outcomes through student surveys shows no cultural bias and allows comparison. This project has the potential to improve the methodology for assessing excellence in teaching and training, and could inspire improved ranking indicators in this dimension.

Conclusions of the U-Multirank feasibility study

The feasibility study to design and test the new concept of a multidimensional and personalised ranking methodology was performed between May 2009 and June 2012 by consultants (the CHERPA Network) contracted by the European Commission through a tender procedure. The U-Multirank study (CHERPA-Network, 2011; Van Vught & Ziegele, 2012) concluded with the proposal to address the diversity of missions of higher education institutions through the five dimensions mentioned above. The study reviewed indicators to measure performance at both institutional and disciplinary levels (with engineering and business studies as the pilot fields). It was organised in two phases: the design of the methodology (including the definition of indicators) and a subsequent pilot testing phase. Indicators were defined in each of the five dimensions; then, in the pilot phase, the existence and quality of data were assessed for 159 higher education institutions both inside and outside Europe, which answered questionnaires on the relevance and feasibility of the data collection. This sample was chosen to ensure representativeness of the institutional diversity in higher education, and the U-Map methodology was applied to ensure the relevance of the comparison (U-Map, 2009).

U-Multirank demonstrated that both the concept of a multidimensional ranking and its further implementation for the wider public in a next phase are feasible, while underlining that further work is still required to refine some indicators, notably in some of the dimensions. The most relevant conclusions for each dimension are the following:

• Teaching and learning: Most data can be collected through questionnaires to students and teachers. The main difficulty is the measurement of employability, since many institutions do not keep in contact with their alumni, do not track their graduates, or track them at different periods after graduation.

• Research: This is a clearly feasible dimension, in which several indicators are based on bibliometric data or international databases.

• International orientation: This dimension does not present real difficulties in any of the indicators proposed.

• Knowledge transfer: Most data concerning joint publications and mobility between academia and business are available. The main drawbacks are patent data at the disciplinary level and their transnational comparability, owing to differing intellectual property legislation worldwide.

• Regional engagement: This dimension presents the greatest challenges for collecting relevant and comparable data.

Table 1 shows which of the indicators proposed in each dimension were rated potentially feasible, or feasible but requiring further development.

U-Multirank identified several challenges for implementing a first version of this ranking for end users, the most critical being the need to further improve data in terms of availability, solidity and comparability. The work to implement the European Tertiary Education Register (in which Eurostat is also involved, together with the Directorates-General for Education and Culture and for Research and Innovation) will facilitate data collection, at least for European institutions. The U-Multirank findings were presented at a final conference on 9 June 2011 in Brussels. This conference confirmed that, despite the critical attitude towards rankings (mainly due to the weaknesses described previously), most stakeholders and governments are in favour of implementing such a new multidimensional and user-driven transparency tool.

The next step: Implementation of the new ranking tool

In September 2011 the European Commission adopted its Communication on the modernisation of Europe's higher education systems (European Commission, 2011), which underlines, as one of the main challenges facing European higher education systems, the increasing need for institutions to diversify their profiles and focus on their strengths. The Communication commits to implementing a ranking of higher education institutions and research centres based on the U-Multirank methodology, with the aim of publishing a first version of the ranking by the end of 2013. At the time of writing, the European Commission has just published a new call for tender to implement this ranking, with a budget of 2 million euros for the first two years of work and with the primary objective of having this first version published around the end of 2013.

The work for the next two years is divided into four main tasks to be developed in parallel:

1. Data collection: This task includes the data collection strategy, with different update intervals depending on the nature of the information, in cooperation with the European Tertiary Education Register. It also involves securing the participation of higher education institutions and running an awareness-raising strategy with policy makers and institutions to ensure the participation of a critical mass of institutions from Europe and beyond.

2. Web interface implementation: This task covers the development of the tool offering the service to end users. The handling and presentation of the data should be user-driven, providing a service tailored to the needs of different users. The interface should be user-friendly, catering both for first-time users (who should be offered pre-defined profiles) and for experienced ones (who can set their own weightings between individually chosen indicators), and should enable comparisons between institutions of similar types.

3. Continued development: The aim is to move forward from the main conclusions of the U-Multirank study regarding both the improvement of existing indicators and the development of new ones. The ranking tool should progressively cover a greater number of disciplines beyond the current pilot fields (engineering and business studies). Special mention is made of the cooperation with other initiatives with the potential to improve the indicators of the different dimensions, such as the OECD's AHELO project, which could inspire improvements in the teaching and learning dimension.

4. Dissemination/marketing: This is the awareness-raising strategy aimed at governments and higher education institutions to ensure the participation of a critical mass of institutions within and outside Europe.

Conclusions

There is consensus on the need to assess the quality of higher education institutions, since society needs more transparency on how institutions perform and users demand clear information to make an informed choice on where to invest their time and resources. University rankings can be a useful tool for comparing and benchmarking institutions against each other, in both local and international contexts. This is especially useful in the European Higher Education Area, where institutional alliances are being planned and student and teacher mobility is increasing for both education and research purposes. However, regardless of the robustness of ranking methodologies, it is important to ensure that the information provided by rankings is properly interpreted, preferably by complementing it with other transparency tools for decision-making purposes. At the same time, rankings must be transparent and their results understandable to citizens.

The new ranking methodology supported by the European Commission through the call for tender of March 2012 is a further step towards the implementation of a new methodology and a new benchmark in the field of rankings, scheduled to be ready for users by 2014. Meanwhile, the tendency to offer a multidimensional approach seems to be spreading to other rankings: the group in charge of the Shanghai ranking recently announced that they are considering adding indicators to cover the dimension of teaching quality, providing this information to users through a web interface while maintaining the traditional research-based ranking.


*Corresponding author at: University of the Basque Country, Computer Engineering Faculty, Paseo Manuel Lardizabal, 1, 20018 Donostia-San Sebastian, Spain.
E-mail address: endika@ehu.es (E. Bengoetxea).

Received March 22, 2012; accepted May 15, 2012


References

Buela-Casal, G., Bermúdez, M. P., Sierra, J. C., Quevedo-Blasco, R., Castro, A., & Guillén-Riquelme, A. (2011). Ranking de 2010 en investigación de las universidades públicas españolas. Psicothema, 23, 527-536.

Buela-Casal, G., Bermúdez, M. P., Sierra, J. C., Quevedo-Blasco, R., Castro, A., & Guillén-Riquelme, A. (2012). Ranking de 2011 en producción y productividad en investigación de las universidades públicas españolas. Psicothema, 24, 505-515.

Buela-Casal, G., Bermúdez, M. P., Sierra, J. C., Quevedo-Blasco, R., Guillén-Riquelme, A., & Castro, A. (2012). Productividad y eficiencia en investigación de 2010: relación con la financiación de las comunidades autónomas españolas. Revista Electrónica de Metodología Aplicada (REMA), 17, 35-50.

Buela-Casal, G., Gutiérrez-Martínez, O., Bermúdez-Sánchez, M., & Vadillo-Muñoz, O. (2007). Comparative study of international academic rankings of universities. Scientometrics, 71, 349-365.

Buela-Casal, G., Olivas-Ávila, J., Musi-Lechuga, B., & Zych, I. (2011). The h index of the presidents of the American Psychological Association (APA) through journal articles included in the Web of Science database. International Journal of Clinical and Health Psychology, 11, 95-107.

CHERPA-Network (2011). U-Multirank Final Progress Report. Available from: http://www.u-multirank.eu [accessed 22 Mar 2012].

EUMIDA-consortium (2010). Feasibility study for creating a European university data collection. Available from: http://ec.europa.eu/research/era/docs/en/eumida-final-report.pdf [accessed 22 Mar 2012].

European Commission (2010a). Europe 2020 Flagship Initiative Innovation Union. Available from: http://ec.europa.eu/commission_2010-2014/geoghegan-quinn/headlines/documents/com-2010-546-final_en.pdf [accessed 22 Mar 2012].

European Commission (2010b). Youth on the move - An initiative to unleash the potential of young people to achieve smart, sustainable and inclusive growth in the European Union. Luxembourg: Publications Office of the European Union.

European Commission (2011). Supporting growth and jobs - an agenda for the modernisation of Europe's higher education systems. Available from: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2011:0567:FIN:EN:PDF [accessed 22 Mar 2012].

European University Association, EUA (2011). Global university rankings and their impact. Available from: http://www.eua.be [accessed 22 Mar 2012].

French Government (2008, November). Conclusions of the French EU presidency conference. International comparison of education systems: A European model? Paris, France.

Hartley, J. (2012). New ways of making academic articles easier to read. International Journal of Clinical and Health Psychology, 12, 143-160.

International Ranking Expert Group, IREG. (2006). Berlin Principles on Ranking of Higher Education Institutions. IREG's 2nd meeting in Berlin, 18-20 May, 2006. Available from: http://www.che.de/downloads/Berlin_Principles_IREG_534.pdf [accessed 22 Mar 2012].

Musi-Lechuga, B., Olivas-Ávila, J. A., & Castro, A. (2011). Productividad en tesis de los programas de doctorado en Psicología con Mención de Calidad. Revista Mexicana de Psicología, 28, 93-100.

Musi-Lechuga, B., Olivas-Ávila, J. A., Guillen-Riquelme, A., & Castro, A. (2011). Relación entre productividad y eficiencia de los programas de doctorado en psicología. Revista Latinoamericana de Psicología, 43, 297-305.

Olivas-Ávila, J. A., & Musi-Lechuga, B. (2012). Doctorados con Mención de Excelencia en Psicología: evidencia en tesis doctorales y artículos en la Web of Science. International Journal of Clinical and Health Psychology, 12, 503-516.

Organisation for Economic Co-operation and Development, OECD (2008). Measuring improvements in learning outcomes: Best practices to assess the value-added of schools. Available from: http://www.oecd.org/edu/preschoolandschool/measuringimprovementsinlearningoutcomesbestpracticestoassessthevalueaddedofschools.htm [accessed 22 Mar 2012].

Organisation for Economic Co-operation and Development, OECD (2010). Assessment of higher education learning outcomes (AHELO). Available from: http://www.oecd.org/ [accessed 22 Mar 2012].

Salmi, J. (2009). The challenge of world class universities. Washington: The International Bank for Reconstruction and Development/The World Bank.

U-Map (2009). Overview of indicators and data-elements, by dimension. Available from: http://www.u-map.eu [accessed 22 Mar 2012].

Van Vught, F. (Ed.). (2009). Mapping the higher education landscape: Towards a European classification of higher education. Dordrecht, Netherlands: Springer.

Van Vught, F., Kaiser, F., Bohmert, D., File, J., & Van der Wende, M. (2008). Mapping diversity: Developing a European classification of higher education institutions. Enschede: CHEPS.

Van Vught, F., & Ziegele, F. (Eds.). (2012). Multidimensional ranking - the making of U-Multirank. Netherlands: Springer.
