Knowledge into Practice: Exploring fairness configurations for objective behavior in peer assessment
Duen-Huang Huang
The Center for Teacher Education
Chaoyang University of Technology, 168, Jifeng E. Rd., Wufeng District, Taichung, 413310 Taiwan
Abstract

This research uses fuzzy set/Qualitative Comparative Analysis (fsQCA) to explore the role of perceived fairness in promoting objective behavior in peer assessment. Drawing on three key antecedents - outcome fairness, anonymity, and explanation - the paper presents multiple combinations of antecedents that lead to high levels of perceived fairness among students. Contrary to the proposition that all fairness factors must be high, the findings reveal that perceived fairness can be achieved when any two of the three antecedents are present at high levels. Specifically, three effective combinations emerge: (1) high outcome fairness with high anonymity and low explanation, (2) high outcome fairness with low anonymity and high explanation, and (3) low outcome fairness with high anonymity and high explanation. These results underscore the compensatory nature of fairness perceptions and offer practical implications for educators and administrators designing peer assessment systems. By ensuring at least two fairness dimensions are adequately addressed, institutions can foster more reliable and ethical peer evaluations. The paper contributes to the literature on peer assessment by highlighting the configurational logic of fairness and its practical utility in educational contexts.

Keywords:
Anonymity
Distributive justice
Explanation
Interactional justice
Outcome fairness
Procedural justice
Introduction

This research examines how perceived fairness can be ensured in peer assessment in a higher education environment. Higher education is evolving in many aspects, including teaching, learning, and assessment. Empirical evidence supports the positive impact of peer assessment on performance regardless of disparities in peer assessment designs (Double et al., 2020; Li et al., 2020). Peer assessment and its corresponding review studies cover a wide variety of topics, such as the design and implementation of peer assessment practices, the reliability and validity of peer assessment, quality criteria for peer assessment practices, the impact of social and interpersonal processes, peer assessment of collaborative learning, and how instructional conditions relate to peer assessment outcomes (Alqassab et al., 2023).

Though peer assessment brings many benefits for instructors and students, such as efficiency, its suitability is still widely debated. A strand of the literature focuses on how instructional conditions relate to peer assessment outcomes (Ashenafi, 2017; Double et al., 2020; Hoogeveen & Van Gelderen, 2013; Huisman et al., 2019; Li et al., 2020; Panadero et al., 2018; Sanchez et al., 2017; Topping, 2003, 2013, 2021a, 2021b; Van Popta et al., 2017). One issue is that perceived fairness greatly affects instructors' attitudes toward peer assessment.

Peer assessment has become a critical and popular topic, particularly during the COVID-19 pandemic. Vander Schee and Birrittella (2021) conducted a two-year study comparing peer assessment in pre-COVID-19 hybrid and post-COVID-19 online courses, analyzing the peer group grading process and grades against the instructor grades. The results demonstrate no significant difference between peer group grades and instructor grades in either course format. Student survey results also show that students perceive peer group grading as fair in both formats (hybrid and online). Instructors can therefore feel confident about using peer group grading as a fair assessment tool.

When it comes to questions about the reliability, grading, and objectivity of student self and peer assessment, teachers' responses vary widely. Only 29% of teachers think that student self and peer assessments are as reliable as their own grading. Indeed, most teachers (85%) are skeptical about using the grades obtained from student self and peer assessment as official grades (Gurbanov, 2016).

The literature has widely covered peer assessment using traditional variable-oriented methods (e.g., regression analysis and structural equation modeling (SEM)), which focus on the net effect of independent variables on outcomes. Most quantitative studies surveyed by Alqassab et al. (2023) applied variable-oriented methods. However, linear models assume that causal relationships are symmetrical and additive, which may oversimplify the complexity of fairness and objectivity in peer assessment (Ragin, 2009; Schneider & Wagemann, 2012). Hence, exploring the combinations of causes that achieve perceived fairness is rather important.

Perceived fairness has been studied ever since Greenberg (1986), but most papers target the working environment. The present research borrows the concept of perceived fairness from those studies as the outcome. It then goes further by using fuzzy set/Qualitative Comparative Analysis (fsQCA) as the analytic tool to explore the combinations of antecedents that produce perceived fairness.

Relationships among factors are usually complex (Urry, 2005). One interesting phenomenon in causal complexity is causal asymmetry, where the causes leading to the presence of an outcome may differ from those leading to its absence (Ragin, 2009). fsQCA provides important benefits over regression-based methods (Woodside, 2013): it focuses on the complex and asymmetric relations between the outcome and its antecedents, whereas regression-based methods compute the net effect of the factors in a model (Pappas & Woodside, 2021). This study therefore takes fsQCA as the research method to explore the combinations of antecedents through which peer assessment achieves perceived fairness.

The rest of the paper runs as follows. Section 2 reviews relevant literature. Section 3 introduces the data and research methods. Section 4 provides the empirical results. Section 5 discusses them and their implications. Section 6 concludes.

Literature review

Perceived fairness

Greenberg (1986) stressed the potential importance of fair evaluations in determining workers' acceptance of appraisal systems (Dipboye & De Pontbriand, 1981; Lawler, 1967) and identified two approaches: distributive justice and procedural justice. Distributive justice represents the fairness of the evaluations received relative to the work performed, corresponding to the perceived fairness of received outcomes/resources (Adams, 1965). People are assumed to perceive justice when there is a balance between effort and benefits (Peiró et al., 2014).

Procedural justice assesses the fairness of the evaluation procedure; hence, perceived fairness also covers the procedures by which resources are allocated. Landy et al. (1980) found that the perceived fairness of performance evaluation relates to process components. Specifically, procedural justice (Thibaut & Walker, 1975) refers to the fairness of the procedures developed to achieve outcomes or resources. Therefore, while distributive justice is more outcome-oriented, procedural justice is more relationship-oriented (Peiró et al., 2014).

There is growing empirical evidence that judgments are influenced by the enactment of the procedure as well. Interactional justice was distinguished from procedural justice (Bies & Shapiro, 1987): while the latter refers to the more structural facet of procedures, the former focuses on the interpersonal aspects, concentrating on the relevance of interpersonal treatment when procedures are implemented (Bies & Moag, 1986).

Peer assessment and perceived fairness

Peer assessment is an arrangement for students to specify the level of performance of other equal-status students (Topping, 2009). Schools at different levels have applied peer assessment. For example, peer assessment is a tool to enhance primary bilingual teachers' training (Huertas-Abril et al., 2021), and at the college level it can foster proving skills in mathematics classes (Knop et al., 2022).

Perceived fairness refers to the fair representation of effort and contribution (Greenberg, 1986; Rasooli et al., 2025). Students consider fairness a critical issue in classroom assessment approaches (Sambell et al., 1997). Heidari and Saghafi (2025) evaluated 29 architecture students in peer assessment and identified fairness challenges, such as concerns regarding collusion, power dynamics within friend groups, limitations of participatory culture, and overwhelming responsibility. Their paper suggests that a multistage peer assessment process is effective at addressing fairness challenges.

Perceived fairness is taken as an important factor in online peer assessment (Lin, 2018). Vander Schee and Birrittella (2021) measured fairness in peer assessment and demonstrated no significant difference between peer group grades and instructor grades in both course formats.

Rating bias may nonetheless occur in peer assessment. In Stonewall et al. (2024), both instructors and students described bias in their classrooms and assessments, with evidence of bias showing up in peer assessment scores. Kaufman and Schunn (2011) found a significant drop in perceived fairness after online peer assessment was implemented. These studies all highlight the importance of perceived fairness in peer assessment.

Antecedents

To measure perceived fairness, this study uses outcome fairness to represent distributive justice, anonymity to represent procedural justice, and explanation to represent interactional justice. Outcome fairness is rather straightforward: it measures whether students consider the evaluation results fair. Hence, it reflects the character of distributive justice.

In various contexts, anonymity is considered important in procedural justice (Harris et al., 2013; Hough et al., 2016; Kaur & Carreras, 2021; Panadero & Alqassab, 2019). Rater identity is concealed so as to reduce interpersonal bias and social pressure (Double et al., 2020). Lin (2018) investigated online peer assessment within a Facebook-based learning application with a focus on the effects of anonymity. The results indicated that the anonymous group provided significantly more cognitive feedback, while the identifiable group offered more affective feedback and more meta-cognitive feedback. Members of the anonymous group also perceived that they had learned more from peer assessment and had more positive attitudes toward the system, but they also perceived peer comments as being less fair than the identifiable group did. The findings provide important evidence for the cognitive and pedagogical benefits of anonymity in online peer assessment among pre-service teachers. In particular, anonymity affects perceived fairness and hence represents procedural justice.

The literature has identified a number of features associated with interactional justice, such as the provision of an explanation (Bies & Moag, 1986), honesty (Clemmer, 1993), empathy and assurance (Parasuraman et al., 1985), directness and concern (Ulrich, 1984), effort (Mohr, 1991), acceptance of blame (Goodwin & Ross, 1989), and the offering of an apology (Goodwin & Ross, 1992; Bies & Shapiro, 1987; Folkes, 1984). To measure perceived fairness in peer assessment, this present study employs explanation to denote interactional justice.

Explanation refers to explaining the grading criteria clearly and applying them consistently (Falchikov & Goldfinch, 2000). Quantitative peer assessment studies have compared peer and teacher marks; their findings show that peer assessments resemble teacher assessments more closely when the grading criteria are well explained (Falchikov & Goldfinch, 2000).

To explore perceived fairness in peer assessment, outcome fairness (representing distributive justice), anonymity (representing procedural justice), and explanation (representing interactional justice) serve as the antecedents, and perceived fairness is the outcome. Based on the literature, Fig. 1 depicts the research framework.

Fig. 1.

Research framework.

fsQCA

The set-theoretic approach of fsQCA uses Boolean algebra to determine which combinations of antecedents contribute to the outcome of interest (Boswell & Brown, 1999; Ragin, 1987, 2009). The combinations of antecedents provide alternative causal paths that help explain the outcome's construct (Kraus et al., 2018). As a result, fsQCA, grounded in complexity theory, has been applied in a multitude of disciplines across business and management (Fiss, 2007; Rihoux et al., 2013).

FsQCA can address many complex problems. For example, Huang et al. (2023) employed it to explore the influencing paths of college students' entrepreneurial willingness in China. Cabrilo et al. (2024) used fsQCA to examine the contingent and complex relations between multidimensional intellectual capital, technology-based knowledge management, and innovation outcomes in a rapidly changing business environment, via survey data collected from 102 publicly listed firms in Taiwan. Chen and Chen (2024) used fsQCA to investigate the configurational relationships among e-government online services' technology, institutional frameworks, content provision, e-participation, service provision, and innovation, with the aim of enhancing national governance capacity and offering governance support.

FsQCA may face some limitations in its application. It relies heavily on prior theory to select antecedents, set calibration thresholds, and interpret results; without a strong theory, the analysis may appear arbitrary or post hoc. Moreover, fsQCA typically yields multiple solutions, and judging which one is more valid can be subjective. When multiple solutions lead to the same outcome, interpreting the meaning and implications of all the configurations can be challenging.

Data and methodology

Peer assessment set-up

In the teaching course, the students worked in groups to deliver presentations. Each group consisted of 4 to 6 students. The presentation topics revolved around current popular applications of artificial intelligence (AI), including but not limited to the following: Advantages of Tesla’s autonomous driving technology; the Cambridge Analytica case: A key factor in Donald Trump’s victory in the 2016 U.S. presidential election; applications of image recognition systems; and applications of speech recognition systems.

The topics and order of presentations were determined by drawing lots. Each group had 15 minutes for its presentation, and the slides and oral report were delivered in both Chinese and English to enhance bilingual communication skills. To improve the overall quality of the presentations and the fluency of spoken English, students were encouraged to use Google Translate for collaborative translation and generative AI tools (such as ChatGPT) to help organize content, highlight key points, and make their expressions clearer and more natural.

The grading criteria were announced at the beginning of the course, and grading followed an anonymous peer review system. Students were encouraged to learn from the strengths and areas for improvement observed in other groups, continuously refining their own performance to enhance learning outcomes.

Performance assessment and assessment tools

This study adopts a multiple assessment approach to conduct a comprehensive evaluation of students’ learning performance. The assessment is divided into different categories, including in-class AI-related English vocabulary instant Q&A and observational assessment, formative assessment through peer evaluation of AI project reports, and periodic assessments such as midterm and final exams. This approach aims to achieve a well-rounded evaluation of students’ professional learning performance.

Based on the assessment methods, various evaluation tools are designed, including paper-based tests, in-class AI-related English vocabulary instant Q&A, group presentations on key AI topics, and peer evaluation scores. These tools serve as the basis for assessing students’ learning outcomes.

Data and measurements

The subjects of this study include 60 students from the Department of Industrial Design and 54 students from the Department of Industrial Engineering and Management in the Fall semester of 2023 (from September 2023 to January 2024), as well as 53 students from the Department of Business Administration and 48 students from the Department of Insurance and Financial Management in the Spring semester of 2024 (from February 2024 to July 2024). In total, there are 215 participants (N=215).

The questions were designed to elicit the participants' conceptions of and attitudes toward peer assessment. Each question uses a 5-point Likert scale, and there are three antecedents: six questions measure Perceived Outcome Fairness, one question measures Anonymity, and three questions measure Explanation; two further questions measure the outcome, Perceived Fairness. For each antecedent and the outcome, we take the average of the corresponding questions as its value.
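As a minimal illustration of this scoring step, the Python sketch below averages the multi-item responses into one value per construct. The item column names and the survey file are hypothetical placeholders, since the questionnaire itself is not reproduced here.

```python
import pandas as pd

# Hypothetical item names; the actual questionnaire items are not public.
ITEMS = {
    "Outcome":     [f"outcome_q{i}" for i in range(1, 7)],      # six items
    "Anonymity":   ["anonymity_q1"],                            # one item
    "Explanation": [f"explanation_q{i}" for i in range(1, 4)],  # three items
    "Fairness":    ["fairness_q1", "fairness_q2"],              # two outcome items
}

def construct_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average each construct's 5-point Likert items into one score per case."""
    return pd.DataFrame({name: responses[cols].mean(axis=1)
                         for name, cols in ITEMS.items()})

# Usage (assumed file of 215 responses): construct_scores(pd.read_csv("survey.csv"))
```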

fsQCA

There are three major steps in fsQCA (Pappas & Woodside, 2021). First, fsQCA calibrates the data, computing the degree to which a case belongs to a set (Ragin, 2000; Rihoux & Ragin, 2009). In short, it transforms the raw data into fuzzy values between 0.0 and 1.0.

Second, based on the fuzzy values, fsQCA generates multiple solutions for the researchers to evaluate. A commonly used evaluation metric is consistency, which is analogous to correlation (Woodside, 2013). Solutions with consistency above 0.80 are considered useful and can support theory advancement (Woodside, 2017):

  • Consistency(Xi ≤ Yi) = ∑ min(Xi, Yi) / ∑ Xi,

where Xi represents the fuzzy membership value of the combination of antecedents (the condition), and Yi represents that of the outcome of interest; a worked sketch of this computation appears after the final step below.

Finally, the analysis results are interpreted.
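To make the consistency computation concrete, the Python sketch below implements the formula above on toy membership values; the numbers are illustrative only.

```python
import numpy as np

def consistency(x: np.ndarray, y: np.ndarray) -> float:
    """Sufficiency consistency: Consistency(Xi <= Yi) = sum(min(Xi, Yi)) / sum(Xi)."""
    return float(np.minimum(x, y).sum() / x.sum())

# Toy fuzzy memberships for five cases (condition X, outcome Y).
X = np.array([0.9, 0.7, 0.4, 0.8, 0.2])
Y = np.array([1.0, 0.6, 0.5, 0.9, 0.3])
print(round(consistency(X, Y), 3))  # 0.967, above the 0.80 benchmark
```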

Empirical analysis

First, this study calculates the average for each antecedent and the outcome. Next, it runs a correlation analysis among the variables; the correlation coefficient is a statistical measure that quantifies the relationship between two of them. Table 1 lists the results: the correlation coefficients between any two of the antecedents are positive.

Table 1.

Correlation coefficients.

             Score     Outcome   Anonymity  Explanation
Score
Outcome      0.259669
Anonymity    0.079148  0.354151
Explanation  0.122838  0.394215  0.290208
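For readers who want to reproduce this step, the sketch below computes the pairwise Pearson correlations behind Table 1 in Python. The data frame is a synthetic stand-in, since the raw survey responses are not reproduced here.

```python
import numpy as np
import pandas as pd

# Synthetic placeholder for the 215 cases on a 5-point scale.
rng = np.random.default_rng(42)
scores = pd.DataFrame(rng.uniform(1, 5, size=(215, 4)),
                      columns=["Score", "Outcome", "Anonymity", "Explanation"])

print(scores.corr().round(6))  # pairwise Pearson correlation matrix
```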

The first step in fsQCA is calibration, and there are various methods to conduct it; the absolute and relative methods are two common ones. The absolute method determines thresholds from fixed ranges, such as survey scales. Here, the thresholds are set without considering the data distribution, because the upper and lower bounds of the survey data are known and fixed. For example, Ordanini et al. (2014) proposed calibrating 7-point Likert scale data with thresholds of 6 (full membership), 4 (intermediate membership), and 2 (full non-membership); Pappas et al. (2016, 2020) adopted this method in their studies.

The relative method determines thresholds from data percentiles, making it adaptable to varying data ranges. Fiss (2011), Ragin (2009), and Rihoux and Ragin (2009) suggested using the 95th percentile for full membership (1.0), the 5th percentile for full non-membership (0.0), and the 50th percentile for intermediate membership (0.5). Under this method, the thresholds change as the data range changes, which accounts for diverse data scopes and distributions and explains why it is widely adopted in fsQCA studies (Huarng & Yu, 2024; Yu & Huarng, 2023, 2024). Because this research collects data through a survey with known, fixed bounds, it adopts the absolute method.
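One common implementation of absolute calibration is Ragin's (2009) direct method, which maps three anchors onto a logistic curve. The sketch below applies it to a 5-point scale; the 4/3/2 anchors are an assumption chosen by analogy with Ordanini et al.'s (2014) 6/4/2 anchors for a 7-point scale, not thresholds reported by this study.

```python
import numpy as np

def calibrate(v, non, cross, full):
    """Direct calibration: logistic transform with log-odds of -3, 0, +3
    at the non-membership, crossover, and full-membership anchors."""
    v = np.asarray(v, dtype=float)
    log_odds = np.where(v >= cross,
                        3.0 * (v - cross) / (full - cross),
                        3.0 * (v - cross) / (cross - non))
    return 1.0 / (1.0 + np.exp(-log_odds))

# Assumed anchors for a 5-point Likert scale: 4 = full, 3 = crossover, 2 = non.
print(calibrate([1, 2, 3, 4, 5], non=2, cross=3, full=4).round(3))
# -> [0.002 0.047 0.5   0.953 0.998]
```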

After the calibration, this study conducts an analysis of necessary conditions. The consistency of every antecedent is greater than 0.9, showing that all three are necessary conditions for the outcome. Table 2 lists the details of the analysis.

Table 2.

Analysis of necessary conditions.

Sets of conditions  Consistency  Coverage
c_Outcome           0.986216     0.508751
∼c_Outcome          0.468426     0.837758
c_Anonymity         0.980443     0.470116
∼c_Anonymity        0.362394     0.879360
c_Explanation       0.967837     0.478479
∼c_Explanation      0.400330     0.842968
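The statistics in Table 2 follow the standard fsQCA definitions for necessity: a condition's consistency is ∑min(X, Y)/∑Y and its coverage is ∑min(X, Y)/∑X. The sketch below computes both for a condition and its negation (∼, i.e., 1 minus the membership), using toy memberships rather than the study's data.

```python
import numpy as np

def necessity(x: np.ndarray, y: np.ndarray) -> tuple:
    """Necessity of X for Y: (consistency, coverage) =
    (sum(min(X, Y)) / sum(Y), sum(min(X, Y)) / sum(X))."""
    overlap = np.minimum(x, y).sum()
    return float(overlap / y.sum()), float(overlap / x.sum())

# Toy calibrated memberships for five cases.
anonymity = np.array([0.9, 0.8, 0.6, 0.95, 0.7])
fairness  = np.array([0.8, 0.7, 0.5, 0.90, 0.6])
print(necessity(anonymity, fairness))      # c_Anonymity
print(necessity(1 - anonymity, fairness))  # ∼c_Anonymity
```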

Table 3 lists the configurational results that fsQCA generates. There are three combinations of antecedents. Each combination has consistency over 0.8, implying consistent results. Solution consistency is also over 0.8, demonstrating that the overall analysis is consistent.

Table 3.

Configurational results.

                                       Raw coverage  Unique coverage  Consistency
c_Outcome*c_Anonymity*∼c_Explanation   0.398916      0.0546653        0.864658
c_Outcome*∼c_Anonymity*c_Explanation   0.361805      0.0379359        0.890920
∼c_Outcome*c_Anonymity*c_Explanation   0.462418      0.0868287        0.843542
solution coverage: 0.56303
solution consistency: 0.809451

All three antecedents appear across the combinations, and each of the three combinations consists of two High antecedents and one Low antecedent.
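Each row of Table 3 can be computed from the calibrated memberships: a configuration's membership is the fuzzy AND (minimum) across its conditions, with ∼ again taken as 1 minus the membership. The sketch below evaluates the third recipe, ∼c_Outcome*c_Anonymity*c_Explanation, on toy data; the values are illustrative, not the study's.

```python
import numpy as np

def config_metrics(conditions, outcome):
    """Consistency and raw coverage of a configuration:
    m = min over conditions; consistency = sum(min(m, Y)) / sum(m);
    raw coverage = sum(min(m, Y)) / sum(Y)."""
    m = np.minimum.reduce(conditions)
    overlap = np.minimum(m, outcome).sum()
    return float(overlap / m.sum()), float(overlap / outcome.sum())

# Toy memberships for five cases.
outcome_f   = np.array([0.3, 0.4, 0.2, 0.5, 0.3])  # c_Outcome (Low here)
anonymity   = np.array([0.9, 0.8, 0.7, 0.9, 0.8])
explanation = np.array([0.8, 0.9, 0.6, 0.8, 0.7])
fairness    = np.array([0.8, 0.7, 0.6, 0.9, 0.7])

cons, raw_cov = config_metrics([1 - outcome_f, anonymity, explanation], fairness)
print(round(cons, 3), round(raw_cov, 3))  # 1.0 0.838 for these toy values
```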

Discussion and implications

Discussion

In contrast to the proposition that all three antecedents must be High to achieve a High level of Perceived Fairness, the results reveal a more nuanced pattern. Interestingly, each effective configuration contains one antecedent at a Low level, challenging the assumption that all conditions must be High simultaneously. This suggests that different combinations of conditions can compensate for the absence of a single factor, reflecting the principle of equifinality in configurational research.

The first configuration, c_Outcome * c_Anonymity * ∼c_Explanation, indicates that High Outcome Fairness, combined with High Anonymity and Low Explanation, leads to High Perceived Fairness. From the students' perspective, this implies that a strong sense of anonymity in the peer assessment process compensates for the lack of explanations. In other words, when students feel confident that their identities are protected, they are more willing to grade peers objectively, even if the assessment process itself is not entirely open or clearly explained.

The second configuration, c_Outcome * ∼c_Anonymity * c_Explanation, highlights a different pathway to the same outcome. Here, High Outcome Fairness and High Explanation, despite Low Anonymity, still result in High Perceived Fairness. This suggests that when the grading process is explained well and perceived as fair, anonymity becomes less critical. The explanation may instill trust and accountability, reducing the perceived need for concealment of identity. From the students' standpoint, knowing how and why grades are assigned can encourage fairness, even when peer evaluators are identifiable.

The third configuration, ∼c_Outcome * c_Anonymity * c_Explanation, presents a particularly intriguing case. In situations where Outcome Fairness is perceived to be Low, both Anonymity and Explanation must be High to maintain High Perceived Fairness. This finding suggests that when students are dissatisfied with the fairness of the assessment results, the system must compensate by ensuring both clarity in procedure and protection of evaluator identity. It indicates that a higher threshold of structural support is necessary to encourage objective behavior in the face of perceived unfairness.

These findings together demonstrate that no single antecedent is sufficient on its own. Students' willingness to grade objectively instead depends on specific combinations of factors, each addressing different psychological needs: fairness, safety, and clarity.

Implications

The results of this study provide several important contributions to the peer assessment literature, particularly in educational settings that aim to foster objective student evaluation. They reveal that High Perceived Fairness can emerge through multiple, distinct pathways, each shaped by the interplay among perceptions of fairness, anonymity, and explanation.

First and foremost, educators at all levels, ranging from primary to tertiary education, can benefit from these insights. Careful attention should be given to how assessment procedures are designed and communicated. Educators should recognize that students do not require all ideal conditions to achieve perceived fairness; instead, well-balanced design elements can compensate for the absence of others. For instance, in contexts where explanation is difficult to achieve (such as blind reviews), enhancing anonymity and clearly demonstrating fair grading outcomes can still maintain High Perceived Fairness.

Second, the findings have practical value for educational administrators. Administrators tasked with implementing peer assessment systems should note the importance of student perceptions. Ensuring structural fairness is important, but so is fostering perceived fairness. Systems that incorporate mechanisms for anonymous feedback, well-explained rubrics, and post-assessment reviews are likely to gain higher acceptance among students and thus lower bias in peer evaluations.

Third, beyond educational settings, the findings herein can shape practices in organizational and corporate training environments. Human resource managers or training supervisors who employ peer evaluations, such as 360-degree feedback or collaborative project reviews, can apply the same principles. Recognizing the value of anonymity and explanation improves the credibility and acceptance of peer review systems within teams and departments.

This study challenges simplistic assumptions about what drives Perceived Fairness in peer assessments. It also highlights the importance of designing flexible systems that adapt to varying student expectations and psychological dynamics.

Conclusion

This research investigates one of the most critical challenges in peer assessment: students' perceived fairness. By adopting a configurational approach, we explore how specific combinations of fairness-related factors influence perceived fairness. On the basis of the perceived fairness literature, we establish a research framework consisting of distributive, procedural, and interactional justice. In particular, we measure the three core dimensions of perceived fairness through outcome fairness, anonymity, and explanation, respectively.

Our findings reveal three distinct and meaningful configurations that lead to High Perceived Fairness as follows.

High Outcome Fairness, High Anonymity, and Low Explanation

High Outcome Fairness, Low Anonymity, and High Explanation

Low Outcome Fairness, High Anonymity, and High Explanation

These results offer several important insights. First, they demonstrate that High Perceived Fairness is achieved through multiple pathways, not just one ideal scenario. While we initially hypothesize that all three fairness antecedents need to be at a High level to produce High Perceived Fairness, the data show otherwise. In practice, only two of the three antecedents need to be High for students to perceive high fairness. This flexibility suggests a compensatory effect: when one fairness factor is lacking, the presence of the other two helps maintain a sense of balance and trust in the evaluation process.

Second, the study provides practical guidance for educators and administrators who aim to incorporate peer assessment into their teaching and evaluation practices. Rather than striving to maximize all fairness dimensions at once, which may not always be feasible due to resource or contextual constraints, educators can focus on designing assessment environments that fulfill at least two of the three key fairness conditions. Doing so likely yields comparable outcomes in terms of student objectivity and reliability in peer evaluations.

In summary, this research highlights that perceived fairness is not a rigid, all-or-nothing framework. Instead, it is a dynamic system where different elements interact to support students’ ethical and responsible grading behavior. By understanding these interactions, educational stakeholders can better design peer assessment processes that not only improve academic integrity, but also foster a stronger sense of fairness and engagement among students. Future studies may build on this foundation by examining cultural, disciplinary, or institutional differences in fairness perception and peer evaluation practices. In addition, research could focus on how individual differences further shape these preferences, potentially leading to even more tailored and effective assessment practices.

FsQCA is suitable for exploring problems with complex causal relationships and generates multiple solutions (in other words, equifinality). However, it faces some limitations. Because of equifinality, determining which solution is more valid often depends on researcher judgment or theory, and the multiple solutions demand further effort to interpret properly.

This study conducts a configurational analysis of perceived fairness in peer assessment. Future studies can measure how perceived fairness may affect grade objectivity. In addition, this research takes college students as the survey subjects; though the findings herein can apply to various contexts, their adaptation may be constrained in different environments.

CRediT authorship contribution statement

Duen-Huang Huang: Writing – review & editing, Writing – original draft, Validation, Supervision, Project administration, Funding acquisition, Formal analysis, Data curation, Conceptualization.

Acknowledgment

The author would like to thank the Ministry of Education, Taiwan for its partial financial support to this study, under Project Number PGE1121015.

References
[Adams, 1965]
J.S. Adams.
Inequity in social exchange.
Advances in experimental social psychology, pp. 267-299
[Alqassab et al., 2023]
M. Alqassab, J.W. Strijbos, E. Panadero, J.F. Ruiz, M. Warrens, J. To.
A systematic review of peer assessment design elements.
Educational Psychology Review, 35 (2023), pp. 18
[Ashenafi, 2017]
M.M. Ashenafi.
Peer-assessment in higher education – Twenty-first century practices, challenges and the way forward.
Assessment & Evaluation in Higher Education, 42 (2017), pp. 226-251
[Bies and Moag, 1986]
R.J. Bies, J.S. Moag.
Interactional justice: Communication criteria of fairness.
Research on Negotiation in Organizations, 1 (1986), pp. 43-55
[Bies and Shapiro, 1987]
R.J. Bies, D.L. Shapiro.
Interactional fairness judgments: The influence of causal accounts.
Social Justice Research, 1 (1987), pp. 199-218
[Boswell and Brown, 1999]
T. Boswell, C. Brown.
The scope of general theory: Methods for linking inductive and deductive comparative history.
Sociological Methods and Research, 28 (1999), pp. 154-185
[Cabrilo et al., 2024]
S. Cabrilo, S. Dahms, F.S. Tsai.
Synergy between multidimensional intellectual capital and digital knowledge management: Uncovering innovation performance complexities.
Journal of Innovation & Knowledge, 9 (2024),
[Chen and Chen, 2024]
Y. Chen, Z. Chen.
Can e-government online services offer enhanced governance support? A national-level analysis based on fsQCA and NCA.
Journal of Innovation & Knowledge, 9 (2024),
[Clemmer, 1993]
E.C. Clemmer.
An investigation into the relationships of justice and customer satisfaction with services.
Justice in the workplace: Approaching fairness in human resources management,
[Dipboye and De Pontbriand, 1981]
R.L. Dipboye, R. De Pontbriand.
Correlates of employee reactions to performance appraisals and appraisal systems.
Journal of Applied Psychology, 66 (1981), pp. 248
[Double et al., 2020]
K.S. Double, J.A. McGrane, T.N. Hopfenbeck.
The impact of peer assessment on academic performance: A meta-analysis of control group studies.
Educational Psychology Review, 32 (2020), pp. 481-509
[Falchikov and Goldfinch, 2000]
N. Falchikov, J. Goldfinch.
Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks.
Review of Educational Research, 70 (2000), pp. 287-322
[Fiss, 2007]
P.C. Fiss.
A set-theoretic approach to organizational configurations.
Academy of Management Review, 32 (2007), pp. 1180-1198
[Fiss, 2011]
P.C. Fiss.
Building better causal theories: A fuzzy set approach to typologies in organization research.
Academy of Management Journal, 54 (2011), pp. 393-420
[Folkes, 1984]
V.S. Folkes.
Consumer reactions to product failure: An attributional approach.
Journal of Consumer Research, 10 (1984), pp. 398-409
[Goodwin and Ross, 1989]
C. Goodwin, I. Ross.
Salient dimensions of perceived fairness in resolution of service complaints.
Journal of Consumer Satisfaction/Dissatisfaction and Complaining Behavior, 2 (1989), pp. 87-92
[Goodwin and Ross, 1992]
C. Goodwin, I. Ross.
Consumer responses to service failures: Influence of procedural and interactional fairness perceptions.
Journal of Business Research, 25 (1992), pp. 149-163
[Greenberg, 1986]
J. Greenberg.
Determinants of perceived fairness of performance evaluations.
Journal of Applied Psychology, 71 (1986), pp. 340
[Gurbanov, 2016]
E. Gurbanov.
The challenge of grading in self and peer-assessment (undergraduate students’ and university teachers’ perspectives).
Journal of Education in Black Sea Region, 1 (2016),
[Harris et al., 2013]
K.L. Harris, L. Thomas, J.A. Williams.
Justice for consumers complaining online or offline: Exploring procedural, distributive, and interactional justice, and the issue of anonymity.
Journal of Consumer Satisfaction, Dissatisfaction and Complaining Behavior, 26 (2013), pp. 19-39
[Heidari and Saghafi, 2025]
E. Heidari, M.R. Saghafi.
Perceived fairness in the peer assessment process: a focus on Iranian architecture students in design studio education.
Journal of Applied Research in Higher Education, 17 (2025), pp. 454-468
[Hoogeveen and Van Gelderen, 2013]
M. Hoogeveen, A. Van Gelderen.
What works in writing with peer response? A review of intervention studies with children and adolescents.
Educational Psychology Review, 25 (2013), pp. 473-502
[Hough et al., 2016]
M. Hough, B. Bradford, J. Jackson, P. Quinton.
Does legitimacy necessarily tame power? Some ethical issues in translating procedural justice principles into justice policy (May 24).
LSE Legal Studies Working Paper No. 13/2016, (2016),
[Huang et al., 2023]
Y. Huang, Y. Bu, Z. Long.
Institutional environment and college students’ entrepreneurial willingness: A comparative study of Chinese provinces based on fsQCA.
Journal of Innovation & Knowledge, 8 (2023),
[Huarng and Yu, 2024]
K.-H. Huarng, T.H.-K. Yu.
Causal complexity analysis of ESG performance.
Journal of Business Research, 170 (2024),
[Huertas-Abril et al., 2021]
C.A. Huertas-Abril, F.J. Palacios-Hidalgo, M.E. Gómez-Parra.
Peer assessment as a tool to enhance pre-service primary bilingual teachers’ training.
RIED-Revista Iberoamericana de Educación a Distancia, 24 (2021), pp. 149-168
[Huisman et al., 2019]
B. Huisman, N. Saab, P. Van den Broek, J. Van Driel.
The impact of formative peer feedback on higher education students’ academic writing: A meta-analysis.
Assessment & Evaluation in Higher Education, 44 (2019), pp. 863-880
[Kaufman and Schunn, 2011]
J.H. Kaufman, C.D. Schunn.
Students’ perceptions about peer assessment for writing: Their origin and impact on revision work.
Instructional Science, 39 (2011), pp. 387-406
[Kaur and Carreras, 2021]
P. Kaur, A.L. Carreras.
Hearing the participants’ voice: Recognizing the dimensions of procedural and interactional justice by enabling their determinants.
Group Decision and Negotiation, 30 (2021), pp. 743-773
[Knop et al., 2022]
A. Knop, M. Dressler, L. Klement, P. Hadjipieris.
Fostering proving skills in upper-division mathematics classes through peer feedback assignments.
Transformative Dialogues: Teaching and Learning Journal, 14 (2022),
[Kraus et al., 2018]
S. Kraus, D. Ribeiro-Soriano, M. Schüssler.
Fuzzy-set qualitative comparative analysis (fsQCA) in entrepreneurship and innovation research–the rise of a method.
International Entrepreneurship and Management Journal, 14 (2018), pp. 15-33
[Landy et al., 1980]
F.J. Landy, J.L. Barnes-Farrell, J.N. Cleveland.
Perceived fairness and accuracy of performance evaluation: A follow-up.
Journal of Applied Psychology, 65 (1980), pp. 355
[Lawler, 1967]
E.E. Lawler.
The multi-trait multi-rater approach to measuring managerial job performance.
Journal of Applied Psychology, 51 (1967), pp. 369-381
[Li et al., 2020]
H. Li, Y. Xiong, C.V. Hunter, X. Guo, R. Tywoniw.
Does peer assessment promote student learning? A meta-analysis.
Assessment & Evaluation in Higher Education, 45 (2020), pp. 193-211
[Lin, 2018]
G.Y. Lin.
Anonymous versus identified peer assessment via a Facebook-based learning application: Effects on quality of peer feedback, perceived learning, perceived fairness, and attitude toward the system.
Computers & Education, 116 (2018), pp. 81-92
[Mohr, 1991]
L.A. Mohr.
Social episodes and consumer behavior: The role of employee effort in satisfaction with services.
Unpublished doctoral dissertation, Arizona State University, (1991),
[Ordanini et al., 2014]
A. Ordanini, A. Parasuraman, G. Rubera.
When the recipe is more important than the ingredients: A qualitative comparative analysis (QCA) of service innovation configurations.
Journal of Service Research, 17 (2014), pp. 134-149
[Panadero et al., 2018]
E. Panadero, A. Jonsson, M. Alqassab.
Providing formative peer feedback: What do we know?
The Cambridge Handbook of Instructional Feedback, pp. 409-431 http://dx.doi.org/10.1017/9781316832134.020
[Panadero and Alqassab, 2019]
E. Panadero, M. Alqassab.
An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading.
Assessment & Evaluation in Higher Education, 44 (2019), pp. 1253-1278
[Pappas et al., 2016]
I.O. Pappas, P.E. Kourouthanassis, M.N. Giannakos, V. Chrissikopoulos.
Explaining online shopping behavior with fsQCA: The role of cognitive and affective perceptions.
Journal of Business Research, 69 (2016), pp. 794-803
[Pappas et al., 2020]
I.O. Pappas, S. Papavlasopoulou, P. Mikalef, M.N. Giannakos.
Identifying the combinations of motivations and emotions for creating satisfied users in SNSs: An fsQCA approach.
International Journal of Information Management, 53 (2020),
[Pappas and Woodside, 2021]
I.O. Pappas, A.G. Woodside.
Fuzzy-set qualitative comparative analysis (fsQCA): Guidelines for research practice in Information Systems and marketing.
International Journal of Information Management, 58 (2021),
[Parasuraman et al., 1985]
A. Parasuraman, V.A. Zeithaml, L.L. Berry.
A conceptual model of service quality and its implications for future research.
Journal of Marketing, 49 (1985), pp. 41-50
[Peiró et al., 2014]
J.M. Peiró, V. Martínez-Tur, C. Moliner.
Perceived fairness.
Encyclopedia of quality of life and well-being research, Springer, (2014), pp. 4693-4696
[Ragin, 1987]
C.C. Ragin.
The comparative method: Moving beyond qualitative and quantitative strategies.
University of California Press, (1987),
[Ragin, 2000]
C.C. Ragin.
Fuzzy-set social science.
University of Chicago Press, (2000),
[Ragin, 2009]
C.C. Ragin.
Redesigning social inquiry: Fuzzy sets and beyond.
University of Chicago Press, (2009),
[Rasooli et al., 2025]
A. Rasooli, J. Turner, T. Varga-Atkins, E. Pitt, S. Asgari, W. Moindrot.
Students’ perceptions of fairness in groupwork assessment: Validity evidence for peer assessment fairness instrument.
Assessment & Evaluation in Higher Education, 50 (2025), pp. 111-126
[Rihoux et al., 2013]
B. Rihoux, P. Alamos-Concha, D. Bol, A. Marx, I. Rezsöhazy.
From niche to mainstream method? A comprehensive mapping of QCA applications in journal articles from 1984 to 2011.
Political Research Quarterly, 66 (2013), pp. 175-184
[Rihoux and Ragin, 2009]
B. Rihoux, C.C. Ragin.
Configurational comparative methods: Qualitative comparative analysis (QCA) and related techniques.
Sage, (2009),
[Sambell et al., 1997]
K. Sambell, L. McDowell, S. Brown.
"But is it fair?": An exploratory study of student perceptions of the consequential validity of assessment.
Studies in Educational Evaluation, 23 (1997), pp. 349-371
[Sanchez et al., 2017]
C.E. Sanchez, K.M. Atkinson, A.C. Koenka, H. Moshontz, H. Cooper.
Self-grading and peer-grading for formative and summative assessments in 3rd through 12th grade classrooms: A meta-analysis.
Journal of Educational Psychology, 109 (2017), pp. 1049-1066
[Schneider and Wagemann, 2012]
C.Q. Schneider, C. Wagemann.
Set-theoretic methods for the social sciences: A guide to qualitative comparative analysis.
Cambridge University Press, (2012),
[Stonewall et al., 2024]
J.H. Stonewall, M.C. Dorneich, J. Rongerude.
Evaluation of bias in peer assessment in higher education.
International Journal of Engineering Education, 40 (2024), pp. 543-556
[Thibaut and Walker, 1975]
J. Thibaut, L. Walker.
Procedural Justice: A Psychological Analysis.
Lawrence Erlbaum Associates, (1975),
[Topping, 2003]
K. Topping.
Self and peer assessment in school and university: Reliability, validity and utility.
Optimising new modes of assessment: In search of qualities and standards, pp. 55-87
[Topping, 2009]
K. Topping.
Peer assessment.
Theory Into Practice, 48 (2009), pp. 20-27 https://doi.org/10.1080/00405840802577569
[Topping, 2013]
K. Topping.
Peers as a source of formative and summative assessment.
SAGE handbook of research on classroom assessment, pp. 394-412 http://dx.doi.org/10.4135/9781452218649.n22
[Topping, 2021a]
K. Topping.
Peer assessment: Channels of operation.
Education Sciences, 11 (2021), pp. 91
[Topping, 2021b]
K. Topping.
Face-to-face peer assessment in teacher education/training: A review.
The Educational Review, 5 (2021), pp. 117-130
[Ulrich, 1984]
W.L. Ulrich.
HRM and culture: History, ritual and myth.
Human Resource Management, 23 (1984), pp. 117-128
[Urry, 2005]
J. Urry.
The complexity turn.
Theory, Culture & Society, 22 (2005), pp. 1-14
[Van Popta et al., 2017]
E. Van Popta, M. Kral, G. Camp, R.L. Martens, P.R.J. Simons.
Exploring the value of peer feedback in online learning for the provider.
Educational Research Review, 20 (2017), pp. 24-34
[Vander Schee and Birrittella, 2021]
B.A. Vander Schee, T.D. Birrittella.
Hybrid and online peer group grading: Adding assessment efficiency while maintaining perceived fairness.
Marketing Education Review, 31 (2021), pp. 275-283
[Woodside, 2013]
A.G. Woodside.
Moving beyond multiple regression analysis to algorithms: Calling for adoption of a paradigm shift from symmetric to asymmetric thinking in data analysis and crafting theory.
Journal of Business Research, 66 (2013), pp. 463-472
[Woodside, 2017]
A.G. Woodside.
The complexity turn: Cultural, management, and marketing applications.
Springer, (2017),
[Yu and Huarng, 2023]
T.H.K. Yu, K.H. Huarng.
Configural analysis of GII’s internal structure.
Journal of Business Research, 154 (2023),
[Yu and Huarng, 2024]
T.H.-K. Yu, K.-H. Huarng.
Causal analysis of SDG achievements.
Technological Forecasting and Social Change, 198 (2024),
Copyright © 2025. The Author