Generative artificial intelligence expectations and experiences in management education: ChatGPT use and student satisfaction
Petr Suchanek a,* , Maria Kralova b,c

* Corresponding author: petr.suchanek@mendelu.cz; Mendel University in Brno, Faculty of Business and Economics, Zemedelska 1665/1, 613 00 Brno, Czech Republic.

a Department of Business Economics, Mendel University in Brno, Faculty of Business and Economics, Brno, Czech Republic
b Department of Economics and Management, Ambis University, Brno, Czech Republic
c Department of Informatics, Faculty of Business Management, Brno University of Technology, Brno, Czech Republic
Figures (2); Tables (6): Table 1. Demographics of the 231 respondents; Table 2. Latent variables in the measurement model; Table 3. Discriminant validity values of HTMT ratio; Table 4. Main effects in the AI 3S-model; Table 5. Mediation effects in the AI 3S-model.
Abstract

Generative artificial intelligence (AI) has witnessed a major boom in recent years and is increasingly penetrating the higher education sector. This study focused on the use of ChatGPT by undergraduate management students. We developed a model called the “AI Student Satisfaction with Studies model” (AI 3S-model) to investigate how generative AI, specifically ChatGPT, affects student satisfaction with their management studies. Factors used in the model included AI-related student expectations, AI-related student job expectations, perceived quality of AI among students, and AI-related overall student satisfaction. An online questionnaire was administered to students from economics faculties at various universities in the Czech Republic. We deliberately focused on one specialized economics college and several large economics faculties. The sample comprised 231 respondents. To analyze the data, we applied covariance-based structural equation modelling using maximum likelihood estimation. Our findings indicate that two factors directly and positively affect overall student satisfaction with AI use: their perceived quality of studies and their expectations. Additionally, perceived quality acts as a significant mediator between student expectations and overall satisfaction, as well as between job expectations and overall satisfaction. Students believe that ChatGPT enhances their quality of education, which boosts their overall satisfaction. For management education programs, this means that finding ways to effectively integrate generative AI into students’ learning and establishing reasonable limits is highly beneficial, whereas prohibiting the use of generative AI tools would likely decrease student satisfaction and diminish the perceived quality of their studies.

Keywords:
Generative artificial intelligence (AI)
ChatGPT
Management studies
Student satisfaction
AI student satisfaction with studies model
JEL classification codes:
A22
M31
Introduction

With the advent of the digital revolution, technologies such as artificial intelligence (AI) have become the new norm (Lim et al., 2024). In the education sector, AI offers innovative solutions that enhance both teaching and learning processes (Sahu & Sahu, 2024). Owing to this technological tidal wave, innovations in education (including management-related education) are needed to enable economics faculties to shape students into flexible, relevant and socially responsible leaders of tomorrow (Lim et al., 2024). Text generators represent one major innovation of AI in higher education, as they not only generate text in response to questions but also generate articles or summaries (all in real time) (Sahu & Sahu, 2024).

ChatGPT has helped popularize the powerful technology of generative AI, which has now become a distinct class of AI in its own right (Lim et al., 2023). ChatGPT gained one million users within five days and reached 100 million users two months after its release in November 2022, making it the fastest-growing consumer app (Hu, 2023). AI-based text generators are experiencing a proverbial boom (Han et al., 2023), which has translated into a large body of related research (Kaliyadan & Seetharam, 2023). Existing research highlights the ease with which generative AI enables access to information, the interconnection of participants, and participation in digital communities (Koyuturk et al., 2023).

In this article, we focus on a separate class of AI, namely generative AI, as an exceptionally powerful technological tool; we specifically consider ChatGPT, the product that has popularized this class of AI (Ratten & Jones, 2023). ChatGPT learns automatically, unlike previous, less sophisticated generative AIs (Damar, 2022). As a result, ChatGPT is more advanced than other forms of generative AI because it can perform multiple tasks at once (Hammer, 2023). It will have a significant impact on management education programs because it is constantly updated with new information (Chatterjee & Dethlefs, 2023). ChatGPT is a disruptive technology that has revolutionized the way management educators assess students and has led to a stampede (Dibble, 2023).

The incorporation of generative AI (ChatGPT) into university education can produce better learning outcomes (Tlili et al., 2023). AI is changing the way management students view the quality of their studies (Igbokwe, 2023), their expectations from their studies and future employment (Almaraz-López et al., 2023), and their overall satisfaction with their studies (Liu et al., 2020). Students’ expectations from the use of AI are high (Wang, 2022), as are the expectations associated with the use of ChatGPT (Roganović, 2024). The importance of AI (including ChatGPT) in higher education has increased greatly in recent years (Igbokwe, 2023); currently, neither AI nor ChatGPT can be excluded from the educational process, although the use of AI faces ethical issues, in particular with regard to plagiarism (Chatterjee & Dethlefs, 2023; Stokel-Walker, 2022), including in management education. AI and ChatGPT can both be implemented by lecturers in the process of educating students—for example, in personalized learning (Igbokwe, 2023), intelligent tutoring (VanLehn, 2011), streamlining administrative tasks (Igbokwe, 2023) or efforts to enhance learning outcomes (Gupta, 2020). Further, they can be used directly by students in their studies—for example, to customize their educational pathway (Kuleto et al., 2021); in spaced repetition and knowledge revision (Salas-Pilco & Yang, 2022); or for learning basic content in a responsive, interactive, and confidential way (Chen et al., 2023).

There has been a focus on the advantages that AI-based text generators bring to students and their learning (Du Boulay, 2016), including the positive effect of AI tools on student performance (Kumar, 2021; Pereira et al., 2019; Shahzad et al., 2024; Wu et al., 2020). This is due to the many study- and work-related expectations that students associate with AI-based text generators (Doğaner, 2021), including increased satisfaction with education (Kim et al., 2019). Owing to their ability to manage a significant amount of a student’s workload, AI tools introduce changes to the entire teaching process, which are not only restricted to business and management colleges (Baker et al., 2019; Rudolph et al., 2023).

Recently, the use of AI and ChatGPT by students at business colleges has also been the subject of numerous studies (Raman et al., 2023). However, research that focuses on the effects of using AI-based text generators is currently lacking. It is unclear how management students perceive the effect of using these tools on the quality of their studies, what expectations students have regarding the use of these tools, or how these tools affect student satisfaction with their management studies.

Management students will use AI-based tools and thereby gain advantages in their studies or future employment if these tools affect student satisfaction positively (or if they expect to gain such advantages). Teachers will need to respond to this phenomenon and change their methods of teaching, examining, and monitoring (especially for semester assignments and final theses) (OpenAI, 2022, 2023). Therefore, management teachers need to change their assessment methods to include originality and creativity in the responses in order to surpass ChatGPT’s abilities (Ratten & Jones, 2023). As ever more advanced chatbots are being developed and used with increasing frequency, and as student satisfaction increases with their use, this issue is only set to become more serious.

Generative AI (including ChatGPT)—with its wide availability, extensions, and capabilities—is increasingly being used by college students, including those studying economics. For students in the social sciences (which includes management studies), generative AI is an especially valuable tool, as academic work in this field involves extensive text-based tasks—much more so than in the disciplines of engineering or natural sciences. These students experience a certain level of satisfaction with the use or potential use of generative AI, and this satisfaction is reflected in their overall satisfaction with their studies for which such tools are utilized. Therefore, to properly assess student satisfaction, it is essential to also examine their satisfaction with generative AI, or at least consider the use of generative AI in the context of student satisfaction with learning.

Generative AI is frequently studied independently in the context of higher education (Cui et al., 2023; Liu, 2023). Similarly, user satisfaction with generative AI—and the factors influencing it—is often examined as a separate area of research (Abumalloh et al., 2024; Anwar et al., 2025). Our research brings these two perspectives together by examining student satisfaction with generative AI (specifically ChatGPT) within the context of their educational experience.

Student satisfaction with their studies—including management studies—has been the subject of research for many years. However, the use of generative AI has only become widespread in recent years, meaning that there is a lack of studies examining student satisfaction in the context of AI use (Almulla, 2024; Ngo et al., 2024), particularly in relation to causal variables linked to college students’ use of tools such as ChatGPT (Yu et al., 2024). The studies that have emerged in this field (especially after 2023) have often focused on different aspects, such as the relationship between learning outcomes, quality of education, knowledge acquisition, motivation, and student satisfaction (cf. e.g., Almulla, 2024; Ngo et al., 2024). This gap highlights a clear research opportunity—one that our study pursues.

This study focuses on the use of generative AI (specifically ChatGPT) by management students in their studies. The objective is to develop a model of student satisfaction with their studies (in the context of ChatGPT use) and the factors that influence it. The study investigates the impact of ChatGPT use on students’ satisfaction with their studies, focusing on the role of expectations—whether related to their academic experience or future employment—as well as the perceived quality of their use of ChatGPT.

This study highlights the significance of understanding the impacts of ChatGPT within the context of economics education at the university level, offering insights into students’ perceptions and their levels of satisfaction with its use. Through the study, we aim to provide valuable findings for educators, university policymakers, and AI developers. Gaining a deeper understanding of the implications of ChatGPT use in economics education is essential for assessing its effectiveness and optimizing its integration into academic settings, including through the development of future educational strategies.

The remainder of the paper is structured as follows: first, a theoretical background is presented regarding generative AI and ChatGPT and their associations with AI-related student satisfaction with management studies and the other AI-related factors that affect it. This is followed by descriptions of the study hypotheses and the conceptual model of generative AI-related (ChatGPT) student satisfaction with management studies (AI 3S-model). Next, a description of the methods used in the research and the research sample is provided, followed by sections on the results and a subsequent discussion. The paper concludes with a summary of the key findings and research limitations.

Literature review

Expectation disconfirmation theory

Within the theoretical framework of this study, expectation disconfirmation theory (EDT) plays a central role. Originally formulated by Oliver (1980), EDT is based on the discrepancy between expectations and perceived reality, from which satisfaction is derived. The theory posits that consumer satisfaction with a product is determined by their prior expectations of specific product attributes. These attributes—and how the consumer perceives them—define the product’s quality, or perceived quality. Accordingly, consumers form expectations about a product, project these expectations onto the product experience, assess its quality, and subsequently derive their level of satisfaction with it. These relationships form the foundation of our research. EDT has previously been applied in studies involving ChatGPT (cf. Chen, 2024), student satisfaction (cf. Chandra & Fitriyanto, 2024), and more recently in research exploring the link between generative AI and student satisfaction (cf. Nguyen et al., 2025).

Generative AI

“Generative AI can be defined as a technology that (i) leverages deep learning models to (ii) generate human-like content (e.g., images, words) in response to (iii) complex and varied prompts (e.g., languages, instructions, questions)” (Lim et al., 2023, p. 2). In addition to providing answers, generative AI can also create the content of those answers, making it unique and traversing the limits of traditional conversational AI (Lim et al., 2022).

Generative AI (including ChatGPT) brings changes to standard university teaching and can be considered an innovation (Lim et al., 2024) that improves not only access to learning (including in management education) (Sousa et al., 2021) and student satisfaction (Sengupta & Chakraborty, 2020), but also the work of university staff (educators), including economics teachers (Firat, 2023). Unlike traditional teaching, generative AI is more conducive to the development of collaborative learning, enabling better personalization of learning (tailored to specific needs), which includes easier individualization for people with disabilities and collaboration through intelligent tools (Schiff, 2021). One of the most revolutionary aspects of computer-supported collaborative learning is that students need not be physically present in the same place (Sousa et al., 2021); in such settings, an AI system can monitor asynchronous discussion groups, provide teachers with information about these discussions, and support the management of their engagement, including by monitoring the results (Nye, 2015). For teachers, (generative) AI can be an effective time-saving tool when evaluating various tests, homework, and essays (Sousa et al., 2021).

Existing research has primarily focused on the use of AI in teaching students within the pedagogical process (Sousa et al., 2021), with a strong emphasis on the educators’ perspective (Firat, 2023; Sousa et al., 2021). Research on students’ use of AI has generally concentrated on their satisfaction with learning from a technological standpoint (Almulla, 2024), their satisfaction with the AI tool itself vis-à-vis their expectations for its use in teaching (Ngo et al., 2024), or their satisfaction with the AI tool concerning its potential for future use (Yu et al., 2024). The current research on student satisfaction with the use of specific AI tools (e.g., ChatGPT), focusing on the perceived quality of the tool and the expectations students associate with it—both for their studies and future careers—thus connects to these previous studies.

ChatGPT, the shortened and more commonly used term for Chat Generative Pre-Trained Transformer, can be defined as “an AI-powered chatbot capable of processing and generating natural language text, providing human-like responses to a wide range of questions and prompts” (Doshi et al., 2023, p. 6). Another definition adds that, as an AI-powered natural language model, ChatGPT was trained on very broad data (Ge & Lai, 2023; Hassani & Silva, 2023). ChatGPT is thus an AI-based chatbot, and chatbots are widely used technologies that support education and learning (Okonkwo & Ade-Ibijola, 2020). These intelligent agents communicate with users by providing appropriate responses (Bilquise et al., 2022). Chatbots increase educational satisfaction (Kim et al., 2019) as well as students’ learning performance (Kumar, 2021; Pereira et al., 2019; Wu et al., 2020). It can therefore be assumed that chatbots also have a positive effect on student satisfaction (Sengupta & Chakraborty, 2020).

Student satisfaction

Student satisfaction with their studies can be defined as “a short-term attitude resulting from an evaluation of students’ educational experience, services and facilities” (Weerasinghe & Fernando, 2017, p. 534). Wiers-Jenssen et al. (2002) define this satisfaction as an overall reaction to the student’s experience with their studies, including their evaluation of the services offered by their university. Another definition states that student satisfaction with their studies is “the favorability of a student’s subjective assessment of the various outcomes and experiences associated with education” (Elliot & Shin, 2002, p. 198).

Indeed, there are several definitions of student satisfaction with their studies, and they largely agree on two points. First, student satisfaction with their studies is a subjective reaction to, or an evaluation of, the student’s experience with their studies and the facilities offered by universities; such subjective reactions may be assessed using a questionnaire. Second, satisfaction is a short-term attitude, which means that student satisfaction with their studies changes over time (Kane et al., 2008) and that long-term, repeated measurement is appropriate.

Therefore, AI-related overall student satisfaction is defined here as a student’s short-term attitude and overall subjective response resulting from an evaluation of the educational experience of using generative AI, including an evaluation of the AI services offered by the university. Furthermore, the current understanding of student satisfaction with their studies is relatively broad, as it includes or may include several variables and factors that affect it. AI-related overall student satisfaction has been measured at a specific point in time, reflecting a particular level of student satisfaction. The measurements within the construct focus on the experience of using ChatGPT and how this experience meets students’ expectations associated with its use (cf. Almulla, 2024; Gray & DiLoreto, 2016). Additionally, prior research has assessed the perceived benefits and costs associated with using ChatGPT, evaluated subjectively by students based on their experience (cf. Almulla, 2024), as well as the likelihood of recommending ChatGPT to others (cf. Gray & DiLoreto, 2016). This recommendation is primarily influenced by students’ experiences with the tool during their studies and the extent to which their expectations were fulfilled.

In relation to the theoretical background (EDT), our research focused on perceived product quality and students’ expectations. The factors used in the student satisfaction in higher education model (Alves & Raposo, 2007), which focuses on the elements influencing students’ satisfaction with their learning, align with this approach and emphasize the expectations that drive students to use ChatGPT (Raman et al., 2023).

Perceived quality of studies

The perceived quality of studies can be defined as the difference between customer expectations and perceived performance (Parasuraman et al., 1985), or as a customer’s evaluation of service excellence (Zammuto et al., 1996). Considering our research focus, perceived quality may therefore be defined as the difference between what the student expects to receive and what they actually receive (O’Neill & Palmer, 2004). Extrapolating these definitions to generative AI, AI-related perceived quality can be characterized as the difference between what a student expects to gain from AI use and how they perceive actually using the tool.

Product (in this study, a service—education) quality is understood as a factor that affects customer (in this case, student) satisfaction (Carrillat et al., 2007; Wilson et al., 2016; Yavas et al., 2004). The perceived quality of studies affects student satisfaction (Gruber et al., 2010). Considering that ChatGPT positively affects education quality (Firat, 2023), we posit that it will also positively affect student satisfaction through education quality.

The measurement of AI-related student perceived quality focuses on several aspects of student learning, including improving learning quality, enhancing student engagement (cf. Mbwesa, 2014; Raman et al., 2023), improving study ratings (cf. Raman et al., 2023), facilitating easier studying (cf. Almulla, 2024; Mbwesa, 2014; Raman et al., 2023), and providing more time for non-study activities (cf. Raman et al., 2023), all through the use of ChatGPT.

Student expectations

Perceived quality is closely associated with student expectations, which are in turn related to student satisfaction (Alves & Raposo, 2007). Student expectations are based on the standard definition of expectations as customers’ predictions of what is likely to occur during the next transaction (Parasuraman et al., 1988). This definition can be expanded further: expectations are “anticipation of future consequences based on prior experience, current circumstances, or other sources of information” (Tyron, 1994, cited in Kamaruddin et al., 2012, p. 432). Researchers in this field are divided on whether student expectations directly (Del Río-Rama et al., 2021), indirectly (Najimdeen et al., 2021), or both directly and indirectly (Tukiran et al., 2021) influence satisfaction. The nature of expectation depends on the context; in the case of students, expectations can be connected to the procedures (tools) and results (abilities and knowledge) of learning (Bordia et al., 2006). In this sense, student expectations may be connected with generative AI as a tool that should yield results during their studies and eventually in future jobs (as the student acquires the knowledge and abilities necessary to use ChatGPT). AI-related student expectations are thus defined here as anticipating or predicting the future consequences of AI use based on past experiences, current circumstances, or other sources of information regarding AI, especially with regard to their studies.

Focusing research on students’ expectations for the period after they complete their studies (job expectations) is appropriate, given that students often seek out and use programs and tools that align with their employment expectations even while still studying (Bhadra & Rahman, 2021). Job expectations can be defined as the “values that individuals place on various potential job rewards, including both intrinsic and extrinsic types of remunerations” (Bartol, 1976, p. 368). Job expectations usually refer to a person’s belief that they will obtain an outcome (or a specific level of outcome) from a particular job (Greenhaus et al., 1983). Therefore, in the case of student job expectations, they relate to the student’s belief that they will achieve a certain result in a certain (future) job, including (future) job rewards in a broader sense. Interacting with ChatGPT can expose students to different career paths and raise students’ awareness of the diverse options available to them (Delello et al., 2023), thereby influencing their expectations concerning future job opportunities. Students agree that AI has a major impact on their future careers (Bisdas et al., 2021; Gong et al., 2019; Sit et al., 2020). A student’s AI-related job expectations can therefore be defined as the student’s belief that by using generative AI, they will find a certain future job with a certain salary.

The measurement of students’ AI-related job expectations focuses on several aspects: expectations related to career development (cf. Alves & Raposo, 2007), facilitating the process of securing a future job, and enhancing a student’s readiness for work-related tasks (cf. Fulk et al., 2022), all through the use of ChatGPT.

Hypothesis development and conceptual model

In the EDT framework formulated by Oliver (1980), customer satisfaction with a product is assessed based on their initial expectations. The application of EDT in higher education has demonstrated that service quality meeting student expectations leads to increased student satisfaction (Chandra & Fitriyanto, 2024). This is confirmed by previous research by Alves and Raposo (2007), as well as Polas et al. (2020), who found that student expectations regarding their studies had a positive effect on their satisfaction. Scholars disagree about the nature of the relationship between student expectation and satisfaction—whether it is direct, indirect, or both direct and indirect (Del Río-Rama et al., 2021; Najimdeen et al., 2021; Alves, Raposo, 2007; Tukiran et al., 2021). The nature of student expectations can be related to learning methods and the knowledge and skills acquired (Bordia et al., 2006)—also with ChatGPT. These perspectives inform the first hypothesis:

H1

AI-related student expectations regarding ChatGPT use positively impact AI-related overall student satisfaction.

Students’ expectations and perceptions of education quality are closely linked. Expectations, formed before the delivery of education, influence how students subsequently perceive the quality of educational services during their actual delivery (Makoe & Nsamba, 2019; Marimon et al., 2020). A direct relationship between student expectations and the perceived quality of studies was found among undergraduate students in research by Demir (2013). A weak positive relationship was also found among postgraduate students in research by Asim and Kumar (2018). Further, among undergraduate students, Ngo et al. (2024) demonstrated an association between expectations met and the perceived usefulness of ChatGPT. Based on these findings, the second hypothesis is formulated:

H2

AI-related student expectations regarding ChatGPT use positively impact AI-related student perceived quality.

Further, evidence indicates that the quality of studies affects student satisfaction (Gruber et al., 2010; Hasan et al., 2008). Previous research also confirms the positive impact of perceived service quality on student satisfaction (Farahmandian et al., 2013; Subandi & Hamid, 2021; Wong & Chapman, 2023). In these studies, perceived service quality refers to the services that the university provides to students that are directly related to their studies (Farahmandian et al., 2013; Subandi & Hamid, 2021; Wong & Chapman, 2023). It was also found that ChatGPT increases the quality of one’s studies (Firat, 2023) and can therefore be seen as a service that students use in their studies. This evidence informs the third hypothesis:

H3

AI-related student perceived quality of ChatGPT use positively impacts AI-related overall student satisfaction.

Sikyr et al. (2020) have shown that students’ motivation to succeed in their studies is linked to the expectations associated with their employment. At the same time, higher learning expectations can be traced back to the use of ChatGPT (Baidoo-Anu & Ansah, 2023), which is also supported by Tossell et al. (2024). The accuracy of student expectations about their educational and professional careers can be an important determinant of academic success; that is, if students have more accurate job expectations, this should lead to more fulfilled student expectations (Krammer et al., 2016). Thus, the fourth hypothesis is proposed:

H4

AI-related student job expectations regarding ChatGPT use positively impact AI-related student expectations regarding ChatGPT.

The relationship between the perceived quality of studies and perceived job performance of the employee by the employer has been confirmed (Chikazhe et al., 2022). Students have different expectations regarding their studies, which are reflected in the evaluation of education quality (Demir, 2013), including with regard to ChatGPT use (Ngo et al., 2024). Students could have such expectations about their future work, and these translate into the perceived quality of their studies (also in the context of ChatGPT use). Accordingly, the fifth hypothesis is formulated:

H5

AI-related student job expectations regarding ChatGPT use positively impact AI-related student perceived quality.

Research on the relationship between students’ learning expectations and their satisfaction with their studies suggests a positive correlation between these variables (James & Yun, 2018), even among postgraduate students (Beloucif et al., 2022). Student expectations should therefore be broadened to include the period after they have completed their studies (Bhadra, Rahman, 2021). At the same time, students expect AI to have an impact on their future jobs (Bisdas et al., 2021). Here, students’ knowledge of ChatGPT is important (Burke et al., 2020). ChatGPT contributes to student satisfaction, including regarding their careers (Pabreja & Pabreja, 2024). This perspective informs the sixth hypothesis addressing student expectations:

H6

AI-related student job expectations regarding ChatGPT use positively impact AI-related overall student satisfaction.

The conceptual model includes the following constructs: AI-related student expectations (AISE; 4 items), AI-related student job expectations (AISJE; 3 items), AI-related student perceived quality (AISPQ; 5 items), and AI-related overall student satisfaction (AIOSS; 3 items).

The items for AISE, specifically AISE 1–4, were derived from research by Raman et al. (2023) and Strzelecki (2024). The items for AISJE were obtained as follows: AISJE 1 from Alves and Raposo’s (2007) study, and AISJE 2 and 3 from Fulk et al.’s (2022) research. The items for AISPQ were derived as follows: AISPQ 1 and 2 from research by Mbwesa (2014) and Raman et al. (2023); AISPQ 3 and 5 from Raman et al. (2023); and AISPQ 4 from Almulla (2024), Mbwesa (2014), and Raman et al. (2023). Lastly, the items for AIOSS were obtained as follows: AIOSS 1 from research by Almulla (2024) and Gray and DiLoreto (2016), AIOSS 2 from Almulla (2024), and AIOSS 3 from Gray and DiLoreto (2016).

Mediating roles of AI-related student perceived quality and AI-related student expectations

Based on the model shown in Fig. 1, several indirect (mediating) effects could be expected. Research shows a gap between students’ expectations and their satisfaction (Awang & Ismail, 2010). At the same time, there is a relationship between students’ expectations and the perceived quality of their studies (Asim & Kumar, 2018; Demir, 2013; Makoe & Nsamba, 2019; Marimon et al., 2020), as well as between the perceived quality of studies and students’ satisfaction with their studies (Farahmandian et al., 2013; Subandi & Hamid, 2021; Wong & Chapman, 2023). It can therefore be assumed that students’ expectations will influence their satisfaction through their perceived quality of studies. The mediating effect of perceived quality on overall student satisfaction was identified in research by Demircioglu et al. (2021). In our research, we therefore formulate the seventh and eighth hypotheses:

H7

AI-related student perceived quality mediates the relationship between AI-related student job expectations and AI-related overall student satisfaction.

H8

AI-related student perceived quality mediates the relationship between AI-related student expectations and AI-related overall student satisfaction.

The effect of student expectations as a mediator is not very common in research. In research by Khan and Hemsley-Brown (2024), student expectations were found to play a mediating role in relation to student satisfaction. In the case of research on student learning, this factor (in the form of educational expectation) was identified by Fan and Wolters (2014). In other research, it appears as a mediator of teacher expectations (Speybroeck et al., 2012). Research shows that students often perceive the education they receive as insufficient for their future jobs, yet higher education is still considered essential for securing good employment (Zengin et al., 2011). It can be inferred that students’ job expectations will influence their expectations regarding their studies, and in turn impact their overall satisfaction with their education. Therefore, we formulate the ninth hypothesis:

H9

AI-related student expectations mediate the relationship between AI-related student job expectations and AI-related overall student satisfaction.

Materials and methods

In accordance with previous research focused on student satisfaction (Alves & Raposo, 2007; Muzammil et al., 2020; Raman et al., 2023), a questionnaire was used to conduct this study. The questionnaire comprised 21 questions: one closed screening question, five demographic questions (four closed, one half-open), and 15 questions regarding the factors mentioned above (AISE: 4, AISJE: 3, AISPQ: 5, AIOSS: 3). These 15 questions were evaluated on a five-point Likert scale ranging from 1 (definitely disagree) to 5 (definitely agree), in line with other similar studies. The questionnaire was distributed electronically, and the research was conducted online. The questionnaire was administered in Czech; its English version is presented in the Appendix. It must be noted that the translated English version has not been validated and only serves as an overview of the general contents of the questionnaire.

A pre-test of the questionnaire was conducted following Rahi et al. (2019), and the formulation of the questionnaire items was qualitatively verified. Specifically, the targeted method described by Bowden et al. (2002) was used to refine the questions. The pre-test involved five respondents (students from full-time study programs within the target population—one male and four female students; two master’s and three bachelor’s students), which falls at the lower end of the recommended range of 5–10 participants, as suggested by Hilton (2017).

We chose covariance-based structural equation modelling (CB-SEM)—implemented in the R package lavaan with maximum likelihood estimation—to test the hypothesized relationships within the theoretical model and assess global model fit. Structural equation modelling (SEM) was used to model the effects of the individual factors (AISE, AISJE, AISPQ) on AI-related overall student satisfaction (AIOSS). In the measurement part of the model, latent variables were first constructed from the questionnaire items; each latent variable was linked to its manifest variables, represented by the relevant scale items from the questionnaire (as described earlier). The path model representing the relationships between the three satisfaction factors and AIOSS formed the structural part of the SEM.
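To make this step concrete, the following sketch shows how a model of this structure can be specified and fitted in R with lavaan. It is a minimal illustration rather than the authors’ original script; the data frame name (survey) and the item column names (AISE_1 … AIOSS_3) are assumptions based on the construct descriptions above.

```r
# Minimal CB-SEM sketch in R/lavaan (assumed data frame and item names)
library(lavaan)

ai3s_model <- '
  # Measurement part: latent constructs defined by questionnaire items
  AISE  =~ AISE_1 + AISE_2 + AISE_3 + AISE_4
  AISJE =~ AISJE_1 + AISJE_2 + AISJE_3
  AISPQ =~ AISPQ_1 + AISPQ_2 + AISPQ_3 + AISPQ_4 + AISPQ_5
  AIOSS =~ AIOSS_1 + AIOSS_2 + AIOSS_3

  # Structural part: hypothesized direct paths (H1-H6)
  AISE  ~ AISJE                  # H4
  AISPQ ~ AISE + AISJE           # H2, H5
  AIOSS ~ AISE + AISPQ + AISJE   # H1, H3, H6
'

# Maximum likelihood estimation; std.lv = TRUE fixes latent variances to one,
# matching the standardization described in the Results section
fit <- sem(ai3s_model, data = survey, estimator = "ML", std.lv = TRUE)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```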

In SEM, model fit is typically assessed using a set of standard fit indices that indicate how well the proposed model aligns with the observed data. These indices offer complementary perspectives on model adequacy and are widely used in applied SEM research. Schumacker and Lomax (2016) list the most appropriate indices, with acceptable values, for assessing the quality of structural models: the Tucker-Lewis index (TLI), comparative fit index (CFI), standardized root-mean-square residual (SRMR), and root mean square error of approximation (RMSEA). TLI and CFI range from 0 to 1, where a higher value means a better model. The recommended value for these indices is 0.9 (Hu & Bentler, 1999), though values between 0.8 and 0.9 are considered acceptable (marginal fit) (Husain, 2019). Acceptable SRMR values range from 0 to 0.08, according to Hu and Bentler (1999). The acceptable value for RMSEA is 0.08 (Hu & Bentler, 1999); given that an RMSEA value above 0.1 would mean that the model needs to be modified (Husain, 2019), a value of up to 0.1 is acceptable. The reliability of the model’s factors was assessed using Cronbach’s alpha, where values of 0.7 or higher are considered acceptable (Taber, 2018). Validity was assessed using the average variance extracted (AVE) index, for which the recommended value is 0.5 and above (Haji-Othman & Yusuff, 2022); however, values above 0.45 are acceptable (Durgapal & Saraswat, 2019). SEM models were estimated using the R language (R Core Team, 2021) and the lavaan package (Rosseel, 2012).
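As a rough illustration (continuing the sketch above and assuming the fitted lavaan object fit), these fit indices and the reliability and convergent-validity statistics can be extracted as follows. semTools is used here for composite reliability and AVE as one common way to obtain them, not necessarily the authors’ exact procedure.

```r
# Global fit indices discussed in the text (CFI, TLI, RMSEA, SRMR)
fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))

# Per-construct reliability and convergent validity:
# Cronbach's alpha, composite reliability (omega), and AVE ("avevar")
library(semTools)
reliability(fit)
# In newer semTools versions, compRelSEM(fit) and AVE(fit) report the same quantities
```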

Sample

There are many differences (cultural, linguistic, etc.) between students from different countries that are reflected in the use of AI tools (Wang et al., 2023). There are also differences in the attitudes toward AI (Persson et al., 2021) and rules for using AI in different countries (Stanfill & Marc, 2019), which may also be reflected in the use of AI by management students. Differences have also been found in the behavioral interactions between undergraduate and graduate students (Parahoo et al., 2016). All of the above factors are potential confounding variables. Therefore, our research focused exclusively on undergraduate students from Czechia.

The research sample was composed in such a way that it corresponds as closely as possible to the distribution of full-time students across business and management colleges and faculties in Czechia. Informed consent was obtained from all participants (see Appendix). In compliance with Czech legal regulations, this study did not require review by a research ethics committee. The sample included students from one college specializing in professional education (of a total of two in Czechia), with the remaining colleges and faculties being academically oriented. Additionally, the sample included students from the largest business and management college in Czechia (the University of Economics and Business, Prague) and a number of smaller regional economics faculties in Brno, Ostrava, Hradec Králové, Liberec, and Zlín. In total, eight universities were involved (two each from Prague and Brno and one each from the remaining cities). Most of these faculties and colleges are public, though two private colleges were also included (Ambis University and University College Prague), consistent with the distribution of students enrolled in public and private colleges and faculties in Czechia.

The demographic data collected included age, sex, type (level) of study, and city (location) of study (Serban et al., 2013). These were supplemented by data about the type of college, as Czechia has both public and private colleges; the difference between them is that public colleges offer education free of charge, whereas private colleges require students to pay tuition. The demographic characteristics are presented in Table 1. Students’ demographics for the year 2023 were not available, which is why data from 2022 were used. Considering that the academic year lasts from autumn to spring, the information from the end of 2022 can reasonably be expected to sufficiently resemble the first half of 2023, when the research was conducted.

Women constituted 55.6 % of students in 2022, which broadly corresponds to our sample (51.5 %). This proportion remains stable in the long term (cf. CZSO, 2023). Further, 90.7 % of students were enrolled in public colleges, which closely matches the structure of our sample (90.5 %). This has also remained stable over the last three years (cf. CZSO, 2023). Therefore, the research sample is considered representative (in accordance with Omair, 2025) in terms of the students’ sex and type of college.

Bachelor’s students represent 64.3 % of all students in Czechia, which is less than in our sample (77.1 %). The age distribution of students in Czechia was as follows: under 20 years (13.06 %), 20–24 years (84.6 %), and older than 24 years (23.4 %) (cf. CZSO, 2023). This means that our sample underrepresents students from the “20–24” and “older than 24” years categories. Our sample also does not accurately represent students in terms of their location. Within Czechia, Prague hosts the highest number of students (39.7 %), followed by Brno (20.4 %), Olomouc (7.5 %), Ostrava (6.5 %), Plzeň (4.3 %), České Budějovice (3.8 %), Hradec Králové (3.1 %), and Zlín (2.5 %) (cf. CZSO, 2023). An overview of the sample proportions is provided in Table 1.

Results

First, latent variables were constructed in the measurement part of the model based on the 15 manifest variables in the questionnaire (see Appendix), as shown in Table 2. In the measurement model, the variance of the latent variables (constructs) was set to one; in addition, the manifest variables (items) were also standardized. Therefore, the loadings (estimates) can be interpreted in the same manner as the standardized regression coefficients.

The soundness of the measurement model was thoroughly assessed by evaluating internal consistency (reliability), convergent validity, and discriminant validity. To verify internal consistency (reliability), two well-established indicators—Cronbach’s alpha and composite reliability—were calculated. The measurement model results produced values surpassing the accepted threshold of 0.7 for both Cronbach’s alpha and composite reliability across all latent variables, demonstrating strong internal consistency. These results have been presented together with the loadings of the manifest variables, their standard errors, and related p-values in Table 2.

The loadings (all of which are significant) are high, indicating strong relationships between the manifest and latent variables and supporting convergent validity. Convergent validity was also examined via the AVE for each latent variable. Two of these values closely approached the threshold of 0.5, while the rest safely exceeded it.

Discriminant validity, which ensures that the constructs are empirically distinct from one another, was assessed using the heterotrait-monotrait (HTMT) ratio of construct correlations. A low HTMT ratio indicates that the measurements adequately discriminate between different constructs rather than merely reflecting the same underlying construct. Based on the results of the measurement model, all HTMT values are safely below the critical threshold of 0.85, indicating the absence of concerns regarding discriminant validity. The specific HTMT values are provided in Table 3.
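A minimal sketch of how such an HTMT check can be computed with semTools::htmt(), again under the assumed data frame and item names used in the earlier sketches; the 0.85 cut-off is the one cited above.

```r
# Discriminant validity: heterotrait-monotrait (HTMT) ratio of correlations
library(semTools)

measurement_model <- '
  AISE  =~ AISE_1 + AISE_2 + AISE_3 + AISE_4
  AISJE =~ AISJE_1 + AISJE_2 + AISJE_3
  AISPQ =~ AISPQ_1 + AISPQ_2 + AISPQ_3 + AISPQ_4 + AISPQ_5
  AIOSS =~ AIOSS_1 + AIOSS_2 + AIOSS_3
'

htmt_values <- htmt(measurement_model, data = survey)
htmt_values   # every pairwise ratio should stay below the 0.85 threshold
```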

Next, the relationships represented by H1–H6 were modelled in the structural part of the model, shown in Fig. 2. The results are presented in Tables 4 and 5. The structural model depicted in Fig. 2 was enhanced by adding a covariance between AISE_4 and AISPQ_5 (estimate = 0.324), based on research by Demir (2013), Asim and Kumar (2018), and Ngo et al. (2024) (for more details, see Section 3). The following fit values were obtained: CFI = 0.912, TLI = 0.888, RMSEA = 0.088, and SRMR = 0.062.

All of the direct relationships represented by H1–H5 were found to be significant; only the relationship between AISJE and AIOSS, represented by H6, was not. The significant relationships are all positive, namely those between AISJE and both AISE and AISPQ, between AISE and both AISPQ and AIOSS, and between AISPQ and AIOSS.

The results in Table 5 show the indirect (mediation) effects within the examined model. We were able to demonstrate the mediation effect of AISPQ in the relationship between AISE and AIOSS (H8). The mediation effect of AISE in the AISJE–AIOSS relationship was also demonstrated (H9). However, the path AISJE -> AISPQ -> AIOSS did not prove to be significant. The resulting model with all of the effects is shown in Fig. 2.
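For completeness, the residual covariance mentioned above and the indirect (mediation) effects reported in Table 5 can be expressed in lavaan with labelled paths and defined parameters, roughly as sketched below. This is an illustrative specification under the same naming assumptions as the earlier sketches, not the authors’ original code; bootstrap standard errors are shown as one common option for testing indirect effects.

```r
library(lavaan)

ai3s_model_final <- '
  # Measurement part
  AISE  =~ AISE_1 + AISE_2 + AISE_3 + AISE_4
  AISJE =~ AISJE_1 + AISJE_2 + AISJE_3
  AISPQ =~ AISPQ_1 + AISPQ_2 + AISPQ_3 + AISPQ_4 + AISPQ_5
  AIOSS =~ AIOSS_1 + AIOSS_2 + AIOSS_3

  # Labelled structural paths
  AISE  ~ a1 * AISJE                             # H4
  AISPQ ~ a2 * AISJE + b1 * AISE                 # H5, H2
  AIOSS ~ c1 * AISJE + c2 * AISE + b2 * AISPQ    # H6, H1, H3

  # Residual covariance added to improve fit (see text)
  AISE_4 ~~ AISPQ_5

  # Indirect (mediation) effects reported in Table 5
  ind_H7 := a2 * b2   # AISJE -> AISPQ -> AIOSS
  ind_H8 := b1 * b2   # AISE  -> AISPQ -> AIOSS
  ind_H9 := a1 * c2   # AISJE -> AISE  -> AIOSS
'

# Bootstrapped standard errors are one common choice for indirect effects
fit_final <- sem(ai3s_model_final, data = survey, estimator = "ML",
                 std.lv = TRUE, se = "bootstrap", bootstrap = 2000)
summary(fit_final, fit.measures = TRUE, standardized = TRUE)
```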

Discussion

The results reveal that two factors directly affect AI-related overall student satisfaction: AI-related student perceived quality of studies and AI-related student expectations. AI-related student perceived quality of studies was constructed in this study in terms of improving teaching and student evaluation through the use of ChatGPT. This means that students believe that ChatGPT positively affects the quality of their studies (including their evaluations by teachers), which then leads to greater overall student satisfaction with using ChatGPT. This research thus confirms that generative AI (ChatGPT) brings advantages to students’ education (Du Boulay, 2016) and that ChatGPT increases the quality of their studies (Firat, 2023). With regard to the use of generative AI (here, ChatGPT in particular), the results also support the established finding that students’ perceived quality of studies increases overall student satisfaction (Gruber et al., 2010).

The strong relationship between perceived quality of learning and student satisfaction suggests that the way students are taught—specifically how easily and quickly they can acquire the necessary knowledge and skills—is crucial. In this context, ChatGPT serves as a tool that enhances their performance (Caratiquit & Caratiquit, 2023; Youssef et al., 2024), thereby improving the quality of their learning. This improvement in turn leads to greater satisfaction with the use of this tool in their studies. Accordingly, we can conclude that to enhance the quality of teaching and learning, integrating ChatGPT (or other generative AI tools) into student learning is essential. A similar conclusion was drawn by Kiryakova and Angelova (2023) and Javaid et al. (2023).

AI-related student perceived quality of studies was also found to be an important positive mediator of two relationships: AI-related student expectations and AI-related overall student satisfaction in one part of the model, and AI-related student job expectations and AI-related overall student satisfaction in the other part. Therefore, within the developed model, AI-related student perceived quality can be considered the most important factor affecting AI-related overall student satisfaction. The importance of student perceived quality and its impact on overall student satisfaction has also been confirmed in research by Saoud and Sanséau (2019). At the same time, the double but only indirect effect of AI-related student job expectations on AI-related overall student satisfaction has been confirmed.

Thus, this research demonstrates the importance of perceived quality of studies as a link between students’ expectations of their studies (and ChatGPT use in their studies) and their satisfaction with ChatGPT use. Students’ expectations enhance their perceived quality of studies, meaning that when students’ expectations regarding the use of ChatGPT in their learning are met or exceeded, they become more satisfied with its use. This finding suggests that it is only through actual use of ChatGPT in learning that students fully realize the tool’s potential for enhancing their studies, perceiving its benefits positively. This in turn is reflected in their satisfaction with using ChatGPT in their studies. These findings confirm the anticipated benefits of using ChatGPT in education, as articulated by Fuchs (2023).

The student expectations construct captured whether the use of generative AI (ChatGPT) would increase the quality of students’ education and their grade point average, assist with their studies, and provide more time for other activities; these expectations were confirmed by our research. The expectations associated with the use of generative AI (ChatGPT) by management students in turn increase overall student satisfaction with the use of generative AI. It is thus confirmed that students factor ChatGPT use into their study expectations (Doğaner, 2021). The research also shows that (AI-related) student expectations affect (AI-related) student satisfaction both directly and indirectly, which is consistent with Tukiran et al. (2021), as opposed to Najimdeen et al. (2021) and Del Río-Rama et al. (2021). Our research has also demonstrated the positive mediating effect of AI-related student expectations, specifically in the relationship between AI-related student job expectations and AI-related overall student satisfaction.

All respondents in the sample had some experience with ChatGPT use; combined with our results, this suggests that their experiences were positive and that ChatGPT does indeed increase education quality and student satisfaction—in other words, students’ expectations regarding the generative AI tool were fulfilled. The impact of a positive ChatGPT experience on increasing the perceived quality of studies (Firat, 2023) and student satisfaction has been supported by research (Ngo et al., 2024). This confirms that the perceived quality of studies and student expectations are closely related, and that both factors are related to student satisfaction (Alves & Raposo, 2007).

In contrast, the research provided no support for a direct relationship between AI-related student job expectations and AI-related overall student satisfaction. The results suggest that no such direct relationship exists (at least in the context of this research), or that it is weak and negligible. Students do not directly associate ChatGPT use with their future job expectations (specifically, they do not directly expect to use ChatGPT to build their careers, help in their job search, or improve their employment readiness) or with an increase in their overall satisfaction with their studies; in other words, they do not directly associate their job expectations with their overall satisfaction. The conclusions of Al-Sharafi et al. (2023), who showed that chatbots affect job expectations, were thus not confirmed by this research, or were confirmed only as an indirect relationship, suggesting that students do not primarily associate the use of ChatGPT with their future employment expectations. However, the results do confirm that students seek out and use programs and tools that are important for their job expectations (Bhadra & Rahman, 2021).

The question of why AI-related student job expectations do not directly affect AI-related overall student satisfaction remains unanswered and a focus on this topic in further research would be useful (see Section 7). Students do not seem to directly fulfil their employment or career expectations with the use of ChatGPT. There are two hypothetical explanations for this. First, they are aware that they are fulfilling their study expectations by using ChatGPT (by increasing their level of education, quality of studies, and also their grade level), but they are focusing their use of ChatGPT primarily on studying. ChatGPT can be seen as a support service; indeed, research by James-MacEachern and Yun (2017) suggests that students’ expectations of employment are not related to their experiences of support services—rather, students are satisfied with their academic and personal development through their academic experiences. Second, as mentioned by Zengin et al. (2011), “most of the students think that the education they have been provided with is not enough either for a profession or to satisfy the expectations of the sector.” This suggests that while students may have high expectations associated with generative AI tools (ChatGPT), including for their future studies, their satisfaction with their studies will not be enhanced by this tool if they feel that their education will not be sufficient for their future profession.

From a theoretical perspective, this study constructed an original model that uniquely combines student expectations, perceived quality of studies, and overall student satisfaction in the context of ChatGPT use. To date, only partial associations between student expectations and overall student satisfaction (H1) have been demonstrated in the literature (Polas et al., 2020), potentially within the context of ChatGPT use, though this can only be inferred indirectly from Bordia et al. (2006), and we have not found any other studies specifically focusing on this relationship. Associations have also been shown between student expectations and perceived quality of studies (H2), both in general (Marimon et al., 2020) and in the context of ChatGPT use (Ngo et al., 2024)—though here, quality of studies is treated as part of student performance. Furthermore, perceived quality of studies has been linked to overall student satisfaction (H3) (Wong & Chapman, 2023), including in the context of ChatGPT use (Firat, 2023). Associations have also been found between student job expectations and student expectations in general (H4) (Krammer et al., 2016), but no research in the context of ChatGPT use has been identified. Additionally, student job expectations and perceived quality of studies (H5) have been linked both in general (Demir, 2013) and possibly in the context of ChatGPT use (Ngo et al., 2024). Regarding the association between student job expectations and overall student satisfaction (H6), existing research suggests a possible relationship, either in general (Bhadra & Rahman, 2021; James & Yun, 2018) or in the context of ChatGPT use (Burke et al., 2020; Pabreja & Pabreja, 2024).

The model we have constructed builds on the EDT as applied by Chandra and Fitriyanto (2024), but further elaborates on and introduces additional relationships. Students’ expectations were categorized into two distinct areas: study-focused expectations and job-focused expectations. The position of perceived quality of studies was clarified, and its relationships with overall student satisfaction were described in greater detail, including the identification of mediating effects. Additionally, the model was framed within the context of generative AI (specifically ChatGPT), which underscores its uniqueness and relevance.

Limitations and suggestions for future research

One limitation of the research relates to the pilot phase. A small pre-test was conducted with only five students, focusing primarily on the clarity and wording of the questionnaire items. This limited pre-test did not allow for any statistical assessment prior to the main data collection. However, the reliability and validity of the constructs were evaluated using the full sample, and the results indicated that the instrument was constructed appropriately. Nonetheless, future studies should include a more comprehensive pilot phase, ideally involving 30 or more respondents, as recommended by Teresi et al. (2022) and Tsang et al. (2017).

This research was also limited by the relatively small sample size, as the response rate among the students invited to participate was low, possibly due to the length of the questionnaire. However, the final sample size was sufficient to conduct SEM, as it met commonly recommended minimum thresholds. The model estimated 37 free parameters, and with 231 respondents, this resulted in a subject-to-parameter ratio of approximately 6:1, exceeding the frequently cited minimum ratio of 5:1 (Bentler & Chou, 1987). Additionally, Kline (2016) classifies samples over 200 as large in the context of SEM, supporting the adequacy of the present sample for a model of moderate complexity. Taken together, these criteria indicate that the sample size was appropriate for the purposes of the analysis.

In the future, it would be useful to also test the model (whether existing or new) with students from other fields at other colleges. Several factors should be added in future research, such as the ethics of students’ ChatGPT use and the perceived value of studies among students using ChatGPT. A related limiting factor is the relative simplicity of the individual factors influencing AI-related overall student satisfaction with their studies; it would therefore be useful to rethink the internal structure of the factors and their interrelationships. In this context, it would be appropriate to conduct qualitative research (using grounded theory) and formulate new factors, interrelationships, and models on that basis. Given that the respondents were limited to the Czech Republic, it would also be appropriate to test this (or a newly modified) model in other European Union (EU) countries or outside the EU and compare the results. Further research could also investigate why AI-related student job expectations do not directly affect AI-related overall student satisfaction.

Conclusion

This research demonstrates that generative AI (ChatGPT) increases the quality of studies as perceived by business college students of management and confirms the effect of perceived quality of studies on overall student satisfaction. It also shows that generative AI (ChatGPT) use is reflected in student expectations (directly in study expectations and indirectly in job expectations), which in turn affect overall satisfaction with management studies and with the use of generative AI (ChatGPT). The research also showed a positive effect of student expectations of using generative AI (ChatGPT) on the perceived quality of studies (level of learning, ease of studying, learning and extra-curricular activities, and better learning outcomes). In addition, we found evidence of a positive effect of student expectations regarding future employment on study expectations and on the perceived quality of studies (both in the context of generative AI (ChatGPT) use).

Meanwhile, the research provides no support for a direct reflection of generative AI (ChatGPT) use in student expectations regarding their future employment, nor for a direct effect of these job expectations on overall study satisfaction with the use of generative AI (ChatGPT). Specifically, students do not directly expect ChatGPT use to build their careers, simplify their job search, or improve their readiness for employment, nor do such expectations directly increase their overall satisfaction with their studies. The question remains as to why this is so, and further research is needed to clarify whether management education in business colleges pursues the correct focus (in terms of preparation for future employment) and student satisfaction with studies is nonetheless not enhanced by the use of generative AI (ChatGPT), or whether generative AI (ChatGPT) is merely a support tool without much impact on student satisfaction with studies.

The research also revealed the mediating effects of generative AI-related student perceived quality of studies in the indirect relationships between generative AI-related student expectations and generative AI-related overall student satisfaction, and between generative AI-related student job expectations and generative AI-related overall student satisfaction. Generative AI-related student perceived quality of studies is thus a mediator of the relationships in the AI 3S-model. The model also demonstrated the mediating effect of generative AI-related student study expectations in the indirect relationship between generative AI-related student job expectations and generative AI-related overall student satisfaction. Generative AI-related student job expectations thus indirectly influence generative AI-related overall student satisfaction through two pathways.

The study presents several implications for employees and managers of business colleges. Numerous business college management students currently make active use of generative AI (ChatGPT). They view it positively and believe it helps them in their studies. They are, however, not convinced that this tool will help them find employment and perform their future employment tasks. Considering the positive effects of generative AI (ChatGPT) on studies, it is pointless to criticize or forbid the use of this tool. On the contrary, tutors should become properly acquainted with it and should be able to explain and discuss its advantages, disadvantages, and limitations with students. Furthermore, it is essential for colleges to set appropriate boundaries and rules for the use of generative AI (ChatGPT) so that students may use the tool and increase the quality of their education and their learning satisfaction without worrying about infringing ethical or legal norms.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

CRediT authorship contribution statement

Petr Suchanek: Writing – original draft, Project administration, Methodology, Investigation, Data curation, Conceptualization. Maria Kralova: Writing – original draft, Visualization, Validation, Software, Methodology, Formal analysis.

Declaration of competing interest

The authors report there are no competing interests to declare.

Acknowledgements

There are no acknowledgements.

Appendix
Questionnaire

As a research participant, I hereby agree to the anonymous processing of my answers to the questions in the questionnaire below and I also agree to the publication of the relevant results (in anonymized and aggregated form together with the other respondents) by the authors of the questionnaire.

I have personally used ChatGPT at least once (Experience with ChatGPT):

  • 1 – Yes

  • 2 – No, but I plan to use it in the future

  • 3 – No, and I do not plan to use it in the future

Demographics

I study at (Type of college):

  • 1 – A public college

  • 2 – A private college

I study in (City of education):

  • 1 - Brno

  • 2 - Plzeň

  • 3 - Zlín

  • 4 - Opava

  • 5 - Karviná

  • 6 – Prague

  • 7 – Hradec Králové

  • 8 – Pardubice

  • 9 – Liberec

  • 10 – Jihlava

  • 11 - Other

I study in (Type of education program):

  • 1 – A bachelor’s program

  • 2 – A follow-up master’s program

Sex (Sex):

  • 1 – Man

  • 2 – Woman

  • 3 – I do not wish to disclose

  • 4 – Other

Age (Age):

  • 1 - 19−20

  • 2 - 21−22

  • 3 - 23−24

  • 4 - 25−26

  • 5 - 27 and older

Variable  Questionnaire items 
AI-related study expectations
AISE 1  I will use ChatGPT in my studies to improve the quality of my education (i.e., the speed at which I am able to obtain relevant information in my area of study) 
AISE 2  I will use ChatGPT to increase my average grade 
AISE 3  I will use ChatGPT in my studies to make my studies easier 
AISE 4  I will use ChatGPT in my studies to secure more time for other activities (e.g., a job) 
AI-related student job expectations
AISJE 1  ChatGPT use will help me build my career 
AISJE 2  ChatGPT use will make it easier to find a job 
AISJE 3  ChatGPT use will improve my preparedness for job-related tasks 
AI-related student perceived quality
AISPQ 1  The quality of my education (education level) improved with ChatGPT use (e.g., by being able to obtain relevant information faster) 
AISPQ 2  The quality of my education (study activities) improved with ChatGPT use 
AISPQ 3  My percentile of study performance (being ranked among a certain percentage of the best students) improved with ChatGPT use 
AISPQ 4  My studies are easier thanks to ChatGPT use 
AISPQ 5  I have more time for other activities (e.g., a job) thanks to ChatGPT use 
AI-related student overall satisfaction
AIOSS 1  My experiences with ChatGPT fulfilled my expectations 
AIOSS 2  The benefits of ChatGPT use are significant compared with the costs (effort) associated with it 
AIOSS 3  I would recommend ChatGPT to my friends 
Table 1.

Demographics of the 231 respondents.

Demographics  Frequency  % of Total
Sex
  Male  111  48.1
  Female  119  51.5
  I don't want to specify  1  0.4
Type of college
  Public  209  90.5
  Private  22  9.5
City of education
  Brno  74  32.0
  Prague  46  19.9
  Liberec  44  19.0
  Jihlava  27  11.7
  Hradec Králové  19  8.2
  Zlín  17  7.4
  Other  4  1.7
Type of program
  Bachelor  178  77.1
  Follow-up master's  53  22.9
Age
  19−20  30  13.0
  21−22  110  47.6
  23−24  56  24.2
  25−26  16  6.9
  27 and more  19  8.2

Source: Authors.

Table 2.

Latent variables in the measurement model.

  Estimate  Std. Err.  p-value  Average Variance Extracted  Cronbach's Alpha  Composite Reliability 
AI-related study expectations        0.467  0.771  0.781 
AISE1  0.710  0.062  0.000       
AISE2  0.578  0.065  0.000       
AISE3  0.708  0.062  0.000       
AISE4  0.720  0.061  0.000       
AI-related student job expectations        0.680  0.859  0.867 
AISJE1  0.790  0.058  0.000       
AISJE2  0.900  0.055  0.000       
AISJE3  0.773  0.058  0.000       
AI-related student perceived quality        0.523  0.836  0.845 
AISPQ1  0.800  0.057  0.000       
AISPQ2  0.814  0.056  0.000       
AISPQ3  0.515  0.065  0.000       
AISPQ4  0.772  0.058  0.000       
AISPQ5  0.662  0.061  0.000       
AI-related student overall satisfaction        0.486  0.744  0.728 
AIOSS1  0.622  0.065  0.000       
AIOSS2  0.672  0.064  0.000       
AIOSS3  0.782  0.062  0.000       

Source: Authors.

Table 3.

Discriminant validity values of HTMT ratio.

  AISE  AISJE  AISPQ 
AISE         
AISJE  0.574       
AISPQ  0.783  0.570     
AIOSS  0.758  0.475  0.698   

Source: Authors.
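
For readers who wish to reproduce the reliability and validity checks summarized in Tables 2 and 3 above, the sketch below shows how the measurement model could be specified in R with the lavaan package (Rosseel, 2012) and the semTools add-on. This is a minimal illustration, not the authors' original script: the data frame name ai3s_data, the use of semTools, and the specific function calls are assumptions; only the item-to-construct assignments follow the questionnaire in this Appendix.

# Minimal sketch (not the authors' original script). Assumes a data frame
# 'ai3s_data' with one column per questionnaire item
# (AISE1-AISE4, AISJE1-AISJE3, AISPQ1-AISPQ5, AIOSS1-AIOSS3).
library(lavaan)
library(semTools)

measurement_model <- '
  AISE  =~ AISE1 + AISE2 + AISE3 + AISE4
  AISJE =~ AISJE1 + AISJE2 + AISJE3
  AISPQ =~ AISPQ1 + AISPQ2 + AISPQ3 + AISPQ4 + AISPQ5
  AIOSS =~ AIOSS1 + AIOSS2 + AIOSS3
'

cfa_fit <- cfa(measurement_model, data = ai3s_data, estimator = "ML")

standardizedSolution(cfa_fit)                # standardized loadings (cf. Table 2)
reliability(cfa_fit)                         # Cronbach's alpha, omega, AVE (cf. Table 2)
htmt(measurement_model, data = ai3s_data)    # HTMT ratios (cf. Table 3)

Under the usual conventions, the composite reliability values in Table 2 would correspond to the omega coefficients reported by reliability(), but readers should verify this against their own output.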

Table 4.

Main effects in the AI 3S-model.

Relationship  Est.  Std. Err.  z-value  p-value  ci.lower  ci.upper  Std.lv  Remark 
AISJE->AISE  0.672  0.103  6.505  0.000  0.469  0.874  0.558  Supported 
AISE->AISPQ  0.804  0.128  6.259  0.000  0.552  1.055  0.637  Supported 
AISJE->AISPQ  0.277  0.113  2.451  0.014  0.055  0.498  0.182  Supported 
AISE->AIOSS  1.022  0.251  4.071  0.000  0.530  1.514  0.650  Supported 
AISPQ->AIOSS  0.333  0.140  2.378  0.017  0.059  0.608  0.268  Supported 
AISJE->AIOSS  −0.056  0.151  −0.370  0.711  −0.352  0.240  −0.030  Not supported 

Source: Authors.

Table 5.

Mediation effects in the AI 3S-model.

  Est.  Std. Err.  z-value  p-value  Std.lv  Remark 
Indirect effect AISE->AISPQ->AIOSS  0.268  0.107  2.501  0.012  0.170  Supported 
Total AISE->AISPQ->AIOSS + AISE->AIOSS  1.290  0.244  5.286  0.000  0.820   
Indirect effect AISJE->AISPQ->AIOSS  0.092  0.057  1.623  0.105  0.049  Supported 
Indirect effect AISJE->AISE->AIOSS  0.686  0.195  3.513  0.000  0.362  Supported 

Source: Authors.
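
Analogously, the structural paths and mediation effects reported in Tables 4 and 5 could be estimated with the model sketched below, again only as an illustration under the same assumptions (data frame ai3s_data; maximum likelihood estimation, as reported in the study). The parameter labels a1, b1, b2, c1, c2, and c3 are introduced here solely to define the indirect effects and are not the authors' notation.

# Illustrative sketch of the AI 3S-model structural part and mediation effects
# (cf. Tables 4 and 5); not the authors' original script.
library(lavaan)

ai3s_model <- '
  # Measurement part
  AISE  =~ AISE1 + AISE2 + AISE3 + AISE4
  AISJE =~ AISJE1 + AISJE2 + AISJE3
  AISPQ =~ AISPQ1 + AISPQ2 + AISPQ3 + AISPQ4 + AISPQ5
  AIOSS =~ AIOSS1 + AIOSS2 + AIOSS3

  # Structural paths (cf. Table 4)
  AISE  ~ a1 * AISJE
  AISPQ ~ b1 * AISE + b2 * AISJE
  AIOSS ~ c1 * AISE + c2 * AISPQ + c3 * AISJE

  # Indirect and total effects (cf. Table 5)
  ind_SE_PQ_OSS := b1 * c2        # AISE  -> AISPQ -> AIOSS
  ind_JE_PQ_OSS := b2 * c2        # AISJE -> AISPQ -> AIOSS
  ind_JE_SE_OSS := a1 * c1        # AISJE -> AISE  -> AIOSS
  total_SE_OSS  := c1 + b1 * c2   # direct + indirect effect of AISE on AIOSS
'

sem_fit <- sem(ai3s_model, data = ai3s_data, estimator = "ML")
summary(sem_fit, standardized = TRUE, fit.measures = TRUE, ci = TRUE)

Bootstrapped confidence intervals for the indirect effects could alternatively be requested by passing se = "bootstrap" to sem().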

References
[Abumalloh et al., 2024]
R.A. Abumalloh, M. Nilashi, O. Halabi, R. Ali.
Does metaverse improve recommendations quality and customer trust? A user-centric evaluation framework based on the cognitive-affective-behavioural theory.
Journal of Innovation & Knowledge, 9 (2024),
[Almaraz-López et al., 2023]
C. Almaraz-López, F. Almaraz-Menéndez, C. López-Esteban.
Comparative study of the attitudes and perceptions of university students in business administration and management and in education toward artificial intelligence.
Education Sciences, 13 (2023), pp. 609
[Almulla, 2024]
M.A. Almulla.
Investigating influencing factors of learning satisfaction in AI ChatGPT for research: University students perspective.
[Al-Sharafi et al., 2023]
M.A. Al-Sharafi, M. Al-Emran, M. Iranmanesh, N. Al-Qaysi, N.A. Iahad, I. Arpaci.
Understanding the impact of knowledge management factors on the sustainable use of AI-based chatbots for educational purposes using a hybrid SEM-ANN approach.
Interactive Learning Environments, 31 (2023), pp. 7491-7510
[Alves and Raposo, 2007]
H. Alves, M. Raposo.
Conceptual model of student satisfaction in higher education.
Total Quality Management, 18 (2007), pp. 571-588
[Anwar et al., 2025]
R.S. Anwar, R.R. Ahmed, D. Streimikiene, W. Strielkowski, J. Streimikis.
Customer engagement, innovation, and sustainable consumption: Analyzing personalized, innovative, sustainable phygital products.
Journal of Innovation & Knowledge, 10 (2025),
[Asim and Kumar, 2018]
A. Asim, N. Kumar.
Service quality in higher education: Expectations and perceptions of students.
Asian Journal of Contemporary Education, 2 (2018), pp. 70-83
[Awang and Ismail, 2010]
H. Awang, N. Ismail.
Undergraduate education: A gap analysis of students’ expectations and satisfaction.
Problems of Education in the 21st Century, 21 (2010), pp. 21-28
[Baidoo-Anu and Ansah, 2023]
D. Baidoo-Anu, L.O. Ansah.
Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning.
Journal of AI, 7 (2023), pp. 52-62
[Baker et al., 2019]
T. Baker, L. Smith, N. Anissa.
Educ-AI-tion rebooted? Exploring the future of artificial intelligence in schools and colleges.
London, (2019),
[Bartol, 1976]
K.M. Bartol.
Relationship of sex and professional training area to job orientation.
Journal of Applied Psychology, 61 (1976), pp. 368-370
[Beloucif et al., 2022]
A. Beloucif, M. Mehafdi, N.A. Komey.
Expectation as a key determinant of international students’ satisfaction: A case study of business school MSc students.
Journal of Applied Research in Higher Education, 14 (2022), pp. 453-470
[Bentler and Chou, 1987]
P.M. Bentler, C.P. Chou.
Practical issues in structural modeling.
Sociological Methods & Research, 16 (1987), pp. 78-117
[Bhadra and Rahman, 2021]
M.S. Bhadra, M.O. Rahman.
Exploring the student satisfaction and future career expectation in the field of business graduate: A study on undergraduate level student.
Society & Change, XV (2021), pp. 37-52
[Bilquise et al., 2022]
G. Bilquise, S. Ibrahim, K. Shaalan.
Emotionally intelligent chatbots: A systematic literature review.
Human Behavior and Emerging Technologies, 2022 (2022),
[Bisdas et al., 2021]
S. Bisdas, C.C. Topriceanu, Z. Zakrzewska, A.V. Irimia, L. Shakallis, J. Subhash …, E.H. Ebrahim.
Artificial intelligence in medicine: A multinational multi-center survey on the medical and dental students' perception.
Frontiers in Public Health, 9 (2021), pp. 1-10
[Bordia et al., 2006]
S. Bordia, L. Wales, J. Pittam.
The role of student expectations in TESOL: Opening a research agenda.
TESOL in Context, 16 (2006), pp. 10-17
[Bowden et al., 2002]
A. Bowden, J.A. Fox-Rushby, L. Nyandieka, J. Wanjau.
Methods for pre-testing and piloting survey questions: Illustrations from the KENQOL survey of health-related quality of life.
Health Policy and Planning, 17 (2002), pp. 322-330
[Burke et al., 2020]
C. Burke, T. Scurry, J. Blenkinsopp.
Navigating the graduate labour market: The impact of social class on student understandings of graduate careers and the graduate labour market.
Studies in Higher Education, 45 (2020), pp. 1711-1722
[Caratiquit and Caratiquit, 2023]
K.D. Caratiquit, L.J.C. Caratiquit.
ChatGPT as an academic support tool on the academic performance among students: The mediating role of learning motivation.
Journal of Social, Humanity, and Education, 4 (2023), pp. 21-33
[Carrillat et al., 2007]
F.A. Carrillat, F. Jaramillo, J.P. Mulki.
The validity of the SERVQUAL and SERVPERF scales: A meta-analytic view of 17 years of research across five continents.
International Journal of Service Industry Management, 18 (2007), pp. 472-490
[Chandra and Fitriyanto, 2024]
D. Chandra, A. Fitriyanto.
An empirical analysis of student satisfaction with lecturer teaching quality: Applying the expectation-disconfirmation theory.
Indonesian Journal of Economics and Management, 4 (2024), pp. 460-474
[Chatterjee and Dethlefs, 2023]
J. Chatterjee, N. Dethlefs.
This new conversational AI model can be your friend, philosopher, and guide… and even your worst enemy.
[Chen, 2024]
H.J. Chen.
Assessing the influence of optimism on users' continuance use intention of ChatGPT: An expectation-confirmation model perspective.
International Journal of Management Studies and Social Science Research, 6 (2024), pp. 347-353
[Chen et al., 2023]
Y. Chen, S. Jensen, L.J. Albert, S. Gupta, T. Lee.
Artificial intelligence (AI) student assistants in the classroom: Designing chatbots to support student success.
Information Systems Frontiers, 25 (2023), pp. 161-182
[Chikazhe et al., 2022]
L. Chikazhe, C. Makanyeza, N.Z. Kakava.
The effect of perceived service quality, satisfaction and loyalty on perceived job performance: Perceptions of university graduates.
Journal of Marketing for Higher Education, 32 (2022), pp. 1-18
[Cui et al., 2023]
Y. Cui, Z. Ma, L. Wang, A. Yang, Q. Liu, S. Kong, H. Wang.
A survey on big data-enabled innovative online education systems during the COVID-19 pandemic.
Journal of Innovation & Knowledge, 8 (2023),
[Czech Statistical Office (CZSO) 2023]
Czech Statistical Office (CZSO) (2023). Studenti a absolventi vysokých škol v České republice - 2001–2022. Retrieved from https://www.czso.cz/csu/czso/studenti-a-absolventi-vysokych-skol-v-ceske-republice-gr402tsw19. Accessed June 6, 2022.
[Damar, 2022]
M. Damar.
What the literature on medicine, nursing, public health, midwifery, and dentistry reveals: An overview of the rapidly approaching metaverse.
Journal of Metaverse, 2 (2022), pp. 62-70
[Demir, 2013]
Ş.Ş. Demir.
The relationship among expectation, perceived quality-value and satisfaction: A study on undergraduate students of tourism.
Journal of Human Sciences, 10 (2013), pp. 307-324
[Delello et al., 2023]
J.A. Delello, W. Sung, K. Mokhtari, T. De Giuseppe.
Exploring college students' awareness of AI and ChatGPT: Unveiling perceived benefits and risks.
Journal of Inclusive Methodology and Technology in Learning and Teaching, 3 (2023), pp. 1-25
[Del Río-Rama et al., 2021]
M.D.L.C. Del Río-Rama, J. Álvarez-García, N.K. Mun, A. Durán-Sánchez.
Influence of the quality perceived of service of a higher education center on the loyalty of students.
Frontiers in Psychology, 12 (2021), pp. 1-14
[Demircioglu et al., 2021]
A. Demircioglu, F. Bhatti, B. Ababneh.
Improving student satisfaction through social media marketing activities: The mediating role of perceived quality.
International Journal of Data and Network Science, 5 (2021), pp. 143-150
[Dibble, 2023]
M. Dibble.
Schools ban ChatGPT amid fears of artificial intelligence-assisted cheating.
[Doğaner, 2021]
A. Doğaner.
The approaches and expectations of the health sciences students towards artificial intelligence.
Karya Journal of Health Science, 2 (2021), pp. 5-11
[Doshi et al., 2023]
R.H. Doshi, S.S. Bajaj, H.M. Krumholz.
ChatGPT: Temptations of progress.
The American Journal of Bioethics, 23 (2023), pp. 6-8
[Du Boulay, 2016]
B. Du Boulay.
Artificial intelligence as an effective classroom assistant.
IEEE Intelligent Systems, 31 (2016), pp. 76-81
[Durgapal and Saraswat, 2019]
B.P. Durgapal, A. Saraswat.
Tourism destination image of Uttarakhand.
International Journal of Management Studies, 6 (2019), pp. 77-87
[Elliott and Shin, 2002]
K.M. Elliott, D. Shin.
Student satisfaction: An alternative approach to assessing this important concept.
Journal of Higher Education Policy and Management, 24 (2002), pp. 197-209
[Fan and Wolters, 2014]
W. Fan, C.A. Wolters.
School motivation and high school dropout: The mediating role of educational expectation.
British Journal of Educational Psychology, 84 (2014), pp. 22-39
[Farahmandian et al., 2013]
S. Farahmandian, H. Minavand, M. Afshardost.
Perceived service quality and student satisfaction in higher education.
Journal of Business and Management, 12 (2013), pp. 65-74
[Firat, 2023]
M. Firat.
What ChatGPT means for universities: Perceptions of scholars and students.
Journal of Applied Learning and Teaching, 6 (2023), pp. 57-63
[Fuchs, 2023]
K. Fuchs.
Exploring the opportunities and challenges of NLP models in higher education: Is Chat GPT a blessing or a curse?.
Frontiers in education (Vol. 8, Frontiers Media SA, (2023), http://dx.doi.org/10.3389/feduc.2023.1166682
[Fulk et al., 2022]
H.K. Fulk, H.L. Dent, W.A. Kapakos, B.J. White.
Doing more with less: Using AI-based big interview to combine exam preparation and interview practice.
Issues in Information Systems, 23 (2022), pp. 204-217
[Ge and Lai, 2023]
J. Ge, J.C. Lai.
Artificial intelligence-based text generators in hepatology: ChatGPT is just the beginning.
Hepatology Communications, 7 (2023), pp. 1-11
[Gong et al., 2019]
B. Gong, J.P. Nugent, W. Guest, W. Parker, P.J. Chang, F. Khosa, S. Nicolaou.
Influence of artificial intelligence on Canadian medical students’ preference for radiology specialty: A national survey study.
Academic Radiology, 26 (2019), pp. 566-577
[Gray and DiLoreto, 2016]
J.A. Gray, M. DiLoreto.
The effects of student engagement, student satisfaction, and perceived learning in online learning environments.
International Journal of Educational Leadership Preparation, 11 (2016), pp. 1-20
[Greenhaus et al., 1983]
J.H. Greenhaus, C. Seidel, M. Marinis.
The impact of expectations and values on job attitudes.
Organizational Behavior and Human Performance, 31 (1983), pp. 394-417
[Gruber et al., 2010]
T. Gruber, S. Fuß, R. Voss, M. Gläser-Zikuda.
Examining student satisfaction with higher education services: Using a new measurement tool.
International Journal of Public Sector Management, 23 (2010), pp. 105-123
[Gupta, 2020]
A. Gupta.
Artificial intelligence in education: A systematic literature review.
Education and Information Technologies, 25 (2020), pp. 4389-4418
[Hammer, 2023]
A. Hammer.
The rise of the machines? ChatGPT CAN pass US medical licensing exam and the bar, experts warn-after the AI chatbot received B grade on Wharton MBA paper.
[Haji-Othman and Yusuff, 2022]
Y. Haji-Othman, M.S.S. Yusuff.
Assessing reliability and validity of attitude construct using partial least squares structural equation modeling.
International Journal of Academic Research in Business and Social Sciences, 12 (2022), pp. 378-385
[Han et al., 2023]
Han, R., Peng, T., Yang, C., Wang, B., Liu, L., & Wan, X. (2023). Is information extraction solved by ChatGPT? An analysis of performance, evaluation criteria, robustness and errors. arXiv preprint arXiv:2305.14450. 1–23.
[Hasan et al., 2008]
H.F.A. Hasan, A. Ilias, R.A. Rahman, M.Z.A. Razak.
Service quality and student satisfaction: A case study at private higher education institutions.
International Business Research, 1 (2008), pp. 163-175
[Hassani and Silva, 2023]
H. Hassani, E.S. Silva.
The role of ChatGPT in data science: How AI-assisted conversational interfaces are revolutionizing the field.
Big Data and Cognitive Computing, 7 (2023), pp. 62
[Hilton, 2017]
C.E. Hilton.
The importance of pretesting questionnaires: A field research example of cognitive pretesting the exercise referral quality of life scale (ER-QLS).
International Journal of Social Research Methodology, 20 (2017), pp. 21-34
[Hu, 2023]
K. Hu.
ChatGPT sets record for fastest-growing user base-analyst note.
[Hu and Bentler, 1999]
L. Hu, P.M. Bentler.
Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives.
Structural Equation Modeling: A Multidisciplinary Journal, 6 (1999), pp. 1-55
[Husain, 2019]
T. Husain.
An analysis of modeling audit quality measurement based on decision support systems (DSS).
European Journal of Scientific Exploration, 2 (2019), pp. 1-9
[Igbokwe, 2023]
I.C. Igbokwe.
Application of artificial intelligence (AI) in educational management.
International Journal of Scientific and Research Publications, 13 (2023), pp. 300-307
[James-MacEachern and Yun, 2017]
M. James-MacEachern, D. Yun.
Exploring factors influencing international students’ decision to choose a higher education institution: A comparison between Chinese and other students.
International Journal of Educational Management, 31 (2017), pp. 343-363
[James and Yun, 2018]
M. James, D. Yun.
Exploring student satisfaction and future employment intentions: A case study examination: Is there a link between satisfaction and getting a job?.
Higher Education, Skills and Work-Based Learning, 8 (2018), pp. 117-133
[Javaid et al., 2023]
M. Javaid, A. Haleem, R.P. Singh, S. Khan, I.H. Khan.
Unlocking the opportunities through ChatGPT Tool towards ameliorating the education system.
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 3 (2023),
[Kaliyadan and Seetharam, 2023]
F. Kaliyadan, K.A. Seetharam.
ChatGPT-Quo Vadis?.
Indian Dermatology Online Journal, 14 (2023), pp. 457
[Kamaruddin et al., 2012]
R. Kamaruddin, I. Osman, C.A.C. Pei.
Public transport services in Klang valley: Customer expectations and its relationship using SEM.
Procedia-Social and Behavioral Sciences, 36 (2012), pp. 431-438
[Kane et al., 2008]
D. Kane, J. Williams, G. Cappuccini-Ansfield.
Student satisfaction surveys: The value in taking an historical perspective.
Quality in Higher Education, 14 (2008), pp. 135-155
[Khan and Hemsley-Brown, 2024]
J. Khan, J. Hemsley-Brown.
Student satisfaction: The role of expectations in mitigating the pain of paying fees.
Journal of Marketing for Higher Education, 34 (2024), pp. 178-200
[Kim et al., 2019]
N.Y. Kim, Y. Cha, H.S. Kim.
Future English learning: Chatbots and artificial intelligence.
Multimedia-Assisted Language Learning, 22 (2019), pp. 32-53
[Kline, 2016]
R.B. Kline.
Principles and practice of structural equation modeling.
4th ed., The Guilford Press, (2016),
[Koyuturk et al., 2023]
Koyuturk, C., Yavari, M., Theophilou, E., Bursic, S., Donabauer, G., Telari, A., … & Ognibene, D. (2023). Developing effective educational chatbots with chatgpt prompts: Insights from preliminary tests in a case study on social media literacy. arXiv preprint arXiv:2306.10645. https://doi.org/10.48550/arXiv.2306.10645.
[Krammer et al., 2016]
G. Krammer, M. Sommer, M.E. Arendasy.
Realistic job expectations predict academic achievement.
Learning and Individual Differences, 51 (2016), pp. 341-348
[Kuleto et al., 2021]
V. Kuleto, M. Ilić, M. Dumangiu, M. Ranković, O.M. Martins, D. Păun, L. Mihoreanu.
Exploring opportunities and challenges of artificial intelligence and machine learning in higher education institutions.
Sustainability, 13 (2021),
[Kumar, 2021]
J.A. Kumar.
Educational chatbots for project-based learning: Investigating learning outcomes for a team-based design course.
International Journal of Educational Technology in Higher Education, 18 (2021), pp. 1-28
[Kiryakova and Angelova, 2023]
G. Kiryakova, N. Angelova.
ChatGPT—A challenging tool for the university professors in their teaching practice.
Education Sciences, 13 (2023),
[Lim et al., 2024]
W.M. Lim, F. Azmat, R. Voola.
Innovate or stagnate: The new mantra of responsible business schools.
EFMD Global Focus, 18 (2024), pp. 49-56
[Lim et al., 2023]
W.M. Lim, A. Gunasekara, J.L. Pallant, J.I. Pallant, E. Pechenkina.
Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators.
The International Journal of Management Education, 21 (2023), pp. 1-13
[Lim et al., 2022]
W.M. Lim, S. Kumar, S. Verma, R. Chaturvedi.
Alexa, what do we know about conversational commerce? Insights from a systematic literature review.
Psychology & Marketing, 39 (2022), pp. 1129-1155
[Liu, 2023]
Y. Liu.
An innovative talent training mechanism for maker education in colleges and universities based on the IPSO-BP-enabled technique.
Journal of Innovation & Knowledge, 8 (2023),
[Liu et al., 2020]
Y. Liu, S. Zhao, J. Li, B. Zhang.
Intelligent chatbots in education: A review of recent advances.
Educational Technology Research and Development, 68 (2020), pp. 87-104
[Makoe and Nsamba, 2019]
M. Makoe, A. Nsamba.
The gap between student perceptions and expectations of quality support services at the university of South Africa.
American Journal of Distance Education, 33 (2019), pp. 132-141
[Marimon et al., 2020]
F. Marimon, M. Mas-Machuca, J. Berbegal-Mirabent.
Fulfilment of expectations on students’ perceived quality in the Catalan higher education system.
Total Quality Management & Business Excellence, 31 (2020), pp. 483-502
[Mbwesa, 2014]
J.K. Mbwesa.
Students' perceived quality of distance education courses as a correlate of learner satisfaction: A case study of the bachelor of education arts program, University of Nairobi, Kenya.
International Journal of Social Science Studies, 2 (2014), pp. 86-99
[Muzammil et al., 2020]
M. Muzammil, A. Sutawijaya, M. Harsasi.
Investigating student satisfaction in online learning: The role of student interaction and engagement in distance learning university.
Turkish Online Journal of Distance Education, 21 (2020), pp. 88-96
[Najimdeen et al., 2021]
A.H.A. Najimdeen, I.H. Amzat, H.B.M. Ali.
The impact of service quality dimensions on students’ satisfaction: A study of international students in Malaysian public universities.
IIUM Journal of Educational Studies, 9 (2021), pp. 89-108
[Ngo et al., 2024]
T.T.A. Ngo, T.T. Tran, G.K. An, P.T. Nguyen.
ChatGPT for educational purposes: Investigating the impact of knowledge management factors on student satisfaction and continuous usage.
IEEE Transactions on Learning Technologies, 17 (2024), pp. 1367-1378
[Nguyen et al., 2025]
M. Nguyen, A. Mehrotra, A. Malik, R. Pandey.
Nexus between generative AI engagement, quality and expectation formation: An application of expectation–confirmation theory.
Journal of Enterprise Information Management, 38 (2025), pp. 798-820
[Nye, 2015]
B.D. Nye.
Intelligent tutoring systems by and for the developing world: A review of trends and approaches for educational technology in a global context.
International Journal of Artificial Intelligence in Education, 25 (2015), pp. 177-203
[Okonkwo and Ade-Ibijola, 2020]
C.W. Okonkwo, A. Ade-Ibijola.
Python-bot: A chatbot for teaching python programming.
Engineering Letters, 29 (2020), pp. 25-34
[Oliver, 1980]
R.L. Oliver.
A cognitive model of the antecedents and consequences of satisfaction decisions.
Journal of Marketing Research, 17 (1980), pp. 460-469
[Omair, 2025]
A. Omair.
Sample size estimation and sampling techniques for selecting a representative sample.
Journal of Health Specialties, 2 (2025), pp. 142-147
[O’Neill and Palmer, 2004]
M.A. O’Neill, A. Palmer.
Importance-performance analysis: A useful tool for directing continuous quality improvement in higher education.
Quality Assurance in Education, 12 (2004), pp. 39-52
[OpenAI 2023]
OpenAI. (2023). OpenAI. https://openai.com/.
[Pabreja and Pabreja, 2024]
K. Pabreja, N. Pabreja.
Understanding college students’ satisfaction with ChatGPT: An exploratory and predictive machine learning approach using feature engineering.
MIER Journal of Educational Studies Trends and Practices, 14 (2024), pp. 37-63
[Parahoo et al., 2016]
S.K. Parahoo, M.I. Santally, Y. Rajabalee, H.L. Harvey.
Designing a predictive model of student satisfaction in online learning.
Journal of Marketing for Higher Education, 26 (2016), pp. 1-19
[Parasuraman et al., 1985]
A. Parasuraman, V.A. Zeithaml, L.L. Berry.
A conceptual model of service quality and its implications for future research.
Journal of Marketing, 49 (1985), pp. 41-50
[Parasuraman et al., 1988]
A. Parasuraman, V.A. Zeithaml, L.L. Berry.
Servqual: A multiple-item scale for measuring consumer perception of service quality.
Journal of Retailing, 64 (1988), pp. 12-37
[Pereira et al., 2019]
J. Pereira, M. Fernández-Raga, S. Osuna-Acedo, M. Roura-Redondo, O. Almazán-López, A. Buldón-Olalla.
Promoting learners’ voice productions using chatbots as a tool for improving the learning process in a MOOC.
Technology, Knowledge and Learning, 24 (2019), pp. 545-565
[Persson et al., 2021]
A. Persson, M. Laaksoharju, H. Koga.
We mostly think alike: Individual differences in attitude towards AI in Sweden and Japan.
The Review of Socionetwork Strategies, 15 (2021), pp. 123-142
[Polas et al., 2020]
M.R.H. Polas, M.K. Juman, A.M. Karim, M.I. Tabash, M.I. Hossain.
Do service quality dimensions increase the customer brand relationship among gen Z? The mediation role of customer perception between the service quality dimensions (SERVQUAL) and brand satisfaction.
International Journal of Advanced Science and Technology, 29 (2020), pp. 1050-1070
[Rahi et al., 2019]
S. Rahi, F.M. Alnaser, M. Abd Ghani.
Designing survey research: Recommendation for questionnaire development, calculating sample size and selecting research paradigms.
Economic and Social Development, (2019), pp. 1157-1169
[Raman et al., 2023]
R. Raman, S. Mandal, P. Das, T. Kaur, J.P. Sanjanasri, P. Nedungadi.
University students as early adopters of ChatGPT: Innovation diffusion study.
Research Square, (2023), pp. 1-32
[Ratten and Jones, 2023]
V. Ratten, P. Jones.
Generative artificial intelligence (ChatGPT): Implications for management educators.
The International Journal of Management Education, 21 (2023), pp. 1-7
[R Core Team 2021]
R Core Team (2021). R: A language and environment for statistical computing. R foundation for statistical computing [computer software].
[Roganović, 2024]
J. Roganović.
Familiarity with ChatGPT features modifies expectations and learning outcomes of dental students.
International Dental Journal, 74 (2024), pp. 1456-1462
[Rosseel, 2012]
Y. Rosseel.
Lavaan: An R package for structural equation modeling.
Journal of Statistical Software, 48 (2012), pp. 1-36
[Rudolph et al., 2023]
J. Rudolph, S. Tan, S. Tan.
ChatGPT: Bullshit spewer or the end of traditional assessments in higher education?.
Journal of Applied Learning and Teaching, 6 (2023), pp. 342-363
[Sahu and Sahu, 2024]
A. Sahu, A. Sahu.
Revolutionary applications of generative AI in higher education institutes (HEIs) and its implications.
Library Philosophy and Practice (E-Journal), (2024), pp. 1-9
[Salas-Pilco and Yang, 2022]
S.Z. Salas-Pilco, Y. Yang.
Artificial intelligence applications in Latin American higher education: A systematic review.
International Journal of Educational Technology in Higher Education, 19 (2022), pp. 1-20
[Saoud and Sanséau, 2019]
S. Saoud, P.Y. Sanséau.
Student loyalty through perceived service quality and satisfaction.
Advances in Social Sciences Research Journal, 6 (2019), pp. 171-185
[Sengupta and Chakraborty, 2020]
S. Sengupta, T. Chakraborty.
Use of chatbots in higher education: A study of student engagement and satisfaction.
Education and Information Technologies, 25 (2020), pp. 5147-5165
[Serban et al., 2013]
D. Serban, M. Gruiescu, C. Mitrut.
Quantitative study on students satisfaction concerning private economics universities in Romania.
Procedia-Social and Behavioral Sciences, 83 (2013), pp. 723-728
[Schiff, 2021]
D. Schiff.
Out of the laboratory and into the classroom: The future of artificial intelligence in education.
AI & Society, 36 (2021), pp. 331-348
[Schumacker and Lomax, 2016]
R.E. Schumacker, R.G. Lomax.
A beginner’s guide to structural equation modeling.
4th ed., Routledge/Taylor Francis Group, (2016),
[Shahzad et al., 2024]
M.F. Shahzad, S. Xu, W.M. Lim, X. Yang, Q.R. Khan.
Artificial intelligence and social media on academic performance and mental well-being: Student perceptions of positive impact in the age of smart learning.
Heliyon, 10 (2024),
[Sikyr et al., 2020]
M. Sikyr, N.I. Basmanova, M. Abrashkin.
Comparison of study motivation and job expectations of Russian full-time and part-time university students.
International Journal of Educational Management, 34 (2020), pp. 549-561
[Sit et al., 2020]
C. Sit, R. Srinivasan, A. Amlani, K. Muthuswamy, A. Azam, L. Monzon, D.S. Poon.
Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: A multicentre survey.
Insights into Imaging, 11 (2020), pp. 1-6
[Sousa et al., 2021]
M.J. Sousa, F. Dal Mas, A. Pesqueira, C. Lemos, J.M. Verde, L. Cobianchi.
The potential of AI in health higher education to increase the students’ learning outcomes.
TEM Journal, 10 (2021), pp. 488-497
[Speybroeck et al., 2012]
S. Speybroeck, S. Kuppens, J. Van Damme, P. Van Petegem, C. Lamote, T. Boonen, J. de Bilde.
The role of teachers' expectations in the association between children's SES and performance in kindergarten: A moderated mediation analysis.
[Stanfill and Marc, 2019]
M.H. Stanfill, D.T. Marc.
Health information management: Implications of artificial intelligence on healthcare data and information management.
Yearbook of Medical Informatics, 28 (2019), pp. 56-64
[Stokel-Walker, 2022]
C. Stokel-Walker.
AI bot ChatGPT writes smart essays-should academics worry?.
[Strzelecki, 2024]
A. Strzelecki.
ChatGPT in higher education: Investigating bachelor and master students’ expectations towards AI tool.
Education and Information Technologies, 30 (2024), pp. 10231-10255
[Subandi and Hamid, 2021]
S. Subandi, M.S. Hamid.
Student satisfaction, loyalty, and motivation as observed from the service quality.
Journal of Management and Islamic Finance, 1 (2021), pp. 136-153
[Taber, 2018]
K.S. Taber.
The use of Cronbach’s alpha when developing and reporting research instruments in science education.
Research in Science Education, 48 (2018), pp. 1273-1296
[Teresi et al., 2022]
J.A. Teresi, X. Yu, A.L. Stewart, R.D. Hays.
Guidelines for designing and evaluating feasibility pilot studies.
Medical Care, 60 (2022), pp. 95-103
[Tlili et al., 2023]
A. Tlili, B. Shehata, M.A. Adarkwah, A. Bozkurt, D.T. Hickey, R. Huang, B. Agyemang.
What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education.
Smart Learning Environments, 10 (2023), pp. 1-24
[Tossell et al., 2024]
C.C. Tossell, N.L. Tenhundfeld, A. Momen, K. Cooley, E.J. de Visser.
Student perceptions of ChatGPT use in a college essay assignment: Implications for learning, grading, and trust in artificial intelligence.
IEEE Transactions on Learning Technologies, 17 (2024), pp. 1069-1081
[Tsang et al., 2017]
S. Tsang, C.F. Royse, A.S. Terkawi.
Guidelines for developing, translating, and validating a questionnaire in perioperative and pain medicine.
Saudi Journal of Anaesthesia, 11 (2017), pp. S80-S89
[Tukiran et al., 2021]
M. Tukiran, P. Tan, W. Sunaryo.
Obtaining customer satisfaction by managing customer expectation, customer perceived quality and perceived value.
Uncertain Supply Chain Management, 9 (2021), pp. 481-488
[Tyron, 1994]
W.W. Tyron.
Expectations. Encyclopedia of human behaviour.
Academic Press, (1994),
[VanLehn, 2011]
K. VanLehn.
The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems.
Educational Psychologist, 46 (2011), pp. 197-221
[Wang, 2022]
Z. Wang.
Computer-assisted EFL writing and evaluations based on artificial intelligence: A case from a college reading and writing course.
Library Hi Tech, 40 (2022), pp. 80-97
[Weerasinghe and Fernando, 2017]
I.S. Weerasinghe, R.L. Fernando.
Students' satisfaction in higher education.
American Journal of Educational Research, 5 (2017), pp. 533-539
[Wiers-Jenssen et al., 2002]
J. Wiers-Jenssen, B.R. Stensaker, J.B. Grøgaard.
Student satisfaction: Towards an empirical deconstruction of the concept.
Quality in Higher Education, 8 (2002), pp. 183-195
[Wong and Chapman, 2023]
W.H. Wong, E. Chapman.
Student satisfaction and interaction in higher education.
Higher Education, 85 (2023), pp. 957-978
[Wu et al., 2020]
E.H.K. Wu, C.H. Lin, Y.Y. Ou, C.Z. Liu, W.K. Wang, C.Y. Chao.
Advantages and constraints of a hybrid model K-12 E-learning assistant chatbot.
IEEE Access, 8 (2020), pp. 77788-77801
[Yavas et al., 2004]
U. Yavas, M. Benkenstein, U. Stuhldreier.
Relationships between service quality and behavioral outcomes: A study of private bank customers in Germany.
International Journal of Bank Marketing, 22 (2004), pp. 144-157
[Yu et al., 2024]
C. Yu, J. Yan, N. Cai.
ChatGPT in higher education: Factors influencing ChatGPT user satisfaction and continued use intention.
Frontiers in Education, 9 (2024),
[Youssef et al., 2024]
E. Youssef, M. Medhat, S. Abdellatif, M. Al Malek.
Examining the effect of ChatGPT usage on students’ academic learning and achievement: A survey-based study in Ajman, UAE.
Computers and Education: Artificial Intelligence, 7 (2024),
[Wang et al., 2023]
T. Wang, B.D. Lund, A. Marengo, A. Pagano, N.R. Mannuru, Z.A. Teel, J. Pange.
Exploring the potential impact of artificial intelligence (AI) on international students in higher education: Generative AI, chatbots, analytics, and international student success.
Applied Sciences, 13 (2023),
[Wilson et al., 2016]
A. Wilson, V. Zeithaml, M.J. Bitner, D. Gremler.
Services marketing: Integrating customer focus across the firm.
3rd ed., McGraw Hill, (2016),
[Zammuto et al., 1996]
R.F. Zammuto, S.M. Keaveney, E.J. O'Connor.
Rethinking student services: Assessing and improving service quality.
Journal of Marketing for Higher Education, 7 (1996), pp. 45-70
[Zengin et al., 2011]
B. Zengin, L.M. Sen, S.A. Solmaz.
A research on sufficiency of university education about satisfaction of university student's career expectations.
Procedia-Social and Behavioral Sciences, 24 (2011), pp. 496-504