Artificial Intelligence (AI) increasingly influences daily life, yet a comprehensive understanding of responsible AI usage, particularly from legal, ethical, and practical perspectives, remains limited. This quantitative study, conducted in Lebanon and grounded in the Unified Theory of Acceptance and Use of Technology (UTAUT), employs Structural Equation Modeling (SEM) to examine responsible AI usage. Findings reveal that Responsible AI practices significantly influence technology adoption, with significant associations between the model variables and users’ behavioral intentions. The study offers both theoretical and practical implications: theoretically, it extends the UTAUT model by integrating the Responsible AI construct, contributing insights into user behavior in emerging markets; practically, it highlights how Responsible AI shapes user engagement. The paper concludes with recommendations for Lebanese organizations to promote ethical AI practices and implement guidelines encouraging responsible adoption.
In recent years, scholars have shown growing interest in the evolving domain of Artificial Intelligence (AI) (He, 2024), which is reshaping various aspects of human life (Mpinga et al., 2022; Wang et al., 2020). AI technologies, which influence individual behaviors and drive economic value creation, have emerged as a crucial force across multiple domains (Liu et al., 2024). Forecasts from Statista indicate that the global AI software market is projected to exceed 126 billion U.S. dollars by 2030 (Statista, 2024).
AI, composed of algorithms that enable machines to learn autonomously from data, encompasses technologies that collaboratively function through artificial neural networks, allowing machines to perceive, understand, act, and learn at an intelligence level akin to human cognition (Kokina et al., 2025). These responsive systems mimic human intelligence by strategically and abstractly thinking while creatively solving complex problems across diverse tasks (Mhlanga, 2022). AI’s capacity extends to perceiving its surroundings by capturing and processing visual, auditory, and spoken data, making informed decisions via inference engines, and performing actions within the physical environment (Gursoy & Cai, 2025; Wang et al., 2020).
Despite AI’s vast benefits, there is an urgent need to examine its legal, ethical, and practical ramifications, revealing a significant gap in our preparedness to manage these swiftly advancing technologies (Weber et al., 2023). Addressing these concerns, known as Responsible AI, is crucial, as questions around the ethical and responsible implementation of AI remain underexplored (Kar & Kushwaha, 2023). Although academic interest in AI has surged, research on its societal impact and ethical considerations remains limited (Aler et al., 2024). Over the past decade, AI tools have drawn substantial attention from scholars and businesses (Borges et al., 2021). While Information Systems researchers have long envisioned seamless human-computer interaction (Schuetzler et al., 2020; Shi et al., 2024), the ethical and societal implications still require thorough investigation. This gap is particularly pertinent in Lebanon, a nation facing severe socio-economic challenges after what has been described as the largest Ponzi scheme in history, compounded by pervasive corruption. The newly formed government established a ministry for digital transformation and AI, hoping to shift the trajectory toward a more sustainable economy. In this context, examining AI’s potential in fostering responsible organizational practices amid crisis conditions is highly relevant. Therefore, integrating ethical and societal considerations is essential to ensure Responsible AI implementation (Arrieta et al., 2020).
Moreover, exploring the impact of Responsible AI usage on users’ behavioral intentions is also critical in today’s digital age, where AI technology increasingly influences daily life and decision-making (Kunz et al., 2024). This study helps policymakers, businesses, technology developers, educators, and entrepreneurs in Lebanon and similar socio-economic environments evaluate Responsible AI standards, such as openness, fairness, privacy, and accountability, which affect behavioral intentions toward its adoption. These considerations carry important implications for deploying AI in ways consistent with social norms.
Growing global concerns about AI’s ethical consequences, especially in regions with distinct cultural and regulatory contexts like Lebanon, underscore the need for user-centered, socially Responsible AI systems (Ipsos Lebanon, 2025). Lebanon presents a distinctive research setting, grappling with a severe socio-economic and political crisis, compounded by COVID-19, the Beirut port explosion of August 2020, and rampant inflation. According to IDAL (2024), Lebanon’s Information and Communication Technology (ICT) sector is projected to surpass US$ 7 billion by 2028, with ICT service exports doubling recently. This makes the country an opportune case to study the responsible deployment of AI under crisis conditions.
Amid rapid AI adoption, we propose an enhanced UTAUT model to evaluate how Responsible AI practices influence technology use among end users (employees and customers). This study assesses how organizations apply AI responsibly. Notably, organizational discussions on AI often generate uncertainty and apprehension, with “responsibility” frequently serving as shorthand for moral integrity or ethical acceptance (Tigard, 2021). Pineiro-Chousa, Vizcaíno-González, and Ribeiro-Navarrete (2019) argue that multiple factors influence individual behavior, underscoring the need for broader ethical frameworks to capture AI’s effects on human behavior.
To this end, we first provide a contextual and theoretical foundation, including an overview of AI, AI usage, and Responsible AI. We then present the quantitative methodology using the Structural Equation Model (SEM) to extend the UTAUT model through a survey-based test. Finally, we discuss the study’s conclusions, contributions, managerial implications, limitations, and directions for future research.
Contextual and theoretical background
Across multiple industries, organizations are increasingly emphasizing the importance of emerging technologies such as AI (Kranz et al., 2022). AI consists of algorithm-driven machines capable of autonomously learning from data to perceive, reason, predict, and exhibit intelligent behavior. AI systems can address real-world challenges and make decisions in real-time or near-real-time, often substituting for human judgment (Wang et al., 2020). By transforming environments, practices, and social interactions, AI is revolutionizing how tasks are performed and decisions are made (Venumuddala & Kamath, 2023).
The defining characteristics of AI have evolved significantly, advancing beyond traditional technological capabilities (Kulkov et al., 2023; Sharma & Biros, 2021). Within organizations, AI often replicates human cognitive performance, with the potential not only to assist human decision-making but also, in some cases, to replace humans in tasks requiring complex reasoning and analysis (Chakravorti et al., 2019; French & Shim, 2024). Advances in AI and intelligent machinery are also driving innovative services for citizens, governments, and businesses, enhancing efficiency across nearly all aspects of life (Haddad & Chedrawi, 2019).
AI and ethical considerations
As technological advancements accelerate, many studies highlight the increased outsourcing of innovation across a wide range of industries (Ribeiro-Soriano & Huarng, 2013). According to Kokina et al. (2025), the rapid rise of AI has sparked public debate about its societal and employment effects, which appear more extreme than those of previous technologies. Several studies outline the future of work and stress the need to re-educate employees and re-engineer organizations in preparation for these changes (Chedrawi & Haddad, 2022; Haddad & Chedrawi, 2019). Dignum (2020) outlines four possible scenarios regarding AI’s social and economic impact. The “business-as-usual” scenario anticipates steady long-term growth with short-term instability, as businesses drive economic expansion (Ribeiro-Soriano, 2017). In contrast, the “techno-revolutionist” scenario foresees substantial job losses as AI surpasses human skills. The “techno-utopist” view envisions AI fostering prosperity and new wealth distribution models, characterized by surplus rather than deprivation, with humans working less as automated machines take over tasks. Finally, the “techno-pessimist” view forecasts limited employment impact, potentially slowing economic growth.
In today’s digital era, emerging AI tools are disrupting established business frameworks, as noted by Felicetti et al. (2024). Dignum (2020) observes that AI can manage complex, hazardous, or repetitive tasks, support disaster response, and aid sectors such as fraud detection, healthcare, and defense. Yet realizing AI’s full potential requires integrating ethical, legal, and social considerations into system design. Although AI enhances consistency, reduces costs, mitigates risks, and tackles intricate problems, overreliance without human oversight could lead to harmful outcomes (Taddeo & Floridi, 2018).
With their predictive capabilities, AI systems subtly influence human choices and behaviors, potentially compromising human control over societal and environmental aspects. As responsibilities shift to AI, risks may compound and become deeply embedded. Balancing AI’s benefits with its dangers is therefore essential (Dignum, 2020; Taddeo & Floridi, 2018). Few companies currently deploy AI in alignment with ethical guidelines, organizational principles, and societal values. Increased focus is essential for Responsible AI frameworks that enable firms to apply AI effectively and ethically (Jobin et al., 2019).
Given their predictive power, AI systems may influence human decisions and behaviors in ways that reduce human autonomy. As they take on complex tasks, potential negative consequences may grow (Papagiannidis et al., 2025). Therefore, it is essential to strike a balance between AI’s potential and the risks associated with its development (Dignum, 2020; Rane et al., 2024; Taddeo & Floridi, 2018). To minimize these risks and maximize benefits, organizations must prioritize societal needs when implementing AI technologies (Dwivedi et al., 2021; Shen & Zhang, 2024).
In practice, however, few companies fully align AI deployment with ethical principles, organizational goals, and societal expectations. Greater attention is needed to develop Responsible AI frameworks that empower businesses to harness AI both effectively and ethically (Abdulqader et al., 2024; Jobin et al., 2019).
Responsible and ethical AI
Responsible AI is a structured governance approach that ensures the ethical design, deployment, and management of AI, promoting transparency, accountability, and public trust while minimizing risks such as privacy breaches and misuse (Radanliev et al., 2024). It emphasizes responsible use throughout the AI lifecycle, including design, deployment, evaluation, and monitoring, to advance both privacy protection and technological development (Wang et al., 2020).
Despite AI’s benefits, its risks must be acknowledged (Polyportis & Pahos, 2024). Dignum (2019) argues that AI should be applied responsibly to enhance human welfare and business operations. Responsible AI, she adds, goes beyond ethical compliance and involves three pillars: (1) preparing society to take responsibility for AI’s impact, (2) developing AI systems that align with ethical principles and human values, and (3) creating socio-technical frameworks enabling people across cultures to coexist and interact with AI (Dignum, 2020).
For Papagiannidis et al. (2025), the term “Responsible AI” is frequently used interchangeably with “ethical AI,” which Eitel-Porter (2021) describes as the conscientious application of AI aimed at empowering employees, enhancing organizational performance, and improving customer experiences through principled intent. This concept highlights AI as more than a technological tool: it is a means to benefit both individuals and society while upholding ethical standards. Likewise, Jobin et al. (2019) describe Responsible AI as a governance model for developing ethical, transparent, and accountable AI. They argue that such frameworks should prioritize human-centeredness, align with stakeholder expectations, and comply with legal standards. Organizations should therefore implement Responsible AI practices that promote social good, strengthen ethical usage, and build trust in AI. Responsible AI is expected to encourage fairness, clarify AI outcomes, and ensure robust, secure systems (Arrieta et al., 2020), ultimately preventing severe negative impacts on human welfare (Trocin et al., 2021).
Floridi and Cowls (2021) highlight five major institutional initiatives aimed at establishing guidelines for responsible, socially beneficial AI. Collectively, these initiatives articulate 47 guiding principles that align closely with four main principles from bioethics: beneficence, non-maleficence, autonomy, and justice. Floridi and Cowls suggest that bioethics offers the closest parallel and that its principles can be adapted to AI’s challenges (Floridi, 2013; Floridi et al., 2018). They further propose a fifth principle, explicability, which emphasizes intelligibility and accountability. Their contribution underscores that technological advancements should serve humanity’s welfare without causing harm (Liu et al., 2021).
Beneficence emphasizes human dignity and sustainability, ensuring AI fosters conditions for flourishing and environmental stewardship for future generations. It promotes a positive social impact, focusing on humanity’s common good (Floridi & Cowls, 2021; Floridi et al., 2018). Non-maleficence stresses the obligation to avoid intentional harm, including privacy violations caused by human actions or unintended machine behavior; Responsible AI accordingly prioritizes privacy protection, safety assurance, and reliable system operation within predefined moral standards (Floridi et al., 2018; Jobin et al., 2019). Autonomy pertains to how AI systems influence human agency, particularly in workplaces. Responsible AI should support rather than undermine autonomy, allowing humans to retain control while selectively delegating decision-making (Dignum, 2020; Floridi et al., 2018). The principle holds only when authority is balanced between human oversight and machine responsibilities (Floridi et al., 2021). Justice emphasizes the fair distribution of AI benefits, focusing on minimizing bias in training datasets. Responsible AI should prevent discrimination, promote equitable access, and advance global justice (Floridi et al., 2018, 2021). Newman et al. (2020) observed that AI-driven decisions may sometimes appear less fair than human ones, though Responsible AI could foster fairness and provide organizations with more accurate insights (Wang et al., 2020).
The fifth principle, explicability, combines intelligibility—understanding how AI systems operate—and accountability—identifying who is responsible for them (Floridi et al., 2018, 2021). Intelligibility requires making AI processes transparent so that stakeholders can interpret automated decisions (Jobin et al., 2019). This transparency allows society to assess AI’s impact, whether positive or harmful. Accountability is equally vital, ensuring that the parties deploying AI are identifiable and their intentions understood, thereby fostering trust between people and transformative AI technologies (Floridi et al., 2018, 2021).
Extended UTAUT model incorporating responsible AI
The Unified Theory of Acceptance and Use of Technology (UTAUT) is widely utilized across fields to predict system usage and support technology adoption (Chatterjee et al., 2023). Developed by Venkatesh et al. (2003) within information systems (IS) research, the UTAUT framework explores factors influencing technology acceptance. It has since been applied to contexts such as interactive whiteboards (Šumak & Šorgo, 2016), mobile health applications (Hoque & Sorwar, 2017), and ERP software adoption (Chauhan & Jaiswal, 2016), demonstrating strong explanatory power (Venkatesh et al., 2003). Reyes-Mercado (2018) emphasized the role of performance expectancy in emerging markets, highlighting how perceived productivity gains drive user acceptance.
Building on this, our study introduces an extended UTAUT model incorporating Responsible AI to assess its acceptance among stakeholders. The model’s four core constructs, Performance Expectancy (PE), Effort Expectancy (EE), Social Influence (SI), and Facilitating Conditions (FC), influence users’ Behavioral Intentions (BI) (Venkatesh et al., 2003).
Behavioral Intention (BI) is shaped by individual attitudes and subjective norms and serves as a predictor of actual behavior. It reflects users’ commitment to adopt technology and is influenced by the primary UTAUT factors: PE, EE, SI, and FC (Blut et al., 2022; Pickett et al., 2012).
The UTAUT model (Table 1) identifies four essential factors—performance expectancy, effort expectancy, social influence, and facilitating conditions—that significantly affect technology adoption. According to Venkatesh et al. (2003): (1) performance expectancy refers to the belief that technology will improve performance or effectiveness, with customers more likely to adopt AI if they perceive clear benefits; (2) effort expectancy denotes the perceived ease of use, with intuitive and user-friendly systems driving adoption; (3) social influence refers to the impact of peers, leaders, or social standards on a person’s decision to adopt technology, particularly in collectivist or tightly knit cultures; and (4) facilitating conditions denote the availability of resources and support systems, such as training, infrastructure, or technical assistance, that enable technology adoption. Together, these variables shape behavioral intention by influencing users’ perceptions of value, usability, social appeal, and feasibility.
Main UTAUT Variables.
| Variables | Description | Sources | Hypothesis |
|---|---|---|---|
| Performance Expectancy PE | The degree to which an individual believes using a system will enhance job performance. It reflects employees’ confidence in the system’s ability to improve productivity and work efficiency, directly influencing Behavioral Intention (BI). In Higher Education Settings (HES), PE indicates the extent to which teachers believe adopting e-learning tools will improve teaching effectiveness. | Venkatesh et al. (2003); Oliveira et al. (2014). | H1: Performance Expectancy of AI is positively related to employees’ behavioral intention to use it in organizations. |
| Effort Expectancy EE | The ease with which users can interact with a system, influencing their willingness to adopt it. Studies show that PE, task-technology fit, SI, and FC affect user adoption. Moreover, EE positively impacts Behavioral Intention (BI). | Venkatesh et al. (2003); Tosuntas et al. (2015). | H2: Effort Expectancy is positively related to employees’ behavioral intention to use AI in organizations. |
| Social Influence SI | The degree to which an individual perceives that peers, supervisors, or family members expect them to use a new system. It reflects how much one feels encouraged or pressured by others. In educational settings, SI reflects how strongly teachers believe colleagues see e-learning tools as essential. | Venkatesh et al. (2003); Venkatesh and Brown (2001). | H3: Social Influence is positively related to employees’ behavioral intention to use AI in organizations. |
| Facilitating Conditions FC | The degree to which individuals believe organizational resources and support are available for effective system use. It reflects users’ confidence that necessary tools, infrastructure, and assistance are accessible. In educational settings, FC includes access to computer hardware, software, and data quality assurance (Hayhurst, 2019). | Venkatesh et al. (2003); Oliveira et al. (2014); Tosuntas et al. (2015). | H4: Facilitating Conditions are positively related to employees’ behavioral intention to use AI in organizations. |
The question arises whether individuals can be expected to use AI responsibly, particularly with the Responsible AI factor in play. This study adopts Floridi and Cowls’ (2021) definition of responsible and socially beneficial AI, which builds on four bioethics principles—beneficence, non-maleficence, autonomy, and justice—and adds a fifth principle, explicability, combining intelligibility and accountability.
Integrating these principles into the UTAUT model adds ethical and human-centric dimensions not fully captured by the original framework (Sadek et al., 2024). While the UTAUT model emphasizes functionality and user perception (Venkatesh et al., 2003), adoption of AI also depends on trustworthiness, safety, fairness, and alignment with personal beliefs (Shin, 2021).
According to Sun and Zhang (2025), incorporating beneficence and non-maleficence pertains to an AI system's capacity to enhance well-being and prevent harm, directly affecting users’ acceptance or rejection. Iyer et al. (2025) note that autonomy enables consumers to retain control and make educated decisions, thereby enhancing trust and perceived fairness. Justice pertains to equity and impartiality in outcomes, fostering social acceptability, particularly in heterogeneous societies such as Lebanon. Finally, explicability, encompassing intelligibility (comprehension of AI functionality) and accountability, addresses a significant deficiency in the UTAUT model by tackling transparency and responsibility, both critical to AI adoption decisions.
These additional principles are essential given AI systems’ opacity and potential societal impact. Incorporating them into the adoption framework ensures the enhanced model captures both functional and moral rationales influencing user behavior. This integration acknowledges that ethical assurance, alongside performance and usability, increasingly drives AI acceptance (Cowls et al., 2021).
Within Responsible AI, beneficence emphasizes human dignity and sustainability, fostering social impact while ensuring personal well-being and collective benefits for humanity (Floridi & Cowls, 2021; Floridi et al., 2018). Accordingly, this research theorizes that employees will show positive behavioral intentions if they perceive AI systems as beneficial.
Non-maleficence, the duty to avoid harm—such as privacy violations—is central to Responsible AI, which prioritizes individual security, personal safety, and reliable system performance in line with ethical values (Floridi et al., 2018; Jobin et al., 2019). Numerous studies (Floridi et al., 2021; Roca et al., 2009; Wang, Zhao, Zhang, & Evans, 2021) suggest that perceived technological security significantly influences trust and behavioral intentions. Hence, this research posits that employees’ perception of AI’s non-maleficence will positively impact their intention to adopt it.
As for autonomy, it pertains to the effect of AI systems on human independence in the workplace. Responsible AI promotes autonomy by ensuring technology supports rather than diminishes individual agency (Floridi et al., 2018). Employees who feel AI does not compromise their autonomy are more inclined to adopt positive behaviors toward its use (Wang et al., 2020).
Justice involves the fair and equitable distribution of benefits. Responsible AI seeks to reduce bias, promote shared benefits, prevent social harms, and foster global justice (Floridi et al., 2018, 2021). Research indicates that employees are more receptive to decisions they perceive as fair and data-driven, enhancing their sense of justice in organizational processes (Colquitt et al., 2001; Wang et al., 2020).
Finally, explicability comprises two elements: intelligibility, which focuses on consistent communication with stakeholders about the actions, processes, outcomes, benefits, and harms of AI’s decision-making (Jobin et al., 2019), and accountability, which ensures clarity on who deploys AI, its purpose, and expected results. Together, intelligibility and accountability form the core of Floridi et al.’s (2021) Responsible AI framework. Wang et al. (2020) link explicability to satisfaction, suggesting that employees who comprehend AI operations, outcomes, and deployment intentions are more likely to feel informed and adopt supportive behaviors toward AI use.
Accordingly, the proposed extended UTAUT model, integrating these principles, is illustrated in Fig. 1 below.
Table 2 below summarizes our hypotheses:
Research Hypotheses.
This study utilizes concepts from AI technology and ethics, Responsible AI, and the UTAUT paradigm to build on previous research. Concerns around privacy, bias, and accountability in AI underline the need for frameworks to build and use AI ethically. Responsible AI emphasizes human-centric and ethical development, while the UTAUT model explains user adoption through performance expectancy, effort expectancy, social influence, and facilitating conditions. This paper investigates how ethical and Responsible AI principles affect users’ behavioral intention in Lebanon, offering a contextualized, interdisciplinary perspective that addresses theoretical gaps and provides practical recommendations.
Research methodology and descriptive analysis
Methodology
As mentioned earlier, Lebanon presents a distinctive setting, with stakeholders increasingly interested in advanced technologies such as AI (Ipsos Lebanon, 2025). The study employs Structural Equation Modeling (SEM), a powerful statistical tool for exploratory and predictive research in the social sciences (Valaei et al., 2017). SEM is particularly suited to analyzing complex relationships among latent constructs, including perceptions of Responsible AI, ethical concerns, and user behavior, while handling the limited samples and non-normal data distributions common in emerging economies such as Lebanon. SEM enables concurrent evaluation of measurement models (validity and reliability) and structural models (hypothesized linkages), offering comprehensive theoretical and empirical insights (Sarstedt et al., 2014).
Adopting a quantitative approach grounded in a positivist ontological and epistemological stance and a hypothetico-deductive methodology (Creswell, 2003; Klein & Myers, 1999; Orlikowski & Baroudi, 1991), this study proposes an extended UTAUT model to assess the influence of Responsible AI practices on adoption and usage behavior in Lebanon. Following the positivist philosophy, which values objective, observable “factual” knowledge (Creswell, 2003; Klein & Myers, 1999), the researchers employed a solid theoretical framework (UTAUT) rooted in philosophical realism and the hypothetico-deductive model (Orlikowski & Baroudi, 1991). After developing theoretical perspectives and extending UTAUT with relevant hypotheses, we designed a survey to collect data and assess the validity of the proposed relationships. As noted in IS literature, most studies adopt a positivist, hypothetico-deductive approach for data-driven analysis (Orlikowski & Baroudi, 1991; Walsh, 2014).
Data collection took place between January and March 2022, targeting a random sample across Lebanese sectors. A covariance-based SEM was estimated using AMOS to test hypothesized relationships among UTAUT constructs, Responsible AI dimensions, and Behavioral Intention (BI) toward AI adoption in organizations. The sample comprised 432 respondents.
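The model itself was estimated in AMOS; purely for illustration, the minimal sketch below expresses the same hypothesized structure in Python using the open-source semopy package. It is a sketch under stated assumptions, not the original analysis script: the file name and the composite-score columns (PE, EE, SI, FC, RAI, BI) are hypothetical labels.

```python
# Illustrative sketch only: the study's model was estimated in AMOS.
# Here the hypothesized structure (H1-H5) is written in semopy's
# lavaan-style syntax, assuming a CSV of composite scores with
# hypothetical column names.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
BI ~ PE + EE + SI + FC + RAI
"""

df = pd.read_csv("survey_composites.csv")  # hypothetical data file

model = Model(MODEL_DESC)
model.fit(df)

print(model.inspect(std_est=True))  # path coefficients, SEs, p-values
print(calc_stats(model).T)          # chi-square, df, RMSEA, CFI, TLI, etc.
```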
All statistical analyses were performed using SPSS version 25.0 (Chicago, IL, USA). Composite scores (PE, EE, SI, FC, BI, Responsible AI) were calculated by summing their constituent items, and reliability was confirmed with Cronbach’s alpha values above 0.7.
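For illustration, the following minimal sketch reproduces this scoring-and-reliability step in Python with the pingouin package (the analysis itself was run in SPSS); the raw-item file and the PE1–PE4 item names are hypothetical.

```python
# Minimal sketch, assuming raw item responses scored 1-7 in hypothetical
# columns PE1..PE4; the study's own analysis was performed in SPSS.
import pandas as pd
import pingouin as pg

df = pd.read_csv("survey_items.csv")   # hypothetical raw item responses
pe_items = ["PE1", "PE2", "PE3", "PE4"]

df["PE"] = df[pe_items].sum(axis=1)    # composite score: sum of related items

alpha, ci = pg.cronbach_alpha(data=df[pe_items])
print(f"Cronbach's alpha = {alpha:.3f} (95% CI: {ci[0]:.3f}-{ci[1]:.3f})")
# Scales are retained when alpha exceeds the conventional 0.7 threshold.
```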
Descriptive, statistical, and reliability analysis
In this section, we present the descriptive statistical analysis conducted on our sample, including mean, standard deviation (SD), median, minimum, and maximum for continuous data, while categorical variables were reported as absolute and relative frequencies (n and %), providing a comprehensive overview of the dataset’s distribution and variability. The sample comprised 244 females (56.5 %), 171 males (39.6 %), and 17 individuals identifying with another gender (3.9 %). The majority of participants were under 50 years old: 18–25 years (23.8 %), 26–49 years (67.8 %), and 50 years or older (8.3 %). Participants represented diverse sectors, including banking and finance (23.6 %), industry (15.5 %), healthcare (13.9 %), education (11.1 %), governmental/public (9 %), and other sectors (26.8 %). Concerning their educational level, 187 (43.3 %) had a master’s or Ph.D., 192 (44.4 %) had a bachelor’s or equivalent, and 53 (12.3 %) had high school or equivalent (refer to Appendix 1).
Regarding the reliability of the UTAUT variables, the results showed the following:
- Performance Expectancy (PE), reflecting the belief that technology improves personal performance, was measured with four items on a 7-point Likert scale from 1 (“strongly disagree”) to 7 (“strongly agree”). The scale proved reliable, with a Cronbach’s Alpha of 0.837. The mean PE score was 21.93 ± 4.66 out of 28, with a range of 4 to 28.
- Effort Expectancy (EE), denoting the perceived ease of technology use, was assessed with four items on a 7-point Likert scale. The scale was reliable, with a Cronbach’s Alpha of 0.929. The mean EE score was 21.20 ± 4.66 out of 28, ranging from 4 to 28.
- Social Influence (SI), capturing adjustment to social expectations, was measured with five items on a 7-point Likert scale. The scale was reliable, with a Cronbach’s Alpha of 0.900. The mean SI score was 23.0 ± 6.6 out of 35, ranging from 5 to 35.
- Facilitating Conditions (FC), referring to organizational and technical support, was measured on a 7-point Likert scale from 1 (“strongly disagree”) to 7 (“strongly agree”). FC achieved a Cronbach’s Alpha of 0.777, confirming its reliability. The mean FC score was 11.12 ± 2.38 out of 14, with scores ranging from 2 to 14.
The Responsible AI construct was assessed as a composite score of five components: (1) Beneficence (B), referring to positive societal contribution; (2) Non-Maleficence (NM), the commitment to avoiding intentional harm; (3) Autonomy (A), supporting individual independence; (4) Justice (J), which involves equitable distribution of benefits; and (5) Explicability (E), focusing on making system decisions understandable, accessible, and accountable. These components were measured with 16 items on a 7-point Likert scale. The resulting reliability was very high, with a Cronbach’s Alpha of 0.961. The mean Responsible AI score was 78.08 ± 19.36 out of 112, with values ranging from 16 to 112 (see Table 3).
- Beneficence (B), promoting human dignity, sustainability, and overall social good (Floridi & Cowls, 2021; Floridi et al., 2018), was assessed with four items. The mean Beneficence score was 20.82, with a range from 4 to 28.
- Non-Maleficence (NM), which emphasizes avoiding harm to individuals and safeguarding personal security (Floridi et al., 2018; Jobin et al., 2019), was assessed with three items scored from 1 (“strongly disagree”) to 7 (“strongly agree”). The mean NM score was 13.46, with scores ranging from 3 to 21.
- Autonomy (A), pertaining to the impact of AI systems on workplace agency (Floridi et al., 2021), was measured with three items. The mean Autonomy score was 14.92, with a range from 3 to 21.
- Justice (J), focusing on promoting global fairness and preventing social harm (Floridi et al., 2018, 2021), was assessed with three items. The mean Justice score was 14.26, with values between 3 and 21.
- Explicability (E), integrating intelligibility (understanding how the system works) and accountability (knowing who is responsible for its actions), was measured with three items. The mean Explicability score was 14.28, with a range from 3 to 21.
Reliability statistics.
Behavioral Intention (BI), representing an individual’s intention to use a technology and a direct antecedent of actual usage, was measured with three items on a 7-point Likert scale from 1 (“strongly disagree”) to 7 (“strongly agree”). The scale was reliable, with a Cronbach’s Alpha of 0.909. The mean BI score was 16.38 ± 3.69 out of 21, ranging from 3 to 21. Reliability statistics and Responsible AI scores are shown in Tables 3 and 4 below.
Representation of Responsible AI score.
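To make the composite arithmetic concrete, a short sketch (continuing the hypothetical data frame from the earlier snippet) sums the five dimension sub-scores into the Responsible AI composite and checks its theoretical 16–112 range.

```python
# Hedged sketch: the five dimension scores (4 + 3 + 3 + 3 + 3 = 16 items,
# each scored 1-7) sum to the Responsible AI composite; names hypothetical.
df["RAI"] = df[["B", "NM", "A", "J", "E"]].sum(axis=1)
assert df["RAI"].between(16, 112).all()  # theoretical range of the composite
```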
The structural equation model produced a Chi-square (χ²) value of 134.545 with 5 degrees of freedom (p < 0.001). Additional fit indices were: RMSEA = 0.245 (90 % CI: 0.210–0.282, p-close = 0.000), CFI = 0.921, and TLI = 0.669, indicating only partial model adequacy: the CFI approached conventional cutoffs, while the RMSEA and TLI did not meet recommended thresholds. Path analysis revealed that Behavioral Intention (BI) was significantly predicted by Performance Expectancy (p < 0.001; β = 0.177), Effort Expectancy (p = 0.008; β = 0.123), Facilitating Conditions (p < 0.001; β = 0.142), and Responsible AI (p < 0.001; β = 0.515). Social Influence was significant but negatively associated with BI (p = 0.017; β = −0.097). The model explained a substantial proportion of variance in Behavioral Intention (R² = 0.595), suggesting that nearly 60 % of the variability in intention to use AI was accounted for by the model predictors. The SEM results are represented in Table 5 and Fig. 2 below.
Extended UTAUT model integrating responsible AI to predict users’ behavioral intentions toward adoption of AI technologies in organizations.
A comprehensive analysis was conducted to examine the influence of UTAUT moderators on the responsible adoption of AI technology, evaluating their contributions to the ethical and effective implementation of AI systems. Venkatesh et al. (2003) included moderators in the UTAUT model such as gender, age, and experience. The extended structural equation model that included these variables produced a Chi-square (χ²) value of 179.393 with 14 degrees of freedom (p < 0.001), yielding a CMIN/DF ratio of 12.814. Additional fit indices were: RMSEA = 0.166 (90 % CI: 0.144–0.188, p-close = 0.000), CFI = 0.905, and TLI = 0.695.
Path analysis showed that Behavioral Intention (BI) was significantly predicted by Performance Expectancy (PE) (p < 0.001; β = 0.177), Effort Expectancy (EE) (p = 0.008; β = 0.123), Facilitating Conditions (FC) (p < 0.001; β = 0.142), and Responsible AI (AI) (p < 0.001; β = 0.515). Social Influence (SI) had a significant but negative effect on BI (p = 0.018; β = −0.097).
The model explained 59.6 % of the variance in Behavioral Intention (R² = 0.596). While the control variables (age, gender, and AI experience) were not modeled as direct predictors, their covariances with the core constructs suggest indirect influences; AI experience in particular showed significant positive associations with several predictors (PE, EE, FC, and Responsible AI). The SEM results are presented in Table 6 and Fig. 3 below.
Extended UTAUT with control variables (Gender, Age, AI Experience).
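For illustration, this extended specification can be written in lavaan-style syntax as in the hedged semopy sketch below, where the controls covary with the core predictors rather than predicting BI directly; all variable names are hypothetical.

```python
# Hedged sketch (semopy's lavaan-style syntax; variable names hypothetical):
# the controls enter as covariances with the predictors, not as direct
# predictors of BI, mirroring the description above.
from semopy import Model

EXTENDED_DESC = """
BI ~ PE + EE + SI + FC + RAI
AIexp ~~ PE + EE + FC + RAI
Age ~~ PE + EE + FC + RAI
Gender ~~ PE + EE + FC + RAI
"""

ext_model = Model(EXTENDED_DESC)  # then fit with ext_model.fit(df) as before
```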
While previous studies suggest that age and gender can moderate these relationships, often because older individuals face greater challenges with technology, this was not observed in our sample. Participants were already experienced technology users, which may explain the absence of age and gender effects: age neither increased nor decreased their intention to use AI technologies responsibly, a distinctive finding in this study’s context. The absence of a relationship between gender and the independent variables, together with the non-significant relationship between age and Responsible AI use, further suggests that familiarity and experience with technology may play a more critical role than demographic factors alone in shaping Responsible AI adoption.
Deconstructing responsible AI: A structural equation model of ethical dimensions influencing behavioral intention and use of AI in organizations
The revised structural equation model, which replaced the aggregate “Responsible AI” variable with its five ethical dimensions—Beneficence, Non-Maleficence, Autonomy, Justice, and Explicability—produced a Chi-square (χ²) value of 1643.256 with 19 degrees of freedom (p < 0.001), yielding a CMIN/DF ratio of 86.487. Model fit indices were: RMSEA = 0.445 (90 % CI: 0.427–0.464, p-close = 0.000), CFI = 0.491, and TLI = −0.473.
Path analysis revealed that Behavioral Intention (BI) was significantly predicted by Performance Expectancy (PE) (p < 0.001; β = 0.173), Effort Expectancy (EE) (p = 0.018; β = 0.107), Facilitating Conditions (FC) (p = 0.004; β = 0.123), Beneficence (p < 0.001; β = 0.377), Justice (p < 0.001; β = 0.165), Autonomy (p < 0.001; β = 0.217), and Explicability (p = 0.018; β = 0.080). Non-Maleficence had a significant but negative effect on Behavioral Intention (p < 0.001; β = −0.231), while Social Influence (SI) was nonsignificant (p = 0.105; β = −0.062).
The model accounted for 53.3 % of the variance in Behavioral Intention (R² = 0.533). Despite incorporating more granular ethical constructs of Responsible AI, the overall model fit decreased, potentially due to multicollinearity or added complexity in the expanded structure. These findings underline the importance of ethical AI attributes, especially Beneficence, Autonomy, and Justice, as critical drivers of AI adoption in organizational contexts. The SEM results are represented in Table 7 and Fig. 4 below.
Deconstructing responsible AI: A structural equation model of ethical dimensions influencing behavioral intention toward the use of AI in organizations.
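One simple way to probe the multicollinearity suspected above is to compute variance inflation factors (VIFs) for the five dimension scores; the hedged sketch below uses statsmodels with the hypothetical columns from the earlier snippets.

```python
# Hedged sketch: VIFs well above ~5-10 would support the multicollinearity
# explanation for the degraded fit; column names are hypothetical.
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

dims = ["B", "NM", "A", "J", "E"]   # the five ethical dimension scores
X = sm.add_constant(df[dims])       # add intercept column for proper VIFs
for i, col in enumerate(X.columns):
    if col != "const":
        print(f"VIF({col}) = {variance_inflation_factor(X.values, i):.2f}")
```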
Based on the above results from the SEM, this section discusses each hypothesis.
H1 (PE → BI): Performance Expectancy had a positive, significant effect on Behavioral Intention (β = 0.177, p < 0.001), supporting H1. The validation of Hypothesis 1 demonstrates that PE substantially affects the intention to embrace Responsible AI, as consumers are more likely to use AI when they view it as beneficial and performance enhancing. This corresponds with UTAUT research identifying PE as a significant predictor of adoption (Reyes-Mercado, 2018; Venkatesh et al., 2003) and highlights the need to demonstrate concrete advantages such as enhanced efficiency, precision, or decision-making assistance to promote user adoption. Recent studies have underscored that trust in AI’s ethical functioning strengthens impressions of utility, increasing adoption likelihood (Shahzad et al., 2024; Shin, 2021). Thus, advocating the performance advantages of ethically designed AI can strategically enhance user engagement. Similar findings are observed in mobile banking and enterprise systems (Yu et al., 2024), reinforcing PE’s robustness as a predictor in emerging technologies.
H2 (EE → BI): Effort Expectancy also showed a positive effect on Behavioral Intention (β = 0.123, p = 0.008), confirming H2. The validation of Hypothesis 2 indicates that EE exerts a positive and statistically significant influence on Behavioral Intention to utilize Responsible AI. This suggests that consumers are more inclined to embrace Responsible AI when they view it as simple to comprehend and operate. This finding aligns with the UTAUT paradigm, wherein EE, characterized as ease of use, remains a crucial factor, especially in early adoption stages (Venkatesh et al., 2003). In Responsible AI, simplicity and user-friendliness reduce cognitive hurdles and enhance user confidence. Prior research shows intuitive interfaces and reduced complexity encourage greater user engagement (Shin, 2021). Furthermore, responsible design strategies emphasizing explainability, openness, and user-friendliness can strengthen effort expectancy (Malhan et al., 2024). Consequently, streamlined user engagement with Responsible AI can accelerate acceptability and ethical application, highlighting EE’s positive impact on behavioral intention towards Responsible AI.
H3 (SI → BI): Social Influence showed a weak but negative association with Behavioral Intention (β = −0.097, p = 0.017), thus not supporting H3. The rejection of Hypothesis 3 indicates that, contrary to earlier UTAUT findings (Venkatesh et al., 2003), consumers may resist Responsible AI when experiencing external social pressure or normative expectations. A plausible explanation is that AI is still evolving, leading users to respond with skepticism or defensiveness when they perceive pressure from peers, organizations, or societal narratives to embrace these technologies. Studies suggest that pressure without personal conviction or evident usefulness may elicit resistance or a perceived loss of autonomy in technology utilization (Russo, 2023). Within Responsible AI, users may scrutinize the authenticity of social messages advocating for ethical AI utilization that lack empirical support (Kumar et al., 2022). This finding underscores the necessity for firms and policymakers to cultivate intrinsic motivation and deliver authentic, value-oriented communication instead of depending on social persuasion to encourage Responsible AI adoption.
H4 (FC → BI): Facilitating Conditions had a positive, significant effect (β = 0.142, p < 0.001), supporting H4. The validation of Hypothesis 4 indicates that sufficient organizational, technological, and infrastructural support is essential for promoting user engagement with Responsible AI systems. In the UTAUT paradigm, facilitating conditions such as access to resources, training, and support are critical facilitators of technology adoption, especially in intricate fields like AI (Venkatesh et al., 2003). Within Responsible AI, institutional procedures, ethical guidelines, and user support are paramount, as these elements facilitate adoption and bolster confidence and accountability (Turlapati et al., 2024). Prior studies indicate that users adopt Responsible AI more readily when it is characterized by dependable infrastructure, explicit usage regulations, and continuous instruction (Dwivedi et al., 2021). This underscores the need for comprehensive implementation strategies and user-centric support to ensure ethical and effective AI use.
H5 (Responsible AI → BI): The validation of Hypothesis 5 indicates that Responsible AI had the most significant positive impact on Behavioral Intention (β = 0.515, p < 0.001), highlighting its crucial role in influencing user interaction with AI technologies. This notable effect indicates that when AI systems are viewed as ethically constructed, emphasizing openness, fairness, accountability, and privacy, users are far more likely to embrace and trust them. This corresponds with recent studies highlighting the role of ethical AI practices in establishing trust and sustained usage (Dignum, 2020; Eitel-Porter, 2021; Floridi & Cowls, 2021). Users now assess not only AI’s capabilities but also its design and governance. Research indicates that ethical factors strongly affect trust and acceptability, especially in sensitive or high-stakes situations (Shin, 2021). The pronounced effect observed underscores that mere technological performance is inadequate; users expect AI to conform to human values and societal standards (Rai et al., 2020). Thus, embedding accountability in AI development and communication is crucial to build ethical alignment and encourage behavioral adoption.
The focus of this study was on behavioral outcomes, that is, on understanding and predicting how people will act. Behavioral intention is therefore treated as a key predictor of future behavior: what users intend to do is the strongest indicator of their eventual actions.
Overall, findings indicate that participants perceived AI as embodying beneficence, non-maleficence, autonomy, justice, and explicability, leading to high behavioral intentions (BI). As these ethical dimensions strengthen, BI rises accordingly. Consequently, if AI technology is applied responsibly, it will see greater adoption by employees and deployment across organizations.
Finally, to create a better society in Lebanon through Responsible AI technology, with ethics as a priority, we suggest the following recommendations, adapted from Floridi et al. (2018) and based on our findings:
Recommendation 1 - Assessment. Evaluate existing Lebanese institutions’ ability to address mistakes caused by AI agents and systems. Additionally, assess whether current regulations provide an ethical legislative framework for technological use. Design AI systems that reduce inequality and uphold human autonomy in Lebanon.
Recommendation 2 - Development. Establish frameworks for explainable AI, since transparency is essential for trust building in technology. Additionally, strengthen IT infrastructure within Lebanese organizations and implement oversight mechanisms to ensure accountability, prevent bias, and promote fairness and equity.
Recommendation 3 - Incentivization. Provide financial incentives for AI projects prioritizing social and environmental benefits. Furthermore, offer funding to integrate ethical, legal, and social considerations into AI development.
Recommendation 4 - Ethical Infrastructure and Awareness. Promote the creation of self-regulatory codes of conduct for AI and data professionals, clarify ethical responsibilities, and strengthen corporate governance through training for board members of Lebanese organizations. Additionally, support the development of educational curricula and nationwide awareness campaigns to inform the public of AI’s societal, legal, and ethical implications. These efforts are essential to build a culture of accountability, transparency, and trust in AI-driven transformation.
The figure below presents the main Responsible AI components along with the suggested recommendations to ensure a better society in Lebanon using AI technology responsibly (Fig. 5).
Conclusion, implications, and future research
Conclusion
This study deepens our understanding of how Responsible AI practices shape adoption and behavioral intention to use technology among employees and customers in organizational contexts. It offers both theoretical and practical contributions by providing a structured overview of AI and Responsible AI and empirically validating an extended UTAUT model. Using survey-based quantitative methods and SEM, the findings highlight the significance of ethical dimensions in shaping users’ intentions and actual use. Extending the UTAUT model provides a timely and context-specific contribution to AI adoption research, especially in environments marked by socio-economic uncertainty.
Theoretical implications
The contribution is twofold. First, this paper highlights the responsibility of AI systems—Responsible AI. Given AI’s widespread adoption and presence in daily life, we emphasized the necessity of examining its impact on human behavior from ethical and responsibility-driven perspectives, thereby advancing organizational research in this field. Second, we extend the UTAUT model by incorporating the Responsible AI variable. Using the foundational UTAUT structures of Venkatesh et al. (2003), our model assesses stakeholder acceptance of AI technology in the Lebanese context and contributes to the literature on user intentions during times of crisis.
Practical implications
Analysis revealed significant associations between the model variables and users’ behavioral intention toward AI technologies. Practical findings demonstrate that PE, EE, FC, and Responsible AI positively influence employees’ willingness to adopt and utilize AI in organizational settings, whereas Social Influence showed a small negative association. Although AI technology is widely seen as beneficial for its users, ethical considerations, privacy concerns, and trust remain paramount. Thus, we urge users to deploy, implement, and use this technology responsibly while adhering to ethical responsibilities. We highlight the importance for managers to adopt Responsible AI practices to facilitate seamless integration, especially in Lebanon’s times of crisis, where responsible adoption could turn challenges into opportunities. We encourage organizations and managers to enhance user awareness of AI and Responsible AI principles through campaigns, training, and effective communication to shape future users’ behavioral intentions.
To achieve these objectives, we recommend Lebanese organizations take the following steps: (1) Assess institutional and regulatory capacity to ensure ethical frameworks for technological advancements and design AI systems that foster equality and autonomy; (2) Develop explainable AI frameworks to build trust, with oversight practices to prevent bias and ensure fairness; (3) Provide financial incentives for AI projects that prioritize social and environmental welfare while integrating ethical, legal, and social considerations; and (4) Encourage self-regulatory codes of conduct in data- and AI-related professions, alongside educational programs and public awareness initiatives. Implementing these measures will foster a more ethical and trustworthy environment grounded in Responsible AI practices.
Limitations and future research
This study, despite its contributions, has some limitations that should be acknowledged. First, the sample size, although sufficient for SEM, may constrain the generalizability of the findings to broader populations. Second, the Lebanese focus limits applicability to other cultural or regional settings. Third, the extended conceptual model was not tested in specific industries—such as education, healthcare, or banking—thereby limiting sector-specific insights.
Future research could address these constraints by employing larger, more diverse samples and validating the model across multiple sectors and geographical regions. Additionally, expanding this work through longitudinal designs or mixed-methods approaches may further deepen understanding of how Responsible AI influences user behavior over time.
CRediT authorship contribution statement
Charbel Chedrawi: Writing – review & editing, Writing – original draft, Investigation, Data curation, Conceptualization. Gloria Haddad: Writing – review & editing, Methodology, Investigation, Formal analysis, Data curation. Abbas Tarhini: Writing – review & editing, Software, Methodology, Conceptualization. Souheir Osta: Resources, Methodology, Investigation. Nahil Kazoun: Validation, Data curation.