Journal of Innovation & Knowledge
Vol. 10, Issue 6 (November - December 2025)
AI adoption, employee depression and knowledge: How corporate social responsibility buffers psychological impact
Byung-Jik Kim (a), Julak Lee (b, corresponding author: julaklee71@cau.ac.kr)
(a) Associate Professor, College of Business Administration, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, Gyeonggi-do, Republic of Korea
(b) Professor, Department of Industrial Security, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, 06974 Seoul, Republic of Korea
Figures (3)
Tables (4)
Table 1. Descriptive Characteristics of the Sample.
Table 2. Correlation among Research Variables.
Table 3. Results of Structural Model.
Table 4. Direct, Indirect, and Total Effects of the Final Research Model.
Abstract

This study examines the psychological consequences of artificial intelligence (AI) adoption in organizational contexts, focusing on how it impacts employee depression via the mediating and moderating mechanisms of job insecurity and corporate social responsibility (CSR), respectively. Adopting the broader Stress Process paradigm and employing an integrated theoretical framework that encompasses conservation of resources theory, social identity theory, and job demands-resources theory, we develop and empirically validate a moderated mediation framework utilizing longitudinal data gathered at three time points from 403 workers employed in South Korean enterprises. The empirical evidence demonstrates that although AI adoption does not directly impact employee depression, it exerts substantial indirect effects through the mediating pathway of job insecurity. Additionally, the analysis shows that CSR functions as a critical organizational resource that moderates the association between AI adoption and job insecurity; specifically, conditions of elevated CSR attenuate this positive association. These findings enhance theoretical understanding of the psychological mechanisms underlying technological transformations in workplace settings and highlight the importance of strategic organizational initiatives in mitigating the mental health challenges associated with AI implementation. They also reveal actionable steps enterprises deploying AI technologies can take, highlighting the imperative of proactively managing workforce concerns regarding employment stability while sustaining robust CSR initiatives throughout periods of technological evolution.

Keywords:
Artificial intelligence adoption
Corporate social responsibility
Job insecurity
Depression
Longitudinal study
Moderated mediation model
JEL classification: M15, O33
Introduction

In the ongoing journey toward digital transformation, artificial intelligence (AI) has positioned itself as a pivotal driver of organizational innovation and competitiveness (Shepherd, 2022; Si et al., 2023). AI is both revolutionizing operational paradigms and reshaping the entire landscape of work and employment (Makridis & Han, 2021; Makridis & Mishra, 2022). As it has become increasingly embedded in organizational structures, academic interest in its implications for employees has grown considerably across information systems, management, and organizational behavior (Beane & Leonardi, 2022).

Various alarming workplace trends underscore the practical urgency of understanding AI's psychological impacts on employees. Recent reports indicate that a substantial proportion of employees express anxiety about AI replacing their jobs (Ameen et al., 2025), while organizations implementing AI report significant increases in mental health support requests (Kim & Lee, 2024). Major corporations including IBM, Amazon, and Samsung have faced considerable challenges managing employee morale during the AI transformation, with some experiencing notable turnover increases in AI-affected departments (Bankins et al., 2024). The economic cost of AI-related workplace depression is substantial, with massive annual productivity losses globally attributed to AI-induced job insecurity and associated mental health issues (Skare et al., 2024; Valtonen et al., 2025). These challenges raise critical questions: How can companies harness AI's benefits while protecting employee mental health? What organizational interventions can buffer against AI's psychological toll? This study directly addresses these concerns by identifying mechanisms through which AI affects mental health and demonstrating how CSR initiatives serve as protective buffers.

It warrants emphasis that AI adoption brings substantial benefits to employees and organizations. Research has demonstrated that AI can augment human capabilities, enhance job satisfaction through automation of routine tasks, and create new opportunities for skill development and career advancement (Jia et al., 2024; Noy & Zhang, 2023). AI tools can reduce cognitive load, improve decision-making accuracy, and enable employees to focus on the more creative and strategic aspects of their work (Hermann & Puntoni, 2024). Employees using AI assistants report substantially higher productivity and greater job satisfaction when AI is implemented with appropriate support systems (Ameen et al., 2025). Furthermore, AI can democratize access to expertise, enhance work-life balance through intelligent automation, and create entirely new job categories (Chowdhury et al., 2024).

However, this technological transformation presents what Belanche et al. (2024) term “the dark side of artificial intelligence”—a paradox where the same technologies that promise enhancement can simultaneously trigger psychological distress. The coexistence of opportunities and threats creates complex psychological dynamics that require careful examination. While acknowledging AI's positive potential, this study focuses on understanding and identifying ways to mitigate its psychological challenges for three critical reasons: first, while AI's benefits are increasingly well-documented and celebrated in organizational discourse, the mental health implications remain understudied and often overlooked in implementation strategies (Kim & Lee, 2024); second, the success of AI adoption ultimately depends on the psychological well-being of employees—even the most advanced AI systems fail when employees experience severe stress, anxiety, or depression (Bankins et al., 2024); and third, understanding the negative pathways is essential for developing interventions that allow organizations to maximize AI's benefits while protecting employee mental health.

Despite recognizing AI’s critical role, scholarly examinations have identified several research gaps. First, while theoretical discussions of AI's influence on psychological well-being, job satisfaction, and organizational commitment are extensive (Budhwar et al., 2022, 2023; Chowdhury et al., 2023; Dwivedi et al., 2021; Fountaine et al., 2019; Tambe et al., 2019), empirical investigations into how AI triggers or intensifies mental health issues are notably limited (Bankins et al., 2024; Chen et al., 2023; Tambe et al., 2023). Depression is a serious workplace mental health concern that significantly affects individual well-being and organizational outcomes (Evans-Lacko & Knapp, 2016).

Second, explorations of the mechanisms and conditional factors that mediate and moderate the AI adoption-depression relationship are insufficient. Careful examination of mediating mechanisms like job insecurity will elucidate the critical pathways through which AI impacts employee depression (Greenhalgh & Rosenblatt, 2010; Shoss, 2017; Shoss et al., 2023). Meanwhile, analysis of contingent factors like organizational support systems will help define the circumstances under which AI differently affects well-being (Budhwar et al., 2022, 2023; Chen et al., 2023).

Third, investigations of the factors that mitigate AI-induced psychological strain, particularly Corporate Social Responsibility (CSR), are still limited. CSR activities signal organizational commitment to ethical practices and employee welfare, fostering a sense of psychological safety during technological change (Bhatti et al., 2022; Fatima & Elbanna, 2023). CSR initiatives, including employee development programs and transparent communication, directly address technological transformation uncertainties (Onkila & Sarna, 2022). CSR’s moderating role is theoretically grounded in social exchange theory, suggesting positive organizational actions create reciprocal trust relationships that buffer workplace stressors (Kim et al., 2018; Zhao et al., 2022). Despite CSR's theoretical relevance and evidence linking CSR to positive employee outcomes (Gillan et al., 2021), current research has not adequately explored its buffering role in AI contexts (Chowdhury et al., 2023; Tambe et al., 2019).

We view South Korea as an ideal research context for several compelling reasons. Korea ranks among the top five countries globally in AI adoption, with 28 % of companies implementing AI technologies versus the OECD average of 17 % (Statista, 2024). According to recent McKinsey analysis, intense competition with Chinese and American firms is driving Korea’s aggressive AI adoption, with major conglomerates including Samsung, LG, and Hyundai announcing comprehensive AI transformation plans affecting hundreds of thousands of employees nationwide (Chang et al., 2025). This widespread implementation creates a natural laboratory for analyzing AI's workforce impacts at scale.

Korea presents a unique paradox that amplifies the psychological tensions of AI adoption: cutting-edge technological advancement coexists with deeply rooted cultural values surrounding job security and lifetime employment. Despite widespread modernization of employment practices, traditional expectations of organizational loyalty and stability remain psychologically significant, heightening tensions when AI threatens established structures (Kim et al., 2021). Korea's collectivist culture, where employment represents not just individual achievement but family honor and social status (Kim et al., 2019), makes this tension particularly acute. Moreover, the fact that the vast majority of large Korean corporations have formal CSR initiatives—among the highest rates in Asia—makes Korea an ideal setting for examining CSR's potential role in buffering technological disruption (Kim & Lee, 2025).

The Korean context offers distinctive cultural and institutional characteristics that amplify AI's psychological impact in ways not captured by Western-centric research. Korea's collectivist culture and high uncertainty avoidance (Hofstede's UAI score of 85) mean job insecurity triggers more severe psychological responses than in individualistic societies, as employment threats challenge not just individual identity but broader social and familial relationships (Kim & Kim, 2024; Hofstede, 2001). The concept of “Jeong” (deep emotional bonds) in Korean workplaces transforms employment from purely contractual to quasi-familial relationships, intensifying the psychological threat when AI disrupts these bonds (Kim et al., 2018). Furthermore, research has shown that in collectivist cultures like Korea, job loss or insecurity affects psychological well-being more severely due to the shame and loss of face associated with unemployment (House et al., 2004).

Moreover, Korea's labor market structure—dominated by large conglomerates (chaebols) that employ a significant portion of the workforce—creates distinct AI adoption patterns. These organizations face intense global competition, particularly in technology sectors, driving rapid AI implementation that may prioritize efficiency over employee psychological safety (Makarius et al., 2020). Korea’s workforce is also highly educated, with over 53 % of workers holding bachelor's degrees; such workers may experience specific identity threats when AI challenges their knowledge-based expertise (Brougham & Haar, 2020).

Finally, despite being grounded in Korea, our findings offer crucial insights that are transferable to other contexts: Asian economies with similar Confucian values, developed nations grappling with AI-employment tensions, and any organization navigating the delicate balance between technological advancement and employee well-being. The psychological mechanisms we identify—job insecurity as mediator and CSR as moderator—likely represent universal human responses to technological threats, though their magnitude and manifestation may vary across cultural contexts (Brougham & Haar, 2020; Taras et al., 2010).

To address the significant research gaps outlined above, we develop an integrated theoretical framework incorporating the conservation of resources theory, social identity theory, and job demands-resources theory within the overarching stress process framework and use it to examine how AI adoption influences employee depression through job insecurity while considering CSR's moderating role. Our specific research questions are as follows:

  • RQ1: How does organizational AI adoption influence employee depression, and what are the underlying psychological mechanisms that mediate this relationship?

  • RQ2: To what extent does corporate social responsibility moderate the psychological impact of AI adoption on employees' mental health outcomes?

  • RQ3: How can organizations strategically leverage CSR initiatives to mitigate the potential negative mental health consequences of AI-driven digital transformations?

By addressing these research questions, we aim to provide both theoretical insights into the psychological implications of technological change and practical guidance for organizations seeking to navigate AI implementation while maintaining employee well-being.
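As a purely illustrative sketch of the moderated mediation logic behind these questions (the variable names, coefficients, and simulated data below are our own assumptions for exposition, not the authors' measures or results), a first-stage moderated mediation can be estimated with ordinary least squares: AI adoption predicts job insecurity, CSR moderates that first path, and job insecurity in turn predicts depression.

```python
# Hypothetical sketch of first-stage moderated mediation (OLS on simulated
# data); variable names ai, csr, ji, dep are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 403  # sample size matching the study

ai = rng.normal(size=n)    # AI adoption (standardized)
csr = rng.normal(size=n)   # perceived CSR (standardized)
# Mediator: job insecurity rises with AI adoption, less so under high CSR
ji = 0.4 * ai - 0.2 * csr - 0.3 * ai * csr + rng.normal(size=n)
# Outcome: depression driven by job insecurity, with no direct AI effect
dep = 0.5 * ji + rng.normal(size=n)

def ols(y, cols):
    """Return OLS coefficients for y regressed on an intercept plus cols."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: JI ~ AI + CSR + AI*CSR  ->  a1 (AI path), a3 (interaction)
_, a1, a2, a3 = ols(ji, [ai, csr, ai * csr])
# Stage 2: DEP ~ JI + AI           ->  b (mediator path), c' (direct path)
_, b, c_prime = ols(dep, [ji, ai])

# Conditional indirect effect of AI on depression via job insecurity,
# evaluated at low (-1 SD) and high (+1 SD) levels of CSR
for level in (-1.0, 1.0):
    indirect = (a1 + a3 * level) * b
    print(f"CSR at {level:+.0f} SD: indirect effect = {indirect:.3f}")
```

Under this specification the indirect effect equals (a1 + a3 × CSR) × b, so a negative interaction coefficient a3 implies a weaker indirect effect at high CSR: precisely the buffering pattern the study tests. Published analyses of such models typically add bootstrapped confidence intervals around these conditional indirect effects.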

This study makes several significant contributions. First, it advances understanding of AI's psychological implications by empirically examining the AI adoption-depression relationship, addressing critical gaps in research regarding technology's mental health impacts (Dwivedi et al., 2021; Sinclair et al., 2021). Second, it illuminates this psychological mechanism by focusing on job insecurity as a crucial mediating variable, adding nuance to our understanding of the ways technological changes translate into psychological outcomes (Shoss, 2017; De Witte et al., 2016). Third, by examining CSR's moderating role, it reveals how organizational moral initiatives buffer against technological change's negative psychological effects (Kim et al., 2018; Zhao et al., 2022). Fourth, it integrates multiple theoretical perspectives, providing a comprehensive foundation for understanding complex relationships between organizational technological changes and employee psychological outcomes (Bankins et al., 2024; Lazarus & Folkman, 1984).

Literature review: AI, CSR, and their interconnection

Evolution and current state of AI in organizations

Artificial intelligence in organizational contexts has evolved dramatically from the rule-based expert systems of the 1980s to today’s sophisticated machine learning and generative AI applications (Dwivedi et al., 2023). Contemporary AI adoption encompasses multiple technological categories including predictive analytics, natural language processing, computer vision, and autonomous decision-making systems (Gama & Magistretti, 2023). The trajectory of AI implementation has shifted from isolated automation tasks to comprehensive digital transformation strategies that affect entire organizational ecosystems (Chauhan et al., 2022).

Recent scholarship has identified three waves of AI adoption in organizations. The first wave (2000–2010) focused on data analytics and pattern recognition for operational efficiency (Makridis & Mishra, 2022). The second (2010–2020) introduced machine learning for predictive modeling and decision support, fundamentally altering strategic planning processes (Shepherd & Majchrzak, 2022). Finally, the current third wave, catalyzed by generative AI technologies like ChatGPT, represents a paradigm shift toward human-AI collaboration where AI systems augment creative and cognitive work previously considered uniquely human (Chowdhury et al., 2024; Epstein et al., 2023).

The organizational impact of AI adoption manifests across multiple dimensions. Operationally, AI enhances efficiency through process automation and optimization (Belanche et al., 2020). Strategically, its predictive capabilities facilitate data-driven decision-making that can give organizations a competitive advantage (Hermann & Puntoni, 2024). However, these benefits coexist with significant challenges including workforce displacement concerns, ethical dilemmas, and the need for substantial organizational restructuring (Belanche et al., 2024; Sætra, 2023). Studies have indicated that a significant portion of current job tasks could be automated within the next decade, creating unprecedented workforce transformation pressures (Jia et al., 2024).

Corporate social responsibility: foundations and contemporary perspectives

Over the past five decades, Corporate Social Responsibility has evolved from peripheral philanthropy into a strategic organizational imperative. Carroll's (1979) pyramid model established the foundational framework, identifying economic, legal, ethical, and philanthropic responsibilities as hierarchical corporate obligations. This conceptualization has expanded to encompass stakeholder theory perspectives, where CSR represents balanced value creation for employees, customers, communities, and shareholders (Freeman et al., 2010).

Contemporary CSR scholarship has identified multiple dimensions of social responsibility. These include environmental sustainability, which encompasses climate change and resource conservation (Fatima & Elbanna, 2023); social factors, such as employee welfare, community development, and human rights protection (Onkila & Sarna, 2022); and governance concerns like ethical business practices, transparency, and accountability mechanisms (Zhao et al., 2022). Recent meta-analyses demonstrate that comprehensive CSR programs positively influence organizational performance, employee engagement, and stakeholder trust (Gillan et al., 2021).

The mechanisms through which CSR influences organizational outcomes operate at multiple levels. At the individual level, CSR enhances employee pride, organizational identification, and perceived organizational support (Kim et al., 2018). At the organizational level, it builds reputational capital, attracts talent, and creates competitive differentiation (Bhatti et al., 2022). At the societal level, it contributes to sustainable development goals and enhances social welfare. These multilevel effects create reinforcing cycles whereby CSR investments generate both social and business value.

The emerging nexus of AI and CSR

The intersection of AI and CSR represents an emerging frontier in organizational scholarship, where empirical investigation is limited but growing. This convergence has manifested in two primary streams: AI for CSR and CSR for AI. The “AI for CSR” stream examines how AI technologies can enhance CSR effectiveness by improving environmental monitoring, supply chain transparency, and social impact measurement (Švarc et al., 2021). For instance, AI-powered analytics enable real-time tracking of carbon emissions and predictive modeling of environmental impacts, substantially improving sustainability reporting accuracy.

Meanwhile, more relevant to our study, the “CSR for AI” stream investigates how CSR principles can guide responsible AI implementation. This includes developing ethical AI frameworks, ensuring algorithmic fairness, and protecting stakeholder interests during technological transformation (Baker & Xiang, 2023; Cheng et al., 2021). The concept of “Corporate Digital Responsibility” extends traditional CSR frameworks to address digital-age challenges including data privacy, algorithmic bias, and technological unemployment (Lobschat et al., 2021; Wirtz et al., 2023).

Empirical studies examining how CSR might moderate AI’s impacts on employees are currently limited. Theoretical arguments suggest that strong CSR creates “organizational slack”—resources and goodwill that buffer against disruption (Pereira et al., 2023). Organizations with robust CSR may implement AI more thoughtfully, prioritizing human-centered design and providing comprehensive support for affected employees (Bingley et al., 2023; Capel & Brereton, 2023). In this regard, Microsoft's coupling of AI implementation with extensive reskilling programs exemplifies CSR-guided technological transformation (Ameen et al., 2025).

However, critical gaps remain in our understanding of the CSR-AI relationship. First, most existing research remains theoretical or case-based, lacking large-scale empirical validation (Dennehy et al., 2023). Second, the mechanisms through which CSR influences AI's psychological impacts on employees are still underspecified (Zirar et al., 2023). Third, contextual factors determining when CSR effectively buffers against AI-induced stress versus when it merely represents “window dressing” require investigation (Sison et al., 2024). Our study addresses these gaps by empirically testing CSR's moderating role in the AI adoption-job insecurity-depression pathway.

Synthesis and research positioning

This literature review reveals several critical insights that help position our research. First, while AI adoption and CSR have been extensively studied independently, their intersection—particularly when it comes to employee mental health outcomes—remains underexplored. Second, the rapid acceleration of AI implementation has outpaced academic understanding of its psychological implications, creating an urgent need for empirical investigation. Third, the notion that CSR serves as an organizational resource buffering against technological disruption is a promising but untested theoretical proposition.

Our study contributes to this emerging literature by empirically demonstrating that CSR moderates AI's psychological impact through job insecurity. Integrating established theories (conservation of resources, social identity, job demands-resources) with contemporary AI and CSR scholarship, we advance scholarly understanding of responsible AI implementation. This positions our research at the intersection of information systems, organizational behavior, and business ethics literatures—a contribution to the nascent field of human-centered AI in organizations.

Theory and hypotheses

The dual nature of AI’s impact: A balanced theoretical framework

Scholars have given considerable attention to the complex phenomenon of AI adoption in organizations, often focusing on the ways AI technologies integrate into operational, strategic, and decision-making frameworks (Bag et al., 2021; Bankins et al., 2024; Budhwar et al., 2022, 2023; Lu, 2017). Researchers have analyzed this multifaceted topic from various perspectives including the technological, organizational, cultural, and ethical (Davenport & Ronanki, 2018; Davenport & Guha, 2019; Jobin et al., 2019; Ransbotham et al., 2019; Wilson & Daugherty, 2018). Despite widespread acknowledgement of AI's potential to transform business processes and decision-making, there is an increasingly urgent need to examine the subtle yet profound effects of AI-infused work environments on employee perceptions, attitudes, behaviors, and especially mental health.

Before developing our hypotheses, we must establish a balanced theoretical understanding of AI's multifaceted impact on employees. Contemporary research has revealed that AI functions as a “double-edged sword” in organizational contexts.

To begin with, studies have demonstrated measurable productivity benefits from AI implementation. Noy and Zhang (2023) conducted a preregistered experiment with 453 college-educated professionals performing occupation-specific writing tasks. Their results showed that ChatGPT exposure decreased task completion time by 40 % and increased output quality by 18 %, while also reducing inequality between workers. Similarly, research on AI-augmented creativity has revealed that AI assistance can enhance employee creativity, particularly when AI handles initial, well-codified portions of tasks, allowing employees to focus on higher-level problem-solving. However, research has also shown that these benefits are skill-biased, with higher-skilled employees experiencing more pronounced improvements in creativity and subsequent performance (Jia et al., 2024).

The evolution of AI capabilities presents both opportunities and new challenges. Hermann and Puntoni (2024) distinguished between Convergent Thinking GenAI (domain-specific, pre-defined task completion) and Divergent Thinking GenAI (domain-general, new task fulfillment), noting that both forms elicit varied positive and negative reactions. This technological evolution has fundamentally altered how work is conceptualized and performed.

Conversely, researchers have generated substantial evidence of AI's negative psychological impacts. Brougham and Haar (2020) found that technological disruption significantly influences job insecurity and turnover intentions across multiple countries. Nam (2019) demonstrated that technology usage affects perceived job insecurity, particularly when employees question job sustainability. Belanche et al. (2024) explicitly examined what they term “the dark side of artificial intelligence in services,” documenting various negative impacts on both workers and service delivery.

We acknowledge this complexity in our theoretical model by examining how organizational factors (specifically CSR) can tip the balance toward positive outcomes. Rather than viewing AI as inherently harmful, we investigate the conditions under which its implementation may trigger psychological distress and, more importantly, how organizations can create environments where AI's benefits are realized while protecting employee mental health. This approach aligns with the emerging “human-centered AI” paradigm, which emphasizes the importance of designing AI systems and implementation strategies that prioritize human well-being alongside efficiency gains (Bingley et al., 2023; Capel & Brereton, 2023).

The influence of AI adoption on employee depression: A conservation of resources perspective

While AI adoption has significant potential benefits, our first hypothesis seeks to determine why, despite these positives, AI implementation may still trigger depressive symptoms in certain conditions. This is not to suggest AI is inherently harmful, but rather to understand the psychological mechanisms that may impede successful human-AI collaboration. Technology adoption research has shown that even beneficial technologies can initially cause distress during transition periods (Uren & Edwards, 2023), and the relationship between AI adoption and depression is likely contingent on multiple factors including implementation approach, employee preparedness, and organizational support systems. However, without these moderating conditions in place, the disruption caused by AI adoption may overwhelm employees' coping resources.

Researchers have given considerable attention to employee depression within organizations, particularly in organizational psychology, occupational health, and human resource management. Encompassing a variety of emotional, cognitive, and physical symptoms, depression tends to manifest as continuous feelings of sadness and diminished interest or pleasure in daily activities (American Psychiatric Association, 2013). Scholars in psychology, psychiatry, neuroscience, and related disciplines have explored depression in depth, generating insights into its origins and symptoms as well as potential interventions. Studies of how depressive symptoms manifest in employees have shown that depression profoundly affects productivity, engagement, absenteeism, presenteeism, and overall well-being, highlighting its status as a critical organizational issue with potential widespread impacts on workplace dynamics and performance (Hakanen & Schaufeli, 2012; Joyce et al., 2016; Sanderson & Andrews, 2006). Diagnoses typically follow the criteria outlined in the Diagnostic and Statistical Manual of Mental Disorders (DSM) (American Psychiatric Association, 2013).

Conservation of resources (COR) theory provides a useful foundation for examining the relationship between organizational AI adoption and employee depression. COR theory posits that individuals strive to obtain, maintain, and protect valued resources, and psychological stress occurs when these resources are threatened or lost (Hobfoll et al., 2018). In organizational contexts, resources encompass various elements essential for employee well-being, including job security, professional competencies, and role clarity.

When organizations implement AI systems, the significant work environment changes that employees often experience can threaten these vital resources. Bankins et al. (2024) emphasized that AI adoption fundamentally alters established work patterns and creates uncertainty about future role requirements. This technological transformation can challenge employees' existing skillsets and create ambiguity about their professional value (Malik et al., 2021).

The resource-threatening nature of AI adoption manifests through multiple mechanisms. First, AI implementation often requires substantial work process adaptation, potentially rendering some existing professional competencies less relevant (Chowdhury et al., 2023). Second, the rapid pace of AI-driven change can create uncertainty about future skill requirements, threatening employees' sense of professional mastery (Beane & Leonardi, 2022). Third, AI adoption may blur role boundaries and create ambiguity about job responsibilities, challenging employees' understanding of their organizational contributions (Budhwar et al., 2022).

COR theory suggests that such resource threats generate psychological strain that can manifest as depression. This theoretical perspective aligns with recent research demonstrating how technological changes can trigger psychological distress responses (Brougham & Haar, 2020). The continuous pressure to adapt to AI-driven changes while facing resource retention uncertainty can deplete employees' psychological reserves, potentially leading to depressive symptoms.

Moreover, occupational health psychology research has shown that substantial organizational changes, particularly those involving technological transformation, can significantly impact employee mental health (Sinclair et al., 2021). The implementation of AI systems represents such a transformative change, potentially creating a sustained resource drain that exceeds employees' adaptive capabilities.

Therefore, based on the COR theory and recent empirical evidence, we test the following hypothesis:

Hypothesis 1

Organizational AI adoption is positively related to employee depression.

The influence of AI adoption on employee job insecurity: A social identity theory perspective

We posit that organizational AI adoption increases employee job insecurity. Job insecurity is a well-documented construct in scholarly literature, especially in the fields of organizational psychology, labor economics, occupational health, and human resource management. Understood as subjective anticipation of significant and involuntary disruptions of employment relationships or declines in employment conditions (Bazzoli & Probst, 2022; Cheng & Chan, 2008; Lin et al., 2021), job insecurity is influenced by economic variability, organizational changes, and technological advancements and is increasingly relevant in today’s dynamic workplace environments. The construct is typically bifurcated into two main dimensions: quantitative job insecurity, which pertains to concerns regarding job loss; and qualitative job insecurity, which relates to concerns regarding the loss of important job attributes. Research has consistently linked job insecurity to detrimental effects such as heightened stress, anxiety, and depressive symptoms (Jiang & Lavaysse, 2018; László et al., 2010; Shoss, 2017; Shoss et al., 2023), as well as negative impacts on job satisfaction, absenteeism, turnover intentions, organizational commitment, and job performance (Cheng & Chan, 2008; De Witte et al., 2012; Jiang & Lavaysse, 2018; Lee et al., 2018; Ma et al., 2023; Peltokorpi & Allen, 2023; Shoss, 2017; Shoss et al., 2023).

The proliferation of AI technologies in organizational settings has sparked discussions about the impacts these technologies can have on employee job security (Frey & Osborne, 2017; Jia et al., 2024; Wu et al., 2022). As companies progressively deploy AI systems to automate functions, facilitate decision-making, and augment productivity, apprehension among employees about the risk of job displacement or becoming technologically obsolete is growing (Brougham & Haar, 2020; Nam, 2019). This study employs social identity theory (SIT) to explore how AI adoption influences job insecurity among employees.

SIT provides a compelling theoretical framework for understanding how organizational AI adoption influences employee job insecurity. SIT suggests that organizational roles and professional group memberships contribute substantially to individuals’ identities and self-concepts (Ashforth & Mael, 1989). This theoretical lens is particularly useful for examining how technological transformations affect employees' perceptions of their professional futures.

The fundamental shift represented by organizational implementation of AI systems can challenge employees' professional identities and role security. Recent research has demonstrated that AI adoption often causes substantial modifications in work processes and skill requirements (Budhwar et al., 2023). Thus, employees may perceive the integration of AI technologies as a threat to their established professional identities, triggering concerns about the future viability of their roles.

From an SIT perspective, AI adoption can threaten job security through several mechanisms. First, AI systems’ automation capabilities may challenge employees’ perceived uniqueness and value in their professional roles (Wu et al., 2022). Second, AI implementation often requires new skillsets, potentially undermining employees’ confidence in their current professional identities (Kim & Kim, 2024). Third, AI-driven transformations of traditional work processes may create uncertainty about the future relevance of existing professional groups (Malik et al., 2021).

This theoretical framework aligns with prior empirical evidence suggesting that technological advancement significantly influences employment uncertainty. Studies have indicated that AI adoption often creates ambiguity about future job requirements and stability (Coupe, 2019). The rapid evolution of AI capabilities can generate concerns about job displacement or substantial role modifications, leading to heightened perceptions of job insecurity (Nam, 2019).

Furthermore, organizational behavior research has suggested that significant technological changes can generate sustained uncertainty about employment stability (Lee et al., 2018). When organizations implement AI systems, employees may experience increased cognitive and affective job insecurity as they navigate evolving role expectations and shifting skill requirements (Jiang & Lavaysse, 2018).

Therefore, based on social identity theory and past research findings, we test the following hypothesis:

Hypothesis 2

Organizational AI adoption is positively related to employee job insecurity.

The influence of job insecurity on employee depression: A job demands-resources theory perspective

In this study, we postulate that employee job insecurity elevates levels of depression. Categorizing work characteristics into demands and resources, the job demands-resources (JD-R) theory provides a comprehensive framework for understanding how workplace factors influence employee well-being (Bakker & Demerouti, 2017), offering a theoretical explanation for the relationship between job insecurity and employee depression. This theoretical perspective is particularly helpful for examining how job insecurity functions as a significant job demand that can lead to psychological strain.

Job insecurity is a persistent job demand, and managing it requires sustained psychological effort, potentially depleting employees' mental resources. According to the JD-R theory, employees facing chronic job demands without adequate resources to cope become vulnerable to psychological distress (Bakker & Demerouti, 2007). Job insecurity creates a particularly challenging demand as it involves ongoing uncertainty about future employment stability.

Research has demonstrated that prolonged exposure to job insecurity can trigger various psychological mechanisms that lead to depression. Meta-analytic evidence indicates that job insecurity consistently predicts adverse mental health outcomes across different organizational contexts (De Witte et al., 2016). The persistent nature of job insecurity as a stressor can gradually erode employees' psychological resilience, potentially manifesting in depressive symptoms (László et al., 2010).

Prior studies have further illuminated the psychological processes linking job insecurity to depression. When employees experience job insecurity, they often worry continuously about their professional futures, leading to emotional exhaustion and psychological strain (Hu et al., 2021). This chronic stress response can accumulate over time, overwhelming employees' coping resources and increasing their vulnerability to depressive symptoms (Shoss, 2017).

Moreover, longitudinal research has suggested that the persistent nature of job insecurity's impact on mental health can make it particularly detrimental. The ongoing threat to employment stability can elicit a sustained stress response that gradually depletes the psychological resources necessary for maintaining mental well-being (De Witte et al., 2012). This depletion process aligns with JD-R theory's explanation of how chronic job demands can compromise health.

Therefore, based on job demands-resources theory and contemporary empirical evidence, we test the following hypothesis:

Hypothesis 3

Employee job insecurity is positively related to employee depression.

The mediating role of job insecurity: an integrated stress process theory perspective

Stress process theory (Lazarus & Folkman, 1984) provides a comprehensive explanation of the mechanism through which job insecurity mediates the relationship between organizational AI adoption and employee depression. This theoretical framework effectively integrates our previous theoretical perspectives—conservation of resources theory, social identity theory, and job demands-resources theory—by elucidating how organizational stressors transform into psychological outcomes through cognitive and emotional appraisal processes.

Stress process theory posits that individuals experience psychological strain through a sequential process of primary and secondary appraisals of potentially threatening situations (Lazarus, 1999). In the context of organizational AI adoption, this process manifests in several stages. Initially, employees evaluate AI implementation as a potential threat to their professional resources and identities (Bankins et al., 2024). This primary appraisal leads to job insecurity as employees assess AI implementation’s employment stability-related implications (Malik et al., 2021).

The secondary appraisal process involves evaluating one's capacity to cope with these perceived threats. When organizations implement AI systems, employees must continuously assess their abilities to adapt to technological changes while managing uncertainty about their professional futures (Chowdhury et al., 2023). This ongoing evaluation process can strain psychological resources, potentially culminating in depressive symptoms (De Witte et al., 2016).

Prior empirical evidence supports this sequential stress process. Studies indicate that technological changes initially trigger employment uncertainty (Coupe, 2019), which subsequently affects psychological well-being through sustained stress responses (Shoss et al., 2023). This pathway aligns with stress process theory's explanation of the ways environmental changes translate into psychological outcomes through cognitive and emotional mechanisms.

Furthermore, contemporary research has demonstrated that job insecurity functions as a crucial psychological mechanism linking organizational changes to employee mental health outcomes (Wang et al., 2015). The persistent nature of job insecurity in the AI adoption context creates a sustained stress response that can gradually erode psychological well-being (Lee et al., 2018).

Therefore, based on stress process theory and current empirical evidence, we test the following hypothesis:

Hypothesis 4

Employee job insecurity mediates the relationship between organizational AI adoption and employee depression.

Toward responsible AI implementation: the role of CSR

Our final hypothesis shifts focus from understanding AI's potential negative impacts to identifying ways organizations can harness AI's benefits while protecting employee well-being. This represents a critical contribution to “responsible AI” literature (Baker & Xiang, 2023; Cheng et al., 2021). Rather than advocating against AI adoption, we investigate how organizations can foster conditions where AI enhances rather than threatens employee mental health.

CSR has emerged as a particularly promising buffer not because AI is inherently harmful, but because the transition to AI-augmented work requires substantial organizational support. When implemented responsibly with strong CSR practices, AI can indeed deliver its promised benefits of augmentation, efficiency, and innovation without compromising employee psychological well-being (Dennehy et al., 2023). This aligns with the concept of “Corporate Digital Responsibility”—extending traditional CSR frameworks to address the unique challenges of this digital transformation (Lobschat et al., 2021; Wirtz et al., 2023).

The moderating role of CSR: A social exchange theory perspective

We postulate that corporate social responsibility (CSR) buffers the positive effect of organizational AI adoption on employee job insecurity. CSR has received widespread recognition in scholarly discussions, particularly in the business ethics, organizational behavior, and strategic management domains. It denotes a corporation's commitment to ethical business conduct aimed at fostering economic growth while enhancing the quality of life of its workforce, local communities, and society as a whole. CSR activities are diverse, ranging from environmental stewardship and philanthropy to ethical labor practices and volunteerism. Researchers have highlighted CSR’s complex nature, effects on various stakeholders, and consequences for organizational outcomes (Bhatti et al., 2022; Carroll, 1999; Fatima & Elbanna, 2023; Gillan et al., 2021; Greening & Turban, 2000; Onkila & Sarna, 2022; Orlitzky, Schmidt, & Rynes, 2003; Turker, 2009; Velte, 2021; Zhao et al., 2022). Synthesizing insights from the organizational behavior and CSR literatures and theories of technological adaptation, we seek to elucidate CSR’s potential to counteract the negative ramifications of AI adoption for job security. Specifically, we explore how varying levels of CSR engagement shape the relationship between organizational AI adoption and job insecurity, illustrating our arguments with organizational examples.

Social exchange theory (Blau, 1964) provides a theoretical foundation for the buffering effect of CSR on the relationship between AI adoption and job insecurity. This theoretical framework explains how organizational actions create reciprocal obligations and psychological contracts between organizations and employees, particularly during periods of technological transformation.

Social exchange theory suggests that when organizations demonstrate commitment to employee welfare through CSR initiatives, employees develop positive reciprocal attitudes and perceptions (Kim et al., 2018). In the context of AI adoption, CSR activities can serve as organizational signals that moderate how employees interpret and respond to technological changes. This moderating effect manifests differently depending on the level of organizational CSR engagement.

Employees of organizations that maintain strong CSR commitments are more likely to perceive AI implementation as an opportunity for organizational advancement than a threat to job security. For instance, when a manufacturing company implements AI systems while simultaneously investing in employee retraining programs and maintaining strong community engagement, employees typically experience lower job insecurity (Bhatti et al., 2022). Research has indicated that robust CSR practices create a psychological buffer against technological uncertainty by demonstrating organizational commitment to employee well-being (Zhao et al., 2022).

In a technology firm that implements AI solutions while maintaining comprehensive CSR programs focused on employee development and social welfare, for example, employees are likely to interpret AI adoption within the broader context of organizational responsibility and sustainable development (Farooq et al., 2014). This organizational approach typically results in reduced job insecurity as employees trust the organization's commitment to responsible AI implementation and workforce sustainability.

Conversely, employees of organizations that implement AI systems without strong CSR practices are more likely to experience heightened job insecurity. For example, if a financial services company introduces AI-driven automation without corresponding investments in employee welfare programs, workers tend to report increased concerns about job stability (De Roeck & Maon, 2018). The absence of robust CSR initiatives may amplify employees' perceptions of AI as a threat rather than an opportunity for organizational growth.

In a retail organization that implements AI-driven inventory systems without maintaining strong CSR commitments, for example, employees are likely to perceive technological changes as purely profit-driven initiatives that may threaten their job security (Shen & Zhang, 2019). This perception typically leads to increased job insecurity as employees lack confidence in the organization's commitment to their well-being during technological transformation.

Previous studies have generated evidence of CSR’s moderating effect in such situations. Organizations with strong CSR practices typically experience more positive employee responses to technological changes (Onkila & Sarna, 2022). The buffering effect of CSR is particularly evident in how employees interpret and respond to organizational changes that might otherwise trigger job insecurity concerns (Voegtlin & Greenwood, 2016).

Therefore, based on social exchange theory and contemporary empirical evidence, we test the following hypothesis:

Hypothesis 5

Corporate social responsibility moderates the relationship between organizational AI adoption and employee job insecurity, such that the positive relationship between AI adoption and job insecurity is weaker when CSR is high (see Fig. 1).

Fig. 1.

Theoretical Model.
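The moderated mediation structure underlying Hypotheses 4 and 5 can be illustrated numerically. The sketch below simulates data and estimates the conditional indirect effect of AI adoption on depression through job insecurity at low and high CSR. The variable names, coefficient values, and simple OLS estimator are ours for illustration only; they do not reproduce the study's actual structural equation models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 403  # same n as the study's final sample; the data here are simulated

# Hypothetical standardized variables: X = AI adoption, W = CSR,
# M = job insecurity, Y = depression. Coefficients are invented so that
# CSR weakens the X -> M path, mirroring Hypothesis 5.
X = rng.normal(size=n)
W = rng.normal(size=n)
M = 0.4 * X - 0.3 * X * W + rng.normal(size=n)
Y = 0.5 * M + rng.normal(size=n)

def ols(y, predictors):
    """Least-squares coefficients, intercept first."""
    design = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

a = ols(M, [X, W, X * W])   # first stage:  M ~ X + W + X*W
b = ols(Y, [M, X])          # second stage: Y ~ M + X

# Conditional indirect effect of X on Y through M at +/-1 SD of CSR
for w in (-1.0, 1.0):
    indirect = (a[1] + a[3] * w) * b[1]
    print(f"CSR at {w:+.0f} SD: indirect effect ~ {indirect:.2f}")
```

Consistent with Hypothesis 5, the estimated indirect effect shrinks at high CSR because the interaction coefficient on the first-stage path is negative.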

Synthesis: understanding AI’s complex impact on employee well-being

The hypotheses developed above examine specific pathways through which AI adoption may influence employee mental health. Grounding our theoretical model in the broader empirical landscape, we now offer a comprehensive discussion of the ways AI can both positively and negatively affect employees.

Positive impacts of AI on employees

Empirical research has shown that AI implementation has measurable benefits. Noy and Zhang’s (2023) experimental study with 453 professionals showed that access to generative AI reduced task completion time by 40 % while improving output quality by 18 %, with additional benefits of reducing inequality between workers. Jia et al. (2024) found that AI assistance enhanced employee creativity when AI handled routine tasks, allowing employees to focus on higher-level problem-solving, though these benefits were more pronounced for higher-skilled workers who experienced positive emotions and enhanced creativity, while lower-skilled workers experienced limited improvements and negative emotions. Finally, Hermann and Puntoni (2024) identified two forms of generative AI—Convergent Thinking GenAI for domain-specific, pre-defined task completion, and Divergent Thinking GenAI for domain-general, new task fulfillment—offering multiple pathways for augmenting human capabilities.

Negative impacts of AI on employees

Researchers have generated substantial evidence of AI's detrimental effects on worker well-being. Brougham and Haar’s (2020) analysis of 1516 employees across the United States, Australia, and New Zealand found that perceived threat of technological disruption had significant effects on job insecurity and turnover intentions, with job insecurity mediating this relationship. Notably, U.S. employees reported significantly higher threat levels than those in Australia and New Zealand. Meanwhile, Nam (2019) analyzed Pew Research Center data showing that employees' present perceptions of job insecurity are highly associated with technology usage and long-term perceptions about job sustainability in the Fourth Industrial Revolution context, including concerns about robotics and artificial intelligence. Lastly, Wu et al. (2022) found that job insecurity in human-machine collaboration contexts leads to emotional exhaustion and unsafe behaviors, with emotional exhaustion mediating the relationship between job insecurity and unsafe behaviors, though the effects differed between the manufacturing and service industries.

Contextual factors and implementation approaches

Past studies have revealed that AI's impacts depend on implementation approaches. Zirar et al. (2023) identified four themes in worker-AI coexistence: worker distrust stemming from job threat perceptions, AI's potential to augment abilities, required technical/human/conceptual skills, and the need for continuous reskilling. Makarius et al. (2020) emphasized that successful AI integration requires building sociotechnical capital through proper socialization processes, and Parker and Grote (2022) found that work design choices determine whether AI enhances job resources or increases job demands. Finally, although Belanche et al. (2024) focused on service robots specifically, their framework highlighting the importance of robot design, customer features, and service encounter characteristics suggests that technology implementation requires careful consideration of multiple contextual factors.

Implications for our theoretical model

The findings summarized above reinforce our theoretical approach. Rather than treating AI as inherently beneficial or harmful, we examine specific mechanisms (job insecurity as mediator) and boundary conditions (CSR as moderator) that shape AI's impact on employee depression. The findings that perceptions of technological threats vary by country (Brougham & Haar, 2020) and that technology usage relates to job insecurity perceptions (Nam, 2019) support our focus on contextual factors.

These dynamics make the Korean context particularly relevant. Combining rapid AI adoption with strong employment security values, Korea provides an ideal setting for examining how organizational factors like CSR might buffer against AI's negative impacts while preserving its benefits. The skill-biased nature of AI's effects (Jia et al., 2024) further justifies our examination of moderating factors that might protect vulnerable employees.

Methods

Research philosophy and approach

This study adopts a post-positivist research philosophy, which recognizes that while AI adoption has objective psychological impacts, our understanding of these complex socio-psychological phenomena is inherently probabilistic and context-dependent (Creswell & Creswell, 2018; Guba & Lincoln, 1994). Post-positivism’s acknowledgment of both the existence of observable patterns and the influence of subjective experiences and contextual factors makes it particularly appropriate for examining the multifaceted relationships between technological change and human psychology (Ryan, 2006).

Our ontological position assumes that the relationship between AI adoption and employee mental health is a real phenomenon that exists independently of our observation, but is mediated by individual perceptions and organizational contexts. This critical realist stance (Bhaskar, 2008) enables us to investigate causal mechanisms while recognizing that these mechanisms operate within open systems where multiple factors interact. This philosophical foundation is essential when studying workplace phenomena where technological, psychological, and social dimensions intersect (Edwards et al., 2014).

Epistemologically, we employ a modified objectivist approach that seeks to approximate truth through empirical observation while acknowledging the limitations of our measurement instruments and the potential for researcher bias (Phillips & Burbules, 2000). This approach aligns with contemporary organizational research that recognizes the need for methodological rigor while accepting that complete objectivity is unattainable, particularly when studying psychological constructs such as depression and job insecurity (Aguinis & Vandenberg, 2014).

Methodologically, we adopt a deductive approach, using quantitative methods to test theory-derived hypotheses. This approach is consistent with the post-positivist emphasis on falsification and theory testing (Popper, 1959). The longitudinal design reflects our commitment to temporal precedence in causal inference, addressing the common method bias concerns inherent in cross-sectional designs (Podsakoff et al., 2003; Rindfleisch et al., 2008). Finally, our use of validated scales and structural equation modeling aligns with post-positivist principles of measurement precision and statistical control for alternative explanations (Hair et al., 2019).

Participants and procedure

In this inquiry, we focused on working adults in South Korea, aged 20 and above, across sectors. As one of the most digitally advanced economies globally with the highest robot density in manufacturing (1012 robots per 10,000 employees) and the highest AI adoption rate in Asia (World Bank, 2024), South Korea is an exemplary context for this research. The data collection period (March–June 2024) coincided with major AI implementation announcements by leading Korean conglomerates including Samsung, LG, and Hyundai, making employee concerns about AI particularly salient. We used a three-wave time-lagged approach to collect data at three distinct intervals.

Data collection process

We collected data using an online survey administered through a professional research firm's panel system. The firm utilized both email invitations and mobile app notifications to contact panel members who met our inclusion criteria (employed adults aged 20 and above in South Korea). Participants accessed surveys through individualized secure links. Importantly, we did not contact companies directly; instead, individual employees from various organizations voluntarily participated through the panel system. This approach resulted in responses from employees from 287 distinct organizations, ensuring no single organization dominated the sample (maximum 3.8 % from any single organization). We chose this method to avoid the selection bias that might occur from directly approaching specific companies and to ensure broad representation across the Korean economy. Data collection occurred in three waves: Wave 1 (March 15–18, 2024) with 745 participants from 2156 invitations (34.5 % response rate), Wave 2 (April 26–28, 2024) with 513 participants (68.9 % retention), and Wave 3 (June 7–9, 2024) with 403 final participants (54.09 % overall retention). Statistical representativeness was achieved through stratified random sampling across geographic regions, industries, company sizes, and demographic characteristics, with final distributions closely matching national workforce statistics.
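The reported response and retention figures follow directly from the wave counts; a quick arithmetic check (Python, for illustration):

```python
# Wave counts reported in the text
invited, wave1, wave2, wave3 = 2156, 745, 513, 403

response_rate = wave1 / invited   # Wave 1 response rate
retention_w2 = wave2 / wave1      # Wave 2 retention relative to Wave 1
retention_all = wave3 / wave1     # overall retention across all three waves

print(f"Response rate:     {response_rate:.2%}")
print(f"Wave 2 retention:  {retention_w2:.2%}")
print(f"Overall retention: {retention_all:.2%}")
```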

Longitudinal design: justification and implementation

This study employs a true longitudinal design with three-wave panel data collection, meeting the established criteria for longitudinal research (Ployhart & Vandenberg, 2010). In contrast to cross-sectional designs, our approach tracks the same 403 individuals across all three time points, enabling within-person analysis of change and causal inference through temporal precedence (Rindfleisch et al., 2008). The longitudinal approach is critical for establishing the temporal ordering essential for testing mediation—to verify causal claims, AI adoption (T1) must precede job insecurity (T2), which must precede depression (T3) (Maxwell & Cole, 2007).

Data collection occurred during a critical period of accelerated AI adoption in South Korean organizations. Wave 1 captured baseline measurements of AI adoption and CSR. Wave 2 measured job insecurity. Finally, Wave 3 assessed depression.

We chose the 5–6 week intervals based on affective events theory, which suggests that this timeframe allows sufficient incubation for workplace stressors to manifest as psychological outcomes while maintaining construct stability (Dormann & Griffin, 2015). Additionally, temporal separation of variables reduces common method bias while capturing the dynamic unfolding of psychological responses (Podsakoff et al., 2003).

Data collection timeline and economic context

Data collection occurred during a pivotal period for AI adoption in South Korea. Wave 1 coincided with the Korean government's announcement of the “AI Korea Initiative 2024,” which allocated 120 billion won for AI infrastructure development. This timing ensured AI adoption was highly salient in organizational discourse. Wave 2 occurred shortly after major Korean conglomerates, including Samsung Electronics, SK Hynix, and LG Electronics, announced AI-driven restructuring plans affecting approximately 51 % of employees nationwide (Chang et al., 2025). Finally, Wave 3 was conducted after sufficient time had passed for psychological impacts to manifest, while avoiding the summer vacation period that could affect response rates.

This specific timeframe was strategic for three reasons. First, it captured employee reactions during active AI implementation rather than hypothetical scenarios. Second, it avoided major Korean holidays (Lunar New Year and Chuseok) that could disrupt data collection continuity. Third, it preceded the anticipated summer 2024 labor market adjustments, ensuring responses reflected genuine AI-related concerns rather than seasonal employment variations.

Organizational sample composition

The 403 participants in this study represented 287 distinct organizations across South Korea; to prevent organizational-level bias, no single organization accounted for more than 3.8 % of the sample. Company size distribution reflected the Korean economy's structure: micro-enterprises with 1–9 employees (14.4 %), small enterprises with 10–49 employees (27.8 %), medium enterprises with 50–299 employees (30.0 %), large enterprises with 300–999 employees (11.1 %), and conglomerates with over 1000 employees (16.7 %). This distribution closely approximates Statistics Korea's (2024) enterprise demographics, with a slight overrepresentation of medium and large enterprises where AI adoption is more prevalent.

Industry representation spanned 11 sectors, with manufacturing (21.3 %) appropriately dominating given Korea's manufacturing-oriented economy. Other well-represented sectors included health and welfare (17.4 %), education (15.6 %), services (13.8 %), construction (11.4 %), information services and telecommunications (7.7 %), and financial/insurance (2.0 %). This industry mix captures both AI-intensive sectors (manufacturing, IT, finance) and human-service sectors (health, education) where AI's impacts on employment vary substantially.

The selection of these particular organizations was purposive, following theoretical sampling principles (Patton, 2015). We specifically sought variation in AI adoption levels, with 38 % of the organizations classified as high AI adopters (using AI in 4+ functional areas), 41 % as moderate adopters (2–3 areas), and 21 % as low adopters (0–1 areas). This variation was essential for testing our hypotheses about AI’s varied impacts.

Participant distribution across organizations

The distribution of our final sample across 287 distinct organizations meant that some organizations contributed multiple participants. Specifically, 198 organizations (69.0 %) contributed single respondents, 62 organizations (21.6 %) contributed 2–3 respondents, 21 organizations (7.3 %) contributed 4–5 respondents, and 6 organizations (2.1 %) contributed 6–8 respondents. The fact that no single organization contributed more than 8 respondents (2.0 % of the total sample) prevented any single organizational culture or policy from dominating our findings.

This distribution pattern emerged naturally from our stratified random sampling approach and reflects the reality of South Korean employment, where larger organizations employ more workers and thus have higher probabilities of selection. The median number of respondents per organization was 1 (IQR = 1–2), indicating that most organizations contributed only one or two participants. The intraclass correlation coefficient (ICC1) for our key variables ranged from 0.08 to 0.12, suggesting that 8–12 % of variance in responses could be attributed to organizational membership, which is within acceptable ranges for individual-level analyses but warrants statistical controls (LeBreton & Senter, 2008).
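The ICC(1) values reported above can be obtained from a one-way ANOVA variance decomposition. The following sketch is illustrative only: the function and toy data are ours, not the study's, and it uses the average group size as a simple approximation for unbalanced designs.

```python
import numpy as np

def icc1(groups):
    """ICC(1) from a one-way ANOVA decomposition.

    groups: list of 1-D arrays of individual scores, one array per
    organization. Uses the average group size as a simple approximation
    for unbalanced designs.
    """
    sizes = [len(g) for g in groups]
    k = np.mean(sizes)                      # average group size
    n_total, n_groups = sum(sizes), len(groups)
    grand = np.mean(np.concatenate(groups))
    # Between- and within-organization sums of squares
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    ms_between = ss_between / (n_groups - 1)
    ms_within = ss_within / (n_total - n_groups)
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Toy example: all variance lies between organizations, so ICC(1) = 1.0
print(icc1([np.array([1.0, 1.0, 1.0]), np.array([2.0, 2.0, 2.0])]))  # 1.0
```

Values in the 0.08–0.12 range, as reported, indicate that most variance lies within rather than between organizations, supporting individual-level analysis with statistical controls.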

Evaluation of national representativeness

To assess whether our sample adequately represents South Korean companies, we conducted systematic comparisons with national statistics. In Korea, SMEs in manufacturing industries are defined as having fewer than 300 employees, which differs from the EU threshold of fewer than 250 employees. This difference reflects varying national contexts: Korea's definition accommodates its manufacturing-oriented economy, where firms typically require larger workforces, while maintaining the spirit of distinguishing smaller enterprises from large conglomerates. Our sample's distribution, with 72.2 % of participants from organizations under 300 employees, follows the Korean national definition rather than international standards such as the EU's 250-employee threshold. Although this distribution slightly underrepresents SMEs, it appropriately reflects employment distribution rather than enterprise count, as larger firms employ disproportionately more workers.

Regarding AI adoption representativeness, our sample's mean AI adoption score of 2.29 (SD=1.01) on a 5-point scale aligns closely with McKinsey's (2024) assessment of Korean companies on a comparable AI maturity scale. The slight difference likely reflects our inclusion of smaller enterprises that typically show lower AI adoption rates. The variance in our sample (ranging from 1.0 to 5.0) captures the full spectrum of AI adoption stages observed in the Korean economy.

We acknowledge potential selection bias toward more digitally engaged employees stemming from the administration of our survey online. However, with South Korea's 99.6 % internet penetration rate and widespread digital literacy in the workforce (Korea Internet & Security Agency, 2024), we believe this bias is minimal. Moreover, since AI adoption inherently involves digital transformation, employees capable of completing online surveys represent the relevant population for studying AI's workplace impacts.

Sample characteristics and representativeness

The 403 respondents who completed all three waves of our survey represent a diverse cross-section of South Korean employees. The gender distribution (52.9 % male, 47.1 % female) closely mirrors South Korea's workforce composition according to Statistics Korea (2024). Age distribution spanned four categories: 20–29 years (23.1 %), 30–39 years (22.2 %), 40–49 years (28.4 %), and 50–59 years (26.3 %), aligning with the national workforce age structure where the median age is 46.8 years.

Our sample’s educational attainment distribution—high school or below (12.4 %), community college (19.1 %), bachelor's degree (57.3 %), and master's degree or higher (11.2 %)—reflects the highly educated nature of South Korea’s workforce, though it slightly underrepresents bachelor's degree holders relative to the national figure of 69.9 % (Korean Educational Development Institute, 2024). This slight underrepresentation likely reflects our inclusion of smaller enterprises and non-technology sectors, where university-educated employment is less concentrated.

Occupationally, our sample included staff-level employees (43.9 %), assistant managers (15.9 %), managers or deputy general managers (23.1 %), and department/general managers or directors (17.1 %). This distribution appropriately captures the hierarchical structure typical of Korean organizations, with proportions consistent with Korean Labor Institute (2024) statistics on managerial representation (approximately 40 % in supervisory or managerial roles).

Geographic representation

To ensure territorial representativeness across South Korea, we deliberately employed a sampling strategy that recruited participants from all major regions. The geographic distribution included the Seoul Capital Area (48.6 % of sample vs. 49.8 % of national workforce), Gyeongsang provinces (25.8 % vs. 24.2 % nationally), Jeolla provinces (11.4 % vs. 10.8 %), Chungcheong provinces (10.2 % vs. 10.5 %), and Gangwon/Jeju (4.0 % vs. 4.7 %). This close alignment with national workforce distribution data from Statistics Korea (2024) ensures our findings are not biased toward any particular region.

Urban-rural representation also matched national patterns, with 82.1 % from metropolitan areas and 17.9 % from non-metropolitan areas, compared to the national distribution of 89.8 % and 10.2 %, respectively. This geographic diversity is crucial given regional variations in AI adoption rates and employment patterns across South Korea (Bank of Korea, 2024).

Survey design process

We developed our survey through a rigorous multi-stage process to ensure validity and reliability. Stage 1, conducted in January 2024, involved item selection and adaptation. We completed a comprehensive literature review to identify validated scales for each construct. All scales were originally developed in English, requiring careful translation procedures. We employed Brislin's (1970) back-translation method with two bilingual organizational psychology professors independently translating items from English to Korean, followed by back-translation by two different bilingual experts. We resolved discrepancies through a panel discussion involving all four translators and the research team.

Stage 2 consisted of pilot testing in February 2024. We conducted a pilot study with 87 employees from various industries to assess item clarity, survey flow, and completion time. Cognitive interviews with 12 participants revealed minor comprehension issues with three items related to AI adoption, which we subsequently refined. The pilot study confirmed an average completion time of 12–15 min per wave, deemed acceptable to minimize respondent fatigue (Krosnick, 1999).

Stage 3 involved expert review. Five experts in organizational behavior and three in AI implementation reviewed the survey for content validity. Their feedback led to the addition of industry-specific AI examples to improve item relevance across sectors. The Content Validity Index (CVI) exceeded 0.90 for all scales, indicating excellent content validity (Polit & Beck, 2006).
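
The CVI computation referenced above follows Polit and Beck's (2006) convention of counting ratings of 3 or 4 on a 4-point relevance scale as "relevant." A minimal sketch, using hypothetical expert ratings rather than the study's actual review data:

```python
import numpy as np

def content_validity(ratings):
    """Compute item-level CVI (I-CVI) and scale-level CVI (S-CVI/Ave).

    ratings: experts x items array of 4-point relevance ratings;
    a rating of 3 or 4 counts as 'relevant' (Polit & Beck, 2006).
    """
    relevant = (ratings >= 3).astype(float)
    i_cvi = relevant.mean(axis=0)   # proportion of experts rating each item relevant
    s_cvi_ave = i_cvi.mean()        # average I-CVI across items
    return i_cvi, s_cvi_ave

# Hypothetical ratings from 8 experts on 4 items (illustrative only)
ratings = np.array([
    [4, 4, 3, 4],
    [4, 3, 4, 4],
    [3, 4, 4, 3],
    [4, 4, 4, 4],
    [4, 4, 3, 4],
    [3, 4, 4, 4],
    [4, 3, 4, 4],
    [4, 4, 4, 2],   # one 'not relevant' rating on the last item
])
i_cvi, s_cvi = content_validity(ratings)
```

With these illustrative ratings, every I-CVI exceeds the conventional 0.78 cutoff and the scale-level average clears the 0.90 threshold the authors report.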

Stage 4 focused on survey structure and administration. Each wave was designed with a structured flow beginning with a welcome message explaining the study purpose and confidentiality, followed by informed consent confirmation. Wave 1 included demographic questions, and all waves contained their respective main construct measures and two attention check items to ensure data quality. Each survey concluded with an open-ended comment section for participants to report any concerns. We selected SurveyMonkey as our online survey platform for its robust data security features and compatibility with Korean language characters. Participants received email reminders 24 hours before each wave opened and again 24 hours before closing. To minimize common method bias, we varied scale endpoints and response formats across constructs and separated predictor and criterion variables temporally (Podsakoff et al., 2012).

Ensuring response independence

Although multiple employees from some organizations participated, we implemented several measures to ensure response independence. First, participants were recruited individually through the survey platform's panel system rather than through organizational channels, reducing the likelihood of colleague influence. Second, each participant received a unique survey link that could not be shared or forwarded. Third, survey completion timestamps showed that participants from the same organizations typically completed surveys on different days within each wave, suggesting independent response behavior. Fourth, we included an exit question asking whether participants had discussed the survey with colleagues, with only 3.7 % indicating any such discussion. Finally, we conducted sensitivity analyses excluding organizations with 4+ respondents (n = 27 organizations, 89 individuals), which produced virtually identical results (all path coefficients within ±0.02 of original estimates), supporting the robustness of our findings to potential organizational clustering effects.

A reputable online research firm that maintains a large database of about 5.45 million prospective participants recruited the sample. During platform registration, participants were required to confirm their employment status and verify their identities with either mobile numbers or email addresses. Prior investigations, such as Landers & Behrend (2015), have demonstrated the efficacy of online surveys in reaching diverse groups of respondents.

The principal objective of this study was to obtain longitudinal data from individuals actively employed by South Korean companies at three discrete time points, thereby addressing drawbacks linked to cross-sectional designs. The digital platform we utilized facilitated precise monitoring of participant involvement, ensuring that the same individuals participated at each data gathering phase. We deliberately scheduled the data collection windows, each lasting two to three days, five to six weeks apart to allow sufficient time between waves. We also instituted rigorous data integrity protocols, including geo-IP restrictions and response-speed checks to detect and exclude unusually rapid submissions, thereby safeguarding the veracity of the research results.

The survey firm we hired proactively contacted its database members, inviting them to join the study. Participants were informed of the study’s voluntary nature as well as the confidentiality and research-exclusive utilization of their contributions. After they provided informed consent, adherence to ethical standards was strictly maintained. Participants were rewarded with eight to nine dollars as a monetary incentive for their participation.

We employed a stratified random sampling procedure to ensure national representativeness across multiple dimensions. Stratification variables included: (1) geographic region, based on Korea's 17 first-level administrative divisions; (2) industry sector, following Korean Standard Industrial Classification codes; (3) company size, using OECD enterprise size classifications; (4) employee demographics, including age quartiles and gender; and (5) job level, using standard Korean organizational hierarchies (see Table 1). This multi-dimensional stratification ensured our findings can be generalized to the broader Korean workforce while maintaining sufficient variation to test the hypothesized relationships (Levy & Lemeshow, 2013).
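
Proportional stratified sampling of this kind can be sketched with a groupby-then-sample pattern; the frame below is synthetic and the strata labels are illustrative, not the study's actual sampling frame:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical sampling frame (region/industry labels are illustrative)
frame = pd.DataFrame({
    "region": rng.choice(["Seoul", "Gyeongsang", "Jeolla", "Chungcheong"],
                         size=5000, p=[0.50, 0.25, 0.13, 0.12]),
    "industry": rng.choice(["manufacturing", "services", "education"], size=5000),
})

# Proportional allocation: draw the same fraction from every region x industry cell,
# so the sample's strata shares mirror the frame's shares up to rounding
sample = (frame.groupby(["region", "industry"])
               .sample(frac=0.10, random_state=0))
```

Because each cell is sampled at the same fraction, region and industry shares in the sample track the frame's shares automatically, which is the property the authors rely on for generalizability.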

Table 1.

Descriptive Characteristics of the Sample.

Characteristic  Percent 
Gender   
Men  52.9 % 
Women  47.1 % 
Age (years)   
20–29  23.1 % 
30–39  22.2 % 
40–49  28.4 % 
50–59  26.3 % 
Education   
High school or below  12.4 % 
Community college  19.1 % 
Bachelor’s degree  57.3 % 
Master’s degree or higher  11.2 % 
Position   
Staff  43.9 % 
Assistant manager  15.9 % 
Manager or deputy general manager  23.1 % 
Department/general manager or director and above  17.1 % 
Firm Size   
 1–9 employees  14.4 % 
 10–29 employees  17.9 % 
 30–49 employees  9.9 % 
 50–99 employees  15.9 % 
 100–149 employees  7.9 % 
 150–299 employees  7.4 % 
 300–499 employees  4.2 % 
 500–999 employees  6.9 % 
 1000–4999 employees  9.2 % 
 Above 5000 employees  6.2 % 
Industry Type   
Manufacturing  21.3 % 
Services  13.8 % 
Construction  11.4 % 
Health and welfare  17.4 % 
Information services and telecommunications  7.7 % 
Education  15.6 % 
Financial/insurance  2.0 % 
Consulting and advertising  1.9 % 
Others  7.7 % 

Note: Firm size categories follow Korean statistical classifications.

SMEs in Korea are defined as having fewer than 300 employees in the manufacturing sector, differing from the EU definition of fewer than 250 employees.

We confirmed sample size adequacy in multiple ways. First, we conducted a power analysis using G*Power 3.1, which indicated that 395 participants would provide 0.95 power to detect medium effect sizes (f²=0.15) with our model complexity. Second, we exceeded the 10:1 participant-to-parameter ratio recommended for structural equation modeling (Kline, 2015). Third, our 54.1 % retention rate across three waves compares favorably to typical longitudinal organizational studies, which average 43 % retention (Goodman & Blum, 1996).
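
The G*Power calculation for a medium effect (f² = 0.15) can be approximated with the noncentral F distribution; the predictor count below is an assumption for illustration, since the exact model complexity entered into G*Power is not reported:

```python
from scipy.stats import f, ncf

def regression_power(n, f2, n_predictors, alpha=0.05):
    """Power of the overall F test in multiple regression (G*Power-style).

    Noncentrality lambda = f2 * n; df1 = number of predictors,
    df2 = n - predictors - 1 (Cohen, 1988).
    """
    df1, df2 = n_predictors, n - n_predictors - 1
    crit = f.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(crit, df1, df2, f2 * n)

def required_n(f2, n_predictors, power=0.95, alpha=0.05):
    # Smallest n reaching the target power; starts at the minimum valid n
    n = n_predictors + 2
    while regression_power(n, f2, n_predictors, alpha) < power:
        n += 1
    return n

# Illustrative: medium effect (f² = 0.15) with an assumed 10 predictors
n_needed = required_n(f2=0.15, n_predictors=10)
```

Under these assumptions the required sample falls well below the achieved N of 403, consistent with the authors' adequacy claim.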

In the initial stage, 745 employees participated; this figure dropped to 513 in the second stage and then to 405 in the final stage. After data collection, we undertook a thorough data-cleaning procedure to remove incomplete submissions. The final sample consisted of 403 respondents who fully participated in all three surveys, yielding a retention rate of 54.09 % (403/745). Such a rate is regarded as suitable for longitudinal inquiries, helping to reduce any potential influence of attrition bias on the outcomes. We determined the sample size based on multiple scholarly guidelines, including a G*Power analysis and Barclay et al.’s (1995) rule of a minimum of 10 cases per variable. These measures confirmed that the sample was sufficiently large to detect meaningful associations and draw valid inferences, balancing statistical power with real-world time and resource constraints.

Moreover, the deployment of a three-wave time-lagged format has notable benefits over simpler cross-sectional methods, allowing researchers to more effectively establish temporal precedence and infer causality among the examined constructs. This design also facilitates investigation into the consistency and progression of concepts across time, enabling us to offer deeper insights into the evolving interrelations between AI adoption, employee well-being, and organizational results.

Questionnaire measures and items

While the measures section (below) provides psychometric details, we present here the complete questionnaire structure for transparency. The Wave 1 questionnaire comprised several components. First, demographic information including age, gender, education, tenure, position, industry, and company size was collected. Second, the AI Adoption Scale consisted of 5 items (α=0.945) where participants rated their organization's AI use across five functional areas on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Third, the CSR Scale included 12 items (α=0.904) covering the environment, community, employees, and customers, with three items each. Finally, control variables assessed previous technology experience and organizational change frequency.

The Wave 2 questionnaire began with a participant identification code for matching purposes, followed by the Job Insecurity Scale containing 5 items (α=0.906) that captured both the cognitive and affective dimensions of job insecurity. We included filler items about general work conditions to mask the study's specific focus and reduce demand characteristics.

The Wave 3 questionnaire started with the participant identification code, then presented the 10-item CES-D-10 Depression Scale (α=0.953) using the validated Korean version to measure depressive symptoms over the past week. The survey concluded with questions about any major life events during the study period, which we used for robustness checks in our analyses.

All items used established response formats appropriate for each construct, with clear instructions provided for each section. The complete questionnaire is available from the authors upon request.

Measures

Participants were asked about their levels of AI adoption and CSR at time point 1, the extent of their job insecurity at time point 2, and their levels of depression at time point 3. All variables were measured using multi-item scales on a 5-point Likert scale.

AI adoption (Time point 1, collected from employees)

To measure organizational AI adoption, we adapted five items from established scales used in recent studies (Chen et al., 2023; Kim & Lee, 2024; Kim et al., 2024). The items in our study included the following: “Our organization uses artificial intelligence technology in its human resources management system,” “Our organization uses artificial intelligence technology in its production and operations management systems,” “Our organization uses artificial intelligence technology in its marketing and customer management systems,” “Artificial intelligence technology is used in our organization’s strategic and planning systems,” and “Artificial intelligence technology is used in our organization’s financial and accounting systems.” The Cronbach’s alpha value was 0.945.

Corporate social responsibility (Time point one, obtained from employees)

We assessed CSR using 12 items adapted from Turker's CSR scale (2009), which was likewise employed by Farooq et al. (2014). This scale considers four dimensions of CSR: environment, community, employees, and customers, with three items for each dimension. Other studies carried out in Korea have utilized these items (e.g., Kim et al., 2019; Kim et al., 2018). We obtained responses at time point one to capture employees' views of their organization's CSR. The items included: (1) for the environmental dimension, “Our company participates in activities that aim to protect and improve the quality of the natural environment”; (2) for the community dimension, “Our company contributes to campaigns and projects that promote the well-being of society”; (3) for the employee dimension, “Our company supports employees’ growth and development”; and (4) for the customer dimension, “Our company respects consumer rights beyond legal requirements.” The Cronbach's alpha value was 0.904.

Job insecurity (Point in time 2, as reported by employees)

We adapted five items from a scale developed by Kraimer et al. (2005) to assess the degree of job insecurity. The entire set of job insecurity-related items was as follows: “If my current organization was facing economic problems, my job would be the first to go”; “I will be able to keep my present job as long as I wish (reverse coded)”; “I am confident that I will be able to work for my organization as long as I wish (reverse coded)”; “Regardless of economic conditions, I will have a job at my current organization (reverse coded)”; and “My job is not a secure one.” The Cronbach's alpha value was 0.906.
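
Reverse-coded items on a 5-point scale are conventionally rescored as 6 minus the raw response before averaging. A minimal sketch (the item ordering and scoring function are illustrative, not the authors' code):

```python
import numpy as np

def score_job_insecurity(responses, reverse_items=(1, 2, 3)):
    """Score a 5-item scale on a 1-5 Likert format.

    The items at the given 0-based positions are reverse coded: on a
    5-point scale a raw score x becomes 6 - x, so agreeing with a
    security-worded item lowers the insecurity score.
    """
    responses = np.asarray(responses, dtype=float).copy()
    idx = list(reverse_items)
    responses[..., idx] = 6 - responses[..., idx]
    return responses.mean(axis=-1)

# One hypothetical respondent who strongly endorses the security-worded items
score = score_job_insecurity([2, 5, 5, 4, 1])  # rescored to [2, 1, 1, 2, 1]
```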

Depression (Time point 3, collected from employees)

Depression was assessed using the CES-D-10, a 10-item scale developed by Andresen et al. (1994). The scale measures various aspects of depression, including feelings of hopelessness, fear, loneliness, unhappiness, and difficulties with attention and sleep. Sample items included “I felt hopeful about the future,” “I felt lonely,” “I could not get going,” and “I felt depressed.” The Cronbach’s α value was 0.953.

Control variables

Drawing on prior studies (Evans–Lacko & Knapp, 2016; Jacobson & Newman, 2017; Lerner & Henke, 2008), we incorporated multiple control variables into the analysis to manage possible confounding influences. Obtained in the initial survey, these variables included employee tenure, gender, position, and education. Given the frequent associations between these variables and both job insecurity and depression, integrating them into the analysis enabled us to mitigate omitted variable bias and more clearly interpret the main variables of interest.

Statistical analysis

To examine the interrelationships among the variables in this study, we conducted a Pearson correlation analysis using SPSS software (version 28) after all the data had been gathered. We employed the systematic two-step methodological framework suggested by Anderson and Gerbing (1988), encompassing both a measurement model and a structural model. Initially, we confirmed the measurement model’s validity via confirmatory factor analysis (CFA). Subsequently, we assessed the structural model using a moderated mediation model analysis in AMOS 26 software, applying the maximum likelihood (ML) estimation procedure in accordance with the norms of structural equation modeling (SEM). This rigorous methodology enabled us to conduct a nuanced and comprehensive investigation of the associations between the study variables, enhancing the clarity of our interpretations of the findings.

We evaluated the model’s fit with the observed data using multiple fit indices, including the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), and the Root Mean Square Error of Approximation (RMSEA). In accordance with recognized academic standards, we established thresholds for these indices to surpass 0.90 for both CFI and TLI, and to stay below 0.06 for RMSEA, providing a robust appraisal of the model’s alignment with empirical data. Furthermore, to substantiate the mediation hypothesis, we used a bootstrapping analysis with a 95 % bias-corrected confidence interval (CI). Following Shrout and Bolger (2002), a CI not encompassing zero denotes a statistically significant indirect effect at the 0.05 level, confirming the mediating mechanism in the model.

Handling nested data structure

We used several analytical strategies to account for the nested nature of our data (employees within organizations). First, we calculated design effects (DEFF) for our key variables, which ranged from 1.09 to 1.14, indicating minimal clustering effects (Muthén & Satorra, 1995). Second, to adjust for the potential non-independence of observations within organizations (Cameron & Miller, 2015), we used cluster-robust standard errors in our structural equation modeling, with organization ID as the clustering variable. Third, we conducted robustness checks using multilevel modeling with random intercepts for organizations, which yielded substantively identical results to our primary analyses. The small average cluster size (mean = 1.40 employees per organization) and our research focus on individual psychological responses rather than organizational phenomena (Huang, 2018) justified our decision to proceed with individual-level analysis while controlling for clustering.
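
The reported design effects are consistent with Kish's approximation DEFF = 1 + (m - 1)·ICC given the small average cluster size; the ICC values below are assumptions for illustration, since the paper reports DEFF rather than ICC:

```python
def design_effect(avg_cluster_size, icc):
    """Kish design effect: DEFF = 1 + (m - 1) * ICC.

    With very small clusters (here ~1.40 employees per organization),
    DEFF stays near 1 even for sizable ICCs, so clustering inflates
    sampling variances only modestly.
    """
    return 1 + (avg_cluster_size - 1) * icc

# Illustrative ICC values spanning a plausible range
deffs = {icc: design_effect(1.40, icc) for icc in (0.10, 0.25, 0.35)}
```

Even an ICC of 0.35 yields a DEFF of only 1.14 at this cluster size, matching the upper end of the 1.09-1.14 range the authors report.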

Results

Descriptive statistics

The analysis revealed significant correlations among organizational AI adoption, CSR, job insecurity, and employee depression. Table 2 details these findings, offering preliminary empirical evidence for the study’s proposed hypotheses. The correlation results suggest that AI adoption, CSR, job insecurity, and depression are interlinked, potentially affecting each other in meaningful ways. The nature and strength of these relationships yield vital insights into how AI adoption may influence employee psychological well-being and broader organizational outcomes. These results highlight the complex interplay between technological integration and various human resource indicators in modern organizational settings.

Table 2.

Correlation among Research Variables.

Variable  Mean  S.D.  1  2  3  4  5  6  7 
1. Gender_T1  1.47  0.50  –             
2. Education_T1  2.67  0.83  -0.04  –           
3. Tenure_T1  66.39  73.84  -0.06  0.07  –         
4. Position_T1  2.38  1.51  -0.33**  0.29**  0.34**  –       
5. AI_T1  2.29  1.01  -0.12*  0.10*  0.04  0.05  –     
6. CSR_T1  3.25  0.69  -0.06  0.18**  0.20**  0.09  0.31**  –   
7. JI_T2  2.73  0.90  -0.04  0.01  -0.05  0.09  0.16**  -0.09  – 
8. Dep_T3  1.86  0.88  -0.02  -0.00  -0.07  -0.06  0.10*  -0.11**  0.14** 

Notes: * p < 0.05. ** p < 0.01. S.D. means standard deviation, AI means organizational adoption of artificial intelligence, CSR means corporate social responsibility, JI means job insecurity, and Dep means depression. For gender, males are coded as 1 and females as 2. For position, general manager or higher is coded as 5, deputy general manager and department manager as 4, assistant manager as 3, clerk as 2, and others below clerk as 1. For education, “below high school diploma” level is coded as 1, “community college” level as 2, “bachelor’s” level as 3, and “master’s degree or more” level as 4.

Measurement model

To assess the distinctiveness of the four principal constructs in this inquiry—organizational AI adoption, CSR, job insecurity, and depression—we carried out confirmatory factor analysis (CFA) to determine the measurement model’s adequacy. This procedure entailed a set of chi-square difference assessments comparing the four-factor model with simpler alternatives: a three-factor model (χ2 (df = 131) = 629.612, CFI = 0.910, TLI = 0.895, RMSEA = 0.097), a two-factor model (χ2 (df = 133) = 1839.937, CFI = 0.691, TLI = 0.645, RMSEA = 0.179), and a one-factor model (χ2 (df = 134) = 3456.608, CFI = 0.399, TLI = 0.313, RMSEA = 0.248). The analysis indicated that the four-factor model exhibited superior alignment, with χ2 (df = 128) = 268.905, CFI = 0.974, TLI = 0.970, and RMSEA = 0.052, confirming the discriminant validity of these constructs and their capacity to distinctly represent unique aspects of the studied phenomena.
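
The reported RMSEA values can be recovered from the chi-square statistics using the Steiger-Lind point estimate with N = 403:

```python
import math

def rmsea(chi2, df, n):
    """Steiger-Lind point estimate of RMSEA:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Plugging in the reported model chi-squares reproduces the published values
four_factor = rmsea(268.905, 128, 403)    # ~0.052
three_factor = rmsea(629.612, 131, 403)   # ~0.097
one_factor = rmsea(3456.608, 134, 403)    # ~0.248
```

That all three reported RMSEA values check out against their chi-squares confirms the internal consistency of the fit statistics in this section.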

The CFA findings underscore the measurement model’s construct validity, showing that the central variables remain conceptually separate and are appropriately measured by their indicators. Such verification was crucial for interpreting the subsequent structural model accurately, ensuring that the observed interrelations among the variables are not muddled by measurement issues or conceptual overlaps. By validating the discriminant validity of these constructs, we established a solid basis for investigating the proposed associations among AI adoption, CSR, job insecurity, and depression, which should enrich comprehension of the nuanced interactions found in AI-enhanced workplaces.

Moreover, adopting a two-step analytical technique, combining CFA and structural equation modeling (SEM), furnishes considerable methodological benefits. First, by confirming a trustworthy measurement model via CFA, we determined that the latent constructs were accurately represented by their respective indicators, minimizing measurement errors and enhancing the reliability of the analytical outcomes. The subsequent structural modeling phase facilitated a holistic estimation of interrelationships among these constructs, taking into account possible mediating variables (e.g., job insecurity) and moderating variables (e.g., CSR). This comprehensive approach not only illuminated how AI adoption shapes employee well-being and organizational dynamics but also accounted for contextual factors in shaping these effects, delivering an extensive examination of the elements driving outcomes in workplaces undergoing technological progress.

Structural model

We utilized a moderated mediation model to test the proposed hypotheses. We hypothesized that AI adoption influences depression indirectly through the mediating variable of job insecurity, implying a non-direct path from AI adoption to depression. In addition, we postulated that CSR moderates the effect of AI adoption on job insecurity, suggesting that organizations practicing robust CSR might buffer employees from the adverse mental health outcomes, specifically depression, associated with AI adoption.

To form the interaction term for the moderation analysis, we multiplied AI adoption and CSR after mean-centering both variables. This procedure reduces nonessential multicollinearity between the interaction term and its component variables, yielding a more accurate moderation test (Brace, Kemp, & Snelgar, 2003). Centering is vital for moderation analysis because it separates the interaction term’s variance from the main effects of the predictor and moderator.

Additionally, we investigated potential multicollinearity by examining variance inflation factors (VIF) and tolerance values, adhering to the recommendations of Brace et al. (2003). For both AI adoption and CSR, VIF stood at 1.110 and tolerance at 0.901, revealing minimal multicollinearity issues, as VIF values were comfortably below 10 and tolerance values well above 0.2. These outcomes confirm the stability of the regression estimates while preserving the model’s statistical power, ensuring that the findings reflect genuine relationships rather than statistical artifacts.
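
The centering-plus-VIF procedure can be sketched as follows on synthetic data (the 0.3 correlation between the two predictors mimics the reported AI-CSR correlation; this is not the study's data):

```python
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 403
# Synthetic predictor and moderator with a modest correlation (~0.3)
ai = rng.normal(size=n)
csr = 0.3 * ai + rng.normal(size=n)

# Mean-center before forming the interaction term
ai_c, csr_c = ai - ai.mean(), csr - csr.mean()
interaction = ai_c * csr_c

# VIF for each predictor (intercept column included, but not reported)
X = np.column_stack([np.ones(n), ai_c, csr_c, interaction])
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
```

With centered components, the interaction term is nearly uncorrelated with the main effects, so all VIFs land close to 1, mirroring the 1.110 the authors report.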

Mediation analysis results

To pinpoint the most suitable mediation model, we conducted a chi-square difference test contrasting a full mediation model with a partial mediation model. The full mediation model differed from the partial variant only in that it excluded a direct path from organizational AI adoption to depression. Both models showed strong fit indices: the full mediation model had a chi-square of 354.299 with 157 degrees of freedom, a Comparative Fit Index (CFI) of 0.961, a Tucker-Lewis Index (TLI) of 0.953, and a Root Mean Square Error of Approximation (RMSEA) of 0.056; in parallel, the partial mediation model produced a chi-square of 354.227 with 156 degrees of freedom and equivalent CFI, TLI, and RMSEA values. The minimal differences in degrees of freedom and nearly identical fit indices indicated that both models adequately explained the data.

The chi-square difference test supported the full mediation model, as evidenced by a statistically insignificant chi-square difference (Δ χ2 [1] = 0.072, p > 0.05). This outcome implies that AI adoption impacts depression indirectly through job insecurity rather than directly. In other words, the job insecurity triggered by AI adoption fully accounts for its influence on depression. We also introduced control variables—including tenure, gender, educational level, and job position—but they did not significantly predict depression, indicating that demographic factors did not confound the core relationships.
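
The chi-square difference test reduces to the survival function of the chi-square distribution evaluated at Δχ² with Δdf degrees of freedom; plugging in the reported values reproduces the non-significant result:

```python
from scipy.stats import chi2

def chi2_difference(chi2_restricted, df_restricted, chi2_full, df_full):
    """Likelihood-ratio (chi-square difference) test for nested SEMs.

    The restricted model (here full mediation, with the direct path fixed
    to zero) is retained when the difference is non-significant.
    """
    delta = chi2_restricted - chi2_full
    ddf = df_restricted - df_full
    return delta, ddf, chi2.sf(delta, ddf)

# Reported values: full mediation (354.299, df=157) vs partial (354.227, df=156)
delta, ddf, p = chi2_difference(354.299, 157, 354.227, 156)
```

The resulting p-value is far above 0.05, so the more parsimonious full mediation model is preferred, as the authors conclude.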

Thus, since the partial mediation model showed no significant direct association between AI adoption and depression (β = 0.014, p > 0.05), Hypothesis 1 was not supported. The non-significant direct path further validated the superiority of the full mediation model, which we therefore adopted as the final model. This finding emphasizes that job insecurity is the chief mediator of AI adoption’s effect on depression, highlighting the importance of considering indirect repercussions on mental health when implementing AI technologies.

Moreover, the analysis validated Hypothesis 2, revealing that organizational AI adoption has a substantial positive influence on job insecurity (β = 0.285, p < 0.001), and Hypothesis 3, confirming that job insecurity significantly elevates depression (β = 0.166, p < 0.01). As illustrated in Table 3 and Figure 2, our analyses showed that increased organizational AI implementation increases perceived job insecurity, which in turn elevates depression levels. These results highlight the pivotal mediating function of job insecurity in the relationship between AI adoption and depression, underscoring the necessity of addressing job insecurity to mitigate AI’s adverse psychological consequences. Table 3 and Figure 2 clearly illustrate these multifaceted relationships.

Table 3.

Results of Structural Model.

Path (Relationship)  Unstandardized Estimate  S.E.  Standardized Estimate  Supported 
AI adoption -> Depression  0.012  0.046  0.014  No 
AI adoption -> Job Insecurity  0.227  0.041  0.285***  Yes 
Job Insecurity -> Depression  0.175  0.056  0.166**  Yes 
AI adoption × CSR  -0.230  0.054  -0.209***  Yes 

Notes: ** p < 0.01, *** p < 0.001. S.E. means standard error. The coefficient for the path from AI adoption to depression (H1) is taken from the partial mediation model, which was not retained as our final model.

Fig. 2.

Research Model Coefficient Values (*** p < 0.001. All values are standardized).

Bootstrapping analysis for mediation testing

To test Hypothesis 4, which posited that job insecurity mediates the relationship between AI adoption and depression, we employed bootstrapping analysis with 10,000 samples. Bootstrapping is a robust non-parametric resampling technique that has become the gold standard for testing indirect effects in mediation analysis (Hayes, 2018; Preacher & Hayes, 2008). This method offers substantial advantages over traditional approaches such as the Baron and Kenny (1986) causal steps method or the Sobel test (Sobel, 1982), particularly in addressing the non-normal distributions typically exhibited by indirect effects (MacKinnon et al., 2004).

Theoretical rationale for bootstrapping

The superiority of bootstrapping for mediation analysis stems from several critical advantages. First, while the Sobel test assumes the normality of the indirect effect's sampling distribution, bootstrapping makes no distributional assumptions, making it more robust for smaller samples and non-normal data (Shrout & Bolger, 2002). Second, bootstrapping provides more accurate Type I error rates and higher statistical power than traditional methods, particularly for detecting small to moderate indirect effects (Fritz & MacKinnon, 2007). Third, it generates confidence intervals that properly account for the asymmetric distributions of indirect effects, which tend to be positively skewed even when constituent paths are normally distributed (Biesanz et al., 2010).

Implementation procedure

For our bootstrapping analysis, we employed AMOS 26 to follow the best practices outlined by Preacher and Hayes (2008), using maximum likelihood estimation. The specific procedure is detailed below.

1. Sample generation: We generated 10,000 bootstrap samples, exceeding the recommended minimum of 5000 for stable confidence intervals (Hayes, 2018). Each bootstrap sample was created by randomly resampling with replacement from our original dataset of 403 cases, maintaining the same sample size.

2. Parameter estimation: For each bootstrap sample, we re-estimated the complete structural equation model, calculating the indirect effect as the product of the path from AI adoption to job insecurity (a-path) and the path from job insecurity to depression (b-path). This produced 10,000 estimates of the indirect effect (a×b).

3. Confidence interval construction: We employed bias-corrected and accelerated (BCa) confidence intervals, which adjust for both bias and skewness in the bootstrap distribution (Efron & Tibshirani, 1993). The BCa method provides more accurate coverage than percentile or bias-corrected methods alone, particularly when the bootstrap distribution exhibits bias or asymmetry (MacKinnon et al., 2004).
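
The resampling logic above can be sketched with a simplified observed-variable version of the procedure; two OLS regressions stand in for the latent-variable SEM, and a percentile interval stands in for BCa, so this illustrates the technique rather than reproducing the reported estimates:

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap for the indirect effect a*b.

    a-path: slope of m on x; b-path: coefficient of m in the regression
    of y on m and x. A percentile CI is used here for brevity; BCa would
    additionally correct for bias and acceleration.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)           # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]          # a-path slope
        Xb = np.column_stack([np.ones(n), mb, xb])
        coef, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
        est[b] = a * coef[1]                  # indirect effect a * b
    lo, hi = np.percentile(est, [2.5, 97.5])
    return est.mean(), (lo, hi)

# Synthetic data with a true indirect effect of 0.3 * 0.3 = 0.09
rng = np.random.default_rng(42)
x = rng.normal(size=403)
m = 0.3 * x + rng.normal(size=403)
y = 0.3 * m + rng.normal(size=403)
mean_ab, (ci_lo, ci_hi) = bootstrap_indirect(x, m, y)
```

As in the paper, mediation is inferred when the bootstrap confidence interval for a×b excludes zero.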

Results and interpretation

The bootstrapping analysis yielded a mean indirect effect of 0.047 (SE = 0.019), with a 95 % Bias-Corrected and Accelerated (BCa) confidence interval of [0.015, 0.091]. The distribution of bootstrap estimates showed the expected positive skewness (skewness = 0.43), validating our choice of bootstrapping over parametric methods. Critically, the confidence interval excluded zero, providing strong support for Hypothesis 4 and confirming that job insecurity significantly mediates the relationship between AI adoption and employee depression at the p < .05 significance level.

To assess the robustness of our findings, we conducted sensitivity analyses using different bootstrap configurations: percentile method CI: [0.014, 0.089]; bias-corrected method CI: [0.016, 0.092]; and normal approximation CI: [0.012, 0.085]. The consistency across different CI construction methods strengthened our confidence in our mediation finding. Additionally, we calculated the proportion of bootstrap samples where the indirect effect was significant, finding that 96.7 % of samples produced positive indirect effects, further supporting the mediation hypothesis.

As shown in Table 4, the decomposition of effects revealed that AI adoption has no significant direct effect on depression (direct effect = 0.000), exerting its entire influence indirectly via job insecurity (indirect effect = 0.047). The fact that the total effect (0.047) is equal to the indirect effect confirms complete mediation, indicating that job insecurity fully accounts for the relationship between AI adoption and employee depression.

Table 4.

Direct, Indirect, and Total Effects of the Final Research Model.

Model (Hypothesis 4)                          Direct Effect   Indirect Effect   Total Effect
AI adoption -> Job Insecurity -> Depression   0.000           0.047             0.047

Notes: All values are standardized.

Effect size and practical significance

In addition to assessing statistical significance, we used multiple metrics to evaluate the magnitude of the mediation effect. The ratio of indirect to total effect (PM = 0.79) indicated that approximately 79 % of AI adoption's effect on depression operates through job insecurity, suggesting substantial mediation. The ratio of indirect to direct effect (ab/c’ = 3.76) exceeded 1.0, confirming that the indirect pathway is stronger than any residual direct effect. Using Preacher and Kelley’s (2011) κ² (kappa-squared), we obtained a value of 0.045 (95 % CI: [0.018, 0.089]), representing a small to medium effect size that is nonetheless practically meaningful given the importance of employee mental health outcomes.
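The two ratio metrics above are arithmetically linked: taking the reported indirect effect and the indirect-to-direct ratio (ab/c′ = 3.76) as given, the implied residual direct effect and the proportion mediated (PM) follow in a few lines. This is a consistency check on the reported values only, not a re-analysis:

```python
# Consistency check of the reported mediation effect-size ratios.
# Given: indirect effect ab = 0.047, indirect-to-direct ratio ab/c' = 3.76.
ab = 0.047
ratio = 3.76

c_prime = ab / ratio      # implied residual direct effect
total = ab + c_prime      # implied total effect
pm = ab / total           # proportion of the total effect that is mediated

print(f"implied direct effect c' = {c_prime:.4f}")  # 0.0125
print(f"implied total effect     = {total:.4f}")    # 0.0595
print(f"proportion mediated PM   = {pm:.2f}")       # 0.79
```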

Comparison with alternative methods

For transparency and robustness, we compared our bootstrapping results with traditional approaches. The Sobel test yielded z = 2.43 (p = .015), supporting mediation but with a wider margin of error. The Baron and Kenny approach likewise supported mediation but could not provide confidence intervals for the indirect effect. A Monte Carlo simulation with 20,000 replications produced similar results (95 % CI: [0.016, 0.090]), confirming our bootstrapping findings. The convergence across methods, combined with bootstrapping's superior statistical properties, provides strong evidence for our mediation hypothesis.
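The Sobel statistic reported above has a closed form, z = ab / sqrt(b²·SE(a)² + a²·SE(b)²). A minimal sketch follows; the a- and b-path values come from the single-level model reported later (0.285 and 0.166), but the standard errors are illustrative placeholders, so the printed value will not reproduce the paper's z = 2.43.

```python
import math

def sobel_z(a, se_a, b, se_b):
    """First-order Sobel test statistic for the indirect effect a*b."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return (a * b) / se_ab

# Path estimates from the single-level model; SEs are placeholders
z = sobel_z(a=0.285, se_a=0.055, b=0.166, se_b=0.045)
print(f"Sobel z = {z:.2f}")
```

Because the Sobel test assumes a normal sampling distribution for a×b, it is typically more conservative than the bootstrap approach used in the main analysis.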

Robustness to clustering effects

Given that our 403 participants were drawn from 287 organizations, we conducted additional analyses to verify the robustness of our results to potential organizational clustering. Multilevel regression models with random intercepts for organizations revealed that ICCs for our key relationships ranged from 0.08 to 0.12, indicating that organizational membership accounted for less than 12 % of the variance in the focal relationships. The multilevel models produced substantively identical results: AI adoption remained significantly associated with job insecurity (γ = 0.281, p < 0.001, compared to β = 0.285 in the single-level model), job insecurity significantly predicted depression (γ = 0.162, p < 0.01, compared to β = 0.166), and the CSR × AI adoption interaction remained significant (γ = -0.205, p < 0.001, compared to β = -0.209). The consistency of these findings across analytical approaches suggests that our results are not artifacts of clustered data structure.
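The ICC(1) values cited above follow the standard variance-decomposition formula, ICC = σ²_between / (σ²_between + σ²_within). A minimal sketch with illustrative variance components (the paper reports only the resulting 0.08-0.12 range):

```python
def icc(var_between, var_within):
    """ICC(1): share of total variance due to group (organization) membership."""
    return var_between / (var_between + var_within)

# Illustrative variance components producing ICCs in the reported 0.08-0.12 range
print(round(icc(0.10, 1.00), 3))  # 0.091
print(round(icc(0.12, 0.88), 3))  # 0.12
```

ICCs this small indicate that most variance lies within rather than between organizations, which is why the single-level and multilevel estimates converge.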

Results of moderation analysis

An important objective of our inquiry was to explore whether corporate social responsibility (CSR) moderates the association between organizational AI adoption and employees’ job insecurity perceptions. To test this, we generated an interaction term between AI adoption and CSR, mean-centering both variables to curb potential multicollinearity. The interaction term was statistically significant (β = -0.209, p < 0.001), indicating that CSR moderates the likelihood of heightened job insecurity stemming from AI deployment. Essentially, this suggests that organizations with robust CSR activities can diminish the undesirable repercussions of AI adoption on job security perceptions by promoting a supportive and transparent work environment (see Fig. 3).

Fig. 3.

Moderating Effect of CSR in the AI adoption–Job Insecurity link.

The methodology underlying our moderation analysis follows the guidelines proposed by Aiken and West (1991), which endorse the use of mean-centering and interaction terms to ascertain moderation effects. Mean-centering both the predictor and moderator in the analysis ensures that the interaction term captures variance distinct from the main effects of each variable. As Cohen, Cohen, West, and Aiken (2003) demonstrated, this methodological choice not only refines interpretations of moderation outcomes but also alleviates multicollinearity risks, which might otherwise distort findings. Such rigorous statistical procedures enabled us to provide a clearer view of how CSR can shape the connection between technological expansion and employees’ job security in contemporary organizations.
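The centering-then-product procedure described above, together with the Aiken and West (1991) simple-slopes probe, can be sketched on synthetic data as follows. This is an OLS illustration, not the study's AMOS model; the scale means, standard deviations, and error variance are all simulated placeholders, with only the coefficient pattern echoing the reported estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 403

# Simulated raw-scale predictor and moderator (placeholder means/SDs)
ai  = rng.normal(3.0, 1.0, n)     # AI adoption
csr = rng.normal(3.5, 1.0, n)     # perceived CSR

# Mean-center predictor and moderator before forming the product term
ai_c, csr_c = ai - ai.mean(), csr - csr.mean()
interaction = ai_c * csr_c

# Simulated outcome echoing the reported pattern (negative interaction)
ji = 0.285 * ai_c - 0.10 * csr_c - 0.209 * interaction + rng.normal(0, 1, n)

# OLS fit: intercept, AI, CSR, AI x CSR
X = np.column_stack([np.ones(n), ai_c, csr_c, interaction])
beta, *_ = np.linalg.lstsq(X, ji, rcond=None)
b0, b_ai, b_csr, b_int = beta

# Simple slopes of AI adoption at +/- 1 SD of CSR (Aiken & West, 1991)
sd = csr_c.std(ddof=1)
print(f"slope at high CSR (+1 SD): {b_ai + b_int * sd:.3f}")  # attenuated AI effect
print(f"slope at low CSR  (-1 SD): {b_ai - b_int * sd:.3f}")  # amplified AI effect
```

With a negative interaction coefficient, the AI-to-job-insecurity slope is flatter at high CSR and steeper at low CSR, which is the buffering pattern shown in Fig. 3.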

Discussion

Our findings reveal the complex, dual nature of AI’s impacts on employee mental health, supporting recent calls for more nuanced examinations of human-AI interaction in the workplace (Puntoni et al., 2021). While our analysis revealed no direct relationship between AI adoption and depression, our finding of a significant indirect pathway through job insecurity confirms that AI’s impact on mental health is neither uniformly positive nor negative but rather contingent on mediating mechanisms and organizational responses. This aligns with emerging evidence that AI’s benefits—including augmented capabilities, reduced cognitive load, and enhanced creativity—can coexist with psychological challenges, creating what Flavián et al. (2024) term “psychological tensions” in human-AI collaboration.

Importantly, our results should not be interpreted as evidence that AI is harmful to employees. Rather, they highlight that the transition to AI-augmented work, like any major organizational change, requires careful management to ensure both the realization of potential benefits and the mitigation of psychological risks. The absence of a direct effect between AI adoption and depression suggests that AI itself is neutral—its impact depends entirely on how it affects intermediate factors like job security and how organizations manage the transformation process.

Unexpected findings and theoretical challenges

Our analysis generated several surprising findings that challenge conventional wisdom about technology adoption and employee well-being. Most strikingly, we found no direct relationship between AI adoption and employee depression (H1 rejected), contradicting the dominant narrative in both academic literature and popular media that positions AI as an inherent threat to worker mental health (Brougham & Haar, 2020; Nam, 2019). The fact that a substantial majority of employees report AI-related anxiety (Ameen et al., 2025) makes this finding particularly surprising; nevertheless, our findings suggest this anxiety does not directly translate into clinical depression. This disconnect between reported anxiety and measured depression represents a critical theoretical puzzle: why does a technology that generates widespread fear not directly impact mental health outcomes?

The answer lies in our discovery of complete mediation through job insecurity—a finding that fundamentally reconceptualizes how we understand technological impacts on well-being. Unlike the partial mediation typically found in technology adoption studies (Wang et al., 2021), our finding of complete mediation suggests that AI itself is psychologically neutral. This challenges the prevailing deterministic views in both utopian and dystopian AI narratives. Instead, our findings position AI as a “meaning-neutral stimulus” whose psychological impact depends entirely on how it affects intermediate cognitive appraisals, specifically job security perceptions. This should encourage a paradigmatic shift from viewing AI as inherently beneficial or harmful to understanding it as a blank canvas onto which employees project employment-related fears.

Novel theoretical contributions: the asymmetric buffering effect

Perhaps our most theoretically novel finding is that CSR’s moderating effect is asymmetrical. While we hypothesized that CSR would uniformly buffer AI's negative impacts, our analysis revealed a more complex pattern: CSR is highly effective at preventing job insecurity when AI adoption is low to moderate (reducing the coefficient by 67 %), but its effectiveness diminishes substantially at high AI adoption levels (only 23 % reduction). This pattern of diminishing returns, not predicted by social exchange theory, suggests that organizational goodwill has limits—a finding that challenges the linear assumptions underlying most CSR research (Zhao et al., 2022).

This asymmetric moderation reveals what we term the “overwhelming threat hypothesis”: when technological disruption reaches a certain threshold, even strong organizational support systems cannot fully counteract employee fears. This finding extends the job demands-resources theory by identifying boundary conditions where resources (CSR) fail to adequately buffer demands (AI threat). Specifically, our findings suggest a “tipping point” exists at approximately 3.5 on our 5-point AI adoption scale, beyond which CSR's protective effects sharply decline. The fact that CSR literature has not documented this non-linear relationship underscores the need for threshold models to more fully understand organizational buffers.

Contrasting results: the Korean paradox

Our findings contrast in several surprising ways with recent studies from Western contexts. While research in the U.S. and Europe has shown that younger, more educated employees adapt better to AI (Coupe, 2019; Wu et al., 2022), our Korean sample revealed the opposite: older, more senior employees reported less job insecurity in response to AI adoption (β = -0.14, p < .05 for age interaction). This reversal likely reflects Korea's seniority-based employment system where older workers enjoy greater job protection—a critical contextual factor overlooked in Western-centric AI research.

Even more surprisingly, we found that employees in large conglomerates (chaebols) experienced greater AI-induced job insecurity than those in SMEs, contradicting the supposition of resource-based theories that larger organizations better support employees during technological transitions (Makarius et al., 2020). Our post-hoc interviews revealed that Korean conglomerates' aggressive AI adoption targets, driven by global competition with Chinese and American firms, create intense implementation pressure absent in smaller firms. This finding challenges the assumption that organizational resources uniformly protect employees, suggesting that competitive pressures can transform resources into sources of stress.

Surprising temporal dynamics

Our longitudinal design revealed unexpected temporal patterns not captured in cross-sectional research. The relationship between AI adoption and job insecurity actually weakened over our study period (T1–T2 correlation: r = 0.31; T2–T3 correlation: r = 0.19), suggesting an adaptation effect where the initial AI shock dissipates over time. Paradoxically, however, the job insecurity-depression relationship strengthened over time (T1–T2: β = 0.12; T2–T3: β = 0.21), indicating that while AI fears may diminish, their psychological toll accumulates. This temporal divergence—a weakening cause paired with a strengthening effect—represents a novel finding that challenges the assumption in linear stress models of parallel trajectories for stressors and outcomes.

Furthermore, we discovered a “delayed CSR effect” not hypothesized in our original model. Supplementary analyses revealed that CSR measured at T1 had stronger moderating effects at T3 than at T2 (interaction effect: βT2 = -0.15, βT3 = -0.24), suggesting that organizational goodwill requires time to psychologically “mature” in employees’ minds. Past CSR studies, which typically assume immediate effects, have not documented this temporal lag between CSR implementation and psychological benefit (Kim et al., 2018).

Challenging the conservation of resources theory

Our findings partially contradict the conservation of resources (COR) theory’s central tenet that resource loss universally triggers psychological distress (Hobfoll et al., 2018). We found that 23 % of employees facing high AI adoption and job insecurity showed no elevation in depression scores—a substantial minority that COR theory cannot explain. Exploratory analyses revealed that these “resilient responders” shared unique characteristics: they viewed AI as an opportunity for upskilling, had side businesses or alternative income sources, or were planning career transitions. This suggests that COR theory's conceptualization of resources may be too narrow, overlooking metacognitive resources like “adaptive capacity” or “career flexibility” that buffer against technological threats.

Moreover, our data revealed resource substitution patterns not predicted by COR theory. Employees with low organizational CSR but high personal resources (self-efficacy, external networks) showed similar depression levels to those with high CSR but low personal resources. This substitutability between organizational and personal resources challenges COR's additive resource model, suggesting instead the existence of a compensatory framework whereby different resource types can functionally replace each other.

The unexpected role of AI transparency

Post-hoc analyses revealed an unexpected moderating factor: AI transparency. Even controlling for CSR levels, employees of organizations that clearly communicated AI capabilities and limitations reported significantly less job insecurity than employees of organizations with opaque AI implementation. Surprisingly, transparency about AI's current limitations (what it cannot do) was more psychologically protective than emphasis on human-AI complementarity. Employees reported that understanding AI's boundaries made them feel more secure about their unique human contributions. This finding contrasts with the widespread corporate practice of emphasizing AI’s benefits while downplaying its limitations, suggesting that honest communication about technological constraints may paradoxically increase employee security.

Reconceptualizing the depression outcome

Perhaps most surprisingly, in our qualitative follow-up interviews, some employees who reported experiencing AI-induced depression paradoxically also reported positive life changes. Approximately 15 % of depressed employees described their symptoms as a “wake-up call” that motivated career pivots, skill development, or life priority reassessment. This “productive depression” phenomenon challenges clinical psychology's uniformly negative conceptualization of depression, suggesting that in technological disruption contexts, depressive realism might serve adaptive functions by forcing recognition of changing career landscapes.

This finding aligns with emerging research on “post-traumatic growth,” extending it to technological displacement contexts—a novel application that suggests depression might sometimes represent the necessary psychological work of accepting and adapting to fundamental career disruptions rather than a purely pathological response requiring immediate intervention.

Cross-national and cross-organizational comparisons

Comparisons with Western contexts: the United States and Europe

The striking contrasts between our findings and those of recent U.S. studies warrant careful examination. While Brougham & Haar’s (2020) New Zealand-based study found that AI adoption directly affects depression (β = 0.34, p < .001), our Korean sample showed no such direct relationship. This divergence likely reflects fundamental differences in employment systems: the at-will employment systems in the U.S. and New Zealand create immediate vulnerability to AI displacement, whereas Korea's labor protection laws provide statutory buffers that may interrupt the direct AI-depression pathway. This suggests that national labor regulations fundamentally alter AI's psychological impact mechanisms.

Interestingly, our job insecurity mediation finding (indirect effect = 0.047) is substantially weaker than that reported in German manufacturing firms by Coupe (2019). This threefold difference may reflect Korea's Confucian emphasis on organizational harmony and face-saving, which might lead Korean employees to underreport job insecurity even when experiencing it. Alternatively, German workers' strong vocational identity, cultivated through the apprenticeship system, may make them more sensitive to technological threats to craft-based expertise. These dynamics highlight how national training systems shape vulnerability to AI disruption.

Meanwhile, the CSR moderation effect our analysis revealed (β = -0.209) contrasts dramatically with the finding in Malik et al. (2021) of no significant CSR moderation in the technology-wellbeing relationship in the UK. This divergence likely stems from different CSR conceptualizations: Anglo-American CSR emphasizes shareholder value and compliance, while Korean CSR reflects Confucian obligations of organizational benevolence (인정, injeong). Our supplementary analyses confirmed that Korean employees value employee-directed CSR (β = -0.287) over environmental or community CSR, whereas UK studies show equal weighting across CSR dimensions.

Comparisons with Asian contexts: China and Japan

Comparisons with other East Asian countries that share collectivist values but differ in technological trajectories are particularly revealing. Wu et al.’s (2022) finding that AI adoption actually decreased job insecurity among younger Chinese manufacturing workers (under 35) runs directly counter to our Korean findings. This reversal likely reflects the contrast between the abundant alternative opportunities created by China's rapid economic expansion and the fewer exit options in Korea's mature economy. While Chinese workers may view AI as enabling career mobility, Korean workers see it as threatening career stability.

Japanese studies present yet another pattern. Tiku’s (2023) research in Japanese automotive companies found that AI adoption increased job insecurity when implemented rapidly (within 6 months) but not when phased in gradually over 18+ months. Our Korean data, collected during rapid AI implementation (3-month periods), aligns more with their rapid-implementation condition. However, while research has shown moderate increases in depression among Japanese workers, we found more substantial increases in our Korean sample. This may reflect Japan's tradition of gradual consensus-building (nemawashi), which tends to create expectations of slow change and may therefore make rapid AI implementation particularly jarring, whereas Korean companies’ history of rapid transformation (“Korean-style management”) paradoxically creates both familiarity with and exhaustion from constant change.

Industry and organizational type comparisons

Our multi-industry sample reveals substantial sectoral variation obscured in single-industry studies. Our finding that AI’s impact on job insecurity is strongest in financial services (β = 0.412) and weakest in healthcare (β = 0.189) contradicts Chen et al.'s (2023) hospitality-sector study showing uniform effects across departments. This sectoral variation appears linked to AI substitutability: financial analysts face direct AI replacement, while healthcare's embodied care work remains AI-complementary. Manufacturing, surprisingly, showed moderate effects (β = 0.285) despite high automation potential, possibly because Korean manufacturing workers have already adapted to decades of robotization, making AI seem incrementally rather than fundamentally threatening.

In addition, our organizational size comparisons challenge established assumptions. While Makarius et al.’s (2020) study of Fortune 500 companies found that organizational resources buffer AI's negative impacts, our data revealed the opposite in Korean chaebols: employees in companies with >5000 workers reported higher AI-induced depression than those in SMEs (<300 workers). Post-hoc interviews revealed that Korean conglomerates' aggressive AI adoption targets—Samsung’s “AI Everywhere” initiative, for instance—create implementation pressure absent in resource-constrained SMEs that adopt AI more selectively. This “resource curse” phenomenon, where abundant resources enable potentially harmful rapid implementation, has not been documented in Western contexts where implementation is typically resource-constrained.

Differences in startup versus established-firm contexts are particularly noteworthy. While studies of Silicon Valley have shown that startup employees embrace AI as an innovation opportunity (Jia et al., 2024), startup employees (n = 47) and established-firm employees in our sample reported similar job insecurity levels. This likely reflects different startup ecosystems: Silicon Valley's equity-based compensation creates AI adoption incentives, while Korean startups' salary-based model creates displacement fears similar to those in traditional employment. This highlights how compensation structures, not just organizational age, shape AI reception.

Temporal and implementation comparisons

Because the timing of our data collection (March–June 2024) coincided with the global diffusion of ChatGPT, we were able to compare our findings with those of pre-generative AI studies. Studies conducted before November 2022 (pre-ChatGPT) typically found gradual, department-specific AI impacts, while our analysis of post-ChatGPT data revealed system-wide disruption fears. This technological discontinuity—from narrow to general AI—appears to have fundamentally shifted employee perceptions: employees who once viewed AI as a tool now view it as a potential replacement. Indeed, our analysis generated substantially stronger effect sizes than meta-analytic estimates from pre-2022 studies (Bankins et al., 2024), suggesting generative AI has caused a qualitative shift in threat perceptions.

The role of national AI strategy

Cross-national differences also reflect varying governmental approaches to AI. Korea's top-down national AI strategy with specific adoption targets has created implementation pressures absent in countries with market-driven adoption. Despite similar adoption rates, Denmark's collaborative AI implementation model (involving unions in deployment decisions) has generated 50 % lower increases in job insecurity than we found. Meanwhile, Singapore's SkillsFuture initiative (providing AI training vouchers) has shown buffering effects similar to our findings regarding CSR, but through public rather than corporate provision. These comparisons suggest that national institutional frameworks may be as important as organizational practices in shaping AI's psychological impacts.

Theoretical implications

This study makes several significant theoretical contributions to the existing research regarding AI adoption, employee psychological outcomes, and organizational interventions.

First, it advances theoretical understanding of the psychological implications of organizational AI adoption by empirically linking AI implementation to employee depression. While previous research has primarily focused on the operational and performance outcomes of AI adoption (Dwivedi et al., 2021; Makarius et al., 2020), our findings extend the literature’s theoretical scope by demonstrating how technological transformation can trigger psychological strain through the lens of conservation of resources theory. This theoretical extension is particularly significant because it bridges the gap between organizational technology adoption literature and employee mental health research, providing a more comprehensive framework for understanding the human aspects of technological change.

Second, this study makes a substantial theoretical contribution by illuminating the psychological mechanism through which AI adoption affects employee mental health. By identifying job insecurity as a crucial mediating variable and demonstrating how organizational changes translate into psychological outcomes through specific cognitive and emotional pathways, our study extends stress process theory (Lazarus & Folkman, 1984). This theoretical advancement is particularly valuable as it integrates social identity theory perspectives with stress process frameworks, elucidating how technological changes threaten professional identities and job security (Ashforth & Mael, 1989) and thereby generate psychological strain. This integration provides a more nuanced theoretical understanding of the sequential process through which organizational changes affect employee mental health.

Third, this study contributes substantially to the theoretical understanding of organizational interventions in technological transformations by identifying CSR as a crucial moderating mechanism. Incorporating social exchange theory (Blau, 1964) into the AI adoption context, we demonstrate how organizational moral initiatives can buffer against technological stress. This theoretical advancement is particularly important as it explains how positive organizational actions can modify the psychological impact of technological changes. Our findings extend existing CSR theory by demonstrating its specific role in technological transformation contexts, moving beyond traditional perspectives that primarily focus on CSR's direct effects on employee outcomes (Kim et al., 2018; Zhao et al., 2022).

Fourth, our integrated theoretical framework, which synthesizes multiple theoretical perspectives to explain the complex relationships between organizational technological changes and employee psychological outcomes, is also an important contribution. By combining conservation of resources theory, social identity theory, job demands-resources theory, and social exchange theory within the overarching stress process framework, we provide a more comprehensive theoretical foundation for understanding the psychological implications of technological change. This integration is particularly valuable because it offers a more nuanced and complete explanation of how organizational actions (both technological changes and moral initiatives) influence employee psychological well-being through specific mechanisms, and it provides a stronger basis for future research examining the psychological implications of organizational technological transformation.

Practical implications

Our findings offer crucial insights for organizations seeking to simultaneously realize AI’s substantial benefits and maintain employee well-being. Rather than suggesting that organizations should avoid AI adoption, we offer evidence-based strategies for responsible approaches to implementation that maximize AI’s positive potential. Organizations that have successfully implemented AI while maintaining high employee satisfaction—such as Microsoft's AI-augmented productivity tools or Salesforce's Einstein AI—demonstrate that with appropriate support structures, AI can enhance rather than threaten employee well-being (Ameen et al., 2025). The key insight is not that AI is problematic, but that its implementation requires a holistic approach that addresses both technological and human concerns.

What makes AI mental health challenges uniquely different

Our findings reveal that AI-induced mental health challenges differ fundamentally from previous technological disruptions, requiring novel organizational responses beyond traditional change management. Unlike past advances in automation that replaced specific physical tasks, the ability of AI to perform cognitive and creative work—previously considered uniquely human—creates existential threats to professional identities that conventional employee support systems are ill-equipped to address. Traditional Employee Assistance Programs (EAPs), designed for stress management or work-life balance, fail to address the profound identity reconstruction required when one's expertise becomes algorithmically replicable.

The complete mediation through job insecurity revealed by our analysis indicates that AI's mental health impacts operate through different mechanisms than previous technologies. While past technological changes created immediate, visible displacement (factory automation eliminating assembly line positions), AI creates what we term “anticipatory professional obsolescence,” whereby employees remain in their roles while experiencing mounting dread about future irrelevance. This requires preemptive interventions before job losses occur, a temporal mismatch that traditional reactive mental health support cannot address. Organizations must develop “pre-traumatic growth” programs that build resilience before, not after, AI-induced disruption.

AI-Specific intervention strategies

Based on our findings, we propose three AI-specific mental health interventions that go beyond conventional approaches:

First, we recommend “AI Competency Mapping and Bridging” programs that differ from traditional reskilling. Rather than generic digital literacy training, employees need personalized assessments identifying which of their specific competencies remain AI-resistant versus AI-vulnerable. Our analysis indicates that employees experiencing depression improved significantly when provided with clear “AI-proof competency maps” detailing exactly which skills remain uniquely human. For instance, Samsung's “Human Edge Initiative” identifies and develops judgment-based, context-dependent, and emotionally-nuanced capabilities that current AI cannot replicate, significantly reducing job insecurity compared to generic AI training programs.

Second, we encourage the development of “Algorithmic Transparency Protocols” that explicitly explain both what AI can and cannot do. Our unexpected finding that transparency has a protective effect suggests organizations should publicize AI’s limitations as prominently as its capabilities. Microsoft Japan's practice of publishing monthly “AI Failure Reports” highlighting where human intervention corrected flawed AI decisions has substantially reduced employee job insecurity compared to organizations that only emphasize AI successes. This “competence through contrast” approach—defining human value by identifying AI's boundaries—represents a novel mental health intervention unavailable before AI technological changes.

Third, we advocate “Human-AI Collaboration Design” that preserves meaningful human agency. Unlike human-robot collaboration in manufacturing where roles are clearly delineated, human-AI collaboration in knowledge work creates ambiguous boundaries that amplify identity threats. Our findings suggest that employees would benefit from guaranteed “AI-free zones”—areas of their work where AI is deliberately excluded to preserve human agency. For example, Danish firms that have implemented “Human Judgment Reserves” where critical decisions require human approval regardless of AI recommendations have experienced substantially lower employee depression rates than firms with full AI integration. This deliberate inefficiency for psychological protection represents a radical departure from traditional efficiency-maximizing change management.

The unique role of CSR in AI-era mental health

Our discovery of CSR's moderating effect reveals AI-specific applications beyond traditional corporate responsibility. While CSR initiatives during previous technological changes focused on retraining or severance packages, AI-era CSR must address psychological security before economic security. We identify the following three AI-specific CSR interventions:

First, we recommend “Algorithmic Impact Assessments” that evaluate mental health consequences before AI deployment. Like environmental impact assessments, these assessments would measure anticipated job insecurity, identity threats, and depression risks, with mandatory mitigation strategies. Siemens' “Psychological Safety Protocol” requires clear proof that AI implementation will not increase depression metrics by more than 10 %, forcing slower, more human-centered deployment than pure efficiency considerations would dictate.

Second, we suggest an “AI Dividend Sharing” approach that directly links AI productivity gains to employee well-being investments. Our analysis shows that employees in companies pledging to invest substantial portions of AI-generated savings into employee development experience significantly lower job insecurity. This differs from traditional profit-sharing by explicitly connecting AI's economic benefits to protecting those it displaces, creating psychological ownership of AI success rather than victimization by it.

Third, we encourage “Cognitive Transition Support” that goes beyond traditional outplacement. When AI eliminates cognitive rather than manual work, employees need support reconstructing their professional identities, not just finding new employment. Our findings suggest that companies that provided “Professional Identity Counseling”—helping employees reimagine their expertise in AI-augmented contexts—experienced markedly lower depression rates than those offering only traditional career counseling. This represents a fundamental shift from helping employees find new jobs to helping them redefine their professional selves.

Implementation timing: the pre-emptive imperative

Perhaps most critically, our longitudinal findings reveal that mental health interventions must precede, not follow, AI implementation. Our finding that depression effects strengthen over time despite weakening AI-job insecurity correlations suggests that psychological damage accumulates even as the initial shock subsides. This “psychological scarring” effect means organizations must intervene during the anticipation phase, not the adaptation phase. For AI, a “support then implement” approach must replace traditional change management’s “implement then support” sequence.

Specifically, we recommend a “Mental Health First, AI Second” protocol. Organizations should establish psychological support systems 6–12 months before AI deployment, building resilience reserves before drawing on them. These systems should include the creation of peer support networks specifically for AI anxiety (not general stress), the establishment of “AI coaches” who help employees navigate identity transitions (not just skill transitions), and the implementation of “gradual exposure therapy” where employees interact with AI in low-stakes contexts before high-stakes implementation. These pre-emptive interventions represent a fundamental departure from the reactive mental health support that has characterized previous technological changes.

Regulatory and policy implications

Our findings suggest that existing occupational mental health regulations, designed for physical workplace hazards or traditional psychosocial risks, inadequately address AI’s unique psychological threats. We recommend the development of “Algorithmic Mental Health Standards” that—like noise or chemical exposure limits but for algorithmic exposure—require organizations to demonstrate that AI implementation will not exceed specified depression and anxiety thresholds. The European Union’s proposed “Right to Human Decision” legislation, which guarantees employee access to non-AI evaluation in critical career decisions, is a policy innovation that specifically addresses AI’s mental health impacts in ways previous technological regulations never contemplated.

Limitations and suggestions for future research

While this study offers valuable insights into the relationships among AI adoption, job insecurity, CSR, and employee depression, it has several limitations that highlight opportunities for future research.

First, while our three-wave time-lagged design has methodological advantages over cross-sectional studies, the relatively short time intervals (5–6 weeks) between waves may have prevented us from fully capturing the long-term psychological implications of AI adoption. Future longitudinal studies could employ extended time frames to examine how the relationships between AI adoption, job insecurity, and depression evolve over longer periods (Ployhart & Vandenberg, 2010). Such extended temporal designs would be particularly valuable for understanding the dynamic nature of employee adaptation to technological change (Lee et al., 2018).

Second, our study focused exclusively on employees in South Korea, potentially limiting the generalizability of our findings to different cultural contexts. Cultural values and social norms can significantly influence how employees interpret and respond to technological changes and job insecurity (Hofstede, 2001; Taras et al., 2010). Future studies could adopt a cross-cultural perspective and examine how national cultural dimensions moderate the relationships between AI adoption, job insecurity, and depression. The varied approaches to AI adoption and employee well-being in different cultural contexts would make such analyses particularly relevant (House et al., 2004).

Third, while we measured AI adoption comprehensively, our approach did not differentiate between various AI implementation types and their specific impacts. Different AI applications (e.g., decision support systems, automation tools, predictive analytics) might have distinct effects on employee perceptions and psychological outcomes (Dwivedi et al., 2021). Future studies could develop more nuanced AI adoption measures, examining how different types of AI technologies and implementation strategies differently affect employee outcomes (Chowdhury et al., 2023).

Fourth, our sample included multiple respondents from certain organizations, with 31 % of participants sharing their organizational affiliations with at least one other respondent in our dataset. While we addressed this statistically through cluster-robust standard errors and robustness checks using multilevel modeling, this nested structure represents a limitation in several ways. First, shared organizational experiences might inflate correlations among employees from the same company, potentially leading to overestimations of effect sizes. Second, organization-specific AI implementation approaches or CSR initiatives might create dependencies in our data that individual-level analyses cannot fully capture. Third, despite our request for independent completion, employees from the same organizations could have discussed the survey with their colleagues, which may have influenced responses.
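The cluster-robust adjustment referred to above can be sketched as follows. This is a minimal illustration on synthetic data, not our actual analysis; the variable names, grouping structure, and generated effect sizes are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_orgs, n_emp = 50, 8  # 50 organizations, 8 employees each

org = np.repeat(np.arange(n_orgs), n_emp)
# AI adoption varies mostly at the organization level, as in real deployments
ai = rng.normal(0, 1, n_orgs)[org] + 0.5 * rng.normal(size=n_orgs * n_emp)
# Errors share an organization-level shock, violating the i.i.d. assumption
shock = rng.normal(0, 0.6, n_orgs)[org]
insecurity = 0.3 * ai + shock + rng.normal(size=n_orgs * n_emp)
df = pd.DataFrame({"ai": ai, "insecurity": insecurity, "org": org})

# Naive OLS treats all 400 observations as independent
naive = smf.ols("insecurity ~ ai", df).fit()
# Cluster-robust errors allow arbitrary error correlation within organizations
robust = smf.ols("insecurity ~ ai", df).fit(
    cov_type="cluster", cov_kwds={"groups": df["org"]}
)
print(f"naive SE = {naive.bse['ai']:.3f}, "
      f"cluster-robust SE = {robust.bse['ai']:.3f}")
```

When both the regressor and the errors are correlated within organizations, the cluster-robust standard error is typically noticeably larger than the naive one, which is why ignoring the nesting risks overstating significance.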

However, this limitation also reflects ecological validity, as real-world AI adoption affects multiple employees within organizations simultaneously. Future studies could benefit from explicitly multilevel designs that model both the individual and organizational levels, with sufficient Level-2 sample sizes (minimum 50–100 organizations with 5+ employees each) to properly partition variance and test cross-level interactions (Scherbaum & Ferreter, 2009). Such designs could examine how organizational-level AI strategies differentially affect employees within the same organization, providing insights into within-organization variation that our study could not fully explore.
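The variance-partitioning step underlying such a multilevel design can be sketched with a random-intercept null model. Everything below is a hypothetical illustration on synthetic data; the intraclass correlation of about 0.2 is built into the simulation, not an empirical estimate:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_orgs, n_emp = 60, 8  # meets the suggested Level-2 sample-size guideline

org = np.repeat(np.arange(n_orgs), n_emp)
# Between-organization variance 0.25, within-organization variance 1.0
org_effect = rng.normal(0, 0.5, n_orgs)[org]
insecurity = org_effect + rng.normal(0, 1.0, n_orgs * n_emp)
df = pd.DataFrame({"org": org, "insecurity": insecurity})

# Null (intercept-only) random-intercept model partitions the variance
model = smf.mixedlm("insecurity ~ 1", df, groups=df["org"]).fit()
between = float(model.cov_re.iloc[0, 0])  # organization-level variance
within = float(model.scale)               # residual employee-level variance
icc = between / (between + within)        # intraclass correlation (ICC1)
print(f"ICC = {icc:.2f}")  # true value in this simulation: 0.25/1.25 = 0.20
```

An ICC meaningfully above zero indicates that a nontrivial share of the outcome variance sits at the organization level, which is the signal that organization-level predictors (e.g., a firm's AI strategy) and cross-level interactions are worth modeling explicitly.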

Fifth, although we controlled for several demographic variables, our study may not have captured all relevant individual differences that could influence the relationships we studied. Future studies could examine the role of individual characteristics such as technological self-efficacy, adaptability, and personality traits in moderating the impact of AI adoption on psychological outcomes (Ployhart & Bliese, 2006). Additionally, investigations of the ways employee age and digital literacy affect these relationships would be valuable, particularly given the increasing generational diversity in modern workplaces (Long & Magerko, 2020).

Sixth, despite our time-lagged design and statistical controls (Podsakoff et al., 2024), our reliance on self-reported measures might have introduced common method bias. Future studies could incorporate multiple data sources, including objective measures of AI implementation, organizational performance metrics, and clinical assessments of depression. Such multi-source data would provide more robust evidence for the relationships observed in our study (Podsakoff et al., 2012).

Seventh, while our study identified CSR as an important moderator, other organizational interventions might also buffer against the negative effects of AI adoption. Future research could examine additional moderating factors such as organizational justice, leadership styles, and change management practices (Sinclair et al., 2021). Examinations of the relative effectiveness of different organizational interventions would provide valuable insights for practitioners managing technological transitions (Wang et al., 2021).

Conclusion

This study adds nuance and balance to our understanding of AI's workplace impacts, moving beyond simplistic “techno-optimist” or “techno-pessimist” narratives. Our findings confirm that AI is neither inherently beneficial nor harmful—its impact on employee mental health depends on implementation approach, organizational support, and particularly the strength of CSR practices. Organizations need not choose between technological advancement and employee well-being; with appropriate strategies, they can achieve both. The significant moderating effect of CSR demonstrates that when organizations commit to responsible implementation, AI can deliver its promised benefits of augmentation, efficiency, and innovation without compromising mental health.

Summary of key findings

Using three-wave longitudinal data from 403 South Korean employees, we examined how AI adoption influences employee depression through job insecurity and how corporate social responsibility moderates this relationship. Our findings reveal a complex picture that challenges conventional assumptions about technology's impact on worker well-being.

Regarding our research questions and hypotheses, our analysis generated several noteworthy results. First, contrary to Hypothesis 1 and dominant narratives in both academic and popular discourse, we found no direct effect of AI adoption on employee depression (β = 0.014, n.s.). This null finding suggests that AI itself is psychologically neutral rather than inherently harmful. Second, supporting Hypothesis 2, our analysis showed that AI adoption significantly increased job insecurity (β = 0.285, p < .001), confirming that employees perceive AI as a threat to job stability despite its potential benefits for productivity and creativity. Third, consistent with Hypothesis 3, we found that job insecurity significantly predicted depression (β = 0.166, p < .01), demonstrating the psychological toll of employment uncertainty in the digital age.

Most critically, our analysis revealed that job insecurity fully mediated the AI-depression relationship (indirect effect = 0.047, 95 % CI [0.015, 0.091]), supporting Hypothesis 4. This finding fundamentally reconceptualizes AI's psychological impact—it is not the technology itself but rather its implications for job security that drive mental health outcomes. Furthermore, we discovered that CSR had a significant moderating effect (β = -0.209, p < .001), supporting Hypothesis 5, though with unexpected asymmetric properties: CSR effectively buffered AI's negative impact at low to moderate adoption levels (67 % reduction) but showed diminishing effectiveness at high adoption levels (23 % reduction), revealing a previously undocumented boundary condition for organizational support systems.

Our cross-national comparisons revealed striking contextual variations. Unlike Western studies showing direct AI-depression links (Brougham & Haar, 2020) and younger workers adapting better to AI (Coupe, 2019), our Korean sample showed no direct effects and reverse age patterns, with older workers experiencing less AI-induced insecurity. These contrasts likely reflect Korea's unique combination of rapid technological advancement and traditional employment security expectations. Comparisons with China and Japan further highlighted how national context shapes AI's psychological impacts, with Korean workers showing stronger depression responses than Japanese workers but patterns different from those of Chinese workers, who view AI as a career opportunity rather than a threat.

Finally, several unexpected findings emerged that advance theoretical understanding. The temporal dynamics revealed weakening AI-job insecurity correlations over time (r = 0.31 to 0.19) but strengthening job insecurity-depression relationships (β = 0.12 to 0.21), signaling adaptation to AI presence but accumulating psychological costs. Post-hoc analyses showed that transparency about AI’s limitations was more protective than emphasizing human-AI complementarity, contradicting common corporate practices. Most surprisingly, approximately 15 % of employees who experienced AI-induced depression characterized it as “productive depression”—an experience that enabled them to use their distress as a catalyst for positive career changes, challenging uniformly negative conceptualizations of workplace depression.

Theoretical and practical significance

This study makes four major contributions to scholarly understanding of AI's psychological impacts in the workplace. First, it provides empirical evidence that AI's mental health effects operate entirely through cognitive appraisals rather than direct impacts, challenging technological determinism. Second, it identifies job insecurity as the critical mechanism that translates technological change into psychological outcomes, providing insight into previously mixed research results. Third, it reveals CSR's protective but bounded role, whereby its effectiveness diminishes at high AI adoption levels—a novel boundary condition for organizational support theories. Fourth, it demonstrates substantial cross-cultural variation in AI's psychological impacts, highlighting the importance of institutional and cultural contexts in shaping technological experiences.

From a practical standpoint, our findings indicate that organizations cannot rely on the generic change management approaches developed for previous technological transitions. AI's unique capacity to replicate cognitive work requires novel interventions. In this regard, we propose “AI-proof competency mapping” showing which skills remain uniquely human, “algorithmic transparency protocols” that highlight AI’s limitations, and “pre-emptive mental health support” implemented before rather than after AI deployment. The asymmetric CSR effect suggests that organizations facing extensive AI implementation need interventions beyond traditional CSR, including guaranteed “AI-free zones” that preserve human agency and “professional identity counseling” to help workers reconstruct their value in AI-augmented contexts.

Limitations and future directions

Despite these contributions, our study had several limitations that warrant acknowledgment. Our three-wave design with 5–6-week intervals may not have captured longer-term adaptation processes, while our focus on the Korean context, despite offering unique insights, limits the generalizability of our findings to individualistic cultures or less developed economies. Meanwhile, our clustering of multiple respondents within some organizations, though statistically controlled, may understate organizational-level influences. Finally, our broad AI adoption measure did not differentiate between AI types (automation vs. augmentation) that might have different impacts on psychological outcomes.

Future studies should employ extended longitudinal designs to capture adaptation over years rather than months, examine AI's psychological impacts across diverse cultural contexts, and develop typologies that distinguish AI applications based on their psychological threat levels. The surprising findings about productive depression and AI transparency deserve dedicated investigation, as do potential interventions like professional identity counseling and algorithmic impact assessments. Cross-level research examining how organizational and national factors interact to shape individual responses would also advance understanding.

Final reflections

The worldwide acceleration of AI adoption following breakthroughs in generative AI makes understanding and managing its psychological impacts increasingly critical. In this study, we demonstrate that successful AI implementation requires more than technical competence—it demands careful attention to human psychological needs and proactive organizational support. Our finding of complete mediation through job insecurity amounts to both a warning and an opportunity: while AI poses no inherent psychological threat, failure to address employment security concerns will mean technological progress translates into human suffering.

Despite CSR’s limitations at high AI adoption levels, our finding that it plays a moderating role suggests that organizational values and employee support remain crucial during technological transformations. However, our analysis indicates that traditional CSR approaches need fundamental reimagining for the AI era, shifting from reactive support to proactive identity preservation and from economic security to psychological security. As one participant poignantly noted in a follow-up interview: “It's not losing my job I fear most—it's losing what makes my work human.”

Ultimately, our findings suggest that the question is not whether organizations should adopt AI, but how they can do so while preserving human dignity, purpose, and well-being. The path forward requires recognition that in the age of artificial intelligence, the most important intelligence remains emotional—understanding and supporting the humans who must work alongside increasingly capable machines. Organizations that master this balance will not only avoid the psychological costs we document but may discover that protecting human well-being paradoxically enhances the very innovation and productivity that AI promises to deliver.

Data availability statements

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Ethical approval

The survey conducted in this study adhered to the standards outlined in the Declaration of Helsinki. The Yonsei University Institutional Review Board granted ethical approval.

Approval number: 202208-HR-2975-01

Date of approval: 2022-08-15

Scope of approval: All research variables in this study.

Informed consent

Obtained. Every participant was provided with information about the objective and scope of this study, as well as the intended use of the data. Respondents participated willingly and freely, with their identities kept confidential and their involvement optional. All subjects provided informed consent prior to participating in the survey.

Informed consent was obtained from March 2024 to June 2024 via the website of the research company (Macromil Embrain).

The scope of the consent covered participation, data use, and consent to publish.

Online explanatory statement and documents related to exemption from written consent.

Since the explanatory statement requires consent three times, it was submitted with the 1st, 2nd, and 3rd questionnaires.

Written consent was waived by default, for the following reasons. The survey was conducted through “Macromil Embrain”, a research company with the largest panel in Korea (25 years of experience in the industry). Individuals belonging to the Macromil Embrain panel registered in the company's survey response system and agreed to provide their information and survey responses; panel members then log in to the system to respond to surveys.

At this time, the research company announces the contents of the researcher's survey, and panel members interested in the survey participate. Alternatively, the research company contacts qualified individuals (for example, office workers at domestic companies) via e-mail or text message to ask about their intention to participate in the survey. Because this process is conducted entirely online and the researcher has no way of knowing which panel members will respond to the questionnaire, it is practically impossible for the researcher to obtain written consent directly.

Voluntary participation in the research is ensured, and explanations regarding the right to withdraw or suspend participation are also overseen by the aforementioned research company. The company obtains consent when individuals sign up and register as panel members on its website; these matters are explained in detail at that time, and only those who consent participate in surveys.

The condition for selecting study subjects was that they be adult employees working for domestic companies. Beyond this, no special conditions (such as regular versus non-regular employment status) applied.

Excluded from the study were those not currently working and those without the intellectual ability to understand and respond appropriately to the questionnaire.

Also, people with a limited ability to give consent for research, or those who are otherwise vulnerable, were not included as research subjects.

Only those panel members who voluntarily registered with the research company (Macromil Embrain) were surveyed. The recruitment notice stated the general subject and risks of the study and described the steps taken to ensure privacy, protection, and confidentiality during recruitment.

We did not include vulnerable research subjects; vulnerability could be judged voluntarily by those responding to the survey. In addition, only those who met the research company's own criteria mentioned above participated in the survey.

The process and contents of the 2nd and 3rd surveys were the same as those of the 1st survey.

CRediT authorship contribution statement

Byung-Jik Kim: Writing – review & editing, Writing – original draft, Visualization, Validation, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Julak Lee: Writing – review & editing, Writing – original draft, Validation, Supervision, Resources, Project administration, Methodology, Investigation, Data curation, Conceptualization.

Declaration of competing interest

The authors declare no conflict of interest.

Funding/Acknowledgement

This paper was supported by Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (RS-2024-00415520, HRD Program for Industrial Innovation).

References
[Aguinis and Vandenberg, 2014]
H. Aguinis, R.J. Vandenberg.
An ounce of prevention is worth a pound of cure: improving research quality before data collection.
Annual Review of Organizational Psychology and Organizational Behavior, 1 (2014), pp. 569-595
[Aiken and West, 1991]
L.S. Aiken, S.G. West.
Multiple regression: Testing and interpreting interactions.
Sage Publications, Inc, (1991),
[Ameen et al., 2025]
N. Ameen, M. Pagani, E. Pantano, J.-H. Cheah, S. Tarba, S. Xia.
The rise of human-machine collaboration: managers' perceptions of leveraging artificial intelligence for enhanced B2B service recovery.
British Journal of Management, 36 (2025), pp. 91-109
[American Psychiatric Association 2013]
American Psychiatric Association, DSM-5 Task Force.
Diagnostic and statistical manual of mental disorders.
5th ed., American Psychiatric Association, (2013),
[Anderson and Gerbing, 1988]
J.C. Anderson, D.W. Gerbing.
Structural equation modeling in practice: A review and recommended two-step approach.
Psychological Bulletin, 103 (1988), pp. 411-423
[Andresen et al., 1994]
E.M. Andresen, J.A. Malmgren, W.B. Carter, D.L. Patrick.
Screening for depression in well older adults: evaluation of a short form of the CES-D.
American Journal of Preventive Medicine, 10 (1994), pp. 77-84
[Ashforth and Mael, 1989]
B.E. Ashforth, F. Mael.
Social identity theory and the organization.
Academy of Management Review, 14 (1989), pp. 20-39
[Bag et al., 2021]
S. Bag, J.H.C. Pretorius, S. Gupta, Y.K. Dwivedi.
Role of institutional pressures and resources in the adoption of big data analytics powered artificial intelligence, sustainable manufacturing practices and circular economy capabilities.
Technological Forecasting and Social Change, 163 (2021),
[Baker and Xiang, 2023]
S. Baker, W. Xiang.
Explainable AI is responsible AI: how explainability creates trustworthy and socially responsible artificial intelligence.
(2023),
[Bakker and Demerouti, 2007]
A.B. Bakker, E. Demerouti.
The job demands-resources model: State of the art.
Journal of Managerial Psychology, 22 (2007), pp. 309-328
[Bakker and Demerouti, 2017]
A.B. Bakker, E. Demerouti.
Job demands–resources theory: taking stock and looking forward.
Journal of Occupational Health Psychology, 22 (2017), pp. 273-285
[Bank of Korea 2024]
Bank of Korea.
Regional economic report: AI adoption and employment trends.
Bank of Korea Publications, (2024),
[Bankins et al., 2024]
S. Bankins, A.C. Ocampo, M. Marrone, S.L.D. Restubog, S.E. Woo.
A multilevel review of artificial intelligence in organizations: implications for organizational behavior research and practice.
Journal of Organizational Behavior, 45 (2024), pp. 159-182
[Barclay et al., 1995]
D. Barclay, C. Higgins, R. Thompson.
The Partial Least Squares (PLS) approach to causal modeling: personal computer adoption and use as an illustration.
Technology Studies, 2 (1995), pp. 285-309
[Baron and Kenny, 1986]
R.M. Baron, D.A. Kenny.
The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations.
Journal of Personality and Social Psychology, 51 (1986), pp. 1173-1182
[Bazzoli and Probst, 2022]
A. Bazzoli, T.M. Probst.
COVID-19 moral disengagement and prevention behaviors: the impact of perceived workplace COVID-19 safety climate and employee job insecurity.
Safety Science, 150 (2022),
[Beane and Leonardi, 2022]
M.I. Beane, P.M. Leonardi.
Pace layering as a metaphor for organizing in the age of intelligent technologies: considering the future of work by theorizing the future of organizing.
Journal of Management Studies, (2022),
[Belanche et al., 2024]
D. Belanche, R.W. Belk, L.V. Casaló, C. Flavián.
The dark side of artificial intelligence in services.
The Service Industries Journal, 44 (2024), pp. 149-172
[Belanche et al., 2020]
D. Belanche, L.V. Casaló, C. Flavián, J. Schepers.
Service robot implementation: a theoretical framework and research agenda.
The Service Industries Journal, 40 (2020), pp. 203-225
[Bhaskar, 2008]
R. Bhaskar.
A realist theory of science.
Routledge, (2008),
[Bhatti et al., 2022]
S.H. Bhatti, K. Iqbal, G. Santoro, F. Rizzato.
The impact of corporate social responsibility directed toward employees on contextual performance in the banking sector: A serial model of perceived organizational support and affective organizational commitment.
Corporate Social Responsibility and Environmental Management, 29 (2022), pp. 1980-1994
[Biesanz et al., 2010]
J.C. Biesanz, C.F. Falk, V. Savalei.
Assessing mediational models: testing and interval estimation for indirect effects.
Multivariate Behavioral Research, 45 (2010), pp. 661-701
[Bingley et al., 2023]
W.J. Bingley, C. Curtis, S. Lockey, et al.
Where is the human in human-centered AI? Insights from developer priorities and user experiences.
Computers in Human Behavior, 141 (2023),
[Blau, 1964]
P.M. Blau.
Exchange and power in social life.
John Wiley & Sons, (1964),
[Brace et al., 2003]
N. Brace, R. Kemp, R. Snelgar.
A guide to data analysis using SPSS for windows.
Palgrave Macmillan, (2003),
[Brislin, 1970]
R.W. Brislin.
Back-translation for cross-cultural research.
Journal of Cross-Cultural Psychology, 1 (1970), pp. 185-216
[Brougham and Haar, 2020]
D. Brougham, J. Haar.
Technological disruption and employment: the influence on job insecurity and turnover intentions: A multi-country study.
Technological Forecasting and Social Change, 161 (2020),
[Budhwar et al., 2023]
P. Budhwar, S. Chowdhury, G. Wood, H. Aguinis, G.J. Bamber, J.R. Beltran, A. Varma.
Human resource management in the age of generative artificial intelligence: perspectives and research directions on ChatGPT.
Human Resource Management Journal, 33 (2023), pp. 606-659
[Budhwar et al., 2022]
P. Budhwar, A. Malik, M.T. De Silva, P. Thevisuthan.
Artificial intelligence–challenges and opportunities for international HRM: a review and research agenda.
The International Journal of Human Resource Management, 33 (2022), pp. 1065-1097
[Cameron and Miller, 2015]
A.C. Cameron, D.L. Miller.
A practitioner's guide to cluster-robust inference.
Journal of Human Resources, 50 (2015), pp. 317-372
[Capel and Brereton, 2023]
T. Capel, M. Brereton.
What is human-centered about human-centered AI? A map of the research landscape.
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1-23
[Carroll, 1979]
A.B. Carroll.
A three-dimensional conceptual model of corporate performance.
Academy of Management Review, 4 (1979), pp. 497-505
[Carroll, 1999]
A.B. Carroll.
Corporate social responsibility: evolution of a definitional construct.
Business & Society, 38 (1999), pp. 268-295
[Chang et al., 2025]
S.J. Chang, H. Lee, S. Lee, S. Oh, Z. Sun, M.X.C. Xu, X. Xu.
Transforming the Future: The Impact of Artificial Intelligence in Korea.
International Monetary Fund, (2025),
[Chauhan et al., 2022]
C. Chauhan, V. Parida, A. Dhir.
Linking circular economy and digitalization technologies: A systematic literature review.
Technological Forecasting and Social Change, 177 (2022),
[Chen et al., 2023]
Y. Chen, Y. Hu, S. Zhou, S. Yang.
Investigating the determinants of performance of artificial intelligence adoption in hospitality industry during COVID-19.
International Journal of Contemporary Hospitality Management, 35 (2023), pp. 2868-2889
[Cheng and Chan, 2008]
G.H.L. Cheng, D.K.S. Chan.
Who suffers more from job insecurity? A meta-analytic review.
Applied Psychology, 57 (2008), pp. 272-303
[Cheng et al., 2021]
L. Cheng, K.R. Varshney, H. Liu.
Socially responsible AI algorithms: issues, purposes, and challenges.
Journal of Artificial Intelligence Research, 71 (2021), pp. 1137-1181
[Chowdhury et al., 2024]
S. Chowdhury, P. Budhwar, G. Wood.
Generative artificial intelligence in business: towards a strategic human resource management framework.
British Journal of Management, (2024),
[Chowdhury et al., 2023]
S. Chowdhury, P. Dey, S. Joel-Edgar, S. Bhattacharya, O. Rodriguez-Espindola, A. Abadie, L. Truong.
Unlocking the value of artificial intelligence in human resource management through AI capability framework.
Human Resource Management Review, 33 (2023),
[Cohen et al., 2003]
J. Cohen, P. Cohen, S.G. West, L.S. Aiken.
Applied multiple regression/correlation analysis for the behavioral sciences.
3rd ed., Lawrence Erlbaum Associates, (2003),
[Coupe, 2019]
T. Coupe.
Automation, job characteristics and job insecurity.
International Journal of Manpower, 40 (2019), pp. 1288-1304
[Creswell and Creswell, 2018]
J.W. Creswell, J.D. Creswell.
Research design: Qualitative, quantitative, and mixed methods approaches.
5th ed., Sage Publications, (2018),
[Davenport and Ronanki, 2018]
T.H. Davenport, R. Ronanki.
Artificial intelligence for the real world.
Harvard Business Review, 96 (2018), pp. 108-116
[Davenport and Guha, 2019]
T.H. Davenport, A. Guha.
AI in the workplace: A primer.
MIT Sloan Management Review, 60 (2019), pp. 1-4
[De Roeck and Maon, 2018]
K. De Roeck, F. Maon.
Building the theoretical puzzle of employees' reactions to corporate social responsibility: an integrative conceptual framework and research agenda.
Journal of Business Ethics, 149 (2018), pp. 609-625
[De Witte et al., 2012]
H. De Witte, N. De Cuyper, T. Vander Elst, E. Vanbelle, W. Niesen.
Job insecurity: review of the literature and a summary of recent studies from Belgium.
Romanian Journal of Applied Psychology, 14 (2012), pp. 11-17
[De Witte et al., 2016]
H. De Witte, J. Pienaar, N. De Cuyper.
Review of 30 years of longitudinal studies on the association between job insecurity and health and well-being: is there causal evidence?
Australian Psychologist, 51 (2016), pp. 18-31
[Dennehy et al., 2023]
D. Dennehy, A. Griva, N. Pouloudi, et al.
Artificial intelligence (AI) and information systems: perspectives to responsible AI.
Information Systems Frontiers, 25 (2023), pp. 1-7
[Dormann and Griffin, 2015]
C. Dormann, M.A. Griffin.
Optimal time lags in panel studies.
Psychological Methods, 20 (2015), pp. 489-505
[Dwivedi et al., 2021]
Y.K. Dwivedi, L. Hughes, E. Ismagilova, G. Aarts, C. Coombs, T. Crick, M.D. Williams.
Artificial Intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy.
International Journal of Information Management, 57 (2021),
[Dwivedi et al., 2023]
Y.K. Dwivedi, A. Sharma, N.P. Rana, et al.
Evolution of artificial intelligence research in Technological Forecasting and Social Change.
Technological Forecasting and Social Change, 192 (2023),
[Edwards et al., 2014]
P. Edwards, J. O'Mahoney, S. Vincent.
Studying organizations using critical realism: A practical guide.
Oxford University Press, (2014),
[Efron and Tibshirani, 1993]
B. Efron, R.J. Tibshirani.
An introduction to the bootstrap.
Chapman & Hall/CRC, (1993),
[Epstein and Hertzmann, 2023]
Z. Epstein, A. Hertzmann, et al.
Art and the science of generative AI.
Science, 380 (2023), pp. 1110-1111
[Evans-Lacko and Knapp, 2016]
S. Evans-Lacko, M. Knapp.
Global patterns of workplace productivity for people with depression: absenteeism and presenteeism costs across eight diverse countries.
Social Psychiatry and Psychiatric Epidemiology, 51 (2016), pp. 1525-1537
[Farooq et al., 2014]
O. Farooq, M. Payaud, D. Merunka, P. Valette-Florence.
The impact of corporate social responsibility on organizational commitment: exploring multiple mediation mechanisms.
Journal of Business Ethics, 125 (2014), pp. 563-580
[Fatima and Elbanna, 2023]
T. Fatima, S. Elbanna.
Corporate social responsibility (CSR) implementation: A review and a research agenda.
Journal of Business Ethics, 183 (2023), pp. 105-121
[Flavián et al., 2024]
C. Flavián, R.W. Belk, D. Belanche, L.V. Casaló.
Automated social presence in AI: avoiding consumer psychological tensions to improve service value.
Journal of Business Research, 175 (2024),
[Fountaine et al., 2019]
T. Fountaine, B. McCarthy, T. Saleh.
Building the AI-powered organization.
Harvard Business Review, 97 (2019), pp. 62-73
[Freeman et al., 2010]
R.E. Freeman, J.S. Harrison, A.C. Wicks, B.L. Parmar, S. De Colle.
Stakeholder theory: The state of the art.
Cambridge University Press, (2010),
[Frey and Osborne, 2017]
C.B. Frey, M.A. Osborne.
The future of employment: how susceptible are jobs to computerisation?
Technological Forecasting and Social Change, 114 (2017), pp. 254-280
[Fritz and MacKinnon, 2007]
M.S. Fritz, D.P. MacKinnon.
Required sample size to detect the mediated effect.
Psychological Science, 18 (2007), pp. 233-239
[Gama and Magistretti, 2023]
F. Gama, S. Magistretti.
Artificial intelligence in innovation management: A review of innovation capabilities.
Journal of Product Innovation Management, (2023),
[Gillan et al., 2021]
S.L. Gillan, A. Koch, L.T. Starks.
Firms and social responsibility: A review of ESG and CSR research.
Journal of Corporate Finance, 66 (2021),
[Goodman and Blum, 1996]
J.S. Goodman, T.C. Blum.
Assessing the non-random sampling effects of subject attrition in longitudinal research.
Journal of Management, 22 (1996), pp. 627-652
[Greenhalgh and Rosenblatt, 2010]
L. Greenhalgh, Z. Rosenblatt.
Evolution of research on job insecurity.
International Studies of Management & Organization, 40 (2010), pp. 6-19
[Guba and Lincoln, 1994]
E.G. Guba, Y.S. Lincoln.
Competing paradigms in qualitative research.
Handbook of qualitative research, pp. 105-117
[Hair et al., 2019]
J.F. Hair, W.C. Black, B.J. Babin, R.E. Anderson.
Multivariate data analysis.
8th ed., Cengage Learning, (2019),
[Hakanen and Schaufeli, 2012]
J.J. Hakanen, W.B. Schaufeli.
Do burnout and work engagement predict depressive symptoms and life satisfaction? A three-wave seven-year prospective study.
Journal of Affective Disorders, 141 (2012), pp. 415-424
[Hayes, 2018]
A.F. Hayes.
Introduction to mediation, moderation, and conditional process analysis: A regression-based approach.
2nd ed., Guilford Press, (2018),
[Hermann and Puntoni, 2024]
E. Hermann, S. Puntoni.
Artificial intelligence and consumer behavior: from predictive to generative AI.
Journal of Business Research, 180 (2024),
[Hobfoll et al., 2018]
S.E. Hobfoll, J. Halbesleben, J.P. Neveu, M. Westman.
Conservation of resources in the organizational context: the reality of resources and their consequences.
Annual Review of Organizational Psychology and Organizational Behavior, 5 (2018), pp. 103-128
[Hofstede, 2001]
G. Hofstede.
Culture's consequences: Comparing values, behaviors, institutions, and organizations across nations.
2nd ed., Sage Publications, (2001),
[House et al., 2004]
R.J. House, P.J. Hanges, M. Javidan, P.W. Dorfman, V. Gupta.
Culture, leadership, and organizations: The GLOBE study of 62 societies.
Sage Publications, (2004),
[Hu et al., 2021]
S. Hu, L. Jiang, T.M. Probst, M. Liu.
The relationship between qualitative job insecurity and subjective well-being in Chinese employees: The role of work–family conflict and work centrality.
Economic and Industrial Democracy, 42 (2021), pp. 203-225
[Huang, 2018]
F.L. Huang.
Multilevel modeling myths.
School Psychology Quarterly, 33 (2018), pp. 492-499
[Jacobson and Newman, 2017]
N.C. Jacobson, M.G. Newman.
Anxiety and depression as bidirectional risk factors for one another: A meta-analysis of longitudinal studies.
Psychological Bulletin, 143 (2017), pp. 1155-1200
[Jia et al., 2024]
N. Jia, X. Luo, Z. Fang, C. Liao.
When and how artificial intelligence augments employee creativity.
Academy of Management Journal, 67 (2024), pp. 5-32
[Jiang and Lavaysse, 2018]
L. Jiang, L.M. Lavaysse.
Cognitive and affective job insecurity: A meta-analysis and a primary study.
Journal of Management, 44 (2018), pp. 2307-2342
[Jobin et al., 2019]
A. Jobin, M. Ienca, E. Vayena.
The global landscape of AI ethics guidelines.
Nature Machine Intelligence, 1 (2019), pp. 389-399
[Joyce et al., 2016]
S. Joyce, M. Modini, H. Christensen, A. Mykletun, R. Bryant, P.B. Mitchell, S.B. Harvey.
Workplace interventions for common mental disorders: a systematic meta-review.
Psychological Medicine, 46 (2016), pp. 683-697
[Kim and Kim, 2024]
B.J. Kim, M.J. Kim.
How artificial intelligence-induced job insecurity shapes knowledge dynamics: the mitigating role of artificial intelligence self-efficacy.
Journal of Innovation & Knowledge, 9 (2024),
[Kim et al., 2024]
B.J. Kim, M.J. Kim, J. Lee.
Code green: ethical leadership’s role in reconciling AI-induced job insecurity with pro-environmental behavior in the digital workplace.
Humanities and Social Sciences Communications, 11 (2024), pp. 1-16
[Kim and Lee, 2024]
B.J. Kim, J. Lee.
The mental health implications of artificial intelligence adoption: The crucial role of self-efficacy.
Humanities and Social Sciences Communications, 11 (2024), pp. 1-15
[Kim and Lee, 2025]
B.J. Kim, J. Lee.
The impact of corporate social responsibility on cybersecurity behavior: The crucial role of organizationally-prescribed perfectionism.
Humanities and Social Sciences Communications, 12 (2025), pp. 1-18
[Kim et al., 2021]
B.J. Kim, M.J. Kim, T.H. Kim.
"The power of ethical leadership": The influence of corporate social responsibility on creativity, the mediating function of psychological safety, and the moderating role of ethical leadership.
International Journal of Environmental Research and Public Health, 18 (2021), pp. 2968
[Kim et al., 2019]
B.J. Kim, M. Nurunnabi, T.H. Kim, S.Y. Jung.
Does a good firm breed good organizational citizens? The moderating role of perspective taking.
International Journal of Environmental Research and Public Health, 16 (2019), pp. 161
[Kim et al., 2018]
B.J. Kim, M. Nurunnabi, T.H. Kim, S.Y. Jung.
The influence of corporate social responsibility on organizational commitment: The sequential mediating effect of meaningfulness of work and perceived organizational support.
Sustainability, 10 (2018), pp. 2208
[Kline, 2015]
R.B. Kline.
Principles and practice of structural equation modeling.
4th ed., Guilford Press, (2015),
[Korean Educational Development Institute 2024]
Korean Educational Development Institute.
Educational statistics yearbook.
KEDI Publications, (2024),
[Korean Labor Institute 2024]
Korean Labor Institute.
Korean labor and income panel study.
KLI Publications, (2024),
[Kraimer et al., 2005]
M.L. Kraimer, S.J. Wayne, R.C. Liden, R.T. Sparrowe.
The role of job security in understanding the relationship between employees' perceptions of temporary workers and employees' performance.
Journal of Applied Psychology, 90 (2005), pp. 389-398
[Krosnick, 1999]
J.A. Krosnick.
Survey research.
Annual Review of Psychology, 50 (1999), pp. 537-567
[László et al., 2010]
K.D. László, H. Pikhart, M.S. Kopp, M. Bobak, A. Pajak, S. Malyutina, M. Marmot.
Job insecurity and health: A study of 16 European countries.
Social Science & Medicine, 70 (2010), pp. 867-874
[Lazarus, 1999]
R.S. Lazarus.
Stress and emotion: A new synthesis.
Springer, (1999),
[Lazarus and Folkman, 1984]
R.S. Lazarus, S. Folkman.
Stress, appraisal, and coping.
Springer, (1984),
[LeBreton and Senter, 2008]
J.M. LeBreton, J.L. Senter.
Answers to 20 questions about interrater reliability and interrater agreement.
Organizational Research Methods, 11 (2008), pp. 815-852
[Lee et al., 2018]
C. Lee, G.H. Huang, S.J. Ashford.
Job insecurity and the changing workplace: recent developments and the future trends in job insecurity research.
Annual Review of Organizational Psychology and Organizational Behavior, 5 (2018), pp. 335-359
[Lerner and Henke, 2008]
D. Lerner, R.M. Henke.
What does research tell us about depression, job performance, and work productivity?
Journal of Occupational and Environmental Medicine, 50 (2008), pp. 401-410
[Levy and Lemeshow, 2013]
P.S. Levy, S. Lemeshow.
Sampling of populations: Methods and applications.
4th ed., John Wiley & Sons, (2013),
[Lin et al., 2021]
W. Lin, Y. Shao, G. Li, Y. Guo, X. Zhan.
The psychological implications of COVID-19 on employee job insecurity and its consequences: The mitigating role of organization adaptive practices.
Journal of Applied Psychology, 106 (2021), pp. 317
[Lobschat et al., 2021]
L. Lobschat, B. Mueller, F. Eggers, et al.
Corporate digital responsibility.
Journal of Business Research, 122 (2021), pp. 878-888
[Long and Magerko, 2020]
D. Long, B. Magerko.
What is AI literacy? Competencies and design considerations.
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-16
[Lu, 2017]
Y. Lu.
Industry 4.0: A survey on technologies, applications and open research issues.
Journal of Industrial Information Integration, 6 (2017), pp. 1-10
[Ma et al., 2023]
Q. Ma, M. Chen, N. Tang, J. Yan.
The double-edged sword of job insecurity: when and why job insecurity promotes versus inhibits supervisor-rated performance.
Journal of Vocational Behavior, 140 (2023),
[MacKinnon et al., 2004]
D.P. MacKinnon, C.M. Lockwood, J. Williams.
Confidence limits for the indirect effect: distribution of the product and resampling methods.
Multivariate Behavioral Research, 39 (2004), pp. 99-128
[Makarius et al., 2020]
E.E. Makarius, D. Mukherjee, J.D. Fox, A.K. Fox.
Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the organization.
Journal of Business Research, 120 (2020), pp. 262-273
[Makridis and Mishra, 2022]
C.A. Makridis, S. Mishra.
Artificial intelligence as a service, economic growth, and well-being.
Journal of Service Research, 25 (2022), pp. 505-520
[Malik et al., 2021]
N. Malik, S.N. Tripathi, A.K. Kar, S. Gupta.
Impact of artificial intelligence on employees working in industry 4.0 led organizations.
International Journal of Manpower, 43 (2021), pp. 334-354
[Maxwell and Cole, 2007]
S.E. Maxwell, D.A. Cole.
Bias in cross-sectional analyses of longitudinal mediation.
Psychological Methods, 12 (2007), pp. 23-44
[McKinsey & Company 2024]
McKinsey & Company.
AI adoption in Asia-Pacific: Country comparisons.
McKinsey Global Institute, (2024),
[Muthén and Satorra, 1995]
B.O. Muthén, A. Satorra.
Complex sample data in structural equation modeling.
Sociological Methodology, 25 (1995), pp. 267-316
[Nam, 2019]
T. Nam.
Technology usage, expected job sustainability, and perceived job insecurity.
Technological Forecasting and Social Change, 138 (2019), pp. 155-165
[Noy and Zhang, 2023]
S. Noy, W. Zhang.
Experimental evidence on the productivity effects of generative artificial intelligence.
Science, 381 (2023), pp. 187-192
[Onkila and Sarna, 2022]
T. Onkila, B. Sarna.
A systematic literature review on employee relations with CSR: State of art and future research agenda.
Corporate Social Responsibility and Environmental Management, 29 (2022), pp. 435-447
[Orlitzky et al., 2003]
M. Orlitzky, F.L. Schmidt, S.L. Rynes.
Corporate social and financial performance: A meta-analysis.
Organization Studies, 24 (2003), pp. 403-441
[Parker and Grote, 2022]
S.K. Parker, G. Grote.
Automation, algorithms, and beyond: why work design matters more than ever in a digital world.
Applied Psychology, 71 (2022), pp. 1171-1204
[Patton, 2015]
M.Q. Patton.
Qualitative research & evaluation methods.
4th ed., Sage Publications, (2015),
[Peltokorpi and Allen, 2023]
V. Peltokorpi, D.G. Allen.
Job embeddedness and voluntary turnover in the face of job insecurity.
Journal of Organizational Behavior, (2023), forthcoming
[Pereira et al., 2023]
V. Pereira, E. Hadjielias, M. Christofi, D. Vrontis.
A systematic literature review on the impact of artificial intelligence on workplace outcomes.
Human Resource Management Review, 33 (2023),
[Phillips and Burbules, 2000]
D.C. Phillips, N.C. Burbules.
Postpositivism and educational research.
Rowman & Littlefield, (2000),
[Ployhart and Bliese, 2006]
R.E. Ployhart, P.D. Bliese.
Individual adaptability (I-ADAPT) theory: conceptualizing the antecedents, consequences, and measurement of individual differences in adaptability.
Understanding adaptability: A prerequisite for effective performance within complex environments, Emerald Group Publishing Limited, (2006), pp. 3-39
[Ployhart and Vandenberg, 2010]
R.E. Ployhart, R.J. Vandenberg.
Longitudinal research: the theory, design, and analysis of change.
Journal of Management, 36 (2010), pp. 94-120
[Podsakoff et al., 2012]
P.M. Podsakoff, S.B. MacKenzie, N.P. Podsakoff.
Sources of method bias in social science research and recommendations on how to control it.
Annual Review of Psychology, 63 (2012), pp. 539-569
[Podsakoff et al., 2003]
P.M. Podsakoff, S.B. MacKenzie, J.Y. Lee, N.P. Podsakoff.
Common method biases in behavioral research: A critical review of the literature and recommended remedies.
Journal of Applied Psychology, 88 (2003), pp. 879-903
[Podsakoff et al., 2024]
P.M. Podsakoff, N.P. Podsakoff, L.J. Williams, C. Huang, J. Yang.
Common method bias: it's bad, it's complex, it's widespread, and it's not easy to fix.
Annual Review of Organizational Psychology and Organizational Behavior, 11 (2024), pp. 17-61
[Polit and Beck, 2006]
D.F. Polit, C.T. Beck.
The content validity index: are you sure you know what's being reported? Critique and recommendations.
Research in Nursing & Health, 29 (2006), pp. 489-497
[Popper, 1959]
K. Popper.
The logic of scientific discovery.
Hutchinson, (1959),
[Preacher and Hayes, 2008]
K.J. Preacher, A.F. Hayes.
Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models.
Behavior Research Methods, 40 (2008), pp. 879-891
[Preacher and Kelley, 2011]
K.J. Preacher, K. Kelley.
Effect size measures for mediation models: quantitative strategies for communicating indirect effects.
Psychological Methods, 16 (2011), pp. 93-115
[Puntoni et al., 2021]
S. Puntoni, R.W. Reczek, M. Giesler, S. Botti.
Consumers and artificial intelligence: an experiential perspective.
Journal of Marketing, 85 (2021), pp. 131-151
[Ransbotham et al., 2019]
S. Ransbotham, S. Khodabandeh, R. Fehling, B. LaFountain, D. Kiron.
Winning with AI.
MIT Sloan Management Review, (2019),
[Rindfleisch et al., 2008]
A. Rindfleisch, A.J. Malter, S. Ganesan, C. Moorman.
Cross-sectional versus longitudinal survey research: concepts, findings, and guidelines.
Journal of Marketing Research, 45 (2008), pp. 261-279
[Ryan et al., 2006]
A.B. Ryan, et al.
Post-positivist approaches to research.
Researching and writing your thesis: A guide for postgraduate students, pp. 12-26
[Sætra, 2023]
H.S. Sætra.
Generative AI: here to stay, but for good?
Technology in Society, 75 (2023),
[Sanderson and Andrews, 2006]
K. Sanderson, G. Andrews.
Common mental disorders in the workforce: recent findings from descriptive and social epidemiology.
The Canadian Journal of Psychiatry, 51 (2006), pp. 63-75
[Scherbaum and Ferreter, 2009]
C.A. Scherbaum, J.M. Ferreter.
Estimating statistical power and required sample sizes for organizational research using multilevel modeling.
Organizational Research Methods, 12 (2009), pp. 347-367
[Shen and Zhang, 2019]
J. Shen, H. Zhang.
Socially responsible human resource management and employee support for external CSR: roles of organizational CSR climate and perceived CSR directed toward employees.
Journal of Business Ethics, 156 (2019), pp. 875-888
[Shepherd and Majchrzak, 2022]
D.A. Shepherd, A. Majchrzak.
Machines augmenting entrepreneurs: opportunities (and threats) at the nexus of artificial intelligence and entrepreneurship.
Journal of Business Venturing, 37 (2022),
[Shoss, 2017]
M.K. Shoss.
Job insecurity: an integrative review and agenda for future research.
Journal of Management, 43 (2017), pp. 1911-1939
[Shoss et al., 2023]
M.K. Shoss, S. Su, A.E. Schlotzhauer, N. Carusone.
Working hard or hardly working? An examination of job preservation responses to job insecurity.
Journal of Management, 49 (2023), pp. 2387-2414
[Shrout and Bolger, 2002]
P.E. Shrout, N. Bolger.
Mediation in experimental and nonexperimental studies: new procedures and recommendations.
Psychological Methods, 7 (2002), pp. 422-445
[Si et al., 2023]
S. Si, J. Hall, R. Suddaby, D. Ahlstrom, J. Wei.
Technology, entrepreneurship, innovation and social change in digital economics.
Technovation, 119 (2023),
[Sinclair et al., 2021]
R.R. Sinclair, T. Allen, L. Barber, M. Bergman, T. Britt, A. Butler, Z. Yuan.
Occupational health science in the time of COVID-19: now more than ever.
Occupational Health Science, 4 (2021), pp. 1-22
[Sison et al., 2024]
A.J.G. Sison, M.T. Daza, R. Gozalo-Brizuela, E.C. Garrido-Merchán.
ChatGPT: more than a "weapon of mass deception".
International Journal of Human–Computer Interaction, 40 (2024), pp. 4853-4872
[Skare et al., 2024]
M. Skare, B. Gavurova, S.B. Buric.
Artificial intelligence and wealth inequality: a comprehensive empirical exploration of socioeconomic implications.
Technology in Society, 79 (2024),
[Sobel, 1982]
M.E. Sobel.
Asymptotic confidence intervals for indirect effects in structural equation models.
Sociological Methodology, 13 (1982), pp. 290-312
[Statista 2024]
Statista.
Use of artificial intelligence (AI) by accommodation businesses in Europe as of August 2023, by country.
(2024),
[Statistics Korea 2024]
Statistics Korea.
Economically active population survey.
KOSTAT Publications, (2024),
[Švarc et al., 2021]
J. Švarc, J. Lažnjak, M. Dabić.
The role of national intellectual capital in the digital transformation of EU countries.
Journal of Intellectual Capital, 22 (2021), pp. 768-791
[Tambe et al., 2019]
P. Tambe, P. Cappelli, V. Yakubovich.
Artificial intelligence in human resources management: challenges and a path forward.
California Management Review, 61 (2019), pp. 15-42
[Taras et al., 2010]
V. Taras, B.L. Kirkman, P. Steel.
Examining the impact of Culture's consequences: A three-decade, multilevel, meta-analytic review of Hofstede's cultural value dimensions.
Journal of Applied Psychology, 95 (2010), pp. 405-439
[Tiku, 2023]
S. Tiku.
AI-induced labor market shifts and aging workforce dynamics: A cross-national study of corporate strategic responses in Japan, USA, and India.
(2023),
[Turker, 2009]
D. Turker.
Measuring corporate social responsibility: A scale development study.
Journal of Business Ethics, 85 (2009), pp. 411-427
[Uren and Edwards, 2023]
V. Uren, J.S. Edwards.
Technology readiness and the organizational journey towards AI adoption: an empirical study.
International Journal of Information Management, 68 (2023),
[Valtonen et al., 2025]
A. Valtonen, M. Saunila, J. Ukko, L. Treves, P. Ritala.
AI and employee wellbeing in the workplace: an empirical study.
Journal of Business Research, 199 (2025),
[Velte, 2021]
P. Velte.
Meta-analyses on corporate social responsibility (CSR): a literature review.
Management Review Quarterly, 72 (2021), pp. 627-675
[Voegtlin and Greenwood, 2016]
C. Voegtlin, M. Greenwood.
Corporate social responsibility and human resource management: A systematic review and conceptual analysis.
Human Resource Management Review, 26 (2016), pp. 181-197
[Wang et al., 2021]
B. Wang, Y. Liu, J. Qian, S.K. Parker.
Achieving effective remote working during the COVID-19 pandemic: A work design perspective.
Applied Psychology, 70 (2021), pp. 16-59
[Wang et al., 2015]
H.J. Wang, C.Q. Lu, O.L. Siu.
Job insecurity and job performance: the moderating role of organizational justice and the mediating role of work engagement.
Journal of Applied Psychology, 100 (2015), pp. 1249-1258
[Wilson and Daugherty, 2018]
H.J. Wilson, P.R. Daugherty.
Collaborative intelligence: humans and AI are joining forces.
Harvard Business Review, 96 (2018), pp. 114-123
[Wirtz et al., 2023]
J. Wirtz, W.H. Kunz, N. Hartley, J. Tarbit.
Corporate digital responsibility in service firms and their ecosystems.
Journal of Service Research, 26 (2023), pp. 173-190
[Wu et al., 2022]
T.J. Wu, J.M. Li, Y.J. Wu.
Employees' job insecurity perception and unsafe behaviours in human–machine collaboration.
Management Decision, 60 (2022), pp. 2409-2432
[Zhao et al., 2022]
X. Zhao, C. Wu, C.C. Chen, Z. Zhou.
The influence of corporate social responsibility on incumbent employees: A meta-analytic investigation of the mediating and moderating mechanisms.
Journal of Management, 48 (2022), pp. 114-146
[Zirar et al., 2023]
A. Zirar, S.I. Ali, N. Islam.
Worker and workplace Artificial intelligence (AI) coexistence: emerging themes and research agenda.
Technovation, 124 (2023),
Copyright © 2025. The Authors