This conceptual study explores how artificial intelligence (AI) is transforming the nature of work and reconfiguring the experience of humanness, particularly among low-skilled and informal workers.
Method
Using an integrative literature review methodology, the study synthesises interdisciplinary research from organisational studies, sociology, and AI ethics to examine the mechanisms through which AI-driven labour displacement, algorithmic management, and structural precarity contribute to new forms of exploitation.
Findings
The study develops a novel conceptual framework that links technological transformation to the erosion of the relational, moral, and emotional dimensions of work, resulting in conditions increasingly resembling modern slavery.
Originality
The study’s novelty lies in its reframing of AI as a socio-technical actor with ontological consequences for worker identity, autonomy, and dignity. The findings underscore the need for ethical AI design, inclusive policy frameworks, and human-centred organisational practices.
Practical implications
This paper offers practical implications for policymakers, technologists, and business leaders seeking to align innovation with social justice and sustainable labour futures.
Plain summary
Artificial intelligence (AI) is reshaping the nature of work and disrupting the human experience, especially for low-skilled and informal workers, highlighting the urgency and complexity of this research. AI-driven labour displacement and algorithmic management contribute to new forms of exploitation that echo modern slavery. The erosion of humanness at work is linked to reduced autonomy, empathy, and moral agency under opaque algorithmic systems. A socio-technical framework is needed to address AI’s impact on dignity and agency, with ethical design and inclusive governance at its core.
JEL codes: O33, O31, O32
We are entering an era where artificial intelligence (AI) is not merely transforming industries; it is reshaping the very essence of what it means to be human at work. As intelligent systems increasingly replace human judgment, automate decision-making, and govern labour through opaque algorithms, urgent questions arise about autonomy, dignity, and the future of human agency in the workplace (Kellogg et al., 2020a; Moore, 2018). AI, as the spearhead of this revolution, presents a double-edged sword: while it promises efficiency, innovation, and economic growth, it simultaneously threatens to deepen inequality, displace vulnerable workers, and erode the essential qualities of humanness in the workplace (Freedom United, 2025; Haslam, 2006; LeBaron, 2020; Schwab, 2016; UNDP, 2025).
The rapid proliferation of AI technologies marks a profound shift in the structure and meaning of work. Celebrated for its potential to enhance productivity, streamline operations, and optimise decision-making, AI is simultaneously reshaping the social and moral fabric of the workplace (Pereira et al., 2023; Schwab, 2016). This dual capacity to enhance and to erode raises critical questions about what it means to be human in technologically mediated work environments. While AI promises efficiency, it also challenges core dimensions of humanness by displacing workers, eroding autonomy, and mediating relationships through algorithmic systems that obscure accountability and empathy (Huang & Rust, 2018; Einola et al., 2024).
This paper responds to these tensions by exploring the less visible but increasingly urgent risks associated with AI deployment, particularly its role in reproducing conditions of exploitation for low-skilled and informal workers. As AI displaces human labour and inserts algorithmic management into everyday organisational processes, many workers find themselves in precarious positions marked by digital surveillance, economic insecurity, and limited recourse to redress (Kellogg et al., 2020b; Moore, 2018). These changes raise concerns not only about labour rights but about the degradation of humanness itself: the erosion of agency, moral judgment, and the capacity for meaningful human connection at work (Einola, 2023; Haslam, 2006).
We contribute to the growing literature on innovation and artificial intelligence as follows: first, we reframe AI not as a neutral tool of efficiency, but as a socio-technical actor embedded within exploitative labour architectures. Second, our study advances a novel conceptual framework that links AI-driven labour displacement, algorithmic control, and structural precarity to the erosion of humanness in the workplace, particularly for low-skilled and informal workers. Third, by connecting AI deployment to the conditions that enable modern slavery, our paper fills a critical gap in existing research, which often overlooks the emotional, relational, and ethical dimensions of technologically mediated work. Drawing on interdisciplinary insights, we challenge techno-optimistic narratives and call for a deeper interrogation of how AI reconfigures moral agency, autonomy, and dignity in organisational life.
Methodology
Our study adopts a conceptual and integrative literature review to examine how AI reshapes work conditions and contributes to the erosion of humanness, particularly among low-skilled and informal workers. It synthesises interdisciplinary scholarship across organisational studies, sociology, AI ethics, and labour economics to develop a theoretical framework that links AI-driven labour displacement, algorithmic management, and structural precarity to emerging forms of exploitation. Although numerous studies (e.g., Kellogg et al., 2020; Moore, 2018; LeBaron, 2020; Einola & Khoreva, 2023) have explored the economic and managerial implications of AI, there remains a critical gap in understanding its ontological and relational consequences for workers. This paper addresses that gap by systematically reviewing empirical and theoretical literature that interrogates the intersection of AI, labour transformation, and humanness.
The literature search was conducted across major academic databases such as Scopus, Web of Science, Google Scholar and EBSCOhost (Bako & Syed, 2018; Pepple & Olowookere, 2021) and supplemented with grey literature from international institutions and advocacy organisations, including the International Labour Organisation, Fairwork Foundation, Human Rights Watch, the Business & Human Rights Resource Centre, and the OECD. EU Ethics Guidelines for Trustworthy AI, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and IEEE’s Ethically Aligned Design documents were also included to contextualise the ethical dimensions of AI deployment. The search covered the period 2006–2025, capturing developments from the early adoption of algorithmic management to the most recent debates on AI ethics, digital coercion, and worker rights.
Search terms included combinations of: “artificial intelligence,” “algorithmic management,” “digital labour,” “humanness,” “worker autonomy,” “exploitation,” “modern slavery,” and “ethical governance.” Boolean operators (AND/OR) were used to refine searches and capture interdisciplinary work across management, sociology, information systems, and human rights studies.
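The pairing of search terms with Boolean operators described above can be sketched programmatically. The snippet below is illustrative only: the source does not report the exact query strings or term groupings used, so the split into "technology" and "labour" term groups, and the AND-based pairing strategy, are assumptions for demonstration.

```python
from itertools import product

# Illustrative sketch only: the review's actual query strings are not
# reported, so the grouping of terms below is an assumption. Each
# technology-related term is paired with each labour/ethics-related term
# using AND, mirroring the Boolean strategy described in the text.
tech_terms = ['"artificial intelligence"', '"algorithmic management"',
              '"digital labour"']
labour_terms = ['"humanness"', '"worker autonomy"', '"exploitation"',
                '"modern slavery"', '"ethical governance"']

def build_queries(group_a, group_b):
    """Combine each term in group_a with each term in group_b via AND,
    yielding one database query string per pairing."""
    return [f"{a} AND {b}" for a, b in product(group_a, group_b)]

queries = build_queries(tech_terms, labour_terms)
print(len(queries))   # 3 tech terms x 5 labour terms = 15 query strings
print(queries[0])     # '"artificial intelligence" AND "humanness"'
```

In practice, database front ends such as Scopus or Web of Science accept these quoted-phrase AND-combinations directly, so enumerating the pairings up front helps document search coverage for reproducibility.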
To ensure analytical rigour and relevance, the review applied structured inclusion and exclusion criteria. Sources were included if they comprised peer-reviewed journal articles, book chapters, or credible institutional reports that examined AI, algorithmic systems, or digital technologies in relation to work, labour, or human rights. Particular emphasis was placed on studies that explored worker well-being, autonomy, moral agency, or emerging forms of coercion and control. The review also incorporated theoretical, conceptual, and policy-oriented contributions that advanced debates on the ethical and organisational implications of AI. Conversely, publications were excluded if they were predominantly technical or engineering-focused and did not address social or ethical dimensions, lacked academic credibility, were duplicates, or consisted of commentaries and opinion pieces without substantive analytical grounding.
The analytical process followed a concept-driven synthesis approach (Webster & Watson, 2002), integrating insights from diverse disciplinary perspectives to construct theoretical linkages between AI systems and the erosion of humanness in contemporary labour. Each selected source was examined for core analytical dimensions, including mechanisms of algorithmic control and surveillance, the effects of AI on worker autonomy, dignity, and emotional well-being, the role of governance and regulation, and the emergence of new forms of digital coercion and dependency. Through an inductive, iterative coding process, these dimensions were refined to identify relationships among key constructs and uncover patterns connecting technological design with socio-organisational outcomes. The synthesis process was inherently reflexive and cyclical, involving continuous comparison across empirical contexts and theoretical paradigms.
The iterative engagement with literature and relevant reports allowed for the recognition of both convergent and divergent interpretations, ultimately contributing to the development of a multi-level conceptual framework that explains how algorithmic systems mediate control, dependency, and dehumanisation in the evolving landscape of work. Following the literature synthesis, the article presents a conceptual framework that theorises the mechanisms through which AI contributes to exploitative labour conditions and the erosion of humanness. This is followed by a discussion of the framework’s theoretical contributions and concludes with policy and design recommendations aimed at restoring dignity, empathy, and moral agency in technologically mediated work environments.
Literature review
The accelerating integration of AI in organisational life demands critical reflection not only on how work is changing, but also on how such changes are altering the fundamental human experience in the workplace. While the role of technology in reshaping labour has been studied extensively since the Industrial Revolution, the advent of AI represents a qualitatively different transformation. AI systems do not simply augment human work; they often replace it, mediate decision-making processes, and increasingly manage workers through non-human, opaque algorithmic systems (Kellogg et al., 2020b; Kim et al., 2024).
The rise of AI in labour markets
AI is broadly defined as the simulation of human intelligence processes by machines, particularly computer systems that can perform tasks such as learning, reasoning, and self-correction (Acemoglu & Restrepo, 2017). In the context of labour markets, AI intersects with automation, the use of technologies to perform tasks without human intervention, and algorithmic management, which refers to the use of data-driven algorithms to supervise, evaluate, and direct worker behaviour (Kellogg et al., 2020). Together, these technologies are reshaping the nature of work across industries, altering labour dynamics and employment structures at a global scale.
The sectors most impacted by AI and automation are those that rely heavily on routine or repetitive tasks. Manufacturing has long been at the forefront of automation, with robotics replacing assembly-line workers in roles that require speed and precision (Acemoglu & Restrepo, 2017). However, more recent developments have brought AI into service-oriented industries such as retail, hospitality, and logistics, where chatbots, automated checkout systems, and delivery route optimisation tools are displacing human workers (Chui et al., 2016).
Beyond the workplace, AI-related transformations are entangled with broader patterns of digital inequality and structural marginalisation. The increasing reliance on digital platforms, whether for banking, employment, or service access, risks excluding populations already at the margins. For instance, the closure of bank branches in rural UK communities has disproportionately impacted older, lower-income, and digitally excluded groups, demonstrating how technological transitions, when poorly managed, can deepen inequality and isolation (UK Parliament, 2025). Similarly, the adoption of AI in work systems may lead to a withdrawal of human oversight and access, particularly affecting informal or geographically remote workers, thereby reinforcing the very conditions under which exploitative labour arrangements flourish.
Furthermore, agriculture, once considered immune to digitisation, is also undergoing rapid transformation through smart farming technologies, drone surveillance, and autonomous harvesting systems (Klerkx et al., 2019; Rotz et al., 2019). While these innovations contribute to efficiency and productivity, they simultaneously generate job displacement and precarity, particularly among low-skilled workers who are unable to transition into higher-skilled roles.
Modern slavery
Modern slavery (MS) is an umbrella term encompassing a range of exploitative labour practices in which individuals are coerced into work through violence, threats, deception, or debt. According to the International Labour Organization (2022), MS includes forced labour, debt bondage, human trafficking, and slavery-like practices such as forced marriage and exploitative child labour (ILO & Walk Free Foundation, 2017; Walk Free Foundation & Minderoo Foundation, 2018). It is characterised by the denial of personal freedom and the inability to leave the work situation without significant penalty.
One of the emerging and less visible forms of MS is exploitative gig work. In many cases, workers are drawn into digital labour platforms with the promise of flexibility and income generation, only to find themselves trapped in precarious, underpaid conditions controlled by opaque algorithmic systems (Wood et al., 2019). These platforms often classify workers as independent contractors, denying them basic labour rights such as minimum wage, paid leave, and collective bargaining, conditions that, in extreme cases, can mirror coercive or exploitative labour.
The structural drivers of MS remain persistent and deeply entrenched. Poverty, limited access to education, and weak labour protections are foundational conditions that push individuals into exploitative work arrangements (Crane et al., 2019). Migration, particularly irregular or forced migration, further exacerbates vulnerability, as displaced individuals may lack legal status or social support, making them easy targets for traffickers and exploitative employers (Landman & Silverman, 2019; Milivojevic et al., 2020). In many low-income countries, the informal economy dominates, and regulatory oversight is minimal (Helfaya et al., 2024). These conditions create an enabling environment in which MS not only persists but adapts to new technological and economic landscapes.
As AI continues to disrupt traditional labour markets, it is critical to examine how the same structural vulnerabilities that facilitate MS are being reconfigured and intensified by new forms of technological control (Russell & Norvig, 2021; Walk Free et al., 2020). The following section outlines a conceptual framework linking AI-driven labour displacement to heightened risks of exploitation, with particular attention to the mechanisms that enable this transition (Boyd et al., 2018; Klerkx et al., 2019; Walk Free et al., 2020).
As organisations adopt AI technologies to automate routine tasks, manage gig workers, and optimise performance, the nature of work becomes less relational and more computational. Interactions once mediated through social norms, empathy, or ethical deliberation are now often governed by algorithms designed for efficiency and control. These changes have profound implications for humanness in the workplace, defined here as the constellation of relational, moral, emotional, and social attributes that constitute being human at work (Haarjärvi & Laari-Salmela, 2024; Haslam & Loughnan, 2014). The erosion of such attributes can lead to experiences of alienation, depersonalisation, and dehumanisation.
This concern resonates with long-standing sociological theories. Mead and Blumer (1966) emphasised that humanness is constructed through social interaction and symbolic communication. When AI mediates or replaces these interactions, opportunities for mutual recognition, empathy, and identity formation may be diminished. Meanwhile, Parsons (1951) and Merton (1968) highlighted the importance of role performance and social integration in maintaining a sense of human purpose and order, both of which are destabilised in algorithmically managed, precarious labour environments.
The concern extends beyond the displacement of workers or the disruption of workflows. AI reconfigures the foundational conditions under which humanness can be enacted or diminished. In digitally mediated, low-skilled, or informal labour contexts, work is increasingly stripped of its humanising elements such as relational connection, mutual recognition, personal agency, and dignity (Al-Amoudi, 2021; Haslam, 2006). As algorithmic systems take over functions once grounded in social interaction and ethical reasoning, the nature of work becomes more mechanical and less meaningful. This paper therefore underscores the need for a critical examination of the shifting contours of humanness in technologically governed workplaces and asks: how do we sustain human flourishing when judgment, empathy, and morality are outsourced to artificial systems?
Convergence of AI, organisational transformation, and the shifting meaning of humanness
Scholars from across disciplines, including organisational studies, sociology, information systems, psychology, and AI ethics, have increasingly turned their attention to how AI is not only altering the nature of work but also reshaping the social and moral foundations of the workplace (Bankins et al., 2024; Einola & Khoreva, 2023; Kim et al., 2024). While early scholarship on automation primarily emphasised economic efficiency and job displacement, more recent work has begun to interrogate how AI technologies intersect with power, identity, emotional life, and the structure of human relations at work (Moore, 2018; Orlikowski & Scott, 2008).
Recent studies highlight that AI introduces a fundamentally different kind of technological mediation, one that is not merely mechanical but algorithmic, data-driven, and often opaque (Graham et al., 2017; Kellogg et al., 2020b). In this new regime, decisions once made through human deliberation and social negotiation are increasingly handled by machine-learning models trained to optimise efficiency and control. As a result, the very conditions under which human beings engage, relate, and make sense of their roles at work are being reorganised by technologies that often bypass empathy, judgment, and dialogue (Haslam, 2006).
Moreover, while many accounts celebrate the performance gains associated with AI integration, there is growing concern that these gains come at the cost of essential human values such as autonomy, dignity, and mutual recognition (Crăiuț & Iancu, 2022; Einola & Khoreva, 2023). This concern is particularly acute for workers in low-skilled, informal, or digitally mediated environments, where the pressures of precarious work intersect with algorithmic control and social invisibility (LeBaron, 2020; Wood et al., 2019). Despite a surge in interest, few studies have offered a holistic synthesis of how AI technologies simultaneously displace labour, reconfigure social interactions, and alter the ontological status of workers.
The intersection of AI, labour transformation, and humanness has emerged as a critical domain of inquiry across organisational studies, sociology, human resource management, and AI ethics (European Commission, 2021; IEEE Standards Association, 2025; UNESCO, 2021). While a growing body of literature explores how AI transforms work and organisations, the deeper implications of such transformations for human identity, dignity, and moral agency at work remain underexamined and urgently in need of attention. This review synthesises three interrelated strands of scholarship: (1) AI and labour displacement, (2) algorithmic control and the erosion of worker autonomy, and (3) the dehumanising and rehumanising potentials of AI in the workplace.
AI and labour displacement
AI-driven automation has rapidly expanded beyond manufacturing to disrupt a wide range of sectors, including banking, logistics, customer service, agriculture, and the gig economy (Acemoglu & Restrepo, 2017; Chui et al., 2016). Research shows that low-skilled and manual labourers are disproportionately affected by these disruptions, often without access to retraining or adequate social protections (International Labour Organization, 2021; Walk Free et al., 2020). As AI substitutes human input in routinised tasks, workers are pushed into informal or precarious work arrangements where oversight is minimal and protections are weak (Sachs et al., 2020).
These shifts mirror a broader trend in which technology is deployed in ways that deepen inequality and marginalisation (Standing, 2016). However, beyond economic displacement, there is growing concern that automation also erodes the social and ethical fabric of work, undermining the relational and identity-affirming aspects that contribute to humanness in the workplace (Arias-Pérez & Vélez-Jaramillo, 2022; Bankins et al., 2024).
Algorithmic management and the erosion of autonomy
Alongside automation, AI is increasingly used to manage human workers through algorithmic systems that assign tasks, monitor behaviour, and administer discipline (Kellogg et al., 2020b; Kim et al., 2024; Wood et al., 2019). Such systems are prevalent in platform-based gig work, where human judgment and discretion are replaced by opaque performance metrics and automated penalties (Graham et al., 2017; Liu et al., 2024; Wood et al., 2019). Studies have shown that workers subjected to these systems experience a loss of control, autonomy, and psychological safety, factors critical to human dignity and agency (Moore, 2018; Wood et al., 2019).
A salient example is Uber’s algorithmic deactivation system, which can automatically suspend drivers based on customer ratings or behavioural thresholds, often without explanation or recourse (Rosenblat, 2016). This represents a new form of automated discipline in which labour relations are governed not through interpersonal negotiation or social norms, but by unchallengeable computational authority. Similarly, in Amazon’s warehouses, algorithmic “time-on-task” systems monitor workers’ every movement and issue automated warnings or terminate them for exceeding break limits, regardless of physical strain or individual context (Cant, 2019; Holland & Vickers, 2021). These practices demonstrate how AI-enabled management enforces compliance and productivity through depersonalised systems that obscure accountability and moral responsibility.
Beyond gig and logistics sectors, such systems are increasingly embedded in white-collar settings through performance dashboards, AI-powered scheduling tools, and automated hiring platforms. As Einola and Khoreva (2023) show, the coexistence of humans and anthropomorphised AI agents in these environments reshapes interpersonal dynamics. It introduces new emotional tensions, as workers are simultaneously expected to treat AI as a co-worker while complying with its non-negotiable authority. This aligns with insights from sociomaterial theorists who argue that technologies not only mediate but fundamentally reconfigure the ontological status of human actors within organisations (Orlikowski & Scott, 2008).
Collectively, these developments point to a growing managerial infrastructure in which algorithmic logic supersedes relational judgment. As the capacity for human discretion, empathy, and mutual recognition is eroded, work becomes less about interaction and more about behavioural compliance within rigid digital architectures. The consequence is not merely managerial efficiency, but the restructuring of work itself in ways that diminish human agency and expose workers to structurally coercive conditions.
Dehumanisation, humanness, and the role of work
A smaller but important strand of research examines how AI systems may inadvertently dehumanise workers by diminishing opportunities for empathy, connection, and ethical deliberation. Haslam (2006) and Haslam and Loughnan (2014) argue that dehumanisation occurs when individuals are denied attributes central to personhood, such as moral agency, individuality, and warmth. In the context of algorithmically controlled labour, these attributes are often constrained or overridden by non-human systems designed to prioritise efficiency over care.
Conversely, a few scholars call for a more hopeful view of AI’s potential to enhance humanness when used intentionally. Recent studies emphasise the need to reconceptualise innovation and knowledge creation as socio-technical processes where human capabilities and digital systems co-evolve (Bankins et al., 2024; Einola & Khoreva, 2023; Pokubo et al., 2024). Further, Budhwar et al. (2023) and Malik et al. (2022) highlight how AI can be leveraged to improve employee experiences, personalise support, and reduce menial tasks. However, such outcomes depend heavily on ethical design and human-centred implementation (Crăiuț & Iancu, 2022; Nadeem et al., 2022). Philosophers and sociologists alike have raised the question of whether AI systems that mimic human traits challenge or dilute what is considered essentially human (Mead & Blumer, 1966; Orlikowski & Scott, 2008). Al-Amoudi (2021) asks whether creativity, empathy, or moral reasoning can truly coexist with automated decision-making in the workplace or whether the substitution of these traits signals a deeper erosion of humanness.
Conceptual framework: AI, precarity, and the erosion of humanness
This section advances a conceptual framework that theorises the relational and systemic mechanisms through which AI contributes to the erosion of humanness in contemporary labour systems. Specifically, it interrogates how AI-driven transformations, when embedded in socioeconomically vulnerable contexts and deployed without adequate ethical or institutional safeguards, can give rise to new forms of precarity, dependency, and dehumanisation.
Rather than framing AI as a neutral or purely technical innovation, this framework positions it as a socio-technical actor whose effects are co-constituted by the political economy of work, organisational power dynamics, and the algorithmic infrastructures that mediate labour. Drawing on interdisciplinary insights, it conceptualises the erosion of humanness as a cumulative outcome of four interlocking processes: (1) labour displacement and disintermediation, (2) heightened economic and legal insecurity, (3) algorithmic governance and digital coercion, and (4) the structural marginalisation of already precarious worker populations.
This framework contends that AI does not merely automate tasks or restructure workflows; it fundamentally reshapes the relational and moral dimensions of work by reducing the space for human judgment, mutual recognition, and ethical deliberation. These transformations risk undermining the very conditions under which workers experience agency, autonomy, and social belonging. In doing so, the framework aligns with recent calls in critical organisational theory to move beyond instrumentalist accounts of AI and attend instead to its ontological and affective consequences for working lives. It seeks to illuminate how technologies designed for efficiency can, paradoxically, erode the very human foundations upon which meaningful, dignified labour depends.
Labour displacement and economic insecurity
At the foundation of the conceptual framework is the phenomenon of labour displacement, wherein AI and related technologies automate both physical and cognitive tasks once performed by humans. This displacement is not uniform; it disproportionately affects low-skilled, routine-based, and manual occupations that are most susceptible to automation (Acemoglu & Restrepo, 2017; Chui et al., 2016). As AI systems take over manufacturing lines, customer service roles, agricultural labour, and even some white-collar tasks, large segments of the workforce are pushed out of traditional employment channels without access to meaningful reskilling pathways. This shift exacerbates economic insecurity and social exclusion, diminishing the stabilising role that work traditionally plays in constructing identity and social integration (International Labour Organization, 2021; Mead & Blumer, 1966).
In many economies, particularly in the Global South or within informal labour markets, displaced workers are not absorbed into more sophisticated or digitally oriented roles. Instead, they are often relegated to precarious or informal employment arrangements, where protections are weak or non-existent (International Labour Organisation, 2021). These include temporary gigs, contract work with algorithmic oversight, or underpaid roles in informal supply chains. For example, in agriculture, the introduction of AI-powered autonomous tractors by companies like John Deere and FarmWise has begun to replace seasonal field workers, many of whom lack digital skills or mobility to transition into alternative roles (Klerkx et al., 2019).
This economic marginalisation leads to heightened insecurity, characterised by irregular income, lack of social protection, and limited access to grievance mechanisms or collective bargaining (Helfaya et al., 2025). Such insecurity has implications far beyond material deprivation; it disrupts the stabilising and identity-affirming functions of work, which sociologists have long argued are central to social cohesion and individual well-being (Mead & Blumer, 1966). When employment becomes unpredictable or exploitative, it severs the relational bonds that connect individuals to their communities, organisations, and sense of self.
The closure of public services under digital transformation can further compound these vulnerabilities. For example, in the UK, over 1,100 high street bank branches closed between 2023 and 2024, disproportionately affecting elderly and rural populations who depend on physical access to financial services (UK Parliament, 2025). This demonstrates how digital infrastructure, when implemented without inclusive planning, can deepen the marginalisation of already vulnerable groups, paralleling how AI in the workplace may displace and exclude those least equipped to adapt.
Algorithmic management and digital coercion
AI not only displaces labour but also introduces new forms of managerial control through algorithmic management systems. These systems allocate tasks, assess performance, and enforce discipline based on data-driven metrics, often without human oversight (Kellogg et al., 2020b). Workers in digitally mediated environments, such as the gig economy, report experiences of surveillance, lack of transparency, and reduced autonomy (Liu et al., 2024; Moore, 2018). This algorithmic governance transforms the employment relationship, shifting it from a social contract based on mutual recognition to a computational process driven by behavioural outputs (Einola & Khoreva, 2023; Kim et al., 2024; Wood et al., 2019). As workers become subjects of systems they do not control, they lose not only power but also relational connection and meaning, key components of humanness in organisational life.
Structural vulnerability and pathways to exploitation
These technological shifts occur within structural conditions that exacerbate their harmful effects. Economic inequality, migration, gender disparities, and weak labour enforcement create fertile ground for exploitative labour practices (Crane et al., 2019; Rogerson & Parry, 2020). AI systems, rather than mitigating these inequities, may amplify them by reinforcing historical biases and making labour markets even more opaque (Eubanks, 2018; Nadeem et al., 2022). Displaced and economically desperate individuals are often funnelled into informal gig work, where algorithmic systems determine access to income, reputation, and opportunities without recourse or transparency (Graham et al., 2017). This interaction between technological control and socio-economic vulnerability can entrench patterns of coercion and dependency that echo characteristics of MS (LeBaron, 2020).
Conceptual model: from technological displacement to MS risk
Fig. 1 shows a conceptual flow of how AI may contribute to MS and the erosion of humanness:
This model illustrates the dynamic interaction between technological transformation and social precarity. AI-driven displacement acts as the catalyst, forcing vulnerable workers into informal labour markets where protections are weak or absent (Sachs et al., 2020). In these spaces, exploitative offers proliferate, often disguised as flexible digital entrepreneurship. The situation is compounded by algorithmic systems that monitor, evaluate, and discipline workers based on opaque and unchallengeable data logics (Rosenblat, 2016).
The upward feedback loop emphasises the cyclical nature of digital coercion: algorithmic control not only reinforces economic dependency but also erodes workers’ ability to negotiate better conditions or exit harmful work. This recursive dynamic creates a technologically mediated dependency that parallels more traditional forms of forced labour, where workers stay not due to physical constraint, but because the digital architecture of work leaves them with no viable alternative (van Doorn & Badger, 2020).
Critically, the model underscores that AI’s exploitative potential is not deterministic. Rather, it emerges from the interaction between technological systems and socio-political contexts. Without legal accountability, ethical design principles, and human-centred governance, AI becomes a tool of structural violence, producing new architectures of dehumanisation (Al-Amoudi, 2021; Haslam & Loughnan, 2014). The model offers a theoretical lens through which to examine not only the risks of technological deployment but the shifting boundaries of humanness in the workplace.
Algorithmic management and worker impact: real-world evidence
To enrich the analytical depth of this study and reinforce its conceptual propositions, this section integrates recent empirical evidence documenting the material consequences of algorithmic management on workers across diverse economic and geographical contexts. Drawing upon authoritative sources (Fairwork, 2023, 2025a, 2025b; Human Rights Watch, 2025) and peer-reviewed research (e.g., Ustek Spilda et al., 2024), the evidence illustrates how algorithmic systems have reconfigured managerial control and labour relations in the digital economy.
These studies collectively reveal that ostensibly neutral technologies such as dynamic pricing algorithms, performance-scoring systems, automated scheduling, and AI-based dispute resolution frequently operate as mechanisms of digital coercion. The organisations investigated embedded managerial decisions in opaque computational processes; their AI systems thus reproduced asymmetrical power relations, intensified precarity, and eroded the moral and emotional dimensions of work.
Table 1 presents fifteen emblematic cases of algorithmic labour governance across major global platforms in transportation, logistics, delivery, and online freelance sectors. Each case identifies the core algorithmic feature, its intended managerial function, and its documented effects on workers’ autonomy, security, and well-being. Collectively, these examples demonstrate that algorithmic exploitation constitutes not a series of isolated incidents but a pervasive structural condition of twenty-first-century labour. They provide tangible evidence of how the architecture of algorithmic exploitation materialises the processes of dehumanisation theorised in the preceding framework, thereby strengthening the study’s conceptual-empirical integration.
Table 1. Platform algorithmic management and worker impact.
| Platform / Employer | Algorithmic Feature | Function / Purpose | Reported Worker Impact | Source |
|---|---|---|---|---|
| Uber (Global) | Automated performance ratings and deactivation algorithms. Automated penalties for cancellations | Enforce service quality and behavioural compliance | Sudden account deactivation without explanation; income insecurity; psychological distress | (Human Rights Watch, 2025) |
| DoorDash (US) | Dynamic pricing and order-assignment algorithm | Allocate tasks and optimise delivery routes | Earnings volatility; opaque pay deductions; pressure to accept every order | (Human Rights Watch, 2025) |
| Amazon (Warehouses) | “Time-on-task” AI surveillance system | Track productivity and idle time | Work intensification; physical injuries; stress from continuous monitoring | (Ustek Spilda et al., 2024) |
| Amazon Flex (US/UK) | Automated scheduling and driver-rating system | Coordinate delivery availability and punctuality | Reduced flexibility; penalties for lateness; opaque performance thresholds | (Human Rights Watch, 2025) |
| Swiggy (India) | Dynamic pay and incentive algorithm | Adjust earnings based on time, location, demand, and performance | Unpredictable income; pressure to stay logged in; loss of benefits; emotional distress | (Fairwork, 2023) |
| Instacart (US) | Shopper-rating and algorithmic task distribution | Match workers to customers based on metrics | Biased scoring; income deductions; lack of human oversight | (Human Rights Watch, 2025) |
| Lyft (US) | Opaque pay algorithms | Calculate earnings and assign rides | Subminimum wages; fear of deactivation | (Human Rights Watch, 2025) |
| Favor (US) | Algorithmic scheduling | Assign shifts and manage availability | Unstable hours; lack of benefits | (Human Rights Watch, 2025) |
| Shipt (US) | Performance scoring | Evaluate task completion and customer feedback | Pressure to overwork; lack of transparency | (Human Rights Watch, 2025) |
| Freelancer | Algorithmic reputation and feedback weighting | Rank freelancers and determine job visibility | Pay suppression; exclusion from premium projects; emotional exhaustion | (Fairwork, 2025a) |
| Papa (US) | Automated shift approvals | Manage scheduling and client matching | No human contact; GPS tracking | (Fairwork, 2025b) |
| ShiftKey (US) | Algorithmic shift matching | Match healthcare workers to facilities | No recourse for disputes; isolation | (Fairwork, 2025b) |
| Ola (India) | Poor grievance redressal | Handle worker complaints and disputes | No support; zero Fairwork rating | (Fairwork, 2023) |
| Porter (India) | Algorithmic dispatch | Assign delivery tasks based on location and demand | Unpredictable workload; low pay | (Fairwork, 2023) |
| Upwork (Global) | Skill-based matching algorithms | Match freelancers to job listings | Income instability; lack of transparency | (Fairwork, 2025a) |
The increasing adoption of AI to manage, monitor, and displace human labour raises important theoretical questions about the evolving nature of humanness in the workplace. Drawing on interdisciplinary literature across sociology, philosophy, psychology, and organisational studies, this section explores how AI-driven transformations challenge long-standing assumptions about what it means to be human at work.
Classical sociological theorists such as George Herbert Mead and Herbert Blumer understood humanness as an emergent property of social interaction, developed through communication, empathy, and symbolic exchange (Blumer, 1966). In this view, identity, agency, and personhood are co-constructed in workplace relationships. When AI systems mediate, replace, or obscure these interactions, the conditions for mutual recognition, symbolic meaning, and relational identity are disrupted. Algorithmic management systems do not recognise workers as moral agents or communicative beings. Instead, they treat workers as data points to be optimised, evaluated, and ranked (Kellogg et al., 2020b). This technocratic framing fragments the self, reduces the visibility of worker voice, and may ultimately deconstruct the social bonds that foster belonging, dignity, and ethical responsibility.
Human dignity in the workplace often rests on the ability to exercise autonomy, make ethical judgments, and exert agency over one’s tasks and time (Eubanks, 2018; Haarjärvi & Laari-Salmela, 2024; Haslam, 2006). The shift toward AI-driven systems displaces these functions. When decisions are made by black-box algorithms, workers lose their ability to understand or challenge the basis of evaluation, leading to increased psychological stress and disengagement (Einola & Khoreva, 2023; Kim et al., 2024; Moore, 2018). This erosion of control signals not just a practical loss, but a transformation in how humans relate to work and to themselves as workers. When humans are treated as functionally interchangeable with machines, the qualities that distinguish them, such as moral judgment, creativity, and empathy, are sidelined or undervalued (Al-Amoudi, 2021; Budhwar et al., 2023; Liu et al., 2024).
The concept of humanness is not neutral or universal (Haarjärvi & Laari-Salmela, 2024; Manne, 2016). It is shaped by social power relations, institutional norms, and now, increasingly, by digital infrastructures (International Labour Organization, 2021; Manne, 2016). AI systems encode and reproduce structural inequalities, particularly along lines of race, gender, and class (Crăiuț & Iancu, 2022; Nadeem et al., 2022). As a result, some groups may be more likely to be excluded, monitored, or dehumanised in technologically mediated workplaces. In this context, humanness must be understood not just as an individual attribute, but as a status that is socially conferred or denied. Following Haslam and Loughnan (2014), we can conceptualise dehumanisation as the denial of uniquely human characteristics (e.g., moral sensitivity, self-control, depth of emotion). When AI systems treat workers as passive outputs rather than as social beings, they risk institutionalising a new form of organisational dehumanisation.
Finally, we call for a more expansive and critical theorisation of humanness in the age of AI. It challenges the notion that humanness is a static essence and instead conceptualises it as relational, socially produced, and technologically mediated (Einola & Khoreva, 2023; Kim et al., 2024; Manne, 2016). As such, the presence of AI in organisational life does not merely automate work; it transforms the ethical, emotional, and symbolic landscapes of labour itself. This approach invites future research into how workers make sense of their relationships with AI systems, how organisational practices enable or constrain expressions of humanness, and how new digital configurations support MS and human rights (Kim et al., 2024; Milivojevic et al., 2020). Table 2 below illustrates how AI systems may undermine core aspects of humanness in the workplace. It links each aspect of humanness to its associated threat from AI and outlines the resulting implications.
Table 2. Humanness and AI risks.
The conceptual model proposed in this paper highlights how AI, when implemented without ethical safeguards or regulatory oversight, can perpetuate exploitative labour conditions and erode human dignity in the workplace (Einola & Khoreva, 2023; Pereira et al., 2023). These consequences, however, are not technological inevitabilities; they are design and governance outcomes. Reclaiming humanness in the digital workplace requires reorienting AI’s role from a mechanism of control to a tool of ethical enablement (European Commission, 2021; Haarjärvi & Laari-Salmela, 2024; UNESCO, 2021). It begins with a fundamental recognition: AI is no longer merely an efficiency-enhancing tool, but a powerful actor in labour governance. Systems that assign shifts, evaluate productivity, and enforce discipline are displacing traditional human oversight with opaque algorithmic decision-making. In gig economy platforms such as Uber and food delivery services like Glovo and Deliveroo, workers are managed not by supervisors but by systems that optimise performance without providing opportunities for feedback, fairness, or appeal (Graham et al., 2017; Kellogg et al., 2020b; van Doorn & Badger, 2020; Wood et al., 2019). These technologies reshape labour relations, often removing the human face of management and replacing it with data-driven, unchallengeable decisions.
To ensure AI systems uphold worker dignity, governments and organisations must embed algorithmic accountability through transparent decision-making, human oversight, and the right to contest automated actions (van Doorn & Badger, 2020; Eubanks, 2018). From a managerial perspective, this means that human resources leaders and operational managers must move beyond viewing AI as a plug-and-play tool and instead treat it as a strategic actor that directly impacts organisational culture and human experience (Budhwar et al., 2023; Kim et al., 2024). Companies should establish AI ethics committees that include not only technologists but also employees, union representatives, and ethicists, ensuring diverse voices shape system development and deployment. For instance, some leading firms, such as Microsoft, have adopted AI impact assessment protocols that screen for bias, fairness, and psychological impacts before systems go live (Microsoft, 2024; Raji et al., 2020; Whittlestone et al., 2019).
Parallel to governance, labour protections must be extended to those on the margins of the formal economy, particularly informal, contract-based, and platform workers (Crane et al., 2019; Wood et al., 2019). As AI increasingly governs access to work, regulates pay, and enforces productivity thresholds, regulatory frameworks must evolve to ensure fair treatment regardless of employment classification (Graham et al., 2017; Langer & Landers, 2021). In many jurisdictions, platform workers lack access to minimum wage protections, paid leave, or legal recourse, even when their livelihoods are entirely controlled by algorithmic systems (Budhwar et al., 2023; Wood et al., 2019). The case of Amazon’s warehouse operations provides a stark illustration: while the company employs AI to optimise workflow, enforce pick rates, and monitor movement patterns, the relentless pace imposed by algorithms has been linked to high injury rates and worker burnout (Moore, 2018). These outcomes reveal that algorithmic efficiency often comes at the expense of worker health and dignity. A managerial implication here is the need to rethink productivity metrics: leaders must move beyond surveillance-based KPIs and introduce more holistic indicators that integrate worker well-being, retention, and psychological safety.
Moreover, AI should be leveraged not only as a management tool but also as a mechanism to detect and combat MS and forced labour (Business & Human Rights Resource Centre, 2022; LeBaron, 2020; Russell & Norvig, 2016b). Real-world applications of this are emerging in both public and private sectors. For example, machine learning algorithms are now used to analyse global supply chain data, flag high-risk subcontractors, and detect anomalies in working patterns that may signal exploitative conditions (HAI Stanford University, 2025; Russell & Norvig, 2016a). Governments have deployed AI to process satellite imagery and identify informal mining camps and unregulated brick kiln industries closely associated with bonded and forced labour (Boyd et al., 2018). In practice, these technologies can help compliance officers, CSR managers, and procurement leaders proactively identify and address risk, but only if the intent is remediation rather than concealment. Practically, this means businesses must not only implement AI audit tools but also build internal capacity to interpret these outputs, engage affected workers, and act on violations, which requires coordination among ethics teams, supply chain managers, and frontline supervisors.
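As a purely illustrative sketch of the anomaly-detection idea described above (the cited sources do not publish their models, so the data, field names, and threshold here are hypothetical), a compliance tool might flag workers whose recorded hours deviate sharply from the group norm, since sustained extreme hours can be one signal of coerced or bonded labour:

```python
from statistics import mean, stdev

# Hypothetical weekly working-hours records (worker_id -> hours).
weekly_hours = {
    "W001": 42, "W002": 45, "W003": 44, "W004": 88,  # W004 is an outlier
    "W005": 40, "W006": 43, "W007": 41, "W008": 46,
}

def flag_anomalies(records, z_threshold=2.0):
    """Flag workers whose hours deviate strongly from the group mean.

    A real compliance system would combine many signals (pay patterns,
    subcontractor history, movement data); this single z-score check is
    only a sketch of the underlying technique.
    """
    values = list(records.values())
    mu, sigma = mean(values), stdev(values)
    return sorted(
        worker for worker, hours in records.items()
        if abs(hours - mu) / sigma > z_threshold
    )

print(flag_anomalies(weekly_hours))  # → ['W004']
```

The point of the sketch is the workflow, not the statistic: a flag like this is only useful if, as the paragraph above argues, it triggers human follow-up and remediation rather than quiet removal of the flagged record.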
The ethical design of AI must also move beyond risk avoidance and toward human-centred innovation. Many current AI systems prioritise cost-saving and performance optimisation while treating human concerns such as fairness, recognition, and relationality as secondary (Kellogg et al., 2020b; Kim et al., 2024; Malik et al., 2022). However, frameworks such as the IEEE’s Ethically Aligned Design (2019) and UNESCO’s AI Ethics Guidelines (2021) advocate embedding core values into system architecture, such as accountability, transparency, and social inclusion (IEEE Standards Association, 2025; UNESCO, 2021). These principles must be translated into design processes that are participatory and inclusive. As Einola and Khoreva (2023) argue, involving affected workers in co-design processes increases trust, reduces alienation, and fosters a greater sense of agency in AI-mediated environments. In practice, this implies that design teams within firms must collaborate with employees, unions, and human rights experts when developing or procuring AI solutions, particularly those that affect core HR functions such as hiring, evaluation, and scheduling.
The transnational nature of many algorithmic systems also necessitates global policy coordination. Digital labour platforms often outsource service delivery to regions with weak labour protections, enabling companies to benefit from algorithmic control while avoiding responsibility for human consequences (International Labour Organization, 2021). LeBaron (2020) describes this phenomenon as the creation of “digital grey zones,” where exploitation thrives behind layers of subcontracting and data abstraction. National policies alone cannot address this challenge. International frameworks such as the UN Guiding Principles on Business and Human Rights must be updated to reflect the unique risks posed by AI-mediated labour systems, including the mandate for AI-related due diligence across global value chains (Bonnitcha & McCorquodale, 2017; Walk Free et al., 2020). Managers operating across international subsidiaries must align procurement, compliance, and risk policies with these evolving standards, and ensure AI tools used abroad are held to the same ethical benchmarks applied at headquarters (Benstead et al., 2021; Raji et al., 2020; UNESCO, 2021; Whittlestone et al., 2019).
Finally, this moment presents a rare opportunity to reshape organisational life in ways that actively promote humanness. AI can be designed and governed to support human flourishing by offloading repetitive tasks, enhancing work-life balance, and expanding opportunities for meaningful contribution. But this requires deliberate cultural leadership. Organisations must cultivate environments where moral agency, empathy, and mutual recognition are not sacrificed for optimisation. Ethical leadership, worker voice, and inclusive communication are not just cultural ideals; they are essential practices for ensuring that algorithmic systems serve human ends. From a managerial standpoint, this calls for investment in ethical leadership development, integrating AI literacy into HR and operations training, and embedding ethical review into technology procurement processes.
Conclusion
This paper has developed a conceptual framework to examine how the deployment of AI in labour systems contributes to the erosion of humanness and the reproduction of exploitative conditions that echo MS. The analysis demonstrates that AI’s impact is not technologically neutral but deeply shaped by the socio-political and economic contexts in which it operates (LeBaron, 2020; Orlikowski & Scott, 2008). When implemented without ethical safeguards, AI can displace workers, obscure accountability, and entrench inequality, generating new architectures of control and dependency (Kellogg et al., 2020b; Moore, 2018).
Humanness defined by autonomy, empathy, dignity, and moral agency is increasingly under threat in workplaces governed by algorithmic systems (Einola & Khoreva, 2023; Haslam & Loughnan, 2014). As AI replaces human judgment with automated evaluation and surveillance, workers lose agency over both their labour and identity. For many low-skilled and informal workers, algorithmic governance not only restructures employment but redefines the conditions of existence itself, reducing individuals to data points within opaque digital infrastructures (Graham et al., 2017; Wood et al., 2019).
Moreover, this erosion of humanness is cumulative and self-reinforcing. AI-driven labour displacement leads to economic precarity, which in turn pushes workers into informal or digitally mediated work arrangements with little to no protection (International Labour Organization, 2021; Sachs et al., 2020). These environments are often governed by algorithmic systems that prioritise efficiency and control over well-being, thereby fostering new forms of coercion and dependency (van Doorn & Badger, 2020; Liu et al., 2024). As this paper has argued, these dynamics reproduce key elements of MS not through physical force, but through structural and technological compulsion. The consequence is a profound ethical challenge: how to ensure that technological advancement does not come at the cost of human dignity.
Policy and organisational recommendations
In light of the risks posed by AI-driven labour transformations, a coordinated set of policy, organisational, and research strategies is urgently required to mitigate harm and reclaim humanness in the workplace. First, governments and organisations should mandate AI labour impact assessments prior to system deployment, particularly for tools used in hiring, performance monitoring, or scheduling. These assessments should evaluate potential threats to autonomy, fairness, and psychological safety, with particular attention to historically marginalised or low-skilled worker populations (Raji et al., 2020; Whittlestone et al., 2019). In parallel, labour laws must be adapted to account for the growing prevalence of informal and platform-mediated employment. Non-standard workers must be guaranteed the right to minimum wage, occupational health, social protection, and collective bargaining regardless of their employment classification (Budhwar et al., 2023; International Labour Organization, 2021). Without these updates, many workers will remain unprotected in the face of increasingly automated and opaque systems of control.
Transparency and oversight are also essential. Organisations that deploy algorithmic systems should be required to explain clearly how decisions are made, ensure workers can appeal algorithmic outcomes, and guarantee that key decisions, particularly those affecting hiring, discipline, or dismissal, remain subject to human judgment (van Doorn & Badger, 2020; Eubanks, 2018). Regulatory bodies should institutionalise algorithmic accountability by conducting regular audits that assess bias, fairness, and human impact, enforcing transparency standards, and penalising systems that systematically harm worker well-being; worker grievance and appeal mechanisms must accompany these audits to ensure redress. At an international level, policy coordination across borders is essential to regulate digital supply chains and prevent transnational forms of algorithmic exploitation.
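One widely used fairness check that such audits could include is a comparison of positive-outcome rates across demographic groups. The sketch below is illustrative only: the data are invented, and the 0.8 ("four-fifths") threshold is borrowed from US employment-selection guidance as one possible benchmark, not a universal regulatory standard.

```python
# Illustrative fairness audit: compare positive-outcome rates across groups.
# Data and the 0.8 threshold are assumptions for the sketch; regulator-defined
# audits may mandate different metrics and cut-offs.

def selection_rate(decisions):
    """Share of positive outcomes (1 = offered work, 0 = not)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi else 1.0

# Hypothetical shift-allocation decisions for two worker groups.
men   = [1, 1, 1, 0, 1, 1, 0, 1]   # rate 0.75
women = [1, 0, 0, 1, 0, 1, 0, 0]   # rate 0.375

ratio = disparate_impact(men, women)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("audit flag: selection rates differ beyond the four-fifths rule")
```

A single ratio cannot establish discrimination on its own, which is why the recommendation above pairs audits with grievance mechanisms and human review rather than treating the metric as a verdict.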
Moreover, there is a need to shift away from efficiency-driven models of technological design toward ethical, human-centred approaches. Developers should adopt value-sensitive design principles that embed empathy, fairness, and justice into AI architecture (European Commission, 2021; IEEE Standards Association, 2025; UNESCO, 2021). Participatory co-design processes involving workers, unions, and civil society are crucial for ensuring that these systems support, rather than erode, human dignity (Einola & Khoreva, 2023).
Given the transnational nature of many AI platforms, international labour institutions must update legal frameworks to regulate AI-driven exploitation across borders. Multinational companies should be held accountable not only for physical labour conditions but also for digital sourcing practices, platform-based gig work, and algorithmically mediated subcontracting chains (Business & Human Rights Resource Centre, 2022; LeBaron, 2020; Wood et al., 2019). Additionally, organisations must move beyond procedural compliance and invest in cultures that promote human flourishing. This includes fostering leadership that prioritises relationality, empathy, and shared decision-making, alongside practices that uphold employee voice, work-life balance, and psychosocial well-being (Haarjärvi & Laari-Salmela, 2024). Such initiatives are essential for counteracting the alienation and depersonalisation that often accompany AI-mediated work environments.
Limitations and future research directions
There is a critical need for interdisciplinary and empirically grounded research to test and refine the conceptual model developed in this paper. While this study offers a theoretically informed exploration of how AI contributes to the erosion of humanness in labour systems, it is limited by its conceptual nature and reliance on secondary literature. The framework, though grounded in emerging empirical findings, has not yet been validated through large-scale field studies or longitudinal data. It also primarily reflects trends observed in platform-based gig work and low-skilled sectors in the Global North, which may not fully capture the complexities and cultural nuances of AI-labour dynamics in other geopolitical contexts. Future research should therefore expand the scope of investigation to include a wider range of industries, organisational forms, and regional labour markets, particularly in the Global South, where informal economies intersect most acutely with technological precarity.
Moreover, qualitative and mixed-methods research that captures the lived experience of workers under algorithmic management will be particularly valuable in uncovering the psychological, moral, and social consequences of AI at work (Haslam, 2006; Einola et al., 2024). Ethnographic studies, narrative interviews, and participatory action research can offer granular insights into how workers interpret, resist, or adapt to algorithmic systems in ways that quantitative models may overlook. There is also a pressing need for comparative cross-cultural studies to examine how socio-technical systems of control manifest differently across legal protections, digital literacy, cultural norms, and social safety nets. Finally, researchers must examine not only the harms of AI but also the conditions under which these technologies can be leveraged to enhance humanness by promoting autonomy, dignity, and meaningful work. Such research is vital not only for advancing academic theory but also for informing the ethical design, regulation, and governance of AI systems that shape the future of work. In an era of accelerating automation, the imperative is clear: to ensure that technological progress deepens, rather than diminishes, our shared humanity.
A human-centred future for innovation and knowledge
The future of work and knowledge systems will hinge on how humans and AI jointly construct meaning, insight, and value. Rather than positioning AI merely as a technological disruptor or mechanism of control, it should be re-envisioned as a partner in human flourishing, one that augments rather than supplants moral judgment and relational intelligence. Future knowledge architectures must be intentionally designed to safeguard the tacit, ethical, and empathic dimensions of human cognition that elude codification. This requires the development of collaborative intelligence models in which AI systems actively enhance creativity, learning, and ethical deliberation within organisations. Embedding humanness as a foundational design principle will enable societies to progress toward a post-algorithmic paradigm in which technological advancement and human dignity are mutually reinforcing.
Ultimately, the goal is not to reject AI, but to ensure that it serves human ends. Technology should be a tool for emancipation, not a mechanism of entrapment. As we navigate the future of work, we must ask not only what AI can do, but what it should do and for whom. Reclaiming humanness in the age of intelligent machines requires deliberate and collaborative effort across disciplines, institutions, and borders. Only by centring dignity, agency, and ethical responsibility can we build work systems that are not only efficient but truly humane.
In bridging the domains of AI innovation, organisational theory, and human rights, this paper offers a timely intervention into the ethical governance of intelligent technologies. It highlights the urgent need to move beyond instrumentalist accounts of AI and attend to its ontological and affective consequences for workers, especially those in precarious, digitally mediated environments. By foregrounding humanness as a central analytic category, the paper contributes to a more holistic understanding of how AI reshapes labour, identity, and social belonging. Future research must continue to explore these intersections, while policy and design efforts should prioritise human-centred approaches that restore dignity, empathy, and moral agency in the age of algorithmic labour.
CRediT authorship contribution statement
Dennis Pepple: Writing – review & editing, Writing – original draft, Methodology, Formal analysis, Data curation, Conceptualization. Nadeesha Muthuthantrige: Writing – review & editing, Writing – original draft, Project administration, Formal analysis, Conceptualization.
The European Commission (EC) published Ethics Guidelines for Trustworthy AI in 2019, identifying key principles such as transparency, human agency, and accountability. UNESCO (2021) followed with its Recommendation on the Ethics of Artificial Intelligence, the first globally adopted ethical AI framework. The IEEE (2019) contributed with its multi-volume Ethically Aligned Design documents, which promote human rights and value-based AI system development (European Commission, 2021; IEEE Standards Association, 2025; UNESCO, 2021).




