Artificial intelligence (AI) holds transformative potential for human resources (HR), yet its adoption remains limited, particularly within the broader context of digital transformation (DT). Although trust is widely recognised as a critical enabler of AI adoption, little is known about the organisational conditions under which this trust develops, especially in firms with low digital maturity. This study investigates how configurations of organisational factors, namely technology, digital skills, culture supporting DT and HR’s involvement in DT initiatives, can shape trust in AI adoption within HR practices. The methodological approach followed two steps: (1) the questionnaire design was validated through insights derived from interviews conducted during a case study analysis, and (2) a fuzzy-set qualitative comparative analysis using survey data was employed. The findings shed light on the trust enablers and hindering factors that influence AI adoption in HR practices. Results show that trust can emerge even in digitally less mature firms when HR functions are strategically involved in broader DT initiatives. Moreover, HR digital skills and functional involvement, together with cultural readiness, can foster trust independently of top-down managerial involvement. These findings challenge conventional assumptions that digital maturity and leadership engagement are necessary conditions for fostering trust in the adoption of new technologies. By uncovering multiple pathways to trust, this study contributes theoretically by framing trust as a configurational, organisational-level outcome. This work aims to advance the discourse on technological innovation in HR, providing valuable insights for practitioners and scholars to support digitally lagging organisations navigating the challenges of AI adoption.
Research often highlights the dichotomy between early technological adopters and cautious observers lacking trust in emergent technologies (Annosi et al., 2024; Glikson & Woolley, 2020; Hengstler et al., 2016; Kim & Kim, 2024; Rampersad, 2020). Indeed, technological adoption is a transformative and nonlinear journey for companies undergoing digital transformation (DT; Lucas et al., 2013; Spanos & Voudouris, 2009). The literature highlights significant diversity in technology adoption patterns (Nguyen et al., 2015; Spanos & Voudouris, 2009), particularly for artificial intelligence (AI; Berlingieri et al., 2020; Kinkel et al., 2022).
These adoption patterns are ‘not only influenced by specific technological characteristics, but also by further structural prerequisites of the companies’ (Kinkel et al., 2022, p. 3). Internal organisational factors—such as the presence of a qualified workforce with the necessary digital skills (Kinkel et al., 2022) and managerial digital competencies (Rialti et al., 2019), a digitally oriented organisational culture (Karimi & Walter, 2015), an agile and strategic human resources (HR) function structure (Thite, 2022), and the presence of supportive roles (Zoppelletto et al., 2023)—are hugely relevant in the context of technological adoption, particularly for innovative technologies such as AI. Conversely, the literature also highlights some barriers to technology diffusion (Berlingieri et al., 2020); for instance, a lack of digital skills and limited proficiency in using digital tools can hinder the ability to recognise the value of digital technologies (Berlingieri et al., 2020). Additionally, managers’ resistance to computer-aided decision-making (Akter et al., 2016) or concerns about potential job losses (Ransbotham et al., 2018) significantly impede technological innovation and adoption.
As a relatively recent phenomenon, AI represents a groundbreaking technological advancement for firms (Boden, 2016). AI’s adoption (or acceptance) depends on the context in which the agent is used (Hansen et al., 2024; Kelly et al., 2023). For instance, in HR, where technology has moved the field into a ‘new realm’ (McWhorter, 2010; Phan et al., 2017; Thite, 2022), AI still has limited practical implementation (Chowdhury et al., 2023; Thite, 2022). AI-based systems promise efficiency, accuracy and strategic advantage, but they also introduce uncertainty, resistance and scepticism, particularly in people-centric domains such as HR, where decisions are ethically sensitive and emotionally charged (Chowdhury et al., 2023; Thite, 2022). Indeed, AI applications in the HR domain elicit a complex interplay of fear and trust in technological adoption (Budhwar et al., 2023; Charlwood & Guenole, 2022; Chowdhury et al., 2023; Rampersad, 2020; Suseno et al., 2022). Fear of these technologies is a major barrier to adoption (Rampersad, 2020; Ransbotham et al., 2018), especially among HR managers exhibiting mistrust in AI and so-called ‘AI anxiety’ (Suseno et al., 2022). This anxiety is often driven by concerns over the ethical implications, the creation or amplification of inequalities, and the information security and privacy risks associated with the technology (Kim & Kim, 2024; Lazazzara et al., 2025; Thite, 2022). Conversely, managers’ trust in AI is fuelled by its potential to enhance efficiency, decision-making and innovation; such trust serves as a critical enabler of successful technological implementation (Kelly et al., 2023) and improves change readiness for AI adoption (Glikson & Woolley, 2020). Indeed, unlike traditional technologies, AI operates with a level of autonomy and opacity that challenges existing notions of control and reliability, intensifying ambivalent feelings towards its adoption (Cao et al., 2021; Charlwood & Guenole, 2022).
In traditional models such as the Technology Acceptance Model, trust has been identified as a key predictor of behavioural intention, willingness and acceptance of technology, particularly in the case of new digital technologies such as AI (Kelly et al., 2023). Additionally, the literature highlights that the success of AI’s integration into organisations depends on the level of trust in AI (e.g. Glikson & Woolley, 2020; Hengstler et al., 2016; Kelly et al., 2023).
However, it remains unclear which set of factors builds trust in AI, facilitating its acceptance and adoption, and which factors can be associated with AI’s ‘dark side’, fuelling concerns and scepticism towards its use (Cao et al., 2021). Despite broad agreement in the literature that trust is critical for AI adoption (Glikson & Woolley, 2020), the conditions under which trust in AI emerges remain theoretically underdeveloped. Further, the literature still lacks an in-depth investigation of elements that drive resistance or avoidance behaviour towards AI (e.g. Zhan et al., 2023), particularly in the HR domain. Moreover, most prior research examines trust as a linear or individual-level factor (e.g. user beliefs or attitudes), neglecting organisational-level configurations that may enable or hinder trust, especially in the context of low digital maturity.
This study addresses these gaps by exploring specific organisational conditions emerging from the technology adoption literature, focusing on AI adoption in HR. The theoretical problem we address is the lack of understanding about how trust in AI emerges from specific configurations of organisational factors, particularly in digitally transitioning firms. We argue that trust in AI is a configurational outcome, resulting from multiple interacting factors such as technology, digital skills, culture supporting DT, and HR’s involvement in DT initiatives. Our objective is to identify which combinations of these factors are associated with high levels of trust in AI within HR practices, and which combinations hinder it.
To achieve this objective, we adopt a set-theoretic configurational perspective using fuzzy-set qualitative comparative analysis (fs-QCA; Ragin, 2008). This approach is particularly suited to studying complex, nonlinear and non-additive causal relationships at the organisational level and identifying different combinations of conditions that are sufficient (or necessary) for the outcome of interest.
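To make the set-theoretic logic concrete, the sketch below illustrates how fs-QCA assesses whether a combination of conditions is sufficient for an outcome: cases’ membership in the configuration (the fuzzy-set intersection of conditions) should be a consistent subset of their membership in the outcome. The condition names and membership scores are invented for illustration and are not data from this study.

```python
# Illustrative sketch of fs-QCA sufficiency metrics (hypothetical data,
# not the study's dataset). A configuration X is deemed "sufficient" for
# outcome Y when membership in X is consistently a subset of membership in Y.

def combine(*memberships):
    """Fuzzy-set intersection (logical AND): the minimum across conditions."""
    return [min(vals) for vals in zip(*memberships)]

def consistency(x, y):
    """Degree to which X is a subset of Y: sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """Share of the outcome accounted for by X: sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Hypothetical membership scores for five firms in two conditions
# (HR digital skills, DT-supportive culture) and the outcome (trust in AI).
skills  = [0.9, 0.8, 0.7, 0.2, 0.6]
culture = [0.8, 0.9, 0.6, 0.3, 0.7]
trust   = [0.9, 0.9, 0.8, 0.4, 0.5]

config = combine(skills, culture)            # skills AND culture
print(round(consistency(config, trust), 2))  # 0.97
print(round(coverage(config, trust), 2))     # 0.83
```

Conventionally, consistency values of roughly 0.80 or higher are taken as evidence of sufficiency, while coverage indicates how much of the outcome the configuration accounts for (Ragin, 2008).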
Our findings reveal that there is no single path to building trust in AI. Instead, multiple organisational configurations can lead to high trust outcomes. We find that strong managerial involvement or high technological readiness may not always foster trust. Conversely, digital skills and cultural readiness can foster trust independently without top-down managerial involvement. Moreover, the results indicate that trust in AI can even develop in less digitally mature firms, provided the HR function is strategically engaged in wider DT efforts.
This article comprises five sections. Following the introduction, Section 2 outlines the theoretical background and presents our theoretical proposition by emphasising the role of trust in AI, the influence of different organisational factors on digital maturity, and a reflection on HR’s strategic role in facilitating trust and adoption in AI-driven contexts. Section 3 details the methodology, Section 4 presents the analysis and results, and Section 5 offers the discussion and key conclusions.
Literature review

Technological adoption is often the first and most tangible element of a disruptive or challenging organisational transformation (Bailey et al., 2019). The literature on AI adoption emphasises that organisations should view AI implementation as a key component of a broader and ongoing DT process rather than a one-time event (Thite, 2022). Indeed, although the adoption of a specific technology such as AI represents a pivotal step, it is merely a critical component of a broader, more comprehensive process referred to as DT that organisations must undergo (Ha et al., 2022; Hanelt et al., 2021; Vial, 2019).
DT is a holistic and systemic process that extends beyond implementing new technologies (Lanzolla et al., 2020). It necessitates a fundamental shift in several organisational elements, including skills, roles, leadership approaches, organisational structures, processes and corporate culture, all of which are crucial for determining the long-term success of this fundamental and challenging transformation (Vial, 2019).
Culture plays a crucial role in driving DT (Hanelt et al., 2021; Hartl & Hess, 2017) because it facilitates innovation adoption, trust in technologies, and flexibility in the face of risks, uncertainties and new opportunities (Hooi & Chan, 2023; Mohelská & Sokolova, 2018; Nafchi & Mohelská, 2020), encompassing key elements such as beliefs, values, a shared language and a unified mindset (Solberg et al., 2020). Indeed, a recurring theme in the DT literature highlights the importance of fostering a risk-taking attitude and a readiness to experiment (Fehér & Varga, 2017; Hooi & Chan, 2023). Organisations that foster an environment in which experimentation is encouraged and employees are empowered to explore new ideas tend to be more digitally mature and capable of navigating the complexities of transformation (Gust et al., 2017).
Organisational design—encompassing key elements such as formal structures, functions, units and processes (Nadler et al., 1992)—is also crucial in driving DT (Hanelt et al., 2021). To remain competitive in a digital landscape, organisations must prioritise cross-functional collaboration, agility and ambidexterity (Earley, 2014). Technology adoption can significantly broaden a firm’s scope by enhancing coordination and collaboration across various units (e.g. Yoo et al., 2012; Volberda et al., 2021). Further, it has the potential to reshape internal structures (Annosi et al., 2024) and functions (Thite, 2022), enabling firms to achieve fluidity and more efficient routines (Birkinshaw, 2018) by automating and digitising existing processes (Bailey et al., 2019).
As a holistic process, DT also significantly affects managerial roles by reshaping leadership approaches, skills and responsibilities (Vial, 2019). Leadership and culture emerge as key enablers of DT (Hanelt et al., 2021; Hooi & Chan, 2023); managerial approaches and leadership styles are essential elements for successful DT (Bag et al., 2021; Hanelt et al., 2021; Solberg et al., 2024). Indeed, DT necessitates organisational change and adaptation, including shifts in roles, leadership styles and managerial approaches (Zoppelletto et al., 2023).
The literature underscores that the presence of trust accelerates the DT transformative process and drives deeper, less visible changes in culture, skills, structure and roles (Borghans & Ter Weel, 2011; Glikson & Woolley, 2020; Hengstler et al., 2016). In the context of technological adoption, trust is pivotal. As Kelly et al. (2023) assert, ‘trust in technology allows users to believe that using a device will achieve the desired goal’. This sense of trust is critical because it influences user acceptance and behaviours towards adopting new technologies (Kelly et al., 2023). Thus, the presence of digital maturity—such as a solid base of technological skills and infrastructure—can help build trust in new technologies, including AI, making it more likely that organisations will embrace these innovations.
The influence of technological and organisational factors on the adoption of AI

DT can be identified as an ‘organizational change that is triggered and shaped by the widespread diffusion of digital technologies’ (Hanelt et al., 2021, p. 2). Emerging technologies such as AI, blockchain, big data, robotics, the Internet of Things and cloud computing offer significant potential for automating several organisational processes and activities (Ivaldi et al., 2022; Lacity & Willcocks, 2016). However, many organisations struggle with technological adoption during their DT journey, and the differences in their ability to adopt these technologies are often rooted in their level of digital maturity.
According to the scientific literature, digitally mature firms are better positioned to adopt technologies such as AI (Ladu et al., 2024). Hansen et al. (2024) suggest that technology adoption (particularly for AI) is facilitated in organisational contexts that demonstrate a certain level of digital readiness, typically characterised by a clear strategic vision and effective governance of data assets. These conditions, commonly found in digitally mature organisations, not only support implementation but also contribute to building organisational trust in AI, making adoption more likely and effective (Kelly et al., 2023).
Conversely, some authors argue that firms with mature technological capabilities may become resistant to adopting newer technologies (e.g. Shah et al., 2013). Indeed, stronger expertise in conventional technologies can lead to rigidity, pushing firms to continue exploiting mature technologies rather than adopting innovative digital solutions (Nisar et al., 2013).
In contrast, digitally lagging firms may lack strategic vision, infrastructure or digital skills, yet this does not always prevent them from adopting AI. Recent literature has noted that digital maturity alone may not fully reflect an organisation’s ability to leverage AI effectively (Hansen et al., 2024). For instance, Hansen et al. (2024) presented a multiple case study in which less mature firms were forced to adopt AI in response to external environmental pressures. In these contexts, AI projects were initiated in isolated organisational functions, despite lacking enthusiastic leadership, sufficient technological investment and a supportive organisational culture (Hansen et al., 2024). It thus remains unclear whether, despite these limitations, trust in AI can still emerge through practical experimentation, localised learning or necessity rather than formal readiness.
Beyond technological readiness, organisational skills and culture are central to AI adoption. DT often alters required employee skills and competencies, creating new expectations around digital fluency (Van Laar et al., 2017; Vial, 2019). When implemented effectively, digital technologies can help break down organisational silos, enable the exchange of knowledge and ideas, and instil a global community mindset (Thite, 2022). When used smartly and strategically, digital technologies can revolutionise learning and career development, empowering individuals to take charge of their work and professional journeys (Thite, 2022). As organisations adopt these technologies, employees must adapt by acquiring new digital competencies (Vial, 2019).
In digitally mature firms, employees tend to develop these skills proactively, recognising the need to stay ahead in an increasingly digital environment (Fernandes Dos Santos & Aires, 2023; Ostmeier & Strobel, 2022). These include not only job-specific and technological skills (Andriole, 2018; Sousa & Rocha, 2019), but also soft skills such as collaboration, adaptability, resilience and problem-solving (Faina & Almeida, 2020; Frey & Osborne, 2017; Ivaldi et al., 2022; Van Laar et al., 2017). Digitally mature organisations are also more likely to cultivate a learning culture, supporting continuous reskilling and professional development aligned with DT and AI implementation (Thite, 2022; Van Laar et al., 2017).
Conversely, digitally lagging firms may lack many of the organisational competencies typically associated with successful AI adoption, and prior research suggests that this leads to distrust in AI. However, the literature does not suggest that such adoption is precluded in these contexts (e.g. Hansen et al., 2024). Indeed, a small body of work asserts that trust in AI can emerge even without high digital literacy, particularly when systems are perceived as reliable and error-free. Workers can build trust in AI tools predicated on the perceived flawless functionality of the systems, which can positively shape their attitudes towards adoption regardless of their digital readiness (Johnson et al., 2021; Lazazzara et al., 2025). Learning often occurs through trial and error in such settings, and capabilities evolve incrementally. The concept of ‘metaskills’ (i.e. the ability to continuously adapt or acquire new skills as technologies change) is crucial in this context (Ciarli et al., 2021). Even in low-maturity environments, cultivating metaskills can allow employees and organisations to align with technological shifts progressively and adapt ‘chameleonically’ to digital technological change.
Fostering a digital-oriented culture towards AI: The HR function’s strategic role

Despite the transformation triggered by digital technologies, many organisations face ongoing challenges bridging gaps between business units or functions to align organisational and digital strategies (Duerr et al., 2017; W. Li et al., 2016). In the HR domain, the adoption of emerging technologies such as AI for people management is currently in its infancy (Charlwood & Guenole, 2022; Lazazzara et al., 2025), and technological adoption still faces several issues. Despite this, among organisational functions, the HR function has a key role in fostering a culture that embraces digital change, ensuring the success of digital strategies across the organisation (Thite, 2022). Indeed, the HR function is particularly well-positioned to promote the adoption of new technologies by cultivating trust and building digital readiness (Faiz et al., 2024; Lazazzara et al., 2025; Thite, 2022). Recent studies highlight that HR can effectively leverage AI to optimise both internal and external business processes (Faiz et al., 2024; Guenole & Feinzig, 2018; Van Esch et al., 2019).
Although upper management often drives technological change, the established literature highlights the essential role of HR managers in translating this vision into action by developing workforce capabilities, aligning strategy and facilitating change. Importantly, HR involvement can help compensate for lower levels of digital maturity within organisations, thereby enhancing the likelihood of successful AI and digital technology adoption (Faiz et al., 2024).
To speed up technological adoption, scholars increasingly emphasise the importance of the so-called ‘promotor roles’ in facilitating DT (e.g. Petzolt & Seckler, 2025), especially in areas such as HR management (P. Li et al., 2023). Some scholars even introduce the term ‘e-leader’ to describe managers directing significant digital initiatives within organisations (e.g. Berman & Korsten, 2014; W. Li et al., 2016).
As central figures in DT, HR managers play a pivotal role in shaping strategic decision-making (Thite, 2022), managing tensions (Annosi et al., 2024) and steering organisational transitions during the transformation process (Papagiannidis et al., 2020; Schiuma et al., 2024). In the digital era, HR managers’ responsibilities include guiding employees through change, framing organisational routines and operating models to cope with DT (Annosi et al., 2024), and fostering the development of workers’ digital skills (Pan et al., 2022). Today, HR managers are tasked with using AI to build a competitive advantage by enhancing speed and responsiveness, and strategically reassessing company structures to effectively integrate AI (P. Li et al., 2023).
However, although effective leadership can accelerate DT (Thite, 2022), some scholars have reported mixed findings concerning managers’ roles in DT trajectories (Akter et al., 2016; Caldwell, 2001). For instance, managerial myopia may hinder DT progress, delaying or mismanaging the process (Guo et al., 2023; Suseno et al., 2022). Elkins et al. (2013) posit that leaders may feel threatened by specific technologies such as AI that could contradict their own judgement.
Conversely, some scholars argue that managers can play a key role in accelerating the DT process (e.g. Caldwell, 2001; Charlwood & Guenole, 2022; Petzolt & Seckler, 2025). Specifically, there is growing attention on bottom-up DT promoter roles operating at lower levels in organisational hierarchies, emphasising the value of a horizontal approach in DT (Petzolt & Seckler, 2025; Zoppelletto et al., 2023; Sheehan et al., 2020), challenging traditional top-down dynamics (Maedche, 2016; Petzolt & Seckler, 2025). These figures often act as ‘transformational leaders’ (Sheehan et al., 2020), ‘champions of change’ (Caldwell, 2001) or ‘evangelists’ by leveraging their ‘ability to convince important stakeholders within the firm’ to overcome challenges in implementing new technologies such as AI (Jorzik et al., 2024, p. 7047).
This is particularly the case when companies adopt disruptive technologies such as AI, as a lack of trust can arise from managers’ limited confidence in the perceived objectivity and reliability of data-driven decision-making processes (Charlwood & Guenole, 2022; Elkins et al., 2013). Indeed, several authors identify fear of AI (e.g. Ransbotham et al., 2018) and anxiety towards AI implementation (Suseno et al., 2022) among those in managerial roles (Thite, 2022).
Thus, the literature offers mixed evidence on managers’ critical role in driving DT, highlighting the need for a more nuanced understanding of how managerial actions influence DT outcomes, particularly in the HR domain. To advance this understanding, further research is needed to explore the specific role of managers in contexts involving the adoption of disruptive technologies such as AI.
Theoretical proposition and configurational conditions for AI adoption in HR

AI adoption in HR results from multiple interactions among technological and organisational factors. Although existing research stresses trust as a key enabler of AI adoption (Glikson & Woolley, 2020), there is still limited understanding of the organisational configurations that foster or hinder trust development. A configurational approach is well-suited to addressing this complexity by identifying the multiple paths (Fiss, 2011) that may lead to AI trust in HR practices.
Building on existing literature, our study considers the role of HR’s digital technologies and skills, HR function involvement in DT, and organisational culture favourable for DT as potential enablers of AI trust. The extent to which HR departments are digitally prepared and possess the necessary skills to integrate AI into their practices influences their ability to assess its reliability and utility (Ostmeier & Strobel, 2022; Thite, 2022). Moreover, HR’s engagement in broader DT processes promotes alignment between technological innovations and organisational strategy, facilitating AI acceptance within the firm. Beyond technological and structural factors, a DT-supportive culture is crucial in mitigating AI-related scepticism because it promotes openness to algorithm-driven decision-making and reduces concerns over control and automation (Hanelt et al., 2021).
Conversely, technology itself can play a dual role in this process. It could catalyse organisational change, restructuring HR processes and enhancing operational efficiency (Vial, 2019); however, organisations with mature technological capabilities may become resistant to adopting newer technologies (Shah et al., 2013), such as AI tools.
Lastly, the direct involvement of HR managers in DT processes may not be a relevant factor in promoting AI trust. Traditionally, managerial leadership has been viewed as a catalyst for digital change. Yet, recent research suggests that a bottom-up approach, supported by broader cultural readiness and functional involvement, may be more influential than top-down managerial participation in AI adoption (Petzolt & Seckler, 2025). Following these theoretical argumentations, we develop our theoretical proposition:
Organisations characterised by high levels of HR digital skills, HR functional involvement in DT, and a supportive organisational culture are associated with a high level of trust in AI adoption in HR practices. Conversely, a high level of direct HR managers’ engagement and HR digital technology structures can be associated with both positive and negative outcomes regarding trust in AI for HR practices.
Methodology

To evaluate the empirical support for the theoretical proposition advanced in this study, we adopted a two-step methodology combining an initial case study analysis with a fs-QCA using survey data (see Table 1). The fs-QCA method is ideal for analysing nonlinear and non-additive combinations of factors (Fiss, 2011; Ragin, 2008), particularly when the directions of the relationships among the factors are not theoretically straightforward, as in this study. The six case studies draw on interviews with HR managers from various companies and aimed to achieve two goals: (1) validate the survey questionnaire and ensure the clarity and relevance of its items, and (2) enhance the researchers’ understanding of the overall emerging phenomenon of trusting AI tools in HR practices, allowing us to better discuss the results in terms of empirical relevance inside organisations.
The interview and survey questions.
Case studies provide a structured method for examining phenomena and their evolution within a defined context (Eisenhardt & Graebner, 2007). This methodology is particularly suitable for exploratory research because it allows for an in-depth examination of emerging phenomena in different organisational contexts, facilitating the development of insights and hypotheses about them (Ebneyamini & Sadeghi Moghadam, 2018).
Given that AI has only recently begun affecting the HR field, it is critical first to understand how the actors in the field perceive these emerging technologies and their influence on HR practices. Therefore, a multiple case study approach was employed, offering several advantages over a single case study. We could compare results by analysing multiple instances of AI emergence in the HR field, building a more comprehensive understanding of this emerging phenomenon.
The interview included several questions concerning the interplay between DT and HR practices, focusing on AI (see Table 1). The questions were rooted in the relevant literature debating the role of AI and digital technologies inside HR practices (Chowdhury et al., 2023; Jarrahi, 2018; Pan et al., 2022; Vrontis et al., 2022) and were organised around the four main themes of investigation of this study.
First, we investigated the level of maturity—in terms of technologies and skills—of the HR functions to shed light on their preparedness for AI implementation in their practices. Second, we collected information about the level of involvement of the HR function and managers in the organisational DT processes. Then, we addressed the organisational culture, as it could enhance or hamper the level of overall propensity and trust towards digital technologies and AI. Lastly, the questions investigated whether there was trust that AI would positively affect HR practices.
The interview sample comprised HR managers from Italian organisations spanning diverse sectors, including medium and large enterprises. Located in central-northern Italy, these companies are recognised as leaders within their industries. The main details of these organisations are summarised in Table 2, with anonymised labels for the involved organisations.
Organisations involved in case studies.
The six case study organisations and all the surveyed organisations were selected using a purposive sampling strategy (Patton, 2002; Zickar & Keith, 2023) to ensure theoretical relevance to the research question. Specifically, all firms participated in the ‘SMACT Competence Center—Observatory 4.0 for Digitalization’, a national initiative supporting innovation and DT in Italian organisations. This strategic criterion ensured that selected cases had direct exposure to DT challenges and were likely to have engaged in reflection on the adoption of advanced technologies in HR practices, including AI.
The sample was designed to include firms varying in size and sector, but all were situated in the Italian context, which is characterised by strong heterogeneity in digital readiness (Berlingieri et al., 2020). This setting is particularly appropriate given the study’s focus on digitally lagging organisations. Although not intended to be statistically representative, the sample provides analytic generalisability (Yin, 2018) for theory-building purposes and offers variation in digital maturity and organisational structures, supporting the identification of relevant trust-building configurations.
The insights gathered from the multiple case studies were iteratively translated into the questionnaire items. After the first company interviews were transcribed, a thematic analysis was conducted to identify concepts related to DT, HR digital skills, cultural readiness, managerial involvement and trust in AI. These themes were mapped against the theoretical constructs derived from the literature, allowing us to align empirical observations with conceptual dimensions (Dubois & Gadde, 2002; Eisenhardt, 1989). This abductive and iterative logic ensures theoretical consistency and empirical grounding, and is recommended for enhancing construct validity in organisational research (Miles et al., 2014).
We proceeded iteratively. For each subsequent company, we presented the emerging version of the survey and discussed the clarity and relevance of the items with the HR respondent. This refinement phase enabled us to verify each item’s coherence and contextual fit and to adjust the wording or structure when new insights emerged (Barrett & Walsham, 2004; Creswell & Plano Clark, 2011). As the mixed methods literature suggests, this form of embedded instrument development strengthens content validity and ensures alignment with both practice and theory (Venkatesh et al., 2013). Therefore, the survey instrument evolved as an integrated outcome of theoretical elaboration and practitioner-informed revision.
Survey data collection

After the questionnaire was validated through analysis of the case study interviews, we expanded the data collection to a larger sample of Italian firms, collecting answers from 30 organisations. Data collection employed a computer-assisted web interview survey (Braunsberger et al., 2007), partially collected in collaboration with SMACT Competence Center—Observatory 4.0 for Digitalization (see https://www.smact.cc/osservatorio). Data were collected between April 2023 and April 2024.
To ensure temporal consistency, the questionnaire remained unchanged throughout the data collection period, and each response was anchored to the respondent’s current organisational context and recent HR digital practices. As already noted, we employed a purposive sampling according to firms’ involvement in digital innovation initiatives, willingness to share organisational insights and accessibility through partner networks. All the organisations involved answered the survey. A dataset with 30 observations was employed to test whether the theoretical propositions advanced by this study were supported by empirical data.
The questionnaire was grounded in the literature and qualitative insights from the case study phase. Specifically, each item reflected either a theoretical construct from prior research (e.g. Glikson & Woolley, 2020; Thite, 2022; Vial, 2019) or themes that emerged from our interview-based thematic analysis. Internal validation was carried out through iterative discussions with HR practitioners during the interview phase to refine item clarity and coherence. Given the configurational nature of our analysis (fs-QCA), the small sample size and the single-item measurements employed, we did not conduct a reliability analysis, as the method does not rely on internal consistency metrics but on set calibration and sufficiency logic (Fiss, 2011; Ragin, 2008).
Details of the conditions and the outcome are as follows:
- HR Digital Transformation Technologies & Skills: two different questions were employed to measure the level of maturity in terms of HR digital technologies and related skills. From Section B1, we employed: ‘Our organisation has extensively integrated digital tools and technologies into its human resources (HR) practices’ (labelled HRDT) to measure the level of deployment of HR-related digital technologies. From Section B2, we employed: ‘Our organisation’s HR function possesses the necessary skills to effectively implement and manage digital tools and technologies’ (labelled HRDTS), to measure the level of HR skills related to digital technologies. Both questions used a five-point Likert scale.
- HR Function & Manager Involvement: two different questions were employed to measure the involvement of HR in DT processes. From Section C1, we employed: ‘Our organisation actively involves the HR function in the processes of digital transformation’ to measure the level of involvement of the overall HR function (labelled HRFI). From Section C2, we employed: ‘The HR manager in our organisation plays an active role in driving and supporting digital transformation processes’ to measure the level of HR manager involvement (labelled HRMI). Both questions used a five-point Likert scale.
- Digital Transformation Supportive Culture: We measured the supportiveness of organisational culture towards DT by drawing on Section D1: ‘Our organisation has a culture that encourages and facilitates digital transformation’ (labelled DTSC), which used a five-point Likert scale.
- Trust in AI for HR practices: We measured the level of trust towards the positive effect of AI on HR practices by drawing on Section E1: ‘In our organisation, we trust that artificial intelligence (AI) will bring positive advancements to HR practices’ (labelled HRTAI), which used a five-point Likert scale.
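For illustration, each observation thus reduces to five condition scores plus one outcome score. The record below is a minimal sketch with hypothetical values (not drawn from the study's data), using the labels defined above:

```python
# Hypothetical record for one responding firm; raw answers are
# five-point Likert scores. Values are illustrative only.
firm_response = {
    "HRDT": 2,   # deployment of HR digital technologies
    "HRDTS": 4,  # HR digital skills
    "HRFI": 5,   # HR function involvement in DT
    "HRMI": 1,   # HR manager involvement in DT
    "DTSC": 4,   # DT-supportive culture
    "HRTAI": 5,  # outcome: trust in AI for HR practices
}

# The five conditions feed the fs-QCA model; HRTAI is the outcome set.
conditions = [k for k in firm_response if k != "HRTAI"]
print(conditions)
```

Each of these raw scores is subsequently calibrated into fuzzy-set membership values, as described in the calibration step below.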
We adopted a set-theoretic configurational approach to analyse our empirical data because it is particularly well-suited to studying complex, nonlinear and non-additive interactions among organisational-level variables (Furnari et al., 2020). Specifically, we employed fs-QCA (Ragin, 2008) to examine how different organisational configurations of technology, skills, HR involvement and culture are associated with specific outcomes regarding trust in implementing AI in HR practices. This method focuses on identifying ‘causal recipes’ (Woodside, 2013)—combinations of the presence or absence of conditions—that lead to the outcome of interest (Fiss, 2011). Unlike traditional correlation-based techniques, fs-QCA recognises the potential for multiple, non-symmetrical pathways to the same outcome (Ragin, 2008; Woodside, 2013).
Fs-QCA is inherently asymmetrical; it seeks to identify both necessary and sufficient conditions associated with a target outcome, providing a more in-depth understanding of causal complexity in organisational contexts. This analytical method involved three different phases: (1) condition calibration, (2) necessary condition analysis, and (3) sufficient condition analysis. All phases were conducted using fs-QCA 4.1, the open-source software developed by Professor Ragin, which is designed for systematic and rigorous implementation of set-theoretic methods.
Calibration

A critical step in the fs-QCA methodology is the calibration of variables, which determines the degree of membership for each case in the relevant sets. In this study, we employed the direct calibration method (Ragin, 2008), using empirical values to define the thresholds for full membership, full non-membership and the crossover point. Consistent with prior research, we adopted widely accepted thresholds: 0.9 or higher for full membership, 0.1 or lower for full non-membership, and 0.5 as the crossover point (Alofan et al., 2020; Felício et al., 2016). These thresholds were applied to the values of the five-point Likert scale usually employed in QCA research (Fiss, 2011), where 1 represents full non-membership, 3 is the crossover point and 5 is full membership. For example, if a firm answered with complete agreement (5 on the Likert scale) to the question about AI trust for HR practices, it is considered fully in the set of firms where the condition of high HRTAI is present.
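To make the calibration step concrete, the following sketch implements the usual logistic form of Ragin's direct method with the anchors used here (Likert 1 = full non-membership, 3 = crossover, 5 = full membership). This is a generic illustration, not the authors' code; the exact membership values reached at the qualitative anchors (≈0.95/0.05 below, vs. the 0.9/0.1 thresholds cited) vary slightly across software implementations:

```python
import math

def direct_calibrate(x, non_member, crossover, full_member):
    # Ragin's direct method: anchor the log-odds of membership at
    # +3 (full membership), 0 (crossover) and -3 (full non-membership),
    # then map log-odds to [0, 1] with the logistic function.
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_member - crossover)
    else:
        log_odds = 3.0 * (x - crossover) / (crossover - non_member)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Five-point Likert anchors used in the study.
for likert in (1, 2, 3, 4, 5):
    print(likert, round(direct_calibrate(likert, 1, 3, 5), 2))
# prints: 1 0.05 / 2 0.18 / 3 0.5 / 4 0.82 / 5 0.95
```

A score of 5 thus places a firm fully inside the set (membership above the 0.9 threshold), while a score of 1 places it fully outside.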
Analysis of necessary conditions

The second step in the fs-QCA procedure involves analysing necessary conditions to determine whether the presence or absence of any selected variables consistently precedes the outcome. In the current study, this entailed assessing whether cases with high membership scores for a given condition were also consistently associated with high membership in the outcome set (HRTAI). A condition is considered necessary when it must be present for a given result to occur (Rihoux & Ragin, 2009). Following established guidelines, a threshold of 0.9 was applied to identify necessary conditions (Schneider & Wagemann, 2012); the results are presented in Table 3.
Necessary conditions analysis for high trust in AI for HR practices.
This section presents the results of the necessary conditions analyses for the presence of HRTAI. As anticipated, a condition is considered necessary when its consistency exceeds 0.9; two conditions meet this threshold, and a third comes close to it. Starting from the latter, the presence of DT-related skills inside the HR function (HRDTS) displays a consistency of 0.87, very close to the 0.9 threshold. Therefore, the presence of the necessary DT skills inside the HR function is highly relevant for enhancing functional trust towards AI implementation in HR practices. Moreover, HRFI (present) exhibits a consistency of 0.98, highlighting that consistent involvement of the overall HR function in organisational DT processes is necessary to promote HRTAI. Lastly, HRMI (absent) has a consistency of 1.00 with a coverage of 1.00, meaning that in all cases displaying full membership in the set of firms with high HRTAI, the condition of involving the HR manager in organisational DT processes is absent.
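The necessity test described above reduces to a standard set-theoretic computation over calibrated membership scores: consistency measures the degree to which the outcome set is a subset of the condition set. A minimal sketch, using illustrative (not actual) membership values:

```python
def necessity_consistency(condition, outcome):
    # Degree to which the outcome is a subset of the condition:
    # sum of min(x_i, y_i) divided by the sum of outcome memberships.
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(outcome)

def necessity_coverage(condition, outcome):
    # How much of the condition set is "used up" by the outcome.
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

# Illustrative calibrated memberships for four hypothetical firms.
hrfi = [0.95, 0.82, 0.95, 0.60]   # condition: HR function involvement
hrtai = [0.95, 0.82, 0.82, 0.50]  # outcome: trust in AI for HR practices

print(round(necessity_consistency(hrfi, hrtai), 2))  # prints 1.0
print(round(necessity_coverage(hrfi, hrtai), 2))
```

In this toy example the condition's membership never falls below the outcome's, so consistency is 1.0 and the condition would clear the 0.9 necessity threshold.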
Analysis of sufficient conditions

The third and most relevant step in configurational analyses, such as fs-QCA, is the examination of sufficient conditions. A condition is considered sufficient when cases belonging to the set defined by the condition are also consistently part of the set associated with the outcome (Ragin, 2008). That is, the presence of a sufficient condition ensures the occurrence of the outcome, although its absence does not necessarily preclude it. In the sufficient condition analysis, we employed a consistency threshold of 0.8 or higher, which is stricter and advisable (Schneider & Wagemann, 2012) compared with the 0.75 usually considered the baseline under which the combination is inconsistent (Fiss, 2011; Ragin, 2008). This threshold indicates that cases exhibiting a specific combination of conditions reliably align with the set of cases displaying the outcome.
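Sufficiency inverts the subset relation: a configuration is sufficient when membership in the configuration is a subset of membership in the outcome. The sketch below, again with illustrative (not actual) values, combines several conditions with the fuzzy AND (case-wise minimum) and checks the result against the 0.8 consistency threshold adopted here:

```python
def sufficiency_consistency(combo, outcome):
    # Degree to which the configuration is a subset of the outcome:
    # sum of min(c_i, y_i) divided by the sum of configuration memberships.
    return sum(min(c, y) for c, y in zip(combo, outcome)) / sum(combo)

def conjunction(*condition_sets):
    # Fuzzy AND of several conditions: case-wise minimum.
    return [min(vals) for vals in zip(*condition_sets)]

# Illustrative memberships: configuration = HRDTS AND HRFI AND DTSC.
hrdts = [0.82, 0.95, 0.50, 0.18]
hrfi = [0.95, 0.82, 0.82, 0.50]
dtsc = [0.82, 0.82, 0.95, 0.18]
hrtai = [0.95, 0.50, 0.82, 0.50]  # outcome

combo = conjunction(hrdts, hrfi, dtsc)
print(round(sufficiency_consistency(combo, hrtai), 2))  # prints 0.86
```

Here the configuration's consistency (≈0.86) exceeds 0.8, so it would be retained as sufficient for the outcome under the threshold used in this study.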
Our analysis of sufficient conditions relies on both parsimonious and intermediate solutions, in line with established guidelines in fs-QCA research (Felício et al., 2016; Fiss, 2011; Frambach et al., 2016; Ragin, 2008). According to Mas-Verdú et al. (2015), the parsimonious solution incorporates all simplifying assumptions, including those drawn from plausible and less plausible counterfactuals, whereas the intermediate solution focuses only on assumptions derived from plausible counterfactuals. Lastly, the findings are presented using the notation introduced by Fiss (2011), which has become a standard in fs-QCA studies (see Table 4).
Sufficient conditions analysis for high trust in AI for HR practices.
Note. Large symbols denote core causal conditions and small symbols denote peripheral causal conditions: ● = causal condition present; ⊗ = causal condition absent.
Conditions appearing in both the parsimonious and intermediate solutions are identified as ‘core conditions’. These are marked with “●” to indicate their presence is required to achieve the outcome, or “⊗” to indicate their absence is necessary. Conditions unique to the intermediate solution, referred to as ‘peripheral conditions’, are depicted with smaller black circles (to signify presence) or crossed-out circles (to signify absence), following the conventions outlined by Fiss (2011).
Results of the analyses of sufficient conditions

This section presents the results of the sufficient conditions analyses, delineating the possible combinations of organisational factors empirically associated with the presence of HRTAI. As shown in Table 4, there are two possible combinations of factors consistently present in the observed firms that are also inside the set of the HRTAI outcome. Solution 1 suggests that 12 % of the observed firms that display high HRTAI exhibit an absence of HRDT, HRDTS and HRMI while simultaneously exhibiting the presence of HRFI. These firms can be viewed as organisations lagging in their level of DT maturity in HR, with low deployment of HR-related digital technologies and skills, yet with the HR function involved in broader organisational DT processes, although without the specific involvement of the HR manager. This configuration illustrates that even in the absence of digital tools and skills within the HR function, trust in AI can emerge when the function is actively integrated into broader DT efforts. This suggests that strategic visibility and organisational alignment may partially compensate for technological immaturity. This pathway is consistent with our theoretical proposition to the extent that it emphasises the importance of HR functional involvement; however, it challenges the expectation that digital skills and leadership involvement are always a necessary ingredient for technology trust-building. For example, in one interview, an HR professional from a medium-sized company in the food sector (Company Zeta) emphasised that although ‘we are not digitally advanced internally in HR’, the function was ‘strongly involved in cross-departmental digital projects’; consequently, ‘we are confident that AI can be successfully experimented also in HR practices’. This quotation illustrates how functional engagement in broader DT efforts can foster trust even in contexts with limited HR technological capability.
Solution 2 refers to 48 % of the observed organisations that display the presence of HRDTS, HRFI and DTSC and again the absence of HRMI. These firms can be conceptualised as more digitally mature organisations that have developed their trust in AI implementation in HR practices through a supportive organisational culture, more developed digital skills and a functional involvement in the organisational DT processes. This configuration is strongly aligned with our theoretical proposition, as it confirms the positive role of HR digital skills, cultural readiness and HR involvement in fostering AI trust. The absence of HR manager involvement in both solutions supports the notion that top-down leadership is not a necessary condition for AI trust. Illustratively, an HR manager from a large consulting firm (Company Alpha) noted, ‘We have built trust in AI not because someone pushed it, but because digital thinking is embedded in our team culture and people are trained and confident in using digital technologies’. The second solution once again highlights that the involvement of the HR manager is absent in the set of firms displaying trust in AI for HR practices, reinforcing our initial assumption that trust can emerge from functional integration and cultural support, even without individual managerial champions.
Discussion

The current study’s findings contribute to the recent debate on AI adoption in HR practices by approaching the understanding of organisational conditions that foster or hinder trust in AI with a configurational approach. As highlighted in the introduction, AI implementation in HR is still an emergent phenomenon in which managerial scepticism and digital lag are relevant barriers (Rampersad, 2020; Suseno et al., 2022) alongside ethical concerns, inequality risks, and issues of data security and privacy (Kim & Kim, 2024; Lazazzara et al., 2025). The empirical results underscore the central role of organisational culture and digital skills within the HR function as enablers of AI trust while challenging the assumption that HR managers’ direct involvement is essential for AI adoption.
The analysis of necessary conditions reveals that although an engaged HR function in DT is essential for fostering AI trust, the involvement of individual HR managers in DT processes is not a determining factor. Conversely, the absence of HR manager involvement is strongly associated with higher organisational trust in AI as a tool for HR practices. This finding aligns with the broader literature on DT, which suggests that bottom-up cultural and functional enablers may substantially influence AI adoption more than top-down managerial interventions (Petzolt & Seckler, 2025). In contrast to earlier studies that emphasise the role of leadership in digitalisation (Thite, 2022; Vial, 2019), our findings suggest that the presence of a broader DT-supportive culture is more strongly associated with trusting AI as a tool for HR practices. This suggests that collective organisational readiness, rather than individual managerial advocacy, plays a central role in reducing resistance and possibly fostering AI adoption in digitally lagging organisations.
Further, the configurational analysis provides two primary pathways to AI trust in HR. The first solution is characterised by firms that exhibit low digital maturity but actively engage the HR function in broader DT initiatives. In these firms, AI trust emerges despite a lack of extensive HR digital readiness. This configuration provides evidence of causal asymmetry, whereby the absence of certain conditions (such as digital skills) does not prevent the presence of the outcome (AI trust), as long as other enabling factors (such as HR’s strategic involvement) are present. This reflects the nonlinear and equifinal nature of the process: organisations with limited technological infrastructure can still build AI trust through alternative pathways.
The second solution represents organisations with a more advanced digital orientation, whereby trust in AI is driven by a convergence of strong HR digital skills, a digitally supportive culture and an engaged HR function; again, without the direct involvement of HR managers. This configuration demonstrates a more conventional path to AI trust, aligning with expected assumptions about digital readiness. Yet, the absence of HR manager involvement in both configurations highlights that trust-building may follow different logics across organisations, challenging traditional models of strategic leadership. Moreover, the coexistence of these solutions reinforces the evidence of equifinality, whereby multiple, equally valid configurations can lead to the same outcome.
Theoretical implications

These findings offer several conceptual implications. First, the study challenges established assumptions that digital maturity and managerial involvement are prerequisites for fostering trust in the adoption of new technologies. It extends digital transformation research beyond leadership-centric perspectives (e.g. Thite, 2022; Vial, 2019) by aligning with emerging views that emphasise bottom-up drivers of DT (Petzolt & Seckler, 2025), particularly in the context of AI-driven digital transformations. Second, the study’s findings challenge the linear conception of technology adoption by emphasising the interplay of multiple elements that collectively shape AI trust in organisations. By focusing on organisational rather than individual factors (e.g. Suseno et al., 2022), our results show that trust in AI can emerge through distinct configurations of organisational pathways. Finally, the study contributes to the emerging literature on AI implementation in the HR domain by highlighting that organisational trust in AI depends not only on the inherent features of the technology, but also on specific organisation-level factors. These include a pervasive culture supportive of DT, the presence of HR digital skills, and functional HR stakeholder involvement.
Practical implications

Our findings provide relevant insights for both scholars and practitioners. The study contributes to scholarly perspectives on AI trust by investigating the role of digital skills, organisational culture and the integration of the HR function, moving beyond leadership-centric approaches to organisational transformations.
For practitioners, the study suggests that fostering AI trust does not necessitate an over-reliance on HR managers as champions of change; rather, organisations should invest in developing HR’s digital capabilities and cultivating a DT-oriented culture. Organisations with low digital maturity might prioritise cross-functional exposure of HR professionals to digital innovation projects, even before investing heavily in HR-specific technologies. Conversely, firms with established digital infrastructures may focus on strengthening cultural support and upskilling efforts within HR teams. Both pathways imply actionable levers for designing AI trust strategies tailored to a firm’s internal conditions.
Our results caution against one-size-fits-all strategies. Organisations may need to identify which trust-building path is better aligned with their context and capabilities. Importantly, the two identified configurations are not simply alternative routes to the same goal; they reflect different organisational approaches or strategic trajectories. One emphasises inclusive, cross-functional engagement in transformation despite limited digital resources, whereas the other relies on systemic digital maturity and a supportive culture. These underlying logics can coexist in the ecosystem but may also reflect different values, resource allocations, and power dynamics within organisations.
Conclusion

This study contributes to the debate about DT and trust in innovative technologies (such as AI) by highlighting the complex configurational nature of trust in technology adoption. Our findings reveal that trust in AI can emerge through two distinct organisational configurations. In one, digitally less mature firms develop trust when HR functions are strategically engaged in broader DT initiatives. In the other, more digitally mature organisations develop trust through a combination of a supportive organisational culture, strong HR digital skills, and functional involvement in DT processes, even in the absence of direct HR manager involvement.
The study underscores the importance of configurational thinking as an alternative to traditional linear models, where outcomes do not result from a single dominant factor but distinct, and sometimes unexpected, combinations. These findings call for greater attention to dynamic processes, as configurations may evolve over time or vary depending on industry, organisational maturity, or external pressures. They further suggest that scholars and practitioners should move beyond generic best practices and consider the interplay of context-specific conditions. By adopting a configurational lens, this study extends existing theoretical insights by emphasising asymmetry, non-linearity, and the coexistence of multiple logics of change within DT processes.
Limitations

This study also acknowledges some limitations. First, although the small sample size and exploratory design are consistent with the methodological requirements of fs-QCA, we recognise the potential risk of self-selection bias given the voluntary participation of firms engaged in digital initiatives. Second, data collection occurred over a one-year period. Although the questionnaire remained stable, organisational priorities or external events could have varied over that timeframe, potentially affecting perceptions. Future studies with longitudinal designs could explore this further. Third, our analysis focuses exclusively on Italian organisations. The context—characterised by high variability in digital maturity, a strong presence of small to medium enterprises, and a specific institutional and labour regulation framework—may influence trust dynamics in ways not generalisable across countries. We encourage future comparative research to test the robustness of our configurations in other national settings.
Future research directions

Our findings open up some promising avenues for future research. The identified pathways may lead to different long-term outcomes or tensions as organisations evolve. The observed configurations may be equifinal and emblematic of distinct institutional logics—one pragmatic and emergent, the other planned and capability-driven. A richer theoretical engagement with this tension opens new avenues for understanding how and why trust in AI develops differently across organisations. A particularly fruitful direction would be to examine how these configurations evolve over time: are they stable trajectories or transitional phases?
More research is also needed to understand how trust in AI interacts with outcomes such as HR performance, employee acceptance or innovation readiness. Finally, further theorisation could explore how organisational contingencies (e.g. hierarchical vs. participatory, exploitative vs. exploratory) shape AI trust pathways.
Declaration of generative AI and AI-assisted technologies in the writing process

While preparing this work, the authors used Grammarly and ChatGPT to edit the language. After using these tools, the authors reviewed and edited the content as needed, and they take full responsibility for the publication’s content.
CRediT authorship contribution statement

Alessia Zoppelletto: Writing – review & editing, Writing – original draft, Validation, Methodology, Data curation, Conceptualization. Ludovico Bullini Orlandi: Writing – review & editing, Writing – original draft, Methodology, Formal analysis, Conceptualization. Eleonora Veglianti: Writing – review & editing, Writing – original draft, Validation, Conceptualization. Cecilia Rossignoli: Supervision, Project administration.





