Journal of Innovation & Knowledge, Vol. 11 (In progress, January–February 2026)
Organizational sustainable artificial intelligence capabilities scale development, validation, and implications
Sikandar Ali Qalati a,*, Faiza Siddiqui b,**

* Corresponding author: qalati@lcu.edu.cn (School of Business, Liaocheng University, Shandong, Liaocheng, 252059, China)
** Corresponding author: 719040@sdnu.edu.cn (School of Business, Shandong Normal University, Shandong, Jinan, 250358, China)

a School of Business, Liaocheng University, Shandong, Liaocheng, 252059, China
b School of Business, Shandong Normal University, Shandong, Jinan, 250358, China
Abstract

As artificial intelligence (AI) adoption accelerates globally, its sustainability implications remain insufficiently integrated into organizational capability frameworks. This study develops and validates the organizational sustainable AI capabilities (OSAIC) construct, extending dynamic capabilities theory by embedding sustainability as a meta-capability in AI governance and innovation processes. OSAIC is conceptualized as a five-dimensional, reflective, higher-order construct encompassing sustainable AI learning, seizing, sensing, stakeholder integration, and transformation. A multi-phase scale development procedure—including expert Q-sorting, exploratory factor analysis, and confirmatory factor analysis using partial least squares structural equation modeling—was employed. The scale was assessed and validated using two distinct samples: a pilot study (n = 188) and a main study (n = 364), both comprising managers from diverse industries and regions. The findings indicated robust psychometric attributes, characterized by substantial reliability and convergent, discriminant, and predictive validity. A positive and significant relationship between OSAIC and sustainable innovation established nomological validity, addressing the AI sustainability paradox by illustrating that sustainability-oriented AI capabilities enhance rather than constrain innovation. By extending research on dynamic capabilities and paradox theory and presenting a validated measurement tool, this study makes both theoretical and methodological contributions to the literature. Practically, it offers managers a diagnostic framework to align AI implementation with environmental and social accountability while fostering innovation.

Keywords:
Sustainable AI sensing
sustainable AI seizing
sustainable AI transforming
sustainable AI learning
organizational sustainable AI capabilities
stakeholder integration
JEL classification:
O30
Q01
Q56
Full Text
Introduction

Despite the undeniable transformative power of artificial intelligence (AI) in organizational contexts (Secundo et al., 2025), its dual implications for sustainability and innovation remain under-theorized (Ojong, 2025). On the one hand, AI provides unprecedented opportunities for efficiency, predictive insights, and eco-friendly solutions across industries (Abuzaid, 2024). On the other hand, it poses significant ecological and social challenges, including carbon emissions and energy consumption (Reijers et al., 2025), as well as algorithmic bias and governance dilemmas (Mancuso et al., 2025; Mergen et al., 2025). This dual impact reflects the AI sustainability paradox—the tension between AI’s potential to advance sustainable innovation and its unsustainable externalities (Mancuso et al., 2025). Addressing this paradox requires organizations not only to adopt and implement AI technologies but also to develop capabilities that embed sustainability principles into AI deployment and governance (Falk & van Wynsberghe, 2024; Mikalef & Gupta, 2021; Tripathi et al., 2024).

Dynamic capabilities theory (DCT) provides a foundational framework for understanding organizational responses to technological disruptions (Teece et al., 1997). Teece’s (2007) triple model, encompassing sensing, seizing, and transforming, is most often employed in the digital transformation context (Ghosh et al., 2022; Warner & Wäger, 2019). However, the integration of the three factors with sustainability requirements remains under-investigated (Amui et al., 2017; Feroz et al., 2023). Moreover, applications of the triple model rarely consider sustainability as a meta-capability influencing all aspects of capability building (Gao et al., 2025; Ghosh et al., 2022). Further, although current work emphasizes agility and competitiveness in digital transformation (Vial, 2021) or environmental responsiveness in relation to sustainability efforts (Cezarino et al., 2019), the critical governance structures and routines necessary to ensure the responsible development and use of AI are frequently overlooked. This omission highlights the challenge of redefining dynamic capabilities to balance the conflicting needs of AI innovation and the mitigation of its environmental and social impacts (Wang et al., 2025).

This study attempts to address this theoretical gap by introducing the concept of organizational sustainable artificial intelligence capabilities (OSAIC). OSAIC extends the DCT developed by Teece et al. (1997) by integrating sustainability as a meta-capability across five interconnected dimensions: sustainable AI seizing, sensing, transforming, learning, and stakeholder integration (Teece, 2007). OSAIC connects the literature on dynamic capabilities (Warner & Wäger, 2019) with paradox and ambidexterity theories (Smith & Lewis, 2011), repositioning sustainability from a peripheral role to a core organizational competency for managing competing demands. The framework advances theory in two ways: first, by reframing AI governance as a process requiring a holistic pursuit of both sustainable and innovative goals (Strubell et al., 2020), and second, by specifying sustainability-based routines that enable organizations to respond flexibly to paradoxical demands (Amui et al., 2017).

Empirical work on sustainable AI is constrained by the absence of validated measures of organizational capabilities in this area (Schoormann et al., 2023; Seidel et al., 2017). Although prior studies have examined digital capabilities (Vial, 2021), green information technology (Melville, 2010), and sustainability-oriented dynamic capabilities (Amui et al., 2017), no comprehensive measurement scale effectively captures how organizations sense, seize, transform, learn, and involve stakeholders in the development of sustainable AI. This study addresses this gap using the methodological approach recommended by Hinkin (1998) and Podsakoff et al. (2016). First, we develop the measurement items based on the existing literature and expert interviews. These measurement items are then refined using a Q-sorting procedure. The scale's dimensional structure is then established using exploratory and confirmatory factor analyses. Finally, nomological validity is established by relating the OSAIC construct to sustainable innovation (Adams et al., 2016).

This study offers three significant contributions. First, it advances the DCT of Teece et al. (1997), redefining sustainability as the central meta-capability in AI governance (Teece, 2007), while addressing the challenge of including paradoxical and ambidextrous frameworks in capability research (Smith & Lewis, 2011). Second, it constructs and empirically validates the first measurement scale for OSAIC, enabling stringent empirical investigation into how enterprises manage the AI-sustainability paradox (Hinkin, 1995). Finally, it highlights the practical applicability of OSAIC by revealing its effect on sustainable innovation outcomes (Feng et al., 2024; Zhong & Song, 2025).

Theoretical support and conceptualization of OSAIC

Theoretical support

OSAIC scale development is rooted in the DCT, which provides an intellectual framework for understanding how organizations adapt their resources in response to technological development, primarily by identifying opportunities, seizing them, and reshaping how operations are conducted (Teece et al., 1997). Although this framework has been applied in digital (Warner & Wäger, 2019) and ecological (Amui et al., 2017) contexts, such applications often treat sustainability as a mere consequence rather than an integral, required meta-capability (Buzzao & Rizzi, 2021). This highlights the critical importance of integrating sustainability into the core activities that drive technological innovation, particularly in the AI field (Strubell et al., 2020).

To address this misalignment, we introduce the paradox and ambidexterity perspectives. Paradox theory posits that an organization must reconcile competing and interdependent demands, such as responsibility and innovation, without compromising either aspect (Smith & Lewis, 2011). Additionally, ambidexterity theory suggests that performance is only sustainable if conflicting pressures are aligned through integrated capabilities (O’Reilly & Tushman, 2008). In the AI context, this implies that organizations cannot pursue technological advancement without societal and environmental considerations. Instead, they must internalize sustainability into their dynamic capabilities so that innovation and responsibility progress simultaneously rather than sequentially.

Guided by these insights, we introduce the OSAIC concept. This construct adds to the DCT by embedding sustainability as a foundational element across the sensing, seizing, and transforming dimensions, while incorporating two underexplored aspects: sustainable AI learning (Argyris & Schön, 1997) and stakeholder integration (Freeman, 2010). These inclusions highlight the necessity of adaptive learning and inclusive engagement while exercising responsible AI governance. By conceptualizing sustainability as a meta-capability, OSAIC offers a structured approach to managing the complexities of AI application and extends the discourse on dynamic capabilities, paradox management, and responsible AI governance (Adams et al., 2016).

Conceptualization of OSAIC

We define OSAIC as a meta-capability that integrates sustainability principles with AI-related dynamic capabilities, enabling companies to leverage AI for innovation while simultaneously addressing its social and environmental impacts. OSAIC adds to the DCT (Teece, 2007; Teece et al., 1997) by embedding sustainability throughout the five stages of capability development (Adams et al., 2016), further enriching ambidexterity and paradox theories by providing routines that enable companies to reconcile the dual imperatives of innovation and responsibility (O’Reilly & Tushman, 2008; Smith & Lewis, 2011). OSAIC comprises the five dimensions discussed below, which build upon but also extend the traditional understanding of dynamic capabilities.

Sustainable AI sensing

This refers to the ability to continuously scan markets to detect AI-related sustainability concerns. This includes monitoring ecological signals, such as algorithm use and life-cycle implications (Ligozat et al., 2022), and compliance with regulatory demands such as the EU AI Act. It aligns with "systems sensing" (Schad & Bansal, 2018), in which firms monitor weak signals across technical, ecological, and social dimensions. Microsoft’s AI sustainability API, which enables real-time emissions tracking across AI projects, exemplifies this capability; for instance, in January 2025, GPT-4 emitted over 400 kg of CO2 (Microsoft, 2025).

Sustainable AI seizing

This refers to the ability to reconfigure resource allocation by integrating sustainability considerations. Unlike conventional seizing, which emphasizes speed and scale (Eisenhardt & Martin, 2000), this dimension evaluates AI projects through a triple-bottom-line framework. Firms may invest in energy-efficient infrastructures, adopt incentive systems that reward sustainability achievements alongside model precision, and partner with environmentally conscious cloud service providers (Ahmadisakha & Andrikopoulos, 2024; Reddy, 2024). This aligns with Barney's (1991) resource-based view, which conceptualizes sustainable AI assets as strategically valuable, difficult to imitate, and path-dependent.

Sustainable AI transforming

This involves redesigning organizational processes to promote circularity in AI development. Rooted in the circular economy concept (Geissdoerfer et al., 2017), it entails model reuse, hardware refurbishment, and renewable-powered infrastructure. For instance, Google's application of liquid cooling infrastructure has lowered the data center's CO2 output by 10 % and energy usage by 10 % (Hölzle, 2022), demonstrating how companies can pursue digital transformation and environmental sustainability simultaneously.

Sustainable AI learning

This refers to the integration of feedback systems that are explicitly designed to enhance the sustainability performance of AI systems. It shifts learning priorities from efficiency (Argote, 2012) to sustainability-oriented practices, such as conducting post-incident reviews (for instance, addressing bias or energy inefficiency), forming AI ethics committees, and publishing transparent impact reports. This enhances the exploration–exploitation learning framework by creating a sustainability feedback loop (March, 1991).

Stakeholder integration

This refers to the organization's capacity to actively engage stakeholders in AI governance. Rather than viewing stakeholders merely as information recipients, this dimension highlights the value of co-design approaches, participatory review, and adaptive feedback systems that translate external perspectives into actionable insights for responsible innovation. Buhmann et al. (2024) argue that stakeholders’ legitimacy should be continuously mediated through bespoke frameworks, moving engagement from mere consultation to co-creation (Loureiro et al., 2020).

Hypothesis development

The OSAIC framework’s core proposition is its positive influence on sustainable innovation, defined as the development of new processes, products, or business models that create economic value while reducing environmental and social harm (Adams et al., 2016). According to Teece (2007), dynamic capabilities enhance competitive advantage through innovation. DCT posits that adaptive routines such as real-time emission monitoring and stakeholder co-creation help firms reconfigure resources for innovation. Based on stakeholder theory (Freeman et al., 2020), these adaptive practices promote sustainability-oriented decision making through participatory mechanisms, such as AI ethics boards. OSAIC allows firms to proactively identify AI sustainability risks and opportunities (Kong & Yuen, 2025), invest in green AI projects (Mancuso et al., 2025), and adapt processes to evolving sustainability norms (Ghosh et al., 2022).

Earlier studies have illustrated that AI capabilities significantly improve organizational innovation (Almheiri et al., 2025; Mikalef & Gupta, 2021) and sustainable performance (Kumar et al., 2025). Additionally, Microsoft’s carbon-aware AI inference systems, such as EcoServe, illustrate how CO2 emissions can be reduced by up to 47 % through performance-, energy-, and cost-optimized design points, without compromising operational efficiency (Li et al., 2025). Hence, we propose H1.

  • H1: OSAIC has a positive and significant influence on sustainable innovation.

Research methodology

Generation of scale items

We followed established methodological standards for scale development (Hinkin, 1995; Nunnally, 1978). After defining the OSAIC construct and its five dimensions, we generated an initial pool of items. Consistent with Netemeyer et al. (2003), item clarity and item-wording precision were prioritized while minimizing redundancy. Clear wording ensures consistent interpretation among respondents (Lambert & Newman, 2023), whereas some degree of redundancy at the early development stage helps capture the full conceptual scope of the construct. For clarity and reliability, negatively worded items were deliberately avoided (Netemeyer et al., 2003). Moreover, we initially included more items than we ultimately required, as starting with a comprehensive set is preferred to avoid missing key elements (Wang & Chuang, 2024; Wang & Wang, 2022).

The item generation process began with a comprehensive review of 246 research papers published between 2015 and 2024, focusing on the intersection of AI, sustainability, and dynamic capabilities. Articles were sourced from major research databases, such as Scopus, using targeted keywords such as “sustainable AI,” “AI governance,” and “dynamic capabilities.” This review initially yielded 36 potential scale items. Overly broad items (e.g., “My organization uses AI sustainably”) were eliminated, and the remaining items were mapped onto the five conceptual dimensions of OSAIC. For example, the item “We audit AI projects for compliance with sustainability policies” was mapped onto the dimension of sustainable AI transformation.

To ensure contextual richness and practical relevance, we also conducted 18 semi-structured interviews with domain experts. The panel comprised six AI sustainability officers from leading technology firms (e.g., SAP, Intel), six corporate social responsibility managers from the manufacturing and finance sectors, and six AI ethics researchers affiliated with academia and non-governmental organizations. The interview protocol examined how organizations balance AI innovation and sustainability objectives, as well as the mechanisms employed to monitor AI’s social and environmental impacts. Guiding questions included: “How does your organization balance AI innovation with sustainability goals?” and “What processes exist to monitor AI’s social and environmental impacts?” The interviews were transcribed and thematically coded, resulting in 26 additional items. For instance, the item “We compensate communities affected by AI data collection” was linked to the dimension of stakeholder integration.

The final item pool comprised 62 items: 36 and 26 items from the literature review and expert interviews, respectively. Together, these items provided a comprehensive and conceptually grounded foundation for subsequent scale development stages. Appendix A presents the list of 62 items.

Scale refinement

To further refine the item pool and ensure content validity, we conducted a Q-sorting procedure with a panel of ten experts. The panel comprised five scholars specializing in organizational capabilities, AI ethics, and sustainability, and five industry practitioners with expertise in AI governance and corporate sustainability. Each expert independently sorted the 62 items into the five proposed OSAIC dimensions. Besides classification, experts rated the relevance of each item using a five-point scale (1 = “Not Relevant” to 5 = “Highly Relevant”).

Items were retained if they satisfied three criteria: (1) placement accuracy of at least 80 %, meaning that at least 80 % of the experts classified the item into its intended dimension; (2) an average relevance score of 4.0 or higher; and (3) no cross-loading, with each item aligned to a single dimension without ambiguity. Following this sorting and rating procedure, 34 items were removed. For instance, “We benchmark our AI sustainability practices against industry standards” was removed because it lacked concreteness and failed to align conceptually with OSAIC. The remaining 28 items were refined for clarity and carried forward to pilot testing and empirical validation.
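For illustration, the quantitative part of this screening can be computed directly from the expert ratings. The following is a minimal sketch, assuming hypothetical data frames `sorts` (item-by-expert dimension assignments), `intended` (the intended dimension per item), and `relevance` (item-by-expert ratings on the 1–5 scale); the cross-loading criterion is a qualitative judgment and is only noted in a comment.

```python
import pandas as pd

def qsort_keep(sorts: pd.DataFrame, intended: pd.Series,
               relevance: pd.DataFrame) -> pd.Series:
    """Boolean keep-flag per item from an expert Q-sort.

    sorts:     items x experts, the dimension each expert assigned
    intended:  items -> intended OSAIC dimension
    relevance: items x experts, 1-5 relevance ratings
    Criterion 3 (no cross-loading) is assessed qualitatively, not computed here.
    """
    placement = sorts.eq(intended, axis=0).mean(axis=1)  # share of correct sorts
    return (placement >= 0.80) & (relevance.mean(axis=1) >= 4.0)
```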

Sampling and data collection

The OSAIC scale was developed based on a multi-stage, quantitative research design. Netemeyer et al. (2003) recommend scale development procedures with two data collection stages: (i) an initial pilot study for item refinement and exploratory factor analysis (EFA), and (ii) a main study to validate the measurement model through partial least squares structural equation modeling (PLS–SEM).

In the pilot test, a purposive sample of 188 mid- and senior-level managers was recruited from value-creating industries with high AI adoption, such as technology, manufacturing, finance, and healthcare (see Table 1). Data were collected through professional networks using a Google Forms survey distributed via email, WhatsApp, and LinkedIn groups. This strategy allowed efficient access to qualified participants from diverse backgrounds across multiple geographical locations (Buhrmester et al., 2018; Dicce & Ewers, 2021). Purposive sampling was especially appropriate because it ensured that respondents possessed relevant knowledge of AI and sustainability, increasing the likelihood of content-valid responses (Qalati et al., 2024). The pilot study data were used in the EFA, whose findings guided scale refinement and the elimination of unsuitable items before the subsequent confirmatory test.

Table 1.

Respondents’ information.

Characteristic                            | n = 188          | n = 364
                                          | Number |   %     | Number |   %
Region                                    |        |         |        |
  Asia                                    |   86   |  45.7   |  168   |  46.2
  North America                           |   75   |  39.9   |  133   |  36.5
  Europe                                  |   37   |  19.7   |   63   |  17.3
Gender                                    |        |         |        |
  Male                                    |  121   |  64.4   |  232   |  63.7
  Female                                  |   64   |  34.0   |  120   |  33.0
  Prefer not to specify                   |    3   |   1.6   |   12   |   3.3
Age                                       |        |         |        |
  Under 30                                |   21   |  11.2   |   41   |  11.3
  31 to 40                                |  118   |  62.8   |  227   |  62.4
  41 to 50                                |   36   |  19.1   |   66   |  18.1
  51 to 60                                |   11   |   5.9   |   23   |   6.3
  Over 60                                 |    2   |   1.1   |    7   |   1.9
Industry in which organization operates   |        |         |        |
  Technology                              |   95   |  50.5   |  185   |  50.8
  Manufacturing                           |   64   |  34.0   |  115   |  31.6
  Finance                                 |   17   |   9.0   |   33   |   9.1
  Healthcare                              |   10   |   5.3   |   28   |   7.7
  Other                                   |    2   |   1.1   |    3   |    .8
Size of organization                      |        |         |        |
  <100                                    |   13   |   6.9   |   27   |   7.4
  100 to 499                              |   47   |  25.0   |   74   |  20.3
  500 to 999                              |   69   |  36.7   |  127   |  34.9
  1000 to 4999                            |   33   |  17.6   |   93   |  25.5
  Over 5000                               |   26   |  13.8   |   43   |  11.8
Years of professional experience          |        |         |        |
  <5                                      |   45   |  23.9   |  102   |  28.0
  5 to 10                                 |   71   |  37.8   |  129   |  35.4
  11 to 15                                |   60   |  31.9   |  108   |  29.7
  Over 15                                 |   12   |   6.4   |   25   |   6.9

The second sample comprised 364 managers and executives across North America, Europe, and Asia, drawn from sectors with leading AI adoption and sustainability agendas. Both purposive and snowball sampling techniques were applied. Through purposive sampling, we targeted relevant leadership roles, such as AI project leaders, data scientists, sustainability officers, and senior managers with insightful views on their organizations' OSAIC. This is consistent with the theoretical sampling requirements of dynamic capabilities research, whereby informants should be well-informed about organizational processes (Divya et al., 2024; Qalati et al., 2024). Snowball sampling expanded the survey's coverage through professional networks and LinkedIn groups, a recommended technique when expert populations are not accessible through random sampling (Bello et al., 2024; Hossain et al., 2025). The online questionnaire used in both phases ensured anonymity and provided broad geographic coverage, reducing social desirability bias and increasing response truthfulness (Larson, 2019). Screening questions ensured that only respondents with experience in AI and sustainability proceeded, and attention checks were embedded to identify inattentive responses.

Table 1 summarizes the demographic profile. In the main study, participants were primarily from Asia (46.2 %), North America (36.5 %), and Europe (17.3 %). The sample comprised 63.7 % males, with the majority aged between 31 and 40 years (62.4 %). Technology (50.8 %) and manufacturing (31.6 %) were the largest industry groups, and most participants worked in organizations of 500–999 employees (34.9 %). Most respondents had 5–15 years of experience, ensuring informed insights. Lastly, to check sample size adequacy for validation, a power analysis was run in G*Power (Cohen, 2013). With a medium effect size of f² = 0.15, α = 0.05, and 0.95 statistical power, the minimum sample size required to estimate a regression path with one predictor was 89 (see Fig. 1). Hence, the pilot sample of 188 and the main study sample of 364 met this criterion, ensuring that the data were adequate for PLS-SEM analysis.
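The reported minimum of 89 can be reproduced outside G*Power. Below is a minimal sketch using SciPy's noncentral F distribution under G*Power's convention for a fixed-model multiple regression (noncentrality λ = f²·N); the defaults mirror the parameters stated above.

```python
from scipy.stats import f as f_dist, ncf

def required_n(f2: float = 0.15, alpha: float = 0.05,
               power: float = 0.95, n_predictors: int = 1) -> int:
    """Smallest N reaching the target power for an F test of R-squared."""
    n = n_predictors + 2
    while True:
        u, v = n_predictors, n - n_predictors - 1      # numerator / denominator df
        f_crit = f_dist.ppf(1 - alpha, u, v)           # critical F under H0
        achieved = 1 - ncf.cdf(f_crit, u, v, f2 * n)   # power under H1
        if achieved >= power:
            return n                                   # 89 for the defaults above
        n += 1
```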

Fig. 1.

G*Power sample size calculation.

Analytical strategy

We employed a two-stage analytical strategy to validate the OSAIC scale according to established scale development guidelines (Hinkin, 1998; Netemeyer et al., 2003). In the pilot phase, we used EFA to investigate the item pool’s dimensional structure and to allow scale refinement before full validation. EFA was selected for this initial phase because it identifies latent factor structures and removes weak or ambiguous items early in the development process (Fabrigar et al., 1999). Principal axis factoring with Promax (oblique) rotation was utilized, as intercorrelations between the proposed OSAIC dimensions were theorized and empirically found. This method ensured that cross-loading, weakly loading, or conceptually vague items were eliminated at the outset (Lin et al., 2025).

For the main study (n = 364), the refined scale was assessed comprehensively through PLS–SEM using SmartPLS 4. We preferred PLS–SEM over covariance-based SEM for three reasons. First, the study’s aim is prediction—constructing a valid measurement scale and demonstrating its nomological validity with respect to sustainable innovation—and PLS–SEM is well suited to this aim (Mohd Dzin & Lay, 2021). Second, OSAIC was conceptualized as a reflective–reflective higher-order construct, and PLS–SEM offers a strong framework for estimating such hierarchical models using the two-stage approach (measurement model and structural model) (Senapati & Panda, 2024). Third, PLS–SEM is more appropriate when data are not multivariate normal and sample sizes are modest relative to model complexity, as is the case here (Chen et al., 2025; Hair & Alamer, 2022).

This EFA pairing approach for item purification and PLS–SEM for confirmatory validity and hypothesis testing is robustly recommended in information systems and organizational capabilities studies (Hair et al., 2017). It allows for the exploratory identification of the factor structure and robust assessment of measurement properties, such as reliability, convergent, discriminant, and nomological validity (Usakli & Rasoolimanesh, 2023).

Results

Pilot testing and EFA

The pre-test phase aimed to refine the OSAIC measurement scale by assessing its internal consistency and factor structure, following established scale development guidelines (Hinkin, 1998). Internal reliability was initially assessed using corrected item-to-total correlations, which avoid inflated part–whole correlations (Wang & Chuang, 2024; Wang & Wang, 2022). Items with corrected item-to-total correlations below 0.40 were considered for removal (Hinkin, 1998). The analysis showed that all 28 retained items exceeded this criterion, and the scale’s overall Cronbach’s alpha (CA) was excellent, supporting its reliability for subsequent factor analysis.
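For reference, both statistics are straightforward to compute; a minimal sketch, assuming a hypothetical pandas data frame `items` of respondents by item scores:

```python
import pandas as pd

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlate each item with the scale total computed WITHOUT that item,
    avoiding the inflated part-whole correlation of the naive version."""
    total = items.sum(axis=1)
    return pd.Series({c: items[c].corr(total - items[c]) for c in items.columns})

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha from item variances and the total-score variance."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))
```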

Suitability for factor analysis was confirmed with a Kaiser–Meyer–Olkin (KMO) value of 0.889 and a significant Bartlett's test of sphericity (χ² = 4223.633; p < .001), demonstrating sampling adequacy and sufficient intercorrelations among items (Podsakoff et al., 2016) (see Table 2).
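Both diagnostics can be reproduced with the open-source factor_analyzer package; a short sketch, assuming a hypothetical file pilot_items.csv holding the 188 × 28 item matrix:

```python
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("pilot_items.csv")      # hypothetical 188 x 28 item matrix
chi2, p = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)
print(f"KMO = {kmo_overall:.3f}; Bartlett chi2 = {chi2:.3f} (p = {p:.3f})")
```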

Table 2.

KMO and Bartlett's Test n = 188.

Kaiser-Meyer-Olkin measure of sampling adequacy                  .889
Bartlett's test of sphericity       Approx. chi-square       4223.633
                                    df                            378
                                    Sig.                         .000

The EFA was run on the pre-testing sample (n = 188) using principal axis factoring with Promax (oblique) rotation, as correlations among dimensions were empirically and theoretically supported. We utilized this method instead of an orthogonal rotation, such as Varimax, because it allows latent constructs to be interrelated (Fabrigar et al., 1999). The EFA yielded a five-factor solution consistent with the proposed OSAIC. As shown in Table 3, all five factors had eigenvalues exceeding 1.0 and together explained 67.05 % of the total variance, with the first factor accounting for 30.55 %. This indicates that no single factor dominated the scale, offering evidence of multidimensionality.
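A sketch of this extraction and rotation setup with the same package; note that factor_analyzer's method="principal" is an approximation of SPSS-style principal axis factoring, so loadings may differ marginally from the values reported in Tables 3 and 4:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("pilot_items.csv")       # hypothetical 188 x 28 item matrix
efa = FactorAnalyzer(n_factors=5, method="principal", rotation="promax")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns)  # rotated pattern matrix
variance, prop_var, cum_var = efa.get_factor_variance()      # cf. Table 3
```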

Table 3.

Total variance explained.

Factor | Initial eigenvalues             | Extraction sums of squared loadings | Rotation sums a
       | Total  % of var.  Cumulative %  | Total  % of var.  Cumulative %      | Total
1      | 8.823   31.512     31.512       | 8.553   30.548     30.548           | 7.576
2      | 5.190   18.535     50.047       | 4.900   17.501     48.049           | 4.621
3      | 2.713    9.690     59.736       | 2.312    8.259     56.308           | 5.901
4      | 2.242    8.005     67.742       | 1.907    6.812     63.120           | 5.042
5      | 1.454    5.194     72.936       | 1.101    3.934     67.054           | 3.023
6      |  .806    2.879     75.815       |                                     |
7      |  .730    2.609     78.424       |                                     |
8      |  .625    2.234     80.657       |                                     |
9      |  .520    1.858     82.515       |                                     |
10     |  .505    1.803     84.318       |                                     |
11     |  .459    1.639     85.957       |                                     |
12     |  .434    1.552     87.509       |                                     |
13     |  .392    1.401     88.910       |                                     |
14     |  .372    1.327     90.237       |                                     |
15     |  .326    1.164     91.401       |                                     |
16     |  .320    1.143     92.545       |                                     |
17     |  .283    1.010     93.554       |                                     |
18     |  .267     .952     94.507       |                                     |
19     |  .228     .815     95.322       |                                     |
20     |  .208     .741     96.063       |                                     |
21     |  .185     .659     96.722       |                                     |
22     |  .174     .620     97.343       |                                     |
23     |  .158     .565     97.908       |                                     |
24     |  .148     .530     98.438       |                                     |
25     |  .130     .465     98.903       |                                     |
26     |  .122     .435     99.338       |                                     |
27     |  .108     .385     99.723       |                                     |
28     |  .077     .277    100.000       |                                     |

Extraction method: Principal axis factoring.

a. When factors are correlated, sums of squared loadings cannot be added to obtain a total variance.

Items were evaluated using established retention criteria (Hair et al., 2011):

1. Primary factor loadings ≥ 0.50 on the intended construct;
2. Absence of cross-loadings above 0.40;
3. Communalities ≥ 0.30;
4. At least three strong indicators per factor;
5. Elimination of single-item factors.

Following these criteria, we removed a single item (SAISe6, loading = 0.445) because of its weak loading, resulting in a refined 27-item scale (see Table 4). A sketch of how these criteria can be applied programmatically appears below.
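A compact sketch of these criteria applied to a hypothetical rotated pattern matrix (`loadings`, items by factors) and its communalities vector; thresholds follow the list above:

```python
import pandas as pd

def filter_items(loadings: pd.DataFrame, communalities: pd.Series) -> pd.DataFrame:
    """Apply the five retention criteria to a rotated pattern matrix."""
    abs_l = loadings.abs()
    primary = abs_l.max(axis=1)                                     # criterion 1
    secondary = abs_l.apply(lambda row: row.nlargest(2).iloc[-1], axis=1)
    keep = (primary >= 0.50) & (secondary < 0.40) & (communalities >= 0.30)
    kept = loadings[keep]
    # criteria 4-5: drop factors lacking at least three strong indicators
    factor_of = kept.abs().idxmax(axis=1)
    counts = factor_of.value_counts()
    return kept[factor_of.isin(counts[counts >= 3].index)]
```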

Table 4.

Rotated factor loading for the 28 items of the OSAIC scale.

Construct                            | Item code | F1    | F2    | F3    | F4     | F5
Sustainable AI learning (SAIL)       | SAIL5     | .936  |       |       |        |
                                     | SAIL2     | .920  |       |       |        |
                                     | SAIL6     | .910  |       |       |        |
                                     | SAIL4     | .897  |       |       |        |
                                     | SAIL7     | .843  |       |       |        |
                                     | SAIL1     | .833  |       |       |        |
                                     | SAIL3     | .828  |       |       |        |
Stakeholder integration (SI)         | SI1       |       | .909  |       |        |
                                     | SI2       |       | .876  |       |        |
                                     | SI5       |       | .867  |       |        |
                                     | SI4       |       | .862  |       |        |
                                     | SI3       |       | .735  |       |        |
                                     | SI6       |       | .687  |       |        |
Sustainable AI transforming (SAIT)   | SAIT3     |       |       | .842  |        |
                                     | SAIT6     |       |       | .836  |        |
                                     | SAIT2     |       |       | .813  |        |
                                     | SAIT1     |       |       | .790  |        |
                                     | SAIT4     |       |       | .758  |        |
                                     | SAIT5     |       |       | .708  |        |
Sustainable AI seizing (SAISe)       | SAISe1    |       |       |       | .928   |
                                     | SAISe3    |       |       |       | .837   |
                                     | SAISe4    |       |       |       | .799   |
                                     | SAISe2    |       |       |       | .622   |
                                     | SAISe6    |       |       |       | .445*  |
Sustainable AI sensing (SAISn)       | SAISn3    |       |       |       |        | .764
                                     | SAISn2    |       |       |       |        | .665
                                     | SAISn1    |       |       |       |        | .642
                                     | SAISn4    |       |       |       |        | .602

Extraction method: Principal axis factoring.

Rotation method: Promax with Kaiser normalization.

* Removed.

Furthermore, Cronbach’s alpha (CA) was calculated to measure the internal consistency of the OSAIC scale. Across the five dimensions, all CA values exceed the 0.70 cutoff (Nunnally, 1978), indicating adequate reliability. The retained 27 items of the OSAIC scale achieved an overall CA of 0.908, indicating strong internal consistency. All five dimensions proved highly reliable, with the following CA scores: sustainable AI sensing = 0.773, seizing = 0.881, transforming = 0.913, learning = 0.965, and stakeholder integration = 0.926. Additionally, all corrected item-to-total correlations exceeded the 0.40 cutoff (Wang & Chuang, 2024), demonstrating that every item contributed meaningfully to its respective dimension (see Table 5). The “CA if item deleted” values further show that removing any remaining item would not improve reliability, supporting the stability of the retained set. The Table 5 results confirm the good psychometric qualities and adequate internal consistency of the refined scale in the pilot sample, supporting the retention of 27 items for further validation in the main study.

Table 5.

Corrected item-to-total correlations of the 27 items of the OSAIC scale using the pilot sample (n = 188).

Construct/Item  Mean  SD  Corrected item-to-total correlation  CA if item deleted  CA (overall) 
Sustainable AI learning (SAIL)          .965 
SAIL1: We document lessons from AI sustainability failures (e.g., bias incidents).  3.42  1.25  .876  .960   
SAIL2: Our organization maintains a repository of sustainable AI best practices.  3.38  1.33  .909  .957   
SAIL3: AI teams conduct post-mortems on the environmental impact of projects.  3.37  1.30  .874  .960   
SAIL4: We train employees on sustainable AI development practices.  3.21  1.35  .869  .960   
SAIL5: We host cross-departmental forums to share sustainable AI innovations.  3.24  1.27  .882  .959   
SAIL6: External audits of AI systems inform internal policy updates.  3.35  1.35  .891  .959   
SAIL7: Lessons from AI incidents are shared industry-wide.  3.39  1.35  .849  .962   
Stakeholder integration (SI)          .926 
SI1: We consult local communities before deploying AI systems that affect them.  3.23  1.07  .827  .906   
SI2: NGOs are involved in auditing our AI systems for fairness.  3.20  1.04  .830  .906   
SI3: We establish grievance mechanisms for AI-related sustainability harms.  3.39  1.05  .714  .921   
SI4: Our AI ethics board includes representatives from marginalized groups.  3.05  1.12  .819  .907   
SI5: We disclose AI’s environmental impacts to investors and regulators.  2.98  1.23  .850  .903   
SI6: We partner with academia to address AI sustainability gaps.  3.12  .95  .677  .925   
Sustainable AI transforming (SAIT)          .913 
SAIT1: We redesign workflows to minimize energy use in AI inference.  2.88  1.29  .797  .892   
SAIT2: Our data centers use renewable energy for AI computations.  2.97  1.20  .786  .899   
SAIT3: We reuse or adapt existing AI models to avoid redundant training.  3.04  1.11  .779  .895   
SAIT4: We repurpose decommissioned AI hardware for less resource-intensive tasks.  2.95  1.20  .723  .902   
SAIT5: We collaborate with suppliers to reduce AI supply chain emissions.  3.02  1.18  .738  .900   
SAIT6: We mandate energy caps for AI training jobs.  3.04  1.16  .720  .903   
Sustainable AI seizing (SAISe)          .881 
SAISe1: We allocate budget specifically for sustainable AI R&D.  3.26  1.16  .800  .826   
SAISe2: Leadership prioritizes funding for AI projects that reduce carbon emissions.  3.46  1.19  .688  .868   
SAISe3: Employees are incentivized to develop energy-efficient AI solutions.  3.47  1.25  .758  .842   
SAISe4: AI project approvals require sustainability impact assessments.  3.57  1.14  .728  .853   
Sustainable AI sensing (SAISn)          .773 
SAISn1: We monitor regulatory changes (e.g., EU AI Act) affecting AI sustainability.  3.69  .93  .530  .742   
SAISn2: Our organization tracks the energy consumption of AI model training.  3.64  1.21  .605  .707   
SAISn3: We have a dedicated team to evaluate AI’s social risks (e.g., job displacement).  3.48  1.10  .664  .669   
SAISn4: Sustainability criteria are included in AI procurement checklists.  3.53  .92  .524  .745   
Measurement model evaluation (first-order constructs)

The reliability and validity of the OSAIC scale measurement model were evaluated using the main study sample (n = 364). Table 6 presents the results for all first-order constructs. All item factor loadings, except for one sustainable AI sensing item (SAISn3 = 0.667), were higher than the recommended cutoff of 0.70, supporting indicator reliability (Hair et al., 2019). Additionally, the CA and composite reliability (CR) values of the OSAIC dimensions and sustainable innovation ranged between 0.916 and 0.935, well above the 0.70 cutoff and below the 0.95 upper bound (Hair et al., 2019), confirming excellent internal consistency. Convergent validity was confirmed with average variance extracted (AVE) values between 0.706 and 0.866, exceeding the 0.50 cutoff (Fornell & Larcker, 1981; Hair et al., 2019). These findings affirm that the OSAIC measurement model shows satisfactory reliability, internal consistency, and convergent validity.

Table 6.

Assessment of reliability and validity (n = 364).

Construct  Items  Loadings  CA  CR  AVE  VIF 
Sustainable AI learning (SAIL)  SAIL1  .841  .931  .931  .706  3.129 
  SAIL2  .854         
  SAIL3  .841         
  SAIL4  .840         
  SAIL5  .846         
  SAIL6  .842         
  SAIL7  .817         
Sustainable AI seizing (SAISe)  SAISe1  .906  .931  .932  .829  2.197 
  SAISe2  .914         
  SAISe3  .926         
  SAISe4  .895         
Sustainable AI Sensing (SAISn)  SAISn1  .935  .923  .924  .866  3.272 
  SAISn2  .930         
  SAISn4  .927         
Sustainable AI transforming (SAIT)  SAIT1  .885  .935  .936  .755  3.133 
  SAIT2  .846         
  SAIT3  .872         
  SAIT4  .883         
  SAIT5  .875         
  SAIT6  .852         
Stakeholder integration (SI)  SI1  .858  .916  .917  .706  3.210 
  SI2  .862         
  SI3  .819         
  SI4  .848         
  SI5  .834         
  SI6  .819         
Sustainable innovation (SINN)  SINN1  .864  .911  .911  .737   
  SINN2  .847         
  SINN3  .846         
  SINN4  .878         
  SINN5  .856         

Furthermore, to control for bias, we used statistical and procedural remedies (Podsakoff et al., 2003). Procedurally, respondents' anonymity was assured, the questionnaire items were randomized, and attention-check questions were used to minimize socially desirable responding. Screening questions ensured that only qualified respondents, who possessed relevant knowledge of sustainability and AI, proceeded to the main questionnaire. Additionally, Harman's single-factor test was performed, indicating that a single factor explains only 30.5 % of the variance and confirming that bias is not a leading issue (see Table 3). A supplemental measure entailed a full collinearity check using variance inflation factors (VIF) in SmartPLS (Kock, 2024). All constructs' VIF statistics were below the stringent threshold of 3.3 (see Tables 6 and 8), further supporting the inference that severe common method variance does not materialize (Chen et al., 2025; Hair et al., 2019). Overall, these findings suggest that bias does not significantly distort this study’s results.

Furthermore, discriminant validity (DV) was assessed using the Fornell–Larcker criterion and the heterotrait–monotrait ratio of correlations (HTMT). The Fornell–Larcker standard was met, as the square root of the AVE for each variable (diagonal values) exceeds the corresponding inter-factor correlations (Fornell & Larcker, 1981) (see Table 7). Further, all HTMT values remained below the conservative cutoff of 0.85 and the liberal cutoff of 0.90, with confidence intervals not including 1, providing further DV evidence for the OSAIC scale and sustainable innovation (Hair et al., 2019; Henseler et al., 2015) (see Table 7). Overall, these findings offer strong evidence for the reliability and validity of the first-order variables.
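For reference, HTMT can be computed directly from the raw item correlations (Henseler et al., 2015); a minimal sketch, assuming a hypothetical `blocks` dictionary mapping each construct name to its item columns:

```python
import itertools
import numpy as np
import pandas as pd

def htmt(df: pd.DataFrame, blocks: dict) -> pd.DataFrame:
    """Heterotrait-monotrait ratios from raw item data (respondents x items)."""
    corr = df.corr().abs()

    def mean_corr(items_a, items_b):
        sub = corr.loc[items_a, items_b].to_numpy()
        if items_a is items_b:                      # monotrait: upper triangle only
            return sub[np.triu_indices_from(sub, k=1)].mean()
        return sub.mean()                           # heterotrait block

    names = list(blocks)
    out = pd.DataFrame(np.nan, index=names, columns=names)
    for a, b in itertools.combinations(names, 2):
        hetero = mean_corr(blocks[a], blocks[b])
        mono = np.sqrt(mean_corr(blocks[a], blocks[a]) * mean_corr(blocks[b], blocks[b]))
        out.loc[b, a] = hetero / mono
    return out
```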

Table 7.

Assessment of DV.

Fornell & Larcker criterion
       | SAIL  | SAISe | SAISn | SAIT  | SI    | SINN
SAIL   | .840  |       |       |       |       |
SAISe  | .734  | .910  |       |       |       |
SAISn  | .762  | .724  | .931  |       |       |
SAIT   | .721  | .730  | .739  | .869  |       |
SI     | .730  | .724  | .750  | .760  | .840  |
SINN   | .445  | .416  | .450  | .419  | .466  | .858

HTMT criterion
       | SAIL  | SAISe | SAISn | SAIT  | SI
SAISe  | .788  |       |       |       |
SAISn  | .822  | .781  |       |       |
SAIT   | .772  | .781  | .793  |       |
SI     | .790  | .783  | .814  | .819  |
SINN   | .482  | .451  | .491  | .453  | .510

Note: SAIL = sustainable AI learning; SAISe = sustainable AI seizing; SAISn = sustainable AI sensing; SAIT = sustainable AI transforming; SI = stakeholder integration; SINN = sustainable innovation.

Assessment of higher-order construct validation

Given that OSAIC was conceptualized through five first-order dimensions (sustainable AI sensing, seizing, learning, transforming, and stakeholder integration), we used the two-stage PLS–SEM approach recommended by Becker et al. (2012) and Sarstedt et al. (2019). In the first stage, we computed latent variable scores (LVS) for the five dimensions; in the second stage, these scores served as indicators of the higher-order OSAIC construct. This procedure is widely recommended when structural models include higher-order constructs because it enhances estimation accuracy and avoids complications from model collinearity and complexity (Becker et al., 2012).
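A conceptual sketch of the two-stage logic follows. SmartPLS derives latent variable scores from the estimated outer weights of the stage-1 measurement model; the unit-weighted composites below are only an illustrative simplification of that step.

```python
import pandas as pd

def stage_one_scores(df: pd.DataFrame, blocks: dict) -> pd.DataFrame:
    """Stage 1: proxy latent variable scores for the five dimensions as
    standardized unit-weighted composites of their items (simplification)."""
    lvs = pd.DataFrame({dim: df[items].mean(axis=1) for dim, items in blocks.items()})
    return (lvs - lvs.mean()) / lvs.std(ddof=0)

# Stage 2: the five dimension scores become the reflective indicators of the
# higher-order OSAIC construct, on which the OSAIC -> SINN path is estimated.
```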

Table 8 presents the reliability, convergent validity, and overall model fit results for the OSAIC scale as a higher-order construct. The outer loadings of all five first-order dimensions on OSAIC ranged between 0.877 and 0.896, above the 0.70 cutoff. Additionally, the CA and CR of OSAIC were 0.934 and 0.935, respectively, above 0.70, and the AVE of 0.790 exceeded the required 0.50 cutoff (Hair et al., 2019). The dimensions' VIF values ranged between 2.917 and 3.272, below the 3.3 cutoff, indicating no collinearity issues (Hair et al., 2019). Moreover, the model fit indices were strong: the standardized root mean square residual (SRMR) = 0.038, far below the 0.08 cutoff, while the squared Euclidean distance (d_ULS) = 0.030 and geodesic distance (d_G) = 0.023 were close to zero, indicating a good fit (Cho et al., 2020). Further, the normed fit index (NFI) = 0.969 exceeded both the 0.90 and 0.95 benchmarks (Bentler & Bonett, 1980), demonstrating that the higher-order construct model achieved excellent overall fit and reinforcing the robustness of the measurement model.

Table 8.

Assessment of OSAIC reliability, validity, and model fit.

Construct  Items  Loadings  CA  CR  AVE  VIF 
OSAIC  Sustainable AI learning (SAIL)  .889  .934  .935  .790  3.129 
  Sustainable AI seizing (SAISe)  .877        2.917 
  Sustainable AI sensing (SAISn)  .896        3.272 
  Sustainable AI transforming (SAIT)  .886        3.133 
  Stakeholder integration (SI)  .895        3.210 

Model fit indices SRMR = 0.038; d_ULS = 0.030; d_G = 0.023; NFI = 0.969.


Table 9 presents the discriminant validity results for OSAIC and sustainable innovation. The results meet the Fornell–Larcker criterion: the square roots of the AVE for OSAIC (0.889) and sustainable innovation (0.858) exceed their inter-construct correlation (0.495). Further, the HTMT value of 0.535 was below the conservative cutoff of 0.85, providing additional evidence of discriminant validity. These findings verify that OSAIC is empirically distinguishable from sustainable innovation while retaining a theoretical link congruent with the DCT.

Table 9.

Assessment of DV of Higher-order construct (OSAIC).

Construct                                           | Fornell & Larcker criterion | HTMT criterion
                                                    | SINN   | OSAIC              | SINN   | OSAIC
Sustainable innovation (SINN)                       | .858   |                    | –      |
Organizational sustainable AI capabilities (OSAIC)  | .495   | .889               | .535   | –

Assessment of nomological validity (n = 364)

We examined the structural relationship between OSAIC and sustainable innovation to evaluate nomological validity. Following the DCT of Teece et al. (1997), we postulate that organizations with a higher level of the higher-order construct (OSAIC) exhibit a higher level of sustainable innovation. The stage-2 findings in Table 10 and Fig. 2 support this proposition (H1): OSAIC has a positive and significant impact on sustainable innovation (β = 0.495, p < .001), explaining 24.5 % of its variance. In addition, predictive relevance was validated, as Q² = 0.241 exceeds the cutoff of zero, confirming meaningful predictive accuracy. Moreover, the effect size f² = 0.325 approaches the 0.35 benchmark for a large effect (Cohen, 2013), confirming that OSAIC contributes substantially to sustainable innovation. These results show that OSAIC behaves as theoretically predicted in relation to an established construct, thereby confirming its nomological validity.
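For transparency, with OSAIC as the only exogenous construct, the effect size follows directly from the R² reported in Table 10:

f² = (R²_included − R²_excluded) / (1 − R²_included) = (0.245 − 0) / (1 − 0.245) ≈ 0.325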

Table 10.

Assessment of the structural model.

Path  Beta  SD  t-value  p-value  R²  Q²  f²  Hypothesis  Decision 
OSAIC → SINN  .495  .043  11.501  .000  .245  .241  .325  H1  Supported 
Fig. 2.

Structural model.

It is noteworthy that sustainable innovation was represented as a composite based on latent variable scores in the two-stage higher-order construct procedure. Therefore, SmartPLS displayed "NaN" for the sustainable innovation outer loadings in the structural model diagram. This is expected in reflective higher-order construct estimation and does not signal a statistical problem; the measurement properties of SINN were already validated in the first-order model (see Table 6), ensuring adequate preparation for the subsequent structural analysis.

Additional robustness tests

PLS predict

To further assess the robustness of the OSAIC measurement model, and to complement the multigroup analysis reported below, predictive validity was examined using the PLS-predict approach (Shmueli et al., 2019). The Table 11 results demonstrate that the Q²_predict value of 0.239 exceeds zero, signifying that the model possesses substantial predictive relevance. Furthermore, the RMSE (0.874) and MAE (0.713) values derived from PLS–SEM were lower than those of the linear regression benchmark (RMSE = 0.882; MAE = 0.715), indicating that the OSAIC model exhibits better predictive accuracy than a naïve linear benchmark. Collectively, these results substantiate the predictive validity of the OSAIC construct, offering additional evidence that the scale not only fits the data but also retains out-of-sample predictive capability.
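A simplified analogue of this comparison can be run with k-fold out-of-sample predictions. PLS-predict proper benchmarks indicator-level predictions inside SmartPLS, so the sketch below only mirrors its RMSE/MAE logic on hypothetical score matrices:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

def oos_errors(X, y, folds=10, seed=42):
    """Out-of-sample RMSE/MAE from k-fold predictions (X: 2-D array, y: 1-D)."""
    cv = KFold(n_splits=folds, shuffle=True, random_state=seed)
    pred = cross_val_predict(LinearRegression(), X, y, cv=cv)
    return {"RMSE": float(np.sqrt(np.mean((y - pred) ** 2))),
            "MAE": float(np.mean(np.abs(y - pred)))}

# Hypothetical usage, mirroring Table 11:
# errors_pls = oos_errors(osaic_scores.reshape(-1, 1), sinn_scores)  # construct-based
# errors_lm  = oos_errors(all_indicator_matrix, sinn_scores)         # naive LM benchmark
```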

Table 11.

Assessment of predictive validity using PLS-predict.

Path  Q2 predict  PLS-SEM-RMSE  PLS-SEM-MAE  LM-RMSE  LM-MAE 
LVS_SINN  .239  .874  .713  .882  .715 

Note: LVS_SINN = latent variable score of sustainable innovation; PLS–SEM = partial least squares structural equation modeling; RMSE = root mean square error; MAE = mean absolute error; LM = linear model.

Multi-group analysis (MGA)

We used MGA, splitting the main study sample (n = 364) into two groups: Asia (n = 168) and Non_Asia (n = 196). Prior to testing group differences, we checked the measurement model in each group individually (Appendix B). The findings confirmed that all constructs satisfied the measurement model cutoffs (CA and CR > 0.70, AVE > 0.50), supporting group comparisons. In addition, the structural model results confirmed OSAIC's significant impact on sustainable innovation in both regions (β = 0.368, p < .001 for Asia; β = 0.597, p < .001 for non-Asia). However, the MGA estimates indicated a statistically significant difference between the two regions (Δβ = –.228, p = .008), implying that OSAIC has a stronger impact on sustainable innovation in non-Asian contexts (see Appendix C, Tables A–C).
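SmartPLS implements MGA with bootstrap- or permutation-based comparisons of group-specific path estimates. A minimal permutation sketch of the same logic on composite scores, with the standardized simple-regression coefficient standing in for the PLS path:

```python
import numpy as np

def path_beta(x, y):
    """Standardized simple-regression coefficient, a proxy for the PLS path."""
    return np.corrcoef(x, y)[0, 1]

def permutation_mga(x, y, is_asia, n_perm=5000, seed=1):
    """Permutation test for the between-group path difference.

    x, y:    composite scores for OSAIC and SINN (1-D arrays)
    is_asia: boolean group indicator
    """
    rng = np.random.default_rng(seed)
    observed = path_beta(x[is_asia], y[is_asia]) - path_beta(x[~is_asia], y[~is_asia])
    diffs = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(is_asia)          # shuffle group labels
        diffs[i] = path_beta(x[perm], y[perm]) - path_beta(x[~perm], y[~perm])
    p_value = float(np.mean(np.abs(diffs) >= abs(observed)))
    return observed, p_value
```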

Discussion and conclusion

The findings provide strong empirical support for the OSAIC construct and its theoretical contribution to sustainable innovation. In alignment with DCT assumptions (Teece et al., 1997), the study finds that companies with advanced AI capabilities (e.g., learning, seizing, sensing, stakeholder integration, and transformation capabilities) are more likely to produce innovation-oriented outcomes, particularly sustainable innovation. This is consistent with earlier studies demonstrating that companies utilizing digital capabilities innovate better by restructuring their processes and resources (Nasiri et al., 2023; Nour & Arbussà, 2024). Moreover, by explicitly incorporating sustainability into these routines, OSAIC extends the existing digital dynamic capabilities literature, which typically overlooks AI's environmental and social externalities (Florek-Paszkowska & Ujwary-Gil, 2025; Sjödin et al., 2023).

The significant and positive effect of OSAIC on sustainable innovation (β = 0.495, t = 11.501, p < .001, R² = 0.245) reveals that AI governance focused on sustainability is a facilitator of innovation rather than a barrier to it. This finding supports ambidexterity theory, which posits that organizations can pursue exploration and exploitation simultaneously through integrative routines (O’Reilly & Tushman, 2008). In this study, organizations mitigate the AI sustainability paradox, meeting the competing goals of AI-led growth and sustainability risk management (Mancuso et al., 2025), by institutionalizing sustainability protections in their AI capabilities. Earlier studies suggest that companies often view sustainability and innovation as competing imperatives (Calic et al., 2020; Florek-Paszkowska & Ujwary-Gil, 2025); however, our findings show that capabilities specifically developed to balance both paradigms can produce synergistic outcomes.

Furthermore, the robustness checks confirm this result. The PLS-predict results indicate that OSAIC demonstrates predictive validity superior to the linear benchmark, aligning with Shmueli et al.’s (2019) recommendation to evaluate the out-of-sample performance of a newly developed construct. This further confirms that OSAIC is not only statistically sound but also practically useful for predicting and enhancing sustainable performance (Yu et al., 2024). In addition, the MGA identified regional variations: the OSAIC–sustainable innovation relationship was significantly stronger in non-Asia than in Asia. This result is consistent with institutional theory, which highlights that contextual features such as regulations and stakeholder demands affect capabilities' effectiveness (Daddi et al., 2016). More stringent sustainability regulations and stronger stakeholder demands in Europe and North America may explain why OSAIC shows a larger association with sustainable innovation in non-Asian cases than in Asian cases, where the implementation of sustainability regulations varies (Doh & Guay, 2006; Kinderman, 2020).

Theoretical implications

First, this research extends the DCT of Teece (2007) and Teece et al. (1997) by positioning sustainability as a meta-capability that permeates AI governance. Although earlier studies have extended the DCT to digital transformation (Ghosh et al., 2022; Held et al., 2025) and sustainability-initiative practices (Ortiz-Avram et al., 2025), these domains have largely developed in parallel. The OSAIC development integrates these perspectives by incorporating sustainability into the regular routines of seizing, sensing, and transforming (Engelmann, 2024; Quayson et al., 2023), further enriched by learning and stakeholder integration. Furthermore, it responds to recent calls for more integrative studies of organizational adaptation to technological disruption, while addressing serious societal problems simultaneously (Faruque et al., 2024).

Second, this study contributes to the paradox and organizational ambidexterity literature by presenting an empirical depiction of how firms can manage the AI sustainability paradox (Smith & Lewis, 2011), that is, the contradiction between the promise of AI for driving sustainability and its related environmental and social threats (Calic et al., 2020; Mancuso et al., 2025). Contrary to prior work that often portrays sustainability as a paradigm competing with or restricting innovation (Hervas-Oliver et al., 2024), our findings suggest that embedding sustainability in AI capabilities can drive innovation (Arroyabe et al., 2024). This finding provides a theoretical foundation for ambidexterity assumptions (O’Reilly & Tushman, 2008), demonstrating that exploration and exploitation can co-exist within deliberately built capabilities.

Finally, this study makes important methodological contributions through the formulation and validation of a new multidimensional OSAIC measurement instrument. The higher-order reflective construct was thoroughly assessed using established scale development techniques (Hinkin, 1995), measurement validation (Wang & Chuang, 2024), predictive ability checks (Shmueli et al., 2019), and robustness checks, including multi-group comparisons. Our scale provides a stable instrument for researchers to empirically examine how organizations implement and deploy OSAIC. Additionally, it opens avenues for future investigation of boundary conditions (e.g., cultural settings, industry type, and regulatory environments), for testing moderation and mediation mechanisms, and for relating OSAIC to broader outcomes such as ethical compliance, organizational performance, and stakeholder trust. Moreover, this scale provides a solid basis for driving research at the nexus of AI, sustainability, and strategic management as AI reshapes industries and society.

Practical implications

The validated OSAIC scale provides managers with a diagnostic instrument to evaluate their organization's preparedness for sustainable AI implementation. This study’s findings indicate that organizations exhibiting elevated OSAIC levels are more capable of generating innovation from their AI investments, emphasizing sustainability instead of regarding it merely as a compliance issue. This study presents various practical suggestions.

First, firms should integrate sustainability within their AI governance frameworks. This study establishes that OSAIC not only prevents the environmental and social risks of AI but also accelerates innovation performance. This finding challenges the long-established orthodoxy of a sustainability–performance trade-off and illustrates the value of incorporating sustainability metrics into seizing (e.g., pursuing environmentally sustainable AI projects), sensing (e.g., tracking the energy consumption of AI), and transforming (e.g., reorganizing operations in alignment with the circular economy).

Second, the strong contribution of stakeholder integration to OSAIC highlights the importance of external engagement. Firms must move beyond formal stakeholder consultation toward genuine co-design, collaborative audits, and transparency tools. Embracing these practices increases legitimacy and learning, which our study confirmed as having a positive impact on sustainable innovation outcomes.

Third, the MGA reveals regional differences with useful contextual implications. In non-Asian regions, where stakeholder influence and regulatory structures are relatively robust, the association between OSAIC and sustainable innovation is stronger. This suggests that organizations operating in Asia should strengthen their sustainability governance arrangements to meet international standards and remain competitive in international markets. Asian policymakers and business groups can support this by introducing stronger regulatory arrangements and incentive systems that encourage firms to develop sustainability-oriented AI capabilities. For policymakers and regulators more broadly, the OSAIC framework offers a valuable lens for designing guidelines and assessment tools that go beyond technical compliance.

Lastly, for practitioners across territories, the OSAIC scale provides a practical instrument for benchmarking and capability development. Managers can use the scale to identify capability gaps, prioritize expenditure on education programs and stakeholder engagement, and track progress over time. By aligning OSAIC scores with innovation performance, organizations can improve the quality of their strategic decisions on AI governance and their coherence with sustainability goals.

Limitations and future directions for researchers

First, this study is based on data collected through a cross-sectional, self-reported survey with a single informant per firm. Same-source/common method variance may inflate associations and precludes causal interpretation. Although both procedural remedies (anonymity and attention checks) and statistical remedies (Harman's single-factor test and VIF) suggest that bias is not a major issue, it cannot be ruled out completely. As such, the magnitude of the OSAIC-sustainable innovation association should be interpreted cautiously as associative, not causal. Future work could enhance validity through several methods. First, scholars are advised to adopt multi-informant designs (e.g., chief information officers/AI leaders, sustainability officers, operations/engineering staff) and report interrater agreement and aggregation statistics, such as the intraclass correlation coefficient ICC(1) (individual-level reliability), ICC(2) (group-level reliability), and the within-group agreement index (rwg); a computational sketch follows. Second, studies could triangulate external or archival outcomes, such as environmental, social, and governance (ESG), corporate social responsibility, or Global Reporting Initiative reports; supplier audits; green patent counts; independent algorithmic impact assessments; cloud/energy metering data for AI workloads; and third-party sustainability ratings. Third, other design remedies (e.g., marker variables, matched datasets, and temporal separation) are also encouraged.
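
The Python sketch below shows one conventional way to compute these aggregation statistics for a multi-informant design: the single-item rwg of James et al. (1984) and ICC(1)/ICC(2) from a one-way random-effects ANOVA. Column names and data are hypothetical, and balanced groups (equal informants per firm) are assumed.

# Computing within-group agreement (rwg) and intraclass correlations ICC(1)/ICC(2)
# for a design in which several informants per firm rate the same OSAIC item.
import numpy as np
import pandas as pd

def rwg(ratings: np.ndarray, n_options: int = 7) -> float:
    """Single-item within-group agreement index (James et al., 1984)."""
    expected_var = (n_options**2 - 1) / 12.0   # variance of a uniform null distribution
    return 1.0 - ratings.var(ddof=1) / expected_var

def icc_1_2(df: pd.DataFrame, group: str = "firm", value: str = "rating"):
    """ICC(1) and ICC(2) from a one-way random-effects ANOVA (balanced groups)."""
    groups = [g[value].to_numpy() for _, g in df.groupby(group)]
    k = len(groups[0])                         # informants per firm (assumed equal)
    grand_mean = df[value].mean()
    ms_between = k * sum((g.mean() - grand_mean) ** 2 for g in groups) / (len(groups) - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (len(groups) * (k - 1))
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between
    return icc1, icc2

# Example: three informants for each of four firms rating one OSAIC item.
data = pd.DataFrame({
    "firm":   ["A"]*3 + ["B"]*3 + ["C"]*3 + ["D"]*3,
    "rating": [6, 6, 7,  4, 5, 4,  7, 6, 6,  3, 3, 4],
})
print({f: rwg(g["rating"].to_numpy()) for f, g in data.groupby("firm")})
print(icc_1_2(data))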

Second, the sampling technique, purposive and snowball recruitment via LinkedIn and professional networks, introduced a bias toward technology-oriented industries with high AI adoption and toward respondents from Asia, Europe, and North America. Although this was intentional, to facilitate access to domain expertise, it limits generalizability to SMEs and low-AI settings. Future research should use probability or panel-based sampling with quotas by industry, company size, and region (including under-represented settings such as Africa and supply-chain-dominated industries) and consider weighting to increase representativeness.

Third, although the study conducted MGA and PLS-predict tests to show robustness across regions, group sample sizes were relatively small, and the Asia vs. non-Asia dichotomy may obscure finer contextual variation. Future studies require larger, matched groups and measurement checks suitable for PLS (e.g., the measurement invariance of composite models [MICOM] procedure for configural and compositional invariance) to ascertain whether the OSAIC measure generalizes across cultures and regulatory environments before structural paths are compared; the permutation logic underlying such comparisons is sketched below.
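
As a simplified stand-in for the permutation-based MGA implemented in PLS software (unit-weighted composites replace PLS construct scores here, so this is illustrative only), the Python sketch below tests whether a standardized path coefficient differs between two groups.

# Simplified permutation test for a group difference in a standardized path
# coefficient (with one predictor, the standardized slope equals Pearson's r).
import numpy as np

rng = np.random.default_rng(0)

def std_beta(x, y):
    return np.corrcoef(x, y)[0, 1]

def mga_permutation(x, y, groups, n_perm=5000):
    g = np.asarray(groups)
    obs = std_beta(x[g == 0], y[g == 0]) - std_beta(x[g == 1], y[g == 1])
    perm = np.empty(n_perm)
    for i in range(n_perm):
        s = rng.permutation(g)                 # reshuffle group labels
        perm[i] = std_beta(x[s == 0], y[s == 0]) - std_beta(x[s == 1], y[s == 1])
    p_two_tailed = np.mean(np.abs(perm) >= abs(obs))
    return obs, p_two_tailed

# Usage: x = OSAIC composite scores, y = sustainable innovation composite scores,
# groups = 0 for Asia, 1 for non-Asia (all hypothetical inputs).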

Fourth, although this work supported nomological validity by observing a positive and significant effect of OSAIC on sustainable innovation, it was restricted to a single outcome construct. Future studies should investigate how OSAIC affects other theory-relevant outcomes such as AI ethical compliance, organizational performance, and stakeholder trust. Moreover, investigating mediators (e.g., process reconfiguration, sustainable AI learning) and moderators (e.g., digital maturity, environmental uncertainty, regulatory pressure, top management commitment) can further clarify mechanisms and boundary conditions, advancing theory.

Lastly, we estimated PLS-SEM under a reflective higher-order specification, which is appropriate for scale development under a predictive perspective. Future studies should cross-validate the scale using covariance-based or Bayesian structural equation modeling to compare model fit and factor structure, and should compare different estimators of higher-order constructs (repeated indicators, hybrid, and two-stage approaches); a minimal cross-validation sketch follows. Moreover, employing longitudinal or multi-wave designs (e.g., cross-lagged panels) is suggested to evaluate capability development over time and strengthen causal inference.
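
As one possible starting point for such cross-validation (a sketch under stated assumptions, not the authors' procedure), the Python snippet below fits the second-order OSAIC measurement model with the open-source semopy package; item columns follow Appendix B, and the data file name is hypothetical.

# Sketch: covariance-based CFA of the reflective higher-order OSAIC structure
# using semopy (lavaan-style syntax). Item names follow Appendix B.
import pandas as pd
import semopy

spec = """
SAIL  =~ SAIL1 + SAIL2 + SAIL3 + SAIL4 + SAIL5 + SAIL6 + SAIL7
SAISe =~ SAISe1 + SAISe2 + SAISe3 + SAISe4
SAISn =~ SAISn1 + SAISn2 + SAISn4
SAIT  =~ SAIT1 + SAIT2 + SAIT3 + SAIT4 + SAIT5 + SAIT6
SI    =~ SI1 + SI2 + SI3 + SI4 + SI5 + SI6
OSAIC =~ SAIL + SAISe + SAISn + SAIT + SI
"""

df = pd.read_csv("osaic_items.csv")       # hypothetical item-level dataset
model = semopy.Model(spec)
model.fit(df)
print(semopy.calc_stats(model))           # chi-square, CFI, TLI, RMSEA, AIC/BIC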

Conclusion

As AI becomes increasingly central to organizational decision-making, its sustainability implications warrant greater scholarly and managerial attention. This study developed and validated the OSAIC scale as a reflective higher-order construct comprising five dimensions: sustainable AI learning, seizing, sensing, stakeholder integration, and transformation. Grounded in the DCT, we posit that OSAIC is a sustainability-oriented meta-capability that enables organizations to manage the AI sustainability paradox, the tension between leveraging AI for growth and reducing its ecological and social costs. The development of OSAIC followed a multi-phase process involving item generation from the literature and expert interviews, a Q-sorting procedure, EFA, and PLS-SEM. The OSAIC scale demonstrated strong psychometric properties. The study outcomes validate that OSAIC significantly influences sustainable innovation, confirming its practical and theoretical relevance. Additional robustness tests using PLS-predict and MGA further supported the scale's predictive validity and demonstrated contextual differentiation across Asia, Europe, and North America. The current work expands the DCT by incorporating sustainability as a meta-capability aimed at bolstering ambidexterity, and extends the paradox perspective by highlighting how companies can balance competing demands, providing future studies with a validated scale.

Declarations

Authors' contributions

SAQ conceived the study model. FS designed the online surveys and conducted the expert meetings. SAQ and FS interpreted the data and revised the manuscript critically for important intellectual content. SAQ and FS designed the research and the analysis. SAQ wrote the initial draft, and FS made substantial contributions. SAQ and FS made critical comments and amendments.

Ethical approval

This study was approved by Liaocheng University's Ethical Committee, with approval number LU-ECBS-2024-005. Before approval, this research proposal, including its aims, research design, and approaches to participant involvement, underwent a rigorous review against the ethical standards outlined in the Ethical Principles of Psychologists and the Code of Conduct of the American Psychological Association (APA). The assigned approval confirms that the research design, participants' recruitment approach, and data handling practices align with the ethical requirements for research involving human participants.

Informed consent

Before participation, all respondents were presented with a detailed informed consent form explaining the purpose of the study, their rights as participants, and assurances of anonymity and confidentiality. The form clearly stated that participation was entirely voluntary, that no personally identifiable information would be collected, and that participants could withdraw at any point without any consequence. Consent was obtained electronically by requiring respondents to confirm their agreement before proceeding with the questionnaire. The consent form was also reviewed by the experts involved in our study. The study adhered to the ethical standards of academic research involving human participants, in line with the Declaration of Helsinki and institutional guidelines.

Data availability statement

The datasets used for exploratory factor analysis will be made available upon reasonable request, subject to the co-authors' mutual consent and depending on the nature of the request. However, no information related to the participants or their demographics will be provided in any form.

CRediT authorship contribution statement

Sikandar Ali Qalati: Writing – review & editing, Writing – original draft, Validation, Supervision, Software, Resources, Project administration, Investigation, Funding acquisition, Conceptualization. Faiza Siddiqui: Writing – review & editing, Writing – original draft, Software, Methodology, Investigation, Formal analysis, Data curation.

Declaration of competing interest

The authors declare that they have no competing interests.

Acknowledgments

This study is supported by the project "Research on the Economic Performance and Innovation Performance of Chinese Manufacturing Enterprises" [321052320].

Appendix A. List of generated items

  • 1. Sustainable AI sensing

From literature

  • Q1. We monitor regulatory changes (e.g., EU AI Act) affecting AI sustainability.

  • Q2. Our organization tracks the energy consumption of AI model training.

  • Q3. We assess potential bias in AI systems before deployment.

  • Q4. We benchmark our AI sustainability practices against industry standards.

  • Q5. We use tools to predict the environmental impact of new AI projects.

  • Q6. We evaluate the carbon footprint of third-party AI tools before adoption.

  • Q7. Our AI strategy includes alignment with the UN Sustainable Development Goals.

  • Q8. We conduct lifecycle assessments for AI systems.

From interviews

  • Q9. We have a dedicated team to evaluate AI’s social risks (e.g., job displacement).

  • Q10. Sustainability criteria are included in AI procurement checklists.

  • Q11. We audit data sources for ethical and environmental concerns.

  • Q12. We track AI’s water usage in data centers.

  • Q13. Local community impacts are assessed before AI deployment.

  • 2. Sustainable AI seizing

From literature

  • Q14. We allocate budget specifically for sustainable AI R&D.

  • Q15. Leadership prioritizes funding for AI projects that reduce carbon emissions.

  • Q16. Employees are incentivized to develop energy-efficient AI solutions.

  • Q17. We partner with vendors who prioritize renewable energy for AI workloads.

  • Q18. AI project approvals require sustainability impact assessments.

  • Q19. Our AI investment decisions consider long-term environmental costs.

  • Q20. We sponsor hackathons for sustainable AI innovation.

From interviews

  • Q21. Our AI teams have sustainability KPIs alongside performance metrics.

  • Q22. We prioritize open-source AI models to reduce duplication of energy-intensive training.

  • Q23. Internal grants support AI projects addressing environmental justice.

  • Q24. AI vendors must disclose energy efficiency certifications.

  • Q25. We allocate compute resources based on sustainability thresholds.

  • 3. Sustainable AI transforming

From literature

  • Q26. We redesign workflows to minimize energy use in AI inference.

  • Q27. Our data centers use renewable energy for AI computations.

  • Q28. We reuse or adapt existing AI models to avoid redundant training.

  • Q29. Hardware upgrades prioritize energy-efficient processors for AI tasks.

  • Q30. We optimize AI algorithms to reduce computational waste (e.g., pruning, quantization).

  • Q31. AI model training is scheduled during low-carbon energy availability.

  • Q32. We adopt federated learning to reduce data center reliance.

From interviews

  • Q33. We repurpose decommissioned AI hardware for less resource-intensive tasks.

  • Q34. AI model refresh cycles consider environmental trade-offs.

  • Q35. We collaborate with suppliers to reduce AI supply chain emissions.

  • Q36. Cooling systems in data centers are optimized for energy efficiency.

  • Q37. We mandate energy caps for AI training jobs.

  • 4. Sustainable AI learning

From literature

  • Q38. We document lessons from AI sustainability failures (e.g., bias incidents).

  • Q39. Our organization maintains a repository of sustainable AI best practices.

  • Q40. AI teams conduct post-mortems on the environmental impact of projects.

  • Q41. We train employees on sustainable AI development practices.

  • Q42. Sustainability metrics are included in AI performance reviews.

  • Q43. We publish case studies on sustainable AI implementations.

  • Q44. External audits inform our AI sustainability policies.

From interviews

  • Q45. We host cross-departmental forums to share sustainable AI innovations.

  • Q46. External audits of AI systems inform internal policy updates.

  • Q47. We publish annual reports on AI’s sustainability performance.

  • Q48. AI ethics review boards include sustainability experts.

  • Q49. Lessons from AI incidents are shared industry-wide.

  • 5. Stakeholder integration

From literature

  • Q50. We consult local communities before deploying AI systems that affect them.

  • Q51. NGOs are involved in auditing our AI systems for fairness.

  • Q52. We establish grievance mechanisms for AI-related sustainability harms.

  • Q53. Our AI ethics board includes representatives from marginalized groups.

  • Q54. We disclose AI’s environmental impacts to investors and regulators.

  • Q55. Labor unions are engaged in AI-driven automation decisions.

  • Q56. We partner with academia to address AI sustainability gaps.

From interviews

  • Q57. Indigenous groups are engaged in AI projects using their data.

  • Q58. Labor unions participate in decisions about AI-driven automation.

  • Q59. We co-design AI solutions with communities impacted by climate change.

  • Q60. Customers can opt out of high-emission AI services.

  • Q61. We compensate communities affected by AI data collection.

  • Q62. Suppliers must adhere to our AI sustainability charter.

Appendix B. Assessment of the measurement model Asia (n = 168) vs Non_Asia (n = 196)

Construct  Items  Asia loadings  Non_Asia loadings  Asia CA  Non_Asia CA  Asia CR  Non_Asia CR  Asia AVE  Non_Asia AVE 
Sustainable AI learning (SAIL)  SAIL1  .844  .842  .935  .928  .936  .925  .721  .689 
  SAIL2  .829  .874             
  SAIL3  .835  .846             
  SAIL4  .824  .851             
  SAIL5  .839  .851             
  SAIL6  .828  .856             
  SAIL7  .811  .824             
Sustainable AI seizing (SAISe)  SAISe1  .913  .899  .925  .938  .925  .939  .816  .844 
  SAISe2  .920  .908             
  SAISe3  .941  .913             
  SAISe4  .899  .893             
Sustainable AI Sensing (SAISn)  SAISn1  .940  .929  .914  .931  .914  .934  .853  .878 
  SAISn2  .944  .916             
  SAISn4  .928  .925             
Sustainable AI transforming (SAIT)  SAIT1  .881  .888  .937  .932  .938  .934  .762  .747 
  SAIT2  .835  .855             
  SAIT3  .868  .874             
  SAIT4  .876  .892             
  SAIT5  .879  .873             
  SAIT6  .847  .855             
Stakeholder integration (SI)  SI1  .874  .846  .922  .908  .923  .911  .720  .687 
  SI2  .859  .865             
  SI3  .801  .831             
  SI4  .813  .873             
  SI5  .824  .842             
  SI6  .798  .834             
Sustainable innovation (SINN)  SINN1  .879  .852  .906  .916  .906  .920  .727  .748 
  SINN2  .846  .846             
  SINN3  .864  .831             
  SINN4  .894  .866             
  SINN5  .843  .868             

Appendix C. Assessment of the structural model considering Asia (n = 168) vs Non_Asia (n = 196)

Table A, Table B, Table C

Table A.

Asia structural model.

Path  Beta  S.D  p-value  R2  Hypothesis  Decision 
OSAIC → SINN  .368  .071  .000  .136  H1  Supported 
Table B.

Non_Asia structural model.

Path  Beta  S.D  p-value  R2  Hypothesis  Decision 
OSAIC → SINN  .597  .051  .000  .356  H1  Supported 
Table C.

Multigroup analysis.

Path  Asia Beta  Non_Asia Beta  Δβ  p-value  R2  Hypothesis  Decision 
OSAIC → SINN  .368  .597  -.228  .008  .136  H1  Supported 

References
[Abuzaid, 2024]
Abuzaid, A.N. (2024, April 18–19). AI and business innovation: Analyzing disruptive effects on industry dynamics and competitive strategies. Paper presented at the 2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS). https://doi.org/10.1109/ICKECS61492.2024.10617266.
[Adams et al., 2016]
R. Adams, S. Jeanrenaud, J. Bessant, D. Denyer, P. Overy.
Sustainability-oriented innovation: A systematic review.
International Journal of Management Reviews, 18 (2016), pp. 180-205
[Ahmadisakha and Andrikopoulos, 2024]
S. Ahmadisakha, V. Andrikopoulos.
Architecting for sustainability of and in the cloud: A systematic literature review.
Information and Software Technology, 171 (2024),
[Almheiri et al., 2025]
H.M. Almheiri, S.Z. Ahmad, K. Khalid, A.H. Ngah.
Examining the impact of artificial intelligence capability on dynamic capabilities, organizational creativity and organization performance in public organizations.
Journal of Systems and Information Technology, 27 (2025), pp. 1-20
[Amui et al., 2017]
L.B.L. Amui, C.J.C. Jabbour, A.B.L. de Sousa Jabbour, D. Kannan.
Sustainability as a dynamic organizational capability: A systematic review and a future agenda toward a sustainable transition.
Journal of Cleaner Production, 142 (2017), pp. 308-322
[Argote, 2012]
L. Argote.
Organizational learning: Creating, retaining and transferring knowledge.
Springer Science and Business Media, (2012),
[Argyris and Schön, 1997]
C. Argyris, D.A. Schön.
Organizational learning: A theory of action perspective.
Reis, (1997), pp. 345-348
[Arroyabe et al., 2024]
M.F. Arroyabe, C.F.A. Arranz, I. Fernandez De Arroyabe, J.C. Fernandez de Arroyabe.
Analyzing AI adoption in European SMEs: A study of digital capabilities, innovation, and external environment.
Technology in Society, 79 (2024),
[Barney, 1991]
J. Barney.
Firm resources and sustained competitive advantage.
Journal of Management, 17 (1991), pp. 99-120
[Becker et al., 2012]
J.-M. Becker, K. Klein, M. Wetzels.
Hierarchical latent variable models in PLS-SEM: Guidelines for using reflective-formative type models.
Long Range Planning, 45 (2012), pp. 359-394
[Bello et al., 2024]
A.O. Bello, T.T. Okanlawon, P.O. Gbenga, A.A. Abdulraheem, O.T. Olagoke.
Critical success factors (CSFs) for the implementation of distributed ledger technology (DLT) in the Nigerian construction industry.
Construction Innovation (Ahead-of-print), (2024),
[Bentler and Bonett, 1980]
P.M. Bentler, D.G. Bonett.
Significance tests and goodness of fit in the analysis of covariance structures.
Psychological Bulletin, 88 (1980), pp. 588-606
[Buhmann et al., 2024]
K. Buhmann, A. Fonseca, N. Andrews, G. Amatulli.
The future of meaningful stakeholder engagement: Integrating values, norms and practices.
The routledge handbook on meaningful stakeholder engagement, 1st ed., Routledge, (2024), pp. 435-447
[Buhrmester et al., 2018]
M.D. Buhrmester, S. Talaifar, S.D. Gosling.
An evaluation of amazon’s mechanical Turk, its rapid rise, and its effective use.
Perspectives on Psychological Science, 13 (2018), pp. 149-154
[Buzzao and Rizzi, 2021]
G. Buzzao, F. Rizzi.
On the conceptualization and measurement of dynamic capabilities for sustainability: Building theory through a systematic literature review.
Business Strategy and the Environment, 30 (2021), pp. 135-175
[Calic et al., 2020]
G. Calic, A. Shevchenko, M. Ghasemaghaei, N. Bontis, Z. Ozmen Tokcan.
From sustainability constraints to innovation: Enhancing innovation by simultaneously attending to sustainability and commercial imperatives.
Sustainability Accounting, Management and Policy Journal, 11 (2020), pp. 695-715
[Cezarino et al., 2019]
L.O. Cezarino, M.F.R. Alves, A.C.F. Caldana, L.B. Liboni.
Dynamic capabilities for sustainability: Revealing the systemic key factors.
Systemic Practice and Action Research, 32 (2019), pp. 93-112
[Chen et al., 2025]
L. Chen, S.A. Qalati, M. Fan.
Effects of sustainable innovation on stakeholder engagement and societal impacts: The mediating role of stakeholder engagement and the moderating role of anticipatory governance.
Sustainable Development, 33 (2025), pp. 2406-2428
[Cho et al., 2020]
G. Cho, H. Hwang, M. Sarstedt, C.M. Ringle.
Cutoff criteria for overall model fit indexes in generalized structured component analysis.
Journal of Marketing Analytics, 8 (2020), pp. 189-202
[Cohen, 2013]
J. Cohen.
Statistical power analysis for the behavioral sciences.
Routledge, (2013),
[Daddi et al., 2016]
T. Daddi, F. Testa, M. Frey, F. Iraldo.
Exploring the link between institutional pressures and environmental management systems effectiveness: An empirical study.
Journal of Environmental Management, 183 (2016), pp. 647-656
[Dicce and Ewers, 2021]
R.P. Dicce, M.C. Ewers.
Becoming linked: Leveraging professional networks for elite surveys and interviews.
Geographical fieldwork in the 21st century, Routledge, (2021), pp. 160-171
[Divya et al., 2024]
D. Divya, R. Jain, P. Chetty, V. Siwach, A. Mathur.
The mediating effect of leadership in artificial intelligence success for employee-engagement.
Management Decision (Ahead-of-print), (2024),
[Doh and Guay, 2006]
J.P. Doh, T.R. Guay.
Corporate social responsibility, public policy, and NGO activism in Europe and the United States: An institutional-stakeholder perspective.
Journal of Management Studies, 43 (2006), pp. 47-73
[Eisenhardt and Martin, 2000]
K.M. Eisenhardt, J.A. Martin.
Dynamic capabilities: What are they?.
Strategic Management Journal, 21 (2000), pp. 1105-1121
[Engelmann, 2024]
A. Engelmann.
A performative perspective on sensing, seizing, and transforming in small- and medium-sized enterprises.
Entrepreneurship and Regional Development, 36 (2024), pp. 632-658
[Fabrigar et al., 1999]
L.R. Fabrigar, D.T. Wegener, R.C. MacCallum, E.J. Strahan.
Evaluating the use of exploratory factor analysis in psychological research.
Psychological Methods, 4 (1999), pp. 272-299
[Falk and van Wynsberghe, 2024]
S. Falk, A. van Wynsberghe.
Challenging AI for sustainability: What ought it mean?.
AI and Ethics, 4 (2024), pp. 1345-1355
[Faruque et al., 2024]
M.O. Faruque, S. Chowdhury, G. Rabbani, A. Nure.
Technology adoption and digital transformation in small businesses: Trends, challenges, and opportunities.
International Journal for Multidisciplinary Research, 6 (2024),
[Feng et al., 2024]
F. Feng, J. Li, F. Zhang, J. Sun.
The impact of artificial intelligence on green innovation efficiency: Moderating role of dynamic capability.
International Review of Economics and Finance, 96 (2024),
[Feroz et al., 2023]
A.K. Feroz, H. Zo, J. Eom, A. Chiravuri.
Identifying organizations’ dynamic capabilities for sustainable digital transformation: A mixed methods study.
Technology in Society, 73 (2023),
[Florek-Paszkowska and Ujwary-Gil, 2025]
A. Florek-Paszkowska, A. Ujwary-Gil.
The Digital-Sustainability Ecosystem: A conceptual framework for digital transformation and sustainable innovation.
Journal of Entrepreneurship, Management and Innovation, 21 (2025), pp. 116-137
[Fornell and Larcker, 1981]
C. Fornell, D.F. Larcker.
Evaluating structural equation models with unobservable variables and measurement error.
Journal of Marketing Research, 18 (1981), pp. 39-50
[Freeman, 2010]
R.E. Freeman.
Strategic management: A stakeholder approach.
Cambridge University Press, (2010), http://dx.doi.org/10.1017/CBO9781139192675
[Freeman et al., 2020]
R.E. Freeman, R. Phillips, R. Sisodia.
Tensions in stakeholder theory.
Business and Society, 59 (2020), pp. 213-231
[Gao et al., 2025]
Y. Gao, S. Liu, L. Yang.
Artificial intelligence and innovation capability: A dynamic capabilities perspective.
International Review of Economics and Finance, 98 (2025),
[Geissdoerfer et al., 2017]
M. Geissdoerfer, P. Savaget, N.M.P. Bocken, E.J. Hultink.
The Circular Economy—A new sustainability paradigm?.
Journal of Cleaner Production, 143 (2017), pp. 757-768
[Ghosh et al., 2022]
S. Ghosh, M. Hughes, I. Hodgkinson, P. Hughes.
Digital transformation of industrial businesses: A dynamic capability approach.
Technovation, 113 (2022)
[Hair and Alamer, 2022]
J. Hair, A. Alamer.
Partial Least squares structural equation modeling (PLS-SEM) in second language and education research: Guidelines using an applied example.
Research Methods in Applied Linguistics, 1 (2022),
[Hair et al., 2017]
J. Hair, C.L. Hollingsworth, A.B. Randolph, A.Y.L. Chong.
An updated and expanded assessment of PLS-SEM in information systems research.
Industrial Management and Data Systems, 117 (2017), pp. 442-458
[Hair et al., 2011]
J.F. Hair, C.M. Ringle, M. Sarstedt.
PLS-SEM: Indeed a silver bullet.
Journal of Marketing Theory and Practice, 19 (2011), pp. 139-152
[Hair et al., 2019]
J.F. Hair, J.J. Risher, M. Sarstedt, C.M. Ringle.
When to use and how to report the results of PLS-SEM.
European Business Review, 31 (2019), pp. 2-24
[Held et al., 2025]
P. Held, T. Heubeck, R. Meckl.
Boosting SMEs’ digital transformation: The role of dynamic capabilities in cultivating digital leadership and digital culture.
Review of Managerial Science, (2025),
[Henseler et al., 2015]
J. Henseler, C.M. Ringle, M. Sarstedt.
A new criterion for assessing discriminant validity in variance-based structural equation modeling.
Journal of the Academy of Marketing Science, 43 (2015), pp. 115-135
[Hervas-Oliver et al., 2024]
J.-L. Hervas-Oliver, J.A.A.F. Márquez García, R. Rojas-Alvarado.
Are clusters and industrial districts really driving sustainability innovation?.
Competitiveness Review, 34 (2024), pp. 896-915
[Hinkin, 1995]
T.R. Hinkin.
A review of scale development practices in the study of organizations.
Journal of Management, 21 (1995), pp. 967-988
[Hinkin, 1998]
T.R. Hinkin.
A brief tutorial on the development of measures for use in survey questionnaires.
Organizational Research Methods, 1 (1998), pp. 104-121
[Hölzle, 2022]
Hölzle, U. (2022). Our commitment to climate-conscious data center cooling. Retrieved from https://blog.google/outreach-initiatives/sustainability/our-commitment-to-climate-conscious-data-center-cooling/.
[Hossain et al., 2025]
S. Hossain, M. Fernando, S. Akter.
Digital leadership: Towards a dynamic managerial capability perspective of artificial intelligence-driven leader capabilities.
Journal of Leadership and Organizational Studies, 32 (2025), pp. 189-208
[Kinderman, 2020]
D. Kinderman.
The challenges of upward regulatory harmonization: The case of sustainability reporting in the European Union.
Regulation and Governance, 14 (2020), pp. 674-697
[Kock, 2024]
N. Kock.
Will PLS have to become factor-based to survive and thrive?.
European Journal of Information Systems, 33 (2024), pp. 882-902
[Kong and Yuen, 2025]
K.Y. Kong, K.F. Yuen.
Sustainability risk management: Exploring the role of artificial intelligence capabilities through an information-processing lens.
Risk Analysis, 45 (2025), pp. 563-580
[Kumar et al., 2025]
S. Kumar, V. Kumar, R. Chaudhuri, S. Chatterjee, D. Vrontis.
AI capability and environmental sustainability performance: Moderating role of green knowledge management.
Technology in Society, 81 (2025),
[Lambert and Newman, 2023]
L.S. Lambert, D.A. Newman.
Construct development and validation in three practical steps: Recommendations for reviewers, editors, and authors.
Organizational Research Methods, 26 (2023), pp. 574-607
[Larson, 2019]
R.B. Larson.
Controlling social desirability bias.
International Journal of Market Research, 61 (2019), pp. 534-547
[Li et al., 2025]
Li, Y., Hu, Z., Choukse, E., Fonseca, R., Suh, G.E., & Gupta, U. (2025). EcoServe: Designing carbon-aware AI inference systems. arXiv preprint arXiv:2502.05043.
[Ligozat et al., 2022]
A.-L. Ligozat, J. Lefevre, A. Bugeau, J. Combaz.
Unraveling the hidden environmental impacts of AI solutions for environment life cycle assessment of AI solutions.
Sustainability, 14 (2022),
[Lin et al., 2025]
C.-Y. Lin, J. Brailovskaia, S. Üztemur, A. Gökalp, N. Değirmenci, P.-C. Huang, A.H. Pakpour.
Dark future: Development and initial validation of artificial intelligence conspiracy beliefs scale (AICBS).
Brain and Behavior, 15 (2025),
[Loureiro et al., 2020]
S.M.C. Loureiro, J. Romero, R.G. Bilro.
Stakeholder engagement in co-creation processes for innovation: A systematic literature review and case study.
Journal of Business Research, 119 (2020), pp. 388-409
[Mancuso et al., 2025]
I. Mancuso, A.M. Petruzzelli, U. Panniello, G. Vaia.
The bright and dark sides of AI innovation for sustainable development: Understanding the paradoxical tension between value creation and value destruction.
[March, 1991]
J.G. March.
Exploration and exploitation in organizational learning.
Organization Science, 2 (1991), pp. 71-87
[Melville, 2010]
N.P. Melville.
Information systems innovation for environmental sustainability.
MIS Quarterly, 34 (2010), pp. 1-21
[Mergen et al., 2025]
A. Mergen, N. Çetin-Kılıç, M.F. Özbilgin.
Artificial intelligence and bias towards marginalised groups: Theoretical roots and challenges.
AI and diversity in a datafied world of work: Will the future of work be inclusive?, http://dx.doi.org/10.1108/S2051-233320250000012004
[Microsoft, 2025]
Microsoft. (2025). AI agent carbon accounting: Tracking emissions with Microsoft’s 2025 sustainability API. Retrieved from https://markaicode.com/ai-carbon-tracking-microsoft-sustainability-api-2025/.
[Mikalef and Gupta, 2021]
P. Mikalef, M. Gupta.
Artificial intelligence capability: Conceptualization, measurement calibration, and empirical study on its impact on organizational creativity and firm performance.
Information and Management, 58 (2021),
[Mohd Dzin and Lay, 2021]
N.H. Mohd Dzin, Y.F. Lay.
Validity and reliability of adapted self-efficacy scales in Malaysian context using PLS-SEM approach.
Education Sciences, 11 (2021)
[Nasiri et al., 2023]
M. Nasiri, M. Saunila, J. Ukko, T. Rantala, H. Rantanen.
Shaping digital innovation via digital-related capabilities.
Information Systems Frontiers, 25 (2023), pp. 1063-1080
[Netemeyer et al., 2003]
R.G. Netemeyer, W.O. Bearden, S. Sharma.
Scaling procedures: Issues and applications.
Sage Publications, (2003),
[Nour and Arbussà, 2024]
S. Nour, A. Arbussà.
Driving innovation through organizational restructuring and integration of advanced digital technologies: A case study of a world-leading manufacturing company.
European Journal of Innovation Management, (2024),
[Nunnally, 1978]
J.C. Nunnally.
An overview of psychological measurement.
Clinical diagnosis of mental disorders: A handbook, (1978), pp. 97-146
[Ojong, 2025]
N. Ojong.
Interrogating the economic, environmental, and social impact of artificial intelligence and big data in sustainable entrepreneurship.
Business Strategy and the Environment, (2025),
[O’Reilly and Tushman, 2008]
C.A. O’Reilly, M.L. Tushman.
Ambidexterity as a dynamic capability: Resolving the innovator’s dilemma.
Research in Organizational Behavior, 28 (2008), pp. 185-206
[Ortiz-Avram et al., 2025]
D. Ortiz-Avram, M. Andraos, K. Salomon, B. Kump.
Unpacking productive dialogues as building blocks of dynamic capabilities for sustainability-oriented innovation.
Business Strategy and the Environment, (2025),
[Podsakoff et al., 2003]
P.M. Podsakoff, S.B. MacKenzie, J.-Y. Lee, N.P. Podsakoff.
Common method biases in behavioral research: A critical review of the literature and recommended remedies.
Journal of Applied Psychology, 88 (2003), pp. 879-903
[Podsakoff et al., 2016]
P.M. Podsakoff, S.B. MacKenzie, N.P. Podsakoff.
Recommendations for creating better concept definitions in the organizational, behavioral, and social sciences.
Organizational Research Methods, 19 (2016), pp. 159-203
[Qalati et al., 2024]
S.A. Qalati, F. Siddiqui, Q. Wu.
The effect of environmental ethics and spiritual orientation on firms’ outcomes: The role of senior management orientation and stakeholder pressure.
Humanities and Social Sciences Communications, 11 (2024), pp. 1746
[Quayson et al., 2023]
M. Quayson, C. Bai, L. Sun, J. Sarkis.
Building blockchain-driven dynamic capabilities for developing circular supply chain: Rethinking the role of sensing, seizing, and reconfiguring.
Business Strategy and the Environment, 32 (2023), pp. 4821-4840
[Reddy, 2024]
R. Reddy.
Sustainable computing: A comprehensive review of energy-efficient algorithms and systems.
[Reijers et al., 2025]
W. Reijers, M.T. Young, M. Coeckelbergh.
Climate change and sustainability.
Introduction to the ethics of emerging technologies, pp. 181-193 http://dx.doi.org/10.1007/978-3-031-85887-1_11
[Sarstedt et al., 2019]
M. Sarstedt, J.F. Hair, J.-H. Cheah, J.-M. Becker, C.M. Ringle.
How to specify, estimate, and validate higher-order constructs in PLS-SEM.
Australasian Marketing Journal, 27 (2019), pp. 197-211
[Schad and Bansal, 2018]
J. Schad, P. Bansal.
Seeing the forest and the trees: How a systems perspective informs paradox research.
Journal of Management Studies, 55 (2018), pp. 1490-1506
[Schoormann et al., 2023]
T. Schoormann, G. Strobel, F. Möller, D. Petrik, P. Zschech.
Artificial intelligence for sustainability—A systematic review of information systems literature.
Communications of the Association for Information Systems, 52 (2023), pp. 199-237
[Secundo et al., 2025]
G. Secundo, C. Spilotro, J. Gast, V. Corvello.
The transformative power of artificial intelligence within innovation ecosystems: A review and a conceptual framework.
Review of Managerial Science, 19 (2025), pp. 2697-2728
[Seidel et al., 2017]
S. Seidel, P. Bharati, G. Fridgen, R.T. Watson, A. Albizri, M.-C.M. Boudreau, S. Watts.
The sustainability imperative in information systems research.
Communications of the Association for Information Systems, 40 (2017), pp. 3
[Senapati and Panda, 2024]
S. Senapati, R.K. Panda.
Assessing the role of consumer experience, engagement, and satisfaction in healthcare services: A two-stage reflective-formative measurement using PLS-SEM.
Services Marketing Quarterly, 45 (2024), pp. 55-82
[Shmueli et al., 2019]
G. Shmueli, M. Sarstedt, J.F. Hair, J.-H. Cheah, H. Ting, S. Vaithilingam, C.M. Ringle.
Predictive model assessment in PLS-SEM: Guidelines for using PLSpredict.
European Journal of Marketing, 53 (2019), pp. 2322-2347
[Sjödin et al., 2023]
D. Sjödin, V. Parida, M. Kohtamäki.
Artificial intelligence enabling circular business model innovation in digital servitization: Conceptualizing dynamic capabilities, AI capacities, business models and effects.
Technological Forecasting and Social Change, 197 (2023),
[Smith and Lewis, 2011]
W.K. Smith, M.W. Lewis.
Toward a theory of paradox: A dynamic equilibrium model of organizing.
Academy of Management Review, 36 (2011), pp. 381-403
[Strubell et al., 2020]
E. Strubell, A. Ganesh, A. McCallum.
Energy and policy considerations for modern deep learning research.
Proceedings of the AAAI Conference on Artificial Intelligence, 34 (2020), pp. 13693-13696
[Teece, 2007]
D.J. Teece.
Explicating dynamic capabilities: The nature and microfoundations of (sustainable) enterprise performance.
Strategic Management Journal, 28 (2007), pp. 1319-1350
[Teece et al., 1997]
D.J. Teece, G. Pisano, A. Shuen.
Dynamic capabilities and strategic management.
[Tripathi et al., 2024]
S. Tripathi, N. Bachmann, M. Brunner, Z. Rizk, H. Jodlbauer.
Assessing the current landscape of AI and sustainability literature: Identifying key trends, addressing gaps and challenges.
Journal of Big Data, 11 (2024), pp. 65
[Usakli and Rasoolimanesh, 2023]
A. Usakli, S.M. Rasoolimanesh.
Which SEM to use and what to report? A comparison of CB-SEM and PLS-SEM.
Cutting edge research methods in hospitality and tourism, pp. 5-28 http://dx.doi.org/10.1108/978-1-80455-063-220231002
[Vial, 2021]
G. Vial.
Understanding digital transformation: A review and a research agenda.
Journal of Strategic Information Systems, 28 (2021), pp. 118-144
[Wang et al., 2025]
N. Wang, S. Pan, Y. Wang.
How can artificial intelligence capabilities empower sustainable business model innovation? A dynamic capability perspective.
Business Process Management Journal, (2025),
[Wang and Chuang, 2024]
Y.-Y. Wang, Y.-W. Chuang.
Artificial intelligence self-efficacy: Scale development and validation.
Education and Information Technologies, 29 (2024), pp. 4785-4808
[Wang and Wang, 2022]
Y.-Y. Wang, Y.-S. Wang.
Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior.
Interactive Learning Environments, 30 (2022), pp. 619-634
[Warner and Wäger, 2019]
K.S.R. Warner, M. Wäger.
Building dynamic capabilities for digital transformation: An ongoing process of strategic renewal.
Long Range Planning, 52 (2019), pp. 326-349
[Yu et al., 2024]
Y. Yu, J. Xu, J.Z. Zhang, Y. Liu, M.M. Kamal, Y. Cao.
Unleashing the power of AI in manufacturing: Enhancing resilience and performance through cognitive insights, process automation, and cognitive engagement.
International Journal of Production Economics, 270 (2024),
[Zhong and Song, 2025]
K. Zhong, L. Song.
Artificial intelligence adoption and corporate green innovation capability.
Finance Research Letters, 72 (2025),
Copyright © 2025. The Author(s)