As artificial intelligence (AI) adoption accelerates globally, its sustainability implications remain insufficiently integrated into organizational capability frameworks. This study develops and validates the organizational sustainable AI capabilities (OSAIC) construct, extending dynamic capabilities theory by embedding sustainability as a meta-capability in AI governance and innovation processes. OSAIC is conceptualized as a five-dimensional, reflective, higher-order construct encompassing sustainable AI sensing, seizing, transforming, learning, and stakeholder integration. A multi-phase scale development procedure was employed, including expert Q-sorting, exploratory factor analysis, and confirmatory factor analysis using partial least squares structural equation modeling. The scale was assessed and validated using two distinct samples: a pilot study (n = 188) and a main study (n = 364), both comprising managers from diverse industries and regions. The findings indicated robust psychometric properties, with strong reliability and convergent, discriminant, and predictive validity. A positive and significant relationship between OSAIC and sustainable innovation indicated nomological validity, addressing the AI sustainability paradox by illustrating that sustainability-oriented AI capabilities enhance rather than constrain innovation. By extending research on dynamic capabilities and paradox and presenting a validated measurement tool, this study makes both theoretical and methodological contributions to the literature. Practically, it offers managers a diagnostic framework to align AI implementation with environmental and social accountability while fostering innovation.
Despite the undeniable transformative power of artificial intelligence (AI) in organizational contexts (Secundo et al., 2025), its dual implications for sustainability and innovation remain under-theorized (Ojong, 2025). On the one hand, AI provides unprecedented opportunities for efficiency, predictive insights, and eco-friendly solutions across industries (Abuzaid, 2024). On the other hand, it poses significant ecological and social challenges, including carbon emissions and energy consumption (Reijers et al., 2025), as well as algorithmic bias and governance dilemmas (Mancuso et al., 2025; Mergen et al., 2025). This dual impact reflects the AI sustainability paradox—the tension between AI’s potential to advance sustainable innovation and its unsustainable externalities (Mancuso et al., 2025). Addressing this paradox requires organizations not only to adopt and implement AI technologies but also to develop capabilities that embed sustainability principles into AI deployment and governance (Falk & van Wynsberghe, 2024; Mikalef & Gupta, 2021; Tripathi et al., 2024).
Dynamic capabilities theory (DCT) provides a foundational framework for understanding organizational responses to technological disruptions (Teece et al., 1997). Teece’s (2007) triple model, encompassing sensing, seizing, and transforming, is most often employed in the digital transformation context (Ghosh et al., 2022; Warner & Wäger, 2019). However, the integration of these three factors with sustainability requirements remains under-investigated (Amui et al., 2017; Feroz et al., 2023). Moreover, applications of the triple model rarely consider sustainability as a meta-capability influencing all aspects of capability building (Gao et al., 2025; Ghosh et al., 2022). Further, although current work emphasizes agility and competitiveness in digital transformation (Vial, 2021) or environmental responsiveness in relation to sustainability efforts (Cezarino et al., 2019), the critical governance structures and routines necessary to ensure the responsible development and use of AI are frequently overlooked. This omission highlights the challenge of redefining dynamic capabilities to balance the conflicting demands of AI innovation and the mitigation of its environmental and social impacts (Wang et al., 2025).
This study attempts to address this theoretical gap by introducing the concept of organizational sustainable artificial intelligence capabilities (OSAIC). OSAIC extends the DCT developed by Teece et al. (1997) by integrating sustainability as a meta-capability across five interconnected dimensions: sustainable AI sensing, seizing, transforming, learning, and stakeholder integration (Teece, 2007). OSAIC connects the dynamic capabilities literature (Warner & Wäger, 2019) with paradox and ambidexterity theories (Smith & Lewis, 2011), repositioning sustainability from a peripheral role to a core organizational competency for managing competing demands. The framework advances theory in two ways: first, by reframing AI governance as a process requiring the holistic pursuit of both sustainability and innovation goals (Strubell et al., 2020), and second, by specifying sustainability-based routines that enable organizational flexibility in responding to paradoxical demands (Amui et al., 2017).
Empirical work on sustainable AI is constrained by the absence of validated measures of organizational capabilities in this area (Schoormann et al., 2023; Seidel et al., 2017). Although prior studies have examined digital capabilities (Vial, 2021), green information technology (Melville, 2010), and sustainability-oriented dynamic capabilities (Amui et al., 2017), no comprehensive measurement scale effectively captures how organizations sense, seize, transform, learn, and integrate stakeholders in the development of sustainable AI. This study addresses this gap using the methodological approach recommended by Hinkin (1998) and Podsakoff et al. (2016). First, we develop the measurement items based on the existing literature and expert interviews. These measurement items are then refined using a Q-sorting procedure. The scale’s dimensional structure is then established through exploratory and confirmatory factor analyses. Finally, nomological validity is established by relating the OSAIC construct to sustainable innovation (Adams et al., 2016).
This study offers three significant contributions. First, it advances the DCT of Teece et al. (1997) by redefining sustainability as the central meta-capability in AI governance (Teece, 2007), while addressing the challenge of incorporating paradoxical and ambidextrous frameworks into capability research (Smith & Lewis, 2011). Second, it constructs and empirically validates the first measurement scale for OSAIC, enabling stringent empirical investigation into how enterprises manage the AI sustainability paradox (Hinkin, 1995). Finally, it highlights the practical applicability of OSAIC by revealing its effect on sustainable innovation outcomes (Feng et al., 2024; Zhong & Song, 2025).
Theoretical support and conceptualization of OSAIC

Theoretical support

OSAIC scale development is rooted in the DCT, which provides an intellectual framework for understanding how organizations adapt their resources in response to technological development, primarily by identifying opportunities, seizing them, and reshaping how operations are conducted (Teece et al., 1997). Although this framework has been applied in digital (Warner & Wäger, 2019) and ecological (Amui et al., 2017) contexts, the latter often treat sustainability as a mere consequence rather than an integral, required meta-capability (Buzzao & Rizzi, 2021). This highlights the critical importance of integrating sustainability into the core activities that drive technological innovation, particularly in the AI field (Strubell et al., 2020).
To address this misalignment, we introduce the paradox and ambidexterity perspectives. Paradox theory posits that an organization must reconcile competing and interdependent demands, such as responsibility and innovation, without compromising either (Smith & Lewis, 2011). Additionally, ambidexterity theory suggests that performance is only sustainable if conflicting pressures are aligned through integrated capabilities (O’Reilly & Tushman, 2008). In the AI context, this implies that organizations cannot pursue technological advancement in isolation from societal and environmental considerations. Instead, they must internalize sustainability into their dynamic capabilities so that innovation and responsibility progress simultaneously rather than sequentially.
Guided by these insights, we introduce the OSAIC concept. This construct adds to the DCT by embedding sustainability as a foundational element across the sensing, seizing, and transforming dimensions, while incorporating two underexplored aspects: sustainable AI learning (Argyris & Schön, 1997) and stakeholder integration (Freeman, 2010). These inclusions highlight the necessity of adaptive learning and inclusive engagement while exercising responsible AI governance. By conceptualizing sustainability as a meta-capability, OSAIC offers a structured approach to managing the complexities of AI application and extends the discourse on dynamic capabilities, paradox management, and responsible AI governance (Adams et al., 2016).
Conceptualization of OSAIC

We define OSAIC as a meta-capability that integrates sustainability principles with AI-related dynamic capabilities, enabling companies to leverage AI for innovation while simultaneously addressing its social and environmental impacts. OSAIC adds to the DCT (Teece, 2007; Teece et al., 1997) by embedding sustainability throughout the five stages of capability development (Adams et al., 2016), further enriching ambidexterity theory and the paradox perspective by providing routines that enable companies to reconcile the dual imperatives of innovation and responsibility (O’Reilly & Tushman, 2008; Smith & Lewis, 2011). OSAIC comprises the five dimensions discussed below, which build upon but also extend the traditional understanding of dynamic capabilities.
Sustainable AI sensing

This refers to the ability to continuously scan markets to detect AI-related sustainability concerns. This includes monitoring ecological signals such as algorithm usage and life-cycle implications (Ligozat et al., 2022), as well as compliance with regulatory demands such as the EU AI Act. It aligns with "systems sensing" (Schad & Bansal, 2018), in which firms monitor weak signals across technical, ecological, and social dimensions. For instance, Microsoft’s AI sustainability API enables real-time emissions tracking across AI projects, exemplifying this capability; in January 2025, GPT-4 emitted over 400 kg CO2 (Microsoft, 2025).
Sustainable AI seizing

This refers to the ability to reconfigure resource allocation by integrating sustainability considerations. Unlike conventional seizing, which emphasizes speed and scale (Eisenhardt & Martin, 2000), this dimension evaluates AI projects through a triple-bottom-line framework. Firms may invest in energy-efficient infrastructures, adopt incentive systems that reward sustainability achievements alongside model precision, and partner with environmentally conscious cloud service providers (Ahmadisakha & Andrikopoulos, 2024; Reddy, 2024). This aligns with Barney's (1991) resource-based view, which conceptualizes sustainable AI assets as strategically important, difficult to imitate, and path-dependent.
Sustainable AI transforming

This involves redesigning organizational processes to promote circularity in AI development. Rooted in the circular economy concept (Geissdoerfer et al., 2017), it entails model reuse, hardware refurbishment, and renewable-powered infrastructure. For instance, Google's application of liquid cooling infrastructure has lowered data center CO2 output and energy usage by 10 % each (Hölzle, 2022), demonstrating how companies can pursue digital transformation and environmental sustainability simultaneously.
Sustainable AI learning

This refers to the integration of feedback systems explicitly designed to enhance the sustainability performance of AI systems. It shifts learning priorities from efficiency (Argote, 2012) to sustainability-oriented practices, such as conducting post-incident reviews (for instance, addressing bias or energy inefficiency), forming AI ethics committees, and publishing transparent impact reports. This enhances the exploration–exploitation learning framework by creating a sustainability feedback loop (March, 1991).
Stakeholder integration

This refers to the organization's capacity to actively engage stakeholders in AI governance. Rather than viewing stakeholders merely as information recipients, this dimension highlights the value of co-design approaches, participatory review, and adaptive feedback systems that translate external perspectives into actionable insights for responsible innovation. Buhmann et al. (2024) argue that stakeholders’ legitimacy should be continuously mediated through bespoke frameworks, shifting engagement from mere consultation to co-creation (Loureiro et al., 2020).
Hypothesis development

The OSAIC framework’s core proposition is its positive influence on sustainable innovation, defined as the development of new processes, products, or business models that create economic value while reducing environmental and social harm (Adams et al., 2016). According to Teece (2007), dynamic capabilities enhance competitive advantage through innovation. DCT posits that adaptive routines such as real-time emission monitoring and stakeholder co-creation help firms reconfigure resources for innovation. Based on stakeholder theory (Freeman et al., 2020), these adaptive practices promote sustainability-oriented decision making through participatory mechanisms, such as AI ethics boards. OSAIC allows firms to proactively identify AI sustainability risks and opportunities (Kong & Yuen, 2025), invest in green AI projects (Mancuso et al., 2025), and adapt processes to evolving sustainability norms (Ghosh et al., 2022).
Earlier studies have illustrated that AI capabilities significantly improve organizational innovation (Almheiri et al., 2025; Mikalef & Gupta, 2021) and sustainable performance (Kumar et al., 2025). Additionally, Microsoft’s carbon-aware AI inference research illustrates how EcoServe reduces CO2 emissions by up to 47 % through performance-, energy-, and cost-optimized design points, without compromising operational efficiency (Li et al., 2025). Hence, we propose H1.
H1: OSAIC has a positive and significant influence on sustainable innovation.
We followed established methodological standards for scale development (Hinkin, 1995; Nunnally, 1978). After defining the OSAIC construct and its five dimensions, we generated an initial pool of items. Consistent with Netemeyer et al. (2003), item clarity and item-wording precision were prioritized while minimizing redundancy. Clear wording ensures consistent interpretation among respondents (Lambert & Newman, 2023), whereas some degree of redundancy at the early development stage helps capture the full conceptual scope of the construct. For clarity and reliability, negatively worded items were deliberately avoided (Netemeyer et al., 2003). Moreover, we initially included more items than we ultimately required, as starting with a comprehensive set is preferred to avoid missing key elements (Wang & Chuang, 2024; Wang & Wang, 2022).
The item generation process began with a comprehensive review of 246 research papers published between 2015 and 2024, focusing on the intersection of AI, sustainability, and dynamic capabilities. Articles were sourced from major research databases, such as Scopus, using targeted keywords such as “sustainable AI,” “AI governance,” and “dynamic capabilities.” This review initially yielded 36 potential scale items. Overly broad items (e.g., “My organization uses AI sustainably”) were eliminated, and the remaining items were mapped onto the five conceptual dimensions of OSAIC. For example, the item “We audit AI projects for compliance with sustainability policies” was mapped onto the dimension of sustainable AI transforming.
To ensure contextual richness and practical relevance, we also conducted 18 semi-structured interviews with domain experts. The panel comprised six AI sustainability officers from leading technology firms (e.g., SAP, Intel), six corporate social responsibility managers from the manufacturing and finance sectors, and six AI ethics researchers affiliated with academia and non-governmental organizations. The interview protocol examined how organizations balance AI innovation and sustainability objectives, as well as the mechanisms employed to monitor AI’s social and environmental impacts. Guiding questions included: “How does your organization balance AI innovation with sustainability goals?” and “What processes exist to monitor AI’s social and environmental impacts?” The interviews were transcribed and thematically coded, resulting in 26 additional items. For instance, the item “We compensate communities affected by AI data collection” was linked to the dimension of stakeholder integration.
The final item pool comprised 62 items: 36 and 26 items from the literature review and expert interviews, respectively. Together, these items provided a comprehensive and conceptually grounded foundation for subsequent scale development stages. Appendix A presents the list of 62 items.
Scale refinement

To further refine the item pool and ensure content validity, we conducted a Q-sorting procedure with a panel of ten experts. The panel comprised five scholars specializing in organizational capabilities, AI ethics, and sustainability, and five industry practitioners with expertise in AI governance and corporate sustainability. Each expert independently sorted the 62 items into the five proposed OSAIC dimensions. Besides classification, experts rated the relevance of each item using a five-point scale (1 = “Not Relevant” to 5 = “Highly Relevant”).
Items were retained if they satisfied three criteria: (1) placement accuracy of at least 80 %, meaning that at least 80 % of the experts correctly classified the item into its intended dimension; (2) an average relevance score of 4.0 or higher; and (3) no cross-loading, with each item aligned to a single dimension without ambiguity. Following this sorting and rating procedure, 34 items were removed. For instance, “We benchmark our AI sustainability practices against industry standards” was removed because it lacked concreteness and failed to align conceptually with OSAIC. The remaining 28 items were revised for clarity and retained for pilot testing and empirical validation.
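To make the retention rules concrete, the sketch below applies the three Q-sort criteria to illustrative expert ratings; the records, rating values, and data layout are hypothetical, not the study's actual Q-sort data. The invented ratings for Q4 (the benchmarking item) fail on both placement accuracy and relevance, mirroring the removal described above.

```python
import pandas as pd

# Hypothetical Q-sort records: one row per expert-item pair, holding the
# dimension each expert assigned and their 1-5 relevance rating.
qsort = pd.DataFrame({
    "item":      ["Q2"] * 10 + ["Q4"] * 10,
    "intended":  ["SAISn"] * 20,
    "assigned":  ["SAISn"] * 9 + ["SAISe"] + ["SAISn"] * 6 + ["SAIT"] * 4,
    "relevance": [5, 4, 5, 4, 4, 5, 4, 4, 5, 3] + [4, 3, 3, 4, 2, 4, 3, 3, 4, 3],
})

summary = (
    qsort.assign(hit=qsort["assigned"] == qsort["intended"])
         .groupby("item")
         .agg(placement_accuracy=("hit", "mean"),
              mean_relevance=("relevance", "mean"))
)
# Retain items correctly classified by >= 80% of experts with mean relevance >= 4.0.
summary["retain"] = (summary["placement_accuracy"] >= 0.80) & (summary["mean_relevance"] >= 4.0)
print(summary)  # Q2 passes (0.9, 4.3); Q4 fails (0.6, 3.3) and would be dropped
```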
Sampling and data collection

The OSAIC scale was developed based on a multi-stage, quantitative research design. Netemeyer et al. (2003) recommend scale development procedures with two data collection stages: (i) an initial pilot study for item refinement and exploratory factor analysis (EFA), and (ii) a main study to validate the measurement model through partial least squares structural equation modeling (PLS–SEM).
In the pilot test, a purposive sample of 188 mid- and senior-level managers was recruited from value-creating industries with high AI adoption, such as technology, manufacturing, finance, and healthcare (see Table 1). Data were collected through professional networks using a survey built in Google Forms and distributed through email, WhatsApp, and LinkedIn groups. This strategy allowed efficient access to qualified participants from diverse backgrounds across multiple geographical locations (Buhrmester et al., 2018; Dicce & Ewers, 2021). Purposive sampling was especially appropriate because it ensured that respondents possessed relevant knowledge of AI and sustainability, raising the likelihood of content-valid responses (Qalati et al., 2024). The pilot study data were used in the EFA, with the findings guiding scale refinement and the elimination of unsuitable items before the subsequent confirmatory test.
Respondents’ information.
The second sample comprised 364 managers and executives across North America, Europe, and Asia, drawn from sectors with leading AI adoption and sustainability agendas. Both purposive and snowball sampling techniques were applied. Through purposive sampling, we targeted relevant leadership roles, such as AI project leaders, data scientists, sustainability officers, and senior managers with insightful views on their organizations' OSAIC. This aligns with the theoretical sampling requirements in dynamic capabilities research, whereby informants should be well-informed about organizational processes (Divya et al., 2024; Qalati et al., 2024). Snowball sampling expanded the survey's coverage through professional networks and LinkedIn groups, a recommended technique when sampling expert populations that are not accessible through random sampling (Bello et al., 2024; Hossain et al., 2025). The online survey questionnaire used in both phases ensured anonymity and provided broad geographic coverage, reducing social desirability bias and increasing response truthfulness (Larson, 2019). Screening questions ensured that only respondents with experience in AI and sustainability proceeded, and attention checks were embedded to identify inattentive responses.
Table 1 summarizes the demographic profile. In the main study, participants were primarily from Asia (46.2 %), North America (36.5 %), and Europe (17.3 %). The sample comprised 63.7 % males, with the majority aged between 31 and 40 years (62.4 %). Technology (50.8 %) and manufacturing (31.6 %) were the largest industry groups, and most participants worked in organizations of 500–999 employees (34.9 %). Most respondents had 5–15 years of experience, ensuring informed insights. Lastly, to check sample size adequacy for validation, a power analysis was run with G*Power (Cohen, 2013). With a medium effect size of f² = 0.15, α = 0.05, and 0.95 statistical power, the minimum sample required to estimate regression paths with one predictor was 89 (see Fig. 1). Hence, the pilot sample of 188 and the main study sample of 364 met this criterion, ensuring that the data were adequate for PLS–SEM analysis.
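The reported minimum sample size can be reproduced outside G*Power from the noncentral F distribution. The sketch below is a minimal reimplementation assuming G*Power's convention of a noncentrality parameter of f² × N for the regression F test; it searches for the smallest n reaching 95 % power and should land near the 89 reported above.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, f2=0.15, alpha=0.05, n_predictors=1):
    """Power of the F test for R^2 in OLS regression (assumes ncp = f2 * n)."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    f_crit = f_dist.ppf(1 - alpha, df1, df2)          # critical value under H0
    return 1 - ncf.cdf(f_crit, df1, df2, f2 * n)      # power under the noncentral F

# Smallest n reaching 95% power with one predictor, f2 = 0.15, alpha = 0.05.
n = 10
while regression_power(n) < 0.95:
    n += 1
print(n, round(regression_power(n), 3))
```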
Analytical strategy

We employed a two-stage analytical strategy to validate the OSAIC scale according to scale development guidelines (Hinkin, 1998; Netemeyer et al., 2003). In the pilot phase, we used EFA to investigate the item pool’s dimensional structure and allow scale refinement before full validation. EFA was selected for the initial phase because it allows the identification of latent factor structures and the removal of weak or ambiguous items early in the development process (Fabrigar et al., 1999). Principal axis factoring with Promax (oblique) rotation was utilized because intercorrelations between the proposed OSAIC dimensions were theorized and empirically found. This method ensured that cross-loading, poor, or conceptually vague items would be eliminated at the start (Lin et al., 2025).
For the main study (n = 364), the refined scale was assessed comprehensively through PLS–SEM using SmartPLS 4. We preferred PLS–SEM over covariance-based SEM for three reasons. First, the study’s aim is prediction—constructing a valid measurement scale and demonstrating its nomological validity concerning sustainable innovation—and PLS–SEM is well-suited to this aim (Mohd Dzin & Lay, 2021). Second, OSAIC was conceptualized as a reflective–reflective higher-order construct, and PLS–SEM offers a strong framework for estimating such hierarchical models using the two-stage approach (Senapati & Panda, 2024). Third, PLS–SEM is more appropriate when data are not multivariate normal and sample sizes are modest relative to model complexity, as is the case here (Chen et al., 2025; Hair & Alamer, 2022).
This pairing of EFA for item purification with PLS–SEM for confirmatory validation and hypothesis testing is robustly recommended in information systems and organizational capabilities studies (Hair et al., 2017). It allows for the exploratory identification of the factor structure and a robust assessment of measurement properties, such as reliability and convergent, discriminant, and nomological validity (Usakli & Rasoolimanesh, 2023).
Results

Pilot testing and EFA

The pilot phase aimed to refine the OSAIC measurement scale by assessing its internal consistency and factor structure, following established scale development guidelines (Hinkin, 1998). Internal reliability was initially assessed using corrected item-to-total correlations, which avoid inflated part–whole correlations (Wang & Chuang, 2024; Wang & Wang, 2022). Items with corrected item-to-total correlations below 0.40 were considered for removal (Hinkin, 1998). All 28 retained items exceeded this criterion, and the scale’s overall Cronbach’s alpha (CA) was excellent, supporting its reliability for subsequent factor analysis.
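A corrected item-to-total correlation correlates each item with the sum of the remaining items, avoiding the part–whole inflation that arises when an item is correlated with a total containing itself. A minimal sketch, assuming a hypothetical pilot_responses.csv with one column per item:

```python
import pandas as pd

def corrected_item_total(df: pd.DataFrame) -> pd.Series:
    """Correlate each item with the sum of the *other* items."""
    total = df.sum(axis=1)
    return pd.Series(
        {col: df[col].corr(total - df[col]) for col in df.columns},
        name="corrected_item_total",
    )

# items = pd.read_csv("pilot_responses.csv")   # hypothetical 188 x 28 item matrix
# weak = corrected_item_total(items) < 0.40    # flags candidates for removal
```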
Suitability for factor analysis was confirmed with a Kaiser–Meyer–Olkin value of 0.889 and a significant Bartlett's test of sphericity (χ² = 4223.633; p < .001), demonstrating sampling adequacy and sufficient intercorrelations among items (Podsakoff et al., 2016) (see Table 2).
EFA was run on the pilot sample (n = 188) using principal axis factoring with Promax (oblique) rotation, as correlations among dimensions were empirically and theoretically supported. We utilized this method instead of an orthogonal rotation, such as Varimax, because it allows latent constructs to correlate (Fabrigar et al., 1999). The EFA yielded a five-factor solution consistent with the proposed OSAIC structure. As shown in Table 3, all five factors had eigenvalues exceeding 1.0 and together explained 67.05 % of the total variance, with the first factor accounting for 30.55 %. This indicates that no single factor dominated the scale, offering evidence of multidimensionality (see Table 3).
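For readers replicating this stage outside SPSS, the sketch below shows one way to obtain the KMO statistic, Bartlett's test, and an oblique five-factor solution with the factor_analyzer package. The file name is hypothetical, and the package's 'principal' extraction is used here as an approximation of SPSS's principal axis factoring.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("pilot_responses.csv")  # hypothetical 188 x 28 item matrix

# Sampling adequacy and sphericity, as reported in Table 2.
chi2, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)

# Principal-factor extraction with an oblique Promax rotation, allowing the
# five OSAIC dimensions to correlate.
efa = FactorAnalyzer(n_factors=5, method="principal", rotation="promax")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
eigenvalues, _ = efa.get_eigenvalues()
print(kmo_overall, chi2, p_value)
```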
Total variance explained.
| Factor | Total (initial) | % of variance (initial) | Cumulative % (initial) | Total (extraction) | % of variance (extraction) | Cumulative % (extraction) | Total (rotation) |
|---|---|---|---|---|---|---|---|
| 1 | 8.823 | 31.512 | 31.512 | 8.553 | 30.548 | 30.548 | 7.576 |
| 2 | 5.190 | 18.535 | 50.047 | 4.900 | 17.501 | 48.049 | 4.621 |
| 3 | 2.713 | 9.690 | 59.736 | 2.312 | 8.259 | 56.308 | 5.901 |
| 4 | 2.242 | 8.005 | 67.742 | 1.907 | 6.812 | 63.120 | 5.042 |
| 5 | 1.454 | 5.194 | 72.936 | 1.101 | 3.934 | 67.054 | 3.023 |
| 6 | .806 | 2.879 | 75.815 | | | | |
| 7 | .730 | 2.609 | 78.424 | | | | |
| 8 | .625 | 2.234 | 80.657 | | | | |
| 9 | .520 | 1.858 | 82.515 | | | | |
| 10 | .505 | 1.803 | 84.318 | | | | |
| 11 | .459 | 1.639 | 85.957 | | | | |
| 12 | .434 | 1.552 | 87.509 | | | | |
| 13 | .392 | 1.401 | 88.910 | | | | |
| 14 | .372 | 1.327 | 90.237 | | | | |
| 15 | .326 | 1.164 | 91.401 | | | | |
| 16 | .320 | 1.143 | 92.545 | | | | |
| 17 | .283 | 1.010 | 93.554 | | | | |
| 18 | .267 | .952 | 94.507 | | | | |
| 19 | .228 | .815 | 95.322 | | | | |
| 20 | .208 | .741 | 96.063 | | | | |
| 21 | .185 | .659 | 96.722 | | | | |
| 22 | .174 | .620 | 97.343 | | | | |
| 23 | .158 | .565 | 97.908 | | | | |
| 24 | .148 | .530 | 98.438 | | | | |
| 25 | .130 | .465 | 98.903 | | | | |
| 26 | .122 | .435 | 99.338 | | | | |
| 27 | .108 | .385 | 99.723 | | | | |
| 28 | .077 | .277 | 100.000 | | | | |

Note: Initial = initial eigenvalues; extraction = extraction sums of squared loadings; rotation = rotation sums of squared loadings (when factors are correlated, rotation sums of squared loadings cannot be added to obtain a total variance).
Extraction method: Principal axis factoring.
Items were evaluated using the established retention criteria (Hair et al., 2011):
1. Primary factor loadings ≥ 0.50 on the intended construct;
2. Absence of cross-loadings above 0.40;
3. Communalities ≥ 0.30;
4. At least three strong indicators per factor;
5. Elimination of single-item factors.
Following the above criteria, we removed one item (SAISe6, loading = 0.445) because of its weak loading, resulting in a refined 27-item scale (see Table 4).
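A sketch of how such retention criteria can be screened programmatically is shown below. The function assumes a loadings matrix (items × factors) and a mapping of items to their intended factors, and it approximates communalities as sums of squared rotated loadings, which is only exact under orthogonal rotation.

```python
import pandas as pd

def apply_retention_criteria(loadings: pd.DataFrame, intended: pd.Series) -> pd.DataFrame:
    """Flag items violating Hair et al. (2011)-style retention rules.

    loadings: items x factors matrix of rotated loadings.
    intended: item -> intended factor label.
    """
    report = pd.DataFrame(index=loadings.index)
    abs_l = loadings.abs()
    primary = abs_l.max(axis=1)
    primary_factor = abs_l.idxmax(axis=1)
    # Largest loading on any *other* factor (potential cross-loading).
    cross = abs_l.apply(lambda r: r.drop(r.idxmax()).max(), axis=1)

    report["primary_ok"] = (primary >= 0.50) & (primary_factor == intended)
    report["no_crossload"] = cross < 0.40
    report["communality_ok"] = (loadings ** 2).sum(axis=1) >= 0.30  # approximation
    report["retain"] = report.all(axis=1)
    return report
```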
Rotated factor loadings for the 28 items of the OSAIC scale.
| Construct | Item code | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|
| Sustainable AI learning (SAIL) | SAIL5 | .936 | | | | |
| | SAIL2 | .920 | | | | |
| | SAIL6 | .910 | | | | |
| | SAIL4 | .897 | | | | |
| | SAIL7 | .843 | | | | |
| | SAIL1 | .833 | | | | |
| | SAIL3 | .828 | | | | |
| Stakeholder integration (SI) | SI1 | | .909 | | | |
| | SI2 | | .876 | | | |
| | SI5 | | .867 | | | |
| | SI4 | | .862 | | | |
| | SI3 | | .735 | | | |
| | SI6 | | .687 | | | |
| Sustainable AI transforming (SAIT) | SAIT3 | | | .842 | | |
| | SAIT6 | | | .836 | | |
| | SAIT2 | | | .813 | | |
| | SAIT1 | | | .790 | | |
| | SAIT4 | | | .758 | | |
| | SAIT5 | | | .708 | | |
| Sustainable AI seizing (SAISe) | SAISe1 | | | | .928 | |
| | SAISe3 | | | | .837 | |
| | SAISe4 | | | | .799 | |
| | SAISe2 | | | | .622 | |
| | SAISe6 | | | | .445* | |
| Sustainable AI sensing (SAISn) | SAISn3 | | | | | .764 |
| | SAISn2 | | | | | .665 |
| | SAISn1 | | | | | .642 |
| | SAISn4 | | | | | .602 |

* Item subsequently removed due to weak loading (see text).
Extraction method: Principal axis factoring.
Rotation method: Promax with Kaiser normalization.
Furthermore, CA was calculated to measure the internal consistency of the OSAIC scale. Across the five dimensions, all CA values exceeded the 0.70 cutoff (Nunnally, 1978), indicating adequate reliability. The retained 27 items of the OSAIC scale achieved an overall CA of 0.908, indicating strong internal consistency. All five dimensions proved highly reliable, with the following CA scores: sustainable AI sensing = 0.773, seizing = 0.881, transforming = 0.913, learning = 0.965, and stakeholder integration = 0.926. Additionally, all corrected item-to-total correlations exceeded the 0.40 cutoff (Wang & Chuang, 2024), demonstrating that every item contributed meaningfully to its respective dimension (see Table 5). Furthermore, removing any additional item would not have improved reliability, further supporting the stability of the retained items. The Table 5 results confirm the good psychometric qualities and adequate internal consistency of the refined scale in the pilot sample, supporting the retention of 27 items for further validation in the main study.
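For replication, per-dimension alphas can be computed with the pingouin package, as sketched below; the file name and the prefix-based column layout are assumptions about how the pilot data might be stored.

```python
import pandas as pd
import pingouin as pg

# Hypothetical column layout: item names prefixed by their dimension code,
# matching the 27 retained items.
dimensions = {
    "SAISn": ["SAISn1", "SAISn2", "SAISn3", "SAISn4"],
    "SAISe": ["SAISe1", "SAISe2", "SAISe3", "SAISe4"],
    "SAIT":  [f"SAIT{i}" for i in range(1, 7)],
    "SAIL":  [f"SAIL{i}" for i in range(1, 8)],
    "SI":    [f"SI{i}" for i in range(1, 7)],
}

items = pd.read_csv("pilot_responses.csv")  # hypothetical pilot data
for name, cols in dimensions.items():
    alpha, ci = pg.cronbach_alpha(data=items[cols])
    print(f"{name}: alpha = {alpha:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")
```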
Corrected item-to-total correlations for the 27 items of the OSAIC scale (pilot sample, n = 188).
The reliability and validity of the OSAIC scale measurement model were evaluated using the main study sample (n = 364). Table 6 presents the results for all first-order constructs. All item factor loadings, except one sustainable AI sensing item (SAISn3 = 0.667), were higher than the recommended cutoff of 0.70, supporting indicator reliability (Hair et al., 2019). Additionally, the CA and composite reliability (CR) values of the OSAIC dimensions and sustainable innovation ranged between 0.916 and 0.935, well above the 0.70 cutoff and below the 0.95 ceiling (Hair et al., 2019), confirming excellent internal consistency. Convergent validity was confirmed, with average variance extracted (AVE) values between 0.706 and 0.866, exceeding the 0.50 cutoff (Fornell & Larcker, 1981; Hair et al., 2019). These findings affirm that the OSAIC measurement model shows satisfactory reliability, internal consistency, and convergent validity.
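CR and AVE follow directly from standardized loadings, as the sketch below shows; the example loadings are illustrative, not the study's values.

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings) -> float:
    """AVE = mean of squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

# Illustrative loadings for one dimension (not the study's actual values):
lam = [0.83, 0.87, 0.91, 0.85]
print(composite_reliability(lam), average_variance_extracted(lam))
```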
Assessment of reliability and validity (n = 364).
Furthermore, to address common method bias, we used both procedural and statistical remedies (Podsakoff et al., 2003). Procedurally, respondents' anonymity was assured, the questionnaire items were randomized, and attention-check questions were used to minimize socially desirable responding. Screening questions ensured that only qualified respondents with relevant knowledge of sustainability and AI proceeded to the main questionnaire. Statistically, Harman's single-factor test indicated that a single factor explains only 30.5 % of the variance, confirming that common method bias is not a leading issue (see Table 3). As a supplemental measure, we ran a full collinearity check using variance inflation factors (VIF) in SmartPLS (Kock, 2024). All constructs' VIF statistics were well below the stringent threshold of 3.3 (see Tables 6 and 8), further supporting the inference that severe common method variance is not present (Chen et al., 2025; Hair et al., 2019). Overall, these findings suggest that bias does not significantly distort this study’s results.
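Both remedies can be approximated outside SmartPLS. The sketch below runs a Harman-style single-factor check (here via a one-component PCA on standardized items, a common approximation of the unrotated single-factor solution) and a full collinearity VIF check on construct scores; file names are hypothetical.

```python
import pandas as pd
from sklearn.decomposition import PCA
from statsmodels.stats.outliers_influence import variance_inflation_factor

survey = pd.read_csv("main_study.csv")  # hypothetical 364 x k item matrix

# Harman's single-factor test: variance captured by one unrotated component.
pca = PCA(n_components=1)
pca.fit((survey - survey.mean()) / survey.std())
print("single-factor variance:", pca.explained_variance_ratio_[0])

# Full collinearity check: VIF of each construct score regressed on the others.
scores = pd.read_csv("construct_scores.csv")  # hypothetical latent variable scores
X = scores.assign(const=1.0)                  # add intercept column
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # all values should fall below the 3.3 threshold (Kock, 2024)
```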
Furthermore, discriminant validity (DV) was assessed using the Fornell–Larcker criterion and the heterotrait–monotrait ratio of correlations (HTMT). Specifically, the Fornell–Larcker standard was met, as the square root of the AVE for each variable (the diagonal values) exceeds the corresponding inter-factor correlations (Fornell & Larcker, 1981) (see Table 7). Further, all HTMT values remained below the conservative cutoff of 0.85 and the liberal cutoff of 0.90, with confidence intervals not including 1, providing further DV evidence for the OSAIC scale and sustainable innovation (Hair et al., 2019; Henseler et al., 2015) (see Table 7). Overall, these findings offer strong evidence for the reliability and validity of the first-order variables.
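Since HTMT is computed directly from item correlations, it can be verified outside SmartPLS. A minimal sketch, assuming raw item data and a mapping of constructs to their item columns:

```python
import numpy as np
import pandas as pd

def htmt(items: pd.DataFrame, blocks: dict) -> pd.DataFrame:
    """Heterotrait-monotrait ratio (Henseler et al., 2015) from raw item data.

    blocks maps each construct name to its list of item columns.
    """
    corr = items.corr().abs()
    names = list(blocks)
    out = pd.DataFrame(np.nan, index=names, columns=names)

    def mean_monotrait(cols):
        # Average within-construct (monotrait-heteromethod) correlation.
        sub = corr.loc[cols, cols].values
        return sub[np.triu_indices_from(sub, k=1)].mean()

    for i, a in enumerate(names):
        for b in names[i + 1:]:
            hetero = corr.loc[blocks[a], blocks[b]].values.mean()
            out.loc[a, b] = hetero / np.sqrt(mean_monotrait(blocks[a]) * mean_monotrait(blocks[b]))
    return out
```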
Assessment of DV.
Note: SAIL = sustainable AI learning; SAISe = sustainable AI seizing; SAISn = sustainable AI sensing; SAIT = sustainable AI transforming; SI = stakeholder integration; SINN = sustainable innovation.
Given that the OSAIC construct was conceptualized through five first-order dimensions (sustainable AI sensing, seizing, transforming, learning, and stakeholder integration), we used the two-stage PLS–SEM approach recommended by Becker et al. (2012) and Sarstedt et al. (2019). In the first stage, we computed latent variable scores (LVS) for the five dimensions; in the second stage, we used these scores as indicators of OSAIC, the higher-order construct. This procedure is widely recommended when structural models include higher-order constructs because it enhances estimation accuracy and prevents complications from model collinearity and complexity (Becker et al., 2012).
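The sketch below illustrates the logic of the two stages. SmartPLS derives optimally weighted latent variable scores internally, so the unit-weighted composites used here are only a rough stand-in, and the file name and prefix-based item blocks are assumptions.

```python
import pandas as pd

# Stage 1 (done in SmartPLS): estimate the first-order measurement models and
# export latent variable scores for the five dimensions. Unit-weighted
# composites are used here purely as a simple proxy for those scores.
items = pd.read_csv("main_study.csv")  # hypothetical item-level data
lvs = pd.DataFrame({
    dim: items.filter(regex=f"^{dim}\\d").mean(axis=1)
    for dim in ["SAISn", "SAISe", "SAIT", "SAIL", "SI"]
})

# Stage 2: the five dimension scores become the reflective indicators of the
# higher-order OSAIC construct, which is then linked to sustainable innovation.
lvs.to_csv("stage2_indicators.csv", index=False)
```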
Table 8 presents the reliability, convergent validity, and overall model fit results of the OSAIC scale as a higher-order construct. The outer loadings of all five first-order dimensions on OSAIC ranged between 0.877 and 0.896, above the 0.70 cutoff. Additionally, the CA and CR of OSAIC were 0.934 and 0.935, respectively, above 0.70, and the AVE of 0.790 exceeded the 0.50 cutoff (Hair et al., 2019). Further, the dimensions' VIF values ranged between 2.917 and 3.272, below the 3.3 cutoff, indicating no collinearity issues (Hair et al., 2019). Moreover, the model fit indices were strong: the standardized root mean square residual (SRMR) = 0.038, far below the 0.08 cutoff; the squared Euclidean distance (d_ULS) = 0.030 and geodesic distance (d_G) = 0.023 were close to zero, indicating a good fit (Cho et al., 2020); and the normed fit index (NFI) = 0.969 exceeded both the 0.90 and 0.95 benchmarks (Bentler & Bonett, 1980), indicating an excellent fit. The higher-order construct model thus achieved excellent overall fit, reinforcing the robustness of the measurement model.
Assessment of OSAIC reliability, validity, and model fit.
Model fit indices: SRMR = 0.038; d_ULS = 0.030; d_G = 0.023; NFI = 0.969.
Table 9 presents the discriminant validity results for OSAIC and sustainable innovation. The results meet the Fornell–Larcker criterion, as the square roots of the AVE of OSAIC (0.889) and sustainable innovation (0.858) exceeded their inter-construct correlation (0.495). Further, the HTMT value of 0.535 was below the conservative cutoff of 0.85, providing further evidence of discriminant validity. These findings verify that OSAIC is empirically distinguishable from sustainable innovation while retaining a theoretical link congruent with the DCT.
Assessment of nomological validity (n = 364)

We examined the structural relationship between OSAIC and sustainable innovation to evaluate nomological validity. Following the DCT of Teece et al. (1997), we postulated that organizations with higher OSAIC exhibit higher levels of sustainable innovation. The stage-2 findings in Table 10 and Fig. 2 support this proposition (H1): OSAIC has a positive and significant impact on sustainable innovation (β = 0.495, p < .001), explaining 24.5 % of its variance. In addition, predictive relevance was validated, as Q² = 0.241 exceeds the cutoff of zero, confirming meaningful predictive accuracy. Moreover, the effect size (f² = 0.235) exceeds the 0.15 threshold for a medium effect and approaches the 0.35 benchmark for a large effect, confirming that OSAIC contributes substantially to sustainable innovation. These results show that OSAIC behaves as theoretically predicted, thereby confirming its nomological validity.
Notably, sustainable innovation was represented as a composite variable with latent variable scores in the two-stage higher-order construct procedure. Therefore, the SmartPLS output displayed "NaN" for the sustainable innovation outer loadings in the structural model diagram. This is expected in reflective higher-order construct estimation and does not signal a statistical problem. The measurement properties of SINN were already validated in the first-order model (see Table 6), ensuring adequate preparation for the subsequent structural analysis.
Additional robustness tests

PLS predict

To enhance the robustness of the OSAIC measurement model, and in addition to the multigroup analysis, predictive validity was examined using the PLSpredict approach (Shmueli et al., 2019). As Table 11 shows, the Q²_predict value of 0.239 exceeds zero, signifying that the model possesses substantial predictive relevance. Furthermore, the PLS–SEM values of RMSE = 0.874 and MAE = 0.713 were lower than those of the linear regression benchmark (RMSE = 0.882; MAE = 0.715), indicating that the OSAIC model offers better predictive accuracy than a naive linear model. Collectively, these results substantiate the predictive validity of the OSAIC construct, offering additional evidence that the scale not only fits the data but also retains out-of-sample predictive capability.
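The logic of this benchmark comparison can be sketched with a simple cross-validated regression, as below. This is not the full PLSpredict algorithm, which generates indicator-level predictions from the PLS model and computes Q²_predict against each training fold's mean; the file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error, mean_squared_error

df = pd.read_csv("stage2_scores.csv")  # hypothetical OSAIC and SINN scores

X = df[["OSAIC"]]
y = df["SINN"]

# Ten-fold out-of-sample predictions, in the spirit of PLSpredict's holdout scheme.
pred = cross_val_predict(LinearRegression(), X, y, cv=10)
rmse = float(np.sqrt(mean_squared_error(y, pred)))
mae = mean_absolute_error(y, pred)
print(f"RMSE = {rmse:.3f}, MAE = {mae:.3f}")

# Q2_predict compares holdout errors against a mean-only benchmark
# (the full procedure uses each training fold's mean; simplified here).
q2_predict = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"Q2_predict = {q2_predict:.3f}")
```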
Multi-group analysis (MGA)

We conducted MGA by splitting the main study sample (n = 364) into two groups: Asia (n = 168) and non-Asia (n = 196). Before testing group differences, we checked the measurement model in each group individually (Appendix B). The findings confirmed that all constructs satisfy the measurement model cutoffs (CA and CR > 0.70, AVE > 0.50), supporting group comparisons. In addition, the structural model results confirmed OSAIC's significant impact on sustainable innovation in both regions (β = 0.368, p < .001 for Asia; β = 0.597, p < .001 for non-Asia). However, the MGA estimates indicated a statistically significant difference between the two regions (Δβ = –0.228, p = .008), implying that OSAIC has a stronger impact on sustainable innovation in non-Asian contexts (see Appendix C, Table A-C).
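A simplified analogue of the group-difference test is sketched below as a permutation test on the standardized path (which, with a single predictor, equals the Pearson correlation); SmartPLS's MGA uses bootstrap-based procedures, and the data layout here is hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.read_csv("stage2_scores.csv")  # hypothetical OSAIC, SINN, region columns

def path(d: pd.DataFrame) -> float:
    # Standardized slope for a single predictor equals the Pearson correlation.
    return d["OSAIC"].corr(d["SINN"])

labels = (df["region"] == "Asia").to_numpy()
observed = path(df[labels]) - path(df[~labels])

# Permutation test: shuffle group labels and rebuild the path difference.
perm = []
for _ in range(5000):
    shuffled = rng.permutation(labels)
    perm.append(path(df[shuffled]) - path(df[~shuffled]))

p_value = (np.abs(perm) >= abs(observed)).mean()  # two-sided p-value
print(observed, p_value)
```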
Discussion and conclusion

The findings provide strong empirical support for the OSAIC construct and its theoretical contribution to sustainable innovation. In alignment with DCT assumptions (Teece et al., 1997), the study finds that companies with advanced sustainable AI capabilities (sensing, seizing, transforming, learning, and stakeholder integration) are more likely to produce innovation-oriented outcomes, particularly sustainable innovation. This is consistent with earlier studies demonstrating that companies that utilize digital capabilities innovate better by restructuring their processes and resources (Nasiri et al., 2023; Nour & Arbussà, 2024). Moreover, by explicitly incorporating sustainability into these routines, OSAIC extends the existing digital dynamic capabilities literature, which typically overlooks AI's environmental and social externalities (Florek-Paszkowska & Ujwary-Gil, 2025; Sjödin et al., 2023).
The significant and positive effect of OSAIC on sustainable innovation (β = 0.495, t = 11.501, p < .001, R² = 0.245) reveals that sustainability-focused AI governance is a facilitator of innovation rather than a barrier to it. This finding supports ambidexterity theory, which posits that organizations can pursue exploration and exploitation simultaneously through integrative routines (O’Reilly & Tushman, 2008). In this study, organizations mitigate the AI sustainability paradox, meeting the competing goals of AI-led growth and sustainability risk management (Mancuso et al., 2025), by institutionalizing sustainability protections in their AI capabilities. Earlier studies suggest that companies often view sustainability and innovation as competing imperatives (Calic et al., 2020; Florek-Paszkowska & Ujwary-Gil, 2025); however, our findings show that capabilities specifically developed to balance both paradigms can produce synergistic outcomes.
Furthermore, the robustness checks confirm this result. The PLSpredict results indicated that OSAIC demonstrates predictive validity superior to linear benchmarks, aligning with Shmueli et al.’s (2019) recommendation to evaluate the out-of-sample performance of a newly developed construct. This further confirms that OSAIC is not only statistically sound but also practically useful for predicting and enhancing sustainable performance (Yu et al., 2024). In addition, MGA identified regional variations: the OSAIC–sustainable innovation relationship was significantly stronger in non-Asia than in Asia. This result is consistent with institutional theory, which highlights that contextual features such as regulations and stakeholder demands affect the effectiveness of capabilities (Daddi et al., 2016). More stringent sustainability regulations and greater stakeholder demands in Europe and North America may explain why OSAIC shows a larger association with sustainable innovation in non-Asian cases than in Asian ones, where the implementation of sustainability regulations varies (Doh & Guay, 2006; Kinderman, 2020).
Theoretical implications

First, this research extends the DCT of Teece (2007) and Teece et al. (1997) by positioning sustainability as a meta-capability that permeates AI governance. Although earlier studies have extended the DCT to digital transformation (Ghosh et al., 2022; Held et al., 2025) and sustainability initiatives (Ortiz-Avram et al., 2025), these domains have largely developed in parallel. The development of OSAIC integrates these perspectives by incorporating sustainability into the regular routines of sensing, seizing, and transforming (Engelmann, 2024; Quayson et al., 2023), further enriched by learning and stakeholder integration. Furthermore, it responds to recent calls for more integrative studies of organizational adaptation to technological disruption that simultaneously address serious societal problems (Faruque et al., 2024).
Second, this study contributes to the paradox and organizational ambidexterity literature by presenting an empirical depiction of how firms can manage the AI sustainability paradox (Smith & Lewis, 2011), that is, the contradiction between AI's promise for driving sustainability and its related environmental and social threats (Calic et al., 2020; Mancuso et al., 2025). Contrary to prior work that often portrays sustainability as a paradigm competing with or restricting innovation (Hervas-Oliver et al., 2024), our findings suggest that embedding sustainability in AI capabilities can drive innovation (Arroyabe et al., 2024). This finding provides empirical grounding for ambidexterity assumptions (O’Reilly & Tushman, 2008), demonstrating that exploration and exploitation can coexist through deliberate capability building.
Finally, this study makes important methodological contributions through the formulation and validation of a new multidimensional OSAIC measurement instrument. The higher-order reflective construct was thoroughly assessed using established scale development techniques (Hinkin, 1995), measurement validation (Wang & Chuang, 2024), predictive ability checks (Shmueli et al., 2019), and robustness checks, including multi-group comparisons. Our scale provides a stable instrument for researchers to empirically examine how organizations implement and deploy OSAIC. It also opens avenues for future investigations of boundary conditions (e.g., cultural settings, industry type, and regulatory environments), tests of moderation and mediation mechanisms, and examinations of OSAIC's relationships to broader outcomes such as ethical compliance, organizational performance, and stakeholder trust. Moreover, this scale provides a solid basis for driving research at the nexus of AI, sustainability, and strategic management as AI reshapes industries and society.
Practical implications

The validated OSAIC scale provides managers with a diagnostic instrument to evaluate their organization's preparedness for sustainable AI implementation. This study’s findings indicate that organizations exhibiting elevated OSAIC levels are more capable of generating innovation from their AI investments when they emphasize sustainability rather than regarding it merely as a compliance issue. This study presents several practical suggestions.
First, firms should integrate sustainability within their AI governance frameworks. This study establishes that OSAIC not only mitigates the environmental and social risks of AI but also accelerates innovation performance. This finding challenges the long-established orthodoxy of a sustainability–performance trade-off: incorporating sustainability metrics helps firms seize opportunities (e.g., pursuing environmentally sustainable AI projects), sense risks (e.g., tracking the energy consumption of AI), and transform operations (e.g., reorganizing them in alignment with the circular economy).
Second, the significant role of stakeholder integration within OSAIC highlights the importance of external engagement. Firms must move beyond formal stakeholder consultation toward genuine co-design, collaborative audits, and transparency tools. Embracing these practices increases legitimacy and learning, which our study confirmed as having a positive impact on sustainable innovation outcomes.
Third, the MGA reveals regional differences that offer useful contextual insights. In non-Asian regions, where stakeholder influences and regulatory structures are relatively robust, the association between OSAIC and sustainable innovation is stronger. This suggests that organizations operating in Asia should strengthen their sustainability governance arrangements to meet international standards and maintain competitiveness in international markets. Furthermore, Asian policymakers and business groups can help by introducing stronger regulatory arrangements and incentive systems that encourage firms to develop sustainability-oriented AI capabilities. In addition, for policymakers and regulators, the OSAIC framework offers a valuable lens for designing guidelines and assessment tools that go beyond technical compliance.
Lastly, for practitioners across territories, the OSAIC scale provides a practical instrument for benchmarking and capability development. Managers can use this scale to identify capability gaps, prioritize expenditure on education programs and stakeholder engagement, and track progress over time. By aligning OSAIC scores with innovation performance, organizations can improve the quality of their strategic decisions on AI governance and their coherence with sustainability goals.
Limitations and future directions for researchers

First, this study is based on data collected through a cross-sectional, self-reported survey with a single informant per firm. Same-source common method variance may inflate associations and precludes causal interpretation. Although both procedural (anonymity and attention checks) and statistical remedies (Harman's single-factor test and VIF) suggest that bias is not a major issue, it cannot be ruled out completely. As such, the magnitude of the OSAIC–sustainable innovation association should be interpreted cautiously as associative rather than causal. Future work could enhance validity through several methods. First, scholars are advised to adopt multi-informant designs (e.g., chief information officers/AI leaders, sustainability officers, and operations/engineering staff) and report interrater agreement and aggregation statistics (e.g., the intraclass correlation coefficient ICC(1) for individual-level reliability, ICC(2) for group-level reliability, and the within-group agreement index rwg). Second, studies could triangulate external or archival outcomes, such as environmental, social, and governance (ESG), corporate social responsibility, or Global Reporting Initiative reports; supplier audits; green patent counts; independent algorithmic impact assessments; cloud/energy metering data for AI workloads; and third-party sustainability ratings. Third, other design remedies (e.g., marker variables, matched datasets, and temporal separation) are also encouraged.
Second, the sampling technique, purposive and snowball recruitment through LinkedIn and professional networks, introduced a bias toward technology-oriented industries with high AI adoption and toward respondents from Asia, Europe, and North America. Although this was intentional to facilitate access to domain expertise, it limits generalizability to SMEs and low-AI contexts. Future research should utilize probability or panel-based sampling with quotas by industry, company size, and region (including under-represented contexts such as Africa and supply-chain-dominated industries), and consider weighting to increase representativeness.
Third, although the study conducted MGA and PLSpredict tests to demonstrate robustness across regions, the group sample sizes were relatively small, and the Asia vs. non-Asia dichotomy may obscure finer contextual variations. Future studies require larger, balanced groups and measurement checks suitable for PLS (e.g., the measurement invariance of composite models [MICOM] procedure for configural and compositional invariance) to ascertain whether the OSAIC measurement generalizes across cultures and regulatory environments before comparing structural paths.
Fourth, although this work supported nomological validity by observing the positive and significant impact of OSAIC on sustainable innovation, it was restricted to a single outcome construct. Future studies should investigate how OSAIC affects other theory-relevant outcomes such as AI ethical compliance, organizational performance, and stakeholder trust. Moreover, investigating mediators (e.g., process reconfiguration, sustainable AI learning) and moderators (e.g., digital maturity, environmental uncertainty, regulatory pressure, top management commitment) can further clarify theoretical mechanisms and boundary conditions.
Lastly, we estimated PLS–SEM under a reflective higher-order specification, appropriate for scale development under a predictive perspective. Future studies should cross-validate the scale using covariance-based or Bayesian structural equation modeling to compare model fit and factor structure, and test different estimators of higher-order constructs (repeated indicators, hybrid, and two-stage approaches). Moreover, employing longitudinal or multi-wave designs (e.g., cross-lagged panels) is suggested to evaluate capability development over time and strengthen causal inference.
Conclusion

As AI becomes increasingly central to organizational decision-making, its sustainability implications warrant greater scholarly and managerial attention. This study developed and validated the OSAIC scale as a reflective higher-order construct comprising five dimensions: sustainable AI sensing, seizing, transforming, learning, and stakeholder integration. Grounded in the DCT, we posit that OSAIC is a sustainability-oriented meta-capability that enables organizations to manage the AI sustainability paradox, that is, the tension between leveraging AI for growth and reducing its ecological and social impacts. The development of OSAIC followed a multi-phase process involving item generation from the literature and expert interviews, a Q-sorting procedure, EFA, and PLS–SEM validation. The OSAIC scale demonstrated strong psychometric properties. The results validate that OSAIC significantly influences sustainable innovation, confirming its practical and theoretical relevance. Additional robustness tests using PLSpredict and MGA further supported the scale's predictive validity and demonstrated contextual differentiation across Asia, Europe, and North America. This work expands the DCT by incorporating sustainability as a meta-capability that bolsters ambidexterity and the paradox perspective, highlighting how companies can balance competing demands and providing future studies with a validated scale.
Declarations

Authors’ contributions

SAQ conceived the study model. FS designed the online surveys and conducted the expert meetings. SAQ and FS interpreted the data and revised the manuscript critically for important intellectual content. SAQ and FS developed the research design and analysis. SAQ wrote the initial draft of the article, and FS made substantial contributions. SAQ and FS made critical comments and amendments.
Ethical approval

This study was approved by Liaocheng University's Ethical Committee, with approval number LU-ECBS-2024–005. Before approval, this research proposal, including its aims, research design, and approaches to participant involvement, underwent a rigorous review against the ethical standards outlined in the Ethical Principles of Psychologists and the Code of Conduct of the American Psychological Association (APA). The assigned approval confirms that the research design, participants’ recruitment approach, and data handling practices align with the ethical requirements for research involving human participants.
Informed consent

Before participation, all respondents were presented with a detailed informed consent form explaining the purpose of the study, their rights as participants, and assurances of anonymity and confidentiality. The form clearly stated that participation was entirely voluntary, that no personally identifiable information would be collected, and that participants could withdraw at any point without consequence. Consent was obtained electronically by requiring respondents to confirm their agreement before proceeding with the questionnaire. The consent materials were also reviewed by the experts involved in our study. The study adhered to the ethical standards of academic research involving human participants, in line with the Declaration of Helsinki and institutional guidelines.
Data availability statement

The datasets used for exploratory factor analysis will be made available upon request, subject to the mutual consensus of the co-authors, which the nature of the request will determine. However, no information related to the participants or their demographics will be provided in any form.
CRediT authorship contribution statement

Sikandar Ali Qalati: Writing – review & editing, Writing – original draft, Validation, Supervision, Software, Resources, Project administration, Investigation, Funding acquisition, Conceptualization. Faiza Siddiqui: Writing – review & editing, Writing – original draft, Software, Methodology, Investigation, Formal analysis, Data curation.
Competing interests

The authors declare that they have no competing interests.
Funding

This study is supported by the Research on the Economic Performance and Innovation Performance of Chinese Manufacturing Enterprises [321052320].
Appendix A. Initial item pool (62 items)

1. Sustainable AI sensing
From literature
Q1. We monitor regulatory changes (e.g., EU AI Act) affecting AI sustainability.
Q2. Our organization tracks the energy consumption of AI model training.
Q3. We assess potential bias in AI systems before deployment.
Q4. We benchmark our AI sustainability practices against industry standards.
Q5. We use tools to predict the environmental impact of new AI projects.
Q6. We evaluate the carbon footprint of third-party AI tools before adoption.
Q7. Our AI strategy includes alignment with the UN Sustainable Development Goals.
Q8. We conduct lifecycle assessments for AI systems.
From interviews
Q9. We have a dedicated team to evaluate AI’s social risks (e.g., job displacement).
Q10. Sustainability criteria are included in AI procurement checklists.
Q11. We audit data sources for ethical and environmental concerns.
Q12. We track AI’s water usage in data centers.
Q13. Local community impacts are assessed before AI deployment.
2. Sustainable AI seizing
From literature
Q14. We allocate budget specifically for sustainable AI R&D.
Q15. Leadership prioritizes funding for AI projects that reduce carbon emissions.
Q16. Employees are incentivized to develop energy-efficient AI solutions.
Q17. We partner with vendors who prioritize renewable energy for AI workloads.
Q18. AI project approvals require sustainability impact assessments.
Q19. Our AI investment decisions consider long-term environmental costs.
Q20. We sponsor hackathons for sustainable AI innovation.
From interviews
Q21. Our AI teams have sustainability KPIs alongside performance metrics.
Q22. We prioritize open-source AI models to reduce duplication of energy-intensive training.
Q23. Internal grants support AI projects addressing environmental justice.
Q24. AI vendors must disclose energy efficiency certifications.
Q25. We allocate compute resources based on sustainability thresholds.
3. Sustainable AI transforming
From literature
Q26. We redesign workflows to minimize energy use in AI inference.
Q27. Our data centers use renewable energy for AI computations.
Q28. We reuse or adapt existing AI models to avoid redundant training.
Q29. Hardware upgrades prioritize energy-efficient processors for AI tasks.
Q30. We optimize AI algorithms to reduce computational waste (e.g., pruning, quantization).
Q31. AI model training is scheduled during low-carbon energy availability.
Q32. We adopt federated learning to reduce data center reliance.
From interviews
Q33. We repurpose decommissioned AI hardware for less resource-intensive tasks.
Q34. AI model refresh cycles consider environmental trade-offs.
Q35. We collaborate with suppliers to reduce AI supply chain emissions.
Q36. Cooling systems in data centers are optimized for energy efficiency.
Q37. We mandate energy caps for AI training jobs.
4. Sustainable AI learning
From literature
Q38. We document lessons from AI sustainability failures (e.g., bias incidents).
Q39. Our organization maintains a repository of sustainable AI best practices.
Q40. AI teams conduct post-mortems on the environmental impact of projects.
Q41. We train employees on sustainable AI development practices.
Q42. Sustainability metrics are included in AI performance reviews.
Q43. We publish case studies on sustainable AI implementations.
Q44. External audits inform our AI sustainability policies.
From interviews
Q45. We host cross-departmental forums to share sustainable AI innovations.
Q46. External audits of AI systems inform internal policy updates.
Q47. We publish annual reports on AI’s sustainability performance.
Q48. AI ethics review boards include sustainability experts.
Q49. Lessons from AI incidents are shared industry-wide.
5. Stakeholder integration
From literature
Q50. We consult local communities before deploying AI systems that affect them.
Q51. NGOs are involved in auditing our AI systems for fairness.
Q52. We establish grievance mechanisms for AI-related sustainability harms.
Q53. Our AI ethics board includes representatives from marginalized groups.
Q54. We disclose AI’s environmental impacts to investors and regulators.
Q55. Labor unions are engaged in AI-driven automation decisions.
Q56. We partner with academia to address AI sustainability gaps.
From interviews
Q57. Indigenous groups are engaged in AI projects using their data.
Q58. Labor unions participate in decisions about AI-driven automation.
Q59. We co-design AI solutions with communities impacted by climate change.
Q60. Customers can opt out of high-emission AI services.
Q61. We compensate communities affected by AI data collection.
Q62. Suppliers must adhere to our AI sustainability charter.