Artificial intelligence (AI)-based innovation relies on personal data sets stored by the National Health System. The personal data required by AI algorithms can be accessed by third parties pursuing different interests, especially Big Tech companies (e.g., Google, Microsoft and Amazon), because there are no adequate public infrastructures to store and process such vast amounts of data. This is not a trivial issue at a time when market rules are gaining ground in public healthcare. Indeed, two of the three 2024 Nobel laureates in Chemistry were Google DeepMind scientists, recognized for their contribution to predicting protein structures with AI (AlphaFold).
Personal data are the basis of a new economy and an opportunity. Regulating access to personal data in healthcare so that AI-based innovation does not aggravate or perpetuate structural inequalities and discrimination is a formidable challenge. Whoever holds the personal data holds the power. The key lies in giving value to personal data instead of commodifying them. Initiatives and regulations are already in place to establish impact assessments for data protection and fundamental rights in the use of AI systems.1 Their intentions are praiseworthy, but their implementation is complex. In many cases, healthcare organizations have not invested the time and resources needed to assess a paradigm based on the stockpiling and intensive use of personal data, and have thus failed to revisit approaches that this technology has rendered outdated. In this scenario, preserving anonymity in the face of such an accumulation of personal data is no longer possible, and participant information and informed consent procedures must be reviewed, as illustrated below.
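To make the loss of anonymity concrete, the following sketch (an illustration, not drawn from the text) computes k-anonymity, a standard re-identification metric: the size of the smallest group of records sharing the same quasi-identifier values. The records and attribute names are invented; the point is that accumulating even a few innocuous attributes quickly singles individuals out.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size when records are grouped by the given
    quasi-identifier attributes: the 'k' in k-anonymity."""
    groups = Counter(
        tuple(r[a] for a in quasi_identifiers) for r in records
    )
    return min(groups.values())

# Toy records: a handful of innocuous attributes can single people out.
records = [
    {"zip": "08001", "birth_year": 1974, "sex": "F", "dx": "E11"},
    {"zip": "08001", "birth_year": 1974, "sex": "F", "dx": "I10"},
    {"zip": "08002", "birth_year": 1974, "sex": "M", "dx": "E11"},
    {"zip": "08002", "birth_year": 1981, "sex": "M", "dx": "J45"},
]

print(k_anonymity(records, ["zip"]))                       # 2
print(k_anonymity(records, ["zip", "birth_year", "sex"]))  # 1 -> re-identifiable
```

As more attributes accumulate per person, k tends toward 1, which is why "anonymized" health data sets are increasingly re-identifiable in practice.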
Though essential for the Spanish National Health System, innovation is often undertaken without an assessment of its ethical and societal aspects, and without prioritizing the protection of individuals’ rights and freedoms, including their data. Data governance should go beyond mere legal compliance, as it is also an ethical and a reputational matter. At stake here is who deserves access to personal data, and for what purposes, amid policies that promote their secondary use.2 The objective is to help Europe, from the healthcare sector as well, win the “geo-AI-political” race against the USA and China.
The purpose of AI-based healthcare innovation is to advance toward personalized medicine,3 more efficient healthcare systems, and active aging and wellbeing, among other political goals. AI is a present reality, not a future to imagine. It has been routinely applied in hospitals for some time, e.g., to read the content of medical records and to make predictions.4 The aim is to improve decision-making through the use of personal data, in a system grounded in solidarity, with research as its cornerstone. An ethical underpinning is now being sought for these uses of AI, one that directly concerns the quality of care and research.
Given its characteristics, its biases and its impact on privacy, individual autonomy and the free development of personality, the use of AI in research requires new dynamics in terms of how it works and how it should be reviewed. We are already mediated by AI: we can configure AI, but AI modulates us. In healthcare, moreover, any innovation should be preceded by research, as one cannot be separated from the other. Indeed, research ethics committees have been called upon to review these projects to provide a sort of ‘quality stamp’ for these innovations. However, these uses face several challenges, as the quality of the data is not always optimal. Moreover, data are often not findable, accessible, interoperable or reusable (FAIR).5 If these technical issues are not solved, the ethical discussion may be futile. There is a pressing need to resolve this situation in order to realize the European Health Data Space. Policies and regulations in this area refer to ethics and governance as essential.
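As a rough illustration of what checking FAIR compliance could look like in practice, here is a minimal checklist applied to a dataset’s metadata record. The field names (identifier, access_url, vocabulary, license) and the accepted vocabularies are assumptions of this sketch, not a standard schema.

```python
def fair_report(meta: dict) -> dict:
    """Very coarse FAIR checklist over a dataset metadata record."""
    return {
        "findable":      bool(meta.get("identifier")),   # persistent ID (e.g., a DOI)
        "accessible":    bool(meta.get("access_url")),   # retrievable via a standard protocol
        "interoperable": meta.get("vocabulary") in {"SNOMED CT", "ICD-10", "LOINC"},
        "reusable":      bool(meta.get("license")),      # explicit usage licence
    }

meta = {
    "identifier": "doi:10.1234/abcd",
    "access_url": "https://repo.example/ds1",
    "vocabulary": "ICD-10",
    "license": None,
}
print(fair_report(meta))  # 'reusable' is False: without a licence, reuse is blocked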
In light of the above, we need to define the concept of governance advocated by Spain's National Health System in its Digital Strategy when referring to the use of AI. To move forward, capacity building in data ethics should be offered to decision-makers and to the professionals in charge of data processing. Moreover, the interests of data subjects should be aligned with health innovation processes from the inception of any project. In particular, guaranteeing the transparency of the health innovation system, and not just of algorithms, should be a priority; in turn, this compels us to identify the threshold of risk we are willing to accept. AI-based health innovation generates enormous financial benefits and is a source of conflicts of interest. In addition, the governance of AI systems faces the challenge of applying the principles of data protection,6 in particular data minimization, as stockpiling by default has become the new normal in every organization. Also applicable is the security principle, which involves deploying the technical and organizational measures that enable data protection. This operative part is unfortunately perceived by many organizations as a burden and an unnecessary additional cost, because the intangible nature of personal data processing makes the significance of risk management hard to grasp.
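By way of illustration, data minimization and pseudonymization can be as simple in principle as the sketch below: keep only the fields required for a stated purpose and replace direct identifiers with keyed pseudonyms before the data enter any pipeline. The field names, the purpose, and the key handling are hypothetical.

```python
import hashlib
import hmac

# Fields actually needed for the stated purpose (say, a readmission model);
# everything else is dropped rather than stockpiled by default.
NEEDED = {"age", "diagnosis", "readmitted"}
SECRET_KEY = b"placeholder-key"  # in reality: rotated and stored in a vault

def minimize(record: dict) -> dict:
    """Keep only required fields and replace the direct identifier with a
    keyed pseudonym (HMAC), so raw IDs never enter the pipeline."""
    out = {k: v for k, v in record.items() if k in NEEDED}
    out["pid"] = hmac.new(SECRET_KEY, record["patient_id"].encode(),
                          hashlib.sha256).hexdigest()[:16]
    return out

raw = {"patient_id": "NHS-000123", "name": "Jane Doe", "address": "C/ Mayor 1",
       "age": 67, "diagnosis": "I50", "readmitted": True}
print(minimize(raw))  # name and address are gone; patient_id is pseudonymized
```

The design choice here is the essence of the minimization principle: data not collected or not retained cannot be breached, misused, or demanded by third parties.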
AI in healthcare must be understood as a tool to support human decision-making, not as a replacement for it. Responsibility rests with humans, and so it is humans who must correct and oversee AI systems.7 Understood this way, AI need not cost human jobs; on the contrary, interdisciplinary teams will become necessary, including profiles in data science and the social sciences.
Those who wield the power to decide which AI-based innovations will be adopted in healthcare should understand the impact and the risks of these systems for individual rights. They should also understand the value of ethical management in the use and development of AI systems.
Funding
This publication is part of the I+D+i Project PID2022-138615OB-I00, funded by MICIU/AEI/10.13039/501100011033 and FEDER/UE.