Annals of Hepatology
Vol. 30. Issue S2.
Abstracts of the 2025 Annual Meeting of the ALEH
(September 2025)
#90
CONCORDANCE BETWEEN EXPERT GASTROENTEROLOGISTS AND ARTIFICIAL INTELLIGENCE TOOLS IN SOLVING HEPATOLOGY CLINICAL CASES
Jesús Ignacio Mazadiego Cid1, María del Rosario Herrero Maceda1, Paloma Montserrat Diego Salazar2, Rogelio Zapata Arenas2, Scherezada María Isabel Mejía Loza1, Juanita Pérez Escobar1, María Fátima Higuera de la Tijera2, Elías Artemio San Vicente Parada1, Raquel Yazmín López Pérez2, Felipe Zamarripa Dorsey3, Yoali Maribel Velasco Santiago2, Adriana López Luria3, Moises Coutiño Flores1, Alejandra Díaz García1
1 Hospital Juárez de México.
2 Hospital General de México.
3 Hospital Ángeles Lomas, México.
Introduction and Objectives

Evidence regarding the utility of artificial intelligence (AI) for the diagnosis of clinical cases in gastroenterology is limited, and it is even scarcer in hepatology.

To determine the concordance between the responses of various AI models and those of specialist physicians in the resolution of hepatology clinical cases.

Materials and Methods

This was a clinical, observational, analytical, and prospective study. The assessment instrument comprised six hepatology clinical cases, each featuring five questions. A panel of eight experts from different institutions was convened, and their individual responses were subjected to calculation of the kappa coefficient (κ) and Cronbach's alpha. Items that failed to meet the validation threshold (≥ 80 % agreement and κ ≥ 0.6) were reviewed through iterative rounds of a modified Delphi method. Finally, κ was calculated to evaluate concordance between the responses generated by the AI models and the expert consensus.
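As a minimal sketch of the agreement statistic named above, the following computes Cohen's kappa for two answer sets over the same items (e.g. one AI model versus the expert consensus), assuming each question has a single categorical answer; the function name and example answer labels are illustrative, not from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the agreement expected from each rater's marginal
    answer frequencies.
    """
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must answer the same items")
    n = len(rater_a)
    # Observed proportion of items on which the two raters agree.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from marginal answer frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical example: a model's answers vs. the expert consensus
# on four multiple-choice items.
model  = ["A", "B", "A", "B"]
expert = ["A", "A", "A", "B"]
print(round(cohens_kappa(model, expert), 3))  # → 0.5
```

Values near 1 indicate near-perfect agreement beyond chance; the κ ≥ 0.6 threshold used for item validation corresponds to the conventional "substantial agreement" band.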

Results

The expert consensus demonstrated high overall concordance (κ = 0.901; 95 % CI [0.860, 0.943]; z = 61.57; p < 0.001). Individual model concordance ranged from moderate to substantial, with κ values between 0.539 (Meditron-7B) and 0.784 (ChatGPT-4.0 and ChatGPT-4.0 Turbo), all statistically significant. In terms of the percentage of correct responses, the highest-performing models were ChatGPT-4.0, ChatGPT-4.0 Turbo, and Deepseek-R1 (Figure 1).

Conclusions

Moderate to substantial concordance was observed between the diagnoses generated by different AI models and expert judgment in hepatology clinical cases, although performance varied across the evaluated systems.

Full Text

Conflict of interest: None

Figure 1
