Artificial intelligence (AI) represents one of the greatest advances of recent years and is considered a tool that can facilitate many areas of life by performing tasks that traditionally require human intelligence. One particular type of AI model is the large language model (LLM), such as the Generative Pre-trained Transformer, better known as ChatGPT.1,2 This model is based on a linguistic structure that mimics human processing skills, even adding creativity to its wording.
In the most recent literature, articles have been published exploring the present and future role of AI in fields such as medicine, and specifically in general surgery. Its use as a complement to surgical decision-making, the choice of techniques, and even the postoperative management of possible complications3 has been described in several studies, suggesting that AI not only has a presence but is emerging as an ally in daily clinical practice rather than representing a threat.
Now, what could its usefulness be in the scientific literature? Let us discuss the possible futures, both ideal and catastrophic, of ChatGPT's contribution to our publications.
Writing scientific papers is one of the most time-consuming and challenging tasks in research. Conveying the interpretation of results concisely and clearly is a skill that appears to be intrinsic to the human mind: knowing what to communicate, and how, requires multiple associations derived from neural connections. In this sense, ChatGPT enables us to improve our language and our ability to express and communicate ideas and research results. This speeds up the publishing process and allows results to become available faster. ChatGPT is capable of writing scientific texts, following a predefined structure, identifying which data to synthesize, and generating thousands of scientific articles en masse.
This ease of production could encourage more data to be published in the community, reducing the time and mental effort involved in crafting articles. Consequently, publication bias could be reduced, as studies with "negative" results, or studies written by non-English-speaking scientists,4 would become easier to draft and publish, promoting equity and diversity in scientific research.
However, using ChatGPT in scientific writing carries the risk of generating superficial, inaccurate, or incorrect content. This lack of veracity in published results, and the reasons why they are published, has already been highlighted by Ioannidis.5 It is also crucial to consider the risk of research fraud associated with ChatGPT, such as ghostwriting or the production of fake or falsified research.6,7 In addition, legal issues related to copyright8 may arise: currently, the ICMJE and COPE guidelines do not recognize ChatGPT as an author, owing to the legal liability that authorship entails and that ChatGPT cannot fulfil.9
The human mind remains an enigma, of which we understand only the slightest fraction. We still do not know how our own neural networks work, or how they build up connections to draw conclusions from simple ideas, something that ChatGPT, using algorithms we ourselves have designed, attempts to emulate. Can a scientific discussion generated by ChatGPT be comparable in quality to one written by an expert surgeon? Will articles produced by ChatGPT be of equal quality to those created by humans? If we do not yet fully understand our own way of thinking, how will we be able to convey and replicate it in a mathematical algorithm?
The overproduction of articles also poses a challenge, given that readers already struggle to assimilate and study all the scientific literature in a single subspecialty. How will we cope with this flood of additional information?
These considerations highlight the need to proactively address this revolution in scientific publishing by implementing appropriate regulatory policies. Given the complexity and obvious limitations of these tools, their application in both clinical practice and research faces considerable challenges. While it is essential for the scientific community to clearly understand the limits and capabilities of ChatGPT, it is a new tool that may facilitate both the production of articles and the ability to discern a good publication from a bad one. It is only a matter of time before we identify the specific tasks and areas in which its use may be appropriate, while carefully considering the potential challenges and constraints associated with it.
In conclusion, AI is increasingly present in our daily lives and is facilitating clinical practice; in the scientific literature, however, its impact remains questionable. Will ChatGPT be able to simulate complex human neural connections? Will it improve our algorithms and conclusions? Will the quality of the scientific literature decline because of an overproduction of articles? Although the answers are still uncertain, we are witnessing the first results of a tool that we will need to master in order to answer one question adequately: Was this article written by AI or ChatGPT?