Official publication of Rawalpindi Medical University
Navigating Ethical Dilemmas Of Generative AI In Medical Writing


How to Cite

Hamdan QU, Umar W, Hasan M. Navigating Ethical Dilemmas Of Generative AI In Medical Writing. JRMC [Internet]. 2024 Oct. 7 [cited 2024 Dec. 10];28(3). Available from: https://www.journalrmc.com/index.php/JRMC/article/view/2744

Abstract

The history of humankind is marked by revolutionary inventions that have transformed the quality and trajectory of human lives. Revolution and innovation are dynamic processes that have continued, and will continue, at an increasing pace as discoveries and inventions accumulate. Like the steam engine in the 18th century, electricity in the 19th century, and the Internet at the cusp of the 20th and 21st centuries, the modern era is undergoing a new revolution with the advent of Artificial Intelligence (AI). AI is a field of computer science concerned with developing programs or computational models inspired by the human brain's ability to learn and adapt.1 Perhaps the most prominent of these advances is generative AI, which has led to "human-like" tools that can generate text, audio, images, and even video from a simple prompt. The widespread, mainstream use of tools such as ChatGPT, Google Gemini, and Perplexity AI has touched almost all walks of life, from industry to academia.2 The scope of this article is limited to the implications of generative AI for academic research writing, particularly in medicine.

Generative AI in Medical Writing

Generative AI tools or "chatbots" combine the adaptive learning capabilities of deep learning algorithms with natural language processing, resulting in a virtual assistant capable of answering queries, following commands, and refining its responses based on the vast data available on the Internet as well as user input.3 This allows complex tasks that would otherwise require hours of trial and error to be accomplished within seconds. The speed with which generative AI chatbots solve problems is one of the main reasons behind their remarkable popularity with the general public. Moreover, their correct grammar and comprehension skills make them very attractive writing tools, especially for non-native English speakers. However, these benefits are not without pitfalls.

Data Hallucinations

The tendency of generative AI chatbots to fabricate information, so-called "data hallucinations", has been a cause of grave concern in academia.4 Although ChatGPT declares that "it can make mistakes and users should consider checking important information", unchecked false output from chatbots can significantly degrade the integrity and authenticity of medical research, a field governed by strict ethical and moral guidelines. Additionally, AI chatbots are trained on data available on the web, where misinformation itself is abundant. Online resources like Wikipedia and WebMD, while mostly accurate, are generally not considered clinically or medically credible. Even though academic journals are trying to curb the misuse of generative AI in medical writing by making AI detection a regular part of the review process, the burden of upholding scientific accuracy and integrity still falls on the researcher's shoulders.

Bias

Another aspect of AI tools that affects their reliability is the bias that can emerge when they are repeatedly trained on the same type of information.5 By emulating the human brain's ability to remember, understand, and adapt to new information, AI also inherits a "flaw" of the human mind: becoming biased through recurrent exposure to particular kinds of information. This indicates that the resources used to train AI chatbots need to be supervised and regulated to ensure that the bots do not produce repetitive, one-sided responses.

Privacy and Security Concerns

The innate design of generative AI chatbots to store the information provided to them also raises safety and security concerns. While this feature improves the performance of these tools according to user input and requirements, it also fuels the debate about privacy breaches and cybersecurity issues that could arise if malicious actors compromise these tools.6 Although ChatGPT has, as of April 2023, allowed users to turn off chat history and prevent the bot from storing their data, the extent to which AI-driven tools uphold the pledge of privacy and confidentiality of user data is not entirely transparent. To safeguard sensitive data, users often have to dig deep into the "settings" of these tools, which can be confusing and tedious for medical professionals, who are generally not well-versed in emerging technology.

The privacy concerns that arise from AI-driven tools have been previously discussed in the context of the development of diagnostic decision support systems.7 However, their possible implications regarding academic writing are uncertain. If a user innocently asks ChatGPT or Gemini to summarize or rephrase their research methodology or scientific discoveries, can their data repositories pose a threat to the integrity and novelty of the research? Nuanced concerns like these will continue to increase as the use of AI-driven chatbots inevitably becomes more mainstream.

Suggestions

Some suggestions for improving the ethical use of AI tools are offered here. First, the proper and safe use of common tools like ChatGPT, Gemini, and Perplexity AI should become a core part of the undergraduate medical curriculum. The working principles of generative AI tools should be discussed transparently, without loopholes or confusing jargon. Additionally, the data used by these tools must be regulated and refined to reduce the risk of data hallucinations, biased results, security breaches, and other issues that may arise as the horizon of generative AI widens. It may also be worthwhile to declare the use of generative AI tools in article writing as part of informed consent at the start of new scientific research. Finally, the settings of these tools need to be simplified and made easily accessible so that researchers can remove their data from the repository if they prefer to do so.

Conclusion

Acknowledging the integration of generative AI chatbots into contemporary academia is not merely a bold statement but a necessary recognition of technological advancement. Rather than demonizing generative AI as a harbinger of societal downfall, it is imperative to confront and resolve the ethical dilemmas it presents. This technology stands poised as one of the most transformative creations of our era, offering the potential for an enhanced quality of life when wielded responsibly. Embracing its permanence in our world, we must proactively engage with and adapt to its presence, recognizing that its impact is enduring and unlikely to diminish of its own accord.

https://doi.org/10.37939/jrmc.v28i3.2744

References

1. Ertel W. Introduction to Artificial Intelligence. Springer; 2018.

2. Fischer D, Heffeter F, Grothe SR, Joachim V, Jung HH. AI in Strategic Foresight – Evaluation of ChatGPT, BARD and Perplexity. In: ISPIM Conference Proceedings. The International Society for Professional Innovation Management (ISPIM); 2023. p. 1-28.

3. Bridgelall R. Unraveling the Mysteries of AI Chatbots. Published online 2023.

4. Meyer JG, Urbanowicz RJ, Martin PCN, et al. ChatGPT and large language models in academia: opportunities and challenges. BioData Min. 2023;16(1):20.

5. Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, et al. Opinion Paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int J Inf Manag. 2023;71:102642. doi:10.1016/j.ijinfomgt.2023.102642.

6. Gupta M, Akiri C, Aryal K, Parker E, Praharaj L. From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy. IEEE Access. 2023;11:80218-45. doi:10.1109/ACCESS.2023.3300381.

7. Kaur S, Singla J, Nkenyereye L, Jha S, Prashar D, Joshi GP, et al. Medical diagnostic systems using artificial intelligence (AI) algorithms: principles and perspectives. IEEE Access. 2020;8:228049-69. doi:10.1109/ACCESS.2020.3042273.


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Copyright (c) 2024 Qurrat Ulain Hamdan, Waleed Umar, Mahnoor Hasan