Hip Pelvis 2024; 36(4): 231-233
Published online December 1, 2024
https://doi.org/10.5371/hp.2024.36.4.231
© The Korean Hip Society
Correspondence to: Kee Hyung Rhyu, MD, PhD https://orcid.org/0000-0001-9388-7285
Department of Orthopaedic Surgery, Kyung Hee University Hospital, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Korea
E-mail: khrhyu@empas.com
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Recent advances in artificial intelligence (AI) and related technologies now surpass human capabilities in areas once thought to be uniquely human. AI has already outdone humans in complex reasoning tasks such as chess and Go1,2). Some AI systems have reportedly qualified in examinations for highly specialized professions3-5). More recently, AI applications have moved beyond narrow judgment tasks and conventional deep learning, expanding into art and creativity, domains previously reserved for humans.
There is no doubt that scientific progress will eventually benefit humans in everyday life. However, serious concerns also exist about the easy accessibility of AI. One of the most pressing concerns involves the writing of scientific articles. Publishing a paper in a peer-reviewed journal requires originality, creativity, logical thinking, and careful ethical consideration. It seems, however, that these criteria are no longer of the utmost importance. Current generative AI systems, generally called large language models (LLMs), can generate text from user prompts through the ‘natural language processing’ of ‘machine-learned’ knowledge6). This helpful technology is just one click away from every author. Such ease of access also escalates concerns surrounding potential misuse. Suppose that a research group with little prior publication history suddenly submits multiple papers within a very short period. This sudden productivity may result from extensive hard work. From the current perspective of an editorial office, however, we must first examine whether AI has been maliciously used. As several scientific journals have recently shared such experiences, a new set of contemporary concerns surrounding the use of AI has arisen.
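To illustrate how low this barrier has become, the minimal Python sketch below shows how a single prompt could be sent to a commercial LLM service to draft manuscript text. It assumes the publicly available OpenAI client library; the model name, prompt, and topic are hypothetical choices for illustration only, and any comparable service would serve equally well.

```python
# A minimal sketch of how easily an LLM can be asked to draft manuscript
# text. Assumes the official OpenAI Python client; the model name,
# prompt, and topic are illustrative assumptions, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a medical writing assistant."},
        {"role": "user", "content": "Draft an introduction on periprosthetic "
                                    "femoral fractures after total hip "
                                    "arthroplasty, with references."},
    ],
)
print(response.choices[0].message.content)  # a publishable-looking draft, references included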
Currently, most generative AI systems can develop research ideas from a simple user prompt, build a theory from the concept, and write comprehensive papers with full references. At the very least, AI can provide substantial assistance in nearly every aspect of preparing a research paper. For those involved in scientific publishing, this reality poses several significant dilemmas.
First, a generative AI may produce inaccurate or fabricated statements, known as “hallucinations”7). Although providers of the most recent AI versions claim that this phenomenon is no longer an issue, authors should remain aware of the possibility. Without meticulous checking and verification by the authors, these “hallucinations” may slip through the publication process and mislead readers with incorrect information. As AI continues to improve at simulating human language, it becomes increasingly difficult to distinguish fact from fiction and AI writing from human writing8). This raises fears that even initially flawed text could become indistinguishable from original work.
Another concern is the ambiguity of applying the strict ethical standards expected of human researchers, authors, and publishers to AI tools. To illustrate this point, consider a scenario in which a researcher, maliciously or inadvertently, uploads part of their research data to an AI-powered company or program. Following the researcher’s intentions, the AI could fabricate parts of the material and create a draft manuscript. Some time later, a co-worker, not recognizing where they originated, might write a scientific paper with the assistance of another AI using this dataset and draft. This process could lead to the fraudulent creation of a scientific article without extensive and arduous research. Worse still, with malicious intent, recently popularized deepfake technology could even be employed to create or modify graphs and images in research papers9,10).
The final concern is the ambiguity of the definition of plagiarism11). AI-generated text is likely to derive from content imitated from its training data. Its originality should therefore be questioned whenever authors or reviewers cannot find a proper citation for every sentence. These issues compound editorial challenges. Even with specialized tools, the ability to definitively identify AI-generated content may diminish as the technology advances. Consequently, researchers’ adherence to ethical guidelines and transparency in the research process become increasingly crucial. Given these complexities, most editorial organizations hesitate to establish definitive regulations; instead of strict rules, they state individual positions or seek consensus.
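To see why conventional surface-matching plagiarism checks struggle here, consider the toy Python sketch below. It assumes scikit-learn and two invented sentences, and shows that a close paraphrase can register near-zero lexical similarity even though the underlying claim is identical. This is a simplified illustration, not any journal's actual screening pipeline.

```python
# A toy illustration, not a real plagiarism checker: surface-overlap
# metrics such as TF-IDF cosine similarity can score a close paraphrase
# as dissimilar, which is why AI-reworded text may evade such checks.
# Both sentences are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source = ("Cementless stems showed a higher rate of periprosthetic "
          "fracture in elderly patients.")
paraphrase = ("Older individuals receiving uncemented femoral components "
              "sustained fractures around the implant more often.")

vectors = TfidfVectorizer().fit_transform([source, paraphrase])
score = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"Surface similarity: {score:.2f}")  # near zero despite identical meaning
```

Because the two sentences share almost no vocabulary, a word-overlap screen reports them as unrelated; only a semantic comparison, or a human reader, would recognize the derivation.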
There is widespread agreement that, rather than being a simple yes-or-no proposition, AI is currently used somewhere along the entire spectrum from idea generation to publication. If so, to what extent can AI be used while still allowing a manuscript to be considered written by humans? And how can we ascertain the originality, creativity, and integrity of scientific research?
No funding to declare.
Kee Hyung Rhyu has served as Editor-in-Chief since January 2023 but had no role in the decision to publish this article. No other potential conflicts of interest relevant to this article were reported.