Scientists increasingly recognize that some of their colleagues are using AI chatbots, such as ChatGPT, to write entire research papers or parts of them.
A recent study published in Science Advances by Dmitry Kobak and his team at the University of Tübingen introduces a method to detect AI involvement in scientific writing by tracking the frequency of particular words in paper abstracts. Their analysis shows that AI-generated texts tend to overuse words like “delves,” “crucial,” “potential,” “significant,” and “important” compared to human authors.
By examining over 15 million biomedical abstracts published from 2010 through 2024, the researchers identified a notable rise in the usage of specific terms coinciding with the release of ChatGPT in November 2022.
This linguistic shift has intensified ongoing discussions within the scientific community about the ethical boundaries and appropriate use of AI-assisted writing tools in scholarly publications.
The study highlights that after ChatGPT's debut, a distinctive set of words began to appear with unusual frequency in abstracts, a pattern absent before the chatbot's introduction that now serves as a marker of AI-generated content.
In total, the team identified 454 words disproportionately favored by AI in 2024. Using the prevalence of these AI-associated terms, they estimate that 13.5 percent or more of biomedical abstracts were written at least in part by chatbots. In some countries and in less selective journals, the figure may reach as high as 40 percent.
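As a rough illustration of the frequency-based idea, a toy marker-word counter might look like the sketch below. The five-word marker list and the baseline comparison are simplified assumptions for demonstration; the study's actual method uses a 454-word vocabulary and a statistical excess-usage model, not this code.

```python
# Toy sketch of the word-frequency idea: flag abstracts containing
# AI-associated "marker" words and compare the observed rate against a
# pre-ChatGPT baseline. Illustrative only, not the paper's method.

MARKER_WORDS = {"delves", "crucial", "potential", "significant", "important"}

def marker_rate(abstracts):
    """Fraction of abstracts containing at least one marker word."""
    if not abstracts:
        return 0.0
    hits = 0
    for text in abstracts:
        # Crude tokenization: split on whitespace, strip punctuation, lowercase.
        words = {w.strip(".,;:()\"'").lower() for w in text.split()}
        if words & MARKER_WORDS:
            hits += 1
    return hits / len(abstracts)

def excess_usage(observed_rate, baseline_rate):
    """Excess prevalence over the pre-ChatGPT baseline, floored at zero."""
    return max(0.0, observed_rate - baseline_rate)

if __name__ == "__main__":
    pre_2023 = ["We measured enzyme activity in liver tissue samples."]
    post_2023 = ["This study delves into the crucial role of key biomarkers."]
    print(marker_rate(pre_2023))   # 0.0
    print(marker_rate(post_2023))  # 1.0
```

A real analysis would work with per-word frequencies across millions of abstracts per year, so the excess usage can be attributed to the post-2022 period rather than to any single document.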