
Prof. Strzelecki: the scale of the problem with using AI in writing publications is becoming increasingly visible

The scale of concerns about the use of AI in writing scientific publications is becoming increasingly visible, Artur Strzelecki, a professor at the University of Economics in Katowice, told PAP. "Nature" has presented further examples of misconduct in this area, including cases of the "silent correction" of texts.

New cases of the hidden use of ChatGPT in writing scientific papers have been reported by "Nature" (https://www.nature.com/articles/d41586-025-01180-2). The author of the article, Diana Kwon, describes the detection of characteristic chatbot phrases in hundreds of publications, inserted without disclosure (that is, without a declaration that ChatGPT was used), which undermines the credibility of the review process.

"Nature" refers, among others, to the findings of Dr. Hab. Artur Strzelecki, professor at the University of Economics in Katowice (UEK). In December 2024, in "Learned Publishing", Prof. Strzelecki described the results of his analyses of Google Scholar resources - one of the largest databases of scientific publications. He showed that unmarked AI fragments appear even in journals with the highest citation rank, and some of them have already gained further references, which emphasizes the systemic nature of the problem.

The researcher checked whether "unnatural" formulations characteristic of ChatGPT (e.g. "I hope this helps!", "Of course! Here is/are ...", "I'm glad I can help!") appeared in English-language scientific publications. He analyzed only those articles that did not declare the use of ChatGPT in writing the paper.

Phrases that began to appear in publications as a result of ChatGPT use included: "As of my last knowledge update", "As an AI language model", and "I don't have access to real-time...". Interface content was also often copied into scientific publications: under each ChatGPT response there used to be a "Regenerate response" button, a command telling the chatbot to rework its answer. Quite a few inattentive scientists copied the AI response along with this "footer" and did not read the text through before sending it to the publisher.
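The kind of screening described above can be illustrated in a few lines of code. The sketch below is a hypothetical illustration, not the actual methodology or tooling used in the study: it simply checks a text against a small list of telltale phrases of the sort quoted in this article.

```python
# Minimal illustrative sketch: scan a text for telltale chatbot phrases
# of the kind quoted above. The phrase list and function name are
# hypothetical examples, not the study's actual search queries.

TELLTALE_PHRASES = [
    "as of my last knowledge update",
    "as an ai language model",
    "i don't have access to real-time",
    "i hope this helps!",
    "regenerate response",
]

def find_chatbot_traces(text: str) -> list[str]:
    """Return every telltale phrase that occurs in the given text."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

if __name__ == "__main__":
    sample = "The findings are summarized in Table 2. Regenerate response"
    print(find_chatbot_traces(sample))  # -> ['regenerate response']
```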

The thoughtless copying of a fragment of a ChatGPT reply into a scientific article also went unnoticed by co-authors, reviewers and journal editors. Considering only the more prestigious, peer-reviewed scientific journals in the Scopus database (those in the first and second quartile of the CiteScore indicator), Prof. Strzelecki found ChatGPT phrases in 89 articles.

The topic of the thoughtless use of AI has now returned thanks to "Nature". The journal drew attention, for example, to the phenomenon of "silent correction": the removal by publishers of chatbot phrases from already published articles without an official erratum.

"Some journals began to realize on their own that they had let serious errors through, that no one had properly checked the text: at the editorial, review, copyediting stage. 'We'll quietly correct it and there won't be any errors'. And yet, several people - including me - noticed these errors. I found the original publications containing them quite quickly and archived them. Then I could see that the publisher had actually made a change to the text without any information that the change had occurred and what it consisted of," says Prof. Strzelecki, commenting on the phenomenon described in Nature.

"Nature" also confirmed that hundreds of publications from various fields contain characteristic phrases, such as "as an AI language model" or the aforementioned "regenerate response", revealing ChatGPT's involvement in writing (or rather generating) the text. "Nature" also pointed to the risk of a cascade of errors: undisclosed AI insertions are already being cited in the literature, which may multiply unverified information.

"If works using AI are created and contain fragments indicating this, they are later quoted by other scientists who consider what has been written to be reliable information, although they should not necessarily. Artificial intelligence has hallucinations and can add something on its own, invent something," confirms Prof. Strzelecki.

Quoted in Nature, Alex Glynn, an expert in research skills and communication at the University of Louisville in Kentucky, noted that the changes in question appeared in “a minority of journals.” But given that there are likely also many cases where authors used AI without leaving obvious traces, Glynn expressed surprise at “how much of that there is.”

Like Strzelecki, Glynn has found hundreds of articles bearing AI traces. His online AI tracker already lists more than 700 articles. "Nature"'s editors contacted the publishers of some of the articles that Glynn and Strzelecki had identified, including Springer Nature, Taylor & Francis and IEEE. The publishers said that all the flagged articles were being reviewed. They also cited their AI policies, which do not require AI disclosure in some cases (e.g., changes made for editorial or linguistic reasons do not need to be flagged).

Prof. Strzelecki emphasized that awareness of the scale of the problem of the thoughtless use of AI tools in writing scientific publications is growing among those involved. There is also an ongoing discussion about when and in what form authors and reviewers should disclose the use of AI tools. Publishers' policies diverge, but they agree on one thing: full transparency is necessary to protect trust in the scientific literature.

"It is also becoming increasingly clear that internal training courses and workshops are needed; that this is a tool," said Prof. Strzelecki. "Yes, it should be used, but you need to know how and for what. For example, language assistance for authors whose native language is not English is justified. In their case, using AI tools to help correct grammar, spelling, vocabulary and so on seems sensible. In such cases, large scientific publishers are moving in the direction of not requiring disclosure of this type of support. In this context, we no longer ask: will I use artificial intelligence?, but rather: how?" he said.

The UEK professor noted that scientific publishers are currently moving towards requiring disclosure when the author of a publication has used AI to analyze the text or process data they collected themselves. "Publishers expect the publication to indicate that an artificial intelligence tool was used, or better still, to indicate the exact scope in which it was used," he said.

At the same time, he noted that it is impossible to control the way artificial intelligence is used in writing scientific papers. "These are practically commonplace tools already, used on a large scale. Recently, when searching for information on Google, logged-in users immediately get a summary prepared by AI. We no longer get authentic search results, only data processed by AI." (PAP)

zan/ lt/ bar/

naukawpolsce.pl
