Publisher Policies on AI Use

Issue #83

Data, Numbers

by Michael Seadle (Humboldt-Universität zu Berlin)


Avi Staiman wrote a post in “The Scholarly Kitchen” on 22 July 2024 about “Woefully Insufficient Publisher Policies on Author AI Use Put Research Integrity at Risk.” He begins by saying: “There is broad consensus in scholarly publishing that AI tools will make the task of ensuring the integrity of the scientific record a Herculean task.”² His goal is a better understanding of what the tools can do and how to use them responsibly. He writes: “While 76% of researchers reported using AI tools in their research in the OUP [Oxford University Press] survey, only 27% reported having a good understanding of how to use these tools responsibly.”²

Such numbers are not encouraging, but a key question is what “understanding AI” really means. One could ask a similar question about statistics: how many scholars actually understand the proofs behind the statistical theorems that are the basis for the results they cite? A comparable proportion may never have made the effort. Nonetheless, the parallel between understanding AI and understanding statistical tests is an oversimplification. Contemporary scholars take the results of statistical programs for granted in part because such tests have been around for a long time and have become integral to modern scholarship. The real issue may be not so much the degree of understanding as the trust built up by decades of reliable statistical tests.

Staiman wants “... to make a clear distinction between ‘substantive’ and ‘non-substantive’ use. For example, using AI tools to clear up spelling and grammar is clearly a non-substantive use, while relying on AI for data analysis would very much be ‘substantive’.”² This distinction makes sense because spelling and grammar are readily verifiable, and correction tools for them are already embedded in word processing software. Staiman recommends the European Union’s AI Act as a model for publishers because of its risk levels, which range from minimal to unacceptable. He urges publishers to cooperate on common policies based on this model in order to give authors clear and rational guidelines that will help them avoid research integrity problems.

 

2: Staiman, Avi. “Woefully Insufficient Publisher Policies on Author AI Use Put Research Integrity at Risk.” The Scholarly Kitchen, July 22, 2024. https://scholarlykitchen.sspnet.org/2024/07/22/woefully-insufficient-publisher-policies-on-author-ai-use-put-research-integrity-at-risk/.

 



