
Exploring the Impact of Generative AI on Peer Review: Insights from Journal Reviewers

Ebadi, S., Nejadghanbar, H., Rawdhan Salman, A., & Khosravi, H. (2025). Exploring the Impact of Generative AI on Peer Review: Insights from Journal Reviewers. Journal of Academic Ethics, 23(3), 1383-1397. https://doi.org/10.1007/s10805-025-09604-4

 

Abstract

This study investigates the perspectives of 12 journal reviewers from diverse academic disciplines on the use of large language models (LLMs) in the peer review process. Through qualitative analysis of verbatim responses to an open-ended questionnaire, we identified key themes regarding the integration of LLMs into peer review. Reviewers noted that LLMs can automate tasks such as preliminary screening, plagiarism detection, and language verification, thereby reducing workload and enhancing consistency in the application of review standards. However, they also raised significant ethical concerns, including potential biases, lack of transparency, and risks to privacy and confidentiality. Reviewers emphasized that LLMs should complement rather than replace human judgment, and that human oversight is essential to ensure the relevance and accuracy of AI-generated feedback. The study underscores the need for clear guidelines and policies, and for their proper dissemination among researchers, to address the ethical and practical challenges of using LLMs in academic publishing.

 

