Building Guardrails for AI’s Role in Academic Publishing

The application of AI has transformed the position of the publisher in the research ecosystem.

In recent years, the rapid advancement of AI has led to its widespread integration into academic research. The technology is transforming traditional methodologies and optimizing key processes, from data analysis and literature reviews to the generation of research hypotheses. Machine learning-based tools, in particular, enable researchers to efficiently query large-scale databases, identify complex patterns and extract meaningful insights, offering substantial gains in speed and accuracy over manual approaches. This not only accelerates the pace of discovery but also frees researchers to focus on higher-order tasks that require human creativity, critical thinking and judgment.

Yet these opportunities are accompanied by substantial risks. The misuse of AI in scholarly practices—including generating academic content without disclosing the use of AI, fabricating data or manipulating citations—has raised serious concerns about research integrity and the credibility of published work.

It is against this backdrop that the Embassy of Switzerland, in partnership with the Swiss journal publisher Frontiers and the National Science Library of the Chinese Academy of Sciences (NSLC), convened the Third Sino-Swiss Research Integrity Workshop in Beijing on September 23. Themed A Trustworthy Ecosystem for the Future of Science, the workshop centered on a pivotal question: how to define the boundaries of both capability and integrity in the application of AI to scientific research.

From risk to remedy

Liu Xiwen, Director of the NSLC, explained at the event that the application of AI in academic publishing raises four main issues. Beyond the familiar challenges of content authenticity and the transparency of the research process, questions of academic responsibility arise in human-AI collaboration, from determining the originality of research to assigning accountability when AI introduces errors.

Then there are the emerging integrity issues in AI-assisted publishing itself. AI technologies are now widely applied across the publishing process, from peer review and manuscript editing to integrated service platforms, and the key question is how to identify and address the problems these applications may bring.

These issues are also receiving attention in China. On December 21, 2023, the Ministry of Science and Technology issued the Guidelines for Responsible Research Conduct, drawing clear “red lines” for the use of AI in research. The guidelines ban practices such as using generative AI to draft application materials, crediting the technology as a co-author or citing unverified AI-generated references.

The 15th China International Digital Publishing Expo in Zhengzhou, Henan Province, on August 28, 2025. (Photo/Xinhua)

On September 26, 2024, the Institute of Scientific and Technical Information of China, together with Elsevier of the Netherlands, the German-British Springer Nature Group, Wiley of the United States and several other international publishers, released the Guideline on the Boundaries of AIGC (AI-generated content) Usage in Academic Publishing 2.0. The document emphasizes transparency and accountability, calling for the disclosure of AI use throughout research and publishing, greater data transparency and clearer accountability standards shared by researchers, institutions, funders, policymakers and publishers.

Notably, Frontiers has begun leveraging AI itself to address the integrity challenges that the technology poses in academic publishing. Julie Mo Svalastog, Research Integrity Manager with the company, shared her experience during the workshop.

Frontiers tackles AI-related integrity challenges in publishing through a multi-layered defense system that combines human expertise with AI tools. At the core is AIRA, an AI-powered review assistant trained on large language model datasets and continuously refined with human input. AIRA screens every submission, detects risks such as fabricated manuscripts, image manipulation and fake peer review, identifies suspicious patterns, assigns risk scores and flags integrity issues for editors. Importantly, AIRA does not make final decisions; instead, it supports editors, peer reviewers and dedicated research integrity teams.
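That workflow can be pictured as a simple triage loop. The sketch below is a hypothetical illustration only, with assumed class names, check names and thresholds rather than details of the actual AIRA system: automated checks score each submission, and anything above a threshold is routed to human editors, who retain the final decision.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AIRA-style screening step: automated checks score
# each submission, and anything above a threshold is routed to a human editor.
# Names, weights and the threshold are illustrative assumptions, not Frontiers' system.

@dataclass
class Submission:
    manuscript_id: str
    check_results: dict[str, float] = field(default_factory=dict)  # check name -> risk in [0, 1]

# How strongly each automated check contributes to the overall risk score (illustrative values).
CHECK_WEIGHTS = {
    "fabricated_text": 0.4,
    "image_manipulation": 0.35,
    "fake_peer_review": 0.25,
}

REVIEW_THRESHOLD = 0.5  # above this, a human integrity team takes over

def risk_score(sub: Submission) -> float:
    """Weighted average of the individual check scores."""
    total = sum(CHECK_WEIGHTS.values())
    return sum(CHECK_WEIGHTS[name] * sub.check_results.get(name, 0.0)
               for name in CHECK_WEIGHTS) / total

def triage(sub: Submission) -> str:
    """Route a submission: the tool only flags issues, it never makes the final call."""
    score = risk_score(sub)
    if score >= REVIEW_THRESHOLD:
        return f"{sub.manuscript_id}: flagged (score {score:.2f}) -> human integrity review"
    return f"{sub.manuscript_id}: low risk (score {score:.2f}) -> normal editorial workflow"

if __name__ == "__main__":
    paper = Submission("MS-2025-001",
                       {"fabricated_text": 0.8, "image_manipulation": 0.2, "fake_peer_review": 0.6})
    print(triage(paper))
```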

Frontiers complements this with a research integrity auditing team that analyzes groups of papers and reviewers to uncover coordinated misconduct. This human-AI partnership enables early detection of fraudulent content, protects editorial resources and strengthens trust in the scientific record, while also contributing expertise to global integrity initiatives.

Redefining the role of publishers

The transformations generated by AI, in turn, also present new opportunities for Chinese publishers.

According to NSLC Professor Ma Zheng, who specializes in the analysis of scientific research, AI is helping reinvent academic publishing by enabling a new, open-science model for journals.

He introduced a new concept: the AI-driven Data Pennant (AIDP), an innovative academic journal model that integrates big data resources, AI tools, open resources and contributions from leading researchers within a publishing platform. Its core idea is to use AI tools to generate a “pennant” that deeply engages in organizing research and runs through the entire academic publishing chain.

“Conventionally, publishing academic papers requires researchers to manually identify and define research topics, gather large volumes of literature and data, and ultimately produce an outcome,” Ma explained. “With the introduction of AI, however, the publisher can leverage available journal data to evaluate and pinpoint high-quality research topics. After the scientific question is raised, the editorial board and peer experts evaluate its accuracy and feasibility. If deemed feasible, it advances to the pennant stage. Essentially, a pennant is a title or research topic bundled with open resources such as datasets, literature and code. These resources are integrated by AI, which also generates a review summarizing developments in the field, packaging everything together as a pennant. The next step is to identify suitable authors, again through the AIDP, which matches the task with the most appropriate researchers. The work then proceeds with rapid peer review, followed by publication,” he continued.
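Read as a workflow, Ma's description maps onto a staged pipeline: topic discovery from journal data, expert validation, pennant assembly bundling datasets, literature, code and an AI-generated review, author matching, rapid peer review and publication. The sketch below only illustrates that sequence; the stage names, data fields and toy matching rule are assumptions for illustration, not an actual AIDP implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Illustrative sketch of the AIDP stages as described above; all names,
# fields and the simple keyword-matching rule are assumptions.

class Stage(Enum):
    TOPIC_PROPOSED = auto()      # AI mines journal data for a candidate topic
    EXPERT_VALIDATED = auto()    # editorial board and peer experts confirm feasibility
    PENNANT_ASSEMBLED = auto()   # topic bundled with datasets, literature, code and an AI review
    AUTHORS_MATCHED = auto()     # platform matches the task to suitable researchers
    PEER_REVIEWED = auto()
    PUBLISHED = auto()

@dataclass
class Pennant:
    topic: str
    datasets: list[str] = field(default_factory=list)
    literature: list[str] = field(default_factory=list)
    code_repos: list[str] = field(default_factory=list)
    ai_review: str = ""                      # AI-generated survey of the field
    authors: list[str] = field(default_factory=list)
    stage: Stage = Stage.TOPIC_PROPOSED

def assemble_pennant(p: Pennant, datasets, literature, code_repos, ai_review) -> None:
    """Bundle open resources and the AI-generated review with the validated topic."""
    assert p.stage is Stage.EXPERT_VALIDATED, "pennant assembly follows expert validation"
    p.datasets, p.literature, p.code_repos, p.ai_review = datasets, literature, code_repos, ai_review
    p.stage = Stage.PENNANT_ASSEMBLED

def match_authors(p: Pennant, candidate_profiles: dict[str, set[str]]) -> None:
    """Toy matching rule: pick researchers whose listed expertise overlaps the topic keywords."""
    keywords = set(p.topic.lower().split())
    p.authors = [name for name, expertise in candidate_profiles.items()
                 if keywords & {w.lower() for w in expertise}]
    p.stage = Stage.AUTHORS_MATCHED

if __name__ == "__main__":
    p = Pennant(topic="battery electrolyte design")
    p.stage = Stage.EXPERT_VALIDATED   # assume the editorial board has confirmed feasibility
    assemble_pennant(p, ["open-dataset-1"], ["survey-2024"], ["repo-x"], "AI-generated field review")
    match_authors(p, {"Researcher A": {"battery", "materials"}, "Researcher B": {"genomics"}})
    print(p.authors, p.stage)
```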

“The AIDP model enables the prediction of academic frontiers. Meanwhile, it has greatly enhanced the efficiency of scientific research,” Ma told Beijing Review. “For us as a publisher, we are no longer simply waiting for authors and research teams to propose scientific topics and then submit papers. Instead, we can now take the initiative to raise academic questions ourselves. In the past, this was impossible because, obviously, publishing is not a research activity. But today, with access to vast bodies of literature and powerful data-processing AI tools, we can anticipate the direction of disciplinary development. It can be said that the application of AI has transformed the position of the publisher in the research ecosystem.”