Taylor & Francis news

Taylor & Francis Issues Expanded Guidance on AI Application for Authors, Editors and Reviewers


As the use of generative artificial intelligence (AI) in research and writing continues to evolve, Taylor & Francis has issued the latest iteration of its policy on the application of AI tools. The policy aims to promote ethical and transparent use of AI, while addressing the risks and challenges it can pose for research publishing.

Generative AI tools are increasingly providing positive support to researchers in areas such as idea generation, coding, language improvement, and research dissemination. However, their use also poses serious risks for scholarly work, including the introduction of inaccuracy, bias, or unattributed material, as well as the compromise of confidentiality and intellectual property rights.

To support the responsible adoption of AI opportunities, and to answer common questions, Taylor & Francis has launched an updated policy outlining its expectations for authors, editors, and reviewers who use AI tools in their work.

The guidance for authors rests on the principle that they remain accountable for the originality, validity, and integrity of the content they submit. It covers areas such as authorship attribution, acknowledgement of AI use, and the activities for which AI use is not permitted.

The policy also reminds editors and reviewers of the risks AI use poses to accuracy, confidentiality, and proprietary rights and data. With that in mind, the guidance identifies a number of tasks for which AI tools should not be used, in order to uphold editorial and peer review quality standards.

The policy reflects the current state of generative AI and research ethics, and it is expected to continue evolving as technology and practice develop. Taylor & Francis welcomes the new possibilities offered by AI tools and encourages researchers to use them responsibly and in accordance with the new guidance.