Generative AI Policies

The use of generative AI tools for content creation is growing rapidly. Generative AI tools refer to a class of artificial intelligence technologies that can produce various types of content, including text, imagery, audio and synthetic data. Examples of such tools include ChatGPT, NovelAI, Jasper AI, Rytr AI, DALL-E, DeepSeek, Gemini, Canva, Copilot and many others. These policies cover the use of AI tools in content creation by all participants in the journal editorial process, including authors, reviewers and editors. They aim to ensure transparency and disclosure of the role of AI, redefinition of authorship categories, quality control and verification, and awareness-raising initiatives.

For Authors

Authors are fully responsible for the content of their work. AI tools cannot be listed as authors of an article, as they lack subjectivity and cannot assume responsibility for the generated content.

Authors should disclose in their manuscript the use of AI and AI-assisted technologies and a statement will appear in the published work. Declaring the use of these technologies supports transparency and trust between authors, readers, reviewers, editors and contributors and facilitates compliance with the terms of use of the relevant tool or technology.

Generative AI tools can be used in academic writing to improve the readability and language of the manuscript, including English grammar, syntax and spelling. Where authors use generative AI and AI-assisted technologies in the writing process, these technologies should be used only to improve the readability and language of the work. The technology should be applied with human oversight and control, and authors should carefully review and edit the result, because AI can generate authoritative-sounding output that may be incorrect, incomplete or biased. The authors are ultimately responsible and accountable for the content of the work.

AI tools can be used for the statistical processing of variable data, in formal research design, or in research methods. Where AI or AI-assisted tools are used in this context, they should be described in a reproducible manner as part of the methodology of the work, with details provided in the Experimental (Materials and Methods) section. This description should include an explanation of how the AI tools were used, the name of the model or tool, its version and extension numbers, and the manufacturer, for example (APA style): OpenAI (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat.

In such cases, authors must declare the use of generative AI tools by adding a statement at the end of the manuscript when the paper is first submitted. The statement will appear in the published work and should be placed in a new section before the references list. An example:

  • Title of new section: Declaration of the Use of Generative AI Tools
  • Statement: During the preparation of this work the author(s) used [NAME TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the published article.

The declaration does not apply to the use of basic tools, such as those used to check grammar, spelling and references. In all cases, the use of AI technologies must be accompanied by human oversight and control. Authors should thoroughly review and edit the output, as AI-generated text can be incorrect, incomplete or biased. Authors are solely responsible and accountable for the content of their work.

The use of generative AI tools to create or alter images in submitted manuscripts is not permitted. This includes enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. Adjustments of brightness, contrast or color balance are acceptable as long as they do not obscure or eliminate any information present in the original.

The use of generative AI tools in the production of artwork such as for Graphical Abstracts is not permitted.

The use of artificial intelligence tools to reduce or hide plagiarism is strictly prohibited. Authors must be able to demonstrate that their work is free from plagiarism, including AI-generated text and images. Individuals must ensure proper attribution of all cited materials, including bibliographic references.

For Reviewers

Peer review is at the heart of the journal editorial process, and Eurasian Journal of Chemistry abides by the highest standards of integrity in this process. Reviewing a scientific manuscript implies responsibilities that can only be attributed to humans. Generative AI tools should not be used by reviewers to assist in the scientific review of a paper, as the critical thinking and original assessment needed for peer review are outside the scope of this technology, and there is a risk that the technology will generate incorrect, incomplete or biased conclusions about the manuscript. The reviewer is responsible and accountable for the content of the review report.

When a researcher is invited to review another researcher’s paper, the manuscript must be treated as a confidential document. Reviewers should not upload a submitted manuscript or any part of it into a generative AI tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.

This confidentiality requirement extends to the peer review report, as it may contain confidential information about the manuscript and/or the authors. For this reason, reviewers should not upload their peer review report into an AI tool, even if it is just for the purpose of improving language and readability.

For Editors

Managing the editorial evaluation of a scientific manuscript implies responsibilities that can only be attributed to humans. Generative AI tools should not be used by editors to assist in the evaluation or decision-making process for a manuscript, as the critical thinking and original assessment needed for this work are outside the scope of this technology, and there is a risk that the technology will generate incorrect, incomplete or biased conclusions about the manuscript. The editor is responsible and accountable for the editorial process, the final decision and the communication thereof to the authors.

A submitted manuscript must be treated as a confidential document. Editors should not upload a submitted manuscript or any part of it into a generative AI tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.

This confidentiality requirement extends to all communication about the manuscript, including any notification or decision letters, as these may contain confidential information about the manuscript and/or the authors. For this reason, editors should not upload their letters into an AI tool, even if it is only for the purpose of improving language and readability.

The material in this section was prepared primarily on the basis of the Elsevier Generative AI policies for journals, the ICMJE Recommendations, and the COPE position on Authorship and AI.