Editorial Policy on the Use of Artificial Intelligence (AI)
The Sustainable Studies Goals Review adopts the following editorial guidelines regarding the responsible and ethical use of Artificial Intelligence (AI) tools in academic manuscripts. These guidelines are aligned with best practices proposed by international bodies such as the Committee on Publication Ethics (COPE).
Purpose of the Guidelines
These guidelines aim to inform authors, reviewers, and editors about the appropriate use of AI tools in the preparation, submission, and evaluation of academic content. The goal is to promote integrity, transparency, and reliability in scientific communication.
Examples of Permitted Uses of AI
AI tools may be used to support academic writing and research processes, provided their use is transparent and appropriately acknowledged. Permitted applications include:
- Translation and language editing: Using AI to translate content and to correct grammar, punctuation, and style, which can be particularly helpful for non-native English speakers.
- Transcription: Converting interviews or audio recordings into written text.
- Programming support: Assisting with debugging, formatting, or optimizing code in quantitative or computational research.
- Reference discovery: Using AI as a tool to locate relevant literature and sources.
These applications are considered auxiliary and do not diminish the authors' intellectual responsibility for their work.
Examples of Prohibited Uses of AI
AI tools must not be used in ways that bypass scholarly rigor or diminish authorial responsibility. The following uses are not permitted:
- Authorship attribution: AI tools (e.g., ChatGPT, Bard, Claude) cannot be listed as authors, as they cannot assume legal or ethical responsibility for the work.
- Content generation: Using AI to generate substantial parts of the manuscript, including theoretical frameworks, data analysis, interpretation of results, or conclusions.
- Peer review: Reviewers must not use AI tools to evaluate manuscripts without prior editorial consent.
- Breach of data confidentiality: Sensitive or unpublished data must not be entered into public AI tools, as doing so may violate ethical standards or privacy regulations.
Use Statement Requirement
If AI tools are used in any phase of manuscript preparation, authors must include a clear and specific declaration, preferably in the methodology or acknowledgments section, stating:
- Which tools were used: Include the name and version of the AI tool.
- How they were used: Briefly describe the function (e.g., language editing, transcription), the context of use, and the date of access.
- Potential limitations or biases: Authors are encouraged to mention any known constraints of the tools employed.
If the AI use description is lengthy, essential elements should be summarized in the main text, with full details provided in an appendix or supplementary file.
Security, Ethics, and Integrity
The use of AI must comply with ethical research principles. Authors must avoid uploading confidential or sensitive data to AI platforms that may store, reuse, or share such information. The journal encourages researchers to critically assess the reliability and biases of AI-generated outputs and take full responsibility for all content submitted.
References and Further Reading
- COPE – Committee on Publication Ethics. https://publicationethics.org/cope-position-statements/ai-author
- Instituto de Investigación en Ciencias de la Administración – Universidad Nacional del Sur (2024). Guidelines for the Use of Generative Artificial Intelligence in Academic Work.
- Taylor & Francis (2023). Taylor & Francis Clarifies the Responsible Use of AI Tools in Academic Content Creation. https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-use-of-ai-tools-in-academic-content-creation/