Artificial Intelligence (AI) Policy
Policy Statement
This policy governs the use of artificial intelligence (AI) and AI-assisted technologies in manuscript preparation, peer review, and editorial processes. It addresses transparency, scientific integrity, and ethics in determining the appropriate use of AI tools in scholarly research and writing.
For Authors
Permitted Uses
Authors may use AI tools for the following purposes:
- Language editing and correction: Improving grammar, syntax, and readability.
- Data analysis and visualization: Statistical analysis, chart generation, and data presentation.
- Research support: Literature search assistance, experimental design consultation, and methodology guidance.
- Technical assistance: Code debugging, algorithm optimization, and computational problem-solving.
Mandatory Disclosure Requirements
- Full disclosure: All AI-assisted contributions must be fully disclosed in a dedicated manuscript section titled "AI Disclosure," placed immediately before the reference list.
- Specific identification: Authors must specify:
- The exact AI tools/systems used (name, version, provider).
- The specific sections or aspects of the manuscript where AI was employed.
- The nature and extent of AI assistance provided.
- The level of human oversight and validation applied.
- Sample disclosure statement: "The authors used [AI tool name/version] to improve the language and readability of sections 2.1 and 3.2. All AI-generated content was thoroughly reviewed, fact-checked, and validated by the authors before inclusion."
Prohibited Uses
- Authorship: AI tools cannot be listed as authors or co-authors.
- Image manipulation: Use of AI to create, alter, enhance, or modify research images, figures, or photographs (except when AI image processing is part of the research methodology itself).
- Data fabrication: Using AI to generate fake research data, results, or citations.
- Substantial content generation: AI cannot be used to generate entire sections, conclusions, or core research content without substantial human intellectual contribution.
Author Responsibilities
Authors must:
- Validate all outputs: Thoroughly review and verify all AI-generated content for accuracy, originality, and scientific validity.
- Ensure reproducibility: Provide sufficient documentation including parameters, prompts, and methodological details to ensure reproducibility.
- Maintain accountability: Accept full responsibility for the integrity, accuracy, and originality of all content, whether or not AI was involved.
- Follow ethical standards: Ensure that AI use complies with data privacy regulations, bias-mitigation practices, and proper attribution standards.
For Reviewers
Confidentiality Requirements
- Manuscript confidentiality: Reviewers must NOT upload submitted manuscripts, or any portion thereof, into generative AI tools; doing so violates author confidentiality and may breach data privacy rights.
- Review report confidentiality: Peer review reports must not be processed through AI tools, even for language improvement.
Review Process Integrity
- Human-centered evaluation: Critical judgment and scientific analysis must remain solely the responsibility of human reviewers.
- Limits on AI assistance: AI tools must not be used to support scientific review, as they cannot perform adequate scientific evaluation and may produce biased or incorrect assessments.
- Reporting suspected violations: Reviewers who suspect inappropriate or undisclosed AI use must report their concerns to the journal editor.
For Editors
Editorial Process Standards
- Manuscript confidentiality: Editors must not upload manuscripts or portions thereof into AI tools.
- Decision letter integrity: Editorial communications and decision letters should not be generated or processed using AI tools.
- Human oversight: Final editorial decisions must remain under human control and judgment.
Policy Enforcement
- Violation assessment: Editors must investigate suspected policy violations and report the findings to the publisher.
- Educational approach: Editors may provide guidance to authors on appropriate AI use and disclosure.
Compliance and Enforcement
Violations and Consequences
Any misuse of AI technology, including but not limited to:
- Undisclosed AI use.
- Plagiarism through AI tools.
- Data fabrication or falsification.
- Undisclosed image manipulation.
will result in:
- Immediate manuscript rejection (for submissions under review).
- Article retraction (for published articles).
- Notification to author's institutional affiliations.
- Possible exclusion from future submissions.
Quality Assurance
The journal employs AI detection tools and plagiarism checking systems to identify potential policy violations while maintaining author confidentiality and data privacy standards.
Policy Updates and Adaptations
This policy will be reviewed regularly and updated to reflect technological developments, emerging ethical issues, and evolving best practices in academic publishing. Changes will be announced to authors, reviewers, and editors through official journal communications.
Additional Guidelines
Data Privacy and Security
- All AI use must comply with applicable data protection laws (e.g., GDPR, HIPAA).
- Sensitive or confidential research information must not be processed through external AI systems.
Bias Mitigation
- Authors should be aware of potential biases in AI-generated material and make every effort to detect and correct them.
- Particular attention should be paid to AI output on demographically, culturally, or socioeconomically sensitive topics.
Technical Documentation
- Whenever AI tools are used for data analysis or computation, the technical documentation must be sufficient to reproduce the results.
- This includes the AI model specification, the parameters used, and the validation procedures applied.
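As one possible illustration (not a journal requirement), authors could keep a machine-readable record of AI usage alongside their analysis scripts to support reproducibility. The field names and values below are hypothetical placeholders, not a mandated schema:

```python
# Hypothetical machine-readable AI-usage record an author might keep as
# supplementary material. Field names are illustrative only, not mandated
# by this policy.
import json

ai_usage_record = {
    "tool": "ExampleLLM",            # AI tool name (placeholder)
    "version": "1.0",                # exact version used
    "provider": "Example Provider",  # vendor/provider of the tool
    "purpose": "language editing of sections 2.1 and 3.2",
    "parameters": {"temperature": 0.2},
    "prompts": [
        "Improve the grammar and readability of the following paragraph: ..."
    ],
    "human_validation": "All output reviewed and fact-checked by the authors",
}

# Serialize the record for inclusion as supplementary material.
print(json.dumps(ai_usage_record, indent=2))
```

Keeping such a record with the manuscript's supplementary files makes it straightforward to satisfy both the disclosure and reproducibility requirements above.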