IEEE Guidelines on the Use of Artificial Intelligence in Manuscript Preparation and Peer Review

Prepared for the RAMS® Management Committee

1. Purpose

This document summarizes IEEE’s official policy on the use of Artificial Intelligence (AI) tools in the preparation of manuscripts and the conduct of peer reviews. It is intended to inform RAMS® Management Committee members and serve as a reference when developing or aligning RAMS-specific AI use guidelines.

2. Guidelines for Authors

2.1 What Is Permitted

Authors may use AI tools to assist with the following, provided that the original ideas, research, and substantive content were produced by the human author(s):

  • Grammar checking, spelling correction, tone improvement, and general language editing
  • Checking mathematical and notational consistency of equations and the accuracy of numerical calculations, provided that all such outputs are subsequently verified by the author(s) before submission.

This type of assistance is common practice and is generally outside the scope of mandatory disclosure, though disclosure is still recommended as good practice.

2.2 What Requires Mandatory Disclosure

Any use of AI to generate content in an article — including text, figures, images, and code — must be disclosed. Per IEEE policy, the Acknowledgments section of the submitted manuscript must include:

  • The name and version of the AI system used (e.g., ChatGPT-4o or Google Gemini)
  • The specific sections of the article where AI-generated content appears
  • A brief explanation of the level at which the AI was used to generate the content

Note: This requirement applies to IEEE journals, transactions, magazines, and conference proceedings. See IEEE Author Center (Journals, Conferences, and Magazines) for the full policy text.

2.3 What Is Prohibited

The following uses of AI are strictly prohibited under IEEE policy:

  • Submitting AI-generated content without disclosure
  • Using AI to fabricate, falsify, or manipulate research data, experimental results, or figures
  • Uploading a confidential manuscript under peer review to any publicly accessible AI platform or service (this constitutes a breach of confidentiality)
  • Listing any AI system as an author or co-author — authorship requires accountability that AI systems cannot provide

2.4 Prompt-Driven Paper Generation

One practice that RAMS® explicitly discourages, even where detection may be difficult, is the use of AI to generate the substantive body of a paper from a set of prompts or outline notes. This means submitting a manuscript where the research narrative, technical argumentation, analysis, or conclusions have been written primarily by an AI system rather than the authors, regardless of whether the underlying research was conducted by humans.

This practice is problematic for several reasons. It misrepresents the intellectual contribution of the authors, it undermines the peer review process by making it harder to evaluate the genuine depth of understanding behind the work, and it erodes the scientific value of published proceedings over time.

Authors are expected to be the primary voice of their manuscript. AI may assist with language and presentation, but the formulation of ideas, the interpretation of results, and the construction of arguments must originate with the human authors. Submissions where AI has served as the primary writer rather than a supporting tool are inconsistent with RAMS® publication standards, even if a disclosure statement is included.

Note: RAMS® reserves the right to request clarification from authors regarding the extent of AI involvement in manuscript preparation, and to reject submissions where AI use is found to be inconsistent with these guidelines.

2.5 Author Accountability

Regardless of the extent to which AI tools are used, human authors bear full and sole responsibility for the accuracy, originality, ethical integrity, and scientific validity of the entire manuscript. The use of AI does not diminish or transfer any aspect of this responsibility.

3. Guidelines for Reviewers

3.1 What Is Prohibited

IEEE’s reviewer guidelines explicitly prohibit reviewers from using AI tools to generate review content. Specifically:

  • Reviewers may not use any public AI platform — directly or indirectly — to generate part or all of a peer review
  • Uploading any portion of a manuscript under review to a public AI system is a breach of confidentiality, as AI systems may learn from submitted content
  • AI-generated reviews are considered a violation of IEEE’s peer review ethics

Note: This prohibition stems from both confidentiality concerns and the professional obligation to provide an expert, independent assessment. A review generated by AI cannot fulfill this obligation.

3.2 What Is Permitted

Reviewers may use AI tools in a limited capacity for their own language and grammar editing, subject to the following conditions:

  • The AI tool is used only to improve the language of the reviewer’s own written comments
  • No part of the manuscript under review is uploaded or shared with any external platform or service
  • The substantive technical assessment and conclusions are entirely the reviewer’s own
  • AI tools may be used to check mathematical and notational consistency of equations and the accuracy of numerical calculations, provided that all such outputs are subsequently verified by the reviewer before inclusion in the review.

3.3 What Associate Editors Should Look For

Given the growing prevalence of AI-assisted writing, Associate Editors are encouraged to pay particular attention to the following when assessing a submission:

  • Research value and original contribution: Ask whether the paper demonstrates a genuine, original contribution to the field. AI can produce text that reads fluently and is structured convincingly, but may lack the depth, specificity, and insight that come from researchers who have lived with a problem. Associate Editors should probe whether the analysis reflects real understanding or surface-level synthesis.
  • Signs of prompt-driven generation: Be attentive to manuscripts where the prose is unusually polished but the technical substance is thin, where arguments are generic rather than grounded in the specific context of the research, where references are used superficially, or where the writing style is inconsistent with the depth of the reported work. These may be indicators that AI played a more central role than a supporting one.
  • Consistency between content and claimed contribution: If the abstract and conclusions describe a significant technical contribution, but the body of the paper does not substantiate it with sufficient detail, rigor, or original analysis, this warrants scrutiny regardless of whether AI was involved.

Note: Associate Editors are not expected to make definitive determinations about AI involvement. However, if an Associate Editor has serious concerns that a manuscript does not reflect the genuine intellectual work of its authors, they should raise this clearly in their assessment and flag it to the Technical Chair.

4. Quick Reference Summary

The table below provides a concise overview of permitted and prohibited uses:

Use Case                                     | Authors                                      | Reviewers
---------------------------------------------|----------------------------------------------|---------------------------------------------------
Grammar / Spelling / Formatting Assistance   | ✅ Permitted (disclosure recommended)         | ✅ Permitted (own text only, no manuscript upload)
AI-Generated Text in the Paper               | ⚠️ Permitted (mandatory disclosure required) | ❌ Prohibited
AI-Generated Figures / Images / Code         | ⚠️ Permitted (mandatory disclosure required) | ❌ Prohibited
Uploading Manuscript to a Public AI Platform | ❌ Prohibited                                 | ❌ Prohibited
Listing AI as an Author or Co-Author         | ❌ Prohibited                                 | ❌ Prohibited
Using AI to Fabricate or Manipulate Data     | ❌ Prohibited                                 | ❌ Prohibited

5. Implications for RAMS®

IEEE’s policy establishes a clear baseline, but it does not provide RAMS®-specific procedural guidance for authors submitting to the RAMS® symposium or for the Associate Editors who handle those submissions. Based on these guidelines, the following actions are recommended for the RAMS® Management Committee to consider:

  • Adopt and publish an explicit AI use policy on the RAMS® website for both authors and Associate Editors, aligned with the IEEE framework above
  • Update the call for papers and author instructions to include mandatory AI disclosure requirements in the Acknowledgments section
  • Update reviewer guidance documents to explicitly prohibit the use of AI tools in generating review content
  • Explore whether AI detection tools should be incorporated into the editorial workflow, with the understanding that current tools have documented limitations including false positives and should not be the sole basis for any editorial decision
  • Consider referencing comparable policies from high-impact related journals such as Reliability Engineering & System Safety as benchmarks

6. Publishing the Policy on the RAMS® Website

A policy is effective only if authors and Associate Editors are aware of it before they begin their work. The RAMS® AI use policy should therefore be prominently displayed on the RAMS® website and must not be buried within a lengthy author guide or relegated to a footnote. The following placement is strongly recommended:

  • Submissions page: A clearly labeled “AI Use Policy” section should appear near the top of the submissions page, visible without scrolling, with a direct link to the full policy document.
  • Reviewer guidance page: An explicit notice should appear informing Associate Editors that AI tools may not be used to generate review content and that uploading manuscripts to public AI platforms is a breach of confidentiality.
  • Call for papers: A brief statement referencing the AI use policy should be included in all calls for papers distributed to the community.
  • Author instructions / paper kit: The policy link should appear at the beginning of the author instructions, before submission formatting details.

This approach reflects best practice among leading IEEE conferences and journals. Making the policy visible and accessible from the outset sets clear expectations, reduces the risk of inadvertent violations, and demonstrates that RAMS® is committed to publication integrity.

7. RAMS® Paper Template: Acknowledgments Section

To support compliance with IEEE’s mandatory AI disclosure requirements, the RAMS® paper template should include a dedicated Acknowledgments section. Authors who used AI tools in preparing their manuscript are required to disclose this in the Acknowledgments. The section below is proposed for inclusion in the official RAMS® paper template.

ACKNOWLEDGMENTS

Pre-written example statement (authors should adapt as appropriate):

The authors used [name and version of AI tool, e.g., ChatGPT-4o by OpenAI] to assist with [describe use, e.g., grammar checking and language editing of Section III and Section IV / generating the literature summary in Section II / producing the flowchart in Figure 2]. All substantive technical content, analysis, conclusions, and claims are solely the work of the authors, who take full responsibility for the accuracy and integrity of this manuscript.
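If the RAMS® paper template is distributed in LaTeX (an assumption here; the actual template format may differ), the disclosure section could be sketched roughly as follows, carrying over the bracketed placeholders from the example statement above:

```latex
% Proposed Acknowledgments section for the RAMS paper template (sketch).
% Authors replace the bracketed placeholders with their own details;
% authors who did not use AI tools may omit the disclosure sentence.
\section*{Acknowledgments}
The authors used [name and version of AI tool] to assist with
[describe use]. All substantive technical content, analysis,
conclusions, and claims are solely the work of the authors, who take
full responsibility for the accuracy and integrity of this manuscript.
```

Keeping the disclosure inside the standard Acknowledgments environment, rather than a separate custom section, matches IEEE's requirement that the statement appear in the Acknowledgments of the submitted manuscript.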

8. Document Acknowledgment

This document was prepared with the assistance of Claude AI (Claude Sonnet, developed by Anthropic). Claude was used to research IEEE’s current policies on AI use in manuscript preparation and peer review, structure and draft the policy guidelines, and format the document for distribution. All editorial decisions, the selection of policy positions, and the framing of recommendations reflect the judgment of the human author. The content has been reviewed for accuracy and alignment with IEEE requirements prior to distribution by the Management Committee.

9. References

[1]  IEEE Author Center (Journals) — Submission and Peer Review Policies

[2]  IEEE Author Center (Conferences) — Submission Policies

[3]  IEEE Author Center (Magazines) — Submission and Peer Review Policies

[4]  IEEE Open — Author Guidelines for AI-Generated Text (April 2024)

[5]  IEEE Author Center — IEEE Journals and Magazines Reviewer Guidelines (Updated April 2024)

[6]  Reliability Engineering & System Safety — Guide for Authors