Artificial Intelligence Policy



Artificial Intelligence (AI) Policy

Journal of Childhood Education (JOCE)

1. Purpose and Scope

The Journal of Childhood Education recognizes the growing role of Artificial Intelligence (AI) tools in academic writing, data analysis, and scholarly communication. This policy establishes ethical guidelines for the responsible, transparent, and accountable use of AI technologies in research and publication processes. The policy aims to ensure that all manuscripts maintain academic integrity, methodological rigor, and originality consistent with international publishing standards.

Given the journal’s focus on early childhood education, pedagogy, and child development research, authors must ensure that AI-assisted content reflects scholarly accuracy, educational responsibility, and ethical sensitivity toward research involving children and educational contexts.


2. Definition of Artificial Intelligence Tools

Artificial Intelligence tools refer to software or systems capable of generating text, images, code, analysis, or other research-related outputs through machine learning or automated processes. Examples include generative AI writing assistants, automated translation tools, and AI-based data analysis platforms.

AI tools are considered assistive technologies, not intellectual contributors.


3. Authorship and Accountability

AI tools cannot be listed as authors or co-authors under any circumstances. Authorship is limited to individuals who have made substantial intellectual contributions to the research and manuscript.

All authors remain fully responsible for:

  • The accuracy, originality, and integrity of the manuscript.

  • Ethical compliance and scholarly validity.

  • Proper interpretation of research findings.

The use of AI does not transfer responsibility from human authors to technological systems.


4. Acceptable Uses of AI

The Journal of Childhood Education permits the responsible use of AI tools for limited support functions, including:

  • Language editing and grammar improvement.

  • Formatting assistance.

  • Data organization or preliminary coding support.

  • Idea structuring or outline generation.

AI use must not replace critical academic thinking, theoretical development, or pedagogical interpretation.


5. Prohibited Uses of AI

The following practices are strictly prohibited:

  • Generating entire manuscripts or substantial sections without substantive human intellectual contribution and revision.

  • Fabricating references, data, or research results.

  • Producing synthetic empirical findings.

  • Using AI to manipulate peer review or editorial processes.

  • Uploading confidential peer-review materials into AI systems.

Any detected misuse may result in manuscript rejection or retraction.


6. Transparency and Disclosure Requirements

Authors must clearly disclose AI use in a dedicated statement within the manuscript. The disclosure should include:

  • Name of the AI tool used.

  • Purpose of its use (e.g., language editing, data visualization).

  • Confirmation that authors verified and revised all AI-generated content.

Example disclosure statement:

“The authors used AI-assisted language editing tools solely to improve readability. All conceptual, methodological, and interpretative aspects were developed and verified by the authors.”

Failure to disclose AI use constitutes a breach of publication ethics.


7. Ethical Considerations in Early Childhood Research

Because the Journal of Childhood Education publishes research involving children, learning environments, and educational practices, authors must ensure that AI tools:

  • Do not compromise participant confidentiality.

  • Do not introduce bias into developmental or pedagogical interpretations.

  • Do not replace ethical human judgment in research design or analysis.

Research involving minors must remain fully compliant with ethical research standards regardless of AI involvement.


8. Responsibilities of Editors and Reviewers

Editors and reviewers may use AI tools cautiously for:

  • Language refinement suggestions.

  • Detecting structural inconsistencies.

However, they must not upload confidential manuscripts or reviewer comments into external AI systems that may store or reuse content. Peer review confidentiality must be strictly maintained.


9. Screening and Compliance

All submissions may undergo screening for AI-generated content alongside plagiarism checks as part of the journal’s quality assurance process. Manuscripts suspected of undisclosed or excessive AI use may be:

  • Returned for revision,

  • Subjected to additional editorial review, or

  • Rejected for failing to meet scholarly originality standards.


10. Policy Updates

This Artificial Intelligence Policy will be periodically reviewed and updated to reflect developments in scholarly publishing, technological innovation, and international ethical standards.

