AI Policy

Generative Artificial Intelligence (AI) tools, such as large language models (LLMs) and multimodal models, continue to develop and evolve and can be a useful instrument in research.

While we (International Journal of Social and Educational Innovation) welcome the new opportunities offered by Generative AI tools, any use of AI must be documented and included among the resources and methods used.

From 1 January 2025, an AI Report will be added alongside the Originality Report as a decisional factor. Any undocumented use of AI, or an AI score above 15% for the submitted content, will be considered grounds for manuscript rejection.

While Generative AI has immense potential to enhance creativity for authors, the current generation of tools carries certain risks.

Some of the risks associated with the way Generative AI tools work today are:

  • Inaccuracy and bias: Generative AI tools are of a statistical nature (as opposed to factual) and, as such, can introduce inaccuracies, falsities (so-called hallucinations) or bias, which can be hard to detect, verify, and correct.
  • Lack of attribution: Generative AI tools often lack the scholarly community's standard practice of correctly and precisely attributing ideas, quotes, or citations.
  • Confidentiality and Intellectual Property Risks: At present, Generative AI tools are often used on third-party platforms that may not offer sufficient standards of confidentiality, data security, or copyright protection.
  • Unintended uses: Generative AI providers may reuse the input or output data from user interactions (e.g. for AI training). This practice could potentially infringe on the rights of authors and publishers, amongst others.

Authors

Authors are accountable for the originality, validity, and integrity of the content of their submissions. In choosing to use Generative AI tools, journal authors are expected to do so responsibly and in accordance with our journal editorial policies on authorship and principles of publishing ethics. This includes reviewing the outputs of any Generative AI tools and confirming content accuracy. Responsible, documented uses of Generative AI tools include:

  • Idea generation and idea exploration
  • Language improvement
  • Interactive online search with LLM-enhanced search engines
  • Literature classification
  • Coding assistance

Authors are responsible for ensuring that the content of their submissions meets the required standards of rigorous scientific and scholarly assessment, research, and validation, and is created by the author.

Authors must clearly acknowledge within the article or book any use of Generative AI tools through a statement that includes: the full name of the tool used (with version number), how it was used, and the reason for use. For article submissions, this statement must be included in the Methods or Acknowledgments section.

Authors should not submit manuscripts where Generative AI tools have been used in ways that replace core researcher and author responsibilities.