Artificial Intelligence (AI) Usage Policy
The journal adheres to the principles of academic integrity, transparency, and responsibility in the use of automated tools and artificial intelligence (AI) technologies, in particular generative models such as large language models (LLMs) and chatbots, in scientific and publishing activities.
Policy for authors
- Authors must disclose any use of generative artificial intelligence in preparing the manuscript, except for purely technical editing of the text (language correction, formatting, stylistic editing).
- Authors bear full responsibility for the reliability, accuracy, and scientific correctness of any results obtained with automated tools.
- Generative artificial intelligence cannot be listed as an author or co-author of a scientific publication.
- Generative AI cannot be cited as a source of information in the reference list.
Policy for reviewers and editors
- Reviewers and editors must not use generative artificial intelligence to produce reviews or editorial conclusions, given the risks of confidentiality breaches, bias, superficial assessment, hidden prompts, and fabricated information (including fictitious references).
- Automated tools may be used for editing or linguistic improvement of text, provided that such use is fully disclosed.
- All automated processes used by the journal (in particular for checking text similarity, detecting image manipulation, or identifying undeclared use of AI) are subject to mandatory human oversight (human-in-the-loop).
- The journal ensures that an editor or responsible staff member verifies the output of automated tools before any editorial decision is made.
The journal upholds the principles of transparency, responsible use of technology, and academic ethics throughout the preparation, review, and publication of scientific materials.