ChatGPT and Generative AI

Policy on ChatGPT and generative AI for the Bigint Computing Journal, based on guidance from COPE (the Committee on Publication Ethics).

Use of Large Language Models and Generative AI Tools in Writing Your Submission

Bigint recognizes the benefits of large language models (LLMs), such as ChatGPT, and other generative AI tools as productivity aids for authors during article preparation. These tools can assist in generating initial ideas, structuring content, summarizing, paraphrasing, and refining language. However, all language models have limitations and cannot replicate human creative and critical thinking. Human oversight remains essential to ensure the accuracy and appropriateness of the content presented to readers. Therefore, Bigint requires authors to be mindful of the following considerations when using LLMs in their submissions:

  1. Objectivity: LLM-generated text may reproduce previously published content containing biases, including racism, sexism, or other prejudices, and minority viewpoints may be under-represented. Because generated text is decontextualized, these biases are harder to identify, and using LLMs risks perpetuating them.

  2. Accuracy: LLMs can produce false content, particularly when used outside their domain or when addressing complex or ambiguous topics. They may generate linguistically plausible but scientifically implausible content, state incorrect facts, or even fabricate nonexistent citations. Some LLMs also lack access to recent data, resulting in an incomplete picture.

  3. Contextual understanding: LLMs struggle to apply human understanding to the context of a given text, especially when dealing with idiomatic expressions, sarcasm, humor, or metaphorical language. This can lead to errors or misinterpretations in the generated content.

  4. Training data: LLMs require a substantial amount of high-quality training data to achieve optimal performance. However, in certain domains or languages, such data may not be readily available, limiting the model's usefulness.

Guidance for Authors

Authors are required to:

  1. Clearly indicate the use of language models in their manuscripts, specifying which model was employed and for what purpose. This information should be provided in the methods or acknowledgments section, as appropriate.

  2. Verify the accuracy, validity, and appropriateness of the content and citations generated by language models, correcting any errors or inconsistencies that may arise.

  3. Provide a list of sources used to generate content and citations, including those generated by language models. Authors should carefully review citations to ensure accuracy and proper referencing.

  4. Be aware of the potential for plagiarism when language models reproduce substantial text from other sources. Authors should cross-check the original sources to avoid plagiarizing others' work.

  5. Acknowledge the limitations of language models in their manuscripts, including the potential for bias, errors, and knowledge gaps.

Please note that AI bots such as ChatGPT should not be listed as authors in your submission.

Appropriate corrective action will be taken if a published article is found to contain undisclosed use of such tools.

Authors should review the guidelines of the journal they are submitting to for any specific policies regarding these tools.

Editors and Reviewers

Editors and reviewers should evaluate the appropriateness of using LLMs and ensure the accuracy and validity of the generated content.

Further information

Please see the World Association of Medical Editors (WAME) recommendations on chatbots, ChatGPT, and scholarly manuscripts, and the Committee on Publication Ethics (COPE) position statement on Authorship and AI tools.

This policy may be subject to further evolution as we collaborate with our publishing partners to understand how emerging technologies can facilitate or hinder the research publication process. Please revisit this page for the latest information.