As artificial intelligence (AI) becomes increasingly prevalent in academic research, it’s crucial for researchers to understand how to use these tools responsibly. This guide, based on my presentation for the GCU Academic Writing Centre and GCU Graduate School, outlines key considerations for using AI in research.

1. Understanding AI in Research

Artificial Intelligence (AI) is a broad concept encompassing various approaches to creating intelligent systems that can perform tasks traditionally requiring human intelligence. Within this field, there are several important distinctions:

Generative AI refers to systems that can generate new content, data, or information. These systems often use techniques like neural networks to create original outputs based on patterns and data they have been trained on. Generative AI is used in various fields, including art generation, content creation, and natural language generation.

Large Language Models (LLMs) are a specific type of generative AI that focuses on understanding and generating human-like language. These models are trained on vast amounts of text data and can generate coherent and contextually relevant text based on input prompts. GPT-3 (Generative Pre-trained Transformer 3) is a prominent example of a large language model developed by OpenAI.

Algorithms are step-by-step procedures or sets of rules followed to perform specific tasks or solve particular problems. In the context of AI, algorithms are the underlying mathematical and logical instructions that enable machines to perform tasks. AI systems, including generative AI and LLMs, are built on algorithms. Algorithms define how data is processed, how models learn from data, and how they generate outputs.
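To make the idea of an algorithm concrete, here is a minimal, self-contained sketch of one classic learning algorithm, gradient descent, fitting a straight line to toy data. The data points and numbers are purely illustrative:

    # A minimal illustration of an "algorithm": gradient descent,
    # fitting the line y = w * x to toy data, one explicit step at a time.

    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # illustrative (x, y) pairs
    w = 0.0              # model parameter, initially untrained
    learning_rate = 0.05

    for step in range(100):
        # Rule 1: measure how wrong the current model is (gradient of the squared error)
        gradient = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        # Rule 2: nudge the parameter in the direction that reduces the error
        w -= learning_rate * gradient

    print(f"learned weight: {w:.2f}")  # converges towards ~2.04 for this data

The two rules inside the loop are the algorithm: a fixed, repeatable procedure that determines how the model learns from data.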

AI tools commonly used in research include:

Chatbots: ChatGPT, Bard

Academic tools: Perplexity, Scite Assistant, Elicit, Consensus

Note-taking tools: Notion

Mind mapping tools: GitMind, Whimsical

Find out more and compare generative AI tools with the Hong Kong University of Science and Technology library guide.

2. Research Integrity and AI

Research integrity is paramount in academic pursuits. When incorporating AI into research practices, researchers should critically evaluate where and how AI tools fit within research integrity guidelines and be aware of any potential ethical concerns. The Russell Group Principles on AI and Education are a good starting point.

3. Responsible AI Practices in Research

The THINK CHECK USE framework is a helpful guide for responsible AI use:

THINK

  • Are you using a trusted AI tool?
  • Is it the right tool for your purpose?

CHECK

  • Privacy and data sharing agreements
  • Longevity of the tool

USE

  • Keep a record of the tool and version used
  • Document prompts or workflow (see the sketch after this list)
  • Maintain a secure, clean copy of original work
  • Implement version control
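
As a concrete illustration of the USE checklist, here is a minimal sketch of one way to log each AI interaction alongside your project files. The file name and record fields are illustrative assumptions, not a standard:

    import json
    from datetime import datetime, timezone

    # Hypothetical record of one AI interaction; adapt the fields to your project.
    record = {
        "tool": "ChatGPT",                               # tool name
        "version": "GPT-4, May 2024 release",            # illustrative version label
        "date": datetime.now(timezone.utc).isoformat(),  # when it was used
        "purpose": "feedback on draft abstract",         # why it was used
        "prompt": "Suggest improvements to the abstract below ...",
    }

    # Append to a simple project log ("ai_use_log.jsonl" is an assumed name).
    with open("ai_use_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

A plain, append-only log like this is easy to attach to an ethics application or to summarise in a methods section.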

Be aware that unacceptable use of generative AI might breach good research practice and violate institutional policies. Improper use or lack of proper referencing could lead to research misconduct cases.

4. AI Applications in Research

Research Planning

AI can assist in planning project timelines, deciding on milestones, and overall project management. However, it’s important not to rely uncritically on AI for structuring your planning. Always discuss plans with colleagues or supervisors.

Data Analysis

  • Some AI tools are task-specific and can support data analysis.
  • Popular data analysis software is likely to integrate AI features in newer versions (for example, NVivo 14).
  • For qualitative data, be aware of potential breaches of confidentiality and intellectual property.
  • Ensure research participants are aware of any planned use of AI tools, and include this in ethics approvals.
  • Don’t rely on AI alone for data analysis.

Academic Writing

Authorship and Editing (AI is covered in my editing workshop; get in touch if interested)

  • AI tools can provide feedback on written work and help experiment with different writing styles.
  • The original piece must be produced by the researcher.
  • Work generated entirely by AI and submitted without acknowledgement is considered plagiarism.

Peer Review

Generative AI tools should not be used by peer reviewers (for either publications or grants) or by editors, as Science reported in "Science funding agencies say no to using AI for peer review", because:

  • This often breaches confidentiality. Some platforms, such as Duet AI in Google Workspace, retain submitted content and allow people to read it; other tools may not declare this but still retain content, for example as training material. Submitted content could be leaked, hacked, or closely reproduced if a prompt is specific enough.
  • The tools are not validated to critically appraise scholarly content and risk producing inaccurate, cursory, and biased reviews.
  • If someone cannot assess a manuscript, they should decline to review or edit rather than relying on a chatbot.

5. Best Practices for AI Use

To ensure responsible use of AI in research:

  • Keep detailed records of AI tools and versions used
  • Document all prompts or workflows
  • Maintain secure, clean copies of original work
  • Implement version control to track changes
  • Properly acknowledge and cite AI use in your research outputs
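
For the last point, one concrete model is APA Style's interim guidance for citing generative AI, which suggests a reference along these lines (check whether your institution or target journal prefers a different style):

    OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model].
    https://chat.openai.com/chat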

Conclusions

As AI continues to evolve, it’s essential for researchers to stay informed about best practices and institutional guidelines. By using AI responsibly, graduate students can enhance their research while maintaining academic integrity. Remember to critically evaluate AI tools, use them as supplements to your own expertise, and always prioritise ethical considerations in your research process.


Download the Introduction to AI in research slides below. Please follow CC BY 4.0 guidelines if you use these.