AI for research

Artificial Intelligence (AI) can change the way you approach your research: from speeding up your literature reviews to helping transcribe interviews and analyse data.
You already use AI more than you might realise. It is behind many of the smart tools you use every day. AI can scan huge amounts of information in seconds, spot patterns you might miss, and even help you brainstorm ideas.
While AI is developing fast, it is still just a tool. The value comes from how you use it: to work smarter, solve problems, and explore new ideas.
Responsible research fosters trust and confidence in the research process. There are principles that form the foundation of good research practice and these must be upheld regardless of the tools or technologies employed. A helpful resource is Embracing AI with integrity, produced by the UK Research Integrity Office.
Generative AI (GenAI) builds on traditional AI methods. It represents a major leap forward, yet it can also be seen as a natural next step in AI’s ongoing evolution. This LinkedIn Learning course illustrates the differences between generative and traditional machine learning.
Misinformation and bias in AI
While AI tools can assist users with a varied and ever-increasing array of tasks, they can also produce misinformation and make blatantly wrong statements. AI tools can be used to create fake information, which other AI tools in turn cannot distinguish from genuine content. So, when using GenAI for research purposes you need to FACT CHECK.
Bias is another significant issue. GenAI tools are trained on vast amounts of data (usually from the internet), and these data carry inherent biases. The tools therefore often reflect and perpetuate the biases already held in the data used to train them. AI tools that use reinforcement learning with human feedback (RLHF) are also prone to bias, as the human testers who provide feedback are often not neutral.
What to do?
- Fact check: Always verify the information generated by AI, including checking the accuracy of citations it may use.
- Critical evaluation: Assess AI output for potential biases that may affect the information’s integrity.
- Avoid false citations: Do not trust AI tools to generate a list of sources on a particular topic as they may fabricate citations.
- Consult developer notes: Check whether the AI tool is up to date, especially when relevant data or events are involved.
- Understand AI limitations: Plausible-sounding responses may not always be factually reliable.
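As a small, hypothetical sketch of the fact-checking step (not part of the guidance above): before trusting an AI-generated citation, you can at least check that any DOI it supplies is syntactically plausible. A full check would mean resolving the DOI at https://doi.org, since a fabricated reference can still carry a well-formed identifier.

```python
import re

# Basic syntactic check for a DOI of the form 10.<registrant>/<suffix>.
# Passing this check does NOT mean the citation is real -- always
# resolve the DOI at https://doi.org/<doi> before citing it.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(candidate: str) -> bool:
    """Return True if the string is shaped like a DOI."""
    return bool(DOI_PATTERN.match(candidate.strip()))

print(looks_like_doi("10.1000/xyz123"))  # well-formed identifier
print(looks_like_doi("doi:10.1000"))     # malformed identifier
```

This catches only the crudest fabrications; verifying that the cited paper exists, and says what the AI claims it says, still requires checking the source itself.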
Responsible use of GenAI: acknowledge and reference
The use of generative AI tools must always be acknowledged, whatever part the tool played in your work. The appropriate citation guidelines should be followed.
- Cite AI-generated content when you paraphrase, quote, or use any AI-produced text, image or data.
- Always mention the prompt, AI tool name, developer (e.g. OpenAI), generation date and URL (if available).
If you plan to publish, check the journal’s policies regarding the use of generative AI tools.
- Edge Hill student guide to ethical use of AI
- Royal Society introduction to AI and ethics

Using AI tools
There are many AI and GenAI tools available that support specific tasks in the research process. That number is growing.
Relying on a single tool may leave gaps in coverage, so it is essential to use multiple sources to build a well-rounded understanding of a topic.
Privacy, intellectual property and copyright
A critical issue in using GenAI tools is privacy and intellectual property. The large datasets used for training can inadvertently include personal or sensitive data. AI models do not specifically collect identifying information; however, the risk of re-identification remains, as patterns in the data could link generated content back to an individual.
Thus, it is essential to consider personal data, intellectual property and copyright. When using AI in research the following questions should be asked:
- Who owns the information you are inputting to the AI tool?
- Who owns the information once it is in the AI tool?
- Who owns the outputs from the AI tool?
The answers to these questions might not be as straightforward as expected. Many tools are free to use initially, while others are available on a subscription basis, and many contain problematic terms and conditions.
- The University of Birmingham has developed a helpful Evaluative Framework for AI tools for this very purpose.
- The Generative AI Product Tracker is a useful resource that lists AI tools, which you can then evaluate using the framework.
AI tools: licence to use?
Before you start using a tool you have identified as appropriate for your research, it is crucial to review the terms and conditions (the licence), which form the legal agreement between you and the supplier. The University of Birmingham has helpfully produced a quick review checklist, consisting of 10 questions, to assist with reviewing the terms.
If a group of users or researchers wishes to procure a tool, a more in-depth review may be required.
Getting the best out of an AI tool
Prompt design is the art of crafting precise and structured instructions to optimise interaction with Large Language Models (LLMs).
The CLEAR framework (Concise, Logical, Explicit, Adaptive, Reflective) is a structured approach to refining AI queries. It provides a systematic method for improving prompt efficiency.
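As an illustration of prompt refinement in this spirit (a hypothetical example, not taken from the CLEAR guidance itself), the same request can be made far more useful by stating the role, task, output format and constraints explicitly:

```python
# Hypothetical illustration: a vague prompt versus a structured one.
# The structured version makes the role, task, format and constraints
# explicit, which tends to produce more focused responses.
vague_prompt = "Tell me about literature reviews."

structured_prompt = (
    "You are an academic librarian supporting postgraduate researchers. "  # role
    "Task: outline the main stages of a systematic literature review. "    # explicit task
    "Format: a numbered list, one sentence per stage. "                    # output format
    "Constraints: focus on health sciences; do not invent citations."      # scope and limits
)

print(structured_prompt)
```

The exact wording is illustrative; the point is that each added element narrows the space of plausible answers the model can give.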
Find out more about creating CLEAR AI prompts.

Further support
For more support in using AI tools or with any other questions about AI for research, please contact the Academic Engagement Team.