Always make sure your use of GenAI in academic work is responsible and within what is permitted by the University's policies. AI can produce plausible-sounding information, but this does not mean it is accurate or reliable. There are a number of issues to be cautious of when you are using AI tools:
GenAI tools can sometimes create incorrect or false information – these are known as hallucinations. The best way to detect hallucinations in a tool's output is to already be familiar with your topic area and think critically about whether the information can be verified with external sources.
Publicly accessible GenAI tools were trained on data scraped from billions of pages on the internet. If the writing they were trained on contained biases, they can reproduce those biases in their output.
GenAI is regularly known to provide incorrect or misleading references. Check references against known sources: search for the article title, journal title, or Digital Object Identifier (DOI) in a standard search engine.
It isn't always possible to see where the information has come from or to judge whether the source is reliable.
It can plagiarise whole texts – reproducing works verbatim or providing paraphrases that aren't sufficiently different from the original source.
As its text output is based on the probabilities of words appearing together, the results can sometimes be over-simplistic; it may repeat the same phrases, lack critical thought, analysis, or reasoning, and it doesn't have an original voice.
Do not use these tools as a replacement for producing your own work. They can increase productivity, but you need to develop your own writing skills and learn how to identify and critically analyse literature.