Generative Artificial Intelligence (GenAI)

Tips for using AI

Hallucinations

Although AI can generate credible-looking results at first glance, it is important to critically evaluate the output, for example by cross-referencing the information with academic sources found in library databases, to make sure it is accurate.

Because these models were built to provide answers, they will rarely say “I do not know.” Instead, they will offer a made-up answer that reads well and looks credible. These fabrications, presented as fact, are called hallucinations. Be cautious when using AI-generated output: hallucinations can make it difficult to distinguish accurate information from inaccurate information.

Can I use AI to find references for my paper?

If you ask an AI tool to produce a list of references, it will do so. References created by AI look credible at first sight: they are presented in APA-compliant format, the journal names look valid, and real researchers are listed as authors. However, in some cases these works cannot be found in academic sources (library databases, journal archives, etc.). When that happens, there is a high probability the references were fabricated entirely by the AI and do not correspond to real works. We recommend caution: always double-check references provided by an AI tool.

Privacy

Sharing confidential or sensitive information with AI tools is generally not advised. At this time, users do not have control over how the information they share with AI tools is managed, and they run the risk of having that information made available to third parties (either other users or business partners of the organization that owns or trains the tool).