Student Guide to ChatGPT

ChatGPT and Hallucinations


Beware: ChatGPT Can Hallucinate!

ChatGPT is a large language model (LLM). It works by predicting the next word based on the words that came before, that is, on the context of the moment. It has no built-in way to judge whether what it produces is accurate. Just as your phone's autocomplete often suggests the wrong word, ChatGPT can fail too, only on a much larger scale.
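To see the idea in miniature, here is a toy sketch in Python. It is not ChatGPT's actual code, just an illustration of the principle: it counts which word tends to follow which in a small sample text, then "predicts" the most frequent follower, with no notion of whether the result is true.

# Toy sketch only: real LLMs are vastly more sophisticated, but the
# principle is similar: pick a likely next word from patterns in text.
from collections import Counter, defaultdict

sample_text = (
    "the cat sat on the mat the cat ate the food the dog sat on the rug"
).split()

# Count how often each word follows each other word (a "bigram" model).
followers = defaultdict(Counter)
for current_word, next_word in zip(sample_text, sample_text[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    # Return the statistically likeliest next word; accuracy never enters in.
    candidates = followers.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # prints "cat", the most frequent follower

A real model uses far richer statistics over far more text, but the core point stands: the prediction is driven by patterns in language, not by a check against facts.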

ChatGPT is prone to what are called “hallucinations”: information that is completely made up. Unfortunately, hallucinated information can look convincing and may even come with citations to sources that do not exist (see the article from The Guardian on the “Associated Sources for ‘Hallucinations’” page).

ChatGPT and Scholarly Work

For scholarly research purposes, ChatGPT is unreliable. When questioned further, it may recognize and apologize for an error. When corrected, it sometimes changes its answer to a correct one; other times it simply offers a different incorrect answer.

Scholars who have experimented with ChatGPT agree that the tool can be useful in some applications, but only if one already knows the subject well enough to verify the accuracy of what it provides. In other words, its output should not be taken at face value but carefully evaluated. For this reason, it poses a particular risk for students, who are still in the midst of mastering a subject and may not yet be able to judge whether its answers are accurate.

Research

It’s important for students to understand that scholarly research still depends on finding reliable sources, whether through licensed databases or trustworthy sites on the open web. One way ChatGPT can help with scholarly research, however, is in identifying useful keywords for searching licensed databases. For example, you might ask it to suggest search terms for a topic such as the effects of social media on adolescent mental health. Librarians are always available to help you find the best resources as well.