ChatGPT is a large language model (LLM). It works by predicting the next word based on the words that came before it, that is, the context of the moment. It cannot judge the accuracy of what it is producing. As a result, it is prone to producing what are called "hallucinations": information that is completely made up. Unfortunately, hallucinated information can look convincing and can even come with nonexistent citations (see the article from The Guardian on the "Associated Sources for 'Hallucinations'" page).
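To make the "next-word prediction" idea concrete, here is a minimal, purely illustrative sketch in Python. The vocabulary and probabilities are invented for the example; a real LLM learns weights over tens of thousands of tokens from vast amounts of text and conditions on the whole context, but the core loop of "pick a likely next word, append it, repeat" is the same.

import random

# Toy next-word model: for each preceding word, a few candidate next words
# with made-up probabilities (assumptions for illustration only).
NEXT_WORD = {
    "the":     [("cat", 0.5), ("library", 0.3), ("study", 0.2)],
    "cat":     [("sat", 0.6), ("slept", 0.4)],
    "library": [("opened", 0.7), ("closed", 0.3)],
    "sat":     [("quietly", 1.0)],
    "opened":  [("early", 1.0)],
}

def generate(start_word: str, length: int = 4) -> str:
    """Repeatedly pick a likely next word and append it to the text."""
    words = [start_word]
    for _ in range(length):
        candidates = NEXT_WORD.get(words[-1])
        if not candidates:  # no known continuation for this word
            break
        choices, weights = zip(*candidates)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly"

Notice that nothing in this loop checks whether the resulting sentence is true; it only follows the probabilities. That is why a system built this way can produce fluent but false statements, which is the essence of a hallucination.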
For scholarly research purposes, ChatGPT is unreliable. Numerous scholarly articles have already appeared illustrating how the tool provided inaccurate information and/or fabricated citations. When queried further, ChatGPT often acknowledges and apologizes for its error. When corrected, it may change its answer to a correct one; at other times it provides a different incorrect answer.
It is important that students understand that scholarly research still depends on finding reliable sources through licensed databases and credible websites. That said, one way ChatGPT can help with scholarly research is by suggesting useful keywords for searches in licensed databases.
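As an illustration of the keyword-brainstorming use, here is a hedged sketch that asks for database search terms using the OpenAI Python library. The model name and the example topic are assumptions made for the example; students can just as easily type the same prompt into the ChatGPT interface and then try the suggested terms in a licensed database.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

topic = "the effect of social media use on adolescent mental health"  # example topic

# Ask the model only for search terms, not for facts or citations,
# since factual claims and references may be hallucinated.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any current chat model would work
    messages=[{
        "role": "user",
        "content": (
            "Suggest 8-10 keywords and subject terms I could use to search "
            f"a scholarly database for research on {topic}."
        ),
    }],
)

print(response.choices[0].message.content)

The suggested terms are a starting point for searching, not sources in themselves; the actual articles should still come from the library's licensed databases.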