Generative AI for Faculty

Risks

  • The most important thing to know about generative AI (GAI) at this stage of its development is that it often produces false information and invented sources. Familiarize yourself with its limitations and weaknesses. You are responsible for all materials you submit, including any inaccurate, biased, or otherwise unethical content generated by an AI tool. 

  • GAI models like ChatGPT do not access or understand reality, and they do not check facts. They predict and generate language based on probability, and they sometimes produce biased results, misinformation, or citations to non-existent sources (“hallucinations”), so all of their output must be verified.

  • These models were trained on limited datasets, and many lack access to current information, which is yet another reason thorough verification is essential.

  • Remember that there is no guarantee or expectation of privacy when using AI tools, so avoid including any personal or confidential information in prompts.

  • GAI output can reflect inherent biases against minorities and women.

  • Fact-check all AI outputs. Assume they are wrong until you cross-check the claims against reliable sources. Current AI models will confidently reassert factual errors. You remain responsible for any errors or omissions.

  • Be aware that GAI use may stifle your own independent thinking and creativity.

  • GAI can help students see possibilities they might otherwise miss, but it can also narrow their range of insights and ideas or blind them to other possibilities. 

  • As beginners in your field(s) of study, you are not (yet) consistently able to identify gaps, biases, or outright misinformation in GAI output.