Being an Informed User of Technology Can Help Mitigate Abuses in the Industry
A variety of criticisms have arisen concerning how ChatGPT was created, particularly with regard to social justice, ethics, inherent biases, and risks. It's worthwhile for students to learn about these issues, not just because they are likely to find them interesting, but also to promote awareness of the very real impacts these technologies have.
Taking the time to consider the impact these technologies have on society, developing nations, institutional racism, threats to democracy, and related concerns will help train students to become more informed about new technologies as they emerge, and to become more responsible users. When users are aware of abuses within the technology industry, that awareness itself can help rein those abuses in.
Social Justice
Almost as soon as ChatGPT-3.5 was released in November 2022, articles began to appear about its controversial use of labor. In January 2023, Time magazine ran an article about OpenAI's contract with a company in Kenya that paid workers $2.00/hour to screen highly objectionable material for the company. Many of the workers reported feeling traumatized by the viewings. Workers in other parts of the globe have also been exploited by big tech companies.
Another fear is that the digital divide will widen even further between those who have access to computers and AI and those who don't. As OpenAI continues to monetize ChatGPT, some will be able to afford it while others will not.
Bias and Ethics
Other concerns include the inherent bias found in LLMs, largely due to the information sources they are trained on. Critics contend that ChatGPT is biased against people of color, minorities, and women. Critics also point to the unethical practice of scraping content from a variety of sources without obtaining permission.
Risks
Risks include the hallucinations addressed in the "ChatGPT Hallucination" section. Some of the hallucinations created by ChatGPT have fabricated information that amounted to unfounded personal smears of real individuals. There is also the very real risk of the tool being used to spread false and dangerous information widely. Additionally, there is genuine concern within the tech industry that hallucinations may never be completely eliminated. All these risks argue for cautious, informed use of the tool.
The links in the section "Associated Sources on Social Justice, Ethics, Bias and Risks" point to sources with further information about these issues.