Relying solely on ChatGPT for factual or up-to-date information, or for appropriate case studies or legal cases, can be embarrassing or even career-threatening.

This was aptly demonstrated recently by a widely reported legal case in New York, where a lawyer submitted a federal court filing that cited at least six cases that do not exist. (https://www.forbes.com/sites/mattnovak/2023/05/27/lawyer-uses-chatgpt-in-federal-court-and-it-goes-horribly-wrong/?sh=e15a0493494d)

The lawyer relied on ChatGPT to provide legal cases relevant to the submission he had to make in representing his client in a lawsuit. However, ChatGPT “invented” the cases. The lawyer even asked ChatGPT whether the cases were real, and ChatGPT insisted that they were. Needless to say, the lawyer is now in deep trouble. He says he did not know that ChatGPT could simply invent cases.

The phenomenon of ChatGPT providing information that is factually incorrect or unrelated to the context, or simply inventing facts and cases, is called “hallucination”. It is a well-known phenomenon, and the ChatGPT start page carries a warning to this effect. However, in their enthusiasm about the well-formulated and well-structured responses the application provides, users often ignore or forget the limitations and drawbacks of this wizard.

The results generated by applications such as ChatGPT always have to be checked against other, more reliable internet sources of information.
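
As a rough illustration of such cross-checking, the sketch below looks up a case citation suggested by ChatGPT in CourtListener, a free public case-law database, before trusting it. The endpoint, query parameters, response fields and the example citation are assumptions for illustration only and should be confirmed against the current API documentation; this is a minimal sketch, not a verified recipe.

```python
# Sketch: cross-check a ChatGPT-suggested case citation against an
# independent source. The CourtListener search endpoint, its parameters
# and the "count" field in the response are assumed here for illustration.

import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v3/search/"


def case_appears_to_exist(citation: str) -> bool:
    """Return True if the search finds at least one matching result."""
    response = requests.get(SEARCH_URL, params={"q": citation}, timeout=30)
    response.raise_for_status()
    return response.json().get("count", 0) > 0


if __name__ == "__main__":
    # A hypothetical citation produced by ChatGPT.
    suggested = "Some Plaintiff v. Some Defendant"
    if case_appears_to_exist(suggested):
        print("Possible match found - still verify the full text yourself.")
    else:
        print("No match found - treat the citation as potentially hallucinated.")
```

Even when such a lookup returns a match, it only shows that a similarly named case exists; the substance of the citation still has to be read and verified by a human.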

GPT-4 (the more advanced subscription version) appears to hallucinate less. Over time, this limitation may be further reduced or even eliminated.

In addition to this inherent limitation, users should keep in mind that ChatGPT and GPT-4 are unaware of events after September 2021, since the models were “trained” on a corpus of material from before that date.

“Hallucination in AI refers to the generation of outputs that may sound plausible but are either factually incorrect or unrelated to the given context. These outputs often emerge from the AI model’s inherent biases, lack of real-world understanding, or training data limitations.” (https://bernardmarr.com/chatgpt-what-are-hallucinations-and-why-are-they-a-problem-for-ai-systems/)

For a useful discussion of the trustworthiness of ChatGPT, see the updated https://www.scribbr.com/ai-tools/is-chatgpt-trustworthy/.

Users of generative AI in the higher education environment have to inform themselves about the trustworthiness of the applications they use, both for the topics on which they themselves need information and in their role of advising and guiding colleagues and students in the use of these exciting, cutting-edge AI technologies and applications.

Better know the wizard you entrust with your search and text generation requirements!

Published on: 17 June 2023 · Categories: Digital transformation
