On 30 November, two years had passed since ChatGPT was released for public use, the first application to bring ‘generative AI’ (GenAI) beyond a select group of mostly computer scientists experimenting with early prototypes and versions.
Never before has the release of a technology application to the public had such widespread and far-reaching implications for education in so short a time. Of course, ChatGPT had an impact in all areas of life and work, but in education and learning the impact was felt intensely within weeks. Students and learners at universities and schools started using it to complete their assignments or to assist in writing them; lecturers and teachers responded by banning GenAI, developing policies to curb its use, and devising ways to detect GenAI-generated text. Some educators, however, ventured into new terrain by embracing the technology and using it to improve instructional processes and materials.
In time, a number of versions of ChatGPT appeared, as well as applications built around it. Many competing large language models (LLMs) entered the scene, and many providers are trying to secure their share of the new world of AI. Two years later, we have a very wide range of GenAI models and applications, billions of users of this technology (especially now that Google includes GenAI by default in search results in hundreds of countries), and thousands of publications and guides on using the technology and writing prompts.
In short, GenAI has become omnipresent and embedded in activities in large sectors of society. Together with the promise of ‘GenAI for good’ has also come the reality of misuse, cheating, misrepresentation, deepfakes and disinformation. GenAI is not only the underlying technology of the tools to be mastered by students, but also shapes our knowledge society with its manifestations of the good, the bad and the fearsome of the future.
GenAI – including the ChatGPT family – is constantly in flux and in some ways getting better: reducing (but not eliminating) inaccuracies, making prompting easier for the user, providing new search experiences, and improving in a multitude of other human endeavours that can be imagined. Admittedly, there are also signs of serious limitations in progress with LLM approaches, but for many people GenAI is ‘good enough’.
In their positioning for the future of GenAI, universities and academics could note the following:
- ‘AI fluency is the new digital literacy’ (as Bernard Marr recently formulated it) – including GenAI, but also other forms of AI, as relevant.
- Between 40% and 80% of students are using GenAI (varying between countries and across demographics and areas of study), and usage is increasing.
- Many students (according to surveys) express a need for proper training in acceptable and responsible use of GenAI.
- Many students complain that university policies regarding GenAI are insufficient, unclear, and inconsistent across faculties and departments.
- Some students expect GenAI competencies to be part of their future world of work, even though lecturers might take a negative or neutral position.
- The changes are happening so rapidly, and new applications appear so fast, that universities need an office that can act as a clearing house, providing guidance to both students and staff. GenAI should be embraced – cautiously and critically, of course – at the highest levels of university management, a precondition for many forms of digital transformation.
- Opportunities should be provided for university staff to acquire at least basic GenAI capabilities.
- Universities should strive for student equity in access, affordability and skills/capabilities regarding GenAI, at an agreed-upon level.
GenAI is here to stay, warts and all. Universities are well advised to ensure that GenAI is used responsibly and innovatively within their ambit, and that in future they will be seen as having played a constructive role in shaping this component of the future AI-enabled knowledge society.