SARUA statement on ChatGPT and other AI tools
Most academics will have heard of ChatGPT, the artificial intelligence (AI) application released by OpenAI at the end of November 2022 that has since taken the world by storm (GPT = Generative Pre-trained Transformer). Analysts at UBS suggest that ChatGPT may be the fastest-growing consumer app of all time, having reached 100 million users within two months.
The ChatGPT bot provides text-based conversational AI. This means that the user can interact with the bot in ordinary language, and the responses of the bot, or the products it generates (e.g., a few paragraphs, or an essay), are produced in ordinary language – well-structured in paragraphs, or in bulleted lists if that is the most appropriate format for the response. ChatGPT can also be used in a number of languages other than English.
It was immediately clear to users in many areas of work and life that a new era of AI had dawned: this is AI that is not hidden behind recommender lists on commercial websites, or behind travel navigation systems, but AI with which users can interact in what seem to be "magical" conversational ways. The bot draws on patterns learned from the huge corpus of text on which it has been trained and uses an autoregressive language model to generate the most fitting responses. (Pre-trained – hence the P in the name; Generative – hence the G.)
It also became clear that this new AI functionality has implications for all areas of learning, teaching and assessment, both in schools and in post-school education. Students soon started to use ChatGPT to write essays or to generate responses for “homework” assignments, mostly in flawless English and with well-constructed sentences and paragraphs.
In the short space of time since its release in late November 2022, ChatGPT has been tested to the limit and has proved itself a decent writer, test taker, study tool, contract scrutiniser, and more. It has achieved remarkable results by passing some of the most demanding examinations, including university law and multistate bar exams, business management and final MBA exams, English language tests, coding assessments, and a major medical licensing exam.
As soon as the implications of ChatGPT became better understood, universities reacted in very different ways. Some universities banned the use of ChatGPT by their students for university assessments (e.g., Sciences Po in Paris). Others placed a temporary ban on its use (University of Hong Kong). Still others believe that the negative impact of ChatGPT can be mitigated by reintroducing more pen-and-paper assessments or examinations (as a group of leading Australian universities has done).
ChatGPT is not a panacea, and it raises many legitimate concerns, such as biases in the datasets on which it is trained. As experimentation with ChatGPT continues, many of the bot's limitations are increasingly coming to the fore, such as the overuse of certain phrases and the provision of broadly similar responses to multiple users, reflecting a lack of creativity.
In higher education, it raises serious concerns about academic integrity. The perceived risk involved with ChatGPT was so high amongst the public and policymakers that OpenAI in late January 2023 introduced a free text classifier tool to detect whether text was generated by AI or by a human being. OpenAI believes that this tool will help mitigate the risk of students relying on artificial intelligence to do their work. Elsewhere, various AI checkers such as GPTZero have been developed.
But a deeper challenge for higher education academics lies in the way they assess learning. Will they choose to stick to conventional approaches to assessment, or change their strategies to accept that, while their students might produce some machine-generated text, they can design assessment strategies that require new forms of engagement with that text, supporting the development of their students' critical thinking skills? After all, the mastery of learning outcomes should require that students demonstrate their ability to engage with information critically and creatively and to apply knowledge at increasing levels of complexity. There are already numerous cases of professors incorporating ChatGPT into student assignments, requiring students to critically assess the output generated by the bot or to supplement it with appropriate references from recognised publications.
Moratoriums on the use of ChatGPT can at most be temporary solutions. The key point is that ChatGPT is not a passing phenomenon: it forms part of a wider development around the use of AI in the generation and configuration of information. Universities and academics must recognise that ChatGPT is already widely used by students and that, for the sake of orderly and well-considered learning activities and the quality of learning, it is a matter of urgency that they respond and take the necessary steps.
So, it is not surprising that some higher education leaders and academics acknowledge that there is no way to sidestep ChatGPT and the many comparable generative AI applications that will follow. Universities will rather have to find ways to engage with this new reality and to adapt academic practices, accepting that academics and students will inevitably make use of this new technology.
Many academics argue that being able to engage critically with these new AI applications is an essential part of the preparation of students for a future world of work in which AI will play an increasingly important role.
Formal positions by universities reflecting such sentiments – that is, finding ways to accommodate applications such as ChatGPT in positive and critical ways – have not yet been forthcoming in the media, possibly because many issues and regulations have to be reconsidered and decisions have to move through formal channels of approval.
Currently, based on reports and publications by academics, there seems to be a preference for a position that strategies should be developed to engage with the new range of AI-based tools such as ChatGPT and to use them in positive and critical ways. The new realities can then be leveraged towards innovation in learning and teaching and towards developing the critical thinking of students, at least in some disciplines.
SARUA agrees with this view. It is also reflected in the recent guidance by the European Universities Association (EUA) that “the higher education sector must adapt its learning, teaching and assessment approaches in such a way that AI is used effectively and appropriately. Universities must explore the responsible use of AI tools, in line with their mission, goals and values, and paying due regard to their legal framework and the broader consequences for and impacts on society, culture and the economy.”
It is important that this new development be discussed within universities and that clarity be reached on core principles that can guide faculty, professional administrative and support staff, and students. Developments in the new AI space are rapid, and many new tools will appear soon. Consequently, learning from other universities and from their actions and experiences will have to take place in agile ways. It might therefore also be a good approach for universities to set up points of expertise in appropriate divisions of the university to monitor developments and to advise management and staff on an ongoing basis.
Apart from universities, it is also important that national educational regulatory bodies and educational authorities take note of these developments; they may have to take steps commensurate with the unstoppable and accelerating pace of AI development.
SARUA intends to watch this space carefully and to engage with the members of our network in navigating the rapid developments around the use of AI.
We welcome your comments and thoughts on the topic.