How can academics cope with this ever-changing reality?

Academics (and university professionals and support staff) who follow developments in AI, and who try to make informed choices on behalf of their institutions and for their own skills portfolios, are often overwhelmed by the large number of new AI models and versions appearing on a regular basis. It also becomes difficult to distinguish hype from reality, promise from performance, and broad applicability from suitability for mission-specific work.

During the second half of December and the first weeks of January, the hype engine was running at high revolutions. On 20 December, OpenAI announced an upcoming model, o3, a further so-called ‘reasoner’ model that improves substantially on the o1 model. (There is no ‘o2’ model, since that name is already used by a telecoms company.) Initial test results on the o3 model were also revealed, indicating outstanding performance on some standard benchmarks. A few days later, one newsletter contributor commented: “OpenAI o3 didn’t just snatch the SOTA [state-of-the-art] crown, it obliterated the aspirants’ hopes of getting it back anytime soon.” (The o3 model eventually launched on 31 January in two limited versions for paying subscribers.)

During this time of overblown expectations, many news media also reported that we are very near to supersmart AI systems that will bring a flood of intelligence, or even the Holy Grail of AI endeavours: AGI (Artificial General Intelligence).

On 10 January, Ethan Mollick, a widely recognised expert on GenAI and its application in higher education contexts, wrote an informative and sobering article on his Substack One Useful Thing: ‘Prophecies of the Flood: What to make of the statements of the AI labs?’ (https://www.oneusefulthing.org). Mollick provides a very useful lens for evaluating the claims and promises of the various AI labs and their supporters – one that applies not only to the developments of the preceding weeks, but also to the ongoing stream of similar claims we will surely continue to see.

This article by Mollick is essential reading for academics and university staff. For a deep dive and an excellent demonstration of how AI can be harnessed as co-intelligence in contexts of life and work, see his highly acclaimed book Co-Intelligence (2024).

A subsequent article by Mollick (26 Jan.), ‘Which AI to Use Now: An Updated Opinionated Guide’ [same location as above], addresses the question of which AI model is the best, or the most appropriate, to use now. The article aptly ends with the following statement: “In the time it took you to read this guide, a new AI capability probably launched and two others got major upgrades. But don’t let that paralyze you. The secret isn’t waiting for the perfect AI – it’s diving in and discovering what these tools can actually accomplish” (my italics).

Walter Claassen (SARUA Associate)

Published On: 21 February 2025
Categories: News