Two recently published reports encapsulate two different visions of how generative AI can be integrated into university activities.
Very soon after GenAI came to public prominence with the release of ChatGPT in November 2022, many academics started producing reports on aspects of integrating this new technology into university activities, and universities soon introduced regulations and policies. It took some 18 months for academics and organisations to develop more integrated positions on this important topic, reflecting not only on benefits, risks and specific use cases, but also on institution-wide dimensions, including the challenge of gaining broad acceptance from all role players amidst the realities of change in universities. Such change, we know, can be complex and can take time, given the differing views across subject areas and the expectations of the various industries and professions.
In the meantime, the number of publications relating to GenAI and other AI tools, and their role in universities, has become overwhelming. Reports that take institution-wide positions are therefore welcomed.
The first of the two reports, Navigating Artificial Intelligence in Postsecondary Education: Building Capacity for the Road Ahead, was released by the Office of Educational Technology in the U.S. Department of Education, Washington, DC, in 2024 (date on the report: January 2025). (Yes, this is the federal Department of Education that is intended to be dismantled.)
The report takes a cautious and measured approach to the integration of AI into universities, focusing on steps that are partly sequential, but that also incorporate iterative cycles of testing, evaluation and adaptation.
The key recommendations of the report are:
- “Establish transparent policies for how AI is used to support operational activities in postsecondary education settings. …
- Create or expand infrastructure to support the innovative application of AI in instruction, student advising and support, and assessment. …
- Rigorously test and evaluate AI-driven tools, supports, and services. (Iterative testing and evaluation is necessary.) …
- Seek collaborative partners for designing and iteratively testing AI models across educational applications. …
- Review, refine, and supplement program offerings in light of the growing impact of AI on future jobs and career opportunities.” (here quoted verbatim; my italics)
The authors ensured that the report's approach was in line with US government positions and policies at the time of writing.
The second report (a ‘whitepaper’) is Generative AI in higher education: Current practices and ways forward, by Danny Liu, Professor of Educational Technologies at the University of Sydney, Australia, and Simon Bates, Vice-Provost and Associate Vice-President, Teaching and Learning, at the University of British Columbia, Canada. (https://www.apru.org/resources_report/whitepaper-generative-ai-in-higher-education-current-practices-and-ways-forward/)
The report emanated from a project on GenAI adoption by member universities of the Association of Pacific Rim Universities (APRU).
The report takes the position that the adoption of AI, especially GenAI, confronts universities with questions that are fundamental to their purpose, the nature of knowledge and the way universities relate to society, including, among other things, the changing world of work. This is a pivotal moment for universities and higher education in general, involving much more than merely reacting to another technological innovation. How can universities respond to the new technology while acknowledging and respecting the value of higher education?
The following statement provides an indication of the vision expressed in the report: “The emergence of generative AI may be our best opportunity to reimagine higher education for the 21st century. Success requires us to move beyond incremental adaptation to fundamental transformation while preserving our core educational values” (p. 7, my italics). At the same time, the urgency of acting now is stressed.
The issue of the integrity of university qualifications is very prominent, and a case is made for a reimagination and redesign of assessment.
The report introduces a framework of five dimensions, or areas of focus, for universities moving towards integrating GenAI: Culture, Rules, Access, Familiarity and Trust (CRAFT). For each of these dimensions a useful ‘self-positioning rubric’ is provided, allowing individuals, or each role-player constituency, to position themselves at one or more of the following levels: emerging, established, evolved and ‘extending’ (i.e. collaborating and forming partnerships, both within and beyond the university).
Universities might take different positions regarding these two reports, depending on their needs, their views on the urgency of the matter and their understanding of the appetite for change of their various constituencies. However, these two reports can be regarded as essential reading on the journey towards the integration of GenAI into universities.
Walter Claassen (SARUA Associate)