Understandably, the main immediate focus on the use of ChatGPT in academic circles after its release in late 2022 was on the way in which the chatbot could enable academic misconduct by students in the preparation of assignments and in other forms of academic assessment.
Very soon, other aspects of academic activity moved into the spotlight, namely the way in which academics use ChatGPT in their publications and in their research. Some academics listed ChatGPT as a co-author of papers submitted for publication, or of published books. Others indicated that they wrote the text with the help of ChatGPT, that is, that the text or large parts of it were generated by ChatGPT. Still others indicated that they made extensive use of ChatGPT in conceptualising the content, in developing the structure of the paper, or in some other part of the research process – and they wished to acknowledge that explicitly.
Various well-known publishers immediately indicated that ChatGPT (or any other AI tool of that kind) cannot be listed as a co-author; in addition, many took the position that text generated by ChatGPT (and similar tools) should be treated as plagiarism, since the chatbot was trained on copyrighted material, mostly without the permission of the copyright owners.
Some academics continued to defend the position that Generative AI is part of the new and future digital world and that they will continue to use this powerful tool in their scholarly activity, both to shape and self-evaluate their written arguments and in the process of doing their research. Increasingly, we see scholarly publications in which recognition is given to the fact that Generative AI was used, often indicating specifically in which role and for which parts.
During the past seven months, the debate has been ongoing, resulting in a steady stream of publications on this topic. Of course, the debate was – and is – broader than Generative AI alone, and also concerns AI more generally and the many applications that use AI to assist the research process.
Two recent publications (one still as preprint) provide valuable perspectives on the state of thinking on Generative AI in relation to scholarly work (publishing, and the research informing it) – perspectives that could be of value to both academics and managers at higher education and research institutions:
- Bill Tomlinson et al. (ChatGPT and Works Scholarly: Best Practices and Legal Pitfalls in Writing with AI) provide a framework for establishing sound legal and scholarly foundations for writing with the assistance of AI tools (preprint 2023: https://arxiv.org/abs/2305.03722). The suggested best practices are intended to help scholars and researchers navigate the legal and ethical issues relating to the use of AI in scholarly writing. The authors take the position that AI is likely to grow more capable in the coming years, and that it is therefore appropriate to move along the pathway of integrating AI into scholarly writing activities. They make the case that this can be done by establishing clear guidelines for doing so without violating the law or scholarly norms.
- Ruopeng An recently published the book Supercharge Your Research Productivity with ChatGPT: A Practical Guide (2023). An, a renowned and well-published scholar in public health, addresses the criticisms against the use of ChatGPT in the research process, with due recognition of limitations such as the propensity for hallucination. He argues that large language models (LLMs) such as ChatGPT possess key characteristics that make them well suited to the role of research assistant. ChatGPT can empower researchers and graduate students, and can act as an equaliser for those who do not have large research teams behind them. An takes the writing of appropriate prompts seriously; in his words: “By mastering this skill, we position ourselves to optimally leverage the full potential of ChatGPT and future language models.” His view of prompting differs from that found in the hundreds of guidelines and pre-formulated templates available today: prompting should take the form of a conversation with the chatbot, one that can unleash “the transformative impact of ChatGPT on our access to information and the potential it holds for augmenting human intelligence and creativity.”
He then presents 10 essential rules of prompt engineering – again, different from the simplistic advice given in so many publications. To take one phrase from his Rule 10: “When crafting prompts, invite the AI to analyse different perspectives, evaluate the evidence, and synthesize information from various fields.”
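To make the conversational style of prompting concrete, here is a minimal sketch (my own illustration, not code from An's book) of how a multi-turn exchange can be represented as the list of role-tagged messages used by chat-style LLM APIs; the helper names and the example topic are hypothetical:

```python
# Sketch of a conversational prompt structure: an opening prompt in the spirit
# of An's Rule 10, then a follow-up turn that broadens the scope.

def build_conversation(topic: str) -> list[dict]:
    """Assemble an opening exchange that invites the model to analyse
    perspectives, evaluate evidence, and synthesise across fields."""
    return [
        {"role": "system",
         "content": "You are a research assistant for an academic project."},
        {"role": "user",
         "content": (f"Analyse different perspectives on {topic}, "
                     "evaluate the evidence for each, and synthesise "
                     "information from related fields.")},
    ]

def follow_up(conversation: list[dict], reply: str, question: str) -> list[dict]:
    """Extend the conversation: record the model's reply, then delve
    deeper or broaden the scope with a follow-up question."""
    return conversation + [
        {"role": "assistant", "content": reply},
        {"role": "user", "content": question},
    ]

chat = build_conversation("screen time and adolescent mental health")
chat = follow_up(chat, "(model reply would appear here)",
                 "Broaden the scope: what does economics research add?")
print([m["role"] for m in chat])  # system, user, assistant, user
```

The point of the structure is that each new prompt is sent together with the preceding turns, so the model can be steered step by step – delving deeper or broadening the scope – rather than answering one isolated template.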
In the various chapters, An takes his readers through the use of ChatGPT for a wide range of research tasks, from identifying research topics and formulating research questions to choosing a research design, analysing data, and writing research papers and reports. One is really astonished at the way in which An crafts prompts and elicits responses from the chatbot in conversational ways, either by delving deeper or by broadening the scope. This is truly a prompt-for-research tour de force by someone who is well versed in research!
(Note that An’s examples all result from the use of GPT-4, which comes with the subscription version ChatGPT Plus.)
Both of these publications provide useful guidelines academics for augmenting one’s intellect with the new tools of the digital era, and preparing one for more advanced tools still to come. For university managers, these publications also signal the importance of providing the space and the opportunities for scholars to explore and advance the digital tools of the future.