A recently proposed AI disclosure framework brings welcome clarity to how the use of AI in research and other knowledge-based activities can be identified and disclosed.

Academics are used to conventions for handling citations in their research, ensuring that relevant sources are properly identified and receive due recognition. Over time, conventions also developed for the recognition and documentation of internet resources. In 2012, the Contributor Roles Taxonomy (CRediT) was developed to indicate the contributions of the different authors to a specific research output. All these developments concern disclosure as a scholarly duty.

Since the appearance of generative AI at the end of 2022, academics have needed a way to disclose their use of it, both regarding how they reached their conclusions (the research stage) and regarding the possibility that the final wording was generated by GenAI (the writing and editing stage). Publishers and editors soon reached consensus that GenAI (or an LLM) cannot be identified as an author and that explicit, comprehensive disclosure of the GenAI tools used, and of how they were used, is necessary. Accordingly, various publishers of journals, books, etc. require a statement of AI use, ranging from a short sentence to a more comprehensive statement. (Many universities maintain on their websites lists of the requirements of the various publishers, to assist researchers and to remind them of the need to conform to such requirements.)

The reason for this requirement is that, unlike static published and internet resources, a GenAI tool can generate different results depending on which tool is used, which prompts are given, which methodology is followed, and so on.

Unfortunately, the disclosure formats required by the various publishers differ considerably. In view of this situation, Kari Weaver of the University of Waterloo library developed the Artificial Intelligence Disclosure Framework (often referred to as the ‘AID Framework’) late in 2024 (https://crln.acrl.org/index.php/crlnews/article/view/26548/34482).

In line with the thinking behind the CRediT taxonomy, she identified the headings that could be included in an AID statement and provided examples of how this would look in specific cases, including in the context of teaching. The AID Framework is now frequently referenced in the scholarly literature; only time will tell to what extent it will be accepted and implemented by stakeholders. To date, no other non-proprietary, comprehensive framework for GenAI disclosure has gained comparable prominence in academic or library and information circles.
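To give a concrete sense of what such a statement can look like, a disclosure in the spirit of the framework might read as follows (the wording below is an illustrative sketch, not a quotation from the framework itself): ‘AI Disclosure: ChatGPT (OpenAI) was used during the literature-search stage to identify candidate sources and during the editing stage to improve the clarity of the final text. All AI-generated suggestions were verified against the original sources, and the authors take full responsibility for the content.’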

Even outside the academic context, detailed disclosure of GenAI use is increasingly included in ‘serious documents’ of all kinds (reports, newsletters, Substack posts, etc.).

It is therefore reasonable to expect that members of all constituencies in the university context keep these expectations regarding AI disclosure in mind. This applies to internal documents as well as to outward-facing documents such as policy advice, grant applications, and consulting reports prepared by university staff for outside parties.

Over the past six months, the discussion around disclosures has become prominent in relation to the work of consultants, typically in the context of ‘responsible AI’. The 2024 article by Elizabeth Renieris et al., ‘Artificial Intelligence Disclosures Are Key to Customer Trust’, is widely referenced on this topic (https://sloanreview.mit.edu/article/artificial-intelligence-disclosures-are-key-to-customer-trust/).

Critics of disclosure sometimes argue that GenAI is simply a tool, comparable to a calculator or a spell checker, and that, as with those tools, no disclosure of its use should be required. Apart from the reality that GenAI can produce inaccuracies, there are also considerations of transparency and trust. (The client might want to know whether the views presented in a report are those of the consultant [or of a specific team] or were generated by GenAI. The consultant can, of course, indicate that an LLM was used to obtain or to confirm certain perspectives, allowing the client to draw their own conclusions.)

Due to the proliferation of AI-generated content in nearly all areas, including manuscripts submitted for publication and contributions to blog sites, editors and other gatekeepers find it increasingly necessary to run manuscripts through AI detectors. In their first year, many of these detectors were not very reliable. Today, however, there are much better AI detectors that may not yield perfect results but do provide a sufficient indication of whether human experts should also be engaged … and this combination is powerful. (See Jae Liu et al., ‘The great detectives: humans versus AI detectors…’, 2024; https://edintegrity.biomedcentral.com/articles/10.1007/s40979-024-00155-6.)

At the heart of the matter of AI disclosures are three highly valued considerations: integrity (of research, of websites, etc.), transparency, and trust. Trust, once lost, is hard to regain.

Walter Claassen (SARUA Associate)

Published On: 6 March 2025