In an earlier post, I surveyed differing approaches to issues of sourcing, transparency and ‘information integrity’ in generative AI platforms. The underlying questions there concerned how AI tools handle sourcing and citation within their own algorithmically generated output. A second, distinct but related challenge involves developing new norms around transparency and disclosure when humans use AI tools in authoring their own work—which is to say our work, and the work of our students.
Most academic institutions are performing a bit of a high-wire act when it comes to issuing guidance and standards for the use of AI in both faculty and student work. On one hand, it seems clear that some degree of literacy with and experience using generative AI tools will have both economic and non-economic (utilitarian, creative, academic, educational) value in the future that awaits our students.
On the other hand, as I argued in my previous post, academic structures and norms across disciplines rest on an ethic and practice of clearly tracing the lineage of ideas and contributions that one is engaging with, whether building upon them, challenging them, or (often) both at the same time. Each of these contributions records the work of one or more human minds. It seems to me that any use of a text-based generative AI tool that results in a final product which contains significant chunks of algorithmically generated text breaks or interrupts this chain of human scholarship. As we know, the algorithmically generated text is not a novel ‘contribution’ so much as a remix of uncredited and unsourced human ideas.
So the dilemma for the academic community is how to adapt to a technological paradigm shift, revisiting and possibly revising longstanding norms while preserving the integrity of the processes and contributions those norms gave rise to.
One of the early attempts to thread this needle is the notion that we ought to treat algorithmically generated texts as citable works. One can see evidence of this attitude in the guidance issued by organizations like the MLA and APA for citing output from ChatGPT and similar tools. This approach strikes me as a shortsighted misapplication of the convention of citation, one that is likely to leave students with a confused and conflated view of the relationship between a coherent source (one of known authorship, with the ‘information integrity’ of traceable evidence and claims) and the comparatively opaque, muddled, untraceable output of a probabilistic language engine like GPT-4 or another Large Language Model.
Given how quickly this technology is changing, how technically opaque it is, and the fact that its very design encourages the user to view it as a humanlike ‘author’ or interlocutor, our students need all the help and support they can get in breaking through these trappings and appearances to understand how this output differs in important ways from the human expression of ideas. Encouraging ‘citation’ as the mode for academic disclosure of the use of generative AI tools serves the opposite purpose, reinforcing the mistaken notion that algorithmically generated texts are similar enough to human-authored ones to be governed by the same rules of sourcing and credit-giving.
As an alternative mode of disclosing academic use of these tools (in contexts where instructors have permitted certain uses, or established guidelines for their use), I would propose something more like an AI methods section or disclosure statement. Such a statement would accomplish the goal of transparency while avoiding the epistemological confusion inherent in treating the text output of AI chatbots as citable material. It would also give the student a chance to narrate how and for what purpose they used the tools, and could include the prompt language they used. This approach would be sufficient disclosure in some cases, especially those where no ‘raw’ AI-generated text was incorporated into the final product (i.e. use of AI was restricted to activities such as brainstorming, research, surfacing possible counterarguments, pre-writing and outlining).
In cases where an instructor has required or condoned the incorporation of AI-generated text into the final written product, additional norms or methods would likely be needed. Cathy Warner has been exploring and evaluating ChatGPT with her English 112 students this semester, using a system in which each student maintains two Google Docs: one the essay draft, the other a combined outline/brainstorming document. In the latter, students copy quotes or key points from articles they plan to use as sources, along with their own outline and other notes. This semester they have also placed AI-generated text in this process doc. Transparency is preserved by way of attribution and a color-coding system: a quote from a news article, for example, would indicate the source and be highlighted in red, whereas AI-generated text might be highlighted in yellow. When Cathy conferences with students, they pull up both documents side by side, which gives them a structure and visual cues to help avoid plagiarism or academic integrity issues.
Appropriate methods and approaches will of course vary depending on discipline, assignment, and an instructor’s goals and intended learning outcomes. But this example demonstrates that there are workable and viable alternatives to reflexively adopting citation as the paradigm for disclosing AI use in an academic context. My own inclination is to treat AI as I encourage students to treat Wikipedia: not as a citable source, but as a useful tool for surfacing relevant search terms and research directions (identifying scholars or organizations doing work on a specific topic, exploring key ideas and establishing some context), and sometimes as a springboard to citable sources via a page’s references section.
To the extent that Google Gemini, Perplexity AI and other more search-oriented AI tools enable this kind of use, treating them as essentially more interactive or conversational versions of Wikipedia seems like it might be an effective approach. In fact, Wikipedia represents a sizable portion of the training data undergirding every major Large Language Model, so in a sense that’s exactly what they are. (Because of Wikipedia’s unmatched scale, breadth of coverage, and sheer volume of embodied human work hours, it is a near certainty that if the major LLMs hadn’t had access to Wikipedia, the quality of their responses would be noticeably worse across many topics and domains.)
Getting the epistemological framing right from the beginning is critical here, and will save us all lots of unlearning and relearning later as these tools continue to proliferate at pace and scale. To borrow an analogy from detective fiction, AI text generators are evidence-tampering machines. They enter a basement archive containing a huge, carefully assembled and tagged chain of human-gathered evidence and evidence-based claims, rip open the boxes, throw the contents on the floor, and sweep them into interesting piles, following an unknown classification system of their own devising. For some purposes, the results of this pile sort are interesting, compelling, and even useful. But for any use that values an intact evidentiary chain or a transparent web of authorship, the current market-leading tools are seriously lacking.
It is clear that there are all kinds of potentially valuable uses of AI text generators within academe, but most of these involve using the platforms more as language association machines or conversational search engines than as writing aids or sources of information in and of themselves.