Intelligence Artificial: ChatGPT in Higher Education  

Photo credit: Unsplash

ChatGPT looks and functions like a chatbot: an algorithm produces text based on a prompt entered by the user. Chatbots are driven by a technology called large language models, or LLMs, which learn to produce writing by analyzing immense amounts of text. But calling ChatGPT a chatbot is like calling a Ferrari a vehicle. While technically true, ChatGPT vastly outstrips its predecessors of the past several decades in its ability to produce lucid, compelling language on nearly any subject almost instantly. From tone to topic and even meter, ChatGPT can produce language indistinguishable from human writing, limited only by the imagination behind its prompts.
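For readers curious about the mechanics, interacting with an LLM programmatically looks much like typing into the chat window: send a prompt, receive generated text. Here is a minimal sketch using OpenAI's Python client (the model name is illustrative, and an API key is assumed):

```python
# Minimal sketch: sending a prompt to a large language model through
# OpenAI's Python client. Assumes an API key is set in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model would work here
    messages=[{
        "role": "user",
        "content": "Write the introductory paragraph to a college essay "
                   "analyzing the role of the gods in the Iliad.",
    }],
)

print(response.choices[0].message.content)  # the generated paragraph
```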

ChatGPT produced this introductory paragraph based on the following prompt: “Write the introductory paragraph to a college essay analyzing the role of the gods in the Iliad. How do they influence the actions of mortals, and what does their intervention reveal about human agency and fate?”  

The role of the gods in Homer’s epic poem, the Iliad, is fundamental to understanding the intricate web of human affairs and divine intervention that shapes the narrative. Throughout the epic, the gods are not mere spectators but active participants in the lives of mortals, wielding their influence to manipulate events and individuals according to their own agendas. Their interventions often serve to heighten the drama of human conflict, yet they also raise profound questions about the extent of human agency and the inevitability of fate . . .  

Written by AI

“Now write it in iambic pentameter” 

In Homer’s epic, the Iliad’s grand scheme,  
The gods, they play a role that’s far from slight.  
Not passive, but with wills and aims they gleam,  
And shape the fates of men in mortal fight . . . 

Written by AI

The thesis was generated by ChatGPT as well. 

From literary analysis to potluck recipe ideas to travel itineraries, ChatGPT is an awe-inspiring and overwhelming technological breakthrough in artificial intelligence. After its introduction by OpenAI more than a year ago, ChatGPT reached over 100 million users in its first two months and set off a new boom in AI development efforts and interest. Giants like Google began diverting resources to develop their own AI, and Microsoft is working to incorporate an LLM into its Office suite. A flood of articles followed, praising the technology's promise and warning of its impact and flaws. ChatGPT's most recent competitor, Claude 3, was released on March 4, 2024, by the Google-backed AI startup Anthropic. The new chatbot can analyze book-length texts and, to some users, sounds uncannily human, producing more articulate and easier-to-read text.

The obvious concern is that ChatGPT and other chatbots offer students an extremely easy way to cheat. Faculty responses, according to the Chronicle of Higher Education, have been scattered. While some instructors have turned to in-class writing assignments only, others have incorporated the AI tool into their teaching.

“Most colleges and universities started this fall without redefining existing plagiarism policies in light of AI, instead treating its use as a matter best left to instructors,” wrote Jordan Alexander Stein, professor of English at Fordham University.  

Besides cues of voice and tone or drastic changes in a student's writing, there is no reliable way to tell the difference between human-generated text and text written by artificial intelligence. Detection software is deeply flawed: some students are falsely accused while others go undetected. Such software often produces false positives, especially for non-native English speakers, according to a study from Stanford University.

It's difficult not to become paranoid or nihilistic. While some are hearing about this new AI for the first time, others are so far ahead in their understanding and use of it that they would surprise even the most tech-savvy.

“I’ve only had one student who I’m pretty sure did it. And you can kind of tell because the writing is just bad. And it’s generic,” said Assistant Professor of English Helen Davies. “One of the things about ChatGPT, all large language models, is that it uses a lot of averages and that includes average writing. It’s fine. It writes. It didn’t really address the question that I’m assuming the student asked it, or they should have asked a better question.” 

The technology is only as powerful as the prompts given to it. A student can input an essay prompt, and the AI will churn out something passable. In May 2023, Harvard undergraduate Maya Bodnick asked seven of her Harvard professors and teaching assistants to grade essays written by ChatGPT for major assignments that counted for about one-quarter to one-third of students' grades in classes ranging from micro- and macroeconomics to Latin American politics. ChatGPT impressed professors with its attention to detail, vivid writing and fake but persuasive personal anecdotes. In this loose experiment, ChatGPT passed a first semester at an Ivy League school with a 3.57 GPA.

“What makes cheating with ChatGPT different than many kinds of cheating that have long existed is, I think, the speed and ease by which it works to mitigate exactly the challenging contexts in which the current generation of students now works: by freeing up time, supplementing deficits in their preparation, and asking nothing emotionally,” wrote Stein.  

But the fear around students using ChatGPT to cheat is statistically and experientially overblown. “It’s understandable that there’s some concern,” said Laurel Bidwell, Associate Professor of Social Work. “But the truth is, cheating has always been a part of the worry for higher education. There will always be some who choose to cheat, just as there will always be many who choose to follow the rules. What we should be focused on is helping our students learn how to ethically use this new tool.”

According to research from Stanford University, the introduction of AI chatbots into schools has not increased rates of cheating, at least in high schools. In surveys of more than 40 U.S. high schools, around 60 to 70 percent of students said they had recently cheated. Though a high number, it is the same percentage as in the years before the popularization of LLM chatbots.

“Cheating is not new. Using technology to cheat is not new, whether that technology is a Post-it note or AI,” said Davies. 

Photo credit: Unsplash

What AI can do is produce persuasive, well-written text. What it cannot do is produce reliable facts. The new AI systems are "built to be persuasive, not truthful," an internal Microsoft document said. Like the fake anecdotes in the Harvard papers, ChatGPT is designed to mimic human writing in its fluency and structure, but not in its truth. Generated text reads as remarkably realistic but often includes false statements.

In the world of AI, these falsehoods are called "hallucinations." Because chatbots are trained on massive amounts of text, the AI can stitch together writing in Frankenstein-like fashion, but it cannot guarantee factual accuracy. When asked to produce sources for a research paper, ChatGPT will generate a bibliography filled with trustworthy-looking sources, but few if any of them are real. The author's name, the journal or the title may exist; the actual article does not.
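One practical safeguard is to look a generated citation up in a bibliographic database before trusting it. Here is a minimal sketch using Crossref's public REST API (the query title is illustrative, and the `requests` library is assumed):

```python
# Minimal sketch: checking whether a generated citation corresponds to a
# real published work by querying Crossref's public REST API.
import requests

def lookup_citation(title: str) -> None:
    """Print the closest bibliographic matches for a citation title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        print("No match found; the source may be hallucinated.")
        return
    for item in items:
        # Crossref returns titles as lists; the DOI identifies the record.
        print(item.get("title", ["(untitled)"])[0], "|", item.get("DOI"))

# Illustrative title, such as one pulled from a ChatGPT bibliography
lookup_citation("Divine intervention and human agency in Homer's Iliad")
```

A fuzzy title match is not proof the citation is accurate, but no match at all is a strong hint that the source was invented.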

“We’ve always asked students not to just go online and pull stuff from Google, because they don’t know the difference between good information and bad information, good sources and bad sources,” said Bidwell.  

Davies had a similar opinion. "I'm not opposed to students using it in the same way I'm not opposed to students using Wikipedia," she said. "It's not a final product, it's not a final stop; but as a kind of getting the ball rolling, it's okay. But you do have to acknowledge that you use it in the same way that you have to acknowledge whenever you cite any source or whenever you use anything to help you come up with ideas."

In response to ChatGPT, some faculty have shifted assignments from homework to in-class writing. In her classes, Davies began having students complete quizzes and midterms by hand. She also made time for students to distill the central ideas of their papers before beginning the writing process outside of class, where they might use ChatGPT. "If you can use ChatGPT to articulate your paper well enough and include research, and you include an acknowledgement that you use ChatGPT, I'm not mad about it."

"My first thought was, how can we use this in our teaching? How can we teach with it and show examples, and have students use this ethically in a way that helps," said Johanna Creswell Báez, Assistant Professor of Social Work. In the spring of 2023, Báez brought ChatGPT into the classroom, discussing its strengths and weaknesses with her students. Every week, students used the AI tool during in-class assignments. By introducing the technology herself, Báez controlled the dialogue, demonstrating that she knew what ChatGPT-generated text looks and sounds like, and the class was able to discuss appropriate boundaries: using the tool to generate ideas was appropriate; copying and pasting its output verbatim was plagiarism.

Adapting to changes in technology can be as difficult as it is necessary, and AI is certainly not the first: elementary math teachers used to exhort their classes to master mental math because students would not always have a calculator in their pocket. (Now, of course, everyone carries a calculator on their phone.) Email, too, was once dismissed as an unsustainable fad. "One of my colleagues retired a couple of years ago, and he has very strong memories about email being introduced to the university and about how much reluctance he had about it," said Davies. "And it's not a technology I even think about anymore. AI might be like that. We start to accept it as a part of life and there's got to be an awareness of what it can and can't do."

Photo credit: Unsplash

No matter how some faculty have tried to prevent the use of AI in their classrooms, from punitive policies to the handwritten essay, ChatGPT has changed the nature of writing forever. AI is here to stay.  

“Johanna and I have taken the perspective that we are not being student centered if we don’t embrace this nowadays because we’re going to be way, way behind our students,” said Bidwell.  

Learning how to write, and write well, is learning how to think well. In academe, writing is its own epistemology. Through the act of writing, students learn to grapple with difficult ideas and to communicate their own.

The essay is the offspring of the oral argument, the oldest form of higher education, tracing back to a storied ancestor of education: the Lyceum, where philosophical debate took place in classical Athens. The term "essay" comes from the French "essayer," meaning "to try" or "to attempt." Essays began as attempts to articulate abstract thought, to connect ideas into larger and more complex arguments.

A paper is not meant to be a product but a process. For students, the act of writing an essay is what makes it valuable, not the written product itself. Few will choose a vocation in academia, where research articles are imperative (and even if they do, the college essay is far from the peer-reviewed article). The essay is a verb, not a noun. Through the acts of researching, articulating and weaving the abstract into the concrete, the writer learns to think critically, to communicate with others, to question unexamined assertions and to argue an original idea. The writer of an essay arrives at the very essence of critical thinking: an investigation into reality. Whether or not the attempt fails is irrelevant. As National Book Award winner Joan Didion says in her essay "Why I Write," "I write entirely to find out what I'm thinking, what I'm looking at, what I see and what it means." The attempt is a foray into discovering meaning, that which makes intelligence not artificial.

“If what students think matters is the finished essay and its grade, ChatGPT will always produce results faster and more easily than students can,” wrote Stein. “If what they think matters is the multistep, collaborative process of writing and revising, well, that’s exactly what ChatGPT robs them of.” 

While one student can simply enter an essay prompt into ChatGPT and get something passable, others are using it in more sophisticated ways: asking for lists of thesis ideas for their paper, then asking for an outline to argue a chosen thesis, then generating paragraphs for each of the outlined points. Artificial intelligence not only does the writing for the student; it does the thinking for the student.

“I want the students to be doing the thinking work,” said Davies. “I don’t care if ChatGPT comes up with a first draft, or if ChatGPT is used in editing. I do care if that’s it. But I also care if a student stays up until 3 a.m. one night, writes a paper, and then turns it in.”  

For Davies, the technology is meant to be assistive: it helps in the editing process and in the initial stages of getting words onto the blank page. “Because words on a page are less scary than a blank page, and then you can edit and change from there. But the ideas I want to be theirs.” 

“ChatGPT is the window dressing on the ideas that I’ve seen them build.”  

As an assistive tool, ChatGPT can help students get started in the writing process and can help in editing and drafting. But for Báez, the process of engaging with ChatGPT in generating individual ideas can be just as productive: “It is adding to our thinking and our knowledge and taking us five more steps rather than not thinking. It’s expanding our thinking.” By engaging with the chatbot in a dialogue, the AI might spark new ideas or offer new lines of thinking, in much the same way as speaking with a colleague or participating in group work.  

Photo credit: Unsplash

Despite the many advantages of using ChatGPT in the writing process, AI-generated text is not valued the same as the human-made. On February 16, 2023, the Peabody Office of Equity, Diversity and Inclusion at Vanderbilt University issued a statement in response to a recent shooting at Michigan State University. The message read, in part, "In the wake of the Michigan shooting, let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus. By doing so, we can honor the victims of this tragedy and work towards a safer, more compassionate future for all." This message of solidarity with a fellow institution of higher education during a time of tragedy was followed by the parenthetical "Paraphrase from OpenAI's ChatGPT AI language model, personal communication, February 15, 2023." Swift outrage followed the use of artificial intelligence to express a message of inclusivity during such a human moment of loss. The university quickly issued an apology for using AI to write the message.

If the school had not included the parenthetical "Paraphrase from ChatGPT," it's doubtful its campus community would have been able to tell the difference. And the message itself would most likely have been well received, or at least accepted as a common administrative response to an all-too-common occurrence. It was the fact that a human being did not write the message, not the language of the message itself, that was so upsetting. This points to a value in the human-made. In the words of editor and critic Ian Boudreau, "why should I bother reading something that nobody could be bothered to write?" Even if the text generated by a chatbot is as eloquent as, or more eloquent than, that written by a human being, the individual's thoughts, feelings and language will always be more valued.

“I think there is a human resistance to things, especially things that are supposed to be heartfelt, that are automatically generated,” said Davies.  

If there is an innate discomfort with artificially generated language around human emotion, there is a subtler discomfort with artificially generated language around human thought.

“One of the things I think about a lot is the importance of the ideas of the individual. Your writing can be more specific than ChatGPT,” said Davies. “I don’t know that I have some 1800’s notion of the artistic genius inherent in a specific person that can’t be replicated, but probably that’s true, who knows.” 

In his essay "On Poesy or Art," which explores the philosophy underlying his theory of organic form, Samuel Taylor Coleridge writes: "The artist must imitate that which is within the thing, that which is active through form and figure, and discourses to us by symbols — the Natur-geist, or spirit of nature, as we unconsciously imitate those whom we love; for so only can he hope to produce any work truly natural in the object and truly human in the effect."

To be able to imitate the realities of the physical and intellectual world, the author first must investigate its essence, its reality. Essaying then becomes an investigation into being itself.   

If to write enlivens the world into a deeper reality, then relying on AI to write for us does the opposite by removing the process of investigation. It makes the world less real. Though opinions differ.

“I don’t know if I believe this. I have heard many colleagues who are better writers than me say that the spark of individuality is lost. I’m not sure I buy it,” said Davies when asked if there is something that is lost in using AI. “And similarly, I want my students to know that just because a computer spits an essay back to them doesn’t actually mean it’s good.” 

A necessary skill for effectively using AI-generated text is the ability to discern good writing from bad, a frustratingly ineffable skill. The five-paragraph essay teaches structure and the use of sources to support an argument, but language competency doesn't end with literacy. Clarity, impact, voice and communication techniques are far more difficult to teach and are constantly evolving. These skills require time, feedback and exposure to a wide range of approaches and perspectives.

“Our [professors’] perspective is fundamentally different than an 18-year-old just starting their college journey,” said Davies. “Students, especially freshmen at this point, are entering the process of learning to write with their own voice now. And they are developing their critical thinking skills now.”  

“If you are not confident that this writing is bad,” she continued, “then you are going to trust the robot much more than somebody who’s using it as a tool knowing what they already want to see.” 

A key part of the life-changing experience of higher education is discovering the danger of letting someone else, or something else, do our thinking for us. That discovery is partly tied to the self-discovery inherent in the transition into adulthood, but it also arrives in the research and theory components of writing. Knowledge is derivative: it comes from a source and is influenced and shaped by that source. Left unexamined, the process of research can lead to the dissemination of misinformation, similar to old fears around Wikipedia.

Not using AI in the classroom does not mean students will not use it in their future careers. Job preparedness now means being prepared to use AI. As emerging AI technology has surged in the market, interest in integrating AI into preexisting roles has also increased.

It's impossible to forecast where AI will be in the coming decade, or even by the next commencement ceremony. "I think we're in that stage right now of technology where it's just going to keep evolving," said Davies. "And so saying that I want students to know the ins and outs of ChatGPT is silly. Because by the time our freshmen graduate, ChatGPT is going to look like the equivalent of a flip phone. It's just going to evolve so continuously." But what is certain is that students will need to know how to use AI, even if the specific software is different from its current form. "Almost no matter what field they're in, they're going to have to reckon with it in some way," said Davies. Millennials had to adapt from the corded landline to the flip phone to the smartphone. Gen Z's and Gen Alpha's technological rollercoaster may be AI.

“In the future, I cannot imagine that anyone is going to spend such long hours generating internal reports for their agency without using something like ChatGPT. It takes a lot of time,” said Bidwell. “We used to do statistics by hand, we’re not doing that anymore. However, we have to know how to check the results of our statistics to make sure errors haven’t happened — that’s what we have to know how to do.” 
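Bidwell's point about checking the tool's work applies to statistics as much as to prose. Here is a toy sketch of that habit in Python (the data are invented for illustration): let a library compute a statistic, then re-derive it from the textbook formula to confirm nothing went wrong.

```python
# Toy example of verifying a library's output: compute a two-sample
# t-statistic with scipy, then re-derive it by hand from the pooled-
# variance formula. The data below are invented for illustration.
from statistics import mean, stdev
from scipy import stats

group_a = [72, 88, 91, 65, 79, 84]
group_b = [68, 75, 70, 82, 66, 71]

# Library result (scipy's default is the pooled-variance t-test)
t_lib, p_value = stats.ttest_ind(group_a, group_b)

# Hand calculation of the same statistic
na, nb = len(group_a), len(group_b)
pooled_var = ((na - 1) * stdev(group_a) ** 2 +
              (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
t_hand = (mean(group_a) - mean(group_b)) / (pooled_var * (1/na + 1/nb)) ** 0.5

print(f"scipy: t = {t_lib:.4f}  by hand: t = {t_hand:.4f}")  # should agree
```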

Throughout human history, advancements in technology radically changed the nature of labor, from the plow to irrigation to the assembly line and automated manufacturing. These changes removed burdens from labor and opened a degree of non-labor time that allowed for further innovations, including innovations in thinking and art. The promise of AI is that it may do the same. By removing more tedious tasks, which for some includes writing, a person will have time to focus on more crucial work.  

These same innovations have also stripped skilled workers of their markets. Text generators may one day do the same to writers. "Humans doing the hard jobs on minimum wage while the robots write poetry and paint is not the future I wanted," posted architect, satirist and cartoonist Karl Sharro on X. Still, the innate value of the human-made remains.

Photo credit: Unsplash

The distinction between the human-made and the machine-made is difficult to draw. AI may still be a human act because it is a tool created by people for people, trained on human words. And using AI may be just as human as using any other technology: a computer instead of pen and paper; pen and paper instead of clay tablets. Language itself is an invented technology. As is art.

Who actually owns the text that AI produces is still under debate. One response to AI-generated art and writing may be a movement of radical absurdism and abstraction: artwork so fractured and sporadic, so singular in its invention, that no computer could generate it from the ocean of preexisting examples it pulls from.

To some UCCS faculty who have embraced the use of AI in the classroom, the greater concern is not the inherent value of writing itself, but the equity of the practice of writing. 

"There are different types of writers and different types of learners, and one of the things that we have to acknowledge is that the processes for some people [are different]," said Bidwell. "So for instance, let's say someone is struggling with ADHD, the process of writing could be blocked from even getting into it because of the lack of ability to formulate an organization for the writing." Utilizing AI may make education more equitable for neurodivergent students.

Who has access to the technology in the first place is also an issue, as the newest version of ChatGPT is a paid service.

“We need to acknowledge that it’s not going to be right for everybody. And so how do we really even that playing field, whether it’s financial, or whether it’s the way the brain works,” said Bidwell. “How do we embrace multiple ways of using it or not using it where we’re still getting at those skills and content that students need to graduate and to be proficient in their field.” 

As a practitioner of the digital humanities, Davies is aware that a drawback of using AI is the introduction of new levels of bias. Because ChatGPT is trained using preexisting text, much of that writing is dominated by the hegemonic voices of earlier periods.   

"Depending on what is used as training data, we're going to have an increased amount of certain viewpoints and certain styles of writing and a decrease of others," said Davies. "It's going to act as a funnel that can both increase certain problems and exacerbate others that already exist."

“It’s channeling certain voices as what a paper should sound like,” she continued. “If it’s channeling certain sources for what should go into a paper, if it’s accessing certain material, all of that is going to create further bias, it’s going to lose a diversity of perspectives and information.” 

The information used to train AI can amplify the discrimination of history and limit innovation just as much as it may spark new ideas for the individual user. "We have to be open to acknowledging that every advantage has some sort of disadvantage to it," said Bidwell. "It's just a matter of identifying what those are and for whom."

AI cannot provide the value of writing, or the reason we communicate with one another. It cannot replace the mind's purpose to think. It is a tool, and soon a necessary one that will permeate our daily lives. And like all tools, it has limitations and exceptional applications. But it can never replace the why behind the what. Learning how to use it effectively is unquestionably beneficial to the future of work, but not as beneficial as cultivating individual thinking and individual voice. The introduction of sophisticated LLM chatbots doesn't call into question the importance of learning to write; it calls into question who we are and what we value.

For faculty concerned about the misuse of AI or interested in techniques to introduce AI into the classroom, there are resources available at the Faculty Resource Center.