According to ChatGPT, a generative artificial intelligence overview is untrustworthy because it has a “lack of source transparency,” “hallucinations and misinformation,” a “lack of context and nuance,” “bias and algorithmic influence,” an “inability to interpret real-time or niche information” and “no direct accountability.” Despite its self-proclaimed faults, AI-generated overviews pop up for almost every Google search, with no way for users to opt out of receiving them.
Students, along with everyone else, have had free access to commercial generative AI services, such as ChatGPT, since 2022. Within Tufts and other academic institutions, this has caused a shift in the classroom for both professors and their pupils.
Monica Kim, a senior lecturer in the Department of Philosophy, explained that the progression of generative AI within the past few years has caused her to change how she assesses her students so that they are more inclined to submit original work.
“Students turn in a draft of something that we look over and then they turn in a final paper. And if there’s going to be a huge discrepancy between the draft and the final, then we have a conversation about that,” Kim said. “Sometimes paper topics end up being more specific … rather than … something that you can easily type into AI.”
Nick Seaver, associate professor in the Department of Anthropology and the director of the Science, Technology and Society program at Tufts, explained that in previous years, he has designed exams centered on generative AI.
“I gave questions I used to do as essay questions on my final exam to ChatGPT … and got short answers from ChatGPT. Then the student assignment was to critique those answers,” Seaver said. “If you’re going to keep using [generative AI] in your life, that’s the kind of skill that you need [which is] how to assess what’s coming out of this.”
Even though professors have made changes to their classrooms to try to prevent students from plagiarizing with generative AI, there has not been a catch-all solution. So, along with lecturing, holding office hours, lesson planning, grading and looking out for non-AI plagiarism in students' work, professors now have to factor checking for generative AI use into their job descriptions.
Instructors at Tufts have access to Turnitin, a plagiarism detection service, for any assignments students submit on Canvas, but they do not have access to the version that includes AI detection.
“Apparently, it is the policy right now of Tufts not to have something with Canvas where you can use AI detection,” Kim said. “If I did suspect that a paper had been written by an AI, I would have to copy and paste the paper, and put it into my own AI detection tools.”
Even if a professor decides to put an assignment into an external AI detection service, Seaver pointed out that these tools can often be unreliable.
“AI detection [is] truly bogus, does not work, cannot work. No one should use it. If anyone is using it, students should complain. It’s completely illegitimate,” Seaver said. “It’s based on how these systems work, you cannot know for sure and [it creates] very serious consequences for students.”
Seaver and Kim both shared that while professors might have a sense of when a student has used AI for an assignment, there’s not a huge push to heavily police it.
In areas of study that involve more creative thinking, such as the humanities and social sciences, students' generative AI usage is often easier for instructors to detect.
Kim emphasized that generative AI services are generally good at compiling facts and at summarizing and clarifying philosophical ideas. However, when it comes to analyzing those ideas, a pivotal skill in a philosophy class, AI falters.
Nika Lea Tomicic, a senior studying sociology as well as science, technology and society, has never used ChatGPT or any other generative AI service and shares Kim's doubts about the capabilities of generative AI.
“These kinds of tools, while they may be helpful in parsing out code or getting some study questions in order, at the same time, they erase a lot of the nuance and context that I think is necessary,” Tomicic said.
Kim noted that using generative AI could also stunt students' critical thinking and problem-solving skills.
“If students started to not be able to know how to get from a reading and a paper prompt to an outline by themselves … that would be a really bad thing, and that is part of what I’m trying to encourage them to be able to do on their own,” Kim said.
Similarly, Tomicic wonders how students using generative AI in school will affect them in the long run.
“It’s just not sustainable for the long term, and [you ask yourself], ‘Did I actually grasp anything from this class? Or do I just have these pre-generated notes?’” she said.
The long-term effects of generative AI apply not only to the individuals who use these services but also to the world around them. Generative AI has both direct and indirect impacts on the environment through its heavy electricity use and high levels of water consumption.
Tomicic is the editor-in-chief of The Lantern, a publication and think tank focused on science, technology and society, which is hosting an event with the Center for Engagement and Learning at Tufts and the Sustainable CORE Fellows for people to discuss AI usage as it relates to sustainability.
If generative AI is not producing the best-quality content, is harming the environment and carries the possibility of academic repercussions, why are students taking the risk of using it?
Tomicic said that some of the key factors behind generative AI usage among students are academic pressure and heavy workloads.
“I think about my peers, and when ChatGPT comes up, it’s usually related to stress,” Tomicic said. “It’s usually related to just not wanting to think about what you’re currently working on and not wanting to analyze any of the ethical, environmental, legal implications of what you’re doing if you have an assignment due at midnight.”