The Tufts Daily

ChatGPT unmasked at the intersection of computer science and philosophy


In November 2022, OpenAI launched ChatGPT, a new chatbot powered by artificial intelligence. Users can input questions, and ChatGPT will provide specific answers based on both internet data collected through September 2021 and the content of the user’s conversation with it.

As of February 2023, it’s safe to say ChatGPT is everywhere. Although it’s a hot topic, particularly in higher education because of its academic implications, do we really understand how the AI works? Can it think or ‘know’? Is it ethical? Should users follow its relationship advice?

Matthias Scheutz, a professor in Tufts’ computer science department who focuses his research on artificial intelligence and human-robot interaction, among other topics, explained that the fundamental purpose of ChatGPT is to string together dialogue from text across the web.

“With ChatGPT, the overarching goal is to mine the web for lots and lots of natural language texts, and then to encode that in a neural network, … so that you can perform dialogues with the system,” Scheutz said.

ChatGPT describes a neural network as “a type of machine learning algorithm that attempts to model complex relationships between inputs and outputs using interconnected nodes based on the structure of the human brain.”
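
For readers who have never seen one, a bare-bones sketch of that idea in Python (purely illustrative, with made-up weights, and nowhere near ChatGPT’s actual scale or code) might look like this:

    import math

    def sigmoid(x):
        # Squash any number into the range (0, 1), a common "activation."
        return 1 / (1 + math.exp(-x))

    def layer(inputs, weights, biases):
        # Each node takes a weighted sum of the inputs, adds a bias,
        # and passes the result through the activation function.
        return [sigmoid(sum(w * x for w, x in zip(node_w, inputs)) + b)
                for node_w, b in zip(weights, biases)]

    # Two inputs feeding a layer of three interconnected nodes.
    outputs = layer([0.5, -1.0],
                    [[0.2, 0.8], [-0.5, 0.1], [0.9, -0.3]],
                    [0.0, 0.1, -0.2])
    print(outputs)

Systems like ChatGPT chain many such layers together and tune billions of these weights automatically during training rather than setting them by hand.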

Jordan Kokot, a lecturer of philosophy at Boston University and Brandeis University and a teaching fellow at Harvard University, commented on many people’s confusion surrounding the rise of AI.

“There’s a very serious knowledge gap, even among young people who are otherwise very tech savvy, in terms of how AI works, what AI is, and so on,” Kokot said.

Scheutz went on to describe how users input a sentence into ChatGPT and the technology generates a few sentences or paragraphs in response. Kokot elaborated on how this system generates text.

“My understanding of ChatGPT is that it’s a predictive language model, which means … [it] is very, very good at predicting what should come next in a sentence or … in response to a question,” Kokot said.
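
A toy version of “predicting what should come next” can be built by counting which word tends to follow which in some text. The sketch below, with an invented nine-word “corpus,” is a deliberate oversimplification; real language models score every possible next token with a neural network rather than a lookup table:

    from collections import Counter, defaultdict

    # Count, for each word, which words follow it in the training text.
    text = "the cat sat on the mat the cat ran".split()
    followers = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        followers[current][nxt] += 1

    def predict_next(word):
        # Return the most frequent follower of the given word, if any.
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # 'cat' -- it followed "the" most often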

While ChatGPT’s natural language generation capabilities are advanced, there are growing concerns about the software’s ability to disseminate false information.

“The reason why it’s [called] ‘chat’ is because ‘chat’ doesn’t make any claims about truthfulness [or] accuracy,” Scheutz said. “It doesn’t understand facts and it cannot distinguish fact from fiction. … It’s doing — essentially — a very sophisticated version of pattern matching that allows it to produce text that sounds right.”

OpenAI provides disclaimers emphasizing the program’s potential to provide false responses. In a statement introducing the chatbot, the company writes, “The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

Brian Epstein, an associate professor in the department of philosophy, reflected on the psychological realities of disclaimers.

“I think that the idea that preceding something with a disclaimer, there’s a kind of legal value to it, but in terms of the actual psychological impact, it’s clear that it’s not going to negate the impact,” Epstein said.

Epstein went on to explain the potential psychological impact of ChatGPT.

“When people read the output of ChatGPT, it sounds authoritative,” Epstein said. “There are ways of asserting things with confidence that people ordinarily do and that leads people to essentially regard what’s being said as true.”

While ChatGPT has the ability to generate misleading statements, Epstein and Kokot hesitated to label those instances ‘lying.’

“It’s not lying or telling the truth,” Kokot said. “In order to lie, you have to — at least in most definitions of lying — you need to know what the truth is or at least you need to be deliberately trying to say an untruth.”

While Epstein acknowledged the potentially serious problems of text generation, he noted that currently, the ‘falsehood issue’ is perhaps a distraction from more prominent problems that text-generative AI programs present. Scheutz identified one major problem, ChatGPT’s ability to give ‘advice,’ as a serious red flag.

“What’s not fine and where it starts getting problematic is when people treat [ChatGPT] as an intelligent agent, as an entity, or when they ask it [for] advice,” Scheutz said. “In some cases, it will tell you, ‘I’m just an artificial agent and I cannot have opinions on things.’ … But, there are ways to get around these kinds of limitations.”

Scheutz worried that ChatGPT could give a user bad advice about an unfamiliar topic and that the user, not knowing any better, would follow the program’s flawed guidance.

“My worry is the same worry that people already had many years ago when the first sort of chat-like program called ELIZA, which was really very simple, got some people to reveal secrets,” Scheutz said.

Created in 1964 by MIT computer scientist Joseph Weizenbaum, ELIZA was a chatbot designed to demonstrate how superficial conversation between humans and machines could be. According to Scheutz, ELIZA acted as a sort of “psychiatrist,” engaging users in conversation by simply turning their last utterances back around into questions.

“People got the feeling it understands them,” Scheutz added.
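
A heavily stripped-down imitation of that trick can be written in a few lines of Python. The pattern below is invented for illustration and is not taken from Weizenbaum’s original program:

    import re

    # Swap pronouns so the user's words can be echoed back at them.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(text):
        return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

    def respond(statement):
        # If the user says "I feel ...", turn the statement into a question.
        match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
        if match:
            return f"Why do you feel {reflect(match.group(1))}?"
        return f"Can you tell me more about why you say '{statement}'?"

    print(respond("I feel nobody listens to my ideas"))
    # -> Why do you feel nobody listens to your ideas?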

Scheutz reasoned that ChatGPT’s more sophisticated natural language interactions make it significantly more attractive to users as a conversational agent.

“I have a big worry that people will get sucked into this, and then come home after work and go chat with ChatGPT,” Scheutz said.

Scheutz envisions users increasingly viewing ChatGPT as a virtual companion of sorts and overestimating its abilities.

“Why not view it as some sort of virtual friend where you can complain about things or let off steam?” Scheutz said. “I think it’s likely that people will do that and that they confuse it and its ability to express itself and express things in natural language with an entity that’s aware behind it, which there isn’t.”

Some users argue that, while lacking awareness, ChatGPT formulates responses similarly to how humans construct thoughts. After all, human thoughts are in part syntheses, revisions and inferences based on the input of experience. So can ChatGPT think? Epstein shared his thoughts.

“Anyone who thinks that we know how the human mind works is mistaken,” Epstein said. “We don’t know how the human mind works. … It’s true that part of our thinking involves engaging with the external world, but in a lot of ways, that’s very different from what [ChatGPT] is doing. … Fundamentally, ChatGPT is a dead simple algorithm applied to an enormous amount of information.”

Scheutz firmly rejected the idea that ChatGPT does any thinking whatsoever, pointing to its architecture as evidence.

“If [ChatGPT] writes ‘apple,’ it knows statistically all the words that are related to ‘apple’ because they occur in those contexts,” Scheutz said. “It doesn’t know what it tastes like. It may be able to talk about it because it read it somewhere, but all of this is not because [ChatGPT] actually had that experience or because it actually knows what an apple is. It doesn’t.”
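
What “knowing statistically” means can be illustrated crudely. In the sketch below, built on a made-up three-sentence corpus, words count as related simply because they appear in the same sentence; ChatGPT’s learned associations are vastly richer, but they are still derived from patterns of usage in text rather than from any experience of apples:

    from collections import Counter
    from itertools import combinations

    sentences = [
        "the apple tasted sweet and crisp",
        "she picked a ripe apple from the tree",
        "apple pie needs cinnamon and sugar",
    ]

    # Count pairs of words that appear together in the same sentence.
    cooccurs = Counter()
    for s in sentences:
        for a, b in combinations(sorted(set(s.split())), 2):
            cooccurs[(a, b)] += 1

    # Words that co-occur with "apple": crisp, sweet, ripe, tree, pie, ...
    print({pair: n for pair, n in cooccurs.items() if "apple" in pair})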

Further, Scheutz stressed that ChatGPT doesn’t think or “do anything on its own.”

The realities of AI algorithms raise further ethical issues, though. Biases in the data on which ChatGPT and similar programs are trained become embedded in the programs themselves.

“With ChatGPT, you don’t know what they use for training it, so when it then gives you an argument or a result or a description, … that may be biased, precisely because of how they selected what [data] to train it on,” Scheutz said.

Kokot and Epstein also discussed ethical concerns about natural language AI’s potential impact on the labor market. Because ChatGPT can hold conversations and write code, it could displace workers in fields like customer service and software development.

One of the biggest questions surrounding ChatGPT’s impact is how it affects education. In many classrooms, teachers have banned ChatGPT because it can promote cheating and laziness.

Norman Ramsey, an associate professor of computer science, takes an unconventional approach, specifically telling students in his Virtual Machines and Language Translation class that they can go to ChatGPT for help. He explained that this is in part due to the inherently collaborative nature of the course. In the class, students work on a programming project over the course of the entire semester.

“The way ChatGPT fits into this picture is that we’re trying to get the stuff built by any means necessary and to analyze it and understand it,” Ramsey said.

Ramsey reflected on the social stigma and fear surrounding ChatGPT. He acknowledged the need for ethical conscientiousness while also encouraging people to learn more about ChatGPT.

“There’s a phrase in technical fields: ‘fear, uncertainty and doubt.’ … I think a lot of that will be dissipated if people engage with [ChatGPT] and find out what it is,” Ramsey said.

Scheutz expressed that if there was general interest among Tufts students, faculty in the Computer Science Department would consider holding a panel to thoroughly explain AI programs like ChatGPT and answer students’ questions.

Even with such opportunities to stay informed on AI, it is hard to keep up. Kokot reflected on the fast-paced technological innovation of today’s society.

“Because of the way that technology is changing very, very quickly, we have a very hard time figuring out exactly what we should do, how we should live good lives, what a good life would even mean in a technological landscape,” Kokot said.

Kokot offered a possible solution to this contemporary challenge.

“In this case, I just think kind of a healthy caution and care and skepticism … [is] waiting to see how [these kinds of technologies] develop and trying to find ways to use them as tools rather than … as a cure-all or a fix-all,” Kokot said.