
ChatGPT: Love it or hate it, you have to understand it first

OpenAI's ChatGPT website is pictured on April 19.

If you have been active on social media over the last four months, you have very likely heard the hype around ChatGPT. You might have experimented with it or used it for an assignment. But do you know how it works? Is it going to replace your job? Is this the start of an artificial intelligence-powered apocalypse?

To understand ChatGPT, let’s take a step back and see how we reached this point. The field of AI began in the 1950s, when mathematicians, logicians and scientists first tried to mimic in machines the logical reasoning and problem-solving methods of the human brain. The newest AI models in the news today are generative models, meaning they can produce new content rather than simply retrieving it from an existing source.

ChatGPT is a generative predictive text model, meaning it produces text output in response to a given input. At its core, the model is trying to predict the next word in a sentence. Peter Nadel, digital humanities and natural language processing specialist at Tufts Technology Services, explained that ChatGPT is a big black box model based on GPT-3.

"There’s a GPT 3 model and that is a 175 billion parameter, a word to [vector] model. So, what it does is it takes this enormous, enormous amount of text,” Nadel said. “They pass it into this really complicated algorithm that can assign Bayesian weights to each of the words so that basically what you’re training for is a model that can predict the next word of a sentence. And so that’s what the GPT series are built around.”

GPT-2, the version of the model that preceded GPT-3, is a causal language model: a person passes the first few words of a sentence and the model completes it. To develop ChatGPT, the team at OpenAI refined GPT-3 using a strategy known as reinforcement learning from human feedback, building on GPT-3’s ability to learn a task from a few worked examples in the prompt, a capability called “few-shot learning.”
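In code, that completion behavior looks something like this sketch, again with the open GPT-2 model and an illustrative opening phrase:

```python
# A minimal sketch of a causal language model completing a sentence,
# using GPT-2 via the transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Pass the first few words; the model predicts the rest, one word at a time.
result = generator("The weather in Boston today is", max_new_tokens=12)
print(result[0]["generated_text"])
```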

“You basically give it a text document where you say, 10 plus 10 equals 20, 20 plus 20 equals 40, and then you say 40 plus 40 equals and then you just leave it blank, and the task is to predict the next thing, and it already knows that 40 plus 40 equals 80,” Nadel explained. “It knows that just from the massive amount of information that is held in the GPT-3 model. What you’re teaching it, really, is how to respond to a question, and that’s what’s fundamentally different about ChatGPT versus GPT-3.”
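The prompt Nadel describes can be written out directly, as in the sketch below. GPT-2 stands in for GPT-3 here only because it is openly available; a model that small will often get the arithmetic wrong, which is part of why the scale of GPT-3 matters.

```python
# A sketch of few-shot prompting: worked examples followed by a blank
# for the model to fill in. GPT-2 is a small, open stand-in and may
# well miss the arithmetic that a 175-billion-parameter model gets.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

few_shot_prompt = (
    "10 plus 10 equals 20\n"
    "20 plus 20 equals 40\n"
    "40 plus 40 equals"
)
completion = generator(few_shot_prompt, max_new_tokens=3)
print(completion[0]["generated_text"])
```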

The newest model, GPT-4, has expanded on the training data and capabilities of GPT-3, supporting multiple languages and accepting images as part of its input. The wording of your input determines what kind of output you get from the model, so if you are looking for the best response, you may have to try a few different prompts.
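That trial-and-error process might look like the sketch below, with illustrative prompts and the open GPT-2 model once more standing in for ChatGPT:

```python
# A sketch of trying the same request with different wording.
# Prompts are illustrative; sampling makes each output different.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the random seed so the sampled outputs are reproducible

prompts = [
    "Explain photosynthesis.",
    "Explain photosynthesis to a fifth grader in one sentence.",
]
for prompt in prompts:
    output = generator(prompt, max_new_tokens=25, do_sample=True)
    print(output[0]["generated_text"])
    print("---")
```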

ChatGPT does have its downsides. A machine learning model can only give out information that it has learned from its training data. If that data is biased, incomplete or too small, the model may give out a result that it deems correct but is far from the truth.

An example of this bias in action is facial recognition software, where the algorithm is often better at recognizing white faces than Black faces because of the data used to train the model. Since the performance of these artificial intelligence models depends heavily on the data fed into them, bias in such models can perpetuate oppression that already exists; it is also why AI tools can give out false information. If the user does not realize that the output is incorrect, that can lead to harmful misunderstandings.
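The mechanism is easy to demonstrate with entirely synthetic numbers, no real faces involved. In this toy sketch, a simple classifier trained mostly on one group performs noticeably worse on the group the training data underrepresents:

```python
# A toy demonstration of bias from skewed training data, on synthetic
# numbers: a classifier trained mostly on group A does worse on the
# underrepresented group B, whose data follows a different pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-feature data whose decision rule depends on the group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
X_a, y_a = make_group(1000, shift=0.0)
X_b, y_b = make_group(20, shift=3.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Fresh test samples from each group reveal the accuracy gap.
X_a_test, y_a_test = make_group(500, shift=0.0)
X_b_test, y_b_test = make_group(500, shift=3.0)
print("accuracy on well-represented group A:", model.score(X_a_test, y_a_test))
print("accuracy on underrepresented group B:", model.score(X_b_test, y_b_test))
```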

These predictive text models can be used for writing essays, poems and cover letters, but if you ask a model a factual question, make sure to cross-check the response with actual research.

AI hasn’t replaced my job as a journalist yet, but it’s nearly impossible to predict what the future of AI will look like.