The Tufts Daily | Wednesday, April 30, 2025

Deepfakes, the fight for truth in science

Deepfakes are proving difficult to distinguish from real clips.

Morphing madness

A man's face is altered by deepfake technology.

Imagine if anyone — or anything — could perfectly copy your voice or face and spread it online. A flawless imitation, indistinguishable from the real you, saying things you never said, showing up in places you’ve never been. How would that change the way you see yourself? How would it feel to lose control over your own image, your own sound?

Would you start questioning everything you hear or see online — wondering not just if something is true, but if it’s even real? How would this shift your sense of reality, your trust in the media, your relationships with others? If your identity could be copied and repurposed with ease, what does authenticity even mean anymore?

The term “deepfake” describes media, such as images, videos and audio, fabricated with digital technology, specifically artificial intelligence, to appear real. “Deepfake” is an umbrella term that covers anything from superimposing a person’s face onto another person’s body to creating a voice memo of your favorite celebrity saying “Happy Birthday” to you, even though they never actually said it.

As AI technology continues to improve, deepfakes pose a growing danger of spreading misleading and false content that can ultimately do harm.

One of the biggest dangers deepfakes pose is the threat to the integrity of scientific research. Scientific communication relies on the integrity and authenticity of data. AI-generated media has the potential to fabricate or alter visual and audio information that could mislead researchers, policymakers or the public if disseminated as genuine. Inaccurate information, if perceived as credible, could affect decision-making processes in fields such as public health, environmental science or policy making.

Public health communication, in particular, could be gravely affected. In times of crisis, such as in pandemics or environmental disasters, clear and accurate messaging from trusted experts is crucial. Imagine a deepfake video of a well-known epidemiologist spreading false information about a vaccine. If such a video went viral, the damage to public health efforts could be immense.

This concern is reinforced by a recent national survey, which found that between about 33% and 50% of respondents could not distinguish real videos from deepfakes. This suggests that a significant portion of the public struggles to detect synthetic media, underscoring the need for stronger information literacy and critical media analysis.

Audio deepfakes are another growing threat. Machine-cloned voices can make it sound like real people are saying things they never said. Audio fakes are also more accessible than video because they cost less to produce and demand fewer technical skills.

These audio files can also circulate rapidly online, which can be dangerous. In one example, an audio clip attributed to Vice President JD Vance, which included critical remarks about Elon Musk, gained widespread attention before being confirmed as artificially generated.

So, how should we go about combating deepfakes? There are a couple of possible solutions. First, we could create development guidelines that clearly lay out the ethical uses of such technology. These guidelines would provide boundaries for developers and help prevent misuse. Second, we need to invest in and create new technology tools that can detect deepfakes — tools capable of analyzing all the subtle cues and inconsistencies that give away fake media.

Researchers have already made it clear: We are not yet prepared to properly combat the AI audio deepfakes that are becoming increasingly sophisticated. In many cases, even close friends or family members of the person whose voice has been faked cannot tell the difference. Because of this, it’s critical that we develop better detection tools and systems.

Ongoing research is being conducted into technological responses to synthetic media. These include detection systems that analyze audio or visual files for inconsistencies or signs of AI generation. Such tools aim to support verification efforts by identifying markers not typically perceptible to the human eye or ear.
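To make that idea concrete, here is a minimal sketch in Python of the kind of statistical check a detection tool might run on an audio clip. The 4 kHz cutoff, the 1% threshold and the file name are assumptions chosen purely for illustration, not any real system’s method.

    from scipy.io import wavfile
    from scipy.signal import spectrogram

    def high_band_energy_ratio(path, cutoff_hz=4000.0):
        # Fraction of spectral energy above cutoff_hz; some synthetic voices
        # roll off sharply in the upper bands, while natural recordings keep
        # some energy there (a heuristic, not a reliable rule).
        sample_rate, samples = wavfile.read(path)
        if samples.ndim > 1:              # mix stereo down to mono
            samples = samples.mean(axis=1)
        freqs, _, power = spectrogram(samples.astype(float), fs=sample_rate)
        total = power.sum()
        high = power[freqs >= cutoff_hz].sum()
        return float(high / total) if total > 0 else 0.0

    ratio = high_band_energy_ratio("clip.wav")   # hypothetical input file
    print(f"High-band energy ratio: {ratio:.4f}")
    if ratio < 0.01:                             # illustrative threshold only
        print("Flag for human review: unusually little high-frequency energy.")

A check like this would only ever be one weak signal among many; published detectors combine dozens of such features with trained models.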

Scientific journals, media platforms and educational institutions are also examining methods to address the dissemination of AI-generated misinformation. For example, content verification protocols and user reporting mechanisms are being implemented or evaluated for efficacy in digital environments.

Additionally, AI-generated media detection tools have potential beneficial roles in secondary applications of scientific research quality control. These tools could be adapted to detect anomalies or manipulations in research data, supporting broader efforts to ensure research integrity.
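As a rough illustration of that secondary use, the sketch below applies a standard outlier detector, scikit-learn’s IsolationForest, to a small synthetic table of measurements. The dataset, the planted anomalies and the contamination rate are all made up for the example and do not reflect any particular study or tool.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    readings = rng.normal(loc=50.0, scale=5.0, size=(200, 2))        # plausible measurements
    readings[:3] = [[120.0, -10.0], [130.0, -12.0], [125.0, -9.0]]   # planted anomalies

    detector = IsolationForest(contamination=0.02, random_state=0)
    labels = detector.fit_predict(readings)    # -1 marks suspected outliers

    flagged = np.where(labels == -1)[0]
    print("Rows flagged for manual review:", flagged.tolist())

Flagged rows would still need human judgment; automated screening can only point reviewers toward values worth a second look.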

Educational programs focused on digital literacy and critical thinking are also being explored as a strategy to increase awareness of synthetic media. Curricula that emphasize media evaluation and analytical reasoning can provide students and citizens with the skills needed to assess the credibility of digital content. These approaches are aligned with broader science education goals aimed at fostering informed public engagement with scientific and technological issues.

The effects of deepfake technologies on science and education continue to be a subject of active investigation. Ongoing research and interdisciplinary collaboration are contributing to the development of tools, frameworks and educational models that aim to address both the challenges and potential applications of synthetic media.

The future impact of deepfakes will depend heavily on how the scientific and educational communities address these challenges and leverage the opportunities they present. By creating effective misinformation detection tools, upholding strong ethical standards and implementing research-based educational strategies, we can ensure that trust in science is not only protected from the threat of deepfakes but actually strengthened by our efforts to counter them.