The Flawed Deepfake of A.O.C. Highlights a Misguided Approach to Combating AI Misinformation

A recent deepfake video that falsely showed Representative Alexandria Ocasio-Cortez denouncing a jeans advertisement on the House floor has sparked debate over the limits of critical thinking and media literacy in confronting increasingly sophisticated AI-generated misinformation.

Eleanor Vance
Published • 5 MIN READ

Last week, television host Chris Cuomo shared on social media a video that appeared to show Representative Alexandria Ocasio-Cortez delivering a speech on the House floor condemning a Sydney Sweeney American Eagle jeans advertisement as racist.

In his post, Cuomo questioned the Democratic Party’s priorities, writing: “Nothing about Hamas or people burning Jewish cars, but a Sweeney jeans ad? Deserved time on the floor of Congress? What happened to this party? Fight for small business, not for small culture wars.”

However, Ocasio-Cortez never made such a speech. The video was an AI-generated deepfake. Cuomo removed the post after Ocasio-Cortez admonished him for “reposting Facebook memes and calling it journalism.”

She urged the public to apply critical thinking when consuming media content.

While Ocasio-Cortez’s call for vigilance is valid, her faith that critical thinking alone can counter deepfakes is overly optimistic. Many well-intentioned advocates argue that media literacy will help people detect such fabrications, but those skills are already insufficient and will likely become obsolete as AI video technology advances.

In Cuomo’s case, verifying whether Ocasio-Cortez had made the speech would have been straightforward, as Congressional records are publicly available. However, most situations are not so easily verified. Early AI-generated content often contained obvious glitches, such as extra fingers, but as technology improves, these flaws are becoming rare.

Moreover, those creating deepfakes can also employ critical thinking to eliminate low-quality fakes, or use AI tools themselves to generate convincing fabrications quickly and inexpensively. Given this ease, creators can produce numerous variants and select the most believable.

Photography and audio long ago lost their status as definitive proof because they are easy to manipulate. Video was among the last reliable forms of evidence precisely because it was difficult to fake, until now. With video forgery becoming commonplace, the only way to confirm an event one did not witness is to rely on trustworthy sources and verification processes. But defining what constitutes a credible source remains a fundamental challenge for society.

Trusting authorities is complicated, because they do not always provide accurate or truthful information. Attempts to regulate misinformation, such as proposals to empower government officials to define and police false health claims, face an inherent problem: scientific knowledge evolves and expert consensus shifts over time. And if such legislation were enacted, that authority could end up in the hands of officials with their own biases.

High-profile incidents tend to attract rapid correction. For instance, in May 2023, an AI-generated image depicting a massive explosion near the Pentagon circulated on Twitter as breaking news, amplified by prominent accounts. Officials quickly denied any incident, and the stock market recovered after a brief dip triggered by the false report, highlighting the enormous potential impact of such disinformation.

Interestingly, deepfakes have had less impact on elections than feared, not because the public is highly discerning, but because many individuals readily accept crude forgeries that confirm their preexisting biases. For those seeking to reinforce tribal beliefs, the authenticity of the content is often irrelevant.

Generally, the higher the stakes and prominence of the subject, the more effort is made to challenge and correct deceptive content like deepfakes.

The real concern arises when such videos involve private matters or individuals outside the public eye. Fabricated footage could be used to accuse or exonerate people of wrongdoing, causing chaos in courts, personal relationships, and social environments.

For example, someone caught vandalizing a neighbor’s car could claim the video evidence is a deepfake—or alternatively produce a fabricated video implicating someone else. In such disputes, it becomes a matter of conflicting claims with little means of verification.

Equally troubling is a future where only one authority’s version of truth is accepted, potentially enforced through extensive government surveillance and strict controls on video provenance and chain-of-custody. Such a system might claim to act for public good but risks consolidating power and eroding individual freedoms.

In 1971, Herbert Simon, who would go on to win the Nobel Prize in economics, observed that a wealth of information creates a poverty of attention. As information becomes ubiquitous and easy to produce, the challenge shifts from finding content to discerning what is credible amid the overwhelming volume.

Technological progress transforms rare and difficult tasks into common and simple ones, but rarity often serves as a protective barrier. Consider cash: it is not impossible to counterfeit, but sufficiently difficult to deter most fraud. Similarly, previous challenges in manipulating media served as safeguards to differentiate truth from falsehood.

For those of us who grew up before the internet, information scarcity was real: running out of books to read was a genuine possibility. Today, information overload is the norm, yet the day still holds only 24 hours, making the protection of one’s attention a critical priority.

The growing ease of generating deceptive media also erodes the credibility of photos, audio, and video, the tools we have long relied on to establish what is real. Losing trust in these forms of evidence threatens the very concept of objective truth.

While large organizations and public figures may have resources to combat falsehoods, the general public lacks similar protection. Emerging technological frameworks—such as zero-knowledge proofs, secure enclaves, hardware authentication tokens, and distributed ledgers—offer promising avenues for verifying authenticity, but these require broader adoption.
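To make the verification idea concrete, consider the hash-and-sign pattern that underlies most provenance proposals: a capture device signs a digest of the footage with a private key it never reveals, and anyone holding the matching public key can later confirm the file is byte-for-byte unaltered. The sketch below is a minimal illustration under those assumptions; the function names are hypothetical and Ed25519 is simply one reasonable signature scheme, not the API of any particular standard.

```python
# A minimal sketch of hash-and-sign media provenance. Assumes a camera
# (or its secure enclave) holds a private key and publishes the matching
# public key; `sign_capture` and `verify_capture` are illustrative names,
# not any real device or standard's API.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_capture(video_bytes: bytes, device_key: Ed25519PrivateKey) -> bytes:
    """At capture time, the device signs a digest of the raw footage."""
    digest = hashlib.sha256(video_bytes).digest()
    return device_key.sign(digest)


def verify_capture(video_bytes: bytes, signature: bytes,
                   device_pub: Ed25519PublicKey) -> bool:
    """Anyone with the device's public key can later confirm the footage
    is byte-for-byte what the device signed."""
    digest = hashlib.sha256(video_bytes).digest()
    try:
        device_pub.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# Demo: a signed clip verifies; a single altered byte does not.
key = Ed25519PrivateKey.generate()
clip = b"\x00\x01\x02"  # stand-in for raw video bytes
sig = sign_capture(clip, key)
print(verify_capture(clip, sig, key.public_key()))         # True
print(verify_capture(clip + b"!", sig, key.public_key()))  # False
```

A real deployment would have to go further: re-encoding or trimming a clip changes its bytes and breaks the signature, which is why standards efforts center on signed metadata and recorded edit histories rather than raw file hashes alone.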

Without proactive measures to safeguard proof and verification now, governments may fill the void, potentially with authoritarian methods, making the preservation of truth and transparency harder still.

Eleanor Vance

A seasoned journalist with 15 years of experience, Eleanor focuses on the intricate connections between national policy decisions and their economic consequences.
