Archie Bergosa

A lie, if dressed well enough, can pass as truth. And in the age of artificial intelligence, it takes only seconds to create that lie, and a nation that cannot read critically to believe it.


Recently, the internet once again served us a bitter taste of the future. A video of two “students” expressing support for Vice President Sara Duterte amid impeachment talks quickly gained traction online. It looked authentic: real people, real voices, real conviction. Except it wasn’t. It was a deepfake, a synthetic concoction of artificial intelligence designed to look and sound human.

Senator Bato dela Rosa, a known Duterte ally, not only shared the video but doubled down after it was confirmed to be AI-generated. “I agree with the message that it conveys,” he shrugged, missing the entire point. Truth matters. Authenticity matters. And when public officials dismiss the dangers of deepfakes with a wave of the hand, what message are we sending to a digitally naïve public?

The Philippines has long struggled with media literacy. In the 2018 PISA assessment, Filipino students ranked last globally in reading comprehension. The 2022 cycle brought little improvement: according to the Organisation for Economic Co-operation and Development (OECD) 2022 PISA country note for the Philippines, only 24% of 15-year-old Filipino students met the minimum proficiency level in reading, against an OECD average of 74%. More recently, a report from the Second Congressional Commission on Education (EDCOM 2), as covered by Rappler, found that Filipino learners are four to five years behind expected literacy standards. And the problem isn’t confined to the classroom.

We are a nation that consumes content fast—but absorbs almost nothing. According to the Digital 2025 report by DataReportal, Filipinos spend an average of 8 hours and 52 minutes daily on the internet, one of the highest in the world. Yet few possess the tools to critically evaluate what they see. It’s the perfect storm: high engagement, low comprehension, and virtually no guardrails.

Now add generative AI to that equation.

We are entering an era where disinformation doesn't just come from troll farms or partisan bloggers. It can be algorithmically generated, tailored to manipulate emotions, and packaged in the voice and face of someone who doesn’t even exist. And for many Filipinos, there is no filter, no buffer, no defense.

That’s what makes this recent deepfake incident so terrifying. Not the technology itself, but our collective unpreparedness.

The irony? The same individuals who cannot distinguish fact from fiction, who barely understand the headlines they read, are now inheriting AI—its applications, its consequences, its governance. We’re talking about a nation where “screenshot equals truth” and YouTube vloggers are more trusted than journalists. And now we’re handing them tools that can fabricate faces, twist facts, and rewrite reality.

These aren’t just your regular animated hoaxes or computer-generated curiosities. What makes them dangerous is precisely what makes them convincing. They mimic human behavior almost perfectly. When fabricated content appears as genuine human interaction, complete with eye contact, subtle head nods, and a conversational tone, it carries the illusion of authenticity. 

In this case, a fake interview featuring supposed students defending a political figure wasn't seen as a meme or parody. It was shared, reposted, and even defended because it looked real. Sadly, for many Filipinos, that’s all it takes for something to be believed.

Deepfakes like the one that made recent headlines are no longer futuristic anomalies. They are cheap, accessible, and increasingly weaponized in politics. Tools like Veo, Sora, and Midjourney, along with a growing number of even more accessible apps, let users create realistic avatars or mimic voices with no technical expertise. During the 2022 elections, disinformation campaigns flooded TikTok, YouTube, and Facebook, much of it targeting young voters. Even after the 2025 elections, where AI was already suspected of being used to subtly shape opinion, the threat hasn’t subsided—it has only grown more refined.

Our political landscape, in particular, has long benefited from well-organized disinformation networks. Now, with AI, the playbook evolves, but the audience remains the same: a population that has not been equipped to distinguish narrative from news, or performance from proof.

This is no longer just about education; it’s about democracy.

If you want to destabilize a nation, you don’t need tanks. You just need millions of citizens who can’t fact-check a Facebook video. If we don’t act—through stronger AI regulations, deeper investment in media literacy, and honest public discourse—we are setting ourselves up for a future where the loudest lie wins. Where democracy becomes performance art powered by generative models. Where truth is just another casualty in the algorithmic arms race.

In the Philippines, where young learners face a worsening learning crisis, deepfakes aren’t just digital tricks but weapons. When a senator shares an AI-generated video and defends it as ‘truthful anyway,’ the danger isn’t just misinformation. It’s that millions will believe it, because they were never taught how not to. We can’t out-regulate deepfakes without fixing what’s broken at the root: education. Until every Filipino can tell the difference between performance and proof, the next viral lie is only a click away.

Ultimately, this is a battle not just against “fake news” but against a slow and systemic erosion of critical thinking. Deepfakes don’t just distort reality. They reveal it. And if we do nothing, the algorithm won’t just rewrite our truth. It will rewrite our future.