DEMOCRACY in the Philippines is no longer decided solely at the ballot box. It’s being shaped daily on Facebook timelines, YouTube comments, TikTok videos, and X posts. With the May 2025 midterm elections approaching, some 68.4 million registered Filipino voters will choose senators, district representatives, and thousands of local officials. What plays out online in the weeks leading up to that moment could have as much impact as what happens on Election Day.
The falling-out between President Ferdinand Marcos Jr. and Vice President Sara Duterte has only deepened the divisions playing out online. Supporters from both camps are flooding social media with posts, memes, and commentary; some of it is factual, and some is not. That puts Facebook, X, TikTok, and YouTube in a tough spot. The question now is whether they’re doing enough to keep the conversation open and honest.
Meta’s approach to political content
Meta remains a central force in online political discourse. It continues its partnerships with the Commission on Elections (Comelec) and local fact-checking groups, promoting voter education through reminders and verified content. Political ads are still allowed, provided advertisers pass identity checks and use “Paid for by” disclaimers. Meta archives these ads in a public Ad Library.
Meta relies on professional third-party fact-checkers in the Philippines to flag misinformation. But recent global changes to Meta’s moderation policies, including its shift toward a crowd-sourced Community Notes model, raise concerns about how consistently that program will be enforced going forward.
X’s free-speech-first approach
X has reintroduced political advertising, supported by its Ads Transparency Center. While its Civic Integrity Policy prohibits misleading information about voting processes, the platform has softened its stance on claims about election results.
Much of X’s moderation now hinges on Community Notes, where users collaboratively add context to misleading posts. This feature invites local contributors into the moderation process. In practice, the model may not keep pace with the volume and sophistication of coordinated campaigns. X’s ties to Philippine election authorities remain limited, making enforcement inconsistent.
TikTok’s firmer stand
TikTok has adopted some of the strictest election-related policies. It bans all paid political ads and prohibits politicians from accessing monetization tools. Its Philippine Elections Center, developed with Comelec and civil society groups, provides verified voter information. The platform’s #ThinkTwice campaign promotes digital literacy, especially among young voters.
TikTok’s viral nature means misinformation can spread quickly, often through unpaid, influencer-led content. While its policies are strong on paper, moderation at the scale and speed of TikTok’s algorithm remains a significant challenge.
Google’s approach to content integrity
Google and YouTube focus on surfacing authoritative sources, particularly during election periods. YouTube now labels AI-generated content and has strengthened its rules against manipulated media. Google has also committed to pausing political ads during the official campaign period, in line with local regulations.
Taken together, each platform handles political advertising differently: Meta allows it under strict conditions, X recently reinstated it with some transparency measures, Google restricts it during the official campaign period, and TikTok bans it entirely.
These moves aim to support credible information flow. Still, critics worry that prioritizing “authoritative” sources could inadvertently sideline independent voices. Enforcement across Google’s massive content ecosystem continues to be a work in progress.
The profit engine behind misinformation
A less-discussed factor in the misinformation crisis is profit. Content creators benefit financially from sensational or divisive posts. Influencers earn through ad views, sponsorships, and livestreams. Platforms profit even more from the increased engagement and advertising revenue that such content generates.
Because these systems are profit-driven, holding individual creators accountable isn’t enough. Platforms that design and benefit from engagement-based models must share responsibility for curbing misinformation. Holding platforms accountable for the systems they’ve built is not just about regulating speech; it’s about protecting the integrity of public conversation and ensuring that technology serves democracy rather than weakens it.
A digital battlefield
This election is being waged not only at polling stations but across news feeds and timelines. Threats such as coordinated inauthentic behavior, deepfakes, and disinformation campaigns are real. The rivalry between the Marcos and Duterte camps could further intensify online manipulation efforts.
Comelec Resolution 11064 is a step forward. It requires registration of official campaign platforms and disclosure of AI-generated content. Comelec expects platforms to comply with takedown requests and enforce transparency, though ensuring compliance across vast digital ecosystems is a significant undertaking.
A shared responsibility
Ensuring election integrity in 2025 demands more than flagging false content or banning posts. Platforms must be held accountable for how their systems prioritize information. They must go beyond reactive moderation and actively promote credible, balanced content. Clear standards are needed to compel platforms to elevate trustworthy information and discourage divisive sensationalism. Creating a space for thoughtful discourse must become a core responsibility, not just a public relations goal.
In a noisy digital landscape, every voter’s judgment matters. Reviewing information, reflecting before reacting, and choosing facts over outrage strengthen both individual votes and democracy itself.