by: Santine Mauritius Susa
Graphics by: Cyrelle Rañeses

In the Philippines, where internet users spend an average of 4 hours and 6 minutes daily on social media, AI has become a quiet actor that curates not only content, but emotions. While it may seem that Facebook, TikTok, and YouTube simply show what’s “relevant,” the truth is more complicated—and even more concerning.

AI systems on these platforms are designed to maximize engagement by learning what triggers reactions—whether that be anger, sympathy, excitement, or fear. The more emotionally intense the content, the more likely it is to be shown. Over time, this creates highly personalized emotional echo chambers: a feed built around whatever makes the user feel the most, rather than what is accurate and factual.
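The dynamic described above can be sketched as a toy ranking function. To be clear, this is a hypothetical illustration, not any platform's actual model: the reaction categories, weights, and field names here are invented for demonstration, but they show how weighting "hot" emotions more heavily pushes charged content to the top of a feed.

```python
# Illustrative sketch only: a toy feed ranker that scores posts by
# predicted emotional engagement. The emotion weights below are
# hypothetical, not taken from any real platform.

def rank_feed(posts):
    """Sort posts so the highest predicted-engagement items come first."""
    weights = {
        "anger": 3.0,       # intense emotions weighted highest
        "fear": 2.5,
        "excitement": 2.0,
        "sympathy": 1.5,
        "neutral": 0.5,     # calm, factual content scores lowest
    }

    def score(post):
        # Weighted sum of predicted reaction probabilities.
        return sum(weights.get(emotion, 1.0) * prob
                   for emotion, prob in post["predicted_reactions"].items())

    return sorted(posts, key=score, reverse=True)

feed = [
    {"id": "calm-news",    "predicted_reactions": {"neutral": 0.9}},
    {"id": "outrage-bait", "predicted_reactions": {"anger": 0.8, "fear": 0.3}},
]
ranked = rank_feed(feed)
# The emotionally charged post outranks the accurate but neutral one.
```

Even in this crude sketch, the neutral post loses: accuracy never enters the score, only the intensity of the predicted reaction.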

What makes this even more dangerous is that AI is no longer just curating content—it is creating it. Tools like ChatGPT, Midjourney, and countless deepfake generators are now used to craft realistic-looking articles, images, videos, and even human voices, with little to no human input. These tools are becoming so advanced that many AI-generated posts are now indistinguishable from real content.

Fake news articles can be written with proper grammar, journalistic tone, and convincing statistics—none of which are true. Deepfakes can replicate a politician’s face and voice to say something they never said. AI-generated tweets, memes, and videos can appear organic, especially when shared by trusted influencers or fan pages.

These types of content are often paired with emotionally charged stories designed to go viral: stories that provoke anger, reinforce biases, or target vulnerable groups. And when these are amplified by social media algorithms, they spread faster than attempts to counter them with verified information—because emotional content is more likely to be clicked, shared, and engaged with. The staggering rise of AI-generated deepfakes on social media is outpacing manual fact-checking, tilting the balance in favor of falsehoods.

With a population that’s highly expressive and active online, Filipino users are particularly at risk. Emotional reactions—be it laugh-reacts, rage comments, or teary shares—feed the algorithm’s understanding of what to push next.

During the 2022 elections, many Filipinos unknowingly shared AI-generated content ranging from edited videos to false quotes. In many cases, the posts appeared professional, complete with watermarks and fake sources, making them harder to question.

As AI-generated misinformation increases, so does public confusion and emotional fatigue. Users are left unsure about what’s real, overwhelmed by conflicting stories, and increasingly distrustful of legitimate sources. This erosion of trust in both media and institutions is one of the most damaging long-term effects of emotional manipulation online.

The first step in resisting this manipulation is understanding that not everything in your feed is real—or neutral. Users should be more critical of content that seems overly dramatic, unverified, or too perfectly aligned with personal beliefs.

Efforts in digital literacy are being made nationwide, from NGOs to school programs, aiming to teach students how algorithms and AI content work. While regulation and platform accountability remain ongoing debates, the most immediate defense is personal awareness.

In an age where artificial intelligence can imitate reality with alarming accuracy, being emotionally informed—and not just emotionally triggered—might be the best safeguard we have.