Disinformation Strategies on TikTok & Spotify
The Context
This project examined how disinformation operates on social media platforms not only through false claims, but through form, tone, and emotional design. Conducted as part of my undergraduate research at the University of Arizona, the study focused on TikTok and Spotify as two distinct but influential media environments in which misinformation can circulate through short-form video and long-form audio.
The central question was broader than whether content was true or false. The project asked how creators make misleading content persuasive, memorable, and emotionally credible. That meant paying attention not only to verbal arguments and logical manipulation, but also to music, pacing, visuals, delivery style, and mood. These non-verbal elements often shape audience reaction before a factual claim is even evaluated.
The Method
The research was designed and carried out under faculty mentorship and drew from media studies, communication theory, and psychology. I analyzed platform-specific case examples to understand how disinformation is structured, how it appeals to viewers and listeners, and how different media forms shape the reception of misleading claims.
The case material covered several recurring areas of digital misinformation, including health misinformation, climate change denial, conspiracy narratives, and fear-based public claims. Across these examples, I examined both message content and presentation strategies. This included looking at how ominous sound design, emotionally charged language, selective framing, and platform-native aesthetics can intensify the apparent plausibility of unsupported or false claims.
The project also included a comparative analysis of possible responses to digital disinformation. Rather than treating intervention as a single solution, I studied multiple approaches, including fact-checking, verification practices, bot detection, content moderation, inoculation theory, and information literacy. The goal was to compare not only what these responses do, but where each one succeeds or falls short in platform environments shaped by speed, emotion, and repetition.
Key Outputs
Undergraduate research study conducted under faculty mentorship
Platform-specific analysis of disinformation strategies on TikTok and Spotify
Case-based examination of health misinformation, climate denial, conspiracy narratives, and fear-based content
Analysis of verbal, visual, musical, and emotional manipulation techniques
Comparative framework for understanding how misinformation gains credibility and traction
Review of intervention models, including fact-checking, verification, bot detection, moderation, inoculation theory, and information literacy
Cross-disciplinary approach drawing from media studies, communication theory, and psychology
The Result
The project showed that disinformation spreads not only through incorrect information, but through carefully shaped emotional and aesthetic experience. Sound, visuals, pacing, tone, and platform conventions were not secondary elements. In many cases, they were central to the persuasive force of the message.
By analyzing both content and form, the study made visible how misinformation can feel convincing even when its claims are weak or fabricated. It also produced a structured framework for thinking about resistance, showing that effective responses need to extend beyond correction alone. Education, platform design, moderation systems, and media literacy all play different roles in reducing the persuasive power of misleading content.
More broadly, the project demonstrated that platform-specific media analysis is necessary for understanding contemporary disinformation. False narratives do not spread in abstract space. They spread through formats designed to hold attention, trigger emotion, and create a sense of immediacy or trust.
Case Studies
The “Great Reset” & Climate Denial: showed how ominous music and fear-driven narratives shape public anxiety.
False Health Cures: examined TikTok claims promoting unproven natural cancer remedies and how such content undermines trust in medical expertise.
Vaccine & Donation Misinformation: studied videos misrepresenting Red Cross plasma donation policies.
Tap Water Polio Claim: revealed how local fears can be manipulated through fabricated outbreaks.
Podcast Conspiracy Narratives: explored how long-form audio amplifies alternative disease theories (e.g., 5G as a health threat).
Interventions & Analytical Framework
Developed a comparative framework of strategies to counter digital disinformation:
Fact-Checking vs. Verification: distinguished post-publication correction of circulating claims from pre-publication confirmation of sources and context.
Bot Detection: studied automation in spreading false narratives.
Content Moderation: analyzed strengths and risks of platform-level controls.
Inoculation Theory: explored how pre-exposure to weakened misinformation builds resistance.
Information Literacy: emphasized critical thinking and lateral reading as long-term solutions.