Digital Deception: How Trolls And Bots Shape Public Opinion

A comprehensive literature review published in the European Scientific Journal reveals how trolls and bots manipulate public discourse through social media, particularly during political campaigns.

The research, conducted by Salome Khazhomia of Grigol Robakidze University in Georgia, examines propaganda theories and the impact of misinformation on public opinion.

Distinguishing Between Trolls and Bots

The study differentiates between trolls and bots on social media platforms. 

Trolls are human users who intentionally post provocative content, often using abusive language and personal insults to manipulate opinions, particularly in political contexts. Their behavior can harm targets’ mental health and normalize aggressive online interactions.

Bots, conversely, are automated accounts that can spread information without human intervention. These accounts post repetitive comments with unnatural language patterns and cannot engage in meaningful conversations. 

Both entities serve as vehicles for propaganda, but they operate differently and require distinct countermeasures.

“Bots can spread false information very fast, and trolls can create content to confirm public opinion,” Khazhomia notes, highlighting how these entities work in tandem to shape public discourse.

Propaganda Theories

The research traces the evolution of propaganda theories from American political scientist Harold Lasswell’s early work to modern understandings of media influence. 

According to Lasswell, propaganda extends beyond simply deceiving people; governments can employ it in ways that don’t necessarily harm society. The power of propaganda, he argued, lies in its ability to shape people’s states of mind.

Modern propaganda, as described by behaviorists Richard Laitinen and Richard Rakos, functions as “the control of behavior by media manipulation.” 

The study also references the theory of informational autocracy, which suggests that leaders manipulate public opinion to maintain power, using media censorship to ensure citizens perceive them as competent.

Social media has become a focal point for propaganda efforts. The study finds that political parties often hire companies to spread disinformation, launching bot or trolling campaigns that operate either short-term (during elections) or long-term, depending on budget and resources.

“Campaigns that are managed by fake accounts can be considered a threat to societies,” the research states, pointing to political propaganda, targeted community manipulation, and disinformation as primary tactics.

Combating Digital Misinformation

The paper outlines current strategies for managing trolls and bots across major platforms. 

X employs bot detection algorithms that analyze patterns of repetitive activity, implements two-factor authentication to prevent bots from creating fake accounts, and uses verification systems to help users distinguish between authentic and fake accounts.

Facebook utilizes AI moderation tools to detect harmful content, including spam and hate speech, while providing group administrators with tools to remove offensive comments and ban disruptive users. 

Instagram offers comment filters that automatically hide abusive remarks and allows users to restrict accounts engaged in trolling behavior.
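The platform measures above boil down to fairly simple signals. The sketch below is a minimal Python illustration of two such heuristics: flagging accounts that post near-identical comments at a high rate, and hiding comments that contain blocked terms. All function names, thresholds, and blocked terms are hypothetical, and real platform systems rely on far more sophisticated, proprietary models; this is only meant to make the idea concrete.

```python
from collections import Counter

# Hypothetical, simplified heuristics for illustration only.

def looks_like_bot(timestamps, comments, max_rate_per_min=10, max_repeat_ratio=0.8):
    """Flag an account whose posting rate or comment repetition is suspiciously high."""
    if not comments:
        return False
    # Posting rate: number of posts divided by the span of time they cover, in minutes.
    span_min = max((max(timestamps) - min(timestamps)) / 60.0, 1.0)
    rate = len(comments) / span_min
    # Repetition: share of posts taken up by the single most common comment text.
    repeat_ratio = Counter(comments).most_common(1)[0][1] / len(comments)
    return rate > max_rate_per_min or repeat_ratio > max_repeat_ratio

def hide_abusive(comments, blocked_terms=frozenset({"idiot", "loser"})):
    """Keep only comments that contain none of the blocked terms (a crude keyword filter)."""
    return [c for c in comments if not any(term in c.lower() for term in blocked_terms)]

# An account posting the same slogan nine times out of ten within a few minutes is flagged.
timestamps = [0, 20, 40, 60, 80, 100, 120, 140, 160, 180]   # seconds
posts = ["Vote for X!"] * 9 + ["Interesting point"]
print(looks_like_bot(timestamps, posts))                      # True (repeat ratio 0.9)
print(hide_abusive(["nice take", "you idiot"]))               # ['nice take']
```

In practice, repetition and rate checks like these are easy for sophisticated operators to evade, which is why the paper stresses layered responses: automated detection, human moderation, and user-level controls such as comment filters and account restrictions.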

Governments have also begun implementing policies to address misinformation. The European Union’s Digital Services Act requires platforms to be transparent and accountable regarding content moderation and user safety. 

The UK has passed the Online Safety Act, which requires tech companies to take action against harmful online behavior.

In the United States, while comprehensive federal legislation is lacking, initiatives like the Integrity and Innovation Act and discussions around Section 230 reform aim to enhance platform accountability and combat bot-related issues.

Individual Responsibility and Critical Thinking

The research emphasizes that responsibility for combating misinformation extends beyond platforms and governments to individual users. 

By engaging with news more critically and verifying information before accepting or sharing it, people can significantly reduce the impact of false narratives.

“Media consumption is a choice, and individuals must take an active role in verifying information before accepting or disseminating it,” the paper concludes.

The study connects digital misinformation to social bullying, a concept first addressed by Norwegian psychologist Dan Olweus. Social bullying aims to damage a person’s reputation or humiliate them by spreading rumors and lies. Unlike direct aggression, it often manifests as covert intimidation, causing depression, social exhaustion, and low self-esteem among victims.

While misinformation research continues to develop, the study notes that detecting disinformation remains challenging due to cognitive biases and social pressures that prevent people from recognizing unreliable information. 

Fact-checkers, whom Meta recently dropped from its platforms, play a vital role in counteracting disinformation, but technological solutions alone cannot solve the problem.

The research concludes that successfully mitigating online misinformation requires ongoing collaboration between technology companies, regulators, and users, combined with advancements in detection techniques, policy enforcement, and public awareness.

The full study is available here.

David Adler is an entrepreneur and freelance blog post writer who enjoys writing about business, entrepreneurship, travel and the influencer marketing space.
