Fighting Fake News and Digital Manipulation

Apply critical thinking to digital media and filter out manipulative content and misinformation by questioning its Purpose, Perspective, and Provenance.


Wired for mental shortcuts

Over millions of years of evolution, even as our physical characteristics transformed remarkably from apes to humans, the most material divergence from animals has been the development of human intellect, especially our ability to think rationally. We owe this distinctly human ability to the prefrontal cortex, the section of the brain that controls our mental capacity to plan, solve problems, exercise self-control, make rational decisions and act with long-term goals in mind.

Despite the outsized role that the prefrontal cortex plays in giving us human intelligence, it actually occupies only about 10 percent of the brain’s total volume. This thin layer of the brain responsible for our highly evolved executive function sits on top of a mostly primitive brain that governs motor functions, intuition, perception, emotions and our ‘fight or flight’ responses.

Given the huge difference between the capacity of these two cognitive resources, it is no surprise that the majority of our daily decision making and thinking processes simply bypass the rational center of the mind and end up in the impulsive center instead.

Through their in-depth studies of human thinking, cognitive scientists like Gary Marcus (1) and the Nobel Prize-winning psychologist Daniel Kahneman (2) have illustrated the challenges posed by this capacity imbalance between the rational and impulsive brain. They picked different labels in their publications, but both outline two distinctly different modes in which our minds think and make decisions: ‘rational’ thinking that is slow, deliberate, conscious and driven by facts, versus ‘impulsive’ thinking that is fast, automatic, intuitive and driven by emotion.

We utilize the rational mind when we formally weigh options to make a decision or process information that requires conscious mental exertion. Imagine the mental effort of reading an academic paper and then summarizing the conclusions with data and arguments.

We utilize the impulsive mind for the vast majority of thinking that we do throughout the day. Imagine viewing social media content on a topic you are passionate about and then liking or forwarding the posts without too much thinking, fact checking or deeper analysis.

Unfortunately, what we gain in speed with impulsive thinking, we lose in accuracy. The impulsive mind often makes generalizations, neglects important information, oversimplifies complex situations and jumps to conclusions. As the brain’s information-processing load grows, it becomes even more susceptible to errors and biases.

In the last decade, we have seen dramatic changes in how digital media is created, published and distributed. An exponential increase in connectivity, coupled with the availability of smartphones, digital content and social channels, has created a relentless stream of digital media on our screens. To process this deluge of information, our poor, exhausted minds have no choice but to resort to the cognitive convenience of the impulsive brain and its mental shortcuts.

Vulnerable to misinformation

We have come a long way from the time when people gained knowledge and formed opinions through the limited media of print, in-person communication and broadcast television. We now acquire knowledge primarily from online sources, using smartphones and computer screens that have become universal channels for the mass distribution and consumption of digital content.

On one hand, improved connectivity and the availability of information online have been highly beneficial, making knowledge accessible to people around the world; on the other hand, they have proven equally potent as tools for influence, surveillance, exploitation and the spread of misinformation on a mass scale.

Corporations subliminally influencing the masses to consume their products, regimes using propaganda to swing elections, extremist groups using disinformation to build radical movements: none of this is a new phenomenon. However, with the broad reach of digital content, the influence and spread of any misinformation is exponentially accelerated. On loosely moderated social media platforms like Twitter, Facebook, YouTube, Instagram and WhatsApp, content that was previously spread by print media, television, billboards and word of mouth can now be distributed and amplified across the world, spreading virally to millions of people within minutes.

The spread of misinformation in the digital realm can sometimes have severe and devastating consequences in the real world. Mass distribution of unverified, incendiary digital content can drive people to harmful actions and cause significant damage before anyone questions its authenticity or purpose. In 2018, the spread of fake messages on the WhatsApp messaging platform led to a spate of mob violence and killings in India (3). More recently, in 2020, Facebook accounts and pages were used by militia groups to recruit people and coordinate before the violent and deadly shootings in Kenosha, Wisconsin, in the United States (4).

An even more unsettling development is the recent ability to produce highly convincing ‘synthetic media’: Artificial Intelligence used to generate video (5), images (6) and text (7) with uncanny realism and human likeness. It is no surprise that a recent study on “AI-enabled future crime” published in the journal Crime Science (8) places AI-authored fake news and audio/video impersonation at the top of its list of the most dangerous AI-enabled crimes.

With the added stress and anxiety of modern life, including an economic downturn, racial tensions and a global pandemic, we are left with depleted minds that are increasingly vulnerable to the misinformation embedded in our daily digital media platforms.

Tech giants have their own agenda

There are more than 1.6 billion social network users worldwide with more than 80 percent of internet users accessing social media services online. Facebook’s platforms, including WhatsApp and Instagram, are used by over a billion users worldwide and account for more than half of all mobile social traffic, followed closely by Twitter and Google’s video service YouTube.

By providing free access and services, these companies make money in proportion to how much time users spend on their platforms. More time spent means more opportunities to serve ads and to collect data that helps advertisers target the right people. As the popular quote goes, “if you are not paying for it, you are not the customer; you are the product”; or rather, your attention is the product being sold to advertisers.

To meet their monetization goals, these technology giants have for years employed behavioral psychology to create walled gardens that attract users and then keep them engaged within their ecosystems. The AI agents and algorithms developed by these companies act as the gatekeepers and promoters of digital content, maximizing the platform’s acquisition, engagement and growth objectives. Peer-reviewed medical research has even been published on the dopamine-inducing, addictive features built into these platforms (9).

In addition, as the Cambridge Analytica-Facebook scandal revealed in 2018, platform owners and third parties can use behavioral data collected from the platform for profiling, targeting and influencing specific user clusters (10). Cambridge Analytica used Facebook data to score over 50 million users on the ‘Big Five’ OCEAN personality traits (Openness, Conscientiousness, Extroversion, Agreeableness and Neuroticism), among others. It then targeted voter clusters with a digital misinformation campaign to influence the outcome of the 2016 US elections.

Even if there isn’t a direct intent to spread misinformation, the sophisticated content recommendation and targeting algorithms that leverage psychological or behavioral models to keep users engaged might prioritize content that gets people riled up, or tells people what they want to hear, even if it is false. Targeting algorithms combined with synthetic media can become quite dangerous and even unstoppable.

Social media platforms have taken some steps to combat harmful content, and various regulators, especially in the European Union, are developing policies around the liabilities of artificial intelligence and other emerging digital technologies (11). However, the recent departures of dissenting employees at Facebook and Google suggest a lack of motivation at the top to tackle these systemic issues in depth. It is highly unlikely that digital distribution and social media platforms will implement safeguards that prioritize facts and the interests of the audience over their own growth and profit-seeking objectives.

Navigating cognitive minefields

Living in a world where technological progress will only keep accelerating requires that we learn to coexist with advanced digital platforms and sophisticated algorithms. However, this also puts us in quite a predicament when it comes to combating digital misinformation.

We know our brains, with a limited prefrontal cortex, have a biological predisposition to take mental shortcuts. At the same time, publishers of misinformation are getting better at using artificial intelligence to fake digital content and target susceptible audiences. Meanwhile, mitigation efforts from the digital platform owners remain meager and ineffective at best.

So instead of waiting for policymakers or technology giants to solve the issue of digital misinformation, we need to recognize the challenges and start helping ourselves navigate the cognitive minefield of online digital and social media content.

We need to apply critical thinking to evaluate three key characteristics, or the 3 Ps, of all digital content: Purpose, Perspective, and Provenance.

Purpose

The purpose of any digital content can range from simply offering information (facts, opinions or knowledge) to the other extreme of swaying the reader into taking action or jumping to a conclusion.

We have to remember that our minds are wired for shortcuts that generate quick answers to difficult questions, and this can be used to influence us. The advertising world leverages this by creating associations between feelings or moments in life and certain brands or products, in order to drive increased sales and profits for corporate clients. Cause-based non-profits leverage this by taking deeply complex issues like poverty, unemployment or climate change and characterizing a donation to their cause as the solution. Politicians leverage this during elections to create compelling campaigns that reduce the myriad challenges of their voters to simplistic slogans.

Critical Thinking Strategy #1: Question the purpose of the content and intent of the author/sender to ensure that you are not falling into a cognitive trap that triggers impulsive thinking and action by ignoring or glossing over facts and rationality.

Perspective

The perspective offered by any digital content can range from a balanced point of view to a completely one-sided, extreme one.

In the digital world, with algorithms limiting our exposure, there can be strong biases and a lack of perspective in the digital content served up to us online (12). Recommendation algorithms of search engines and social media platforms have created content feeds in which homogeneity of views and opinions is reinforced. These algorithms keep the audience engaged by selectively serving information and recommending network connections that resemble existing beliefs, viewpoints and social connections. So instead of helping us consume more heterogeneous sources of information or connect with people who may be different from us, digital platforms perpetuate patterns of information sharing and social networking that simply reinforce pre-existing biases and beliefs.
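To make that feedback loop concrete, here is a deliberately simplified sketch in Python of similarity-based recommendation. The item names and topic scores are hypothetical, and real platform algorithms are vastly more complex; the point is only that ranking content by similarity to past engagement narrows what a user sees over time.

```python
# Toy illustration of similarity-based recommendation (hypothetical
# data, not any platform's actual algorithm): the feed surfaces the
# items closest to what the user already engaged with, so exposure
# narrows toward a single point of view.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two topic-score vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Each item is scored on two hypothetical topic axes, e.g. (politics, sports).
catalog = {
    "one_sided_op_ed": (0.9, 0.1),
    "balanced_report": (0.5, 0.5),
    "sports_recap":    (0.1, 0.9),
}

# Items the user previously liked; the profile is just their average.
user_history = [(0.8, 0.2), (0.95, 0.05)]
profile = tuple(sum(axis) / len(axis) for axis in zip(*user_history))

# Rank the catalog by similarity to the existing profile.
ranked = sorted(catalog, key=lambda k: cosine(profile, catalog[k]), reverse=True)
print(ranked)  # ['one_sided_op_ed', 'balanced_report', 'sports_recap']
```

Every click on the top-ranked item shifts the profile further toward it, so the next ranking is even more lopsided; that self-reinforcing loop is the ‘echo chamber’ in miniature.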

Critical Thinking Strategy #2: Challenge the perspective provided by platform digital feeds and online content to ensure you are exposed to diverse, even opposing, points of view and not trapped in an ‘echo chamber’ that amplifies and reinforces one-sided thinking.

Provenance

Of all three characteristics, this is likely the most difficult to evaluate. Given the nature of the internet, most content comes to us second-hand or from even more removed sources. In many cases the content is generated by individuals rather than a publishing house. On top of that, natively digital content can be altered and tampered with before it reaches us. The challenge is to verify the provenance (origin and authenticity) of digital content before we can consider it reliable.

Several tools exist online to do reverse image searches, find metadata on photos and get information on Twitter and Facebook accounts, such as the Verification Toolbox (13) and TinEye (14). There are tools to investigate online videos, such as the YouTube DataViewer (15), and fact-checking services such as Snopes (16) and FactCheck (17). Some excellent resources are also available on the website of the recent Netflix documentary The Social Dilemma (18).
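As a small illustration of the kind of check these tools automate, a photo’s embedded EXIF metadata can also be inspected locally. Below is a minimal sketch using the Python Pillow library; the file name is a placeholder, and absent metadata proves nothing by itself, since EXIF fields are easily stripped or edited.

```python
# Minimal local provenance check: read a photo's embedded EXIF
# metadata (capture date, camera model, software used, etc.) with
# the Pillow library. "photo.jpg" is a placeholder path.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return EXIF metadata as a {tag name: value} dictionary."""
    with Image.open(path) as img:
        exif = img.getexif()
        # Map numeric EXIF tag IDs to human-readable names.
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

metadata = read_exif("photo.jpg")
for name, value in metadata.items():
    print(f"{name}: {value}")  # e.g. DateTime, Make, Model, Software
```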

However, in my view it is a problem that the burden of validating provenance and authenticity continues to lie with the audience, not with publishers or platforms. In the future, this gap could be addressed by building an independent system that acts as a public registry of digital assets to combat synthetic media and fake news. Publishers and platforms would use the system to authenticate and certify digital content prior to distribution.

In such a system, blockchain distributed ledger technology would be used to create a trusted record of the digital content. The author’s identity, the metadata (date, time, location, tags, etc.) and the content itself (text, image, audio or video) would be combined to create a provenance record. Blockchain-based smart contracts could then automate the verification and use of authenticated assets in the publication process. Deploying such a system would require rethinking how digital publication happens and isolating certified content from the infrastructure of the publishing platform.
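As a rough sketch of what such a provenance record might contain, the snippet below fingerprints a piece of content with a cryptographic hash and chains each record to the previous one, which is the basic building block of a tamper-evident ledger. The field names and structure are my own illustrative assumptions, not a specification of any existing registry.

```python
# Illustrative provenance record for the hypothetical public registry
# described above: each record fingerprints the content and its
# metadata, and chains to the previous record's hash so that history
# cannot be silently rewritten. Field names are assumptions.
import hashlib
import json
import time

def make_record(author_id, content_bytes, metadata, prev_hash):
    record = {
        "author_id": author_id,
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "metadata": metadata,      # e.g. date, time, location, tags
        "timestamp": time.time(),
        "prev_hash": prev_hash,    # links this record into the chain
    }
    # The record's own hash is what a publisher would anchor on-chain
    # and what a smart contract could later verify against.
    serialized = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(serialized).hexdigest()
    return record

genesis = make_record(
    "author:alice",                      # hypothetical author identity
    b"original article text",
    {"date": "2020-10-01", "tags": ["news"]},
    prev_hash="0" * 64,                  # no predecessor for the first record
)

# Verification step: recompute the content hash and compare.
assert hashlib.sha256(b"original article text").hexdigest() == genesis["content_sha256"]
print(genesis["record_hash"])
```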

Unfortunately, we are not quite there yet in terms of either technical solutions or publication practices, and for now the onus lies with the consumers of digital content to validate provenance.

Critical Thinking Strategy #3: Verify the origin and authenticity of digital content to ensure the information comes from reliable unbiased sources and that the content hasn’t been specifically altered or designed to trigger impulsive thinking and action.

Conclusion

If we are humble about our abilities and limitations as humans, then we need to recognize that we truly need help in making good rational choices and decisions. Our brains haven’t sufficiently evolved to be able to process the 21st century overload of digital stimuli through our prefrontal cortex and its rational thinking capacity.

Meanwhile, artificial intelligence, social platforms, and digital content creation and distribution are growing exponentially and showing no signs of slowing down. This means the overload of digital stimuli will only increase over time. We are already seeing societal problems related to compulsive and impulsive consumption of digital media, and the impacts of digital technologies misused for exploitative or manipulative ends through misinformation have been unfolding publicly for the past few years.

Our response to this relentless assault should not be paranoia about digital content and platforms, which would greatly diminish the utility and benefits of all this amazing technological advancement. Instead, we need to employ critical thinking strategies to evaluate the digital content we consume for the 3 Ps: Purpose, Perspective and Provenance.

Over time, proactive approaches to limit misinformation and manipulative digital content will come in the form of policy interventions and preventative technology solutions. Until then, it is incumbent on all of us to use ‘always on’ critical thinking and filter out insidious content from our increasingly digital world.

References

1 Marcus, G. (2008). Kluge: The Haphazard Construction of the Human Mind. Houghton Mifflin.

2 Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

3 BBC. (2018, 07 18). How WhatsApp helped turn an Indian village into a lynch mob. BBC. https://www.bbc.com/news/world-asia-india-44856910

4 Brandom, R. (2020, 08 26). Facebook chose not to act on militia complaints before Kenosha shooting. theverge.com. https://www.theverge.com/2020/8/26/21403004/facebook-kenosha-militia-groups-shooting-blm-protest

5 Wikipedia. (2020, 09 22). Deepfake. Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Deepfake&oldid=979669528

6 Vincent, J. (2019, 02 15). ThisPersonDoesNotExist.com uses AI to generate endless fake faces. theverge.com. https://www.theverge.com/tldr/2019/2/15/18226005/ai-generated-fake-people-portraits-thispersondoesnotexist-stylegan

7 Wikipedia. (2020, 09 24). Generative Pre-trained Transformer 3 (GPT-3). Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=GPT-3&oldid=980041403

8 Caldwell, M., Andrews, J.T.A., & Tanay, T. (2020). AI-enabled future crime. Crime Science, 9(14). https://doi.org/10.1186/s40163-020-00123-8

9 Montag, C., Lachmann, B., Herrlich, M., & Zweig, K. (2019). Addictive Features of Social Media/Messenger Platforms and Freemium Games against the Background of Psychological and Economic Theories. International Journal of Environmental Research & Public Health, 16(14). https://doi.org/10.3390/ijerph16142612

10 Rosenberg, M., Confessore, N., & Cadwalladr, C. (2018, 03 17). How Trump Consultants Exploited the Facebook Data of Millions. The New York Times. https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html

11 Directorate-General for Justice and Consumers (European Commission). (2019). Liability for artificial intelligence and other emerging digital technologies. Publications Office of the EU. 10.2838/573689. https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1

12 Choi, D., Chun, S., Oh, H., Han, J., & Kwon, T. (2020). Rumor Propagation is Amplified by Echo Chambers in Social Media. Scientific Reports, 10, 310. https://doi.org/10.1038/s41598-019-57272-3

13 First Draft News. (n.d.). Verification Toolbox. https://firstdraftnews.org/verification-toolbox/

14 TinEye. (n.d.). TinEye Reverse Image Search. https://tineye.com/

15 Amnesty International. (n.d.). YouTube DataViewer. https://citizenevidence.amnestyusa.org/

16 Snopes Media Group. (n.d.). Snopes. https://www.snopes.com/

17 The Annenberg Public Policy Center. (n.d.). Fact Check. https://www.factcheck.org/

18 The Social Dilemma. (n.d.). Take Action. https://www.thesocialdilemma.com/take-action/