
Read up on social media monitoring: What does the literature say?

Christoph M. Abels
University of Potsdam

This chapter builds on a scoping review of:

  • Roughly 80 papers, covering research since 2010
  • Articles sourced from Europe, North America, and Latin America
  • Research on public health has grown significantly since Covid-19, whereas other domains show less variety; notably, advances in AI technology and other emerging threats (for instance, to electoral integrity) would benefit from more attention.

State of the Art: Research on mis- and disinformation

The following article provides an overview of the current debate in the field of social media monitoring, with a focus on mis- and disinformation, and is intended to serve as a starting point for further research and more in-depth reading. The literature review is based on a screening of recent publications from leading journals (e.g., Political Communication, Nature Human Behaviour), widely cited literature published in the last 15 years, and current policy papers and reports from relevant institutions and actors in the field of social media monitoring.

This overview subsequently discusses the role of social media in circulating disinformation, before describing how social media monitoring is employed to track and engage with disinformation.

Premise of social media

Social media has become a popular playground for various actors seeking to inject false or misleading information into the public information stream, ranging from intelligence agencies and (extremist) political parties to rogue civil society actors. At the same time, social media is increasingly used as an important source of information by citizens around the globe.

The (dis-)information ecosystem: terminology and definitions

While well-informed citizens are the cornerstone of a functioning democracy (Lewandowsky et al., 2020), false as well as misleading information threatens to undermine the informational basis citizens depend on. These types of information are frequently referred to as mis- and/or disinformation. While misinformation is merely false information, for example reporting errors in journalistic pieces, disinformation intentionally seeks to mislead individuals (Wardle & Derakhshan, 2017). Beyond that, various other sub-types of disinformation are prevalent online. These include, for instance, conspiracy theories, often linked to beliefs that actors with malign intent operate in secret against the public (for an overview, see Douglas & Sutton, 2023), as well as “fake news”, which is published in the style of legitimate news articles but is partially or fully fabricated (Tandoc et al., 2018).

How and why mis- and disinformation spreads

While the pursuit of a political agenda is an important reason for people to circulate conspiracy theories (Douglas & Sutton, 2023), fake news is more frequently shared out of inattention than out of an intention to mislead (Pennycook & Rand, 2021). Disinformation is also not limited to simple texts, tweets, or posts. Technological progress has made it possible to alter pictures and videos in ways that serve disinformation purposes. Fabricated videos are now commonly referred to as deepfakes (Chesney & Citron, 2018).

Social media as a vector for disinformation

Very broadly, social media can be understood as Internet-based channels in which users can interact and share content, either in real time or asynchronously (Carr & Hayes, 2015). Numerous social media platforms exist today, such as Discord, Facebook, Flickr, Instagram, LinkedIn, Pinterest, Reddit, Snapchat, Telegram, TikTok, Twitter, and YouTube. Virtual social worlds and virtual game worlds, such as the Metaverse or Minecraft, can also be understood as social media (Kaplan & Haenlein, 2010).

These platforms have different audiences, purposes, and functionalities, creating a complex and continuously evolving media landscape. To understand the effects of digital media use, including social media, on democracy, the systematic review by Lorenz-Spreen et al. (2022) is highly recommended. Regardless of their purpose or functionalities, disinformation is prevalent on most, if not all, platforms. For example, Facebook, Instagram, Reddit, and Twitter all struggle to contain disinformation on their platforms (Hao, 2021; Lukito, 2020; The Economist, 2020; Yang et al., 2021). While these platforms are used to disseminate disinformation, some platforms are used to prepare and develop disinformation, including conspiracy theories, such as the messaging board 4chan in the case of Pizzagate (Tuters et al., 2018).

Beyond that, individuals frequently use several social media platforms, connecting these technically separate spheres. As a result, disinformation can re-emerge at a different point in time on other platforms (Kang & Frenkel, 2020). Disinformation campaigns thus spread across various platforms (Lukito, 2020), often with content tailored to maximize engagement and visibility, and can therefore reach a large number of people, sometimes in surprising places. For instance, Russian propaganda has recently been reported to be present in various online video games, such as Minecraft (Myers & Browning, 2023).

This is especially problematic during crises, e.g., terrorist attacks, wars, or natural disasters, or in otherwise sensitive situations, such as elections. During these times, many people turn to social media to follow the developing situation and find the latest information. Monitoring these developments on social media is therefore of crucial importance (Reuter & Kaufhold, 2018; Starbird et al., 2014; Vieweg et al., 2010).

Monitoring disinformation on social media

Social media monitoring (SMM) describes the systematic observation of social media platforms and other digital information sources. Although various approaches to SMM exist, the process can be broadly organized into four steps (Brady, 2020; Karafillakis et al., 2021):

  • preparation (e.g., defining the problem or goal of the monitoring as well as associated topics and terms),
  • data extraction,
  • data analysis,
  • and dissemination of findings.

Some contexts demand a more comprehensive planning step that includes a risk assessment to better understand relevant factors such as trust in media, social media consumption, and the political landscape (Brady, 2020). Data collection is to a large extent automated: a social media platform is crawled to retrieve data and identify trends with respect to a certain set of topics or keywords, but also with respect to specific sources of information. However, as Brady (2020) points out based on insights from SMM during five European elections, there is no gold standard for SMM, and the field remains largely experimental.
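To make the four steps more concrete, the following minimal Python sketch outlines a keyword-based monitoring loop. It is illustrative only: fetch_posts is a hypothetical placeholder for a platform-specific crawler or research API client, and the keyword list is invented; real SMM setups combine many tools and data sources (Brady, 2020).

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

# Step 1: preparation -- define the monitoring goal as a set of topics/keywords.
KEYWORDS = {"ballot", "rigged", "stolen votes"}

@dataclass
class Post:
    author: str
    text: str
    timestamp: datetime

def fetch_posts(platform: str, query: str) -> list[Post]:
    """Step 2: data extraction. Hypothetical placeholder for a platform-specific
    crawler or API client; real setups differ per platform and access regime."""
    raise NotImplementedError("wire up a platform client here")

def analyze(posts: list[Post]) -> Counter:
    """Step 3: data analysis. Count keyword mentions to surface trends."""
    hits = Counter()
    for post in posts:
        text = post.text.lower()
        hits.update(kw for kw in KEYWORDS if kw in text)
    return hits

def report(trends: Counter) -> None:
    """Step 4: dissemination. Print a summary an analyst could act on."""
    for kw, n in trends.most_common():
        print(f"{kw}: {n} mentions")

# Demo with hand-written posts instead of a live crawl.
sample = [
    Post("user1", "They say the ballot was rigged!", datetime(2023, 7, 23, 9, 0)),
    Post("user2", "Polling stations open until 8pm.", datetime(2023, 7, 23, 9, 5)),
]
report(analyze(sample))
```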

Thus, the precise combination of tools, team qualifications, and organizational settings varies from case to case. The increasing availability of AI technology will also support SMM. Some argue that, given the immense volume of data produced every day, only AI will be able to identify emerging threats in real time (Yankoski et al., 2020). Research on automated fake news detection highlights the potential of this approach (for example, Tacchini et al., 2017).
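As a rough illustration of what automated detection can look like, the sketch below trains a simple text classifier on a toy corpus with scikit-learn. This TF-IDF baseline is only a stand-in: the corpus and labels are invented, and Tacchini et al. (2017) rely on social interaction signals (which users engaged with a post) rather than text features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled corpus: 1 = fabricated, 0 = legitimate (illustrative only).
texts = [
    "Celebrity endorses miracle cure, doctors furious",
    "Parliament passes budget after lengthy debate",
    "Secret lab confirms moon landing was staged",
    "Central bank holds interest rates steady",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a common text-classification baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score an unseen post; the output is the probability of the 'fabricated' class.
print(model.predict_proba(["Hidden cure suppressed by officials"])[:, 1])
```

In practice, such classifiers are only one component of SMM and require much larger, carefully curated training data as well as human review of flagged content.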

In the context of disinformation, SMM is primarily employed to protect elections and address disinformation campaigns. Monitoring the information space during elections has become increasingly important. For instance, Russia tried to interfere in the 2016 U.S. presidential election, although with limited success with respect to changing people’s attitudes or voting behavior (Eady et al., 2023). Various elections in EU member states have been targeted by disinformation campaigns as well, such as Slovakia in 2019 and Spain in 2019 and again in 2023 (Fundación Maldita.es & Democracy Reporting International, 2023), although mostly with limited success (Bayer et al., 2021). Another notable case was the 2022 Brazilian elections, which witnessed a comprehensive disinformation campaign using various tools and channels, such as cheapfakes (i.e., images and audio altered with off-the-shelf technology) and WhatsApp status updates. The campaign aimed to undermine citizens' trust in the elections and in democratic institutions in general (Saab et al., 2022). Brady (2020) provides an informative example of SMM during elections.

SMM is also employed by organizations dedicated to countering ongoing foreign interference aimed at undermining public trust in democratic institutions. Distinguishing organic rumors from organized campaigns is a challenging endeavor, as disinformation campaigns often blend false information with facts (Starbird, 2020). Examples of relevant organizations include the NATO Strategic Communications Centre of Excellence (StratCom COE) located in Latvia (for a brief introduction to StratCom COE, see Hanley, 2022) as well as the European External Action Service’s East StratCom Task Force (ESCTF). The flagship product of the ESCTF is the EUvsDisinfo initiative, which collects and debunks disinformation campaigns targeting the EU, its member states, and neighboring countries.
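One heuristic for separating coordinated campaigns from organic rumors is to look for near-identical messages posted by many distinct accounts within a short time window. The sketch below illustrates this idea under strongly simplified assumptions (invented posts, naive text normalization, arbitrary thresholds); production systems add similarity hashing, network analysis, and human review.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Invented example posts: (account, text, timestamp).
posts = [
    ("acct_a", "the vote is rigged, share this now", datetime(2023, 7, 1, 12, 0)),
    ("acct_b", "The vote is RIGGED, share this now!", datetime(2023, 7, 1, 12, 2)),
    ("acct_c", "the vote is rigged share this now", datetime(2023, 7, 1, 12, 3)),
    ("acct_d", "polling stations open until 8pm today", datetime(2023, 7, 1, 12, 5)),
]

def normalize(text: str) -> str:
    """Collapse case and punctuation so near-duplicates hash together."""
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace()).strip()

WINDOW = timedelta(minutes=10)   # arbitrary illustrative threshold
MIN_ACCOUNTS = 3                 # arbitrary illustrative threshold

# Group posts by normalized text, then flag bursts from many distinct accounts.
clusters = defaultdict(list)
for account, text, ts in posts:
    clusters[normalize(text)].append((account, ts))

for text, hits in clusters.items():
    accounts = {a for a, _ in hits}
    times = [t for _, t in hits]
    if len(accounts) >= MIN_ACCOUNTS and max(times) - min(times) <= WINDOW:
        print(f"possible coordination: {len(accounts)} accounts -> {text!r}")
```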

Making SMM-based research actionable

Monitoring disinformation and similar phenomena is of crucial importance to protect democratic processes and institutions. However, SMM faces significant challenges: disinformation is a versatile category that is not limited to text but increasingly includes video and audio (for example, deepfakes; Chesney & Citron, 2018; Weikmann & Lecheler, 2022), and the social media landscape is ever-changing.

From this perspective, SMM should be intertwined with comprehensive outreach and community mobilization efforts to increase its effectiveness and to disseminate findings to a larger public as fast as possible. Thus, like science communication (Holford et al., 2023), SMM should be understood as a collective intelligence endeavor that builds on the technical and regional expertise of various stakeholders, as illustrated by Brady’s (2020) examples as well as the recent monitoring of Spain’s snap general election (Fundación Maldita.es & Democracy Reporting International, 2023). SMM therefore goes beyond merely observing the information space on social media: it contributes to a better understanding of the challenges with which democratic information spheres are currently coping.

References

Bayer, J., Holznagel, B., Lubianiec, K., Pintea, A., Schmitt, J. B., Szakács, J., Uszkiewicz, E., European Parliament. Directorate-General for External Policies of the Union. Policy Department, & European Parliament. Special Committee on Foreign Interference in all Democratic Processes in the European Union, including D. (2021). Disinformation and propaganda: impact on the functioning of the rule of law and democratic processes in the EU and its Member States - 2021 update. European Parliament. https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/INGE/DV/2021/04-13/EXPO_STU2021653633_EN.pdf

Brady, M. (2020). Lessons Learned: Social Media Monitoring during Elections. Democracy Reporting International. https://democracy-reporting.org/en/office/global/collection?type=publications

Carr, C. T., & Hayes, R. A. (2015). Social Media: Defining, Developing, and Divining. Atlantic Journal of Communication, 23(1), 46–65. https://doi.org/10.1080/15456870.2015.972282

Chesney, R., & Citron, D. K. (2018). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security (No. 692; Public Law Research Paper). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954

Douglas, K. M., & Sutton, R. M. (2023). What Are Conspiracy Theories? A Definitional Approach to Their Correlates, Consequences, and Communication. Annual Review of Psychology, 74(1), 271–298. https://doi.org/10.1146/annurev-psych-032420-031329

Eady, G., Paskhalis, T., Zilinsky, J., Bonneau, R., Nagler, J., & Tucker, J. A. (2023). Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior. Nature Communications, 14(1). https://doi.org/10.1038/s41467-022-35576-9

Fundación Maldita.es & Democracy Reporting International. (2023, August 8). Disinformation and Hate Online During the Spanish Snap General Election. Democracy Reporting International. https://democracy-reporting.org/en/office/global/publications/disinformation-and-hate-online-during-the-spanish-snap-general-election-of-july-2023

Hanley, M. (2022). NATO’s Response to Information Warfare Threats. In J. Chakars & I. Ikmanis (Eds.), Information Wars in the Baltic States (pp. 205–223). Palgrave Macmillan. https://doi.org/10.1007/978-3-030-99987-2_11

Hao, K. (2021). How Facebook got addicted to spreading misinformation. MIT Technology Review. https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/

Holford, D., Fasce, A., Tapper, K., Demko, M., Lewandowsky, S., Hahn, U., Abels, C. M., Al-Rawi, A., Alladin, S., Sonia Boender, T., Bruns, H., Fischer, H., Gilde, C., Hanel, P. H. P., Herzog, S. M., Kause, A., Lehmann, S., Nurse, M. S., Orr, C., … Wulf, M. (2023). Science Communication as a Collective Intelligence Endeavor: A Manifesto and Examples for Implementation. Science Communication. https://doi.org/10.1177/10755470231162634

Kang, C., & Frenkel, S. (2020, June 27). ‘PizzaGate’ Conspiracy Theory Thrives Anew in the TikTok Era. The New York Times. https://www.nytimes.com/2020/06/27/technology/pizzagate-justin-bieber-qanon-tiktok.html

Kaplan, A. M., & Haenlein, M. (2010). Users of the world, unite! The challenges and opportunities of Social Media. Business Horizons, 53(1), 59–68. https://doi.org/10.1016/j.bushor.2009.09.003

Karafillakis, E., Martin, S., Simas, C., Olsson, K., Takacs, J., Dada, S., & Larson, H. J. (2021). Methods for social media monitoring related to vaccination: Systematic scoping review. JMIR Public Health and Surveillance, 7(2). https://doi.org/10.2196/17149

Lewandowsky, S., Smillie, L., Garcia, D., Hertwig, R., Weatherall, J., Egidy, S., Robertson, R. E., O’Connor, C., Kozyreva, A., Lorenz-Spreen, P., Blaschke, Y., & Leiser, M. (2020). Technology and Democracy: Understanding the influence of online technologies on political behaviour and decision-making. https://doi.org/10.2760/709177

Lorenz-Spreen, P., Oswald, L., Lewandowsky, S., & Hertwig, R. (2022). A systematic review of worldwide causal and correlational evidence on digital media and democracy. Nature Human Behaviour, 7(1), 74–101. https://doi.org/10.1038/s41562-022-01460-1

Lukito, J. (2020). Coordinating a Multi-Platform Disinformation Campaign: Internet Research Agency Activity on Three U.S. Social Media Platforms, 2015 to 2017. Political Communication, 37(2), 238–255. https://doi.org/10.1080/10584609.2019.1661889

Myers, S. L., & Browning, K. (2023, July 30). Russia Takes Its Ukraine Information War Into Video Games. The New York Times. https://www.nytimes.com/2023/07/30/technology/russia-propaganda-video-games.html

Pennycook, G., & Rand, D. G. (2021). The Psychology of Fake News. Trends in Cognitive Sciences, 25(5), 388–402. https://doi.org/10.1016/j.tics.2021.02.007

Reuter, C., & Kaufhold, M.-A. (2018). Fifteen years of social media in emergencies: A retrospective review and future directions for crisis Informatics. Journal of Contingencies and Crisis Management, 26(1), 41–57. https://doi.org/10.1111/1468-5973.12196

Saab, B. A., Beyer, J. N., & Böswald, L.-M. (2022). Beyond the Radar: Emerging Threats, Emerging Solutions. Democracy Reporting International. https://democracy-reporting.org/en/office/global/publications/going-beyond-the-radar-emerging-threats-emerging-solutions

Starbird, K. (2020, July). Disinformation campaigns are murky blends of truth, lies and sincere beliefs – lessons from the pandemic. The Conversation. https://theconversation.com/disinformation-campaigns-are-murky-blends-of-truth-lies-and-sincere-beliefs-lessons-from-the-pandemic-140677

Starbird, K., Maddock, J., Orand, M., Achterman, P., & Mason, R. M. (2014, March 1). Rumors, False Flags, and Digital Vigilantes: Misinformation on Twitter after the 2013 Boston Marathon Bombing. IConference 2014 Proceedings. https://doi.org/10.9776/14308

Tacchini, E., Ballarin, G., Della Vedova, M. L., Moret, S., & de Alfaro, L. (2017). Some like it Hoax: Automated fake news detection in social networks. CEUR Workshop Proceedings, 1960, 1–12.

Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “Fake News”: A typology of scholarly definitions. Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143

The Economist. (2020). Instagram will be the new front-line in the misinformation wars. https://www.economist.com/the-world-in/2020/01/01/instagram-will-be-the-new-front-line-in-the-misinformation-wars

Tuters, M., Jokubauskaitė, E., & Bach, D. (2018). Post-Truth Protest: How 4chan Cooked Up the Pizzagate Bullshit. M/C Journal, 21(3), 1–18. https://doi.org/10.5204/mcj.1422

Twetman, H., Paramonova, M., & Hanley, M. (2020). Social Media Monitoring: A Primer. Methods, tools, and applications for monitoring the social media space. NATO Strategic Communications Centre of Excellence. https://stratcomcoe.org/pdfjs/?file=/cuploads/pfiles/social_media_monitoring_a_primer_12-02-2020.pdf

Vieweg, S., Hughes, A. L., Starbird, K., & Palen, L. (2010). Microblogging during two natural hazards events. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2, 1079–1088. https://doi.org/10.1145/1753326.1753486

Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary framework for research and policy making. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c

Weikmann, T., & Lecheler, S. (2022). Visual disinformation in a digital age: A literature synthesis and research agenda. New Media & Society. https://doi.org/10.1177/14614448221141648

Yang, K. C., Pierri, F., Hui, P. M., Axelrod, D., Torres-Lugo, C., Bryden, J., & Menczer, F. (2021). The COVID-19 Infodemic: Twitter versus Facebook. Big Data and Society, 8(1). https://doi.org/10.1177/20539517211013861

Yankoski, M., Weninger, T., & Scheirer, W. (2020). An AI early warning system to monitor online disinformation, stop violence, and protect elections. Bulletin of the Atomic Scientists, 76(2), 85–90. https://doi.org/10.1080/00963402.2020.1728976