Digital communication in the public interest
Royal Roads' Jaigris Hodson received Canada Research Chair funding in 2020 to research misinformation, disinformation and conspiracy related to COVID-19.
The online spread of misinformation is a threat to the public interest and a driver of issues such as the recent COVID-19-related “infodemic” and climate change denial. Drawing on political economy, cultural capital theories, a social-ecological model, and user experience literature, this research addresses the online spread of misinformation, disinformation and conspiracy.
Hodson’s work aims to uncover how content delivery, economics, relational factors, ritual, and emotion influence people’s choices of what information and misinformation to engage with. This research will improve understanding of the best practices for, and barriers to, science communication in the face of digital misinformation flows.
Context
The spread of misinformation on online platforms is a major factor in social problems such as the recent measles outbreaks in the Western world (Gesser-Edelsburg et al., 2018), and may also contribute to a lack of action on climate change. In fact, in a recent book, Vaidhyanathan goes as far as to call Facebook a “disinformation machine” (2018, p. 175). While researchers are currently using algorithmic methods to examine how misinformation travels online and how specific content and influencers can make some information go viral (Song & Gruzd, 2017), there is widespread agreement that algorithmic approaches are not enough to address growing issues of misinformation flows (Bechmann et al., 2018). A transdisciplinary approach is therefore needed to understand this issue, so that 1) professional communicators can design public-interest information in ways that make it more likely to be shared, accessed, and trusted by the public; 2) policymakers can create policies that guide platform development in ways that support better information sharing in areas of research and public health; and 3) individuals can understand the drivers that influence their own information consumption habits, reflect on their own tendencies, and recognize what is and is not working for them. This program of research will employ user-focused research methods such as psychometric survey design, context mapping, and interviews as a formative assessment, followed by direct user testing using experience design methods such as surveys, diary studies, card sorts, and concurrent eye-tracking with a think-aloud protocol. Adopting a mixed-methods approach allows for a nuanced and context-rich understanding of what is, in essence, a complex and multi-dimensional issue.
The need for this research, both in Canada and internationally, is clear for three important reasons. First, there is a pressing need for more user-focused, small-scale social media research. As identified in the 2017 Social Media and Society Workshop on Data Governance and articulated again in the 2018 Social Media and Society Plenary Panel, the tightening of platform restrictions on independent research, together with the growing presence of bots and bad actors on social networks, has resulted in a sea change for social media research. Whereas algorithmic analysis of social media data used to be considered the best way to understand social media activity, researchers are now beginning to point to the importance of methods that directly address user behavior (Bechmann et al., 2018). User experience methodologies, and specifically the methods developed by the CRC nominee, can be exceptionally useful in this area, as discussed later in this proposal (Hodson & Traynor, 2018).
Second, there is a growing focus, both academically and in the popular press, on the problem of online misinformation and the role that social media plays in its spread. Social platforms like Twitter and Facebook have been implicated in the spread of misinformation in relation to, for example, the COVID-19 “infodemic” (as covered in the New York Times) and climate change (Cook & Lewandowsky, 2016). This research program could thus play a significant role in both academic and professional understanding of why people share science-related misinformation online.
The effective spread of relevant and accurate information to the public has been shown to be essential to democratic engagement, since it helps to create an informed and participatory public (Stilgoe et al., 2014). However, much science- and health-related communication continues to operate via a model of information deficit, which positions the public as ignorant and in need of education (Simis et al., 2016). Online social networks and digital communication platforms such as Twitter, Facebook and YouTube offer the potential to change the dominant information deficit model of science communication, since their affordances favor dialogic, two-way networked communication (Hodson, Dale, & Clifton-Ross, 2018). While recent research shows the strength of participatory platforms for spreading science communication, there is still an increase in the spread of malicious misinformation, inaccurate information, and propaganda (Bessi et al., 2015; Del Vicario et al., 2016). These issues are exacerbated by the online harassment of marginalized and diverse researchers (Hodson et al., 2018), which can impede research communication.
The rise of malicious misinformation is due, in part, to the unique nature of spreadable media. Social media platforms have enabled unparalleled information flow between many different and often disparate publics (Takhteyev, Gruzd, & Wellman, 2012). Yet this very strength may also be a key challenge for the sharing of accurate information on these platforms. In other words, the affordances (Halpern & Gibbs, 2013) that make it possible for anyone to contribute and share information on social media have also led to a media environment where information overload, or what some call “filter failure,” is an issue (Shirky, 2008). Complicating the issue is the fact that information, particularly scientific information, is neither neutral nor uncontested (Lubchenco, 2017). People may have very good reasons to mistrust scientific information, and algorithmic filtering does not begin to address the human factors driving this distrust. Furthermore, the question of who is able to stand as an expert or authority is a very real barrier for diverse individuals who wish to communicate their research online (Hodson et al., 2018). To understand the human drivers behind online information flows, a context-specific understanding is needed of the lived experiences of the people who share information online: professional communicators, researchers who use online tools to communicate, and members of the public who engage with online sources of information. The user-based study of information producers and intermediaries is an emerging field in the study of digital science communication.
With an abundance of online information available, individuals and platforms develop strategies to prevent information overload (Koroleva & Röhler, 2012). These filtering strategies are both top-down (platform content curation and algorithmic filtering) and bottom-up (user-based habits, behaviors and norms). Top-down mechanisms stem from the platforms themselves and consist of algorithms that filter the content users see in their social media feeds. These filters tend to support the economic needs of the social media companies, curating content that holds user attention in order to provide a desirable space for advertising (Vaidhyanathan, 2018). Bottom-up methods are developed organically by users themselves, and involve user-generated hashtags to find and sort information (Bruns, 2018), as well as cross-posting content from one platform to another or using email and instant messaging to share content outside the platforms themselves (Lim et al., 2015).
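To make the contrast concrete, the toy sketch below juxtaposes a platform-side, engagement-based ranking (top-down) with a user-driven hashtag filter (bottom-up). The field names, scoring weights, and example posts are hypothetical illustrations of the two strategies described above, not a description of any real platform's ranking algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    hashtags: list = field(default_factory=list)
    likes: int = 0
    shares: int = 0

def top_down_rank(posts):
    """Platform-side (top-down) filtering: order the feed by a simple
    engagement score so that attention-holding content surfaces first."""
    return sorted(posts, key=lambda p: p.likes + 2 * p.shares, reverse=True)

def bottom_up_filter(posts, hashtag):
    """User-driven (bottom-up) filtering: the reader curates information
    by following a hashtag that users themselves created."""
    return [p for p in posts if hashtag in p.hashtags]

feed = [
    Post("a", "New peer-reviewed study on vaccine safety", ["#scicomm"], likes=3, shares=1),
    Post("b", "Miracle cure THEY don't want you to see!", ["#health"], likes=120, shares=40),
]

print([p.text for p in top_down_rank(feed)])                  # engagement-ranked feed
print([p.text for p in bottom_up_filter(feed, "#scicomm")])   # hashtag-curated view
```

The point of the contrast is that the top-down score rewards whatever already attracts attention, accurate or not, while the bottom-up filter depends entirely on the habits and norms of the users doing the tagging.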
Both top-down and bottom-up information filtering strategies rely in part on individual liking and sharing behaviors. The platform algorithms that automatically filter information do so in such a way as to deliver people more of what they already like or share (Vaidhyanathan, 2018), and in deciding what to share with others, particularly across platforms, individuals also make their own content curation choices (Harsin, 2015). As such, to understand how scientific or research information, and related scientific misinformation, spreads online, it is important to study the personal, emotional, and social factors that drive or curtail online information sharing behaviors. This research will thus address four related questions:
What kind of information do people share with each other on various online platforms including social media, email, and text messages (SMS)?
Why do people share some types of information and not others?
What are the barriers to the spread of accurate scientifically-supported information online?
What approaches can help people to engage more with accurate research and scientific information instead of misinformation or disinformation?
References
Bechmann, A., Bruns, A., Gruzd, A., Quinn, A., & Rogers, R. (2018, July 20). Plenary panel: Accessing social media data after Cambridge Analytica. International Conference on Social Media and Society, Copenhagen.
Bessi, A., Coletto, M., Davidescu, G. A., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2015). Science vs conspiracy: Collective narratives in the age of misinformation. PLoS ONE, 10(2), e0118093. https://doi.org/10.1371/journal.pone.0118093
Bruns, A. (2018). Big social data approaches in internet studies: The case of Twitter. In J. Hunsinger, M. Allen, & L. Klastrup (Eds.), Second International handbook of internet research (pp. 65-81). Springer. https://doi.org/10.1007/978-94-024-1555-1_3
Cook, J., & Lewandowsky, S. (2016). Rational irrationality: Modeling climate change belief polarization using Bayesian networks. Topics in Cognitive Science, 8(1), 160-179. https://doi.org/10.1111/tops.12186
Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554-559. https://doi.org/10.1073/pnas.1517441113
Gesser-Edelsburg, A., Diamant, A., Hijazi, R., & Mesch, G. S. (2018). Correcting misinformation by health organizations during measles outbreaks: A controlled experiment. PLoS ONE, 13(12), e0209505. https://doi.org/10.1371/journal.pone.0209505
Halpern, D., & Gibbs, J. (2013). Social media as a catalyst for online deliberation? Exploring the affordances of Facebook and YouTube for political expression. Computers in Human Behavior, 29(3), 1159-1168. https://doi.org/10.1016/j.chb.2012.10.008
Harsin, J. (2015). Regimes of posttruth, postpolitics, and attention economies. Communication, Culture & Critique, 8(2), 327-333. https://doi.org/10.1111/cccr.12097
Hodson, J., Dale, A., & Clifton-Ross, J. (2018). Sharing sustainability stories: Case study of social media content curation for Canada Research Connections. Journal of Digital and Social Media Marketing, 6(3), 1-13. https://www.ingentaconnect.com/content/hsp/jdsmm/2018/00000006/00000003/art00002
Koroleva, K., & Röhler, A. B. (2012). Reducing information overload: Design and evaluation of filtering and ranking algorithms for social networking sites. In ECIS 2012 Proceedings, 12. https://aisel.aisnet.org/ecis2012/12/
Lim, B. H., Lu, D., Chen, T., & Kan, M.-Y. (2015). #mytweet via Instagram: Exploring user behaviour across multiple social networks. In 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Paris (pp.113-120). https://doi.org/10.1145/2808797.2808820
Lubchenco, J. (2017). Environmental science in a post-truth world. Frontiers in Ecology and the Environment, 15(1), 3. https://doi.org/10.1002/fee.1454
Shirky, C. (2008, September 19). It’s not information overload: It’s filter failure. Web 2.0 Expo, New York.
Simis, M. J., Madden, H., Cacciatore, M. A., & Yeo, S. K. (2016). The lure of rationality: Why does the deficit model persist in science communication? Public Understanding of Science, 25(4), 400-414. https://doi.org/10.1177/0963662516629749
Song, M., & Gruzd, A. (2017). Examining sentiments and popularity of pro- and anti-vaccination videos on YouTube. In Proceedings of the 8th International Conference on Social Media & Society (#SMSociety17). ACM. https://doi.org/10.1145/3097286.3097303
Stilgoe, J., Lock, S. J., & Wilsdon, J. (2014). Why should we promote public engagement with science? Public Understanding of Science, 23(1), 4-15. https://doi.org/10.1177/0963662513518154
Takhteyev, Y., Gruzd, A., & Wellman, B. (2012). Geography of Twitter networks. Social Networks, 34(1), 73-81. https://doi.org/10.1016/j.socnet.2011.05.006
Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines democracy. Oxford University Press.