Sunday, 22 December 2024
A role for qualitative methods in researching Twitter data on a popular science article's communication
Written for scholars and students who are interested in using qualitative research methods for research with small data, such as tweets on X.
The open-access paper that I co-authored with Dr Corrie Uys, Dr Pat Harpur and Prof Izak van Zyl, 'A role for qualitative methods in researching Twitter data on a popular science article's communication', identifies several potential qualitative research contributions in analysing small data from microblogging communications:
Qualitative research can provide a rich contextual framing for how micro-practices (such as tweet shares for journal articles...) relate to important social dynamics (... like debates on paradigms within higher-level social strata in the Global Health Science field) plus professionals' related identity work. Also, in-depth explorations of microblogging data following qualitative methods can contribute to the research process by supporting meta-level critiques of missing data, (mis-) categorisations, and flawed automated (and manual) results.
Published in the Frontiers in Research Metrics and Analytics journal's special topic, Network Analysis of Social Media Texts, our paper responds to calls from Big Data communication researchers for qualitative analysis of online science conversations to better explore their meaning. We identified a scholarly gap in the Science Communication field concerning the role that qualitative methods might play in researching small data on micro-bloggers' article communications. Although social media attention assists with academic article dissemination, qualitative research into related microblogging practices is scant. To support calls for the qualitative analysis of such communications, we provided a practical example:
Mixed methods were applied to better understand an unorthodox, but popular, article (Diet, Diabetes Status, and Personal Experiences of Individuals with Type 2 Diabetes Who Self-Selected and Followed a Low Carbohydrate High Fat Diet) and its Twitter users' shares over two years. Big Data studies describe patterns in micro-bloggers' activities from large sets of data. In contrast, this small data set was analysed in NVivo™ by me (a pragmatist), and in MAXQDA™ by Corrie (a statistician). As part of the data preparation and cleaning, a comprehensive view of hyperlink sharing and conversations was developed, which quantitative extraction alone could not support: for example, it neglects the general publication paths that fall outside listed academic publications, as well as related formal correspondence (such as academic letters, and sharing via open resources).
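To make that data-preparation challenge concrete, here is a minimal Python sketch of the kind of link consolidation it involves. This is not our actual pipeline: the file name, the column name and the ARTICLE_PATHS entries are placeholders, and the sketch simply resolves shortened links and normalises URL variants so that shares via different publication paths can be grouped as shares of the same article.

```python
# Minimal sketch (not the authors' pipeline): grouping tweeted links that point to
# the same article via different publication paths (publisher page, repository copy,
# shortened t.co links). File, column and ARTICLE_PATHS values are placeholders.
import csv
from urllib.parse import urlparse

import requests

ARTICLE_PATHS = {
    "doi.org/10.0000/example-doi",          # placeholder: the article's DOI link
    "example-publisher.com/article/12345",  # placeholder: the publisher's landing page
}


def resolve(url: str) -> str:
    """Follow t.co and other shorteners to the final URL (best effort)."""
    try:
        return requests.head(url, allow_redirects=True, timeout=10).url
    except requests.RequestException:
        return url  # keep the original if resolution fails


def normalise(url: str) -> str:
    """Strip the scheme, 'www.' prefix and query string so URL variants compare equal."""
    parsed = urlparse(resolve(url))
    return (parsed.netloc.removeprefix("www.") + parsed.path).rstrip("/")


def is_article_share(url: str) -> bool:
    return any(path in normalise(url) for path in ARTICLE_PATHS)


with open("tweets_export.csv", newline="", encoding="utf-8") as f:  # placeholder file
    rows = list(csv.DictReader(f))

article_shares = [r for r in rows if r.get("url") and is_article_share(r["url"])]
print(f"{len(article_shares)} of {len(rows)} tweets share a recognised article link")
```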
My multimodal content analysis found that links related to the article were largely shared by health professionals. Its popularity related to its position as a communication event within a longstanding debate in the Health Sciences. This issue arena sees an emergent Insulin Resistance (IR) paradigm contesting the dominant “cholesterol” model of chronic disease development. Health experts mostly shared this article, and their profiles reflected support for the emergent IR paradigm. We identified that these professionals followed a wider range of deliberation practices than previously described by quantitative SciComm Twitter studies. Practices ranged from the article being included in a lecture reading list, to language localisation in translating its title from English to Spanish, and study participants mentioning their involvement. Contributing under their genuine identities, expert contributors carried the formal norms of civil communication into the scientific Twitter genre. There were no original URL shares from IR critics, suggesting how sharing evidence for an unconventional low-carbohydrate, healthy fats approach might be viewed as undermining orthodox identities. However, critics did respond with pro-social replies and constructive criticism linked to the article's content and its methodological limitations.
The statistician's semantic network analysis (SNA) confirmed that terms used by the article's tweeters related strongly to the article's content, and that its discussion was pro-social. A few prominent individual IR advocates and organisations shared academic links to the article repeatedly, with its most influential tweeters and sharers being from England and South Africa. In using Atlas.ti and MAXQDA's tools for automated sentiment analysis, the statistician found many instances where sentiment was inaccurately described as negative when it should have been positive. This suggested a methodological limitation of quantitative approaches, such as QDAS, in (i) accurately analysing microblogging data. The SNA also uncovered concerns with (ii) incorrect automated counts for link shares. Concerns (i) and (ii) indicate how microblogging statistics may oversimplify complex categories, leading to inaccurate comparisons. In response, close readings of microblogging data present a distinct opportunity for meta-critique: qualitative research can support critiques of microblogging data sources, as well as of their use in QDAS. A lack of support for analysing static Twitter data spreadsheets was also concerning.
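For readers curious about what such a cross-check might look like in practice, here is a minimal Python sketch. It does not reproduce the Atlas.ti or MAXQDA workflow reported in the paper: it uses the open-source VADER analyser as a stand-in for QDAS sentiment tools, and the CSV file and column names are hypothetical. The point is simply that automated labels can be compared with manual codes, and the disagreements set aside for close reading.

```python
# Minimal sketch of the sentiment cross-check described above, not the Atlas.ti or
# MAXQDA workflow from the paper. Uses the open-source VADER analyser as a stand-in
# for QDAS sentiment tools; the CSV file and column names are hypothetical.
import csv

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyser = SentimentIntensityAnalyzer()


def auto_label(text: str) -> str:
    """Map VADER's compound score onto coarse sentiment labels."""
    score = analyser.polarity_scores(text)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"


with open("coded_tweets.csv", newline="", encoding="utf-8") as f:  # placeholder file
    disagreements = [
        row for row in csv.DictReader(f)
        if auto_label(row["text"]) != row["manual_sentiment"]
    ]

# Each disagreement becomes a candidate for close reading and meta-critique.
for row in disagreements:
    print(row["manual_sentiment"], "vs", auto_label(row["text"]), "->", row["text"][:80])
```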
Meta-inferences were then derived from the two methods' varied claims above. These findings flagged the importance of contextualising a health science article's sharing in relation to tweeters' professional identities and stances on what is healthy. In addition, meta-critiques spotlighted challenges with preparing accurate tweet data, and their analysis via qualitative data analysis software. Such findings suggest the valuable contributions that qualitative research can make to research with microblogging data in science communication.
The manuscript's development history
In 2020, Dr Pat Harpur and I selected an outlier IR scientific publication based on its unusually high Twitter popularity. At that time, the editorial, 'It is time to bust the myth of physical inactivity and obesity: you cannot outrun a bad diet', had been tweeted about over 3,000 times (now nearing 4,000 according to Altmetric!). However, analysing this highly popular outlier stalled after its static export proved unsuitable for efficient coding in qualitative data analysis software. The large volume of tweet data also proved very difficult to analyse. Accordingly, we shifted focus to a popular article that had been shared as an episode of a broader, long-running IR versus cholesterol debate. Even with its relatively small volume of tweets, organising this data for qualitative analysis proved challenging. For example, it was necessary to refine the Python extraction code, while cross-checks of static exports versus Twitter search results necessitated the capture of “missing” conversations.
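As an illustration of that cross-checking step, here is a minimal Python sketch (with assumed file and column names, not our actual extraction code) that compares the tweet IDs in a static spreadsheet export with those captured from a manual Twitter search, flagging conversations "missing" from the export.

```python
# Minimal sketch (assumed file and column names, not the authors' extraction code):
# compare tweet IDs in a static spreadsheet export against IDs captured from a
# manual Twitter search, to flag conversations "missing" from the export.
import csv


def tweet_ids(path: str, id_column: str = "tweet_id") -> set[str]:
    """Collect the non-empty tweet IDs from one CSV file."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[id_column] for row in csv.DictReader(f) if row.get(id_column)}


exported = tweet_ids("static_export.csv")           # placeholder: extraction-code export
searched = tweet_ids("twitter_search_capture.csv")  # placeholder: manual search capture

missing_from_export = searched - exported
print(f"{len(missing_from_export)} tweets found via search but absent from the export")
```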
We originally developed a multimodal analysis of these tweets, which focused on their relationship to Twitter users' profiles, potentially reflecting a wide range of communication goals. Our manuscript was submitted in 2022 to Science Communication, where Professor Susanna Priest kindly gave in-depth feedback on changing the original manuscript's contribution to a methodological one. We tackled this by developing a rationale for qualitative research with small data in the majorly revised article, for which Dr Corrie Uys did a semantic network analysis, while I revisited the social semiotic analysis.
If you have any questions, comments or concerns about our article, please comment below.
Acknowledgements
P.S. Related research manuscript from the team
Thursday, 29 August 2024
After half-a-million views, "Dr Noakes" erectile dysfunction "advert" taken down by Facebook + suggested actions for META to do better
Figure 1. Screenshot from the fake 'Dr Noakes' erectile dysfunction advert on Facebook (2024)
Figure 2. Screenshot of scammers' Facebook account featuring "Dr Tim Noakes" erection pill adverts (2024)
Figure 3. Scammer account location behind fake Facebook Dr Tim Noakes adverts (2024)
Our initial Facebook advert lookup revealed that one page was running four adverts (Figure 2). This account ("Tristan") was managed from Nepal and India (Figure 3).
Figure 4. Screenshot of fake Tristan account header behind Dr Tim Noakes adverts on Facebook (2024)
This fake account page also leveraged fake interactions to suggest that it was liked, and followed (Figures 4 and 5).
Figure 6. Screenshot of scammers' "Hughles" Facebook account (2024)
The scammers flick-flacked between varied accounts in committing this cybercrime: they initially used "Hughles" (Figures 6 and 7), "Cameron Sullivan Setting", and "Murthyrius" in launching the same deepfake ads. By the 28th of July, 13 of these adverts were taken down by Facebook, but the scammers shifted to new accounts, "Longjiaren.com" (Figure 8) and "Brentlinger" (renamed "Brentlingerkk" after we reported it). On the 29th of August, these accounts and their adverts were disabled by Facebook.
Figure 8. Screenshot of Longjiaren.com scammers Facebook account for fake adverts (2024)
Such adverts typically reach viewers outside The Noakes Foundation, Nutrition Network and Eat Better South Africa’s networks. Audiences within those networks know that Professor Noakes does not endorse miracle weight loss and other cures. To reach vulnerable publics, The Noakes Foundation has run Facebook alerts warning about this latest cybercrime. Ironically, the most recent advert attempting to flag the "Dr Noakes" scam was blocked by Facebook advertising (Figure 9)!
Actions for META to do better in fighting cybercrime on its platforms
2) Create a compliance team that is dedicated to thwarting cybercriminals' activities;
3) Offer at least one human contact on each META platform for serious reports of criminal misuse;
4) Promote frequent reporters of cybercrime by referring them to META's Trusted Partners or Business Partners for rapid aid;
5) Encourage external research on every platform regarding cybercriminals' activities (such initiatives could develop inexpensive tools, for example for celebrities' representatives to protect public figures from being deepfaked in "adverts");
6) Provide more feedback on which elements of a cybercrime report led to accounts and content being removed. Without such feedback, fraud reporters may not be sure which reports are most effective;
7) Have a recommendation system in place for support networks that cybervictims can approach (such as referring South Africans to its national CyberSecurity hub).
Thursday, 22 August 2024
META profits off fake celebrity endorsement ads (& no, "Dr Noakes" cannot help your erection problem 🙄!)
Figure 1. Screenshot of fake "Bretlinger" company's advert reel for "Dr Noakes" erection product on Facebook (2024)
Figure 2. Facebook support message reply to fake "Dr Noakes erection advert" report (2024)
Figure 3. Screenshot of The Noakes Foundation scam alert post on Dr Noakes scam with scammer comments, top (2024)
Figure 4. Screenshot of scammer comments to The Noakes Foundation's scam alert post for the Dr Noakes scam, bottom (2024)
META's Facebook and Instagram do not offer their public page managers any option to quickly respond to a barrage of scammy, fake comments. As a result, in addition to responding to fake ads, organisations must also use scarce resources to manage this scammy commentary. Each dodgy Facebook user's comment (Figure 4) must be hidden and reported (as false information) one at a time. Likewise for blocking each account and hiding its feed. The Noakes Foundation is going to flag to META that it must provide page administrators with decent tools to efficiently tackle the 'fake comments' threat. Let's just hope that META is not turning a blind eye to that threat, too...
Figure 5. ALIEN Ash sympathies meme
Friday, 26 July 2024
Content suppression techniques against dissent in the Fifth Estate - examples of COVID-19 censorship on social media
Written for researchers and others interested in the many methods available to suppress dissidents' digital voices. These techniques support contemporary censorship online, posing a digital visibility risk for dissidents challenging orthodox narratives in science.
The Fourth Estate emerged in the eighteenth century as the printing press enabled the rise of an independent press that could help check the power of governments, business, and industry. In similar ways, the internet supports a more independent collectivity of networked individuals, who contribute to a Fifth Estate (Dutton, 2023). This concept acknowledges how a network power shift results from individuals who can search, create, network, collaborate, and leak information in strategic ways. Such affordances can enhance individuals' informational and communicative power vis-à-vis other actors and institutions. A network power shift enables greater democratic accountability, whilst empowering networked agents in their everyday life and work. Digital platforms do enable online content creators to generate and share news that digital publics amplify via networked affordances (such as 💌 likes, "quotes" and sharing via # hashtag communities).
#1 Covering up algorithmic manipulation
Social media users who are not aware of censorship are unlikely to be upset about it (Jansen & Martin, 2015). Social media platforms have not been transparent about how they manipulated their recommender algorithms to give higher visibility to the official COVID-19 narrative, or about how they crowded out original contributions from dissenters on social media timelines and in search results. Such boosting ensured that dissent was seldom seen, or was perceived as a fringe minority's concern. As Dr Robert Malone tweeted, the computational, algorithm-based method now 'supports the objectives of a Large Pharma- captured and politicised global public health enterprise'. Social media algorithms have come to serve a medical propaganda purpose that crafts and guides the 'public perception of scientific truths'. While algorithmic manipulation underpins most of the techniques listed below, it is concealed from social media platform users.
#2 Fact choke versus counter-narratives
An example she tweeted about was the BBC's Trusted News Initiative warning in 2019 about anti-vaxxers gaining traction across the internet, requiring algorithmic intervention to neutralise "anti-vaccine" content. In response, social media platforms were urged to flood users' screens with repetitive pro-(genetic)-vaccine messages normalising these experimental treatments. Simultaneously, messaging attacked alternative treatments that posed a threat to the vaccine agenda. Fact chokes also included 'warning screens' displayed before users could click on content flagged by "fact checkers" as "misinformation".
#3 Title-jacking
For the rare dissenting content that does achieve high viewership, another challenge is that title-jackers will leverage this popularity for very different outputs under exactly the same (or very similar) production titles. This makes it harder for new viewers to find the original work. For example, Liz Crokin's 'Out of the Shadows’ documentary describes how Hollywood and the mainstream media manipulate audiences with propaganda. Since this documentary's release, several videos have been published with the same title.
#4 Blacklisting trending dissent
Social media search engines typically allow their users to see what is currently the most popular content. On Twitter, dissenting hashtags and keywords that proved popular enough to feature amongst trending content were quickly added to a 'trend blacklist' that hid unorthodox viewpoints. Tweets posted by accounts on this blacklist are prevented from trending, regardless of how many likes or retweets they receive. On Twitter, Stanford Health Policy professor Jay Bhattacharya argues he was added to this blacklist for tweeting on a focused alternative to the indiscriminate COVID-19 lockdowns that many governments followed: in particular, The Great Barrington Declaration he wrote with Dr. Sunetra Gupta and Dr. Martin Kulldorff, which attracted over 940,000 supporting signatures.
#5 Blacklisting content due to dodgy account interactions or external platform links
#6 Making content unlikeable and unsharable
This newsletter from Dr Steven Kirsch (29.05.2024) described how a Rasmussen Reports video on YouTube had its 'like' button removed. As Figure 1 shows, users could only select a 'dislike' option. This button was restored for www.youtube.com/watch?v=NS_CapegoBA.
Figure 1. YouTube only offers a dislike option for the Rasmussen Reports video on Vaccine Deaths, sourced from Dr Steven Kirsch's newsletter (29.05.2024)
Social media platforms may also prevent resharing such content, or prohibit links to external websites that are not supported by these platforms' backends, or have been flagged for featuring inappropriate content.
#7 Disabling public commentary
#8 Making content unsearchable within, and across, digital platforms
#9 Rapid content takedowns
Social media companies could ask users to take down content that was in breach of COVID-19 "misinformation" policies, or automatically remove such content without its creators' consent. In 2021, META reported that it had removed more than 12 million pieces of content on COVID-19 and vaccines that global health experts had flagged as misinformation. YouTube has a medical misinformation policy that follows the guidance of the World Health Organisation (WHO) and local health authorities. In June 2021, YouTube removed a podcast in which the evidence of a reproductive hazard of mRNA shots was discussed between Dr Robert Malone and Steve Kirsch on Prof Bret Weinstein's DarkHorse channel. Teaching material that critiqued genetic vaccine efficacy data was automatically removed within seconds for going against its guidelines (see Shir-Raz, Elisha, Martin, Ronel & Guetzkow, 2022). The WHO reports that its guidance contributed to 850,000 videos related to harmful or misleading COVID-19 misinformation being removed from YouTube between February 2020 and January 2021.
#10 Creating memory holes
#11 Rewriting history
#12 Concealing the motives behind censorship, and who its real enforcers are
Figure 2. Global Public-Private Partnership (G3P) stakeholders - sourced from IainDavis.com (2021) article at https://unlimitedhangout.com/2021/12/investigative-reports/the-new-normal-the-civil-society-deception.
Monday, 8 January 2024
Presentation notes for Cybermobs for online academic bullying - a new censorship option to protect The Science™'s status quo
Written for viewers of the Slideshare presentation.
Here is the transcript of the talk that accompanied https://www.slideshare.net/TravisNoakes/cybermobs-for-online-academic-bullying-2023pptx. For more on its background, click here.
Slide #1
Thanks for joining this talk on how academic cybermobs can serve as a new censorship option for protecting scientific orthodoxy. Mobs that seek to silence dissenters are a small part of a much greater concern regarding the censorship of legitimate disagreement… and scientific truths online.
#2
You can all read faster than I can speak, so please do so for this organizer of my presentation. After introducing yours truly and The Noakes Foundation (TNF), I am going to define the key concepts of The Science, scientific suppression, undone science, digital voice and online censorship…
#3
And how digital voice in the Fifth Estate is useful for working around scientific suppression, changing science and guidelines. Dissidents who succeed in gaining public attention can face hard and soft forms of censorship, which include the distinctive actions of academic cybermobs, plus a myriad of other forms of censorship on digital platforms. The talk closes with the challenges of researching academic cybermobs and a brief intro to the celebrity cybermobbing research that TNF assists.
#27