Friday, 6 June 2025
Techniques for suppressing health experts' social media accounts (7 - 12, part 2) - The Science™ versus key opinion leaders challenging the COVID-19 narrative
Written for researchers and others interested in the many techniques used to suppress COVID-19 dissidents' social media accounts, and digital voices.
This is the second post alerting readers to the myriad techniques that social media companies continue to use against key opinion leaders who question the dominant consensus from The Science™. While examples are provided of techniques used against prominent critics of the official COVID-19 narrative, these are also readily applied to stifle dissent in other issue arenas. These range from the United States of America's support for forever wars via a ghost budget (Bilmes, 2018), to man-made climate change, and from low-carbohydrate diets to transgender "gender affirming" medical surgery ideology. These dogmatic prescriptions by the global policy decision makers of the Global Public-Private Partnership (G3P or GPPP) are presented as a "scientific consensus", but are unscientific when protected from being questioned, especially by legitimate experts with dissenting arguments.
#7 Concealing the sources behind dissidents' censorship
An important aspect of information control is that the sources behind it are very well hidden from the public. For the organisers of propaganda, the ideal targets do not appreciate that they are receiving propaganda, nor do they recognise its source. Their audiences' ignorance is a key aspect of psychological warfare, otherwise known as 5th generation warfare (Abbot, 2010; Krishnan, 2024). Likewise for censors: their targets, and those targets' followers, should ideally not be aware that they are being censored, nor be able to identify the sources of their censorship. Accordingly, there is significant deception around the primary sources of social media censorship being the platforms themselves and their policies. Instead, these platforms are largely responding to co-ordinated COVID-19 narrative control from G3P members who span each of the six estates*.
The well-funded complicity theorists for a COVID-19 "Infodemic" (for example, Calleja et al., 2021; Caulfield, 2020; DiResta, 2022; Schiffrin, 2022) may genuinely believe in advocating for censorship as a legitimate, organic counterweight to "malinformation". In contrast, researchers at Unlimited Hangout point out that this censorship is highly centralised, aimed at opinions that are deemed "illegitimate" merely for disagreeing with the positions of the most powerful policy makers at the G3P's macro-level. Ian Davis writes that the G3P policy makers are Chatham House, the Club of Rome, the Council on Foreign Relations, the Rockefellers and the World Economic Forum. Each guides international policy distributors, including the International Monetary Fund, the Intergovernmental Panel on Climate Change, the United Nations and the World Health Organisation, plus "philanthropists" (e.g. the Bill and Melinda Gates Foundation, BMGF), multinational corporations and global non-governmental organisations.
Mr Bill Gates serves as an example of the Sixth Estate exercising undue influence on public health, especially Africa's: his foundation is the largest private benefactor of the World Health Organization. The BMGF finances the health ministries of virtually every African country. Mr Gates can place conditions on that financing, such as vaccinating a certain percentage of a country's population. Some vaccines and health-related initiatives that these countries purchase are developed by companies that Gates' Cascade Investment LLC invests in. As a result, he can benefit indirectly from stock appreciation. This is alongside tax savings from his donations, whilst his reputation as a 'global health leader' is further burnished. In South Africa, the BMGF has directly funded the Department of Health, SA's regulator SAHPRA, plus its Medical Research Council, top medical universities and the media (such as the Mail and Guardian's health journalism centre, Bhekisisa). All would seem highly motivated to protect substantial donations by not querying Mr Gates' vaccine altruism. However, the many challenges of the Gates Foundation's dominating role in its transnational philanthropy must not be ignored. Such dominance poses a challenge to justice: locals' rights to control the institutions that can profoundly impact their basic interests (Blunt, 2022). While the BMGF cannot be directly tied to COVID-19 social media account censorship, it is indisputable that Mr Gates' financial power and partner organisations indirectly suppressed dissenting voices by prioritising certain COVID-19 treatment narratives (Politico, 2022A, 2022B).
At a meso-level, select G3P policy enforcers ensure that the macro-level's policy directives are followed by both national governments (and their departments, such as health) and scientific authorities (including the AMA, CDC, EMA, FDA, ICL, JCVI, NERVTAG, NIH, MHRA and SAGE). Enforcers strive to prevent rival scientific ideas gaining traction and thereby challenging their policymakers' dictates. These bodies task psychological 'nudge' specialists (Junger and Hirsch, 2024), propagandists and other experts with convincing the public to accept, and ideally buy into, G3P policies. This involves censorship and psychological manipulation via public relations, propaganda, disinformation and misinformation. The authors of such practices are largely unattributed. Dissidents facing algorithmic censorship through social media companies' opaque processes of content moderation are unlikely to be able to identify the true originator of their censorship in such a complex process. Content moderation is a 'multi-dimensional process through which content produced by users is monitored, filtered, ordered, enhanced, monetised or deleted on social media platforms' (Badouard and Bellon, 2025). This process spans a 'great diversity of actors' who develop specific practices of content regulation (p3). Actors may range from activist users and researchers who flag content, to fact-checkers from non-governmental organisations and public authorities. If such actors disclose their involvement in censorship, this may only happen much later. For example, Mark Zuckerberg's 2024 letter to the House Judiciary Committee revealed that the Biden administration pressured Meta to censor certain COVID-19 content, including humour and satire, in 2021.
#8 Blocking a user’s access to his or her account
A social media platform may stop a user from being able to log in to his or her account. Where the platform does not make this blocking obvious to a user's followers, this is deceptive. For example, Emeritus Professor Tim Noakes' Twitter account was deactivated for months after he queried health authorities' motivations in deciding on interventions during the COVID-19 "pandemic". Many viewers would not recognise that his seemingly live profile was in fact inactive. The only clue was that @ProfTimNoakes had not tweeted for a long time, which was highly unusual.
This suspension followed Twitter's introduction of a "five-strike" system, with repeat offenders or egregious violations leading to permanent bans. Twitter's system tracked violations, with the first and second strikes resulting in a warning or temporary lock. A third strike resulted in a 12-hour suspension, a fourth strike in a 7-day suspension, and a fifth strike in a permanent ban. In Professor Tim Noakes' case, he was given a vague warning regarding 'breaking community rules etc.' (email correspondence, 24.10.2022). This followed his noticing a loss of followers and his tweets' reach being restricted. Twitter 'originally said I was banned for 10 hours. But after 10 hours when I tried to re-access it they would not send me a code to enter. When I complained they just told me I was banned. When I asked for how long, they did not answer.' In reviewing his tweets, Prof. Noakes noticed that some had been (mis-)labelled by Twitter as "misleading" before his suspension (see Figure 1 below).
Figure 1. Screenshot of @ProfTimNoakes' "controversial" tweet on President Macron not taking COVID-19 'experimental gene therapy' (24 October, 2022)
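As a purely illustrative aid, the escalation described above can be expressed as a simple mapping from accumulated strikes to penalties. The following is a minimal Python sketch of that logic, not Twitter's actual code:

```python
# Illustrative sketch only: mapping accumulated policy "strikes" to the
# escalating penalties described for Twitter's five-strike system.
# This is not Twitter's actual implementation.

def penalty_for(strikes: int) -> str:
    """Return the penalty that follows the given number of strikes."""
    if strikes <= 0:
        return "no action"
    if strikes <= 2:
        return "warning or temporary account lock"
    if strikes == 3:
        return "12-hour suspension"
    if strikes == 4:
        return "7-day suspension"
    return "permanent suspension"

if __name__ == "__main__":
    for s in range(1, 6):
        print(f"Strike {s}: {penalty_for(s)}")
```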
Prof Noakes had also tweet-quoted Alec Hogg's BizNews article regarding Professor Salim Abdool Karim's conflicts of interest, adding 'something about' cheque book science. The @ProfTimNoakes account was in a state of limbo after seven days, but was never permanently banned. Usually, accounts placed in "read-only" mode, or under temporary lockouts, required tweet deletion to regain full access. However, @ProfTimNoakes' latest tweets were not visible, and he was never asked to delete any. In addition to account login blocks, platforms may also suspend accounts from being visible, but this was not applied to @ProfTimNoakes. In response to being locked out, Prof Noakes shifted to using his alternate @loreofrunning account; its topics of nutrition, running and other sports seemed safe from the reach of unknown censors' Twitter influence.
#9 Temporary suspensions of accounts (temporary user bans)
#10 Permanent suspension of accounts, pages and groups (complete bans)
In contrast to Twitter's five-strike system, Meta's Facebook system was not as formalised. It tracked violations on accounts, pages and groups. The latter two serve different functions in Facebook's system architecture (Broniatowski et al., 2023): only page administrators may post in pages, which are designed for brand promotion and marketing. In contrast, any member may post in groups, which serve as forums for members to build community and discuss shared interests. In addition, pages may serve as group administrators. From December 2020, Meta began removing "false claims about COVID-19 vaccines" that were "debunked by public health experts". This included "misinformation" about their efficacy, ingredients, safety, or side effects. Repeatedly sharing "debunked claims" risked escalating penalties for individual users/administrators, pages and groups. Penalties ranged from reduced visibility to removal and permanent suspension. For example, if a user posted that 'COVID vaccines cause infertility' "without evidence", this violated policy thresholds. The user was then asked to acknowledge the violation, or to appeal. Appeals were often denied if the content clashed with official narratives.
Meta could choose to permanently ban individual, fan page and group accounts on Facebook. For example, high-profile repeat offenders were targeted for removal. In November 2020, the page "Stop Mandatory Vaccination", one of the platform's largest "anti-vaccine" fan pages, was removed. Robert F. Kennedy Jr.'s Instagram account was permanently removed in 2021 for "sharing debunked COVID-19 vaccine claims". The non-profit he founded, Children's Health Defense, was suspended from both Facebook and Instagram in August 2022 for its repeated violations of Meta's COVID-19 misinformation policies.
Microsoft's LinkedIn generally applies stricter content moderation to professional content than other social networks. It updated its 'Professional Community Policies' for COVID-19 to prohibit content contradicting guidance from global health organisations, like the CDC and WHO. This included promoting unverified treatments and downplaying the "pandemic"'s severity. Although LinkedIn has not disclosed specific thresholds, high-profile cases show that the persistent sharing of contrarian COVID-19 views, especially if flagged by users or contradicting official narratives, would lead to removal. Dr Mary Talley Bowden, Dr Aseem Malhotra, Dr Robert Malone and Mr Steve Kirsch have all had their accounts permanently suspended.
#11 Non-disclosure of the rationale for account bans to account-holders
Social media platforms' Terms of Service (TOS) may ensure that these companies are not legally obligated to share information with their users on the precise reasons for their accounts being suspended. Popular platforms like Facebook, LinkedIn and X can terminate accounts at their sole discretion without providing detailed information to users. Such suspensions are typically couched opaquely in terms of policy violation (such as being in breach of community standards).
Less opaque details may be forthcoming if the platform's TOS is superseded by a country's, or regional bloc's, laws. In the US, Section 230 of the Communications Decency Act allows platforms to moderate content as they see fit. They are only obligated to disclose reasons under a court order, or if a specific law applies (such as one related to data privacy). By contrast, companies operating in European Union countries are expected to comply with the EU's Digital Services Act (DSA). Here, platforms must provide a 'statement of reasons' for content moderation decisions, including suspensions, with some level of detail about the violation. Whilst DSA-compliant feedback must be clear and user-friendly, granular specifics may not be required. In the EU and USA, COVID-19 dissidents could only expect detailed explanations in response to legal appeals, or significant public pressure. Internal whistleblowing and investigative reports, such as the Facebook and Twitter Files, also produced some transparency.
One outcome of this opaque feedback is that the reasons for dissident COVID-19 health experts' accounts being suspended are seldom made public. Even where dissidents have shared their experiences, the opaque processes and actors behind COVID-19 censorship remain unclear. Even reports from embedded researchers, such as The Center for Countering Digital Hate's "Disinformation Dozen", lack specificity. It reported that Meta permanently banned 16 accounts, and restricted 22 others, for "sharing anti-vaccine content" in response to public reporting in 2021. However, the CCDH did not explicitly name the health experts given permanent suspensions. Hopefully, a recent 171-page federal civil rights suit by half of the dissidents mentioned in this report against the CCDH, Imran Ahmed, U.S. officials and tech giants will expose more about who is behind prominent transnational censorship and reputational warfare (Ji, 2025).
#12 No public reports from platforms regarding account suspensions and censorship requests
Figure 2. Slide on 'Critical social justice as a protected ideology in Higher Education, but contested in social media hashtag communities' (Noakes, 2024)
More about censorship techniques against dissenters on social networks
- Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative
- Content suppression techniques against dissent in the Fifth Estate - examples of COVID-19 censorship on social media
N.B. I am writing a third post on account censorship during COVID-19, which will cover at least three more serious techniques. Do follow me on X to learn when it is published. Please suggest improvements to this post in the comments below, or reply to my tweet thread at https://x.com/travisnoakes/status/1930989080231203126.
Saturday, 29 March 2025
Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative
There has been extensive censorship of legitimate, expert criticism during the COVID-19 event (Kheriaty, 2022; Shir-Raz et al., 2023; Hughes, 2024). Such scientific suppression makes visible the narrow frame within which the sponsors of global health authoritarianism permit questioning of The Science™. In contrast to genuine science, which innovates through critique, incorporated science does not welcome questioning. Like fascism, corporatist science views critiques of its interventions as heresy. In the COVID-19 event, key opinion leaders who criticised the lack of scientific rigour behind public health measures (such as genetic vaccine mandates) were treated as heretics by a contemporary version of the Inquisition (Malone et al., 2024). Dissidents were accused of sharing "MDM" (Misinformation, Disinformation and Malinformation) assumed to place the public's lives at risk. Particularly in prestigious medical universities, questioning the dictates of health authorities and their powerful sponsors was viewed as unacceptable, completely outside an Overton Window that had become far more restrictive due to fear-mongering around a "pandemic" (see Figure 1).
Higher Education is particularly susceptible to this groupthink, as it lends itself to a purity spiral, which in turn contributes to the growing spiral of silence around "unacceptable views". A purity spiral is a form of groupthink in which it is more beneficial to hold some views than not to hold them. In a process of moral outbidding, individual academics with more extreme views are rewarded. This was evidenced at universities where genetic vaccine proponents loudly supported the mandatory vaccination of students, despite students having minimal, if any, risk. In contrast, scholars expressing moderation, doubt or nuance faced ostracism as "anti-vaxxers". In universities, there are strong social conformity pressures within their tight-knit communities. Grants, career support and other forms of institutional support depend on collegiality and alignment with prevailing norms. Being labelled a contrarian for questioning a 'sacred cow', such as "safe and effective" genetic vaccines, is likely to jeopardise one's reputation and academic future. Academic disciplines coalesce around shared paradigms and axiomatic truths, routinely amplifying groupthink. Challenging reified understandings as shibboleths can lead to exclusion from conferences and journals, and can cost scholars departmental, faculty and even university support. Particularly where powerful funders object to such dissent!
Here, administrative orthodoxy can signal an "official" position for the university that chills debate. Dissenters' fears of isolation and reprisal (such as poor evaluations and formal complaints for not following the official line) may convince them to self-censor. This is particularly likely where the nonconformist assesses that opinion against his or her viewpoint is virulent, and that the costs of expressing a disagreeable view (such as negotiating cancellation culture) are high. Individuals who calculate that they have a low chance of convincing others, and are likely to pay a steep price, self-censor and contribute to the growing spiral of silence. The COVID-19 event serves as an excellent example of this growing spiral's chilling effect on free speech and independent enquiry.
COVID-19 is highly pertinent for critiquing censorship in the Medical and Health Sciences, particularly as it featured conflicts of interest that contributed to global health "authorities'" policy guidance. Notably, the World Health Organisation promoted poorly substantiated and even unscientific guidelines (Noakes et al., 2021) that merit being considered MDM. In following such dictates from the top policy makers of the Global Public-Private Partnership (GPPP or G3P), most governments' health authorities seemed to ignore key facts. Notably: i. COVID-19 risk was steeply age-stratified (Verity et al., 2019; Ho et al., 2020; Bergman et al., 2021); ii. prior COVID-19 infection can provide substantial immunity (Nattrass et al., 2021); iii. COVID-19 genetic vaccines did not stop disease transmission (Eyre et al., 2022; Wilder-Smith, 2022); iv. mass-masking was ineffective (Jefferson et al., 2023; Halperin, 2024); v. school closures were unwarranted (Wu et al., 2021); and vi. there were better alternatives to lengthy, whole-society lockdowns (Coccia, 2021; Gandhi and Venkatesh, 2021; Herby et al., 2024). Both international policy makers' and local health authorities' flawed guidance must be open to debate and rigorous critique. If public health interventions had been adapted to such key facts during the COVID-19 event, the resultant revised guidance could well have contributed to better social, health and economic outcomes for billions of people!
This post focuses on six types of suppression techniques that were used against dissenting accounts whose voices were deemed illegitimate "disinformation" spreaders by the Global Public-Private Partnership (G3P)-sponsored censorship industrial complex. This is an important concern, since claims that suppressing free speech's digital reach can "protect public safety" were proved false during COVID-19. A case in point is the censorship of criticism of employees' vaccine mandates. North American employers' mandates are directly linked to excess disabilities and deaths among hundreds of thousands of working-age employees (Dowd, 2024). Deceptive censorship of individuals' reports of vaccine injuries as "malinformation", or the automatic labelling of criticism of Operation Warp Speed as "disinformation", hampered US employees' ability to make fully informed decisions about the safety of genetic vaccines. Such deleterious censorship must be critically examined by academics. In contrast, 'disinformation-for-hire' scholars (Harsin, 2024) will no doubt remain safely ensconced behind their profitable MDM blinkers.
This post is the first in a series that spotlights the myriad of account suppression techniques that exist. For each, examples of censorship against health experts' opinions are provided. Hopefully, readers can then better appreciate the asymmetric struggle that dissidents face when their accounts are targeted by the censorship industrial complex with a myriad of these strategies spanning multiple social media platforms:
Practices for @Account suppression
#1 Deception - users are not alerted to unconstitutional limitations on their free speech
#2 Cyberstalking - facilitating the virtual and physical targeting of dissidents
#3 Othering - enabling public character assassination via cyber smears
#4 Not blocking impersonators or preventing brandjacked accounts
Instead of blanket censorship, I am having YouTube bury all my actual interviews/content with videos that use short, out of context clips from interviews to promote things I would never and have never said. Below is what happens when you search my name on YouTube, every single… pic.twitter.com/xNGrfMMq52
— Whitney Webb (@_whitneywebb) August 12, 2024
Whether such activities come from intelligence services or cybercriminals, they are very hard for dissidents and/or their representatives to respond to effectively. Popular social media companies (notably Meta, X and TikTok) seldom respond quickly to scams, or to the digital "repersoning" discussed in a Corbett Report conversation between James Corbett and Whitney Webb.
In Corbett's case, after his account was scrubbed from YouTube, many accounts featuring his identity started cropping up there. In Webb's case, she does not have a public profile outside of X, but accounts featuring her identity were created on Facebook and YouTube. "Her" channels clipped old interviews she had given and edited them into documentaries on material Whitney has never publicly spoken about, such as Bitcoin and CERN. They also misrepresented her views on the transnational power structure behind the COVID-19 event, suggesting she held just Emmanuel Macron and Klaus Schwab responsible for driving it. They used AI-generated thumbnails of her, and superimposed her own words from the interviews. Such content proved popular and became widely reshared via legitimate accounts, pointing to the difficulty dissidents face in countering it. She could not get Facebook to take down the accounts without supplying a government-issued ID to verify her own identity.
Digital platforms may be uninterested in offering genuine support: they may not take any corrective action when following proxy orders from the US Department of State (aka 'jawboning') or members of the Five Eyes (FVEY) intelligence alliance. In stark contrast to marginalised dissenters, VIPs in multinationals enjoy access to online threat protection services (such as ZeroFox) for executives that cover brandjacking and over 100 other cybercriminal use-cases.
#5 Filtering an account's visibility through ghostbanning
As the Google Leaks (2019), Facebook Files (2021) and Twitter Files (2022) revelations have spotlighted, social media platforms have numerous algorithmic censorship options, such as filtering the visibility of users' accounts. Targeted users may be isolated and throttled for breaking "community standards" or government censorship rules. During the COVID-19 event, dissenters' accounts were placed in silos, de-boosted, and also subject to reply de-boosting. Contrarians' accounts were subject to ghostbanning (AKA shadow-banning), a practice that reduces an account's visibility or reach secretly, without explicitly notifying its owner. Ghostbanning limits who can see the posts, comments, or interactions. This includes muting replies and excluding targeted accounts' results under trends, hashtags, searches and in followers' feeds (except where users seek a filtered account's profile directly). Such suppression effectively silences a user's digital voice, whilst he or she continues to post under the illusion of normal activity. Ghostbanning is thus a "stealth censorship" tactic linked to content moderation agendas.
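To make the mechanism concrete, here is a conceptual Python sketch of ghostbanning. The flag names are hypothetical, loosely modelled on labels reported in the Twitter Files; this is not any platform's actual code:

```python
# Conceptual sketch of ghostbanning (shadow-banning): an account's posts stay
# visible on its own profile, but are filtered out of search, trends and
# followers' feeds. Flag names are hypothetical illustrations only.

from dataclasses import dataclass, field


@dataclass
class Account:
    handle: str
    flags: set = field(default_factory=set)  # e.g. {"search_blacklist", "do_not_amplify"}


def visible_in(account: Account, surface: str) -> bool:
    """Decide whether an account's posts appear on a given surface."""
    if surface == "profile":      # direct profile visits are never filtered
        return True
    if surface == "search":
        return "search_blacklist" not in account.flags
    if surface == "trends":
        return "trends_blacklist" not in account.flags
    if surface == "home_feed":    # followers' timelines
        return "do_not_amplify" not in account.flags
    return True


ghostbanned = Account("dissident_account",
                      {"search_blacklist", "trends_blacklist", "do_not_amplify"})
for surface in ("profile", "search", "trends", "home_feed"):
    print(surface, "->", "shown" if visible_in(ghostbanned, surface) else "hidden")
```

The point of the sketch is that the account owner sees no difference on his or her own profile, while every amplification surface quietly excludes the account.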
This term gained prominence with the example of the Great Barrington Declaration's authors, Professors Jay Bhattacharya, Martin Kulldorff and Sunetra Gupta. Published on October 4, 2020, this public statement and proposal flagged grave concerns about the damaging physical and mental health impacts of the dominant COVID-19 policies. It argued that focused protection should be followed rather than blanket lockdowns, and that allowing controlled spread among low-risk groups would eventually result in herd immunity. Ten days later, a counter-statement, the John Snow Memorandum, was published in defence of the official COVID-19 narrative's policies. Mainstream media and health authorities amplified it, as did social media, given the memorandum's alignment with prevailing platform policies against "misinformation" circa 2020. In contrast, the Great Barrington Declaration was targeted indirectly through platform actions against its proponents and related content:
Stanford Professor of Medicine, Dr Jay Bhattacharya's Twitter account was revealed (via the 2022 Twitter Files) to have been blacklisted, reducing its visibility. His tweets questioning lockdown efficacy and vaccine mandates were subject to algorithmic suppression. Algorithms could flag his offending content with labels like "Visibility Filtering" (VF) or "Do Not Amplify". For instance, Bhattacharya reported that his tweets about the Declaration and seroprevalence studies (showing wider COVID-19 spread than official numbers suggested) were throttled. Journalist Matt Taibbi's reporting on the Twitter Files leaks confirmed that Twitter had blacklisted Prof Bhattacharya's account, limiting its reach due to his contrarian stance. YouTube also removed videos in which he featured, such as interviews in which he criticised lockdown policies.
The epidemiologist and biostatistician, Prof Kulldorff, observed that social media censorship stifled opportunities for scientific debate. He experienced direct censorship on multiple platforms, including shadowbans. Twitter temporarily suspended his account in 2021 for tweeting that not everyone needed the COVID-19 vaccine ('Those with prior natural infection do not need it. Nor children'). Posts on X and web reports indicate Kulldorff was shadowbanned beyond this month-long suspension. The Twitter Files, released in 2022, revealed he was blacklisted, meaning his tweets' visibility was algorithmically reduced. Twitter suppressed Kulldorff's accurate genetic vaccine critique, preventing comments and likes. Internal Twitter flags like "Trends Blacklisted" or "Search Blacklisted" (leaked during the 2020 Twitter hack) suggest Kulldorff's account was throttled in searches and trends, a hallmark of shadowbanning where reach is curtailed without notification. Algorithmic deamplification excluded Prof Kulldorff's tweets from trends, search results and followers' feeds, except where users sought his profile directly. This reflects how social media companies may apply visibility filters (such as a Not Safe For Work (NSFW) view). Kulldorff also flagged that LinkedIn's censorship pushed him to platforms like Gab, implying a chilling effect on his professional network presence.
An Oxford University epidemiologist, Professor Gupta faced less overt account-level censorship, but still had to negotiate content suppression. Her interviews and posts on Twitter advocating for herd immunity via natural infection amongst the young and healthy were often flagged, or down-ranked.
#6 Penalising accounts that share COVID-19 "misinformation"
I am writing a series of posts on this topic that will cover more serious techniques. Do follow me on X to be alerted when they are published. Please share your views by commenting below, or reply to this tweet thread at https://x.com/travisnoakes/status/1906250555564900710.
Sunday, 22 December 2024
A role for qualitative methods in researching Twitter data on a popular science article's communication
Written for scholars and students who are interested in using qualitative research methods for research with small data, such as tweets on X.
The open-access paper by myself, Dr Corrie Uys, Dr Pat Harpur and Prof Izak van Zyl, 'A role for qualitative methods in researching Twitter data on a popular science article's communication', identifies several potential qualitative research contributions in analysing small data from microblogging communications:
Qualitative research can provide a rich contextual framing for how micro-practices (such as tweet shares for journal articles...) relate to important social dynamics (... like debates on paradigms within higher-level social strata in the Global Health Science field) plus professionals' related identity work. Also, in-depth explorations of microblogging data following qualitative methods can contribute to the research process by supporting meta-level critiques of missing data, (mis-) categorisations, and flawed automated (and manual) results.
Published in the Frontiers in Research Metrics and Analytics journal's special topic, Network Analysis of Social Media Texts, our paper responds to calls from Big Data communication researchers for qualitative analysis of online science conversations to better explore their meaning. We identified a scholarly gap in the Science Communication field regarding the role that qualitative methods might play in researching small data on micro-bloggers' article communications. Although social media attention assists with academic article dissemination, qualitative research into related microblogging practices is scant. To support calls for the qualitative analysis of such communications, we provided a practical example:
Mixed methods were applied to better understand an unorthodox, but popular, article (Diet, Diabetes Status, and Personal Experiences of Individuals with Type 2 Diabetes Who Self-Selected and Followed a Low Carbohydrate High Fat Diet) and its Twitter users' shares over two years. Big Data studies describe patterns in micro-bloggers' activities from large sets of data. In contrast, this small data set was analysed in NVivo™ by me (a pragmatist), and in MAXQDA™ by Corrie (a statistician). As part of the data preparation and cleaning, a comprehensive view of hyperlink sharing and conversations was developed, which quantitative extraction alone could not support: for example, it neglects general publication paths that fall outside listed academic publications and related formal correspondence (such as academic letters, and sharing via open resources).
My multimodal content analysis found that links related to the article were largely shared by health professionals. Its popularity related to its position as a communication event within a longstanding debate in the Health Sciences. This issue arena sees an emergent Insulin Resistance (IR) paradigm contesting the dominant "cholesterol" model of chronic disease development. Health experts mostly shared this article, and their profiles reflected support for the emergent IR paradigm. We identified that these professionals followed a wider range of deliberation practices than previously described by quantitative SciComm Twitter studies. Practices ranged from inclusion in a lecture reading list, to language localisation in translating the article's title from English to Spanish, to study participants mentioning their involvement. Contributing under their genuine identities, expert contributors carried the formal norms of civil communication into the scientific Twitter genre. There were no original URL shares from IR critics, suggesting that sharing evidence for an unconventional low-carbohydrate, healthy-fats approach might be viewed as undermining orthodox identities. However, critics did respond with pro-social replies and constructive criticism linked to the article's content and its methodological limitations.
The statistician's semantic network analysis (SNA) confirmed that terms used by the article's tweeters related strongly to the article's content, and that its discussion was pro-social. A few prominent IR advocates and organisations shared academic links to the article repeatedly, with its most influential tweeters and sharers being from England and South Africa. Using ATLAS.ti's and MAXQDA's tools for automated sentiment analysis, the statistician found many instances where sentiment was inaccurately described as negative when it should have been positive. This suggested a methodological limitation of quantitative approaches, such as QDAS, in (i) accurately analysing microblogging data. The SNA also uncovered concerns with (ii) incorrect automated counts for link shares. Concerns i and ii indicate how microblogging statistics may oversimplify complex categories, leading to inaccurate comparisons. In response, close readings of microblogging data present a distinct opportunity for meta-critique. Qualitative research can support critiques of microblogging data sources, as well as their use in QDAS. A lack of support for static Twitter data spreadsheet analysis was also concerning.
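This kind of meta-critique can be made systematic. The sketch below (Python with pandas, using invented example data rather than our actual dataset or QDAS output) shows one way to tabulate where an automated sentiment label disagrees with a researcher's manual code, so that misclassified tweets can be re-read in context:

```python
# Sketch: auditing automated sentiment labels against manual qualitative codes.
# The data below are invented for illustration; in our study the automated
# labels came from QDAS tools and the manual codes from close reading.

import pandas as pd

tweets = pd.DataFrame({
    "tweet_id":       [1, 2, 3, 4],
    "auto_sentiment": ["negative", "positive", "negative", "neutral"],
    "manual_code":    ["positive", "positive", "positive", "neutral"],
})

# Flag disagreements for re-reading in context.
tweets["mismatch"] = tweets["auto_sentiment"] != tweets["manual_code"]
print(tweets[tweets["mismatch"]])

# Simple agreement rate as a rough indicator of automated-label reliability.
agreement = 1 - tweets["mismatch"].mean()
print(f"Agreement rate: {agreement:.0%}")
```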
Meta-inferences were then derived from the two methods' varied claims above. These findings flagged the importance of contextualising a health science article's sharing in relation to tweeters' professional identities and stances on what is healthy. In addition, meta-critiques spotlighted challenges with preparing accurate tweet data, and their analysis via qualitative data analysis software. Such findings suggest the valuable contributions that qualitative research can make to research with microblogging data in science communication.
The manuscript's development history
In 2020, Dr Pat Harpur and I selected an outlier IR scientific publication based on its unusually high Twitter popularity. At that time, the editorial 'It is time to bust the myth of physical inactivity and obesity: you cannot outrun a bad diet' had been tweeted about over 3,000 times (now nearing 4,000 according to Altmetric!). However, analysing this highly popular outlier stalled after its static export in qualitative data analysis software proved unsuitable for efficient coding. The large volume of tweet data also proved very difficult to analyse. Accordingly, we shifted focus to a popular article that had been shared as an episode of a broader, long-running IR versus cholesterol debate. Even with its relatively small volume of tweets, organising this data for qualitative analysis proved challenging. For example, it was necessary to refine the Python extraction code, while cross-checks of static exports against Twitter search results necessitated the capture of "missing" conversations, as sketched below.
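As a hedged illustration of that cross-checking step, the following Python sketch compares tweet IDs in a static spreadsheet export against IDs captured from live Twitter searches; the file and column names are hypothetical, not those of our actual project files:

```python
# Sketch of cross-checking a static tweet export against live search captures.
# File and column names are hypothetical; the point is simply to surface tweet
# IDs present in one source but missing from the other.

import pandas as pd

static_export = pd.read_csv("static_export.csv", dtype={"tweet_id": str})
live_capture = pd.read_csv("live_search_capture.csv", dtype={"tweet_id": str})

static_ids = set(static_export["tweet_id"])
live_ids = set(live_capture["tweet_id"])

missing_from_static = sorted(live_ids - static_ids)  # conversations the export dropped
missing_from_live = sorted(static_ids - live_ids)    # tweets the live search did not return

print(f"{len(missing_from_static)} tweets found live but absent from the static export")
print(f"{len(missing_from_live)} tweets in the export but not returned by live search")
```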
We originally developed a multimodal analysis of these tweets, which focused on their relationship to Twitter users' profiles, potentially reflecting a wide range of communication goals. Our manuscript was submitted in 2022 to Science Communication, where Professor Susanna Priest kindly gave in-depth feedback on changing the original manuscript's contribution to a methodological one. We tackled this by developing a rationale for qualitative research with small data in the substantially revised article, for which Dr Corrie Uys did a semantic network analysis, while I revisited the social semiotic analysis.
If you have any questions, comments or concerns about our article, please comment below.
Acknowledgements
P.S. Related research manuscript from the team
Tuesday, 26 September 2023
Noteworthy disparities with four CAQDAS tools: explorations in organising live Twitter (now known as X) data
QDAS tools that support live data extraction are a relatively recent innovation. At the time of our fieldwork, only four prominent QDAS packages provided this: ATLAS.ti™, NVivo™, MAXQDA™ and QDA Miner™ had Twitter data import functionalities. Little has been written concerning the research implications of differences between their functionalities, and how such disparities might contribute to contrasting analytical opportunities. Consequently, early-stage researchers may experience difficulties in choosing an apt QDAS to extract live data for Twitter academic research.
To ensure that each week's Twitter data extractions could produce much data for potential evaluation, we focused on extracting and organising communiqués from the national electricity company, the Electricity Supply Commission (Eskom). 'Load-shedding' is the Pan South African Language Board's word of the year for 2022 (PanSALB, 2022), due to its frequent use in credible print, broadcast and online media. Invented as a euphemism by Eskom's public-relations team, load-shedding describes electricity blackouts. Since 2007, planned rolling blackouts have been used in a rotating schedule for periods 'where short supply threatens the integrity of the grid' (McGregor & Nuttall, 2013). In the weeks up to, and during, the researchers' fieldwork, Eskom and the different stages of load-shedding trended strongly on Twitter. These tweets reflected the depth of public disapproval, discontent, anger, frustration and general concern.
QDAS packages commonly serve as tools that researchers can use for four broad activities in the qualitative analysis process (Gilbert, Jackson, & di Gregorio, 2014). These are (a) organising: coding sets, families and hyperlinking; (b) exploring: models, maps, networks, coding and text searches; (c) reflecting: memoing, annotating and mapping; and (d) integrating qualitative data: memoing with hyperlinks and merging projects (Davidson & di Gregorio, 2011; Di Gregorio, 2010; Lewins & Silver, 2014).
Please comment below if you have any questions or comments regarding our paper.
Thursday, 29 June 2023
Twitter Support must do better at helping celebrity and public victims of a global diet phishing scam!
This post presents the underwhelming example of reporting diet phishing accounts to Twitter Support as a way to spotlight the difficulties of tackling fraud via social media platforms. Hopefully publicly shaming @TwitterSupport will encourage its leaders to help address the global diet phishing scam properly, whilst also providing decent reporting options for celebrities and their representatives:
South African celebrities hijacked in fake diet adverts
A major factor in the "success" of this global scam (it has been running since 2014!) is the poor response from Facebook, Instagram, Twitter and other social media companies to formal requests to close fake accounts and their advertisement campaigns. Their ineffective responses are legally shortsighted: social media companies that repeatedly permit diet phishing ads on their platforms are complicit in a fraud, and possibly in the delict of passing off. For example, in South Africa, the diet phishing scam has undoubtedly harmed the reputations of Prof Tim Noakes and The Noakes Foundation through its fraudulent, direct misrepresentation of fake products. These have certainly confused the public, and @TheNoakesF has lost goodwill from the many victims of the fraud's misrepresentation!

Since Prof Noakes' identity was first hijacked in 2020, The Noakes Foundation (TNF) and partners (such as Dr Michael Mol and Hello Doctor) have tried many options to stop the scam. For example, TNF developed and publicised content against it via blogposts, such as Keto Extreme Scams Social Media Users Out of Thousands. TNF also produced these videos: Professor Tim Noakes vs. Diet Phishing: Exposing a Global Scam with Fake Celebrity Endorsements, Dr Michael Mol highlighting Diet Scams and Prof Noakes Speaks Out Against The Ongoing Diet Scam. Sadly, The Noakes Foundation's repeated warnings to the public don't seem to be making much difference in preventing new victims!
American, Australian, British and Swedish celebs hijacked, too!
In the United States, the diet phishing scam has also stolen the identities of major celebrities. Most are in popular TV franchises: Oprah Winfrey (@Oprah), Dr Mehmet Oz (@DrOz), Dr Phil (@DrPhil), Dolly Parton (@DollyParton), Kelly Clarkson (@kellyclarkson), the Kardashian Family (@kardashianshulu + @KimKardashian), Kelly Osbourne (@KellyOsbourne), Chrissy Teigen (@chrissyteigen), Martha MacCallum (@marthamaccallum), Blake Shelton (@blakeshelton) and #TomSelleck 🥸. It's a Magnum opus of fraud!

Amazing female celebs in the United Kingdom have also seen their identities stolen. Diet phishing scammers have hijacked the IDs of Holly Willoughby (@hollywills), Amanda Holden (@AmandaHolden), Anne Hegerty (@anne_hegerty) and Dawn French (@Dawn_French). Even the British Royal Family (@RoyalFamily) has not been immune, with the targeting of Catherine, the Princess of Wales (@KensingtonRoyal) and the late Queen Elizabeth II, RIP and God bless. Sadly, Meghan, Duchess of Sussex, has been targeted too...
Down Under, well-known Australian personalities, such as its national treasure Maggie Beer (@maggie_beer) and Farmer Wants A Wife host Sam Armytage (@sam_armytage) have had their identities misused for fake #weightloss endorsements. And also Mr Embarrassing Bodies Down Under himself, Dr Brad McKay (@DrBradMcKay).
In Sweden, Dr Andreas Eenfeldt (@DrEenfeldt from @DietDoctor), another leader in the low carbohydrate movement, has been targeted in promotions of fake #keto products. Sadly, the fake ads seem to generate far more attention and action than his or my father's health advice!
N.B. The examples above are not exhaustive of all victims. We largely know of celebrities in the Anglosphere whose identities were stolen, then featured in English-language reports and related search engine results.
Deceptive "Tim Noakes" Twitter accounts market Keto Gummies
Just as the celebrity names stolen for the fake ads change often, so do the product names. A few examples of these fake names are Capsaicin, FigurWeightLossCapsules, Garcinia, Ketovatru and KetoLifePlus. Be warned that new "products" are added every month! One particularly common term used in the scammers' product names is "Keto Gummies". A recent Twitter search for "Tim Noakes keto gummies" suggested many fake accounts in Figure 1 (just the top view!), plus diverse "product" names.

Figure 1. Twitter search results for Tim Noakes keto gummies (fake product accounts) (20 June, 2023)
Twitter Support does not think fake accounts are misleading and deceptive?!
These accounts have clearly been set up to fraudulently market "keto gummies" by suggesting an association with "Tim Noakes". So, the logical response for any representative of The Noakes Foundation would seem to be reporting each fake account for violating Twitter's misleading and deceptive identities policy, right?

Fake Twitter accounts, including those below, were reported to Twitter, with support documentation:
- @NoakesGumm28693 (0327118996)
- @TimNoakesHoax (0327120384)
- @TimGummies (0327119602)
- @NoakesGumm91126 (0327119675)
- @gummies_tim (0327120030)
- @TimNoakes_ZA (0327119741)
- @tim_gummies (0327118910)
- @NoakesSouth (0327118634)
- @timnoakesketo0 (0327119362)
- @NoakesGumm22663 (0327119487)
In each case, @TwitterSupport replied that the reported accounts are NOT in violation of Twitter's misleading and deceptive identities policy. This would seem to contradict the obvious evidence that Tim Noakes' name has been hijacked by scammers to mislead victims with a fake product!
This "Tim Noakes keto gummies" Twitter account is not deceptive?!
Figure 3 shows a typical example of a fake account's style. It uses Tim Noakes' name, plus stock photography, in marketing a non-existent product. It only tweeted on May the 24th, and is followed by one person. Any knowledgeable complaint reviewer would surely consider this to be a case of a scammer creating a misleading and deceptive account to game Twitter's search engine. However, Twitter Support does not agree, nor does it explain why in its generic correspondence around each scam account.