Friday, 6 June 2025
Techniques for suppressing health experts' social media accounts (7 - 12, part 2) - The Science™ versus key opinion leaders challenging the COVID-19 narrative
Written for researchers and others interested in the many techniques used to suppress COVID-19 dissidents' social media accounts, and digital voices.
This is the second post alerting readers to the myriad of techniques that social media companies continue to use against key opinion leaders who question the dominant consensus from The Science™. While examples are provided of techniques used against prominent critics of the official COVID-19 narrative, these techniques are readily applied to stifle dissent in other issue arenas. These range from the United States of America's support for forever wars via a ghost budget (Bilmes, 2018), to man-made climate change, and from low carbohydrate diets to transgender "gender affirming" medical surgery ideology. These dogmatic prescriptions by the global policy decision makers of the Global Public-Private Partnership (GPPP or G3P) are presented as a "scientific consensus", but are unscientific when protected from being questioned, especially by legitimate experts with dissenting arguments.
#7 Concealing the sources behind dissidents' censorship
An important aspect of information control is that the sources behind it are very well hidden from the public. For the organisers of propaganda, the ideal targets do not appreciate that they are receiving propaganda, nor should they recognise its source. Their audiences' ignorance is a key aspect of psychological warfare, otherwise known as fifth-generation warfare (Abbot, 2010; Krishnan, 2024). Likewise for censors: their targets and those targets' followers should ideally not be aware that censorship is occurring, nor able to identify its sources. Accordingly, there is significant deception around the primary sources of social media censorship being the platforms themselves and their policies. Instead, these platforms are largely responding to co-ordinated COVID-19 narrative control from G3P members who span each of the six estates*.
The well-funded complicity theorists for a COVID-19 "Infodemic" (for example, Calleja et al., 2021; Caulfield, 2020; DiResta, 2022; Schiffrin, 2022) may genuinely believe in advocating for censorship as a legitimate, organic counterweight to "malinformation". In contrast, researchers at Unlimited Hangout point out that this censorship is highly centralised, targeting opinions that are deemed "illegitimate" merely for disagreeing with the positions of the most powerful policy makers at the G3P's macro-level. Iain Davis writes that the G3P policy makers are Chatham House, the Club of Rome, the Council on Foreign Relations, the Rockefellers and the World Economic Forum. Each guides international policy distributors, including the International Monetary Fund, the Intergovernmental Panel on Climate Change, the United Nations and the World Health Organisation, plus "philanthropists" (e.g. the Bill and Melinda Gates Foundation (BMGF)), multinational corporations and global non-governmental organisations.
Mr Bill Gates serves as an example of the Sixth Estate exercising undue influence on public health, especially Africa's: his foundation is the largest private benefactor of the World Health Organization. The BMGF finances the health ministries of virtually every African country. Mr Gates can place conditions on that financing, such as vaccinating a certain percentage of a country's population. Some vaccines and health-related initiatives that these countries purchase are developed by companies that Gates' Cascade Investment LLC invests in. As a result, he can benefit indirectly from stock appreciation. This is alongside tax savings from his donations, whilst his reputation as a 'global health leader' is further burnished. In South Africa, the BMGF has directly funded the Department of Health, the regulator SAHPRA, the Medical Research Council, top medical universities and the media (such as the Mail and Guardian's health journalism centre, Bhekisisa). All would seem highly motivated to protect substantial donations by not querying Mr Gates' vaccine altruism. However, the many challenges of the Gates Foundation's dominating role in transnational philanthropy must not be ignored. Such dominance poses a challenge to justice: locals' rights to control the institutions that can profoundly impact their basic interests (Blunt, 2022). While the BMGF cannot be directly tied to COVID-19 social media account censorship, it is indisputable that Mr Gates' financial power and partner organisations indirectly suppressed dissenting voices by prioritising certain COVID-19 treatment narratives (Politico, 2022a, 2022b).
At a meso-level, select G3P policy enforcers ensure that the macro-level's policy directives are followed by both national governments (and their departments, such as health) and scientific authorities (including the AMA, CDC, EMA, FDA, ICL, JCVI, NERVTAG, NIH, MHRA and SAGE). Enforcers strive to prevent rival scientific ideas from gaining traction and thereby challenging the policymakers' dictates. These bodies task psychological 'nudge' specialists (Junger and Hirsch, 2024), propagandists and other experts with convincing the public to accept, and ideally buy into, G3P policies. This involves censorship and psychological manipulation via public relations, propaganda, disinformation and misinformation. The authors of such practices are largely unattributed. Dissidents facing algorithmic censorship through social media companies' opaque processes of content moderation are unlikely to be able to identify the true originator of their censorship in such a complex process. Content moderation is a 'multi-dimensional process through which content produced by users is monitored, filtered, ordered, enhanced, monetised or deleted on social media platforms' (Badouard and Bellon, 2025). This process spans a 'great diversity of actors' who develop specific practices of content regulation (p. 3). Actors may range from activist users and researchers who flag content, to fact-checkers from non-governmental organisations and public authorities. If such actors disclose their involvement in censorship, this may only happen much later. For example, Mark Zuckerberg's 2024 letter to the House Judiciary Committee revealed that the Biden administration pressured Meta to censor certain COVID-19 content, including humour and satire, in 2021.
#8 Blocking a user’s access to his or her account
A social media platform may stop a user from being able to log in to his or her account. Where the platform does not make this blocking obvious to a user's followers, this is deceptive. For example, Emeritus Professor Tim Noakes' Twitter account was deactivated for months after he queried health authorities' motivations in deciding on interventions during the COVID-19 "pandemic". Many viewers would not recognise that his seemingly live profile was in fact inactive. The only clue was that @ProfTimNoakes had not tweeted for a long time, which was highly unusual.
This suspension followed Twitter's introduction of a "five-strike" system, with repeat offenders or egregious violations leading to permanent bans. Twitter's system tracked violations, with the first and second strikes resulting in a warning or temporary lock. A third strike resulted in a 12-hour suspension, a fourth strike in a seven-day suspension, and a fifth strike in a permanent ban (this escalation ladder is sketched in the code example after Figure 1). In Professor Tim Noakes' case, he was given a vague warning regarding 'breaking community rules etc.' (email correspondence, 24.10.2022). This followed his noticing a loss of followers and his tweets' reach being restricted. Twitter 'originally said I was banned for 10 hours. But after 10 hours when I tried to re-access it they would not send me a code to enter. When I complained they just told me I was banned. When I asked for how long, they did not answer.' In reviewing his tweets, Prof. Noakes noticed that some had been (mis-)labelled by Twitter as "misleading" before his suspension (see Figure 1 below).
Figure 1. Screenshot of @ProfTimNoakes' "controversial" tweet on President Macron not taking COVID-19 'experimental gene therapy' (24 October, 2022)
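To make the escalation ladder concrete, here is a minimal, hypothetical Python sketch of the five-strike thresholds described above. The function name and structure are illustrative assumptions, not Twitter's actual implementation.

```python
# Hypothetical sketch of the five-strike escalation ladder described above.
# Names and structure are illustrative only, not Twitter's actual code.

def penalty_for_strike(strike_count: int) -> str:
    """Map a running strike count to the published penalty tier."""
    if strike_count <= 2:
        return "warning or temporary account lock"
    if strike_count == 3:
        return "12-hour suspension"
    if strike_count == 4:
        return "7-day suspension"
    return "permanent ban"  # fifth strike and beyond

# Example: a user accumulating labelled-tweet strikes over time.
for strike in range(1, 6):
    print(strike, "->", penalty_for_strike(strike))
```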
Prof Noakes had also tweet-quoted Alec Hogg's BizNews article regarding Professor Salim Abdool Karim's conflicts of interest, adding 'something about' cheque book science. The @ProfTimNoakes account was left in a state of limbo after seven days, but was never permanently banned. Usually, accounts placed on "read-only" mode, or temporary lockouts, required tweet deletion to regain full access. However, @ProfTimNoakes' latest tweets were not visible, and he was never asked to delete any. In addition to account login blocks, platforms may also suspend accounts from being visible, but this was not applied to @ProfTimNoakes. In response to being locked out, Prof Noakes shifted to using his alternate @loreofrunning account; its topics of nutrition, running and other sports seemed safe from the reach of unknown censors' Twitter influence.
#9 Temporary suspensions of accounts (temporary user bans)
#10 Permanent suspension of accounts, pages and groups (complete bans)
In contrast to Twitter's five-strike system, Meta's Facebook system was not as formalised. It tracked violations on accounts, pages and groups. Pages and groups serve different functions in Facebook's system architecture (Broniatowski et al., 2023): only page administrators may post in pages, which are designed for brand promotion and marketing. In contrast, any member may post in groups, which serve as a forum for members to build community and discuss shared interests. In addition, pages may serve as group administrators. From December 2020, Meta began removing "false claims about COVID-19 vaccines" that were "debunked by public health experts". This included "misinformation" about their efficacy, ingredients, safety, or side effects. Repeatedly sharing "debunked claims" risked escalating penalties for individual users/administrators, pages and groups. Penalties ranged from reduced visibility to removal and permanent suspension. For example, if a user posted that 'COVID vaccines cause infertility' "without evidence", this violated policy thresholds. The user was then asked to acknowledge the violation, or appeal. Appeals were often denied if the content clashed with official narratives.
Meta could choose to permanently ban individual, fan page and group accounts on Facebook. For example, high-profile repeat offenders were targeted for removal. In November 2020, the page "Stop Mandatory Vaccination", one of the platform's largest "anti-vaccine" fan pages, was removed. Robert F. Kennedy Jr.'s Instagram account was permanently removed in 2021 for "sharing debunked COVID-19 vaccine claims". The non-profit he founded, Children's Health Defense, was suspended from both Facebook and Instagram in August 2022 for its repeated violations of Meta's COVID-19 misinformation policies.
Microsoft's LinkedIn generally has stricter content moderation for professional content than other social networks. It updated its 'Professional Community Policies' for COVID-19 to prohibit content contradicting guidance from global health organisations, like the CDC and WHO. This included promoting unverified treatments and downplaying the "pandemic"'s severity. Although LinkedIn has not disclosed specific thresholds, high-profile cases show that the persistent sharing of contrarian COVID-19 views, especially if flagged by users or contradicting official narratives, would lead to removal. The accounts of Dr Mary Talley Bowden, Dr Aseem Malhotra, Dr Robert Malone and Mr Steve Kirsch have all been permanently suspended.
#11 Non-disclosure of information around banning's rationale for account-holders
Social media platforms' Terms of Service (TOS) may ensure that these companies are not legally obligated to share information with their users on the precise reasons for their accounts being suspended. Popular platforms like Facebook, LinkedIn and X can terminate accounts at their sole discretion without providing detailed information to users. Such suspensions are typically couched opaquely in terms of policy violation (such as being in breach of community standards).
Less opaque details may be forthcoming if the platform's TOS is superseded by a country's, or regional bloc's, laws. In the US, Section 230 of the Communications Decency Act allows platforms to moderate content as they see fit. They are only obligated to disclose reasons under a court order, or if a specific law applies (such as one related to data privacy). By contrast, companies operating in European Union countries are expected to comply with the EU's Digital Services Act (DSA). Here, platforms must provide a 'statement of reasons' for content moderation decisions, including suspensions, with some level of detail about the violation. Whilst compliant feedback must be clear and user-friendly, granular specifics may not be a DSA requirement. In the EU and USA, COVID-19 dissidents could only expect detailed explanations in response to legal appeals, or significant public pressure. Internal whistleblowing and investigative reports, such as the Facebook and Twitter Files, also produced some transparency.
One outcome of this opaque feedback is that the reasons for dissident COVID-19 health experts' accounts being suspended are seldom made public. Even where dissidents have shared their experiences, the opaque processes and actors behind COVID-19 censorship remain unclear. Even reports from embedded researchers, such as the Center for Countering Digital Hate's "Disinformation Dozen", lack specificity. It reported that Meta permanently banned 16 accounts, and restricted 22 others, for "sharing anti-vaccine content" in response to public reporting in 2021. However, the CCDH did not explicitly name the health experts given permanent suspensions. Hopefully, a recent 171-page federal civil rights suit by half of the dissidents mentioned in this report against the CCDH, Imran Ahmed, U.S. officials and tech giants will expose more about who is behind prominent transnational censorship and reputational warfare (Ji, 2025).
#12 No public reports from platforms regarding account suspensions and censorship requests
Figure 2. Slide on 'Critical social justice as a protected ideology in Higher Education, but contested in social media hashtag communities' (Noakes, 2024)
More about censorship techniques against dissenters on social networks
- Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative
- Content suppression techniques against dissent in the Fifth Estate - examples of COVID-19 censorship on social media
N.B. I am writing a third post on account censorship during COVID-19 that will cover at least three more serious techniques. Do follow me on X to learn when it is published. Please suggest improvements to this post in the comments below, or reply to my tweet thread at https://x.com/travisnoakes/status/1930989080231203126.
Wednesday, 9 April 2025
Wanted - Fair critics of 'Promoting Vaccines in South Africa: Consensual or Non-Consensual Health Science Communication?'
Written for health science communication researchers concerned with genetic vaccination promotion being non-consensual and a form of propaganda.
Since June 2023, Dr Piers Robinson, Dr David Bell and I have submitted the titular manuscript to nine journals, receiving no strong reviews and many desk rejections without solid explanation. This is despite our journal search from 2024 focusing on seemingly suitable journals that met the criteria of tackling (i) health communication and (ii) propaganda, and (iii) having previously shared controversial articles questioning the official COVID-19 narrative. Since we cannot identify any viable new targets, we have decided to share our manuscript as a pre-print on SSRN and ResearchGate. We hope that readers there can at least offer solid, constructive criticism of our work.
As scholars know, every journal submission can take many hours: preparing the related documentation, formatting the manuscript to a journal's stylistic specifications, and so on. In return for such lengthy academic labour, authors might reasonably expect that editorial teams will be highly ethical in providing detailed reasoning behind desk-rejections. Where there is a strong pattern of such feedback being absent, or poor, on controversial topics, dissident authors may justifiably perceive that they are negotiating an academic journal publication firewall. Why would editors be reluctant to go on record with their reasons for desk-rejection, if those reasons are indisputable? Even when editorial staff's feedback is highly critical, it is still constructive for authors. They can then completely revise their manuscript for submission to new journals. Or perhaps save time, by confronting the reality that their manuscript's weak or non-existent contribution means it must be abandoned!
Our frustration with not receiving constructive criticism is similar to accounts from many other dissenters against the official COVID-19 narrative. Notably, Professors Bhattacharya and Hanke (2023) documented dissidents' censorship experiences via popular pre-print options. Professor Norman Fenton (in Fighting Goliath, 2024) and Dr Robert Malone (in PsyWar, 2024) provide compelling accounts of shifting from being welcome journal authors and conference speakers to being unpublishable for any manuscript critical of COVID-19 statistics or treatment policies. Given their high levels of expertise and long publication records, such experts would seem unlikely to have produced fallacious research unsuited to peer review.
Our wannabe-journal article tackles an important, albeit controversial, question: how might pharmaceutical or medical propaganda be distinguished from health communication? South Africa's (SA) case of COVID-19 genetic vaccine promotion is described to show how incentivization, coercion and deceptive messaging approximated a non-consensual approach, preventing informed consent for pregnant women. In terms of generalisability, this case study can be described as a hard case: given the status of pregnant women as perhaps the most vulnerable and protected category in society, one expects health communicators to be extremely cautious about adopting non-consensual methods of persuasion. We show that non-consensual persuasion was nonetheless used on this group in South Africa, making it more likely that such tactics were also used for other, less vulnerable groups.
In desk rejecting our work, editors and reviewers may well have thought that evaluating persuasive communication in terms of whether or not it is deceptive and non-consensual is not, in some sense, a legitimate research question. In stark contrast, as Dr Piers Robinson argues (at the end of this linked thread), our research question is indeed 'an essential part of evaluating whether any given persuasion campaign can be said to meet appropriate ethical/democratic standards. With the attention to fake news and disinformation, there is in fact much in the way of scholarly attention to questions of deceptive or manipulative communication. So we are not asking a question that is not asked by many others and across a wide range of issue areas. And we utilised a conceptual framework developed and published elsewhere.'
Another concern may be that our manuscript is "biased" to 'reach a predetermined outcome'. This ignores the possibility that our work could have found no evidence of deceptive communication, and none of incentivization. However, the evidence presented does strongly support a major concern that pregnant women were incentivised, deceived and coerced into taking (poorly-tested) genetic vaccines, whose side-effects are also poorly tracked. In the absence of detailed editor rejection feedback, it is hard for us to improve our argument for a hoped-for peer review that is fair.
It is also important to acknowledge the context in which our paper was written, which is one of considerable scientific concern over the COVID-19 event. Notably, rushed guidance based on weak evidence from international health organisations could well have perpetuated negative health and other societal outcomes, rather than ameliorating them (Noakes, Bell & Noakes, 2022). In particular, health authorities rushed approval of genetic vaccines as the primary response, and their "health promotion" seems a ripe target for robust critique, particularly when successful early treatments were widely reported to have been suppressed so that Emergency Use Authorisation for genetic vaccines could be granted (Kennedy, 2021).
An unworthy topic?
Our negative experience of repeated, poorly (or un-) explained rejections would seem to suggest that presenting South Africa's case of COVID-19 genetic vaccine promotion as pharmaceutical/medical propaganda was not worthy of academic journals' review, even for those promising to tackle scientific controversies and challenging topics.
Not unexpectedly, SSRN removed our pre-print after a week, providing the following email rationale: 'Given the need to be cautious about posting medical content, SSRN is selective on the papers we post. Your paper has not been accepted for posting on SSRN.' So, no critique of the paper's facts or methods, just rapid removal of our COVID-19 "health communication" critique. In SSRN's defence, its website's FAQs do flag that 'Medical or health care preprints at SSRN are designed for the rapid, early dissemination of research findings; therefore, in most instances, we do not post reviews or opinion-led pieces, as well as editorials and perspectives.' So perhaps the latter concern was indeed the most significant factor in SSRN's decision... But with no specific explanation of its rationale, it is also possible that our critique of COVID-19 "health science communication" weighed more heavily with the human decision makers. Alternatively, an Artificial Intelligence agent wrote the rejection email, triggered by our sensitive keywords: COVID-19 + propaganda = a must-reject routine.
A history of a manuscript's rejection in one image
Over two years, we also refined our manuscript to focus narrowly on 'non-consensual Health Science Communication', rather than propaganda. While the latter term is accurate, we recognised that it could be too contentious for some editors and reviewers, so we revised the initial title. Our analysis was clearly bounded to describe the ways in which non-consensual persuasion tactics were employed in South Africa to promote uptake of the COVID-19 vaccines. There are several vulnerable categories (such as teenagers), and we decided to focus on pregnant women, or women wanting to be mothers. We explored the local incentives and coercive measures (both consensual and non-consensual) that were used in South Africa during the COVID-19 event. Our manuscript then critiqued deceptive messaging on the safety of the Pfizer BioNTech Comirnaty® vaccine in a Western Cape government flyer. We also examined the South African Health Products Regulatory Authority's vaccine safety monitoring and reporting of adverse events following immunisation (SAHPRA AEFI), contrasting how it (does not) report on outcomes for women's health with the Vaccine Adverse Event Reporting System (VAERS). If there is a methodological flaw in this approach, we are open to suggestions on improving it.
That said, there are some changes that we would like the opportunity to argue against. For example, our title might be criticised for not addressing harms to "pregnant people". However, following such advice would distract from how genetic vaccines have proven especially damaging to biological females. Likewise, our definition of "health science communication" can be criticised as a narrow one, especially for South Africa's myriad of health contexts. While this is true and we should gloss this limitation, we must also prioritise what is core within a 10,000 word limit. Expanding our focus to a broad view of science communication in SA would inevitably require the removal of evidence related to the Organised Persuasive Communication Framework's consensual versus non-consensual aspects. This would distract from our paper's core focus.
The inspiration for our original manuscript
The original paper was drafted for a special issue of the Transdisciplinary Research Journal of Southern Africa, which focuses on 'Efficacy in health science communication in a post-pandemic age: Implications for Southern Africa'. In a small way, our review article was inspired as a critique of two assumptions in the opening paragraph of the special issue's call: (1) 'Much of the broad population and indeed more of the intelligentsia than one would imagine arguably remain to a greater or lesser degree sceptical of science' and (2) 'widespread suspicion of the origin of the virus seemingly fuelled by conspiracy theories, and of surprising levels of vaccine hesitancy voiced in a range of guises.'
In the first place, there is a difference between science and following The Science™ from a transglobal vaccine cartel. Individuals or groups did have sound scientific grounds to reject genetic vaccination. Indeed, individuals with PhDs were the most likely to reject being "vaccinated" with a rushed and poorly-tested product. Secondly, the theory that COVID-19 emerged from the Wuhan lab is not a "conspiracy theory", but just one of four possible explanations (the others being zoonotic (animal-to-human) origins, a deliberate bio-weapon release, or a prior endemicity 'discovered' by an outbreak of testing).
To flag the danger of assumptions such as (1) and (2) being presented as "fact", our review originally sought to spotlight a major, but neglected, issue in the health communication field: what is pharmaceutical propaganda, and how does it differ from health communication? Media studies and health communication scholars should be exercising hyper-reflexivity in considering how the communications they study typically emerge in an externally directed field. Their field's solutionist emphasis is often driven by the motives of powerful external groupings, such as national government departments or multinational pharmaceutical companies. Such actors can be incentivised to manipulate messaging for reasons other than the simple concern to protect the public's wellbeing during a perceived crisis or emergency.
Our reflexive article was originally rejected without explanation by one of the special issue's editors. I have tweeted about how such behaviour is unacceptable, plus how AOSIS could update its policy to specify that an editor must provide explicit feedback on the reasons for a desk rejection. This would meet COPE's guideline that editors meet the needs of authors. Otherwise, rejected authors might suspect that an AOSIS journal is not championing freedom of expression (and rather practicing scientific suppression), and is not precluding business needs (e.g. pharmaceutical support) from compromising intellectual standards. Tackling the danger of "successful" communications for dangerous pharmaceutical interventions as pharmaceutical propaganda is important, particularly given the rise of health authoritarianism during a "pandemic".
Constructive criticism, plus new journal targets welcome?
We believe that our topic of how incentivization, coercion and deceptive COVID-19 messaging approximated a non-consensual approach is highly salient. Without sound rationales for the rejections of our paper, academic social networks seem the most promising fora for receiving constructive criticism. Drs Robinson, Bell and I welcome such feedback. Kindly also let me know in the comments below should you know of a health communication journal that supports COVID-19 dissent, champions academic freedom, and would be interested in giving our submission a fair review.
Future research
Saturday, 29 March 2025
Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative
There has been extensive censorship of legitimate, expert criticism during the COVID-19 event (Kheriaty, 2022; Shir-Raz et al., 2023; Hughes, 2024). Such scientific suppression makes visible the narrow frame within which the sponsors of global health authoritarianism permit The Science™ to be questioned. In contrast to genuine science, which innovates through critique, incorporated science does not welcome questioning. Like fascism, corporatist science views critiques of its interventions as heresy. In the COVID-19 event, key opinion leaders who criticised the lack of scientific rigour behind public health measures (such as genetic vaccine mandates) were treated as heretics by a contemporary version of the Inquisition (Malone et al., 2024). Dissidents were accused of sharing "MDM" (Misinformation, Disinformation and Malinformation) assumed to place the public's lives at risk. Particularly in prestigious medical universities, questioning the dictates of health authorities and their powerful sponsors was viewed as unacceptable, completely outside an Overton Window that had become far more restrictive due to fear-mongering around a "pandemic" (see Figure 1).
Higher Education is particularly susceptible to this groupthink, as it lends itself to a purity spiral, which in turn contributes to the growing spiral of silence around "unacceptable views". A purity spiral is a form of groupthink in which it is more beneficial to hold some views than not to hold them. In a process of moral outbidding, individual academics with more extreme views are rewarded. This was evidenced at universities where genetic vaccine proponents loudly supported the mandatory vaccination of students, despite students having minimal, if any, risk. In contrast, scholars expressing moderation, doubt or nuance faced ostracism as "anti-vaxxers". In universities, there are strong social conformity pressures within a tight-knit community. Grants, career support and other forms of institutional backing depend on collegiality and alignment with prevailing norms. Being labelled a contrarian for questioning a 'sacred cow', such as "safe and effective" genetic vaccines, is likely to jeopardise one's reputation and academic future. Academic disciplines coalesce around shared paradigms and axiomatic truths, routinely amplifying groupthink. Challenging reified understandings as shibboleths can lead to exclusion from conferences and journals, and cost scholars departmental, faculty, and even university support. Particularly where powerful funders object to such dissent!
Here, administrative orthodoxy can signal an "official" position for the university that chills debate. Dissenters' fears of isolation and reprisal (such as poor evaluations and formal complaints for not following the official line) may convince them to self-censor. Particularly where the nonconformist assesses that opinion against his or her viewpoint is virulent, and that the costs of expressing a disagreeable view (such as negotiating cancellation culture) are high. Individuals who calculate that they have a low chance of convincing others, and are likely to pay a steep price for trying, self-censor and contribute to the growing spiral of silence. The COVID-19 event serves as an excellent example of this growing spiral's chilling effect on free speech and independent enquiry.
COVID-19 is highly pertinent for critiquing censorship in the Medical and Health Sciences, particularly as it featured conflicts of interest that contributed to global health "authorities'" policy guidance. Notably, the World Health Organisation promoted poorly substantiated and even unscientific guidelines (Noakes et al., 2021) that merit being considered MDM. In following such dictates from the top policy makers of the Global Public-Private Partnership (GPPP or G3P), most governments' health authorities seemed to ignore key facts. Notably: i. COVID-19 risk was steeply age-stratified (Verity et al., 2019; Ho et al., 2020; Bergman et al., 2021); ii. prior COVID-19 infection can provide substantial immunity (Nattrass et al., 2021); iii. COVID-19 genetic vaccines did not stop disease transmission (Eyre et al., 2022; Wilder-Smith, 2022); iv. mass-masking was ineffective (Jefferson et al., 2023; Halperin, 2024); v. school closures were unwarranted (Wu et al., 2021); and vi. there were better alternatives to lengthy, whole-society lockdowns (Coccia, 2021; Gandhi and Venkatesh, 2021; Herby et al., 2024). Both international policy makers' and local health authorities' flawed guidance must be open to debate and rigorous critique. If public health interventions had been adapted to such key facts during the COVID-19 event, the resultant revised guidance could well have contributed to better social, health and economic outcomes for billions of people!
This post focuses on six types of suppression techniques that were used against dissenting accounts whose voices were deemed illegitimate "disinformation" spreaders by the Global Public-Private Partnership (G3P)-sponsored censorship industrial complex. This is an important concern, since claims that suppressing free speech's digital reach can "protect public safety" were proved false during COVID-19. A case in point is the censorship of criticism against employees' vaccine mandates. North American employers' mandates are directly linked to excess disabilities and deaths for hundreds of thousands of working-age employees (Dowd, 2024). Deceptively censoring individuals' reports of vaccine injuries as "malinformation", or automatically labelling criticism of Operation Warp Speed as "disinformation", would hamper US employees' ability to make fully-informed decisions on the safety of genetic vaccines. Such deleterious censorship must be critically examined by academics. In contrast, 'disinformation-for-hire' scholars (Harsin, 2024) will no doubt remain safely ensconced behind their profitable MDM blinkers.
This post is the first in a series that spotlights the myriad of account suppression techniques that exist. For each, examples of censorship against health experts' opinions are provided. Hopefully, readers can then better appreciate the asymmetric struggle that dissidents face when their accounts are targeted by the censorship industrial complex with a myriad of these strategies spanning multiple social media platforms:
Practices for @Account suppression
#1 Deception - users are not alerted to unconstitutional limitations on their free speech
#2 Cyberstalking - facilitating the virtual and physical targeting of dissidents
#3 Othering - enabling public character assassination via cyber smears
#4 Not blocking impersonators or preventing brandjacked accounts
Instead of blanket censorship, I am having YouTube bury all my actual interviews/content with videos that use short, out of context clips from interviews to promote things I would never and have never said. Below is what happens when you search my name on YouTube, every single… pic.twitter.com/xNGrfMMq52
— Whitney Webb (@_whitneywebb) August 12, 2024
Whether such activities are from intelligence services or cybercriminals, they are very hard for dissidents and/or their representatives to respond to effectively. Popular social media companies (notably Meta, X and TikTok) seldom respond quickly to scams, or to the digital "repersoning" discussed in a Corbett Report conversation between James Corbett and Whitney Webb.
In Corbett's case, after his account was scrubbed from YouTube, many accounts featuring his identity started cropping up there. In Webb's case, she does not have a public profile outside of X, yet accounts featuring her identity were created on Facebook and YouTube. "Her" channels clipped old interviews she did and edited them into documentaries on material Whitney has never publicly spoken about, such as Bitcoin and CERN. They also misrepresented her views on the transnational power structure behind the COVID-19 event, suggesting she held just Emmanuel Macron and Klaus Schwab responsible for driving it. They used AI thumbnails of her, and superimposed her own words in the interviews. Such content proved popular and became widely reshared via legitimate accounts, pointing to the difficulty dissidents face in countering it. She could not get Facebook to take down the accounts without supplying a government-issued ID to verify her own identity.
Digital platforms may be uninterested in offering genuine support, and they may not take any corrective action when following proxy orders from the US Department of State (aka 'jawboning') or members of the Five Eyes (FVEY) intelligence alliance. In stark contrast to marginalised dissenters, VIPs in multinationals enjoy access to online threat protection services for executives (such as ZeroFox) that cover brandjacking and over 100 other cybercriminal use-cases.
#5 Filtering an account's visibility through ghostbanning
As the Google Leaks (2019), Facebook Files (2021) and Twitter Files (2022) revelations have spotlighted, social media platforms have numerous algorithmic censorship options, such as filtering the visibility of users' accounts. Targeted users may be isolated and throttled for breaking "community standards" or government censorship rules. During the COVID-19 event, dissenters' accounts were placed in silos, de-boosted, and also subject to reply de-boosting. Contrarians' accounts were subject to ghostbanning (also known as shadow-banning), a practice that reduces an account's visibility or reach secretly, without explicitly notifying its owner. Ghostbanning limits who can see the posts, comments, or interactions. This includes muting replies and excluding targeted accounts' results from trends, hashtags, searches and followers' feeds (except where users seek a filtered account's profile directly). Such suppression effectively silences a user's digital voice, whilst he or she continues to post under the illusion of normal activity. Ghostbanning is thus a "stealth censorship" tactic linked to content moderation agendas.
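To illustrate the mechanics, below is a minimal, hypothetical Python sketch of how such visibility filtering might gate a flagged account across different surfaces. All names, the surface list and the flagging set are assumptions for illustration, not any platform's actual code.

```python
# Hypothetical sketch of "visibility filtering" (ghostbanning) as described above:
# a flagged account's posts are dropped from search, trends, hashtags, replies and
# followers' feeds, but remain visible on direct profile visits, and the owner is
# never notified. Illustrative only; not any platform's real implementation.

FILTERED_ACCOUNTS = {"dissident_account"}   # populated by moderation flags

def visible_in(surface: str, author: str, viewer_visits_profile: bool = False) -> bool:
    """Decide whether an author's post appears on a given distribution surface."""
    if author not in FILTERED_ACCOUNTS:
        return True                          # unaffected accounts behave normally
    if viewer_visits_profile:
        return True                          # direct profile views still work,
                                             # preserving the illusion of normal activity
    return surface not in {"search", "trends", "hashtags", "home_feed", "replies"}

# Example: followers' feeds and search quietly omit the flagged account's posts,
# while a direct visit to the profile still shows them.
print(visible_in("home_feed", "dissident_account"))                            # False
print(visible_in("search", "dissident_account"))                               # False
print(visible_in("profile", "dissident_account", viewer_visits_profile=True))  # True
```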
This term gained prominence with the example of the Great Barrington Declaration's authors, Professors Jay Bhattacharya, Martin Kulldorff and Sunetra Gupta. Published on October 4, 2020, this public statement and proposal flagged grave concerns about the damaging physical and mental health impacts of the dominant COVID-19 policies. It argued that focused protection should be pursued rather than blanket lockdowns, and that allowing controlled spread among low-risk groups would eventually result in herd immunity. Ten days later, a counter-statement, the John Snow Memorandum, was published in defence of the official COVID-19 narrative's policies. Mainstream media and health authorities amplified it, as did social media, given the memorandum's alignment with prevailing platform policies against "misinformation" circa 2020. In contrast, the Great Barrington Declaration was targeted indirectly through platform actions against its proponents and related content:
Stanford Professor of Medicine Dr Jay Bhattacharya's Twitter account was revealed (via the 2022 Twitter Files) to have been blacklisted, reducing its visibility. His tweets questioning lockdown efficacy and vaccine mandates were subject to algorithmic suppression. Algorithms could flag his offending content with labels like "Visibility Filtering" (VF) or "Do Not Amplify", reducing its reach. For instance, Bhattacharya reported that his tweets about the Declaration and seroprevalence studies (showing wider COVID-19 spread than official numbers suggested) were throttled. Journalist Matt Taibbi's reporting on the Twitter Files leaks confirmed that Twitter had blacklisted Prof Bhattacharya's account, limiting its reach due to his contrarian stance. YouTube also removed videos in which he featured, such as interviews in which he criticised lockdown policies.
The epidemiologist and biostatistician Prof Kulldorff observed that social media censorship stifled opportunities for scientific debate. He experienced direct censorship on multiple platforms, including shadowbans. Twitter temporarily suspended his account in 2021 for tweeting that not everyone needed the COVID-19 vaccine ('Those with prior natural infection do not need it. Nor children'). Posts on X and web reports indicate Kulldorff was shadowbanned beyond this month-long suspension. The Twitter Files, released in 2022, revealed he was blacklisted, meaning his tweets' visibility was algorithmically reduced. Twitter suppressed Kulldorff's accurate genetic vaccine critique, preventing comments and likes. Internal Twitter flags like "Trends Blacklisted" or "Search Blacklisted" (leaked during the 2020 Twitter hack) suggest Kulldorff's account was throttled in searches and trends, a hallmark of shadowbanning where reach is curtailed without notification. Algorithmic deamplification excluded Prof Kulldorff's tweets from trends, search results and followers' feeds, except where users sought his profile directly. This reflects how social media companies may apply visibility filters (such as a Not Safe For Work (NSFW) view). Kulldorff also flagged that LinkedIn's censorship pushed him to platforms like Gab, implying a chilling effect on his professional network presence.
An Oxford University epidemiologist, Professor Gupta, faced less overt account-level censorship, but still had to negotiate content suppression. Her interviews and posts on Twitter advocating for herd immunity via natural infection amongst the young and healthy were often flagged, or down-ranked.
#6 Penalising accounts that share COVID-19 "misinformation"
I am writing a series of posts on this topic that will cover more serious techniques. Do follow me on X to be alerted when they are published. Please share your views by commenting below, or reply to this tweet thread at https://x.com/travisnoakes/status/1906250555564900710.
Friday, 23 December 2022
A summary of 'Who is watching the World Health Organisation? ‘Post-truth’ moments beyond infodemic research'
A major criticism this paper raises is that infodemic research lacks earnest discussion of where health authorities' own choices and guidelines might be contributing to 'misinformation', 'disinformation' and even 'malinformation'. Rushed guidance based on weak evidence from international health organisations can perpetuate negative health and other societal outcomes, not ameliorate them! If health authorities' choices are not up for review and debate, there is a danger that a hidden goal of the World Health Organisation (WHO) infodemic research agenda (or of related disinfodemic research funders) could be to direct attention away from funders' multitude of failures in fighting pandemics with inappropriate guidelines and measures.
In The regime of ‘post-truth’: COVID-19 and the politics of knowledge (at https://www.tandfonline.com/doi/abs/10.1080/01596306.2021.1965544), Kwok, Singh and Heimans (2021) describe how the global health crisis of COVID-19 presents fertile ground for exploring the complex division of knowledge labour in a ‘post-truth’ era. Kwok et al. illustrate this by describing COVID-19 knowledge production at universities. Our paper focuses on the relationships between health communication, public health policy and recommended medical interventions.
Divisions of knowledge labour are described for (1) the ‘infodemic/disinfodemic research agenda’, (2) ‘mRNA vaccine research’ and (3) ‘personal health responsibility’. We argue for exploring intra- and inter-relationships between influential knowledge development fields, in particular the vaccine-manufacturing pharmaceutical companies that drive and promote mRNA knowledge production. Within divisions of knowledge labour (1-3), we identify key inter-group contradictions between the interests of agencies and their contrasting goals. Such conflicts are useful to consider in relation to potential gaps in the WHO’s infodemic research agenda:
For (1), a key contradiction is that infodemic scholars who benefit from health authority funding may face difficulties questioning those authorities' "scientific" guidance. We flag how the WHO's advice for managing COVID-19 departed markedly from a 2019 review of evidence it commissioned (see https://www.ncbi.nlm.nih.gov/pubmed/35444988).
(2)’s division features very different contradictions. Notably, the pivotal role that pharmaceutical companies have in generating vaccine discourse is massively conflicted. A conflict of interest arises in pursuing costly research on novel mRNA vaccines, because whether the company producing these therapies will ultimately benefit financially from their future sales depends entirely on the published efficacy and safety results of its own research. The division of knowledge labour for (2) mRNA vaccine development should not be considered separately from COVID-19 knowledge production in Higher Education, or from (1) the infodemic research agenda. Multinational pharmaceutical companies direct the research agenda in academia and medical research discourse through the lucrative grants they distribute. Research organisations dependent on external funding to cover budget shortfalls will be more susceptible to the influence of those funders on their research programmes.
However, from the perspective of orthodoxy, views that support new paradigms are unverified knowledge (and potentially "misinformation"). Any international health organisation that wishes to be an evaluator must have the scientific expertise for managing this ongoing ‘paradox’, or irresolvable contradiction. Organisations such as the WHO may theoretically be able to convene such knowledge, but their dependency on funding from conflicted parties would normally render them ineligible to perform such a task. This is particularly salient where powerful agents can collaborate across divisions of knowledge labour for establishing an institutional oligarchy. Such hegemonic collaboration can suppress alternative viewpoints that contest and query powerful agents’ interests.
Our article results from collaboration between The Noakes Foundation and PANDA. The authors thank JTSA’s editors for the opportunity to contribute to its special issue, the paper’s critical reviewers for their helpful suggestions and AOSIS for editing and proof-reading the paper.
This is the third publication from The Noakes Foundation’s Academic Free Speech and Digital Voices (AFSDV) project. Do follow me on Twitter or https://www.researchgate.net/project/Academic-Free-Speech-and-Digital-Voices-AFSDV for updates regarding it.
I welcome you sharing constructive comments, below.