Friday, 6 June 2025
Techniques for suppressing health experts' social media accounts (7 - 12, part 2) - The Science™ versus key opinion leaders challenging the COVID-19 narrative
Written for researchers and others interested in the many techniques used to suppress COVID-19 dissidents' social media accounts and digital voices.
This is the second post alerting readers to the myriad techniques that social media companies continue to use against key opinion leaders who question the dominant consensus of The Science™. While the examples provided pit these techniques against prominent critics of the official COVID-19 narrative, they are readily applied to stifle dissent in other issue arenas. These range from the United States of America's support for forever wars via a ghost budget (Bilmes, 2018) to man-made climate change, and from low-carbohydrate diets to transgender "gender affirming" medical surgery ideology. These dogmatic prescriptions by the global policy decision makers of the Global Public-Private Partnership (G3P or GPPP) are presented as a "scientific consensus", but are unscientific when protected from being questioned, especially by legitimate experts with dissenting arguments.
#7 Concealing the sources behind dissidents' censorship
An important aspect of information control is that the sources behind it are very well hidden from the public. For the organisers of propaganda, ideal targets neither appreciate that they are receiving propaganda, nor recognise its source. Their audiences' ignorance is a key aspect of psychological warfare, otherwise known as 5th-generation warfare (Abbot, 2010; Krishnan, 2024). Likewise for censors: their targets and those targets' followers should ideally not be aware that they are being censored, nor able to identify the sources of their censorship. Accordingly, there is significant deception around the primary sources of social media censorship being the platforms themselves and their policies. Instead, these platforms are largely responding to co-ordinated COVID-19 narrative control from G3P members who span each of the six estates*.
The well-funded complicity theorists for a COVID-19 "Infodemic" (for example, Calleja et al., 2021; Caulfield, 2020; DiResta, 2022; Schiffrin, 2022) may genuinely believe they are advocating censorship as a legitimate, organic counterweight to "malinformation". In contrast, researchers at Unlimited Hangout point out that this censorship is highly centralised, aimed at opinions deemed "illegitimate" merely for disagreeing with the positions of the most powerful policy makers at the G3P's macro-level. Iain Davis writes that the G3P policy makers are Chatham House, the Club of Rome, the Council on Foreign Relations, the Rockefellers and the World Economic Forum. Each guides international policy distributors, including the International Monetary Fund, the Intergovernmental Panel on Climate Change, the United Nations and the World Health Organisation, plus "philanthropists" (e.g. the Bill and Melinda Gates Foundation (BMGF)), multinational corporations and global non-governmental organisations.
Mr Bill Gates serves as an example of the Sixth Estate exercising undue influence on public health, especially Africa's: his foundation is the largest private benefactor of the World Health Organization. The BMGF finances the health ministries of virtually every African country. Mr Gates can place conditions on that financing, such as vaccinating a certain percentage of a country's population. Some vaccines and health-related initiatives that these countries purchase are developed by companies that Gates' Cascade Investment LLC invests in. As a result, he can benefit indirectly from stock appreciation, alongside tax savings from his donations, whilst his reputation as a 'global health leader' is further burnished. In South Africa, the BMGF has directly funded the Department of Health, SA's regulator SAHPRA, its Medical Research Council, top medical universities and the media (such as the Mail & Guardian's health journalism centre, Bhekisisa). All would seem highly motivated to protect substantial donations by not querying Mr Gates' vaccine altruism. However, the many challenges of the Gates Foundation's dominating role in transnational philanthropy must not be ignored. Such dominance poses a challenge to justice: locals' rights to control the institutions that can profoundly impact their basic interests (Blunt, 2022). While the BMGF cannot be directly tied to COVID-19 social media account censorship, it is indisputable that Mr Gates' financial power and partner organisations indirectly suppressed dissenting voices by prioritising certain COVID-19 treatment narratives (Politico, 2022A, 2022B).
At a meso-level, select G3P policy enforcers ensure that the macro-level's policy directives are followed by both national governments (and their departments, such as health) and scientific authorities (including the AMA, CDC, EMA, FDA, ICL, JCVI, NERVTAG, NIH, MHRA and SAGE). Enforcers strive to prevent rival scientific ideas from gaining traction and thereby challenging the G3P policy makers' dictates. These bodies task psychological 'nudge' specialists (Junger and Hirsch, 2024), propagandists and other experts with convincing the public to accept, and ideally buy into, G3P policies. This involves censorship and psychological manipulation via public relations, propaganda, disinformation and misinformation. The authors of such practices are largely unattributed. Dissidents facing algorithmic censorship through social media companies' opaque processes of content moderation are unlikely to be able to identify the true originator of their censorship in so complex a process. Content moderation is a 'multi-dimensional process through which content produced by users is monitored, filtered, ordered, enhanced, monetised or deleted on social media platforms' (Badouard and Bellon, 2025). This process spans a 'great diversity of actors' who develop specific practices of content regulation (p3). Actors may range from activist users and researchers who flag content, to fact-checkers from non-governmental organisations and public authorities. If such actors disclose their involvement in censorship, this may only happen much later. For example, Mark Zuckerberg's 2024 letter to the House Judiciary Committee revealed that the Biden administration pressured Meta to censor certain COVID-19 content, including humour and satire, in 2021.
#8 Blocking a user’s access to his or her account
A social media platform may stop a user from being able to log in to his or her account. Where the platform does not make this blocking obvious to a user's followers, this is deceptive. For example, Emeritus Professor Tim Noakes' Twitter account was deactivated for months after he queried health authorities' motivations in deciding on interventions during the COVID-19 "pandemic". Many viewers would not recognise that his seemingly live profile was in fact inactive. The only clue was that @ProfTimNoakes had not tweeted for a long time, which was highly unusual.
This suspension followed Twitter's introduction of a "five-strike" system, with repeat offenders or egregious violations leading to permanent bans. Twitter's system tracked violations: the first and second strikes resulted in a warning or a temporary lock. A third strike resulted in a 12-hour suspension, while a 7-day suspension followed a fourth strike. Users faced a permanent ban on a fifth strike. In Professor Tim Noakes' case, he was given a vague warning regarding 'breaking community rules etc.' (email correspondence, 24.10.2022). This followed his noticing a loss of followers and his tweets' reach being restricted. Twitter 'originally said I was banned for 10 hours. But after 10 hours when I tried to re-access it they would not send me a code to enter. When I complained they just told me I was banned. When I asked for how long, they did not answer.' In reviewing his tweets, Prof. Noakes noticed that some had been (mis-)labelled by Twitter as "misleading" before his suspension (see Figure 1 below).
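The escalating penalty ladder described above can be expressed as a simple lookup. This is an illustrative sketch only, assuming the strike-to-penalty mapping in Twitter's public policy description; the function name and data structure are hypothetical, not Twitter's actual code.

```python
# Illustrative sketch (not Twitter's real implementation) of the
# "five-strike" enforcement ladder: penalties escalate with each
# recorded violation, capping at a permanent ban.

PENALTIES = {
    1: "warning or temporary account lock",
    2: "warning or temporary account lock",
    3: "12-hour suspension",
    4: "7-day suspension",
    5: "permanent ban",
}

def penalty_for(strike_count: int) -> str:
    """Map an account's accumulated strike count to its penalty."""
    if strike_count <= 0:
        return "no action"
    # Anything at or beyond the fifth strike is treated as permanent.
    return PENALTIES[min(strike_count, 5)]
```

Note that nothing in such a scheme obliges the platform to tell the user which strike they are on, which is consistent with the vague warnings Prof Noakes reported receiving.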
Figure 1. Screenshot of @ProfTimNoakes' "controversial" tweet on President Macron not taking COVID-19 'experimental gene therapy' (24 October, 2022)
Prof Noakes had also tweet-quoted Alec Hogg's BizNews article regarding Professor Salim Abdool Karim's conflicts of interest, adding 'something about' cheque-book science. The @ProfTimNoakes account was in a state of limbo after seven days, but was never permanently banned. Usually, accounts placed in "read-only" mode, or under temporary lockouts, required tweet deletion to regain full access. However, @ProfTimNoakes' latest tweets were not visible, and he was never asked to delete any. In addition to account login blocks, platforms may also suspend accounts from being visible, but this was not applied to @ProfTimNoakes. In response to being locked out, Prof Noakes shifted to using his alternate @loreofrunning account; its topics of nutrition, running and other sports seemed safe from the reach of unknown censors' Twitter influence.
#9 Temporary suspensions of accounts (temporary user bans)
#10 Permanent suspension of accounts, pages and groups (complete bans)
In contrast to Twitter's five-strikes system, Meta's Facebook system was less formalised. It tracked violations across accounts, pages and groups. The latter two serve different functions in Facebook's system architecture (Broniatowski et al., 2023): only page administrators may post to pages, which are designed for brand promotion and marketing. In contrast, any member may post in groups, which serve as forums for members to build community and discuss shared interests. In addition, pages may serve as group administrators. From December 2020, Meta began removing "false claims about COVID-19 vaccines" that were "debunked by public health experts". This included "misinformation" about their efficacy, ingredients, safety or side effects. Repeatedly sharing "debunked claims" risked escalating penalties for individual users/administrators, pages and groups. Penalties ranged from reduced visibility to removal and permanent suspension. For example, if a user posted that 'COVID vaccines cause infertility' "without evidence", this violated policy thresholds. The user was then asked to acknowledge the violation, or to appeal. Appeals were often denied if the content clashed with official narratives.
Meta could choose to permanently ban individual, fan-page and group accounts on Facebook. For example, high-profile repeat offenders were targeted for removal. In November 2020, the page "Stop Mandatory Vaccination", one of the platform's largest "anti-vaccine" fan pages, was removed. Robert F. Kennedy Jr.'s Instagram account was permanently removed in 2021 for "sharing debunked COVID-19 vaccine claims". The non-profit he founded, Children's Health Defense, was suspended from both Facebook and Instagram in August 2022 for repeated violations of Meta's COVID-19 misinformation policies.
Microsoft's LinkedIn generally moderates professional content more strictly than other social networks. It updated its 'Professional Community Policies' for COVID-19 to prohibit content contradicting guidance from global health organisations, like the CDC and WHO. This included promoting unverified treatments and downplaying the "pandemic"'s severity. Although LinkedIn has not disclosed specific thresholds, high-profile cases show that the persistent sharing of contrarian COVID-19 views (especially those flagged by users, or contradicting official narratives) would lead to removal. The accounts of Dr Mary Talley Bowden, Dr Aseem Malhotra, Dr Robert Malone and Mr Steve Kirsch have all been permanently suspended.
#11 Non-disclosure of information around banning's rationale for account-holders
Social media platforms' Terms of Service (TOS) may ensure that these companies are not legally obligated to share information with their users on the precise reasons for their accounts being suspended. Popular platforms like Facebook, LinkedIn and X can terminate accounts at their sole discretion without providing detailed information to users. Such suspensions are typically couched opaquely in terms of policy violation (such as being in breach of community standards).
Less opaque details may be forthcoming where a platform's TOS is superseded by the laws of a country or regional bloc. In the US, Section 230 of the Communications Decency Act allows platforms to moderate content as they see fit. They are only obligated to disclose reasons under a court order, or where a specific law applies (such as one related to data privacy). By contrast, companies operating in European Union countries are expected to comply with the EU's Digital Services Act (DSA). Here, platforms must provide a 'statement of reasons' for content moderation decisions, including suspensions, with some level of detail about the violation. Whilst compliant feedback must be clear and user-friendly, granular specifics may not be a DSA requirement. In the EU and USA, COVID-19 dissidents could only expect detailed explanations in response to legal appeals or significant public pressure. Internal whistleblowing and investigative reports, such as the Facebook and Twitter Files, also produced some transparency.
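The regulatory contrast above can be made concrete with a sketch of what a DSA-style 'statement of reasons' record might contain. The field names below are a simplified assumption loosely based on the DSA's publicly described requirements, not any platform's actual schema.

```python
# Hypothetical sketch of a DSA-style "statement of reasons" record.
# Field names are simplified assumptions, not a real platform schema.
from dataclasses import dataclass, asdict

@dataclass
class StatementOfReasons:
    restriction: str                  # e.g. suspension, removal, demotion
    facts_and_circumstances: str      # what the user actually did
    legal_or_contractual_ground: str  # law breached, or terms of service
    automated_detection_used: bool    # was the decision algorithmic?
    redress_options: str              # appeal routes open to the user

# Example record for a hypothetical COVID-19-era suspension:
sor = StatementOfReasons(
    restriction="account suspension",
    facts_and_circumstances="post contradicted platform COVID-19 policy",
    legal_or_contractual_ground="terms of service / community standards",
    automated_detection_used=True,
    redress_options="internal appeal",
)
```

Under the US Section 230 regime described above, none of these fields need be disclosed to the suspended user at all, which is why EU users can, in principle, learn more about their censorship than American ones.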
One outcome of this opaque feedback is that the reasons for dissident COVID-19 health experts' accounts being suspended are seldom made public. Even where dissidents have shared their experiences, the opaque processes and actors behind COVID-19 censorship remain unclear. Even reports from embedded researchers, such as the Center for Countering Digital Hate's "Disinformation Dozen", lack specificity. It reported that Meta permanently banned 16 accounts, and restricted 22 others, for "sharing anti-vaccine content" in response to public reporting in 2021. However, the CCDH did not explicitly name the health experts given permanent suspensions. Hopefully, a recent 171-page federal civil rights suit by half of the dissidents mentioned in this report against the CCDH, Imran Ahmed, U.S. officials and tech giants will expose more about who is behind prominent transnational censorship and reputational warfare (Ji, 2025).
#12 No public reports from platforms regarding account suspensions and censorship requests
Figure 2. Slide on 'Critical social justice as a protected ideology in Higher Education, but contested in social media hashtag communities' (Noakes, 2024)
More about censorship techniques against dissenters on social networks
- Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative
- Content suppression techniques against dissent in the Fifth Estate - examples of COVID-19 censorship on social media
N.B. I am writing a third post on account censorship during COVID-19, that will cover at least three more serious techniques. Do follow me on X to learn when that is published. Please suggest improvements to this post in the comments below, or reply to my tweet thread at https://x.com/travisnoakes/status/1930989080231203126.
Friday, 26 July 2024
Content suppression techniques against dissent in the Fifth Estate - examples of COVID-19 censorship on social media
Written for researchers and others interested in the many methods available to suppress dissidents' digital voices. These techniques support contemporary censorship online, posing a digital visibility risk for dissidents challenging orthodox narratives in science.
The Fourth Estate emerged in the eighteenth century as the printing press enabled the rise of an independent press that could help check the power of governments, business, and industry. In similar ways, the internet supports a more independent collectivity of networked individuals, who contribute to a Fifth Estate (Dutton, 2023). This concept acknowledges how a network power shift results from individuals who can search, create, network, collaborate, and leak information in strategic ways. Such affordances can enhance individuals' informational and communicative power vis-à-vis other actors and institutions. A network power shift enables greater democratic accountability, whilst empowering networked agents in their everyday life and work. Digital platforms do enable online content creators to generate and share news that digital publics amplify via networked affordances (such as likes, quotes and sharing via #hashtag communities).
#1 Covering up algorithmic manipulation
Social media users who are unaware of censorship are unlikely to be upset about it (Jansen & Martin, 2015). Social media platforms have not been transparent about how they manipulated their recommender algorithms to give higher visibility to the official COVID-19 narrative, crowding out original contributions from dissenters in social media timelines and search results. Such boosting ensured that dissent was seldom seen, or was perceived as a fringe minority's concern. As Dr Robert Malone tweeted, the computational algorithm-based method now 'supports the objectives of a Large Pharma- captured and politicised global public health enterprise'. Social media algorithms have come to serve a medical propaganda purpose that crafts and guides the 'public perception of scientific truths'. While algorithmic manipulation underpins most of the techniques listed below, it is concealed from social media platform users.
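The invisible reweighting described above can be sketched as a single score adjustment inside a feed-ranking step. The labels, multipliers and post structure here are invented for illustration; platforms do not publish their real values.

```python
# Hypothetical sketch of label-based reweighting in feed ranking.
# A hidden multiplier buries flagged content and boosts "authoritative"
# content, without removing anything outright. All values are invented.

LABEL_WEIGHTS = {
    "authoritative_health_source": 2.0,  # boosted toward the top of feeds
    "flagged_misinformation": 0.1,       # quietly buried, not deleted
    None: 1.0,                           # unlabeled content is untouched
}

def rank_feed(posts):
    """Sort posts by engagement after applying hidden label weights."""
    def adjusted(post):
        return post["engagement"] * LABEL_WEIGHTS.get(post.get("label"), 1.0)
    return sorted(posts, key=adjusted, reverse=True)

feed = rank_feed([
    {"id": "a", "engagement": 90, "label": "flagged_misinformation"},
    {"id": "b", "engagement": 50, "label": None},
    {"id": "c", "engagement": 30, "label": "authoritative_health_source"},
])
# The flagged post has triple the organic engagement, yet ranks last.
```

From the user's side the feed still looks organically ranked, which is exactly why this form of manipulation is so hard to detect or prove.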
#2 Fact choke versus counter-narratives
An example she tweeted about was the BBC's Trusted News Initiative warning in 2019 about anti-vaxxers gaining traction across the internet, requiring algorithmic intervention to neutralise "anti-vaccine" content. In response, social media platforms were urged to flood users' screens with repetitive pro-(genetic)-vaccine messages normalising these experimental treatments. Simultaneously, messaging attacked alternative treatments that posed a threat to the vaccine agenda. Fact chokes also included 'warning screens' that were displayed before users could click on content flagged by "fact checkers" as "misinformation".
#3 Title-jacking
For the rare dissenting content that achieves high viewership, another challenge is that title-jackers will leverage this popularity for very different outputs under exactly the same (or very similar) production titles. This makes it harder for new viewers to find the original work. For example, Liz Crokin's 'Out of the Shadows' documentary describes how Hollywood and the mainstream media manipulate audiences with propaganda. Since this documentary's release, several unrelated videos have been published under the same title.
#4 Blacklisting trending dissent
Social media search engines typically allow their users to see what is currently the most popular content. On Twitter, dissenting hashtags and keywords that proved popular enough to feature amongst trending content were quickly added to a 'trend blacklist' that hid unorthodox viewpoints. Tweets posted by accounts on this blacklist are prevented from trending, regardless of how many likes or retweets they receive. Stanford Health Policy professor Jay Bhattacharya argues he was added to this blacklist for tweeting on a focused alternative to the indiscriminate COVID-19 lockdowns that many governments followed: in particular, the Great Barrington Declaration he wrote with Dr. Sunetra Gupta and Dr. Martin Kulldorff, which attracted over 940,000 supporting signatures.
#5 Blacklisting content due to dodgy account interactions or external platform links
#6 Making content unlikeable and unsharable
A newsletter from Dr Steven Kirsch (29.05.2024) described how a Rasmussen Reports video on YouTube had its 'like' button removed. As Figure 1 shows, users could only select a 'dislike' option. The 'like' button was later restored for www.youtube.com/watch?v=NS_CapegoBA.

Figure 1. YouTube only offers a 'dislike' option for the Rasmussen Reports video on vaccine deaths - sourced from Dr Steven Kirsch's newsletter (29.05.2024)
Social media platforms may also prevent the resharing of such content, or prohibit links to external websites that are not supported by the platforms' backends, or that have been flagged for featuring inappropriate content.

#7 Disabling public commentary
#8 Making content unsearchable within, and across, digital platforms
#9 Rapid content takedowns
Social media companies could ask users to take down content that was in breach of COVID-19 "misinformation" policies, or automatically remove such content without its creators' consent. In 2021, Meta reported that it had removed more than 12 million pieces of content on COVID-19 and vaccines that global health experts had flagged as misinformation. YouTube has a medical misinformation policy that follows the guidance of the World Health Organisation (WHO) and local health authorities. In June 2021, YouTube removed a podcast on Prof Bret Weinstein's DarkHorse channel in which Dr Robert Malone and Steve Kirsch discussed evidence of a reproductive hazard of mRNA shots. Teaching material that critiqued genetic vaccine efficacy data was automatically removed within seconds for going against its guidelines (see Shir-Raz, Elisha, Martin, Ronel and Guetzkow, 2022). The WHO reports that its guidance contributed to 850,000 videos related to harmful or misleading COVID-19 misinformation being removed from YouTube between February 2020 and January 2021.
#10 Creating memory holes
#11 Rewriting history
#12 Concealing the motives behind censorship, and who its real enforcers are
Figure 2. Global Public-Private Partnership (G3P) stakeholders - sourced from IainDavis.com (2021) article at https://unlimitedhangout.com/2021/12/investigative-reports/the-new-normal-the-civil-society-deception. |
Thursday, 25 February 2021
Some background for 'Distinguishing online academic bullying: identifying new forms of harassment in a dissenting Emeritus Professor’s case'
- entrenches an outdated and incorrect scientific model;
- suppresses scholarly debate over rival models;
- continues to support poor advice and interventions that result in sub-par outcomes versus proven and relatively inexpensive alternatives.
In HE, scientific suppression can be understood as a normative category of impedance that is unfair, unjust and counter to the standards of academic behaviour (Delborne, 2016). Such impedance is apparent in the treatment of dissenting scholars who challenge the CMCDD model, then become ostracised from the Health Sciences as "heretics". In theory, universities should encourage academic free speech and robust debate on the CMCDD versus IRMCIH models. By contrast, in HE practice, IRMCIH scholars cannot exercise their rights to academic free speech.
Academic freedom is a special right of academics: a right to freedom from prescribed orthodoxy in their teaching, research and lives as academics (Turk, 2014). This right seeks to avoid corruption from the vested interests of other parties, which range from scholarly peers and university board members to corporate donors. This right is foundational in supporting scholars to advance and expand knowledge, for example by accommodating diverse voices (Saloojee, 2013).
Academic free speech is a failed ideal where IRMCIH scholars do not enjoy opportunities to research and teach this emergent paradigm. Instead, dissenting IRMCIH scientists must negotiate scientific suppression by a multitude of entrenched networks and embedded academics. These have varied stakes in the medical establishment's highly profitable "cholesterol" model and its costly, but largely ineffective, interventions. This orthodox regime heavily constrains the IRMCIH model's development, whilst applying double standards for evidence and proof. Such demands typically ignore the sociological context of scientific knowledge, which flags key constraints, including:
- The relatively minuscule funding for IRMCIH studies
- Many unethical "ethical" or pseudo-skeptic "scientific" arguments used for delaying IR research projects
- Long-standing anti-IRMCIH, pro- CMCDD scholarly citation rings
- Academic mobs that defame IR scholars and create a chilling effect for their colleagues
- Likewise, pseudoskeptic academics, politicians and "science" journalists may unwittingly serve as agents of industry by diverting public attention from Fiat science™ and consensus silence to IRMCIH “failures”.
With a wide range of vitriolic critics within and outside academia, we focused on the case of an Emeritus Professor as a convenience sample. He had first-hand exposure to OAB for almost a decade across varied social media platforms. In 'Distinguishing online academic bullying', OAB is clearly differentiated from the traditional forms of bullying (e.g. academic mobbing) that he had to negotiate after taking the unorthodox, but scientific, position for IRMCIH. Major aspects are shown in the article's abstract graphic, below: academic cyberbullies' strategies in OAB may range from misrepresenting an employer's position as "official" to hypercritical academic bloggers whose chains of re-publication become sourced for defamatory online profiles.