
Wednesday, 9 April 2025

Wanted - Fair critics of 'Promoting Vaccines in South Africa: Consensual or Non-Consensual Health Science Communication?'

Written for health science communication researchers concerned with genetic vaccination promotion being non-consensual and a form of propaganda.


Since June 2023, Dr Piers Robinson, Dr David Bell and I have submitted the titular manuscript to nine journals without receiving a single substantive review; most submissions were desk-rejected without solid explanation. This is despite our journal search from 2024 focusing on seemingly suitable journals that met three criteria: tackling (i) health communication and (ii) propaganda, and (iii) having previously shared controversial articles questioning the official COVID-19 narrative. Since we cannot identify any viable new targets, we have decided to share our manuscript as a pre-print on SSRN and ResearchGate. We hope that readers there can at least offer solid, constructive criticism of our work.


As scholars know, every journal submission can consume many hours in preparing the related documentation, formatting the manuscript to a journal's stylistic specifications, and so on. To compensate for such lengthy academic labour, authors might reasonably expect editorial teams to be highly ethical in providing detailed reasoning behind desk rejections. Where there is a strong pattern of such feedback being absent or poor on controversial topics, dissident authors may justifiably perceive that they are negotiating an academic journal publication firewall. Why would editors be reluctant to go on record with their reasons for desk rejection, if those reasons are indisputable? Even highly critical editorial feedback is constructive for authors: they can then completely revise their manuscript for submission to new journals. Or perhaps save time by confronting the reality that a manuscript making no (or only a weak) contribution must be abandoned!


Our frustration at not receiving constructive criticism is similar to accounts from many other dissenters against the official COVID-19 narrative. Notably, Professors Bhattacharya and Hanke (2023) documented dissidents' censorship experiences via popular pre-print options. And Professor Norman Fenton (in Fighting Goliath, 2024) and Dr Robert Malone (in PsyWar, 2024) provide compelling accounts of shifting from being welcome journal authors and conference speakers to being unpublishable for any manuscript critical of COVID-19 statistics or treatment policies. Given their high levels of expertise and long publication records, such experts seem unlikely to have produced fallacious research unsuited to peer review.


Our would-be journal article tackles an important, albeit controversial, question: how might pharmaceutical or medical propaganda be distinguished from health communication? South Africa's (SA) case of COVID-19 genetic vaccine promotion is described to show how incentivization, coercion and deceptive messaging approximated a non-consensual approach, preventing informed consent for pregnant women. In terms of generalisability, this case study can be described as a hard case: given the status of pregnant women as perhaps the most vulnerable and protected category in society, one expects health communicators to be extremely cautious about adopting non-consensual methods of persuasion. We show that non-consensual persuasion was nevertheless used in South Africa, making it more likely that such tactics were used for other, less vulnerable groups.


In desk-rejecting our work, editors and reviewers may well have thought that evaluating persuasive communication in terms of whether or not it is deceptive and non-consensual is not, in some sense, a legitimate research question. In stark contrast, as Dr Piers Robinson argues (at the end of this LinkedIn thread), our research question is indeed 'an essential part of evaluating whether any given persuasion campaign can be said to meet appropriate ethical/democratic standards. With the attention to fake news and disinformation, there is in fact much in the way of scholarly attention to questions of deceptive or manipulative communication. So we are not asking a question that is not asked by many others and across a wide range of issue areas. And we utilised a conceptual framework developed and published elsewhere.'


Another concern may be that our manuscript is "biased", designed to 'reach a predetermined outcome'. This ignores the possibility that our work could have found no evidence of deceptive communication, and none of incentivization. However, the evidence presented strongly supports a major concern: that pregnant women were incentivised, deceived and coerced into taking poorly-tested genetic vaccines, whose side-effects are also poorly tracked. In the absence of detailed editor rejection feedback, it is hard for us to improve our argument for a hoped-for fair peer review.


It's also important to acknowledge the context in which our paper was written, which is one of considerable scientific concern over the COVID-19 event. Notably, rushed guidance based on weak evidence from international health organisations could well have perpetuated negative health and other societal outcomes, rather than ameliorating them (Noakes, Bell & Noakes, 2022). In particular, health authorities rushed approval of genetic vaccines as the primary response, and their "health promotion" seems a ripe target for robust critique, particularly when successful early treatments were widely reported to have been suppressed so that Emergency Use Authorisation for genetic vaccines could be granted (Kennedy, 2021).


An unworthy topic?


Our negative experience of repeated, poorly explained (or unexplained) rejections suggests that presenting South Africa's case of COVID-19 genetic vaccine promotion as pharmaceutical/medical propaganda was not deemed worthy of academic journals' review, even by those promising to tackle scientific controversies and challenging topics.


Not unexpectedly, SSRN removed our pre-print after a week, providing the following email rationale: 'Given the need to be cautious about posting medical content, SSRN is selective on the papers we post. Your paper has not been accepted for posting on SSRN.' So, no critique of the paper's facts or methods, just rapid removal of our COVID-19 "health communication" critique. In SSRN's defence, its website's FAQs do flag that 'Medical or health care preprints at SSRN are designed for the rapid, early dissemination of research findings; therefore, in most instances, we do not post reviews or opinion-led pieces, as well as editorials and perspectives.' So perhaps the latter concern was indeed the most significant factor in SSRN's decision... But with no explicit explanation of its rationale, it is also possible that our critique of COVID-19 "health science communication" weighed more heavily with human decision makers. Alternatively, an Artificial Intelligence agent wrote the rejection email, triggered by our sensitive keywords: COVID-19 + propaganda = a must-reject routine.


A history of a manuscript's rejection in one image


We acknowledge that the initial submissions of our manuscript may well have been out of scope for the first journals we approached, or outside the particular contributions to knowledge that they consider.

Figure 1. Nine journals that rejected 'Promoting Vaccines in South Africa' (2025) 

Over two years, we also refined our manuscript to focus narrowly on 'non-consensual Health Science Communication', versus propaganda. While the latter term is accurate, we recognised that it could be too contentious for some editors and reviewers, so we revised the initial title. Our analysis was clearly bounded to describe the ways in which non-consensual persuasion tactics were employed in South Africa to promote uptake of the COVID-19 vaccines. There are several vulnerable categories (such as teenagers), and we decided to focus on pregnant women, or women wanting to become mothers. We explored the local incentives and coercive measures (both consensual and non-consensual) that were used in South Africa during the COVID-19 event. Our manuscript then critiqued deceptive messaging on the safety of the Pfizer BioNTech Comirnaty® vaccine in a Western Cape government flyer. We also examined the South African Health Products Regulatory Authority's vaccine safety monitoring and its reporting of adverse events following immunisation (SAHPRA AEFI), contrasting how it does (not) report on outcomes for women's health with the Vaccine Adverse Event Reporting System (VAERS). If there is a methodological flaw in this approach, we are open to suggestions on improving it.

That said, there are some suggested changes that we would like an opportunity to argue against. For example, our title might be criticised for not addressing harms to "pregnant people". However, following such advice would distract from how genetic vaccines have proven especially damaging to biological females. Likewise, our definition of "health science communication" can be criticised as a narrow one, especially for South Africa's myriad health contexts. While this is true, and we should acknowledge this limitation, we must also prioritise what is core within a 10,000-word limit. Expanding our focus to a broad view of science communication in SA would inevitably require removing evidence related to the Organised Persuasive Communication framework's consensual versus non-consensual aspects. This would distract from our paper's core focus.


The demands above may well be intended to create a more 'open minded' and 'less binary' paper. Nonetheless, should they be the primary reason for desk rejection, they actually serve to undermine the broader academic discourse; particularly the contribution our critique can make to considering what constitutes genuine health communication in public health emergencies. Our paper's departure from a "progressive" imperative in its title and focal concepts should not trump its potential role in catalysing valuable discussions around medical/pharmaceutical propaganda, especially around the consequences of health communications from SA authorities being deceptive and potentially ill-suited to supporting informed consent. When combined with hefty financial incentives and the coercion of losing one's livelihood, it seems irrational to argue against the existence of a non-consensual approach: one threatening pregnant women, their foetuses and babies. Surely this warrants academic concern, as the antithesis of genuine health communication via persuasion that allows for free and informed consent?


The inspiration for our original manuscript


The original paper was drafted for a special issue of the Transdisciplinary Research Journal of Southern Africa, which focuses on 'Efficacy in health science communication in a post-pandemic age: Implications for Southern Africa'. In a small way, our review article was inspired as a critique of two assumptions in the opening paragraph of the special issue's call: (1) 'Much of the broad population and indeed more of the intelligentsia than one would imagine arguably remain to a greater or lesser degree sceptical of science' and (2) 'widespread suspicion of the origin of the virus seemingly fuelled by conspiracy theories, and of surprising levels of vaccine hesitancy voiced in a range of guises.'


In the first place, there is a difference between science and following The Science™ from a transglobal vaccine cartel. Individuals or groups did have sound scientific grounds to reject genetic vaccination. Indeed, individuals with PhDs were the most likely to reject being "vaccinated" with a rushed and poorly-tested product. Secondly, the theory that COVID-19 emerged from the Wuhan lab is not a "conspiracy theory", but one of four possible explanations (the others being zoonotic [animal-to-human] origins, a deliberate bio-weapon release, or a prior endemicity 'discovered' by an outbreak of testing).


To flag the danger of assumptions such as (1) and (2) being presented as "fact", our review originally sought to spotlight a major but neglected issue in the health communication field: what is pharmaceutical propaganda, and how does it differ from health communication? Media studies and health communication scholars should exercise hyper-reflexivity in considering how the communications they study typically emerge in an externally directed field. The field's solutionist emphasis is often driven by powerful external groupings' motives, such as those of national government departments or multinational pharmaceutical companies. Such actors can be incentivised to manipulate messaging for reasons other than a simple concern to protect the public's wellbeing during a perceived crisis or emergency.


Our reflexive article was originally rejected without explanation by one of the special issue's editors. I have tweeted about how such behaviour is unacceptable, and how AOSIS could update its policy to specify that an editor must provide explicit feedback on the reasons for desk rejection. This would meet COPE's guideline that editors meet the needs of authors. Otherwise, rejected authors might suspect that an AOSIS journal is not championing freedom of expression (but rather practising scientific suppression), and is not precluding business needs (e.g. pharmaceutical support) from compromising intellectual standards. Tackling the danger of "successful" communications for dangerous pharmaceutical interventions as pharmaceutical propaganda is important, particularly given the rise of health authoritarianism during a "pandemic".


Constructive criticism, plus new journal targets welcome?


We believe that our topic of how incentivization, coercion and deceptive COVID-19 messaging approximates a non-consensual approach is highly salient. Without sound rationales for the rejections of our paper, academic social networks seem the most promising fora for receiving constructive criticism. Drs Robinson, Bell and I welcome such feedback. Kindly also let me know in the comments below should you know of a health communication journal that supports COVID-19 dissent, champions academic freedom, and would be interested in giving our submission a fair review.


Future research


Dr Robinson & I are collating the accounts of prominent health experts who have described negotiating an academic journal publication firewall. There is an opportunity to formalise research into the problems of censorship and bias during COVID-19, documenting case studies and further evaluating what this tells us about academia. We will work on a formal research proposal that also includes developing an original definition for dissenters' 'academic journal publication firewall' experience(s).

Saturday, 29 March 2025

Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative

Written for researchers and others interested in the many techniques used to suppress COVID-19 dissidents' social media accounts, and digital voices.

There has been extensive censorship of legitimate, expert criticism during the COVID-19 event (Kheriaty, 2022; Shir-Raz et al., 2023; Hughes, 2024). Such scientific suppression makes visible the narrow frame within which the sponsors of global health authoritarianism permit questioning of The Science™. In contrast to genuine science, which innovates through critique, incorporated science does not welcome questioning. Like fascism, corporatist science views critiques of its interventions as heresy. In the COVID-19 event, key opinion leaders who criticised the lack of scientific rigour behind public health measures (such as genetic vaccine mandates) were treated as heretics by a contemporary version of the Inquisition (Malone et al., 2024). Dissidents were accused of sharing "MDM" (Misinformation, Disinformation and Malinformation) assumed to place the public's lives at risk. Particularly in prestigious medical universities, questioning the dictates of health authorities and their powerful sponsors was viewed as unacceptable, falling completely outside an Overton Window that had become far more restrictive due to fear-mongering around a "pandemic" (see Figure 1).




Figure 1. Narrowed Overton Window for COVID-19. Figures copied from pp. 137-138 of Dr Joseph Fraiman (2023), 'The dangers of self-censorship during the COVID-19 pandemic', in R. Malone, E. Dowd, & G. Fareed (Eds.), Canary In a Covid World: How Propaganda and Censorship Changed Our (My) World (pp. 132-147). Amazon Digital Services LLC - Kdp.


Higher Education is particularly susceptible to this groupthink, as it lends itself to a purity spiral, which in turn contributes to the growing spiral of silence around "unacceptable views". A purity spiral is a form of groupthink in which it is more beneficial to hold certain views than not to hold them. In a process of moral outbidding, individual academics with more extreme views are rewarded. This was evidenced at universities where genetic vaccine proponents loudly supported the mandatory vaccination of students, despite students facing minimal, if any, risk. In contrast, scholars expressing moderation, doubt or nuance faced ostracism as "anti-vaxxers". Universities' tight-knit communities exert strong pressures for social conformity. Grants, career support and other forms of institutional backing depend on collegiality and alignment with prevailing norms. Being labelled a contrarian for questioning a 'sacred cow', such as "safe and effective" genetic vaccines, is likely to jeopardise one's reputation and academic future. Academic disciplines coalesce around shared paradigms and axiomatic truths, routinely amplifying groupthink. Challenging reified understandings as shibboleths can lead to exclusion from conferences and journals, and cost scholars departmental, faculty, and even university support; particularly where powerful funders object to such dissent!


Here, administrative orthodoxy can signal an "official" position for the university that chills debate. Dissenters' fears of isolation and reprisal (such as poor evaluations and formal complaints for not following the official line) may convince them to self-censor. This is particularly so where nonconformists assess that the strength of opinion against their view is virulent, and that the costs of expressing a disagreeable viewpoint are high, such as negotiating cancellation culture. Individuals who calculate that they have a low chance of convincing others, and are likely to pay a steep price, self-censor and contribute to the growing spiral of silence. The COVID-19 event serves as an excellent example of this growing spiral's chilling effect on free speech and independent enquiry.


COVID-19 is highly pertinent for critiquing censorship in the Medical and Health Sciences, particularly as it featured conflicts of interest that shaped global health "authorities'" policy guidance. Notably, the World Health Organisation promoted poorly substantiated and even unscientific guidelines (Noakes et al., 2021) that merit being considered MDM. In following such dictates from the top policy makers of the Global Public-Private Partnership (GPPP or G3P), most governments' health authorities seemed to ignore key facts. Notably: i. COVID-19 risk was steeply age-stratified (Verity et al., 2019; Ho et al., 2020; Bergman et al., 2021); ii. prior COVID-19 infection can provide substantial immunity (Nattrass et al., 2021); iii. COVID-19 genetic vaccines did not stop disease transmission (Eyre et al., 2022; Wilder-Smith, 2022); iv. mass-masking was ineffective (Jefferson et al., 2023; Halperin, 2024); v. school closures were unwarranted (Wu et al., 2021); and vi. there were better alternatives to lengthy, whole-society lockdowns (Coccia, 2021; Gandhi and Venkatesh, 2021; Herby et al., 2024). Both international policy makers' and local health authorities' flawed guidance must be open to debate and rigorous critique. If public health interventions had been adapted to such key facts during the COVID-19 event, the resulting revised guidance could well have contributed to better social, health and economic outcomes for billions of people!


This post focuses on six types of suppression techniques used against dissenting accounts whose voices were deemed illegitimate "disinformation" spreaders by the Global Public-Private Partnership (G3P)-sponsored censorship industrial complex. This is an important concern, since claims that suppressing free speech's digital reach can "protect public safety" were proved false during COVID-19. A case in point is the censorship of criticism against employees' vaccine mandates. North American employers' mandates are directly linked to excess disabilities and deaths for hundreds of thousands of working-age employees (Dowd, 2024). Deceptively censoring individuals' reports of vaccine injuries as "malinformation", or automatically labelling criticism of Operation Warp Speed as "disinformation", hampered US employees' abilities to make fully informed decisions on the safety of genetic vaccines. Such deleterious censorship must be critically examined by academics. In contrast, 'disinformation-for-hire' scholars (Harsin, 2024) will no doubt remain safely ensconced behind their profitable MDM blinkers.


This post is the first in a series that spotlights the myriad of account suppression techniques that exist. For each, examples of censorship against health experts' opinions are provided. Hopefully, readers can then better appreciate the asymmetric struggle that dissidents face when their accounts are targeted by the censorship industrial complex with a myriad of these strategies spanning multiple social media platforms:


Practices for @Account suppression


#1 Deception - users are not alerted to unconstitutional limitations on their free speech


Social media users might assume that their constitutional right to free speech as citizens will be protected within, and across, digital platforms. However, global platforms may not support such rights in practice. No social media company openly discloses the extent to which users' accounts have been, and are being, censored for expressing opinions on controversial topics. Nor do these platforms explicitly warn users what they consider impermissible opinions. Consequently, users are not forewarned about what may result in censorship. For example, many COVID-19 dissidents were surprised that their legitimate critiques could result in account suspensions and bans (Shir-Raz, 2022). Typically, Facebook, Google, LinkedIn, TikTok, Twitter and YouTube justified such censorship as users' violation of "community rules". In most countries, freedom of speech is a citizen's constitutional right that should be illegal to override. It should be deeply concerning that such protections were not upheld in the Fourth Estate of the digital public square during the COVID-19 event. Instead, the supra-national interests of health authoritarians came to supersede national laws in order to prevent (unproven) harms. This pattern of censorship is noticeable in many other scientific issue arenas, ranging from criticism of man-made climate change to skeptics challenging transgender medical ideology.

#2 Cyberstalking - facilitating the virtual and physical targeting of dissidents


An individual who exercises his or her voice against official COVID-19 narratives can expect to receive both legitimate, pro-social criticism and unfair, anti-social criticism. While cyberstalking should be illegal, social media platforms readily facilitate the stalking and cyber-harassment of dissidents. An extreme example is Dr Christine Cotton's experience on LinkedIn. Dr Cotton was an early whistleblower (January 2022) against the false claims of 95% efficacy in Pfizer's COVID-19 clinical trial.
Her report identified the presence of bias and major deviations from good clinical practice. In press interviews, she reported that the trial did 'not support validity in terms of efficacy, immunogenicity and tolerance of the results provided in the various Pfizer clinical reports that were examined in the emergency by the various health authorities'. Christine shared this report with her professional network on LinkedIn, asking for feedback from former contacts in the pharmaceutical industry. The reception was mostly positive, but the report and related posts were subject to rapid content takedowns by LinkedIn, ostensibly for not meeting community standards. At the same time, her profile became hyper-surveilled: it attracted unexpected visits from 133 lawyers, the Ministry of Defence, employees of the US Department of State, the World Health Organisation, and others (p. 142). None of these profile viewers contacted her directly.

#3 Othering - enabling public character assassination via cyber smears


Othering is a process whereby individuals or groups are defined, labelled or targeted as not fitting within the norms of a social group. It influences how people perceive and treat those viewed as part of the in-group versus those in an out-group. At a small scale, othering can result in a scholar being ostracised from their university department following academic mobbing and online academic bullying (Noakes & Noakes, 2021). At a large scale, it entails a few dissidents on social media platforms being targeted for hypercriticism by gangstalkers.

Cyber-gangstalking is a form of cyber-harassment that follows cyberstalking, whereby a group of people target an individual online. Such attacks can involve gossip, teasing and bad-jacketing, repeated intimidation and threats, plus other fear-inducing behaviours. Skeptics' critical contributions can become swamped by pre-bunkers and fellow status-quo defenders. Such pseudo-skeptics may be sponsored to trivialise dissenters' critiques, thereby contributing to a fact choke against unorthodox opinions.

In Dr Christine Cotton's case, in March 2022 her name was disclosed in a list forming part of a French Senate investigation into adverse vaccine events. A 'veritable horde of trolls seemingly emerged out of nowhere and started attacking' her 'relentlessly' (p. 143). These trolls were inter-connected through subscribing to each other's accounts, which allowed them to synchronise their attacks. They attempted to propagate as much negative information about Dr Cotton as possible in a 'Twitter harassment scene'. Emboldened by their anonymity, the self-proclaimed "immense scientists", with masters degrees in virology, vaccines, clinical research and biostatistics, launched a character assassination. They attacked her credentials and work history, whilst creating false associations ("Freemasonry" and "Illuminati").

This suggests how identity politics sensibilities and slurs are readily misused against renegades. In the US, those questioning COVID-19 policies were labelled "far right" or "fascist", despite promoting a libertarian critique of healthcare authoritarianism! In addition, orchestrators of cybermobbing tagged dissidents' accounts as those of someone who is: 'anti-science', 'an anti-vaxxer', 'biased', 'charlatan', 'celebrity scientist', 'conspiracy theorist', 'controversial', 'COVID-19 denier', 'disgraced scientist', 'formerly-respected', 'fringe expert', 'grifter', 'narcissist with a Galileo complex', 'pseudo-scientist', 'quack', 'salesman', 'sell-out' and 'virus', amongst other pejoratives. Such terms are used as a pre-emptive cognitive vaccine whose hypnotic language patterns ("conspiracy theorist") are intended to thwart audience engagement with critical perspectives. Likewise, these repeatedly used terms help grow a digital pillory that becomes foregrounded in the pattern of automated suggestions in search engine results.

In this Council of the Cancelled, Mike Benz, Prof Jay Bhattacharya, Nicole Shanahan and Dr Eric Weinstein speculate about hidden censorship architectures. One example is Google's automated tagging of "controversial" public figures. These tags can automatically feature in major mainstream news articles about COVID-19 dissidents. This is not merely a visual tag, but a cognitive one: it marks "controversial" individuals with a contemporary (digital) scarlet letter.

In Dr Cotton's case, some trolls smeared her work raising awareness of associations for the vaccine-injured as helping "anti-vaccine conspiracy sites". She shares many cases of these injuries in her book, and was amazed at the lack of empathy that Twitter users showed not just to her, but also to those suffering debilitating injuries. In response, she featured screenshots of select insults on her blog at https://christinecotton.com/critics and blocked 'hundreds of accounts' online. In checking the Twitter profiles attacking her, she noticed that many with 'behavioural issues were closeby'. Dr Cotton hired a 'body and mind' guard from a security company for 24-hour protection. Her account was reported for "homophobia", which led to its temporary closure. After enduring several months of cyber-harassment by groups, behaviour that can be severely punished under EU law, Dr Cotton decided to file complaints against some of them. Christine crowdfunded legal complaints against Twitter harassers from a wide variety of countries. This sought to work around how cyber-harassers assume anonymity shields them from lawsuits for defamation, harassment and public insults.

#4 Not blocking impersonators or preventing brandjacked accounts


Impersonators' accounts claiming to belong to dissidents can quickly pop up on social media platforms. While a few may be genuine parodies, others serve identity-jacking purposes. Some serve criminal ends, in which scammers use fake celebrity endorsements to phish "customers'" financial details for fraud. Alternatively, intelligence services may use brandjacking for covert character-assassination smears against dissidents.

The independent investigative journalist, Whitney Webb, has tweeted about her ongoing YouTube experience of having her channel's content buried under a fact choke of short videos created by other accounts:

Whether such activities stem from intelligence services or cybercriminals, they are very hard for dissidents and/or their representatives to respond to effectively. Popular social media companies (notably META, X and TikTok) seldom respond quickly to scams, or to the digital "repersoning" discussed in a Corbett Report conversation between James Corbett and Whitney Webb.
 
In Corbett's case, after his account was scrubbed from YouTube, many accounts featuring his identity started cropping up there. In Webb's case, she has no public profile outside of X, but accounts featuring her identity were created on Facebook and YouTube. "Her" channels clipped old interviews she had given and edited them into documentaries on material Whitney has never publicly spoken about, such as Bitcoin and CERN. They also misrepresented her views on the transnational power structure behind the COVID-19 event, suggesting she held just Emmanuel Macron and Klaus Schwab responsible for driving it. They used AI-generated thumbnails of her, and superimposed her own words onto the interviews. Such content proved popular and became widely reshared via legitimate accounts, pointing to the difficulty dissidents face in countering it. She could not get Facebook to take down the accounts without supplying a government-issued ID to verify her own identity.


Digital platforms may be uninterested in offering genuine support: they may take no corrective action unless they are following proxy orders (aka 'jawboning') from the US Department of State or members of the Five Eyes (FVEY) intelligence alliance. In stark contrast to marginalised dissenters, VIPs in multinationals enjoy access to executive online-threat-protection services (such as ZeroFox) that cover brandjacking and over 100 other cybercriminal use-cases.

#5 Filtering an account's visibility through ghostbanning


As the Google Leaks (2019), Facebook Files (2021) and Twitter Files (2022) revelations have spotlighted, social media platforms have numerous algorithmic censorship options, such as filtering the visibility of users' accounts. Targeted users may be isolated and throttled for breaking "community standards" or government censorship rules. During the COVID-19 event, dissenters' accounts were placed in silos, de-boosted, and also subjected to reply de-boosting. Contrarians' accounts were subject to ghostbanning (aka shadowbanning), a practice that secretly reduces an account's visibility or reach without notifying its owner. Ghostbanning limits who can see the posts, comments, or interactions. This includes muting replies and excluding targeted accounts' results from trends, hashtags, searches and followers' feeds (except where users visit a filtered account's profile directly). Such suppression effectively silences a user's digital voice, whilst he or she continues to post under the illusion of normal activity. Ghostbanning is thus a "stealth censorship" tactic linked to content moderation agendas.
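To make the mechanism concrete, the visibility filtering described above can be sketched as a simple surface-by-surface check in a feed pipeline. This is purely illustrative: the flag names and logic below are assumptions modelled on the leaked labels, not any platform's actual code.

```python
# Illustrative sketch only: hypothetical visibility flags modelled on the
# leaked labels ("Search Blacklist", "Trends Blacklist", "Do Not Amplify").
# All names and rules here are assumptions, not real platform code.

def visible(account_flags, surface, viewing_profile_directly=False):
    """Return True if an account's post surfaces in the given context.

    The key property of ghostbanning: a flagged account still appears
    when a user visits its profile directly, so the owner sees nothing
    amiss -- which is what makes the suppression 'stealth'.
    """
    if viewing_profile_directly:
        return True  # direct profile views are never filtered
    if surface == "search" and "search_blacklist" in account_flags:
        return False  # hidden from search results
    if surface == "trends" and "trends_blacklist" in account_flags:
        return False  # hidden from trends and hashtags
    if surface == "feed" and "do_not_amplify" in account_flags:
        return False  # excluded from followers' algorithmic feeds
    return True
```

For example, an account flagged with `{"search_blacklist", "do_not_amplify"}` would vanish from search and feeds while remaining reachable via its profile page, leaving its owner under the illusion of normal activity.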

This term gained prominence with the example of the Great Barrington Declaration's authors, Professors Jay Bhattacharya, Martin Kulldorff, and Sunetra Gupta. Published on October 4, 2020, this public statement and proposal flagged grave concerns about the damaging physical and mental health impacts of the dominant COVID-19 policies. It argued that focused protection should be pursued rather than blanket lockdowns, and that allowing controlled spread among low-risk groups would eventually result in herd immunity. Ten days later, a counter-statement, the John Snow Memorandum, was published in defence of the official COVID-19 narrative's policies. Mainstream media and health authorities amplified it, as did social media, given the memorandum's alignment with prevailing platform policies against "misinformation" circa 2020. In contrast, the Great Barrington Declaration was targeted indirectly through platform actions against its proponents and related content:


The Stanford Professor of Medicine, Dr Jay Bhattacharya's Twitter account was revealed (via the 2022 Twitter Files) to have been blacklisted. His tweets questioning lockdown efficacy and vaccine mandates were subject to algorithmic suppression: algorithms could flag his offending content with labels like "Visibility Filtering" (VF) or "Do Not Amplify", reducing its reach. For instance, Bhattacharya reported that his tweets about the Declaration and seroprevalence studies (showing wider COVID-19 spread than official numbers suggested) were throttled. Journalist Matt Taibbi's reporting on the Twitter Files leaks confirmed that Twitter had blacklisted Prof Bhattacharya's account, limiting its reach due to his contrarian stance. YouTube also removed videos in which he featured, such as interviews in which he criticised lockdown policies.

The epidemiologist and biostatistician, Prof Kulldorff, observed that social media censorship stifled opportunities for scientific debate. He experienced direct censorship on multiple platforms, including shadowbans. Twitter temporarily suspended his account in 2021 for tweeting that not everyone needed the COVID-19 vaccine ('Those with prior natural infection do not need it. Nor children'). Posts on X and web reports indicate Kulldorff was shadowbanned beyond this month-long suspension. The Twitter Files, released in 2022, revealed he was blacklisted, meaning his tweets' visibility was algorithmically reduced. Twitter suppressed Kulldorff's accurate genetic vaccine critique, preventing comments and likes. Internal Twitter flags like "Trends Blacklisted" or "Search Blacklisted" (leaked during the 2020 Twitter hack) suggest Kulldorff's account was throttled in searches and trends, a hallmark of shadowbanning where reach is curtailed without notification. Algorithmic deamplification excluded Prof Kulldorff's tweets from trends, search results and followers' feeds, except where users sought his profile directly. This reflects how social media companies may apply visibility filters (such as a Not Safe For Work (NSFW) view). Kulldorff also flagged that LinkedIn's censorship pushed him to platforms like Gab, implying a chilling effect on his professional network presence.


An Oxford University epidemiologist, Professor Gupta faced less overt account-level censorship, but still had to contend with content suppression. Her interviews and posts on Twitter advocating herd immunity via natural infection amongst the young and healthy were often flagged or down-ranked.


#6 Penalising accounts that share COVID-19 "misinformation"


In addition to ghostbanning, social media platforms could target accounts for sharing content on COVID-19 that contradicted guidance from the Global Public-Private Partnership (G3P)'s macro-level stakeholders, such as the Centers for Disease Control or the World Health Organisation. In Twitter's case, it introduced a specific COVID-19 misinformation policy in March 2020, which prohibited claims about transmission, treatments, vaccines, or public health measures that the COVID-19 hegemony deemed "false or misleading." Such content either had warning labels added to it, or was automatically deleted:

Tweets with suspected misinformation, disinformation or malinformation (MDM) were tagged with warnings like "This claim about COVID-19 is disputed", or with labels linking to curated "fact-checks" on G3P health authority pages. This was intended to reduce a tweet's credibility without immediate removal, whilst also diminishing its poster's reputation.

Tweets that broke this policy were deleted outright after flagging by automated systems or human moderators. For instance, Alex Berenson's tweets questioning lockdown efficacy were removed, contributing to his eventual ban in August 2021. In Dr Christine Cotton's case, Twitter classified her account as "sensitive content", and it gradually lost visibility with the tens of thousands of followers it had attracted. In response, she created a new account to begin 'from scratch' in August 2022. The Twitter Files revealed that such censorship was linked to United States government requests (notably from the Joe Biden administration and the Federal Bureau of Investigation). For example, 250,000 tweets flagged by Stanford's Virality Project in 2021 were removed by Twitter.

In March 2020, Meta expanded its misinformation policies to target COVID-19-related MDM. Facebook and Instagram applied content labelling and down-ranking: posts allegedly featuring MDM were labelled with warnings (such as 'False Information' or 'See why health experts say this is wrong') that linked to official sources. Such posts were also down-ranked in the News Feed to reduce their visibility. Users were notified of violations and warned that continued sharing could further limit reach or lead to harsher action. In late 2021, down-ranking was also applied to "vaccine-skeptical" content that did not explicitly violate rules but might discourage vaccination. Posts violating policies were removed outright.

LinkedIn's smaller, professional user base and lower emphasis on real-time virality led it to prefer the outright removal of accounts over throttling via shadowbans. Accounts identified as posting MDM could face temporary limits, such as restricted posting privileges or an inability to share articles for a set period. LinkedIn users received warnings after a violation, often with a chance to delete the offending post themselves to avoid further action. Such notices cited the policy breach, linking to LinkedIn's stance on official health sources. This approach to COVID-19 MDM followed LinkedIn's broader moderation tactics for policy violations.
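Across these platforms, the penalties described above (warning label, down-ranking, temporary restrictions, then removal) amount to a strike-based enforcement ladder. The sketch below is a hypothetical illustration of that escalation; the action names and one-strike-per-step thresholds are assumptions for illustration, not documented platform values.

```python
# Hypothetical strike-based enforcement ladder, as described above:
# label -> down-rank -> restrict posting -> remove account.
# Action names and thresholds are illustrative assumptions only.

ACTIONS = ["label", "down_rank", "restrict_posting", "remove_account"]

def enforcement_action(strike_count: int):
    """Map an account's accumulated 'misinformation' strikes to a penalty.

    Returns None for accounts with no strikes; penalties escalate with
    each further strike and cap at account removal.
    """
    if strike_count <= 0:
        return None
    return ACTIONS[min(strike_count, len(ACTIONS)) - 1]
```

Under this sketch, a first flagged post merely earns a label, while repeat "offenders" like the accounts discussed above progress through down-ranking and posting restrictions towards removal.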

In Dr Cotton's case, she shared her critique of Pfizer's COVID-19 clinical trial on LinkedIn to get feedback from her professional network of former contacts in the pharmaceutical industry. Her first post was removed within 24 hours (p.142), and her second within an hour. This hampered her ability to debate the methodology of Pfizer's trial with competent peers. Prof Kulldorff also had two posts deleted in August 2021: one linking to an interview on vaccine mandate risks and another reposting an Icelandic health official's comments on herd immunity.

Accounts that posted content with links to external, alternative, independent media (such as Substack articles or videos on Rumble) also saw such posts down-ranked, hidden or automatically removed.

This is the first post on techniques for suppressing health experts' social media accounts (and the second on COVID-19 censorship in the Fifth Estate). My next in the series will address more extreme measures against COVID-19 dissidents, with salient examples.

Please follow this blog or me on social media to be alerted of the next post. If you'd like to comment, please share your views below, ta.

Friday, 23 December 2022

A summary of 'Who is watching the World Health Organisation? ‘Post-truth’ moments beyond infodemic research'

Written for infodemic/disinfodemic researchers and those interested in the scientific suppression of COVID-19 dissidents.

The opinion piece by Dr David Bell, Emeritus Professor Tim Noakes and myself, 'Who is watching the World Health Organisation? ‘Post-truth’ moments beyond infodemic research', is available at https://td-sa.net/index.php/td/article/view/1263. It was written for a special issue, 'Fear and myth in a post-truth age', of the Journal for Transdisciplinary Research in Southern Africa (see the call at https://aosis.co.za/call-for-papers-special-collection-in-journal-for-transdisciplinary-research/).

A major criticism this paper raises is that infodemic research lacks earnest discussion of where health authorities' own choices and guidelines might contribute to 'misinformation', 'disinformation' and even 'malinformation'. Rushed guidance based on weak evidence from international health organisations can perpetuate negative health and other societal outcomes, not ameliorate them! If health authorities' choices are not up for review and debate, there is a danger that a hidden goal of the World Health Organisation's (WHO) infodemic research (and related disinfodemic research its funders support) could be to direct attention away from those funders' multitude of failures in fighting pandemics with inappropriate guidelines and measures.

In The regime of ‘post-truth’: COVID-19 and the politics of knowledge (at https://www.tandfonline.com/doi/abs/10.1080/01596306.2021.1965544), Kwok, Singh and Heimans (2021) describe how the global health crisis of COVID-19 presents fertile ground for exploring the complex division of knowledge labour in a ‘post-truth’ era. Kwok et al. (2021) illustrate this by describing COVID-19 knowledge production at universities. Our paper focuses on the relationships between health communication, public health policy and recommended medical interventions.

Divisions of knowledge labour are described for (1) the ‘infodemic/disinfodemic research agenda’, (2) ‘mRNA vaccine research’ and (3) ‘personal health responsibility’. We argue for exploring the intra- and inter-relationships between influential knowledge development fields; in particular, the vaccine-manufacturing pharmaceutical companies that drive and promote mRNA knowledge production. Within divisions of knowledge labour (1-3), we identify key inter-group contradictions between the interests of agencies and their contrasting goals. Such conflicts are useful to consider in relation to potential gaps in the WHO’s infodemic research agenda:

For (1), a key contradiction is that infodemic scholars who benefit from health authority funding may face difficulties questioning that funding body's "scientific" guidance. We flag how the WHO's advice for managing COVID-19 departed markedly from a 2019 review of evidence it commissioned (see https://www.ncbi.nlm.nih.gov/pubmed/35444988).

(2)’s division features very different contradictions. Notably, the pivotal role that pharmaceutical companies have in generating vaccine discourse is massively conflicted. A conflict of interest arises in pursuing costly research on novel mRNA vaccines, because whether the company producing these therapies ultimately benefits financially from their future sales depends entirely on the published efficacy and safety results of its own research. The division of knowledge labour for (2) mRNA vaccine development should not be considered separately from COVID-19 knowledge production in Higher Education, or from the (1) infodemic research agenda. Multinational pharmaceutical companies direct the research agenda in academia and medical research discourse through the lucrative grants they distribute. Research organisations dependent on external funding to cover budget shortfalls will be more susceptible to the influence of those funders on their research programmes.


We spotlight the overwhelming evidence for the importance of (3) personal responsibility. In the COVID-19 pandemic, its discourses seemed largely ignored by Higher Education leadership and government. We flag how contradictions in (3)’s division of knowledge labour in a pandemic can explain such neglect. Personal responsibility is not a commercial site for generating large profits, some of which may be donated to support academic research. Research into effective, low-cost interventions seems to be at odds with the economic interests of both grant recipients and Big Pharma donors. Replacing costly treatments with low-cost alternatives would not only greatly diminish the profitability of existing funders, but also reduce the pool of new ones, plus the size of future donations. It is also important to reflect on how else the scientific enterprise at universities lends itself to being an arena for misinformation. New information in science that refutes existing dogma does not become accepted immediately. Therefore a period exists when new ideas will be considered misinformation, especially by those with an agenda to suppress their acceptance.

However, from the perspective of orthodoxy, views that support new paradigms are unverified knowledge (and potentially "misinformation"). Any international health organisation that wishes to be an evaluator must have the scientific expertise to manage this ongoing ‘paradox’, or irresolvable contradiction. Organisations such as the WHO may theoretically be able to convene such knowledge, but their dependency on funding from conflicted parties would normally render them ineligible for such a task. This is particularly salient where powerful agents can collaborate across divisions of knowledge labour to establish an institutional oligarchy. Such hegemonic collaboration can suppress alternative viewpoints that contest and query powerful agents’ interests.

It is concerning how many Communication and Media Studies researchers are ignoring such potential abuse of power, whilst supporting censorship of dissenters based on unproven "harms". Embedded researchers seem to ignore that the Centers for Disease Control, the National Institutes of Health and the WHO’s endorsement of multinational pharmaceutical companies’ products is a particularly troubling development: it marks a ‘new normal’ of institutional capture, in which industry sponsors regulators who become its ‘lobbyists’. This contrasts with the siloed efforts of external influence in the past, for example by lobbyists working for Big Tobacco or Big Food. They spun embedded scientific research touting the ‘benefits’ of smoking and processed foods, while evidence of harm was attacked as "junk science".

At least with cigarettes and ultra-processed foods, many individuals have the choice of whether to buy them. In stark contrast, tax-paying publics have no such option for avoiding the steep costs of mRNA vaccines. Public taxes pay for these treatments, while less expensive and potentially more effective interventions are ignored. Paying for vaccines diverts funding from interventions that would address wider and more pressing global health needs; in particular, poverty, malaria, tuberculosis and T2DM.

This paper alerts researchers to a broad range of ‘post-truth’ moments and flags the danger of relying on global health authorities to be the sole custodians of who may define what comprises an information disorder. Challenges to scientific propaganda from authorities captured by industry should not automatically be (mis)characterised as low-quality or harmful information. Rather, the digital voices of responsible dissenters can be valuable in protecting scientific integrity and public health (for example, @ProfTimNoakes should not be blocked from his Twitter account for expressing dissent!).

Image ™ @TexasLindsay_

Our article results from collaboration between The Noakes Foundation and PANDA. The authors thank JTSA’s editors for the opportunity to contribute to its special issue, the paper’s critical reviewers for their helpful suggestions and AOSIS for editing and proof-reading the paper.

This is the third publication from The Noakes Foundation’s Academic Free Speech and Digital Voices (AFSDV) project. Do follow me on Twitter or https://www.researchgate.net/project/Academic-Free-Speech-and-Digital-Voices-AFSDV for updates regarding it.


I welcome you sharing constructive comments, below.
