Wednesday, 9 April 2025
Wanted - Fair critics of 'Promoting Vaccines in South Africa: Consensual or Non-Consensual Health Science Communication?'
Written for health science communication researchers concerned that genetic vaccination promotion was non-consensual and a form of propaganda.
Since June 2023, Dr Piers Robinson, Dr David Bell and I have submitted the titular manuscript to nine journals, receiving no substantive reviews and many desk rejections without solid explanation. This is despite our journal search from 2024 focusing on seemingly suitable journals that met three criteria: tackling (i) health communication and (ii) propaganda, and (iii) having previously shared controversial articles questioning the official COVID-19 narrative. Since we cannot identify any viable new targets, we have decided to share our manuscript as a pre-print on SSRN and ResearchGate. We hope that readers there can at least offer solid, constructive criticism of our work.
As scholars know, every journal submission can take many hours: preparing the related documentation, formatting the manuscript to a journal's stylistic specifications, and so on. To compensate for such lengthy academic labour, authors might reasonably expect editorial teams to be highly ethical in providing detailed reasoning behind desk rejections. Where such feedback is consistently absent or poor on controversial topics, dissident authors may justifiably perceive that they are negotiating an academic journal publication firewall. Why would editors be reluctant to go on record with their reasons for desk rejection, if those reasons are indisputable? Even when editorial feedback is highly critical, it is still constructive: authors can then completely revise their manuscript for submission to new journals, or save time by confronting the reality that a manuscript making a weak contribution, or none, must be abandoned!
Our frustration at not receiving constructive criticism echoes the accounts of many other dissenters from the official COVID-19 narrative. Notably, Professors Bhattacharya and Hanke (2023) documented dissidents' censorship experiences with popular pre-print services. Professor Norman Fenton (in Fighting Goliath, 2024) and Dr Robert Malone (in PsyWar, 2024) provide compelling accounts of shifting from being welcome journal authors and conference speakers to being unpublishable for any manuscript critical of COVID-19 statistics or treatment policies. Given their high levels of expertise and long publication records, such experts seem unlikely to have produced fallacious research unsuited to peer review.
Our would-be journal article tackles an important, albeit controversial, question: how might pharmaceutical or medical propaganda be distinguished from health communication? South Africa's (SA) case of COVID-19 genetic vaccine promotion is described to show how incentivization, coercion and deceptive messaging approximated a non-consensual approach, preventing informed consent for pregnant women. In terms of generalisability, this case study can be described as a hard case: given that pregnant women are perhaps the most vulnerable and protected category in society, one expects health communicators to be extremely cautious about adopting non-consensual methods of persuasion. We show that non-consensual persuasion was nonetheless used in South Africa, making it more likely that such tactics were also used for other, less vulnerable groups.
In desk rejecting our work, editors and reviewers may well have thought that evaluating persuasive communication in terms of whether or not it is deceptive and non-consensual is not, in some sense, a legitimate research question. In stark contrast, as Dr Piers Robinson argues (at the end of this LinkedIn thread), our research question is indeed 'an essential part of evaluating whether any given persuasion campaign can be said to meet appropriate ethical/democratic standards. With the attention to fake news and disinformation, there is in fact much in the way of scholarly attention to questions of deceptive or manipulative communication. So we are not asking a question that is not asked by many others and across a wide range of issue areas. And we utilised a conceptual framework developed and published elsewhere.'
Another concern may be that our manuscript is "biased" to 'reach a predetermined outcome'. This ignores the possibility that our work could have found no evidence of deceptive communication, and none of incentivization. However, the evidence presented does strongly support a major concern that pregnant women were incentivised, deceived and coerced into taking poorly-tested genetic vaccines, whose side-effects are also poorly tracked. In the absence of detailed rejection feedback from editors, it is hard for us to improve our argument in the hope of a fair peer review.
It's also important to acknowledge the context in which our paper was written: one of considerable scientific concern over the COVID-19 event. Notably, rushed guidance from international health organisations, based on weak evidence, could well have perpetuated negative health and other societal outcomes rather than ameliorating them (Noakes, Bell & Noakes, 2022). In particular, health authorities rushed approval of genetic vaccines as the primary response, and their "health promotion" seems a ripe target for robust critique, particularly when successful early treatments were widely reported to have been suppressed so that Emergency Use Authorisation for genetic vaccines could be granted (Kennedy, 2021).
An unworthy topic?
Our negative experience of repeated, poorly (or un-) explained rejections suggests that presenting South Africa's case of COVID-19 genetic vaccine promotion as pharmaceutical/medical propaganda was not considered worthy of academic journals' review, even by those promising to tackle scientific controversies and challenging topics.
Not unexpectedly, SSRN removed our pre-print after a week, providing the following email rationale: 'Given the need to be cautious about posting medical content, SSRN is selective on the papers we post. Your paper has not been accepted for posting on SSRN.' So, no critique of the paper's facts or methods, just rapid removal of our COVID-19 "health communication" critique. In SSRN's defence, its website's FAQs do flag that 'Medical or health care preprints at SSRN are designed for the rapid, early dissemination of research findings; therefore, in most instances, we do not post reviews or opinion-led pieces, as well as editorials and perspectives.' So perhaps the latter concern was indeed the most significant factor in SSRN's decision... But with no explicit, specific explanation of its rationale, it is also possible that our critique of COVID-19 "health science communication" weighed more heavily with the human decision makers. Alternatively, an Artificial Intelligence agent wrote the rejection email, triggered by our sensitive keywords: COVID-19 + propaganda = a must-reject routine.
A history of a manuscript's rejection in one image
Over two years, we also refined our manuscript to focus narrowly on 'non-consensual Health Science Communication', rather than propaganda. While the latter term is accurate, we recognised that it could be too contentious for some editors and reviewers, so we revised the initial title. Our analysis was clearly bounded to describe the ways in which non-consensual persuasion tactics were employed in South Africa to promote uptake of the COVID-19 vaccines. There are several vulnerable categories (such as teenagers), and we decided to focus on pregnant women, or women wanting to become mothers. We explored the local incentives and coercive measures (both consensual and non-consensual) that were used in South Africa during the COVID-19 event. Our manuscript then critiqued deceptive messaging on the safety of the Pfizer BioNTech Comirnaty® vaccine in a Western Cape government flyer. We also examined the South African Health Products Regulatory Authority's vaccine safety monitoring and reporting of adverse events following immunisation (SAHPRA AEFI), contrasting how it does (or does not) report on outcomes for women's health with the Vaccine Adverse Event Reporting System (VAERS SA). If there is a methodological flaw in this approach, we are open to suggestions on improving it.
That said, there are some changes that we would like an opportunity to argue against. For example, our title might be criticised for not addressing harms to "pregnant people". However, following such advice would distract from how genetic vaccines have proven especially damaging to biological females. Likewise, our definition of "health science communication" can be criticised as a narrow one, especially given South Africa's myriad health contexts. While this is true, and we should acknowledge this limitation, we must also prioritise what is core within a 10,000-word limit. Expanding our focus to a broad view of science communication in SA would inevitably require removing evidence related to the Organised Persuasive Communication Framework's consensual versus non-consensual aspects. This would distract from our paper's core focus.
The inspiration for our original manuscript
The original paper was drafted for a special issue of the Transdisciplinary Research Journal of Southern Africa, which focuses on ‘Efficacy in health science communication in a post-pandemic age: Implications for Southern Africa’. In a small way, our review article was inspired as a critique of two assumptions in the opening paragraph of the special issue's call: (1) 'Much of the broad population and indeed more of the intelligentsia than one would imagine arguably remain to a greater or lesser degree sceptical of science' and (2) 'widespread suspicion of the origin of the virus seemingly fuelled by conspiracy theories, and of surprising levels of vaccine hesitancy voiced in a range of guises.'
In the first place, there is a difference between science and following The Science™ from a transglobal vaccine cartel. Individuals and groups did have sound scientific grounds to reject genetic vaccination. Indeed, individuals with PhDs were the most likely to reject being "vaccinated" with a rushed and poorly-tested product. Secondly, the theory that COVID-19 emerged from the Wuhan lab is not a "conspiracy theory", but just one of four possible explanations (the others being zoonotic (animal-to-human) origins, a deliberate bio-weapon release, or a prior endemicity ‘discovered’ by an outbreak of testing).
To flag the danger of assumptions such as (1) and (2) being presented as fact, our review originally sought to spotlight a major but neglected issue in the health communication field: what is pharmaceutical propaganda, and how does it differ from health communication? Media studies and health communication scholars should exercise hyper-reflexivity in considering how the communications they study typically emerge in an externally directed field. The field's solutionist emphasis is often driven by the motives of powerful external groupings, such as national government departments or multinational pharmaceutical companies. Such actors can be incentivised to manipulate messaging for reasons other than a simple concern to protect the public's wellbeing during a perceived crisis or emergency.
Our reflexive article was originally rejected without explanation by one of the special issue’s editors. I have tweeted about how such behaviour is unacceptable, and about how AOSIS could update its policy to specify that an editor must provide explicit feedback on the reasons for desk rejection. This would meet COPE’s guideline that editors meet the needs of authors. Otherwise, rejected authors might suspect that an AOSIS journal is not championing freedom of expression (and is rather practising scientific suppression), and is not precluding business needs (e.g. pharmaceutical support) from compromising intellectual standards. Tackling the danger of “successful” communications for dangerous pharmaceutical interventions as pharmaceutical propaganda is important, particularly given the rise of health authoritarianism during a “pandemic”.
Constructive criticism, plus new journal targets welcome?
We believe that our topic of how incentivization, coercion and deceptive COVID-19 messaging approximates a non-consensual approach is highly salient. Without sound rationales for the rejections of our paper, academic social networks seem the most promising fora for receiving constructive criticism. Drs Robinson, Bell and I welcome such feedback. Kindly also let me know in the comments below if you know of a health communication journal that supports COVID-19 dissent, champions academic freedom and would be interested in giving our submission a fair review.
Future research
Saturday, 29 March 2025
Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative
There has been extensive censorship of legitimate, expert criticism during the COVID-19 event (Kheriaty, 2022; Shir-Raz et al., 2023; Hughes, 2024). Such scientific suppression makes visible the narrow frame within which the sponsors of global health authoritarianism permit questioning of The Science™. In contrast to genuine science, which innovates through critique, incorporated science does not welcome questioning. Like fascism, corporatist science views critiques of its interventions as heresy. In the COVID-19 event, key opinion leaders who criticised the lack of scientific rigour behind public health measures (such as genetic vaccine mandates) were treated as heretics by a contemporary version of the Inquisition (Malone et al., 2024). Dissidents were accused of sharing "MDM" (Misinformation, Disinformation and Malinformation) assumed to place the public's lives at risk. Particularly in prestigious medical universities, questioning the dictates of health authorities and their powerful sponsors was viewed as unacceptable, falling completely outside an Overton Window that had become far more restrictive due to fear-mongering around a "pandemic" (see Figure 1).
Higher Education is particularly susceptible to this groupthink, as it lends itself to a purity spiral, which in turn contributes to the growing spiral of silence around "unacceptable views". A purity spiral is a form of groupthink in which it is more beneficial to hold some views than not to hold them. In a process of moral outbidding, individual academics with more extreme views are rewarded. This was evidenced at universities where genetic vaccine proponents loudly supported the mandatory vaccination of students, despite students facing minimal, if any, risk. In contrast, scholars expressing moderation, doubt or nuance faced ostracism as "anti-vaxxers". Universities also feature strong social conformity pressures within their tight-knit communities: grants, career support and other forms of institutional backing depend on collegiality and alignment with prevailing norms. Being labelled a contrarian for questioning a ‘sacred cow’, such as "safe and effective" genetic vaccines, is likely to jeopardise one’s reputation and academic future. Academic disciplines coalesce around shared paradigms and axiomatic truths, routinely amplifying groupthink. Challenging such reified understandings and shibboleths can lead to exclusion from conferences and journals, and can cost scholars departmental, faculty and even university support. Particularly where powerful funders object to such dissent!
Here, administrative orthodoxy can signal an “official” position for the university that chills debate. Dissenters' fears of isolation and reprisal (such as poor evaluations and formal complaints for not following the official line) may convince them to self-censor. This is particularly likely where the nonconformist assesses that opinion against his or her view is virulent, and that the costs of expressing a disagreeable viewpoint, such as negotiating cancellation culture, are high. Individuals who calculate that they have a low chance of convincing others, and are likely to pay a steep price for trying, self-censor and contribute to the growing spiral of silence. The COVID-19 event serves as an excellent example of this growing spiral’s chilling effect on free speech and independent enquiry.
COVID-19 is highly pertinent for critiquing censorship in the Medical and Health Sciences, particularly as it featured conflicts of interest that shaped the policy guidance of global health "authorities". Notably, the World Health Organisation promoted poorly substantiated and even unscientific guidelines (Noakes et al., 2021) that merit being considered MDM. In following such dictates from the top policy makers of the Global Public-Private Partnership (GPPP or G3P), most governments' health authorities seemed to ignore key facts. Notably: i. COVID-19 risk was steeply age-stratified (Verity et al., 2019; Ho et al., 2020; Bergman et al., 2021); ii. prior COVID-19 infection can provide substantial immunity (Nattrass et al., 2021); iii. COVID-19 genetic vaccines did not stop disease transmission (Eyre et al., 2022; Wilder-Smith, 2022); iv. mass-masking was ineffective (Jefferson et al., 2023; Halperin, 2024); v. school closures were unwarranted (Wu et al., 2021); and vi. there were better alternatives to lengthy, whole-society lockdowns (Coccia, 2021; Gandhi and Venkatesh, 2021; Herby et al., 2024). Both international policy makers' and local health authorities' flawed guidance must be open to debate and rigorous critique. If public health interventions had been adapted to such key facts during the COVID-19 event, the resulting revised guidance could well have contributed to better social, health and economic outcomes for billions of people!
This post focuses on six types of suppression techniques that were used against dissenting accounts deemed illegitimate "disinformation" spreaders by the Global Public-Private Partnership (G3P)-sponsored censorship industrial complex. This is an important concern, since claims that suppressing free speech's digital reach can "protect public safety" were proved false during COVID-19. A case in point is the censorship of criticism of employers' vaccine mandates. North American employers' mandates are directly linked to excess disabilities and deaths among hundreds of thousands of working-age employees (Dowd, 2024). Deceptively censoring individuals' reports of vaccine injuries as "malinformation", or automatically labelling criticism of Operation Warp Speed as "disinformation", hampered US employees' ability to make fully informed decisions on the safety of genetic vaccines. Such deleterious censorship must be critically examined by academics. In contrast, 'disinformation-for-hire' scholars (Harsin, 2024) will no doubt remain safely ensconced behind their profitable MDM blinkers.
This post is the first in a series that spotlights the myriad of account suppression techniques that exist. For each, examples of censorship against health experts' opinions are provided. Hopefully, readers can then better appreciate the asymmetric struggle that dissidents face when their accounts are targeted by the censorship industrial complex with a myriad of these strategies spanning multiple social media platforms:
Practices for @Account suppression
#1 Deception - users are not alerted to unconstitutional limitations on their free speech
#2 Cyberstalking - facilitating the virtual and physical targeting of dissidents
#3 Othering - enabling public character assassination via cyber smears
#4 Not blocking impersonators or preventing brandjacked accounts
Instead of blanket censorship, I am having YouTube bury all my actual interviews/content with videos that use short, out of context clips from interviews to promote things I would never and have never said. Below is what happens when you search my name on YouTube, every single… pic.twitter.com/xNGrfMMq52
— Whitney Webb (@_whitneywebb) August 12, 2024
Whether such activities are from intelligence services or cybercriminals, they are very hard for dissidents and/or their representatives to respond effectively against. Popular social media companies (notably META, X and TikTok) seldom respond quickly to scams, or to the digital "repersoning" discussed in a Corbett Report discussion between James Corbett and Whitney Webb.
In Corbett's case, after his account was scrubbed from YouTube, many accounts featuring his identity started cropping up there. In Webb's case, she does not have a public profile outside of X, yet accounts featuring her identity were created on Facebook and YouTube. "Her" channels clipped her old interviews and edited them into documentaries on material she has never publicly spoken about, such as Bitcoin and CERN. They also misrepresented her views on the transnational power structure behind the COVID-19 event, suggesting she held just Emmanuel Macron and Klaus Schwab responsible for driving it. They used AI-generated thumbnails of her and superimposed her own words from the interviews. Such content proved popular and was widely reshared via legitimate accounts, pointing to the difficulty dissidents face in countering it. She could not get Facebook to take down the accounts without supplying a government-issued ID to verify her own identity.
Digital platforms may be uninterested in offering genuine support: they may not take any corrective action when following proxy orders from the US Department of State (aka 'jawboning') or from members of the Five Eyes (FVEY) intelligence alliance. In stark contrast to marginalised dissenters, VIPs in multinationals enjoy access to online threat protection services for executives (such as ZeroFox) that cover brandjacking and over 100 other cybercriminal use-cases.
#5 Filtering an account's visibility through ghostbanning
As the Google Leaks (2019), Facebook Files (2021) and Twitter Files (2022) revelations have spotlighted, social media platforms have numerous algorithmic censorship options, such as filtering the visibility of users' accounts. Targeted users may be isolated and throttled for breaking "community standards" or government censorship rules. During the COVID-19 event, dissenters' accounts were placed in silos, de-boosted, and also subject to reply de-boosting. Contrarians' accounts were subject to ghostbanning (AKA shadowbanning), a practice that reduces an account's visibility or reach secretly, without explicitly notifying its owner. Ghostbanning limits who can see the account's posts, comments, or interactions. This includes muting replies and excluding the account from trends, hashtags, search results and followers’ feeds (except where users seek the filtered account's profile directly). Such suppression effectively silences a user's digital voice, while he or she continues to post under the illusion of normal activity. Ghostbanning is thus a "stealth censorship" tactic linked to content moderation agendas.
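To make this mechanism concrete, below is a minimal, hypothetical sketch of how a ghostbanning-style visibility filter could work in principle. It is not any platform's actual code; the flag names (search_blacklist, trends_blacklist, do_not_amplify, reply_deboost) are illustrative assumptions that loosely echo labels reported in the Twitter Files.

```python
# Hypothetical sketch of a ghostbanning-style visibility filter.
# Flag names are illustrative only; they loosely echo labels reported in
# the Twitter Files and do not reproduce any platform's real implementation.
from dataclasses import dataclass


@dataclass
class AccountFlags:
    search_blacklist: bool = False   # hide the account's posts from search results
    trends_blacklist: bool = False   # hide the account's posts from trends/hashtags
    do_not_amplify: bool = False     # exclude the account's posts from followers' feeds
    reply_deboost: bool = False      # mute or collapse the account's replies


def is_visible(surface: str, flags: AccountFlags, direct_profile_visit: bool) -> bool:
    """Decide whether a post surfaces in a given context.

    Direct visits to the account's profile still show its posts, which is
    what sustains the owner's illusion of normal activity.
    """
    if direct_profile_visit:
        return True
    if surface == "search":
        return not flags.search_blacklist
    if surface in ("trends", "hashtags"):
        return not flags.trends_blacklist
    if surface == "home_feed":
        return not flags.do_not_amplify
    if surface == "replies":
        return not flags.reply_deboost
    return True


# Example: a fully flagged account posts as usual, but its content only
# appears when someone opens the profile directly.
flags = AccountFlags(True, True, True, True)
for surface in ("search", "trends", "home_feed", "replies"):
    print(surface, is_visible(surface, flags, direct_profile_visit=False))
print("direct profile visit", is_visible("home_feed", flags, direct_profile_visit=True))
```

The point of the sketch is simply that a handful of per-account flags, applied silently at each surface, is enough to erase an account's reach while leaving the owner's own view untouched. This term gained prominence with the examples that follow.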
The practice gained prominence with the example of the Great Barrington Declaration's authors, Professors Jay Bhattacharya, Martin Kulldorff, and Sunetra Gupta. Published on October 4, 2020, this public statement and proposal flagged grave concerns about the damaging physical and mental health impacts of the dominant COVID-19 policies. It argued that focused protection of the vulnerable should be pursued rather than blanket lockdowns, and that allowing controlled spread among low-risk groups would eventually result in herd immunity. Ten days later, a counter-statement, the John Snow Memorandum, was published in defence of the official COVID-19 narrative's policies. Mainstream media and health authorities amplified it, as did social media, given the memorandum's alignment with prevailing platform policies against "misinformation" circa 2020. In contrast, the Great Barrington Declaration was targeted indirectly through platform actions against its proponents and related content:
Stanford Professor of Medicine Dr Jay Bhattacharya’s Twitter account was revealed (via the 2022 Twitter Files) to have been blacklisted, reducing its visibility. His tweets questioning lockdown efficacy and vaccine mandates were subject to algorithmic suppression: algorithms could flag his offending content with labels like “Visibility Filtering” (VF) or “Do Not Amplify”. For instance, Bhattacharya reported that his tweets about the Declaration and about seroprevalence studies (showing wider COVID-19 spread than official numbers suggested) were throttled. Journalist Matt Taibbi's reporting on the Twitter Files leaks confirmed that Twitter had blacklisted Prof Bhattacharya's account, limiting its reach due to his contrarian stance. YouTube also removed videos in which he featured, such as interviews in which he criticised lockdown policies.
The epidemiologist and biostatistician Prof Kulldorff observed that social media censorship stifled opportunities for scientific debate. He experienced direct censorship on multiple platforms, including shadowbans. Twitter temporarily suspended his account in 2021 for tweeting that not everyone needed the COVID-19 vaccine ('Those with prior natural infection do not need it. Nor children'). Posts on X and web reports indicate Kulldorff was shadowbanned beyond this month-long suspension. The Twitter Files, released in 2022, revealed he was blacklisted, meaning his tweets’ visibility was algorithmically reduced. Twitter suppressed Kulldorff's accurate genetic vaccine critique, preventing comments and likes. Internal Twitter flags like “Trends Blacklisted” or “Search Blacklisted” (leaked during the 2020 Twitter hack) suggest Kulldorff's account was throttled in searches and trends, a hallmark of shadowbanning where reach is curtailed without notification. Algorithmic deamplification excluded Prof Kulldorff's tweets from trends, search results and followers’ feeds, except where users sought his profile directly. This reflects how social media companies may apply visibility filters (such as a Not Safe For Work (NSFW) view). Kulldorff also flagged that LinkedIn’s censorship pushed him to platforms like Gab, implying a chilling effect on his professional network presence.
Professor Gupta, an Oxford University epidemiologist, faced less overt account-level censorship, but still had to negotiate content suppression. Her interviews and Twitter posts advocating herd immunity via natural infection among the young and healthy were often flagged or down-ranked.
#6 Penalising accounts that share COVID-19 "misinformation"
Please follow this blog or me on social media to be alerted to the next post. If you'd like to comment, please share your views below, ta.
Wednesday, 5 February 2025
Celebrities cannot stop their brandjacking, since many authorities are unable to help!
Since 2019, The Noakes Foundation has supported research into the brandjacking of influential celebrities' reputations on social media and other poorly moderated platforms. The Fake Celebrity Endorsement (FCE) research team is documenting how this digital crime is an inscrutable, traumatic experience for celebrities, their representatives, and the financial victims who report being conned by fake endorsements. In addition to the trauma of being featured in fake adverts, microcelebrities are further traumatised by the many reports from fans upset at having been conned. A few celebrities have become targets of recurring cybervictimisation with no recourse, resulting in repeat trauma.
The FCE team distinguishes 'digital crimes' from 'cybercrimes': micro-fraudsters typically target private individuals, who have limited access to resources for combating digital crime. This contrasts with cybercrimes, in which corporations are attacked (Olson, 2024); corporations are often well positioned to support their employees with costly resources that private individuals cannot afford. Research into the latter is well resourced, as are interventions to stop it. By contrast, the fight against digital crimes that affect private citizens is poorly resourced, particularly in the Global South. In the case of fake celebrity endorsements, press reports suggest that the problem grows each year: eleven South African celebrities fell victim to it in 2024, up from two in the first reports of 2014.
Fake celebrity endorsement is a digital crime that may require many authorities across society to combat. Below is a list of the roleplayers that might potentially help prevent digital crimes: