Saturday, 29 March 2025
Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative
There has been extensive censorship of legitimate, expert criticism during the COVID-19 event (Kheriaty, 2022; Shir-Raz et al., 2023; Hughes, 2024). Such scientific suppression makes visible the narrow frame within which the sponsors of global health authoritarianism permit questioning of The Science™. In contrast to genuine science, which innovates through critique, incorporated science does not welcome questioning. Like fascism, corporatist science views critiques of its interventions as heresy. In the COVID-19 event, key opinion leaders who criticised the lack of scientific rigour behind public health measures (such as genetic vaccine mandates) were treated as heretics by a contemporary version of the Inquisition (Malone et al., 2024). Dissidents were accused of sharing "MDM" (Misinformation, Disinformation and Malinformation) assumed to place the public's lives at risk. Particularly in prestigious medical universities, questioning the dictates of health authorities and their powerful sponsors was viewed as unacceptable, falling completely outside an Overton Window that had become far more restrictive due to fear-mongering around a "pandemic" (see Figure 1).
Higher Education is particularly susceptible to this groupthink, as it lends itself to a purity spiral, which in turn contributes to the growing spiral of silence around "unacceptable views". A purity spiral is a form of groupthink in which it is more beneficial to hold certain views than not to hold them. In a process of moral outbidding, individual academics with more extreme views are rewarded. This was evidenced at universities where genetic vaccine proponents loudly supported the mandatory vaccination of students, despite students facing minimal, if any, risk. In contrast, scholars expressing moderation, doubt or nuance faced ostracism as "anti-vaxxers". Universities' tight-knit communities exert strong pressures toward social conformity: grants, career support and other forms of institutional backing depend on collegiality and alignment with prevailing norms. Being labelled a contrarian for questioning a 'sacred cow', such as "safe and effective" genetic vaccines, is likely to jeopardise one's reputation and academic future. Academic disciplines coalesce around shared paradigms and axiomatic truths, routinely amplifying groupthink. Challenging reified understandings as shibboleths can lead to exclusion from conferences and journals, and cost scholars departmental, faculty and even university support, particularly where powerful funders object to such dissent!
Here, administrative orthodoxy can signal an "official" position for the university that chills debate. Dissenters' fears of isolation and reprisal (such as poor evaluations and formal complaints for not following the official line) may convince them to self-censor, particularly where nonconformists assess that the opinion against theirs is virulent and the costs of expressing a disagreeable viewpoint are high, such as having to negotiate cancellation culture. Individuals who calculate that they have little chance of convincing others, and are likely to pay a steep price for trying, self-censor and so contribute to the growing spiral of silence. The COVID-19 event serves as an excellent example of this spiral's chilling effect on free speech and independent enquiry.
COVID-19 is highly pertinent for critiquing censorship in the Medical and Health Sciences, particularly as it featured conflicts of interest that shaped global health "authorities'" policy guidance. Notably, the World Health Organisation promoted poorly substantiated and even unscientific guidelines (Noakes et al., 2021) that merit being considered MDM. In following such dictates from the top policy makers of the Global Public-Private Partnership (GPPP or G3P), most governments' health authorities seemed to ignore key facts. Notably: i. COVID-19 risk was steeply age-stratified (Verity et al., 2019; Ho et al., 2020; Bergman et al., 2021); ii. prior COVID-19 infection can provide substantial immunity (Nattrass et al., 2021); iii. COVID-19 genetic vaccines did not stop disease transmission (Eyre et al., 2022; Wilder-Smith, 2022); iv. mass-masking was ineffective (Jefferson et al., 2023; Halperin, 2024); v. school closures were unwarranted (Wu et al., 2021); and vi. there were better alternatives to lengthy, whole-society lockdowns (Coccia, 2021; Gandhi and Venkatesh, 2021; Herby et al., 2024). Both international policy makers' and local health authorities' flawed guidance must be open to debate and rigorous critique. If public health interventions had been adapted to such key facts during the COVID-19 event, the resultant revised guidance could well have contributed to better social, health and economic outcomes for billions of people!
This post focuses on six types of suppression techniques used against dissenting accounts whose voices were deemed illegitimate "disinformation" spreaders by the Global Public-Private Partnership (G3P)-sponsored censorship industrial complex. This is an important concern, since claims that suppressing free speech's digital reach can "protect public safety" were proved false during COVID-19. A case in point is the censorship of criticism against employee vaccine mandates. North American employers' mandates are directly linked to excess disabilities and deaths for hundreds of thousands of working-age employees (Dowd, 2024). Deceptively censoring individuals' reports of vaccine injuries as "malinformation", or automatically labelling criticism of Operation Warp Speed as "disinformation", hampered US employees' ability to make fully informed decisions on the safety of genetic vaccines. Such deleterious censorship must be critically examined by academics. In contrast, 'disinformation-for-hire' scholars (Harsin, 2024) will no doubt remain safely ensconced behind their profitable MDM blinkers.
This post is the first in a series spotlighting the myriad account suppression techniques that exist. For each, examples of censorship against health experts' opinions are provided. Hopefully, readers can then better appreciate the asymmetric struggle that dissidents face when their accounts are targeted by the censorship industrial complex with many of these strategies across multiple social media platforms:
Practices for @Account suppression
#1 Deception - users are not alerted to unconstitutional limitations on their free speech
#2 Cyberstalking - facilitating the virtual and physical targeting of dissidents
#3 Othering - enabling public character assassination via cyber smears
#4 Not blocking impersonators or preventing brandjacked accounts
Instead of blanket censorship, I am having YouTube bury all my actual interviews/content with videos that use short, out of context clips from interviews to promote things I would never and have never said. Below is what happens when you search my name on YouTube, every single… pic.twitter.com/xNGrfMMq52
— Whitney Webb (@_whitneywebb) August 12, 2024
Whether such activities are from intelligence services or cybercriminals, they are very hard for dissidents and/or their representatives to respond to effectively. Popular social media companies (notably META, X and TikTok) seldom respond quickly to scams, or to the digital "repersoning" discussed in a Corbett Report discussion between James Corbett and Whitney Webb.
In Corbett's case, after his account was scrubbed from YouTube, many accounts featuring his identity started cropping up there. In Webb's case, she has no public profile outside of X, yet accounts featuring her identity were created on Facebook and YouTube. "Her" channels clipped old interviews she did and edited them into documentaries on material Whitney has never publicly spoken about, such as Bitcoin and CERN. They also misrepresented her views on the transnational power structure behind the COVID-19 event, suggesting she held just Emmanuel Macron and Klaus Schwab responsible for driving it. They used AI-generated thumbnails of her, and superimposed her own words from the interviews. Such content proved popular and became widely reshared via legitimate accounts, pointing to the difficulty dissidents face in countering it. She could not get Facebook to take down the accounts without supplying a government-issued ID to verify her own identity.
Digital platforms may be uninterested in offering genuine support: they may take no corrective action when following proxy orders from the US Department of State (aka 'jawboning') or members of the Five Eyes (FVEY) intelligence alliance. In stark contrast to marginalised dissenters, VIPs in multinationals enjoy access to executive online threat protection services (such as ZeroFox) that cover brandjacking and over 100 other cybercriminal use-cases.
#5 Filtering an account's visibility through ghostbanning
As the Google Leaks (2019), Facebook Files (2021) and Twitter Files (2022) revelations have spotlighted, social media platforms have numerous algorithmic censorship options, such as filtering the visibility of users' accounts. Targeted users may be isolated and throttled for breaking "community standards" or government censorship rules. During the COVID-19 event, dissenters' accounts were placed in silos, de-boosted, and also subjected to reply de-boosting. Contrarians' accounts were subject to ghostbanning (AKA shadow-banning), a practice that secretly reduces an account's visibility or reach without explicitly notifying its owner. Ghostbanning limits who can see the posts, comments, or interactions. This includes muting replies and excluding targeted accounts' results from trends, hashtags, searches and followers' feeds (except where users seek a filtered account's profile directly). Such suppression effectively silences a user's digital voice, whilst he or she continues to post under the illusion of normal activity. Ghostbanning is thus a "stealth censorship" tactic linked to content moderation agendas.
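The asymmetry described above can be illustrated in code. Below is a minimal, hypothetical Python sketch of how per-account visibility flags might gate search results while leaving direct profile lookups untouched; the flag names and all logic are illustrative assumptions, not any platform's actual implementation.

```python
# Hypothetical sketch of "visibility filtering" (ghostbanning).
# Flag names such as "search_blacklist" are assumptions for illustration.
from dataclasses import dataclass, field


@dataclass
class Account:
    handle: str
    flags: set = field(default_factory=set)  # e.g. {"search_blacklist"}


@dataclass
class Post:
    author: Account
    text: str


def search_results(posts, query):
    """Matching posts by search-blacklisted accounts are silently excluded."""
    return [p for p in posts
            if query.lower() in p.text.lower()
            and "search_blacklist" not in p.author.flags]


def profile_posts(posts, handle):
    """Direct profile lookups bypass the filter: the posts remain visible,
    so the account owner sees no obvious sign of suppression."""
    return [p for p in posts if p.author.handle == handle]


alice = Account("alice")
bob = Account("bob", flags={"search_blacklist"})
posts = [Post(alice, "lockdowns work"), Post(bob, "lockdowns questioned")]

print([p.author.handle for p in search_results(posts, "lockdown")])  # ['alice']
print([p.text for p in profile_posts(posts, "bob")])  # ['lockdowns questioned']
```

The key point of the sketch is that both code paths exist side by side: the filtered account's posts are fully retrievable when sought directly, which is what sustains the "illusion of normal activity" for its owner.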
This term gained prominence with the example of the Great Barrington Declaration's authors, Professors Jay Bhattacharya, Martin Kulldorff, and Sunetra Gupta. Published on October 4, 2020, this public statement and proposal flagged grave concerns about the damaging physical and mental health impacts of the dominant COVID-19 policies. It argued that an approach of focused protection should be followed rather than blanket lockdowns, and that allowing controlled spread among low-risk groups would eventually result in herd immunity. Ten days later, a counter-statement, the John Snow Memorandum, was published in defence of the official COVID-19 narrative's policies. Mainstream media and health authorities amplified it, as did social media, given the memorandum's alignment with prevailing platform policies against "misinformation" circa 2020. In contrast, the Great Barrington Declaration was targeted indirectly through platform actions against its proponents and related content:
Stanford Professor of Medicine Dr Jay Bhattacharya's Twitter account was revealed (via the 2022 Twitter Files) to have been blacklisted, reducing its visibility. His tweets questioning lockdown efficacy and vaccine mandates were subject to algorithmic suppression, with flags like "Visibility Filtering" (VF) or "Do Not Amplify" reducing the reach of offending content. For instance, Bhattacharya reported that his tweets about the Declaration and seroprevalence studies (showing wider COVID-19 spread than official numbers suggested) were throttled. Journalist Matt Taibbi's reporting on the Twitter Files leaks confirmed that Twitter had blacklisted Prof Bhattacharya's account, limiting its reach due to his contrarian stance. YouTube also removed videos in which he featured, such as interviews in which he criticised lockdown policies.
The epidemiologist and biostatistician Prof Kulldorff observed that social media censorship stifled opportunities for scientific debate. He experienced direct censorship on multiple platforms, including shadowbans. Twitter temporarily suspended his account in 2021 for tweeting that not everyone needed the COVID-19 vaccine ('Those with prior natural infection do not need it. Nor children'). Posts on X and web reports indicate Kulldorff was shadowbanned beyond this month-long suspension. The Twitter Files, released in 2022, revealed he was blacklisted, meaning his tweets' visibility was algorithmically reduced. Twitter suppressed Kulldorff's accurate genetic vaccine critique, preventing comments and likes. Internal Twitter flags like "Trends Blacklisted" or "Search Blacklisted" (leaked during the 2020 Twitter hack) suggest his account was throttled in searches and trends, a hallmark of shadowbanning, where reach is curtailed without notification. Algorithmic deamplification excluded Prof Kulldorff's tweets from trends, search results and followers' feeds, except where users sought his profile directly. This reflects how social media companies may apply visibility filters (such as a Not Safe For Work (NSFW) view). Kulldorff also flagged that LinkedIn's censorship pushed him to platforms like Gab, implying a chilling effect on his professional network presence.
An Oxford University epidemiologist, Professor Gupta faced less overt account-level censorship, but still had to contend with content suppression. Her interviews and posts on Twitter advocating herd immunity via natural infection amongst the young and healthy were often flagged or down-ranked.
#6 Penalising accounts that share COVID-19 "misinformation"
Please follow this blog or me on social media to be alerted of the next post. If you'd like to comment, please share your views below, ta.
Wednesday, 5 February 2025
Celebrities cannot stop their brandjacking, since many authorities are unable to help!
Since 2019, The Noakes Foundation has supported research into the brandjacking of influential celebrities' reputations on social media and other poorly moderated platforms. The Fake Celebrity Endorsement (FCE) research team is documenting how this digital crime is an inscrutable, traumatic experience for celebrities, their representatives, and the financial victims who report being conned by fake endorsements. In addition to the trauma of being featured in fake adverts, microcelebrities are further traumatised by the many reports from fans upset at being conned. A few celebrities have become targets for recurring cybervictimisation with no recourse, resulting in repeat trauma.
The FCE team distinguishes 'digital crimes' from 'cybercrimes': micro-fraudsters typically target private individuals with limited access to resources for combating digital crime. This contrasts with cybercrimes, in which corporations are attacked (Olson, 2024). Corporations are often well positioned to support their employees with costly resources that private individuals cannot afford. Research into the latter is well resourced, as are interventions to stop it. By contrast, the fight against digital crimes that impact private citizens is poorly resourced, particularly in the Global South. In the case of fake celebrity endorsements, press reports suggest that the problem grows each year: eleven South African celebrities fell victim to it in 2024, up from two in the first reports of 2014.
Fake celebrity endorsement is a digital crime that may require many authorities in society to combat it. Below is a list of the roleplayers that might potentially help prevent digital crimes:
Figure 1 shows a simplified process of the fake endorsement phishing scam. The authors of this digital crime are unknown; they can range from gangs, to the invisible threat of AI and bot armies, to even military intelligence agencies raising funds. Not only do these cybercriminals exploit scamming ecosystems inside popular social media platforms, they also exploit related ecosystems on platforms such as Huione Guarantee (now "Haowang Guarantee"), a Cambodian conglomerate. It offers a messaging app, stablecoin and crypto exchange, and has facilitated $2 billion in transactions. Such platforms are integral to the industrialisation and scaling-up of online scams, for example by supporting the outsourcing of scammers' money-laundering activities (The Economist, 2025).
1) Celebrity influencers
2) Financial victims
Fans who have developed a parasocial relationship with a particular celebrity they follow may genuinely believe that the fake endorsement adverts are a legitimate offer, notwithstanding the product's promise seeming too good to be true. Having been conned, victims may be ashamed, or in denial. Many may consider their financial loss not worth reporting (as a micro-fraud versus a serious crime). Even if victims are willing to report the digital crime, it may not be obvious which authority it is best reported to.

3) Social media advertising services
4) Poorly-moderated content hosts
5) Banks
As the financial victims legitimately authorise payments from their own accounts, they do not enjoy recourse via their banks. To avoid new transactions from scammers, these victims often have to pay banks for new cards after terminating their old ones. It is unclear what role banks could adopt in combating digital crimes wherein clients are defrauded whilst following a seemingly legitimate payment process.
6) Cyber defence companies
7) Cybercrime reporters and statistics gatherers
8) Cybercrime researchers, and educators, in companies
In a similar collaborative vein, cybercrime researchers and educators in companies are working together to help combat digital crimes targeting their employees and clients. In particular, banks and insurance companies in SA are proactively raising awareness around phishing and other common digital crimes, in communications that range from email newsletters to pop-up warnings that clients must acknowledge reading after log-in.
9) Anti-crime activists (PBOs and NGOs)
Anti-digital-crime education tends to focus on equipping high school students and working professionals with preventative knowledge in English. However, our research into fake celebrity endorsement victims' online commentary suggests that most are over fifty-five, with English being their second language, at best. In response, The Noakes Foundation has supported the development of modules in English for educating silver surfers about the most common digital crimes. Ideally, though, these modules (and reportfakeendorsement.com's content) should be available in South Africa's 11 official languages.
10) Social media platforms and their Big Tech owners
Social media companies, and their Big Tech owners, would seem to have a particular responsibility for protecting users from digital crime threats on their platforms. In contrast, there is a decade-long history in SA of even influential celebrities not being well-supported via speedy responses to their brandjacking, and scam adverts are seldom taken down based on celebrities', their representatives' and other victims' reports.
The most popular platforms for this scam in SA are Meta's Facebook and Instagram. Meta does not understand the content that its users share (Horwitz, 2023). Further, it does not report on scam ecosystems based inside its own platforms. Consequently, neither Facebook nor Instagram can proactively identify digital crimes, let alone quickly adapt their systems to stop emergent threats from micro-fraudsters. It is left to whistleblowers, former employees, investigative journalists and researchers to create awareness of these platforms' serious flaws, such as their use as scammers' ecosystems tied to scam-as-a-service ones. This seems at odds with corporate responsibility: Meta should publicly report on its progress in tackling scam ecosystems on its Facebook, WhatsApp and Instagram platforms. It could also proactively warn vulnerable users, such as the aged, against the latest scam risks.
In a sense, digital crimes by cybercriminals on social networks can be considered a parasitic attack within a larger parasitic host: Meta's Facebook and Instagram are infomediaries that misrepresent themselves as symbionts supporting users' communal connections online. In reality, Meta's business model is parasitic in relying on 3 billion users to share content (Robbins, 2018). Much of this content is not the work of original creative producers, but rather sampled from content that has proved popular on other platforms. In essence, social media platforms are middlemen between content creators and their audiences, taking most of the profits from advertising. These platforms also take the intellectual property of online content creators. In the Global South this serves as a form of neocolonial data extraction, as Big Tech multinationals from the Global North extract data with little being reciprocated. For example, while powerful celebrities in the US can enjoy access to dedicated Facebook support, there is no equivalent offering for influential SA users. Instead, they are lucky to stumble onto internal staff or Trusted Partners who can best help them respond to Facebookjacking or Instajacking crimes.
In contrast to the usefulness of human insiders, the Meta AI that manages users' reports of dubious accounts and content is simply not capable of recognising malicious advertisers' accounts. At face value, there is nothing “wrong” with how the scammers' accounts are set up: they have a human profile (fake name) managing a business profile (fake name and business). Reporting the scam accounts is useless, since the fraudsters fill in all the right criteria to fly under the radar! The scammers use 'like farms' and a network of fake profiles to create a sense of legitimacy through liking, sharing and commenting on posts and ads. The criminals also use a “legitimate website” (a bought domain with hosting, of questionable design) selling a “product” while accumulating visitors' personal information and credit card details. All of this seems to be legitimate business behaviour to the AI, which cannot detect that it is malicious. Scammers use a (stolen) credit card, or a hijacked Meta Ads Manager profile, to run adverts through their “business page”. This works for a short while until the card or the account is stopped, whereupon they simply create another one. These adverts sell a product online that is seemingly harmless and well within the legal parameters of Meta's Community Standards. The fact that it is a fake product is immaterial to Meta; the onus is on the customer to know when they are being scammed, and if users try to report it as a harmful product, the report fails, as that is deemed a matter of personal opinion! Where such content is checked by human moderators, it is so obviously a deepfake, even to the most untrained eye, that they take it down quickly.
It appears that Meta's Facebook and Instagram are turning a blind eye to this digital advertising crime. The benefit to Meta is clear, as it reaps the rewards of advertisers' spending: Trustfull's 2024 report expects deepfake fraud to reach $15.7 billion in 2024, and Meta is set to take a large chunk of that ad-spend revenue in distributing fake, malicious content. It is hard not to draw the conclusion that it seems irrelevant to Meta whether the content is genuine or a scam, or whether the account used to promote these scams has been hacked or cloned. Either way, Meta still profits.
11) Financial investors
12) Government politicians
13) Local police
14) International law enforcement
15) Local law
16) Higher Education and research funders
17) Product regulators
Celebrity, you are on your own in responding to digital crime?!
Please comment with suggestions to improve this post
Acknowledgements
Thursday, 29 August 2024
After half-a-million views, "Dr Noakes" erectile dysfunction "advert" taken down by Facebook + suggested actions for META to do better
Figure 1. Screenshot from the fake 'Dr Noakes' erectile dysfunction advert on Facebook (2024)
Figure 2. Screenshot of scammers' Facebook account featuring "Dr Tim Noakes" erection pill adverts (2024)
Figure 3. Scammer account location behind fake Facebook Dr Tim Noakes adverts (2024)
Our initial Facebook advert lookup revealed that one page was running four adverts (Figure 2). This account ("Tristan") was managed from Nepal and India (Figure 3).
Figure 4. Screenshot of fake Tristan account header behind Dr Tim Noakes adverts on Facebook (2024)
This fake account page also leveraged fake interactions to suggest that it was liked and followed (Figures 4 and 5).
Figure 6. Screenshot of scammers' "Hughles" Facebook account (2024)
The scammers flick-flacked between varied accounts in committing this cybercrime: they initially used "Hughles" (Figures 6 and 7), "Cameron Sullivan Setting", and "Murthyrius" in launching the same deepfake ads. By the 28th of July, 13 of these adverts had been taken down by Facebook, but the scammers shifted to new accounts, "Longjiaren.com" (Figure 8) and "Brentlinger" (renamed "Brentlingerkk" after we reported it). On the 29th of August, these accounts and their adverts were disabled by Facebook.
Figure 8. Screenshot of Longjiaren.com scammers' Facebook account for fake adverts (2024)
Such adverts typically reach viewers outside The Noakes Foundation, Nutrition Network and Eat Better South Africa’s networks. Their audiences know Professor Noakes does not endorse miracle weight loss and other cures. To reach vulnerable publics, The Noakes Foundation has run Facebook alerts to warn about this latest cybercrime. Ironically, the most recent advert attempting to flag the "Dr Noakes" scam was blocked by Facebook advertising (Figure 9)!
Actions for META to do better in fighting cybercrime on its platforms
2) Create a compliance team that is dedicated to thwarting cybercriminals' activities;
3) Offer at least one human contact on each META platform for serious reports of criminal misuse;
4) Promote frequent reporters of cybercrime by referring them to META's Trusted Partners or Business Partners for rapid aid;
5) Encourage external research on every platform regarding cybercriminals' activities (such initiatives could develop inexpensive tools. For example, for celebrities' reps to protect public figures from being deep faked in "adverts");
6) Provide more feedback on which aspects of cybercrime reports led to accounts and content being removed. Without such feedback, fraud reporters cannot be sure which reports are most effective;
7) Have a recommendation system in place for support networks that cybervictims can approach (such as referring South Africans to its national CyberSecurity hub).