
Friday, 6 June 2025

Techniques for suppressing health experts' social media accounts (7 - 12, part 2) - The Science™ versus key opinion leaders challenging the COVID-19 narrative

Written for researchers and others interested in the many techniques used to suppress COVID-19 dissidents' social media accounts, and digital voices.


This is the second post alerting readers to the myriad of techniques that social media companies continue to use against key opinion leaders who question the dominant consensus from The Science™. While the examples provided concern techniques used against prominent critics of the official COVID-19 narrative, these techniques are readily applied to stifle dissent in other issue arenas. These range from the United States of America's support for forever wars via a ghost budget (Bilmes, 2018), to man-made climate change, and from low carbohydrate diets to transgender "gender affirming" medical surgery ideology. These dogmatic prescriptions by the global policy decision makers of the Global Public-Private Partnership (G3P or GPPP) are presented as a "scientific consensus", but are unscientific when protected from being questioned, especially by legitimate experts with dissenting arguments.

In COVID-19's case, its proponents may claim that lockdowns, masking, distancing and genetic vaccines were based on science. In reality, though, these measures were policy directives decided well in advance inside the G3P (Iain Davis, 2021). At the same time, its macro-level stakeholders have spent decades developing a 'consensus architecture' that precludes radically different interpretations from its preferred scientific dogmas. For example, "man-made climate change" has become anchored as an issue, both scientifically and politically, through a decades-long program of sponsorship from the Rockefeller family, one of the world's leading private research funders (Nordangård, 2024). This has shaped the climate science field, where scientists selectively present data to align with policy goals that promote a fear-led narrative of urgent action (Pielke, 2010). The climate science community, particularly the Intergovernmental Panel on Climate Change (IPCC), relies on panic and "pro-social" censorship rather than irrefutable evidence to advance the anthropogenic model, sidelining dissenting perspectives and dismissing critical debate as inherently unworthy. In pro-social censorship, work is 'rejected, and individuals cancelled, not because the work is substandard or flawed, but because it threatens to undermine a cherished ideology or someone else’s concept of societal safety and harmony. Such censorship is never portrayed as such, of course; the reason given is always that the individual(s) concerned were peddling substandard work leading to harmful misinformation.' (Ridgway, 2025).

Professor Brian Martin's framework for information control (2025) addresses the key aspects of overarching censorship, often miscast as "pro-social" counter-"disinformation". The framework classifies the methods of information control against contrarian views into four types, which form an interrelated ecology: (i) Information flooding sees dominant views presented in a unified front, overwhelming contrary views by volume and consistency. (ii) Ignoring includes the absence of research on alternative approaches, failure to report on dissenting research, and not mentioning challenging views. (iii) Censoring involves active measures to prevent the circulation of contrary information and views. (iv) Attacking includes steps taken to silence and penalise individuals with heterodox views, and campaigns to discredit alternatives to officially-sanctioned approaches. My blog discusses how dissident accounts and their content have been (iii) censored on the Fifth Estate's most popular social media platforms. This post focuses on social media censorship techniques against accounts that are more serious than the six described in part 1. Such content suppression is best contextualised as just one strategy within a broader propaganda omniwar that has weaponised language and made deceit ubiquitous (Hughes, 2024):

#7 Concealing the sources behind dissidents' censorship

An important aspect of information control is that the sources behind it are very well hidden from the public. For the organisers of propaganda, ideal targets neither appreciate that they are receiving propaganda, nor recognise its source. Audiences' ignorance is a key aspect of psychological warfare, otherwise known as fifth-generation warfare (Abbot, 2010; Krishnan, 2024). Likewise for censors: their targets and those targets' followers should ideally not be aware that they are being censored, nor able to identify the sources of their censorship. Accordingly, there is significant deception around the notion that the primary sources of social media censorship are the platforms themselves and their policies. Instead, these platforms are largely responding to co-ordinated COVID-19 narrative control from G3P members who span each of the six estates*.


{* Departing from the original 'French Estates of the Realm' framework, the contemporary estates can be defined as follows: The First Estate consists of the government or ruling class. The Second Estate comprises the economic or social elite: wealthy business magnates, corporate leaders, and influential families who hold disproportionate power through money and networks. The Third Estate is the general populace who do not wield concentrated wealth or political authority, the working and middle classes who form the bulk of the citizenry. The Fourth Estate consists of journalists and news outlets, a distinct force in holding power to account and shaping public opinion. The Fifth Estate describes the rise of digital platforms that support the more independent collectivity of networked individuals (Dutton, 2023). This contributes to a more pluralist role for individuals in shaping democratic political accountability, whilst impacting nearly every sector of society. During the COVID-19 event, the BMGF rivalled corporations and governments in its influence, reflecting the growing importance of multinational non-profit organisations in driving consensus for The Science™. The vast scale of international philanthropy from trillionaires arguably constitutes a contemporary Sixth Estate, since these charities operate as a distinct force with a unique role. In most societies, large public benefit organisations (PBOs) typically operate outside government, corporate, and traditional media spheres, while focusing on advocacy, social change, or public welfare. Large charities can mobilise resources, influence policy, and amplify disenfranchised voices in ways that neither the mainstream press nor online platforms can do alone. Charities' assumed independence from profit motives or government control gives PBOs a different kind of credibility and reach, arguably qualifying them as a separate "estate". At the same time, large charities have greater opportunities and leverage for working towards long-term goals. In contrast, most political figures, listed companies, and other organisations have to deliver on short-term objectives, and are more exposed to critique.}

Opaque choices to suppress COVID-19 counter-narratives via the Fourth and Fifth media estates were largely demanded by external Global Public-Private Partnership parties. G3Ps are structured collaborations between international intergovernmental organisations, such as the UN, WHO and WEF, and private companies, formed to achieve shared goals and objectives. The G3Ps form a worldwide network of stakeholder capitalists and their partners who co-operate with global governance (the UN), above state and society. The UN co-operates with G3P partners to set global agendas and policies, which then cascade to people in every nation via a policy intermediary, such as the International Monetary Fund (IMF). As a product of G3P collaborators, COVID-19 thought-policing is just one topic that the global industrial censorship complex's (GICC) broader work addresses. The GICC's activities seek to protect lucrative fabricated crisis narratives as "settled science". The Science™ dictates urgent, universal solutions, which directly benefit the G3P's policy makers and corporate members. In the case of the COVID-19 event, the UN, WEF, WHO and their G3P corporate partners circumvented national sovereignty (whereby a nation’s laws cannot be subject to those of an outsider) to promote a monopolistic “World Health” policy. Its implementation primarily benefited an ‘elite cabal of media-, tech-, large pharma-, centralized finance, nongovernmental “pathophilanthropic,” and transnational corporations’ (Malone et al., 2024, p. 338). This corporatism (AKA fascism) contributed to a massive wealth gain for billionaires of $5 trillion from 2020–2021 (Oxfam, 2022). Oxfam notes that this was a larger increase than in the previous 14 years combined!

The well-funded complicity theorists for a COVID-19 "Infodemic" (for example, Calleja et al., 2021; Caulfield, 2020; DiResta, 2022; Schiffrin, 2022) may genuinely believe in advocating for censorship as a legitimate, organic counterweight to "malinformation". In contrast, researchers at Unlimited Hangout point out that this censorship is highly centralised, aimed at opinions that are deemed "illegitimate" merely for disagreeing with the positions of the most powerful policy makers at the G3P's macro-level. Iain Davis writes that the G3P policy makers are Chatham House, the Club of Rome, the Council on Foreign Relations, the Rockefellers and the World Economic Forum. Each guides international policy distributors, including the International Monetary Fund, the Intergovernmental Panel on Climate Change, the United Nations and the World Health Organisation, plus "philanthropists" {e.g. the Bill and Melinda Gates Foundation (BMGF)}, multinational corporations and global non-governmental organisations.

Mr Bill Gates serves as an example of the Sixth Estate exercising undue influence on public health, especially Africa's: His foundation is the largest private benefactor of the World Health Organization. The BMGF finances the health ministries of virtually every African country. Mr Gates can place conditions on that financing, such as vaccinating a certain percentage of a country’s population. Some vaccines and health-related initiatives that these countries purchase are developed by companies that Gates’ Cascade Investment LLC invests in. As a result, he can benefit indirectly from stock appreciation, alongside tax savings from his donations, whilst his reputation as a ‘global health leader’ is further burnished. In South Africa, the BMGF has directly funded the Department of Health, SA’s regulator SAHPRA, its Medical Research Council, top medical universities and the media (such as the Mail and Guardian’s health journalism centre, Bhekisisa). All would seem highly motivated to protect substantial donations by not querying Mr Gates’ vaccine altruism. However, the many challenges of the Gates Foundation’s dominating role in transnational philanthropy must not be ignored. Such dominance poses a challenge to justice: locals’ rights to control the institutions that can profoundly impact their basic interests (Blunt, 2022). While the BMGF cannot be directly tied to COVID-19 social media account censorship, it is indisputable that Mr Gates' financial power and partner organisations indirectly suppressed dissenting voices by prioritising certain COVID-19 treatment narratives (Politico, 2022a, 2022b).

At a meso-level, select G3P policy enforcers ensure that the macro-level's policy directives are followed by both national governments (and their departments, such as health) and scientific authorities (including the AMA, CDC, EMA, FDA, ICL, JCVI, NERVTAG, NIH, MHRA and SAGE). Enforcers strive to prevent rival scientific ideas gaining traction and thereby challenging policymakers' dictates. These bodies task psychological 'nudge' specialists (Junger and Hirsch, 2024), propagandists and other experts with convincing the public to accept, and ideally buy into, G3P policies. This involves censorship and psychological manipulation via public relations, propaganda, disinformation and misinformation. The authors of such practices are largely unattributed. Dissidents facing algorithmic censorship through social media companies' opaque processes of content moderation are unlikely to be able to identify the true originator of their censorship in such a complex process. Content moderation is a 'multi-dimensional process through which content produced by users is monitored, filtered, ordered, enhanced, monetised or deleted on social media platforms' (Badouard and Bellon, 2025). This process spans a 'great diversity of actors' who develop specific practices of content regulation (p. 3). Actors may range from activist users and researchers who flag content, to fact-checkers from non-governmental organisations and public authorities. If such actors disclose their involvement in censorship, this may only happen much later. For example, Mark Zuckerberg’s 2024 letter to the House Judiciary Committee revealed that the Biden administration pressured Meta to censor certain COVID-19 content, including humour and satire, in 2021.


#8 Blocking a user’s access to his or her account

A social media platform may stop a user from being able to log in to his or her account. Where the platform does not make this blocking obvious to a user's followers, this is deceptive. For example, Emeritus Professor Tim Noakes' Twitter account was deactivated for months after he queried health authorities' motivations in deciding on interventions during the COVID-19 "pandemic". Many viewers would not recognise that his seemingly live profile was in fact inactive. The only clue was that @ProfTimNoakes had not tweeted for a long time, which was highly unusual.


This suspension followed Twitter's introduction of a “five-strike” system, with repeat offenders or egregious violations leading to permanent bans. Twitter's system tracked violations: the first and second strikes resulted in a warning or temporary lock, a third strike in a 12-hour suspension, and a fourth strike in a 7-day suspension. Users faced a permanent ban after a fifth strike. In Professor Tim Noakes' case, he was given a vague warning regarding 'breaking community rules etc.' (email correspondence, 24.10.2022). This followed his noticing a loss of followers and his tweets' reach being restricted. Twitter 'originally said I was banned for 10 hours. But after 10 hours when I tried to re-access it they would not send me a code to enter. When I complained they just told me I was banned. When I asked for how long, they did not answer.' In reviewing his tweets, Prof. Noakes noticed that some had been (mis-)labelled by Twitter as "misleading" before his suspension (see Figure 1 below).


Figure 1. Screenshot of @ProfTimNoakes' "controversial" tweet on President Macron not taking COVID-19 'experimental gene therapy' (24 October, 2022)
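To make the escalation logic concrete, below is a minimal sketch of the five-strike ladder as summarised above. It is an illustration only: the class, function and penalty names are my own assumptions based on this post's summary, not Twitter's actual implementation.

```python
# A minimal sketch of the "five-strike" escalation ladder described
# above. Illustrative assumptions only, not Twitter's real code.
from dataclasses import dataclass

# Penalty applied at each cumulative strike count, per the post's summary.
PENALTIES = {
    1: "warning",
    2: "temporary account lock",
    3: "12-hour suspension",
    4: "7-day suspension",
    5: "permanent ban",
}

@dataclass
class Account:
    handle: str
    strikes: int = 0

def record_violation(account: Account) -> str:
    """Add a strike and return the penalty now applied to the account."""
    account.strikes = min(account.strikes + 1, 5)
    return PENALTIES[account.strikes]

if __name__ == "__main__":
    acct = Account("@example")
    for _ in range(5):
        print(acct.handle, "->", record_violation(acct))
```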

Prof Noakes had also tweet-quoted Alec Hogg’s BizNews article regarding Professor Salim Abdool Karim’s conflicts of interest, adding 'something about' chequebook science. The @ProfTimNoakes account was in a state of limbo after seven days, but was never permanently banned. Usually, accounts placed in “read-only” mode, or temporary lockouts, required tweet deletion to regain full access. However, @ProfTimNoakes' latest tweets were not visible, and he was never asked to delete any. In addition to account login blocks, platforms may also suspend accounts from being visible, but this was not applied to @ProfTimNoakes. In response to being locked out, Prof Noakes shifted to using his alternate @loreofrunning account; its topics of nutrition, running and other sports seemed safe from the reach of unknown censors' Twitter influence.


#9 Temporary suspensions of accounts (temporary user bans)

Several dissident COVID-19 experts reported temporary suspensions of their Twitter accounts after contradicting official public health narratives, or Twitter’s "COVID-19 misinformation policies". Two examples are the epidemiologist Dr Martin Kulldorff's account, @MartinKulldorff, and the journalist Mr Alex Berenson's, @AlexBerenson. @MartinKulldorff was temporarily suspended after a March 15, 2021 tweet stating that not everyone needed the COVID-19 vaccine, especially those with prior natural infection or young children. As this diverged from CDC guidelines, Twitter flagged the tweet as misleading, disabling users' options to reply to or like it. @AlexBerenson faced multiple suspensions, also for questioning the necessity of mRNA vaccines, plus their efficacy. @AlexBerenson was temporarily suspended in the summer of 2021, with a permanent ban following shortly after. Internal Twitter communications, obtained through Berenson’s lawsuit against the platform, revealed that White House officials had raised concerns about Berenson’s account during an April meeting with Twitter executives. Senior COVID adviser Andy Slavitt asked why Berenson 'hasn’t been kicked off the platform', suggesting that Berenson was a key source of vaccine misinformation. Berenson’s lawsuit against Twitter resulted in his reinstatement in July 2022.

#10 Permanent suspension of accounts, pages and groups (complete bans)

In contrast to Twitter's five-strike system, Meta's Facebook equivalent was not as formalised. It tracked violations on accounts, pages and groups. The latter two serve different functions in Facebook’s system architecture (Broniatowski et al., 2023): only page administrators may post to pages, which are designed for brand promotion and marketing. In contrast, any member may post in groups, which serve as forums for members to build community and discuss shared interests. In addition, pages may serve as group administrators (see the sketch below). From December 2020, Meta began removing "false claims about COVID-19 vaccines" that were "debunked by public health experts". This included "misinformation" about their efficacy, ingredients, safety, or side effects. Repeatedly sharing "debunked claims" risked escalating penalties for individual users/administrators, pages and groups. Penalties ranged from reduced visibility to removal and permanent suspension. For example, if a user posted that 'COVID vaccines cause infertility' "without evidence", this violated policy thresholds. The user was then asked to acknowledge the violation, or appeal. Appeals were often denied if the content clashed with official narratives.
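As a rough illustration of the page/group distinction summarised from Broniatowski et al. (2023), the sketch below models who may post where. The class names and fields are illustrative assumptions, not Facebook's actual data model.

```python
# A rough sketch of Facebook's page/group distinction, as summarised
# from Broniatowski et al. (2023). Names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Page:
    name: str
    admins: set = field(default_factory=set)

    def can_post(self, actor: str) -> bool:
        # Only page administrators may post to a page.
        return actor in self.admins

@dataclass
class Group:
    name: str
    members: set = field(default_factory=set)
    # A page may itself serve as a group administrator.
    admins: set = field(default_factory=set)

    def can_post(self, actor: str) -> bool:
        # Any group member may post in a group.
        return actor in self.members
```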

Meta could choose to permanently ban individual, fan page and group accounts on Facebook. For example, high-profile repeat offenders were targeted for removal. In November 2020, the page "Stop Mandatory Vaccination", one of the platform’s largest "anti-vaccine" fan pages, was removed. Robert F. Kennedy Jr.’s Instagram account was permanently removed in 2021 for "sharing debunked COVID-19 vaccine claims". The non-profit he founded, Children’s Health Defense, was suspended from both Facebook and Instagram in August 2022 for its repeated violations of Meta’s COVID-19 misinformation policies.

Microsoft's LinkedIn generally moderates professional content more strictly than other social networks. It updated its 'Professional Community Policies' for COVID-19 to prohibit content contradicting guidance from global health organisations, like the CDC and WHO. This included promoting unverified treatments and downplaying the "pandemic"’s severity. Although LinkedIn has not disclosed specific thresholds, high-profile cases show that persistently sharing contrarian COVID-19 views—especially if flagged by users, or contradicting official narratives—would lead to removal. Dr Mary Talley Bowden, Dr Aseem Malhotra, Dr Robert Malone and Mr Steve Kirsch have all had their accounts permanently suspended.


#11 Non-disclosure of information around bans' rationales to account-holders

Social media platforms' Terms of Service (TOS) may ensure that these companies are not legally obligated to share information with their users on the precise reasons for their accounts being suspended. Popular platforms like Facebook, LinkedIn and X can terminate accounts at their sole discretion without providing detailed information to users. Such suspensions are typically couched opaquely in terms of policy violation (such as being in breach of community standards).


Less opaque details may be forthcoming where a platform's TOS is superseded by a country's, or regional bloc's, laws. In the US, Section 230 of the Communications Decency Act allows platforms to moderate content as they see fit. They are only obligated to disclose reasons under a court order, or if a specific law applies (such as one related to data privacy). By contrast, companies operating in European Union countries are expected to comply with the EU's Digital Services Act (DSA). Here, platforms must provide a 'statement of reasons' for content moderation decisions, including suspensions, with some level of detail about the violation. Whilst compliant feedback must be clear and user-friendly, granular specifics may not be a DSA requirement. In the EU and USA, COVID-19 dissidents could only expect detailed explanations in response to legal appeals, or significant public pressure. Internal whistleblowing and investigative reports, such as the Facebook and Twitter Files, also produced some transparency.


One outcome of this opaque feedback is that the reasons for dissident COVID-19 health experts' accounts being suspended are seldom made public. Even where dissidents have shared their experiences, the opaque processes and actors behind COVID-19 censorship remain unclear. Even reports from embedded researchers, such as the Center for Countering Digital Hate's "Disinformation Dozen", lack specificity. It reported that Meta permanently banned 16 accounts, and restricted 22 others, for "sharing anti-vaccine content" in response to public reporting in 2021. However, the CCDH did not explicitly name the health experts given permanent suspensions. Hopefully, a recent 171-page federal civil rights suit by half of the dissidents mentioned in this report against the CCDH, Imran Ahmed, U.S. officials and tech giants will expose more about who is behind prominent transnational censorship and reputational warfare (Ji, 2025).


#12 No public reports from platforms regarding account suspensions and censorship requests

Another important aspect of deception around social media censorship is that the most popular digital platforms have never provided ongoing, public reports of the number of accounts they suspend, and why. Nor do platforms that exercise censorship share ongoing information on who requests which accounts be suspended, and their rationales. Consequently, researchers and the public are unlikely to appreciate the scope of censorship that does occur on social media platforms, or whether the authors behind it are G3P policy enforcers, or otherwise.

Figure 2. Slide on 'Critical social justice as a protected ideology in Higher Education, but contested in social media hashtag communities' (Noakes, 2024)

This is an important gap due to its implications for free speech. Many 'Critical Social Justice' assumptions and beliefs seem protected from debate in Higher Education and in the Fourth Estate. Likewise, the most popular social networks of the Fifth Estate may also be providing stealthy protection for G3P agenda dogmas via censorship. As this is never made available as part of the public record, it remains mostly hidden from the public, and largely inaccessible to scholarship.

More about censorship techniques against dissenters on social networks

  1. Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative
  2. Content suppression techniques against dissent in the Fifth Estate - examples of COVID-19 censorship on social media

N.B. I am writing a third post on account censorship during COVID-19, which will cover at least three more serious techniques. Do follow me on X to learn when it is published. Please suggest improvements to this post in the comments below, or reply to my tweet thread at https://x.com/travisnoakes/status/1930989080231203126.

Saturday, 29 March 2025

Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative

Written for researchers and others interested in the many techniques used to suppress COVID-19 dissidents' social media accounts, and digital voices.

There has been extensive censorship of legitimate, expert criticism during the COVID-19 event (Kheriaty, 2022; Shir-Raz et al., 2023; Hughes, 2024). Such scientific suppression makes visible the narrow frame within which the sponsors of global health authoritarianism permit The Science™ to be questioned. In contrast to genuine science, which innovates through critique, incorporated science does not welcome questioning. Like fascism, corporatist science views critiques of its interventions as heresy. In the COVID-19 event, key opinion leaders who criticised the lack of scientific rigour behind public health measures (such as genetic vaccine mandates) were treated as heretics by a contemporary version of the Inquisition (Malone et al., 2024). Dissidents were accused of sharing "MDM" (Misinformation, Disinformation and Malinformation) assumed to place the public's lives at risk. Particularly in prestigious medical universities, questioning the dictates of health authorities and their powerful sponsors was viewed as unacceptable, falling completely outside an Overton Window that had become far more restrictive due to fear-mongering around a "pandemic" (see Figure 1).




Figure 1. Narrowed Overton Window for COVID-19. Figures copied from pp. 137-138 of Dr Joseph Fraiman (2023), 'The dangers of self-censorship during the COVID-19 pandemic', in R. Malone, E. Dowd, & G. Fareed (Eds.), Canary In a Covid World: How Propaganda and Censorship Changed Our (My) World (pp. 132-147). Amazon Digital Services LLC - Kdp.


Higher Education is particularly susceptible to this groupthink, as it lends itself to a purity spiral, which in turn contributes to the growing spiral of silence around "unacceptable views". A purity spiral is a form of groupthink in which it is more beneficial to hold some views than not to hold them. In a process of moral outbidding, individual academics with more extreme views are rewarded. This was evidenced at universities where genetic vaccine proponents loudly supported the mandatory vaccination of students, despite students having minimal, if any, risk. In contrast, scholars expressing moderation, doubt or nuance faced ostracism as "anti-vaxxers". In universities, there are strong social conformity factors within tight-knit communities. Grants, career support and other forms of institutional support depend on collegiality and alignment with prevailing norms. Being labelled a contrarian for questioning a ‘sacred cow’, such as "safe and effective" genetic vaccines, is likely to jeopardise one’s reputation and academic future. Academic disciplines coalesce around shared paradigms and axiomatic truths, routinely amplifying groupthink. Challenging understandings reified as shibboleths can lead to exclusion from conferences and journals, and can cost scholars departmental, faculty, and even university support. Particularly where powerful funders object to such dissent!


Here, administrative orthodoxy can signal an “official” position for the university that chills debate. Dissenters' fears of isolation and reprisal (such as poor evaluations and formal complaints for not following the official line) may convince them to self-censor. This is particularly likely where the nonconformist assesses that the strength of opinion against his or her view is virulent, and that the costs of expressing a disagreeable viewpoint (such as negotiating cancellation culture) are high. Individuals who calculate that they have a low chance of convincing others, and are likely to pay a steep price, self-censor and contribute to the growing spiral of silence. The COVID-19 event serves as an excellent example of this growing spiral's chilling effect on free speech and independent enquiry.


COVID-19 is highly pertinent for critiquing censorship in the Medical and Health Sciences, particularly as it featured conflicts of interest that contributed to global health "authorities'" policy guidance. Notably, the World Health Organisation promoted poorly substantiated and even unscientific guidelines (Noakes et al., 2021) that merit being considered MDM. In following such dictates from the top policy makers of the Global Public-Private Partnership (GPPP or G3P), most governments' health authorities seemed to ignore key facts. Notably: i. COVID-19 risk was steeply age-stratified (Verity et al., 2020; Ho et al., 2020; Bergman et al., 2021); ii. prior COVID-19 infection can provide substantial immunity (Nattrass et al., 2021); iii. COVID-19 genetic vaccines did not stop disease transmission (Eyre et al., 2022; Wilder-Smith, 2022); iv. mass-masking was ineffective (Jefferson et al., 2023; Halperin, 2024); v. school closures were unwarranted (Wu et al., 2021); and vi. there were better alternatives to lengthy, whole-society lockdowns (Coccia, 2021; Gandhi and Venkatesh, 2021; Herby et al., 2024). Both international policy makers' and local health authorities' flawed guidance must be open to debate and rigorous critique. If public health interventions had been adapted to such key facts during the COVID-19 event, the resultant revised guidance could well have contributed to better social, health and economic outcomes for billions of people!


This post focuses on six types of suppression techniques that were used against dissenting accounts whose voices were deemed illegitimate "disinformation" spreaders by the Global Public-Private Partnership (G3P)-sponsored industrial censorship complex. This is an important concern, since claims that suppressing free speech's digital reach can "protect public safety" were proved false during COVID-19. A case in point is the censorship of criticism against employees' vaccine mandates. North American employers' mandates are directly linked to excess disabilities and deaths for hundreds of thousands of working-age employees (Dowd, 2024). Deceptive censorship of individuals' reports of vaccine injuries as "malinformation", or the automatic labelling of criticism of Operation Warp Speed as "disinformation", would hamper US employees' ability to make fully-informed decisions on the safety of genetic vaccines. Such deleterious censorship must be critically examined by academics. In contrast, 'disinformation-for-hire' scholars (Harsin, 2024) will no doubt remain safely ensconced behind their profitable MDM blinkers.


This post is the first in a series that spotlights the myriad account suppression techniques that exist. For each, examples of censorship against health experts' opinions are provided. Hopefully, readers can then better appreciate the asymmetric struggle that dissidents face when their accounts are targeted by the censorship industrial complex with these strategies spanning multiple social media platforms:


Practices for @Account suppression


#1 Deception - users are not alerted to unconstitutional limitations on their free speech


Social media users might assume that their constitutional right to free speech as citizens will be protected within, and across, digital platforms. However, global platforms may not support such rights in practice. No social media company openly discloses the extent to which users' accounts have been, and are being, censored for expressing opinions on controversial topics. Nor do these platforms explicitly warn users what they consider to be impermissible opinions. Consequently, their users are not forewarned regarding what may result in censorship. For example, many COVID-19 dissidents were surprised that their legitimate critiques could result in account suspensions and bans (Shir-Raz, 2022). Typically, such censorship was justified by Facebook, Google, LinkedIn, TikTok, Twitter and YouTube as responses to users' violation of "community rules". In most countries, freedom of speech is a citizen’s constitutional right that it should be illegal to override. It should be deeply concerning that such protections were not supported in the Fifth Estate's digital public square during the COVID-19 event. Instead, the supra-national interests of health authoritarians came to supersede national laws to prevent (unproven) harms. This pattern of censorship is noticeable in many other scientific issue arenas, ranging from criticism of man-made climate change to skeptics challenging transgender medical ideology.

#2 Cyberstalking - facilitating the virtual and physical targeting of dissidents


An individual who exercises his or her voice against official COVID-19 narratives can expect to receive both legitimate, pro-social criticism and unfair, anti-social criticism. While cyberstalking should be illegal, social media platforms readily facilitate the stalking and cyber-harassment of dissidents. An extreme example of this was Dr Christine Cotton's experience on LinkedIn. Dr Cotton was an early whistleblower (January 2022) against Pfizer's COVID-19 clinical trial's false claims of 95% efficacy for its treatments. Her report identified the presence of bias and major deviations from good clinical practice. In press interviews, she reported that the trial did ‘not support validity in terms of efficacy, immunogenicity and tolerance of the results provided in the various Pfizer clinical reports that were examined in the emergency by the various health authorities’. Christine shared this report with her professional network on LinkedIn, asking for feedback from former contacts in the pharmaceutical industry. The reception was mostly positive, but it and related posts were subject to rapid content takedowns by LinkedIn, ostensibly for not meeting community standards. At the same time, her profile became hyper-surveilled. It attracted unexpected visits from 133 lawyers, the Ministry of Defence, employees of the US Department of State, the World Health Organisation, and others (p. 142). None of these profile viewers contacted her directly.

#3 Othering - enabling public character assassination via cyber smears


Othering is a process whereby individuals or groups are defined, labelled or targeted as not fitting within the norms of a social group. This influences how people perceive and treat those who are viewed as part of the in-group, versus those in an out-group. At a small scale, othering can result in a scholar being ostracised from a university department following academic mobbing and online academic bullying (Noakes & Noakes, 2021). At a large scale, othering entails a few dissidents on social media platforms being targeted for hypercriticism by gangstalkers.

Cyber gangstalking is a process of cyber harassment that follows cyberstalking, whereby a group of people target an individual online to harass him or her. Such attacks can involve gossip, teasing and bad-jacketing, repeated intimidation and threats, plus other fear-inducing behaviours. Skeptics' critical contributions can become swamped by pre-bunkers and fellow status-quo defenders. Such pseudo-skeptics may be sponsored to trivialise dissenters' critiques, thereby contributing to a fact choke against unorthodox opinions. 

In Dr Christine Cotton's case, her name was disclosed in March 2022 in a list that formed part of a French Senate investigation into adverse vaccine events. A ‘veritable horde of trolls seemingly emerged out of nowhere and started attacking’ her ‘relentlessly’ (p. 143). These trolls were inter-connected through subscribing to each others’ accounts, which allowed them to synchronise their attacks. They attempted to propagate as much negative information on Dr Cotton as possible in a ‘Twitter harassment scene’. Emboldened by their anonymity, the self-proclaimed “immense scientists” with masters degrees in virology, vaccines, clinical research and biostatistics launched a character assassination. They attacked her credentials and work history, whilst creating false associations (“Freemasonry” and “Illuminati”).

This suggests how identity politics sensibilities and slurs are readily misused against renegades. In the US, those questioning COVID-19 policies were labelled “far right” or “fascist”, despite promoting a libertarian critique of healthcare authoritarianism! In addition, orchestrators of cybermobbing tagged dissidents' accounts as those of someone who is: 'anti-science', 'an anti-vaxxer', 'biased', 'charlatan', 'celebrity scientist', 'conspiracy theorist', 'controversial', 'COVID-19 denier', 'disgraced scientist', 'formerly-respected', 'fringe expert', 'grifter', 'narcissist with a Galileo complex', 'pseudo-scientist', 'quack', 'salesman', 'sell-out' and 'virus', amongst other pejoratives. Such terms are used as a pre-emptive cognitive vaccine whose hypnotic language patterns ("conspiracy theorist") are intended to thwart audience engagement with critical perspectives. Likewise, these repeatedly used terms help grow a digital pillory that becomes foregrounded in the pattern of automated suggestions in search engine results.

In this Council of the Cancelled discussion, Mike Benz, Prof Jay Bhattacharya, Nicole Shanahan and Dr Eric Weinstein speculate about hidden censorship architectures. One example is Google's automated tagging of "controversial" public figures. These tags can automatically feature in major mainstream news articles about COVID-19 dissidents. This is not merely a visual tag, but a cognitive one: it marks "controversial" individuals with a contemporary (digital) scarlet letter.

In Dr Cotton's case, some trolls smeared her work raising awareness of associations for the vaccine-injured as helping “anti-vaccine conspiracy sites”. She shares many cases of these injuries in her book, and was amazed at the lack of empathy that Twitter users showed not just her, but also those suffering debilitating injuries. In response, she featured screenshots of select insults on her blog at https://christinecotton.com/critics and blocked ‘hundreds of accounts’ online. In checking the Twitter profiles attacking her, she noticed that many with ‘behavioural issues’ were close by. Dr Cotton hired a ‘body and mind’ guard from a security company for 24-hour protection. Her account was reported for “homophobia”, which led to its temporary closure. After enduring several months of cyber-harassment by groups, a behaviour that can be severely punished under EU law, Dr Cotton decided to file complaints against some of them. Christine crowdfunded legal complaints against Twitter harassers from a wide variety of countries. This complaint sought to challenge cyber-harassers' assumption that anonymity shields them from lawsuits for defamation, harassment and public insults.

#4 Not blocking impersonators or preventing brandjacked accounts


Impersonators' accounts claiming to belong to dissidents can quickly pop up on social media platforms. While a few may be genuine parodies, others serve identity-jacking purposes. Some serve criminal ends, in which scammers use fake celebrity endorsements to phish "customers'" financial details for fraud. Alternately, intelligence services may use brandjacking for covert character assassination smears against dissidents.

The independent investigative journalist, Whitney Webb, has tweeted about her ongoing YouTube experience of having her channel's content buried under a fact choke of short videos created by other accounts.

Whether such activities stem from intelligence services or cybercriminals, they are very hard for dissidents and/or their representatives to respond to effectively. Popular social media companies (notably Meta, X and TikTok) seldom respond quickly to scams, or to the digital "repersoning" discussed in a Corbett Report conversation between James Corbett and Whitney Webb.
 
In Corbett's case, after his account was scrubbed from YouTube, many accounts featuring his identity started cropping up there. In Webb's case, she does not have a public profile outside of X, but profiles featuring her identity were created on Facebook and YouTube. "Her" channels clipped old interviews she had given and edited them into documentaries on material Whitney has never publicly spoken about, such as Bitcoin and CERN. They also misrepresented her views on the transnational power structure behind the COVID-19 event, suggesting she held just Emmanuel Macron and Klaus Schwab responsible for driving it. They used AI thumbnails of her, and superimposed her own words from the interviews. Such content proved popular and became widely reshared via legitimate accounts, pointing to the difficulty dissidents face in countering it. She could not get Facebook to take down the accounts without supplying a government-issued ID to verify her own identity.


Digital platforms may be uninterested in offering genuine support: they may take no corrective action when following proxy orders from the US Department of State (aka 'jawboning') or members of the Five Eyes (FVEY) intelligence alliance. In stark contrast to marginalised dissenters, VIPs in multinationals enjoy access to online threat protection services for executives (such as ZeroFox) that cover brandjacking and over 100 other cybercriminal use-cases.

#5 Filtering an account's visibility through ghostbanning


As the Google Leaks (2019), Facebook Files (2021) and Twitter Files (2022) revelations have spotlighted, social media platforms have numerous algorithmic censorship options, such as filtering the visibility of users' accounts. Targeted users may be isolated and throttled for breaking "community standards" or government censorship rules. During the COVID-19 event, dissenters' accounts were placed in silos, de-boosted, and subjected to reply de-boosting. Contrarians' accounts were subject to ghostbanning (AKA shadow-banning), a practice that reduces an account’s visibility or reach secretly, without explicitly notifying its owner. Ghostbanning limits who can see the account's posts, comments, or interactions. This includes muting replies and excluding targeted accounts' results from trends, hashtags, searches and followers’ feeds (except where users seek a filtered account's profile directly). Such suppression effectively silences a user's digital voice, whilst he or she continues to post under the illusion of normal activity. Ghostbanning is thus a "stealth censorship" tactic linked to content moderation agendas.
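The sketch below illustrates how such visibility filtering could work in principle. The per-account flags echo labels leaked in the Twitter Files ("Search Blacklisted", "Trends Blacklisted", "Do Not Amplify"), but the logic and names are my assumptions, not any platform's real code.

```python
# An assumption-laden sketch of visibility filtering ("ghostbanning").
# Flag names echo labels leaked in the Twitter Files; the logic is
# illustrative, not any platform's actual implementation.
from dataclasses import dataclass

@dataclass
class VisibilityFlags:
    search_blacklist: bool = False  # hide posts from search results
    trends_blacklist: bool = False  # exclude from trends and hashtag pages
    do_not_amplify: bool = False    # drop from algorithmic home feeds
    reply_deboost: bool = False     # mute or collapse the account's replies

def is_shown(flags: VisibilityFlags, surface: str, direct_profile_visit: bool) -> bool:
    """Decide whether a post surfaces, mirroring the behaviour described above."""
    if direct_profile_visit:
        # Ghostbanned content stays visible on the profile itself,
        # preserving the owner's illusion of normal activity.
        return True
    blocked = {
        "search": flags.search_blacklist,
        "trends": flags.trends_blacklist,
        "home_feed": flags.do_not_amplify,
        "replies": flags.reply_deboost,
    }
    return not blocked.get(surface, False)
```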

This term gained prominence with the example of the Great Barrington Declaration's authors, Professors Jay Bhattacharya, Martin Kulldorff and Sunetra Gupta. Published on October 4, 2020, this public statement and proposal flagged grave concerns about the damaging physical and mental health impacts of the dominant COVID-19 policies. It argued that an approach of focused protection should be followed rather than blanket lockdowns, and that allowing controlled spread among low-risk groups would eventually result in herd immunity. Ten days later, a counter-statement, the John Snow Memorandum, was published in defence of the official COVID-19 narrative's policies. Mainstream media and health authorities amplified it, as did social media, given the memorandum's alignment with prevailing platform policies against "misinformation" circa 2020. In contrast, the Great Barrington Declaration was targeted indirectly through platform actions against its proponents and related content:


Stanford Professor of Medicine Dr Jay Bhattacharya’s Twitter account was revealed (via the 2022 Twitter Files) to have been blacklisted, reducing its visibility. His tweets questioning lockdown efficacy and vaccine mandates were subject to algorithmic suppression. Algorithms could flag his offending content with terms like “Visibility Filtering” (VF) or “Do Not Amplify”. For instance, Bhattacharya reported that his tweets about the Declaration and seroprevalence studies (showing wider COVID-19 spread than official numbers suggested) were throttled. Journalist Matt Taibbi's reporting on the Twitter Files leaks confirmed that Twitter had blacklisted Prof Bhattacharya's account, limiting its reach due to his contrarian stance. YouTube also removed videos in which he featured, such as interviews in which he criticised lockdown policies.

The epidemiologist and biostatistician Prof Kulldorff observed that social media censorship stifled opportunities for scientific debate. He experienced direct censorship on multiple platforms, including shadowbans. Twitter temporarily suspended his account in 2021 for tweeting that not everyone needed the COVID-19 vaccine ('Those with prior natural infection do not need it. Nor children'). Posts on X and web reports indicate Kulldorff was shadowbanned beyond this month-long suspension. The Twitter Files, released in 2022, revealed he was blacklisted, meaning his tweets’ visibility was algorithmically reduced. Twitter suppressed Kulldorff's accurate genetic vaccine critique, preventing comments and likes. Internal Twitter flags like “Trends Blacklisted” or “Search Blacklisted” (leaked during the 2020 Twitter hack) suggest Kulldorff's account was throttled in searches and trends, a hallmark of shadowbanning, where reach is curtailed without notification. Algorithmic deamplification excluded Prof Kulldorff's tweets from trends, search results and followers’ feeds, except where users sought his profile directly. This reflects how social media companies may apply visibility filters (such as a Not Safe For Work (NSFW) view). Kulldorff also flagged that LinkedIn’s censorship pushed him to platforms like Gab, implying a chilling effect on his professional network presence.


An Oxford University epidemiologist, Professor Gupta, faced less overt account-level censorship, but still had to negotiate content suppression. Her interviews and posts on Twitter advocating herd immunity via natural infection among the young and healthy were often flagged, or down-ranked.


#6 Penalising accounts that share COVID-19 "misinformation"


In addition to ghostbanning, social media platforms could target accounts for sharing content on COVID-19 that contradicted guidance from the Global Public-Private Partnership (G3P)'s macro-level stakeholders, such as the Centers for Disease Control or the World Health Organisation. In Twitter's case, it introduced a specific COVID-19 misinformation policy in March 2020, which prohibited claims about transmission, treatments, vaccines, or public health measures that the COVID-19 hegemony deemed “false or misleading”. Such content either had warning labels added to it, or was automatically deleted:

Tweets with suspected MDM were tagged with warnings like “This claim about COVID-19 is disputed” or with labels linking to curated "fact-checks" on G3P health authority pages. This was intended to reduce a tweet’s credibility without immediate removal, whilst also diminishing its poster's integrity. 

Tweets that broke this policy were deleted outright after flagging by automated systems or human moderators. For instance, Alex Berenson’s tweets questioning lockdown efficacy were removed, contributing to his eventual ban in August 2021. In Dr Christine Cotton's case, Twitter classified her account as “sensitive content”. It gradually lost visibility with the tens of thousands of followers it had attracted. In response, she created a new account to begin ‘from scratch’ in August 2022. The Twitter Files revealed that such censorship was linked to United States government requests (notably from the Joe Biden administration and the Federal Bureau of Investigation). For example, 250,000 tweets flagged by Stanford’s Virality Project in 2021 were removed by Twitter.
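A hedged sketch of this label-or-delete flow follows. The score threshold, label text and action names are illustrative assumptions based on the behaviour described in this post, not Twitter's actual moderation pipeline.

```python
# A hedged sketch of the label-or-delete moderation flow described
# above. Thresholds, labels and actions are illustrative assumptions.

DISPUTED_LABEL = "This claim about COVID-19 is disputed"

def moderate_post(misinfo_score: float) -> dict:
    """Map an automated misinformation score to a moderation action."""
    if misinfo_score >= 0.9:
        # Clear policy breaches were deleted outright, and counted
        # towards the account's strike total.
        return {"action": "delete", "adds_strike": True}
    if misinfo_score >= 0.5:
        # Borderline posts were labelled and linked to curated
        # "fact-checks", reducing credibility without removal.
        return {"action": "label", "label": DISPUTED_LABEL, "downrank": True}
    return {"action": "none"}
```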

In March 2020, Meta expanded its misinformation policies to target COVID-19-related MDM. Facebook and Instagram applied content labelling and down-ranking, with posts allegedly featuring MDM being labelled with warnings (such as 'False Information' or 'See why health experts say this is wrong') that linked to official sources. Such posts were also down-ranked in the News Feed to reduce their visibility. Users were notified of violations and warned that continued sharing could further limit reach or lead to harsher action. In late 2021, down-ranking was also applied to “vaccine-skeptical” content not explicitly violating rules but potentially discouraging vaccination. Posts violating policies were removed outright.

LinkedIn's smaller, professional user base, and the platform's lower emphasis on real-time virality, led it to prefer the outright removal of accounts over throttling via shadow-bans. Accounts identified as posting MDM could face temporary limits, such as restricted posting privileges or an inability to share articles for a set period. LinkedIn users received warnings after a violation, often with a chance to delete the offending post themselves to avoid further action. Such notices cited the policy breach, linking to LinkedIn’s stance on official health sources. This approach to COVID-19 MDM followed LinkedIn’s broader moderation tactics for policy violations.

In Dr Cotton's case, she shared her critique of Pfizer's COVID-19 clinical trial on LinkedIn to get feedback from her professional network of former contacts in the pharmaceutical industry. Her first post was removed within 24 hours (p. 142), and her second within an hour. This hampered her ability to debate the methodology of Pfizer's trial with competent people. Prof Kulldorff also had two posts deleted in August 2021: one linking to an interview on vaccine mandate risks, and another reposting an Icelandic health official's comments on herd immunity.

Accounts that posted content with links to external, alternative, independent media (such as Substack articles or videos on Rumble) also saw such posts down-ranked, hidden or automatically removed.

This is the first post on techniques for suppressing health experts' social media accounts (and the second on COVID-19 censorship in the Fifth Estate). My next in the series will address more extreme measures against COVID-19 dissidents, with salient examples.

I am writing a series of posts on this topic that will cover more serious techniques. Do follow me on X to be alerted when they are published. Please share your views by commenting below, or reply to this tweet thread at https://x.com/travisnoakes/status/1906250555564900710.

Wednesday, 5 February 2025

Celebrities cannot stop their brandjacking, since many authorities are unable to help!

Written for cyber- and digital crime researchers and reporters. Plus, brandjacked celebrities and their representatives.


In discussions with reporters and PR experts, most seemed to be under the mistaken impression that celebrities enjoy a viable route to stopping their brandjacking in scam adverts on social media platforms. I wrote this post to explain that although SA public figures seem well-resourced and influential, none have a viable route to prosecuting their brandjackers, due to an absence of support from many authorities:

Since 2019, The Noakes Foundation has supported research into the brandjacking of influential celebrities' reputations on social media, and other poorly-moderated platforms. The Fake Celebrity Endorsement (FCE) research team is documenting how this digital crime is an inscrutable, traumatic experience for celebrities, their representatives, and the financial victims who report being conned by fake endorsements. In addition to being traumatised by featuring in fake adverts, micro-celebrities are further traumatised by the many reports from fans upset at being conned. A few celebrities have become targets for recurring cybervictimisation with no recourse, resulting in repeat trauma.


The FCE project distinguishes 'digital crimes' from 'cybercrimes': micro-fraudsters typically target private individuals with limited access to resources for combating digital crime. This contrasts with cybercrimes, in which corporations are attacked (Olson, 2024). Corporations are often well positioned to support their employees with costly resources that private individuals cannot afford. Research into the latter is well-resourced, as are interventions to stop it. By contrast, the fighting of digital crimes that impact private citizens is poorly resourced, particularly in the Global South. In the case of fake celebrity endorsements, press reports of this scam suggest that the problem grows each year: eleven South African celebrities fell victim to it in 2024, up from two in the first reports of 2014.


Fake celebrity endorsement is a digital crime that may require many authorities in society to combat it. Below is a list of the role-players that might potentially help prevent digital crimes:


1) celebrity influencers,
2) financial victims,
3) social media advertising providers,
4) poorly-moderated content hosts,
5) banks,
6) cyber defence companies,
7) cybercrime reporters and statistics gatherers (industry researchers),
8) cybercrime educators,
9) anti-crime activists (PBOs and NGOs),
10) social media platforms (e.g. Big Tech),
11) financial investors,
12) government politicians,
13) the police,
14) international law enforcement,
15) local law,
16) Higher Education and its funders,
17) product regulators.

While digital crime victims might expect support from these other role-players, this post spotlights their limitations. Some are simply unable to prioritise fighting fake celebrity endorsements, while others' interests may not be served by tackling this crime!

Figure 1 - the brandjacking digital crime process


Figure 1 shows a simplified process of the fake endorsement phishing scam. The authors of this digital crime are unknown: they can range from gangs, to the invisible threat of AI and bot armies, to even military intelligence agencies raising funds. Not only do these cybercriminals exploit scamming ecosystems inside popular social media platforms, they also exploit related ecosystems on platforms such as Huione Guarantee (now "Haowang Guarantee"), a Cambodian conglomerate. It offers a messaging app, stablecoin, and crypto exchange, and has facilitated $2 billion in transactions. Such platforms are integral to the industrialisation and scaling-up of online scams, for example through supporting the outsourcing of scammers' money-laundering activities (The Economist, 2025).

1) Celebrity influencers

On digital media, celebrity influencers are 'micro-celebrities', who can also be 'influencers' (if paid to share content). Micro-celebrities may not be aware of the dangers of hyper-visibility, since there are no 'Here Be Dragons' signs at the on-ramps to creating their digital profiles. Here, celebrities agree to legal contracts that are heavily one-sided in favour of social media platforms versus users (Sarafian, 2023). These contracts do not place an onus on social networks to warn or protect their users from digital visibility risks, such as brandjacking and impersonation. The FCE project has approached almost 50 South African celebrities via their agents to participate in its research. Each one's reputation was reported to have been stolen for scam adverts. Despite offering incentives, only three (plus select representatives) agreed to participate. Most may want to put their negative experiences behind them, while fearing reputational risks from being involved in a research process they are unfamiliar with, and whose outputs may be misperceived as potentially damaging. So, a big challenge exists in persuading micro-celebrities to contribute their experiences to research, so that these can be shared to inform digital crime fighters' responses.

2) Financial victims

Fans who have developed a parasocial relationship with a particular celebrity they follow may genuinely believe that fake endorsement adverts are a legitimate offer, notwithstanding the product's promise seeming too good to be true. Having been conned, victims may be ashamed, or in denial. Many may consider their financial loss not worth reporting (as a micro-fraud versus a serious crime). Even if victims are willing to report the digital crime, it may not be obvious which authority the crime is best reported to.

3) Social media advertising services

Online advertisers and digital platforms may not understand, or monitor, the threat of digital crimes such as celebrity brandjacking. The crime is not well-defined and may also be challenging to report on, since it spans several crimes itself: 1) impersonation; 2) non-consensual image sharing; and 3) the infringement of a public figure's intellectual property (through copyright violation of still images and audio-video). In addition, the crime causes 4) reputational damage by suggesting a public figure's association with a scam that often involves 5) financial fraud and hacking. Social media advertising complaint forms only permit the reporting of one type of infringement at a time. This potentially leaves a blindspot, as users cannot report all the aspects that characterise the celebrity brandjacking crime (see the sketch below). If it is a widespread problem, social media advertisers may also prefer not to flag it as a concern, thereby protecting their public reputations, albeit at the expense of celebrity and other financial victims.
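To illustrate this blindspot, here is a minimal Python sketch. The sub-crime labels and report structure are my own illustrative assumptions, not any platform's actual complaint taxonomy. A single brandjacking incident spanning five sub-crimes must be fragmented into five disconnected single-category complaints:

```python
from dataclasses import dataclass, field

# Illustrative sub-crimes that one brandjacked advert can combine
# (my own labels, not any platform's actual complaint taxonomy).
SUB_CRIMES = [
    "impersonation",
    "non-consensual image sharing",
    "intellectual property infringement",
    "reputational damage",
    "financial fraud",
]

@dataclass
class BrandjackingReport:
    """One incident, capturing every applicable sub-crime at once."""
    advert_url: str
    impersonated_celebrity: str
    sub_crimes: list = field(default_factory=list)

def to_platform_complaints(report):
    """Complaint forms accept one infringement type per submission, so a
    single incident fragments into several disconnected reports - the
    blindspot described above."""
    return [{"url": report.advert_url, "infringement_type": crime}
            for crime in report.sub_crimes]

incident = BrandjackingReport(
    advert_url="https://example.com/fake-advert",  # hypothetical URL
    impersonated_celebrity="Jane Doe",             # hypothetical celebrity
    sub_crimes=SUB_CRIMES,
)
print(f"1 incident becomes {len(to_platform_complaints(incident))} separate complaints")
```

Because each complaint is reviewed in isolation, no single reviewer ever sees the full pattern that defines the crime.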

4) Poorly-moderated content hosts

To make their offers seem more credible, scammers also post fake content on poorly-moderated sites (such as "clickbait news", "positive reviews" on online forums, and "scientific papers" on academic social networks). Even if such fake content is reported and removed, scammers can quickly shift it to even worse-moderated hosts...

5) Banks

As financial victims legitimately authorise payments off their own accounts, they do not enjoy recourse via their banks. To prevent further fraudulent transactions, these victims often have to pay their banks for new cards after terminating their old ones. It is unclear what role banks could adopt in combating digital crimes wherein clients are defrauded whilst following a seemingly legitimate payment process.


Figure 2. Authorities who could contribute to fighting digital crimes


6) Cyber defence companies

Cyber defence businesses are focused on providing profitable services to corporates. Such services are often unaffordable even to the wealthiest celebrities in SA. However, some celebrities may be fortunate to work for companies that use cyber defence services which pro-actively monitor cyberspace, and warn employees against digital impersonation and related risks. Such services include Darkivore, Flashpoint, Netcraft (FraudWatch International), SGS and ZeroFox. It does not seem that cyber defence companies can produce a profitable service that supports rapid responses to fake endorsements and related crimes. Even if such a service were affordable, most SA celebrities have not been targeted for revictimisation, so it seems unlikely they would subscribe to one annually.
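For readers curious what 'pro-active monitoring' can involve at its simplest: one common brand protection signal is the registration of lookalike (typosquatted) domains. The Python sketch below is a toy illustration using naive letter permutations and a DNS lookup; the commercial services named above are far more sophisticated, and this is not a description of their actual methods:

```python
import socket

def lookalike_domains(brand, tld=".com"):
    """Generate naive typosquat candidates: dropped, doubled and swapped letters."""
    variants = set()
    for i in range(len(brand)):
        variants.add(brand[:i] + brand[i + 1:])                 # drop a letter
        variants.add(brand[:i + 1] + brand[i] + brand[i + 1:])  # double a letter
    for i in range(len(brand) - 1):
        variants.add(brand[:i] + brand[i + 1] + brand[i] + brand[i + 2:])  # swap neighbours
    variants.discard(brand)
    return {v + tld for v in variants if v}

def appears_registered(domain):
    """Crude registration check: does the name resolve in DNS?"""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

# Flag candidate impersonation domains for a hypothetical brand name.
for candidate in sorted(lookalike_domains("examplebrand")):
    if appears_registered(candidate):
        print("investigate:", candidate)
```

A real monitoring service would add social media handle scanning, advert libraries, image matching and takedown workflows on top of such basic signals.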

7) Cybercrime reporters and statistics gatherers

While there have been many reports of weight-loss and money-making cryptocurrency scams featuring particular celebrities, the media, celebrities' agents and PR companies seem to report on these brandjackings as once-off events. Reports typically cover the latest flare-up to negatively impact one or two stars, plus their fans. At the same time, cybercrime statistics do not include this digital crime, whose costs to victims in SA are unknown, and difficult to aggregate. This points to a need for developing an holistic view of digital crime from consolidated reports. Research into digital crimes that can bridge the work of journalists and crime statisticians seems urgently needed to describe such crimes' extent, frequency and costs to society. Developing robust reporting mechanisms for digital crimes (particularly challenging ones like 'fake social media adverts for phishing', as they include several sub-crimes) would seem an important contribution that law enforcement, researchers and statisticians can make (a toy example follows below). Reporters and researchers can also develop robust definitions of emergent digital crimes to grow awareness of them. This should aid more accurate reports.
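As a toy example of why consolidated reporting matters: once individual incidents are logged in a shared structure, the aggregate statistics that journalists and statisticians currently lack become trivial to produce. All records and values below are invented for illustration:

```python
from collections import Counter

# Hypothetical consolidated incident log, as might be compiled from
# journalists' reports and victims' complaints (all values invented).
incidents = [
    {"scam_type": "weight-loss", "celebrity": "Star A", "loss_zar": 4500},
    {"scam_type": "crypto",      "celebrity": "Star B", "loss_zar": 12000},
    {"scam_type": "crypto",      "celebrity": "Star A", "loss_zar": 800},
]

by_type = Counter(i["scam_type"] for i in incidents)
total_loss = sum(i["loss_zar"] for i in incidents)

print("incidents by scam type:", dict(by_type))    # frequency
print(f"aggregate reported losses: R{total_loss}")  # cost to society
```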

8) Cybercrime researchers and educators in companies

In a similar collaborative vein, cybercrime researchers and educators in companies are working together to help combat digital crimes targeting their employees and clients. In particular, banks and insurance companies in SA are pro-actively raising awareness around phishing and other common digital crimes. This is done in communications that range from email newsletters to pop-up warnings that clients must acknowledge reading after logging in.


9) Anti-crime activists (PBOs and NGOs)

Anti-digital crime education tends to focus on equipping high school students and working professionals with preventative knowledge in English. However, our research into fake celebrity endorsement victims' online commentary suggests that most are over fifty-five, with English being their second language, at best. In response, The Noakes Foundation has supported the development of modules in English for educating silver surfers on the most common digital crimes. Ideally, though, these modules (and reportfakeendorsement.com's content) should be available in all of South Africa's official languages.


10) Social media platforms and their Big Tech owners

Social media companies, and their Big Tech owners, would seem to have a particular responsibility for protecting users from digital crime threats on their platforms. In contrast, there is a decade-long history in SA of even influential celebrities not being well-supported with speedy responses to their brandjacking, and scam adverts are seldom taken down based on reports from celebrities, their representatives and other victims.

The most popular platforms for this scam in SA are Meta's Facebook and Instagram. Meta does not understand the content that its users share (Horwitz, 2023). Further, it does not report on scam ecosystems based inside its own platforms. Consequently, neither Facebook nor Instagram can pro-actively identify digital crimes, let alone quickly adapt their systems to stop emergent threats from micro-fraudsters. It is left up to whistleblowers, former employees, investigative journalists and researchers to create awareness of these platforms' serious flaws, such as their use as a scammers' ecosystem tied to scam-as-a-service ones. This seems at odds with corporate responsibility: Meta should publicly report on its progress in tackling scam ecosystems on its Facebook, WhatsApp and Instagram platforms. It could also pro-actively warn vulnerable users, such as the aged, against the latest scam risks.

In a sense, digital crimes by cybercriminals on social networks can be considered a parasitic attack within a larger parasitic host: Meta's Facebook and Instagram are infomediaries that misrepresent themselves as symbionts in supporting users' communal connections online. In reality, Meta's business model is parasitic in relying on three billion users to share content (Robbins, 2018). Much of this content is not the work of original, creative producers, but rather sampled from content that has proved popular on other platforms. In essence, social media platforms are middlemen between content creators and their audiences, taking most of the profits from advertising. These platforms also take the intellectual property of online content creators. In the Global South this serves as a form of neocolonial data extraction, as Big Tech multinationals from the Global North extract the South's data, with little being reciprocated. For example, while powerful celebrities in the US can enjoy access to dedicated Facebook support, there is no equivalent offering for influential SA users. Instead, they are lucky to stumble onto internal staff or Trusted Partners who can best help them respond to Facebookjacking or Instajacking crimes.

In contrast to the usefulness of human insiders, the AI that manages users' reports of dubious accounts and content is simply not capable of recognising malicious advertisers' accounts. At face value, there is nothing "wrong" with how the scammers' accounts are set up: a human profile (with a fake name) manages a business profile (with a fake name and business). Reporting the scam accounts is useless, since the fraudsters fill in all the right criteria to fly under the radar! The scammers use 'like farms' and networks of fake profiles to create a sense of legitimacy through liking, sharing and commenting on posts and ads. The criminals also run a "legitimate website" - a bought domain with (questionable) hosting and design - selling a "product" while accumulating visitors' personal information and credit card details. All of this appears to be legitimate business behaviour to the AI, which cannot detect the malice. Scammers use a (stolen) credit card, or a hijacked Meta Ads Manager profile, to run adverts through their "business page". This works for a short while until the card or the account is stopped, and then they simply create another one. These adverts sell a product online that seems harmless and well within Meta's Community Standards. The fact that it is a fake product is immaterial to Meta: the onus is on the customer to know when they are being scammed, and if users try to report it as a harmful product, the report "doesn't work", as that is deemed a matter of personal opinion! Where such content is checked by human moderators, it is so obviously fake that they take it down quickly. Had this content truly been verified by a human, it would have been removed immediately; even to the most untrained eye, it is an obvious deep fake.
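The paragraph above can be made concrete: each field a scammer fills in passes a per-field check, and only the combination of signals betrays the fraud. Below is a minimal Python sketch in which the signals, weights and thresholds are all invented for illustration; this is combinatorial red-flag scoring in general, not Meta's actual detection logic:

```python
def suspicion_score(account):
    """Sum illustrative red flags that each look 'legitimate' in isolation.
    Signals, weights and thresholds are invented for this sketch; they
    are not any platform's actual rules."""
    score = 0
    if account["profile_age_days"] < 30:
        score += 2   # freshly created profile
    if account["likes_per_day"] > 500:
        score += 3   # engagement burst consistent with a 'like farm'
    if account["card_country"] != account["ad_target_country"]:
        score += 1   # payment card and audience geography mismatch
    if account["linked_domain_age_days"] < 14:
        score += 2   # advert links to a just-bought domain
    return score

scam_like_account = {
    "profile_age_days": 7,
    "likes_per_day": 1200,
    "card_country": "US",
    "ad_target_country": "ZA",
    "linked_domain_age_days": 3,
}
# Each field passes a per-field form check; only the combination scores high.
print("suspicion score:", suspicion_score(scam_like_account))
```

A human moderator intuitively weighs such combinations; a form-driven system that validates fields one at a time never sees them together.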

It appears that Meta's Facebook and Instagram are turning a blind eye to this digital advertising crime. The benefit to Meta is clear, as it reaps the rewards of advertisers' spending: Trustfull's 2024 report expects deep fake fraud to reach $15.7 billion in 2024, and Meta is set to take a large chunk of that ad-spend revenue in distributing fake, malicious content. It is hard not to draw the conclusion that it seems irrelevant to Meta whether the content is genuine or a scam, or whether the account used to promote these scams has been hacked or cloned. Either way, Meta still profits.

11) Financial investors

Investors are focused on the bottom line of financial profit. To achieve it, social media platforms' developers spotlight the metrics of an ever-increasing flow of communication marking their platforms' commercial expansion. Given this all-consuming quantitative focus, it is unsurprising that these platforms' developers and investors are largely uninterested in paying the costs to understand users' negative experiences on their platforms. Particularly when combating these might impede growth and monetisation!

12) Government politicians

SA's parasitic political class has been slow to take action to protect its citizenry from the excesses of social media platforms, and digital crimes on them. For example, it has not protected the intellectual rights of original digital content producers by passing a Creators' Bill of Rights to limit their online exploitation. On social media, SA creatives do not retain copyright, nor have rights of termination and appeal, etc. Globally, most online content creators struggle to make a living from the work that they do. Social media companies' oligarchic power, and their poor regulation by the law, contributes to this. Further, policy inactivity may suggest that government decision makers are guided by Big Tech's funding and political support, more so than by digital creatives' rights and needs.

13) Local police

Brandjacking is not viewed as a serious crime by SA authorities who might be expected to intervene as guardians. Cybercrime experts in the SA police have to focus their limited resources on fighting severe digital crimes (like the online trafficking of children, drugs, and guns). They simply cannot address digital crimes that have not been shown to have serious impacts on their victims, such as phishing micro-frauds. 

14) International law enforcement

In contrast to under-resourced local law enforcement authorities, global ones (such as Interpol) are better resourced to potentially offer some form of response to digital crimes on social media. However, until decent stats and reports for digital micro-frauds are documented and shared with global authorities, these digital crimes are not notifiable, so cannot be directly investigated at an international level.


15) Local law

Even if a foreign criminal network behind a scam is found through investigation, and the legal frameworks exist for its members' extradition, the costs for local law enforcement to prosecute scammers may well prove prohibitive to the State. The global proceeds of online fraud are probably more than $500bn a year, so another major concern is that online fraudsters have become rich and powerful enough to corrupt entire governments (Scam Inc podcasts, 2025). Scam "businesses" can turn countries into the cyber-scam equivalent of narco-states, and their operations can be found all over the world. Broadly, 1.5 million scammers are at work, from Namibia up to the Isle of Man, and from Mexico across to Fiji (The Economist, 2025). Where scam bosses have strong clout within a political system, it becomes impossible to enact policies that undermine their fraud. Corrupted states, such as Cambodia, would seem unlikely to extradite criminals who pose a reputational risk in potentially implicating senior state officials. Extradition from lawless places, such as Myanmar, is also impossible.

16) Higher Education and research funders

Like banks, universities are stepping up their preventative digital crime awareness communications, and attracting research funding to build scholarship into cyber- and digital crimes. More generally, universities can lead discourse on the digital crimes issue, catalysing inter- and trans-disciplinary collaborations. Funders of university grants might support design thinking, or strategic design activities, to develop solutions for the seemingly intractable brandjacking micro-fraud. They might likewise support ethical research into the issues that emerge when researching digital crimes under fake personas. Given that the brandjacking of influential scholars would also seem to pose reputational risks to their university-as-employer, related research could be motivated as a neglected, but potentially valuable, contribution.

17) Product regulators

Fake celebrity endorsements typically promote dubious products that may actually be delivered. As such, customers may assume that they are protected by local regulators for those particular product types. For example, British doctors have been brandjacked to market "wonder drugs" that cure high blood pressure. "Their" customers might assume that they are protected by the UK's General Medical Council. However, this falls outside the GMC's remit, which only covers promotions by genuine doctors on its register. The GMC cannot tackle 'computer generated videos' by unknown fraudsters (Stokel-Walker, 2014). It seems unlikely that any product regulator can help with tackling fake products marketed by unregistered and anonymous cybercriminals.

Celebrity, you are on your own in responding to digital crime?!

It should be clear that celebrities are unlikely to be supported in stopping fake social media adverts. While five key authorities are working to raise awareness against digital crimes (5, 7, 8, 9 & 16), there is little-to-no help available from eight others (3, 4, 6, 12, 13, 14, 15 & 17). At the digital crime's fountainhead (10 & 11), social media platforms are actually disincentivised from tackling the problem. In the absence of support from criminal prosecutors, the law or cybercrime-fighting businesses, the guardianship role of social media companies is non-negotiable. Social media platforms that are heavily used by scammers could take responsibility by entrusting micro-teams of moderators around the world to review adverts. Verification by well-trained human reviewers who can disable fraudulent accounts seems to be the best answer to stopping brandjacking on social media. The current AI approach is failing miserably, with counter-technologies many steps behind cybercriminals' "innovations".

Not enough is being done at an entry level to assist smaller companies and the general public, who are constantly under attack from digital crimes. The accounts of the celebrities, and representatives, whom we have interviewed suggest that pressure must urgently be placed on social media platforms to provide effective brandjacking reporting, and prevention, tools. Without these, deep fake adverts can spread quickly, reaching tens of thousands of people a day. At the same time, celebrities must focus on building rapid awareness in the media, plus at the site of the digital crimes, to alert potential victims.

Just as celebrity co-operation is important, so is that of civil society organisations, which can collaborate in pressuring authorities to act in a more responsible and pro-active manner in tackling digital crime. For example, Anna Collard (SVP Content Strategy and Evangelist at KnowBe4 Africa) is doing important work in building networks that can collaborate in educating the public about the dangers of cyber- and digital crimes. Likewise, international and local networks must motivate for sounder strategic interventions from key role-players to thwart global scam networks.

Serious funding is also needed to support awareness programs for educating vulnerable groups. In particular, communities like the elderly are at very high risk, being less tech- and media-savvy. Public benefactors are also needed to assist educational initiatives, like ReportFakeEndorsement, with reaching a broad audience, and to greatly increase the research being done into new digital crimes and how to thwart them.

Please comment with suggestions to improve this post

I am not an expert on all 17 role-players and their responses, so I welcome any constructive corrections for improving what has been shared above. Please comment below, so that I can review your feedback and perhaps update this blogpost, acknowledging you for your advice.

Acknowledgements

Thanks to Dr Taryn van Niekerk for proposing the term 'unchartered territory' to describe the challenges of celebrities (and their reps) as novices responding to a brandjacking. That insight helped inspire this post, which maps the authorities in this 'unknown territory', and the support they might offer, or cannot. The Noakes Foundation and the FCE team also appreciate Mr Byron Davel's advice regarding the ZeroFox offering, plus the broader field of brand protection and cyber-defence against dark web operations. TNF further appreciates the journalist Lyse Comins' critique of Meta's slow action in her news article, 'Meta criticised for slow action as deepfake adverts target South African celebrities'.
