
Wednesday, 5 February 2025

Celebrities cannot stop their brandjacking, since many authorities are unable to help!

Written for cyber- and digital crime researchers and reporters, as well as brandjacked celebrities and their representatives.


In discussions with reporters and PR experts, most seemed to be under the mistaken impression that celebrities enjoy a viable route to stopping their brandjacking in scam adverts on social media platforms. I wrote this post to explain that although SA public figures may seem well-resourced and influential, none has a viable route to prosecuting their brandjackers, owing to an absence of support from many authorities:

Since 2019, The Noakes Foundation has supported research into the brandjacking of influential celebrities' reputations on social media and other poorly moderated platforms. The Fake Celebrity Endorsement (FCE) research team is documenting how this digital crime is an inscrutable, traumatic experience for celebrities, their representatives, and the financial victims who report being conned by fake endorsements. In addition to the trauma of being featured in fake adverts, micro-celebrities are further traumatised by the many reports from fans upset at being conned. A few celebrities have become targets for recurring cybervictimisation with no recourse, resulting in repeat trauma.


The FCE distinguishes 'digital crimes' from 'cybercrimes': micro-fraudsters typically target private individuals with limited access to resources for combating digital crime. This contrasts with cybercrimes in which corporations are attacked (Olson, 2024); corporations are often well positioned to support their employees with costly resources that private individuals cannot afford. Research into the latter is well resourced, as are interventions to stop it. By contrast, the fight against digital crimes that impact private citizens is poorly resourced, particularly in the Global South. In the case of fake celebrity endorsements, press reports suggest that the problem grows each year: eleven South African celebrities fell victim to the scam in 2024, up from two in the first reports of 2014.


Fake celebrity endorsement is a digital crime that may require many authorities in society to combat it. Below is a list of the role-players that might potentially help prevent digital crimes:


1) celebrity influencers,
2) financial victims,
3) social media advertising providers,
4) poorly-moderated content hosts,
5) banks,
6) cyber defence companies,
7) cybercrime reporters and statistics gatherers (industry researchers),
8) cybercrime educators,
9) anti-crime activists (PBOs and NGOs),
10) social media platforms (e.g. Big Tech),
11) financial investors,
12) government politicians,
13) the police,
14) international law enforcement,
15) local law,
16) Higher Education and its funders,
17) product regulators.

While digital crime victims might expect support from these fifteen other role-players, this post spotlights their limitations. Some are simply unable to prioritise fighting fake celebrity endorsements, while others' interests may not be served by tackling this crime!

Figure 1. The brandjacking digital crime process


Figure 1 shows a simplified process of the fake endorsement phishing scam. The authors of this digital crime are unknown: they can range from gangs, to the invisible threat of AI and bot armies, to even military intelligence agencies raising funds. Not only do these cybercriminals exploit scamming ecosystems inside popular social media platforms, they also exploit related ecosystems on platforms such as Huione Guarantee (now "Haowang Guarantee"), a Cambodian conglomerate that offers a messaging app, a stablecoin and a crypto exchange, and has facilitated $2 billion in transactions. Such platforms are integral to the industrialisation and scaling-up of online scams, for example by supporting the outsourcing of scammers' money-laundering activities (The Economist, 2025).

1) Celebrity influencers

On digital media, celebrities are 'micro-celebrities', who can also be 'influencers' (if paid to share content). Micro-celebrities may not be aware of the dangers of hyper-visibility, since there are no 'Here Be Dragons' signs at the on-ramps to creating their digital profiles. Here, celebrities agree to legal contracts that are heavily one-sided in favour of social media platforms versus users (Sarafian, 2023). These contracts do not place an onus on social networks to warn or protect their users from digital visibility risks, such as brandjacking and impersonation. The FCE project has approached almost 50 South African celebrities via their agents to participate in its research; each was reported to have had their reputation stolen for scam adverts. Despite offering incentives, only three (plus select representatives) agreed to participate. Most may want to put their negative experiences behind them, while fearing reputational risks from being involved in a research process they are unfamiliar with, and whose outputs may be misperceived as potentially damaging. A big challenge thus exists in persuading micro-celebrities to contribute their experiences to research, so that these can be shared to inform digital crime fighters' responses.

2) Financial victims

Fans who have developed a parasocial relationship with a particular celebrity they follow may genuinely believe that the fake endorsement adverts are a legitimate offer, notwithstanding that the product's promise seems too good to be true. Having been conned, victims may be ashamed, or in denial. Many may consider their financial loss not worth reporting (a micro-fraud, rather than a serious crime). Even if victims are willing to report the digital crime, it may not be obvious which authority the crime is best reported to.

3) Social media advertising services

Online advertisers, and digital platforms, may not understand or monitor the threat of digital crimes such as celebrity brandjacking. The crime is not well-defined and may also be challenging to report, since it spans several crimes itself: 1) impersonation; 2) non-consensual image sharing; and 3) the infringement of a public figure's intellectual property (through copyright violation of still images and audio-video). In addition, the crime causes 4) reputational damage by suggesting a public figure's association with a scam that often involves 5) financial fraud and hacking. Social media advertising complaint forms only permit the reporting of one type of infringement at a time. This potentially leaves a blind spot, as users cannot report all the aspects that characterise the celebrity brandjacking crime. If it is a widespread problem, social media advertisers may also prefer not to flag it as a concern, thereby protecting their public reputations, albeit at the expense of celebrity and other financial victims.

4) Poorly-moderated content hosts

To make their offers seem more credible, scammers also post fake content on poorly-moderated sites (such as "clickbait news", "positive reviews" on online forums, and "scientific papers" on academic social networks). Even if such fake content is reported and removed, it can be quickly shifted by scammers to worse-moderated hosts...

5) Banks

As financial victims legitimately authorise payments from their own accounts, they do not enjoy recourse via their banks. To avoid new transactions from scammers, these victims often have to pay banks for new cards after terminating their old ones. It is unclear what role banks could adopt in combating digital crimes in which clients are defrauded whilst following a seemingly legitimate payment process.


Figure 2. Authorities who could contribute to fighting digital crimes


6) Cyber defence companies

Cyber defence businesses are focused on providing profitable services to corporates. Such services are often unaffordable even to the wealthiest celebrities in SA. However, some may be fortunate to work for companies that use cyber defence services that pro-actively monitor cyberspace and warn employees against digital impersonation and related risks. Such services include Darkivore, Flashpoint, Netcraft (FraudWatch International), SGS and ZeroFox. It does not seem that cyber defence companies can produce a profitable service that supports rapid responses to fake endorsements and related crimes. Even if such a service were affordable, most SA celebrities have not been targeted for revictimisation, so it seems unlikely they would subscribe to such a service annually.

7) Cybercrime reporters and statistics gatherers

While there have been many reports of weight-loss and money-making cryptocurrency scams featuring particular celebrities, the media, celebrities' agents and PR companies seem to treat these brandjackings as once-off events. Reports typically cover the latest flare-up to negatively impact one or two stars, plus their fans. At the same time, cybercrime statistics do not include this digital crime, whose costs to victims in SA are unknown, and difficult to aggregate. This points to a need for developing a holistic view of digital crime from consolidated reports. Research into digital crimes that can bridge the work of journalists and crime statisticians seems urgently needed to describe such crimes' extent, frequency and costs to society. Developing robust reporting mechanisms for digital crimes (particularly challenging ones like 'fake social media adverts for phishing', which include several sub-crimes) would seem an important contribution that law enforcement, researchers and statisticians can make. Reporters and researchers can also develop robust definitions of emergent digital crimes to grow awareness of them. This should aid more accurate reports.

8) Cybercrime researchers and educators in companies

In a similar collaborative vein, cybercrime researchers and educators in companies are working together to help combat digital crimes targeting their employees and clients. In particular, banks and insurance companies in SA are pro-actively raising awareness around phishing and other common digital crimes. This is done in communications that range from email newsletters, to pop-up warnings that clients must acknowledge reading post log-in.


9) Anti-crime activists (PBOs and NGOs)

Anti-digital crime education tends to focus on equipping high school students and working professionals with preventative knowledge in English. However, our research into fake celebrity endorsement victims' online commentary suggests that most are over fifty-five, with English being their second language, at best. In response, The Noakes Foundation has supported the development of modules in English for educating silver surfers on the most common digital crimes. Ideally, though, these modules (and reportfakeendorsement.com's content) should be available in South Africa's 11 official languages.


10) Social media platforms and their Big Tech owners

Social media companies, and their Big Tech owners, would seem to have a particular responsibility for protecting users from digital crime threats on their platforms. Despite this, there is a decade-long history in SA of even influential celebrities not being well-supported via speedy responses to their brandjacking, and scam adverts are seldom taken down based on celebrities', their representatives' and other victims' reports.

The most popular platforms for this scam in SA are Meta's Facebook and Instagram. Meta does not understand the content that its users share (Horwitz, 2023). Further, it does not report on scam ecosystems based inside its own platforms. Consequently, neither Facebook nor Instagram can pro-actively identify digital crimes, let alone quickly adapt their systems to stop emergent threats from micro-fraudsters. It is left to whistleblowers, former employees, investigative journalists and researchers to create awareness of these platforms' serious flaws, such as their use as a scammers' ecosystem tied to scam-as-a-service ones. This would seem at odds with corporate responsibility: Meta should publicly report on its progress in tackling scam ecosystems on its Facebook, WhatsApp and Instagram platforms. It could also pro-actively warn vulnerable users, such as the aged, against the latest scam risks.

In a sense, digital crimes by cybercriminals on social networks can be considered a parasitic attack within a larger parasitic host: Meta's Facebook and Instagram are infomediaries that misrepresent themselves as symbionts in supporting users' communal connections online. In reality, Meta's business model is parasitic in relying on three billion users to share content (Robbins, 2018). Much of this content is not the work of original or creative producers, but rather sampled from content that has proved popular on other platforms. In essence, social media platforms are middlemen between content creators and their audiences, taking most of the profits from advertising. These platforms also take the intellectual property of online content creators. In the Global South this serves as a form of neocolonial data extraction, as Big Tech multinationals from the Global North extract the region's data, with little being reciprocated. For example, while powerful celebrities in the US can enjoy access to dedicated Facebook support, there is no equivalent offering for influential SA users. Instead, they are lucky to stumble onto internal staff or Trusted Partners who can best help them respond to Facebookjacking or Instajacking crimes.

In contrast to the usefulness of human insiders, the AI that manages users' reports of dubious accounts and content for Meta is simply not capable of recognising malicious advertisers' accounts. At face value, there is nothing "wrong" with how the scammers' accounts are set up: they have a human profile (fake name) managing a business profile (fake name and business). Reporting the scam accounts is useless, since the fraudsters fill in all the right criteria to fly under the radar! The scammers use 'like farms' and a network of fake profiles to create a sense of legitimacy through liking, sharing and commenting on posts and ads. The criminals also use a "legitimate website" (a bought domain, hosting and questionable design) to sell a "product" while accumulating visitors' personal information and credit card details. All of this looks like legitimate business behaviour to AI, which cannot detect that it is malicious. Scammers use a (stolen) credit card, or a hijacked Meta Ads Manager profile, to run adverts through their "business page". This works for a short while until the card or the account is stopped, and then they simply create another one. These ads sell a product online that seems harmless and well within the parameters of Meta's Community Standards. The fact that it is a fake product is immaterial to Meta; the onus is on the customer to know when they are being scammed, and if users try to report a harmful product, it doesn't work, as that is deemed a matter of personal opinion! Where such content is checked by human moderators, it is so obviously a deep fake (even to the most untrained eye) that they take it down quickly; had it truly been verified by a human, it would have been removed immediately.

It appears that Meta's Facebook and Instagram are turning a blind eye to this digital advertising crime. The benefit to Meta is clear, as it reaps the rewards of advertisers' spending: Trustfull's 2024 report expects deep fake fraud to reach $15.7 billion in 2024, and Meta is set to take a large chunk of that ad-spend revenue in distributing fake, malicious content. It is hard not to draw the conclusion that it seems irrelevant to Meta whether the content is genuine or a scam, or whether the account used to promote these scams has been hacked or cloned. Either way, Meta still profits.

11) Financial investors

Investors are focused on the bottom line of financial profit. To achieve it, social media platforms' developers spotlight the metrics of an ever-increasing flow of communication marking their platforms' commercial expansion. Given this all-consuming quantitative focus, it is unsurprising that these platforms' developers and investors are largely uninterested in paying the costs to understand the negative experiences on their platforms, particularly when combating these might impede their growth and monetisation!

12) Government politicians

SA's parasitic political class has been slow to take action to protect its citizenry from the excesses of social media platforms, and digital crimes on them. For example, it has not protected the intellectual rights of original digital content producers by passing a Creators' Bill of Rights to limit their online exploitation. On social media, SA creatives do not retain copyright, have rights of termination and appeal, and so on. Globally, most online content creators struggle to make a living from the work that they do. Social media companies' oligarchic power and their poor regulation by the law contribute to this. Further, policy inactivity may suggest that government decision-makers are guided by Big Tech's funding and political support, more so than by digital creatives' rights and needs.

13) Local police

Brandjacking is not viewed as a serious crime by SA authorities who might be expected to intervene as guardians. Cybercrime experts in the SA police have to focus their limited resources on fighting severe digital crimes (like the online trafficking of children, drugs, and guns). They simply cannot address digital crimes that have not been shown to have serious impacts on their victims, such as phishing micro-frauds. 

14) International law enforcement

In contrast to under-resourced local law enforcement authorities, global ones (such as Interpol) are better resourced to potentially offer some form of response to digital crimes on social media. However, until decent stats and reports for digital micro-frauds are documented and shared with global authorities, these digital crimes are not notifiable, so cannot be directly investigated at an international level.


15) Local law

Even if a foreign criminal network behind a scam is found through investigation, and the legal frameworks exist for its extradition, the costs for local law enforcement to prosecute scammers may well prove prohibitive to the State. The global proceeds of online fraud are probably more than $500bn a year, so another major concern is that online fraudsters have become rich and powerful enough to corrupt entire governments (Scam Inc podcasts, 2025). Scam "businesses" can turn countries into the cyber-scam equivalent of narco-states, and their operations can be found all over the world. Broadly, 1.5 million scammers are at work, from Namibia up to the Isle of Man, and from Mexico across to Fiji (The Economist, 2025). Where scam bosses have strong clout within a political system, it becomes impossible to enact policies that undermine their fraud. Corrupted states, such as Cambodia, would seem unlikely to extradite criminals who pose a reputational risk in potentially implicating senior state officials. Extradition from lawless places, such as Myanmar, is also impossible.

16) Higher Education and research funders

Like banks, universities are stepping up their preventative digital crime awareness communications, and attracting research funding to build scholarship into cyber- and digital crimes. More generally, universities can lead discourse on the digital crimes issue, catalysing inter- and trans-disciplinary collaborations. Funders of university grants might support design thinking, or strategic design activities, to develop solutions for the seemingly intractable brandjacking micro-fraud. They might likewise support ethical research into the issues that emerge when researching digital crimes under fake personas. Given that the brandjacking of influential scholars would also seem to pose reputational risks to their university-as-employer, related research could be motivated as a neglected, but potentially valuable, contribution.

17) Product regulators

Fake celebrity endorsements typically promote dubious products, which may actually be delivered. As such, customers may assume that they are protected by local regulators for those particular product types. For example, British doctors have been brandjacked to market "wonder drugs" that cure high blood pressure. "Their" customers might assume that they are protected by the UK's General Medical Council (GMC). However, this falls outside the GMC's remit, which only covers promotions from genuine doctors on its register. The GMC cannot tackle 'computer generated videos' by unknown fraudsters (Stokel-Walker, 2014). It seems unlikely that any product regulator can help with tackling fake products marketed by unregistered and anonymous cybercriminals.

Celebrity, you are on your own in responding to digital crime?!

It should be clear that celebrities are unlikely to be supported in stopping fake social media adverts. While five key authorities are working to raise awareness against digital crimes (5, 7, 8, 9 & 16), there is little-to-no help available from eight (3, 4, 6, 12, 13, 14, 15 & 17). At the digital crime's fountainhead (10 & 11), social media platforms are actually disincentivised against tackling the problem. In the absence of support from criminal prosecutors, the law or cybercrime-fighting businesses, it should be clear that the guardianship role of social media companies is non-negotiable. Social media platforms that are heavily used by scammers could take responsibility for this by entrusting micro teams of moderators around the world to review adverts. Verification by well-trained human reviewers who can disable fraudulent accounts seems to be the best answer to stopping brandjacking on social media. The current AI approach is failing miserably, with counter-technologies many steps behind cybercriminals' "innovations".

There is not enough being done at an entry level to assist smaller companies and the general public, who are constantly being attacked by digital crimes. The accounts of celebrities, plus their representatives, whom we have interviewed suggest that pressure must urgently be placed on social media platforms to provide effective brandjacking reporting and prevention tools. Without these, deep fake adverts can spread quickly, reaching tens of thousands of people a day. At the same time, celebrities must focus on building rapid awareness in the media, plus at the site of the digital crimes, to alert potential victims.

Just as celebrity co-operation is important, so is that of civil society organisations, which can collaborate to pressure authorities to act in a more responsible and pro-active manner in tackling digital crime. For example, Anna Collard (SVP Content Strategy and Evangelist at KnowBe4 Africa) is doing important work in building networks that can collaborate in educating the public around the dangers of cyber- and digital crimes. Likewise, international and local networks must motivate for sounder strategic interventions from key role-players to thwart global scam networks.

Serious funding is also needed to support awareness programmes for educating vulnerable groups. In particular, communities like the elderly are at very high risk, being less tech- and media-savvy. Public benefactors are also needed to assist educational initiatives, like ReportFakeEndorsement, with reaching a broad audience and with greatly increasing the research being done into new digital crimes, and how to thwart them.

Please comment with suggestions to improve this post

I am not an expert on all of these authorities and their responses, so I welcome any constructive corrections for improving what has been shared above. Please comment below, so that I can review your feedback and perhaps update this blogpost, acknowledging you for your advice.

Acknowledgements

Thanks to Dr Taryn van Niekerk for proposing the term ‘unchartered territory’ to describe the challenges that celebrities (and their reps) face as novices responding to a brandjacking. That insight helped inspire this post, which maps authorities in an 'unknown territory', and the support they might offer, or cannot. The Noakes Foundation and the FCE team also appreciate Mr Byron Davel's advice regarding the ZeroFox offering, plus the broader field of brand protection and cyber-defence against dark web operations. TNF also appreciates the journalist Lyse Comins' critique of Meta's slow action in her news article, 'Meta criticised for slow action as deepfake adverts target South African celebrities'.

Friday, 26 July 2024

Content suppression techniques against dissent in the Fifth Estate - examples of COVID-19 censorship on social media

Written for researchers and others interested in the many methods available to suppress dissidents' digital voices. These techniques support contemporary censorship online, posing a digital visibility risk for  dissidents challenging orthodox narratives in science.


The Fourth Estate emerged in the eighteenth century as the printing press enabled the rise of an independent press that could help check the power of governments, business, and industry. In similar ways, the internet supports a more independent collectivity of networked individuals, who contribute to a Fifth Estate (Dutton, 2023). This concept acknowledges how a network power shift results from individuals who can search, create, network, collaborate, and leak information in strategic ways. Such affordances can enhance individuals' informational and communicative power vis-à-vis other actors and institutions. A network power shift enables greater democratic accountability, whilst empowering networked agents in their everyday life and work. Digital platforms do enable online content creators to generate and share news that digital publics amplify via networked affordances (such as 💌 likes, "quotes" and sharing via #hashtag communities).


In an ideal world, social media platforms would be considered a public accommodation, and the Fifth Estate's users would benefit from legal protection of their original content, including strong measures against unjustified suppression and censorship. The latter should recognise the asymmetric challenges that individual dissenters, whistleblowers and their allies must confront in contradicting hegemonic social forces that can silence their opponents' (digital) voices: As recently evidenced in the COVID-19 "pandemic", the Twitter Files and other investigations reveal how multinational pharmaceutical companies, unelected global "health" organisations, national governments, social media and traditional broadcast companies all conspired to silence dissent that opposed costly COVID-19 interventions. Regardless of their levels of expertise, critics who questioned this narrative in the Fourth or Fifth Estate were forced to negotiate censorship for the wrong-think of sharing "dangerous" opinions.

Such sanctions reflect powerful authorities' interests in controlling (scientific) language, the window of permissible opinion, and the social discourses that the public might select from, or add to. Under the pretext of public "safety", the censorship industrial complex strong-arms the broadcast media and social media companies into restricting dissidents' voices as "misinformation" that is "unsafe". Facing no contest, the views of powerful officialdoms earn frequent repetition within a tightly controlled, narrow narrative window. At the same time, legitimate reports of mRNA injuries are falsely redefined as "malinformation", and censored.
 
Consequently, instead of a pluralist distribution of power in the Fifth Estate that can support vital expression, powerful authorities are enforcing internet policy interventions that increasingly surveil and censor users' digital voices. Infodemic scholars whose work endorses such suppression would seem to be ignorant of how problematic it is to define disinformation in general, particularly in contemporary science, where knowledge monopolies and research cartels may be dominant, dissenting minds should be welcomed for great science, and a flawed scientific consensus can itself be dangerous. Silencing dissent has important public health ramifications, particularly where the potential for suggesting, and exploring, better interventions becomes closed. Science, health communication, and media studies scholars may also ignore the inability of medical experts to accurately define what disinformation is, particularly where global policy makers face conflicts of interest (as in the World Health Organisation's support for genetic vaccines).

Censorship and the suppression of legitimate COVID-19 dissent is dangerously asymmetrical: health authorities already benefit from ongoing capital cascades whose exchange largely serves their interests. Such exchanges span financial, social, cultural, symbolic and even other (e.g. embodied) forms of capital (Bourdieu, 1986; 2018). By contrast, individual critics can quickly be silenced by attacks on their limited capital, effectively preventing them from exercising the basic right to free speech and from delivering sustained critiques. A related concern is that the censorial actions of artificial intelligence designers and digital platform moderators are often opaque to a platform's users. Original content creators may be unaware that they will be de-amplified for sharing unorthodox views, as algorithms penalise the visibility of content on 'banned' lists, and any accounts that amplify "wrongthink".

Content suppression on social media is an important, but neglected, topic, and this post strives to flag the wide variety of techniques that may be used in digital content suppression. The techniques are listed below in order of seemingly increasing severity:

#1 Covering up algorithmic manipulation

Social media users who are not aware of censorship are unlikely to be upset about it (Jansen & Martin, 2015). Social media platforms have not been transparent about how they manipulated their recommender algorithms to provide higher visibility for the official COVID-19 narrative, or to crowd out original contributions from dissenters on social media timelines and in search results. Such boosting ensured that dissent was seldom seen, or was perceived as a fringe minority's concern. As Dr Robert Malone tweeted, the computational algorithm-based method now 'supports the objectives of a Large Pharma-captured and politicised global public health enterprise'. Social media algorithms have come to serve a medical propaganda purpose that crafts and guides the 'public perception of scientific truths'. While algorithmic manipulation underpins most of the techniques listed below, it is concealed from social media platform users.
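
To make the mechanism concrete, below is a minimal, purely hypothetical Python sketch of how a recommender could silently down-weight posts matching a topic blacklist. The term list, penalty factor and scoring weights are all invented for illustration; no platform's actual code or parameters are implied, only the general idea that the penalty is applied without any visible flag to the author or audience.

# Hypothetical sketch only: a ranking function that quietly deboosts
# posts containing blacklisted terms. All names and weights are assumptions.

BLACKLISTED_TERMS = {"#example_dissenting_tag", "examplebannedphrase"}  # assumed labels
DEBOOST_FACTOR = 0.1  # assumed penalty: a flagged post keeps only 10% of its score

def engagement_score(post: dict) -> float:
    """Naive baseline ranking signal from likes, reshares and replies."""
    return post["likes"] + 2.0 * post["reshares"] + 1.5 * post["replies"]

def ranked_timeline(posts: list[dict]) -> list[dict]:
    """Rank posts by score, silently penalising blacklisted topics."""
    def score(post: dict) -> float:
        base = engagement_score(post)
        words = set(post["text"].lower().split())
        if words & BLACKLISTED_TERMS:
            return base * DEBOOST_FACTOR  # no label shown, no notice to the author
        return base
    return sorted(posts, key=score, reverse=True)

The point of the sketch is that the deboosted post is never removed, so neither its creator nor its readers can easily tell that its reach was curtailed.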


#2 Fact choke versus counter-narratives

A fact choke involves burying unfavourable commentary amongst a myriad of content. This term was coined by Margaret Anna Alice to describe how "fact checking" was abused to suppress legitimate dissent. An example she tweeted about was the BBC's Trusted News Initiative warning in 2019 about anti-vaxxers gaining traction across the internet, requiring algorithmic intervention to neutralise "anti-vaccine" content. In response, social media platforms were urged to flood users' screens with repetitive pro-(genetic)-vaccine messages normalising these experimental treatments. Simultaneously, messaging attacked alternate treatments that posed a threat to the vaccine agenda. Fact chokes also included 'warning screens' that were displayed before users could click on content flagged by "fact checkers" as "misinformation".

With the "unvaccinated" demonised by the mainstream media to create division, susceptible audiences were nudged to become vaccine compliant to confirm their compassionate virtue. At the same time to retain belief in mRNA genetic vaccine "safety", personal accounts, aggregated reports (such as "died suddenly" on markcrispinmiller.substack.com) and statistical reports (see Cause Unknown) for genetic vaccine injuries became suppressed as "malinformation" despite their factual accuracy. Other "controversial content", such as medical professionals' criticism of dangerous COVID-19 treatment protocols (see What the Nurses Saw) or criticism of a social media platform's policies (such as application of lifetime bans and critiques of platform speech codes) have been algorithmically suppressed.

Critical commentary may also be drowned out when platforms, such as YouTube, bury long format interviews amongst short 'deep fake' videos. These can range from featuring comments the critic never made, to fake endorsements from cybercriminals (as described on X by Whitney Webb, or Professor Tim Noakes on YouTube).

#3 Title-jacking

For the rare dissenting content that achieves high viewership, another challenge is that title-jackers will leverage this popularity for very different outputs under exactly the same (or very similar) production titles. This makes it harder for new viewers to find the original work. For example, Liz Crokin's 'Out of the Shadows' documentary describes how Hollywood and the mainstream media manipulate audiences with propaganda. Since this documentary's release, several videos have been published with the same title.


#4 Blacklisting trending dissent

Social media search engines typically allow their users to see what is currently the most popular content. On Twitter, dissenting hashtags and keywords that proved popular enough to feature amongst trending content were quickly added to a 'trend blacklist' that hid unorthodox viewpoints. Tweets posted by accounts on this blacklist are prevented from trending, regardless of how many likes or retweets they receive. Stanford Health Policy professor Jay Bhattacharya argues he was added to this blacklist for tweeting on a focused alternative to the indiscriminate COVID-19 lockdowns that many governments followed: The Great Barrington Declaration, which he wrote with Dr Sunetra Gupta and Dr Martin Kulldorff, and which attracted over 940,000 supporting signatures.

After its publication, all three authors experienced censorship on search engines (Google deboosted results for the declaration), on social media platforms (Facebook temporarily removed the declaration's page, while Reddit removed links to its discussion) and on video (YouTube removed a roundtable discussion with Florida's Governor Ron DeSantis, whose participants questioned the efficacy and appropriateness of requiring children to wear face masks).
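
For illustration, here is a minimal, hypothetical Python sketch of how a trend blacklist of the kind described above could operate: engagement from blacklisted accounts is simply excluded from the trending tally, so their posts never trend however popular they become. The data model and the account handle are assumptions for the sketch, not Twitter/X's real pipeline.

# Hypothetical sketch only: hashtag engagement from blacklisted accounts
# never reaches the trend computation. All names and fields are assumed.

from collections import Counter

TREND_BLACKLIST = {"@example_dissident_account"}  # assumed handles barred from trending

def trending_hashtags(tweets: list[dict], top_n: int = 10) -> list[tuple[str, int]]:
    """Count hashtag engagement, silently skipping blacklisted authors."""
    counts: Counter = Counter()
    for tweet in tweets:
        if tweet["author"] in TREND_BLACKLIST:
            continue  # likes and retweets on these tweets are simply ignored
        for tag in tweet["hashtags"]:
            counts[tag] += 1 + tweet["retweets"] + tweet["likes"]
    return counts.most_common(top_n)

As with the deboosting sketch under technique #1, nothing is deleted; the blacklisted account's engagement is just invisible to the trending surface.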

#5 Blacklisting content due to dodgy account interactions or external platform links

Limited visibility filtering also occurs when posts are automatically commented on by pornbots, or feature engagement by other undesirable accounts. For example, posts mentioning keywords such as 'vaccine' or 'Pfizer' may receive automated forms of engagement, which then sees the posts receiving such "controversial" engagement being added to a list that ensures their censorship (see 32 minutes into Alex Kriel's talk on 'The Role of Fake Bot Traffic on Twitter/X').

Social media platforms' algorithms may also blacklist content from external platforms that are not viewed as credible sources (for example, part of an alternative, 'alt-right', media), or that are seen as competing rivals (X penalises the visibility of posts that feature links to external platforms).

#6 Making content unlikeable and unsharable

This newsletter from Dr Steven Kirsch (29.05.2024) described how a Rasmussen Reports video on YouTube had its 'like' button removed. As Figure 1 shows, users could only select a 'dislike' option. This button was later restored for www.youtube.com/watch?v=NS_CapegoBA.

Figure 1. YouTube only offers a dislike option for the Rasmussen Reports video on vaccine deaths - sourced from Dr Steven Kirsch's newsletter (29.05.2024)

Social media platforms may also prevent resharing such content, or prohibit links to external websites that are not supported by these platforms' backends, or have been flagged for featuring inappropriate content.


#7 Disabling public commentary

Social media platforms may limit the mentionability of content by not offering the opportunity to quote public posts. Users' right-to-reply may be blocked, and critiques may be concealed by preventing them from being linked to from replies.

#8 Making content unsearchable within, and across, digital platforms

Social media companies applied search blacklists to prevent their users from finding blacklisted content. Content contravening COVID-19 "misinformation" policies was hidden from search users. For example, Twitter applied a COVID-19 misleading information policy that ended in November 2022. In June 2023, Meta began to end its policy for curbing the spread of "misinformation" related to COVID-19 on Facebook and Instagram.

#9  Rapid content takedowns

Social media companies could ask users to take down content that was in breach of COVID-19 "misinformation" policies, or automatically remove such content without its creators' consent. In 2021, Meta reported that it had removed more than 12 million pieces of content on COVID-19 and vaccines that global health experts had flagged as misinformation. YouTube has a medical misinformation policy that follows the guidance of the World Health Organisation (WHO) and local health authorities. In June 2021, YouTube removed a podcast in which the evidence of a reproductive hazard of mRNA shots was discussed between Dr Robert Malone and Steve Kirsch on Prof Bret Weinstein's DarkHorse channel. Teaching material that critiqued genetic vaccine efficacy data was automatically removed within seconds for going against its guidelines (see Shir-Raz, Elisha, Martin, Ronel & Guetzkow, 2022). The WHO reports that its guidance contributed to 850,000 videos related to harmful or misleading COVID-19 misinformation being removed from YouTube between February 2020 and January 2021.

PropagandaInFocus describes how LinkedIn users are subject to a misinformation policy that prevents the sharing of content that 'directly contradicts guidance from leading global health organisations and public health authorities'. Dr David Thunder shared an example of his LinkedIn post being automatically removed for (1) sharing a scientific study that confirmed that children are at negligible risk of suffering severe disease from COVID-19, and (2) questioning the FDA's decision to approve Emergency Use Authorisation of COVID-19 vaccines for children as young as 6 months old. No matter that many other studies confirm both positions, LinkedIn took this post down and threatened to restrict his account.

#10 Creating memory holes

Extensive content takedowns can serve a memory-holing aim, whereby facts and memories of the past become suppressed, erased or forgotten for political convenience. Long after the COVID-19 "pandemic", an Orwellian Ministry of Truth continues to memory-hole many health authority decision makers' failures, plus those of the mainstream media and most national governments. As discussed here on YouTube by Mary Lou Singleton, Meghan Murphy and Jennifer Sey, such failures included: mandating masking and school closures for children (who were never at risk); never questioning the official COVID-19 statistics (such as CNN's 'death ticker'); and quoting Pfizer press releases verbatim as "journalism", whilst mocking individuals who chose to 'do their own research'.

Dr Mark Changizi presents four science moments on memory-holing. In X video 1 and X video 2, he highlights how memory-holing on social media is very different from its traditional form. He uses X (formerly Twitter) as an autobiographical tool, creating long threads that serve as a form of visual memory that he can readily navigate. The unique danger of social media account removal or suspension for censorship extends beyond losing one's history of use on that platform, to include all 'mentions' related to its content (ranging from audience likes, to their reply and quote threads). This changes the centrally-controlled communication history of what has occurred on a social media platform. Such censorship violates the free speech rights of all persons who have engaged with that removed account, even its fiercest critics, as they also lose an historical record of what they said.

By contrast, decentralised publications (such as hardcopy books and journals) are very hard for authorities to memory-hole, since sourcing all hard copies can be nearly impossible for censors. While winners can write history, historians who have access to historical statements can rewrite it. As COVID-19 memory-holing on social media platforms challenges such rewriting, its users must think around creating uncensorable records (such as the book Team Reality: Fighting the Pandemic of the Uninformed). In X video 3, he highlights that freedom of expression is a liability, as expressions push reputation chips onto the table: the more claims one stakes, the greater the risk to one's reputation if they prove wrong. So, another aspect of memory-holing lies in an individual's potential desire to memory-hole their own platform content, should they prove to be wrong. In X video 4, Dr Changizi also spotlights that the best form of memory-holing is self-censorship, whereby individuals see other accounts being suspended or removed for expressing particular opinions. The witnesses then decide not to express such opinions, since doing so might endanger their ability to express other opinions. While such absence of speech is immeasurable, it would seem the most powerful memory-holing technique: individuals who silence their own voices do not create history.

#11 Rewriting history

Linking back to the fact choke technique are attempts at historical revisionism by health authoritarians and their allies. An example is the claim in the mainstream media that critics of the orthodox narrative were "right for the wrong reasons" regarding the failure of COVID-19 lockdowns, the many negative impacts of closing schools and businesses, and the enforcement of mandatory vaccination policies.

#12 Concealing the motives behind censorship, and who its real enforcers are

Social media platforms not only hide algorithmic suppression from users, but may also be misused to hide from users the full rationale for censorship, or who is ultimately behind it. Professor David Hughes prepared a glossary of deceptive terms and their true meanings (2024, pp. 194-195) to highlight how the meaning of words is damaged by propaganda. A term resonating with technique #9 is "critical": pretending to speak truth to power whilst turning a blind eye to deep state power structures.

The official narrative positioned COVID-19 as (i) a pandemic that had zoonotic (animal-to-human) origins, and alternate explanations were strongly suppressed. As this is the least likely explanation, other, more plausible hypotheses merit serious investigation. SARS-CoV-2 might have stemmed from (ii) an outbreak linked to the Wuhan lab's "gain of function" research, or (iii) a deliberate release in several countries from a biological weapons research project. Some critics even dispute the existence of SARS-CoV-2, alleging that (iv) viral transmission is unproven, and that the entire COVID-19 "pandemic" is a psychological propaganda operation.

By silencing dissident views like these, social media platforms stop their users from learning about the many legitimate COVID-19 debates that are taking place. This is not a matter of keeping users "secure" from "unsafe" knowledge, but rather networked publics being targeted for social control in the interests of powerful conspirators. In particular, the weaponised deception of social media censorship suits the agenda of the Global Public-Private Partnership (GPPP or G3P), and its many stakeholders. As described by Dr Joseph Mercola in The Rise of the Global Police State, each organisational stakeholder plays a policy enforcement role in a worldwide network striving to centralise authority at a global level.

Figure 2. Global Public-Private Partnership (G3P) stakeholders - sourced from IainDavis.com (2021) article at https://unlimitedhangout.com/2021/12/investigative-reports/the-new-normal-the-civil-society-deception.

G3P stakeholders have a strong stake in growing a censorship industrial complex to thwart legitimate dissent. Critiques of the official COVID-19 "pandemic" measures are just one example. The censorship industrial complex also strives to stifle robust critiques of (1) climate change "science", (2) "gender affirming" (transgender) surgery, (3) mass migration (aka the Great Replacement), and (4) rigged "democratic" elections, amongst other "unacceptable" opinions. Rather than being for the public's good, such censorship actually serves the development of a transhumanist, global technocratic society. The digital surveillance dragnet of the technocracy suits the interests of a transnational ruling class in maintaining social control of Western society, and other vassals. This will be expanded upon in a future post tackling the many censorship and suppression techniques that are being used against accounts.

N.B. This post is a work-in-progress and the list above is not exhaustive. Kindly comment to recommend techniques that should be added; suggestions for salient examples are most welcome.
