Wednesday, 5 February 2025
Celebrities cannot stop their brandjacking, since many authorities are unable to help!
Since 2019, The Noakes Foundation has supported research into the brandjacking of influential celebrities' reputations on social media and other poorly moderated platforms. The Fake Celebrity Endorsement (FCE) research team is documenting how this digital crime is an inscrutable, traumatic experience for celebrities, their representatives, and the financial victims who report being conned by fake endorsements. In addition to the trauma of being featured in fake adverts, microcelebrities are further traumatised by the many reports from upset fans who have been conned. A few celebrities have become targets for recurring cybervictimisation with no recourse, resulting in repeat trauma.
The FCE distinguishes 'digital crimes' from 'cybercrimes': micro-fraudsters typically target private individuals with limited access to resources for combating digital crime. This contrasts with cybercrimes, in which corporations are attacked (Olson, 2024). Corporations are often well positioned to support their employees with costly resources that private individuals cannot afford. Research into the latter is well-resourced, as are interventions to stop it. By contrast, efforts to fight digital crimes that impact private citizens are poorly resourced, particularly in the Global South. In the case of fake celebrity endorsements, press reports suggest that the problem grows each year: eleven South African celebrities fell victim to the scam in 2024, up from two in the first reports of 2014.
Fake celebrity endorsement is a digital crime that may require many authorities in society to combat it. Below is a list of the role-players that might potentially help prevent digital crimes:
Figure 1 shows a simplified process of the fake endorsement phishing scam. The authors of this digital crime are unknown: they can range from gangs, to the invisible threat of AI and bot armies, to military intelligence agencies raising funds. Not only do these cybercriminals exploit scamming ecosystems inside popular social media platforms, they also exploit related ecosystems on platforms such as Huione Guarantee (now "Haowang Guarantee"), a Cambodian conglomerate. It offers a messaging app, stablecoin, and crypto exchange, and has facilitated $2 billion in transactions. Such platforms are integral to the industrialisation and scaling-up of online scams, for example by supporting the outsourcing of scammers' money-laundering activities (The Economist, 2025).
1) Celebrity influencers
2) Financial victims
Fans who have developed a parasocial relationship with a particular celebrity they follow may genuinely believe that the fake endorsement adverts are a legitimate offer, notwithstanding the product's promise seeming too good to be true. Having been conned, victims may be ashamed, or in denial. Many may consider their financial loss not worth reporting (as a micro-fraud versus a serious crime). Even if victims are willing to report the digital crime, it may not be obvious which authority the crime is best reported to.
3) Social media advertising services
4) Poorly-moderated content hosts
5) Banks
As the financial victims legitimately authorise payments off their own accounts, they do not enjoy recourse via their banks. To avoid new transactions from scammers, these victims often have to pay banks for new cards after terminating their old ones. It is unclear what role banks could adopt in combating digital crimes in which clients are defrauded whilst following a seemingly legitimate payment process.
6) Cyber defence companies
7) Cybercrime reporters and statistics gatherers
8) Cybercrime researchers, and educators, in companies
In a similar collaborative vein, cybercrime researchers and educators in companies are working together to help combat digital crimes targeting their employees and clients. In particular, banks and insurance companies in SA are pro-actively raising awareness around phishing and other common digital crimes, in communications that range from email newsletters to pop-up warnings that clients must acknowledge reading after logging in.
9) Anti-crime activists (PBOs and NGOs)
Anti-digital-crime education tends to focus on equipping high school students and working professionals with preventative knowledge in English. However, our research into fake celebrity endorsement victims' online commentary suggests that most are over fifty-five, with English being their second language, at best. In response, The Noakes Foundation has supported the development of modules in English for educating silver surfers on the most common digital crimes. Ideally, though, these modules (and reportfakeendorsement.com's content) should be available in South Africa's 11 official languages.
10) Social media platforms and their Big Tech owners
Social media companies, and their Big Tech owners, would seem to have a particular responsibility for protecting users from digital crime threats on their platforms. Yet there is a decade-long history in SA of even influential celebrities not receiving speedy responses to their brandjacking, and scam adverts are seldom taken down based on reports from celebrities, their representatives, and other victims.
The most popular platforms for this scam in SA are Meta's Facebook and Instagram. Meta does not understand the content that its users share (Horwitz, 2023). Further, it does not report on scam ecosystems based inside its own platforms. Consequently, neither Facebook nor Instagram can pro-actively identify digital crimes, let alone quickly adapt their systems to stop emergent threats from micro-fraudsters. It is left to whistleblowers, former employees, investigative journalists, and researchers to create awareness of these platforms' serious flaws, such as their use as scammers' ecosystems tied to scam-as-a-service platforms. This seems at odds with corporate responsibility: Meta should publicly report on its progress in tackling scam ecosystems on its Facebook, WhatsApp, and Instagram platforms. It could also pro-actively warn vulnerable users, such as the aged, against the latest scam risks.
In a sense, digital crimes by cybercriminals on social networks can be considered a parasitic attack within a larger parasitic host: Meta's Facebook and Instagram are infomediaries that misrepresent themselves as symbionts supporting users' communal connections online. In reality, Meta's business model is parasitic in relying on three billion users to share content (Robbins, 2018). Much of this content is not the work of original, creative producers, but rather sampled from content that has proved popular on other platforms. In essence, social media platforms are middlemen between content creators and their audiences, taking most of the profits from advertising. These platforms also take the intellectual property of online content creators. In the Global South this serves as a form of neocolonial data extraction, as Big Tech multinationals from the Global North extract the region's data, with little being reciprocated. For example, while powerful celebrities in the US can enjoy access to dedicated Facebook support, there is no equivalent offering for influential SA users. Instead, they are lucky if they stumble onto internal staff or Trusted Partners who can best help them respond to Facebookjacking or Instajacking crimes.
In contrast to the usefulness of human insiders, the Meta AI that manages users' reports of dubious accounts and content is simply not capable of recognising malicious advertisers' accounts. At face value, there is nothing "wrong" with how the scammers' accounts are set up: a human profile (fake name) manages a business profile (fake name and business). Reporting the scam accounts is useless, since the fraudsters fill in all the right criteria to fly under the radar! The scammers use 'like farms' and a network of fake profiles to create a sense of legitimacy by liking, sharing, and commenting on posts and ads. The criminals also use a "legitimate website" (a bought domain and hosting, of questionable design) that sells a "product" while accumulating visitors' personal information and credit card details. All this seems to be legitimate business behaviour to AI, but it is malicious, and AI cannot detect that. The scammers then use a (stolen) credit card, or a hijacked Meta Ads Manager profile, to run adverts through their "business page". This works for a short while until the card or the account is stopped, whereupon they simply create another one. These ads sell a product online that is seemingly harmless and well within the legal parameters of Meta's Community Standards. The fact that it is a fake product is immaterial to Meta; the onus is on the customer to know when they are being scammed, and if users try to report a harmful product, the report "doesn't work", as that is deemed a matter of personal opinion! By contrast, where such content is checked by human moderators, it is so obviously fake, even to the most untrained eye, that they take it down quickly.
It appears that Meta's Facebook and Instagram are turning a blind eye to this digital advertising crime. The benefit to Meta is clear, as it reaps the rewards of advertisers' spending: Trustfull's 2024 report expects deepfake fraud to reach $15.7 billion in 2024, and Meta is set to take a large chunk of that ad-spend revenue in distributing fake, malicious content. It is hard not to draw the conclusion that it seems irrelevant to Meta whether the content is genuine or a scam, or whether the account used to promote these scams has been hacked or cloned. Either way, Meta still profits.
11) Financial investors
12) Government politicians
13) Local police
14) International law enforcement
15) Local law
16) Higher Education and research funders
17) Product regulators
Celebrity, you are on your own in responding to digital crime?!
Please comment with suggestions to improve this post
Acknowledgements
Friday, 26 July 2024
Content suppression techniques against dissent in the Fifth Estate - examples of COVID-19 censorship on social media
Written for researchers and others interested in the many methods available to suppress dissidents' digital voices. These techniques support contemporary censorship online, posing a digital visibility risk for dissidents challenging orthodox narratives in science.
The Fourth Estate emerged in the eighteenth century as the printing press enabled the rise of an independent press that could help check the power of governments, business, and industry. In similar ways, the internet supports a more independent collectivity of networked individuals, who contribute to a Fifth Estate (Dutton, 2023). This concept acknowledges how a network power shift results from individuals who can search, create, network, collaborate, and leak information in strategic ways. Such affordances can enhance individuals' informational and communicative power vis-à-vis other actors and institutions. A network power shift enables greater democratic accountability, whilst empowering networked agents in their everyday life and work. Digital platforms do enable online content creators to generate and share news that digital publics amplify via networked affordances (such as likes, quotes, and sharing via hashtag communities).
#1 Covering up algorithmic manipulation
Social media users who are not aware of censorship are unlikely to be upset about it (Jansen & Martin, 2015). Social media platforms have not been transparent about how they manipulated their recommender algorithms to give higher visibility to the official COVID-19 narrative, crowding out original contributions from dissenters in social media timelines and search results. Such boosting ensured that dissent was seldom seen, or was perceived as a fringe minority's concern. As Dr Robert Malone tweeted, the computational algorithm-based method now 'supports the objectives of a Large Pharma-captured and politicised global public health enterprise'. Social media algorithms have come to serve a medical propaganda purpose that crafts and guides the 'public perception of scientific truths'. While algorithmic manipulation underpins most of the techniques listed below, it is concealed from social media platform users.
#2 Fact choke versus counter-narratives
An example she tweeted about was the BBC's Trusted News Initiative warning in 2019 about anti-vaxxers gaining traction across the internet, requiring algorithmic intervention to neutralise "anti-vaccine" content. In response, social media platforms were urged to flood users' screens with repetitive pro-(genetic)-vaccine messages normalising these experimental treatments. Simultaneously, messaging attacked alternative treatments that posed a threat to the vaccine agenda. Fact chokes also included 'warning screens' that were displayed before users could click on content flagged by "fact checkers" as "misinformation".
#3 Title-jacking
For the rare dissenting content that achieves high viewership, another challenge is that title-jackers will leverage this popularity for very different outputs under exactly the same (or very similar) production titles. This makes it harder for new viewers to find the original work. For example, Liz Crokin's 'Out of the Shadows' documentary describes how Hollywood and the mainstream media manipulate audiences with propaganda. Since this documentary's release, several unrelated videos have been published under the same title.
#4 Blacklisting trending dissent
Social media search engines typically allow their users to see what is currently the most popular content. On Twitter, dissenting hashtags and keywords that proved popular enough to feature amongst trending content were quickly added to a 'trend blacklist' that hid unorthodox viewpoints. Tweets posted by accounts on this blacklist are prevented from trending regardless of how many likes or retweets they receive. Stanford Health Policy professor Jay Bhattacharya argues he was added to this blacklist for tweeting about a focused alternative to the indiscriminate COVID-19 lockdowns that many governments followed: The Great Barrington Declaration, which he wrote with Dr Sunetra Gupta and Dr Martin Kulldorff, and which attracted over 940,000 supporting signatures.
#5 Blacklisting content due to dodgy account interactions or external platform links
#6 Making content unlikeable and unsharable
A newsletter from Dr Steven Kirsch (29.05.2024) described how a Rasmussen Reports video on YouTube had its 'like' button removed. As Figure 1 shows, users could only select a 'dislike' option. This button was later restored for www.youtube.com/watch?v=NS_CapegoBA.

Figure 1. YouTube only offers a dislike option for a Rasmussen Reports video on vaccine deaths, sourced from Dr Steven Kirsch's newsletter (29.05.2024)
Social media platforms may also prevent resharing such content, or prohibit links to external websites that are not supported by these platforms' backends, or have been flagged for featuring inappropriate content.

#7 Disabling public commentary
#8 Making content unsearchable within, and across, digital platforms
#9 Rapid content takedowns
Social media companies could ask users to take down content that was in breach of COVID-19 "misinformation" policies, or automatically remove such content without its creators' consent. In 2021, Meta reported that it had removed more than 12 million pieces of content on COVID-19 and vaccines that global health experts had flagged as misinformation. YouTube has a medical misinformation policy that follows the guidance of the World Health Organisation (WHO) and local health authorities. In June 2021, YouTube removed a podcast in which evidence of a reproductive hazard of mRNA shots was discussed between Dr Robert Malone and Steve Kirsch on Prof Bret Weinstein's DarkHorse channel. Teaching material that critiqued genetic vaccine efficacy data was automatically removed within seconds for going against its guidelines (see Shir-Raz, Elisha, Martin, Ronnel & Guetzkow, 2022). The WHO reports that its guidance contributed to 850,000 videos featuring harmful or misleading COVID-19 misinformation being removed from YouTube between February 2020 and January 2021.
#10 Creating memory holes
#11 Rewriting history
#12 Concealing the motives behind censorship, and who its real enforcers are
Figure 2. Global Public-Private Partnership (G3P) stakeholders - sourced from IainDavis.com (2021) article at https://unlimitedhangout.com/2021/12/investigative-reports/the-new-normal-the-civil-society-deception.