Thursday 29 August 2024

After half-a-million views, "Dr Noakes" erectile dysfunction "advert" taken down by Facebook + suggested actions for META to do better

I am pleased to report that The Noakes Foundation has succeeded in getting a fake 'Dr Noakes' advert for erectile dysfunction pills removed from META. This came after a month of trying varied methods, without success, to stop the brandjacking of Professor Tim Noakes' identity and his impersonation via deepfake reels and accounts on Facebook.

Brandjacking is the ‘allegedly illegal use of trademarked brand names - on social network sites’ (Ramsey, 2010, p. 851). Cybercriminals misuse the trademarks of others without authorization. For example, ‘Facebookjacking’ and ‘Instajacking’ see public figures’ usernames, account names, and/or digital content being used for fake accounts and video "adverts" on Meta’s popular social networks, Facebook and Instagram. Such brandjacking via fake celebrity endorsement spans several types of crime: (1) impersonation, (2) non-consensual image sharing, and (3) infringement of a public figure's intellectual property through copyright violation of still images and audio-video. In addition to causing (4) reputational damage to the public figure by suggesting association with a scam, cybercriminals may use it for (5) financial fraud and hacking. Given that these are serious crimes, it is worrying that public figures in South Africa seem to receive minimal, if any, support from social media companies for stopping this fake endorsement digital crime. There is also a gap in scholarship on how public figures worldwide, and in SA, might best tackle this persistent crime.

Figure 1. Screenshot from the fake 'Dr Noakes' erectile dysfunction advert on Facebook (2024)

On Thursday the 25th of July we were first alerted to a deepfake advert featuring Emeritus Professor Tim Noakes that ran on META's Facebook and on TikTok. As Figure 1 shows, the Facebook advert had been viewed over 584,000 times, liked by 637 accounts, and had received 56 comments. While many of the likes and comments may be from bots, such high viewership of the reel itself is highly concerning. It suggests how rapidly a cybercriminal's adverts spread to potential victims: over 16,000 views per day!

Figure 2. Screenshot of scammers' Facebook account featuring "Dr Tim Noakes" erection pill adverts (2024)
Figure 3. Scammer account location behind fake Facebook Dr Tim Noakes adverts (2024)  

Our initial Facebook advert lookup revealed that one page was running four adverts (Figure 2). This account ("Tristan") was managed from Nepal and India (Figure 3).

Figure 4. Screenshot of fake Tristan account header behind Dr Tim Noakes adverts on Facebook (2024) 


This fake account page also leveraged fake interactions to suggest that it was liked and followed (Figures 4 and 5).
Figure 5. Screenshot of fake "Tristan" Facebook account details behind Dr Tim Noakes adverts (2024)

This account was reported to Facebook via a third party. During this “warning period”, the account's owners launched four new "Dr Tim Noakes" campaigns. Each was documented and reported to Facebook. Interestingly, the links to the online store “sites” were dead ends. However, a 'Call Now' button could still support a call agent's phishing of victims' financial details.

The absence of a link for data gathering suggested that this scam was not primarily for phishing sensitive data or selling fake products. Rather, the advert's design seems geared towards stealing advertising revenue via deepfake creation. The scammers hack into an advertiser's Meta account and use it to distribute fake adverts that run up tens of thousands of dollars in spend. In this case it was a government-based account from an unknown location. Such adverts may also carry malware, leaving users who click on them vulnerable to hacking. These paid ads also push potential followers to the advertiser’s page. More followers mean more people seeing the content, with Meta indirectly benefiting from the cybercrime's increased visibility through higher advertising rates.


Figure 6. Screenshot of scammers' "Hughles" Facebook account (2024)


Figure 7. Screenshot of scammers' "Hughles" Facebook account's Dr Tim Noakes adverts (2024)

The scammers flick-flacked between varied accounts in committing this cybercrime: they initially used "Hughles" (Figures 6 and 7), "Cameron Sullivan Setting", and "Murthyrius" to launch the same deepfake ads. By the 28th of July, 13 of these "adverts" had been taken down by Facebook, but the scammers shifted to new accounts, "Longjiaren.com" (Figure 8) and "Brentlinger" (renamed "Brentlingerkk" after we reported it). On the 29th of August, these accounts and their adverts were disabled by Facebook.

Figure 8. Screenshot of Longjiaren.com scammers Facebook account for fake adverts (2024)

Such adverts typically reach viewers outside The Noakes Foundation, Nutrition Network and Eat Better South Africa’s networks, whose audiences know Professor Noakes does not endorse miracle weight loss or other cures. To reach vulnerable publics, The Noakes Foundation has run Facebook alerts warning about this latest cybercrime. Ironically, the most recent advert attempting to flag the "Dr Noakes" scam was blocked by Facebook advertising (Figure 9)!

Figure 9. Facebook rejects anti scam ad from The Noakes Foundation (2024)

Actions for META to do better in fighting cybercrime on its platforms


As Anna Collard (KnowBe4) spotlights in her recent interview with eNews, social media platforms are a vital source of news in Africa. Consequently, these platforms must be held more accountable for slow responses to synthetic fakes and deepfakes. It is greatly concerning that META's Facebook platform is rife with serious crimes, ranging from sextortion and child trafficking to drug pushing.

META can be more pro-active in tackling such cybercrimes (plus less serious ones, like fake celebrity endorsement) by prioritising the seven steps below:

1) Actively communicate that all users must adopt a 'zero trust' mindset;
2) Create a compliance team that is dedicated to thwarting cybercriminals' activities;
3) Offer at least one human contact on each META platform for serious reports of criminal misuse;
4) Support frequent reporters of cybercrime by referring them to META's Trusted Partners or Business Partners for rapid aid;
5) Encourage external research on every platform regarding cybercriminals' activities (such initiatives could develop inexpensive tools, for example for celebrities' representatives to protect public figures from being deepfaked in "adverts");
6) Provide more feedback on which aspects of a cybercrime report were influential in getting accounts and content removed. Without such feedback, fraud reporters cannot be sure which reports are most effective;
7) Have a recommendation system in place for support networks that cybervictims can approach (such as referring South Africans to its national CyberSecurity hub).

In addition, META might consider these suggestions from The Noakes Foundation's Report Fake Endorsement initiative, to: (8) enhance deepfake detection technology, (9) apply stricter verification processes, (10) increase transparency and reporting tools, (11) support local educational initiatives, (12) promote collaborations with local cybercrime experts, (13) implement proactive monitoring systems to detect unusual patterns in ads, and (14) reinforce consequences for violations.

By sharing this "Dr Noakes" case study (and developing others), The Noakes Foundation hopes to raise awareness of the fake celebrity endorsement cybercrime, plus the importance of Big Tech guardians stepping up to fulfil their responsibilities. We are also liaising with sympathetic allies (KnowBe4® Africa Security Awareness, Orange Defence, Wolfpack Information Risk and others) to grow the networks necessary to better support cybercrime prevention in South Africa.

Much can be done through targeted digital literacy education for vulnerable targets of cybercrime (such as #StopTheScam for silver surfers). We will also continue advocating that capable guardians (such as META, Twitter and TikTok) become more pro-active in protecting vulnerable publics on their platforms. Their gatekeeping role is vital, as the traditional bulwarks against crime (education, the police and the law) seem unable to catch up with the "evolution" of global cybercrimes!

Thursday 22 August 2024

META profits off fake celebrity endorsement ads (& no, "Dr Noakes" cannot help your erection problem 🙄!)

Yes, the "Dr Noakes" advert below for men’s erection problems is the latest brandjack of Prof Noakes' identity on Facebook (and TikTok). So-called "Bretlinger"'s recent deepfake advert (Figure 1) features "Tim Noakes" promising a "second youth" through "science" for those suffering from erectile dysfunction. This Facebook reel is accompanied by the text: 'I've done this three times – and for five years now, I haven't had any issues with erectile dysfunction. Write down my prescription from Dr. Tim Noakes: an easy way to restore your manhood in seconds.' (Notably, the original caption interleaves an invisible Unicode separator, U+2063, between every letter, presumably to evade keyword filters; it is reproduced here with those characters stripped.) Don't expect many men to report to Facebook, or comment publicly, about being scammed over this particular health problem 😡.
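For readers curious how the obfuscation works: the scam caption is laced with the invisible Unicode character U+2063 (INVISIBLE SEPARATOR, general category Cf) between letters, so a naive keyword filter never sees words like "erectile". A minimal Python sketch (the function name is my own) shows how such characters can be stripped before keyword matching:

```python
import unicodedata

def strip_invisible(text: str) -> str:
    """Remove zero-width 'format' (Cf) characters, e.g. U+2063,
    that scammers interleave between letters to dodge keyword filters."""
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

# The scam caption separates every letter with U+2063:
obfuscated = "e\u2063r\u2063e\u2063c\u2063t\u2063i\u2063l\u2063e"
print(strip_invisible(obfuscated))  # erectile
```

This catches only one evasion family; scammers also substitute lookalike letters (homoglyphs), which require confusable-character mapping rather than simple stripping.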

Figure 1. Screenshot of fake "Bretlinger" company's advert reel for "Dr Noakes" erection product on Facebook (2024)

The Noakes Foundation and friends have reported this cybercriminals' advert many times to META. BUT guess what, just like R Kelly... META Facebook's bots and human reviewers "do not see anything wrong" with the ad! Apparently adverts stealing a person's reputation, plus Facebook users' hard-earned money, are no problem for META, since the scam does not go "against Facebook's community standards" (Figure 2).


Figure 2. Facebook support message reply to fake "Dr Noakes erection advert" report (2024)

Even if Facebook is just a conduit for sharing ads that are strictly the advertiser's responsibility, META still faces legal risk in benefiting from crime (by repeatedly receiving "advertising" fees!). For several months, The Noakes Foundation's representatives provided a third party with spreadsheets of scam accounts and hyperlinks for META's reference. This seemed effective in blunting the persistent brandjacking of Professor Tim Noakes' reputation across Facebook and Instagram. However, META has shifted position and will no longer use these spreadsheets. Paraphrasing, META thinks that 'none of the adverts are threatening or will lead to grievous bodily harm or death'.

As previously described for Twitter (now X), there is a familiar pattern of popular social media platforms turning a blind eye to cybercriminals' creation of imposter accounts to launch phishing attacks via fake adverts. Our preliminary research (2022-23) revealed that Facebook and Instagram were the platforms most commonly reported by the financial victims of the Dr Michael Mol and Tim Noakes brandjackings. Cybercriminals are now also making use of TikTok for widespread promotion of the "Dr Noakes erection solution". While their earlier brandjacks used synthetic content, the latest feature deepfake videos of Tim speaking for most of the "advert".

As "Tim's" Facebook advert is accompanied by negative comments from victims (such as "You stole my money!"), it seems inconceivable that a genuine, impartial, human reviewer could argue that such scamming content is "acceptable". Perhaps an alternate explanation is that META is following orders from powerful outsiders in not applying its own policies! In exploring the patterns of this cybercrime's victims, it is notable that low-carbohydrate experts were targeted across continents. Amongst the fake advertising for them, Emeritus Professor Tim Noakes stands out for his likeness and reputation having been re-used the most. It is not unrealistic to hypothesise that Western intelligence agencies may ask social media platforms to ignore content that degrades the reputations of prominent critics of state propaganda. Dr Piers Robinson suggested this hypothesis based on his experience of character assassination via corporate media after setting up the Working Group on Syria, Media and Propaganda, which stumbled upon a strategic deception, perpetrated by the US, UK and French governments, regarding chemical weapons attacks in Syria and their improper investigation. As Piers tweeted, 'from detaining and interrogating journalists such as Julian Assange, Richard Medhurst, Vanessa Beeley and Kit Klarenberg, through to smearing/character assassination across social media against Sharyl Attkisson, there are a myriad of ways used to suppress dissent.'

Prof Noakes has also faced character assassination after publicly challenging health propaganda related to "The Science"™ behind the US government's (high-carbohydrate-promoting) food pyramid, plus the worldwide promotion of (poorly tested or untested) COVID-19 genetic "vaccinations". Perhaps META's repeated failure to block these adverts is not an oversight of its responsibility to its users, but rather a deliberate following of external directives. Worse, some scams may not just be the work of cybercriminals, but may also be supported by intelligence agencies experimenting with fake endorsements as a new tool in their character assassination arsenal?!

In an interview on the deepfake content 'explosion', a Facebook spokesperson recently stated that 'Content that purposefully intends to deceive or exploit others for money violates our policies, and we remove violating content when it's found.' Interestingly, this spokesperson did not detail META's response to content being reported. There is also a missed opportunity to check whether Facebook's response to the reported victims and their representatives (Leanne Manas, Patrice Motsepe, Elon Musk, Nicky Oppenheimer, President Cyril Ramaphosa and Johann Rupert) actually corroborates this PR claim. In the case of The Noakes Foundation and Prof Noakes, it certainly does not!

Regardless of the scam's sources, The Noakes Foundation, its associates and I will continue to raise public awareness (via reportfakeendorsement.com and other channels) of fake celebrity endorsement ads. Such activism helps the general public develop zero trust in any "endorsements". There are NO endorsements from Professor Tim Noakes (or his associates at The Noakes Foundation, the Nutrition Network and Eat Better South Africa NPC). We will continue to research and advocate for social media platforms to take pro-active steps to prevent a decade-long micro-fraud that costs vulnerable South Africans millions of Rands each year, whilst also harming celebrities' reputations and wellbeing.

Online companies and social media platforms must do more to protect vulnerable audiences from fraudsters. Ironically, The Noakes Foundation's recent post warning about the latest scam has a comment section that then gets used as the next fraud trap. Scammers offer "help" to victims on it, promising that they can "find the criminals" and get a victim's "money back". These FB accounts may be bots that are automated to add comments to posts featuring tags such as 'hack(ed)', 'scam(med)', 'deepfake', etc.
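Part of the moderation burden such bot comments create could be scripted. As a hedged illustration (the bait phrases below are my own guesses at typical "recovery scam" wording, not drawn from any Meta tooling), a page administrator could pre-screen comments for these phrases before reviewing them by hand:

```python
import re

# Illustrative bait phrases typical of "recovery agent" scam comments;
# this list is a guess, not an official or exhaustive taxonomy.
BAIT_PHRASES = [
    r"get\s+your\s+money\s+back",
    r"recover(?:y|ed)?\s+(?:expert|agent|my\s+funds)",
    r"contact\s+\S+\s+on\s+(?:whatsapp|telegram)",
]
BAIT = re.compile("|".join(BAIT_PHRASES), re.IGNORECASE)

def looks_like_recovery_scam(comment: str) -> bool:
    """Flag comments matching a known bait phrase for manual review."""
    return BAIT.search(comment) is not None

comments = [
    "So sorry this happened, Prof!",
    "A recovery expert helped me get my money back!!",
]
flagged = [c for c in comments if looks_like_recovery_scam(c)]
print(flagged)  # only the second comment is flagged
```

A simple keyword screen like this only triages; scammers vary wording constantly, so flagged comments still need a human eye before hiding or reporting.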

Figure 3. Screenshot of The Noakes Foundation scam alert post on Dr Noakes scam with scammer comments, top (2024)
 
This scam alert attracted 16 comments, almost all of which seem dubious. Whilst the comments seem to be from "individual" accounts, they may also be synchronised from a "bot farm" to reduce the post's visibility. On X, tweets that are replied to by dodgy accounts are algorithmically penalised by being added to a blacklist. If the same holds on Facebook, then this co-ordinated commentary might also be evidence of a malicious actor trying to reduce the visibility of The Noakes Foundation's response to a scam.

Figure 4. Screenshot of scammer comments to The Noakes Foundation's scam alert post for the Dr Noakes scam, bottom (2024)


META's Facebook and Instagram do not offer their public page managers any option to quickly respond to a barrage of scammy, fake comments. As a result, in addition to responding to fake ads, organisations must also use scarce resources to manage this scammy commentary. Each dodgy Facebook user's comment (Figure 4) must be hidden and reported (as false information) one at a time; likewise for blocking each account and hiding its feed. The Noakes Foundation is going to flag to META that it must provide page administrators with decent tools to efficiently tackle the 'fake comments' threat. Let's just hope that META is not turning a blind eye to that threat, too...

To paraphrase the sardonic comment from ALIEN's Ash synthetic to the humans confronting the Xenomorph threat: 'I can't lie to you about your chances, but you have my sympathies... in dealing with META (and the CIA)!'

Figure 5. ALIEN Ash sympathies meme


Friday 26 July 2024

Content suppression techniques against dissent in the Fifth Estate - examples of COVID-19 censorship on social media

Written for researchers and others interested in the many methods available to suppress dissidents' digital voices. These techniques support contemporary censorship online, posing a digital visibility risk for  dissidents challenging orthodox narratives in science.


The Fourth Estate emerged in the eighteenth century as the printing press enabled the rise of an independent press that could help check the power of governments, business, and industry. In similar ways, the internet supports a more independent collectivity of networked individuals, who contribute to a Fifth Estate (Dutton, 2023). This concept acknowledges how a network power shift results from individuals who can search, create, network, collaborate, and leak information in strategic ways. Such affordances can enhance individuals' informational and communicative power vis-à-vis other actors and institutions. A network power shift enables greater democratic accountability, whilst empowering networked agents in their everyday life and work. Digital platforms do enable online content creators to generate and share news that digital publics amplify via networked affordances (such as 💌 likes, "quotes" and sharing via # hashtag communities).


In an ideal world, social media platforms would be considered a public accommodation, and the Fifth Estate's users would benefit from legal protection of their original content, including strong measures against unjustified suppression and censorship. The latter should recognise the asymmetric challenges that individual dissenters, whistleblowers and their allies must confront in contradicting hegemonic social forces that can silence their opponents' (digital) voices. As recently evidenced during the COVID-19 "pandemic", the Twitter Files and other investigations reveal how multinational pharmaceutical companies, unelected global "health" organisations, national governments, social media and traditional broadcast companies all conspired to silence dissent that opposed costly COVID-19 interventions. Regardless of their levels of expertise, critics who questioned this narrative in the Fourth or Fifth Estate were forced to negotiate censorship for the wrong-think of sharing "dangerous" opinions.

Such sanctions reflect powerful authorities' interests in controlling (scientific) language, the window of permissible opinion, and the social discourses that the public might select from, or add to. Under the pretext of public "safety", the censorship industrial complex strong-arms broadcast media and social media companies into restricting dissidents' voices as "misinformation" that is "unsafe". Facing no contest, the views of powerful officialdoms earn frequent repetition within a tightly controlled, narrow narrative window. At the same time, legitimate reports of mRNA injuries are falsely redefined as "malinformation", and censored.
 
Consequently, instead of a pluralist distribution of power in the Fifth Estate that can support vital expression, powerful authorities are enforcing internet policy interventions that increasingly surveil and censor users' digital voices. Infodemic scholars whose work endorses such suppression would seem ignorant of how problematic it is to define disinformation in general. This is particularly true in contemporary science, where knowledge monopolies and research cartels may be dominant; where dissenting minds should be welcomed for great science; and where a flawed scientific consensus can itself be dangerous. Silencing dissent has important public health ramifications, particularly where the potential for suggesting, and exploring, better interventions becomes closed off. Science, health communication and media studies scholars may also ignore the inability of medical experts to accurately define what disinformation is, particularly where global policy makers face conflicts of interest (as in the World Health Organisation's support for genetic vaccines).

Censorship and the suppression of legitimate COVID-19 dissent are dangerously asymmetrical: health authorities already benefit from ongoing capital cascades whose exchange largely serves their interests. Such exchanges span financial, social, cultural, symbolic and even other (e.g. embodied) forms of capital (Bourdieu, 1986; 2018). By contrast, individual critics can quickly be silenced by attacks on their limited capital, effectively preventing them from exercising the basic right to free speech and from delivering sustained critiques. A related concern is that the censorial actions of artificial intelligence designers and digital platform moderators are often opaque to a platform's users. Original content creators may be unaware that they will be de-amplified for sharing unorthodox views, as algorithms penalise the visibility of content on 'banned' lists, and of any accounts that amplify "wrongthink".

Content suppression on social media is an important but neglected topic, and this post strives to flag the wide variety of techniques that may be used in digital content suppression. They are listed below in order of seemingly increasing severity:

#1 Covering up algorithmic manipulation

Social media users who are not aware of censorship are unlikely to be upset about it (Jansen & Martin, 2015). Social media platforms have not been transparent about how they manipulated their recommender algorithms to give higher visibility to the official COVID-19 narrative, or to crowd out original contributions from dissenters on social media timelines and in search results. Such boosting ensured that dissent was seldom seen, or was perceived as a fringe minority's concern. As Dr Robert Malone tweeted, the computational algorithm-based method now 'supports the objectives of a Large Pharma-captured and politicised global public health enterprise'. Social media algorithms have come to serve a medical propaganda purpose that crafts and guides the 'public perception of scientific truths'. While algorithmic manipulation underpins most of the techniques listed below, it is concealed from social media platform users.


#2 Fact choke versus counter-narratives

A fact choke involves burying unfavourable commentary amongst a myriad of content. The term was coined by Margaret Anna Alice to describe how "fact checking" was abused to suppress legitimate dissent. An example she tweeted about was the BBC's Trusted News Initiative warning in 2019 about anti-vaxxers gaining traction across the internet, requiring algorithmic intervention to neutralise "anti-vaccine" content. In response, social media platforms were urged to flood users' screens with repetitive pro-(genetic)-vaccine messages normalising these experimental treatments. Simultaneously, messaging attacked alternate treatments that posed a threat to the vaccine agenda. Fact chokes also included 'warning screens' displayed before users could click on content flagged by "fact checkers" as "misinformation".

With the "unvaccinated" demonised by the mainstream media to create division, susceptible audiences were nudged to become vaccine compliant to confirm their compassionate virtue. At the same time, to retain belief in mRNA genetic vaccine "safety", personal accounts, aggregated reports (such as "died suddenly" on markcrispinmiller.substack.com) and statistical reports (see Cause Unknown) of genetic vaccine injuries were suppressed as "malinformation", despite their factual accuracy. Other "controversial content", such as medical professionals' criticism of dangerous COVID-19 treatment protocols (see What the Nurses Saw) or criticism of a social media platform's policies (such as the application of lifetime bans and critiques of platform speech codes), has been algorithmically suppressed.

Critical commentary may also be drowned out when platforms, such as YouTube, bury long-format interviews amongst short 'deepfake' videos. These can range from featuring comments the critic never made, to fake endorsements from cybercriminals (as described on X by Whitney Webb, or by Professor Tim Noakes on YouTube).

#3 Title-jacking

For the rare dissenting content that achieves high viewership, another challenge is that title-jackers will leverage this popularity for very different outputs under exactly the same (or very similar) production titles. This makes it harder for new viewers to find the original work. For example, Liz Crokin's 'Out of the Shadows' documentary describes how Hollywood and the mainstream media manipulate audiences with propaganda. Since this documentary's release, several unrelated videos have been published under the same title.


#4 Blacklisting trending dissent

Social media search engines typically allow users to see what is currently the most popular content. On Twitter, dissenting hashtags and keywords that proved popular enough to feature amongst trending content were quickly added to a 'trend blacklist' that hid unorthodox viewpoints. Tweets posted by accounts on this blacklist are prevented from trending, regardless of how many likes or retweets they receive. Stanford Health Policy professor Jay Bhattacharya argues he was added to this blacklist for tweeting on a focused alternative to the indiscriminate COVID-19 lockdowns that many governments followed: in particular, The Great Barrington Declaration he wrote with Dr. Sunetra Gupta and Dr. Martin Kulldorff, which attracted over 940,000 supporting signatures.

After its publication, all three authors experienced censorship on search engines (Google deboosted results for the declaration), on social media platforms (Facebook temporarily removed the declaration's page, while Reddit removed links to its discussion) and on video (YouTube removed a roundtable discussion with Florida's Governor Ron DeSantis, whose participants questioned the efficacy and appropriateness of requiring children to wear face masks).

#5 Blacklisting content due to dodgy account interactions or external platform links

Limited visibility filtering also occurs when posts are automatically commented on by pornbots, or feature engagement by other undesirable accounts. For example, posts mentioning keywords/subjects such as 'vaccine, Pfizer' may receive automated forms of engagement, which then sees posts receiving such "controversial" engagement added to a list ensuring their censorship (see 32 minutes into Alex Kriel's talk on 'The Role of Fake Bot Traffic on Twitter/X').

Social media platforms' algorithms may also blacklist content from external platforms that are not viewed as credible sources (for example, part of an alternative (alt-right) media), or that are seen as competing rivals (X penalises the visibility of posts that feature links to external platforms).

#6 Making content unlikeable and unsharable

A newsletter from Dr Steven Kirsch (29.05.2024) described how a Rasmussen Reports video on YouTube had its 'like' button removed. As Figure 1 shows, users could only select a 'dislike' option. This button was later restored for www.youtube.com/watch?v=NS_CapegoBA.

Figure 1. YouTube only offers a dislike option for a Rasmussen Reports video on vaccine deaths - sourced from Dr Steven Kirsch's newsletter (29.05.2024)

Social media platforms may also prevent the resharing of such content, or prohibit links to external websites that are not supported by these platforms' backends or have been flagged for featuring inappropriate content.


#7 Disabling public commentary

Social media platforms may limit the mentionability of content by not offering the opportunity to quote public posts. Users' right to reply may be blocked, and critiques may be concealed by preventing them from being linked to from replies.

#8 Making content unsearchable within, and across, digital platforms

Social media companies applied search blacklists to prevent their users from finding blacklisted content. Content contravening COVID-19 "misinformation" policies was hidden from search users. For example, Twitter applied a COVID-19 misleading information policy that ended in November 2022. In June 2023, META began to end its policy for curbing the spread of "misinformation" related to COVID-19 on Facebook and Instagram.

#9  Rapid content takedowns

Social media companies could ask users to take down content that was in breach of COVID-19 "misinformation" policies, or automatically remove such content without its creators' consent. In 2021, META reported that it had removed more than 12 million pieces of content on COVID-19 and vaccines that global health experts had flagged as misinformation. YouTube has a medical misinformation policy that follows the guidance of the World Health Organisation (WHO) and local health authorities. In June 2021, YouTube removed a podcast in which the evidence of a reproductive hazard of mRNA shots was discussed by Dr Robert Malone and Steve Kirsch on Prof Bret Weinstein's DarkHorse channel. Teaching material that critiqued genetic vaccine efficacy data was automatically removed within seconds for going against its guidelines (see Shir-Raz, Elisha, Martin, Ronel & Guetzkow, 2022). The WHO reports that its guidance contributed to 850,000 videos related to harmful or misleading COVID-19 misinformation being removed from YouTube between February 2020 and January 2021.

PropagandaInFocus describes how LinkedIn users are subject to a misinformation policy that prevents the sharing of content that 'directly contradicts guidance from leading global health organisations and public health authorities'. Dr David Thunder shared an example of his LinkedIn post being automatically removed for (1) sharing a scientific study confirming that children are at negligible risk of suffering severe disease from COVID-19, and (2) questioning the FDA's decision to approve Emergency Use Authorisation for COVID-19 vaccines for children as young as 6 months old. No matter that many other studies confirm both positions, LinkedIn took this post down and threatened to restrict his account.

#10 Creating memory holes

Extensive content takedowns can serve a memory-holing aim, whereby facts and memories of the past become suppressed, erased or forgotten for political convenience. Long after the COVID-19 "pandemic", an Orwellian Ministry of Truth continues to memory-hole the failures of many health authority decision makers, plus those of the mainstream media and most national governments. As discussed here on YouTube by Mary Lou Singleton, Meghan Murphy and Jennifer Sey, such failures included: mandating masking and school closures for children (who were never at risk); never questioning the official COVID-19 statistics (such as CNN's 'death ticker'); and straight-quoting Pfizer press releases as "journalism", whilst mocking individuals who chose to 'do their own research'.

Dr Mark Changizi presents four science moments on memory-holing. In X video 1 and X video 2, he highlights how memory-holing on social media is very different from its traditional form. He uses X (formerly Twitter) as an autobiographical tool, creating long threads that serve as a form of visual memory that he can readily navigate. The unique danger of social media account removal or suspension for censorship extends beyond losing one's history of use on that platform, to losing all 'mentions' related to its content (ranging from audience likes, to their reply and quote threads). This changes the centrally-controlled communication history of what has occurred on a social media platform. Such censorship violates the free speech rights of all persons who have engaged with the removed account, even its fiercest critics, as they also lose the historical record of what they said.

By contrast, decentralised publications (such as hardcopy books) are very hard for authorities to memory-hole, since sourcing all hardcopies can be nearly impossible for censors. While winners can write history, historians who have access to historical statements can rewrite it. As COVID-19 memory-holing on social media platforms challenges such rewriting, users must think about creating uncensorable records (such as the book Team Reality: Fighting the Pandemic of the Uninformed). In X video 3, Dr Changizi highlights that freedom of expression is a liability, as each expression pushes reputation chips onto the table. The more claims one stakes, the greater the risk to one's reputation if they prove wrong. So another aspect of memory-holing lies in individuals' potential desire to memory-hole their own platform content, should they prove to be wrong. In X video 4, Dr Changizi also spotlights that the best form of memory-holing is self-censorship, whereby individuals see other accounts being suspended or removed for expressing particular opinions. These witnesses then decide not to express such opinions, since doing so might endanger their ability to express other opinions. While such absence of speech is immeasurable, it would seem the most powerful memory-holing technique: individuals who silence their own voices do not create history.

#11 Rewriting history

Linking back to the Fact Choke technique are attempts at historical revisionism by health authoritarians and their allies. An example of this is the claim in the mainstream media that critics of the orthodox narrative were "right for the wrong reasons" regarding the failure of COVID-19 lockdowns, the many negative impacts of closing schools and businesses, and the enforcement of mandatory vaccination policies.

#12 Concealing the motives behind censorship, and who its real enforcers are

Social media platforms not only hide algorithmic suppression from users, but may also be misused to hide the full rationale for censorship, or who is ultimately behind it. Professor David Hughes prepared a glossary of deceptive terms and their true meanings (2024, pp. 194-195) to highlight how the meaning of words is damaged by propaganda. A term resonating with technique #9 is “Critical”: pretending to speak truth to power whilst turning a blind eye to deep state power structures.

The official narrative positioned COVID-19 as (i) a pandemic with zoonotic (animal-to-human) origins, and alternate explanations were strongly suppressed. As this is the least likely explanation, other, more plausible hypotheses merit serious investigation. SARS-COV-2 might have stemmed from (ii) an outbreak of the Wuhan lab's "gain of function" research, or (iii) a deliberate release in several countries from a biological weapons research project. Some critics even dispute the existence of SARS-COV-2, alleging that (iv) viral transmission is unproven, and that the entire COVID-19 "pandemic" is a psychological propaganda operation.

By silencing dissident views like these, social media platforms stop their users from learning about the many legitimate COVID-19 debates that are taking place. This is not a matter of keeping users "secure" from "unsafe" knowledge, but rather of networked publics being targeted for social control in the interests of powerful conspirators. In particular, the weaponised deception of social media censorship suits the agenda of the Global Public-Private Partnership (GPPP or G3P) and its many stakeholders. As described by Dr Joseph Mercola in The Rise of the Global Police State, each organisational stakeholder plays a policy enforcement role in a worldwide network striving to centralise authority at a global level.

Global Public-Private Partnership G3P organogram
Figure 2. Global Public-Private Partnership (G3P) stakeholders - sourced from IainDavis.com (2021) article at https://unlimitedhangout.com/2021/12/investigative-reports/the-new-normal-the-civil-society-deception.

G3P stakeholders have a strong stake in growing a censorship industrial complex to thwart legitimate dissent. Critiques of the official COVID-19 "pandemic" measures are just one example. The censorship industrial complex also strives to stifle robust critiques of (1) climate change "science", (2) "gender affirming" (transgender) surgery, (3) mass migration (aka the Great Replacement), and (4) rigged "democratic" elections, amongst other "unacceptable" opinions. Rather than serving the public good, such censorship actually serves the development of a transhumanist, global technocratic society. The digital surveillance dragnet of the technocracy suits the interests of a transnational ruling class in maintaining social control of Western society, and other vassals. This will be expanded upon in a future post tackling the many censorship and suppression techniques that are being used against (ii) accounts.

N.B. This post is a work-in-progress and the list above is not exhaustive. Kindly comment to recommend techniques that should be added; suggestions for salient examples are most welcome.

Monday 8 January 2024

Presentation notes for Cybermobs for online academic bullying - a new censorship option to protect The Science™’s status quo

Written for viewers of the Slideshare presentation.

Here is the transcript of the talk that accompanied https://www.slideshare.net/TravisNoakes/cybermobs-for-online-academic-bullying-2023pptx. For more on its background, click here.


    

Slide #1
Thanks for joining this talk on how academic cybermobs can serve as a new censorship option for protecting scientific orthodoxy. Mobs that seek to silence dissenters are a small part of a much greater concern regarding the censorship of legitimate disagreement… and scientific truths online.


#2
You can all read faster than I can speak, so please do so for this overview of my presentation. After introducing yours truly and The Noakes Foundation (TNF), I am going to define the key concepts of The Science, scientific suppression, undone science, digital voice and online censorship.


#3
And how digital voice in the Fifth Estate is useful for working around scientific suppression, changing science and guidelines. Dissidents who succeed in gaining public attention can face hard and soft forms of censorship, which include the distinctive actions of academic cybermobs, plus a myriad of forms of censorship on digital platforms. The talk closes with the challenges of researching academic cybermobs and a brief intro to the celebrity cybermobbing research that TNF assists.


#4
My doctorate was in Media Studies and my research is qualitative and highly interdisciplinary. It spans the fields of culture, digital media, education and the health sciences. A common thread is exploring how individuals’ communicative agency relates to social structure. I am an Adjunct Scholar at CPUT, whilst also volunteering at The Noakes Foundation (TNF) for its brand development, as evidenced via its new website; you’ll also see emoji sticker designs from my Create With business in this presentation.

#5
TNF largely supports research into low-carbohydrate lifestyles, as Big Food and Big Pharma generally don’t. I lead TNF’s Academic Free Speech and Digital Voices research project to explore how dissident researchers use digital voice for promoting their research, whilst negotiating scientific suppression and censorship from supporters of The (Current) Science™. The Science™️ is a phenomenon in Higher Education whereby tenured staff must defend the correct science of their time, arguing against alternate explanations. It has a religious overtone, as orthodox scientists strive to protect their life-long contribution of “correct beliefs” against questioning from heretical outsiders. Measures to protect The Science™️’s body of knowledge can include suppression and even “legitimate” censorship of “harmful” counter-opinions and interpretations. The Science™️ is unscientific, as it does not encourage dissenters’ radically different interpretations of the data.

#6
It’s important to recognise that scientific censorship of differing opinions may only be a last resort, because the formal assimilation of what is considered prestigious research is so powerful. Therefore, before my presentation tackles academic cybermobs, the ‘safe’, incremental knowledge that Higher Education’s funders and leadership support must be critiqued. In media, a few powerful social agents can effectively control research capacity by only funding research directions that serve their business interests. At the same time, false ethical concerns can be used to delimit what’s good to research in a field. TNF’s research beneficiaries have plenty of experience with submitting research proposals that are repeatedly blocked because of the dogma that “eating fat is harmful”, so participants who are urged to do so (and eat less “healthy” sugar in highly processed foods) WILL BE harmed. Ethical compliance in academia can serve unethical ends in slowing, if not preventing, competition between paradigms. Another concern is that conflicts of interest in supporting the silent long-term interests of third-party funders are often undisclosed. Mr Bill Gates is much wealthier now than when he promised to give away his fortune… in part thanks to the BMGF’s philanthropic support for genetic vaccine research and Mr Gates’ investments in the companies that sell this product. At worst, academic research can be likened to a buyer’s market for real estate, in which funders as buyers strongly dictate the agenda. In the Health Sciences, huge competition exists between many researchers keen to secure scarce funding from the few large funders who might provide it.

#7
Such public critical reflection on how funders impact academic freedom is disincentivized as career-limiting for most academics. For PANDA, TNF and any research beneficiary, Bourdieusian epistemic reflexivity can provide a vital tool for interrogating their own scientific inquiry. The concept of reflexivity helps to spotlight how scholars make judgements about which research problems to focus on and what gets excluded… Perhaps a dominant paradigm can be identified that is a restrictive gatekeeper to new challengers, or a relativist ‘anything goes’ approach is missing the wood for the trees? Pierre Bourdieu’s relational critique helps us see how our own and other scholars’ interpretations link not just to agents in disciplinary fields, but are structured in relation to broader, dynamic social patterns and causalities.

#8
For example, the sociology of scientific knowledge helps us understand why economic capital is foundational to developing the other academic capitals shown here. Economic capital from donors (or long-term knowledge investors like Mr Gates) not only supports the fieldwork, outputs, academic relationships and prestige in different types of capital exchange within the Higher Education field, but contributes to the ongoing development of academic fields and what’s considered legitimate and most valuable in them. Likewise, it shapes what is neglected or ignored, as shown in the example on the right. That may range from low-cost COVID-19 treatments… to the related deprioritisation of other major health concerns, such as HIV, TB and Malaria in Africa, as described by Dr David Bell. Capital exchange also helps situate whether symbolic recognition for research is high (such as for mRNA innovation), low or non-existent.

#9
Clearly, a complex inter-relationship of extant (and future) capitals is at work in Higher Education (HE) relationships. They typically underpin a “safety first” knowledge landscape in which the ideas on the right are endorsed as unquestionably “beneficial”. It is in the interest of The Science™’s business funders to maintain such a beneficent impression, with HE experts serving as a bulwark of talking heads versus “science deniers”. The Science™ favours an absence of scientific controversy in HE. This absence suggests universal expert consensus, and that there is no need to consider new explanations as the truth is settled. Where debates do occur, Scientific Controversy research methods {such as Venturini and Munk’s ‘Controversy Mapping’ (2022)} can be applied. These help scholars frame the actors, their networks and alliances, plus the debates themselves. However, an economic focus on which viewpoints receive funding and develop the strongest capacity would seem the most useful avenue for a sociology of scientific knowledge to develop a holistic picture of which scientific explanations are routinely supported in Higher Education… and, conversely, which promising ones receive no support (or are incapacitated) as Undone Science. These contrasts evidence potential areas of scientific suppression.

#10
There is important research that could be done, but is not encouraged by the dominant orthodoxy of The Science™. Undone Science exists where research projects’ potential findings may run counter to The Science™’s funders’ publicity and other interests. For example, international health organisations are unlikely to fund communication studies into how the guidelines they endorsed and paid to amplify caused harms that outweighed overstated risks and inflated benefits. Similarly, multinational genetic vaccine manufacturers will not fund research regarding personal responsibility and low-cost treatments… Related findings may pose an existential threat to Big Pharma’s businesses, especially those that profit from experimental drugs being mandated and tested on so-called “patients” at their own risk.

#11
In contrast to ‘undone science’, scientific suppression speaks to the impedance of research in ways that are unfair, unjust and counter to academic standards of behaviour. In theory, academics should enjoy the right to contest the prescribed orthodoxy in their academic work and lives. This right seeks to protect academics from the vested interests of other parties, giving those who’ve earned it an opportunity to speak their truth. It’s a foundational right that should support scholars in advancing and expanding knowledge, for example by accommodating diverse voices. Without strong support for this right, scientific autonomy is unsustainable, as funders’ and administrators’ needs subsume independent scholarship.

#12
True scientific autonomy poses a risk to the powerful, especially where its findings suggest an improved, alternative way of doing things (such as eating low-carb diets to control diabetes, versus solely injecting insulin daily). So, in our contemporary marketized universities, which increasingly rely on corporate funding, the on-the-ground financial realities of pleasing long-term funders will contradict the ideals of autonomy, objectivity and free speech. Powerful internal and external groups do not support building capacity for risky research or controversial debates that might upset powerful funders. Rather, they fund incremental research in support of more-of-the-same. ‘Revolutionary technology’ mRNA products simply boost Big Pharma’s existing business models. Embedded academics are keen to create debates on the importance of mandatory vaccination, rather than on whether the mRNA platform is sufficiently tested to merit being termed a ‘vaccine’. Aware that there is no equal treatment or due process, especially for dissidents with a public following, skeptics protect their reputations and career trajectories by self-censoring, avoiding the time-drain of debating The Science™’s truth. If academics or students have challenging conversations, these may be policed for “wrongthink”, leading to career cancellation, especially if their pursuit of objective scientific truth conflicts with the “current thing”. A university may promote “safe spaces”, but these seldom include research into controversial ideas that must confront complex ethical challenges.

#13
The market university is just one site of knowledge production in which social groups try to dominate the development of educational knowledge. Professor Henry Kwok et al. argue that the global health crisis of COVID-19 presents…a fertile ground for exploring the complex division of knowledge labour in a ‘post-truth’ era. In contrast to post-truth, which has many definitions and a broad conceptualisation, knowledge production is positioned as a narrow concept well suited to exploring the social conditions of knowledge. This slide’s example shows three ‘fields’ under Rules: each transformation of knowledge takes place in a particular field (see Table 1), within which different expert agents work. Discourse is produced in HE by a range of agents. A process of pedagogisation then occurs, in which specialist medical knowledge that is inaccessible to the public is recontextualised. Knowledge becomes translated into novel forms that non-specialist audiences can access and understand more readily. What counts as ‘valid’ knowledge and practice in the division of knowledge labour is determined by evaluative rules. Here, government officials decide how COVID-19 policy should impact the public in response to guidance from experts. This analysis clarifies that researchers should explore the relations between and within each division’s fields. Such analysis reveals areas of contradiction and conflict between fields, and even between agents within them.

#14
Contradictions occur between agents and agencies with different interests, which are directed by and reflected in their divergent goals. An analysis of these contradictions is helpful for broadening our understanding of where ‘post-truth’ moments lie… This example of the WHO’s infodemic research agenda illustrates examples of disinformation that the agenda might miss or neglect. As Dr David Bell, my father and I wrote, the WHO leads the infodemic research agenda and positions itself and its international health organisation partners as evaluators of what “misinformation” is. This has the potential to create an intragroup contradiction when infodemic scholars at universities research the WHO’s decisions, but learn that these and related guidance have shifted dramatically, sometimes with no clear justification!

#15
For example, Abir Balan’s work here lists the key guidelines provided by the WHO for ‘mitigating the risk and impact of epidemic and pandemic influenza’. However, a cursory glance shows that the public health measures applied in 2019 would be radically altered just months later. Scholars who are dependent on research funding from the WHO (or those whose funding sustains it, such as BMGF) would seem unlikely to criticise such sudden and unexplained shifts in guidance.

#16
Conventional division of knowledge labour diagrams place the tertiary academic field as the leader of discourse production. By contrast, the division for mRNA vaccine research (see Table 3) highlights how companies manufacturing vaccines drive contemporary research and the distributive rules in knowledge labour. Only wealthy pharmaceutical companies have the financial and other resources to drive mRNA research at scale and at warp speed. This of course creates a massive conflict of interest because whether the company producing these therapies will ultimately benefit financially from the future sales of these therapies depends entirely on the published efficacy and safety results from their own research! Another contradiction exists between the deliberation and recontextualisation fields, where vaccine-manufacturing pharmaceutical companies can use their large online advertising budgets to influence content on digital platforms and fact-checking. For example, dissident health professionals and academic scholars who promoted personal responsibility faced censorship, not just on campus and by medical authorities, but also on the most popular social media platforms (such as Facebook and Twitter).

#17
Such censorship on digital platforms is an important concern, since social media platforms have enabled dissident experts to network their expertise and launch conventional science projects that evolved from anecdotes into published research. As Holmberg’s scholarship shows, online low carb high fat diet advocacy was very important for contesting the flawed nutritional guidelines of the National Swedish Food Agency. This raised political awareness around low carb diets and provided vital opportunities to contest the nutritional authorities with academic research that helped to change Sweden’s nutritional guidelines.

#18
Professor William H Dutton argues in his recent The Fifth Estate book that social media platforms now form part of a Fifth Estate. In a recent email to the Association of Internet Researchers, he describes how his book ‘makes a case for the internet and related media and communication technologies enabling the most important power shift of the digital age. A network power shift has been driven by enabling ordinary people to search, originate, network, collaborate, and leak information in ways that enhance their informational and communicative power. In such ways, the internet is empowering many ordinary individuals to form a more independent collectivity of networked individuals—a Fifth Estate. This network power shift enables greater democratic accountability, whilst empowering networked individuals in their everyday life and work.' This suggests these platforms’ importance in how digital content creators generate and share news that digital publics amplify via networked affordances.

#19
Professor Holmberg is one of very few scholars who have written about how dissident scientists have successfully exercised digital voice to change both science and government guidelines. There is a large gap concerning empirical research into scientific censorship. We do know that it has two forms: hard and soft. Authorities try to prevent dissemination with the former, or pressurize dissidents with threats of reputational damage and exclusion from their fields of knowledge production. With dissenters’ digital voices emerging as a potent force for creating social movements via the powerful Fifth Estate, authorities’ desire to exercise censorship via Big Tech’s social media platforms is an emergent reality.

#20
Naim and Bennett (2014) proposed this 21st-century censorship matrix for government influence on the production and dissemination of information and opinion. Such censorship can be obvious, being direct and visible. In contrast, it may be hard to spot when stealthy and/or indirect.

#21
This is a similar matrix for what has been evidenced against low-carb scholars, from Australasia to South Africa, and the USA to Scandinavia. Various role-players aim to prevent research and teaching into the insulin resistance model inside academia, and to create the perception amongst online audiences that the science behind LCHF is illegitimate, unscientific and promoted by self-serving charlatans.

#22
Senior scientific dissidents with a public following will be a lightning rod for such attacks, since their position highlights that the science is not settled. My father, Emeritus Professor Noakes, has made a major contribution to his institutional employer over a long academic career. He shifted to a low-carb, or Banting, lifestyle in 2011 and shared its benefits, which include supporting the reversal of diabetes.

#23
Big Food's heavily processed food businesses and insulin-pushing Big Pharma want to limit the public’s attention to low-carb science, as it threatens their profits. Such powerful companies can support these strategies for breaking the causal chain between Prof Noakes’ information dissemination and individuals’ willingness to act. In Prof Noakes’ case, they could support critics who sought to delegitimise his research journal publications and the books he wrote on low carb; pseudo-skeptics questioned his credibility and that of "Tim Hoax"’s associates in a myriad of publications. Their attacks could involve many forms of cyber harassment.

#24
Danielle Citron’s excellent book, ‘Hate Crimes in Cyberspace’, provides this common definition for harassment on page 124: ‘Harassment is typically understood as a willful and malicious ‘course of conduct’ directed at a person that would cause a reasonable person to suffer substantial emotional distress and that does cause the person to suffer distress.’

#25
The term ‘cyber harassment’ is necessary, as Citron points out, for describing how the reach and pervasiveness of the internet can exacerbate the injuries that targets suffer. In cyber harassment there is an interesting paradox: the texts, images, sounds and videos shared by cyber harassers seem banal and trivial, BUT the impact of this content can actually threaten families, careers and lives! Repeated privacy invasions, threats of violence and attacks on a target’s reputation may sabotage their professional and family lives and future opportunities… and even lead to suicide, or its recipient “going postal”.

#26
Fortunately, Professor Noakes is very tough and has survived nearly a decade of such cyber harassment with such experiences shared in this partially pseudonymized case.

#27
There are many activities that the perpetrators of cyber harassment can pursue… All should be regarded seriously, as they can result in emotional, physical and professional harm to their targets. Using Tim’s example, I’ll talk through two threats that you may be unfamiliar with. The first is ‘Google bombing’, in which a search engine results page is gamed to elevate the rankings of negative and destructive pages… In this case, if you search for Prof Noakes’ ‘Lore of Nutrition’ book, what first appears is a biased, negative review from a pediatrician, versus the many positive reviews that this book earned. A technical explanation for the review’s high ranking is certainly not its quality… rather, that it is cited on Wikipedia, which Google deems a credible source. The second threat is “digital pariah” profiles created through the choices of Wikipedia’s and Rationalwiki’s editors. Such “crowdsourced” profiles are strongly shaped by an anti-dissident editorial bias. Remember that when next you are asked to donate by the “independent” Wikipedia.

#28
Digital pariah profiles and Google bombs are digital extensions of academic workplace mobbing techniques. Academic mobbing seeks to eject scholars from academia, involving aggressive techniques of ostracization. Recontextualising an A1-rated scientist’s career and books as flawed is clearly an example of this. Unlike organic trolling from complete strangers, colleagues in an academic cybermob can launch concerted attacks. This makes academic cybermobbing a distinct, emergent threat.

#29
Dissidents voicing support for the Insulin Resistance model and offering low-carbohydrate advice communicate in diverse issue arenas, ranging from the model’s science to the lifestyle’s impact on agriculture. These are PR areas in which corporations, institutions and their employees have high stakes. Dissidents may attract direct and indirect criticism from any of these agents concerned about such issues.

#30
What was notable in Prof Noakes’ case was the vast number of South African and International bodies who had stakes in challenging his opinions. Health organisations and academic institutions may also become involved in correspondence.

#31
The criticisms from work colleagues shown here would seem unethical and unacceptable in most workplaces. They also create a problem for the recipient: how to respond appropriately to such criticism with no germane, successful examples to follow. Here we see the types of slurs that hypercritical interlocutors used on Twitter and elsewhere in arguing that Professor Tim Noakes had morphed into a dangerous “anti-science” hack.

#32
Academic cybermobbing differs from workplace mobbing, which is defined as an embodied, covert process inside a university employee’s faculty department. Here are 16 key points covering the ways in which academic cybermobbing can be worse. In particular, the network of attacking groups and individuals is often visible, making it easy to jump on the bandwagon. Sensationalist criticism is encouraged by digital platform algorithms that reward controversy with attention: cybermobs drive circles of outrage that contribute to spiralling cyber harassment. A dissident can easily exhaust him- or herself trying to respond to many phases of criticism from different groups, on many platforms, across different timezones. And there may well be no institutional recourse against colleagues whose freedom of speech ironically undermines that of dissidents. Overall, there is complete asymmetry between a dissident’s capability to respond and critics’ myriad opportunities for attack.

#33
The agents in an academic cybermobbing can also differ from those in academic mobbing. While the latter will have private orchestrators and supporters who are all academics in a shared field, an academic cybermobbing can involve recontextualisers from other fields; for example, public criticism from trolls keen to siphon off a dissident’s public viewership. As a public spectacle, it is also concerning for dissidents when their colleagues simply act as witnesses and bystanders to cyber harassment.

#34
Participating in criticism of dissidents can also be used for capital exchange. Pseudo-skeptics can gain a hypervisibility as thought leaders that they could not otherwise achieve without holding PhDs and making real contributions to academia. Likewise, they reap symbolic capital in terms of the numbers of followers they attract. Dogmatists can earn social capital bridging them to new groups, plus their defence of the orthodoxy may reap rewards ranging from content payments to securing better academic positions.

#35
While I use negative terms such as dogmatist and pseudo-skeptic, it is important to keep in mind that the academic defenders of The Science™ readily justify their censorship activities as well-meaning, benevolent for their peers and the public, and pro-social overall for human wellbeing.

#36
Overt censorship tactics are not only applied by platforms, but can be requested and applied by defenders of The Science™ in certain instances. For example, mobs can launch matrix attacks to deplatform their targets. Dissidents can be reported for being in breach of platform safety rules… such as Twitter’s pre-Musk COVID-19 communication policy.

#37
Digital platforms have many covert censorship mechanisms that can be used to stifle free speech. Around 24 of them are listed on this slide and the next…

#38
While academic cybermobs may not be responsible for such tactics, they certainly may take action to promote systematic censorship of dissidents' assumed 'misinformation' to prevent its presumed harms. This may itself cause serious harm by serving as scientific censorship that suppresses accurate information, for example by supporting a fake consensus behind dysfunctional interventions.

#39
Censorship of this kind is hard to research, and there are many obstacles to researching academic cybermobs even if one can find scarce funding. These include: the lack of a strong rationale or prior examples to follow; having to access data under highly restrictive research user agreements; much missing data (cyber harassment from private, deleted or banned accounts is not provided via APIs); data structures that make it hard to track key analytical foci (e.g. 'healthy conversation' threads on X); challenges in cleaning the data and representing the original users' experience (spreadsheet data versus multimodal tweets); and important ethical challenges in researching colleagues' anti-social activities and producing research outputs from them!
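Two of these obstacles — missing data from deleted or banned accounts, and data structures that obscure conversation threads — can be made concrete with a small sketch. The snippet below is purely illustrative: the field names (`id`, `in_reply_to_id`, `text`) are assumptions for a flattened tweet export, not the actual X/Twitter API schema. It rebuilds reply threads from such a table and shows how a single deleted tweet leaves an orphaned reply whose thread can no longer be reconstructed in full.

```python
# Illustrative sketch (hypothetical field names, not the real X API schema):
# rebuilding reply threads from a flattened tweet export, where deleted or
# banned accounts leave gaps that orphan parts of a conversation.
from collections import defaultdict

def build_threads(tweets):
    """Group tweets into root -> replies threads. Replies whose parent is
    missing from the dataset (e.g. a deleted account) become 'broken' roots."""
    by_id = {t["id"]: t for t in tweets}
    children = defaultdict(list)
    roots = []
    for t in tweets:
        parent = t.get("in_reply_to_id")
        if parent and parent in by_id:
            children[parent].append(t["id"])
        else:
            roots.append(t["id"])  # true root OR orphan with a missing parent

    def walk(tid, depth=0):
        # Indent each reply under its parent to show thread structure.
        lines = [("  " * depth) + by_id[tid]["text"]]
        for cid in sorted(children[tid]):
            lines.extend(walk(cid, depth + 1))
        return lines

    return {root: walk(root) for root in roots}

# Toy data: tweet "3" replies to tweet "2", which is absent (deleted account),
# so "3" surfaces as a broken root rather than part of its real thread.
tweets = [
    {"id": "1", "in_reply_to_id": None, "text": "Dissident's claim"},
    {"id": "3", "in_reply_to_id": "2", "text": "Pile-on reply"},
    {"id": "4", "in_reply_to_id": "1", "text": "Critic's reply"},
]
threads = build_threads(tweets)
```

The orphaned tweet "3" illustrates the missing-data problem: the researcher can see a pile-on reply but not what it was piling onto, so the harassment episode it belongs to cannot be fully reconstructed from the API data alone.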

#40
One challenging, but less ethically fraught, proposition is to explore the activity of cybermobs outside academia. For example, The Noakes Foundation, Younglings Africa and the SMILR lab support PhD candidate Pinky Motshware in studying celebrity cybermobs, whose attacks have led to life-changing outcomes for the local black male celebrities she is preparing case studies on.

#41
Thank you for your attention and continued support.
