
Wednesday, 9 April 2025

Wanted - Fair critics of 'Promoting Vaccines in South Africa: Consensual or Non-Consensual Health Science Communication?'

Written for health science communication researchers concerned with genetic vaccination promotion being non-consensual and a form of propaganda.


Since June 2023, Dr Piers Robinson, Dr David Bell and I have submitted the titular manuscript to nine journals, receiving no substantive reviews and many desk rejections without solid explanation. This is despite our journal search from 2024 focusing on seemingly suitable journals that met three criteria: tackling (i) health communication and (ii) propaganda, and (iii) having previously shared controversial articles questioning the official COVID-19 narrative. Since we cannot identify any viable new targets, we have decided to share our manuscript as a pre-print on SSRN and ResearchGate. We hope that readers there can at least offer solid, constructive criticism of our work.


As scholars know, every journal submission can take many hours of preparing the related documentation, formatting the manuscript to a journal's stylistic specifications, and so on. To compensate for such lengthy academic labour, authors might reasonably expect editorial teams to be highly ethical in providing detailed reasoning behind desk rejections. Where there is a strong pattern of such feedback being absent or poor on controversial topics, dissident authors may justifiably perceive that they are negotiating an academic journal publication firewall. Why would editors be reluctant to go on record with their reasons for desk rejection, if those reasons are indisputable? Even when editorial staff's feedback is highly critical, it is still constructive for authors: they can then completely revise their manuscript for submission to new journals. Or perhaps save time by confronting the reality that their manuscript's weak or non-existent contribution means it must be abandoned!


Our frustration with not receiving constructive criticism is similar to accounts from many other dissenters against the official COVID-19 narrative. Notably, Professors Bhattacharya and Hanke (2023) documented dissidents’ censorship experiences via popular pre-print options. And Professor Norman Fenton (in Fighting Goliath, 2024) and Dr Robert Malone (in PsyWar, 2024) provide compelling accounts of shifting from being welcome journal authors and conference speakers to being unpublishable for any manuscript critical of COVID-19 statistics or treatment policies. Given their high levels of expertise and long publication records, such experts seem unlikely to have produced fallacious research unsuited to peer review.


Our would-be journal article tackles an important, albeit controversial, question: how might pharmaceutical or medical propaganda be distinguished from health communication? South Africa's (SA) case of COVID-19 genetic vaccine promotion is described to show how incentivization, coercion and deceptive messaging approximated a non-consensual approach, preventing informed consent for pregnant women. In terms of generalisability, this case study can be described as a hard case: given the status of pregnant women as perhaps the most vulnerable and protected category in society, one expects health communicators to be extremely cautious about adopting non-consensual methods of persuasion. We show that such methods were nonetheless adopted in South Africa, making it more likely that similar tactics were used on other, less vulnerable groups.


In desk-rejecting our work, editors and reviewers may well have thought that evaluating persuasive communication in terms of whether or not it is deceptive and non-consensual is not, in some sense, a legitimate research question. In stark contrast, as Dr Piers Robinson argues (at the end of this LinkedIn thread), our research question is indeed 'an essential part of evaluating whether any given persuasion campaign can be said to meet appropriate ethical/democratic standards. With the attention to fake news and disinformation, there is in fact much in the way of scholarly attention to questions of deceptive or manipulative communication. So we are not asking a question that is not asked by many others and across a wide range of issue areas. And we utilised a conceptual framework developed and published elsewhere.'


Another concern may be that our manuscript is "biased" to 'reach a predetermined outcome'. This ignores the possibility that our work could have found no evidence of deceptive communication, and none for incentivization. However, the evidence presented does strongly support a major concern that pregnant women were incentivised, deceived and coerced into taking (poorly-tested) genetic vaccines (whose side-effects are also poorly tracked). In the absence of detailed editor rejection feedback, it is hard for us to improve our argument for a hoped-for peer review that is fair.


It's also important to acknowledge the context in which our paper was written, which is of considerable scientific concern over the COVID-19 event. Notably, rushed guidance based on weak evidence from international health organisations could well have perpetuated negative health and other societal outcomes, rather than ameliorating them (Noakes, Bell & Noakes, 2022). In particular, health authorities rushed approval of genetic vaccines as the primary response, and their "health promotion" seems a ripe target for robust critique. Particularly when successful early treatments were widely reported to have been suppressed so that Emergency Use Authorisation for genetic vaccines could be granted (Kennedy, 2021).


An unworthy topic?


Our negative experience of repeated, poorly (or un-) explained rejections would seem to suggest that presenting South Africa's case of COVID-19 genetic vaccine promotion as pharmaceutical/medical propaganda was not worthy of academic journals' review, even by those promising to tackle scientific controversies and challenging topics.


Not unexpectedly, SSRN removed our pre-print after a week, providing the following email rationale: 'Given the need to be cautious about posting medical content, SSRN is selective on the papers we post. Your paper has not been accepted for posting on SSRN.' So, no critique of the paper's facts or methods, just rapid removal of our COVID-19 "health communication" critique. In SSRN's defence, its website's FAQs do flag that 'Medical or health care preprints at SSRN are designed for the rapid, early dissemination of research findings; therefore, in most instances, we do not post reviews or opinion-led pieces, as well as editorials and perspectives.' So perhaps the latter concern was indeed the most significant factor in SSRN's decision... But with no specific explanation of its rationale, it is also possible that our critique of COVID-19 "health science communication" weighed more heavily with human decision makers. Alternatively, an Artificial Intelligence agent wrote the rejection email, triggered by our sensitive keywords: COVID-19 + propaganda = a must-reject routine.


A history of a manuscript's rejection in one image


We acknowledge that the initial submissions of our manuscript may well have been out-of-scope for the preliminary journals, or outside of the particular contributions to knowledge that they consider. 

Figure 1. Nine journals that rejected 'Promoting Vaccines in South Africa' (2025) 

Over two years, we also refined our manuscript to focus narrowly on 'non-consensual Health Science Communication' rather than propaganda. While the latter term is accurate, we recognised that it could be too contentious for some editors and reviewers, so we revised the initial title. Our analysis was clearly bounded to describe the ways in which non-consensual persuasion tactics were employed in South Africa to promote uptake of the COVID-19 vaccines. Of the several vulnerable categories (such as teenagers), we decided to focus on pregnant women, or women wanting to become mothers. We explored the local incentives and coercive measures (both consensual and non-consensual) that were used in South Africa during the COVID-19 event. Our manuscript then critiqued deceptive messaging on the safety of the Pfizer BioNTech Comirnaty® vaccine in a Western Cape government flyer. We also examined the South African Health Products Regulatory Authority's vaccine safety monitoring and reporting of adverse events following immunisation (SAHPRA AEFI) information, contrasting how it does (or does not) report on outcomes for women's health with the Vaccine Adverse Event Reporting System (VAERS SA). If there is a methodological flaw in this approach, we are open to suggestions on improving it.

That said, there are some suggested changes that we would like an opportunity to argue against. For example, our title might be criticised for not addressing harms to "pregnant people". However, following such advice would distract from how genetic vaccines have proven especially damaging to biological females. Likewise, our definition of "health science communication" can be criticised as a narrow one, especially for South Africa's myriad health contexts. While this is true and we should acknowledge this limitation, we must also prioritise what is core to focus on within a 10,000-word limit. Expanding our focus to include a broad view of science communication in SA would inevitably require the removal of evidence related to the Organised Persuasive Communication framework's consensual versus non-consensual aspects. This would distract from our paper's core focus.


The demands above may well be intended to create a more 'open minded' and 'less binary' paper. Nonetheless, should they be the primary reason for desk rejection, they actually serve to undermine the broader academic discourse. Particularly the contribution our critique can make in supporting consideration of what constitutes genuine health communication in public health emergencies. Our paper's departure from a "progressive" imperative in its title and focused concepts should not trump its potential role in catalysing valuable discussions around medical/pharmaceutical propaganda. Especially around the consequences of health communications from SA authorities being deceptive, and potentially ill-suited for supporting informed consent. When combined with hefty financial reward incentives, and the coercion of losing one's livelihood, it seems irrational to argue against a non-consensual approach's existence. One threatening pregnant women, their foetuses and babies. Surely this warrants concern from academia, as it is the opposite of genuine health communication via persuasion that allows for free and informed consent?!


The inspiration for our original manuscript


The original paper was drafted for a special issue of the Transdisciplinary Research Journal of Southern Africa, focusing on ‘Efficacy in health science communication in a post-pandemic age: Implications for Southern Africa’. In a small way, our review article was inspired as a critique of two assumptions in the opening paragraph of the special issue's call for papers: (1) 'Much of the broad population and indeed more of the intelligentsia than one would imagine arguably remain to a greater or lesser degree sceptical of science' and (2) 'widespread suspicion of the origin of the virus seemingly fuelled by conspiracy theories, and of surprising levels of vaccine hesitancy voiced in a range of guises.'


In the first place, there is a difference between science and following The Science™ from a transglobal vaccine cartel. Individuals or groups did have sound scientific grounds to reject genetic vaccination. Indeed, individuals with PhDs were the most likely to reject being "vaccinated" with a rushed and poorly-tested product. Secondly, the theory that COVID-19 emerged from the Wuhan lab is not a "conspiracy theory", but just one of four possible explanations (the others being zoonotic, or animal-to-human, origins; a deliberate bio-weapon release; or a prior endemicity ‘discovered’ by an outbreak of testing).


To flag the danger of assumptions such as (1) and (2) being presented as "fact", our review originally sought to spotlight a major, but neglected, issue in the health communication field: what is pharmaceutical propaganda, and how does it differ from health communication? Media studies and health communication scholars should exercise hyper-reflexivity in considering how the communications they study typically emerge in an externally directed field. Their field's solutionist emphasis is often driven by powerful external groupings’ motives, such as those of national government departments or multinational pharmaceutical companies. Such actors can be incentivised to manipulate messaging for reasons other than the simple concern to protect the public's wellbeing during a perceived crisis or emergency.


Our reflexive article was originally rejected without explanation by one of the special issue’s editors. I have tweeted about how such behaviour is unacceptable, plus how AOSIS could update its policy to specify that an editor must provide explicit feedback on the reasons for desk rejection. This would meet COPE’s guideline that editors meet the needs of authors. Otherwise, rejected authors might suspect that an AOSIS journal is not championing freedom of expression (but rather practicing scientific suppression), and is not precluding business needs (e.g. pharmaceutical support) from compromising intellectual standards. Tackling the danger of “successful” communications for dangerous pharmaceutical interventions as pharmaceutical propaganda is important, particularly given the rise of health authoritarianism during a “pandemic”.


Constructive criticism, plus new journal targets welcome?


We believe that our topic of how incentivization, coercion and deceptive COVID-19 messaging approximates a non-consensual approach is highly salient. Without sound rationales for the rejections of our paper, academic social networks seem the most promising fora for receiving constructive criticism. Drs Robinson, Bell and I welcome such feedback. Kindly also let me know in the comments below should you know of a health communication journal that supports COVID-19 dissent, champions academic freedom and would be interested in giving our submission a fair review?


Future research


Dr Robinson & I are collating the accounts of prominent health experts who have described negotiating an academic journal publication firewall. There is an opportunity to formalise research into the problems of censorship and bias during COVID-19, documenting case studies and further evaluating what this tells us about academia. We will work on a formal research proposal that also includes developing an original definition for dissenters' 'academic journal publication firewall' experience(s).

Saturday, 29 March 2025

Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative

Written for researchers and others interested in the many techniques used to suppress COVID-19 dissidents' social media accounts, and digital voices.

There has been extensive censorship of legitimate, expert criticism during the COVID-19 event (Kheriaty, 2022; Shir-Raz et al., 2023; Hughes, 2024). Such scientific suppression makes visible the narrow frame within which the sponsors of global health authoritarianism permit questioning of The Science™. In contrast to genuine science, which innovates through critique, incorporated science does not welcome questioning. Like fascism, corporatist science views critiques of its interventions as heresy. In the COVID-19 event, key opinion leaders who criticised the lack of scientific rigour behind public health measures (such as genetic vaccine mandates) were treated as heretics by a contemporary version of the Inquisition (Malone et al., 2024). Dissidents were accused of sharing "MDM" (Misinformation, Disinformation and Malinformation) assumed to place the public's lives at risk. Particularly in prestigious medical universities, questioning the dictates of health authorities and their powerful sponsors was viewed as unacceptable, falling completely outside an Overton Window that had become far more restrictive due to fear-mongering around a "pandemic" (see Figure 1).




Figure 1. Narrowed Overton Window for COVID-19. Figures copied from Dr Joseph Fraiman (2023, pp. 137-138). The dangers of self-censorship during the COVID-19 pandemic. In R. Malone, E. Dowd, & G. Fareed (Eds.), Canary In a Covid World: How Propaganda and Censorship Changed Our (My) World (pp. 132-147). Amazon Digital Services LLC - Kdp.


Higher Education is particularly susceptible to this groupthink, as it lends itself to a purity spiral, which in turn contributes to the growing spiral of silence around "unacceptable views". A purity spiral is a form of groupthink in which it is more beneficial to hold certain views than not to hold them. In a process of moral outbidding, individual academics with more extreme views are rewarded. This was evidenced at universities where genetic vaccine proponents loudly supported the mandatory vaccination of students, despite students facing minimal, if any, risk. In contrast, scholars expressing moderation, doubt or nuance faced ostracism as "anti-vaxxers". Universities also harbour strong social-conformity pressures within their tight-knit communities: grants, career support and other forms of institutional backing depend on collegiality and alignment with prevailing norms. Being labelled a contrarian for questioning a ‘sacred cow’, such as "safe and effective" genetic vaccines, is likely to jeopardise one’s reputation and academic future. Academic disciplines coalesce around shared paradigms and axiomatic truths, routinely amplifying groupthink. Challenging reified understandings as shibboleths can lead to exclusion from conferences and journals, and cost scholars departmental, faculty, and even university support. Particularly where powerful funders object to such dissent!


Here, administrative orthodoxy can signal an “official” position for the university that chills debate. Dissenters' fears of isolation and reprisal (such as poor evaluations and formal complaints for not following the official line) may convince them to self-censor. Particularly where a nonconformist assesses that the strength of opinion against his or her view is virulent, and that the costs of expressing a disagreeable viewpoint are high, such as negotiating cancellation culture. Individuals who calculate that they have a low chance of convincing others, and are likely to pay a steep price, self-censor and contribute to the growing spiral of silence. The COVID-19 event serves as an excellent example of this growing spiral's chilling effect on free speech and independent enquiry.


COVID-19 is highly pertinent for critiquing censorship in the Medical and Health Sciences, particularly as it featured conflicts of interest that shaped global health "authorities'" policy guidance. Notably, the World Health Organisation promoted poorly substantiated and even unscientific guidelines (Noakes et al., 2021) that merit being considered MDM. In following such dictates from the top policy makers of the Global Public-Private Partnership (GPPP or G3P), most governments' health authorities seemed to ignore key facts. Notably: i. COVID-19 risk was steeply age-stratified (Verity et al., 2019; Ho et al., 2020; Bergman et al., 2021); ii. prior COVID-19 infection can provide substantial immunity (Nattrass et al., 2021); iii. COVID-19 genetic vaccines did not stop disease transmission (Eyre et al., 2022; Wilder-Smith, 2022); iv. mass-masking was ineffective (Jefferson et al., 2023; Halperin, 2024); v. school closures were unwarranted (Wu et al., 2021); and vi. there were better alternatives to lengthy, whole-society lockdowns (Coccia, 2021; Gandhi and Venkatesh, 2021; Herby et al., 2024). Both international policy makers' and local health authorities' flawed guidance must be open to debate and rigorous critique. If public health interventions had been adapted to such key facts during the COVID-19 event, the resultant revised guidance could well have contributed to better social, health and economic outcomes for billions of people!


This post focuses on six types of suppression techniques that were used against dissenting accounts whose voices were deemed illegitimate "disinformation" spreaders by the Global Public-Private Partnership (G3P)-sponsored censorship industrial complex. This is an important concern, since claims that suppressing free speech's digital reach can "protect public safety" were proved false during COVID-19. A case in point is the censorship of criticism against employees' vaccine mandates. North American employers' mandates are directly linked to excess disabilities and deaths for hundreds of thousands of working-age employees (Dowd, 2024). Deceptively censoring individuals' reports of vaccine injuries as "malinformation", or automatically labelling criticism of Operation Warp Speed as "disinformation", hampered US employees' abilities to make fully informed decisions on the safety of genetic vaccines. Such deleterious censorship must be critically examined by academics. In contrast, 'disinformation-for-hire' scholars (Harsin, 2024) will no doubt remain safely ensconced behind their profitable MDM blinkers.


This post is the first in a series that spotlights the myriad account suppression techniques that exist. For each, examples of censorship against health experts' opinions are provided. Hopefully, readers can then better appreciate the asymmetric struggle that dissidents face when their accounts are targeted by the censorship industrial complex with many of these strategies across multiple social media platforms:


Practices for @Account suppression


#1 Deception - users are not alerted to unconstitutional limitations on their free speech


Social media users might assume that their constitutional right to free speech as citizens will be protected within, and across, digital platforms. However, global platforms may not support such rights in practice. No social media company openly discloses the extent to which users' accounts have been, and are being, censored for expressing opinions on controversial topics. Nor do these platforms explicitly warn users about what they consider impermissible opinions. Consequently, users are not forewarned regarding what may result in censorship. For example, many COVID-19 dissidents were surprised that their legitimate critiques could result in account suspensions and bans (Shir-Raz, 2022). Typically, such censorship was justified by Facebook, Google, LinkedIn, TikTok, Twitter and YouTube as responses to users' violation of "community rules". In most countries, freedom of speech is a citizen’s constitutional right that should be illegal to override. It should be deeply concerning that such protections were not supported in the Fourth Estate of the digital public square during the COVID-19 event. Instead, the supra-national interests of health authoritarians came to supersede national laws in order to prevent (unproven) harms. This pattern of censorship is noticeable in many other scientific issue arenas, ranging from criticism of man-made climate change to skeptics challenging transgender medical ideology.

#2 Cyberstalking - facilitating the virtual and physical targeting of dissidents


An individual who exercises his or her voice against official COVID-19 narratives can expect to receive both legitimate, pro-social criticism and unfair, anti-social attacks. While cyberstalking should be illegal, social media platforms readily facilitate the stalking and cyber-harassment of dissidents. An extreme example is Dr Christine Cotton's experience on LinkedIn. Dr Cotton was an early whistleblower (January 2022) against Pfizer's COVID-19 clinical trial's false claims of 95% efficacy for its treatments.
Her report identified the presence of bias and major deviations from good clinical practice. In press interviews, she reported that the trial did ‘not support validity in terms of efficacy, immunogenicity and tolerance of the results provided in the various Pfizer clinical reports that were examined in the emergency by the various health authorities’. Christine shared this report with her professional network on LinkedIn, asking for feedback from former contacts in the pharmaceutical industry. The reception was mostly positive, but the report and related posts were subject to rapid content takedowns by LinkedIn, ostensibly for not meeting community standards. At the same time, her profile became hyper-surveilled: it attracted unexpected visits from 133 lawyers, the Ministry of Defence, employees of the US Department of State, the World Health Organisation, and others (p. 142). None of these profile viewers contacted her directly.

#3 Othering - enabling public character assassination via cyber smears


Othering is a process whereby individuals or groups are defined, labelled or targeted as not fitting within the norms of a social group. It influences how people perceive and treat those viewed as part of the in-group versus those in an out-group. At a small scale, othering can result in a scholar being ostracised from their university department following academic mobbing and online academic bullying (Noakes & Noakes, 2021). At a large scale, othering entails a few dissidents on social media platforms being targeted for hypercriticism by gangstalkers.

Cyber gangstalking is a process of cyber harassment that follows cyberstalking, whereby a group of people target an individual online to harass him or her. Such attacks can involve gossip, teasing, bad-jacketing, repeated intimidation and threats, plus other fear-inducing behaviours. Skeptics' critical contributions can become swamped by pre-bunkers and fellow status-quo defenders. Such pseudo-skeptics may be sponsored to trivialise dissenters' critiques, thereby contributing to a fact choke against unorthodox opinions.

In Dr Christine Cotton's case, her name was disclosed in March 2022 in a list forming part of a French Senate investigation into adverse vaccine events. A ‘veritable horde of trolls seemingly emerged out of nowhere and started attacking’ her ‘relentlessly’ (p. 143). These trolls were interconnected through subscribing to each others’ accounts, which allowed them to synchronise their attacks. They attempted to propagate as much negative information about Dr Cotton as possible in a ‘Twitter harassment scene’. Emboldened by their anonymity, the self-proclaimed “immense scientists” with masters degrees in virology, vaccines, clinical research and biostatistics launched a character assassination. They attacked her credentials and work history, whilst creating false associations (“Freemasonry” and “Illuminati”).

This suggests how identity politics sensibilities and slurs are readily misused against renegades. In the US, those questioning COVID-19 policies were labelled “far right” or “fascist”, despite promoting a libertarian critique of healthcare authoritarianism! In addition, orchestrators of cybermobbing tagged dissidents' accounts as belonging to someone who is: 'anti-science', 'an anti-vaxxer', 'biased', 'charlatan', 'celebrity scientist', 'conspiracy theorist', 'controversial', 'COVID-19 denier', 'disgraced scientist', 'formerly-respected', 'fringe expert', 'grifter', 'narcissist with a Galileo complex', 'pseudo-scientist', 'quack', 'salesman', 'sell-out' and 'virus', amongst other pejoratives. Such terms are used as a pre-emptive cognitive vaccine whose hypnotic language patterns ("conspiracy theorist") are intended to thwart audience engagement with critical perspectives. Likewise, these repeatedly used terms help grow a digital pillory that becomes foregrounded in the pattern of automated suggestions in search engine results.

In this Council of the Cancelled, Mike Benz, Prof Jay Bhattacharya, Nicole Shanahan and Dr Eric Weinstein speculate about hidden censorship architectures. One example is Google's automated tagging of "controversial" public figures, which can feature automatically in major mainstream news articles about COVID-19 dissidents. This is not merely a visual tag, but a cognitive one: it marks "controversial" individuals with a contemporary (digital) scarlet letter.

In Dr Cotton's case, some trolls smeared her work raising awareness of associations for the vaccine-injured as helping “anti-vaccine conspiracy sites”. She shares many cases of these injuries in her book, and was amazed at the lack of empathy that Twitter users showed not just her, but also those suffering debilitating injuries. In response, she featured screenshots of select insults on her blog at https://christinecotton.com/critics and blocked ‘hundreds of accounts’ online. In checking the Twitter profiles attacking her, she noticed that many with ‘behavioural issues’ were close by. Dr Cotton hired a ‘body and mind’ guard from a security company for 24-hour protection. Her account was reported for “homophobia”, which led to its temporary closure. After enduring several months of cyber-harassment by groups, behaviour that can be severely punished under EU law, Dr Cotton decided to file complaints against some of them. Christine crowdfunded legal complaints against Twitter harassers from a wide variety of countries. This sought to work around cyber harassers' belief that anonymity shields them from lawsuits for defamation, harassment and public insults.

#4 Not blocking impersonators or preventing brandjacked accounts


Impersonators' accounts claiming to belong to dissidents can quickly pop up on social media platforms. While a few may be genuine parodies, others serve identity-jacking purposes. Some serve criminal ends, in which scammers use fake celebrity endorsements to phish "customers'" financial details for fraud. Alternatively, intelligence services may use brandjacking for covert character-assassination smears against dissidents.

The independent investigative journalist, Whitney Webb, has tweeted about her ongoing YouTube experience of having her channel's content buried under a fact choke of short videos created by other accounts:

Whether such activities come from intelligence services or cybercriminals, they are very hard for dissidents and/or their representatives to respond to effectively. Popular social media companies (notably META, X and TikTok) seldom respond quickly to scams, or to the digital "repersoning" discussed in a Corbett Report conversation between James Corbett and Whitney Webb.
 
In Corbett's case, after his account was scrubbed from YouTube, many accounts featuring his identity started cropping up there. In Webb's case, she has no public profile outside of X, yet accounts featuring her identity were created on Facebook and YouTube. "Her" channels clipped old interviews she did and edited them into documentaries on material Whitney has never publicly spoken about, such as Bitcoin and CERN. They also misrepresented her views on the transnational power structure behind the COVID-19 event, suggesting she held just Emmanuel Macron and Klaus Schwab responsible for driving it. They used AI thumbnails of her, and superimposed her own words from the interviews. Such content proved popular and became widely reshared via legitimate accounts, pointing to the difficulty dissidents face in countering it. She could not get Facebook to take down the accounts without supplying a government-issued ID to verify her own identity.


Digital platforms may be uninterested in offering genuine support: they may not take any corrective action when following proxy orders from the US Department of State (aka 'jawboning') or from members of the Five Eyes (FVEY) intelligence alliance. In stark contrast to marginalised dissenters, VIPs in multinationals enjoy access to online threat-protection services for executives (such as ZeroFox) that cover brandjacking and over 100 other cybercriminal use-cases.

#5 Filtering an account's visibility through ghostbanning


As the Google Leaks (2019), Facebook Files (2021) and Twitter Files (2022) revelations have spotlighted, social media platforms have numerous algorithmic censorship options, such as filtering the visibility of users' accounts. Targeted users may be isolated and throttled for breaking "community standards" or government censorship rules. During the COVID-19 event, dissenters' accounts were placed in silos, de-boosted, and also subject to reply de-boosting. Contrarians' accounts were subject to ghostbanning (also known as shadow-banning), a practice that secretly reduces an account's visibility or reach without explicitly notifying its owner. Ghostbanning limits who can see the posts, comments, or interactions. This includes muting replies and excluding targeted accounts' results under trends, hashtags and searches, and in followers' feeds (except where users seek a filtered account's profile directly). Such suppression effectively silences a user's digital voice, whilst he or she continues to post under the illusion of normal activity. Ghostbanning is thus a "stealth censorship" tactic linked to content moderation agendas.
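The visibility filtering described above can be sketched as a toy model. The Python below is purely illustrative: the `Surface` names and the single rule are my assumptions for demonstration, not any platform's actual moderation code.

```python
from enum import Enum, auto

class Surface(Enum):
    """Places where an account's content might appear (illustrative)."""
    SEARCH = auto()
    TRENDS = auto()
    FOLLOWER_FEED = auto()
    DIRECT_PROFILE = auto()

def is_visible(ghostbanned: bool, surface: Surface) -> bool:
    # Toy rule: a ghostbanned account is hidden everywhere except when
    # a viewer navigates to its profile directly -- which is why the
    # owner still posts under the illusion of normal activity.
    if not ghostbanned:
        return True
    return surface is Surface.DIRECT_PROFILE
```

Note how the account owner, who naturally views their own profile directly, sees nothing amiss, while their reach in search, trends and followers' feeds quietly drops to zero.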

This term gained prominence with the example of the Great Barrington Declaration's authors, Professors Jay Bhattacharya, Martin Kulldorff, and Sunetra Gupta. Published on October 4, 2020, this public statement and proposal flagged grave concerns about the damaging physical and mental health impacts of the dominant COVID-19 policies. It argued that focused protection should be pursued rather than blanket lockdowns, and that allowing controlled spread among low-risk groups would eventually result in herd immunity. Ten days later, a counter-statement, the John Snow Memorandum, was published in defence of the official COVID-19 narrative's policies. Mainstream media and health authorities amplified it, as did social media, given the memorandum's alignment with prevailing platform policies against "misinformation" circa 2020. In contrast, the Great Barrington Declaration was targeted indirectly through platform actions against its proponents and related content:


The Stanford Professor of Medicine Dr Jay Bhattacharya's Twitter account was revealed (via the 2022 Twitter Files) to have been blacklisted, reducing its visibility. His tweets questioning lockdown efficacy and vaccine mandates were subject to algorithmic suppression, with flags such as "Visibility Filtering" (VF) or "Do Not Amplify" applied to offending content. For instance, Bhattacharya reported that his tweets about the Declaration and seroprevalence studies (showing wider COVID-19 spread than official numbers suggested) were throttled. Journalist Matt Taibbi's reporting on the Twitter Files leaks confirmed that Twitter had blacklisted Prof Bhattacharya's account, limiting its reach due to his contrarian stance. YouTube also removed videos in which he featured, such as interviews in which he criticised lockdown policies.

The epidemiologist and biostatistician Prof Kulldorff observed that social media censorship stifled opportunities for scientific debate. He experienced direct censorship on multiple platforms, including shadowbans. Twitter temporarily suspended his account in 2021 for tweeting that not everyone needed the COVID-19 vaccine ('Those with prior natural infection do not need it. Nor children'). Posts on X and web reports indicate Kulldorff was shadowbanned beyond this month-long suspension. The Twitter Files, released in 2022, revealed he was blacklisted, meaning his tweets' visibility was algorithmically reduced. Twitter suppressed Kulldorff's accurate genetic vaccine critique, preventing comments and likes. Internal Twitter flags like "Trends Blacklisted" or "Search Blacklisted" (leaked during the 2020 Twitter hack) suggest Kulldorff's account was throttled in searches and trends, a hallmark of shadowbanning, where reach is curtailed without notification. Algorithmic deamplification excluded Prof Kulldorff's tweets from trends, search results and followers' feeds (except where users sought his profile directly). This reflects how social media companies may apply visibility filters (such as a Not Safe For Work (NSFW) view). Kulldorff also flagged that LinkedIn's censorship pushed him to platforms like Gab, implying a chilling effect on his professional network presence.


An Oxford University epidemiologist, Professor Gupta faced less overt account-level censorship, but still had to contend with content suppression. Her interviews and posts on Twitter advocating herd immunity via natural infection amongst the young and healthy were often flagged or down-ranked.


#6 Penalising accounts that share COVID-19 "misinformation"


In addition to ghostbanning, social media platforms could target accounts for sharing content on COVID-19 that contradicted guidance from the Global Public-Private Partnership (G3P)'s macro-level stakeholders, such as the Centers for Disease Control and Prevention or the World Health Organisation. In Twitter's case, it introduced a specific COVID-19 misinformation policy in March 2020, which prohibited claims about transmission, treatments, vaccines, or public health measures that the COVID-19 hegemony deemed "false or misleading." Such content either had warning labels added to it, or was automatically deleted:

Tweets with suspected mis-, dis- or malinformation (MDM) were tagged with warnings like "This claim about COVID-19 is disputed", or with labels linking to curated "fact-checks" on G3P health authority pages. This was intended to reduce a tweet's credibility without immediate removal, whilst also diminishing its poster's integrity.

Tweets that broke this policy were deleted outright after flagging by automated systems or human moderators. For instance, Alex Berenson's tweets questioning lockdown efficacy were removed, contributing to his eventual ban in August 2021. In Dr Christine Cotton's case, Twitter classified her account as "sensitive content". It gradually lost visibility with the tens of thousands of followers it had attracted. In response, she created a new account to begin 'from scratch' in August 2022. The Twitter Files revealed that such censorship was linked to United States government requests (notably from the Joe Biden administration and the Federal Bureau of Investigation). For example, 250,000 tweets flagged by Stanford's Virality Project in 2021 were removed by Twitter.

In March 2020, Meta expanded its misinformation policies to target COVID-19-related MDM. Facebook and Instagram applied content labelling and down-ranking, with posts allegedly featuring MDM being labelled with warnings (such as 'False Information' or 'See why health experts say this is wrong') that linked to official sources. Such posts were also down-ranked in the News Feed to reduce their visibility. Users were notified of violations and warned that continued sharing could further limit reach or lead to harsher action. In late 2021, down-ranking was also applied to "vaccine-skeptical" content that did not explicitly violate rules but could potentially discourage vaccination. Posts violating policies were removed outright.

LinkedIn's smaller, professional user base and lower emphasis on real-time virality led it to prefer the outright removal of accounts over throttling via shadow-bans. Accounts identified as posting MDM could face temporary limits, such as restricted posting privileges or an inability to share articles for a set period. LinkedIn users received warnings after a violation, often with a chance to delete the offending post themselves to avoid further action. Such notices cited the policy breach, linking to LinkedIn's stance on official health sources. This approach to COVID-19 MDM followed LinkedIn's broader moderation tactics for policy violations.

In Dr Cotton's case, she shared her critique of Pfizer's COVID-19 clinical trial on LinkedIn to get feedback from her professional network of former contacts in the pharmaceutical industry. Her first post was removed within 24 hours (p. 142), and her second within an hour. This hampered her ability to debate the methodology of Pfizer's trial with competent people. Prof Kulldorff also had two posts deleted in August 2021: one linking to an interview on vaccine mandate risks, and another reposting an Icelandic health official's comments on herd immunity.

Accounts that posted content linking to external, alternate, independent media (such as Substack articles or videos on Rumble) also saw such posts down-ranked, hidden or automatically removed.

This is the first post on techniques for suppressing health experts' social media accounts (and the second on COVID-19 censorship in the Fifth Estate). My next in the series will address more extreme measures against COVID-19 dissidents, with salient examples.

Please follow this blog or me on social media to be alerted of the next post. If you'd like to comment, please share your views below, ta.

Thursday, 29 August 2024

After half-a-million views, "Dr Noakes" erectile dysfunction "advert" taken down by Facebook + suggested actions for META to do better

I am pleased to report that The Noakes Foundation has succeeded in getting a fake 'Dr Noakes' advert for erectile dysfunction pills removed from META. This came after a month of trying varied methods, without success, to stop the brandjacking of Professor Tim Noakes' identity, and his impersonation via deepfake reels and accounts on Facebook.

Brandjacking is the 'allegedly illegal use of trademarked brand names - on social network sites' (Ramsey, 2010, p. 851). Cybercriminals misuse the trademarks of others without authorization. For example, 'Facebookjacking' and 'Instajacking' see public figures' usernames, account names, and/or digital content being used for fake accounts and video "adverts" on Meta's respective popular social networks, Facebook and Instagram. Such brandjacking via fake celebrity endorsement spans several types of crime: (1) impersonation; (2) non-consensual image sharing; and (3) infringement of a public figure's intellectual property through copyright violation of still images and audio-video. In addition to causing (4) reputation damage to the public figure by suggesting association with a scam, cybercriminals may use it for (5) financial fraud and hacking. Given that these are serious crimes, it is worrying that public figures in South Africa seem to receive minimal, if any, support from social media companies for stopping this fake-endorsement digital crime. There is also a gap in scholarship on how public figures worldwide, and in SA, might best tackle this persistent crime.

Figure 1. Screenshot from the fake 'Dr Noakes' erectile dysfunction advert on Facebook (2024)

On Thursday the 25th of July, we were first alerted to a deepfake advert featuring Emeritus Professor Tim Noakes that ran on META's Facebook and on TikTok. As Figure 1 shows, the Facebook advert had been viewed over 584,000 times, liked by 637 accounts, and had received 56 comments. While many of the likes and comments may be from bots, such high viewership of the reel itself is highly concerning. It suggests how rapidly a cybercriminal's adverts can spread to potential victims: at over 16,000 views per day!
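As a quick sanity check on that rate, the arithmetic is simply total views divided by days running. The roughly 36-day window below is our assumption of the advert's run time, not a figure reported by Facebook:

```python
# Rough reach estimate for the fake advert.
views = 584_000      # views shown in Figure 1
days_running = 36    # assumed run time in days (approximate)

views_per_day = views / days_running
print(f"{views_per_day:,.0f} views per day")  # over 16,000 per day
```

Even halving the assumed run time only makes the spread rate more alarming.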

Figure 2. Screenshot of scammers' Facebook account featuring "Dr Tim Noakes" erection pill adverts (2024)
Figure 3. Scammer account location behind fake Facebook Dr Tim Noakes adverts (2024)  

Our initial Facebook advert lookup revealed that one page was running four adverts (Figure 2). This account ("Tristan") was managed from Nepal and India (Figure 3).

Figure 4. Screenshot of fake Tristan account header behind Dr Tim Noakes adverts on Facebook (2024) 


This fake account page also leveraged fake interactions to suggest that it was liked and followed (Figures 4 and 5).
Figure 5. Screenshot of fake "Tristan" Facebook account details behind Dr Tim Noakes adverts (2024)

This account was reported to Facebook via a third party. During this "warning period", the account's owners launched four new "Dr Tim Noakes" campaigns. Each was documented and reported to Facebook. Interestingly, the links to the online store "sites" were dead ends. However, a 'Call Now' button could still support a call agent's phishing of victims' financial details.

The absence of a link for data gathering suggested that this scam was not primarily for phishing such sensitive data, or for selling fake products. Rather, the advert's design seems geared towards stealing advertising revenue via deepfake creation. The scammers hack into an "advertiser"'s Meta account to distribute fake adverts that run up tens of thousands of dollars in spend. In this case, it was a government-based account from an unknown location. Such adverts may also carry malware, with users who click on them being vulnerable to hacking. These paid ads also have the effect of pushing potential followers to the advertiser's page. More followers mean more people see the content, with Meta indirectly benefiting from the cybercrime's increased visibility by achieving higher advertising rates.


Figure 6. Screenshot of scammers' "Hughles" Facebook account (2024)


Figure 7. Screenshot of scammers' "Hughles" Facebook account's Dr Tim Noakes adverts (2024)

The scammers flick-flacked between varied accounts in committing this cybercrime: they initially used "Hughles" (Figures 6 and 7), "Cameron Sullivan Setting" and "Murthyrius" in launching the same deepfake ads. By the 28th of July, 13 of these "adverts" had been taken down by Facebook, but the scammers shifted to new accounts, "Longjiaren.com" (Figure 8) and "Brentlinger" (renamed "Brentlingerkk" after we reported it). On the 29th of August, these accounts and their adverts were disabled by Facebook.

Figure 8. Screenshot of Longjiaren.com scammers Facebook account for fake adverts (2024)

Such adverts typically reach viewers outside The Noakes Foundation, Nutrition Network and Eat Better South Africa's networks. Those audiences know Professor Noakes does not endorse miracle weight-loss and other cures. To reach vulnerable publics, The Noakes Foundation has run Facebook alerts to warn about this latest cybercrime. Ironically, the most recent advert attempting to flag the "Dr Noakes" scam was blocked by Facebook advertising (Figure 9)!

Figure 9. Facebook rejects anti scam ad from The Noakes Foundation (2024)

Actions for META to do better in fighting cybercrime on its platforms


As Anna Collard (KnowBe4) spotlights in her recent eNews interview, social media platforms are a vital source of news in Africa. Consequently, these platforms must be held more accountable for slow responses to synthetic media and deepfakes. It is greatly concerning that META's Facebook platform is rife with serious crimes (ranging from sextortion and child-trafficking to drug pushing).

META can be more pro-active in tackling such cybercrimes (plus less serious ones, like fake celebrity endorsement) by prioritising the seven steps below:

1) Actively communicate that all users must have a 'zero trust' mindset;
2) Create a compliance team that is dedicated to thwarting cybercriminals' activities;
3) Offer at least one human contact on each META platform for serious reports of criminal misuse;
4) Promote frequent reporters of cybercrime by referring them to META's Trusted Partners or Business Partners for rapid aid;
5) Encourage external research on every platform regarding cybercriminals' activities (such initiatives could develop inexpensive tools, for example helping celebrities' representatives protect public figures from being deepfaked in "adverts");
6) Provide more feedback on which evidence was influential in having reported accounts and content removed. Without such feedback, fraud reporters cannot be sure which reports are most effective;
7) Have a recommendation system in place for support networks that cybervictims can approach (such as referring South Africans to its national CyberSecurity hub).

In addition, META might consider these suggestions from The Noakes Foundation's Report Fake Endorsement initiative, to: (8) enhance deepfake detection technology, (9) apply stricter verification processes, (10) increase transparency and reporting tools, (11) support local educational initiatives, (12) promote collaborations with local cybercrime experts, (13) implement proactive monitoring systems to detect unusual patterns in ads, and (14) reinforce consequences for violations.

By sharing this "Dr Noakes" case study (and developing others), The Noakes Foundation hopes to raise awareness of fake celebrity endorsement cybercrime, plus the importance of Big Tech guardians stepping up to fulfil their responsibilities. We are also liaising with sympathetic allies (KnowBe4® Africa Security Awareness, Orange Defence, Wolfpack Information Risk and others) to grow the networks necessary to better support cybercrime prevention in South Africa.

Much can be done through targeted digital literacy education for vulnerable targets of cybercrime (such as #StopTheScam for silver surfers). We will also continue advocating that capable guardians (such as META, Twitter and TikTok) become more pro-active in protecting vulnerable publics on their platforms. Their gatekeeping role is vital, as the traditional bulwarks against crime (education, the police and the law) seem unable to catch up with the "evolution" of global cybercrimes!

Tuesday, 26 September 2023

Noteworthy disparities with four CAQDAS tools: explorations in organising live Twitter (now known as X) data

Written for researchers interested in extracting live X (formerly Twitter) data via Qualitative Data Analysis Software tools

Social Science Computer Review (SSCR) has just published a paper by yours truly, Dr Pat Harpur and Dr Corrie Uys at https://doi.org/10.1177/08944393231204163. As the article's title suggests, we focus on contrasting the Qualitative Data Analysis Software (QDAS) packages that currently support live Twitter data imports.

QDAS tools that support live data extraction are a relatively recent innovation. At the time of our fieldwork, only four prominent QDAS packages (ATLAS.ti™, NVivo™, MAXQDA™ and QDA Miner™) had Twitter data import functionalities. Little has been written concerning the research implications of differences between their functionalities, and how such disparities might contribute to contrasting analytical opportunities. Consequently, early-stage researchers may experience difficulties in choosing an apt QDAS for extracting live data for Twitter academic research.
In response to both methodological gaps, we spent almost a year working on a software comparison to address the research question (RQ): 'How do QDAS packages differ in what they offer for live Twitter data research during the organisational stage of qualitative analysis?'. Comparing their possible disparities seems worthwhile, since what QDAS cannot support, or support only poorly, may strongly impact researchers' microblogging data, its organisation, and scholars' potential findings. In the preliminary phase of research, we developed a features checklist for each package, based on their online manuals, product descriptions and forum feedback related to live Twitter imports. This checklist confirmed wide-ranging disparities between QDAS, which were not unexpected since the packages are priced very differently, ranging from $600 for an ATLAS.ti subscription to $3,650 for QDA Miner (as part of Provalis Research's ProSuite package, which also includes WordStat 10 & Simstat).

To ensure that each week's Twitter data extractions could produce much data for potential evaluation, we focused on extracting and organising communiqués from the national electricity company, the Electricity Supply Commission (Eskom). 'Load-shedding' was the Pan South African Language Board's word of the year for 2022 (PanSALB, 2022), due to its frequent use in credible print, broadcast and online media. Invented as a euphemism by Eskom's public-relations team, load-shedding describes electricity blackouts. Since 2007, planned rolling blackouts have been used in a rotating schedule for periods 'where short supply threatens the integrity of the grid' (McGregor & Nuttall, 2013). In the weeks up to, and during, the researchers' fieldwork, Eskom and the different stages of load-shedding trended strongly on Twitter. These tweets reflected the depth of public disapproval, discontent, anger, frustration and general concern.
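For readers curious what such a live extraction looks like outside a QDAS wrapper, here is a minimal, stdlib-only Python sketch against the Twitter API v2 recent-search endpoint. The handle, keywords and bearer token are illustrative placeholders, and QDAS packages wrap equivalent calls behind their import dialogs; this is a sketch of the mechanism, not our study's instrument.

```python
import json
import urllib.parse
import urllib.request

# Twitter API v2 recent-search endpoint (7-day lookback window)
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def build_query(account: str, keywords: list[str]) -> str:
    """Compose a v2 search query for tweets from, or mentioning, an
    account alongside any of the given keywords, excluding retweets."""
    kw = " OR ".join(keywords)
    return f"(@{account} OR from:{account}) ({kw}) -is:retweet"

def fetch_recent(bearer_token: str, query: str, max_results: int = 100) -> list:
    """Fetch recent matching tweets (requires a valid bearer token)."""
    params = urllib.parse.urlencode({
        "query": query,
        "max_results": min(max_results, 100),  # per-page cap for recent search
        "tweet.fields": "created_at,public_metrics",
    })
    req = urllib.request.Request(
        f"{SEARCH_URL}?{params}",
        headers={"Authorization": f"Bearer {bearer_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("data", [])

# e.g. build_query("Eskom_SA", ["loadshedding", "load-shedding"])
```

Weekly runs of a query like this accumulate the tweet text and engagement metrics that a researcher would then organise, code and theme inside the chosen QDAS.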

QDAS packages commonly serve as tools that researchers can use for four broad activities in the qualitative analysis process (Gilbert, Jackson, & di Gregorio, 2014). These are (a) organising: coding sets, families and hyperlinking; (b) exploring: models, maps, networks, coding and text searches; (c) reflecting: memoing, annotating and mapping; and (d) integrating qualitative data: memoing with hyperlinks and merging projects (Davidson & di Gregorio, 2011; Di Gregorio, 2010; Lewins & Silver, 2014).
Notwithstanding the contrasts in costs between the QDAS packages, it was still surprising how much the tools varied for the first activity, (a) organising data, in our qualitative research project. Notably, the quantum of data extracted for the same query differed, largely due to contrasts in the types and amount of data that the four QDAS could extract. Variations in how each supported visual organisation and thematic analysis also shaped researchers' opportunities for becoming familiar with Twitter users and their tweet content.
Such disparities suggest that choosing a suitable QDAS for organising live Twitter data must dovetail with a researcher’s focus: ATLAS.ti accommodates scholars focused on wrangling unstructured data for personal meaning-making, while MAXQDA suits the mixed-methods researcher. QDA Miner’s easy-to-learn user interface suits a highly efficient implementation of methods, whilst NVivo supports relatively rapid analysis of tweet content.
We hope that these findings might help guide Twitter social science researchers and others in QDAS tool selection. Our research has also suggested recommendations that these tools' developers could follow to improve the user experience for Twitter researchers. Future research might explore disparities in other qualitative research phases, or contrast data extraction routes for a variety of microblogging services. More broadly, an opportunity for a methodological contribution exists in research that defines a strong rationale for the software comparison method.
The authors greatly appreciate the SSCR editor Professor Stephen Lyon's advice on improving our final manuscript. We also thank The Noakes Foundation for its grant AFSDV02: our interdisciplinary software comparison would not have been possible without funding to cover subscriptions to the most extensive versions of MAXQDA Analytics Pro and QDA Miner. All authors are affiliated with the Cape Peninsula University of Technology (CPUT) and appreciate CPUT's provision of licensed versions of ATLAS.ti.

Please comment below if you have any questions or comments regarding our paper.

Thursday, 29 June 2023

Twitter Support must do better for helping celebrity and public victims of a global diet phishing scam!

Worldwide, diet scammers are marketing fake "endorsements" from celebrities across social media adverts, search engine ads and online content to phish victims' financial details. The sheer volume of content the fraudsters produce is very difficult for celebrities and their representatives to tackle alone. One major obstacle to stopping the false marketing of "miracle weight-loss products" is the reluctance of social media platforms to take down fake accounts and ads timeously. The fraudsters typically run the ads regionally for a few days, in which time they are displayed to hundreds of thousands of people. Just a fraction of an ad's viewers need to share their financial details for the scam to be highly profitable!

This post presents the underwhelming example of reporting diet phishing accounts to Twitter Support as a way to spotlight the difficulties of tackling fraud via social media platforms. Hopefully publicly shaming @TwitterSupport will encourage its leaders to help address the global diet phishing scam properly, whilst also providing decent reporting options for celebrities and their representatives:

South African celebrities hijacked in fake diet adverts

A major factor in the "success" of this global scam (it has been running since 2014!) is the poor response from Facebook, Instagram, Twitter and other social media companies to formal requests to close fake accounts and their advertising campaigns. Their ineffective responses are legally shortsighted: social media companies that repeatedly permit diet phishing ads on their platforms are complicit in fraud, and possibly in the delict of passing off. For example, in South Africa, the diet phishing scam has undoubtedly harmed the reputations of Prof Tim Noakes and The Noakes Foundation through its fraudulent, direct misrepresentation of fake products. These have certainly confused the public, and @TheNoakesF has lost goodwill from the many victims of the fraud's misrepresentation!

Prof Noakes is just one of many well-known individuals whose identities have been hijacked. The South African version of the scam has seen Minki van der Westhuizen, Jeannie D (@Jeannieous), Basetsana Kumalo (@basetsanakumalo), Nkhensani Nkosi (@NkhensaniNkosi1), Shashi Naidoo (@SHASHINAIDOO), Tumi Morake (@tumi_morake), Dawn King (@DawnTKing), Ina Paarman (@inapaarman) and Dr Shabir Madhi (@ShabirMadh) all having their reputations tarnished.

Since Prof Noakes’ identity was first hijacked in 2020, The Noakes Foundation (TNF) and partners (such as Dr Michael Mol and Hello Doctor) have tried many options to stop the scam. For example, TNF developed and publicised content against it via blogposts, such as Keto Extreme Scams Social Media Users Out of Thousands. TNF also produced these videos: Professor Tim Noakes vs. Diet Phishing: Exposing a Global Scam with Fake Celebrity Endorsements, Dr Michael Mol highlighting Diet Scams and Prof Noakes Speaks Out Against The Ongoing Diet Scam. Sadly, The Noakes Foundation’s repeated warnings to the public don’t seem to be making much difference in preventing new victims!

American, Australian, British and Swedish celebs hijacked, too!

In the United States, the diet phishing scam has also stolen the identities of major celebrities. Most are in popular TV franchises: Oprah Winfrey (@Oprah), Dr Mehmet Oz (@DrOz), Dr Phil (@DrPhil), Dolly Parton (@DollyParton), Kelly Clarkson (@kellyclarkson), the Kardashian family (@kardashianshulu + @KimKardashian), Kelly Osbourne (@KellyOsbourne), Chrissy Teigen (@chrissyteigen), Martha MacCallum (@marthamaccallum), Blake Shelton (@blakeshelton) and #TomSelleck 🥸. It's a Magnum opus of fraud!

Amazing female celebs in the United Kingdom have also seen their identities stolen. Diet phishing scammers have hijacked the IDs of Holly Willoughby (@hollywills), Amanda Holden (@AmandaHolden), Anne Hegerty (@anne_hegerty) and Dawn French (@Dawn_French). Even the British Royal Family (@RoyalFamily) has not been immune, with the targeting of Catherine, Princess of Wales (@KensingtonRoyal) and the late Queen Elizabeth II, RIP and God bless. Sadly, Meghan, Duchess of Sussex, has been targeted too...

Down Under, well-known Australian personalities, such as national treasure Maggie Beer (@maggie_beer) and Farmer Wants A Wife host Sam Armytage (@sam_armytage), have had their identities misused for fake #weightloss endorsements. So too has Mr Embarrassing Bodies Down Under himself, Dr Brad McKay (@DrBradMcKay).

In Sweden, Dr Andreas Eenfeldt (@DrEenfeldt from @DietDoctor), another leader in the low carbohydrate movement, has been targeted in promotions of fake #keto products. Sadly, the fake ads seem to generate far more attention and action than his or my father's health advice!

N.B. The examples above are not exhaustive. We largely know of celebrities in the Anglosphere whose identities were stolen and then featured in English-language reports and related search engine results.


Deceptive "Tim Noakes" Twitter accounts market Keto Gummies

Just as the celebrity names stolen for the fake ads change often, so do the product names. A few examples of these fake names are Capsaicin, FigurWeightLossCapsules, Garcinia, Ketovatru and KetoLifePlus. Be warned that new "products" are added every month! One particularly common term in the scammers' product names is "Keto Gummies". A recent Twitter search for "Tim Noakes keto gummies" surfaced many fake accounts (Figure 1 shows just the top results!), plus diverse "product" names.




Figure 1. Twitter search results for Tim Noakes keto gummies (fake product accounts) (20 June, 2023)



Twitter Support does not think fake accounts are misleading and deceptive?!

These accounts have clearly been set up to fraudulently market "keto gummies" by suggesting an association with "Tim Noakes". So, the logical response for any representative of The Noakes Foundation would seem to be reporting each fake account for violating Twitter's misleading and deceptive identities policy, right?



Figure 2. Reporting the fake Tim Noakes Keto Gummies account to Twitter support

This is a very time-consuming process. Firstly, the same complaint must be individually submitted for each account. Secondly, the representative reporting these complaints must also upload and/or email related proof of ID, business and legal documentation to Twitter Support before it will consider investigating whether impersonation is taking place.

Fake Twitter accounts, including those below, were reported to Twitter with supporting documentation:

@NoakesGumm28693 (case 0327118996)
@TimNoakesHoax (case 0327120384)
@TimGummies (case 0327119602)
@NoakesGumm91126 (case 0327119675)
@gummies_tim (case 0327120030)
@TimNoakes_ZA (case 0327119741)
@tim_gummies (case 0327118910)
@NoakesSouth (case 0327118634)
@timnoakesketo0 (case 0327119362)
@NoakesGumm22663 (case 0327119487)

In each case, @TwitterSupport replied that the accounts are NOT in violation of Twitter's misleading and deceptive identities policy. This would seem to contradict the obvious evidence that Tim Noakes' name has been hijacked by scammers to mislead victims with a fake product!

The Noakes Foundation has supplied its legal team with Twitter's related correspondence for review. I will update this post as developments progress (or fail to!) with the remarkably unhelpful and potentially criminally negligent @TwitterSupport.

This "Tim Noakes keto gummies" Twitter account is not deceptive?!

Figure 3. Fake @TimNoakesKetoGummies account

Figure 3 shows a typical example of a fake account's style. It uses Tim Noakes' name plus stock photography to market a non-existent product. It tweeted only on 24 May, and is followed by one person. Any knowledgeable complaint reviewer would surely consider this a case of a scammer creating a misleading and deceptive account to game Twitter's search engine. However, Twitter Support does not agree, nor does it explain why in its generic correspondence about each scam account.

From stealing victims' banking details to delivering dubious products

As fitness expert Reggie Wilson (@fitforfreelance) deftly explains in his 30-second video, keto gummies cannot work. It is most concerning that The Noakes Foundation has received reports that scammers are now delivering a physical product to South African victims. Fake #KetoGummies products are not only being marketed locally via takealot.com, but are also offered internationally via Amazon.com, and possibly other major online retailers!

Just as the scammers link themselves to celebs on Twitter, they also invoke the popular television franchises those celebrities are known from. Notably: America's Next Top Model, Dragons' Den, The Kardashians, The Oprah Show and Shark Tank. On Twitter, national businesses are also being misrepresented as selling these fake products, such as Walmart in the US, Jean Coutu pharmacies in Canada and Dischem in South Africa. Type "keto gummies" into these retailers' search engines and you will see many options pop up, some seemingly associated with popular celebrities and TV franchises.

The Noakes Foundation is keen to work with affected celebrities, their representatives and businesses to raise the pressure on social media companies to respond properly to the scammers and fake ads they host. Do let us know if you would like to help, via the comments below or by emailing reportdietscam@gmail.com.
