
Saturday, 29 March 2025

Techniques for suppressing health experts' social media accounts, part 1 - The Science™ versus key opinion leaders challenging the COVID-19 narrative

Written for researchers and others interested in the many techniques used to suppress COVID-19 dissidents' social media accounts and digital voices.

There has been extensive censorship of legitimate, expert criticism during the COVID-19 event (Kheriaty, 2022; Shir-Raz et al., 2023; Hughes, 2024). Such scientific suppression makes visible the narrow frame within which the sponsors of global health authoritarianism permit questioning of The Science™. In contrast to genuine science, which innovates through critique, incorporated science does not welcome questioning. Like fascism, corporatist science views critiques of its interventions as heresy. In the COVID-19 event, key opinion leaders who criticised the lack of scientific rigour behind public health measures (such as genetic vaccine mandates) were treated as heretics by a contemporary version of the Inquisition (Malone et al., 2024). Dissidents were accused of sharing "MDM" (Misinformation, Disinformation and Malinformation) assumed to place the public's lives at risk. Particularly in prestigious medical universities, questioning the dictates of health authorities and their powerful sponsors was viewed as unacceptable, falling completely outside an Overton Window that had become far more restrictive due to fear-mongering around a "pandemic" (see Figure 1).




Figure 1. Narrowed Overton Window for COVID-19. Figures copied from pp. 137-138 of Dr Joseph Fraiman (2023). The dangers of self-censorship during the COVID-19 pandemic. In R. Malone, E. Dowd, & G. Fareed (Eds.), Canary In a Covid World: How Propaganda and Censorship Changed Our (My) World (pp. 132-147). Amazon Digital Services LLC - Kdp.


Higher Education is particularly susceptible to this groupthink, as it lends itself to a purity spiral, which in turn contributes to the growing spiral of silence around "unacceptable views". A purity spiral is a form of groupthink in which it is more beneficial to hold some views than not to hold them. In a process of moral outbidding, individual academics with more extreme views are rewarded. This was evident at universities where genetic vaccine proponents loudly supported the mandatory vaccination of students, despite students facing minimal, if any, risk. In contrast, scholars expressing moderation, doubt or nuance faced ostracism as "anti-vaxxers". Within universities' tight-knit communities, there are strong pressures towards social conformity. Grants, career support and other forms of institutional backing depend on collegiality and alignment with prevailing norms. Being labeled a contrarian for questioning a 'sacred cow', such as "safe and effective" genetic vaccines, is likely to jeopardise one's reputation and academic future. Academic disciplines coalesce around shared paradigms and axiomatic truths, routinely amplifying groupthink. Challenging reified understandings as shibboleths can lead to exclusion from conferences and journals, and can cost scholars departmental, faculty and even university support, particularly where powerful funders object to such dissent!


Here, administrative orthodoxy can signal an “official” position for the university that chills debate. Dissenters' fears of isolation and reprisal (such as poor evaluations and formal complaints for not following the official line) may convince them to self-censor, particularly where nonconformists assess that the strength of opinion against their view is virulent and that the costs of expressing a disagreeable viewpoint are high, such as having to negotiate cancel culture. Individuals who calculate that they have little chance of convincing others, and are likely to pay a steep price for trying, self-censor and so contribute to the growing spiral of silence. The COVID-19 event serves as an excellent example of this growing spiral's chilling effect on free speech and independent enquiry.


COVID-19 is highly pertinent for critiquing censorship in the Medical and Health Sciences, particularly as it featured conflicts of interest that shaped the policy guidance of global health "authorities". Notably, the World Health Organisation promoted poorly substantiated and even unscientific guidelines (Noakes et al., 2021) that merit being considered MDM. In following such dictates from the top policy makers of the Global Public-Private Partnership (GPPP or G3P), most governments' health authorities seemed to ignore key facts. Notably: i. COVID-19 risk was steeply age-stratified (Verity et al., 2019; Ho et al., 2020; Bergman et al., 2021); ii. prior COVID-19 infection can provide substantial immunity (Nattrass et al., 2021); iii. COVID-19 genetic vaccines did not stop disease transmission (Eyre et al., 2022; Wilder-Smith, 2022); iv. mass-masking was ineffective (Jefferson et al., 2023; Halperin, 2024); v. school closures were unwarranted (Wu et al., 2021); and vi. there were better alternatives to lengthy, whole-society lockdowns (Coccia, 2021; Gandhi and Venkatesh, 2021; Herby et al., 2024). Both international policy makers' and local health authorities' flawed guidance must be open to debate and rigorous critique. If public health interventions had been adapted to such key facts during the COVID-19 event, the resultant revised guidance could well have contributed to better social, health and economic outcomes for billions of people!


This post focuses on six types of suppression techniques that were used against dissenting accounts whose voices were deemed illegitimate "disinformation" spreaders by the Global Public-Private Partnership (G3P)-sponsored censorship industrial complex. This is an important concern, since claims that suppressing free speech's digital reach can "protect public safety" were proved false during COVID-19. A case in point is the censorship of criticism against employees' vaccine mandates. North American employers' mandates are directly linked to excess disabilities and deaths for hundreds of thousands of working-age employees (Dowd, 2024). Deceptive censorship of individuals' reports of vaccine injuries as "malinformation", or the automatic labelling of criticism of Operation Warp Speed as "disinformation", would hamper US employees' ability to make fully informed decisions on the safety of genetic vaccines. Such deleterious censorship must be critically examined by academics. In contrast, 'disinformation-for-hire' scholars (Harsin, 2024) will no doubt remain safely ensconced behind their profitable MDM blinkers.


This post is the first in a series that spotlights the myriad account suppression techniques that exist. For each, examples of censorship against health experts' opinions are provided. Hopefully, readers can then better appreciate the asymmetric struggle that dissidents face when their accounts are targeted by the censorship industrial complex with many of these strategies spanning multiple social media platforms:


Practices for @Account suppression


#1 Deception - users are not alerted to unconstitutional limitations on their free speech


Social media users might assume that their constitutional right to free speech as citizens will be protected within, and across, digital platforms. However, global platforms may not support such rights in practice. No social media company openly discloses the extent to which users' accounts have been, and are being, censored for expressing opinions on controversial topics. Nor do these platforms explicitly warn users what they consider to be impermissible opinions. Consequently, their users are not forewarned regarding what may result in censorship. For example, many COVID-19 dissidents were surprised that their legitimate critiques could result in account suspensions and bans (Shir-Raz, 2022). Typically, such censorship was justified by Facebook, Google, LinkedIn, TikTok, Twitter and YouTube as responses to users' violation of "community rules". In most countries, freedom of speech is a citizen's constitutional right that should be illegal to override. It should be deeply concerning that such protections were not upheld in the digital public square of the Fifth Estate during the COVID-19 event. Instead, the supra-national interests of health authoritarians came to supersede national laws in order to prevent (unproven) harms. This pattern of censorship is noticeable in many other scientific issue arenas, ranging from criticism of man-made climate change to skeptics challenging transgender medical ideology.

#2 Cyberstalking - facilitating the virtual and physical targeting of dissidents


An individual who exercises his or her voice against official COVID-19 narratives can expect to receive both legitimate, pro-social criticism and unfair, anti-social criticism. While cyberstalking should be illegal, social media platforms readily facilitate the stalking and cyber-harassment of dissidents. An extreme example of this was Dr Christine Cotton's experience on LinkedIn. Dr Cotton was an early whistleblower (January 2022) against the false claims of 95% efficacy in Pfizer's COVID-19 clinical trial.
Her report identified the presence of bias and major deviations from good clinical practice. In press interviews, she reported that the trial did ‘not support validity in terms of efficacy, immunogenicity and tolerance of the results provided in the various Pfizer clinical reports that were examined in the emergency by the various health authorities’. Christine shared this report with her professional network on LinkedIn, asking for feedback from former contacts in the pharmaceutical industry. The reception was mostly positive, but the post and related ones were subject to rapid content takedowns by LinkedIn, ostensibly for not meeting community standards. At the same time, her profile became hyper-surveilled. It attracted unexpected visits from 133 lawyers, the Ministry of Defence, employees of the US Department of State, the World Health Organisation, and others (p. 142). None of these profile viewers contacted her directly.

#3 Othering - enabling public character assassination via cyber smears


Othering is a process whereby individuals or groups are defined, labeled or targeted as not fitting the norms of a social group. It influences how people perceive and treat those viewed as part of the in-group versus those in an out-group. At a small scale, othering can result in a scholar being ostracised from their university department following academic mobbing and online academic bullying (Noakes & Noakes, 2021). At a large scale, othering entails a few dissidents on social media platforms being targeted for hypercriticism by gangstalkers.

Cyber gangstalking is a process of cyber harassment that follows cyberstalking, whereby a group of people target an individual online to harass him or her. Such attacks can involve gossip, teasing and bad-jacketing, repeated intimidation and threats, plus other fear-inducing behaviours. Skeptics' critical contributions can become swamped by pre-bunkers and fellow status-quo defenders. Such pseudo-skeptics may be sponsored to trivialise dissenters' critiques, thereby contributing to a fact choke against unorthodox opinions. 

In Dr Christine Cotton's case, her name was disclosed in March 2022 in a list forming part of a French Senate investigation into adverse vaccine events. A ‘veritable horde of trolls seemingly emerged out of nowhere and started attacking’ her ‘relentlessly’ (p. 143). These trolls were inter-connected through subscribing to each others’ accounts, which allowed them to synchronise their attacks. They attempted to propagate as much negative information on Dr Cotton as possible in a ‘Twitter harassment scene’. Emboldened by their anonymity, the self-proclaimed “immense scientists” with masters in virology, vaccines, clinical research and biostatistics launched a character assassination. They attacked her credentials and work history, whilst creating false associations (“Freemasonry” and “Illuminati”).

This suggests how identity politics sensibilities and slurs are readily misused against renegades. In the US, those questioning COVID-19 policies were labelled “far right” or “fascist”, despite promoting a libertarian critique of healthcare authoritarianism! In addition, orchestrators of cybermobbing tagged dissidents' accounts with labels such as: 'anti-science', 'anti-vaxxer', 'biased', 'charlatan', 'celebrity scientist', 'conspiracy theorist', 'controversial', 'COVID-19 denier', 'disgraced scientist', 'formerly-respected', 'fringe expert', 'grifter', 'narcissist with a Galileo complex', 'pseudo-scientist', 'quack', 'salesman', 'sell-out' and 'virus', amongst other pejoratives. Such terms are used as a pre-emptive cognitive vaccine whose hypnotic language patterns ("conspiracy theorist") are intended to thwart audience engagement with critical perspectives. Likewise, these repeatedly used terms help grow a digital pillory that becomes foregrounded in automated search engine suggestions.

In this Council of the Cancelled discussion, Mike Benz, Prof Jay Bhattacharya, Nicole Shanahan and Dr Eric Weinstein speculate about hidden censorship architectures. One example is Google's automated tagging of "controversial" public figures. These tags can automatically feature in major mainstream news articles about COVID-19 dissidents. This is not merely a visual tag, but a cognitive one: it marks "controversial" individuals with a contemporary (digital) scarlet letter.

In Dr Cotton's case, some trolls smeared her work raising awareness of associations for the vaccine-injured as helping “anti-vaccine conspiracy sites”. She shares many cases of these injuries in her book, and was amazed at the lack of empathy that Twitter users showed not just her, but also those suffering debilitating injuries. In response she featured screenshots of select insults on her blog at https://christinecotton.com/critics and blocked ‘hundreds of accounts’ online. In checking the Twitter profiles attacking her, she noticed that many with ‘behavioural issues were closeby’. Dr Cotton hired a ‘body and mind’ guard from a security company for 24-hour protection. Her account was reported for “homophobia”, which led to its temporary closure. After enduring several months of cyber-harassment by groups, behaviour that can be severely punished under EU law, Dr Cotton decided to file complaints against some of them. Christine crowdfunded legal complaints against Twitter harassers from a wide variety of countries. This complaint sought to counter cyber harassers' assumption that anonymity shields them from lawsuits for defamation, harassment and public insults.

#4 Not blocking impersonators or preventing brandjacked accounts


Impersonators' accounts claiming to belong to dissidents can quickly pop up on social media platforms. While a few may be genuine parodies, others serve identity-jacking purposes. Some are criminal, with scammers using fake celebrity endorsements to phish "customers'" financial details for fraud. Alternatively, intelligence services may use brandjacking for covert character assassination smears against dissidents.

The independent investigative journalist, Whitney Webb, has tweeted about her ongoing YouTube experience of having her channel's content buried under a fact choke of short videos created by other accounts.

Whether such activities stem from intelligence services or cybercriminals, they are very hard for dissidents and/or their representatives to respond to effectively. Popular social media companies (notably META, X and TikTok) seldom respond quickly to scams, or to the digital "repersoning" discussed in a Corbett Report conversation between James Corbett and Whitney Webb.
 
In Corbett's case, after his account was scrubbed from YouTube, many accounts featuring his identity started cropping up there. In Webb's case, she does not have a public profile outside of X, yet profiles featuring her identity were created on Facebook and YouTube. "Her" channels clipped old interviews she had given and edited them into documentaries on material Whitney has never publicly spoken about, such as Bitcoin and CERN. They also misrepresented her views on the transnational power structure behind the COVID-19 event, suggesting she held just Emmanuel Macron and Klaus Schwab responsible for driving it. They used AI-generated thumbnails of her, and superimposed her own words from the interviews. Such content proved popular and became widely reshared via legitimate accounts, pointing to the difficulty dissidents face in countering it. She could not get Facebook to take down the accounts without supplying a government-issued ID to verify her own identity.


Digital platforms may be uninterested in offering genuine support: they may take no corrective action when following proxy orders (aka 'jawboning') from the US Department of State or members of the Five Eyes (FVEY) intelligence alliance. In stark contrast to marginalised dissenters, VIPs in multinationals enjoy access to online threat protection services for executives (such as ZeroFox) that cover brandjacking and over 100 other cybercriminal use-cases.

#5 Filtering an account's visibility through ghostbanning


As the Google Leaks (2019), Facebook Files (2021) and Twitter Files (2022) revelations have spotlighted, social media platforms have numerous algorithmic censorship options, such as filtering the visibility of users' accounts. Targeted users may be isolated and throttled for breaking "community standards" or government censorship rules. During the COVID-19 event, dissenters' accounts were placed in silos, de-boosted, and also subjected to reply de-boosting. Contrarians' accounts were subjected to ghostbanning (also known as shadowbanning), a practice that secretly reduces an account's visibility or reach without explicitly notifying its owner. Ghostbanning limits who can see the posts, comments, or interactions. This includes muting replies and excluding targeted accounts' posts from trends, hashtags, searches and followers' feeds (except where users seek out a filtered account's profile directly). Such suppression effectively silences a user's digital voice, whilst he or she continues to post under the illusion of normal activity. Ghostbanning is thus a "stealth censorship" tactic linked to content moderation agendas.
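
To make this mechanism concrete, below is a minimal, purely illustrative Python sketch of how account-level visibility filtering could work. The flag names loosely echo labels reported in the Twitter Files ("Trends Blacklisted", "Search Blacklisted", "Do Not Amplify"), but the data structure, function and surfaces are hypothetical assumptions, not any platform's actual code.

from dataclasses import dataclass

# Hypothetical account-level moderation flags (an assumption for illustration).
@dataclass
class AccountFlags:
    trends_blacklisted: bool = False
    search_blacklisted: bool = False
    reply_deboosted: bool = False
    do_not_amplify: bool = False

def visible_on(surface: str, flags: AccountFlags, direct_profile_visit: bool) -> bool:
    """Decide whether a flagged account's post appears on a given surface.

    Direct profile visits still show the content, which is what lets the
    account owner keep posting under the illusion of normal activity.
    """
    if direct_profile_visit:
        return True
    if surface == "trends" and flags.trends_blacklisted:
        return False
    if surface == "search" and flags.search_blacklisted:
        return False
    if surface == "replies" and flags.reply_deboosted:
        return False
    if surface == "home_feed" and flags.do_not_amplify:
        return False
    return True

# Example: a ghostbanned account vanishes from feeds, search, trends and replies,
# yet its own profile page still looks normal to its owner.
flags = AccountFlags(trends_blacklisted=True, search_blacklisted=True,
                     reply_deboosted=True, do_not_amplify=True)
for surface in ("home_feed", "search", "trends", "replies"):
    print(surface, visible_on(surface, flags, direct_profile_visit=False))
print("profile", visible_on("profile", flags, direct_profile_visit=True))

The design point the sketch illustrates is the asymmetry: every distribution surface can be filtered independently, while the one surface the account owner routinely checks (their own profile) remains untouched.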

This term gained prominence with the example of the Great Barrington Declaration's authors, Professors Jay Bhattacharya, Martin Kulldorff and Sunetra Gupta. Published on October 4, 2020, this public statement and proposal flagged grave concerns about the damaging physical and mental health impacts of the dominant COVID-19 policies. It argued that focused protection should be pursued rather than blanket lockdowns, and that allowing controlled spread among low-risk groups would eventually result in herd immunity. Ten days later, a counter-statement, the John Snow Memorandum, was published in defence of the official COVID-19 narrative's policies. Mainstream media and health authorities amplified it, as did social media, given the memorandum's alignment with prevailing platform policies against "misinformation" circa 2020. In contrast, the Great Barrington Declaration was targeted indirectly through platform actions against its proponents and related content:


Stanford Professor of Medicine Dr Jay Bhattacharya's Twitter account was revealed (via the 2022 Twitter Files) to have been blacklisted, reducing its visibility. His tweets questioning lockdown efficacy and vaccine mandates were subject to algorithmic suppression. Algorithms could flag his offending content with terms like “Visibility Filtering” (VF) or “Do Not Amplify”, reducing its reach. For instance, Bhattacharya reported that his tweets about the Declaration and seroprevalence studies (showing wider COVID-19 spread than official numbers suggested) were throttled. Journalist Matt Taibbi's reporting on the Twitter Files leaks confirmed that Twitter had blacklisted Prof Bhattacharya's account, limiting its reach due to his contrarian stance. YouTube also removed videos in which he featured, such as interviews in which he criticised lockdown policies.

The epidemiologist and biostatistician Prof Kulldorff observed that social media censorship stifled opportunities for scientific debate. He experienced direct censorship on multiple platforms, including shadowbans. Twitter temporarily suspended his account in 2021 for tweeting that not everyone needed the COVID-19 vaccine ('Those with prior natural infection do not need it. Nor children'). Posts on X and web reports indicate Kulldorff was shadowbanned beyond this month-long suspension. The Twitter Files, released in 2022, revealed he was blacklisted, meaning his tweets' visibility was algorithmically reduced. Twitter suppressed Kulldorff's accurate genetic vaccine critique, preventing comments and likes. Internal Twitter flags like “Trends Blacklisted” or “Search Blacklisted” (leaked during the 2020 Twitter hack) suggest Kulldorff's account was throttled in searches and trends, a hallmark of shadowbanning where reach is curtailed without notification. Algorithmic deamplification excluded Prof Kulldorff's tweets from trends, search results and followers' feeds, except where users sought his profile directly. This reflects how social media companies may apply visibility filters (such as a Not Safe For Work (NSFW) view). Kulldorff also flagged that LinkedIn's censorship pushed him to platforms like Gab, implying a chilling effect on his professional network presence.


Professor Gupta, an Oxford University epidemiologist, faced less overt account-level censorship, but still had to negotiate content suppression. Her interviews and posts on Twitter advocating herd immunity via natural infection amongst the young and healthy were often flagged or down-ranked.


#6 Penalising accounts that share COVID-19 "misinformation"


In addition to ghostbanning, social media platforms could target accounts for sharing content on COVID-19 that contradicted guidance from the Global Public-Private Partnership (G3P)'s macro-level stakeholders, such as the Centers for Disease Control or the World Health Organisation. In Twitter's case, it introduced a specific COVID-19 misinformation policy in March 2020, which prohibited claims about transmission, treatments, vaccines, or public health measures that the COVID-19 hegemony deemed “false or misleading”. Such content either had warning labels added to it, or was automatically deleted:

Tweets with suspected MDM were tagged with warnings like “This claim about COVID-19 is disputed” or with labels linking to curated "fact-checks" on G3P health authority pages. This was intended to reduce a tweet’s credibility without immediate removal, whilst also diminishing its poster's integrity. 

Tweets that broke this policy were deleted outright after flagging by automated systems or human moderators. For instance, Alex Berenson's tweets questioning lockdown efficacy were removed, contributing to his eventual ban in August 2021. In Dr Christine Cotton's case, Twitter classified her account as “sensitive content”, and it gradually lost visibility with the tens of thousands of followers it had attracted. In response, she created a new account to begin ‘from scratch’ in August 2022. The Twitter Files revealed that such censorship was linked to United States government requests (notably from the Joe Biden administration and the Federal Bureau of Investigation). For example, 250,000 tweets flagged by Stanford's Virality Project in 2021 were removed by Twitter.

In March 2020, Meta expanded its misinformation policies to target COVID-19-related MDM. Facebook and Instagram applied content labelling and down-ranking, with posts allegedly featuring MDM being labelled with warnings (such as 'False Information' or 'See why health experts say this is wrong') that linked to official sources. Such posts were also down-ranked in the News Feed to reduce their visibility. Users were notified of violations and warned that continued sharing could further limit their reach or lead to harsher action. In late 2021, down-ranking was also applied to “vaccine-skeptical” content that did not explicitly violate rules but could potentially discourage vaccination. Posts violating policies were removed outright.
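
As a rough illustration of the escalation logic described above (label, down-rank, then remove), here is a minimal Python sketch. The action names, classifier score and thresholds are invented for illustration; they are assumptions about how such a pipeline might be wired, not any platform's documented policy engine.

from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    LABEL = "label"          # attach a warning or "fact-check" link
    DOWN_RANK = "down_rank"  # reduce distribution in feeds
    REMOVE = "remove"        # delete the post outright

def moderate(misinfo_score: float, prior_strikes: int) -> Action:
    """Map a (hypothetical) classifier score and strike history to an action.

    Thresholds are invented: mild suspicion gets a label, stronger suspicion
    is down-ranked, and confident matches or repeat offenders are removed,
    mirroring the escalation described in the surrounding text.
    """
    if misinfo_score >= 0.9 or (misinfo_score >= 0.7 and prior_strikes >= 2):
        return Action.REMOVE
    if misinfo_score >= 0.7:
        return Action.DOWN_RANK
    if misinfo_score >= 0.4:
        return Action.LABEL
    return Action.ALLOW

# Example: the same borderline post is down-ranked for a first-time poster,
# but removed once the account has accumulated repeated strikes.
print(moderate(misinfo_score=0.75, prior_strikes=0))  # Action.DOWN_RANK
print(moderate(misinfo_score=0.75, prior_strikes=3))  # Action.REMOVE

The point of the sketch is that labelling, down-ranking and removal sit on a single escalation ladder, so an account's history can convert a borderline post into a deletion.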

LinkedIn's smaller, professional user base and the platform's lower emphasis on real-time virality led it to prefer outright removal over throttling via shadowbans. Accounts identified as posting MDM could face temporary limits, such as restricted posting privileges or an inability to share articles for a set period. LinkedIn users received warnings after a violation, often with a chance to delete the offending post themselves to avoid further action. Such notices cited the policy breach, linking to LinkedIn's stance on official health sources. This approach to COVID-19 MDM followed LinkedIn's broader moderation tactics for policy violations.

In Dr Cotton's case, she shared her critique of Pfizer's COVID-19 clinical trial on LinkedIn to get feedback from her professional network of former contacts in the pharmaceutical industry. This first post was removed within 24 hours (p. 142), and her second within an hour. This hampered her ability to debate the methodology of Pfizer's trial with competent people. Prof Kulldorff also had two posts deleted in August 2021: one linking to an interview on vaccine mandate risks and another reposting an Icelandic health official's comments on herd immunity.

Accounts that posted content linking to external, alternative, independent media (such as Substack articles or videos on Rumble) also saw such posts down-ranked, hidden or automatically removed.

This is the first post on techniques for suppressing health experts' social media accounts (and the second on COVID-19 censorship in the Fifth Estate). My next in the series will address more extreme measures against COVID-19 dissidents, with salient examples.

Please follow this blog or me on social media to be alerted of the next post. If you'd like to comment, please share your views below, ta.

Thursday, 25 February 2021

Some background for 'Distinguishing online academic bullying: identifying new forms of harassment in a dissenting Emeritus Professor’s case'

Written for academics and researchers interested in academic cyberbullies, peer victimisation, scientific suppression and intellectual harassment.

The Heliyon journal has published Distinguishing online academic bullying: identifying new forms of harassment in a dissenting Emeritus Professor’s case. It is an open-access article that's freely available from sciencedirect.com/science/article/pii/S240584402100431X.

Adjunct Professor Tim Noakes and I wrote it to foreground how the shift of academic discourse to online spaces without guardians presents cyberbullies from Higher Education (HE) with a novel opportunity to harass their peers and other vulnerable recipients. We argue that cyberbullying from HE employees is a neglected phenomenon, despite the dangers it can pose to academic free speech, as well as other negative outcomes.
Ringleader of the tormentors graphic by Create With
Background to the Online Academic Bullying (OAB) research project
The inspiration for researching OAB as a distinctive phenomenon arose during the lead author's presentation to a research group in November 2018. In this talk, I presented on designing new emojis as conversation stoppers for combating trolling (SAME, 2018). The attendees' questions in response suggested the necessity of researching how cyber harassment plays out in academic disputes on social media platforms.

My original PostDoc research proposal aimed to research emoji design projects in Africa, whilst also working on the creative direction for Shushmoji™ emoji sticker sets (for example, Stop, academic bully! at https://createwith.net/academic.html). This particular set was inspired by the cyber harassment of insulin resistance model of chronic ill-health (IRMCIH) experts on Twitter by defenders of the dominant “cholesterol” model of chronic disease development (CMCDD).

As I began my PostDoc, a review of the academic cyberbullying literature produced a surprising result. There seemed to be very little conceptual or empirical research concerning academic employees who harass scholars online. In response to a neglected negative phenomenon that would seem highly important to study, my PostDoc's focus shifted to initiating the Online Academic Bullying (OAB) research project.

Nitpicker who does not add to the debate graphic from Create With
Professor Noakes and I then set up the new research theme, Academic free speech and digital voices, under The Noakes Foundation. Under this theme, the OAB research project's first stage (2018-2021) has focused on proposing a theoretically grounded conceptualisation of a recipient's experiences of OAB. We wrote 'Distinguishing online academic bullying' over a two-year period in which the theoretical lens was refined to better address OAB's distinguishing characteristics. Our manuscript underwent four major rewrites and three revisions to accommodate diverse reviewers' and an editor's constructive criticism.

Academic free speech and digital voices
Many studies in the field of scientific communication have focused on the dissemination of medical disinformation. By contrast, very few seem to explore the legitimate use of digital voice by scientific experts and health professionals who must work around scientific suppression in HE. In the Health Sciences, scientific suppression and intellectual harassment are particularly dangerous where they:
  1. entrench an outdated and incorrect scientific model;
  2. suppress scholarly debate over rival models;
  3. continue to support poor advice and interventions that result in sub-par outcomes versus proven and relatively inexpensive alternatives.

It would seem unethical to suppress the testing of scientific models and the development of academic knowledge that may greatly benefit public health. Nevertheless, this continues to occur in HE regarding the academic free speech of IRMCIH scholars. Although there is growing evidence for their model and the efficacy of its interventions, the rival blood lipid hypothesis and CMCDD model for the causation of heart disease largely remains the only one taught and researched by medical schools. There are few examples of legitimate debates between IRMCIH and CMCDD scholars in HE (Lustig, 2013; Taubes, 2007; 2011; 2017; 2020; Teicholz, 2014). Opportunities for IRMCIH research and teaching in HE are heavily constrained by scientific suppression of CMCDD dissenters (Noakes and Sboros, 2017, 2019).

In HE, scientific suppression can be understood as a normative category of impedance that is unfair, unjust and counter to the standards of academic behaviour (Delborne, 2016). Such impedance is apparent in the treatment of dissenting scholars who challenge the CMCDD model, then become ostracised from the Health Sciences as "heretics". In theory, universities should encourage academic free speech and robust debate on the CMCDD versus IRMCIH models. By contrast, in HE practice, IRMCIH scholars cannot exercise their rights to academic free speech.

Academic freedom is a special right of academics: a right to freedom from prescribed orthodoxy in their teaching, research, and lives as academics (Turk, 2014). This right seeks to avoid corruption from the vested interests of other parties, which range from scholarly peers and university board members to corporate donors. This right is foundational in supporting scholars to advance and expand knowledge, for example by accommodating diverse voices (Saloojee, 2013).

Academic free speech is a failed ideal where IRMCIH scholars do not enjoy opportunities to research and teach this emergent paradigm. Instead, dissenting IRMCIH scientists must negotiate scientific suppression by a multitude of entrenched networks and embedded academics. These have varied stakes in the medical establishment's highly profitable “cholesterol” model and its costly, but largely ineffective, interventions. This orthodox regime heavily constrains the IRMCIH model's development, whilst applying double standards for evidence and proof. Such demands typically ignore the sociological context of scientific knowledge, which flags key constraints, including:
  1. The relatively minuscule funding for IRMCIH studies
  2. Many unethical "ethical" or pseudo-skeptical "scientific" arguments used to delay IR research projects
  3. Long-standing anti-IRMCIH, pro-CMCDD scholarly citation rings
  4. Academic mobs that defame IR scholars and create a chilling effect for their colleagues
  5. Pseudoskeptic academics, politicians and "science" journalists who may unwittingly serve as agents of industry by diverting public attention from Fiat science™ and consensus silence to IRMCIH "failures".

Online academic bullying as an emergent extension of scientific censorship 
Mob dogpiler graphic from Create With

A contemporary form of censorship exists that denies attention and stifles opportunities for turning scholarship and innovation into better options for public policy (Tufekci, 2017). For IRMCIH experts, cyber harassment has emerged as a 21st century form of attention-denial that CMCDD's defenders leverage. They apply a range of strategies to stifle dissident scientists' and health experts' outreach to online audiences and affinity networks. As this 21st century censorship matrix illustrates, cyber harassment is just one of many visible and direct strategies that powerful networks have used to censor dissenting IRMCIH scholars in HE.



Given the wide range of vitriolic critics within and outside academia, we focused on the case of an Emeritus Professor as a convenience sample. He had first-hand exposure to OAB for almost a decade across varied social media platforms. In 'Distinguishing online academic bullying', OAB is clearly differentiated from the traditional forms of bullying (e.g. academic mobbing) that he had to negotiate after taking the unorthodox, but scientific, position for IRMCIH. Major aspects are shown in the article's abstract graphic, below: academic cyberbullies' strategies in OAB may range from misrepresenting an employer's position as "official" to hypercritical academic bloggers whose chains of re-publication become sourced for defamatory online profiles.

Distinguishing online academic bullying abstract graphic

There were also many minor forms that we may cover in a future article. For example, scholars could signal ostracism in small ways, such as removing the Emeritus Professor as a co-contributor on their Google Scholar profiles.

Reporting on cyber-victimisation with routine activity theory
While writing our article, we also developed a reporting instrument for OAB recipients. Targets of academic cyberbullies can use a Google form at https://bit.ly/3pnyE6w to develop reports on their experiences of cyber harassment. They can share these with decision- and policy-makers at the institutions they are targeted from, as well as with our OAB research project. This reporting instrument is based on Routine Activity Theory (RAT) and is being refined with IRMCIH and other experts' feedback.

The problem of cyber harassment is not easy to fix, since it requires individual, systemic and collective action (Hodson, Gosse, Veletsianos, & Houlden, 2018). We hope that spotlighting OAB’s distinctive attacks will raise awareness amongst researchers and institutional policy makers. We argue that it is important for HE employers and related professional organisations to consider strategies that can guard against academic cyberbullies and their negative impacts.

Academic myopia graphic from Create With

Credits
Stop, academic bully! shushmoji™ graphics courtesy of Create With, Cape Town.

Acknowledgements
The authors would like to thank the funders, software developers, researchers and Heliyon's reviewers who have made the best version of this article possible: 

The Noakes Foundation’s project team of Jayne Bullen, Jana Venter, Alethea Dzerefos Naidoo and Sisipho Goniwe have contributed to expanding the scope of the researchers’ OAB project. The software development contributions of Yugendra ‘Darryl’ Naidoo and Cheryl Mitchell, and the support of Alwyn van Wyk (Younglings) and the developers Tia Demas, Ruan Erasmus, Paul Geddes, Sonwabile Langa and Zander Swanepoel, have enabled the researchers to gain the broadest view of Twitter’s historical data. The feedback from the South African Multimodality in Education research group after the authors shared the Emeritus Professor’s case indirectly suggested the topic of this article. Mark Phillips and Dr Cleo Protogerou’s feedback on the ensuing manuscripts proved invaluable in guiding it into a tightly-focused research contribution. We would also like to thank CPUT’s Design Research Activities Workgroup (DRAW) for its feedback on a progress presentation, especially Professor Alettia Chisin, Dr Daniela Gachago and Associate Professor Izak van Zyl. He and Adjunct Professor Patricia Harpur provided valuable guidance that helped shape the OAB reporting tool into a productive research instrument.
