
Friday, 26 July 2024

Content suppression techniques against dissent in the Fifth Estate - examples of COVID-19 censorship on social media

Written for researchers and others interested in the many methods available to suppress dissidents' digital voices. These techniques support contemporary censorship online, posing a digital visibility risk for dissidents challenging orthodox narratives in science.


The Fourth Estate emerged in the eighteenth century as the printing press enabled the rise of an independent press that could help check the power of governments, business, and industry. In similar ways, the internet supports a more independent collectivity of networked individuals, who contribute to a Fifth Estate (Dutton, 2023). This concept acknowledges how a network power shift results from individuals who can search, create, network, collaborate, and leak information in strategic ways. Such affordances can enhance individuals' informational and communicative power vis-à-vis other actors and institutions. A network power shift enables greater democratic accountability, whilst empowering networked agents in their everyday life and work. Digital platforms enable online content creators to generate and share news that digital publics amplify via networked affordances (such as 💌 likes, "quotes" and sharing via #hashtag communities).


In an ideal world, social media platforms would be considered a public accommodation, and the Fifth Estate's users would benefit from legal protection of their original content, including strong measures against unjustified suppression and censorship. The latter should recognise the asymmetric challenges that individual dissenters, whistleblowers and their allies must confront in contradicting hegemonic social forces that can silence their opponents' (digital) voices. As recently evidenced in the COVID-19 "pandemic", the Twitter Files and other investigations reveal how multinational pharmaceutical companies, unelected global "health" organisations, national governments, social media and traditional broadcast companies all conspired to silence dissent that opposed costly COVID-19 interventions. Regardless of their levels of expertise, critics who questioned this narrative in the Fourth or Fifth Estate were forced to negotiate censorship for the wrong-think of sharing "dangerous" opinions.

Such sanctions reflect powerful authorities' interests in controlling (scientific) language, the window of permissible opinion, and the social discourses that the public might select from, or add to. Under the pretext of public "safety", the censorship industrial complex strong-arms broadcast media and social media companies into restricting dissidents' voices as "misinformation" that is "unsafe". Facing no contest, the views of powerful officialdoms earn frequent repetition within a tightly controlled, narrow narrative window. At the same time, legitimate reports of mRNA injuries are falsely redefined as "malinformation", and censored.
 
Consequently, instead of a pluralist distribution of power in the Fifth Estate that can support vital expression, powerful authorities are enforcing internet policy interventions that increasingly surveil and censor users' digital voices. Infodemic scholars whose work endorses such suppression would seem ignorant of how problematic it is to define disinformation in general. This is particularly true in contemporary science, where knowledge monopolies and research cartels may be dominant, where dissenting minds should be welcomed for great science, and where a flawed scientific consensus can itself be dangerous. Silencing dissent has important public health ramifications, particularly where the potential for suggesting, and exploring, better interventions becomes closed. Science, health communication and media studies scholars may also ignore the inability of medical experts to accurately define what disinformation is, particularly where global policy makers face conflicts of interest (as in the World Health Organisation's support for genetic vaccines).

Censorship and the suppression of legitimate COVID-19 dissent are dangerously asymmetrical: health authorities already benefit from ongoing capital cascades whose exchange largely serves their interests. Such exchanges span financial, social, cultural, symbolic and even other (e.g. embodied) forms of capital (Bourdieu, 1986; 2018). By contrast, individual critics can quickly be silenced by attacks on their limited capital, effectively preventing them from exercising the basic right to free speech and from delivering sustained critiques. A related concern is that the censorial actions of artificial intelligence designers and digital platform moderators are often opaque to a platform's users. Original content creators may be unaware that they will be de-amplified for sharing unorthodox views, as algorithms penalise the visibility of content on 'banned' lists, and of any accounts that amplify "wrongthink".

Content suppression on social media is an important, but neglected, topic, and this post strives to flag the wide variety of techniques that may be used in digital content suppression. Techniques are listed in order of seemingly increasing severity:

#1 Covering up algorithmic manipulation

Social media users who are unaware of censorship are unlikely to be upset about it (Jansen & Martin, 2015). Social media platforms have not been transparent about how they manipulated their recommender algorithms to provide higher visibility for the official COVID-19 narrative, or how they crowded out original contributions from dissenters on social media timelines and in search results. Such boosting ensured that dissent was seldom seen, or was perceived as a fringe minority's concern. As Dr Robert Malone tweeted, the computational algorithm-based method now 'supports the objectives of a Large Pharma-captured and politicised global public health enterprise'. Social media algorithms have come to serve a medical propaganda purpose that crafts and guides the 'public perception of scientific truths'. While algorithmic manipulation underpins most of the techniques listed below, it is concealed from social media platform users.
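To make the mechanism concrete, below is a minimal, hypothetical sketch of how such concealed de-amplification could work inside a ranking pipeline. The names, lists and weightings are assumptions made purely for illustration; no platform has disclosed an implementation like this.

```python
# Hypothetical sketch of concealed de-amplification in a recommender feed.
# All names, lists and weightings are illustrative assumptions.

VISIBILITY_PENALTY = 0.1  # assumed multiplier applied to 'banned' content

topic_blacklist = {"vaccine injury", "lockdown harms"}  # assumed banned topics
account_blacklist = {"@dissenting_researcher"}          # assumed flagged accounts

def rank_score(post: dict) -> float:
    """Return a feed-ranking score; blacklisted topics and authors are
    quietly down-weighted, invisibly to the content's creator."""
    base = post["likes"] + 2 * post["reshares"]  # toy engagement score
    if post["author"] in account_blacklist or post["topic"] in topic_blacklist:
        return base * VISIBILITY_PENALTY
    return base

posts = [
    {"author": "@dissenting_researcher", "topic": "vaccine injury",
     "likes": 900, "reshares": 300},   # popular dissent: base score 1500
    {"author": "@official_source", "topic": "public health",
     "likes": 500, "reshares": 100},   # official content: base score 700
]
# Despite far higher engagement, the dissenting post now ranks below the
# official one (150 vs 700), and its author is never told why.
timeline = sorted(posts, key=rank_score, reverse=True)
```

Because any such penalty would be applied server-side before the timeline is rendered, neither the creator nor the audience could observe that a filter was applied; this opacity is what technique #1 conceals.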


#2 Fact choke versus counter-narratives

A fact choke involves burying unfavourable commentary amongst a myriad of other content. This term was coined by Margaret Anna Alice to describe how "fact checking" was abused to suppress legitimate dissent. An example she tweeted about was the BBC's Trusted News Initiative warning in 2019 that anti-vaxxers were gaining traction across the internet, requiring algorithmic intervention to neutralise "anti-vaccine" content. In response, social media platforms were urged to flood users' screens with repetitive pro-(genetic)-vaccine messages normalising these experimental treatments. Simultaneously, messaging attacked alternate treatments that posed a threat to the vaccine agenda. Fact chokes also included 'warning screens' that were displayed before users could click on content flagged by "fact checkers" as "misinformation".

With the "unvaccinated" demonised by the mainstream media to create division, susceptible audiences were nudged to become vaccine-compliant to confirm their compassionate virtue. At the same time, to retain belief in mRNA genetic vaccine "safety", personal accounts, aggregated reports (such as "died suddenly" on markcrispinmiller.substack.com) and statistical reports (see Cause Unknown) of genetic vaccine injuries were suppressed as "malinformation", despite their factual accuracy. Other "controversial content", such as medical professionals' criticism of dangerous COVID-19 treatment protocols (see What the Nurses Saw) or criticism of a social media platform's policies (such as the application of lifetime bans, and critiques of platform speech codes), has been algorithmically suppressed.

Critical commentary may also be drowned out when platforms, such as YouTube, bury long-format interviews amongst short 'deep fake' videos. These can range from videos featuring comments the critic never made, to fake endorsements from cybercriminals (as described on X by Whitney Webb, or by Professor Tim Noakes on YouTube).

#3 Title-jacking

For the rare dissenting content that achieves high viewership, another challenge is that title-jackers will leverage this popularity for very different outputs released under exactly the same (or very similar) production titles. This makes it harder for new viewers to find the original work. For example, Liz Crokin's 'Out of the Shadows' documentary describes how Hollywood and the mainstream media manipulate audiences with propaganda. Since this documentary's release, several unrelated videos have been published under the same title.


#4 Blacklisting trending dissent

Social media search engines typically allow their users to see what is currently the most popular content. On Twitter, dissenting hashtags and keywords that proved popular enough to feature amongst trending content were quickly added to a 'trend blacklist' that hid unorthodox viewpoints. Tweets posted by accounts on this blacklist are prevented from trending, regardless of how many likes or retweets they receive. Stanford Health Policy professor Jay Bhattacharya argues he was added to this blacklist for tweeting about a focused alternative to the indiscriminate COVID-19 lockdowns that many governments followed: in particular, the Great Barrington Declaration, which he wrote with Dr Sunetra Gupta and Dr Martin Kulldorff, and which attracted over 940,000 supporting signatures.
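A minimal sketch of how such a blacklist might operate is given below, assuming a simple engagement-counting trends pipeline. The account name and counting logic are hypothetical illustrations, not Twitter's disclosed implementation.

```python
# Hypothetical sketch of a 'trend blacklist': engagement from blacklisted
# accounts is excluded before trending topics are computed, so their posts
# can never trend no matter how popular they become.

trend_blacklist = {"@blacklisted_dissident"}  # assumed blacklisted account

def trending_topics(recent_posts: list[dict]) -> list[str]:
    """Rank hashtags by engagement, silently skipping blacklisted authors."""
    counts: dict[str, int] = {}
    for post in recent_posts:
        if post["author"] in trend_blacklist:
            continue  # this post's engagement never counts towards a trend
        for tag in post["hashtags"]:
            counts[tag] = counts.get(tag, 0) + post["likes"] + post["retweets"]
    return sorted(counts, key=lambda tag: counts[tag], reverse=True)
```

The key property is that the exclusion happens upstream of the visible trends list, so users see no evidence that a popular hashtag was withheld.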

After its publication, all three authors experienced censorship on search engines (Google deboosted results for the declaration), on social media platforms (Facebook temporarily removed the declaration's page, while Reddit removed links to its discussion) and on video (YouTube removed a roundtable discussion with Florida's Governor Ron DeSantis, whose participants questioned the efficacy and appropriateness of requiring children to wear face masks).

#5 Blacklisting content due to dodgy account interactions or external platform links

Limited visibility filtering also occurs when posts are automatically commented on by pornbots, or attract engagement from other undesirable accounts. For example, posts mentioning keywords such as 'vaccine' or 'Pfizer' may receive automated forms of engagement, and posts receiving such "controversial" engagement are then added to a list that ensures their censorship (see 32 minutes into Alex Kriel's talk on 'The Role of Fake Bot Traffic on Twitter/X').
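The logic described above amounts to a guilt-by-association filter, sketched below. The account list and threshold are assumptions made purely for illustration.

```python
# Hypothetical sketch of technique #5: a post is blacklisted once 'enough'
# of its repliers appear on a list of undesirable (e.g. bot) accounts.
# The list and threshold are illustrative assumptions.

BOT_ACCOUNTS = {"@pornbot_01", "@spam_reply_77"}  # assumed undesirable accounts
BOT_REPLY_THRESHOLD = 3                           # assumed trigger level

def should_blacklist(post: dict) -> bool:
    """Flag a post whose replies include too many undesirable accounts."""
    bot_replies = sum(
        1 for reply in post["replies"] if reply["author"] in BOT_ACCOUNTS
    )
    return bot_replies >= BOT_REPLY_THRESHOLD
```

Note the perverse incentive a filter like this would create: anyone controlling a swarm of flagged accounts could have a target's posts censored simply by replying to them.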

Social media platforms' algorithms may also blacklist content from external platforms that are not viewed as credible sources (for example, alternative or 'alt-right' media), or that are seen as competing rivals (X penalises the visibility of posts that feature links to external platforms).

#6 Making content unlikeable and unsharable

A newsletter from Dr Steven Kirsch (29.05.2024) described how a Rasmussen Reports video on YouTube had its 'like' button removed. As Figure 1 shows, users could only select a 'dislike' option. This button was later restored for www.youtube.com/watch?v=NS_CapegoBA.

Figure 1. YouTube only offers a 'dislike' option for a Rasmussen Reports video on vaccine deaths - sourced from Dr Steven Kirsch's newsletter (29.05.2024)

Social media platforms may also prevent the resharing of such content, or prohibit links to external websites that are not supported by these platforms' backends, or that have been flagged for featuring inappropriate content.


#7 Disabling public commentary

Social media platforms may limit the mentionability of content by not offering the opportunity to quote public posts. Users' right-to-reply may be blocked, and critiques may be concealed by preventing them from being linked to from replies.

#8 Making content unsearchable within, and across, digital platforms

Social media companies applied search blacklists to prevent their users from finding blacklisted content. Content contravening COVID-19 "misinformation" policies was hidden from search users. For example, Twitter applied a COVID-19 misleading information policy that ended in November 2022. In June 2023, Meta began to end its policy for curbing the spread of "misinformation" related to COVID-19 on Facebook and Instagram.

#9 Rapid content takedowns

Social media companies could ask users to take down content that was in breach of COVID-19 "misinformation" policies, or automatically remove such content without its creators' consent. In 2021, Meta reported that it had removed more than 12 million pieces of content on COVID-19 and vaccines that global health experts had flagged as misinformation. YouTube has a medical misinformation policy that follows the guidance of the World Health Organisation (WHO) and local health authorities. In June 2021, YouTube removed a podcast in which the evidence of a reproductive hazard of mRNA shots was discussed between Dr Robert Malone and Steve Kirsch on Prof Bret Weinstein's DarkHorse channel. Teaching material that critiqued genetic vaccine efficacy data was automatically removed within seconds for going against its guidelines (see Shir-Raz, Elisha, Martin, Ronel & Guetzkow, 2022). The WHO reports that its guidance contributed to 850,000 videos featuring harmful or misleading COVID-19 misinformation being removed from YouTube between February 2020 and January 2021.

PropagandaInFocus describes how LinkedIn users are subject to a misinformation policy that prevents the sharing of content which 'directly contradicts guidance from leading global health organisations and public health authorities'. Dr David Thunder shared an example of his LinkedIn post being automatically removed for (1) sharing a scientific study confirming that children are at negligible risk of suffering severe disease from COVID-19, and (2) questioning the FDA's decision to grant Emergency Use Authorisation for COVID-19 vaccines for children as young as 6 months old. Although many other studies confirm both positions, LinkedIn took this post down and threatened to restrict his account.

#10 Creating memory holes

Extensive content takedowns can serve a memory-holing aim, whereby facts and memories of the past become suppressed, erased or forgotten for political convenience. Long after the COVID-19 "pandemic", an Orwellian Ministry of Truth continues to memory-hole the failures of many health authority decision makers, the mainstream media and most national governments. As discussed here on YouTube by Mary Lou Singleton, Meghan Murphy and Jennifer Sey, such failures included: mandating masking and school closures for children (who were never at risk); never questioning the official COVID-19 statistics (such as CNN's 'death ticker'); and straight quoting Pfizer press releases as "journalism", whilst mocking individuals who chose to 'do their own research'.

Dr Mark Changizi presents four science moments on memory-holing. In X video 1 and X video 2, he highlights how memory-holing on social media is very different from its traditional form. He uses X (formerly Twitter) as an autobiographical tool, creating long threads that serve as a form of visual memory that he can readily navigate. The unique danger of a social media account's removal or suspension for censorship extends beyond losing one's history of use on that platform: it includes all 'mentions' related to its content (ranging from audience likes, to their reply and quote threads). This changes the centrally-controlled communication history of what has occurred on a social media platform. Such censorship violates the free speech rights of all persons who have engaged with the removed account, even its fiercest critics, as they also lose an historical record of what they said.

By contrast, decentralised publications (such as hardcopy publications) are very hard for authorities to memory-hole, since sourcing all hardcopies can be nearly impossible for censors. While winners can write history, historians who have access to historical statements can rewrite it. As COVID-19 memory-holing on social media platforms challenges such rewriting, its users must think about creating uncensorable records (such as the book Team Reality: Fighting the Pandemic of the Uninformed). In X video 3, he highlights that freedom of expression is a liability, as expressions push reputation chips onto the table. The more claims one stakes, the greater the risk to one's reputation if they prove wrong. So, another aspect of memory-holing lies in individuals' potential desire to memory-hole their own platform content, should they prove to be wrong. In X video 4, Dr Changizi also spotlights that the best form of memory-holing is self-censorship, whereby individuals see other accounts being suspended, or removed, for expressing particular opinions. The witnesses then decide not to express such opinions, since doing so might endanger their ability to express other opinions. While such an absence of speech is immeasurable, it would seem the most powerful memory-holing technique. Individuals who silence their own voices do not create history.

#11 Rewriting history

Linking back to the fact choke technique are attempts at historical revisionism by health authoritarians and their allies. An example of this is the mainstream media's claims that critics of the orthodox narrative were "right for the wrong reasons" regarding the failure of COVID-19 lockdowns, the many negative impacts of closing schools and businesses, and the enforcement of mandatory vaccination policies.

#12 Concealing the motives behind censorship, and who its real enforcers are

Social media platforms not only hide algorithmic suppression from users, but may also be misused to hide from users the full rationale for censorship, or who is ultimately behind it. Professor David Hughes prepared a glossary of deceptive terms and their true meanings (2024, pp. 194-195) to highlight how the meaning of words is damaged by propaganda. A term resonating with technique #9 is "Critical": pretending to speak truth to power whilst turning a blind eye to deep state power structures.

The official narrative positioned COVID-19 as (i) a pandemic that had zoonotic (animal-to-human) origins, and alternate explanations were strongly suppressed. As this is the least likely explanation, other, more plausible hypotheses merit serious investigation. SARS-CoV-2 might have stemmed from (ii) an outbreak at the Wuhan lab's "gain of function" research, or (iii) a deliberate release in several countries from a biological weapons research project. (iv) Critics of these three explanations allege that a prior endemicity was 'discovered' by an outbreak of testing. Some critics even dispute the existence of SARS-CoV-2, alleging that (v) viral transmission is unproven, and that the entire COVID-19 "pandemic" is a psychological propaganda operation.

By silencing dissident views like these, social media platforms stop their users from learning about the many legitimate COVID-19 debates that are taking place between experts. This is not a matter of keeping users "secure" from "unsafe" knowledge, but rather networked publics being targeted for social control in the interests of powerful conspirators. In particular, the weaponised deception of social media censorship suits the agenda of the Global Public-Private Partnership (GPPP or G3P), and its many stakeholders. As described by Dr Joseph Mercola in The Rise of the Global Police State, each organisational stakeholder plays a policy enforcement role in a worldwide network striving to centralise authority at a global level.

Figure 2. Global Public-Private Partnership (G3P) stakeholders - sourced from IainDavis.com (2021) article at https://unlimitedhangout.com/2021/12/investigative-reports/the-new-normal-the-civil-society-deception.

G3P stakeholders have a strong stake in growing a censorship industrial complex to thwart legitimate dissent. Critiques of the official COVID-19 "pandemic" measures are just one example. The censorship industrial complex also strives to stifle robust critiques of (1) climate change "science", (2) "gender affirming" (transgender) surgery, (3) mass migration (aka the Great Replacement), and (4) rigged "democratic" elections, amongst other "unacceptable" opinions. Rather than being for the public's good, such censorship actually serves the development of a transhumanist, global technocratic society. The digital surveillance dragnet of the technocracy suits the interests of a transnational ruling class in maintaining social control of Western society, and other vassals. This will be expanded upon in a future post tackling the many censorship and suppression techniques that are being used against accounts.

N.B. This post is a work-in-progress and the list above is not exhaustive. Kindly comment to recommend techniques that should be added; suggestions for salient examples are most welcome.

Thursday, 25 February 2021

Some background for 'Distinguishing online academic bullying: identifying new forms of harassment in a dissenting Emeritus Professor’s case'

Written for academics and researchers interested in academic cyberbullies, peer victimisation, scientific suppression and intellectual harassment.

The Heliyon journal has published Distinguishing online academic bullying: identifying new forms of harassment in a dissenting Emeritus Professor’s case. It is an open-access article that's freely available from sciencedirect.com/science/article/pii/S240584402100431X.

Adjunct Professor Tim Noakes and I wrote it to foreground how the shift of academic discourse to online spaces without guardians presents cyberbullies from Higher Education (HE) with a novel opportunity to harass their peers and other vulnerable recipients. We argue that cyberbullying from HE employees is a neglected phenomenon, despite the dangers it can pose to academic free speech, as well as other negative outcomes.
Ringleader of the tormentors graphic by Create With
Background to the Online Academic Bullying (OAB) research project
The inspiration for researching OAB as a distinctive phenomenon arose during the lead author's presentation to a research group in November 2018. In this talk, I presented the design of new emojis as conversation-stoppers for combating trolling (SAME, 2018). The attendees' questions in response suggested the necessity of researching how cyber harassment plays out in academic disputes on social media platforms.

My original PostDoc research proposal aimed to research emoji design projects in Africa, whilst also working on the creative direction for Shushmoji™ emoji sticker sets (for example, Stop, academic bully! at https://createwith.net/academic.html). This particular set was inspired by the cyber harassment of insulin resistance model of chronic ill-health (IRMCIH) experts on Twitter by defenders of the dominant "cholesterol" model of chronic disease development (CMCDD).

As I began my PostDoc, a review of the academic cyberbullying literature produced a surprising result. There seemed to be very little conceptual or empirical research concerning academic employees who harass scholars online. In response to a neglected negative phenomenon that would seem highly important to study, my PostDoc's focus shifted to initiating the Online Academic Bullying (OAB) research project.

Nitpicker_who_does_not_add_to_the_debate graphic from Create With
Professor Noakes and I then set up the new research theme, Academic free speech and digital voices, under The Noakes Foundation. Under this theme, the OAB research project's first stage (2018-2021) focused on proposing a theoretically grounded conceptualisation of a recipient's experiences of OAB. We wrote 'Distinguishing online academic bullying' over a two-year period in which the theoretical lens was refined to better address OAB's distinguishing characteristics. Our manuscript underwent four major rewrites and three revisions to accommodate diverse reviewers' and an editor's constructive criticism.

Academic free speech and digital voices
Many studies in the field of scientific communication have focused on the dissemination of medical disinformation. By contrast, very few seem to explore the legitimate use of digital voice by scientific experts and health professionals who must work around scientific suppression in HE. In the Health Sciences, scientific suppression and intellectual harassment are particularly dangerous where they:
  1. entrench an outdated and incorrect scientific model;
  2. suppress scholarly debate over rival models;
  3. continue to support poor advice and interventions that result in sub-par outcomes versus proven and relatively inexpensive alternatives.

It would seem unethical to suppress the testing of scientific models and the development of academic knowledge that may greatly benefit public health. Nevertheless, this continues to occur in HE regarding the academic free speech of IRMCIH scholars. Although there is growing evidence for their model and the efficacy of its interventions, the rival blood lipid hypothesis and CMCDD model for the causation of heart disease largely remains the only one taught and researched by medical schools. There are few examples of legitimate debates between IRMCIH and CMCDD scholars in HE (Lustig, 2013; Taubes, 2007; 2011; 2017; 2020; Teicholz, 2014). Opportunities for IRMCIH research and teaching in HE are heavily constrained by the scientific suppression of CMCDD dissenters (Noakes and Sboros, 2017; 2019).

In HE, scientific suppression can be understood as a normative category of impedance that is unfair, unjust and counter to the standards of academic behaviour (Delborne, 2016). Such impedance is apparent in the treatment of dissenting scholars who challenge the CMCDD model, and then become ostracised from the Health Sciences as "heretics". In theory, universities should encourage academic free speech and robust debate on the CMCDD versus IRMCIH models. In HE practice, by contrast, IRMCIH scholars cannot exercise their rights to academic free speech.

Academic freedom is a special right of academics: a right to freedom from prescribed orthodoxy in their teaching, research, and lives as academics (Turk, 2014). This right seeks to avoid corruption from the vested interests of other parties, ranging from scholarly peers and university board members to corporate donors. This right is foundational in supporting scholars to advance and expand knowledge, for example by accommodating diverse voices (Saloojee, 2013).

Academic free speech is a failed ideal where IRMCIH scholars do not enjoy opportunities to research and teach this emergent paradigm. Instead, dissenting IRMCIH scientists must negotiate scientific suppression by a multitude of entrenched networks and embedded academics. These have varied stakes in the medical establishment's highly profitable "cholesterol" model and its costly, but largely ineffective, interventions. This orthodox regime heavily constrains the IRMCIH model's development, whilst applying double standards for evidence and proof. These demands typically ignore the sociological context of scientific knowledge, which flags key constraints, including:
  1. The relatively minuscule funding for IRMCIH studies
  2. The many unethical "ethical" or pseudo-skeptic "scientific" arguments used for delaying IR research projects
  3. Long-standing anti-IRMCIH, pro-CMCDD scholarly citation rings
  4. Academic mobs that defame IR scholars and create a chilling effect for their colleagues
  5. Pseudoskeptic academics, politicians and "science" journalists who may unwittingly serve as agents of industry by diverting public attention from Fiat science™ and consensus silence to IRMCIH "failures".

Online academic bullying as an emergent extension of scientific censorship 
Mob dogpiler graphic from Create With

A contemporary form of censorship exists that denies attention and stifles opportunities for turning scholarship and innovation into better options for public policy (Tufekci, 2017). For IRMCIH experts, cyber harassment has emerged as a 21st-century form of attention-denial that CMCDD's defenders leverage. They apply a range of strategies to stifle dissident scientists' and health experts' outreach to online audiences and affinity networks. As this 21st-century censorship matrix illustrates, cyber harassment is just one of many visible and direct strategies that powerful networks have used to censor dissenting IRMCIH scholars in HE.



Given the wide range of vitriolic critics within and outside academia, we focused on the case of an Emeritus Professor as a convenience sample. He had first-hand exposure to OAB for almost a decade across varied social media platforms. In 'Distinguishing online academic bullying', OAB is clearly differentiated from the traditional forms of bullying (e.g. academic mobbing) that he had to negotiate after taking the unorthodox, but scientific, position for IRMCIH. Major aspects are shown in the article's abstract graphic, below. Academic cyberbullies' strategies in OAB may range from misrepresenting an employer's position as "official", to hypercritical academic bloggers whose chains of re-publication become sourced for defamatory online profiles.

Distinguishing online academic bullying abstract graphic

There were also many minor forms that we may cover in a future article. For example, scholars could signal ostracism in small ways, such as removing the Emeritus Professor as a co-contributor on their Google Scholar profiles.

Reporting on cyber-victimisation with routine activity theory
While writing our article, we also developed a reporting instrument for OAB recipients. Targets of academic cyberbullies can use a Google form at https://bit.ly/3pnyE6w to develop reports on their experiences of cyber harassment. They can share it with decision- and policy-makers at the institutions they are targeted from, as well as our OAB research project. This reporting instrument is based on Routine Activity Theory (RAT) and is being refined with IRMCIH and other experts' feedback. 

The problem of cyber harassment is not easy to fix, since it requires individual, systemic and collective action (Hodson, Gosse, Veletsianos, & Houlden, 2018). We hope that spotlighting OAB’s distinctive attacks will raise awareness amongst researchers and institutional policy makers. We argue that it is important for HE employers and related professional organisations to consider strategies that can guard against academic cyberbullies and their negative impacts.

Academic myopia graphic from Create With

Credits
Stop, academic bully! shushmoji™ graphics courtesy of Create With, Cape Town.

Acknowledgements
The authors would like to thank the funders, software developers, researchers and Heliyon's reviewers who have made the best version of this article possible: 

The Noakes Foundation’s project team of Jayne Bullen, Jana Venter, Alethea Dzerefos Naidoo and Sisipho Goniwe have contributed to expanding the scope of the researchers’ OAB project. The software development contributions of Yugendra ‘Darryl’ Naidoo and Cheryl Mitchell, the support of Alwyn van Wyk (Younglings), and the developers Tia Demas, Ruan Erasmus, Paul Geddes, Sonwabile Langa and Zander Swanepoel have enabled the researchers to gain the broadest view of Twitter’s historical data. The feedback from the South African Multimodality in Education research group, after the authors shared the Emeritus Professor’s case, indirectly suggested the topic of this article. Mark Phillips and Dr Cleo Protogerou’s feedback on the ensuing manuscripts proved invaluable in guiding it into a tightly-focused research contribution. We would also like to thank CPUT’s Design Research Activities Workgroup (DRAW) for its feedback on a progress presentation, especially Professor Alettia Chisin, Dr Daniela Gachago and Associate Professor Izak van Zyl. He and Adjunct Professor Patricia Harpur provided valuable guidance that helped shape the OAB reporting tool into a productive research instrument.
