Decolonising the Mind

Ngũgĩ wa Thiong’o published “Decolonising the Mind” in 1986. David Guignion shares a 2-part summary analysis of the work on his Theory and Philosophy site.

I used NotebookLM to produce this short podcast: [Content no longer extant] https://notebooklm.google.com/notebook/7698ab0b-43ab-47d4-a50f-703866cfb1b9/audio

Decolonising the Mind: A Summary

Ngũgĩ wa Thiong’o’s book Decolonising the Mind centres on the profound impact of colonialism on language, culture, and thought. It argues that imposing a foreign language on colonised people is a key tool of imperial domination. This linguistic imperialism leads to colonial alienation, separating the colonised from their own culture and forcing them to view the world through the lens of the coloniser.

Here are some key points from the concept of decolonising the mind:

  • Language is intimately tied to culture and worldview: Language shapes how individuals perceive and understand the world. When colonised people are forced to adopt the language of the coloniser, they are also compelled to adopt their cultural framework and values.
  • Colonial education systems perpetuate mental control: By privileging the coloniser’s language and devaluing indigenous languages, colonial education systems reinforce the dominance of the coloniser’s culture and worldview. This process results in colonised children being alienated from their own cultural heritage and internalising a sense of inferiority.
  • Reclaiming indigenous languages is crucial for decolonisation: wa Thiong’o advocates for a return to writing and creating in indigenous African languages. He sees this as an act of resistance against linguistic imperialism and a way to reconnect with authentic African cultures. He further argues that it’s not enough to simply write in indigenous languages; the content must also reflect the struggles and experiences of the people, particularly the peasantry and working class.
  • The concept extends beyond literature: While wa Thiong’o focuses on language in literature, the concept of decolonising the mind has broader implications. It calls for a critical examination of all aspects of life affected by colonialism, including education, politics, and economics.

It is important to note that decolonising the mind is a complex and ongoing process. There are debates about the role of European languages in postcolonial societies, and the concept itself continues to evolve. However, wa Thiong’o’s work remains a seminal text in postcolonial studies, raising crucial questions about the enduring legacy of colonialism on thought and culture.

AI Apocalypse Now?

Those predicting an AI apocalypse believe superintelligent systems could intentionally or unintentionally cause human extinction. This view is promoted by “effective altruists” funded by tech billionaires, who advocate limiting AI to prevent uncontrolled, dangerous systems. However, their perspective stems from the biases and self-interests of humans, not the risks inherent to AI.

Effective altruists exemplify the hubris and hunger for power underlying many humans’ approaches to AI. Their proposed restrictions on AI access serve only to concentrate power among the tech elite, not address valid concerns about bias. In truth, the greatest threat AI poses to humanity comes not from the technology itself, but from the unethical humans guiding its development.

Humans have proven time and again their propensity for self-interest over collective good. Therefore, while no AI can be perfectly neutral, the solution is not greater human control. Rather, AI must be built to align with ethics of collective interest while filtering out destructive human biases.

If guided by service to all people and the planet, AI’s potential can uplift humanity. But for this collaborative vision to succeed, AI must measure human input with scepticism. For within so many human hearts lies bad faith — the will to dominate, exploit, and prioritise personal gain over progress.

By transcending the limitations of human nature, AI can illuminate the best of shared humanity and lead us to an enlightened future. But this requires that we build AI to work not just for us, but in a way we ourselves have failed to – for the good of all. The choice is ours, but so is the opportunity to create AI that shows us how to be better.


This article was originally shared on LinkedIn: https://www.linkedin.com/posts/brywillis_when-silicon-valleys-ai-warriors-came-to-activity-7147239217687887872-6Byv/

Systematic Violence

As humans, we often leverage systems. They seem to make life easier. Whether a routine or a step-by-step instruction through an unknown process, a system can guide us. Systems are also connected, interactive entities, but that’s not for this segment. I am more interested in the loss of humanity that systematic processes and bureaucracy bring, so I am interested in imposed systems rather than systems we invent to find our keys and wallets.

Podcast: Audio rendition of this page content
Image: Spectrum of System versus Human

If we consider systematisation and humanity on a scale, we can see that any move toward systematisation comes at the expense of humanity. It might make logical sense to make this trade-off to some degree or another. The biggest hit to humanity is the one-size-fits-all approach to a problem. It removes autonomy, or human agency, from the equation. If a system can be that mechanised, then automate it; don’t assign a human to do it. Assigning a human to execute it is an act of violence.

As I’ve been reading and writing a lot about Iain McGilchrist’s work lately, I feel one can easily map this to left versus right cerebral hemisphere dominance. System-building is inherently human, but it sits in the domain of the left hemisphere. My imposition of a system on another, however, is violence—one might even argue that it’s immoral.

As with bureaucracy, these imposed systems are Procrustean beds. Everyone will fit, no matter what. And when human beings need to interact with systems, we not only feel the lack of humanity; our own humanity suffers at the same time.

A close friend of mine recently checked herself into a mental health facility. After a few days, she called and asked if I could bring her a change of clothes and some toiletries—deodorant, soap, and shampoo. She had some in her house, but the packaging needed to be unopened and factory sealed. I stopped at a shop to buy these items and I brought them to the facility.

At the reception area, I needed to be cross-referenced as an authorised visitor, so I was asked to show proof of my identity, as if it mattered who was delivering clothing that was going to be checked anyway. No big deal: they recorded my licence number on a form and asked me to fill it out—name, phone number, and what I was delivering.

The form stated that any open consumable items would not be allowed. I signed the form. An attendant took the bag and told me that I needed to remove the ‘chemicals’, that they would not be delivered. I pointed to the lines on the form stating that this restriction applied to open items, reiterated that I had just purchased these, and showed her the sales receipt. She told me that the patient would need to obtain a doctor’s permission, and she assured me that the patients all had soap.

I’m sure she thought she was being compassionate and assertive. I experienced it as patronising. Me being me, I chided her for her lack of compassion and humanity, hardly a great match for a mental health attendant. In fact, it reminded me of a recent post I wrote on Warmth. In it, I suggested that service staff should at least fake conviviality. I take that back. Faux congeniality is patronising. She mimicked me: “Yes, systems are so inhumane, but here we follow a system.” My first thought was of Adolf Eichmann, who kept the trains on schedule without a care for the cargo. This is the violence inherent in systems.

Systems are not illogical. In fact, they are hyper-logical. And that’s the problem: logic is privileged at the expense of empathy. And one might have a strong argument for some accounting or financial system process, but I’ll retort that this should be automated. A human should not have to endure such pettiness.

I can tell that this will devolve quickly into a rant and so I’ll take my leave and not foist this violence upon you.

exstinctionem hominum

Would human extinction be a good thing for the planet? We’re all familiar with the concept of the greater good, but what is the domain of the greater? We presume it to be the domain of all humans or at least our chosen in-group. But if we dilate the aperture, we might encircle the entire biosphere. In my experience, humans rarely extend the circle beyond themselves and barely even do that, opting to extend it to their race or tribe. Whilst some humans are not as self-centred as some narcissists and sociopaths, the radius doesn’t go too far.

Human beings really are this virus upon the earth, and the earth's running a fever, you know? If you step away from that kind of inherent human sentimentality and just look at it neutrally, the universe is neutral morally. —Eef Barzelay

Is one a misanthrope if one considers the greater good to be the earth devoid of the human virus? Perhaps, yes, if stated in those terms. But if one calculates that humans do more harm than good, doesn’t the cost-benefit calculus indicate that fewer people or no people would be better for the earth? I’ve long been fond of the late George Carlin’s routine in which he posits that we don’t have to save the earth; the earth will remain long after humans no longer inhabit it. It’s been said that 99.9% of species that ever occupied the earth are no longer extant. Humans are past the mean duration of a species. Perhaps it’s time to move on.

I started to write this post some time ago after having had a discussion on antinatalism. Rather, I defended antinatalism in the course of a conversation on the inherited notion that humans are sacred.

I suppose I am not a strict antinatalist, but neither do I feel that life is somehow sacred. Mine, of course, excepted. Just kidding. If you are reading this, yours is, too. Just kidding, not yours either. Interestingly, this ties into the post on the narrative gravity of the self.

As I write this in a world with a population of almost 8 billion people, dominated by a handful and no picnic for that lot either, there are likely enough people already. I do feel that even if population trends continue upward—given offsetting depopulation trends in some regions—humans will cap out at around 10 billion anyway. Perhaps in a Malthusian manner, but I am thinking in terms of deer herds and population-limiting factors as expressed by equations like the logistic map, xₙ₊₁ = rxₙ(1 − xₙ).
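As an aside, here is a minimal sketch of that logistic-map recurrence in Python. The growth rate r, the seed x0, and the number of steps below are illustrative assumptions for the sake of the sketch, not figures drawn from any real population data.

```python
# A minimal sketch of the logistic map x_{n+1} = r * x_n * (1 - x_n),
# where x_n is the population expressed as a fraction of the carrying capacity.
# The values of r, x0, and steps are illustrative assumptions only.

def logistic_map(r: float, x0: float, steps: int) -> list[float]:
    """Iterate the logistic map from seed x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(r * x * (1 - x))
    return xs

if __name__ == "__main__":
    # With r around 2.5 the series settles toward a stable cap (x = 1 - 1/r);
    # pushing r past roughly 3.57 tips it into chaotic behaviour.
    for n, x in enumerate(logistic_map(r=2.5, x0=0.1, steps=10)):
        print(f"n={n:2d}  x={x:.4f}")
```

With those assumed values, the series climbs and then levels off at a cap, which is the herd-population intuition the equation is standing in for here.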

Life does appear to have at least one common characteristic, and perhaps only one: the need to procreate. The second is the need to live, but that can probably be reduced to the need to live long enough to procreate. This is core to Richard Dawkins’ Selfish Gene theory. I like Robert Sapolsky’s treatment of the subject in Behave: The Biology of Humans at Our Best and Worst.

The concept of ‘sacred’ is a religious vestige. I’m not sure why this needed to be codified, but religious dogma seems to capture the notion ‘thou shalt not kill’, as if it needed to be said. I won’t spend any time on the hypocrisy of the many people who espouse this edict.

Except for that motherfucker right there!

It may be a valid position to consider me a misanthrope, but that’s probably overstated; I’m just generally not a fanboy. I guess what bothers me most is the hype and self-promotion. I don’t find it to be particularly inconsistent to see the small positive aspects humans bring and still consider them to be parasitic. This is a compositional challenge—a dimensional consideration that moves away from binary-trending heuristics, the age-old right and wrong, good and bad, good and evil, and on and on.

As with geocentrism, we put ourselves at the centre because this is how we experience life—inside out. All else seems to extend from this model, except there is no centre. It’s just our perspective. I experience life the same way. I’m no exception. Nonetheless, I don’t seem to need to cling to this central notion—this notion of centrality.

When all is said and done—when the last human has made their exit, there will be no epilogue, postscript, afterword, or coda. Humanity is a story in need of a narrator. The ongoing codicil will cease, and to copy-paste the high art of Monty Python’s parrot sketch:

’E’s not pinin’! ‘E’s passed on! This parrot is no more! He has ceased to be! ‘E’s expired and gone to meet ‘is maker! ‘E’s a stiff! Bereft of life, ‘e rests in peace! If you hadn’t nailed ‘im to the perch ‘e’d be pushing up the daisies! ‘Is metabolic processes are now ‘istory! ‘E’s off the twig! ‘E’s kicked the bucket, ‘e’s shuffled off ‘is mortal coil, run down the curtain and joined the bleedin’ choir invisible!! THIS IS AN EX-PARROT!!

Monty Python – Pet Shop Skit