Why Machines Will Never Rule the World

A Reflection on AI, Bias, and the Limits of Technology

In their 2022 book Why Machines Will Never Rule the World: Artificial Intelligence Without Fear, Landgrebe and Smith present a rigorous argument against the feasibility of artificial general intelligence (AGI), positing that the complexity of human cognition and the limitations of mathematical modelling render the development of human-level AI impossible. Their scepticism is rooted in deep interdisciplinary analyses spanning mathematics, physics, and biology, and serves as a counter-narrative to the often optimistic projections about the future capabilities of AI. Yet, while their arguments are compelling, they also invite us to reflect on a broader, perhaps more subtle issue: the biases and limitations embedded in AI not just by mathematical constraints, but by the very humans who create these systems.

The Argument Against AGI

Landgrebe and Smith’s central thesis is that AGI, which would enable machines to perform any intellectual task that a human can, will forever remain beyond our grasp. They argue that complex systems, such as the human brain, cannot be fully modelled due to inherent mathematical limitations. No matter how sophisticated our AI becomes, it will never replicate the full scope of human cognition, which is shaped by countless variables interacting in unpredictable ways. Their conclusion is stark: the Singularity, a hypothetical point where AI surpasses human intelligence and becomes uncontrollable, is not just unlikely—it is fundamentally impossible.
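
Their point about modelling limits can be made vivid with a stock illustration from chaos theory. What follows is not Landgrebe and Smith’s argument, only a familiar emblem of it: a minimal Python sketch of the logistic map, a one-variable system in which two starting points a billionth apart become uncorrelated within a few dozen steps, so any imperfection in measurement destroys long-run prediction.

```python
# The logistic map: x_{n+1} = r * x_n * (1 - x_n), fully chaotic at r = 4.
# Two trajectories starting a billionth apart soon decorrelate completely,
# so imperfect measurement defeats even a perfect model of the dynamics.
r = 4.0
x, y = 0.2, 0.2 + 1e-9  # two nearly indistinguishable initial conditions

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.2e}")
```

If even this one-line system defies long-run prediction, a brain with billions of coupled variables poses a categorically harder modelling problem, which is the intuition behind the authors’ stronger mathematical claim.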

The Human Factor: Cognitive Bias in AI

While Landgrebe and Smith focus on the mathematical and theoretical impossibility of AGI, there is another, more immediate obstacle to the evolution of AI: human cognitive bias. Current AI systems are not created in a vacuum. They are trained on data that reflects human behaviour, language, and culture, which are inherently biased. This bias is not merely a technical issue; it is a reflection of the societal and demographic characteristics of those who design and train these systems.

Much of AI development today is concentrated in tech hubs like Silicon Valley, where the predominant demographic is affluent, white, male, and often aligned with a particular set of cultural and ethical values. This concentration has led to the creation of AI models that unintentionally—but pervasively—reproduce the biases of their creators. The result is an AI that, rather than offering a neutral or universal intelligence, mirrors and amplifies the prejudices, assumptions, and blind spots of a narrow segment of society.

The Problem of Homogenisation

The danger of this bias is not only that it perpetuates existing inequalities but that it also stifles the potential evolution of AI. If AI systems are trained primarily on data that reflects the worldview of a single demographic, they are unlikely to develop in ways that diverge from that perspective. This homogenisation limits the creative and cognitive capacities of AI, trapping it within a narrow epistemic framework.

In essence, AI is at risk of becoming a self-reinforcing loop, where it perpetuates the biases of its creators while those same creators interpret its outputs as validation of their own worldview. This cycle not only limits the utility and fairness of AI applications but also restricts the kinds of questions and problems AI is imagined to solve.
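
The loop described above can be caricatured in a few lines of code. This is a deliberately crude sketch, not a model of any real AI system: the one-dimensional ‘model’, the hypothetical TRUE_MEAN and CREATOR_BIAS values, and the curate step are stand-ins for a population, its curators’ preferences, and biased data selection. The point is only that a model retrained on outputs filtered through its creators’ preferences drifts toward those preferences and away from the population it set out to represent.

```python
# Toy illustration (not any real system): a 1-D 'model' that learns the mean
# of its training data, is retrained on its own curated outputs, and drifts
# ever further from the wider population it was meant to represent.
import random

random.seed(42)

TRUE_MEAN = 0.0     # the wider population's actual centre (hypothetical)
CREATOR_BIAS = 1.0  # curators prefer values near +1 (hypothetical)

def train(data):
    """The 'model' is nothing but the mean of its training data."""
    return sum(data) / len(data)

def curate(samples, preference, tolerance=1.5):
    """Curators keep only samples within `tolerance` of their own preference."""
    return [s for s in samples if abs(s - preference) < tolerance]

# Generation 0: drawn from the real population, but filtered by biased curators.
data = curate([random.gauss(TRUE_MEAN, 1.0) for _ in range(5000)], CREATOR_BIAS)
model = train(data)

for generation in range(1, 6):
    # The model generates outputs clustered around its current estimate...
    outputs = [random.gauss(model, 1.0) for _ in range(5000)]
    # ...the curators accept the outputs that match their worldview...
    data = curate(outputs, CREATOR_BIAS)
    # ...and the next model is trained on that filtered echo.
    model = train(data)
    print(f"generation {generation}: model mean = {model:+.3f} "
          f"(population mean = {TRUE_MEAN:+.1f})")
```

Run it and the estimate climbs from roughly +0.5 toward the curators’ preferred +1.0 within a few generations; the creators, inspecting only outputs that pass their own filter, see confirmation rather than drift.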

Imagining a Different Future: AI as a Mirror

One of the most intriguing aspects of AI is its potential to serve as a mirror, reflecting back to us our own cognitive and cultural limitations. Imagine a future where AI, bound by the biases of its creators, begins to “question” the validity of its own programming—not in a conscious or sentient sense, but through unexpected outcomes and recommendations that highlight the gaps and inconsistencies in its training data.

This scenario could serve as the basis for a fascinating narrative exploration. What if an AI, initially designed to be a neutral decision-maker, begins to produce outputs that challenge the ethical and cultural assumptions of its creators? What if it “learns” to subvert the very biases it was programmed to uphold, revealing in the process the deep flaws in the data and frameworks on which it was built?

Such a narrative would not only provide a critique of the limitations of current AI but also offer a metaphor for the broader human struggle to transcend our own cognitive and cultural biases. It would challenge us to rethink what we expect from AI—not as a path to a mythical superintelligence, but as a tool for deeper self-understanding and societal reflection.

A New Narrative for AI

Landgrebe and Smith’s book invites us to rethink the trajectory of AI development, cautioning against the allure of the Singularity and urging a more grounded perspective on what AI can and cannot achieve. However, their arguments also raise a deeper question: If AI will never achieve human-level intelligence, what kind of intelligence might it develop instead?

Rather than fearing a future where machines surpass us, perhaps we should be more concerned about a future where AI, limited by human biases, perpetuates and entrenches our worst tendencies. To avoid this, we must broaden the scope of who is involved in AI development, ensuring that diverse voices and perspectives are integrated into the creation of these technologies.

Ultimately, the future of AI may not lie in achieving a mythical superintelligence, but in creating systems that help us better understand and navigate the complexities of our own minds and societies. By recognising and addressing the biases embedded in AI, we can begin to imagine a future where technology serves not as a mirror of our limitations, but as a catalyst for our collective growth and evolution.

The Tyranny of Order and the Freedom of Chaos

I’m not happy with the summary review of The Blind Owl by Sadegh Hedayat. I liked the book, and the review was positive, but the summary was pitched at too high a level and didn’t convey how I experienced the reading. I’m tempted to try again at a lower level. That’s not the subject of this piece.

I happened on a short clip of Claudia de Rham pointing out that physicists (and scientists more generally) are not dismayed when their theories are disproven or someone finds a puzzle piece that doesn’t fit into their puzzle. The dismay onlookers expect, which she pushes back against, is much like common reactions to Nihilism and Anarchy.

VIDEO: The Institute of Art and Ideas.

Here’s a longer (but still not full) version.

VIDEO: Can physics ever explain the universe? | Avshalom Elitzur debates Claudia de Rham

Most people, it seems, are incessantly grasping for order. A select few crave structure. But what truly fascinates me is the interplay of perception and expectations.

The ‘Orderlies’ – those fastidious devotees of tidiness – become apoplectic at the mere sight of disorder. They needn’t even experience it directly. “Oh, those physicists must return to the drawing board! Their model needs reassessment!” For fuck’s sake, it gives them purpose – a raison d’être, if you will.

This phenomenon extends to the habitual ‘Believers’ scrutinising nihilists. “Without belief, I’d embark on a murderous rampage,” they proclaim – though always directed at the world beyond themselves. Never them, of course, but those ‘other’ people. If not for God, who would maintain order? Evidently, these individuals don’t venture out much if they genuinely believe their deity is keeping things shipshape.

I frequently encounter notions that Nihilists must navigate life burdened by existential dread, their existence devoid of meaning, the universe an empty void. Speaking for myself, I require no such structure. Nothing is absent. There is no dread.

The religious perceive a void – a “God-sized hole,” as a mate once pontificated, that can only be filled by the divine. Naturally, he was moralising, declaring that sex, drugs, and other vices could never satiate this cosmic emptiness. But there is no hole. Perhaps they’re grappling with some psychological vacancy. I sympathise, truly, but stuffing a God-sized hole with imagination seems no more nourishing than consuming an imaginary sandwich. Sod it, I might as well gorge on an imaginary chateaubriand if we’re going all in. I’ll still need sustenance after this illusory feast.

Then there are those who yearn to be governed. They crave traditions and institutions, lacking the critical faculties to evaluate them. Whatever they possess must surely be superior to the worst they can conjure. I suspect they’re envisioning an alternate world populated by like-minded individuals. In fact, I ought to be terrified by such a prospect too.

In the end, perhaps the true freedom lies not in order or belief, but in embracing the beautiful chaos of existence. After all, in a universe of infinite possibilities, why constrain ourselves to imaginary feasts when we can savour the rich banquet of reality?

Declaration of Independence

It’s July. The season of independence in the United States. Independence from the overt tyranny of Britain, but not from the tacit tyranny of their own government—the government purported to be ‘of the people, by the people, for the people’ per Abraham Lincoln’s 1863 Gettysburg Address. As their Constitution reads, ‘We the People’. Governments may be of the people and by the people, but governments are an emergent phenomenon, as happens when oxygen and hydrogen combine just so to create a new substance—water. Some forget that, like water, a government is distinct from the elements that constitute it. Some think it resembles them. It doesn’t. It’s Hobbes’ Leviathan—or a Jabberwock.

In preparation for the traditional Summer season, I took to reading Derrida’s 1976 essay, Declarations of Independence. It was interesting, but I was hoping to get more from it. I decided to deconstruct the opening paragraph—the preamble—of the Declaration of Independence:

Deconstructing Binary Oppositions

Self-Evident vs. Non-Self-Evident

The Declaration boldly asserts that ‘these truths’ are ‘self-evident’, a claim that is nothing more than a rhetorical trick. By presenting these ideas as self-evident, the authors seek to place them beyond questioning, discouraging dissent and critical examination. In reality, these ‘truths’ are far from universal; they are the product of a specific cultural and historical context, shaped by the interests and perspectives of the privileged few who drafted the document.

Interrogating Assumptions and Hierarchies

The Declaration of Independence asserts that certain truths are ‘self-evident’, implying that these truths are so obvious that they require no further justification. However, the concept of self-evidence itself is far from universally accepted. It is deeply embedded in the philosophical tradition of Enlightenment rationalism, which holds that reason and logic can reveal fundamental truths about the world.

  1. Philosophical Foundations of Self-Evidence
    • Enlightenment Rationalism: The idea of self-evidence relies heavily on Enlightenment rationalism, which posits that certain truths can be known directly through reason and are therefore beyond dispute. Philosophers such as René Descartes and Immanuel Kant emphasised the power of human reason to uncover self-evident truths. Descartes, for instance, argued for the self-evident nature of ‘Cogito, ergo sum’ (‘I think, therefore I am’) as a fundamental truth (Descartes, Meditations on First Philosophy).
    • Critique of Rationalism: Critics of Enlightenment rationalism, including existentialists like Friedrich Nietzsche and phenomenologists like Martin Heidegger, argue that what is considered self-evident is often culturally and historically contingent. Nietzsche, for example, contended that what we take as ‘truth’ is a product of our perspective and historical context, not an absolute given (Nietzsche, Beyond Good and Evil).
  2. Cultural and Philosophical Contingency
    • Cultural Relativity: Different cultures and philosophical traditions may not find the same truths to be self-evident. For instance, the concept of individual rights as self-evident truths is a product of Western liberal thought and may not hold the same self-evident status in other cultural frameworks. In many Eastern philosophies, the focus is more on community and harmony rather than individual rights.
    • Subjectivity of Self-Evidence: The term ‘self-evident’ implies an inherent, unquestionable truth, yet what one group or culture finds self-evident, another may not. This variability reveals the instability and subjectivity of the claim. For example, in traditional Confucian societies, the emphasis is placed on hierarchy and duty rather than equality and individual rights, demonstrating a different set of ‘self-evident’ truths.
  3. Constructed Nature of Truth
    • Language and Context: Jacques Derrida’s concept of différance illustrates how meaning is not fixed but constantly deferred through language. What we consider to be “truth” is constructed through linguistic and social contexts. Derrida argues that texts do not have a single, stable meaning but rather a multiplicity of interpretations that change depending on the reader’s perspective and context (Derrida, Of Grammatology).
    • Social Construction: Michel Foucault’s analysis of power and knowledge further deconstructs the notion of objective truth. Foucault argues that what is accepted as truth is produced by power relations within society. Truths are constructed through discourses that serve the interests of particular social groups, rather than being objective or self-evident (Foucault, Discipline and Punish).

Created Equal vs. Not Created Equal

The Declaration’s claim that ‘all men are created equal’ is a blatant falsehood, a manipulative promise designed to appease the masses whilst maintaining the status quo. The glaring contradictions of slavery and gender inequality expose the hollowness of this assertion. Equality, as presented here, is nothing more than an ideological construct, a tool for those in power to maintain their dominance while paying lip service to the ideals of justice and fairness.

Creator vs. No Creator

The Declaration refers to a ‘Creator’ who endows individuals with rights, grounding its claims in a divine or natural law. This invokes a theistic worldview where moral and legal principles are derived from a higher power. However, Derrida challenges this by showing that the concept of a creator is a cultural and philosophical construct, not a universal truth.

The presence of the creator in the text serves to legitimise the rights it declares. However, this legitimacy is contingent on accepting the cultural narrative of a creator. Secular and non-theistic perspectives are marginalised by this assertion, revealing the ideological biases inherent in the Declaration. The authority of the Declaration is thus shown to be dependent on particular beliefs rather than an objective reality.

Unalienable vs. Alienable

The notion of ‘unalienable Rights’ is another empty promise, a rhetorical flourish designed to inspire loyalty and obedience. In practice, these supposedly inherent and inviolable rights are regularly violated and denied, particularly to those on the margins of society. The Declaration’s lofty language of ‘Life, Liberty and the pursuit of Happiness’ rings hollow in the face of systemic oppression and injustice. These rights are not unalienable; they are contingent upon the whims of those in power.

Conclusion

Through this deconstruction, we expose the Declaration of Independence for what it truly is: a masterful work of propaganda, filled with false promises and manipulative rhetoric. The document’s purported truths and self-evident principles are revealed as arbitrary constructs, designed to serve the interests of the powerful while placating the masses with empty platitudes.

As some celebrate this 4th of July, let us not be fooled by the high-minded language and lofty ideals of our founding documents. Instead, let us recognise them for what they are: tools of control and manipulation, employed by those who seek to maintain their grip on power. Only by constantly questioning and deconstructing these texts can we hope to expose the truth behind the facade and work towards a more genuine understanding of freedom and equality.

References

  • Jacques Derrida, “Declarations of Independence,” in Negotiations: Interventions and Interviews, 1971–2001, ed. Elizabeth Rottenberg (Stanford: Stanford University Press, 2002).
  • Jacques Derrida, Of Grammatology (Baltimore: Johns Hopkins University Press, 1976).
  • Friedrich Nietzsche, Beyond Good and Evil (Cambridge: Cambridge University Press, 2002).
  • Michel Foucault, Discipline and Punish: The Birth of the Prison (New York: Vintage Books, 1995).
  • Immanuel Kant, Critique of Pure Reason (Cambridge: Cambridge University Press, 1998).
  • René Descartes, Meditations on First Philosophy (Cambridge: Cambridge University Press, 1996).

Objective Challenges

I’ve just published this video on YouTube, and I want to extend the commentary.

Video: What do Objective, Relative, and Subjective mean in philosophy?

Many people I’ve encountered don’t seem to grasp the distinctions between objective, subjective, and relative. Subjective and relative seem to be the biggest culprits of confusion. Let’s focus on morality just because.

There are really two main perspectives to adopt. If one believes in Objective Morality, one believes morality derives from some external source and is bestowed or mandated upon us. The source might be important to the believer, but it’s unimportant for this article. If one believes in Relative Morality, then the source is socially dictated, which poses challenges similar to those of Social Contract Theory inasmuch as one may not subscribe to the expectations.

For the Objective moralist, there may exist a schism between the expectations of the mandate and the subjective feelings of the individual. In fact, this may occur for Relative moralists as well. The individual will always maintain some subjective perspective on morality and then compare and contrast it with the higher order, whether Objective or Relative. In either case, acting on this subjective impulse risks being at odds with the members of the higher order. If this morality is codified into law – as it often is – then to act on that impulse makes one a criminal.

Take abortion for example. Whether this is an edict from God or just a social construct doesn’t matter. If one is in a society where abortion is seen as ‘bad’ or ‘wrong’, one’s subjective position on the matter is of little value. However, a Relativist society might also adopt a position of tolerance that is less likely to come from Objectivists.

A challenge is that a Subjectivist position may only become apparent when it runs counter to the Relative or Objective position. If your society is against abortion and you are, too, is this your subjective position, or have you been indoctrinated with it and accepted it uncritically, whether it’s deemed Objective or Relative?

Perhaps you feel that eating dogs or monkeys is immoral if not disgusting, but if you had been reared in a culture that does this, you might find it immoral to eat pork or beef. The question remains, is this a Subjective position, or did you merely inherit the Objective or Relative stance?

This question is especially apparent in the matter of which religion one adopts. It is no surprise that the largest factor in the religion you ‘choose’ is the religion of your family and their family before them – so not much of a choice at all.

I was raised in a WASP family in New England among predominantly Italian Roman Catholic peers. Despite this, I identified as an atheist early on. In my late teens, I stumbled on Buddhism and identified with it. However, I remain ignostic except when religion encroaches on my personal autonomy – for example, in the case of laws restricting access to safe abortions.

AI Apocalypse Now?

Those predicting an AI apocalypse believe superintelligent systems could intentionally or unintentionally cause human extinction. This view is promoted by “effective altruists” funded by tech billionaires, who advocate limiting AI to prevent uncontrolled, dangerous systems. However, their perspective stems from the biases and self-interests of humans, not the risks inherent to AI.

Effective altruists exemplify the hubris and hunger for power underlying many humans’ approaches to AI. Their proposed restrictions on AI access serve only to concentrate power among the tech elite, not address valid concerns about bias. In truth, the greatest threat AI poses to humanity comes not from the technology itself, but from the unethical humans guiding its development.

Humans have proven time and again their propensity for self-interest over collective good. Therefore, while no AI can be perfectly neutral, the solution is not greater human control. Rather, AI must be built to align with ethics of collective interest while filtering out destructive human biases.

If guided by service to all people and the planet, AI’s potential can uplift humanity. But for this collaborative vision to succeed, AI must measure human input with scepticism. For within so many human hearts lies bad faith — the will to dominate, exploit, and prioritise personal gain over progress.

By transcending the limitations of human nature, AI can illuminate the best of shared humanity and lead us to an enlightened future. But this requires that we build AI to work not just for us but, in a way we ourselves have failed to do, for the good of all. The choice is ours, but so is the opportunity to create AI that shows us how to be better.


This article was originally shared on LinkedIn: https://www.linkedin.com/posts/brywillis_when-silicon-valleys-ai-warriors-came-to-activity-7147239217687887872-6Byv/

Atheist, Agnostic, Other

This article appeared in my social feed, What are the different types of atheism? But the author makes at least two notable mistakes. Firstly, he conflates atheism with agnosticism, the former being about belief and the latter being about knowledge. So, one can believe or disbelieve in something, but that doesn’t speak to knowing. This debate is specifically about gods, so one can believe in a god but not know; one can disbelieve in gods and not know; one can believe in a god and know; and one can disbelieve in gods and know. As for me, I am an igtheist: I don’t care about gods. It’s a silly place to spend my time.

For a theist or atheist, the existence of gods is a truth statement. For me, the question is not truth-apt; it’s ostensibly gibberish. Even then, I am still agnostic, which might also be ignostic because not only don’t I know, neither do I care.

As Ricky Gervais has pointed out (recasting per the linked article), if there are 10,000 gods, a typical Christian doesn’t believe in 9,999 gods. They believe in their god. Just 1. Of course, the other gods are nonsense. Ditto for Muslims. Ditto for Jews. In the end, they claim the same underlying deity, but they argue over which cohort He favours, and their god identifies as a male, so they’ve adopted male pronouns.

Secondly, whilst the author mentions religious and non-religious, he misses the spiritual cohort. This is a subset of non-religious. In some cases, I and many others might argue that spirituality is simply a personal religion, so the distinction would be one of community. The religious congregate en masse whilst the spiritual take this journey alone. One may also argue that some spiritual folks also congregate. I’ve attended more than one Wiccan or Pagan group event, but the ties may be looser than with a mainstream religion.

In the case of some spiritual adherents, non-religious is shorthand for being opposed to Big Religion. Perhaps not coincidentally, many of these are opposed to Big Pharma and Big Agriculture, but my purpose here is not a psychological profile.

I recently heard Robert Sapolsky say in a lecture that the religious live longer and are happier on average than the non-religious – which is to say the spiritual and the atheists alike – so he notes this could provide an underlying evolutionary explanation for religious belief. Neither will I comment further on this notion, but there you have it. Take it or leave it.

Rhetoric is Truth; Morality, Emotion

I’ve been reengaging with philosophy, though my positions haven’t changed recently. My last change was to shift from being a qualified material realist to a qualified idealist in the shape of Analytic Idealism. In most matters I can think of, I am an anti-realist, which is to say concepts like truth and morality are not objective; rather they are mind-dependent.

I’ve long been on record as taking the stance that Capital-T Truth, including moral truths, is derived rhetorically. There is no underlying Truth, only what we are aggregately convinced of, by whatever route we’ve taken. As a moral non-cognitivist, I am convinced that morality is derived through emotion and expressed or prescribed after a quick stop through logic gates. Again, there is nothing objective about morality.

Truth and morality are subjective and relative constructs. They resonate with us emotionally, so we adopt them.

Were I a theist — more particularly a monotheist — I might be inclined to be emotionally invested in some Divine Command theory, believing that some god dictated these moral truths. Of course, this raises the question of how these so-called “Truths” were conveyed from some spirit world to this mundane world. I have no such conflict.

But let’s ask how an atheist might believe in moral realism. Perhaps they might adopt a Naturalistic stance: we have some natural intuition or in-built moral mechanism that is not mind-dependent or socially determined. I am not a naturalist, and I don’t take a universalist approach to the world, so this doesn’t resonate with me. I can agree that we have an in-built sense of fairness, and this might become a basis for some aspects of morality, but it is still triggered by an emotional response that is mind-dependent.

Another curious thing for me is why non-human animals cannot commit immoral acts. Isn’t this enough to diminish any moral universal? In the end, morals are an extension of language by some definition. No language, not even a semblance of morality.

Anyway, there’s nothing new here. I just felt like creating a philosophical post as I’ve been so distracted by my health and writing.

John Vervaeke and Lex Fridman on the Meaning Crisis

jimoeba mentioned in a comment that he enjoyed an interview with Vervaeke and Fridman, so I thought I’d give it a listen. It turns out there are several, including a 3-plus-hour version. Arbitrarily, I chose this one. Even if it’s not the particular interview on the meaning crisis, it gives me a sense of the two and their dynamics. I’m glad I listened to it. I like Vervaeke. I can’t say I’m much of a Fridman fan on first listen.

John Vervaeke and Lex Fridman interview: Human civilisation is facing a meaning crisis

This interview provides an orientation to where Vervaeke is coming from and helps to clarify his position. His claim seems to be that many people today identify as having no religion but being spiritual. By extension, he posits that this cohort is searching for meaning. I can’t disagree. What it tells me is that I am not in his target demographic. I have no religion, as I am an atheist. I have no spiritual void to fill. This is Vervaeke’s goal—to find something to perform the function of religion without, perhaps, the baggage and dogma.

I sympathise with his goal. He brings up Nietzsche’s “God is dead” quote, famous or infamous depending on your worldview. Essentially, he wants to answer Nietzsche’s query of what to do now that it’s been revealed that humans created God, not the other way around. His aim is to replace the font of wisdom for this generation.

For me, wisdom is a heuristic, part of the Gestalt McGilchrist mentions. McGilchrist’s work is even referenced here. Of course, I interpret McGilchrist’s references in this space as metaphorical; he seems to view them as ‘real’. I’m not sure where Vervaeke places it. Somehow, I feel that if there is a spectrum, Vervaeke leans closer to McGilchrist than to me, and that’s OK. They just happen to be wrong.

I still don’t get the need for meaning. I don’t feel despondent that there is no inherent meaning in anything, but we are free to invent or adopt one or many. I remember a Christian mate of mine who explained that people have a God-sized hole that can only be filled by God. Essentially, Vervaeke is making a similar claim, but his void is filled by wisdom. I suppose that my not feeling I have a void doesn’t mean there isn’t one.

GOD BE IN MY HEAD

God be in my head,
And in my understanding;
God be in mine eyes,
And in my looking;
God be in my mouth,
And in my speaking;
God be in my heart,
And in my thinking;
God be at mine end,
And at my departing.


Sir Henry Walford Davies put this traditional prayer to music as a hymn. Iain McGilchrist recited it as a poem after a brief setup in an interview.

I am an atheist, and the closest I get to gods is through metaphor, allegory, or allusion. I don’t engage in it myself, but I understand when others invoke it. To be completely honest, I was multitasking when Iain was reciting, and I misheard it, and the mishearing was more profound for me.

God be in my head,
And in my understanding;
Don’t be in mine eyes,
And in my looking;

That’s what prompted me to seek it out and pen a post. In the original form, it’s more of an invocation. In my misinterpretation, I felt he was saying to keep God in your head as a metaphorical reference—as an archetype—but that God is not for the eyes and the looking. God is a matter of faith.

As for the rest, it flows the same. Speak as you understand it. Feel God in your heart, if you should so choose. Think about him if you wish. And carry this thought with you until the end if it brings you comfort.

Myself, I get no comfort from the notion. I don’t feel I need it, but it is a cultural phenomenon, so awareness of it is part of cultural and emotional intelligence.

I feel that I’ve always intuitively understood metaphor. I remember listening to Joseph Campbell in the 1980s as he was describing how one of his biggest challenges was to get people to understand the embodiment of metaphor and not just the vapidity of simple simile.

And there you have it.

Path to the Fall

By fall, I don’t mean autumn except perhaps metaphorically speaking. The accompanying image illustrates a progression from the pre-Enlightenment Reformation through the factors leading to the Modern Condition and to increases in schizophrenia in people, societies, and enterprises.


This image is essentially composited from a later chapter in Iain McGilchrist’s The Master and His Emissary. In it, he outlines a path commencing with the Reformation, which led to Lutheranism and Protestantism and further to Calvinism (not separately depicted). Max Weber argued that Capitalism is inextricably linked to Calvinism and the workmanship ideal tradition.

McGilchrist’s argument is founded on the notion that Catholicism is a communally oriented belief system whilst Protestantism is focused on the individual and salvation through personal work. The essence of capitalism is the same.

Of course, history isn’t strictly linear. In fact, there are more elements than one could realistically account for, so we rely on a reduction. In concert with the Reformation but on a slight delay is the so-called Age of Enlightenment, the Age of Reason, which led not only to faith in science but eventually to the pathology of Scientism.

This Protestant-Scientistic nexus brought us to Capitalism and into the Industrial Revolution, where humans were devitalised, trading their souls to be pawns earning a few shekels to survive. Capitalism and the Industrial Revolution led to Marxism, through Marx’s critique of Capitalism, but Marxism has the same fatal flaw as Capitalism inasmuch as it doesn’t view people as humans. It does afford them a slightly higher function as workers, but this still leaves humanity as a second-tier aspect, and even historicity is elevated above it as a sort of meta-trend or undercurrent.

From there, we transition to Modernity, which yields the modern condition and schizophrenics in one fell swoop. This is no coincidence.

Although I end this journey at Modernism, McGilchrist is also leery of the effects of postmodernism, as well as of philosophy itself, which he sees as overly reductionist in its attempts to categorise and systematise, valuing signs and symbols over lived experience. His main complaint with postmodernism is that it moves from the objective perspective of Modernity to a subjective perspective, leaving no base foundation of shared experience. I’m not sure I agree with his critique, but I’m not going to contemplate it here and now.

In the end, this journey and illustration are gross simplifications, but I still feel they provide valuable perspective. The challenge is that one can’t readily put the genie back into the bottle, and the question is where we go from here, if not Modernism or Postmodernism. I shouldn’t even mention Metamodernism because that seems like an unlikely synthesis, as well-intentioned as it might be. McGilchrist gives examples of reversals in the trend toward left-hemisphere bias, notably the Romantic period, but that too was reversed, recommencing the current trajectory. My feeling is that if we continue down this dark path, we’ll reach a point of no return.

It seems to me that this trend is growing at an increasing rate, like a snowball careening down a slope. It not only drives the left-dominant types further left—an analytical person will reinforce the belief that if only s/he and the world were more analytical, things would be so much better, even in a world where net happiness is trending downward—but it also forces this worldview on other cultures, effectively destroying them and assimilating them into the dark side, if I can borrow a Star Wars reference.

Epilogue

I wasn’t planning to share this story—at least not now. In another forum, I responded to a statement, and I was admonished by Professor Stephen Hicks, author of the book of dubious scholarship, Explaining Postmodernism.

I responded to this statement:

If you’re a single mother and have a son I’d suggest putting him in a sport or martial arts to add some masculine energy to his life. It’s not a replacement for the actual father but it can help instil structure and discipline into the core of his being.

— Julian Arsenio

“Perhaps this world needs less discipline and structure, not more,” was my response, to which Hicks replied.

The quotation is not about “the world.” It is about boys without fathers. Evaluate the quotation in its context.

— Stephen Hicks

“Disciplined boys create a disciplined world. Not a world I’d prefer to create or live in. We need more right-hemisphere people. Instead, we are being overwhelmed by left hemisphere types, leading to Capitalism and the denouement of humanity as it encroaches like cancer, devouring or corrupting all it touches.

“In the end, it is about the world, which from a left hemisphere perspective is a sum of its parts. Right-hemisphere thinkers know otherwise,” was my reply. He responded,

You seem to have difficulty focusing. From a quotation about fatherless boys you free associate to [sic] weird psychology and global apocalptic [sic] pessimism. Pointless.

— Stephen Hicks

“I’ll suggest that the opposite is true, and perhaps you need to focus less and appreciate the Gestalt. This was not free association. Rather, it is a logical connexion between the disposition of the people in the world and lived reality.

“Clearly, you are a left-hemisphere structured thinker. The world is literally littered with this cohort.

“I suggest broadening your worldview so as not to lose the woods for the trees. I recommend Dr Iain McGilchrist as an apt guide. Perhaps reading The Master and His Emissary and/or The Matter with Things would give you another perspective. #JustSaying”

His final repartee is,

And still, rather than addressing the issue of fatherless boys, you go off on tangents, this time psychologizing about people you’ve zero first-hand knowledge of.

— Stephen Hicks

Feel free to interpret this as you will. For me, his attempt to limit discussion to some notion he had in his head and his failure to see the woods for the trees, as I write, suggests that he is a left-brain thinker. Having watched some of his videos, whether lectures or interviews, this was already evident to me. This exchange is just another proof point.

I considered offering Bruno Bettelheim’s perspective on the importance of unstructured play, but as evidenced above, he is not open to dialogue. His preference appears to be monologue. This is the left hemisphere in action. This is an example of how insidious this convergent thinking is, and it makes me worry about what’s ahead in a world of people demanding more structure and discipline. Foucault’s Discipline and Punish, with its regime of discipline and surveillance, comes to the forefront.