Decolonising the Mind

Ngũgĩ wa Thiong’o published “Decolonising the Mind” in 1986. David Guignion shares a 2-part summary analysis of the work on his Theory and Philosophy site.

I used NotebookLM to produce this short podcast: [Content no longer extant] https://notebooklm.google.com/notebook/7698ab0b-43ab-47d4-a50f-703866cfb1b9/audio

Decolonising the Mind: A Summary

Ngũgĩ wa Thiong’o’s book Decolonising the Mind centres on the profound impact of colonialism on language, culture, and thought. It argues that imposing a foreign language on colonised people is a key tool of imperial domination. This linguistic imperialism leads to colonial alienation, separating the colonised from their own culture and forcing them to view the world through the lens of the coloniser.

Here are some key points from the concept of decolonising the mind:

  • Language is intimately tied to culture and worldview: Language shapes how individuals perceive and understand the world. When colonised people are forced to adopt the language of the coloniser, they are also compelled to adopt their cultural framework and values.
  • Colonial education systems perpetuate mental control: By privileging the coloniser’s language and devaluing indigenous languages, colonial education systems reinforce the dominance of the coloniser’s culture and worldview. This process results in colonised children being alienated from their own cultural heritage and internalising a sense of inferiority.
  • Reclaiming indigenous languages is crucial for decolonisation: wa Thiong’o advocates for a return to writing and creating in indigenous African languages. He sees this as an act of resistance against linguistic imperialism and a way to reconnect with authentic African cultures. He further argues that it’s not enough to simply write in indigenous languages; the content must also reflect the struggles and experiences of the people, particularly the peasantry and working class.
  • The concept extends beyond literature: While wa Thiong’o focuses on language in literature, the concept of decolonising the mind has broader implications. It calls for a critical examination of all aspects of life affected by colonialism, including education, politics, and economics.

It is important to note that decolonising the mind is a complex and ongoing process. There are debates about the role of European languages in postcolonial societies, and the concept itself continues to evolve. However, wa Thiong’o’s work remains a seminal text in postcolonial studies, raising crucial questions about the enduring legacy of colonialism on thought and culture.

Can Zombies Ever Be Conscious?

In the world of consciousness studies, few topics spark as much heated debate as the possibility of philosophical zombies—hypothetical beings that behave exactly like humans but lack subjective experience, or qualia. On the surface, zombies seem like an interesting thought experiment, but they quickly turn into a battleground for deeper issues about the nature of consciousness itself.

This post explores two key perspectives in this debate: Daniel Dennett’s functionalist critique of zombies and a recent scientific paper that argues zombies are biologically impossible. While both reject the possibility of zombies, they do so for different reasons, and the discussion leaves room for future possibilities that could disrupt the current consensus.

Dennett’s Zombies and Zimboes: Consciousness as Function

Daniel Dennett, one of the most influential philosophers of mind, is known for his no-nonsense rejection of philosophical zombies. Dennett argues that if something behaves exactly like a conscious being, it is conscious. For him, there is no hidden metaphysical property—such as subjective experience—that separates a “zombie” from a conscious human. Consciousness, in his view, is entirely explainable by physical processes and functional behaviour.

Dennett extends his argument with the concept of zimboes, satirical creatures that not only act like conscious beings but can even reflect on their states, claiming to be conscious, despite supposedly lacking any inner experience. For Dennett, if a being can behave as though it has introspective awareness and engage in the full spectrum of human behaviour, there’s no meaningful distinction between that being and a conscious person.

In short, Dennett collapses the distinction between zombies and conscious beings. If something passes all the behavioural and functional tests of consciousness, it might as well be conscious. Zombies, as typically conceived, are simply an illusion—a misunderstanding of what consciousness is.

A Biological Rejection: Zombies Are Impossible

On the other hand, a more recent paper offers a different, biologically grounded argument against zombies. The authors propose that consciousness is the result of self-organising systems. In this view, biological organisms maintain their survival through adaptive behaviours constrained by policies—rules that govern how they react to environmental stimuli. These policies require a first-order self: a basic form of consciousness that allows an organism to navigate and interpret its environment.

The authors argue that without this first-order self, an organism would not be able to exhibit the fitness-driven behaviours needed for survival. Therefore, zombies—beings that behave like humans without consciousness—are biologically impossible. For these researchers, consciousness is not just a side effect of complex behaviour; it’s a necessary condition for such behaviour. Their framework dissolves the so-called “hard problem” of consciousness, asserting that subjective experience, or qualia, arises directly from the qualitative nature of self-organising systems.

In their view, zombies cannot exist because behaviour as complex as that of conscious beings requires consciousness.

The Open Question: What About Future Technology?

However, there is a tension between these two perspectives, particularly when we consider future possibilities in technology and artificial intelligence. Both Dennett and the authors of the biological paper argue that zombies—whether defined as Dennett’s “behaviourally indistinguishable” beings or the biologically impossible entities proposed by the paper—are not real. But could this change?

What if advanced AI or synthetic biological systems could simulate human behaviour so perfectly that they effectively become zombies—performing all the actions and behaviours we associate with consciousness, but lacking any subjective experience? Dennett might still argue that these systems are conscious, as long as they behave as though they are. But the biological view complicates this, since it ties consciousness directly to the survival and adaptive behaviours of self-organising systems.

Could a highly advanced AI system bypass the need for subjective experience while still exhibiting complex, adaptive behaviour? If so, it would challenge the current consensus and potentially create a new class of entities—artificial zombies—that perform human-like actions without functioning like traditional conscious beings.

I Wonder What’s Next?

This philosophical conflict leaves us with an intriguing, open-ended question: are zombies truly impossible, or are they merely improbable given our current understanding of biology and consciousness? Dennett’s view seems to collapse the distinction between behaviour and consciousness, while the biological argument insists that the two are inseparable. But both positions could be challenged by future technologies that mimic human consciousness without having it.

Could we one day create a true zombie—a being that acts like us, thinks like us, but is as empty inside as a rock? The debate remains open, and as our understanding of consciousness and artificial intelligence deepens, so too will our exploration of the zombie question.

For now, the answer to whether zombies can exist seems to depend on what you believe consciousness really is.

Why Machines Will Never Rule the World

A Reflection on AI, Bias, and the Limits of Technology

In their 2022 book Why Machines Will Never Rule the World: Artificial Intelligence Without Fear, Landgrebe and Smith present a rigorous argument against the feasibility of artificial general intelligence (AGI), positing that the complexity of human cognition and the limitations of mathematical modelling render the development of human-level AI impossible. Their scepticism is rooted in deep interdisciplinary analyses spanning mathematics, physics, and biology, and serves as a counter-narrative to the often optimistic projections about the future capabilities of AI. Yet, while their arguments are compelling, they also invite us to reflect on a broader, perhaps more subtle issue: the biases and limitations embedded in AI not just by mathematical constraints, but by the very humans who create these systems.

The Argument Against AGI

Landgrebe and Smith’s central thesis is that AGI, which would enable machines to perform any intellectual task that a human can, will forever remain beyond our grasp. They argue that complex systems, such as the human brain, cannot be fully modelled due to inherent mathematical limitations. No matter how sophisticated our AI becomes, it will never replicate the full scope of human cognition, which is shaped by countless variables interacting in unpredictable ways. Their conclusion is stark: the Singularity, a hypothetical point where AI surpasses human intelligence and becomes uncontrollable, is not just unlikely—it is fundamentally impossible.

The Human Factor: Cognitive Bias in AI

While Landgrebe and Smith focus on the mathematical and theoretical impossibility of AGI, there is another, more immediate obstacle to the evolution of AI: human cognitive bias. Current AI systems are not created in a vacuum. They are trained on data that reflects human behaviour, language, and culture, which are inherently biased. This bias is not merely a technical issue; it is a reflection of the societal and demographic characteristics of those who design and train these systems.

Much of AI development today is concentrated in tech hubs like Silicon Valley, where the predominant demographic is affluent, white, male, and often aligned with a particular set of cultural and ethical values. This concentration has led to the creation of AI models that unintentionally—but pervasively—reproduce the biases of their creators. The result is an AI that, rather than offering a neutral or universal intelligence, mirrors and amplifies the prejudices, assumptions, and blind spots of a narrow segment of society.

The Problem of Homogenisation

The danger of this bias is not only that it perpetuates existing inequalities but that it also stifles the potential evolution of AI. If AI systems are trained primarily on data that reflects the worldview of a single demographic, they are unlikely to develop in ways that diverge from that perspective. This homogenisation limits the creative and cognitive capacities of AI, trapping it within a narrow epistemic framework.

In essence, AI is at risk of becoming a self-reinforcing loop, where it perpetuates the biases of its creators while those same creators interpret its outputs as validation of their own worldview. This cycle not only limits the utility and fairness of AI applications but also restricts the kinds of questions and problems AI is imagined to solve.

Imagining a Different Future: AI as a Mirror

One of the most intriguing aspects of AI is its potential to serve as a mirror, reflecting back to us our own cognitive and cultural limitations. Imagine a future where AI, bound by the biases of its creators, begins to “question” the validity of its own programming—not in a conscious or sentient sense, but through unexpected outcomes and recommendations that highlight the gaps and inconsistencies in its training data.

This scenario could serve as the basis for a fascinating narrative exploration. What if an AI, initially designed to be a neutral decision-maker, begins to produce outputs that challenge the ethical and cultural assumptions of its creators? What if it “learns” to subvert the very biases it was programmed to uphold, revealing in the process the deep flaws in the data and frameworks on which it was built?

Such a narrative would not only provide a critique of the limitations of current AI but also offer a metaphor for the broader human struggle to transcend our own cognitive and cultural biases. It would challenge us to rethink what we expect from AI—not as a path to a mythical superintelligence, but as a tool for deeper self-understanding and societal reflection.

A New Narrative for AI

Landgrebe and Smith’s book invites us to rethink the trajectory of AI development, cautioning against the allure of the Singularity and urging a more grounded perspective on what AI can and cannot achieve. However, their arguments also raise a deeper question: If AI will never achieve human-level intelligence, what kind of intelligence might it develop instead?

Rather than fearing a future where machines surpass us, perhaps we should be more concerned about a future where AI, limited by human biases, perpetuates and entrenches our worst tendencies. To avoid this, we must broaden the scope of who is involved in AI development, ensuring that diverse voices and perspectives are integrated into the creation of these technologies.

Ultimately, the future of AI may not lie in achieving a mythical superintelligence, but in creating systems that help us better understand and navigate the complexities of our own minds and societies. By recognising and addressing the biases embedded in AI, we can begin to imagine a future where technology serves not as a mirror of our limitations, but as a catalyst for our collective growth and evolution.

Conspicuous Consumption is Immoral

In a world where our worth is increasingly measured by what we own rather than who we are, it’s time to question the hollow pursuit of conspicuous consumption. Thorstein Veblen, over a century ago, saw through the shiny façade of status-driven consumerism. He coined the term conspicuous consumption to describe a phenomenon that has only grown more insidious in our time: the wasteful, performative display of wealth and luxury to signal social standing.

From luxury cars to designer handbags, we live in a society that encourages us to broadcast our economic worth at every turn. But what does this ostentatious display really achieve? Veblen would argue that it’s not about personal fulfilment or the intrinsic value of what we buy, but about outshining others—a relentless race with no finish line. We’re not buying goods; we’re buying validation, trying to purchase a place in an imagined hierarchy that is as fragile as the latest trend.

Peter Singer, the moral philosopher known for his advocacy of effective altruism, would take this critique even further. For Singer, conspicuous consumption is not just wasteful; it’s morally indefensible. His ethical stance, grounded in utilitarianism, compels us to consider the opportunity cost of our spending. Every pound spent on a designer item is a pound that could have been used to alleviate suffering—whether through combating extreme poverty, funding life-saving medical treatments, or addressing global environmental crises. In a world where preventable suffering and death are rampant, lavish spending on status symbols is not just misguided; it’s a moral failure.

Singer’s principle of equal consideration of interests demands that we weigh the impact of our actions on others as seriously as we consider their impact on ourselves. When we choose to engage in conspicuous consumption, we’re prioritising superficial self-expression over the tangible needs of those who are suffering. It’s a glaring contradiction to claim we value human life while squandering resources on goods that serve no purpose beyond ostentation.

The issue with conspicuous consumption isn’t just its superficiality—it’s the economic and social rot it perpetuates. It sustains a cycle of pecuniary emulation, where those lower on the socio-economic ladder are pressured to mimic the consumption patterns of those above them. The result? A society locked in an absurd competition, where individuals drown in debt just to keep up appearances, and resources are squandered on goods that serve no practical purpose other than to flaunt.

But why should we care? Isn’t spending on luxury just a harmless personal choice? Not quite. Every pound spent on a status symbol is a pound diverted from something meaningful—education, healthcare, the environment. When the affluent engage in extravagant spending, they set a toxic standard, promoting the idea that personal worth is tied to material excess. It’s not just bad taste; it’s bad ethics, bad economics, and bad for the planet.

More disturbingly, this drive to display wealth erodes the social fabric. It deepens divisions, creating resentment and alienation. The ultra-wealthy, insulated in their gilded cages, become oblivious to the realities faced by the rest of society. Meanwhile, those struggling to maintain the illusion of prosperity are left with the fallout—financial instability, stress, and a gnawing sense of inadequacy.

Veblen saw this trap for what it was: a hollow spectacle. Singer’s ethics give us a way out. He challenges us to shift our focus from self-indulgence to selflessness, from conspicuous consumption to conscious contribution. Instead of spending exorbitantly to bolster our social standing, why not use that wealth to create real, measurable change in the world?

We don’t need another luxury brand or another status symbol. We need a society that values substance over style, sustainability over excess, and human dignity over the incessant demand to buy more to be more. Singer’s challenge is simple: consider the true cost of your choices—not just for yourself, but for the world around you.

Conspicuous consumption is a dead end. It’s time to take a different road—one that leads not to a display case, but to a more equitable, thoughtful, and genuinely prosperous society.

Cognitive Processing Flow Model

The Cognitive Processing Flow Model illustrates how we process the phenomenal world. It’s reductionist and is missing aspects because it is just a back-of-the-napkin sketch. I created it because I uttered, “I can model it for you”. And so I did.

EDIT: I’ve updated the model slightly; the revised version appears as the article head image, but the copy below refers to the first draft.

My response was to a person claiming that all you need is facts, and logic will prevail. Rather than restate the argument, I’ll just walk through the diagram.

There’s meta information to set it up. We are subjective entities in the world, with a sense-perception apparatus through which we engage with it. Countless events occur in this world. We recognise only a few of them within our limited range, though technology expands this range in various ways.

Most of us interact in the world. Some are less ambulatory, so the world visits them. Some have sense-perception deficits whilst others have cognitive deficits. My point is not to capture every edge and corner case. This is just a generalised model.

It starts with an event. Events occur ceaselessly, in our small portion of the world and elsewhere. For the purpose of the model, the first thing that happens is that an event catches our attention. We might notice a shape, a colour, or a movement; we might hear a sound, smell an aroma, feel a sensation, or taste something.

A pre-emotion, pre-logic function serves to process these available inputs. Perhaps you hear a report on anthropogenic climate change or read something about a political candidate. This emotional filter polices sensory inputs and unconsciously or preconsciously determines whether you will react to the initial stimulus. If not, you’ll continue in an attention-seeking loop. Not that kind of attention-seeking.

As my dialogue was about the presentation of facts, our next stop will be logical evaluation. Does this make sense to us, or can we otherwise make it make sense? This is a process in itself. I’ll assume here that it requires little elaboration. Instead, I’ll focus on the operating environment.

Our logical processes are coloured by past experiences and tainted by cognitive biases and deficits. We may also trigger the calling of additional facts through past experiences or the current engagement.

We’ll process these fragments and reach some logical conclusion. But we’re not done. We take this intermediate conclusion and run it through more emotional processing. Cognitive biases come back into play. The event is then run through a cognitive dissonance routine. To be honest, this is probably part of the emotional reconciliation process, but I’ve drawn it here, so I’ll let it be. In this case, it’s just a filter: if the event conforms to our belief system, it will pass unfettered; otherwise, it will be squared with our beliefs. Again, this leads me to believe it’s a subcomponent of emotional reconciliation. I’ll update the chart later.

In any case, we’ll end at Final Acceptance. This acceptance may be that we accept or reject the logic, but we arrive at an opinion that gets catalogued with the rest of them. Some may be elevated to facts or truths in the epistemological hierarchy. Although an end marker is identified, it’s really a wait state for the next event. Rinse and repeat until death.
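To make the flow concrete, here is a minimal Python sketch of the pipeline as I’ve just walked through it. Every function name, threshold, and data structure is a hypothetical stand-in of my own; the diagram itself specifies none of these details.

```python
# A minimal sketch of the Cognitive Processing Flow Model walkthrough above.
# All names, thresholds, and weights are illustrative assumptions.

def attention_filter(event, interests):
    """An event enters processing only if one of its cues catches our attention."""
    return any(cue in event["cues"] for cue in interests)

def emotional_filter(event, propensity):
    """Pre-emotion, pre-logic gate: react to the stimulus or keep scanning."""
    return propensity.get(event["topic"], 0.0) > 0.5  # arbitrary threshold

def logical_evaluation(event, past_experiences, biases):
    """Logic coloured by past experiences and tainted by cognitive biases."""
    support = sum(past_experiences.get(claim, 0) for claim in event["claims"])
    support += biases.get(event["topic"], 0)
    return {"topic": event["topic"], "support": support}  # a proto-conclusion

def dissonance_filter(proto, beliefs):
    """Conforming conclusions pass unfettered; others are squared with beliefs."""
    prior = beliefs.get(proto["topic"], 0)
    if proto["support"] * prior >= 0:        # same sign: conforms, pass through
        return proto["support"]
    return (proto["support"] + prior) / 2    # otherwise, square it with beliefs

def process(event, state):
    """Run one event through the full flow, ending in a wait state."""
    if not attention_filter(event, state["interests"]):
        return None                          # the attention-seeking loop continues
    if not emotional_filter(event, state["propensity"]):
        return None
    proto = logical_evaluation(event, state["past"], state["biases"])
    opinion = dissonance_filter(proto, state["beliefs"])
    state["beliefs"][event["topic"]] = opinion  # catalogue the opinion
    return opinion                           # final acceptance; await the next event
```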

I’ll update this presently. Be on the lookout. It could include more dimensions and interactions, but that might have to wait until version 3.

Meantime, does this feel right to you? Did it even get your attention?

An Example: Anthropogenic Climate Change

Let’s wrap up with an example. I’ll use climate change. An article comes into your attention field, and you have an interest in these things, so it passes through the emotional filter. If your propensity for these articles is high, it might race to the next stage.

You read the article, and it contains some facts—rather, it contains claims for evaluation. To evaluate them, you’ll recall past experiences, and cognitive biases are always lying in wait. You may have to look for new facts to add to the mix. These will have to take a similar route past your attention gatekeeper and emotional sidekick.

If you are already predisposed that climate change is a hoax, these facts will filter through that lens—or vice versa.

When all of this is resolved, you’ll have arrived at a conclusion—perhaps we’ll call it a proto-conclusion. It hasn’t been set yet.

You are still going to introspect emotionally and decide if this is a position you want to hold. Perhaps you feel that climate change is a hoax, but this article doesn’t jibe with that position. Here, you’ll either accept these facts and flip a bit to become a sceptical believer, or cognitive dissonance will kick in and ensure your sense of the world isn’t thrown off kilter. You may update your belief system to include this datum for future assessments.

Now we are ready for final acceptance. You can now express your established opinion. If the next event counters that acceptance, rinse and repeat ad infinitum.
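For those who prefer the sketch above to prose, here is how this climate-change example might run through it. The weights are arbitrary stand-ins, chosen only to mirror the sceptic-reading-an-article scenario.

```python
# Hypothetical run of the earlier sketch for the climate-change example.
state = {
    "interests": {"climate"},
    "propensity": {"climate change": 0.9},   # high propensity: races through
    "past": {"warming trend": 1},            # a recalled past experience
    "biases": {"climate change": -1},        # predisposed towards scepticism
    "beliefs": {"climate change": -0.5},     # prior: leaning towards 'hoax'
}
article = {"cues": {"climate"}, "topic": "climate change",
           "claims": ["warming trend"]}

print(process(article, state))
# -> 0: the evidence and the sceptical bias offset; the catalogued belief
# is nudged to neutral, the bit flipped towards 'sceptical believer'
```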

Life Consciousness

Language is life. Yet, this assertion immediately raises a fundamental question: which came first, life or consciousness? It’s a classic chicken-and-egg conundrum. Anaesthesiologist Stuart Hameroff posits an intriguing idea—that consciousness might predate life itself. This radical notion suggests that consciousness isn’t merely a byproduct of biological processes but could be an intrinsic feature of the universe. However, there’s a snag.

The challenge lies in defining life and consciousness, two terms that lack universally accepted definitions. The absence of clarity here opens the door to a multitude of interpretations, making it easy to drift into what could be considered ‘airy faerie’ ambiguity. One must beware of the temptation to engage in intellectual exercises that lead nowhere—what might be termed ‘mental masturbation.’ This is a prime example of the insufficiency of language.

Audio: Podcast commentary on this topic.

Life and consciousness, as concepts, are elusive. Unlike straightforward nouns or adjectives—where we can confidently say, “That’s a dog,” “That’s a tree,” or “That’s green”—these terms are far more complex. They are attempts to encapsulate observed phenomena, yet we lack the precise language and understanding to pin them down definitively. The video linked above provides perspectives on various approaches to defining these terms, but none prove wholly satisfactory. This lack of satisfaction might suggest that our conventional understanding of life and consciousness is flawed. To be fair, one might even entertain the idea that life itself is an illusion, a construct of consciousness.

This ambiguity isn’t confined to the realms of life and consciousness. I recently shared a post on the topic of gender, which illustrates a similar issue. Originally, there was no concept of gender. The earliest distinctions made were between animate and inanimate. Over time, these distinctions became more nuanced. Whether or not a proto-word for life existed at that time is unclear, but the idea of animation being linked to life was beginning to take shape. The concept of gender evolved much later, driven by the need to categorise and define differences within the animate category.

The evolution of language reflects the evolution of thought. Yet, when we dig deep into these foundational concepts, we encounter the same problem: how can we argue the precedence of two concepts—life and consciousness—when neither has a solid foundation in language? If our words are inadequate, if they fail to capture the essence of what we are trying to convey, then what does that say about our understanding of the world?

Perhaps it suggests that our linguistic and cognitive tools are still too crude to grasp the true nature of reality. Or maybe it hints at a deeper truth: that some aspects of existence are beyond the scope of human understanding, no matter how sophisticated our language becomes. After all, if consciousness predates life, as Hameroff suggests, then we may need to rethink our fundamental assumptions about existence itself.

Ultimately, this exploration reveals a paradox at the heart of human knowledge. We seek to define and categorise, to impose order on the chaos of the universe. Yet in doing so, we must confront the limits of our language and, by extension, our understanding. Perhaps the true essence of life and consciousness lies not in definitions or categories but in the very act of questioning, the relentless pursuit of knowledge that drives us forward, even when the answers remain elusive.

Perfect and Relative Pitch and Reality

Perception of Reality™ is akin to having relative pitch. Unlike pitch, where some people have perfect pitch – the ability to name a note or chordal composition without any external reference – it is unlikely that anyone has, or ever will have, access to objective reality – analogically, perfect pitch for reality.

As I’ve mentioned, I believe that all our experiences and interactions with reality are relative, if not wholly subjective. There may exist an objective reality, but for reasons already noted – cognitive and sense-perception deficits – we can never access it.

Musically, if someone plays and identifies a reference note, say A (or do in movable-do solfège), and then plays a perfect fifth above (or sol), a person with relative pitch can hear that fifth interval and identify it as an E. Everything is about relationships. In music, the relationships are intervallic, but we know where we are based on where we’ve been. A person with perfect pitch requires no such priming; they can identify the first A note without prompting.
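The interval arithmetic is simple enough to sketch. Here is a minimal illustration in Python using twelve-tone equal temperament; the function name and structure are my own invention, not anything from music-theory software.

```python
# Relative pitch as arithmetic: a note is identified only in relation
# to a known reference plus an interval, counted in semitones.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_from_interval(reference: str, semitones: int) -> str:
    """Derive a note from a reference note and an interval in semitones."""
    return NOTES[(NOTES.index(reference) + semitones) % 12]

print(note_from_interval("A", 7))  # a perfect fifth (7 semitones) above A -> 'E'
```

A listener with perfect pitch, by analogy, would need no `reference` argument at all.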

Our experience with reality is also relative, but no one has the equivalence of perfect pitch. No one has access to objective reality – if there even is one.

I don’t deny that there could be an objective reality. I just believe it’s inaccessible. I am a qualified realist – so, not a physicalist – but I don’t believe in supernatural or paranormal events. A so-called ‘supernatural’ event is merely an event that hasn’t yet been described in ‘natural’ terms.

Now that I got that off my chest, what are your thoughts on objective reality? Lemme know.

VIDEO: Response to Response on Sapolsky v. Dennett Debate

It’s been a minute since I’ve posted a video. Restart the clock. In this video, I critique Outside Philosopher’s critique of the debate between Robert Sapolsky and Daniel Dennett on Free Will and Determinism. He attempts to leverage Gödel’s incompleteness theorem in his defence.

Feel free to leave comments on YouTube or below. Cheers.

Illusions of Self: Evanescent Instants in Time

In the realm of existential contemplation, the notion of the ‘self’ is akin to a fleeting present moment. It flits into existence for a fraction of an attosecond, vanishing before we can grasp it. Much like the illusory present, the ‘self’ manifests briefly and then fades into the annals of the past, a mere succession of temporal slices.

When we traverse the corridors of time, we effortlessly speak of the ‘past,’ stringing together these slices into a continuous narrative. This amalgamation serves our language and thought processes, aiding idiomatic expression. Yet, it remains a construct, a fiction we collectively weave. It is akin to the frames of a movie, where the illusion of movement and coherence is crafted by arranging individual frames in rapid succession.

The ‘self’ follows a similar illusionary trajectory. It exists only inasmuch as we christen it, attributing a name to a fleeting instance of being. However, this existence is as fleeting and ephemeral as a mirage. We name it, we perceive it, but it dissolves like smoke upon closer inspection.

This existential musing reminds one of the fictional entity – the unicorn. We can name it, describe it, and even envision it, yet its tangible existence eludes us. The ‘self’ aligns itself with this enigmatic unicorn, an abstract concept woven into the fabric of human understanding.

In this dance of philosophical thought, published works echo similar sentiments. Renowned thinkers like Nietzsche, in his exploration of eternal recurrence, or Camus, delving into the absurdity of life, have grappled with the transient nature of the ‘self.’ Their writings form a canvas, painting the portrait of an existence that flits through time, leaving only traces of memory and illusion in its wake.

In conclusion, the ‘self’ is a fleeting enigma, a temporal wisp that vanishes as quickly as it appears. Like a raindrop in the river of time, it merges and dissipates, leaving behind an evanescent trace of what we conceive as ‘I’. The philosophical gaze peers through the mist, challenging the very essence of this ephemeral entity, inviting us to question the very fabric of our perceived reality.

To Be or Not to Be (Free)

I recently posted a YouTube Short video titled You Have No Free Will, but this is still debatable.

Video: You Have No Free Will

The premise of the case against free will is that human decisions are made approximately half a second before we are conscious of them, and the conscious brain then convinces itself that it just made a choice. This sounds pretty damning, but let’s step back for a moment.

Podcast: Audio rendition of this page content

If you’ve been following this blog these past few months, you’ll be aware that I feel the question of free will is a pseudo-question hinging primarily on semantics. As well, there’s the causa sui argument that I’d like to ignore for the purpose of this post.

There remains a semantic issue. The free will argument is centred around the notion that a person or agent has control or agency over their choices. This means that how we define the agent matters.

In the study the video references, the authors define the agent as having conscious awareness. Since this awareness occurs after the decision is made, the person must have had no agency. But I think an argument can be made that the earlier decision gateway is formed through prior experience. Applying computer metaphors, we can say that this pre-consciousness is like embedded hardware or read-only logic. It’s like autopilot.

In business, there are various decision-management schemes. In particular, the conscious-but-slow version is for a person to be notified to approve or deny a request. But some decisions are automatic. If a purchase is over, say, 50,000, then a manager needs to sign off on the request. But if the purchase is under 50,000, the request is approved automatically, and the manager is notified for later review if so desired.
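As a toy sketch of that scheme, keeping the 50,000 threshold from the example and inventing the function names:

```python
# Illustrative decision-management routing; names and wiring are assumptions.
APPROVAL_THRESHOLD = 50_000

def handle_purchase(amount, manager_signoff, notify_manager):
    """Route a request: conscious-but-slow sign-off vs automatic approval."""
    if amount > APPROVAL_THRESHOLD:
        return manager_signoff(amount)   # slow path: the manager decides
    notify_manager(amount)               # fast path: approve, flag for review
    return True

signoff = lambda amount: True            # stand-in for a real approval step
log = lambda amount: print(f"auto-approved {amount}; flagged for review")

handle_purchase(12_000, signoff, log)    # automatic, manager notified later
handle_purchase(80_000, signoff, log)    # escalated to the manager
```

The pre-conscious gateway plays the automatic role here: decisions pass through embedded policy, and consciousness is notified afterwards, free to review.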

I am not saying that I buy into this definition, but I think the argument could be made.

You might not know it by the number of posts discussing it, but I am not really concerned about whether or not free will really exists. I don’t lose any sleep over it. At the same time, I tend to react to it. Since I feel it’s a pseudo-problem where tweaking the definition slightly can flip the answer on its head, it’s just not worth the effort. On to better things.