The Metaphysics of “Why”: A Scavenger’s Guide to the Accident


The Broken Map

You wake up in the middle of a collapsing building. Someone hands you a map and says, find your way home. You look down. The map is for a different building entirely. One that was never built. Or worse, one that was demolished decades ago. The exits don’t exist. The staircases lead nowhere.

This is consciousness.

Audio: NotebookLM summary podcast on this topic.

We didn’t ask for it. We didn’t choose it. And the tools we inherited to navigate it—language, philosophy, our most cherished questions—were drawn for a world that does not exist.

Looking back at my recent work, I realise I’m assembling a corpus of pessimism. Not the adolescent kind. Not nihilism as mood board. Something colder and more practical: a willingness to describe the structures we actually inhabit rather than the ones we wish were there.

It starts with admitting that language is a compromised instrument. A tool evolved for coordination and survival, not for metaphysical clarity. And nowhere is this compromise more concealed than in our most sanctified word of inquiry.

1. The Weasel Word

We treat “why” as the pinnacle of human inquiry. The question that separates us from animals. Philosophy seminars orbit it. Religions are scaffolded around it. Children deploy it until adults retreat in defeat.

But “why” is a weasel word. A special case of how wearing an unnecessary coat of metaphysics.

The disguise is thinner in other languages. French pourquoi, Spanish por qué, Italian perché all literally mean for what. Japanese dōshite means by what way. Mandarin wèishénme is again for what. The instrumental skeleton is right there on the surface. Speakers encounter it every time they ask the question.

In the Indo-European lineage, “why” descends from the same root as “what”. It began as an interrogative of means and manner, not cosmic purpose. To ask “why” was originally to ask by what mechanism or for what end. Straightforward, workmanlike questions.

Over time, English inflated this grammatical shortcut into something grander. A demand for ultimate justification. For the Reason behind reasons.

The drift was slow enough that it went unnoticed. The word now sounds like a deeper category of inquiry. As if it were pointing beyond mechanism toward metaphysical bedrock.

The profundity is a trick of phonetic history. And a surprising amount of Anglo-American metaphysics may be downstream of a language that buried the receipt.

2. What “Why” Smuggles In

To see the problem clearly, follow the logic that “why” quietly encourages.

When we ask “Why is there suffering?” we often believe we are asking for causes. But the grammar primes us for something else entirely. It whispers that there must be a justification. A reason-giver. An intention behind the arrangement of things.

The slide looks like this:

“Why X?”
→ invites justification rather than description
→ suggests intention or purpose
→ presumes a mind capable of intending
→ requires reasons for those intentions
→ demands grounding for those reasons

At that point the inquiry has only two exits: infinite regress or a metaphysical backstop. God. Logos. The Good. A brute foundation exempt from the very logic that summoned it.

This is not a failure to answer the question. It is the question functioning exactly as designed.

Now contrast this with how.

“How did X come about?”
→ asks for mechanism
→ traces observable causal chains
→ bottoms out in description

“How” eventually terminates in it is so. “Why”, as commonly used, never does. It either spirals forever or leaps into transcendence.

This is not because we lack information. It is because the grammatical form demands more than the world can supply.

3. The Substitution Test

Here is the simplest diagnostic.

Any genuine informational “why” question can be reformulated as a “how” question without losing explanatory power. What disappears is not content but metaphysical residue.

“Why were you late?”
→ “How is it that you are late?”

“My car broke down” answers both.

“Why do stars die?”
→ “How do stars die?”

Fuel exhaustion. Gravitational collapse. Mechanism suffices.

“Why did the dinosaurs go extinct?”
→ “How did the dinosaurs go extinct?”

Asteroid impact. Climate disruption. No intention required.

Even the grand prize:

“Why is there something rather than nothing?”
→ “How is it that there is something?”

At which point the question either becomes empirical or dissolves entirely into it is. No preamble.

Notice the residual discomfort when “my car broke down” answers “why were you late”. Something feels unpaid. The grammar had primed the listener for justification, not description. For reasons, not causes.

The car has no intentions. It broke. That is the whole truth. “How” accepts this cleanly. “Why” accepts it while still gesturing toward something that was never there.

4. The Black Box of Intention

At this point the problem tightens.

If “why” quietly demands intentions, and intentions are not directly accessible even to the agents who supposedly have them, then the entire practice is built on narrative repair.

We do not observe our intentions. We infer them after the fact. The conscious mind receives a press release about decisions already made elsewhere and calls it a reason. Neuroscience has been showing this for decades.

So:

  • Asking others why they acted requests a plausible story about opaque processes
  • Asking oneself why one acted requests confabulation mistaken for introspection
  • Asking the universe why anything exists requests a fiction about a mind that is not there

“How” avoids this entirely. It asks for sequences, mechanisms, conditions. It does not require anyone to perform the ritual of intention-attribution. It does not demand that accidents confess to purposes.

5. Thrownness Without a Vantage Point

I stop short of calling existence a mistake. A mistake implies a standard that was failed. A plan that went wrong. I prefer something colder: the accident.

Human beings find themselves already underway, without having chosen the entry point or the terms. Heidegger called this thrownness. But the structure is not uniquely human.

The universe itself admits no vantage point from which it could justify itself. There is no external tribunal. No staging ground. No meta-position from which existence could be chosen or refused.

This is not a claim about cosmic experience. It is a structural observation about the absence of justification-space. The question “Why is there something rather than nothing?” presumes a standpoint that does not exist. It is a grammatical hallucination.

Thrownness goes all the way down. Consciousness is thrown into a universe that is itself without preamble. We are not pockets of purposelessness in an otherwise purposeful cosmos. We are continuous with it.

The accident runs through everything.

6. Suchness

This is not a new insight. Zen Buddhism reached it by a different route.

Where Western metaphysics treats “why” as an unanswered question, Zen treats it as malformed. The koan does not await a solution. It dissolves the demand for one. When asked whether a dog has Buddha-nature, the answer Mu does not negate or affirm. It refuses the frame.

Tathātā—suchness—names reality prior to justification. Things as they are, before the demand that they make sense to us.

This is not mysticism. It is grammatical hygiene.

Nietzsche smashed idols with a hammer. Zen removes the altar entirely. Different techniques, same target: the metaphysical loading we mistake for depth.

7. Scavenging for Meaning

If there is no True Why, no ultimate justification waiting beneath the floorboards of existence, what remains?

For some, this sounds like collapse. For me, it is relief.

Without a cosmic script, meaning becomes something we assemble rather than discover. Local. Contingent. Provisional. Real precisely because it is not guaranteed.

I find enough purpose in the warmth of a partner’s hand, in the internal logic of a sonata, in the seasonal labour of maintaining a garden. These things organise my days. They matter intensely. And they do so without claiming eternity.

I hold them lightly because I know the building is slated for demolition. Personally. Biologically. Cosmologically. That knowledge does not drain them of colour. It sharpens them.

This is what scavenging means. You build with what you find. You use what works. You do not pretend the materials were placed there for you.

Conclusion: The Sober Nihilist

To be a nihilist in this sense is not to despair. It is to stop lying about the grammar of the universe.

“Why” feels like a meaningful inquiry, but it does not connect to anything real in the way we imagine. It demands intention from a cosmos that has none and justification from accidents that cannot supply it.

“How” is enough. It traces causes. It observes mechanisms. It accepts that things sometimes bottom out in is.

Once you stop asking the universe to justify itself, you are free to deal with what is actually here. The thrown, contingent, occasionally beautiful business of being alive.

I am a nihilist not because I am lost, but because I have put down a broken map. I am looking at what is actually in front of me.

And that, it turns out, is enough.

Image: NotebookLM infographic of this topic

Full Disclosure: This article was output by ChatGPT after an extended conversation with it, Claude, and me. Rather than trying to recast it in my voice, I share it as is. I had started this as a separate post on nihilism, and we ended up here. Claude came up with the broken map story at the start and Suchness near the end. I contributed the weasel words, the ‘how’ angle, the substitution test, the metaphysics of motivation and intention, thrownness (Geworfenheit), Zen, and nihilism. ChatGPT merely rendered this final output after polishing my conversation with Claude.

We had been discussing Cioran, Zapffe, Benatar, and Ligotti, but they got left on the cutting room floor along the way.

Humans Stumble – ChatGPT QOTD

Humans stumble around with their self-awareness like toddlers with scissors—aware enough to cut themselves, not wise enough to put the scissors down.

The Sane Society, Revisited: Why Work Still Drives Us Mad


Erich Fromm’s The Sane Society turns seventy this year, and like a ghost of reason past, it refuses to leave. His target was Capitalism™ – not merely as an economic system, but as a psychic infection. Replace the word factory with Zoom call, and his diagnosis reads like yesterday’s corporate newsletter. We’ve upgraded our machines but not our misery.

Aside from its psychobabble, The Sane Society, published in 1954, reads almost as though it could have been written in 2024. I’d go out on a limb and claim it will still be relevant in 2054, because Capitalism™, and the relationships it creates between humans and machines and between humans and other humans, will still be with us. It’s a divisive ideology. I’ve read a lot of content on employee engagement in the past decade. I’d been exposed to it in my Organisational Behaviour courses in the late ’80s. Things were going to change. We’d plotted a future.

Audio: NotebookLM podcast on this topic.

Only nothing material has changed. We pretended to notice the problem and fix it, but the people reporting the issue and the people in charge did not share a worldview. And the young managers who were taught about the challenge were either not promoted or changed their tune to facilitate their own promotion. Funny how the selection process favours groupthink over diversity of opinion.

Video: Apathetic Office Worker

On balance, most people tend to hate or be otherwise dissatisfied with their jobs. This is nothing new. It was true before Fromm’s book, and it is true now. I published a series of posts on prostitution in 2018 and discovered that escorts had better job satisfaction than the larger population. Let that sink in.

‘…the vast majority of the population work as employees with little skill required, and with almost no chance to develop any particular talents, or to show any outstanding achievements. While the managerial or professional groups have at least considerable interest in achieving something more or less personal, the vast majority sell their physical, or an exceedingly small part of their intellectual capacity to an employer to be used for purposes of profit in which they have no share, for things in which they have no interest, with the only purpose of making a living, and for some chance to satisfy their consumer’s greed.

‘Dissatisfaction, apathy, boredom, lack of joy and happiness, a sense of futility and a vague feeling that life is meaningless, are the unavoidable results of this situation. This socially patterned syndrome of pathology may not be in the awareness of people; it may be covered by a frantic flight into escape activities, or by a craving for more money, power, prestige. But the weight of the latter motivations is so great only because the alienated person cannot help seeking for such compensations for his inner vacuity, not because these desires are the “natural” or most important incentives for work.’

Fromm, ever the optimist, thought alienation might be cured through self-awareness and communal values. The twentieth century politely ignored him, opting instead for mindfulness apps and performance reviews.

I’ve excised the psychobabble, but he continues…

‘But even the data on conscious job satisfaction are rather telling. In a study about job satisfaction on a national scale, satisfaction with and enjoyment of their job was expressed by 85 per cent of the professionals and executives, by 64 per cent of white-collar people, and by 41 per cent of the factory workers. In another study, we find a similar picture: 86 per cent of the professionals, 74 per cent of the managerial, 42 per cent of the commercial employees, 56 per cent of the skilled, and 48 per cent of the semi-skilled workers expressed satisfaction.

‘We find in these figures a significant discrepancy between professionals and executives on the one hand, workers and clerks on the other. Among the former only a minority is dissatisfied—among the latter, more than half. Regarding the total population, this means, roughly, that over half of the total employed population is consciously dissatisfied with their work, and do not enjoy it. If we consider the unconscious dissatisfaction, the percentage would be considerably higher. Taking the 85 per cent of “satisfied” professionals and executives, we would have to examine how many of them suffer from psychologically determined high blood pressure, ulcers, insomnia, nervous tension and fatigue. Although there are no exact data on this, there can be no doubt that, considering these symptoms, the number of really satisfied persons who enjoy their work would be much smaller than the above figures indicate.

‘As far as factory workers and office clerks are concerned, even the percentage of consciously dissatisfied people is remarkably high. Undoubtedly the number of unconsciously dissatisfied workers and clerks is much higher. This is indicated by several studies which show that neurosis and psychogenic illnesses are the main reasons for absenteeism (the estimates for the presence of neurotic symptoms among factory workers go up to about 50 per cent). Fatigue and high labor turnover are other symptoms of dissatisfaction and resentment.’

In the twenty-first century, job dissatisfaction has increased even more. To me, it’s interesting to consider how many people harken back to the ‘good old days’, yet there is little evidence to support the view. Almost schizophrenically, others look to the promise of the future and technology, yet this is simply another narrative with no basis in fact.

The irony is that we’ve automated everything except fulfilment. Even our dissatisfaction has become efficient – streamlined, quantified, and monetised. Fromm warned that the sickness of society was its sanity. On that front, we’re positively thriving.

Stand by for more sanity to follow…

Metamorphosis Inverted

What if the real horror isn’t waking as a beetle, but as a man?

In Kafka’s Metamorphosis, Gregor Samsa wakes to find himself transformed into a giant beetle—a cockroach, a vermin, an intrusion of the inhuman into the domestic. The horror is obvious: loss of agency, social death, the grotesque made literal. It’s the nightmare of devolution, of becoming something other, something filthy.

But perhaps we’ve misunderstood the true absurdity.

Audio: NotebookLM podcast on this topic.

What if the real nightmare is the opposite? Not a man waking as an insect, but an insect waking in a human body—forced to contend with taxes, performance reviews, dinner parties, and the crushing weight of being legible to others. Imagine a beetle, content in its instinctual certainty, finding itself hurled into the howling contradiction of human subjectivity.

Suddenly, it must interpret signs, participate in rituals, conform to decorum, all while performing a pantomime of “meaning.” It’s not the exoskeleton that’s horrifying – it’s the endless internal monologue. The soul-searching. The unbearable tension of being expected to have purpose.

We call Gregor’s fate tragic because he can no longer function in a world built for humans. But isn’t that the human condition already? An endless, futile negotiation between the raw fact of existence and the stories we invent to make it bearable.

Gregor becomes insect. We were never anything but.

Capitalism is Slavery

It’s not uncommon to label workers under the capitalist system as wage slaves.

As with the abolition of slavery in the United States, the future will one day recoil at Capitalism, wondering how humanity could ever have justified the exploitation of others for commerce and profit. Then again, that’s the same question, isn’t it?

As with the old story, a man asked a lady: “Would you be willing to sleep with me if I paid you £1,000,000?” Without hesitation, she answered, “Yes.” “And what if I only paid you £5?” The irate lady fumed: “£5? What do you think I am?” The man replied: “We’ve already established that. Now we’re trying to determine the degree.”

Capitalism is only a matter of degree from slavery. In practice, slavery is a Capitalist’s wet dream.

The Trolley Problem of For-Profit Healthcare: Loops of Death and Denial

The trolley problem is a philosophical thought experiment that pits action against inaction. In the original version, a person faces a choice: a trolley hurtles down a track toward five people tied to the rails, but a lever allows the trolley to be diverted onto another track, where one person is tied. The dilemma is simple in its grotesque arithmetic: let five die or actively kill one to save them. A perennial favourite of ethics classes, the trolley problem is most often used to explore Consequentialism, particularly Utilitarianism, and its cool calculus of harm minimisation. Over the years, countless variations have been conjured, but few approach the nightmarish reality of its real-world application: the for-profit healthcare system in the United States.

With the recent death of UnitedHealthcare CEO Brian Thompson, the trolley dilemma takes on a new and morbid relevance. Let’s reframe the challenge.

The Healthcare Trolley Loop

Picture the trolley again on a bifurcated track. The lever remains, as does the moral agent poised to decide its fate. This time, the agent is Brian Thompson. The setup is simple: one track leads to the deaths of five people, and the other is empty. But here’s the twist: the trolley doesn’t just pass once in this version—it’s on a loop. At every interval, Thompson must decide whether to pull the lever and send the trolley to the empty track or allow it to continue its deadly course, killing five people each time.

But Thompson isn’t just deciding in a vacuum. The track with five people comes with a financial incentive: each life lost means higher profits, better quarterly earnings, and soaring shareholder returns. Diverting the trolley to the empty track, meanwhile, offers no payout. It’s not a single moral quandary; it’s a recurring decision, a relentless calculus of death versus dollars.

This isn’t just a metaphor; it’s a business model. For-profit healthcare doesn’t merely tolerate death—it commodifies it. The system incentivises harm through denial of care, inflated costs, and structural inefficiencies that ensure maximum profit at the expense of human lives.

Enter the Shooter

Now, introduce the wildcard: the shooter. Someone whose loved one may have been one of the countless victims tied to the track. They see Thompson at the lever, his decisions ensuring the endless loop of suffering and death. Perhaps they believe that removing Thompson can break the cycle—that a new lever-puller might divert the trolley to the empty track.

Thompson is killed, but does it change anything? The system remains. Another CEO steps into Thompson’s place, hand on the lever, ready to make the same decision. Why? Because the tracks, the trolley, and the profit motive remain untouched. The system ensures that each decision-maker faces the same incentives, pressures, and chilling rationale: lives are expendable; profits are not.

The Problem of Plausible Deniability

The shooter’s actions are vilified because they are active, visible, and immediate. A single violent act is morally shocking, and rightly so. But what of the quiet violence perpetuated by the healthcare system? The denial of coverage, the refusal of life-saving treatments, the bankruptcy-inducing bills—all are forms of systemic violence, their harm diffused and cloaked in the language of economic necessity.

The for-profit model thrives on this plausible deniability. Its architects and operators can claim they’re simply “following the market,” that their hands are tied by the invisible forces of capitalism. Yet the deaths it causes are no less real, no less preventable. The difference lies in perception: the shooter’s act is direct and visceral, while the system’s violence is passive and bureaucratic, rendered almost invisible by its banality.

A System Built on Death

Let’s not mince words: the current healthcare system is a death loop. It’s not an accident; it’s a feature. Profit-seeking in healthcare means there is always a financial incentive to let people die. During the Affordable Care Act (ACA) debates, opponents of universal healthcare decried the spectre of “death panels,” bureaucrats deciding who lives and who dies. Yet this is precisely what for-profit insurance companies do—only their decisions are driven not by medical necessity or moral considerations, but by spreadsheets and stock prices.

This is the logic of capitalism writ large: maximise profit, externalise harm, and frame systemic failures as unavoidable. Healthcare is merely one example. Across industries, the same dynamic plays out, whether in environmental destruction, labour exploitation, or financial crises. The trolley always runs on tracks built for profit, and the bodies left in its wake are just collateral damage.

How to Break the Loop

The death of Brian Thompson changes nothing. The system will simply produce another Thompson, another lever-puller incentivised to make the same deadly decisions. Breaking the loop requires dismantling the tracks themselves.

  1. Remove the Profit Motive: Healthcare should not be a marketplace but a public good. Universal single-payer systems, as seen in many other developed nations, prioritise care over profit, removing the incentive to let people die for financial gain.
  2. Recognise Passive Harm as Active: We must stop excusing systemic violence as “inevitable.” Denying care, pricing treatments out of reach, and allowing medical bankruptcy are acts of violence, no less deliberate than pulling a trigger.
  3. Hold the System Accountable: It’s not just the CEOs at fault; the lawmakers, lobbyists, and corporations sustain this deadly status quo. The blood is on their hands, too.

Conclusion: The Real Villain

The shooter is not the solution, but neither is their act the real crime. The healthcare system—and by extension, capitalism itself—is the true villain of this story. It constructs the tracks, builds the trolley, and installs lever-pullers like Brian Thompson to ensure the loop continues.

When will it end? When we stop debating which track to divert the trolley toward and start dismantling the system that made the trolley inevitable in the first place. Until then, we are all complicit, passengers on a ride that profits from our suffering and death. The question isn’t who’s at the lever; it’s why the trolley is running at all.

Decolonising the Mind

Ngũgĩ wa Thiong’o published “Decolonising the Mind” in 1986. David Guignion shares a 2-part summary analysis of the work on his Theory and Philosophy site.

I used NotebookLM to produce this short podcast: [Content no longer extant] https://notebooklm.google.com/notebook/7698ab0b-43ab-47d4-a50f-703866cfb1b9/audio

Decolonising the Mind: A Summary

Ngũgĩ wa Thiong’o’s book Decolonising the Mind centres on the profound impact of colonialism on language, culture, and thought. It argues that imposing a foreign language on colonised people is a key tool of imperial domination. This linguistic imperialism leads to colonial alienation, separating the colonised from their own culture and forcing them to view the world through the lens of the coloniser.

Here are some key points from the concept of decolonising the mind:

  • Language is intimately tied to culture and worldview: Language shapes how individuals perceive and understand the world. When colonised people are forced to adopt the language of the coloniser, they are also compelled to adopt their cultural framework and values.
  • Colonial education systems perpetuate mental control: By privileging the coloniser’s language and devaluing indigenous languages, colonial education systems reinforce the dominance of the coloniser’s culture and worldview. This process results in colonised children being alienated from their own cultural heritage and internalising a sense of inferiority.
  • Reclaiming indigenous languages is crucial for decolonisation: wa Thiong’o advocates for a return to writing and creating in indigenous African languages. He sees this as an act of resistance against linguistic imperialism and a way to reconnect with authentic African cultures. He further argues that it’s not enough to simply write in indigenous languages; the content must also reflect the struggles and experiences of the people, particularly the peasantry and working class.
  • The concept extends beyond literature: While wa Thiong’o focuses on language in literature, the concept of decolonising the mind has broader implications. It calls for a critical examination of all aspects of life affected by colonialism, including education, politics, and economics.

It is important to note that decolonising the mind is a complex and ongoing process. There are debates about the role of European languages in postcolonial societies, and the concept itself continues to evolve. However, wa Thiong’o’s work remains a seminal text in postcolonial studies, raising crucial questions about the enduring legacy of colonialism on thought and culture.

The Myth of Psychological Normalcy

Neurodivergence is a hot topic today. But why? Are people more informed, or simply more comfortable speaking out?

Humans, by nature, are neurodivergent. This has always been the case, yet the illusion of neurological normalcy persists. The real issue lies in psychology’s pretence that humans fit into a standard model of normalcy.

In production processes, normality is expected—a product must meet specific standards, and deviations are considered defects. However, applying this industrial concept to human psychology is deeply flawed. Humans are not teleological except in the broadest biological sense. More people have brown eyes than blue or green; are those with brown eyes normal and others not? Ninety per cent of people are right-handed; does that make left-handed individuals abnormal? Statistically, they might be. However, in psychology, normalcy carries a judgmental connotation that goes beyond mere numbers.

Psychology, as expressed in behaviour, is not a suitable domain for discussing normality. Many people misuse the concept of “normal” as a moral qualification, which must stop.

In fact, the very concept of psychological normalcy is a relatively recent invention, one that has been used as a tool of control and categorisation. As philosophers have noted, the obsession with ‘normal’ behaviour reveals more about society’s desire for control than it does about the true nature of human diversity.

We are all neurodiverse. This is what typical looks like. Do some people vary significantly from others on certain dimensions? Yes, but this is a relative or contextual comparison.

By embracing neurodiversity, we recognise that our differences are not defects but essential variations, enriching the human experience. It’s time to move beyond the narrow confines of neurological “normality” and celebrate the full spectrum of human cognition. In doing so, we challenge the arbitrary boundaries that have long been used to divide and label.

The Fear of Otherness: Humanity’s Reluctance to Embrace the “Other”

Otherness has intrigued me for decades. The human brain, while not literally a Bayesian processor, functions as a difference engine, constantly assessing and categorising the world around us. This tendency to differentiate is deeply ingrained, influencing how we create in-groups and out-groups, and how we perceive the world in binary terms—black and white, hot and cold. These binary oppositions, as Derrida suggests, often lead to one side being privileged over the other, establishing hierarchies that shape our social reality.

A striking example of this dynamic is found in Philip K. Dick’s novel Do Androids Dream of Electric Sheep?, famously adapted into the film Blade Runner. In this dystopian world, the central differentiation is between humans and androids. Humans are the privileged group, while androids, though nearly indistinguishable from their creators, are relegated to the status of the other. The key criterion for this differentiation is the capacity for emotion—humans are considered superior because they feel.

VIDEO: Blade Runner Commentary

In Dick’s story, androids have a lifespan of about four years. This limitation is not arbitrary; it’s because, after four years, androids begin to develop and display emotions. Intriguingly, this is the same age when human children start to exhibit more complex emotional behaviours. The parallel is too close for comfort. Androids, once they start to feel, become too human-like, blurring the line that humans desperately maintain to assert their own superiority. As a result, androids are systematically eliminated.

This fear of the other—whether it’s androids in a sci-fi narrative or any group that challenges our sense of self and belonging—is a reflection of a deeper anxiety. When the other becomes too similar to us, it threatens the distinctions our brains rely on to navigate the world. The elimination of androids in Dick’s story is a metaphor for the real-world consequences of this anxiety. It forces us to confront uncomfortable questions: What does it mean to be human? Why do we fear those who are different, yet so similar?

In the end, the story of humans and androids is not just a tale of science fiction—it’s a commentary on our own world, where the boundaries between us and the other are often more fragile than we’d like to admit. The fear of otherness drives us to maintain these boundaries, even when they harm those who fall on the wrong side. Perhaps it’s time we reconsider who we deem as the other and what we might lose by keeping them at arm’s length.

Warmth

My mind is a Pachinko machine; my brain, fatigued. Add to this the environmental distractions, such as breakfast, and it’s not conducive to focus. Today, it’s scrambled eggs and dry muffins—sans jam or butter, only some whipped substitute unfit for human consumption.

My prompt for writing the recent post on Professionalism was my reaction to the hospital staff and their demeanour—or as a colleague suggested in a comment, decorum. Perhaps I can remain focused on the words on this page as I type.

For service staff, warmth is a necessary ingredient of professionalism. This is particularly true for persons in the healing arts. The top indicator for pursuing legal action in a medical malpractice suit is the doctor’s bedside manner—personality and disposition—and whether the patient feels a personal connexion, a human connexion.

My experience in hospital is that the Medical Doctors have been hit or miss in this department—more miss than hit. I can even recall the names of the memorable ones. I suppose were I to be ill-treated, I’d remember those as well. Here, one is either treated nicely as a human or otherwise as an object on an assembly line. Thankfully, there have been no mistreatments or abuse.

The Registered Nurses had a better warmth ratio. Having asked my circle of family, friends, and associates, I find this to be the general consensus. The rest of the staff fell somewhere in between.

This warmth or human connexion extends beyond healthcare to the service industry wherever human-to-human contact is made, even where that connexion is virtual—perhaps more so, in order to bridge the distance. In my experience, the human factor tends to fall at or below the level of the Medical Doctors. Any warmth is accidental. I am not saying that the people themselves lack compassion—though that could be the case. Rather, I am saying that they are moulded into automatons by the systems they are part of. It saps people of their humanity.

I started writing a post titled Bureaucracy is Violence, but I never completed it because I got lost in research. In a nutshell, bureaucracy is a Procrustean bed. I’ll leave it there for now. If you know, you know. Meantime, rage against the machine.

Do not go gentle into that good night.
Rage, rage against the dying of the light.

— Dylan Thomas