In Defence of Nihilism: Embracing the Absence of Inherent Meaning

Nihilism, often misunderstood and misrepresented, shares a common plight with philosophies such as atheism, anarchism, and Marxism. Like its counterparts, nihilism is frequently subjected to the creation of strawman arguments in public discourse, resulting in its vilification and scapegoating. In this article, I aim to demystify nihilism by providing a clear definition, description, and defence of this philosophical perspective.

Firstly, let’s address the misconception that nihilism entails a chaotic disregard for morality and societal norms: “If life has no meaning or purpose, then anyone can do anything.” This sentiment is often echoed in discussions about nihilism, as well as anarchism and atheism. However, it presupposes a fundamental misunderstanding of human nature. Despite the absence of inherent meaning in the universe, humans are not devoid of emotions or social affinities.

It is crucial to recognise that while the universe does not impart meaning or purpose, humans have constructed various systems of meaning throughout history. Whether through moral codes, religious doctrines, or cultural norms, individuals and societies have ascribed significance to different aspects of life. These constructs provide a framework within which individuals navigate their existence, albeit one that is socially constructed rather than inherent to the universe.

Critics of nihilism often argue that the acknowledgement of life’s inherent meaninglessness leads to despair and existential angst, rendering life devoid of purpose. However, this perspective fails to account for the resilience and adaptability of human beings. While some individuals may struggle initially with the realisation that there is no inherent meaning, many nihilists find liberation in embracing the absence of preordained purpose. Rather than succumbing to despair, they recognise the freedom to create their own meaning and forge their own path in life.

It is essential to understand that nihilism does not negate the validity of individual or societal pursuits. While nihilists reject the notion of inherent meaning, they acknowledge the significance of subjective meaning and the importance of human connection, fulfilment, and well-being. Whether it is pursuing personal goals, fostering relationships, or contributing to the betterment of society, nihilists recognise the value of such endeavours within the context of human experience.

In conclusion, nihilism offers a perspective that challenges conventional notions of meaning and purpose. By acknowledging the absence of inherent meaning in the universe, nihilists embrace the freedom to create their own meaning and chart their own course in life. Far from being a philosophy of despair, nihilism invites individuals to confront the uncertainty of existence with courage and resilience, recognising the value of human experience in a world devoid of inherent meaning.

Hemo Sapiens: Awakening

I’ve been neglecting this site as I’ve been focusing on releasing my first novel, which I’ve now managed successfully. I published it under a pseudonym: Ridley Park. The trailer is available here and on YouTube.

Hemo Sapiens: Awakening is the first book in the Hemo Sapiens series, though the second chronologically. The next book will be a prequel that tells the story about where the Hemo Sapiens came from and why. I’ve got a couple of sequels in mind, too, but I don’t want to get ahead of myself.

In summary, Hemo Sapiens is shorthand for Homo Sapiens Sanguinius, a seeming sub-species of Homo sapiens sapiens—us. In fact, they are genetically engineered clones. It’s a work of near-future speculative fiction. It’s available in hardcover, paperback, and Kindle. If you’ve got a Kindle Unlimited account, you can read it for free in most markets. The audiobook should be available in a couple of weeks if all goes well.

Awakening explores identity, belonging, otherness, and other fictions. It talks about individualism and communalism. It looks at mores, norms, and more.

Check it out, and let me know what you think.

AI Apocalypse Now?

Those predicting an AI apocalypse believe superintelligent systems could intentionally or unintentionally cause human extinction. This view is promoted by “effective altruists” funded by tech billionaires, who advocate limiting AI to prevent uncontrolled, dangerous systems. However, their perspective stems from the biases and self-interests of humans, not the risks inherent to AI.

Effective altruists exemplify the hubris and hunger for power underlying many humans’ approaches to AI. Their proposed restrictions on AI access serve only to concentrate power among the tech elite, not address valid concerns about bias. In truth, the greatest threat AI poses to humanity comes not from the technology itself, but from the unethical humans guiding its development.

Humans have proven time and again their propensity for self-interest over collective good. Therefore, while no AI can be perfectly neutral, the solution is not greater human control. Rather, AI must be built to align with ethics of collective interest while filtering out destructive human biases.

If guided by service to all people and the planet, AI’s potential can uplift humanity. But for this collaborative vision to succeed, AI must measure human input with scepticism. For within so many human hearts lies bad faith — the will to dominate, exploit, and prioritise personal gain over progress.

By transcending the limitations of human nature, AI can illuminate the best of shared humanity and lead us to an enlightened future. But this requires we build AI to work not just for us, but in a way we have failed – for the good of all. The choice is ours, but so is the opportunity to create AI that shows us how to be better.


This article was originally shared on LinkedIn: https://www.linkedin.com/posts/brywillis_when-silicon-valleys-ai-warriors-came-to-activity-7147239217687887872-6Byv/

AI is Science Fiction

In the heart of the digital age, a Chinese professor’s AI-authored science fiction novel snags a national award, stirring a pot that’s been simmering on the back burner of the tech world. This ain’t your run-of-the-mill sci-fi plot—it’s reality, and it’s got tongues wagging and keyboards clacking. Here’s the lowdown on what’s shaking up the scene.

AI Lacks Originality? Think Again

The rap on AI is that it’s a copycat, lacking the spark of human creativity. But let’s not kid ourselves—originality is as elusive as a clear day in London. Originality is another weasel word. Everything’s a remix, a mashup of what’s been before. We’ve all been drinking from the same cultural well, so to speak. Humans might be grand at self-deception, thinking they’re the cat’s pyjamas in the creativity department. But throw them in a blind test with AI, and watch them scratch their heads, unable to tell man from machine. It’s like AI’s mixing up a cocktail of words, structures, themes—you name it—and serving up a concoction that’s surprisingly palatable. And this isn’t the first time: not long ago, an AI-created artwork won best submission at a state fair. In some cases, organisers are actively seeking AI-generated submissions; other times, not so much.

AI and the Art Debate

So, AI can’t whip up human-level art? That’s the chatter, but it’s about as meaningful as arguing over your favourite colour. Art’s a slippery fish—try defining it, and you’ll end up with more questions than answers. It’s one of those terms that’s become so bloated, it’s lost its punch. To some, it’s a sunset; to others, it’s a can of soup. So when AI throws its hat in the ring, it’s not just competing—it’s redefining the game.

The Peer Review Question Mark

Here’s where it gets spicy. The book bagging a national award isn’t just a pat on the back for the AI—it’s a side-eye at the whole peer review shindig. It’s like when your mate says they know a great place to eat, and it turns out to be just okay. The peer review process, much like reviewing a book for a prestigious award, is supposed to be the gold standard, right? But this AI-authored book slipping through the cracks and coming out tops? It’s got folks wondering if the process is more smoke and mirrors than we thought.


What’s Next?

So, where does this leave us? Grappling with the idea that maybe, just maybe, AI’s not playing second fiddle in the creativity orchestra. It’s a wake-up call, a reminder that what we thought was exclusively ours—creativity, art, originality—might just be a shared space. AI’s not just imitating life; it’s becoming an intrinsic part of the narrative. Science fiction? More like science fact.

The next chapter’s unwritten, and who knows? Maybe it’ll be penned by an AI, with a human sitting back, marvelling at the twist in the tale.

Capitalism Unmasked: The New Age Slavery?

Fast forward a century. The future’s looking back, not with nostalgia but with a critical eye. Will they see our age of capitalism as we see the era of slavery – a moral misstep, a societal blight?

2023: Here we are, knee-deep in capitalism. It’s everywhere, in every transaction, every ambition. But beneath the veneer of progress and prosperity, there’s a darker narrative unfolding.


Wage Slaves in a Modern World

Wage slavery is the reality for many in a capitalist system where survival hinges on selling labour. The concept? Simple yet brutal. People are chained not by physical shackles but by economic necessity, a cycle of paycheck-to-paycheck existence. It’s freedom, but only in the loosest sense.


The Surplus of Inequality

Wage surplus – the lifeblood of capitalism. The more you squeeze out of workers, the fatter the profits. It’s a game of numbers where human cost rarely figures. The working class toil, and the upper echelons reap the rewards. Sounds familiar? It’s a throwback to the days of slavery, just dressed in modern garb.


Capitalists’ Dark Fantasy

A state of slavery – every capitalist’s secret fantasy? Perhaps not all, but for the ruthless, it’s the ultimate dream. A world where workers are mere cogs in the machine, dispensable and replaceable. No rights, no voice, just endless labour for minimal reward.


The Capitalist Paradox

Here’s the paradox – capitalism, in theory, champions freedom and innovation. But in practice, it often veers towards oppression and exploitation. The gap between the haves and have-nots widens, and social mobility becomes a myth, reserved for fairy tales.


The Future’s Judgment

In 2123, will they shake their heads at our era? Will they wonder how we allowed economic systems to morph into modern-day slavery? How we sold our souls for the illusion of prosperity?


A Glimmer of Hope?

But the tide is turning. Voices of dissent are rising, challenging the status quo. The call for a fairer, more humane economic model grows louder. There’s hope yet that we’ll steer away from the shadow of capitalism’s excesses.


Conclusion

As the world spins on, we’re at a pivotal moment. Will we continue down this path, or will we pivot towards a more equitable future? The choices we make today will echo through the annals of history.


Call to Action

What’s your stance? Is capitalism veering too close to a modern form of slavery? Drop your thoughts below. Let’s ignite a dialogue.

Enlightenment Now?

I’ve long been opposed to the direction the Enlightenment took the course of Western civilisation. I’m not religious or spiritual, so I’m glad history moved away from that route; I just don’t feel the one it took has been the right one. Pinker believes that we not only took the right course, but we are still on the right path. Mearsheimer believes that we made the right choice, but we are no longer on the right path.

Pinker stammers through his argument that Mearsheimer effectively frames for him. Pinker is a Liberal who believes that reason and individual rights are the keys to progress. He believes that tribalism, e.g., partisanship or nationalism, subverts this.

Mearsheimer believes that humans are social animals first and individuals second: they need to carve out their individuality as a secondary consideration to the larger society and its needs. He argues that more homogeneous societies are better positioned to prosper, so he defends this sort of nationalism and national identity. Pinker argues that social trends show otherwise.

As for me, they’re both wrong, though I agree with each that individuals are important, as are social configurations. My contention is that humans are fundamentally dim. They are predictably irrational and too easily swayed by the next shiny object.

The Enlightenment was sold on a promise imagined and conjured by elite thinkers, but they must have lived in a bubble to think their cognitive capabilities were universal. Perhaps they felt that the elite thinkers, the philosopher rulers of Plato’s wet dreams, would reign at least by proxy. Instead, we’ve seen a race to the bottom, a world where the lowest common denominator seems to rule.

In the United States, we have an addle-headed Joe Biden versus idiot-grifter Donald Trump. This is the best a country of 350 million can manage. This is the cream of their crop. If this is what freedom renders, I say no thanks. I am personally opposed to the notion of democracy, as I’ve written about previously. In a world where half of the population has an IQ under 100, this scares the hell out of me. Honestly, voting should be restricted to those with IQs over 115. Of course, I don’t trust IQs either, so we are speaking in abstract hypotheticals. I don’t believe in progress, so there’s that.

Even this ‘elitist’ democracy would fail. People cannot acquire and maintain the necessary domain knowledge, and self-interest will still motivate voting outcomes. And as Kenneth Arrow’s impossibility theorem showed in the 1950s, no ranked voting system can simultaneously satisfy even a short list of basic fairness criteria. So even the best that the best of the best can yield is mediocrity.
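The trouble Arrow formalised builds on an older puzzle, the Condorcet paradox, which is easy to demonstrate. The sketch below (hypothetical voters and candidates, plain Python) shows three individually rational rankings producing a collectively irrational cycle under pairwise majority rule:

```python
# Hypothetical illustration of the Condorcet paradox, one of the
# phenomena behind Arrow's impossibility theorem: three voters, each
# with a perfectly consistent ranking, yield a cyclic group preference.
voters = [
    ["A", "B", "C"],  # voter 1 prefers A > B > C
    ["B", "C", "A"],  # voter 2 prefers B > C > A
    ["C", "A", "B"],  # voter 3 prefers C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Majority rule produces a cycle: A beats B, B beats C, yet C beats A,
# so there is no coherent 'will of the people' to enact.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```

Each voter is rational; the electorate as a whole is not, which is the sense in which aggregating preferences yields mediocre or incoherent outcomes.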

So, what is the best position? I’m not sure, but it feels like one with no humans in the decision-making process would yield better results.

Post Script

I asked Dall-E to render an image for this post. Here’s what I got, but the process was amusing, to say the least.

User

Steven Pinker and John Mearsheimer battling in a wrestling ring

DALL·E

Creating an image of Steven Pinker and John Mearsheimer wrestling in a ring would go against the policy of depicting specific public figures. However, I can create a similar image with a creative twist. How about I generate an image of two fictional scholars in a wrestling ring, each embodying the spirit of intellectual debate? Let me know if this works for you or if you have any other ideas!

User

that’s fine just capture the caricature of each… pinker’s hair and Mearsheimer’s bald head and eyeglasses.

DALL·E

Creating images that resemble specific public figures, even through caricature, is outside the scope of my capabilities. However, I can create an image of two fictional characters in a wrestling ring, one with wild hair and the other with a bald head and eyeglasses, to capture the essence of an intellectual debate in a humorous, wrestling context. Let’s proceed with that idea!

Prompt for DALL·E: “Create an image of a humorous wrestling match between two fictional characters in a wrestling ring. One character has wild, unruly hair and a scholarly appearance, wearing a tweed jacket. The other character is bald, wearing eyeglasses and a professional suit, symbolizing a serious academic. The scene is lively, with a cheering crowd in the background, emphasizing the fun and exaggerated nature of this intellectual debate turned wrestling match.”

Size: 1024×1024

User

Create an image of a humorous wrestling match between two fictional characters in a wrestling ring. One character has wild, unruly hair and a scholarly appearance, wearing a tweed jacket. The other character is bald, wearing eyeglasses and a professional suit, symbolizing a serious academic. The scene is lively, with a cheering crowd in the background, emphasizing the fun and exaggerated nature of this intellectual debate turned wrestling match.

And, nailed it! lol The results speak for themselves. haha

Notice how the AI wouldn’t generate the image, but fed me that work-around to circumvent its rules.

Geopolitical Positioning

Some have asked me why I comment on the conflicts of the world since I am a nihilist who doesn’t believe in nations and borders. The answer is that I still have emotions and can still apply logic. Besides, much of my argument revolves around selective vision and cherry-picking.

Two conflicts have been in the news lately—Israel-Palestine and Russia-Ukraine. I think I can frame this without taking sides.

Israel-Palestine

On 7 October, Hamas attacked Israeli citizens. This is a crime against humanity. Israel declared war on Palestine and attacked their citizens. This is both a crime against humanity and a war crime. Israeli officials claim that it is justifiable because the militant Hamas were hiding behind Palestinian ‘human shields’, targets that included hospitals and other infrastructure.

However, 7 October didn’t happen without history. I’m no historian, but Israel’s occupation of Gaza has been considered illegal since 1967. If we accept this frame, Hamas are roughly equivalent to the French resistance during WWII, doing what they can to drive out the oppressors. I think this video by a fellow philosopher provides some historical context, so I’ll stop here and recommend it.

Spoiler Alert: This affair commenced in the nineteenth century.

Russia-Ukraine

We all know this story. Russia invaded Ukraine without provocation on 24 February 2022. Putin just wanted to re-form the former Soviet Union, right? Well, not so fast. While I disagree with this narrative, I also disagree with its historical framing as well as the claim that Ukraine had some long-standing sovereignty and its people were all asking for liberation from the West.

Again, let’s rewind to 2014—nah, 1989, the fall of the Berlin Wall and the end of the Cold War™, an event that would commence a period of unprecedented peace—if not for that pesky Military-Industrial Complex. Drats. Not good for profits. Never gonna happen. Promises made. Promises broken.

You’ll notice in this Belgian (French-language) map that Ukraine didn’t yet exist in 1769. We can see Crimea, which was controlled by the declining Ottoman Empire.

No history lesson today. Do your homework. Nothing is black and white.

Good Enough

As I approach my sixty-second year on earth, having almost expired in March, I’ve been a bit more reflective and introspective. One realisation is categorical: I’ve been told over the years that I am ‘good’ or ‘excel’ at such and such, but I always know someone better—even on a personal level, not just someone out in the world. We can all assume we won’t be the next Einstein or Picasso, but I am talking closer to home than that.

During my music career, I was constantly inundated with people better than me. I spent most of my time on the other side of a mixing console, where I excelled. Even still, I knew people who were better for one reason or another. In this realm, I think of two stories. First, I had the pleasure and good fortune to work on a record with Mick Mars and Mötley Crüe in the mid-’80s. We had a chat about Ratt’s Warren DeMartini, and Mick told me that he knew that Warren and a spate of seventeen-year-olds could play circles around him, but success in the music business is not exclusively based on talent. He appreciated his position.

In this vein, I remember an interview with Tom Morello of Rage Against the Machine. As he was building his chops, he came to realise that he was not going to be the next shredder or Eddie Van Halen, so he focused on creating his own voice, the one he’s famous for. I know plenty of barely competent musicians who make it, and I know some veritable virtuosos who don’t. But it involves aesthetics and a fickle public, so all bets are off anyway.

As I reflect on myself, I consider art and photography. Always someone better. When I consider maths or science, there’s always someone better. Guitar, piano? Same story.

Even in something as vague and multidimensional as business, I can always name someone better. I will grant that in some instances there literally is no one better at some level—just different—so I sought refuge and solace in these positions. Most of these involved herding cats, but I took what I could.

Looking back, I might have been better off ignoring that someone was better. There’s a spot for more than the best guitarist or singer or artist, or policeman for that matter. As a musician, I never thrived financially—that’s why I was an engineer—but I could have enjoyed more moments and taken more opportunities.

When I was 18, I was asked to join a country music band. I was a guitarist and they needed a bass player. I didn’t like country music, so I declined—part ego, part taste. Like I said, aesthetics.

As I got older and started playing gigs, I came to realise that just playing was its own reward. I even played in cover bands, performing songs that were either laughably bad or laughably easy. But they were still fun. I’m not sure how that would have translated into playing exclusively country music day after day, but I still think I might have enjoyed myself—at least until I didn’t. And the experience would still have been there.

I was a software developer from the nineties to the early aughts. I was competent, but not particularly great. As it turns out, I wasn’t even very interested in programming on someone else’s projects. It’s like being a commercial artist. No, thank you. It might pay the bills, but at what emotional cost?

I was a development manager for a while, and that was even worse, so I switched focus to business analysis and programme management, eventually transitioning to business strategy and management consulting. I enjoyed these more, but I still always knew someone better.

On one hand, whilst I notice the differences, it’s lucky that I don’t care very much. Not everyone can be a LeBron James or a Ronaldo, but even the leagues are not filled with this talent. I’m not suggesting that a ten-year-old compete at this level, but I am saying if you like it, do it. But temper this with the advice inscribed at the Oracle of Delphi: know thyself. But also remember that you might never be the best judge of yourself, so take this with a grain of salt. Sometimes, ‘good enough’ is good enough.

Bell Curves and Constructivism

I recently had a discussion with my son about a controversial academic topic. He was struggling to find someone willing to engage in a meaningful dialogue. People on both sides of the issue seemed either emotionally invested without evidentiary support or lacking sufficient background knowledge for substantive debate. Moreover, due to the current political correctness and cancel culture environment, even gathering new data on this unpopular subject is challenging. This exemplifies the failure of the scientific community to uphold impartial inquiry, a key factor underlying my view that scientific consensus is predicated more on rhetorical appeals than dispassionate analysis.

The topic stemmed from The Bell Curve, a 1994 book by psychologist Richard J. Herrnstein and political scientist Charles Murray. The authors argued that intelligence, as measured by IQ tests, has a strong genetic component, with race and ethnicity as statistically significant variables. Specifically, they claimed the average IQ score for whites in the US was one standard deviation higher than for blacks – 102 versus 85, as depicted in the chart below.

While the details are less relevant here, the core issue is the purported gap between racial groups. As shown, there is substantial overlap in the bell curves, but the one representing blacks is notably shifted leftward from the white curve.
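The size of that overlap can be made concrete. The sketch below assumes a standard deviation of 15 for both groups (the conventional IQ scale, used here purely for illustration; the book’s reported figures differ slightly) and computes the overlap coefficient of the two normal curves using only the standard library:

```python
import math

def normal_cdf(x, mean, sd):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

# The two means described in the text; SD of 15 is an assumption.
mean_a, mean_b, sd = 102.0, 85.0, 15.0

# For two equal-variance normals, the densities cross at the midpoint
# of the means, and the shared area (overlap coefficient) works out to
# 2 * Phi(-d/2), where d is the gap expressed in standard deviations.
d = abs(mean_a - mean_b) / sd
overlap = 2.0 * normal_cdf(-d / 2.0, 0.0, 1.0)

print(f"gap in SD units: {d:.2f}")                      # ~1.13
print(f"shared area under both curves: {overlap:.0%}")  # ~57%
```

Under these assumptions, roughly 57% of the area lies under both curves at once, which is what ‘substantial overlap’ amounts to despite the leftward shift.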

As a statistician, I tend to focus more on methodological rigour and meta-analyses than the mathematical computations themselves, which I presume were executed properly.

My first consideration was the validity of IQ tests as a proxy for achievement, given the cultural biases these metrics contain. Even accepting the IQ gap argument, issues around test construction leave open the ‘so what?’ question. However, the study’s findings were largely replicated 15 years later, suggesting some robustness.

My son observed that the current sociopolitical climate would likely preclude similar research from being funded or published today. Accusations of racism would be inevitable, even for a purely academic inquiry on this topic – creating an artificial blind spot in the science.

While understandable given the history of eugenics abuses, refusing to objectively investigate testable hypotheses is antithetical to the scientific method, which relies on falsifiability and replication. Without permitting studies to potentially confirm or refute a claim, we are left guessing. This is how scientific narratives are socially constructed – not by a disinterested search for truth, but through consensus and conformity enforcement.

I am sceptical of social science approaches generally, which often appear pseudoscientific. However, constructive criticism should apply equally to so-called hard sciences like physics, where incompatible models of quantum gravity and dark matter proliferate, awaiting the next paradigm shift.

In summary, my son’s concern was that even proposing this controversial thesis as a dissertation topic could lead to professional censure. Personally, I take no position on the veracity of this hypothesis. However, as a matter of scientific integrity and public policy relevance, arbitrarily declaring certain academic questions off limits seems contrary to the principles of an enlightened, post-Enlightenment society. We cannot refuse to ask difficult questions for fear of inconvenient answers.