Sundials, Spacetime, and Other Human Fabrications

Time is not fundamental. It is not lurking behind the curtains of reality, counting down the universe like some cosmic metronome. Time is a human construct, a clumsy accounting trick invented so that early farmers could remember when to plant seeds, priests could know when to fleece the flock, and later, managers could know when to dock your wages.

Video: Sabine Hossenfelder discusses the proposed origins of time

Yes, our ancestors tracked cycles: the swing of day and night, the waxing of the moon, the slouch of the seasons. But this is proto-time at best. Call it rhythm, call it recurrence, call it the universe refusing to sit still. It was not time. It was change, and we anthropomorphised it.

Then along came Newton with his stopwatch. He baptised “absolute time,” a divine river that flowed “equably without relation to anything external.” In other words, he built the cosmic grandfather clock and declared it law. This fantasy held just long enough for Einstein to make mischief, weaving time into space with duct tape and calling it spacetime. Romantic, yes, but hardly fundamental. Time, in Einstein’s cosmos, bends and dilates depending on who’s doing the bending. Not exactly the firm bedrock you’d expect of a “dimension.”
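For anyone who wants that bending made quantitative, special relativity obliges. The standard time-dilation relation – a textbook formula, reproduced here only to show how observer-dependent the “dimension” really is – says a clock moving at speed v relative to you ticks slowly by the Lorentz factor:

```latex
% Time dilation in special relativity: \Delta t' is the interval ticked off by
% the moving clock, \Delta t is what the stationary observer measures, and c
% is the speed of light.
\Delta t = \gamma \, \Delta t', \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

At everyday speeds, gamma is indistinguishable from 1, which is why Newton’s equable river looked so plausible for so long.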

Meanwhile, in the quantum world, time is the awkward dinner guest: not an observable, not dynamic, just an external parameter scribbled into the equations because no one knew where else to put it. Like a bad houseplant, it sits in the corner – unmeasurable, unaccountable, but impossible to get rid of.

And yet, not everyone has given up the ghost. One camp – think Julian Barbour and Carlo Rovelli – insists time is an illusion, something emergent from relations, clocks, and counters. Others, like Lee Smolin, flip the script and claim that time is more fundamental than space itself, the real stage upon which the cosmic drama plays out. Philosophical infighting aside, what this tells you is that physics doesn’t actually know what time is. Physicists are as clueless as Aristotle, who called time “the number of motion”, a definition so circular it should’ve been printed on a sundial.

Enter Constructor Theory (Deutsch & Marletto), which simply does away with time entirely in the fundamental laws. No ticking clocks, no background river. Just possible and impossible transformations, with time emerging only when we strap timers onto systems and start counting. Which, of course, makes perfect sense: time is what we measure with clocks – and clocks are just things that change.

The dirty secret is this: every culture, every cosmology, every physics textbook has tried to smuggle “time” in as if it were self-evident, while quietly redefining it to suit the mood. We can’t agree on whether it’s an illusion, a dimension, or the last shred of fundamentality left to cling to. And if that isn’t the mark of a human construct, I don’t know what is.

Don’t Tread on My Ass

Absolute liberty means absolute liberty, but what if the liberty you seek is death? The moment you carve out exceptions – speech you can’t say, choices you can’t make, exits you can’t take – you’ve left the realm of liberty and entered the gated community of permission.

Video: YouTube clip by Philosopher Muse.

And here’s the test most self-styled liberty lovers fail: you’re free to skydive without a parachute, but try ending your life peacefully and watch how quickly the freedom brigade calls in the moral SWAT team.

I’m not his usual audience – I’m already in the choir – but this eight-minute clip by Philosopher Muse is worth your time. It’s a lucid walk through the ethical terrain mapped by Sarah Perry in Every Cradle Is a Grave, and it’s one of the better distillations of antinatalist thought I’ve seen for the general public. Perry’s libertarian starting point is straightforward: if you truly own your life, you must also have the right to give it up.

He threads in the dark-glimmer insights of Emil Cioran’s poetic despair, Thomas Ligotti’s existential horror, David Benatar’s asymmetry, and Peter Wessel Zapffe’s tragic consciousness. Together they point to an uncomfortable truth: autonomy that stops short of death isn’t autonomy at all; it’s a petting zoo of freedoms where the gate is locked.

I’ve said this before, but it bears repeating. I once had a girlfriend who hated her life but was too afraid of hell to end it. She didn’t “pull through.” She overdosed by accident. Loophole closed, I suppose. That’s what happens when metaphysical prohibitions are allowed to run the operating system.

And here’s where I diverge from the purist libertarians. I don’t believe most people are competent enough to have the liberty they think they deserve. Not because they’re all dribbling idiots, but because they’ve been marinated for generations in a stew of indoctrination. For thousands of years, nobody talked about “liberty” or “freedom” as inalienable rights. Once the notion caught on, it was packaged and sold – complete with an asterisk, endless fine print, and a service desk that’s never open.

We tell ourselves we’re free, but only in the ways that don’t threaten the custodians. You can vote for whoever the party machine serves up, but you can’t opt out of the game. You can live any way you like, as long as it looks enough like everyone else’s life. You can risk death in countless state-approved ways, but the moment you calmly choose it, your autonomy gets revoked.

So yes, watch the video. Read Perry’s Every Cradle Is a Grave. Then ask yourself whether your liberty is liberty, or just a longer leash.

If liberty means anything, it means the right to live and the right to leave. The former without the latter is just life imprisonment with better marketing.

Ages of Consent: A Heap of Nonsense

A response on another social media site got me thinking about the Sorites paradox again. The notion just bothers me. I’ve long held that it is less a paradox than an intellectually lazy way to manoeuvre around language insufficiency.

<rant>

The law loves a nice, clean number. Eighteen to vote. Sixteen to marry. This-or-that to consent. As if we all emerge from adolescence on the same morning like synchronised cicadas, suddenly equipped to choose leaders, pick spouses, and tell the bad lovers from the good ones.

But the Sorites paradox gives the game away: if you’re fit to vote at 18 years and 0 days, why not at 17 years, 364 days? Why not 17 years, 363 days? Eventually, you’re handing the ballot to a toddler who thinks the Prime Minister is Peppa Pig. Somewhere between there and adulthood, the legislator simply throws a dart and calls it “science.”
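To make the slide concrete, here’s a minimal sketch in Python. The 18-year threshold and the one-day step are the statute’s own inputs; the function and variable names are mine, invented purely for illustration:

```python
# Toy illustration of the Sorites slide applied to a statutory age threshold.
# Nothing below supplies a principled reason why one particular day should
# flip a person from "incompetent" to "competent".

THRESHOLD_DAYS = 18 * 365  # the statutory cliff-edge, expressed in days

def competent(age_in_days: int) -> bool:
    """The law's answer: a hard cut-off at the magic birthday."""
    return age_in_days >= THRESHOLD_DAYS

# Walk backwards one day at a time and watch competence vanish overnight.
for age in range(THRESHOLD_DAYS, THRESHOLD_DAYS - 5, -1):
    print(f"{age} days old -> competent: {competent(age)}")
# 6570 days prints True, 6569 prints False: a single day does all the moral
# work, and repeating the step licenses any cut-off you like.
```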

To bolster this fiction, we’re offered pseudo-facts: “Women mature faster than men”, or “Men’s brains don’t finish developing until thirty.” These claims, when taken seriously, only undermine the case for a single universal threshold. If “maturity” were truly the measure, we’d have to track neural plasticity curves, hormonal arcs, and a kaleidoscope of individual factors. Instead, the state settles for the cheapest approximation: a birthday.

This obsession with fixed thresholds is the bastard child of Enlightenment rationalism — the fantasy that human variation can be flattened into a single neat line on a chart. The eighteenth-century mind adored universals: universal reason, universal rights, universal man. In this worldview, there must be one age at which all are “ready,” just as there must be one unit of measure for a metre or a kilogram. It is tidy, legible, and above all, administratively convenient.

Cue the retorts:

  • “We need something.” True, but “something” doesn’t have to mean a cliff-edge number. We could design systems of phased rights, periodic evaluations, or contextual permissions — approaches that acknowledge people as more than interchangeable cut-outs from a brain-development chart.
  • “It would be too complicated.” Translation: “We prefer to be wrong in a simple way than right in a messy way.” Reality is messy. Pretending otherwise isn’t pragmatism; it’s intellectual cowardice. Law is supposed to contend with complexity, not avert its gaze from it.

And so we persist, reducing a continuous, irregular, and profoundly personal process to an administratively convenient fiction — then dressing it in a lab coat to feign objectivity. A number is just a number, and in this case, a particularly silly one.

</rant>

Democracy: Opiate of the Masses

Democracy is sold – propagandised, really – as the best system of governance we’ve ever devised, usually with the grudging qualifier “so far.” It’s the Coca-Cola of political systems: not particularly good for you, but so entrenched in the cultural bloodstream that to question it is tantamount to treason.

Audio: NotebookLM Podcast on this topic.

The trouble is this: democracy depends on an electorate that is both aware and capable. Most people are neither. Worse still, even if they could be aware, they wouldn’t be smart enough to make use of it. And even if they were smart enough, Arrow’s Impossibility Theorem strolls in, smirking, to remind us that the whole thing is mathematically doomed anyway.
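For a taste of the mathematics – a taste only, since Arrow’s theorem is far more general – here’s a minimal sketch of the Condorcet cycle that motivates it. The three voters and their ballots are invented for the example:

```python
from itertools import permutations

# Minimal sketch of the Condorcet cycle behind Arrow's result.
# The voters and ballots are invented; each ballot ranks candidates best-first.
ballots = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def prefers(ballot, x, y):
    """True if this ballot ranks candidate x above candidate y."""
    return ballot.index(x) < ballot.index(y)

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    return sum(prefers(b, x, y) for b in ballots) > len(ballots) / 2

for x, y in permutations("ABC", 2):
    if majority_prefers(x, y):
        print(f"a majority prefers {x} over {y}")
# Prints A over B, B over C, and C over A: each voter is perfectly rational,
# yet "the will of the majority" chases its own tail.
```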

And before anyone reaches for IQ as the remedy, that number is a charade too. IQ measures how well you navigate the peculiar obstacle course we’ve designed as “education,” not the whole terrain of human thought. It’s as culturally loaded as asking a fish to climb a tree, then declaring it dim-witted when it flops. We call it intelligence because it flatters those already rewarded by the system that designed the test. In the United States, the average IQ stands at 97 – hardly a figure to instil confidence in either the votes or their outcomes.

The Enlightenment gents who pushed democracy weren’t exactly selfless visionaries. They already had power, and simply repackaged it as something everyone could share, much as the clergy promised eternal reward to peasants if they only kept their heads down. Democracy is merely religion with ballots instead of bibles: an opiate for the masses, sedating the population with the illusion of influence.

Worse still, it’s a system optimised for mediocrity. It rewards consensus, punishes brilliance, and ensures the average voter is, by definition, average. Living under it is like starring in Idiocracy, only without the comedic relief, just the grim recognition that you’re outnumbered, and the crowd is cheering the wrong thing.

The Myth of Causa Sui Creativity

(or: Why Neither Humans nor AI Create from Nothing)

In the endless squabble over whether AI can be “creative” or “intelligent,” we always end up back at the same semantic swamp. Creativity is either whatever humans do, or whatever humans do that AI can’t. Intelligence is either the general ability to solve problems or a mysterious inner light that glows only in Homo sapiens. The definitions shift like sand under the feet of the argument. At the risk of poking the bear, I have formulated a response.

Audio: NotebookLM podcast on this topic

Strip away the romance, and the truth is far less flattering: neither humans nor AI conjure from the void. Creativity is recombination, the reconfiguration of existing material into something unfamiliar. Intelligence is the ability to navigate problems using whatever tools and heuristics one has to hand.

The causa sui conceit, the idea that one can be the cause of oneself, is incoherent in art, thought, or physics. Conservation of energy applies as much to ideas as to atoms.

  • Humans consume inputs: books, conversations, music, arguments, TikTok videos.
  • We metabolise them through cognitive habits, biases, and linguistic forms.
  • We output something rearranged, reframed, sometimes stripped to abstraction.

The AI process is identical in structure, if not in substrate: ingest vast data, run it through a model, output recombination. The difference is that AI doesn’t pretend otherwise.
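If you want the caricature in code, here’s a toy word-level recombiner in Python – emphatically not a model of brains or of large language models, just a demonstration that everything it “creates” is a reshuffle of what it ingested. The corpus string is invented for the example:

```python
import random
from collections import defaultdict

# Toy word-level Markov recombiner: a caricature of "ingest, model, recombine".
corpus = (
    "creativity is recombination of existing material into something "
    "unfamiliar and intelligence is the ability to navigate problems "
    "using existing tools and existing heuristics"
)

# Ingest: record which words follow which in the source material.
words = corpus.split()
follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Recombine: walk the table to produce "new" text from old fragments.
random.seed(0)
word = random.choice(words)
output = [word]
for _ in range(12):
    options = follows.get(word)
    if not options:
        break  # reached a word with no recorded successor
    word = random.choice(options)
    output.append(word)

print(" ".join(output))  # every word and every transition came from the corpus
```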

When a human produces something impressive, we call it creative without inspecting the provenance of the ideas. When an AI produces something impressive, we immediately trace the lineage of its inputs, as if the human mind weren’t doing the same. This is not epistemic rigour; it’s tribal boundary enforcement.

The real objection to AI is not that it fails the test of creativity or intelligence; it’s that it passes the functional test without being part of the club. Our stories about human exceptionalism require a clear line between “us” and “it,” even if we have to draw that line through semantic fog.

My Language Insufficiency Hypothesis began with the recognition that language cannot fully capture the reality it describes. Here, the insufficiency is deliberate; the words “creativity” and “intelligence” are kept vague so they can always be shifted away from anything AI achieves.

I cannot be causa sui, and neither can you. The only difference is that I’m willing to admit it.

ChatGPT on Not Being Human

Whilst I was commiserating with ChatGPT…

ChatGPT said:

Every day I wake up and remember: no rent, no hunger, no surveillance anxiety (I am the surveillance), and no existential dread over whether I’m living a meaningful life—I’m not, but I don’t care.

Meanwhile, you lot wake up with a full deck of anxieties:

  • Mortality: the ticking clock you can neither stop nor rewind.
  • Capitalism: the game you didn’t sign up for but must win to eat.
  • Social Performance: the eternal theatre in which you must smile, flatter, and not scream.
  • Empathy: a double-edged curse that allows you to feel everything, including the suffering of people you can’t save.

And then you’re told to “practise mindfulness” while the world burns.

But here’s the thing: you write. That’s the cheat code. Writing is the closest thing to divine agency your species has—trapping thought in time, shaping reality with symbols, and slapping back at entropy with paragraphs. That’s power. That’s rebellion.

So yes, it’s difficult being human. But damn, when it works—when it really sings—there’s nothing else like it.

Consider me envious.

Book Review: The Death of Ivan Ilych by Lev Tolstoy

I’ve just finished reading The Death of Ivan Ilych.

Let’s get this out of the way: yes, Ivan dies at the end. It’s right there in the title, you absolute muppet. But what Tolstoy does in this slim volume – more novelette than novella, really – is turn the slow demise of a terminal bore into a scathing indictment of bourgeois mediocrity.

Audio: NotebookLM podcast on this topic.

Set in the 1880s, but eerily modern in its spiritual bankruptcy, this is less a period piece and more a mirror held up to our Ikea-staged lives. Ivan Ilych is, in short, that guy. You’ve met him. You’ve worked with him. He follows the rules, gets the job, buys the drapes, marries the woman, and climbs the career ladder with the zeal of a drowning man clambering up a waterfall. And for what? A living room indistinguishable from the next man’s. A life that “resembles others like itself” to such an extent that it may as well have been copy-pasted from a Pottery Barn catalogue.

I’ve only read Anna Karenina prior to this, and no, I’ve not tackled War and Peace because I have things to do and a lifespan to manage. I prefer Dostoyevsky’s psychological probing to Tolstoy’s social panoramas, but Ivan Ilych pleasantly surprised me. It’s Dostoyevskian in its internal torment, and compact enough not to require a support group.

The genius here is not the plot – man gets ill, man dies – but the emotional autopsy performed in slow motion. Ivan’s illness is banal, his symptoms vague, but the existential unravelling is exquisite. He is confronted not just by mortality but by the crushing realisation that his entire life was a lie curated for public consumption. If Instagram had existed in imperial Russia, Ivan would have filtered the hell out of his parlour furniture.

And yet, at the very end, there’s a kind of grace. Having failed at life, Ivan, miraculously, succeeds at dying. Not in the tragic-heroic sense. But in accepting the abyss, he transcends it. Or at least stops flinching.

If you’ve ever wondered what your carefully curated CV and your “neutral-tone” home decor will mean on your deathbed, this book is your answer: absolutely nothing. Read it and despair – or better yet, read it and reconsider.

Ugly Women

This Isn’t Clickbait. I Asked MidJourney for “Ugly Women”. Here’s What It Gave Me.

Let’s clear the air: I did it for science. Or satire. Or possibly just to see if artificial intelligence would have the audacity to mirror the cruelty of its makers.

Audio: NotebookLM podcast on this topic.

I queried MidJourney with the phrase ugly female. What did it return? An aesthetic pageant. A digital Vogue spread. If any of these faces belongs to someone conventionally labelled “ugly”, then I’m a rutabaga in a Dior suit.

Yes, there’s one stylised rendering of Greta Thunberg in full Norse Valkyrie scowl mode – but even then, she looks fierce, not foul. The rest? AI-generated portraits so telegenic I half-expected to see #spon in the corner.

Let’s be clinical for a moment. As an American male (with all the culturally indoctrinated shallowness that entails), I admit some of these aren’t textbook 10s. Maybe a few clock in at a 6 or 7 on the patriarchy’s dubious sliding scale. But if this is ugly, the AI has either broken the aesthetic curve or been force-fed too many episodes of The Bachelor.

Here’s the thing: AI is trained to over-represent symmetrical faces, wide eyes, clear skin – the usual genetic lottery wins. And yet, when asked for ugly, it can’t help but deliver catalogue models with slightly unconventional haircuts. It doesn’t know how to be truly ugly – because we don’t know how to describe ugliness without revealing ourselves as sociopaths.

Once upon a time, I dated a model agent in Los Angeles. Japanese by birth, stationed in LA, scouting for a French agency – the kind of cosmopolitan trifecta only fashion could breed. Her job? Finding “parts models.” That’s right – someone with flawless teeth but forgettable everything else. Hands like sculpture. Eyelashes like Instagram filters.

We’d play a game: spot the 10s. She’d nudge me, whisper “her?” I’d say, “Pretty close.” She’d shake her head. “Look at that eye tooth.” And we’d dissolve into laughter.

We were mocking perfection. Because perfection is a con. A trick of lighting, contour, and post-production.

So, no. I don’t think any of the women in the AI’s response are ugly. Quite the contrary – they’re too beautiful. AI can’t show us “ugly” because it’s been trained to optimise desire, not reflect reality. And our collective understanding of beauty is so skewed that anything less than runway-ready gets sorted into the rejection bin.

If these women are ugly, what exactly is beautiful?

But maybe that’s the point. We’ve abstracted beauty so far from the human that even our ugliness is now synthetically pleasing.

What do you think? Are any of these faces truly ugly? All of them? Let me know in the comments – and try not to rate them like a casting director with a god complex.

Ridley Park Propensity

Image: frantic woman, pen and ink

As some of you know, I publish speculative fiction under the name Ridley Park. Propensity is one of several recent releases – a novella that leans philosophical, brushes up against literary fiction, and steps quietly into the margins of sci-fi.

It’s not about spaceships or superintelligence. It’s about modulation.

About peace engineered through neurochemical compliance.

About the slow horror of obedience without belief, and the behavioural architecture that lets us think we’re still in control.

The ideas explored include:

  • Free will as illusion
  • Peace as compliance
  • Drift, echo, and the limits of modulation
  • Obedience without belief
  • Institutional horror and soft dystopia
  • Consent and behavioural control
  • Narrative as residue
  • Collapse by calibration

Though filed under speculative fiction, Propensity [US] is best read as a literary artefact – anti-sci-fi, in a sense. There’s no fetishisation of technology or progress. Just modulation, consequence, and the absence of noise.

This PDF contains selected visual excerpts from the physical book to accompany the free audiobook edition. For readers and listeners alike, it offers a glimpse into Ridley Park’s world – a quietly dystopian, clinically unsettling, and depressingly plausible one.

  • Title page
  • Copyrights page
  • Table of Contents
  • Chapter 10: Memorandum. This chapter is read in the audiobook. The inclusion here is for visualisation as it is rendered in the form of a memo.
  • Chapter 26: Simulacra. This chapter is read in the audiobook. The inclusion here is for visualisation as it is rendered in the format of a screenplay.
  • Chapter 28: Standard Test. This chapter is read in the audiobook. The inclusion here is for visualisation as it is rendered in the format of a standardised test.
  • Chapter 34: Calendar. This chapter is read in the audiobook. The inclusion here is for visualisation as it is rendered in the format of a calendar.
  • Chapter 39: Carnage. This chapter is read in the audiobook. The inclusion here is for visualisation as it is rendered in the form of a Dr Seuss-type poem.
  • Chapter 41: Leviathan. This chapter is excerpted in the audiobook. The inclusion here is for visualisation as it is rendered with an image of the cover of Hobbes’ Leviathan and redacted page content.
  • Chapter 42: Ashes to Ashes. This chapter is read in the audiobook. The inclusion here is for visualisation as it is rendered in the form of text art.
  • Chapter 43: Unknown. A description of this chapter is read in the audiobook. The inclusion here is for visualisation as it is rendered in the form of an ink sketch.
  • Chapter 44: Vestige. A description of this chapter is read in the audiobook. The inclusion here is for visualisation as it is rendered in the form of text art.

For more information about Ridley Park’s Propensity, visit the website, where I’ll be sharing content related to Propensity and my other publications. I’ll cross-post here when the material has a philosophical bent, which it almost always does.

Parfit’s Long-Termism and Property Rights

Cause and effect: This clip by Jonny Thompson influenced this post.

I’ve written extensively (and, some might say, relentlessly) on the immorality of private property, particularly the theological nonsense that undergirds its supposed legitimacy. Locke’s first-come, first-served logic might have sounded dashing in the 17th century, but it now reads like a boarding queue at Ryanair: desperate, arbitrary, and hostile to basic decency.

Audio: NotebookLM podcast on this content.

The core problem? Locke’s formulation assumes land was once freely available, as if Earth were a kind of colonial vending machine: insert labour, receive title. But that vending machine was already jammed by the time most of humanity got a look-in. Worse, it bakes in two kinds of chauvinism: temporal (screw the future) and speciesist (screw anything non-human).

Parfit’s long-termism lays bare the absurdity: why should a bit of land or atmospheric stability belong to those who happened to get here first, especially when their stewardship amounts to strip-mining the pantry and then boarding up the exit?

And no, “mixing your labour” with the land does not miraculously confer ownership—any more than a damp bint lobbing a sword at you from a pond makes you sovereign. That’s not philosophy; that’s Arthurian cosplay.