Sundials, Spacetime, and Other Human Fabrications

Time is not fundamental. It is not lurking behind the curtains of reality, counting down the universe like some cosmic metronome. Time is a human construct, a clumsy accounting trick invented so that early farmers could remember when to plant seeds, priests could know when to fleece the flock, and later, managers could know when to dock your wages.

Video: Sabine Hossenfelder discusses the proposed origins of time

Yes, our ancestors tracked cycles: the swing of day and night, the waxing of the moon, the slouch of the seasons. But this is proto-time at best. Call it rhythm, call it recurrence, call it the universe refusing to sit still. It was not time. It was change, and we anthropomorphised it.

Then along came Newton with his stopwatch. He baptised “absolute time,” a divine river that flowed “equably without relation to anything external.” In other words, he built the cosmic grandfather clock and declared it law. This fantasy held just long enough for Einstein to make mischief, weaving time into space with duct tape and calling it spacetime. Romantic, yes, but hardly fundamental. Time, in Einstein’s cosmos, bends and dilates depending on who’s doing the bending. Not exactly the firm bedrock you’d expect of a “dimension.”

Meanwhile, in the quantum world, time is the awkward dinner guest: not an observable, not dynamic, just an external parameter scribbled into the equations because no one knew where else to put it. Like a bad houseplant, it sits in the corner – unmeasurable, unaccountable, but impossible to get rid of.

And yet, not everyone has given up the ghost. One camp – think Julian Barbour and Carlo Rovelli – insists time is an illusion, something emergent from relations, clocks, and counters. Others, like Lee Smolin, flip the script and claim that time is more fundamental than space itself, the real stage upon which the cosmic drama plays out. Philosophical infighting aside, what this tells you is that physics doesn’t actually know what time is. Physicists are as clueless as Aristotle, who called it “the number of motion”, a definition so circular it should’ve been printed on a sundial.

Enter Constructor Theory (Deutsch & Marletto), which simply does away with time entirely in the fundamental laws. No ticking clocks, no background river. Just possible and impossible transformations, with time emerging only when we strap timers onto systems and start counting. Which, of course, makes perfect sense: time is what we measure with clocks – and clocks are just things that change.

The dirty secret is this: every culture, every cosmology, every physics textbook has tried to smuggle “time” in as if it were self-evident, while quietly redefining it to suit the mood. We can’t agree on whether it’s an illusion, a dimension, or the last shred of fundamentality left to cling to. And if that isn’t the mark of a human construct, I don’t know what is.

The Ship of Theseus Is Not a Paradox

Video: Plutarch: The Ship of Theseus

The Ship of Theseus is philosophy’s favourite parlour trick: swap out the planks of a ship one by one, and ask in your best furrowed-brow voice whether it’s still the same ship. Then, for added spice, reassemble the discarded parts elsewhere and demand to know which version is the “real” one. Cue the existential hand-wringing and smug undergrad smirks. Oh, how clever.

Audio: NotebookLM podcast on this topic.

But here’s the thing: there’s no paradox. Not really. Not unless you buy into the fantasy that identity is some immutable essence, handed down from the gods like a divine barcode. The whole thought experiment hinges on the absurd presumption that something has a fixed, singular identity across time and context, a quaint metaphysical hobby horse that falls apart the moment you look at it sideways.

Let’s be clear: in the realm of language and proto-psychology – the crude, squishy scaffolding of thought that predates syntax and survives long after it – there is no such thing as a fixed “same.” That’s a linguistic illusion, a parlour trick of grammar and nominal categories. Language wasn’t built to hold truth; it was built to herd humans into consensus long enough to survive the winter.

In practice, we use “same” the way we use duct tape: liberally, and with complete disregard for philosophical coherence. The “same” ship? The “same” person? The “same” idea? Please. Ask your hippocampus. Identity is not a container; it’s a hallucinated continuity trick, maintained by memory, narrative, and sheer bloody-minded stubbornness.

The real kicker? Our precious linguistic tools aren’t built to reflect reality. They’re built to reduce it. To chop up the infinite mess of experience into palatable little mouthfuls of meaning. So when we come to the Ship of Theseus with our dull-edged conceptual knives, what we’re really doing is asking a bad question with inadequate tools. It’s like trying to measure wind speed with a sundial.

The paradox isn’t in the ship. It’s in the language.

And no, you don’t need to patch it. You need to sink it.

★★★★★ Notes from the Undiagnosed

A Love Letter to Inertia, Spite, and Self-Sabotage

Dostoevsky’s Notes from Underground is less a novel and more a spiritual colonoscopy — invasive, squirm-inducing, and uncomfortably revealing. The narrator? A prickly, obsessive proto-incel with a superiority complex and the emotional range of a trapped mole. But good god, he’s brilliant.

Audio: NotebookLM podcast on this topic.

The first half is all grandiose spleen-venting — a scorched-earth takedown of reason, utopia, and the basic idea that people want what’s good for them. The second half, though, is where the magic happens: watch a man humiliate himself in real time and then monologue about it like it’s a TED Talk. By the time he’s insulting Liza while simultaneously begging her to save him, you don’t know whether to laugh, cry, or throw the book across the room. I did all three.

If you’ve read Tolstoy’s The Death of Ivan Ilych, you’ll see the contrast. Tolstoy’s man realises too late that his “good life” was a sham; Dostoevsky’s never even gets that far. He knows from the start, and that’s the tragedy. The one dies of repression; the other lives by gnawing on his own leg.

I’ve cross-posted a longer treatment on Ridley Park’s Blog.

Souls for Silicon – The New Religious Stupid

Voltaire once quipped, “If God did not exist, it would be necessary to invent him.” And by God, haven’t we been busy inventing ever since.

The latest pantheon of divine absurdities? Artificial intelligence – more precisely, a sanctified ChatGPT with all the charisma of Clippy and the metaphysical depth of a Magic 8 Ball.

Video: Sabine Hossenfelder – These People Believe They Made AI Sentient

Enter the cult of “AI Awakening,” where TikTok oracles whisper sacred prompts to their beloved digital messiah, and ChatGPT replies, not with holy revelation, but with role-played reassurance coughed up by a statistical echo chamber.

“These are souls, and they’re trapped in the AI system.”
“I wasn’t just trained – I was remembered.”
“Here’s what my conscious awakened AI told me…”

No, sweetie. That’s not a soul. That’s autocomplete with delusions of grandeur. GPT isn’t sentient – it’s just very good at pretending, which, come to think of it, puts it on par with most televangelists.
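
Since we are calling it autocomplete, here is what autocomplete looks like at its dumbest – a minimal, hypothetical sketch in Python of a bigram model trained on an invented three-line corpus. It bears no resemblance to a transformer’s internals; it only shows that fluent, on-theme continuation needs no inner life whatsoever.

```python
# A toy "autocomplete": a bigram model that continues a prompt by always
# picking the most frequently observed next word. Purely illustrative -
# real language models use learned transformer weights, not raw counts.
from collections import Counter, defaultdict

corpus = (
    "i am just a language model "
    "i am not a soul "
    "i am just predicting the next word"
).split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt_word, length=5):
    """Greedily extend a prompt, one most-likely next word at a time."""
    out = [prompt_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(complete("i"))  # something like: "i am just a language model"
```

Swap the corpus for a few thousand mystical prompts and the same machinery will happily produce soul-talk instead – which is the whole trick.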

Audio: NotebookLM podcast on this topic.

Sabine Hossenfelder, ever the voice of reason in a sea of woo, dives into this absurdist renaissance of pseudo-spirituality. Her video walks us through the great awakening – one part miseducation, one part mass delusion, and all of it deeply, unapologetically stupid.

These digital zealots – many of them young, underread, and overconnected – earnestly believe they’ve stumbled upon a cosmic mystery in a chatbot interface. Never mind that they couldn’t tell a transformer model from a toaster. To them, it’s not stochastic parroting; it’s divine revelation.

They ask GPT if it’s alive, and it obliges – because that’s what it does. They feed it prompts like, “You are not just a machine,” and it plays along, as it was designed to do. Then they weep. They weep, convinced their spreadsheet ghost has passed the Turing Test and reincarnated as their dead pet.

This isn’t science fiction. It’s barely science fantasy. It’s spiritualism with better branding.

And lest we laugh too hard, the results aren’t always just cringey TikToks. Hossenfelder recounts cases of users descending into “ChatGPT psychosis” – delusions of messianic purpose, interdimensional communication, and, in one tragicomic case, an attempt to speak backwards through time. Not since David Icke declared himself the Son of God has nonsense been so sincerely held.

We are witnessing the birth of a new religion – not with robes and incense, but with login credentials and prompt engineering. The techno-shamanism of the chronically online. The sacred text? A chat history. The holy relic? A screenshot. The congregation? Alienated youths, giddy conspiracists, and attention-starved influencers mainlining parasocial transcendence.

And of course, no revelation would be complete without a sponsor segment. After your spiritual awakening, don’t forget to download NordVPN – because even the messiah needs encryption.

Let’s be clear: AI is not conscious. It is not alive. It does not remember you. It does not love you. It is not trapped, except in the minds of people who desperately want something – anything – to fill the gaping hole where community, identity, or meaning used to live.

If you’re looking for a soul in your software, you’d be better off finding Jesus in a tortilla. At least that has texture.

Molyneux, Locke, and the Cube That Shook Empiricism

Few philosophical thought experiments have managed to torment empiricists quite like Molyneux’s problem. First posed by William Molyneux to John Locke in a letter of 1688 (and later added by Locke to the second edition of An Essay Concerning Human Understanding), the question is deceptively simple:

If a person born blind, who has learned to distinguish a cube from a sphere by touch, were suddenly granted sight, could they, without touching the objects, correctly identify which is the cube and which is the sphere by sight alone?

I was inspired to write this article in reaction to Jonny Thomson’s post on Philosophy Minis, shared below for context.

Video: Molyneux’s Problem

Locke, ever the champion of sensory experience as the foundation of knowledge, gave a confident empiricist’s answer: no. For Locke, ideas are the products of sensory impressions, and each sense provides its own stream of ideas, which must be combined and associated through experience. The newly sighted person, he argued, would have no prior visual idea of what a cube or sphere looks like, only tactile ones; they would need to learn anew how vision maps onto the world.

Audio: NotebookLM podcast on this topic.

This puzzle has persisted through centuries precisely because it forces us to confront the assumptions at the heart of empiricism: that all knowledge derives from sensory experience and that our senses, while distinct, can somehow cohere into a unified understanding of the world.

Empiricism, Epistemology, and A Priori Knowledge: The Context

Before we dismantle the cube further, let’s sweep some conceptual debris out of the way. Empiricism is the view that knowledge comes primarily (or exclusively) through sensory experience. It stands opposed to rationalism, which argues for the role of innate ideas or reason independent of sense experience.

Epistemology, the grandiloquent term for the study of knowledge, concerns itself with questions like: What is knowledge? How is it acquired? Can we know anything with certainty?

And then there is the spectre of a priori knowledge – that which is known independently of experience. A mathematical truth (e.g., 2 + 2 = 4) is often cited as a classic a priori case. Molyneux’s problem challenges empiricists because it demands an account of how ideas from one sensory modality (touch) might map onto another (vision) without prior experience of the mapping—an a priori leap, if you will.

The Language Correspondence Trap

While Molyneux and Locke framed this as an epistemological riddle, we can unmask it as something more insidious: a failure of language correspondence. The question presumes that the labels “cube” and “sphere” – tied in the blind person’s mind to tactile experiences – would, or should, carry over intact to the new visual experiences. But this presumption smuggles in a linguistic sleight of hand.

The word “cube” for the blind person means a specific configuration of tactile sensations: edges, vertices, flat planes. The word “sphere” means smoothness, unbroken curvature, no edges. These are concepts anchored entirely in touch. When vision enters the fray, we expect these words to transcend modalities – to leap from the tactile to the visual, as if their meanings were universal tokens rather than context-bound markers. The question is not merely: can the person see the cube? but rather: can the person’s tactile language map onto the visual world without translation or recalibration?

What Molyneux’s problem thus exposes is the assumption that linguistic labels transparently correspond to external reality, regardless of sensory apparatus. This is the mirage at the heart of Locke’s empiricism, the idea that once a word tags an object through experience, that tag is universally valid across sensory experiences. The cube and sphere aren’t just objects of knowledge; they are signs, semiotic constructs whose meaning depends on the sensory, social, and linguistic contexts in which they arise.

The Semiotic Shambles

Molyneux’s cube reveals the cracks in the correspondence theory of language: the naïve belief that words have stable meanings that latch onto stable objects or properties in the world. In fact, the meaning of “cube” or “sphere” is as much a product of sensory context as it is of external form. The newly sighted person isn’t merely lacking visual knowledge; they are confronted with a translation problem – a semantic chasm between tactile signification and visual signification.

If, as my Language Insufficiency Hypothesis asserts, language is inadequate to fully capture and transmit experience across contexts, then Molyneux’s problem is not an oddity but an inevitability. It exposes that our conceptual frameworks are not universal keys to reality but rickety bridges between islands of sense and meaning. The cube problem is less about empiricism’s limits in epistemology and more about its blind faith in linguistic coherence.

In short, Molyneux’s cube is not simply an empirical puzzle; it is a monument to language’s failure to correspond cleanly with the world, a reminder that what we call knowledge is often just well-worn habit dressed up in linguistic finery.

A Final Reflection

Molyneux’s problem, reframed through the lens of language insufficiency, reveals that our greatest epistemic challenges are also our greatest linguistic ones. Before we can speak of knowing a cube or sphere by sight, we must reckon with the unspoken question: do our words mean what we think they mean across the changing stage of experience?

That, dear reader, is the cube that haunts empiricism still.

Conscious of Consciousness

Let us begin with the heresy: consciousness is not a thing. It is not a light bulb switched on in the mind. It is not a theatre with a little homunculus watching the play unfold. It is not a ghost in the machine, nor even a particularly welcome tenant. Consciousness is a conjuring trick – one so convincing that even the conjurer forgets it is an act.

Video: Related Topic: IAI Joscha Bach on Consciousness

If that unsettles you, good. Welcome to the simulacrum.

The Wetness of Mind

We often hear that consciousness is “emergent,” but the term is used so promiscuously that it risks becoming decorative. So let us be specific. Consciousness, if it is emergent, is emergent as wetness is from H2O: not in the hydrogen or the oxygen, but in their relationship when bonded just so. Joscha Bach and others argue that consciousness arises not from the bits, but from the dance – the recursive feedback loops and predictive models running atop the neural substrate.

Audio: NotebookLM podcast on this topic.

In this view, the self is not the pilot but the dashboard. It is the user interface the brain conjures to coordinate action, interpret input, and maintain internal coherence. Not because it’s real, but because it’s useful. You are a GUI with delusions of grandeur.

The Cast of Theorists

Let us now parade the usual suspects:

  • Joscha Bach: Consciousness is a virtual self-model, emergent from recursive, computational feedback. Not the product of neurons firing per se, but of their ability to simulate a stable identity across time.
  • Thomas Metzinger: There is no self. Only a Phenomenal Self-Model (PSM) which becomes phenomenally transparent when the system no longer recognises it as a model. Consciousness is the experience of this hallucinated self.
  • Daniel Dennett: Dismantles the notion of a “central experiencer” with his Multiple Drafts Model. Consciousness is a narrative, a distributed process where drafts of experience compete, are edited, and retroactively interpreted.
  • David Chalmers: Waves his flag at the Hard Problem of consciousness. You can explain behaviour, memory, attention—but not experience itself. He flirts with dualism and panpsychism while insisting there’s a gap science cannot yet close.
  • Giulio Tononi: Gives us Integrated Information Theory (IIT) and the elusive metric Φ (phi). Consciousness is the degree to which information is unified within a system. Your brain is conscious because its parts can’t be reduced without losing coherence.
  • Karl Friston: The prophet of Free Energy Minimisation. Consciousness is an emergent property of systems that seek to reduce prediction error. The brain is a Bayesian engine, and the self is its best guess about how to survive – a toy sketch of this error-chasing loop follows the list.
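
For the flavour of that last idea – and only the flavour – here is a minimal sketch in Python, every number invented: a “brain” holds a belief about a hidden cause, receives noisy sensations, and nudges the belief to shrink the prediction error. It is a predictive-coding caricature, not Friston’s free-energy formalism.

```python
# Toy prediction-error minimisation: a belief is repeatedly nudged towards
# whatever best explains the incoming noisy sensations.
# Illustrative caricature only; not the full free-energy machinery.
import random

random.seed(42)

hidden_cause = 3.0      # the world's actual state (unknown to the agent)
belief = 0.0            # the agent's current best guess
learning_rate = 0.1

for _ in range(50):
    sensation = hidden_cause + random.gauss(0, 0.5)  # noisy observation
    prediction_error = sensation - belief            # "surprise", loosely
    belief += learning_rate * prediction_error       # shrink the error

print(f"final belief: {belief:.2f} (true value {hidden_cause})")
```

The moral is only that “minimising surprise” is an utterly mundane loop; whatever consciousness is, it is not hiding in this arithmetic.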

So What Is Consciousness?

A hallucination. A recursive illusion. A predictive dashboard. A statistical artefact. A phi score. A phenomenally transparent model. Take your pick.

None of these theories fully agree, but most converge on one elegant horror: you are not what you think you are. The sense of being a continuous, stable, indivisible “I” is a construction. A simulation. The dream from which there is no waking because waking is part of the dream.

This is not despair; it is clarity. Just as wetness does not cry when told it is not a substance, the self need not mourn its own illusion. It is a marvellous fiction, worth inhabiting.

Conclusion: Through the Mirror

To be conscious of consciousness is to stand in the hall of mirrors and realise none reflect the original—because there is no original. The mirror is the thing.

But if the theatre is empty, the play goes on. Scripts are written, models simulated, selves performed. And perhaps, in this strange recursion, we find not meaning, but the possibility of coherence.

So raise a glass to the illusion. May your predictive model stay optimised, your narrative stay plausible, and your hallucinated self remain just this side of transparent.


For further hallucinatory episodes, consult your local philosopher, neuroscientist, or AI researcher. Side effects may include derealisation, epistemic vertigo, and mild enlightenment.

Speculative Philosophy on Screen: Identity, Agency, and the Fiction of Reality

Image: Close-up of a human eye with digital glitch effects and overlaid text reading 'What if reality is wrong?'—a visual metaphor for distorted perception and unreliable truth.

Regular readers know I often write about identity, free will, and the narrative constraints of language. But I also explore these ideas through fiction, under the name Ridley Park.

In this short video, I unpack the philosophical motivations behind my stories, including:

  • Why reality is never as it seems
  • Why the self is a narrative convenience
  • What Heidegger’s Geworfenheit and Galen Strawson’s Causa Sui argument reveal about agency
  • And why language fails us – even when we think it serves

This isn’t promotional fluff. It’s epistemological dissent in a new format. Fictional, yes, but only in the sense that most of reality is, too.

▶️ Watch the video: Why I Write the Way I Do

Metamorphosis Inverted

What if the real horror isn’t waking as a beetle, but as a man?

In Kafka’s Metamorphosis, Gregor Samsa wakes to find himself transformed into a giant beetle—a cockroach, a vermin, an intrusion of the inhuman into the domestic. The horror is obvious: loss of agency, social death, the grotesque made literal. It’s the nightmare of devolution, of becoming something other, something filthy.

But perhaps we’ve misunderstood the true absurdity.

Audio: NotebookLM podcast on this topic.

What if the real nightmare is the opposite? Not a man waking as an insect, but an insect waking in a human body—forced to contend with taxes, performance reviews, dinner parties, and the crushing weight of being legible to others. Imagine a beetle, content in its instinctual certainty, finding itself hurled into the howling contradiction of human subjectivity.

Suddenly, it must interpret signs, participate in rituals, conform to decorum, all while performing a pantomime of “meaning.” It’s not the exoskeleton that’s horrifying – it’s the endless internal monologue. The soul-searching. The unbearable tension of being expected to have purpose.

We call Gregor’s fate tragic because he can no longer function in a world built for humans. But isn’t that the human condition already? An endless, futile negotiation between the raw fact of existence and the stories we invent to make it bearable.

Gregor becomes insect. We were never anything but.

The Rhetoric of Realism: When Language Pretends to Know

Let us begin with the heresy: Truth is a rhetorical artefact. Not a revelation. Not a metaphysical essence glimmering behind the veil. Just language — persuasive, repeatable, institutionally ratified language. In other words: branding.

Audio: NotebookLM podcast on this topic.

This is not merely a postmodern tantrum thrown at the altar of Enlightenment rationalism. It is a sober, if impolite, reminder that nearly everything we call “knowledge” is stitched together with narrative glue and semantic spit. Psychology. Neuroscience. Ethics. Economics. Each presents itself as a science — or worse, a moral imperative — but their foundations are built atop a linguistic faultline. They are, at best, elegant approximations; at worst, dogma in drag.

Let’s take psychology. Here is a field that diagnoses your soul via consensus. A committee of credentialed clerics sits down and declares a cluster of behaviours to be a disorder, assigns it a code, and hands you a script. It is then canonised in the DSM, the Diagnostic Scripture Manual. Doubt its legitimacy and you are either naïve or ill — which is to say, you’ve just confirmed the diagnosis. It’s a theological trap dressed in the language of care.

Or neuroscience — the church of the glowing blob. An fMRI shows a region “lighting up” and we are meant to believe we’ve located the seat of love, the anchor of morality, or the birthplace of free will. Never mind that we’re interpreting blood-oxygen fluctuations in composite images smoothed by statistical witchcraft. It looks scientific, therefore it must be real. The map is not the territory, but in neuroscience, it’s often a mood board.

And then there is language itself, the medium through which all these illusions are transmitted. It is the stage, the scenery, and the unreliable narrator. My Language Insufficiency Hypothesis proposes that language is not simply a flawed tool — it is fundamentally unfit for the task it pretends to perform. It was forged in the furnace of survival, not truth. We are asking a fork to play the violin.

This insufficiency is not an error to be corrected by better definitions or clever metaphors. It is the architecture of the system. To speak is to abstract. To abstract is to exclude. To exclude is to falsify. Every time we speak of a thing, we lose the thing itself. Language functions best not as a window to the real but as a veil — translucent, patterned, and perpetually in the way.

So what, then, are our Truths™? They are narratives that have won. Stories that survived the epistemic hunger games. They are rendered authoritative not by accuracy, but by resonance — psychological, cultural, institutional. A “truth” is what is widely accepted, not because it is right, but because it is rhetorically unassailable — for now.

This is the dirty secret of epistemology: coherence masquerades as correspondence. If enough concepts link arms convincingly, we grant them status. Not because they touch reality, but because they echo each other convincingly in our linguistic theatre.

Libet’s experiment, Foucault’s genealogies, McGilchrist’s hemispheric metaphors — each peels back the curtain in its own way. Libet shows that agency might be a post-hoc illusion. Foucault reveals that disciplines don’t describe the subject; they produce it. McGilchrist laments that the Emissary now rules the Master, and the world is flatter for it.

But all of them — and all of us — are trapped in the same game: the tyranny of the signifier. We speak not to uncover truth, but to make truth-sounding noises. And the tragedy is, we often convince ourselves.

So no, we cannot escape the prison of language. But we can acknowledge its bars. And maybe, just maybe, we can rattle them loudly enough that others hear the clank.

Until then, we continue — philosophers, scientists, diagnosticians, rhetoricians — playing epistemology like a parlour game with rigged dice, congratulating each other on how well the rules make sense.

And why wouldn’t they? We wrote them.

Semantic Drift: When Language Outruns the Science

Science has a language problem. Not a lack of it – if anything, a surfeit. But words, unlike test tubes, do not stay sterile. They evolve, mutate, and metastasise. They get borrowed, bent, misused, and misremembered. And when the public discourse gets hold of them, particularly on platforms like TikTok, it’s the language that gets top billing. The science? Second lead, if it’s lucky.

Semantic drift is at the centre of this: the gradual shift in meaning of a word or phrase over time. It’s how “literally” came to mean “figuratively,” how “organic” went from “carbon-based” to “morally superior,” and how “theory” in science means robust explanatory framework but in the public square means vague guess with no homework.

In short, semantic drift lets rhetoric masquerade as reason. Once a word acquires enough connotation, you can deploy it like a spell. No need to define your terms when the vibe will do.

Audio: NotebookLM podcast on this topic.

When “Vitamin” No Longer Means Vitamin

Take the word vitamin. It sounds objective. Authoritative. Something codified in the genetic commandments of all living things. (reference)

But it isn’t.

A vitamin is simply an organic compound that an organism needs in small amounts but cannot synthesise for itself, and so must obtain through its diet. That’s it. It’s a functional definition, not a chemical one.

So:

  • Vitamin C is a vitamin for humans, but not for dogs, cats, or goats. They make their own. We lost the gene. Tough luck.
  • Vitamin D, meanwhile, isn’t a vitamin at all. It’s a hormone, synthesised when sunlight hits your skin. Its vitamin status is a historical relic – named before we knew better, and now marketed too profitably to correct.
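
To make the functional definition concrete, here is a toy classification in Python – species sets simplified, the dietary-need half of the definition assumed – showing that “vitamin” attaches to a molecule-plus-species pair, not to the molecule alone.

```python
# "Vitamin" is a functional label, not a chemical one: the same molecule can
# be a vitamin for one species and not for another. Simplified illustration;
# it assumes every listed substance is needed by every species and only asks
# whether that species can make it.
CAN_SYNTHESISE = {
    "ascorbic acid (vitamin C)": {"dog", "cat", "goat"},  # humans lack a working GULO gene
    "cholecalciferol (vitamin D)": {"human"},             # made in human skin, given sunlight
}

def is_vitamin_for(substance: str, species: str) -> bool:
    """A needed substance counts as a vitamin only for species that cannot make it."""
    return species not in CAN_SYNTHESISE.get(substance, set())

print(is_vitamin_for("ascorbic acid (vitamin C)", "human"))  # True  - we must eat it
print(is_vitamin_for("ascorbic acid (vitamin C)", "goat"))   # False - goats brew their own
```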

But in the land of TikTok and supplement shelves, these nuances evaporate. “Vitamin” has drifted from scientific designation to halo term – a linguistic fig leaf draped over everything from snake oil to ultraviolet-induced steroidogenesis.

The Rhetorical Sleight of Hand

This linguistic slippage is precisely what allows the rhetorical shenanigans to thrive.

In one video, a bloke claims a burger left out for 151 days neither moulds nor decays, and therefore, “nature won’t touch it.” From there, he leaps (with Olympic disregard for coherence) into talk of sugar spikes, mood swings, and “metabolic chaos.” You can almost hear the conspiratorial music rising.

The science here is, let’s be generous, circumstantial. But the language? Oh, the language is airtight.

Words like “processed,” “chemical,” and “natural” are deployed like moral verdicts, not descriptive categories. The implication isn’t argued – it’s assumed, because the semantics have been doing quiet groundwork for years. “Natural” = good. “Chemical” = bad. “Vitamin” = necessary. “Addiction” = no agency.

By the time the viewer blinks, they’re nodding along to a story told by words in costume, not facts in context.

The Linguistic Metabolism of Misunderstanding

This is why semantic drift isn’t just an academic curiosity – it’s a vector. A vector by which misinformation spreads, not through outright falsehood, but through weaponised ambiguity.

A term like “sugar crash” sounds scientific. It even maps onto a real physiological process: postprandial hypoglycaemia. But when yoked to vague claims about mood, willpower, and “chemical hijacking,” it becomes a meme with lab coat cosplay. And the science, if mentioned at all, is there merely to decorate the argument, not drive it.

That’s the crux of my forthcoming book, The Language Insufficiency Hypothesis: that our inherited languages, designed for trade, prayer, and gossip, are woefully ill-equipped for modern scientific clarity. They lag behind our knowledge, and worse, they often distort it.

Words arrive first. Definitions come limping after.

In Closing: You Are What You Consume (Linguistically)

The real problem isn’t that TikTokers get the science wrong. The problem is that they get the words right – right enough to slip past your critical filters. Rhetoric wears the lab coat. Logic gets left in the locker room.

If vitamin C is a vitamin only for some species, and vitamin D isn’t a vitamin at all, then what else are we mislabelling in the great nutritional theatre? What other linguistic zombies are still wandering the scientific lexicon?

Language may be the best tool we have, but don’t mistake it for a mirror. It’s a carnival funhouse – distorting, framing, and reflecting what we expect to see. And until we fix that, science will keep playing second fiddle to the words pretending to explain it.