Truth, Subjectivity, and Constraint

I like this bloke. Here, he clarifies Rorty’s perspective on Truth. I am quite in sync with Rorty’s position, perhaps 90-odd per cent.

Allow me to explain.

Audio: NotebookLM summary podcast of this topic.

I have written about truth several times over the years, 1, 2, 3, and more. In earlier posts, I put the point rather bluntly: truth is largely rhetorical. I still think that captured something important, but it now feels incomplete. With the development of my Mediated Encounter Ontology of the World (MEOW) and the Language Insufficiency Hypothesis (LIH), the picture needs tightening.

NotebookLM Infographic on this topic.

The first step is to stop pretending that ‘truth’ names a single thing.

Philosopher Bernard Williams helpfully distinguished between thin and thick senses of truth in Truth and Truthfulness. The distinction is simple but instructive.

In its thin sense, truth is almost trivial. Saying ‘it is true that p’ typically adds nothing beyond asserting p – as in Tarski’s schema, ‘snow is white’ is true if and only if snow is white. The word ‘true’ functions as a logical convenience: it allows endorsement, disquotation, and generalisation. Philosophically speaking, this version of truth carries very little metaphysical weight. Most arguments about truth, however, are not about this thin sense.

In practice, truth usually appears in a thicker social sense. Here, truth is embedded in practices of inquiry and communication. Communities develop norms around sincerity, accuracy, testimony, and credibility. These norms help stabilise claims so that people can coordinate action and share information.

At this level, truth becomes something like a social achievement. A statement counts as ‘true’ when it can be defended, circulated, reinforced, and relied upon within a shared framework of interpretation. Evidence matters, but so do rhetoric, persuasion, institutional authority, and the distribution of power. This is the sense in which truth is rhetorical, but rhetoric is not sovereign.

NotebookLM Infographic on this topic. I prompted NotebookLM to illustrate a 4-layered model that shows how removed language is from encounter, attention, conception, and representation of what we normally consider to be reality. This view is supported by both MEOW and LIH.

Human beings can imagine almost anything about the world, yet the world has a stubborn habit of refusing certain descriptions. Gravity does not yield to persuasion. A bridge designed according to fashionable rhetoric rather than sound engineering will collapse regardless of how compelling its advocates may have been.

This constraint does not disappear in socially constructed domains. Institutions, identities, norms, and laws are historically contingent and rhetorically stabilised, but they remain embedded within material, biological, and ecological conditions. A social fiction can persist for decades or centuries, but eventually it encounters pressures that force revision.

Subjectivity, therefore, doesn’t imply that ‘anything goes’. It simply means that all human knowledge is mediated.

We encounter the world through perception, language, culture, and conceptual frameworks. Every description is produced from a particular standpoint, using particular tools, within particular historical circumstances. Language compresses experience and inevitably loses information along the way. No statement captures reality without distortion. This is the basic insight behind the Language Insufficiency Hypothesis.

At the same time, our descriptions remain answerable to the constraints of the world we inhabit. Some descriptions survive repeated encounters better than others.

In domains where empirical constraint is strong – engineering, physics, medicine – bad descriptions fail quickly. In domains where constraint is indirect – ethics, politics, identity, aesthetics – multiple interpretations may remain viable for long periods. In such cases, rhetoric, institutional authority, and power often function as tie-breakers, stabilising one interpretation over others so that societies can coordinate their activities. These settlements are rarely permanent.

What appears to be truth in one era may dissolve in another. Concepts drift. Institutions evolve. Technologies reshape the landscape of possibility. Claims that once seemed self-evident may later appear parochial or incoherent.

In this sense, many truths in human affairs are best understood as temporally successful settlements under constraint.

Even the most stable arrangements remain vulnerable to change because the conditions that sustain them are constantly shifting. Agents change. Environments change. Expectations change. The very success of a social order often generates the tensions that undermine it. Change, in other words, is the only persistence.

The mistake of traditional realism is to imagine truth as a mirror of reality – an unmediated correspondence between statement and world. The mistake of crude relativism is to imagine that language and power can shape reality without limit. Both positions misunderstand the situation.

We do not possess a final language that captures reality exactly as it is. But neither are we free to describe the world however we please. Truth is not revelation, and it is not mere invention.

It is the provisional stabilisation of claims within mediated encounter, negotiated through language, rhetoric, and institutions, and continually tested against a world that never fully yields to our descriptions. We don’t discover Truth with a capital T. We negotiate survivable descriptions under pressure.

The Procrustean Universe

How Modern Thought Mistakes Its Own Grid for Reality

Modern thought has a peculiar habit.

It builds a measuring device, forces the world through it, and then congratulates itself for discovering what the world is really like.

This is not always called scientism. Sometimes it is called rigour, precision, formalism, standardisation, operationalisation, modelling, or progress. The names vary. The structure does not. First comes the instrument. Then comes the simplification. Then comes the quiet metaphysical sleight of hand by which the simplification is promoted into reality itself.

Consider music.

A drummer lays down a part with slight drag, push, looseness, tension. It breathes. It leans. It resists the metronome just enough to sound alive. Then someone opens Pro Tools and quantises it. The notes snap to grid. The beat is now ‘correct’. It is also, very often, dead.

This is usually treated as an aesthetic dispute between old romantics and modern technicians. It is more than that. It is a parable.

Quantisation is not evil because it imposes structure. Every recording process imposes structure. The problem is what happens next. Once the grid has done its work, people begin to hear the grid not as a tool, but as truth. Timing that exceeds it is heard as error. The metric scaffold becomes the criterion of reality.
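Hard quantisation is trivially simple, which is part of the seduction. Here is a toy sketch in Python – not any DAW’s actual algorithm; the `quantise` function and the 16th-note grid are illustrative assumptions – showing onsets snapped to the grid and the residue the grid discards:

```python
GRID = 0.25  # grid resolution in beats (16th notes in 4/4) -- an illustrative choice

def quantise(onsets, grid=GRID):
    """Snap each onset time (in beats) to the nearest grid line."""
    return [round(t / grid) * grid for t in onsets]

# A drummer's slightly pushed and dragged hits, fractionally off-grid
performance = [0.02, 0.27, 0.48, 0.74, 1.01]

snapped = quantise(performance)
print(snapped)  # [0.0, 0.25, 0.5, 0.75, 1.0] -- now 'correct'

# The residue the grid throws away is exactly the timing that breathed
feel = [p - s for p, s in zip(performance, snapped)]
print(feel)
```

The point of the sketch is the asymmetry: `snapped` is what survives; `feel` is what the instrument was built to exclude.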

Audio: NotebookLM summary podcast of this topic.

A civilisation can live like this.

It can begin with a convenience and end with an ontology.

Carlo Rovelli’s The Order of Time is useful here precisely because it unsettles the fantasy that time is a single smooth substance flowing uniformly everywhere like some celestial click-track. It is not. Time frays. It dilates. It varies by frame, relation, and condition. Space, too, loses its old role as passive container. The world begins to look less like a neat box of coordinates and more like an unruly field of relations that only reluctantly tolerates our diagrams.

This ought to induce some modesty. Instead, modern disciplines often respond by doubling down on the diagram.

That is where James C. Scott arrives, carrying the whole argument in a wheelbarrow. Seeing Like a State is not merely about states. It is about the administrative desire to make the world legible by reducing it to formats that can be counted, organised, compared, and controlled. Forests become timber reserves. People become census entries. Places become parcels. Lives become cases. The simplification is not wholly false. It is simply tailored to the needs of governance rather than to the fullness of what is governed.

That’s the key.

The state does not need the world in its density. It needs the world in a format it can read.

And modern disciplines are often no different. They require stable units, repeatable measures, abstract comparability, portable standards. Fair enough. No one is conducting physics with incense and pastoral reverie. But then comes the familiar conceit: what was required for the practice quietly becomes what reality is said to be. The discipline first builds the bed for its own survival, then condemns the world for failing to lie down properly.

This is the Procrustean move.

Cut off what exceeds the frame. Stretch what falls short. Call the result necessity.

Many supposed paradoxes begin here. Not in reality itself, but in the overreach of a measuring grammar.

I use a ruler to measure temperature, and I am surprised when it does not comport.

The example is absurd, which is why it is helpful. The absurdity is not in the temperature. It’s in the category mistake. Yet much of modern thought survives by committing more sophisticated versions of precisely this error. We use tools built for extension to interpret process. We use spatial metaphors to capture time. We use statistical flattening to speak of persons. We use administrative categories to speak of communities. We use computational tractability to speak of mind. Then the thing resists, and we call the resistance mysterious.
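The ruler-and-temperature mistake can be made concrete with a typed sketch – entirely my own illustration, not anything from the essay – in which the instrument’s ‘grammar’ is encoded as a type, so the mismatch fails loudly instead of producing mysterious numbers:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Length:
    metres: float

@dataclass(frozen=True)
class Temperature:
    kelvin: float

def ruler_reading(x: Length) -> float:
    """A ruler's grammar: it can only answer questions about extension."""
    # Python does not enforce the type hint at runtime, so the category
    # check is made explicit here.
    if not isinstance(x, Length):
        raise TypeError("a ruler cannot measure " + type(x).__name__)
    return x.metres

print(ruler_reading(Length(1.8)))  # fine: the instrument fits the question

try:
    ruler_reading(Temperature(293.0))  # the category mistake, made explicit
except TypeError as e:
    print(e)  # a ruler cannot measure Temperature
```

Much of what the essay calls paradox behaves like the untyped version of this error: the mismatched measure returns a number anyway, and we treat the nonsense as a property of the world.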

Sometimes it is not mysterious at all. Sometimes it is merely refusal.

The world declines to be exhausted by the terms under which we can most easily manage it.

That refusal then returns to us under grander names: paradox, irrationality, inconsistency, noise, anomaly. But what if the anomaly is only the residue of what our instruments were built to exclude? What if paradox is often the bruise left by an ill-fitted measure?

This is where realism, at least in its chest-thumping modern form, begins to look suspicious. Not because there is no world. There is clearly something that resists us, constrains us, embarrasses us, punishes bad maps, and ruins bad theories. The issue is not whether there is a real. The issue is whether what we call ‘the real’ is too often just what our current apparatus can stabilise.

That is not realism.

That is successful compression mistaken for ontology.

Space and time, in this light, begin to look less like the universe’s native grammar and more like the interface through which a certain kind of finite creature renders the world tractable. Useful, yes. Necessary for us, perhaps. Final? Hardly.

The same applies everywhere. We do not merely measure the world. We reshape it, conceptually and institutionally, until it better fits our preferred methods of seeing. Then we forget we did this.

Scott’s lesson is that states fail when they confuse legibility with understanding. Our broader civilisational lesson may be that disciplines fail in much the same way. They flatten in order to know, and then mistake the flattening for disclosure. What exceeds the frame is dismissed until it returns as contradiction.

None of this requires anti-scientific melodrama. Science is powerful. Measurement is indispensable. Standardisation is often the price of cumulative knowledge. The problem is not the existence of the grid. The problem is the promotion of the grid into metaphysics. A tool required for a practice is not therefore the native structure of the world. That should be obvious. It rarely is.

Scientism, in its most irritating form, begins precisely where this obviousness ends. It is not disciplined inquiry but disciplinary inflation: the belief that whatever can be rendered formally legible is most real, and whatever resists is merely awaiting capture by better instruments, finer models, sharper equations, more obedient categories. It is the provincial fantasy that the universe must ultimately speak in the accent of our methods.

Perhaps it doesn’t.

Perhaps our great achievement is not that we have discovered reality’s final language, but that we have become unusually good at mistaking our translations for the original.

Imagine that.

Architecture of Encounter

I’ve been writing. In fact, I’ve been clarifying A Mediated Encounter Ontology of the World (MEOW) and expanding and extending it into a book with a broader remit. This might well be the cover, following the monograph layout for Philosophics Press.

Image: Mockup of cover art.

As shown, the working title is The Architecture of Encounter: A Mediated Encounter Ontology. I’ve swapped the slate cover for magenta in this volume.

So what’s it all about?

I’m not going to summarise the book here, but I’ll share some tidbits. I’ve settled on these chapter names:

  1. The Mediated Encounter Ontology
  2. Ontology
  3. Subjecthood
  4. Logic
  5. Epistemology
  6. Perception and Affordances
  7. Language
  8. Social Ontology
  9. Realism
  10. Application
  11. The Normativity Frontier
  12. Conclusion

Chapter 1, The Mediated Encounter Ontology, is a summary and update of the original essay, which will be included in full as an appendix item for reference, but this update will become canonical.

Chapter 2, Ontology: Interaction, Constraint, and the Rejection of Substance, will describe what I mean by ontology and what my proposed ontology looks like.

Chapter 3, Subjecthood: Modal Differentiation Within the Field, will explain how the subject-object relationship changes, and what a subject is in the first place.

Chapter 4, Logic: Coherence Grammar Under Constraint, will explain what logic is and how it operates in this paradigm.

Chapter 5, Epistemology: Convergence, Error, and the Structure of Justification, will describe what knowledge looks like. IYKYK.

Chapter 6, Perception and Affordances: Encounter as Orientation, extends Gibson’s work to comport with MEOW 2.0 (or 1.1).

Chapter 7, Language: Synchronisation, Ontological Grammar, and Structural Limits, explains how language works and how it limits our perception. We’re not talking Sapir-Whorf here, but what respectable language philosopher wouldn’t reserve a chapter for language?

Chapter 8, Social Ontology: Second-Order Constraint Systems. MEOW has a lot to say about first-order constraints, but there are higher-order considerations. I discuss them here.

Chapter 9, Realism: Cross-Perspectival Convergence and the Invariant Anchor, talks about the real elephant in the room. Since MEOW challenges both realism and idealism, we need to talk about it.

Chapter 10, Application: The Apophatic Mind, is mostly an observation on artificial intelligence as it relates to the mind-consciousness debate, primarily scoped around LLMs and similar machine processes.

Chapter 11, The Normativity Frontier, doesn’t yet have a subtitle, but this is where I discuss issues like normative ethics and morality.

I probably don’t need to tell you how Conclusion chapters work.

I expect to have 3 appendices.

  1. Summary of commitments, which will summarise and distil key topics – so like a cheat sheet for reference – a bit more robust than a glossary.
  2. Bibliography of reference material. As this is not an essay, it won’t be chock-full of citations – only a few, where I feel they are necessary. Much of this work represents years of thinking, and in many cases, the attribution has been lost; I remember the contents and not necessarily the attribution. I will prompt AI to fill in some missing pieces, but that’s that. The bibliography attempts to capture the general flavour.
  3. The original MEOW essay. This is already freely available on several platforms, including Zenodo. Download it here if you haven’t already – or wait for the book.

The rest of the story

This book not only extends MEOW; it also ties in concepts from A Language Insufficiency Hypothesis and other work of mine, both published and as yet unpublished.

I expect to produce a decent amount of explanatory and support material, though to be fair, I tell myself that every time until I get distracted by the next project. I need a producer to manage these affairs.

Why “Just Think Critically” Keeps Failing

The paper is now available on Zenodo.

I’ve been wittering on about social ontological positions and legibility for a few months now. I’ve been writing a book and several essays, but this is the first to be published. In it, I counter not only Ranalli – not him personally, but the position he adopts – but also Thomas Sowell, George Lakoff, Jonathan Haidt, Kurt Gray, and Joshua Greene. (Counter might be a little harsh; I agree with their conclusions, but I remain on the path they stray from.)

Audio: NotebookLM summary of the essay: Grammatical Failure

There is a strange faith circulating in contemporary culture: the belief that disagreement persists because someone, somewhere, hasn’t been taught how to think properly.

The prescription is always the same. Teach critical thinking. Encourage openness. Expose people to alternatives. If they would only slow down, examine the evidence, and reflect honestly, the right conclusions would present themselves.

When this doesn’t work, the explanation is equally ready to hand. The person must be biased. Indoctrinated. Captured by ideology. Reason-resistant.

What’s rarely considered is a simpler possibility: nothing has gone wrong.

Most of our public arguments assume that we are all operating inside the same conceptual space, disagreeing only about how to populate it. We imagine a shared menu of reasons, facts, and values, from which different people select poorly. On that picture, better reasoning should fix things.

What if what counts as a ‘reason’, what qualifies as ‘evidence’, or what even registers as a meaningful alternative is already structured differently before any deliberation begins?

At that point, telling someone to ‘think critically’ is like asking them to optimise a system they cannot see, using criteria they do not recognise. The instruction is not offensive. It’s unintelligible. This is why so many contemporary disputes feel immune to argument. Not merely heated, but strangely orthogonal. You aren’t rebutted so much as translated into something else entirely: naïve, immoral, dangerous, unserious. And you do the same in return.

Liberal epistemology has a neat explanation for this. It treats these failures as agent-level defects: insufficient openness, motivated reasoning, epistemic irresponsibility. The problem is always how people reason. The argument of Grammatical Failure is that this diagnosis is systematically misplaced. The real constraint, in many cases, lies upstream of reasoning itself. It lies in the semantic frameworks that determine what can count as a reason in the first place. When those frameworks diverge, deliberation doesn’t fail heroically. It fails grammatically.

This doesn’t mean people lack agency. It means agency operates within a grammar, not over it. We choose, revise, and reflect inside spaces of intelligibility we did not author. Asking deliberation to rewrite its own conditions is like asking a sentence to revise its own syntax mid-utterance. The result is a familiar pathology. Disagreement across frameworks is redescribed as epistemic vice. Category rejection is mistaken for weak endorsement. Indoctrination becomes a label we apply whenever persuasion fails. Not because anyone is lying, but because our diagnostic tools cannot represent what they are encountering.

The paper itself is not a manifesto or a programme. It doesn’t tell you what to believe, how to educate, or which politics to adopt. It does something more modest and more uncomfortable. It draws a boundary around what liberal epistemology can coherently explain – and shows what happens when that boundary is ignored.

Sometimes the problem isn’t that people won’t think.

It’s that they are already thinking in a grammar that your advice cannot reach.

The Expiration Date of Object Permanence

There is a persistent story we tell ourselves about quantum mechanics:* that it reveals reality to be fundamentally strange, paradoxical, or hostile to common sense. Particles in two places at once. Cats both alive and dead. Worlds multiplying to save appearances.

I’ve never found that story convincing.

Audio: NotebookLM summary podcast of this topic.
Image: NotebookLM infographic for this topic.

What I do find convincing is a simpler diagnosis: that we are applying a cognitive tool far beyond the conditions under which it earned its authority – and then mistaking the resulting discomfort for metaphysical insight.

Object permanence is one of our earliest and most successful heuristics. It allows infants to track toys behind sofas, caregivers behind doors, and threats behind occlusion. Without it, coordinated action would be impossible. With it, the world becomes navigable, predictable, and stable. It is a genuine cognitive achievement. But it is not a universal guarantee about reality.

In a new essay, The Expiration Date of Object Permanence: Heuristics, Grammar, and Quantum Pseudoproblems, I argue that much of what we call ‘quantum weirdness’ arises from the uncritical extension of this heuristic into domains where its ecological licensing no longer holds. The problem is not that quantum mechanics violates common sense. The problem is that we quietly treat common sense as metaphysics.

Quantum mechanics functions here not as a mystery generator, but as a stress test. Recent matter-wave interference experiments with increasingly massive systems show that object-based expectations fail quantifiably under carefully engineered conditions. When environmental coupling is suppressed, when decoherence is delayed, when the world is no longer warm, noisy, and forgiving, the assumptions underwriting object permanence simply stop paying rent.

The essay also takes a dim view of some familiar cultural furniture. Schrödinger’s cat, for example, was introduced as a reductio – an intentionally absurd demonstration of what happens when microscopic formalism is naively scaled up. That it now circulates as an explanatory image tells us less about quantum mechanics than about the tenacity of object-grammar. Even jokes cannot escape it.

Interpretations fare no better. I suggest that the appeal of frameworks like Many-Worlds is not exhausted by their technical merits. They also function as strategies for preserving object-based reidentification – ways of ensuring that there is still something that can be pointed to, counted, and followed through time, even if the price is ontological inflation.

None of this denies the reality of quantum phenomena, nor does it pretend to solve the measurement problem. The essay is deliberately deflationary. Its claim is methodological, not revisionary: that many of the puzzles we inherit are artefacts of treating developmentally acquired heuristics as if they were unconditional features of the world.

Philosophy’s task, on this view, is not to make reality intuitive. It is to recognise when intuition has reached the end of its jurisdiction.

The paper is now available on Zenodo and will be indexed shortly on PhilPapers. As always, comments, objections, and principled misreadings are welcome.


* This post and the underlying essay were inspired by a Nature article: Probing quantum mechanics with nanoparticle matter-wave interferometry, published on 21 January 2026. I get annoyed watching people misunderstand quantum mechanics and its effects, so I decided to address some of the issues in an essay. Read that article as well as my essay, which explains why the paradoxes and ‘spooky behaviour’ of QM are only counter-intuitive if you’ve fallen into this heuristic trap.

The Trouble with Facts

One Motor Vehicle

What we call facts are not discoveries of an unfiltered world. They are the end-products of mediation.

Let’s walk through an example.

Image: Autosmash example.

An observer arrives with experience – from genetic predisposition to childhood trauma to winning the lottery. Whatever it might be. Of course, they have many cognitive deficits, biases and filters. Then, there’s the immediate problem of attention. When did they notice the event? Did they turn to look after hearing the noise, or were they meditating on the tree in that moment?

Apparently, a motor vehicle has collided with a tree. Trees are immobile objects, so we can safely rule out the tree colliding with the car.*

So what, exactly, are the facts?

Audio: NotebookLM summary podcast of this topic.

Ontology (the boring bit)

Ontologically, something happened.

A car struck a tree.
Metal deformed.
Momentum stopped.

Reality did not hesitate. It did not consult witnesses. It did not await interpretation.

This is the part Modernity likes to gesture at reverently before immediately leaving it behind.

Image: Requisite NotebookLM infographic on this content.

The Witness

Even the driver does not enjoy privileged access to “what really happened”.

They get:

  • proprioceptive shock
  • adrenaline distortion
  • attentional narrowing
  • selective memory
  • post hoc rationalisation
  • possibly a concussion

Which is already several layers deep before language even arrives to finish the job.

We can generalise the structure:

Ontology: events occur. States of affairs obtain. Something happens whether or not we notice.

Epistemology: observation is always filtered through instruments, concepts, language, habits, and incentives.

Modern sleight of hand: collapse the second into the first and call the result the facts.

People love the phrase “hard facts”, as if hardness transfers from objects to propositions by osmosis. It doesn’t. The tree is solid. The fact is not.

Facts are artefacts. They are assembled from observation, inference, convention, and agreement. They function. They do not reveal essence.

Filtration

An event occurred. A car struck a tree.

Then an observer arrives. But observers never arrive empty-handed.

They arrive with history: genetics, upbringing, trauma, habits, expectations, incentives. They arrive already filtered.

Daniel Kahneman, Olivier Sibony, and Cass Sunstein spend an entire book explaining just how unreliable this process is. See Noise: A Flaw in Human Judgment if you want the empirical receipts.

Even before bias enters, attention does.

  • When did the observer notice the crash?
  • At the sound? At the sight? After the fact?
  • Were they already looking, or did the noise interrupt something else entirely?

Reality happens once. Facts happen many times, differently, depending on who needs them and why.

Here Comes the Law

This is where the legal system enters, not because truth has been found, but because closure is required.

Courts do not discover facts. They designate versions of events that are good enough to carry consequences. They halt the cascade of interpretations by institutional force and call the result justice.

At every epistemic level, what we assert are interpretations of fact, never access to ontological essence.

Intent, negligence, recklessness. These are not observations. They are attributions. They are stopping rules that allow systems to function despite uncertainty.

The law does not ask what really happened.
It asks which story is actionable.

Two Motor Vehicles

Now add a second moving object.

Another car enters the frame, and with it an entire moral universe.

Suddenly, the event is no longer merely physical. It becomes relational. Agency proliferates. Narratives metastasise.

Who was speeding?
Who had the right of way?
Who saw whom first?
Who should have anticipated whom?

Intent and motive rush in to fill the explanatory vacuum, despite remaining just as unobservable as before.

Nothing about the ontology improved.
Everything about the storytelling did.

Where the tree refused intention, the second vehicle invites it. We begin inferring states of mind from trajectories, attributing beliefs from brake lights, extracting motives from milliseconds of motion.

But none of this is observed.

What we observe are:

  • vehicle positions after the fact,
  • damage patterns,
  • skid marks,
  • witness statements already filtered through shock and expectation.

From these traces, we construct mental interiors.

The driver “intended” to turn.
The other driver “failed” to anticipate.
Someone was “reckless”.
Someone else was merely “unlucky”.

These are not facts. They are interpretive assignments, layered atop already mediated observations, selected because they allow responsibility to be distributed in socially recognisable ways.

This is why explanation now fractures.

One cascade of whys produces a story about distraction or poor judgment.
Another produces a story about road design or visibility.
Another about timing, traffic flow, or urban planning.

Each narrative is plausible.
Each is evidence-constrained.
None is ontologically privileged.

Yet one will be chosen.

Not because it is truer, but because it is actionable.

The presence of a second vehicle does not clarify causation. It merely increases the number of places we are willing to stop asking questions.

Modernity mistakes this proliferation of narrative for epistemic progress. In reality, it is moral bookkeeping.

The crash still occurred.
Metal still deformed.
Momentum still stopped.

What changed was not access to truth, but the urgency to assign fault.

With one vehicle and a tree, facts already fail to arrive unmediated.
With two vehicles, mediation becomes the point.

And still, we insist on calling the result the facts.

Many Vehicles, Cameras, and Experts

At this point, Modernity regains confidence.

Add more vehicles.
Add traffic cameras.
Add dashcams, CCTV, bodycams.
Add accident reconstruction experts, engineers, psychologists, statisticians.

Surely now we are approaching the facts.

But nothing fundamental has changed. We have not escaped mediation. We have merely scaled it up and professionalised it.

Cameras do not record reality. They record:

  • a frame,
  • from a position,
  • at a sampling rate,
  • with compression,
  • under lighting conditions,
  • interpreted later by someone with a mandate.

Video feels decisive because it is vivid, not because it is ontologically transparent. It freezes perspective and mistakes that freeze for truth. Slow motion, zoom, annotation. Each step adds clarity and distance at the same time.

Experts do not access essence either. They perform disciplined abduction.

From angles, debris fields, timing estimates, and damage profiles, they infer plausible sequences. They do not recover the event. They model it. Their authority lies not in proximity to reality, but in institutional trust and methodological constraint.

More data does not collapse interpretation.
It multiplies it.

With enough footage, we don’t get the story. We get competing reconstructions, each internally coherent, each technically defensible, each aligned to a different question:

  • Who is legally liable?
  • Who is financially responsible?
  • Who violated policy?
  • Who can be blamed without destabilising the system?

At some point, someone declares the evidence “clear”.

What they mean is: we have enough material to stop arguing.

This is the final Modern illusion: that accumulation converges on essence. In reality, accumulation converges on closure.

The event remains what it always was: inaccessible except through traces.
The facts become thicker, more confident, more footnoted.
Their metaphysical status does not improve.

Reality happened once. It left debris. We organised the debris into narratives that could survive institutions.

Cameras didn’t reveal the truth. Experts didn’t extract it. They helped us agree on which interpretation would count.

And agreement, however necessary, has never been the same thing as access to what is.

* I was once driving in a storm when a telephone pole fell about a metre in front of my vehicle. My car drove over the pole, and although I was able to drive the remainder of the way home, my suspension and undercarriage were the worse for wear.

Language Insufficiency Hypothesis: Presumption and Horizon

I discuss Chapter 4 of ‘A Language Insufficiency Hypothesis’ in this video clip.

Video: Presumption and Horizon

In short, I discuss where language fails in law, politics, science, and digital culture – domains in which we assume language conveys more than it does.

Rejection Letter

Rejected Note
2–3 minutes

Dear Author,

[REDACTED] does not accept the submission of personal works produced by students, independent researchers, or professionals who have not yet attained a doctoral level. This is a moderation policy intended to ensure that publications deposited on the platform originate from qualified researchers affiliated with a recognized institution (REDACTED) and acknowledged for their expertise or previous work in the relevant field of research. This rule applies regardless of the quality or scientific value of the work, which is by no means in question here. We therefore regret to inform you that we are unable to accept this submission. If you wish, we invite you to share your work through other open platforms such as Zenodo, which allow all authors to make their research visible. Thank you for your understanding.

Kind regards

Allow me to rephrase this:

Disappointing, though hardly surprising. This is the same logic as age-based thresholds I have recently taken a hammer to: crude proxies elevated into moral and epistemic gatekeepers. Not ‘is this good?’, but ‘are you old enough, stamped enough, letterheaded enough to be taken seriously?’. A bureaucratic horoscope.

Yes, I use Zenodo. I use PhilPapers. I will continue to do so. But let’s not pretend all platforms are socially equivalent. Journals still function as credibility engines, not because they magically improve truth, but because they distribute legitimacy. To be excluded on status grounds alone is not a quality filter. It is a caste system with footnotes.

And journals already make participation unnecessarily hostile. Many refuse work that has been publicly shared at all, even in preprint form. Lead times stretch to a year or more. The result is that anyone attempting to contribute to live debates is instructed to sit quietly whilst the conversation moves on without them. In a so-called knowledge economy, this is an astonishing self-own.

What we have, then, is a system that:

  • equates institutional affiliation with epistemic competence,
  • penalises open dissemination,
  • and delays circulation until relevance decays.

All in the name of rigour.

I will keep submitting elsewhere. There are other journals. There always are. But let’s stop pretending this is about protecting standards. It is about preserving a hierarchy that mistakes accreditation for insight and treats independent thought as a contamination risk.

Knowledge does not become true by passing through the right doorway. It merely becomes approved. I’ll not witter on about the bollocks of peer review.

Homo Legibilis

3–4 minutes

A Brief Field Note from the Department of Bureaucratic Anthropology

Still reeling from the inability to fold some pan into homo, palaeontologists are seemingly desperate for a new hominid. Some dream of discovering the ‘missing link’; others, more honest, just want something with a jawline interesting enough to secure a grant. So imagine the surprise when the latest species didn’t come out of the Rift Valley but out of an abandoned server farm somewhere outside Reading.

They’ve named it Homo legibilis – the Readable Human. Not ‘H. normālis’ (normal human), not ‘H. ratiōnālis’ (rational human), but the one who lived primarily to be interpreted. A species that woke each morning with a simple evolutionary imperative: ensure one’s dataprints were tidy, current, and machine-actionable.

Audio: NotebookLM summary podcast of this topic.

You’ll have seen their skeletons before, though you may not have recognised them as such. They often appear upright, mid-scroll, preserved in the amber of a status update. A remarkable creature, really. Lithe thumbs. Soft cranial matter. Eyes adapted for low-light environments lit primarily by advertisements.

Habitat

The species thrived in densely surveilled ecosystems: corporate intranets, public Wi-Fi, facial-recognition corridors, anywhere with sufficient metadata to form a lasting imprint. They built vast nests out of profiles, settings, dashboards. Territorial disputes were settled not through display or violence but through privacy-policy updates. Their preferred climate? Temperate bureaucracy.

Diet

Contrary to earlier assumptions, H. legibilis did not feed on information. It fed on interpretation: likes, metrics, performance reviews, and algorithmic appraisal. Some specimens survived entire winters on a single quarterly report. Every fossil indicates a digestive tract incapable of processing nuance. Subtext passed through untouched.

Mating Rituals

Courtship displays involved reciprocal data disclosure across multiple platforms, often followed by rapid abandonment once sufficient behavioural samples were collected. One famous specimen is preserved alongside fourteen dating-app profiles and not a single functional relationship. Tragic, in a way, but consistent with the species’ priorities: be seen, not held.

Distinguishing Traits

Where Homo sapiens walked upright, Homo legibilis aimed to sit upright in a chair facing a webcam.
Its spine is subtly adapted for compliance reviews. Its hands are shaped to cradle an object that no longer exists: something called ‘a phone’. Ironically, some term these ‘mobiles’, apparently unaware of the tethers.

Researchers note that the creature’s selfhood appears to have been a consensual hallucination produced collaboratively by HR departments, advertising lobbies, and the Enlightenment’s long shadow. Identity, for H. legibilis, was not lived but administered.

Extinction Event

The fossil record ends abruptly around the Great Blackout, a period in which visibility – formerly a pillar of the species’ survival – became inconvenient. Some scholars argue the species didn’t perish but simply lost the will to document itself, making further study impractical.

Others suggest a quieter transformation: the species evolved into rumour, passing stories orally once more, slipping back into the anonymity from which its ancestors once crawled.

Afterword

A few renegade anthropologists insist Homo legibilis is not extinct at all. They claim it’s still out there, refreshing dashboards, syncing calendars, striving to be neatly interpreted by systems that never asked to understand it. But these are fringe theories. The prevailing view is that the species perished under the weight of its own readability. A cautionary tale, really. When your survival strategy is to be perfectly legible, you eventually disappear the moment the lights flicker.

The Hard Problem Was Never Consciousness

3–5 minutes

It Was Language All Along.

This whole misadventure began sometime in 2018, when I started documenting what has now metastasised into the Language Insufficiency Hypothesis. If I weren’t typing this, I’d be doing the honourable thing and finishing the index, but here we are, procrastinating with purpose. I had a suspicion, even then, that language was up to something. Something slippery. Something evasive. At first, it was just a motley catalogue of weasel words that refused to sit still long enough to be given a meaning. I should have taken the hint when the list kept expanding like a Victorian railway: terminally over-budget and convinced of its own grandeur.

But, naturally, I pressed on.

Audio: NotebookLM summary podcast and conversation around this topic.

At the time I had that slow-burn itch about identity, selfhood, free will, agency – you know, the usual metaphysical tat we’re reared on like a Victorian child raised on laudanum. It wasn’t that these things didn’t exist; it was that the words simply couldn’t bear the conceptual load we’d been piling onto them. And so I found myself in the company of philosophers who either tried to rescue these terms (Dennett, ever the valiant firefighter with a damp match), complicate them (Searle, constructing houses of cards under wind machines), or dissolve them outright (Parfit, smiling serenely as the rest of us panic).

Meanwhile, Strawson was somewhere in the corner insisting experience is all there is, Putnam was in his perennial retraction phase, and I was merely trying to keep my own conceptual apparatus from collapsing like an undercooked soufflé.

I’ll admit I had a long-standing soft spot for Dennett’s consciousness-as-emergence hypothesis. It made a certain intuitive sense at the time: pile up enough neural machinery, sprinkle in some feedback loops, and consciousness would bubble up like steam from a kettle. It felt elegant. It felt mechanistically honest. And, crucially, it made perfect sense within the inherited Realist framework I was still tacitly lugging around. Of course, experience ‘emerges’ from physical processes if you start from a worldview already partitioned into physical substrates and mental phenomena waiting to be accounted for. Dennett wasn’t wrong so much as operating within the same architectural error the rest of us had been marinating in. Once I began reframing the whole encounter through mediation rather than emergence, the elegance dissolved. What had looked like metaphysics turned out to be a conceptual afterimage generated by a language that couldn’t model its own limitations.

And then there was Chalmers.

Ah, the ‘hard problem’. I lost count of how many times it surfaced. Like mould. Or a debt collector. Chalmers’ dilemma – how physical processes give rise to experience – is purportedly the Mount Everest of metaphysics. Yet the more I thought about it, the more it reeked of a linguistic parlour trick. A conceptual magic eye puzzle: stare long enough and a unicorn appears, provided you’ve surrendered your scepticism and a good measure of oxygen.

The problem isn’t that consciousness is ‘hard’. The problem is that the linguistic scaffolding we’re using was never built for this terrain. ‘Experience’. ‘Physical’. ‘Mental’. ‘Explain’. These words pretend to be steel beams when they’re actually damp cardboard.

What remains isn’t a cosmic riddle but a linguistic artefact. A conceptual false path carved by centuries of grammatico-metaphysical enthusiasm – the unfortunate habit of mistaking grammatical symmetry for metaphysical necessity.

Which brings me to the present, having at last gelled the LIH and published the Mediated Encounter Ontology of the World – a relational metaphysics that has the decency not to hallucinate substances it can’t justify. MEOW clears the fog rather neatly: the so-called ‘hard problem’ is only ‘hard’ because we continue to treat ‘mind’ and ‘world’ as two independent substances requiring metaphysical reconciliation. Together, LIH and MEOW provide a double exposure of the problem: LIH shows why the language fails; MEOW shows what the language was failing to describe.

So here we are. I’d like to reconsider Chalmers through the dual lenses of LIH and MEOW – not to ‘solve’ the hard problem, but to show it was never the right problem to begin with. The difficulty isn’t consciousness; it’s the language we’re forced to use, the same language that refuses to sit still, the same language that keeps trying to trick us into mistaking grammatical symmetry for metaphysical necessity.

In a coming post, I intend to pry open that illusion with a crowbar. Delicately, of course. One must be civilised about these things.

Because if language is insufficient – and it is – then perhaps what Chalmers discovered was not the abyss of consciousness, but the limit of the dictionary.