This is awkward. I’d been preparing some posts on the age of consent, and I decided to write a formal essay on ageism. Since the age of consent is a moral hot-button topic for some, I decided to frame the situation politically instead. The setup isn’t much different, but it keeps people’s heads out of the gutter and removes the trigger that many people seem so keen to pull. It’s awkward because none of these posts has been published yet. Spoiler alert, I guess. I could delay this announcement, but I won’t. Here it is.
Democracy is often defended in lofty terms. We are told that citizens are rational agents, capable of judgment, autonomy, and reasoned participation in collective decision-making. Voting, on this story, is not just a procedure. It is the expression of agency by competent participants. That all sounds reassuring.
Audio: NotebookLM summary podcast of this essay and concept.
What’s curious is that no democratic system actually checks whether any of this is true.
There are no assessments of political understanding. No evaluation of judgment. No test of civic competence. You become a fully empowered political agent overnight, not because you demonstrate anything, but because the calendar flips. Turn eighteen. You’re in. This isn’t a minor oversight. It’s the central puzzle my recent preprint explores.
The Proxy Nobody Questions
Modern democracies assign political standing using proxies: simple categorical markers that stand in for more complex qualities. Age is the most obvious. It is treated as a substitute for maturity, judgment, autonomy, and responsibility. But here’s the key point: age doesn’t approximate competence. It replaces it.
If age were a rough indicator, we might expect flexibility at the margins. Exceptions. Supplementary criteria. Some attempt to track the thing it supposedly represents. Instead, we get a hard boundary. Below it, total exclusion. Above it, permanent inclusion. Capacity doesn’t matter on either side. The proxy isn’t helping institutions identify competence. It is doing something else entirely.
Competence Talk Without Competence
Despite this, democratic theory remains saturated with competence language. We are told that participation is grounded in rational agency. That citizens possess the capacities needed for self-government. That legitimacy flows from meaningful participation by autonomous agents. None of this is operationalised.
Competence is never specified, measured, or verified. It functions purely as justificatory rhetoric. A moral vocabulary that explains why inclusion is legitimate, without ever guiding how inclusion actually happens. This isn’t confusion; it’s design.
Why the Gap Doesn’t Collapse
At this point, a reasonable person might expect trouble. After all, if the justification doesn’t match the mechanism, shouldn’t the system wobble? It doesn’t. And the reason matters.
Political participation generates very weak feedback. Outcomes are mediated through institutions. Causal responsibility is diffuse. Success criteria are contested. When things go badly, it’s rarely clear why, or what a better alternative would have been.
Under these conditions, dissatisfaction becomes affective rather than analytic. People sense that things aren’t working, but lack the tools to diagnose how or where the system failed. Crucially, they also lack any way to recalibrate the link between competence and political standing, because that link was never operational in the first place. The system doesn’t aim for optimisation. It aims for stability.
Boundary Drawing Without Saying So
This structure becomes clearest when we look at boundary cases. Why eighteen rather than sixteen? Or twelve? Or twenty-one? There is no competence-based answer. Developmental research consistently shows wide overlap between adolescents and adults, and massive variation within age groups. If competence were taken seriously, age thresholds would be indefensible.
Historically, when competence was operationalised – through literacy tests, for instance – the result was transparent hierarchy and eventual delegitimation. Modern democracies avoid that by keeping competence abstract and proxies neutral-looking. The boundary remains. The justification changes.
What This Does and Does Not Argue
This analysis does not propose reforms. It does not advocate competence testing. It does not suggest lowering or raising the voting age. It does not claim voters are stupid, irrational, or defective. It describes a structural feature of democratic legitimacy:
Democracy works by saying one thing and doing another, and that gap is not accidental. Competence language stabilises legitimacy precisely because it is never put to work. You may think that’s fine. You may think it’s unavoidable. You may think it’s a problem. The paper doesn’t tell you which to choose. It simply insists that if we’re going to talk seriously about democratic legitimacy, we should notice what role competence actually plays. And what it doesn’t.
Fairness, Commensurability, and the Quiet Violence of Comparison
Fairness and Commensurability as Preconditions of Retributive Justice
This is the final part of a 3-part series. Read parts 1 and 2 for a fuller context.
Audio: NotebookLM summary podcast of this topic.
Before the Cards Are Dealt
Two people invoke fairness. They mean opposite things. Both are sincere. Neither can prove the other wrong. This is not a failure of argument. It is fairness working exactly as designed.
Before justice can weigh anything, it must first decide that the things being weighed belong on the same scale. That single move – the assertion that comparison is even possible – quietly does most of the work.
Most people think justice begins at sentencing, or evidence, or procedure. But the real work happens earlier, in a space so normalised it has become invisible. Before any evaluation occurs, the system must install the infrastructure that makes evaluation legible at all.
That infrastructure rests on two foundations:
fairness, which supplies the rhetoric, and
commensurability, which supplies the mathematics.
Together, they form the felt beneath the table – the surface on which the cards can be dealt at all.
1. Why Fairness Is Always Claimed, Never Found
Let’s be precise about what fairness is not.
Fairness is not a metric. You cannot measure it, derive it, or point to it in the world.
Fairness is not a principle with determinate content. It generates no specific obligations, no falsifiable predictions, no uniquely correct outcomes.
Fairness is an effect. It appears after assessment, not before it. It is what you call an outcome when you want it to feel inevitable.
Competing Fairness Is Not a Problem
Consider how disputes actually unfold:
The prosecutor says a long sentence is fair because it is proportional to harm.
The defender says a shorter sentence is fair because it reflects culpability and circumstance.
The victim says any sentence is unfair because nothing restores what was taken.
The community says enforcement itself is unfair because it predictably targets certain groups.
Each claim is sincere. None can be resolved by fairness itself.
That is because fairness has no independent content. It does not decide between these positions. It names them once the system has already decided which will prevail. This is not a bug. It is the feature.
A Fluid Masquerading as an Invariant
In the language of the Language Insufficiency Hypothesis, fairness is a Fluid – a concept whose boundaries shift with context and use – that masquerades as an Invariant, something stable and observer-independent.
The system treats fairness as perceptual, obvious, discoverable. But every attempt to anchor it collapses into:
Intuition (‘It just feels right’)
Precedent (‘This is how we do things’)
Consensus (‘Most people agree’)
None of these establishes fairness. They merely perform it.
And that performance matters. It converts contested metaphysical commitments into the appearance of shared values. It allows institutions to claim neutrality whilst enforcing specificity. Fairness is what the system says when it wants its outputs to feel unavoidable.
2. The Real Gatekeeper: Commensurability
Fairness does rhetorical work. But it cannot function without something deeper.
That something is commensurability: the assumption that different harms, injuries, and values can be placed on a shared scale and meaningfully compared.
Proportionality presupposes commensurability. Commensurability presupposes an ontology of value. And that ontology is neither neutral nor shared.
When Incommensurability Refuses to Cooperate
A parent loses a child to preventable negligence. A corporation cuts safety corners. A warning is ignored. The system moves. Liability is established. Damages are calculated. £250,000 is awarded.
The parent refuses the settlement. Not because the amount is insufficient. But because money and loss are not the same kind of thing. The judge grows impatient. Lawyers speak of closure. Observers mutter about grief clouding judgment. But this is not grief. It is incommensurability refusing to cooperate.
The parent is rejecting the comparison itself. Accepting payment would validate the idea that a child’s life belongs on a scale with currency. The violence is not the number. It is the conversion. The system cannot process this refusal except as emotional excess or procedural obstruction. Not because it is cruel, but because without commensurability the engine cannot calculate.
Two Ontologies of Value
There are two incompatible ontologies at work here. Only one is playable.
Ontology A: The Scalar Model
Harm is quantifiable
Suffering is comparable
Trade-offs are morally coherent
Justice is a balancing operation
Under Ontology A, harms differ in degree, not kind. A broken arm, a stolen car, and a dead child all occupy points on the same continuum. This makes proportionality possible.
Ontology B: The Qualitative Model
Harms are categorical
Some losses are incommensurable
Comparison itself distorts
Justice is interpretive, not calculative
Under Ontology B, harms are different kinds of things. Comparison flattens what matters. To weigh them is to misunderstand them.
Why Only One Ontology Can Play
Retributive justice, as presently constituted, cannot function under Ontology B.
Without scalar values, proportionality collapses. Without comparison, equivalence disappears. Without trade-offs, punishment has no exchange rate.
Ontology B is not defeated. It is disqualified. Structurally, procedurally, rhetorically. The house needs a shared scale. Without it, the game cannot settle accounts.
3. Why Incommensurability Is Treated as Bad Faith
Here is where power enters without announcing itself. Incommensurability does not merely complicate disputes. It stalls the engine. And stalled engines threaten institutional legitimacy.
Systems designed to produce closure must ensure that disputes remain within solvable bounds. Incommensurability violates those bounds. It suggests that resolution may be impossible – or that the attempt to resolve does further harm. So the system reframes the problem.
Not as an alternative ontology, but as:
Unreasonableness
Extremism
Emotional volatility
Refusal to engage in good faith
Reasonableness as Border Control
This is why reasonableness belongs where it does in the model. Not as an evaluative principle, but as a gatekeeping mechanism.
Reasonableness does not assess claims. It determines which claims count as claims at all. This is how commensurability enforces itself without admitting it is doing so. When someone refuses comparison, they are not told their ontology is incompatible with retributive justice. They are told to be realistic.
Ontological disagreement is converted into:
A tone problem
A personality defect
A failure to cooperate
The disagreement is not answered. It is pathologised.
4. Why These Debates Never Resolve
This returns us to the Ontology–Encounter–Evaluation model.
People argue fairness as if adjusting weights would fix the scale. They debate severity, leniency, proportionality.
But when two sides inhabit incompatible ontologies of value, no amount of evidence or dialogue bridges the gap. The real disagreement is upstream.
A prosecutor operating under scalar harm and an advocate operating under incommensurable injury are not disagreeing about facts. They are disagreeing about what kind of thing harm is.
Fairness cannot resolve this, because fairness presupposes the very comparison under dispute. This is why reform debates feel sincere and go nowhere. Outcomes are argued whilst ontological commitments remain invisible.
Remediation Requires Switching Teams
As argued elsewhere, remediation increasingly requires switching teams.
But these are not political teams. They are ontological commitments.
Ontologies are not held like opinions. They are held like grammar. You do not argue someone out of them. At best, you expose their costs. At worst, you force others to operate within yours by disqualifying alternatives.
Retributive justice does the latter.
5. What This Means (Without Offering a Fix)
Justice systems are not broken. They are optimised. They are optimised for closure, manageability, and the appearance of neutrality. Fairness supplies the rhetoric. Commensurability supplies the mathematics. Together, they convert contestable metaphysical wagers into procedural common sense.
That optimisation has costs:
Disagreements about value become illegible
Alternative ontologies become unplayable
Dissent becomes pathology
Foundations disappear from view
If justice feels fair, it is because the comparisons required to question it were never permitted.
Ontology as Pre-emptive Gatekeeping
None of this requires conspiracy.
Institutions do not consciously enforce ontologies. They do not need to.
They educate them. Normalise them. Proceduralise them. Then treat their rejection as irrationality.
By the time justice is invoked, the following have already been installed as reality:
That persons persist over time in morally relevant ways
That agents meaningfully choose under conditions that count
That harms can be compared and offset
That responsibility can be localised
That disagreement beyond a point is unreasonable
None of these are discovered. All are rehearsed.
A law student learns that ‘the reasonable person’ is a construct. By year three, they use it fluently. It no longer feels constructed.
This is not indoctrination. It is fluency.
And fluency is how ontologies hide.
By the time an alternative appears – episodic selfhood, incommensurable harm, distributed agency – it does not look like metaphysics. It looks like confusion.
Rationality as Border Control
The system does not say: we reject your ontology.
It says: that’s not how the world works.
Or worse: you’re being unreasonable.
Ontological disagreement is reframed as a defect in the person. And defects do not need answers. They need management.
This is why some arguments feel impossible to have. One ontology has been naturalised into common sense. The other has been reclassified as error.
The Final Irony
The more fragile the foundations, the more aggressively they must be defended as self-evident.
Free will is taught as obvious.
Fairness is invoked as perceptual.
Responsibility is treated as observable.
Incommensurability is treated as sabotage.
Not because the system is confident.
Because it cannot afford not to be.
The Point
Justice does not merely rely on asserted ontologies. It expends enormous effort ensuring they never appear asserted at all.
By the time the cards are dealt, the rules have already been mistaken for reality. That is the felt beneath the table. Invisible. Essential. Doing all the work. And if you want to challenge justice meaningfully, you do not start with outcomes. You start by asking:
What comparisons are we being asked to accept as natural? And what happens to those who refuse?
Most people never make that move. Not because it is wrong. But because by the time you notice the game is rigged, you are already fluent in its rules. And fluency feels like truth.
Final Word
Why write these assessments? Why care?
With casinos, as with cricket, we understand something fundamental: these are games. We can learn the rules. We can decide whether to play. We can walk away.
Justice is different. Justice is not opt-in. It is imposed. You do not get to negotiate the rules, the scoring system, or the house assumptions about what counts as a move. Once you are inside, even dissent must be expressed in the system’s own grammar. Appeals do not question the game; they replay it under slightly altered conditions.
You may contest the outcome. You may plead for leniency. You may argue fairness. You may not ask why chips are interchangeable with lives, why losses must be comparable, or why refusing comparison itself counts as misconduct.
Imagine being forced into a casino. Forced to play. Forced to stake things you do not believe are wagerable. Then told, when you object, that the problem is not the game, but your attitude toward it.
That is why these assessments matter. Not to declare justice illegitimate. Not to offer a fix. But to make visible the rules that pretend not to be rules at all. Because once you mistake fluency for truth, the house no longer needs to rig the game.
Now that A Language Insufficiency Hypothesis has been put to bed — not euthanised, just sedated — I can turn to the more interesting work: instantiating it. This is where LIH stops being a complaint about words and starts becoming a problem for systems that pretend words are stable enough to carry moral weight.
What follows is not a completed theory, nor a universal schema. It’s a thinking tool. A talking point. A diagram designed to make certain assumptions visible that are usually smuggled in unnoticed, waved through on the strength of confidence and tradition.
The purpose of this diagram is not to redefine justice, rescue it, or replace it with something kinder. It is to show how justice is produced. Specifically, how retributive justice emerges from a layered assessment process that quietly asserts ontologies, filters encounters, applies normative frames, and then closes uncertainty with confidence.
Audio: NotebookLM summary podcast of this topic.
Most people are willing to accept, in the abstract, that justice is “constructed”. That concession is easy. What is less comfortable is seeing how it is constructed — how many presuppositions must already be in place before anything recognisable as justice can appear, and how many of those presuppositions are imposed rather than argued for.
The diagram foregrounds power, not as a conspiracy or an optional contaminant, but as an ambient condition. Power determines which ontologies are admissible, which forms of agency count, which selves persist over time, which harms are legible, and which comparisons are allowed. It decides which metaphysical configurations are treated as reasonable, and which are dismissed as incoherent before the discussion even begins.
Justice, in this framing, is not discovered. It is not unearthed like a moral fossil. It is assembled. And it is assembled late in the process, after ontology has been assumed, evaluation has been performed, and uncertainty has been forcibly closed.
This does not mean justice is fake. It means it is fragile. Far more fragile than its rhetoric suggests. And once you see that fragility — once you see how much is doing quiet, exogenous work — it becomes harder to pretend that disagreements about justice are merely disagreements about facts, evidence, or bad actors. More often, they are disagreements about what kind of world must already be true for justice to function at all.
I walk through the structure and logic of the model below. The diagram is also available as a PDF, because if you’re going to stare at machinery, you might as well be able to zoom in on the gears.
Why Retributive Justice (and not the rest of the zoo)
Before doing anything else, we need to narrow the target.
“Justice” is an infamously polysemous term. Retributive, restorative, distributive, procedural, transformative, poetic, cosmic. Pick your flavour. Philosophy departments have been dining out on this buffet for centuries, and nothing useful has come of letting all of them talk at once.
This is precisely where LIH draws a line.
The Language Insufficiency Hypothesis is not interested in pedestrian polysemy — cases where a word has multiple, well-understood meanings that can be disambiguated with minimal friction. That kind of ambiguity is boring. It’s linguistic weather.
What LIH is interested in are terms that appear singular while smuggling incompatible structures. Words that function as load-bearing beams across systems, while quietly changing shape depending on who is speaking and which assumptions are already in play.
“Justice” is one of those words. But it is not usefully analysable in the abstract.
So we pick a single instantiation: Retributive Justice.
Why?
Because retributive justice is the most ontologically demanding and the most culturally entrenched. It requires:
a persistent self
a coherent agent
genuine choice
intelligible intent
attributable causation
commensurable harm
proportional response
In short, it requires everything to line up.
If justice is going to break anywhere, it will break here.
Retributive justice is therefore not privileged in this model. It is used as a stress test.
The Big Picture: Justice as an Engine, Not a Discovery
The central claim of the model is simple, and predictably unpopular:
Justice is not discovered. It is produced.
Not invented in a vacuum, not hallucinated, not arbitrary — but assembled through a process that takes inputs, applies constraints, and outputs conclusions with an air of inevitability.
The diagram frames retributive justice as an assessment engine.
An engine has:
inputs
internal mechanisms
thresholds
failure modes
and outputs
It does not have access to metaphysical truth. It has access to what it has been designed to process.
The justice engine takes an encounter — typically an action involving alleged harm — and produces two outputs:
Desert (what is deserved),
Responsibility (to whom it is assigned).
Everything else in the diagram exists to make those outputs possible.
The Three Functional Layers
The model is organised into three layers. These are not chronological stages, but logical dependencies. Each layer must already be functioning for the next to make sense.
1. The Constitutive Layer
(What kind of thing a person must already be)
This layer answers questions that are almost never asked explicitly, because asking them destabilises the entire process.
What counts as a person?
What kind of self persists over time?
What qualifies as an agent?
What does it mean to have agency?
What is a choice?
What is intent?
Crucially, these are not empirical discoveries made during assessment. They are asserted ontologies.
The system assumes a particular configuration of selfhood, agency, and intent as a prerequisite for proceeding at all. Alternatives — episodic selves, radically distributed agency, non-volitional action — are not debated. They are excluded.
This is the first “happy path”.
If you do not fit the assumed ontology, you do not get justice. You get sidelined into mitigation, exception, pathology, or incoherence.
2. The Encounter Layer
(What is taken to have happened)
This layer processes the event itself:
an action
resulting harm
causal contribution
temporal framing
contextual conditions
motive (selectively)
This is where the rhetoric of “facts” tends to dominate. But the encounter is never raw. It is already shaped by what the system is capable of seeing.
Causation here is not metaphysical causation. It is legible causation. Harm is not suffering. It is recognisable harm. Context is not total circumstance. It is admissible context.
Commensurability acts as a gatekeeper between encounter and evaluation: harms must be made comparable before they can be judged. Anything that resists comparison quietly drops out of the pipeline.
3. The Evaluative Layer
(How judgment is performed)
Only once ontology is assumed and the encounter has been rendered legible does evaluation begin:
proportionality
accountability
normative ethics
fairness (claimed)
reasonableness
bias (usually acknowledged last, if at all)
This layer presents itself as the moral heart of justice. In practice, it is the final formatting pass.
Fairness is not discovered here. It is declared. Reasonableness does not clarify disputes. It narrows the range of acceptable disagreement. Bias is not eliminated. It is managed.
At the end of this process, uncertainty is closed.
That closure is the moment justice appears.
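If it helps to see the shape of the machinery rather than read about it, here is a toy sketch of the engine in code. To be clear, this is my illustration, not the diagram: the names (Encounter, justice_engine), the boolean ontology check, and the flat exchange rate are all invented assumptions. The only point it makes is structural: the outputs exist because the gatekeeping upstream has already done its work.

```python
from dataclasses import dataclass
from typing import Optional, Tuple, Union

# A deliberately crude, hypothetical sketch of the assessment-engine reading of
# retributive justice. Every name, flag, and "exchange rate" is an assumption
# made for illustration, not something the diagram or the essay specifies.

@dataclass
class Encounter:
    action: str
    harm: Optional[float]        # only legible, commensurable harm gets a number
    fits_assumed_ontology: bool  # persistent self, coherent agent, genuine choice

def constitutive_layer(e: Encounter) -> bool:
    # The first "happy path": if you don't fit the asserted ontology,
    # you are sidelined into mitigation, exception, or pathology.
    return e.fits_assumed_ontology

def encounter_layer(e: Encounter) -> Optional[float]:
    # Commensurability as gatekeeper: harm that resists being placed on
    # a shared scale quietly drops out of the pipeline here.
    return e.harm

def evaluative_layer(harm: float) -> Tuple[float, str]:
    # The final formatting pass: proportionality presupposes the scale,
    # so desert is just commensurated harm times an asserted exchange rate.
    exchange_rate = 1.0
    desert = harm * exchange_rate
    responsibility = "the agent"  # responsibility is localised by assumption
    return desert, responsibility

def justice_engine(e: Encounter) -> Union[str, Tuple[float, str]]:
    if not constitutive_layer(e):
        return "sidelined"        # excluded before evaluation ever begins
    harm = encounter_layer(e)
    if harm is None:
        return "unreasonable"     # incommensurability reframed as bad faith
    return evaluative_layer(harm)  # outputs: desert and responsibility
```

Note what the sketch cannot do: it has no branch for questioning its own ontology check or its exchange rate. Disagreement at that level never reaches the function at all, which is the whole point.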
Why Disagreement Fails Before It Starts
At this point, dissent looks irrational.
The system has:
assumed an ontology
performed an evaluation
stabilised the narrative through rhetoric
and produced outputs with institutional authority
To object now is not to disagree about evidence. It is to challenge the ontology that made assessment possible in the first place.
And that is why so many justice debates feel irresolvable.
They are not disagreements within the system. They are disagreements about which system is being run.
LIH explains why language fails here. The same words — justice, fairness, responsibility, intent — are being used across incompatible ontological commitments. The vocabulary overlaps; the worlds do not.
The engine runs smoothly. It just isn’t the same engine for everyone.
Where This Is Going
With the structure in place, we can now do the slower work:
unpacking individual components
tracing where ontological choices are asserted rather than argued
showing how “reasonableness” and “fairness” operate as constraint mechanisms
and explaining why remediation almost always requires a metaphysical switch, not better rhetoric
Justice is not broken. It is doing exactly what it was built to do.
That should worry us more than if it were merely malfunctioning.
This essay is already long, so I’m going to stop here.
Not because the interesting parts are finished, but because this is the point at which the analysis stops being descriptive and starts becoming destabilising.
The diagram you’ve just walked through carries a set of suppressed footnotes. They don’t sit at the margins because they’re trivial; they sit there because they are structurally prior. Each one represents an ontological assertion the system quietly requires in order to function at all.
By my count, the model imposes at least five such ontologies. They are not argued for inside the system. They are assumed. They arrive pre-installed, largely because they are indoctrinated, acculturated, and reinforced long before anyone encounters a courtroom, a jury, or a moral dilemma.
Once those ontologies are fixed, the rest of the machinery behaves exactly as designed. Disagreement downstream is permitted; disagreement upstream is not.
In a follow-up essay, I’ll unpack those footnotes one by one: where the forks are, which branch the system selects, and why the alternatives—while often coherent—are rendered unintelligible, irresponsible, or simply “unreasonable” once the engine is in motion.
That’s where justice stops looking inevitable and starts looking parochial.
And that’s also where persuasion quietly gives up.
Written by Bry Willis and ChatGPT 5.2 after a couple of days of back and forth
A great deal of philosophy begins with the claim that we ought to examine our assumptions. Fewer philosophers seem interested in examining the mechanisms that decide which assumptions are allowed to count as philosophy in the first place.
This is not a polemic about the Analytic–Continental divide. It’s an observation about how that divide quietly maintains itself. The immediate provocation was banal. Almost embarrassingly so.
Are we cold, or do we have cold?
In English, the answer feels obvious. I am cold. The grammar barely registers. In French, Italian, or German, the structure flips. One has cold. Or hunger. Or thirst. Or age. Or a name, understood as something one performs rather than something one is. I spoke about this here and here. Indulge this link to the original position being argued.
On the surface, this looks like a curiosity for linguistics students. A translation quirk. A grammatical footnote. But grammar is rarely innocent.
Audio: NotebookLM summary podcast on this topic.
Grammar as Ontological Scaffolding
The verbs to be and to have are not neutral carriers. They quietly encode assumptions about identity, property, possession, and stability.
When I say I am cold, I cast coldness as a property of the self. It becomes something like height or nationality: a state attributable to the person. When I say I have cold, the experience is externalised. The self remains distinct from the condition it undergoes. Neither option is metaphysically clean.
Both structures smuggle in commitments before any philosophy has been done. One risks inflating a transient sensation into an ontological state. The other risks reifying it into a thing one owns, carries, or accumulates. My own suggestion in a recent exchange was a third option: sensing.
Cold is not something one is or has so much as something one feels. A relational encounter. An event between organism and environment. Not an identity predicate, not a possession.
This suggestion was met with a fair pushback: doesn’t saying that cold ‘belongs to the world’ simply introduce a different metaphysical assumption? Yes. It does. And that response neatly demonstrates the problem.
When Grammar Starts Doing Philosophy
The original claim was idiomatic, not ontological. It was a negative gesture, not a positive thesis. The point was not to relocate cold as a mind-independent substance floating about like a rock. It was to resist treating it as an essence of the person. But once you slow down, you see how quickly everyday grammar demands metaphysical loyalty.
Being invites substance. Having invites inventory. Sensing keeps the relation open, but even that makes people nervous. This nervousness is instructive. It reveals how much metaphysical weight we quietly load onto grammatical habits simply because they feel natural. And that feeling of naturalness matters more than we like to admit.
Two Philosophical Temperaments, One Linguistic Groove
At this point, the temptation is to draw a clean line:
On one side: the Anglo-American Analytic tradition, comfortable treating mental states as properties, objects, or items to be catalogued. Locke’s introspective inventory. Hume’s bundle. Logical positivism’s clean surfaces.
On the other: the Continental tradition, suspicious of objectification, insisting on an irreducible subject for whom experience occurs but who is never identical with its contents. Kant, Husserl, Heidegger, Sartre.
The grammar aligns disturbingly well. Languages that habitually say I am cold make it feel natural to treat experience as something inspectable. Languages that insist on having or undergoing experiences keep the subject distinct by default.
This is not linguistic determinism. English speakers can read phenomenology. German speakers can do analytic philosophy. But language behaves less like a prison and more like a grooved path. Some moves feel obvious. Others feel forced, artificial, or obscure.
Philosophies do not arise from grammar alone. But grammar makes certain philosophies feel intuitively right long before arguments are exchanged.
Where Gatekeeping Enters Quietly
This brings us to the part that rarely gets discussed.
The Analytic–Continental divide persists not only because of philosophical disagreement, but because of institutional reinforcement. Peer review, citation norms, and journal cultures act as boundary-maintenance mechanisms. They are not primarily crucibles for testing ideas. They are customs checkpoints for recognisability.
I have been explicitly cautioned, more than once, to remove certain figures or references depending on the venue. Don’t mention late Wittgenstein here. Don’t cite Foucault there. Unless, of course, you are attacking them. This is not about argumentative weakness. It’s about genre violation.
Hybrid work creates a problem for reviewers because it destabilises the grammar of evaluation. The usual criteria don’t apply cleanly. The paper is difficult to shelve. And unshelvable work is treated as a defect rather than a signal. No bad faith is required. The system is doing what systems do: minimising risk, preserving identity, maintaining exchange rates.
Cold as a Diagnostic Tool
The reason the cold example works is precisely because it is trivial.
No one’s career depends on defending a metaphysics of chilliness. That makes it safe enough to expose how quickly grammar starts making demands once you pay attention.
If something as mundane as cold wobbles under scrutiny, then the scaffolding we rely on for far more abstract notions – self, identity, agency, consciousness – should make us uneasy.
And if this is true for human languages, it becomes far more pressing when we imagine communication across radically different forms of life.
Shared vocabulary does not guarantee shared metaphysics. Familiar verbs can conceal profound divergence. First contact, if it ever occurs, will not fail because we lack words. It will fail because we mistake grammatical comfort for ontological agreement.
A Modest Conclusion
None of this settles which philosophical tradition is ‘right’. That question is far less interesting than it appears. What it does suggest is that philosophy is unusually sensitive to linguistic scaffolding, yet unusually resistant to examining the scaffolding of its own institutions.
We pride ourselves on questioning assumptions while quietly enforcing the conditions under which questions are allowed to count. Cold just happens to be a good place to start noticing.
A Footnote on Linguistic Determinism
It’s worth being explicit about what this is not. This is not an endorsement of strong linguistic determinism, nor a revival of Sapir–Whorf in its more ambitious forms. English speakers are not condemned to analytic philosophy, nor are Romance-language speakers predestined for phenomenology.
Language does not imprison thought. It inclines it.
Grammar operates less like a set of handcuffs and more like a well-worn path. Some moves feel effortless. Others require deliberate resistance. Philosophical traditions co-evolve with these habits, reinforcing what already feels natural while treating alternatives as strained, obscure, or unnecessary.
More precisely, I need less sleep and longer days – preferably twice as long. I’ve been writing almost non-stop for the better part of a week: fourteen- to sixteen-hour days, fuelled by irritation and the stubborn belief that if I just keep reading, something will finally click into place.
I’m not complaining. This is a virtuous cycle. Reading leads to writing. Writing demands more reading. Eventually, the loop closes into something that looks suspiciously like progress.
Audio: Short NotebookLM summary podcast on this topic.
Still, there’s a bottleneck.
Because some of this work – the work I’m most excited about – I’m deliberately not publishing yet. Journals, bless their glacial hearts, don’t much care for prior publication. So ideas sit in limbo for six to eighteen months, locked in a room like argumentative houseplants, slowly growing sideways.
From the perspective of someone who thinks in public, this is maddening.
Now add AI to the mix.
This is where things get dangerous.
I’ll feed ChatGPT a thesis, a skeletal structure, notes, and references. I ask what I’m missing. It obliges – often helpfully – by pointing me toward adjacent thinkers and relevant literature, complete with page numbers. From there, I verify, hunt down the sources, skim, read, discard, or integrate.
And every so often, I stumble across something that makes me swear out loud.
This week, it was Bernard Williams.
I’ve cited Ethics and the Limits of Philosophy before. But this time, I actually sat down and read it properly. Which immediately prompted the thought:
Why didn’t I read this sooner?
Williams dismantles moral objectivity with the calm precision of someone who knows the Enlightenment project has already lost – he just hasn’t told everyone yet. Thick and thin moral concepts, locality, non-extensibility, the collapse of universal moral reason at scale – yes, yes, yes. He published this in 1985. Fine. I’ll survive.
But then I went further.
Williams shows that morality fails between people at scale. I argue that it fails within a single person over time.
That became my second paper.
And this is where things went off the rails.
Because in the course of writing that paper, I dipped into Hart’s The Concept of Law and Endicott’s Vagueness in Law. These are not fringe polemics. These are law textbooks. For law students. People allegedly trained to parse language for a living.
And what I found was… astonishing.
Let me paraphrase the admissions:
First:
Image: When the law is vague, judicial decisions may be unconstrained by the law.
Endicott: “By upsetting the standard view of adjudication, the book reaches conclusions that some people find horrible: when the law is vague, judicial decision-making will in some cases be unconstrained by the law. It is impossible in principle for judges always to treat like cases alike. Predictability in the law is to some extent unattainable. Moreover, I argue in Chapter 9 that vagueness cannot be eliminated from law. These conclusions might seem to imply that the rule of law is, at least to some extent, conceptually impossible.”
Then:
Image: Vagueness is inevitable. Deal with it.
Endicott: “Secondly, I do not claim that vagueness is a purely linguistic feature of law. And the book relies on no claim about the relation between law and language. These points must be stressed, because vagueness is commonly thought of as a linguistic phenomenon. And, indeed, most of the discussion in the book concerns the vagueness of linguistic expressions. But the indeterminacy claim is not just a claim about language (so I argue in Chapter 3.12). So, for example, the claim in Chapter 6 that general evaluative and normative expressions are necessarily vague is not just a claim about the word ‘good’ and the word ‘right’: it is a claim about any linguistic expression in which we could conceivably express general evaluative and normative judgments. It therefore includes a claim about what is good and what is right.”
Then, almost casually:
Image: Whether law is morally valuable to a community is not my concern. Justice and the rule of law may be political virtues — or not. I don’t defend them here.
Endicott: “Disputes between legal positivists and natural law theorists have concerned not only the relation between law and adjudication, but also the relation between law and morality. Here I take no general position on the intrinsic moral value of law. I do rely on the claims that law can be valuable to a community, and that justice and the rule of law are two ideals which a community can intelligibly pursue as political virtues. Even those claims are controversial (Kelsen and some of the theorists discussed in Chapter 2 have controverted them). But I do not defend them here. This work aims to show that the indeterminacy claim does nothing to threaten the pursuit of justice and the rule of law. Those ideals cannot be well understood if we try to make them depend on determinacy in the requirements of the law.”
Say what?
Read together – not even uncharitably – the message is clear:
Law is indeterminate. Indeterminacy is unavoidable. And whether law is good, just, or valuable is… optional.
The subtext isn’t even hiding.
Law is a power structure first. If it happens to align with justice, fairness, or communal value, well, lovely. A bonus. Champagne all round.
This does not sit well with a sceptical cynic.
What really broke me, though, wasn’t the argument itself. Philosophers make grim claims all the time. What broke me was the silence around it.
How does this pass under the radar?
How do cohorts of law students – drilled in textual analysis, trained to read footnotes like tea leaves – not trip over this elephant stampede? How do they graduate believing they’re upholding inalienable rights, rather than participating in a managed system of coercion that occasionally behaves itself?
Self-preservation, I suppose. Wilful ignorance. Professional cosplay.
I’ve seen this before.
As an economist, ask the wrong foundational question, and you’re instantly radioactive. Persona non grata. Careers don’t end with explosions — they end with polite silence and no invitations.
I probably should have committed to heterodox philosophy from the start. Or stayed a musician.
I remember leaving graduate school, putting on a suit, and feeling like I was wearing a costume. Cosplay, before we had the word. “Business professional” as a role, not an identity.
I’ve always felt intellectually capable of doing whatever I set out to do. My temperament, however, has never agreed to play along.
Which is perhaps why diagnosing ontologies comes so naturally. Once you see the scaffolding, you can’t unsee it – whether it’s metaphysics, jurisprudence, or a corporate department pretending it has a mission.
Then David Graeber came along with Bullshit Jobs, and I remember thinking: Thank God. It’s not just me.
So yes. I need a break.
I need sleep. I need silence. I need to stop reading law books that accidentally admit they’re about power and then act surprised when someone notices.