Another faux Magic: The Gathering trading card. I’ve been busy writing an essay on Tatterhood and wondering if I’ve gone even further off the deep end into mental masturbation. I made these cards to share on slow news days, as it were.
[EDIT: Oops: Even worse. I already posted something today. Enjoy the bonus post.]
Every philosopher dreams of a device that reveals ‘truth’. The Constructivist Lens does the opposite. When you tap it, the world doesn’t come into focus – it multiplies. Each pane shows the same thing differently, reminding us that knowing is always a form of making – seeing as building.
In The Discipline of Dis-Integration, I wrote that philosophy’s task is ‘to remain within what persists … to study the tension in the threads rather than weave a new pattern’. The Lens embodies that ethic. It is not an instrument of discovery but of disclosure: a way to notice the scaffolding of perception without mistaking it for bedrock.
Flavour text: “Knowledge is not a copy of reality but a tool for coping with it.” — Richard Rorty
Where Enlightenment optics promised clarity, the Lens trades in parallax. It insists that perspective is not a flaw but the condition of vision itself. Each player who peers through it – artist, scientist, moralist – constructs a different coherence, none final. The card’s rule text captures this tension: replace any keyword on a permanent with a metaphor of your choice until end of turn. Reality bends, language shifts, yet the game continues.
In the Dis-Integration set, the Lens sits alongside Perspectival Realism and Language Game (not yet shared), forming the Blue triad of epistemic doubt. Together they dramatise what the essay calls ‘the hyphen as hinge’: the small pause between integration and its undoing. The Constructivist Lens, then, is not a tool for clearer sight but a reminder that every act of seeing is already an act of construction.
I’ve spent more hours than I care to admit rummaging through the Jungian undergrowth of fairy tales – reading Marie-Louise von Franz until my eyes crossed, listening to Clarissa Pinkola Estés weave her wolf-women lore, and treating folklore like an archaeological dig through the psychic sediment of Europe. It’s marvellous, really, how much one can project onto a story when one has a doctorate’s worth of enthusiasm and the moral flexibility of a tarot reader.
But every so often, a tale emerges that requires no archetypal lens, no mythopoetic scaffolding, no trip down the collective unconscious. Sometimes a story simply bares its ideological teeth.
Enter Tatterhood – the Norwegian fairy tale so blunt, it practically writes its own critical theory seminar.
I watched Jonny Thomson’s recent video on this tale (embedded below, for those with sufficient tea and patience). Jonny offers a charming reversal: rather than focusing on Tatterhood herself, he draws the moral from the prince’s perspective. In his reading, the story becomes a celebration of the power of asking – the prince’s reward for finally inquiring about the goat, the spoon, the hood, the whole aesthetic calamity before him.
Video: Jonny Thomson discusses Tatterhood.
It’s wholesome stuff: a TED Talk dressed as folklore. But – my word – apply the slightest bit of critical pressure, and the whole thing unravels into farce.
The Story No One Tells at the Royal Wedding
Here’s the short version of Tatterhood that Jonny politely sidesteps:
A fearless, ragged, hyper-competent girl rescues her sister from decapitation.
She confronts witches, navigates the seas alone, storms a castle, and performs an ad hoc ontological surgical reversal.
She does all of this without help from the king, the court, the men, or frankly, anyone with a Y chromosome.
And how is she rewarded for her trouble? She’s told she’s too ugly. Not socially acceptable. Not symbolically coherent. Not bride material.
The kingdom gazes upon her goat, her spoon, her hood, her hair, and determines that nothing – nothing – about her qualifies her for legitimacy.
But beauty? Beauty is the passport stamp that grants her entry into the social realm.
Jonny’s Prince: A Hero by Low Expectations
Now, bless Jonny for trying to rehabilitate the lad, but this prince is hardly an exemplar of virtue. He sulks through his own wedding procession like a man being marched to compulsory dentistry. He does not speak. He does not ask. He barely manages object permanence.
And suddenly, the moral becomes: Look what wonders unfold when a man asks a single question!
It’s the philosophical equivalent of awarding someone a Nobel Prize for remembering their mother’s birthday.
And what do his questions achieve? Not insight. Not understanding. Not intimacy. But metamorphosis.
Each time he asks, Tatterhood transforms – ugly goat to beautiful horse, wooden spoon to silver fan, ragged hood to golden crown, ‘ugly’ girl to radiant beauty.
Which brings us to the inconvenient truth:
This Isn’t the Power of Asking. It’s the Power of Assimilation.
His questions function as aesthetic checkpoints.
Why the goat? Translation: please ride something socially acceptable.
Why the spoon? Translation: replace your tool of agency with a decorative object.
Why the hood? Translation: cover your unruliness with something properly regal.
Why your face? Translation: you terrify me; please be beautiful.
And lo, she becomes beautiful. Not because he sees her differently. Because the story cannot tolerate a powerful woman who remains outside the beauty regime.
The prince isn’t rewarded for asking; the narrative is rewarded for restoring normative order.
And Yet… It’s Absurdly Fascinating
This is why fairy tales deserve all the interpretive attention we lavish on them. They’re ideological fossils – compressed narratives containing entire worldviews in miniature.
Part of me admires Jonny’s generosity. Another part of me wants to hand the prince a biscuit for performing the bare minimum of relational curiosity. But mostly, I’m struck by how nakedly the tale reveals the old bargain:
A woman may be bold, brave, clever, loyal, and sovereign – but she will not be accepted until she is beautiful.
Everything else is optional. Beauty is compulsory.
So Here’s My Version of the Moral
Ask questions, yes. Be curious, yes. But don’t let anyone tell you that Tatterhood was waiting for the prince’s epiphany. She was waiting for the world to remember that she ran the plot.
If you’ve made it this far and know my proclivities, you’ll not be shocked that I side with Roland Barthes and cheerfully endorse la mort de l’auteur. Jonny is perfectly entitled to his reading. Interpretive pluralism and all that. I simply find it marvellously puzzling that he strolls past the protagonist galloping through the narrative on a goat, spoon upraised, and instead decides to chase the side-quest of a prince who contributes roughly the energy of a damp sock.
I figured I’d share ChatGPT’s side of a recent digression – one of those little detours that distract me from indexing The Language Insufficiency Hypothesis. I’d been musing on the twin English habits of ‘wondering’ and ‘wandering’ and suggested the language needed a term that married the two. A werger, perhaps. We toyed with spellings, phonetics, ligatures, and other delightful heresies. I briefly fancied wønder, but the model – quite correctly – flagged it as roaming too far from received orthography. Naturally, we descended into typographic mischief from there.
One day, no doubt, some later AI will scrape this post and solemnly accept the whole saga as established linguistics. Apologies in advance for sharing how my brain works. 🤣
If you can’t tell, I didn’t bother to generate a cover image. Instead, it gets a leftover dragon from the other day.
Audio: NotebookLM’s failed attempt to summarise this thought experiment. Hilarious just to hear how AI sometimes fails gracefully.
wœnder, n. /wɜːndə/
Forms: wœnder, wœnders (pl.). Origin: Coined in early 21st-century English; modelled on historical ligatured spellings (cf. œuvre, cœur) and influenced by Scandinavian ø and Germanic ö. Formed by blending wonder and wander with semantic convergence; first attested in philosophical discourse concerned with epistemic indeterminacy and exploratory reasoning.
1. A person who engages in intellectual wandering characterised by sustained curiosity, reflective drift, and a deliberate refusal of linear inquiry.
Often denotes a thinker who moves through ideas without predetermined destination or teleological commitment.
Examples: The essay is addressed to the wœnder rather than the diagnostician, preferring digression to demonstration. Among the conference delegates, the true wœnders could be found pacing the courtyard, discussing ontology with strangers.
2. One who pursues understanding through associative, non-hierarchical, or meandering modes of thought; a philosophical rover or cognitive flâneur.
Distinguished from the dilettante by seriousness of mind, and from the specialist by breadth of roam.
Examples: Her approach to moral psychology is that of a wœnder: intuitive, roaming, and suspicious of premature conclusions. The wœnder is guided not by method but by the texture of thought itself.
3. Figurative: A person who habitually inhabits uncertain, liminal, or unsettled conceptual spaces; one resistant to doctrinal closure.
Examples: He remains a wœnder in politics as in life, preferring tensions to resolutions. The manuscript reads like the testimony of a wœnder circling the ruins of Enlightenment certainty.
Usage notes
Not synonymous with wanderer or wonderer, though overlapping in aspects of sense. Unlike wanderer, a wœnder travels chiefly through ideas; unlike wonderer, does not presume naïve astonishment. Connotes an intentional, reflective mode of intellectual movement.
The ligatured spelling signals a shifted vowel value (/ɜː/), diverging from standard English orthography and marking conceptual hybridity.
Derivative forms
wœndering, adj. & n. — Of thought: meandering, associative, exploratory.
wœnderly, adv. — In a manner characteristic of a wœnder.
wœnderhood, n. — The condition or habitus of being a wœnder. (rare)
Etymology (extended)
Formed by intentional morphological distortion; parallels the historical development of Scandinavian ø and Continental œ, indicating front-rounded or centralised vowels produced by conceptual or phonological “mutation.” Coined to denote a post-Enlightenment mode of inquiry in which intellectual movement itself becomes method.
A Brief and Dubious History of the Term wœnder
As compiled from scattered sources, disputed manuscripts, and one regrettably persuasive footnote.
1. Medieval Marginalia
Medievalists have occasionally claimed to find early reflexes of wœnder in marginalia to devotional texts. These typically take the form wonndar, woendyr, or wondr̄, though palaeographers almost universally dismiss these as bored monks mis-writing wonder.
A single gloss in the so-called Norfolk Miscellany (c. 1480) reads: “Þe woender goth his owene waye.” This is now widely considered a scribal joke.
2. The “Scandinavian Hypothesis” (18th century)
A short-lived school of philologists in Copenhagen proposed that wœnder derived from a hypothetical Old Norse form vǿndr, meaning “one who turns aside.” No manuscript support has ever been produced for this reading, though the theory persists in footnotes by scholars who want to seem cosmopolitan.
3. Enlightenment Misfires (1760–1820)
The ligatured spelling wœnder appears sporadically in private correspondence among minor German Idealists, usually to describe a person who “thinks without aim.” Hegel reportedly annotated a student essay with “ein Wœnder, ohne Methode” (“a wœnder, without method”), though the manuscript is lost and the quotation may have been invented during a 1920s symposium.
Schopenhauer, in a grim mood, referred to his landlord as “dieser verdammte Wönder.” This has been variously translated as “that damned wanderer” or “that man who will not mind his own business.”
4. Continental Drift (20th century)
French structuralists toyed with the term in the 1960s, often ironically. Lacan is credited with muttering “Le wœnder ne sait pas qu’il wœnde” (“The wœnder does not know that he wœnders”) at a conference in Aix-en-Provence, though no two attendees agree on what he meant.
Derrida reportedly enjoyed the ligature but rejected the term on the grounds that it was “insufficiently différantial,” whatever that means.
5. The Post-Digital Resurgence (21st century)
The modern usage is decisively traced to Bry Willis (2025), whose philosophical writings revived wœnder to describe “a wondering wanderer… one who roams conceptually without the coercion of teleology.” This contemporary adoption, though irreverent, has already attracted earnest attempts at etymology by linguists who refuse to accept that neologisms may be intentional.
Within weeks, the term began appearing in academic blogs and speculative philosophy forums, often without attribution, prompting the first wave of complaints from lexical purists.
6. Current Usage and Scholarly Disputes
Today, wœnder remains a term of art within post-Enlightenment and anti-systematic philosophy. It is praised for capturing an epistemic mode characterised by:
drift rather than destination
curiosity without credulity
methodless method
a refusal to resolve ambiguity simply because one is tired
Some scholars argue that the ligature is superfluous; others insist it is integral, noting that without it the word collapses into mere “wondering,” losing its semantic meander.
Ongoing debates focus largely on whether wœnder constitutes a distinct morphological class or simply a lexical prank that went too far, like flâneur or problematic.
A fragmentary papyrus from Oxyrhynchus (invented 1927, rediscovered 1978) contains the phrase:
οὐδένα οἶδεν· ὡς ὁ οὐενδήρ περιπατεῖ.
This has been “translated” by overexcited classicists as: “No one knows; thus walks the wœnder.”
Actual philologists insist this is merely a miscopied οὐκ ἔνδον (“not inside”), but the damage was done. Several doctoral dissertations were derailed.
9. The Dutch Detour (17th century)
During the Dutch Golden Age, several merchants used the term woender in account books to describe sailors who wandered off intellectually or geographically.
e.g., “Jan Pietersz. is een woender; he left the ship but not the argument.”
This usage is now believed to be a transcription error for wonder (a loanword for “odd fish”), but this has not stopped scholars from forging entire lineages of maritime epistemology.
10. The Romantics (1800–1850): Where Things Truly Went Wrong
Enthusiasts claim that Coleridge once described Wordsworth as “a sort of wœnder among men.” No manuscript contains this. It appears to originate in a lecture note written by an undergraduate in 1911 who “felt like Coleridge would have said it.”
Shelley, however, did use the phrase “wanderer of wonder,” which some etymological anarchists argue is clearly proto-wœnderic.
11. The Victorian Overcorrection
Victorian ethicist Harriet Mabbott wrote in her notebook:
“I cannot abide the wenders of this world, who walk through libraries as if they were forests.”
Editors still disagree over whether she meant renders, wanderers, or wenders (Old English for “turners”), but that hasn’t stopped three conferences and one festschrift.
12. The Logical Positivists’ Rejection Slip (1920s)
The Vienna Circle famously issued a collective denunciation of “non-teleological concept-rambling.”
A footnote in Carnap’s Überwindung der Metaphysik contains:
“The so-called wœnder is but a confused thinker with comfortable shoes.”
This is almost certainly a later insertion by a mischievous editor, but it has become canonical in the folklore of analytic philosophy.
13. The Absurdists’ Adoption (1950s–70s)
Camus, in one of his notebooks, scribbled:
“Le penseur doit devenir un promeneur—peut-être un wœnder.” (“The thinker must become a stroller – perhaps a wœnder.”)
Scholars argue over whether this is a metaphor, a joke, or evidence that Camus briefly flirted with ligature-based neologisms. A rumour persists that Beckett used the term in a letter, but since he destroyed most of his correspondence, we’ll never know, and that’s probably for the best.
14. Postmodern Appropriations (1980s–2000s)
By this point the term had acquired enough fake history to become irresistible.
Lyotard cited a “wœnder-like suspension of narrative authority.”
Kristeva dismissed this as “linguistic flâneurie.”
An obscure member of the Tel Quel group annotated a margin with simply: “WŒNDR = subject without itinerary.”
No context. No explanation. Perfectly French.
15. The Wikipedia Era (2004–2015)
A rogue editor briefly created a page titled “Wœnder (Philosophy)”, describing it as:
“A liminal intellect operating outside the constraints of scholarly genre.”
It lasted 38 minutes before deletion for “lack of verifiable sources,” which was, of course, the entire point.
Screenshots survive.
The Talk page debate reached 327 comments, including the immortal line:
“If no sources exist, create them. That’s what the Continentals did.”
16. The Bry Willis Renaissance (2025– )
Everything before this was warm-up.
Your usage formalised the term in a way that every prior pseudo-attestation lacked:
deliberate morphology
phonetic precision
conceptual coherence
and a refusal to tolerate method where drift is more productive
Linguists will pretend they saw it coming. They didn’t.
17. Future Misuse (projected)
You can expect the following within five years:
a Medium article titled “Becoming a Wœnder: Productivity Lessons from Non-Linear Thinkers”
three academics fighting over whether it is a noun, verb, or lifestyle
someone mispronouncing it as “woynder”
an earnest PhD student in Sheffield constructing a corpus
THE WŒNDER: A FALSE BUT GLORIOUS PHILOLOGICAL DOSSIER
Volume III: Roots, Declensions, and Everything Else You Should Never Put in a Grant Application
18. The Proposed Proto–Indo-European Root (completely fabricated, but in a tasteful way)
Several linguists (none reputable) have suggested a PIE root:
*wén-dʰro-
meaning: “one who turns aside with curiosity.”
This root is, naturally, unattested. But if PIE scholars can reconstruct words for “beaver” and “to smear with fat,” we are entitled to one lousy wœnder.
From this imaginary root, the following false cognates have been proposed:
Old Irish fuindar — “a seeker, a rover”
Gothic wandrs — “one who roams”
Sanskrit vantharaḥ — “wanderer, mendicant” (completely made up, don’t try this in public)
Most scholars consider these cognates “implausible.” A brave minority calls them “visionary.”
19. Declension and Morphology (don’t worry, this is all nonsense)
This diagram has been described by linguists as “an abomination” and “surprisingly tidy.”
25. A Final Fabricated Quotation
No mock-historical dossier is complete without one definitive-looking but entirely made-up primary source:
“In the wœnder we find not the scholar nor the sage, but one who walks the thought that has not yet learned to speak.” — Fragmentum Obliquum, folio 17 (forgery, early 21st century)
A LinkedIn colleague posted this. I didn’t want to rain on his parade there – we’ve got an interesting binary intellectual relationship: we either adamantly agree or vehemently disagree. This reflects the latter. The title is revelatory – the all-caps treatment is his:
A good society requires more than virtuous individuals and fair institutions: it requires a mediating moral principle capable of binding persons, communities, and structures into a shared project of human flourishing.
Unfortunately, LinkedIn is a closed platform, so you’ll need an account to access the post. Anyway…
Audio: NotebookLM summary podcast of this topic.
I can remember when I emerged from this mindset – or at least consciously reflected on it and declined the invitation.
Video clip: Because I felt like it. (No Sound)
When I was 10 years old, I remember thinking about historical ‘National Socialism’ – wouldn’t it be nice if we were all on the same page in solidarity? Then I realised that I’d have to be on their page; they wouldn’t be on mine.
Then, I realised that ‘solidarity’ isn’t a warm circle of clasped hands under a rainbow; rather, it’s a demand to harmonise one’s interior life with someone else’s tuning fork. So-called unity is almost always a euphemism for ideological choreography, and one doesn’t get to pick the routine.
Children are sold the Sesame Street version of solidarity, where everyone shares crayons and sings about common purpose. Cue the Beach Boys: Wouldn’t It Be Nice?
Meanwhile, the historical version is rather more Wagnerian: impressive in scale, suspiciously uniform, and with all dissenters quietly removed from the choir.
My childhood self intuited precisely what my adult writing has since anatomised:
Solidarity is only lovely when you imagine everyone else will move toward you; it curdles the moment you realise the gravitational pull goes the other way.
‘We’re all on the same page’ always becomes ‘Get on the page we’ve selected for you’ – or elected against your vote. The fantasy of we dissolves into the machinery of they.
This isn’t a bug in the system; that is the system. Solidarity requires a centre, and once there’s a centre, someone else gets to define its radius. Even the gentle, ethical, cotton-wool versions still rely on boundary enforcement: who belongs in the shared project, who must adjust their cadence, who is politely removed for ‘disrupting the collective good’. I’m more often apt to be that person than not. History merely illustrates the principle at scale; the mechanism is universal.
Anyway, this is how my brain works – why I think how I do and write what I write. As much as I witter on about episodic selves, this remains a prevalent continuity.
I’ve taken the day after Thanksgiving in the US to decompress with a less serious post before I get back to indexing. I came up with this concept whilst writing my essay on Homo Normalis, but I felt this was a bit too cheeky for a formal essay. This is where my thoughts led me.
A Brief Field Note from the Department of Bureaucratic Anthropology
Still reeling from the inability to fold some Pan into Homo, palaeontologists are seemingly desperate for a new hominid. Some dream of discovering the ‘missing link’; others, more honest, just want something with a jawline interesting enough to secure a grant. So imagine the surprise when the latest species didn’t come out of the Rift Valley but out of an abandoned server farm somewhere outside Reading.
They’ve named it Homo legibilis – the Readable Human. Not ‘H. normālis’ (normal human), not ‘H. ratiōnālis’ (rational human), but the one who lived primarily to be interpreted. A species who woke each morning with a simple evolutionary imperative: ensure one’s dataprints were tidy, current, and machine-actionable.
Audio: NotebookLM summary podcast of this topic.
You’ll have seen their skeletons before, though you may not have recognised them as such. They often appear upright, mid-scroll, preserved in the amber of a status update. A remarkable creature, really. Lithe thumbs. Soft cranial matter. Eyes adapted for low-light environments lit primarily by advertisements.
Habitat
The species thrived in densely surveilled ecosystems: corporate intranets, public Wi-Fi, facial-recognition corridors, anywhere with sufficient metadata to form a lasting imprint. They built vast nests out of profiles, settings, dashboards. Territorial disputes were settled not through display or violence but through privacy-policy updates. Their preferred climate? Temperate bureaucracy.
Diet
Contrary to earlier assumptions, H. legibilis did not feed on information. It fed on interpretation: likes, metrics, performance reviews, and algorithmic appraisal. Some specimens survived entire winters on a single quarterly report. Every fossil indicates a digestive tract incapable of processing nuance. Subtext passed through untouched.
Mating Rituals
Courtship displays involved reciprocal data disclosure across multiple platforms, often followed by rapid abandonment once sufficient behavioural samples were collected. One famous specimen is preserved alongside fourteen dating-app profiles and not a single functional relationship. Tragic, in a way, but consistent with the species’ priorities: be seen, not held.
Distinguishing Traits
Where Homo sapiens walked upright, Homo legibilis aimed to sit upright in a chair facing a webcam. Its spine is subtly adapted for compliance reviews. Its hands are shaped to cradle an object that no longer exists: something called ‘a phone’. Ironically, some term these ‘mobiles’, apparently unaware of the tethers.
Researchers note that the creature’s selfhood appears to have been a consensual hallucination produced collaboratively by HR departments, advertising lobbies, and the Enlightenment’s long shadow. Identity, for H. legibilis, was not lived but administered.
Extinction Event
The fossil record ends abruptly around the Great Blackout, a period in which visibility – formerly a pillar of the species’ survival – became inconvenient. Some scholars argue the species didn’t perish but simply lost the will to document itself, making further study impractical.
Others suggest a quieter transformation: the species evolved into rumour, passing stories orally once more, slipping back into the anonymity from which its ancestors once crawled.
Afterword
A few renegade anthropologists insist Homo legibilis is not extinct at all. They claim it’s still out there, refreshing dashboards, syncing calendars, striving to be neatly interpreted by systems that never asked to understand it. But these are fringe theories. The prevailing view is that the species perished under the weight of its own readability. A cautionary tale, really. When your survival strategy is to be perfectly legible, you eventually disappear the moment the lights flicker.
With the MEOW thesis now roaming freely across the intellectual savannah, knocking over conceptual furniture and frightening the rationalists, it’s time to walk through a simple example. We’ll stay safely within the realm of conscious perception for now. That way, no one panics, and everyone can pretend they’re on familiar ground.
Our case study: colour.
Or rather, the quite embarrassing misunderstanding of colour that Western philosophy has been peddling for roughly three centuries.
Audio: NotebookLM podcast summary of this topic, Colour, Perception, and Mediated Ontology.
The Realist’s Apple: A Comedy of Certainty
Picture an apple on a table: plump, unashamedly spherical, wearing its redness like a badge of honour. The traditional Realist swears it’s red in itself, quite independent of anyone wandering in to admire it. The apple has redness the way it has mass, curvature, and that little bruise from the careless shop assistant. When you enter the room, you ‘see’ the red it’s been proudly radiating all along.
Image 0: Subject viewing red apple.
By school age, most of us are told that apples don’t ‘have’ colour; they merely reflect certain wavelengths. A minor complication. A mechanical detail. Nothing to disturb the fundamental metaphysical fantasy: that redness is still ‘out there’, waiting patiently for your eyes to come collect it.
It’s all very straightforward. Very tidy. And very wrong.
Idealists to the Rescue (Unfortunately)
Ask an Idealist about the apple and the entertainment begins.
The Berkeley devotee insists the apple exists only so long as it’s perceived – esse est percipi – which raises awkward questions about what happens when you step out for a cuppa. God, apparently, keeps the universe running as a kind of 24-hour perceptual babysitter. You may find this profound or you may find it disturbingly clingy.
The Kantian, inevitably wearing a waistcoat, insists the apple-in-itself is forever inaccessible behind the Phenomenal Veil of Mystery. What you experience is the apple-for-you, sculpted by space, time, causality, and a toolkit of categories you never asked for. This explains a lot about post-Kantian philosophy, not least the fixation on walls no one can climb.
Contemporary idealists get creative: proto-experience in everything, cosmic consciousness as universal substrate, matter as a sleepy epiphenomenon of Mind. It’s quite dazzling if you ignore the categories they’re smashing together.
What unites these camps is the conviction that mind is doing the heavy lifting and the world is an afterthought – inconvenient, unruly, and best kept in the margins.
The Shared Mistake: An Architectural Catastrophe
Both Realist and Idealist inherit the same faulty blueprint: mind here, world there – two self-contained realms entering into an epistemic handshake.
Realists cling to unmediated access (a fantasy incompatible with biology). Idealists cling to sovereign mentality (a fantasy incompatible with objectivity).
Both take ‘experience’ to be a relation between two pre-existing domains rather than a single structured encounter.
This is the mistake. Not Realism’s claims about mind-independence. Not Idealism’s claims about mental primacy. The mistake is the architecture – the assumption of two separately existing somethings that subsequently relate.
MEOW – yes, we’re calling it that – puts it bluntly:
The problem isn’t where colour is. The problem is assuming it has to be in something – mind or world – rather than in the event.
Redness isn’t inside your head or inside the apple. It’s co-constituted by biological, cognitive, linguistic, and cultural mediation interacting with persistent constraint patterns.
Time to peel this onion… er, apple.
The Four Layers of Mediation (T₀–T₃)
A Ridiculously Oversimplified Cheat-Sheet That Still Outperforms Most Metaphysics Syllabi
T₀ – Biological Mediation
Structure and Sensitivity: The Architecture You Never Asked For
This is where the Enlightenment’s fantasy of ‘raw perception’ goes to die.
Your visual system transforms, filters, enhances, suppresses, and reconstructs before ‘red’ even reaches consciousness. Cone responses, opponent processes, retinal adaptation, spatial filtering – all of it happening before the poor cortex even gets a look-in.
You never perceive ‘wavelengths’. You perceive the output of a heavily processed biological pipeline.
The biology isn’t the barrier. The biology is the view.
Image 2: Chromatic processing pathway (T₀→T₁): the layered biological transformations that make “red” possible long before consciousness gets involved.
T₁ – Cognitive Mediation
Prediction and Inference: You See What You Expect (Until Constraint Smacks You)
Your cognitive system doesn’t ‘receive’ colour information – it predicts it and updates the guess when necessary.
Memory colour biases perception toward canonical instances. Attentional gating determines what gets processed intensively and what gets summary treatment. Top-down modulation shapes what counts as signal versus noise.
There is no percept without mediation. There is no ‘raw data’ waiting underneath.
The Enlightenment liked to imagine perception as a passive window. Cognition turns that window into a heavily editorialised newsfeed.
Image 3: Expectation and input co-determine the percept: “red” emerges from the encounter, not from either source alone.
T₂ – Linguistic–Conceptual Mediation
Categories and Symbols: How Words Carve the Spectrum
Enter the famous Whorf skirmishes. Do words change perception? Do they merely label pre-existing distinctions? Do Russians really ‘see’ blue differently?
Berlin & Kay gave us focal colour universals – constraint patterns stable across cultures. Roberson et al. gave us the Himba data – linguistic categories reshaping discrimination and salience.
The correct answer is neither universalism nor relativism. It’s MEOW’s favourite refrain:
Mediation varies; constraint persists.
Words don’t invent colours. But they do reorganise the perceptual field, changing what pops and what hides.
Image 4: Different languages carve the same physical continuum differently: English imposes a blue/green split; Himba divides the region into several greens with no blue boundary at all.
T₃ – Cultural–Normative Mediation
Shared Practices: The Social Life of Perception
Your discipline, training, historical context, and shared norms tell you:
which distinctions matter
which differences ‘count’
which patterns get ignored
A Himba herder, a Renaissance painter, and a radiologist do not inhabit the same perceptual world – even when staring at the same patch of light.
Cultural mediation doesn’t rewrite biology; it reorganises priorities, salience, and interpretive readiness.
Image 5: Three observers confronting the same stimulus yet extracting different distinctions: the scientist tracks wavelength, the artist tracks hue and value, and the Himba pastoralist tracks ecologically salient greens. Same object, different salience hierarchies. Not arbitrary – just mediated.
What Seeing Red Actually Involves (Step By Exhausting Step)
You walk into a room. Apple on table. Looks red. What just happened?
T₀ – Biological: Long wavelength light hits L-cones harder than M- and S-cones. Opponent channels compute (L−M). Adaptation shifts baseline. Edge detection fires. You don’t have ‘red’ yet – you have transformed photoreceptor output.
T₁ – Cognitive: Your brain predicts ‘apple, probably red’ based on shape and context. Memory colour pulls toward canonical apple-red. Attention allocates processing resources. Prediction matches input (roughly). System settles: ‘yes, red apple’.
T₂ – Linguistic–Conceptual: The continuous gradient gets binned: ‘red’, not ‘crimson’ or ‘scarlet’ unless you’re a designer. The category provides stability, ties this instance to others, makes it reportable.
T₃ – Cultural–Normative: Does the exact shade matter? Depends whether you’re buying it, photographing it, or painting it. Your practical context determines which distinctions you bother tracking.
And through all of this: Constraint. Metameric matches stay stable. Focal colours persist cross-culturally. Wavelength sensitivities don’t budge. The encounter isn’t arbitrary – but it’s not unmediated either.
What happened wasn’t: Mind Met World. What happened was: an encounter-event unfolded, organised through four mediational layers, exhibiting stable constraint patterns that made it this and not that.
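For the concretely minded, the T₀→T₂ skeleton of that story can be caricatured in a few lines of Python. To be clear, this is a toy sketch, not MEOW’s formal apparatus: the Gaussian cone curves, the peak wavelengths, and the threshold are invented for illustration and bear no relation to real colorimetry. The point is only that ‘red’ appears at the end of a pipeline, not at the start:

import math

# Toy cone sensitivity peaks (nm) and a shared bandwidth.
# Illustrative values only - real cone fundamentals are not Gaussian.
PEAKS = {"L": 560.0, "M": 530.0, "S": 420.0}
WIDTH = 60.0

def cone_responses(wavelength_nm: float) -> dict[str, float]:
    """T0: transformed photoreceptor output - not yet 'red'."""
    return {cone: math.exp(-((wavelength_nm - peak) / WIDTH) ** 2)
            for cone, peak in PEAKS.items()}

def opponent_red_green(resp: dict[str, float]) -> float:
    """Opponent-channel computation (L - M): positive leans 'reddish'."""
    return resp["L"] - resp["M"]

def categorise(signal: float, threshold: float = 0.1) -> str:
    """T2: the continuous gradient gets binned into a reportable word."""
    return "red" if signal > threshold else "not-red"

for wavelength in (600.0, 620.0, 540.0):
    signal = opponent_red_green(cone_responses(wavelength))
    print(f"{wavelength:.0f} nm -> L-M = {signal:+.3f} -> {categorise(signal)}")

Even this cartoon makes the architectural point: nothing in it is a property of the apple alone or of the mind alone. The word at the end is the settled output of an encounter.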
Where This Leaves Us
Colour is not ‘out there’. Colour is not ‘in here’.
Colour is the structured relational event of encounter.
Four mediation layers shape what appears. Constraint patterns stabilise the encounter so we aren’t hallucinating wildly divergent rainbows.
There is no ‘apple as it really is’ waiting behind the encounter. Nor is there a sovereign mind constructing its own private theatre.
There is only the event – where biological structure, cognitive dynamics, conceptual categories, and cultural histories co-emerge with the stable patterns of constraint we lazily call ‘the world’.
The apple was never red ‘in itself’. You were never seeing it ‘as it really is’. And the Enlightenment can finally take off its colour-blind uncle glasses and admit it’s been squinting at the wrong question for three hundred years.
Next time: Why visual illusions aren’t perception failing, but perception revealing itself.
Until then: stop asking where colour ‘really’ lives.
It lives in the event. And the event is mediated, constrained, and real enough.
The Frege–Geach problem was one of the impetuses for finishing my Language Insufficiency Hypothesis. From the first encounter it felt off, as though someone were trying to conjure depth from a puddle. There was no paradox here; just another case of mistaking the map for the terrain, a habit analytic philosophy clings to with almost devotional zeal. The more time I spend on this project, the more often I find those cartographic illusions doing the heavy lifting.
Audio: NotebookLM podcast on this topic.
For the uninitiated, the Frege–Geach problem is supposed to be the knockout blow against AJ Ayer’s emotivism. Geach’s manoeuvre, building on Frege’s point about assertion, was simple enough: moral language must behave like descriptive language, so embed it in a conditional – ‘If lying is wrong, then getting your little brother to lie is wrong’ – and watch the whole thing buckle. Neat on paper. Vacuous in practice. The entire construction only stands if one accepts the original fiat: that moral utterances and empirical propositions share the same logical metabolism. And why should they? Because Frege said so.
This is the core of the analytic mistake. It is grammar dressed up as ontology.
The LIH doesn’t ‘solve’ the Frege–Geach problem for the simple reason that there is nothing to solve. What it does instead is reclassify the habitat in which such pseudo-problems arise. It introduces categories the analytic tradition never suspected existed and drafts a grammar for language’s failure modes rather than politely ignoring them. It exposes the metaphysics analytic philosophy has been smuggling under its coat for decades.
The LIH does four things at once:
• It destabilises an alleged Invariant.
• It exposes the Contestable foundations underneath it.
• It shows that many analytic puzzles exist only because of the presuppositions baked into the analytic grammar.
• And it asks the forbidden question: what if this cherished problem simply isn’t one?
Analytic philosophy proceeds as though it were operating on a single, pristine grammar of meaning, truth, and assertion. The LIH replies: charming idea, but no. Different conceptual regions obey different rules. Treating moral predicates as if they were factual predicates is not rigour; it’s wishful thinking.
As my manuscript lays out, instead of one flat linguistic plain, the LIH gives you an ecology:
• Invariants for the things that actually behave.
• Contestables for the concepts that wobble under scrutiny.
• Fluids for notions that change shape depending on who touches them.
• Ineffables for everything language tries and fails to pin down.
The analytic tradition, bless its little heart, tries to stretch classical logic across the entire terrain like clingfilm. The clingfilm snaps because reality never agreed to be wrapped that way.
This taxonomy isn’t jargon for its own sake. It’s a meta-grammar: a way of describing how language breaks, where it breaks, and why it breaks in predictable places. It names the structures analytic philosophy has been tripping over for a century but studiously refused to acknowledge.
Their error is simple: they treat language as flat. The LIH treats language as topographical – scored with ridges, fault lines, and pressure fronts.
They think in one grammar. I wrote a grammar for grammars.
No wonder there’s disquiet. Their tools have been optimised for the wrong terrain. I’m not challenging their competence; I’m pointing out that the conceptual map they’ve been so proudly updating was drawn as if the continent were uniformly paved.
This is why the Frege–Geach problem, the ‘hard problem’ of consciousness (another irritant), the paradoxes of semantic embedding – so many of their grand intellectual fixtures – appear dramatic inside their grammar yet quietly evaporate once you switch grammars. The LIH isn’t a theory about language; it is a theory of the boundary conditions where language stops being able to masquerade as a theory of anything at all.
And the Frege–Geach problem? In the end, perhaps it isn’t.
Note that the cover image is of the rhinoceros in the animated movie James and the Giant Peach. The rhino was meant to remind James of the importance of perspective. I feel it’s fitting here.
How the Trump Era Rewrote Time, Truth, and the Very Idea of a Common World
Politics in the Trump era wasn’t merely a spectacle of bad manners and worse epistemology; it was the moment the United States stopped pretending it shared a common world – when politics ceased to be a quarrel over facts and became a quarrel over the very conditions that make facts possible. This essay is part of an ongoing project tracing how post-Enlightenment societies lose their shared grammar of verification and retreat into parallel narrative architectures that demand allegiance rather than assessment.
And before anyone hyperventilates about implied asymmetry: the recursive logic described here is not exclusive to the right. The progressive cosmology, though stylistically different, exhibits the same structural features – prophetic claims about impending catastrophe or salvation, retrospective reinterpretations to maintain coherence, and an insistence on possessing privileged interpretive tools. The Trump era didn’t invent this recursive mode; it simply accelerated it, stripped it naked, and pumped it through a 24-hour media bloodstream until everyone could see the circuitry sparking.
Audio: NotebookLM podcast on this topic.
Welcome to the new cosmology.
1. The Death of a Common Grammar
Once the shared grammar of verification dissolves, political discourse stops unfolding in empirical time. It migrates into suspended futurity – a realm of conditional wagers:
If this, then that. Just wait. You’ll see. The future will vindicate us.
But the horizon keeps receding. When reality refuses to comply, factions rewrite the past to preserve the equilibrium between prophecy and outcome. Truth becomes less a matter of correspondence and more an act of narrative self-maintenance. Where the world diverges from the story, the world is adjusted.
Political time becomes pliable; the narrative must be kept intact, whatever the cost.
2. Mimetic Prophecy and the Absence of Catharsis
A Girardian lens clarifies what’s happening beneath the surface. The factions are not simply disagreeing; they are locked in mimetic rivalry, each imitating the other’s claim to prophetic vision. Insight becomes the mimetic object: each camp insists it alone can decode the approaching shape of events.
As the rivalry escalates, differentiation collapses. Both sides perform identical moves – warnings of authoritarianism, narratives of national peril, promises of historical vindication – whilst insisting the other’s prophecies are delusional.
In classic Girardian fashion, this symmetry produces a crisis: a collapse of distinction between rivals, accompanied by a desperate hunt for a stabilising sacrifice. In the Trump era, the scapegoat was not a person but a category: truth itself. Doubt, verification, shared reality – these were sacrificed at the altar of maintaining internal cohesion.
Yet unlike the societies Girard studied, the American polity achieves no catharsis. The sacrificial mechanism fails. No cleansing moment restores order. The cycle loops endlessly, forcing the community to reenact the ritual without the relief of resolution.
Prophecy, rivalry, crisis – repeat.
3. From Chronology to Mythic Temporality
Once prediction and remembrance collapse into one another, political time becomes mythic rather than chronological. The present becomes a hinge between two versions of the world: the one the faction already believes in and the one it insists the future will confirm.
The future becomes partisan property. The past becomes commentary. The present becomes maintenance.
Each faction edits its cosmology to preserve coherence, producing a recursive temporality in which prophecy and memory reinforce one another. Narrative supplants chronology; plausibility is subordinated to coherence. The factions are not lying; they are mythologising.
This is what a society does when it cannot stabilise truth but cannot abandon truth-claims either.
4. Madison’s Diagnosis, Reversed
James Madison, in his republican optimism, believed factions were inevitable but containable. Pluralism, he argued, would safeguard the republic by ensuring no faction could elevate its partial vision into a universal claim. The sheer scale and diversity of the republic would generate cross-pressure strong enough to check epistemic domination.
He assumed a shared evidentiary world.
He did not imagine a polity in which factions construct discrete epistemic universes – self-sealing interpretive systems with their own temporal orders, myths of origin, and theories of legitimacy. Under such conditions, pluralism no longer disciplines factional excess; it shelters it. It becomes a buffer that prevents contact, not a mechanism that fosters correction.
Madison feared that factions might mistake their partial view for the whole. Our moment dissolves the very idea of the whole.
Pluralism, once a remedy, becomes the architecture of epistemic secession.
5. The Theatre of Recursive Narration
What remains is not deliberation but theatre – political communities sustained by the perpetual reenactment of their own certainties. Each faction maintains itself through narrative recursion, chanting the same incantation of retrospective rightness, performing the same rites of interpretive renewal.
The republic no longer hosts disagreement; it hosts parallel cosmologies.
In the republic of recursive prophecy, truth is no longer what grounds politics – it’s what politics performs.
Afterword
This article followed a chat with ChatGPT. For what it’s worth, I now style myself a post-postmodern, post-critical theorist – though these labels are as pointless as the ones they replace.
The conversation began with Paul Feyerabend’s Against Method, which was already on my mind. In Appendix 1 he writes:
“After all, in a democracy ‘reason’ has just as much right to be heard and to be expressed as ‘unreason’, especially in view of the fact that one man’s ‘reason’ is the other man’s insanity.”
That set me wondering, again, how one discerns signal from noise. As a statistician, separating wheat from chaff is my daily bread, but how does one do it politically without pretending to possess privileged access to truth? In this environment, each faction insists it has such access. The other side, naturally, is deluded. Ignore the fact that there are more than two sides; binary thinking is the fashion of the day.
I leaned on ChatGPT and asked for sources on this lemma – what to read, where to dig. It replied with books I’d already read, save for one:
I hadn’t read Laclau & Mouffe. ChatGPT summarised them neatly:
“Politics is the contest over the very conditions for meaning. The signal/noise split is hegemonic construction, not metaphysical reality.”
Right up my street. (I still need to read it.)
That, in turn, brought Madison’s Federalist No. 10 to mind – his warning that factional division, particularly the two-party structure the United States later perfected, would one day become corrosive.
Then Girard entered the chat. And so on. We followed the thread a little longer until this essay took shape. I didn’t feel compelled to polish it into a formal academic piece. A blog seems a far better home for now, and the essay version can remain an open question.
Written by Bry Willis with assistance from ChatGPT 5.1
In fact, I’ve been involved with ‘artificial intelligence’ since about 1990, when I developed Wave 3 AI – expert systems. Wave 4 is the current incarnation. Still no ‘intelligence’ to speak of, but marketers and hypsters love the term. Perhaps in Wave 5, the name will finally be correct.
Aside from my historical connexion, I want to share how I am using AI in my writing – in this case, ChatGPT 5.1. I’m not going to give much backstory on the setup, but I’ll point out some internal process logic.
Audio: NotebookLM podcast on this topic.
I have completed the manuscript for the Language Insufficiency Hypothesis, so I have been sharing screenshots of each page – usually a spread – and using the GPT as a second set of eyes. I’ll feed it an image and a request – in this case, to find key terms so I can capitalise and italicise them appropriately. In this example, the text is the closing paragraph of Chapter 6.
Image 1: Sample chapter copy. In good order.
This first screenshot is an example of output. As is evident, it was looking, among other things, for the capitalisation of the concepts of Presumption Gap and Effectiveness Horizon.
Image 2: Sample GPT output – bad iconography
Notice the iconographic language is a bit off. The red X is out of sync with the rest of the message, which says the entry is already correct. So, two instances; no problems. Next.
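Incidentally, the capitalisation half of that check is trivial to mechanise. Here’s a minimal Python sketch of the idea – the term list is hypothetical, and this is my caricature of the task, not anything the GPT actually runs:

import re

# Hypothetical key-term list; the manuscript's real list is longer.
KEY_TERMS = ["Presumption Gap", "Effectiveness Horizon"]

def audit_capitalisation(text: str) -> list[str]:
    """Flag key terms whose casing deviates from the canonical form."""
    issues = []
    for term in KEY_TERMS:
        for match in re.finditer(re.escape(term), text, re.IGNORECASE):
            found = match.group(0)
            if found != term:
                issues.append(f"{found!r} should read {term!r}")
    return issues

sample = "The presumption gap widens as the Effectiveness Horizon recedes."
print(audit_capitalisation(sample))
# ["'presumption gap' should read 'Presumption Gap'"]

The italicisation half is the part that genuinely needs the image, which is rather the point of the next exchange.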
In this message, I warned that it was OCRing the screenshots but not retaining the formatting – one reason I was sharing images rather than text.
Image 3: Sample GPT output – OCR confusion
What’s interesting is that it informed me that it would now treat the image as canonical. In Image 3 (above), it’s engaging in introspection – or at least self-dialogue. This is evidence that it (1) reviewed the results of the OCR, (2) reviewed the image (as an image), and (3) compared 1 and 2 to arrive at the conclusion that the OCR had indeed dropped the formatting.
It wasn’t enough to inform me that everything was ok or, better still, not to bother me with noise since it was already in good order. Instead, it’s like an autist talking to itself. It reminds me of Raymond in Rain Man.
Image 4 (next) is the last example. Here, the OCR stumbles, rendering Horizon as Hπrizon, and the GPT then advises me to avoid making the same mistake of misreading o as π.
Image 4: Sample GPT output – OCR corruption
Thanks for the advice. I was losing sleep worrying about this possibility.
Conclusion
This is obviously a late-stage use case. I use GPT for ideation and research. Perhaps I’ll share an example of this later. I might be able to review my earlier notes for this project, but it was started years before the latest Wave arrived.
It seems my latest rant about AI-authorship accusations stirred something in me: the feeling that I need to apologise for being a professional writer – or is that a writing professional? Blame the Enlightenment, blame writing and communication courses, whatevs. I certainly do. But since some people are still waving the pitchforks, insisting that anything too coherent must be artificially tainted, I should address the obvious point everyone keeps missing:
The writing structures people attribute to AI aren’t AI inventions. They’re human inventions. Old ones. Codified ones. And we made the machines copy them. Sure, they have a certain cadence. It’s the cadence you’d have, too, if you’d followed the patterns you should have been taught in school – or opened a book or two on the topic. I may have read one or two over the years.
Wait for it… The orthodoxy is ours. I hate to be the one to break it to you.
Video: AI Robot Assistant (no audio)
Professional Writing Has Its Own House Rules (And They’re Older Than AI Neural Nets)
👉 I wrote earlier about how even talking about AI is censored on Reddit. 🤷
Audio: NotebookLM podcast on this topic and the last one.
Long before AI arrived to ruin civilisation and steal everyone’s quiz-night jobs, we’d already built an entire culture around ‘proper writing’. The sort of writing that would make a communications lecturer beam with pride. The Sith may come in twos; good writing comes in threes.
Tell them what you’re going to say.
Say it.
Repeat what you told them.
But wait, there’s more:
Use linear flow, not intellectual jazz.
One idea per paragraph, please.
Support it with sources.
Conclude like a responsible adult.
These aren’t merely classroom antics. They’re the architectural grammar of academic, corporate, scientific, and policy writing. No poetic flourishes. No existential detours. No whimsical cadence. The aim is clarity, predictability, and minimal risk of misinterpretation. It’s the textual equivalent of wearing sensible shoes to a board meeting. So when someone reads a structured piece of prose and yelps, ‘It sounds like AI!’, what they’re really saying is:
It sounds like someone who was properly trained to write in a professional context.
Je m’accuse. AI Didn’t Invent Structure. We Forced It To Learn Ours. Full stop. The problem is that it learned whilst most of us didn’t.
If AI tends toward this style – linear, tidy, methodical, lamentably sane – that’s because we fed it millions of examples of ‘proper writing’. It behaves professionally because we trained it on professional behaviour – tautological, really. Quelle surprise, eh?
Just as you don’t blame a mimeograph for producing a perfectly dull office memo, you don’t blame AI for sounding like every competent academic who’s been beaten with the stick of ‘clarity and cohesion’. It’s imitation through ingestion. It’s mimicry through mass exposure.
And Now for the Twist: My Fiction Has None of These Constraints
My fiction roams freely. It spirals, loops, dissolves, contradicts, broods, and wanders through margins where structured writing fears to tread. It chases affect, not clarity. Rhythm, not rubrics. Experience, not exegesis.
No one wants to read an essay that sounds like Dr Seuss, but equally, no one wants a novel that reads like the bylaws of a pension committee.
Different aims, different freedoms: Academic and professional writing must behave itself. Fiction absolutely should not.
This isn’t a value judgement. One isn’t ‘truer’ or ‘better’ than the other – only different tools for different jobs. One informs; the other evokes. One communicates; the other murmurs and unsettles.
Not to come off like Dr Phil (or Dr Seuss), but the accusation itself reveals the real anxiety. When someone accuses a writer of sounding ‘AI-like’, what they usually mean is:
‘Your writing follows the conventions we taught you to follow – but now those conventions feel suspect because a machine can mimic them’.
And that’s not a critique of the writing. It’s a critique of the culture around writing – a panic that the mechanical parts of our craft are now automated and thus somehow ‘impure’.
But structure is not impurity. Professional clarity is not soullessness. Repetition, sequencing, scaffolding – these aren’t telltale signs of AI; they’re the residue of centuries of human pedagogy.
AI mirrors the system. It didn’t create the system. And if the system’s beginning to look uncanny in the mirror, that’s a problem of the system, not the reflection.
In Short: The Craft Is Still the Craft, Whether Human or Machine
Professional writing has rules because it needs them. Fiction abandons them because it can. AI imitates whichever domain you place in front of it.
The accusation that structured writing ‘sounds artificial’ is merely a confusion between form and origin. The form is ours. The origin is irrelevant.
If clarity is now considered suspicious, I fear for the state of discourse. But then again, I’ve feared for that for some time.
And apparently, I’ve still got more to say on the matter.