I have acquired a minor but persistent defect. When I try to type enough, my fingers often produce anough. Not always. Often enough to notice. Enough to be, regrettably, anough.
This is not a simple typo. The e and a keys are not conspirators with shared borders. This is not owned → pwned, where adjacency and gamer muscle memory do the heavy lifting. This is something more embarrassing and more interesting: a quasi-phonetic leak. A schwa forcing its way into print without permission. A clue to how I pronounce the word – like Depeche Mode’s I just can’t get enough.
Audio: NotebookLM summary podcast of this topic.
Internally, the word arrives as something like ənuf, /əˈnʌf/. English, however, offers no schwa key. So the system improvises. It grabs the nearest vowel that feels acoustically honest and hopes orthography won’t notice. Anough slips through. Language looks the other way.
Image: Archaeology of anough
Video: Depeche Mode: I Just Can’t Get Enough
Is this revelatory?
Not in the heroic sense. No breakthroughs, no flashing lights. But it is instructive in the way cracked pottery is instructive. You don’t learn anything new about ceramics, but you learn a great deal about how the thing was used.
This is exactly how historians and historical linguists treat misspellings in diaries, letters, and court records. They don’t dismiss them as noise. They mine them. Spelling errors are treated as phonetic fossils, moments where the discipline of standardisation faltered, and speech bled through. Before spelling became prescriptive, it was descriptive. People wrote how words sounded to them, not how an academy later insisted they ought to look.
That’s how vowel shifts are reconstructed. That’s how accents are approximated. That’s how entire sound systems are inferred from what appear, superficially, to be mistakes. The inconsistency is the data. The slippage is the signal.
Anough belongs to this lineage. It’s a microscopic reenactment of pre-standardised writing, occurring inside a modern, over-educated skull with autocorrect turned off. For a brief moment, sound outranks convention. Orthography lags. Then the editor arrives, appalled, to tidy things up.
What matters here is sequence. Meaning is not consulted first. Spelling rules are not consulted first. Sound gets there early, locks the door, and files the paperwork later. Conscious intention, as usual, shows up after the event and claims authorship. That’s why these slips are interesting and why polished language is often less so. Clean prose has already been censored. Typos haven’t. They show the routing. They reveal what cognition does before it pretends to be in charge.
None of this licenses forensic grandstanding. We cannot reconstruct personalities, intentions, or childhood trauma from rogue vowels. Anyone suggesting otherwise is repackaging graphology with better fonts. But as weak traces, as evidence that thought passes through sound before it passes through rules, they’re perfectly serviceable.
Language doesn’t just record history. It betrays it. Quietly. Repeatedly. In diaries, in marginalia, and occasionally, when you’re tired and trying to say you’ve had enough. Or anough.
More precisely, I need less sleep and longer days – preferably twice as long. I’ve been writing almost non-stop for the better part of a week: fourteen- to sixteen-hour days, fuelled by irritation and the stubborn belief that if I just keep reading, something will finally click into place.
I’m not complaining. This is a virtuous cycle. Reading leads to writing. Writing demands more reading. Eventually, the loop closes into something that looks suspiciously like progress.
Audio: Short NotebookLM summary podcast on this topic.
Still, there’s a bottleneck.
Because some of this work – the work I’m most excited about – I’m deliberately not publishing yet. Journals, bless their glacial hearts, don’t much care for prior publication. So ideas sit in limbo for six to eighteen months, locked in a room like argumentative houseplants, slowly growing sideways.
From the perspective of someone who thinks in public, this is maddening.
Now add AI to the mix.
This is where things get dangerous.
I feed ChatGPT a thesis, a skeletal structure, notes, and references, then ask what I’m missing. It obliges – often helpfully – by pointing me toward adjacent thinkers and relevant literature, complete with page numbers. From there, I verify, hunt down the sources, skim, read, discard, or integrate.
And every so often, I stumble across something that makes me swear out loud.
This week, it was Bernard Williams.
I’ve cited Ethics and the Limits of Philosophy before. But this time, I actually sat down and read it properly. Which immediately prompted the thought:
Why didn’t I read this sooner?
Williams dismantles moral objectivity with the calm precision of someone who knows the Enlightenment project has already lost – he just hasn’t told everyone yet. Thick and thin moral concepts, locality, non-extensibility, the collapse of universal moral reason at scale – yes, yes, yes. He published this in 1985. Fine. I’ll survive.
But then I went further.
Williams shows that morality fails between people at scale. I argue that it fails within a single person over time.
That became my second paper.
And this is where things went off the rails.
Because in the course of writing that paper, I dipped into Hart’s The Concept of Law and Endicott’s Vagueness in Law. These are not fringe polemics. These are law textbooks. For law students. People allegedly trained to parse language for a living.
And what I found was… astonishing.
Let me paraphrase the admissions:
First:
Image: When the law is vague, judicial decisions may be unconstrained by the law.
Endicott: “By upsetting the standard view of adjudication, the book reaches conclusions that some people find horrible: when the law is vague, judicial decision-making will in some cases be unconstrained by the law. It is impossible in principle for judges always to treat like cases alike. Predictability in the law is to some extent unattainable. Moreover, I argue in Chapter 9.2 that vagueness cannot be eliminated from law. These conclusions might seem to imply that the rule of law is, at least to some extent, conceptually impossible.”
Then:
Image: Vagueness is inevitable. Deal with it.
Endicott: “Secondly, I do not claim that vagueness is a purely linguistic feature of law. And the book relies on no claim about the relation between law and language. These points must be stressed, because vagueness is commonly thought of as a linguistic phenomenon. And, indeed, most of the discussion in the book concerns the vagueness of linguistic expressions. But the indeterminacy claim is not just a claim about language (so I argue in Chapter 3.12). So, for example, the claim in Chapter 6 that general evaluative and normative expressions are necessarily vague is not just a claim about the word ‘good’ and the word ‘right’: it is a claim about any linguistic expression in which we could conceivably express general evaluative and normative judgments. It therefore includes a claim about what is good and what is right.”
Then, almost casually:
Image: Whether law is morally valuable to a community is not my concern. Justice and the rule of law may be political virtues — or not. I don’t defend them here.
Endicott: “Disputes between legal positivists and natural law theorists have concerned not only the relation between law and adjudication, but also the relation between law and morality. Here I take no general position on the intrinsic moral value of law. I do rely on the claims that law can be valuable to a community, and that justice and the rule of law are two ideals which a community can intelligibly pursue as political virtues. Even those claims are controversial (Kelsen and some of the theorists discussed in Chapter 2 have controverted them). But I do not defend them here. This work aims to show that the indeterminacy claim does nothing to threaten the pursuit of justice and the rule of law. Those ideals cannot be well understood if we try to make them depend on determinacy in the requirements of the law.”
Say what?
Read together – not even uncharitably – the message is clear:
Law is indeterminate. Indeterminacy is unavoidable. And whether law is good, just, or valuable is… optional.
The subtext isn’t even hiding.
Law is a power structure first. If it happens to align with justice, fairness, or communal value, well, lovely. A bonus. Champagne all round.
This does not sit well with a sceptical cynic.
What really broke me, though, wasn’t the argument itself. Philosophers make grim claims all the time. What broke me was the silence around it.
How does this pass under the radar?
How do cohorts of law students – drilled in textual analysis, trained to read footnotes like tea leaves – not trip over this elephant stampede? How do they graduate believing they’re upholding inalienable rights, rather than participating in a managed system of coercion that occasionally behaves itself?
Self-preservation, I suppose. Wilful ignorance. Professional cosplay.
I’ve seen this before.
As an economist, ask the wrong foundational question, and you’re instantly radioactive. Persona non grata. Careers don’t end with explosions — they end with polite silence and no invitations.
I probably should have committed to heterodox philosophy from the start. Or stayed a musician.
I remember leaving graduate school, putting on a suit, and feeling like I was wearing a costume. Cosplay, before we had the word. “Business professional” as a role, not an identity.
I’ve always felt intellectually capable of doing whatever I set out to do. My temperament, however, has never agreed to play along.
Which is perhaps why diagnosing ontologies comes so naturally. Once you see the scaffolding, you can’t unsee it – whether it’s metaphysics, jurisprudence, or a corporate department pretending it has a mission.
Then David Graeber came along with Bullshit Jobs, and I remember thinking: Thank God. It’s not just me.
So yes. I need a break.
I need sleep. I need silence. I need to stop reading law books that accidentally admit they’re about power and then act surprised when someone notices.
LinkedIn, that carnival of professional self-delusion, has a little diversion called Pinpoint. It pretends to tell you how much you “match” with other people, presumably so you’ll feel less alone as you scroll past thought-leaders peddling snake oil in PowerPoint form. In English, the results arrive in the cold, hard, dating-app idiom: “% match.” Simple, brutal, and bland.
Audio: NotebookLM podcast on this topic.
But LinkedIn, ever the polyglot, translates this phrase into other tongues. And here is where a trivial game unmasks the philosophical chaos of language itself. For in one idiom, your soul and another’s are “in correspondence.” In another, you are the product of “coincidence.” Elsewhere, you are a “hit,” a “fit,” a “suitability.” The poor Swedes, apparently exhausted, simply gave up and borrowed “matchning.”
The Romance languages, of course, are the most pedantic. Correspondência, corrispondenza — all very scholastic, as if Aquinas himself were lurking in the backend code. A match is nothing less than the degree to which one proposition mirrors another, as in the correspondence theory of truth. You can be 72% true, like a botched syllogism that half-lands. Elegant, precise, exasperating.
Spanish, on the other hand, opts for coincidencia. A “% coincidence.” Imagine it: you bump into your ex at the market, but only 46% of the way. Coincidence, by definition, is binary; either the train wreck occurs or it does not. And yet here it is, rendered as a gradable metric, as if fate could be quantified. It’s a kind of semantic surrealism: Dalí with a spreadsheet.
Then we have the Germans: Treffer. A hit. In English, a hit is binary – you score or you miss. But the Germans, ever the statisticians of fate, make Trefferquote into a percentage. You may not have killed the truth outright, but you wounded it respectably. It’s a firing squad turned bar chart.
Indonesians say cocok, which means “appropriate, suitable.” This is not about truth at all, but about fit. A match is not correspondence to reality but pragmatic adequacy: does it work? Does it feel right? The difference is subtle but devastating. Correspondence makes truth a metaphysical mirror; suitability makes it a tailoring problem.
And English? English, with its toddler’s toybox of a vocabulary, just shrugs and says “match.” A word that means as much as a tennis final, a Tinder swipe, or a child’s puzzle book. Adequate, lazy, neutered. Anglo-pragmatism masquerading as universality.
So from a silly HR-adjacent parlour game we stumble into a revelation: truth is not one thing, but a polyglot mess. The Romance tongues cling to correspondence. Spanish insists on coincidence. German goes target practice. Indonesian settles for a good fit. And English floats on ambiguity like an inflatable swan in a corporate swimming pool.
The lesson? Even a “% match” is already lost in translation. There is no stable denominator. We speak not in universals but in parochialisms, in metaphors smuggled into software by underpaid translators. And we wonder why philosophy cannot settle the matter of truth: it is because language itself cheats. It gives us correspondence, coincidence, hits, and fits, all while claiming to say the same thing.
Perhaps LinkedIn should update its UI to something more honest: % mess.
I’ve written a lot on the insufficiency of language, and it’s not even an original idea. Language, our primary tool for sharing thoughts and ideas, harbours a fundamental flaw: it’s inherently insufficient for conveying precise meaning. While this observation isn’t novel, recent developments in artificial intelligence provide us with new ways to illuminate and examine this limitation. Through a progression from simple geometry to complex abstractions, we can explore how language both serves and fails us in different contexts.
The Simple Made Complex
Consider what appears to be a straightforward instruction: Draw a 1-millimetre square in the centre of an A4 sheet of paper using an HB pencil and a ruler. Despite the mathematical precision of these specifications, two people following these exact instructions would likely produce different results. The variables are numerous: ruler calibration, pencil sharpness, line thickness, paper texture, applied pressure, interpretation of “centre,” and even ambient conditions affecting the paper.
This example reveals a paradox: the more precisely we attempt to specify requirements, the more variables we introduce, creating additional points of potential divergence. Even in mathematics and formal logic—languages specifically designed to eliminate ambiguity—we cannot escape this fundamental problem.
Precision vs Accuracy: A Useful Lens
The scientific distinction between precision and accuracy provides a valuable framework for understanding these limitations. In measurement, precision refers to the consistency of results (how close repeated measurements are to each other), while accuracy describes how close these measurements are to the true value.
Returning to our square example:
Precision: Two people might consistently reproduce their own squares with exact dimensions
Accuracy: Yet neither might capture the “true” square we intended to convey
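To make the distinction concrete, here is a minimal Python sketch. The measurement numbers are invented for illustration, not data from any actual drawing: person A draws tightly clustered but oversized squares (precise, inaccurate), while person B scatters around the true 1 mm side (imprecise, accurate).

import statistics

TRUE_SIDE_MM = 1.0  # the side length the instruction asked for

# Hypothetical repeated measurements of each person's drawn square, in millimetres.
person_a = [1.21, 1.22, 1.20, 1.21]  # tight cluster, but consistently too large
person_b = [0.90, 1.10, 1.00, 1.02]  # scattered, but centred on the target

def report(name, measurements):
    mean = statistics.mean(measurements)
    spread = statistics.stdev(measurements)   # precision: smaller spread = more precise
    bias = abs(mean - TRUE_SIDE_MM)           # accuracy: smaller bias = more accurate
    print(f"{name}: mean={mean:.3f} mm, spread={spread:.3f}, bias={bias:.3f}")

report("Person A (precise, inaccurate)", person_a)
report("Person B (imprecise, accurate)", person_b)

Run it and the point falls out of the numbers: a tight spread tells you nothing about whether you hit the square you were asked for.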
As we move from geometric shapes to natural objects, this distinction becomes even more revealing. Consider a maple tree in autumn. We might precisely convey certain categorical aspects (“maple,” “autumn colours”), but accurately describing the exact arrangement of branches and leaves becomes increasingly difficult.
The Target of Meaning: Precision vs. Accuracy in Communication
Picture a target with a bullseye, where the bullseye represents perfect communication of meaning. Just as archers might hit different parts of the target, our attempts at communication can vary in both precision and accuracy.
Consider four scenarios:
Low Precision, Low Accuracy: When describing our autumn maple tree, we might say “it’s a big tree with colourful leaves.” This description is neither precise (it could apply to many trees) nor accurate (it misses the specific characteristics that make our maple unique). The communication scatters widely and misses the mark entirely.
High Precision, Low Accuracy: We might describe the tree as “a 47-foot-tall maple with exactly 23,487 leaves displaying RGB colour values of #FF4500.” This description is precisely specific but entirely misses the meaningful essence of the tree we’re trying to describe. Like arrows clustering tightly in the wrong spot, we’re consistently missing the point.
Low Precision, High Accuracy: “It’s sort of spreading out, you know, with those typical maple leaves turning reddish-orange, kind of graceful looking.” While imprecise, this description might actually capture something true about the tree’s essence. The arrows scatter, but their centre of mass hits the target.
High Precision, High Accuracy: This ideal state is rarely achievable in complex communication. Even in our simple geometric example of drawing a 1 mm square, achieving both precise specifications and accurate execution proves challenging. With natural objects and abstract concepts, this challenge compounds exponentially.
The Communication Paradox
This framework reveals a crucial paradox in language: often, our attempts to increase precision (by adding more specific details) can actually decrease accuracy (by moving us further from the essential meaning we’re trying to convey). Consider legal documents: their high precision often comes at the cost of accurately conveying meaning to most readers.
Implications for AI Communication
This precision-accuracy framework helps explain why AI systems like the Midjourney experiment described below show asymptotic behaviour. The system might achieve high precision (consistently generating similar images based on descriptions) while struggling with accuracy (matching the original intended image), or vice versa. The gap between human intention and machine interpretation often manifests as a trade-off between these two qualities.
Our challenge, both in human-to-human and human-to-AI communication, isn’t to achieve perfect precision and accuracy—a likely impossible goal—but to find the optimal balance for each context. Sometimes, like in poetry, low precision might better serve accurate meaning. In other contexts, like technical specifications, high precision becomes crucial despite potential sacrifices in broader accuracy.
The Power and Limits of Distinction
This leads us to a crucial insight from Ferdinand de Saussure’s semiotics about the relationship between signifier (the word) and signified (the concept or object). Language proves remarkably effective when its primary task is distinction among a limited set. In a garden containing three trees—a pine, a maple, and a willow—asking someone to “point to the pine” will likely succeed. The shared understanding of these categorical distinctions allows for reliable communication.
However, this effectiveness dramatically diminishes when we move from distinction to description. In a forest of a thousand pines, describing one specific tree becomes nearly impossible. Each additional descriptive detail (“the tall one with a bent branch pointing east”) paradoxically makes precise identification both more specific and less likely to succeed.
An AI Experiment in Description
To explore this phenomenon systematically, I conducted an experiment using Midjourney 6.1, a state-of-the-art image generation AI. The methodology was simple (a rough sketch in code follows the list):
Generate an initial image
Describe the generated image in words
Use that description to generate a new image
Repeat the process multiple times
Attempt to refine the description to close the gap
Continue iterations
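In code, the loop looks roughly like this. It is only a sketch: generate_image and describe_image are hypothetical placeholders for whatever generation and captioning tools you have on hand, since Midjourney offers no official API and, in practice, each step here was done by hand through /imagine and /describe.

# A minimal sketch of the iteration loop. generate_image and describe_image are
# hypothetical placeholders, not real Midjourney calls.

def generate_image(prompt: str):
    raise NotImplementedError("plug in your image generator here")

def describe_image(image) -> str:
    raise NotImplementedError("plug in your image-captioning model here")

def iterate(seed_prompt: str, rounds: int = 5):
    original = generate_image(seed_prompt)      # 1. generate an initial image
    description = describe_image(original)      # 2. describe it in words
    history = [(seed_prompt, original)]
    for _ in range(rounds):                     # 3-6. regenerate, re-describe, refine, repeat
        image = generate_image(description)
        history.append((description, image))
        description = describe_image(image)     # the refined description feeds the next pass
    return original, history                    # compare each pass against the original by eye

Each pass closes some gaps and opens others; the comparison against the original is, as in the experiment itself, done by eye.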
The results support an asymptotic hypothesis: while subsequent iterations might approach the original image, they never fully converge. This isn’t merely a limitation of the AI system but rather a demonstration of language’s fundamental insufficiency.
A cute woman and her dog stand next to a tree
One can already analyse this for improvements, but let’s parse it together.
a cute woman
With this, we know we are referencing a woman, a female of the human species. There are billions of women in the world. What does she look like? What colour, height, ethnicity, and phenotypical attributes does she embody?
We also know she’s cute – whatever that means to the sender and receiver of these instructions.
I used an indefinite article, a, so there is one cute woman. Is she alone, or is she one from a group?
It should be obvious that we could provide more adjectives (and perhaps adverbs) to better convey our subject. We’ll get there, but let’s move on.
and
We’ve got a conjunction here. Let’s see what it connects to.
her dog
She’s with a dog. In fact, it’s her dog. This possession may not be conveyable or differentiable from some arbitrary dog, but what type of dog is it? Is it large or small? What colour coat? Is it groomed? Is it on a leash? Let’s continue.
stand
It seems that the verb stand refers to the woman, but is the dog also standing, or is she holding it? More words could qualify this statement better.
next to a tree
A tree is referenced. Similar questions arise regarding this tree. At a minimum, there is one tree of some variety. She and her dog are next to it. Is she on the right or left of it?
We think we can refine our statements with precision and accuracy, but can we? Might we just settle for “close enough”?
Let’s see how AI interpreted this statement.
Image: Eight Midjourney renders from the prompt: A cute woman and her dog stand next to a tree. I’ll choose one of these as my source image.
Let’s deconstruct the eight renders above. Compositionally, we can see that each image contains a woman, a dog, and a tree. Do any of these match what you had in mind? First, let’s see how Midjourney describes the first image.
In a bout of hypocrisy, Midjourney refused to /DESCRIBE the image it just generated.
Last Midjourney description for now.
Let’s cycle through them in turn.
A woman is standing to the left of an old-growth tree – twice identified as an oak tree. She’s wearing faded blue jeans and a loose light-coloured T-shirt. She’s got medium-length (maybe) red-brown hair in a small ponytail. A dog – her black and white dog, variously identified as a pitbull, an American Foxhound, and an American Bulldog – is also standing on his hind legs. I won’t even discuss the implied intent projected on the animal – happy, playful, wants attention… In two of the descriptions, she’s said to be training it. They appear to be in a somewhat residential area given the automobiles in the background. We see descriptions of season, time of day, lighting, angle, and quality.
A woman is standing to the right of an old-growth tree. She’s wearing short summer attire. Her dog is perched on the tree.
An older woman and her dog closer up.
A rear view of both a woman and her dog near an oak tree.
As it turned out, I wasn’t thrilled with any of these images, so I rendered a different one. Its description follows.
The consensus is that ‘a beautiful girl in a white dress and black boots stands next to a tree’ with a Jack Russell Terrier dog. I see birch trees and snow. It’s overcast. Let’s spend some time trying to reproduce it. To start, I’m consolidating the above descriptions. I notice some elements are missing, but we’ll add them as we try to triangulate to the original image.
A beautiful girl in a white dress and black boots stands next to a tree in a winter forest with birch trees. A Jack Russell Terrier dog is nearby, and the weather is cloudy. The photo has a cinematic style. --ar 128:71
This is pretty far off the mark. We need to account for the overall setting and composition, relative positioning, clothing, hair, camera, perspective – even lighting and film emulsion.
Let’s see how we can refine it with some adjectives. Before this, I asked Anthropic’s Claude 3.5 to describe the image. Perhaps we’ll get more details.
An atmospheric winter photograph was taken in what appears to be a birch and pine forest. The ground is partially covered with snow, creating a patchy pattern of white snow and brown leaves or soil. In the image, there's a person wearing winter attire consisting of a white knit turtleneck sweater, a white skirt, dark knee-high socks, and grey lace-up boots. They're standing against a large tree trunk with their arms crossed. Next to them is a Jack Russell Terrier mix dog with distinctive brown and white colouring. The dog appears to be well-behaved, sitting attentively beside the tree. The composition is quite striking, with the massive dark tree trunk serving as a central focal point against the misty, winter forest background. The white clothing of the person creates a visual echo with the white bark of birch trees visible in the background, as well as with the patches of snow on the ground. The overall mood of the photo is somewhat moody and contemplative, enhanced by what appears to be overcast lighting and the misty quality of the forest in the background. The colour palette is largely monochromatic, dominated by whites, greys, and browns, which adds to the winter atmosphere of the scene.
We don’t seem to be moving in a good direction. Let’s modify the initial prompt.
A beautiful girl in a white dress and tall laced black boots stands facing the camera to the right of large oak tree centred in the foreground of a winter forest with birch trees in the background. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is cloudy. The photo has a cinematic style. --ar 128:71
I’ll allow the results to speak for themselves. Let’s see if we can’t get her out of the wedding gown and into a white jumper and skirt. I’ll bold the amendments.
A beautiful girl in a white jumper and skirt wearing black leggings and tall laced black boots stands facing the camera to the right of large oak tree centred in the foreground of a winter forest with birch trees in the background. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is cloudy. The photo has a cinematic style. --ar 128:71
A beautiful young woman with long brown hair pulled to the side of her face in a white jumper and white skirt wearing black leggings under tall laced black boots stands facing the camera to the right of large oak tree centred in the foreground of a winter forest with birch trees in the background. Patchy snow is on the ground. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is overcast. The photo has a cinematic style. --ar 128:71
What gives?
I think my point has been reinforced. I’m getting nowhere fast. Let’s give it one more go and see where we end up. I’ve not got a good feeling about this.
A single large oak tree centred in the foreground of a winter forest with birch trees in the background. Patches of snow is on the ground. To the right of the oak tree stands a beautiful young woman with long brown hair pulled to the side of her face in a white jumper and white skirt wearing black boots over tall laced black boots. She stands facing the camera. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is overcast. The photo has a cinematic style. --ar 128:71
With this last one, I re-uploaded the original render along with this text prompt. Notice that the girl now looks the same and the scene (mostly) appears to be in the same location, but there are still challenges.
After several more divergent attempts, I decided to focus on one element – the girl.
As I regard the image, I’m thinking of a police sketch artist. They get sort of close, don’t they? They’re experts. I’m not confident that I even have the vocabulary to convey accurately what I see. How do I describe her jumper? Is that a turtleneck or a high collar? It appears to be knit. Is it wool or some blend? Does that matter for an image? Does this pleated skirt have a particular name or shade of white? It looks as though she’s wearing black leggings – perhaps polyester. And those boots – how do I describe them? I’m rerunning just the image above through a describe function to see if I can get any closer.
These descriptions are particularly interesting and telling. First, I’ll point out that AI attempts to identify the subject. I couldn’t find Noa Levin by a Google search, so I’m not sure how prominent she might be if she even exists at all in this capacity. More interesting still, the AI has placed her in a scenario where the pose was taken after a match. Evidently, this image reflects the style of photographer Guy Bourdin. Perhaps the jumper mystery is solved. It identified a turtleneck. I’ll ignore the tree and see if I can capture her with an amalgamation of these descriptions. Let’s see where this goes.
A photo-realistic portrait of Israeli female soccer player Noa Levin wearing a white turtleneck sweater, arms crossed, black boots, and a short skirt, with long brown hair, standing near a tree in a winter park. The image captured a full-length shot taken in a studio setting, using a Canon EOS R5 camera with a Canon L-series 80mm f/2 lens. The image has been professionally color-graded, with soft shadows, low contrast, and a clean, sharp focus. --ar 9:16
Close-ish. Let’s zoom in to get better descriptions of various elements starting with her face and hair.
Now, she’s a sad and angry Russian woman with (very) pale skin; large, sad, grey eyes; and long, straight brown hair. Filmed in the style of either David LaChapelle or Alini Aenami (apparently misspelt from Alena Aenami). One thinks it was a SnapChat post. I was focusing on her face and hair, but it notices her wearing a white (oversized yet form-fitting) jumper with crossed arms.
I’ll drop the angry bit – and then the sad.
Stick a fork in it. I’m done. Perhaps it’s not that language is insufficient; it’s that my language skills are insufficient. If you can get closer to the original image, please forward the image, the prompt, and the seed, so I can post it.
The Complexity Gradient
A clear pattern emerges when we examine how language performs across different levels of complexity:
Categorical Distinction (High Success)
Identifying shapes among limited options
Distinguishing between tree species
Basic colour categorisation
Simple Description (Moderate Success)
Basic geometric specifications
General object characteristics
Broad emotional states
Complex Description (Low Success)
Specific natural objects
Precise emotional experiences
Unique instances within categories
Abstract Concepts (Lowest Success)
Philosophical ideas
Personal experiences
Qualia
As we move up this complexity gradient, the gap between intended meaning and received understanding widens exponentially.
The Tolerance Problem
Understanding these limitations leads us to a practical question: what level of communicative tolerance is acceptable for different contexts? Just as engineering embraces acceptable tolerances rather than seeking perfect measurements, perhaps effective communication requires:
Acknowledging the gap between intended and received meaning
Establishing context-appropriate tolerance levels
Developing better frameworks for managing these tolerances
Recognising when precision matters more than accuracy (or vice versa)
Implications for Human-AI Communication
These insights have particular relevance as we develop more sophisticated AI systems. The limitations we’ve explored suggest that:
Some communication problems might be fundamental rather than technical
AI systems may face similar boundaries as human communication
The gap between intended and received meaning might be unbridgeable
Future development should focus on managing rather than eliminating these limitations
Conclusion
Perhaps this is a simple exercise in mental masturbation. Language’s insufficiency isn’t a flaw to be fixed – by definition, it can’t be – but a fundamental characteristic to be understood and accommodated. The gap between intended and received meaning may be unbridgeable, but acknowledging this limitation is the first step toward more effective communication. As we continue to develop AI systems and push the boundaries of human-machine interaction, this understanding becomes increasingly critical.
Rather than seeking perfect precision in language, we might instead focus on:
Developing new forms of multimodal communication
Creating better frameworks for establishing shared context
Accepting and accounting for interpretative variance
Building systems that can operate effectively within these constraints
Understanding language’s limitations doesn’t diminish its value; rather, it helps us use it more effectively by working within its natural constraints.