The Insufficiency of Language Meets Generative AI

I’ve written a lot on the insufficiency of language, and it’s not even an original idea. Language, our primary tool for sharing thoughts and ideas, harbours a fundamental flaw: it is inherently insufficient for conveying precise meaning. Recent developments in artificial intelligence, however, provide us with new ways to illuminate and examine this limitation. Through a progression from simple geometry to complex abstractions, we can explore how language both serves and fails us in different contexts.

The Simple Made Complex

Consider what appears to be a straightforward instruction: Draw a 1-millimetre square in the centre of an A4 sheet of paper using an HB pencil and a ruler. Despite the mathematical precision of these specifications, two people following these exact instructions would likely produce different results. The variables are numerous: ruler calibration, pencil sharpness, line thickness, paper texture, applied pressure, interpretation of “centre,” and even ambient conditions affecting the paper.

This example reveals a paradox: the more precisely we attempt to specify requirements, the more variables we introduce, creating additional points of potential divergence. Even in mathematics and formal logic—languages specifically designed to eliminate ambiguity—we cannot escape this fundamental problem.

Precision vs Accuracy: A Useful Lens

The scientific distinction between precision and accuracy provides a valuable framework for understanding these limitations. In measurement, precision refers to the consistency of results (how close repeated measurements are to each other), while accuracy describes how close these measurements are to the true value.

Returning to our square example:

  • Precision: Two people might consistently reproduce their own squares with exact dimensions
  • Accuracy: Yet neither might capture the “true” square we intended to convey
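The distinction can be made concrete with a small numerical sketch. Suppose two people each measure the side of their drawn “1 mm” square four times (all numbers here are invented for illustration); Python’s `statistics` module captures both ideas:

```python
import statistics

def spread_and_error(measurements, true_value):
    """Precision is the spread of repeated measurements (smaller stdev
    = more precise); accuracy is how far their mean sits from the true
    value (smaller error = more accurate)."""
    spread = statistics.stdev(measurements)
    error = abs(statistics.mean(measurements) - true_value)
    return spread, error

# Hypothetical side lengths (mm) of squares meant to be exactly 1 mm:
person_a = [1.20, 1.21, 1.19, 1.20]  # consistent, but consistently off
person_b = [0.90, 1.10, 1.00, 1.00]  # scattered, but centred on 1 mm

spread_a, error_a = spread_and_error(person_a, 1.0)
spread_b, error_b = spread_and_error(person_b, 1.0)

print(spread_a < spread_b)  # True: A is the more precise
print(error_a > error_b)    # True: B is the more accurate
```

Person A is precise but inaccurate; person B is imprecise but accurate. Neither achieves both.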

As we move from geometric shapes to natural objects, this distinction becomes even more revealing. Consider a maple tree in autumn. We might precisely convey certain categorical aspects (“maple,” “autumn colours”), but accurately describing the exact arrangement of branches and leaves becomes increasingly difficult.

The Target of Meaning: Precision vs. Accuracy in Communication

Extending this measurement lens, imagine a target with a bullseye, where the bullseye represents perfect communication of meaning. Just as archers might hit different parts of a target, our attempts at communication can vary in both precision and accuracy.

Consider four scenarios:

  1. Low Precision, Low Accuracy
    When describing our autumn maple tree, we might say “it’s a big tree with colourful leaves.” This description is neither precise (it could apply to many trees) nor accurate (it misses the specific characteristics that make our maple unique). The communication scatters widely and misses the mark entirely.
  2. High Precision, Low Accuracy
    We might describe the tree as “a 47-foot tall maple with exactly 23,487 leaves displaying an RGB colour value of #FF4500.” This description is precisely specific but entirely misses the meaningful essence of the tree we’re trying to describe. Like arrows clustering tightly in the wrong spot, we’re consistently missing the point.
  3. Low Precision, High Accuracy
    “It’s sort of spreading out, you know, with those typical maple leaves turning reddish-orange, kind of graceful looking.” While imprecise, this description might actually capture something true about the tree’s essence. The arrows scatter, but their centre of mass hits the target.
  4. High Precision, High Accuracy
    This ideal state is rarely achievable in complex communication. Even in our simple geometric example of drawing a 1mm square, achieving both precise specifications and accurate execution proves challenging. With natural objects and abstract concepts, this challenge compounds exponentially.

The Communication Paradox

This framework reveals a crucial paradox in language: often, our attempts to increase precision (by adding more specific details) can actually decrease accuracy (by moving us further from the essential meaning we’re trying to convey). Consider legal documents: their high precision often comes at the cost of accurately conveying meaning to most readers.

Implications for AI Communication

This precision-accuracy framework helps explain why our Midjourney experiment shows asymptotic behaviour. The system might achieve high precision (consistently generating similar images from a description) while struggling with accuracy (matching the original intended image), or vice versa. The gap between human intention and machine interpretation often manifests as a trade-off between these two qualities.

Our challenge, both in human-to-human and human-to-AI communication, isn’t to achieve perfect precision and accuracy—a likely impossible goal—but to find the optimal balance for each context. Sometimes, like in poetry, low precision might better serve accurate meaning. In other contexts, like technical specifications, high precision becomes crucial despite potential sacrifices in broader accuracy.

The Power and Limits of Distinction

This leads us to a crucial insight from Ferdinand de Saussure’s semiotics about the relationship between signifier (the word) and signified (the concept or object). Language proves remarkably effective when its primary task is distinction among a limited set. In a garden containing three trees—a pine, a maple, and a willow—asking someone to “point to the pine” will likely succeed. The shared understanding of these categorical distinctions allows for reliable communication.

However, this effectiveness dramatically diminishes when we move from distinction to description. In a forest of a thousand pines, describing one specific tree becomes nearly impossible. Each additional descriptive detail (“the tall one with a bent branch pointing east”) paradoxically makes precise identification both more specific and less likely to succeed.

An AI Experiment in Description

To explore this phenomenon systematically, I conducted an experiment using Midjourney 6.1, a state-of-the-art image generation AI. The methodology was simple:

  1. Generate an initial image
  2. Describe the generated image in words
  3. Use that description to generate a new image
  4. Repeat the process multiple times
  5. Attempt to refine the description to close the gap
  6. Continue iterations

The results support an asymptotic hypothesis: while subsequent iterations might approach the original image, they never fully converge. This isn’t merely a limitation of the AI system but rather a demonstration of language’s fundamental insufficiency.
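The non-convergence can be illustrated with a toy model (no real image generator involved — all names and numbers here are my own invention): treat an image as a set of features, let “describing” keep only a fraction of them, and let “generating” honour the description while inventing the rest. Round-tripping through the lossy description never restores the original:

```python
import random

random.seed(0)       # fixed seed so the toy run is repeatable
FEATURES = 100       # a toy "image" is a set of feature IDs

def describe(image, kept=0.8):
    """Language captures only part of what is in the image."""
    return {f for f in image if random.random() < kept}

def generate(description):
    """The generator honours the description, then invents the rest."""
    image = set(description)
    while len(image) < FEATURES:
        image.add(random.randrange(FEATURES * 10))
    return image

def similarity(a, b):
    """Jaccard similarity: 1.0 means identical feature sets."""
    return len(a & b) / len(a | b)

original = set(range(FEATURES))
current = original
scores = []
for _ in range(6):
    current = generate(describe(current))
    scores.append(similarity(original, current))

print(scores)  # similarity degrades and never returns to 1.0
```

The model is crude, but it makes the mechanism visible: every pass through the describe–generate channel discards detail that the next pass cannot recover, so convergence back to the source is ruled out in principle, not just in practice.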

Let’s start with a simple prompt: A cute woman and her dog stand next to a tree. One can already analyse this for improvements, but let’s parse it together.

With this, we know we are referencing a woman, a female of the human species. There are billions of women in the world. What does she look like? What colour, height, ethnicity, and phenotypical attributes does she embody?

We also know she’s cute – whatever that means to the sender and receiver of these instructions.

I used an indefinite article, a, so there is one cute woman. Is she alone, or is she one from a group?

It should be obvious that we could provide more adjectives (and perhaps adverbs) to better convey our subject. We’ll get there, but let’s move on.

We’ve got a conjunction here. Let’s see what it connects to.

She’s with a dog. In fact, it’s her dog. This possession may not be conveyable – her dog may not be differentiable from some arbitrary dog – but what type of dog is it? Is it large or small? What colour coat? Is it groomed? Is it on a leash? Let’s continue.

It seems that the verb stand refers to the woman, but is the dog also standing, or is she holding it? More words could qualify this statement better.

A tree is referenced. Similar questions arise regarding this tree. At a minimum, there is one tree of some variety. She and her dog are next to it. Is she on the right or left of it?

We think we can refine our statements with precision and accuracy, but can we? Might we just settle for “close enough”?

Let’s see how AI interpreted this statement.

Image: Eight Midjourney renders from the prompt: A cute woman and her dog stand next to a tree. I’ll choose one of these as my source image.

Let’s deconstruct the eight renders above. Compositionally, we can see that each image contains a woman, a dog, and a tree. Do any of these match what you had in mind? First, let’s see how Midjourney describes the first image.

In a bout of hypocrisy, Midjourney refused to /DESCRIBE the image it just generated.

Last Midjourney description for now.

Let’s cycle through them in turn.

  1. A woman is standing to the left of an old-growth tree – twice identified as an oak tree. She’s wearing faded blue jeans and a loose light-coloured T-shirt. She’s got medium-length (maybe) red-brown hair in a small ponytail. A dog – her black and white dog, identified variously as a pitbull, an American Foxhound, and an American Bulldog – is standing on his hind legs. I won’t even discuss the implied intent projected on the animal – happy, playful, wants attention… In two of the descriptions, she’s said to be training it. They appear to be in a somewhat residential area given the automobiles in the background. We also see descriptions of season, time of day, lighting, angle, and quality.
  2. A woman is standing to the right of an old-growth tree. She’s wearing short summer attire. Her dog is perched on the tree.
  3. An older woman and her dog closer up.
  4. A rear view of both a woman and her dog near an oak tree.

As it turned out, I wasn’t thrilled with any of these images, so I rendered a different one. Its description follows.

The consensus is that ‘a beautiful girl in a white dress and black boots stands next to a tree’ with a Jack Russell Terrier dog. I see birch trees and snow. It’s overcast. Let’s spend some time trying to reproduce it. To start, I’m consolidating the above descriptions. I notice some elements are missing, but we’ll add them as we try to triangulate to the original image.

This is pretty far off the mark. We need to account for the overall setting and composition, relative positioning, clothing, hair, camera, perspective – even lighting and film emulsion.

Let’s see how we can refine it with some adjectives. Before this, I asked Anthropic’s Claude 3.5 to describe the image. Perhaps we’ll get more details.

We don’t seem to be moving in a good direction. Let’s modify the initial prompt.

I’ll allow the results to speak for themselves. Let’s see if we can’t get her out of the wedding gown and into a white jumper and skirt. I’ll bold the amendments.


What gives?

I think my point has been reinforced. I’m getting nowhere fast. Let’s give it one more go and see where we end up. I’ve not got a good feeling about this.

With this last one, I re-uploaded the original render along with this text prompt. Notice that the girl now looks the same and the scene (mostly) appears to be in the same location, but there are still challenges.

After several more divergent attempts, I decided to focus on one element – the girl.

As I regard the image, I’m thinking of a police sketch artist. They get sort of close, don’t they? They’re experts. I’m not confident that I even have the vocabulary to convey accurately what I see. How do I describe her jumper? Is that a turtleneck or a high collar? It appears to be knit. Is it wool or some blend? Does that matter for an image? Does this pleated skirt have a particular name or shade of white? It looks as though she’s wearing black leggings – perhaps polyester. And those boots – how to describe them? I’m rerunning just the image above through a describe function to see if I can get any closer.

These descriptions are particularly interesting and telling. First, I’ll point out that AI attempts to identify the subject. I couldn’t find Noa Levin by a Google search, so I’m not sure how prominent she might be if she even exists at all in this capacity. More interesting still, the AI has placed her in a scenario where the pose was taken after a match. Evidently, this image reflects the style of photographer Guy Bourdin. Perhaps the jumper mystery is solved. It identified a turtleneck. I’ll ignore the tree and see if I can capture her with an amalgamation of these descriptions. Let’s see where this goes.

Close-ish. Let’s zoom in to get better descriptions of various elements starting with her face and hair.

Now, she’s a sad and angry Russian woman with (very) pale skin; large, sad, grey eyes; and long, straight brown hair. Filmed in the style of either David LaChapelle or Alini Aenami (apparently misspelt from Alena Aenami). One thinks it was a Snapchat post. I was focusing on her face and hair, but it notices her wearing a white (oversized yet form-fitting) jumper and crossed arms.

I’ll drop the angry bit – and then the sad.

Stick a fork in it. I’m done. Perhaps it’s not that language is insufficient; it’s that my language skills are insufficient. If you can get closer to the original image, please forward the image, the prompt, and the seed, so I can post it.

The Complexity Gradient

A clear pattern emerges when we examine how language performs across different levels of complexity:

  1. Categorical Distinction (High Success)
    • Identifying shapes among limited options
    • Distinguishing between tree species
    • Basic colour categorization
  2. Simple Description (Moderate Success)
    • Basic geometric specifications
    • General object characteristics
    • Broad emotional states
  3. Complex Description (Low Success)
    • Specific natural objects
    • Precise emotional experiences
    • Unique instances within categories
  4. Abstract Concepts (Lowest Success)
    • Philosophical ideas
    • Personal experiences
    • Qualia

As we move up this complexity gradient, the gap between intended meaning and received understanding widens exponentially.

The Tolerance Problem

Understanding these limitations leads us to a practical question: what level of communicative tolerance is acceptable for different contexts? Just as engineering embraces acceptable tolerances rather than seeking perfect measurements, perhaps effective communication requires:

  • Acknowledging the gap between intended and received meaning
  • Establishing context-appropriate tolerance levels
  • Developing better frameworks for managing these tolerances
  • Recognizing when precision matters more than accuracy (or vice versa)
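The engineering analogy can be made concrete with a minimal sketch (the numbers are invented for illustration): a measurement is accepted not when it matches the target exactly, but when it falls within a declared tolerance band.

```python
def within_tolerance(measured, target, tol):
    """Engineering-style acceptance: a value passes when it falls
    inside the declared tolerance band around the target, rather
    than having to match the target exactly."""
    return abs(measured - target) <= tol

# A "1 mm" square actually drawn at 1.04 mm:
print(within_tolerance(1.04, 1.00, tol=0.05))  # True: fine for a sketch
print(within_tolerance(1.04, 1.00, tol=0.01))  # False: fails a strict spec
```

Communication may work the same way: the same utterance can be “good enough” in casual conversation and unacceptable in a contract, depending entirely on the tolerance the context declares.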

Implications for Human-AI Communication

These insights have particular relevance as we develop more sophisticated AI systems. The limitations we’ve explored suggest that:

  • Some communication problems might be fundamental rather than technical
  • AI systems may face similar boundaries as human communication
  • The gap between intended and received meaning might be unbridgeable
  • Future development should focus on managing rather than eliminating these limitations

Conclusion

Perhaps this is a simple exercise in mental masturbation. Language’s insufficiency isn’t a flaw to be fixed but a fundamental characteristic to be understood and accommodated. By definition, it can’t be fixed. The gap between intended and received meaning may be unbridgeable, but acknowledging this limitation is the first step toward more effective communication. As we continue to develop AI systems and push the boundaries of human-machine interaction, this understanding becomes increasingly critical.

Rather than seeking perfect precision in language, we might instead focus on:

  • Developing new forms of multimodal communication
  • Creating better frameworks for establishing shared context
  • Accepting and accounting for interpretative variance
  • Building systems that can operate effectively within these constraints

Understanding language’s limitations doesn’t diminish its value; rather, it helps us use it more effectively by working within its natural constraints.

Atheist, Agnostic, Other

This article appeared in my social feed: “What are the different types of atheism?” But the author makes at least two notable mistakes. Firstly, he conflates atheism with agnosticism, the former being about belief and the latter about knowledge. So, one can believe or disbelieve in something, but that doesn’t speak to knowing. This debate is specifically about gods, so one can believe in a god and claim to know; one can believe in a god but not know; one can disbelieve in gods and claim to know; and one can disbelieve in gods and not know. As for me, I am an igtheist: I don’t care about gods. It’s a silly place to spend my time.

For a theist or atheist, the existence of gods is a truth statement. For me, the question is not truth-apt; it’s ostensibly gibberish. Even then, I am still agnostic, which might also be ignostic because not only don’t I know, neither do I care.

As Ricky Gervais has pointed out (recasting per the linked article), if there are 10,000 gods, a typical Christian doesn’t believe in 9,999 gods. They believe in their god. Just 1. Of course, the other gods are nonsense. Ditto for Muslims. Ditto for Jews. In the end, they claim the same underlying deity, but they argue over which cohort He favours, and their god identifies as a male, so they’ve adopted male pronouns.

Secondly, whilst the author mentions religious and non-religious, he misses the spiritual cohort. This is a subset of non-religious. In some cases, I and many others might argue that spirituality is simply a personal religion, so the distinction would be one of community. The religious congregate en masse whilst the spiritual take this journey alone. One may also argue that some spiritual folks also congregate. I’ve attended more than one Wiccan or Pagan group event, but the ties may be looser than with a mainstream religion.

In the case of some spiritual adherents, non-religious is shorthand for being opposed to Big Religion. Perhaps not coincidentally, many of these are opposed to Big Pharma and Big Agriculture, but my purpose here is not a psychological profile.

I recently heard Robert Sapolsky say in a lecture that the religious live longer and are happier on average than the non-religious, which is to say the spiritual and the atheists alike, so he notes this could provide an underlying evolutionary explanation for religious belief. Neither will I comment further on this notion, but there you have it. Take it or leave it.

Meaningless


Whether in English or in French, I don’t believe Foucault ever uttered the words, ‘It is meaningless to speak in the name of – or against – Reason, Truth, or Knowledge’*, but I don’t think he’d disagree with the sentiment.


“All my analyses are against the idea of universal necessities in human existence.”

Michel Foucault

Foucault was a postmodernist, and on balance, political Conservatives (Rightists?) dislike the notion of postmodernism. Evidently, a lot of Postmodernists are also Leftists (Progressives or Liberals in the US), so somehow critics such as Jordan Peterson conflate the two clearly distinct concepts.

A basis for Conservatism is the notion of an objective truth, and despite recent sociopolitical trends, they at least say they are guardians of truth and purveyors of knowledge. Conservatives (OK, so I am broad-brushing here) are staunch individualists who believe strongly in possession and property, in the material, in an objective reality. Fundamentally, they are aligned to a monotheistic god or at least some discernible (and objective) moral compass.

On the Left, especially post-Enlightenment, they’ve substituted God with some anthropomorphic Nature. In fact, they find comfort in natural laws and human nature. Science is often their respite because science is objective. Isn’t it? Leftists are friends of Reason, and one can’t acquire enough knowledge. Moderation need not apply here; the more the merrier.

This being said, evidently, many on the Left seem to have abandoned this comfort zone. Of course, this may be because the Left-Right dichotomy doesn’t capture the inherent nuance, and so they were miscategorised—perhaps, much in the same manner as persons are miscategorised in a binary gender system. No. It must be something else.

In any case, both sides claim to be the parties of knowledge, reason, and truth because the opposing parties are clearly abject morons. There is no hint of irony in the situation where each side claims some objective notion of truth—whether divinely granted or self-evidently reasoned—yet they can’t resolve what the true truth is. If only the other side were more rational.

By now, we are well aware of the demise of homo economicus, the hyper-rational actor foundational to modern economic theory. In reality, humans are only rational given the loosest definitions, say, to (in most cases) know enough to get in the shade on a 37.2°C day. However, as behavioural economist Dan Ariely noted in the title of his book, people are Predictably Irrational. Ariely is just standing on the shoulders of Kahneman, Tversky, and Richard Thaler. My point is that humans are only marginally rational.

As I’ve written elsewhere, truth is nothing more than a rhetorical endpoint. It is hardly objective. It’s a matter of opinion. Unfortunately, systems of government and jurisprudence require this objective truth. In truth—see what I did there?—the social fabric requires a shared notion of truth.

A shared notion doesn’t imply that this notion is objective, but if it’s not objective, how does one resolve differences of opinion as to which is the better truth? Without establishing a frame and a lens, this is impossible. The problem is that frames and lenses are also relative. Whether the members accept a given frame or lens is also a matter of rhetoric. It’s turtles all the way down.


Even if all members agree on all parameters of truth at day 0, there is nothing to prevent opinions from changing or new members from rejecting these parameters. Such is always the problem with social contract theory. [How does one commit to a contract s/he is born into, with little recourse to rescind the contract, renegotiate terms, or choose a different contract option? The world is already carved up, and the best one can do is to jump from the frying pan into the fire.]

In the end, the notion of truth is necessary, but it doesn’t exist. Playing Devil’s advocate, let’s say that there is a single purveyor of Truth; let’s just say that it’s the monotheistic Abrahamic God of Judeo-Christian beliefs. There is no (known) way to ascertain that a human would have the privilege to know such a truth nor, if s/he were to encounter, say, a burning bush of some sort, that this entity would be conveying truth; so, we aren’t really in a better place. Of course, we could exercise faith and just believe, but this is a subjective action. We could also take Descartes’ line of logic and declare that a good God would not deceive us—sidestepping whether this ethereal being was good, as advertised. I’m afraid it’s all dead ends here, too.

And so, we are back to where we started: no objective truth, limited ability to reason, and some fleeting notion of knowledge. We are still left with nothing.

Enter the likes of Jordan Peterson, he with his fanciful notion of metaphysics and morality—a channeller of Carl Jung. His tactic is to loud-dog the listener and outshout them indignantly. His followers, already primed with a shared worldview, are adept (or inept) cheerleaders ready to uncritically echo his refrain. To them, with his virtue-ethical base steeped in consequentialism and awash in deontology, Peterson speaks the truth.

He also potentiates the selfish anti-collective germ and rage of the declining white man. He’s sort of a less entertaining Howard Stern for the cleverer by half crowd. He gives a voice to the voiceless—or perhaps the thoughtless. He uses ‘reason’ to back his emotional pleas. He finds a voice in the wilderness where white Western males are the oppressed. If only they hadn’t been born centuries earlier—albeit with iPhones and microwaves.

Those would be the days.

* I believe this phrase attributed to Foucault was a paraphrase by philosopher Todd May.

Binary: Meaning | -Meaning

There are 10 types of people in this world*:

  • Those who require meaning
  • Those who don’t require meaning

Of those who do not require meaning, I don’t have much to say, and anything I do say about them will, on reflection, be obvious to the people who do.

Some people—perhaps even the majority or super-majority of them—require meaning. Of these, some accept the doctrines offered by religion or society whilst others will construct meaning. Even those who don’t patently need meaning may still construct one anyway, whether employment, family, country, or duty.

People who require meaning feel that the world would devolve into chaos if no meaning were inherent, that people would become selfish anarchists, hooligans, and ne’er-do-wells. Most of these people (convincingly) convince themselves that there is some source of objective morality and that that reality is known. Inasmuch as, in this case, perception is reality, it doesn’t matter whether it is or isn’t.

Pinker’s possible objections in The Blank Slate notwithstanding—there may be some genetic component driving this delusion—this reality is a social construct.

I guess the point is that some people are incapable of accepting a world with no meaning. Unfortunately, politicians and Randian Objectivists (perhaps a large overlap in some circles) can prey on these gullible masses, and they can vilify others who are not so lucky in masking their motives.

* An age-old binary joke