Slice of Life

This is a timeline of foundational ideas on which I’ve built my Language Insufficiency Hypothesis. I spent a day compositing this timeline in Adobe Illustrator. I hadn’t used Illustrator in decades. It’s got a lot of options, so I’ve been leveraging ChatGPT as a help guide. It seems the UI/UX could be improved, but I’m sure I’ll get used to it. I’ve got another couple dozen to go. I’m hoping a learning/efficiency curve kicks in.

WordPress wouldn’t accept or render my first few file types, even though they’re listed as acceptable – SVG, PNG, TIF, WEBP – so I opted for good, old-fashioned BMP, which makes the file pretty hefty for inline rendering on a blog. Still, I want to share, so here is a late draft.
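For anyone who hits the same wall, a quick local conversion is one workaround. Here's a minimal sketch using the Pillow library; the filename is a placeholder for whatever you export from Illustrator:

```python
# Minimal sketch: convert a PNG export to BMP for WordPress.
# "timeline.png" is a placeholder filename.
from PIL import Image

img = Image.open("timeline.png")
img.convert("RGB").save("timeline.bmp")  # flatten any transparency, then save as BMP
```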

I’m no graphic artist, so it’s relatively primitive. I’ve been experimenting with colours, but the book is black and white, so I’ll probably keep the timeline monochrome as well.

There are a lot of data points to fit on this timeline, and I’m limited to a 6″ x 9″ form factor. Except for the first 3 entries, the items are to scale by year. I have more information than I can really fit. In fact, this started as two timelines; there was so much crossover that I merged them into one, forcing me to drop some meta information and lose some detail. Design is hard.

It looks fine on the page. I don’t want to reduce the font sizes to cram in more info. I’ll probably put the more descriptive copy in an appendix, if not on a facing page.

Language Insufficiency Hypothesis, Chapter One

Well, a word cloud anyway.

I’m busy focusing on the manuscript for Chapter One of my Language Insufficiency Hypothesis. Scanning the copy—over 4,000 words—I noticed some repetition. My eye caught ‘fundamental’, which I use often in speech as well. I decided to try a word cloud to illustrate common words.

As this is a hypothesis on the limitations of language, the fact that these are prominent is no surprise, but 34 instances of ‘fundamental’ seem fundamentally off—see what I did there? And this is only the first chapter of eight.

I’ll review the content to see if I can remove some instances—perhaps entire sentences—or choose a synonym to see where it leads.
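If you want to run the same audit on your own draft, here's a minimal sketch in Python. The filename is a placeholder, and the sentence split is deliberately naive; it's just enough for an editorial pass:

```python
# Count a word's occurrences and list the sentences containing it,
# so each use can be reviewed for cutting or substitution.
# "chapter_one.txt" is a placeholder filename.
import re
from collections import Counter

text = open("chapter_one.txt", encoding="utf-8").read()
words = re.findall(r"[a-z']+", text.lower())

counts = Counter(words)
print(counts["fundamental"])  # 34 in my draft

# Naive sentence split; good enough for a quick review.
for sentence in re.split(r"(?<=[.!?])\s+", text):
    if "fundamental" in sentence.lower():
        print("-", sentence.strip())
```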

The manuscript is in its fourth full revision, and it feels closer to completion. I aim for an early 2025 release date, but time will tell.

As usual, I employ AI (Claude, in this case) to do some heavy lifting.

Having analysed the chapter, I’ve identified several places where “fundamental” could be substituted or removed to enhance the linguistic diversity and rhetorical effectiveness of the text. Here are some examples:

  1. “These limitations, we argue, manifest not merely as practical challenges but as fundamental features of how meaning systems operate…”
    • Alternative: “These limitations, we argue, manifest not merely as practical challenges but as intrinsic features of how meaning systems operate…”
  2. “This theoretical perspective fundamentally reframes several longstanding problems in linguistics and philosophy.”
    • Alternative: “This theoretical perspective radically reframes several longstanding problems in linguistics and philosophy.”
  3. “These patterns suggest inherent rather than circumstantial constraints – limitations that emerge from language’s structure itself rather than from inadequate definition or imprecise usage.”
    • Here, “fundamental” could be omitted entirely without losing the core meaning of the sentence.
  4. “Understanding these constraints as systematic features rather than implementation failures provides a crucial theoretical foundation for the Language Insufficiency Hypothesis.”
    • Alternative: “Understanding these constraints as systematic features rather than implementation failures provides a vital theoretical foundation for the Language Insufficiency Hypothesis.”
  5. “These cognitive constraints manifest not as mere practical impediments but as necessary features of how meaning is processed and transmitted through human neural networks.”
    • Here, “fundamental” could be replaced with “essential” or simply omitted.

Notice that 3 of the 5 flagged passages don’t actually contain the offending word; the quoted sentences already employ synonyms. This offers a glimpse into how loosely LLMs match language, associating by sense rather than by exact string. Perhaps that’s an article for another day.

Where you from, Homie?

This skit is a comical take on in-group versus out-group language insufficiency. It’s a couple of years old, so you may have seen it before.

This video illustrates how easy it is for miscommunication to occur in mixed-group settings.
Trigger Warning: The humour is a bit weak and the focus is on stereotypes. If this isn’t quite up your street, just move on. Nothing to see here.

Beware the Bots: A Cautionary Tale on the Limits of Generative AI

Generative AI (Gen AI) might seem like a technological marvel, a digital genie conjuring ideas, images, and even conversations on demand. It’s a brilliant tool, no question; I use it daily for images, videos, and writing, and overall, I’d call it a net benefit. But let’s not overlook the cracks in the gilded tech veneer. Gen AI comes with its fair share of downsides—some of which run as deep as the Mariana Trench.

First, a quick word on preferences. Depending on the task at hand, I tend to use OpenAI’s ChatGPT, Anthropic’s Claude, and Perplexity.ai, with a particular focus on Google’s NotebookLM. For this piece, I’ll use NotebookLM as my example, but the broader discussion holds for all Gen AI tools.

Now, as someone who’s knee-deep in the intricacies of language, I’ve been drafting a piece supporting my Language Insufficiency Hypothesis. My hypothesis is simple enough: language, for all its wonders, is woefully insufficient when it comes to conveying the full spectrum of human experience, especially as concepts become abstract. Gen AI has become an informal editor and critic in my drafting process. I feed in bits and pieces, throw work-in-progress into the digital grinder, and sift through the feedback. Often, it’s insightful; occasionally, it’s a mess. And herein lies the rub: with Gen AI, one has to play babysitter, comparing outputs and sending responses back and forth among the tools to spot and correct errors. Like cross-examining witnesses, if you will.
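To give a flavour of that back-and-forth, here's a rough sketch of the loop in Python. It assumes the official openai and anthropic client libraries with API keys set in the environment; the model names and the prompt are placeholders, and the actual cross-examination still happens by eye:

```python
# Rough sketch: send the same draft to two models and compare the feedback.
# Assumes the official openai and anthropic Python clients and API keys
# in the environment. Model names below are placeholders.
from openai import OpenAI
from anthropic import Anthropic

PROMPT = "Critique this draft for clarity, accuracy, and repetition:\n\n{draft}"

def ask_openai(draft: str) -> str:
    resp = OpenAI().chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": PROMPT.format(draft=draft)}],
    )
    return resp.choices[0].message.content

def ask_claude(draft: str) -> str:
    msg = Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT.format(draft=draft)}],
    )
    return msg.content[0].text

draft = open("chapter_one.txt", encoding="utf-8").read()
for name, critique in (("ChatGPT", ask_openai(draft)), ("Claude", ask_claude(draft))):
    print(f"=== {name} ===\n{critique}\n")
```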

But NotebookLM is different from the others. While it’s designed for summarisation, it goes beyond by offering podcasts—yes, podcasts—where it generates dialogue between two AI voices. You have some control over the direction of the conversation, but ultimately, the way it handles and interprets your input depends on internal mechanics you don’t see or control.

So, I put NotebookLM to the test with a draft of my paper on the Language Effectiveness-Complexity Gradient. The model I’m developing posits that as terminology becomes more complex, it also becomes less effective. Some concepts, the so-called “ineffables,” are essentially untranslatable, or at best, communicatively inefficient. Think of describing the precise shade of blue you can see but can’t quite capture in words—or, to borrow from Thomas Nagel, explaining “what it’s like to be a bat.” NotebookLM managed to grasp my model with impressive accuracy—up to a point. It scored between 80 and 100 percent on interpretations, but when it veered off course, it did so spectacularly.

For instance, in one podcast rendition, the AI’s male voice attempted to give an example of an “immediate,” a term I use to refer to raw, preverbal sensations like hunger or pain. Instead, it plucked an example from the ineffable end of the gradient, discussing the experience of qualia. The slip was obvious to me, but imagine this wasn’t my own work. Imagine instead a student relying on AI to summarise a complex text for a paper or exam. The error might go unnoticed, resulting in a flawed interpretation.
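To make the slip concrete, here's a toy sketch of the gradient as code. Only the two endpoint categories, immediates and ineffables, come from the model; the numeric scores and the linear decay are invented purely for illustration:

```python
# Toy sketch of the Language Effectiveness-Complexity Gradient.
# The endpoint categories come from the model; the scores and the
# linear decay are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Concept:
    term: str
    complexity: float  # 0 = raw/preverbal, 1 = abstract/untranslatable

def effectiveness(c: Concept) -> float:
    # The model posits effectiveness falls as complexity rises;
    # linear decay is a stand-in for whatever the real shape is.
    return 1.0 - c.complexity

hunger = Concept("hunger", 0.05)  # an "immediate": raw, preverbal sensation
qualia = Concept("qualia", 0.95)  # an "ineffable": essentially untranslatable

for c in (hunger, qualia):
    print(f"{c.term}: effectiveness ~ {effectiveness(c):.2f}")

# NotebookLM's slip, in these terms: asked for an immediate,
# it reached for qualia, i.e. the wrong end of the gradient.
```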

The risks don’t end there. Gen AI’s penchant for generating “creative” content is notorious among coders. Ask ChatGPT to whip up some code, and it’ll eagerly oblige—sometimes with disastrous results. I’ve used it for macros and simple snippets, and for the most part, it delivers, but I’m no coder. For professionals, it can produce buggy or invalid code, and it has, leading to all sorts of confusion and frustration.

Ultimately, these tools demand vigilance. If you’re asking Gen AI to help with homework, you might find it’s as reliable as a well-meaning but utterly clueless parent who’s keen to help but hasn’t cracked a textbook in years. And as we’ve all learned by now, well-meaning intentions rarely translate to accurate outcomes.

The takeaway? Use Gen AI as an aid, not a crutch. It’s a handy tool, but the moment you let it think for you, you’re on shaky ground. Keep it at arm’s length; like any assistant, it can take you far—just don’t ask it to lead.