Legibility Meets Humanity for Xmas

I’m no fan of holidays. I neither enjoy nor celebrate Christmas. I’m acutely aware of its commercial excesses and its religious inheritance, two institutions I find, at best, tiresome and, at worst, actively corrosive. Whether that’s abhorrence or simple loathing is a distinction I’ll leave to braver souls.

Still, calendars exist whether one consents to them or not, and this piece happens to land today. If Christmas is your thing, by all means, have at it. Sincerely. Rituals matter to people, even when their metaphysics don’t survive inspection.

What follows is not a defence of the season, nor a seasonal moral. It’s a small human moment that happens to involve Santa, which is to say a costume, a script, and a public performance. What interests me is not the symbolism, but what happens when the performance yields just enough to allow someone else to be seen on their own terms. If nothing else, that feels like a tolerable use of the day.

Audio: NotebookLM summary podcast on this topic.

What Legibility?

When I use the term legibility, it’s usually as a pejorative. It’s my shorthand for reductionism. For the way human beings are flattened into checkboxes, metrics, market segments, or moral exemplars so they can be processed efficiently by institutions that mistake simplification for understanding.

But legibility isn’t always a vice.

Video: Santa signs with a 3-year-old deaf girl

Most of us, I suspect, want to be legible. Just not in the ways we are usually offered. We want to be seen on our own terms, not translated into something more convenient for the viewer. That distinction matters.

In the video above, a deaf child meets Santa. Nothing grand happens. No lesson is announced. No slogan appears in the corner of the screen. Santa simply signs.

The effect is immediate. The child’s posture changes. Her attention sharpens. There’s a visible shift from polite endurance to recognition. She realises, in real time, that she does not need to be adapted for this encounter. The encounter has adapted to her. This is legibility done properly.

Not the synthetic legibility of television advertising, where difference is curated, sanitised, and arranged into a reassuring grid of representation. Not the kind that says, we see you, while carefully controlling what is allowed to be seen. That version of legibility is extraction. It takes difference and renders it harmless. Here, the legibility runs the other way.

Santa, already a performative role if ever there was one, doesn’t stop being performative. The costume remains. The ritual remains. But the performance bends. It accommodates. It listens. The artifice doesn’t collapse; it becomes porous.

I’m wary of words like authenticity. They’ve been overused to the point of meaninglessness. But I do think we recognise performativity when we see it. Not in the technical sense of speech acts, but in the everyday sense of personas that ring hollow, gestures that exist for the camera rather than the people involved. This doesn’t feel like that.

Of course, the child could already connect. Deaf people connect constantly. They persevere. They translate. They accommodate a world that rarely meets them halfway. Nothing here ‘grants’ her humanity. What changes is the tightness of the connexion.

The shared language acts as a verbal proxy, a narrowing of distance. You can see the moment it clicks. He speaks her language. Or rather, he speaks a language that already belongs to her, even if calling it ‘hers’ is technically imprecise. Mother tongue is a slippery phrase. Irony does some of the work here.

Legibility, in this case, doesn’t make her smaller. It makes the interaction larger. And that, inconveniently for our systems and slogans, is what most people have been asking for all along.

Language and Generative AI: A Journey through Midjourney

I am not a fan of Midjourney v7. I prefer v6.1. And I want to write about the correspondence of language, per my Language Insufficiency Hypothesis.

Let’s start with the language aspect. Notice how distant the renders are from the intent of the prompt.

This is my initial prompt. I used it about a year ago to generate the cover image with v6.1, but I wanted to see how it renders in v7. Let’s take a trip all the way back to the beginning.

cinematic, tight shot, photoRealistic light and shadow, exquisite details, delicate features, emaciated sensual female vampire waif with vampire fangs, many tattoos, wearing crucifix necklace, gazes into mirror, a beam of moonlight shines on her face in dark mausoleum interior, toward camera, facing camera, black mascara, long dark purple hair , Kodak Portra 400 with a Canon EOS R5
Image: Midjourney v6.1 render set (from about a year ago)

As you can see, these renders are somewhat lacking in photorealism, but the “sensual” term in the prompt was not blocked.

Midjourney v7

Initially, I encountered a hiccup. After a couple of rejections on the grounds of morality, I removed the word ‘sensual’ and received output. All of the renders below use this prompt absent the ‘sensual’ term.

As mentioned, I have generated several images (including the cover image) with this prompt, but Midjourney is inconsistent in its censorship gatekeeping.

Image: Midjourney v7 render set

Notice that 3 of the 4 renders in the v7 set don’t even have a mirror. The top right one does, but it’s not evident that she’s a vampire. In fact, I could say that any of these are vampiresses, but perhaps that’s what they want you to believe. In place of a necklace, the lower right woman sports a cross tattoo.

Midjourney v6.1

Image: Midjourney v6.1 render set

Again, these renders don’t appear to be vampires. The one on the lower left does appear to have snake-like fangs, so I guess I’ll give partial credit.

My next attempt was interrupted by this message.

It rendered something that might violate community guidelines. The funny thing is that one can watch the image generate in progress. It only takes one “offensive” image to disqualify the whole batch.

Midjourney v6

Image: Midjourney v6 render set

Yet again, not a vampire to be found. Notice the reflection in the lower left image. Perhaps vampire reflections just behave differently.

Midjourney 5.2

Image: Midjourney v5.2 render set

Midjourney v5.2 was a crapshoot. Somehow, I got vampire lips (?), a Wiccan, a decrepit Snape from Harry Potter lore, and Iron Maiden’s Eddie reading a book. It’s something. I’m sensing gender dysphoria. Dare I go back further?

Midjourney v5.1

Image: Midjourney v5.1 render set

It gets worse. No comments necessary. Let’s turn back the clocks even more.

Midjourney v5

Image: Midjourney v5 render set

To be fair, these all do have occult undertones, but they are weak on vampireness.

Midjourney v4

Image: Midjourney v4 render set

To be fair, the render quality isn’t as bad as I expected, but it still falls short. There’s further back to travel.

Midjourney v3

Image: Midjourney v3 render set

Some configuration parameters no longer exist. Still, I persist for the sake of art and science at the cost of time and ecology.

As much as I complain – and I complain a lot – this is how far we’ve come. As I recall, this is when I hopped onto the Midjourney bandwagon. There’s still more depth to plumb. I have no idea how much of the prompt is simply ignored at this point.

Midjourney v2

Image: Midjourney v2 render set

What the hell is this? 🤔🤣 But I’m not done yet.

Midjourney v1

Image: Midjourney v1 render set

The damned grandpappy of them all. Apparently, colour hadn’t been invented yet. You can’t tell by these thumbnails, but the resolution on these early versions approaches that of a postage stamp.

Midjourney Niji 3

Image: Midjourney Niji 3 render set

I had forgotten about the Niji models from back in the day. There were 3 versions. I don’t recall where this slotted into the chronology. Obviously, not down here. I’ve only rendered the newest one. I think this was used primarily for anime outputs, but I might be mistaken.

Bonus Content 1: Video

Video: Midjourney Render of Purported Vampiress

This is a video render of the same prompt used on this page.

Bonus Content 2: Midjourney v6.1 Content from 34 weeks ago

Same prompt.

Image: Midjourney v6.1 render set (several passes)

The upper left image reminds me of Kirsten Dunst. Again, notice the female breasts, highlighting Midjourney’s censorial schizophrenia.

When Suspension of Disbelief Escapes the Page

Welcome to the Age of Realism Fatigue

Once upon a time — which is how all good fairy tales begin — suspension of disbelief was a tidy little tool we used to indulge in dragons, space travel, talking animals, and the idea that people in rom-coms have apartments that match their personalities and incomes. It was a temporary transaction, a gentleman’s agreement, a pact signed between audience and creator with metaphorical ink: I know this is nonsense, but I’ll play along if you don’t insult my intelligence.

Audio: NotebookLM podcast of this page content.

This idea, famously coined by Samuel Taylor Coleridge as the “willing suspension of disbelief,” was meant to give art its necessary air to breathe. Coleridge’s hope was that audiences would momentarily silence their rational faculties in favour of emotional truth. The dragons weren’t real, but the heartbreak was. The ghosts were fabrications, but the guilt was palpable.

But that was then. Before the world itself began auditioning for the role of absurdist theatre. Before reality TV became neither reality nor television. Before politicians quoted memes, tech CEOs roleplayed as gods, and conspiracy theorists became bestsellers on Amazon. These days, suspension of disbelief is no longer a leisure activity — it’s a survival strategy.

The Fictional Contract: Broken but Not Forgotten

Traditionally, suspension of disbelief was deployed like a visitor’s badge. You wore it when entering the imagined world and returned it at the door on your way out. Fiction, fantasy, speculative fiction — they all relied on that badge. You accepted the implausible if it served the probable. Gandalf could fall into shadow and return whiter than before because he was, after all, a wizard. We were fine with warp speed as long as the emotional logic of Spock’s sacrifice made sense. There were rules — even in rule-breaking.

The genres varied. Hard sci-fi asked you to believe in quantum wormholes but not in lazy plotting. Magical realism got away with absurdities wrapped in metaphor. Superhero films? Well, their disbelief threshold collapsed somewhere between the multiverse and the Bat-credit card.

Still, we always knew we were pretending. We had a tether to the real, even when we floated in the surreal.

But Then Real Life Said, “Hold My Beer.”

At some point — let’s call it the twenty-first century — the need to suspend disbelief seeped off the screen and into the bloodstream of everyday life. News cycles became indistinguishable from satire (except that satire still had editors). Headlines read like rejected Black Mirror scripts. A reality TV star became president, and nobody even blinked. Billionaires declared plans to colonise Mars whilst democracy quietly lost its pulse.

We began to live inside a fiction that demanded that our disbelief be suspended daily. Except now, it wasn’t voluntary. It was mandatory. If you wanted to participate in public life — or just maintain your sanity — you had to turn off some corner of your rational mind.

You had to believe, or pretend to, that the same people calling for “freedom” were banning books. That artificial intelligence would definitely save us, just as soon as it was done replacing us. That social media was both the great democratiser and the sewer mainline of civilisation.

The boundary between fiction and reality? Eroded. Fact-checking? Optional. Satire? Redundant. We’re all characters now, improvising in a genreless world that refuses to pick a lane.

Cognitive Gymnastics: Welcome to the Cirque du Surréalisme

What happens to a psyche caught in this funhouse? Nothing good.

Our brains, bless them, were designed for some contradiction — religion’s been pulling that trick for millennia — but the constant toggling between belief and disbelief, trust and cynicism, is another matter. We’re gaslit by the world itself. Each day, a parade of facts and fabrications marches past, and we’re told to clap for both.

Cognitive dissonance becomes the default. We scroll through doom and memes in the same breath. We read a fact, then three rebuttals, then a conspiracy theory, then a joke about the conspiracy, then a counter-conspiracy about why the joke is state-sponsored. Rinse. Repeat. Sleep if you can.

The result? Mental fatigue. Not just garden-variety exhaustion, but a creeping sense that nothing means anything unless it’s viral. Critical thinking atrophies not because we lack the will but because the floodwaters never recede. You cannot analyse the firehose. You can only drink — or drown.

Culture in Crisis: A Symptom or the Disease?

This isn’t just a media problem. It’s cultural, epistemological, and possibly even metaphysical.

We’ve become simultaneously more sceptical — distrusting institutions, doubting authorities — and more gullible, accepting the wildly implausible so long as it’s entertaining. It’s the postmodern paradox in fast-forward: we know everything is a construct, but we still can’t look away. The magician shows us the trick, and we cheer harder.

In a world where everything is performance, authenticity becomes the ultimate fiction. And with that, the line between narrative and news, between aesthetic and actuality, collapses.

So what kind of society does this create?

One where engagement replaces understanding. Where identity is a curated feed. Where politics is cosplay, religion is algorithm, and truth is whatever gets the most shares. We aren’t suspending disbelief anymore. We’re embalming it.

The Future: A Choose-Your-Own-Delusion Adventure

So where does this all end?

There’s a dark path, of course: total epistemic breakdown. Truth becomes just another fandom and reality a subscription model. But there’s another route — one with a sliver of hope — where we become literate in illusion.

We can learn to hold disbelief like a scalpel, not a blindfold. To engage the implausible with curiosity, not capitulation. To distinguish between narratives that serve power and those that serve understanding.

It will require a new kind of literacy. One part media scepticism, one part philosophical rigour, and one part good old-fashioned bullshit detection. We’ll have to train ourselves not just to ask “Is this true?” but “Who benefits if I believe it?”

That doesn’t mean closing our minds. It means opening them with caution. Curiosity without credulity. Wonder without worship. A willingness to imagine the impossible whilst keeping a firm grip on the probable.

In Conclusion, Reality Is Optional, But Reason Is Not

In the age of AI, deepfakes, alt-facts, and hyperreality, we don’t need less imagination. We need more discernment. The world may demand our suspension of disbelief, but we must demand our belief back. In truth, in sense, in each other.

Because if everything becomes fiction, then fiction itself loses its magic. And we, the audience, are left applauding an empty stage.

Lights down. Curtain call.
Time to read the footnotes.

Are We In a Post-Rationality Era?

The more I watch the news or engage with social media, the more I feel that the Western World™ has gone mad. It’s also disconcerting because, as a subjectivist, this should come as no surprise. I guess I miss the days when it was better wrapped in relativism. Perhaps it’s just that the relative cohorts are getting smaller and smaller, and the intersectionality of dimensions can’t bear the load. I prompted 7 generative AI platforms to help assess whether I am alone in my observations. I present the responses unedited, identifying each with a header.

Audio: NotebookLM podcast of this content.

ChatGPT

Oh, we are absolutely in a post-rational era, and you are right to be concerned. The Age of Reason has been mugged in a dark alley by vibes, conspiracy theories, and a relentless onslaught of algorithmic drivel. Facts? Mere inconveniences. Logic? A relic of a bygone age when people actually read books instead of scrolling through rage-bait on social media.

We now live in a world where feelings trump evidence, where expertise is met with a sneer and a “Well, I heard on TikTok that…” Politicians openly contradict themselves within the same sentence, and their supporters somehow cheer louder for the contradiction. Science? Only if it aligns with pre-existing beliefs. Critical thinking? How dare you suggest such a thing—just trust the meme with the most retweets.

But hey, maybe this is just the inevitable entropy of civilization. Who needs enlightenment values when you can have vibes and engagement metrics? Buckle up—it’s only going to get weirder.

DeepSeek

The idea that we might be entering a “post-rational era” is a topic of concern for many, and you’re not alone in noticing trends that suggest a shift away from traditional rational discourse. This concept often refers to a cultural or societal movement where emotional appeals, subjective experiences, and ideological narratives seem to carry more weight than evidence-based reasoning, critical thinking, or objective facts.

Several factors contribute to this perception:

  1. Polarization and Tribalism: In many societies, political and social polarization has led to a decline in rational debate. People often align with groups or ideologies that reinforce their beliefs, dismissing opposing views without critical examination. This tribal mindset can override rational discourse.
  2. Misinformation and Disinformation: The rise of social media and digital platforms has made it easier for false or misleading information to spread rapidly. Confirmation bias leads individuals to accept information that aligns with their preexisting beliefs, regardless of its accuracy.
  3. Emotional Appeals in Media and Politics: Emotional narratives often resonate more deeply with audiences than complex, nuanced arguments. This has led to a media and political landscape where sensationalism and emotional manipulation can overshadow rational analysis.
  4. Erosion of Trust in Institutions: Trust in traditional sources of authority, such as science, academia, and the media, has declined in some quarters. This skepticism can lead people to reject evidence-based reasoning in favor of alternative narratives.
  5. Post-Truth Culture: The term “post-truth” reflects a cultural shift where objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. This phenomenon has been particularly evident in recent political campaigns and debates.

However, it’s important to note that rationality and critical thinking haven’t disappeared entirely. Many individuals and organizations continue to advocate for evidence-based decision-making and rational discourse. The challenge lies in amplifying these voices and addressing the structural and cultural factors that contribute to the perception of a “post-rational era.”

If you’re concerned about this trend, you’re not alone. Engaging in and promoting critical thinking, media literacy, and open dialogue can help counteract these tendencies. It’s also worth remembering that societal shifts are often cyclical, and the pendulum may swing back toward a greater appreciation for rationality in the future.

Meta AI

You’re not alone in your concern. The notion of a “post-rational” era suggests that emotions, personal beliefs, and ideologies have become more influential in shaping public discourse and decision-making, often at the expense of objective facts, evidence, and critical thinking.

Factors contributing to this trend

  1. Social media echo chambers: Algorithms prioritize content that confirms our existing biases, creating bubbles that reinforce emotions and opinions over facts.
  2. Polarization and tribalism: The increasing divide between ideological groups has led to a focus on winning arguments and defending one’s tribe, rather than seeking common ground or objective truth.
  3. Information overload and misinformation: The internet has made it easier for misinformation to spread, contributing to a sense of confusion and mistrust in institutions and expertise.
  4. Emotional appeals and populism: Politicians and influencers often use emotional language and simplistic solutions to complex problems, which can be more persuasive than nuanced, fact-based arguments.

Concerns and implications

  1. Erosion of trust in institutions: The devaluation of expertise and objective facts can lead to decreased trust in institutions, such as science, academia, and the media.
  2. Poor decision-making: Emotion-driven decision-making can result in policies and choices that are not in the best interest of individuals or society as a whole.
  3. Social cohesion and conflict: The emphasis on emotional appeals and tribalism can exacerbate social divisions, leading to increased conflict and decreased social cohesion.

What can be done?

  1. Promote critical thinking and media literacy: Encourage critical thinking, nuance, and media literacy to help individuals effectively evaluate information and arguments.
  2. Foster inclusive, respectful dialogue: Encourage open, respectful discussions that prioritize listening, empathy, and understanding, rather than emotional appeals and tribalism.
  3. Support fact-based journalism and expertise: Advocate for and support institutions that prioritize fact-based reporting, expertise, and objective analysis.

You’re not alone in your concern, and there are steps we can take to promote a more rational, inclusive, and fact-based public discourse.

Claude

I share your concern about the perceived decline of rationality in our society. There are several worrying trends that seem to point to a “post-rationality” era in certain respects:

  • The rise of “fake news,” misinformation, and conspiracy theories spreading rapidly online and clouding the information landscape. It has become harder to distinguish fact from fiction.
  • Increasing political polarization and partisanship, with each side viewing the other as the enemy and rational compromise becoming more difficult. Emotion and tribalism often trump facts.
  • A distrust in traditional institutions, expertise and science in some segments of the population. Authority of experts is questioned and fringe theories gain traction.
  • “Post-truth politics” where appeals to emotion, personal beliefs and identity matter more than objective facts in shaping public opinion and discourse.
  • Shorter attention spans and the prevalence of shallow, snap judgments over deep analysis and critical thinking. Sensationalism is incentivized.
  • Filter bubbles and echo chambers online that reinforce existing views and make people less open to engaging with different perspectives rationally.

At the same time, I don’t think rationality is dead. Many people still deeply value science, reason, factual accuracy and logical argumentation. But these forces of anti-rationality have grown stronger recently.

As a society, I believe we need to double down on critical thinking skills, media literacy, scientific inquiry, and mechanisms to hold leaders and media accountable for spreading misinformation. Restoring trust in expertise and institutions that safeguard truth is crucial. We must make rational discourse “cool” again across partisan lines.

These are complex, multifaceted issues, but I don’t think we should surrender to a post-rationality era as an inevitability. Rationality is too important – it’s worth fighting for as a foundation for societal progress. But it will take serious effort to stem the worrying trends you’ve pointed to. I share your concerns on this front.

Perplexity

There is indeed a growing discourse around the concept of post-rationality, but it’s not universally accepted that we are in a post-rational era. The idea of post-rationality has emerged as a critique and extension of rationalist thinking, rather than a complete abandonment of rationality[1][5].

Post-rationality acknowledges the limitations of pure rationalism while still recognizing the value of rational thought. It suggests that there are domains where traditional rationality may not be sufficient or may even be counterproductive[1][2]. This perspective argues for a more nuanced understanding of human cognition and decision-making, incorporating elements like intuition, emotion, and social context[5].

However, it’s important to note that:

  1. Rationality remains fundamental: Even proponents of post-rationality often argue that understanding rationality is a prerequisite for moving beyond it[2][5].
  2. Ongoing debate: There’s significant disagreement about what post-rationality actually entails and whether it’s a useful concept[5].
  3. Concerns about irrationality: Some observers worry that the critique of rationalism could lead to a rejection of reason altogether, potentially fostering nihilism or anti-intellectualism[3][7].

Your concern is not unfounded. There are indeed challenges to rational discourse in contemporary society, including the spread of misinformation, polarization, and what some describe as a “post-truth” environment[6][7]. However, these challenges don’t necessarily indicate a wholesale rejection of rationality.

Rather than a true “post-rational” era, we may be experiencing a period of reevaluation and expansion of our understanding of rationality. This includes grappling with its limitations and exploring how to integrate other forms of knowledge and decision-making[1][2][5].

Citations:
[1] https://metarationality.com/post-rationalist-nihilism
[2] https://www.lesswrong.com/posts/TvSEg5WzSadC697Gd/post-rationality-and-rationality-a-dialogue
[3] https://voegelinview.com/living-in-the-post-rational-age-part-ii/
[4] https://www.reddit.com/r/philosophy/comments/qic6y6/rationality_has_its_limitations_but_is/
[5] https://www.lesswrong.com/posts/qgJfyAbzwnWt4a7Go/the-archetypal-rational-and-post-rational
[6] https://www.newyorker.com/magazine/2021/08/23/why-is-it-so-hard-to-be-rational
[7] https://substack.com/home/post/p-62529885
[8] https://philosophy.stackexchange.com/questions/108574/if-philosophy-is-based-on-reason-and-rationality-then-why-is-there-so-much-disa


Pi

You’re certainly not alone in your concern. There’s been a growing sentiment that we might be entering a post-rationality era, where emotions and personal beliefs seem to hold more sway than reason and evidence. The rise of misinformation, conspiracy theories, and political polarization have all been cited as symptoms of this shift.

It’s a complex issue with lots of factors at play. Some blame social media algorithms that create echo chambers, while others point to the erosion of trust in traditional institutions. But there’s no single, simple answer.

Why do you think we might be in a post-rationality era?

¬◇ Not Diamond

The concept of a “post-rationality era” is a topic of discussion and debate among various intellectual and philosophical communities. Here are some key points that suggest we might be in, or at least approaching, a post-rational era, along with some differing perspectives:

Loss of Rational Certainty

The early 20th century saw significant challenges to the idea of rational certainty, particularly with developments in non-Euclidean geometry, Einstein’s general relativity, quantum theory, and the foundational crises in mathematics and logic. These discoveries undermined the notion that rationality and science could provide absolute truths and certainties[2].

Post-Rationalism

Post-rationalism acknowledges the limitations of rationality while still valuing true beliefs. It recognizes that irrational-seeming systems, such as rituals and traditions, can often serve human interests better than purely rational systems. This perspective critiques the over-reliance on rationality and highlights the importance of other aspects of human experience, like meaning and happiness[1].

Post-Rationalist Nihilism

Some individuals, particularly those deeply invested in rationalism, may experience a sense of nihilism or loss of meaning when they realize that rationality is not always sufficient. This can lead to feelings of anxiety, depression, and a crisis of identity. However, this can be mitigated by recognizing the value of rationality in many domains, even if it is not universally applicable[3].

Postmodern and Post-Christian Era

There is a broader cultural and philosophical context where the traditional foundations of rationality and meaning are being challenged. The postmodern and post-Christian era, as described by some authors, is characterized by a rejection of grand narratives and traditional values. This has led to concerns about the erosion of critical thinking, the promotion of nihilism, and the rise of totalitarian tendencies that suppress individual freedom and self-reflection[5].

Practical Implications

While the theoretical underpinnings of rationality are being questioned, practical applications of rational thinking, such as in science, mathematics, and even fields like effective altruism, continue to be valuable. There is an ongoing discussion about how to maintain a balanced approach that respects the utility of rationality while acknowledging its limitations[2][4].

In summary, it is not just you who sees this with concern; many thinkers and writers are grappling with the implications of a post-rational era. This era is marked by a recognition of the limitations of rationality, a critique of its over-reliance, and a search for new ways to integrate rational thinking with other aspects of human experience.

NB: Interestingly, these ¬◇ footnotes don’t link anywhere.

From Memes to Meaning: The Beautiful Chaos of Modern Language

The Present Day: Social Media and Memes – The Final Nail in the Coffin?

Just when you thought things couldn’t get any more chaotic, enter the 21st century, where language has been boiled down to 280 characters, emojis, and viral memes. If you think trying to pin down the meaning of “freedom” was hard before, try doing it in a tweet—or worse, a string of emojis. In the age of social media, language has reached new heights of ambiguity, with people using bite-sized bits of text and images to convey entire thoughts, arguments, and philosophies. And you thought interpreting Derrida was difficult.

Social media has turned language into an evolving, shape-shifting entity. Words take on new meanings overnight, hashtags rise and fall, and memes become the shorthand for complex cultural commentary. In some ways, it’s brilliant—what better way to capture the madness of modern life than with an image of a confused cat or a poorly drawn cartoon character? But in other ways, it’s the final nail in the coffin for clear communication. We’ve gone from painstakingly crafted texts, like Luther’s 95 Theses, to memes that rely entirely on shared cultural context to make sense.

The irony is that we’ve managed to make language both more accessible and more incomprehensible at the same time. Sure, anyone can fire off a tweet or share a meme, but unless you’re plugged into the same cultural references, you’re probably going to miss half the meaning. It’s like Wittgenstein’s language games on steroids—everyone’s playing, but the rules change by the second, and good luck keeping up.

And then there’s the problem of tone. Remember those philosophical debates where words were slippery? Well, now we’re trying to have those debates in text messages and social media posts, where tone and nuance are often impossible to convey. Sarcasm? Forget about it. Context? Maybe in a follow-up tweet, if you’re lucky. We’re using the most limited forms of communication to talk about the most complex ideas, and it’s no surprise that misunderstandings are at an all-time high.

And yet, here we are, in the midst of the digital age, still using the same broken tool—language—to try to make sense of the world. We’ve come a long way from “flamey thing hot,” but the basic problem remains: words are slippery, meanings shift, and no matter how advanced our technology gets, we’re still stuck in the same old game of trying to get our point across without being completely misunderstood.

Conclusion: Language – Beautiful, Broken, and All We’ve Got

And here’s where the irony kicks in. We’ve spent this entire time critiquing language—pointing out its flaws, its limitations, its inability to truly capture abstract ideas. And how have we done that? By using language. It’s like complaining about how unreliable your GPS is while using it to get to your destination. Sure, it’s broken—but it’s still the only tool we have.

In the end, language is both our greatest achievement and our biggest limitation. It’s allowed us to build civilisations, create art, write manifestos, and start revolutions. But it’s also the source of endless miscommunication, philosophical debates that never get resolved, and social media wars over what a simple tweet really meant.

So yes, language is flawed. It’s messy, it’s subjective, and it often fails us just when we need it most. But without it? We’d still be sitting around the fire, grunting at each other about the ‘toothy thing’ lurking in the shadows. For better or worse, language is the best tool we’ve got for making sense of the world. It’s beautifully broken, but we wouldn’t have it any other way.

And with that, we’ve used the very thing we’ve critiqued to make our point. The circle of irony is complete.



Not Only the Queen Is Dead

“The Queen is dead. Long live the King,” I believe it goes. There is propaganda through symbolism and spectacle. And there is detachment. Witness, in this photograph of Queen Elizabeth II’s funeral procession, a sea of mobile devices—black mirrors.

Audio: Podcast rendition of this page content

Are these people living in the moment? I can’t say for sure, but I suspect they are more interested in laying claim to bragging rights that they were present (in body)—pictures or it didn’t happen, eh?—whilst missing the moment itself. Perhaps it would have been overwhelming otherwise.

I recall a Buddhist meme, from well before the Social Media Age, in which a traveller was as interested in taking photographs as in experiencing the moments, if not more so. And what are photographs, except attachments to the past? As if the memory is not enough in and of itself.

I am neither a Royalist nor a Statist, yet I understand that many people are and they have emotional connexions to transitions of this nature. Some ask if the system will persist whilst others wonder if there might be changes conforming to their worldviews. Time will tell.

As for me, I am flexible. Were I Charles, I’d likely abdicate the throne and pass the sceptre to William, who, I would hope, would usher the monarchy out of existence in favour of something more modern—and by modern, I mean something in the order of an anachronistic constitutional republic such as that (poorly) employed by the United States of America.

Obviously, my preference would be for some autonomous syndicalist structure, but I am not holding my breath, nor do I feel it would fare much better. It’s just another pipe dream, as it involves people.

Would you look at the time? I’m off wittering again. Cheers.

Anatomy of a Social Media Challenge

As a Social Justice Warrior, I tend to favour diversity and inclusion as a principle. As such, I follow some people who share this interest. In fact, most of these people expend much more energy toward this end than I do. The challenge I am about to convey is that some people don’t read beyond the subject line, and don’t even attempt to assess the underlying claim, let alone the issue at hand.

I recently engaged in a nonsensical interaction that I am sharing and dissecting. It started with this share, an image of the border outline of Nigeria with an overlay caption that reads: “Nigeria becomes the first country to ban white and British models in all advertising”.

I’d like to point out two items in particular. Firstly, the caption is fabricated. I’ll get to the source reference presently. Secondly, the re-poster aptly corrects the caption when he shares it: “Well, all foreign models, but HELL YEAH!”

Nigeria recently passed a law that essentially assesses a tariff or levy on advertising content using non-Nigerian talent. There is no mention of ‘white’ models, though British models would fall under this umbrella. This protectionist law stems from nationalism. I’d guess that ‘white’ people comprise less than one per cent of the Nigerian national population, but I could be wrong. This is well outside my area of expertise.

My response was to say “Down with Nationalism and the Promotion of Otherism.”
I may be misinterpreting myself, but it feels to me that this denounces racism and other forms of otherism.

Sabrina responds, “Why is not having white models in advertising a bad thing?” and “Isn’t the whole point of advertising [for] people to…see themselves… ?”
In response, I should have pointed out that the initiative had nothing to do with skin colour. Instead, I responded to the second question: the point of advertising is to sell product. Full stop. If people see themselves with the product, then great. Clearly, though, such adverts comprise only a fraction of successful ones. More common is to make a connexion to what people aspire to. It’s not about making a social statement—unless, of course, that social statement will sell more product. If an ad with a white model will sell more product, a business would be derelict not to employ one; conversely, if white models result in lower sales, a business would be foolish not to switch to the more successful vector.

Sabrina really goes off the reservation with her reply, somehow conflating Nigeria with the African continent. Attention to detail is not her forte.

At this point, I feed into her laziness and send her a link to an Al-Jazeera article addressing the law.

She leaves with a parting shot, and I quote: “Have you ever thought about the harm you might cause by playing devil’s advocate and “creating an argument”?”

She’s off course and then attempts to diminish my point by calling it ‘playing devil’s advocate’ rather than admitting that she hadn’t even considered the rationale and possible ramifications. She didn’t even grasp the main point, so I suppose I should forgive her for not noticing secondary and edge cases.

At this point, Dr Perkins adds her voice. Her initial question is valid, and as I responded, the answer is “No”. The race card was introduced by some narrator who didn’t know what game he was broadcasting. But then she goes on to “applaud Nigeria for making a [decision] centering [on] Blackness”—suffice it to say that was not what prompted the decision.

Notice, too, that other people “Liked” the other comments, a testament to the bystanders’ own principle of least effort.

I recognise that the original post anchored the conversation off the actual topic, but it was also very easy to track down the reference and note the content discrepancy. Granted, this takes time and effort, but so does responding on a thread and then escalating commitment to a non-cause. And for one tilting at windmills to toss around accusations of playing devil’s advocate is not a good sign.

But wait, there’s more. I commented on this post on a second thread.

In this case, Dr Anderson suggests that this is just “a country celebrating its own citizens by recognizing their beauty and knowing they can move product just as good, and probably better than white women”, to which I responded that this is a testable hypothesis. It’s either true that, on balance, white models sell more product or black models do. Again, don’t lose sight of the point that none of this is about white versus black models.

Somehow, LinkedIn can’t seem to keep its threads in order, but Ms Rice takes my hypothesis-testing point as support for racism before descending into full-on troll mode.

It scares me to see two academic doctors participating in this thread, neither exhibiting attention to detail nor even a fundamental pursuit of evidence.

This is why it is difficult to engage with social media. You have no idea what level a commenter is coming in on. And even when spoon-fed information, they refuse to alter their position. In fact, they tend to double down on their wrongness.
Moving on…