Pinpointing the Messiness of Language

LinkedIn, that carnival of professional self-delusion, has a little diversion called Pinpoint. It pretends to tell you how much you “match” with other people, presumably so you’ll feel less alone as you scroll past thought-leaders peddling snake oil in PowerPoint form. In English, the results arrive in the cold, hard, dating-app idiom: “% match.” Simple, brutal, and bland.

Audio: NotebookLM podcast on this topic.

But LinkedIn, ever the polyglot, translates this phrase into other tongues. And here is where a trivial game unmasks the philosophical chaos of language itself. For in one idiom, your soul and another’s are “in correspondence.” In another, you are the product of “coincidence.” Elsewhere, you are a “hit,” a “fit,” a “suitability.” The poor Swedes, apparently exhausted, simply gave up and borrowed “matchning.”

The Romance languages, of course, are the most pedantic. Correspondência, corrispondenza — all very scholastic, as if Aquinas himself were lurking in the backend code. A match is nothing less than the degree to which one proposition mirrors another, as in the correspondence theory of truth. You can be 72% true, like a botched syllogism that half-lands. Elegant, precise, exasperating.

Spanish, on the other hand, opts for coincidencia. A “% coincidence.” Imagine it: you bump into your ex at the market, but only 46% of the way. Coincidence, by definition, is binary; either the train wreck occurs or it does not. And yet here it is, rendered as a gradable metric, as if fate could be quantified. It’s a kind of semantic surrealism: Dalí with a spreadsheet.

Then we have the Germans: Treffer. A hit. In English, a hit is binary – you score or you miss. But the Germans, ever the statisticians of fate, make Trefferquote into a percentage. You may not have killed the truth outright, but you wounded it respectably. It’s a firing squad turned bar chart.

Indonesians say cocok, which means “appropriate, suitable.” This is not about truth at all, but about fit. A match is not correspondence to reality but pragmatic adequacy: does it work? Does it feel right? The difference is subtle but devastating. Correspondence makes truth a metaphysical mirror; suitability makes it a tailoring problem.

And English? English, with its toddler’s toybox of a vocabulary, just shrugs and says “match.” A word that can mean a tennis final, a Tinder swipe, or a child’s puzzle book. Adequate, lazy, neutered. Anglo-pragmatism masquerading as universality.

So from a silly HR-adjacent parlour game we stumble into a revelation: truth is not one thing, but a polyglot mess. The Romance tongues cling to correspondence. Spanish insists on coincidence. German goes for target practice. Indonesian settles for a good fit. And English floats on ambiguity like an inflatable swan in a corporate swimming pool.

The lesson? Even a “% match” is already lost in translation. There is no stable denominator. We speak not in universals but in parochialisms, in metaphors smuggled into software by underpaid translators. And we wonder why philosophy cannot settle the matter of truth: it is because language itself cheats. It gives us correspondence, coincidence, hits, and fits, all while claiming to say the same thing.

Perhaps LinkedIn should update its UI to something more honest: % mess.

Of GenAI, Gatekeepers, and Moral Panic in Minor Key

I recently had a run-in with opponents of generative artificial intelligence – GenAI to the rest of us. What began as a modest question about feedback mechanisms in writing spiralled swiftly into a fire-and-brimstone sermon on the moral hazards of artificial authorship.

Audio: NotebookLM podcast on this topic.

It started on Reddit, that bastion of civil discourse, in the r/FictionWriting group. I asked, sincerely and succinctly: Is using AI as a pre-alpha reader worthwhile, or is the praise too algorithmically eager to trust?

Rather than answer the question, the moderators issued an ultimatum: “Admit to AI-use again and you’ll be banned.” Like any self-respecting heretic, I excommunicated myself.

Some members ranted about how AI might “steal their ideas” – presumably to be repackaged by tech barons and sold back to the masses in Kindle Unlimited drivel. That’s fine, I suppose, if you’re into intellectual solipsism, but what does this paranoid fantasy have to do with my ideas?

This wasn’t a discussion. It was a witch trial. AI wasn’t the threat – difference was. Deviate from the sacred rites of pen-to-paper purity, and you’ll be cast into the outer darkness, where there is weeping and gnashing of syntax.

The underlying problem is prescriptivism – not just linguistic, but moral. And like all moral panic, it has little to do with ethics and everything to do with control.

To borrow the analogy: as with abortion, if you don’t like it, don’t have one. Abortions, one might argue, carry significantly more moral weight than paragraph polishing. Or do they? At what point does a draft become a soul?

We are fast becoming a culture where the tool is the sin, and the sinner the tool.

Image: Exhibit A

The Ethics of Feedback in an Algorithmic Age

We’ve entered an era where machines tell us how we’re doing, whether it’s an AI app rating our résumé, a model reviewing our fiction, or an algorithm nudging our attention with like-shaped carrots.

Full story here, from the Ridley side: Needle’s Edge: Scene Feedback 01

Recently, I ran a brutally raw scene through a few AI platforms. The kind of scene that’s meant to unsettle, not entertain. One of them responded with effusive praise: “Devastating, but masterfully executed.”

Was it honest?

Was it useful?

Or was it merely reflecting my own aesthetic back at me, polished by a thousand reinforcement-learning smiles?

This is the ethical dilemma: If feedback is always flattering, what good is it? If criticism is only tolerated when couched in praise, how do we grow? And when machine feedback mimics the politeness of a mid-level manager with performance anxiety, we risk confusing validation with truth.

There’s a difference between signal and applause. Between understanding and affirmation.

The danger isn’t that AI flatters us. The danger is that we start to believe it and forget that art, inquiry, and ethics thrive on friction.