When Syntax Is Asked to Bear Too Much v1.2

I published the first version of this essay in February, arguing that the Frege–Geach problem, that three-score-year-old albatross around expressivism’s neck, rests on a category error. Analytic philosophers were polite about it in the way that analytic philosophers are polite about things they intend to ignore. I don’t often revise my manuscripts, preferring instead to publish new and improved versions, but the meat of this one remained strong: it wanted fortifying rather than revisiting.

The trouble was that I’d dissolved the problem without resolving it. Good enough for me. Others were less convinced. Telling people they’ve been asking the wrong question is satisfying but insufficient without a better one. Version 1.1 tidied the prose. Version 1.2 does the actual work.

The new section (§4, if you’ve already read previous versions) introduces recruitable expressions – a broader class of expressions (moral predicates, thick evaluative terms, epistemic and institutional vocabulary) whose full functional load is attenuated under embedding whilst a thinner inferential profile remains available for reasoning. The standard of practical inferential adequacy replaces the demand for semantic identity: what ordinary reasoning requires is not invariance but inferential sufficiency. And the pattern isn’t peculiar to moral language – a noted goal – which means Frege–Geach stops looking like a special embarrassment for expressivism and starts looking like one symptom of a general feature of how natural language handles multi-functional expressions under logical stress.

The problem is dissolved as a demand for unrestricted semantic invariance. It is resolved insofar as the behaviour it identifies is explained, predicted, and shown to be general.

The revised paper is available here, near the rest of my manuscripts: DOI

Lastly, this essay is built on the foundations of A Language Insufficiency Hypothesis and The Architecture of Encounter, the latter of which wasn’t yet available for the initial publication.

As ever, I welcome the polite ignoring.

The Demise of Frege–Geach?

Journal Entry

I published an essay on the Frege–Geach problem in February. I published an update yesterday. I still wasn’t satisfied, so I engaged with several LLMs. This was my approach.

The involved LLMs were:

  • Claude
  • Grok
  • ChatGPT
  • Gemini

Audio: NotebookLM summary podcast of this topic.
(This summary misses the mark in some ways, but it brings up some interesting observations along the way.)

First, I fed them some documents in no particular order, my goal being to share my own knowledge and position on the purported problem.

I started with Gemini. This was my prompt:

I am interested in resolving the Frege–Geach problem, but it seems I can only dissolve it. This doesn’t appear to be adequate for some analytical philosophers. How might I get closer to resolving it? My main argument is that they are assuming that language is stronger than it is, and they don’t agree with my argument.

As the prompt notes, and by design, many analytical philosophers are reluctant to grant the degree of insufficiency I take to be constitutive of natural language, especially where logical embedding is concerned. Evidently, that counts as my not wanting to play their game. From my perspective, they are committed to a different ontological grammar. Practically, this means I need to present my proposed solution in their terms. That doesn’t make their terms right; it simply means the problem is only legible in their dialect, even though my argument is that all dialects are lossy – mine included.

Part of the challenge is that formal logic was invented precisely because ordinary language is imprecise, yet its standards are often retrofitted back onto natural language as though they revealed what language must have been doing all along.

Without sharing the entire play-by-play of the transcripts, I established my course of action. I’d dissolved the problem, but I hadn’t yet resolved it.

My initial intuition of several years ago was to argue that they were expecting too much from grammar. I’ll use a well-worn example. Follow these statements:

  1. IF ‘Murder is wrong.’
  2. THEN ‘If murder is wrong, then getting your brother to murder is wrong.’
  3. SO ‘Getting your brother to murder is wrong.’
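
Read the analytic way, this is a plain instance of modus ponens – which is exactly why the embedding is supposed to be harmless. A minimal sketch of that standard formalisation (the letters are illustrative labels, not anyone’s official notation):

```latex
% W := "murder is wrong"
% B := "getting your brother to murder is wrong"
\begin{align*}
\text{P1.}\quad & W \\
\text{P2.}\quad & W \rightarrow B \\
\text{C.}\quad  & B && \text{(modus ponens from P1 and P2)}
\end{align*}
```

On this reading, nothing about W changes between P1 and the antecedent of P2; that invariance is precisely the assumption under dispute.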

According to them, the emotivist cannot make sense of the embedded ‘murder is wrong’. Here’s their logic:

According to Ayer, moral statements are simply emotive. When one utters ‘murder is wrong’, they are really saying ‘Boo, murder’ – ‘I don’t like murder’.

If ‘murder’ is defined as ‘killing disallowed by the state’, then ‘murder is wrong’ might be translated as ‘killing disallowed by the state is wrong’ or ‘what the state disallows is wrong’. But we also know that the state makes many pronouncements, many carrying no moral weight and others running counter to expected moral positions – legal does not equal moral, and vice versa. Let’s move on and revisit our statements:

  1. IF ‘Boo to murder.’
  2. THEN ‘If boo to murder, then boo to getting your brother to murder.’
  3. SO ‘Boo to getting your brother to murder.’
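
Schematically, the complaint is that the translated argument equivocates rather than instantiating modus ponens – a sketch, again with illustrative labels:

```latex
% Boo!(x) marks an expressive act, not a truth-apt proposition.
\begin{align*}
\text{P1.}\quad & \mathrm{Boo!}(\text{murder}) && \text{an act of disapproval, not truth-apt} \\
\text{P2.}\quad & \mathrm{Boo!}(\text{murder}) \rightarrow \mathrm{Boo!}(\text{brother's murder}) && \text{antecedent occurs unasserted} \\
\text{C.}\quad  & \mathrm{Boo!}(\text{brother's murder}) && \text{modus ponens no longer licenses this}
\end{align*}
```

If P1 is an expression and the antecedent of P2 is not, the two occurrences of ‘boo to murder’ are not the same kind of item, and the inference trades on that slippage.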

My intuition was that the embedded clause does not perform the same linguistic act as the standalone assertion, even if the lexical material is repeated. We’re committing a category error. More crucially, the category it belongs to doesn’t exist, so it’s unspecified. It needs to be invented.

Although I struggled to find apt nomenclature, I settled on performance-sensitive expressions.

A parallel challenge is that the solution can’t be a simple carve-out for moral language. Whilst I feel that moral language does use its own grammar and semantics, I don’t expect analytical philosophers to accept this assertion, so the solution should be more generalisable. I’d need to demonstrate where else this conditional logic fails in the same manner.

ChatGPT had this to say in response to a draft:

The comparative-cost section is good, but one sentence should be added to pre-empt the “your account also uses theory-laden notions” objection.
A critic may reply: your own terms, like “coherence zone” and “synchronisation protocol,” are also theoretical machinery. True enough. The difference is that your machinery is independently motivated and not introduced solely to patch Frege–Geach. You imply this already, but it would help to say so directly.

It offers clearer language:

Embedded moral predicates are not semantically identical to their unembedded counterparts, but neither are they inert; they are performance-sensitive expressions whose full evaluative load is attenuated under embedding while a thinner inferential profile remains available for reasoning.

In any case, I am still polishing the essay, dotting i’s and crossing t’s. I think I’ve got the main argument and some examples. One of my weaknesses may be that I rely heavily on my own theories, but these are published and debatable on their own merits.


What If the Frege–Geach Problem Isn’t?

The Frege–Geach problem was one of the impetuses for finishing my Language Insufficiency Hypothesis. From the first encounter it felt off, as though someone were trying to conjure depth from a puddle. There was no paradox here; just another case of mistaking the map for the terrain, a habit analytic philosophy clings to with almost devotional zeal. The more time I spend on this project, the more often I find those cartographic illusions doing the heavy lifting.

Audio: NotebookLM podcast on this topic.

For the uninitiated, the Frege–Geach problem is supposed to be the knockout blow against A. J. Ayer’s emotivism. Geach’s manoeuvre, building on Frege’s point that a sentence keeps its content whether asserted or merely embedded, was simple enough: moral language must behave like descriptive language, so embed it in a conditional and watch the whole thing buckle. Neat on paper. Vacuous in practice. The entire construction only stands if one accepts the original fiat: that moral utterances and empirical propositions share the same logical metabolism. And why should they? Because Frege said so.

This is the core of the analytic mistake. It is grammar dressed up as ontology.

The LIH doesn’t ‘solve’ the Frege–Geach problem for the simple reason that there is nothing to solve. What it does instead is reclassify the habitat in which such pseudo-problems arise. It introduces categories the analytic tradition never suspected existed and drafts a grammar for language’s failure modes rather than politely ignoring them. It exposes the metaphysics analytic philosophy has been smuggling under its coat for decades.

The LIH does four things at once:

• It destabilises an alleged Invariant.
• It exposes the Contestable foundations underneath it.
• It shows that many analytic puzzles exist only because of the presuppositions baked into the analytic grammar.
• And it asks the forbidden question: what if this cherished problem simply isn’t one?

Analytic philosophy proceeds as though it were operating on a single, pristine grammar of meaning, truth, and assertion. The LIH replies: charming idea, but no. Different conceptual regions obey different rules. Treating moral predicates as if they were factual predicates is not rigour; it’s wishful thinking.

As my manuscript lays out, instead of one flat linguistic plain, the LIH gives you an ecology:

Invariants for the things that actually behave.
Contestables for the concepts that wobble under scrutiny.
Fluids for notions that change shape depending on who touches them.
Ineffables for everything language tries and fails to pin down.

The analytic tradition, bless its little heart, tries to stretch classical logic across the entire terrain like clingfilm. The clingfilm snaps because reality never agreed to be wrapped that way.

This taxonomy isn’t jargon for its own sake. It’s a meta-grammar: a way of describing how language breaks, where it breaks, and why it breaks in predictable places. It names the structures analytic philosophy has been tripping over for a century but studiously refused to acknowledge.

Their error is simple: they treat language as flat. The LIH treats language as topographical – scored with ridges, fault lines, and pressure fronts.

They think in one grammar. I wrote a grammar for grammars.

No wonder there’s disquiet. Their tools have been optimised for the wrong terrain. I’m not challenging their competence; I’m pointing out that the conceptual map they’ve been so proudly updating was drawn as if the continent were uniformly paved.

This is why Frege–Geach, the ‘hard problem’ of consciousness (another irritant), the paradoxes, the semantic-embedding puzzles – so many of their grand intellectual fixtures – appear dramatic inside their grammar yet quietly evaporate once you switch grammars. The LIH isn’t a theory about language; it is a theory of the boundary conditions where language stops being able to masquerade as a theory of anything at all.

And the Frege–Geach problem? In the end, perhaps it isn’t.


Note that the cover image is of the rhinoceros in the animated film James and the Giant Peach. The rhino was meant to remind James of the importance of perspective. I feel it’s fitting here.