The Demise of Frege–Geach?


Journal Entry

I published an essay on the Frege–Geach problem in February. I published an update yesterday. I still wasn’t satisfied, so I engaged with several LLMs. This was my approach.

The involved LLMs were:

  • Claude
  • Grok
  • ChatGPT
  • Gemini

Audio: NotebookLM summary podcast of this topic.
(This summary misses the mark in some ways, but it brings up some interesting observations along the way.)

First, I fed them some documents in no particular order, my goal being to share my own knowledge and position on the purported problem.

I started with Gemini. This was my prompt:

I am interested in resolving the Frege–Geach problem, but it seems I can only dissolve it. This doesn’t appear to be adequate for some analytical philosophers. How might I get closer to resolving it? My main argument is that they are assuming that language is stronger than it is, and they don’t agree with my argument.

As the prompt notes, and by design, many analytical philosophers are reluctant to grant the degree of insufficiency I take to be constitutive of natural language, especially where logical embedding is concerned. Evidently, that counts as my not wanting to play their game. From my perspective, they are committed to a different ontological grammar. Practically, this means I need to present my proposed solution in their terms. That doesn’t make their terms right; it means the problem is only legible in their dialect, even though my argument is that all dialects are lossy – mine included.

Part of the challenge is that formal logic was invented precisely because ordinary language is imprecise, yet its standards are often retrofitted back onto natural language as though they revealed what language must have been doing all along.

Without sharing the entire play-by-play of the transcripts, I established my course of action. I’d dissolved the problem, but I hadn’t yet resolved it.

My initial intuition of several years ago was to argue that they were expecting too much from grammar. I’ll use a well-worn example. Follow these statements:

  1. ‘Murder is wrong.’
  2. ‘If murder is wrong, then getting your brother to murder is wrong.’
  3. Therefore, ‘Getting your brother to murder is wrong.’
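The three steps are an instance of modus ponens. Schematically – with a hypothetical predicate $W(x)$ for ‘x is wrong’, $m$ for murder, and $g$ for getting your brother to murder:

```latex
% Modus ponens: detachment works only if the embedded clause W(m)
% in the conditional is the very same proposition asserted on its own.
\[
\frac{W(m) \qquad W(m) \rightarrow W(g)}{W(g)}
\]
```

The Frege–Geach worry targets exactly this identity: the standalone $W(m)$ and the $W(m)$ embedded in the conditional must be one and the same item for the inference to go through.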

According to them, the embedded ‘murder is wrong’ doesn’t make sense. Here’s their logic:

According to Ayer, moral statements are simply emotive. When one utters ‘murder is wrong’, one is really saying ‘Boo, murder’ – ‘I don’t like murder’.

If ‘murder’ is defined as ‘killing disallowed by the state’, then ‘murder is wrong’ might be translated into ‘killing disallowed by the state is wrong’ or ‘what the state declares wrong is wrong’. But we also know that the state makes many pronouncements, many of which carry no moral weight and some of which run counter to expected moral positions: legal does not equal moral, and vice versa. Let’s move on and revisit our statements:

  1. ‘Boo to murder.’
  2. ‘If boo to murder, then boo to getting your brother to murder.’
  3. Therefore, ‘Getting your brother to murder is wrong.’
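Schematically, the emotive translation breaks the form: the first premise is now an expression of attitude (written here with a hypothetical operator $\mathrm{Boo!}$), not a truth-apt proposition, so there is no premise of the form $P$ to pair with $P \rightarrow Q$:

```latex
% After translation, premise 1 no longer asserts the antecedent of
% premise 2, so modus ponens cannot detach the conclusion.
\[
\frac{\mathrm{Boo!}(m) \qquad W(m) \rightarrow W(g)}{\text{?}}
\]
```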

My intuition was that the embedded clause does not perform the same linguistic act as the standalone assertion, even if the lexical material is repeated. Treating the two as identical commits a category error. More crucially, the category the embedded clause does belong to hasn’t been specified – it doesn’t yet exist and needs to be invented.

Although I struggled to find apt nomenclature, I settled on performance-sensitive expressions.

A parallel challenge is that the solution can’t be a simple carve-out for moral language. Whilst I feel that moral language does use its own grammar and semantics, I don’t expect analytical philosophers to accept this assertion, so the solution should be more generalisable. I’d need to demonstrate where else this conditional logic fails in the same manner.

ChatGPT had this to say in response to a draft:

The comparative-cost section is good, but one sentence should be added to pre-empt the “your account also uses theory-laden notions” objection.
A critic may reply: your own terms, like “coherence zone” and “synchronisation protocol,” are also theoretical machinery. True enough. The difference is that your machinery is independently motivated and not introduced solely to patch Frege-Geach. You imply this already, but it would help to say so directly.

It also offered clearer language:

Embedded moral predicates are not semantically identical to their unembedded counterparts, but neither are they inert; they are performance-sensitive expressions whose full evaluative load is attenuated under embedding while a thinner inferential profile remains available for reasoning.

In any case, I am still polishing the essay, dotting the i’s and crossing the t’s. I think I’ve got the main argument and some examples. One of my weaknesses may be that I rely heavily on my own theories, but these are published and debatable on their own merits.

