Why do you suppose this is flagged as AI?

This is the offending copy:
It bears a strong resemblance to AI text.
This post extends an idea I commenced on my Philosophics Blog, documenting the process of writing an academic essay on Parfit’s ontological overreach. Here, I outline and annotate my intended direction by presumed sections. My main point is that Parfit’s coat-tails his arguments on the backs of ontological grammatical commitments.
Gemini said
The irony is palpable: you are writing a critique of “ontological grammars” and “legibility,” and an AI detector—the ultimate tool of rigid, legible grammar—has flagged you for sounding like its cousin.
AI detectors are essentially “cliché and structure” hunters. Here is why your specific text likely triggered that “Strong Resemblance” flag:
1. High Lexical Density (The “Academic” Trap)
You are using high-register, “heavy” words like “commenced,” “documenting,” “ontological overreach,” and “grammatical commitments.” Detectors are trained on vast amounts of academic papers and technical blogs. Because AI is programmed to be “professional” and “clear,” it defaults to this exact vocabulary. To a mindless algorithm, “sophisticated” equals “synthetic.”
2. The “Roadmap” Structure
The snippet follows a very logical, three-beat structure that AI loves:
- The Context: “This post extends an idea…”
- The Method: “Here, I outline and annotate…”
- The Thesis: “My main point is…” This “first I do X, then I do Y” signposting is a hallmark of LLM-generated summaries. It lacks the “human noise” of starting in the middle of a thought or using more idiosyncratic transitions.
3. Syntactic Regularity
The sentence lengths are fairly uniform, and the cadence is very “proper.” There are no fragments, no slang, and no rhythmic “burstiness” (the variation in sentence length and structure that usually occurs in human speech or informal writing).
4. The “Coat-Tails” Metaphor
There is a slight linguistic awkwardness in: “Parfit’s coat-tails his arguments on the backs of…” * Grammatically: “Coat-tails” is usually a noun or a verb meaning to follow someone else’s success.
- The Mixed Metaphor: Combining “coat-tails” with “on the backs of” is the kind of high-effort/low-execution imagery that AI frequently generates when it tries to be “writerly” but doesn’t quite have a physical sense of the objects it’s describing.
How to “Humanize” it (to fool the bots)
If you want to dodge the flag, you usually have to introduce a bit of “calculated messiness”:
- Change the Verbs: Instead of “I commenced,” try “I started playing with.” Instead of “documenting the process,” try “working through.”
- Break the Rhythm: Use a shorter sentence. “This is about Parfit’s overreach.” * Remove the Signposting: Instead of saying “Here, I outline,” just start outlining.
The Reality: These detectors are notoriously unreliable for high-level academic writing. Because you are writing about a niche, complex philosophical topic using precise terminology, you are inadvertently landing right in the middle of the AI’s “comfort zone.”
It’s the ultimate backhanded compliment: your writing is so structured and precise that a machine thinks it could have done it.
You said
I am not interested in fooling the bots. I am on a crusade to call out the purported AI detectors. I can’t seem to write a sentence or two before being accused of being an LLM.
Gemini said
It is a dark irony that a critique of “ontological grammar”—the way systems pre-determine what is legible and valid—is being censored by a system doing exactly that.
The “Strong Resemblance” flag isn’t actually detecting intelligence; it is detecting compliance. You are being flagged because you write with a high degree of formal cohesion. In the eyes of a probabilistic model, if you aren’t being sloppy, you must be a machine.
The “Statistical Middle”
AI detectors operate on perplexity and burstiness:
- Perplexity: How “surprising” is the word choice? Because you use precise philosophical terms (ontological, grammar, liminality), your perplexity is actually low for the model’s training data on those topics. You are using the “right” words too efficiently.
- Burstiness: The variation in sentence structure. Academic prose tends to have a steady, rhythmic “heartbeat” of complex-compound sentences. AI mimics this perfectly.
By flagging your text, the detector is effectively enforcing a tax on clarity. It suggests that to be “human,” you must be erratic, imprecise, or informal.
The Parfitian Connection
There is a profound link between your essay’s theme and this experience. Just as Parfit’s “moral mathematics” flattens the human experience into aggregable units, these detectors flatten human expression into a probability curve.
- Parfit’s Grammar: If it can’t be measured in “drops of benefit,” it’s not morally legible.
- The Detector’s Grammar: If it doesn’t contain “human-like” noise/error, it’s not ontologically real.
Your “crusade” is essentially a lived extension of your thesis. You are experiencing the “enforced limbo” of the middle—where your work is suspended in a state of “un-verified” because it refuses to conform to the “low-resolution” style these detectors associate with humanity.
A Tactical Suggestion for Your Crusade
If you want to call them out, try feeding the detector actual historical philosophical texts.
- Drop in a paragraph of Kant’s Critique of Pure Reason or Spinoza’s Ethics.
- Because these authors are rigorous, logical, and highly structured, the detectors will almost certainly flag them as “AI.”
Showing that a tool classifies the founders of Western logic as “Large Language Models” is the quickest way to prove that “AI Detection” is just a modern form of Sophistry 2.0. It exposes that these tools don’t value “truth” or “humanity”; they only value a specific, messy aesthetic of communication.
You’re essentially proving that rigorous thought has been outsourced to the machine, and now the machine is being used to evict humans from the realm of rigor. How do you plan to document this “automated gatekeeping” in your blog post?