Ages of Consent: A Heap of Nonsense

A response on another social media site got me thinking about the Sorites paradox again. The notion just bothers me. I’ve long held that it is less a paradox than an intellectually lazy way to manoeuvre around language insufficiency.

<rant>

The law loves a nice, clean number. Eighteen to vote. Sixteen to marry. This-or-that to consent. As if we all emerge from adolescence on the same morning like synchronised cicadas, suddenly equipped to choose leaders, pick spouses, and tell the bad lovers from the good ones.

But the Sorites paradox gives the game away: if you’re fit to vote at 18 years and 0 days, why not at 17 years, 364 days? Why not 17 years, 363 days? Eventually, you’re handing the ballot to a toddler who thinks the Prime Minister is Peppa Pig. Somewhere between there and adulthood, the legislator simply throws a dart and calls it “science.”
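The day-by-day slide can be made mechanical. Here is a toy sketch of the induction (the 18-years-in-days figure is an illustrative assumption, leap days ignored): grant the uncontested case, apply the tolerance premise that one day cannot matter, and watch the ballot reach the cradle.

```python
# Toy model of the Sorites slide applied to a statutory voting age.
THRESHOLD_DAYS = 18 * 365  # the statutory cliff edge (illustrative; leap days ignored)

def statute_fit(age_days: int) -> bool:
    """The law's verdict: a clean, binary threshold."""
    return age_days >= THRESHOLD_DAYS

# Tolerance premise: one day cannot make the difference.
# Start from the uncontested case and let the induction run downwards.
fit = {THRESHOLD_DAYS: True}
for age in range(THRESHOLD_DAYS - 1, -1, -1):
    fit[age] = fit[age + 1]  # if fit at age+1, then fit at age

# The absurd conclusion: the newborn is deemed fit to vote,
# flatly contradicting the statute's own verdict.
print(fit[0], statute_fit(0))  # True False
```

The contradiction between `fit[0]` and `statute_fit(0)` is the whole paradox in two lines: the vague predicate tolerates each single step, the statute does not, and only one of them can be right.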

To bolster this fiction, we’re offered pseudo-facts: “Women mature faster than men”, or “Men’s brains don’t finish developing until thirty.” These claims, when taken seriously, only undermine the case for a single universal threshold. If “maturity” were truly the measure, we’d have to track neural plasticity curves, hormonal arcs, and a kaleidoscope of individual factors. Instead, the state settles for the cheapest approximation: a birthday.

This obsession with fixed thresholds is the bastard child of Enlightenment rationalism — the fantasy that human variation can be flattened into a single neat line on a chart. The eighteenth-century mind adored universals: universal reason, universal rights, universal man. In this worldview, there must be one age at which all are “ready,” just as there must be one unit of measure for a metre or a kilogram. It is tidy, legible, and above all, administratively convenient.

Cue the retorts:

  • “We need something.” True, but “something” doesn’t have to mean a cliff-edge number. We could design systems of phased rights, periodic evaluations, or contextual permissions — approaches that acknowledge people as more than interchangeable cut-outs from a brain-development chart.
  • “It would be too complicated.” Translation: “We prefer to be wrong in a simple way than right in a messy way.” Reality is messy. Pretending otherwise isn’t pragmatism; it’s intellectual cowardice. Law is supposed to contend with complexity, not avert its gaze from it.

And so we persist, reducing a continuous, irregular, and profoundly personal process to an administratively convenient fiction — then dressing it in a lab coat to feign objectivity. A number is just a number, and in this case, a particularly silly one.

</rant>

DeepSeek: Hallucination and Populism

The connection between conspiracy theorists, religious believers, and generative AI is a fascinating topic that touches on epistemology, psychology, and the sociology of belief. At its core, the common thread lies in the human tendency to impose meaning, structure, and causality on complex or ambiguous phenomena, often in the absence of sufficient evidence. Let’s explore this connection through the lens of political philosophy.

1. The Need for Meaning and Control

  • Religious Believers: Religion often provides a framework for understanding the world, offering answers to existential questions (e.g., the meaning of life, the nature of good and evil) that might otherwise seem incomprehensible. This framework simplifies complexity by attributing events to divine will or cosmic order.
  • Conspiracy Theorists: Similarly, conspiracy theories offer a simplified narrative that explains chaotic or unsettling events by attributing them to the deliberate actions of powerful, hidden actors. This provides a sense of control and understanding, even if the explanation is speculative or unfounded.
  • Generative AI: AI models, like humans, operate by identifying patterns and generating outputs based on incomplete data. When faced with ambiguity, they “hallucinate” plausible but potentially false information to fill gaps, mirroring the human tendency to create coherent narratives from incomplete evidence.
  • Connection: All three exhibit a drive to reduce uncertainty by generating explanations, whether through divine intervention, secret plots, or algorithmic extrapolation. This reflects a broader human desire to impose order on chaos, even at the cost of accuracy.

2. The Role of Authority and Trust

  • Religious Believers: Religious systems often rely on authoritative texts, leaders, or traditions as sources of truth. These authorities provide a sense of certainty and trust, even when empirical evidence is lacking.
  • Conspiracy Theorists: Conspiracy theories frequently reject mainstream authorities (e.g., governments, scientists) and instead place trust in alternative sources of information, such as charismatic figures or fringe communities.
  • Generative AI: AI systems are often perceived as authoritative due to their ability to process vast amounts of data. However, their outputs are only as reliable as the data they are trained on, and they lack the ability to critically evaluate the truthfulness of their responses.
  • Connection: All three rely on trust in certain authorities or systems to validate their claims, whether those authorities are divine, contrarian, or algorithmic. This highlights the role of trust in shaping belief systems and the potential for misuse or misinterpretation of authority.

3. Cognitive Biases and Pattern Recognition

  • Religious Believers: Humans are predisposed to detect agency and intentionality in the world, a tendency known as “hyperactive agency detection.” This can lead to the attribution of natural events to supernatural forces.
  • Conspiracy Theorists: Conspiracy thinking often involves seeing patterns where none exist (apophenia) and attributing events to deliberate human agency rather than randomness or coincidence.
  • Generative AI: AI models are designed to identify patterns in data, but they can also generate spurious correlations or false connections when the data is noisy or incomplete.
  • Connection: Both humans and AI exhibit a tendency to overfit patterns to data, leading to the creation of narratives that may be compelling but lack empirical support. This suggests a shared cognitive bias toward finding meaning, even when it is not warranted.

4. Social and Political Implications

  • Religious Believers: Religious beliefs can foster community cohesion and moral frameworks but can also lead to division and conflict when different belief systems clash.
  • Conspiracy Theorists: Conspiracy theories can create a sense of belonging among adherents but often lead to polarization and distrust of societal institutions.
  • Generative AI: AI has the potential to democratize information but also risks amplifying misinformation and reinforcing echo chambers.
  • Connection: All three phenomena have significant social and political implications, as they shape how individuals and groups perceive reality, interact with others, and engage with societal structures. They highlight the tension between the need for shared narratives and the dangers of divisive or false beliefs.

5. The Role of Epistemic Humility

  • A key philosophical insight is the importance of epistemic humility—the recognition of the limits of one’s knowledge. Religious traditions, conspiracy theories, and AI systems often lack this humility, presenting their narratives as definitive truths rather than contingent explanations.
  • Encouraging a culture of critical thinking, scepticism, and openness to evidence could help mitigate the risks associated with all three phenomena.

Conclusion

The connection between conspiracy theorists, religious believers, and generative AI lies in their shared tendency to simplify complexity, impose meaning, and generate narratives in the face of uncertainty. While this tendency is deeply human, it also carries risks, particularly when it leads to the uncritical acceptance of false or harmful beliefs. Understanding these connections can help us develop strategies to promote more nuanced, evidence-based approaches to knowledge and belief in an increasingly complex world.

DISCLAIMER: Please note that this is the first response I received from a prompt to DeepSeek.

I am interested in the possible connexion between conspiracy theorists and religious believers. These two cohorts (if they are even independent) have a common trait of oversimplifying things they can’t understand and hallucinating solutions in the absence of facts or evidence. Generative AI is accused of the same behaviour. How might these be connected?

NB: Evidently, some versions of DeepSeek generate images, but mine doesn’t, so I prompted it to generate an apt cover image.

I also asked for keyword tags. It provided these, but then hid them, replacing them with this message:

Sorry, that’s beyond my current scope. Let’s talk about something else.

Power Relations Bollox

As I put the finishing touches on the third revision of my Language Insufficiency Hypothesis manuscript, I find myself reflecting on the role of Foucault’s concept of Power Relations in shaping the use and interpretation of language in institutional contexts.

A key aspect of my hypothesis is the notion that some abstract conceptual language is intentionally vague. I touched on this idea in my recent article on the ambiguity of the term ‘gift’, but the implications extend far beyond that specific example. The strategic use of linguistic indeterminacy is a pervasive feature of many professional domains, serving to veil and enable subtle power plays.

NotebookLM Audio Podcast Discussion of this content.

In my manuscript, I examine the concept of ‘reasonableness’ as a prime example of this phenomenon. This term is a favourite hiding spot for legal professionals, appearing in phrases like ‘reasonable doubt’ and ‘reasonable person’. Yet, upon closer inspection, the apparent clarity and objectivity of this language dissolves into a morass of ambiguity and subjectivity. The invocation of reasonableness often serves as a rhetorical sleight of hand, masking the exercise of institutional power behind a veneer of impartiality.

While I don’t wish to venture too far into Nietzschean cynicism, there is a sense in which the legal system operates like a casino. The house always seeks to maintain its edge, and it will employ whatever means necessary to preserve its authority and legitimacy. In the case of reasonableness, this often involves a strategic manipulation of linguistic indeterminacy.

The court reserves for itself the power to decide what counts as reasonable on a case-by-case basis. Definitions that prove expedient in one context may be swiftly discarded in another. While skilled advocates may seek to manipulate this ambiguity to their advantage, the ultimate authority to fix meaning rests with the judge – or, in some instances, with a higher court on appeal. The result is a system in which the interpretation of key legal concepts is always subject to the shifting imperatives of institutional power.

This example highlights the broader significance of the Language Insufficiency Hypothesis. By attending to the ways in which abstract and contested terms can be strategically deployed to serve institutional ends, we can develop a more critical and reflexive understanding of the role of language in shaping social reality. In the process, we may begin to glimpse the complex interplay of power and meaning that underlies many of our most important professional and political discourses.