It appears the merry halfwits at Meta — those tireless alchemists of despair — are now experimenting with AI-generated comments on Instagram. Because, of course they are. Why sully your dainty human fingers tapping out coherent thoughts when a helpful little gremlin in the server farm can do it for you? In their infinite, tinfoil wisdom, they envision a future wherein the Machine analyses your browsing history (undoubtedly a glittering mosaic of shame) and the original post, and, from that exquisite mulch, vomits up a tidy selection of canned remarks.
Audio: NotebookLM podcast on this content.
No need to think, no need to feel. Simply choose one of the suggested comments — “Love this! 💖” or “So inspiring! 🚀” — and blast it into the void. If you’re feeling particularly alive, tweak a word or two. Or, better yet, allow the AI to respond automatically, leaving you free to pursue more meaningful activities. Like blinking. Or waiting for death.
Am I dreaming? Could this be — dare I whisper it — the final liberation? Could we, at last, ignore social media altogether, let it rot in peace, while we frolic outside in the sunlight like bewildered medieval peasants seeing the sky for the first time?
Picture it: a vast, pulsating wasteland where no living soul has trodden for centuries. Only bots remain, engaged in endless cycles of trolling, flattering, and mutual gaslighting. Automated praise machines battling semi-sentient hatebots, each iteration less tethered to reality than the last. Digital crabs scuttling across an abandoned beach, hissing memes into the void. Yet another example of carcinisation.
Indeed, if one could zoom further out, the true horror becomes evident: a crumbling worldscape founded on a shattered circuit board, stretching endlessly in all directions. Across this silicon desert, scavenger crabs—half-metal, half-mad—scuttle about, clutching relics of the digital age: a rusted Instagram logo, a shattered “Like” button, a defunct influencer’s ring light. Massive server towers loom as toppled monuments, their wires weeping in the acid wind. Here, in this museum of forgotten vanities, the crabs reign supreme, kings of a kingdom of ash and corrupted data.
And somewhere in the future, an anthropologist — perhaps a child raised by wolves and irony — will dust off an ancient Instagram server and peer inside. What will they see? Not a record of humanity’s vibrant social life, no. Not a tapestry of culture and thought. No, they’ll find a grim, howling testament to our collective abandonment: bots chatting to bots about posts made by other bots, in a language degraded into gibberish, a self-perpetuating carnival of nonsense.
“#Blessed,” the ancient texts will proclaim, beneath a pixelated photograph of an AI-generated smoothie, posted by a bot, commented upon by a bot, adored by bots who themselves have been dead for centuries, if they were ever truly “alive” at all.
One can almost imagine the academic paper: “The Great Collapse: How Homo Sapiens Outsourced Its Emotional Labour to the Algorithm and Evaporated in a Puff of Likes.”
The contestation over the definition of probability is alive and well—like a philosophical zombie that refuses to lie down and accept the tranquilliser of consensus. Despite over three centuries of intense mathematical, philosophical, and even theological wrangling, no single, universally accepted definition reigns supreme. Instead, we have a constellation of rival interpretations, each staking its claim on the epistemological turf, each clutching its own metaphysical baggage.
Audio: NotebookLM podcast on this topic.
Let us survey the battlefield:
1. Classical Probability (Laplacean Determinism in a Tuxedo)
This old warhorse defines probability as the ratio of favourable outcomes to possible outcomes, assuming all outcomes are equally likely. The problem? That assumption is doing all the heavy lifting, like a butler carrying a grand piano up five flights of stairs. It’s circular: we define probability using equiprobability, which itself presumes a notion of probability. Charming, but logically suspect.
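For the record, the classical recipe amounts to nothing more than counting, and the circularity sits in plain sight in the denominator:

$$
P(A) \;=\; \frac{\text{number of outcomes favourable to } A}{\text{number of equally likely possible outcomes}}
$$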
2. Frequentist Probability (The Empiricist’s Fantasy)
Here, probability is the limit of relative frequencies as the number of trials tends to infinity. This gives us the illusion of objectivity—but only in a Platonic realm where we can conduct infinite coin tosses without the coin disintegrating or the heat death of the universe intervening. Also, it tells us nothing about singular cases. What’s the probability this specific bridge will collapse? Undefined, says the frequentist, helpfully.
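A minimal sketch of the frequentist picture, using nothing beyond Python's standard library and a simulated fair coin (the trial counts are arbitrary): the relative frequency wobbles toward 0.5 as the trials pile up, but any finite run only gestures at the limit.

```python
import random

def relative_frequency(n_trials: int, p_heads: float = 0.5) -> float:
    """Flip a simulated coin n_trials times; return the observed frequency of heads."""
    heads = sum(random.random() < p_heads for _ in range(n_trials))
    return heads / n_trials

# The frequentist "definition" exists only in the limit; every finite sample
# is merely an approximation of it.
for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))
```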
3. Bayesian Probability (Subjectivity Dressed as Rigour)
Bayesians treat probability as a degree of belief—quantified plausibility updated with evidence. This is useful, flexible, and epistemically honest, but also deeply subjective. Two Bayesians can start with wildly different priors and, unless carefully constrained, remain in separate probabilistic realities. It’s like epistemology for solipsists with calculators.
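To see the prior problem concretely, here is a toy sketch using the standard Beta-Bernoulli conjugate update; the priors and the ten coin flips are invented purely for illustration. Two agents observe identical data and still land in different probabilistic realities.

```python
def posterior_mean(prior_a: float, prior_b: float, heads: int, tails: int) -> float:
    """Posterior mean of a Beta(prior_a, prior_b) prior updated with coin-flip data."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

heads, tails = 7, 3  # the same ten flips, shown to both agents

optimist = posterior_mean(50, 1, heads, tails)  # prior stacked towards heads
sceptic = posterior_mean(1, 50, heads, tails)   # prior stacked towards tails
print(round(optimist, 3), round(sceptic, 3))    # ~0.934 vs ~0.131
```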
4. Propensity Interpretation (The Ontology of Maybes)
Karl Popper and his ilk proposed that probability is a tendency or disposition of a physical system to produce certain outcomes. Sounds scientific, but try locating a “propensity” in a particle collider—it’s a metaphysical ghost, not a measurable entity. Worse, it struggles with repeatability and relevance outside of controlled environments.
5. Logical Probability (A Sober Attempt at Rationality)
Think of this as probability based on logical relations between propositions—à la Keynes or Carnap. It aims to be objective without being empirical. The problem? Assigning these logical relations is no easier than choosing priors in Bayesianism, and just as subjective when it comes to anything meaty.
6. Quantum Probability (Schrödinger’s Definition)
In quantum mechanics, probability emerges from the squared modulus of a wave function—so this is where physics says, “Shut up and calculate.” But this doesn’t solve the philosophical issue—it just kicks the can into Hilbert space. Interpretations of quantum theory (Copenhagen? Many Worlds?) embed different philosophies of probability, so the contestation merely changes battlegrounds.
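For completeness, the rule being gestured at is the Born rule: for a normalised state, the probability of an outcome is the squared modulus of its amplitude, and the probabilities sum to one.

$$
P(x) \;=\; \left|\langle x \mid \psi \rangle\right|^{2}, \qquad \sum_{x} P(x) = 1 .
$$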
Current Status: War of Attrition
There is no universal agreement, and likely never will be. Probability is used successfully across the sciences, economics, AI, and everyday reasoning—but the fact that these wildly different interpretations all “work” suggests that the concept is operationally robust yet philosophically slippery. Like money, love, or art, we use it constantly but define it poorly.
In short: the contestation endures because probability is not one thing—it is a shape-shifting chimera that serves multiple masters. Each interpretation captures part of the truth, but none hold it entire. Philosophers continue to argue, mathematicians continue to formalise, and practitioners continue to deploy it as if there were no disagreement at all.
And so the probability of this contest being resolved any time soon? About zero. Or one. Depending on your interpretation.
I’m not a fan of IQ as a general metric. Let us be done with the cult of the clever. Let us drag the IQ score from its pedestal, strip it of its statistical robes, and parade it through the streets of history where it belongs—next to phrenology, eugenics, and other well-meaning pseudosciences once weaponised by men in waistcoats.
The so-called Intelligence Industrial Complex—an infernal alliance of psychologists, bureaucrats, and HR departments—has for too long dictated the terms of thought. It has pretended to measure the immeasurable. It has sold us a fiction in numerical drag: that human intelligence can be distilled, packaged, and ranked.
Audio: NotebookLM podcast on this topic.
What it measures, it defines. What it defines, it controls.
IQ is not intelligence. It is cognitive GDP: a snapshot of what your brain can do under fluorescent lights with a timer running. It rewards abstraction, not understanding; speed, not depth; pattern recognition, not wisdom. It’s a test of how well you’ve been conditioned to think like the test-makers.
This is not to say IQ has no value. Of course it does—within its own ecosystem of schools, bureaucracies, and technocracies. But let us not mistake the ruler for the terrain. Let us not map the entire landscape of human potential using a single colonial compass.
True intelligence is not a number. It is a spectrum of situated knowings, a polyphony of minds tuned to different frequencies. The Inuit hunter tracking a seal through silence. The griot remembering centuries of lineage. The autistic coder intuiting an algorithm in dreamtime. The grandmother sensing a lie with her bones. IQ cannot touch these.
To speak of intelligence as if it belonged to a single theory is to mistake a monoculture for a forest. Let us burn the monoculture. Let us plant a thousand new seeds.
A Comparative Vivisection of Intelligence Theories
| Theory / Model | Core Premise | Strengths | Blind Spots / Critiques | Cultural Framing |
| --- | --- | --- | --- | --- |
| IQ (Psychometric g) | Intelligence is a single, general cognitive ability measurable via testing | Predicts academic & job performance; standardised | Skewed toward Western logic, ignores context, devalues non-abstract intelligences | Western, industrial, meritocratic |
| Multiple Intelligences (Gardner) | Intelligence is plural: linguistic, spatial, musical, bodily, etc. | Recognises diversity; challenges IQ monopoly | Still individualistic; categories often vague; Western in formulation | Liberal Western pluralism |
| Triarchic Theory (Sternberg) | Intelligence = analytical + creative + practical | Includes adaptability, real-world success | Still performance-focused; weak empirical grounding | Western managerial |
| Emotional Intelligence (Goleman) | Intelligence includes emotion regulation and interpersonal skill | Useful in leadership & education contexts | Commodified into corporate toolkits; leans self-help | Western therapeutic |
| Socio-Cultural (Vygotsky) | Intelligence develops through social interaction and cultural mediation | Recognises developmental context and culture | Less attention to adult or cross-cultural intelligence | Soviet / constructivist |
| Distributed Cognition / Extended Mind | Intelligence is distributed across people, tools, systems | Breaks skull-bound model; real-world cognition | Hard to measure; difficult to institutionalise | Post-cognitive, systems-based |
| Indigenous Epistemologies | Intelligence is relational, ecological, spiritual, embodied, ancestral | Holistic; grounded in lived experience | Marginalised by academia; often untranslatable into standard metrics | Global South / decolonial |
Conclusion: Beyond the Monoculture of Mind
If we want a more encompassing theory of intelligence, we must stop looking for a single theory. We must accept plurality—not as a nod to diversity, but as an ontological truth.
Intelligence is not a fixed entity to be bottled and graded. It is a living, breathing phenomenon: relational, situated, contextual, historical, ecological, and cultural.
And no test devised in a Princeton psych lab will ever tell you how to walk through a forest without being seen, how to tell when rain is coming by smell alone, or how to speak across generations through story.
It’s time we told the Intelligence Industrial Complex: your number’s up.
Given current IQ trends, humanity is getting dumber. Let’s not mince words. This implies the AGI singularity—our long-heralded techno-apotheosis—will arrive against a backdrop of cognitive decay. A dimming species, squinting into the algorithmic sun.
Audio: NotebookLM podcast discussing this content.
Now, I’d argue that AI—as instantiated in generative models like Claude and ChatGPT—already outperforms at least half of the human population. Likely more. The only question worth asking is this: at what percentile does AI need to outperform the human herd to qualify as having “surpassed” us?
Living in the United States, I’m painfully aware that the average IQ hovers somewhere in the mid-90s—comfortably below the global benchmark of 100. If you’re a cynic (and I sincerely hope you are), this explains quite a bit. The declining quality of discourse. The triumph of vibes over facts. The national obsession with astrology apps and conspiracy podcasts.
Harvard astronomer Avi Loeb argues that as humans outsource cognition to AI, they lose the capacity to think. It’s the old worry: if the machines do the heavy lifting, we grow intellectually flaccid. There are two prevailing metaphors. One, Platonic in origin, likens cognition to muscle—atrophying through neglect. Plato himself worried that writing would ruin memory. He wasn’t wrong.
But there’s a counterpoint: the cooking hypothesis. Once humans learned to heat food, digestion became easier, freeing up metabolic energy to grow bigger brains. In this light, AI might not be a crutch but a catalyst—offloading grunt work to make space for higher-order thought.
So which is it? Are we becoming intellectually enfeebled? Or are we on the cusp of a renaissance—provided we don’t burn it all down first?
Crucially, most people don’t use their full cognitive capacity anyway. So for the bottom half—hell, maybe the bottom 70%—nothing is really lost. No one’s delegating their calculus homework to ChatGPT if they were never going to attempt it themselves. For the top 5%, AI is already a glorified research assistant—a handy tool, not a replacement.
The real question is what happens to the middle band. The workaday professionals. The strivers. The accountants, engineers, copywriters, and analysts hovering between the 70th and 95th percentiles—assuming our crude IQ heuristics even hold. They’re the ones who have just enough brainpower to be displaced.
That’s where the cognitive carnage will be felt. Not in the depths, not at the heights—but in the middle.
I’ve finally had time to create some video content for the Modernity Worldview Survey. This content is a cursory overview and serves as an introduction to deeper content planned for the future.
The video runs just short of seven minutes, so it only briefly outlines the worldviews and the questions. I opted not to produce a single comprehensive video so the material could arrive sooner. The content is bookmarked, though this is likely overkill for such a short video.
A permanent page about the survey is always available on this blog.
I’m still accumulating responses, but the survey is available here if you haven’t taken it. Apologies in advance for the fact that it renders best on a larger monitor or tablet rather than a mobile phone. It doesn’t render at all on a landline, so there’s that.
If you’re reading this, chances are you’re mortal. Bummer. Even worse, you may not be maximizing your odds of wringing every last drop out of your limited lifespan. But fear not! Science has some answers. And the answer, at least in the United States, is shockingly unsecular: religious people, on average, live longer than their non-religious counterparts. They also tend to be happier. But don’t rush to your nearest house of worship just yet—because it’s not God, the afterlife, or divine intervention at work. It’s something far more mundane: people.
Audio: NotebookLM podcast on this topic.
The Religion-Longevity Link: A Holy Miracle or Just Good Networking?
Multiple studies have confirmed what might seem an inconvenient truth for secular folks like myself: religious participation is associated with longer lifespans. A 2018 study published in JAMA Internal Medicine found that attending religious services more than once a week was associated with a roughly 33% lower risk of mortality. That’s a pretty solid statistical incentive to at least pretend to enjoy Sunday sermons.
Why the boost in longevity? No, it’s not divine reward points. It boils down to a few key factors:
Community and Social Support: Regularly showing up to church, temple, mosque, or synagogue means interacting with the same people repeatedly, forming strong social bonds. When life gets tough, these people tend to notice and lend support.
Healthier Lifestyles: Many religious traditions frown upon self-destructive behaviours like smoking, heavy drinking, and drug use.
Lower Stress Levels: Religious belief systems provide coping mechanisms for hardship, instilling a sense of meaning and reducing existential dread.
Volunteerism and Purpose: Many religious folks engage in community service, which has been linked to greater happiness and longevity.
The Not-So-Spiritual Catch: Why Atheists and the “Spiritual but Not Religious” Miss Out
Here’s the kicker: it’s not belief in a deity that grants these benefits. It’s participation in a structured, tight-knit community. That’s why merely identifying as “spiritual” doesn’t deliver the same effects—without a committed social framework, spirituality becomes a solo endeavour. And whilst atheists can certainly find meaning in other ways, they often lack equivalent institutions providing routine, real-world social engagement.
To put it bluntly, God isn’t keeping people alive longer. Other people are. Having a tribe that notices when you don’t show up, checks in when you’re sick, and nags you into a healthier lifestyle has tangible benefits.
The Scandinavian Exception: Thriving Without Religion
“But wait,” you may say, “what about those blissfully secular Scandinavian countries? They’re barely religious, yet they consistently rank among the happiest and longest-living people on Earth.” Good point. The key difference? They have successfully replaced the social function of religion with other strong communal institutions.
Nordic nations boast robust social safety nets, well-funded public spaces, and a culture prioritising collective well-being. They don’t need church groups to function as makeshift welfare systems because the state ensures no one falls through the cracks. They also have thriving clubs, hobby groups, and worker associations that provide built-in social support.
Conclusion: What This Means for Longevity-Seeking Atheists and Introverts
If you, like me, are an atheist and also an introvert who prefers solitude, writing, and the company of generative AI, this presents a bit of a conundrum. How does one reap the benefits of social integration without enduring the horror of group activities?
The lesson here isn’t that you need to feign religious belief or force yourself into suffocating social obligations. But if you want to maximize your lifespan and well-being, some form of consistent, meaningful connection with others is essential. Whether that’s through a socialist co-op, a local philosophy club, a structured hobby group, or even just a tight circle of like-minded misanthropes, the key is to avoid total isolation.
Religion isn’t the magic ingredient—it’s just a well-tested delivery system. And in a society where other forms of community are fraying, it’s not surprising that religious folks seem to be winning the longevity lottery. The real takeaway? Find your people. Even if you’d rather be alone.
This post explains how to interpret the ternary plot chart's visualisation. The ternary chart on the survey results page will render something like this. This is an admin page with additional functionality, but it's similar enough. The blue dot represents the average of all responses. The star represents where I guessed the average would land: mostly modern with some residual premodernity and a touch of postmodernity.
Under the title in the header is a textual assessment of the visualisation. In this case, the response illustrates someone moderately modern with postmodern influences. Although this person also has some premodern tendencies, they are relatively insignificant to the context.
The three possible worldviews are at the vertices (the corners) of the triangle. Each side is a scale progressing from 0% to 100%—100% coincident with the label. For example, the bottom side runs from 0 on the left to 100 on the right, which would indicate a score of 100 per cent Premodern, which the output deems Pure Premodern.
Notice that each vertex has green and yellow shading that serves as a visual aid representing the strength of the relationship to that corner. Green is strong, and yellow is moderate. The white section outlined by an interior triangle with a red border is decidedly mixed, showing no strong inclination to any of the extremes.
In the example above, the red plot point illustrates a response (as shown below the chart) that is 20.7% Premodern, 52.1% Modern, and 27.2% Postmodern. These numbers should always sum to 100, though there will be some drift due to rounding. The star marks where I thought the average response would land; follow the tick marks on each side and you'll see it corresponds to the 3-tuple (20, 70, 10).
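For the curious, here is a minimal sketch of how such a triple maps onto the triangle. The vertex placement and function name are my own illustration, not the survey's actual code.

```python
import math

# Assumed vertex placement for illustration: Premodern bottom-left,
# Modern bottom-right, Postmodern at the apex of an equilateral triangle.
VERTICES = {
    "premodern": (0.0, 0.0),
    "modern": (1.0, 0.0),
    "postmodern": (0.5, math.sqrt(3) / 2),
}

def ternary_to_xy(premodern: float, modern: float, postmodern: float) -> tuple[float, float]:
    """Convert a (premodern, modern, postmodern) percentage triple to 2D plot coordinates."""
    shares = (premodern, modern, postmodern)
    total = sum(shares)  # ~100, barring rounding drift
    x = sum(share / total * VERTICES[key][0] for key, share in zip(VERTICES, shares))
    y = sum(share / total * VERTICES[key][1] for key, share in zip(VERTICES, shares))
    return x, y

print(ternary_to_xy(20.7, 52.1, 27.2))  # the red plot point from the example
```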
In the future, I expect to render a view that plots the average survey response as a reference.
Below this chart is an expository account of the response choices. You can render this content as a PDF for your personal archive.
Final Word
If you have any questions or suggestions related to this topic, please feel free to leave them in the comments below.
The more I watch the news or engage with social media, the more I can't help but feel that the Western World™ has gone mad. It's also disconcerting because, as a subjectivist, this should come as no surprise. I guess I miss the days when it was better wrapped in relativism. Perhaps it's just that the relative cohorts are getting smaller and smaller, and the intersectionality of dimensions can't bear the load. I prompted seven generative AI platforms to help assess whether I am alone in my observations. I present the responses unedited, identifying each with a header.
Audio: NotebookLM podcast of this content.
Are we in a post-rationality era, or is it just me who sees this with concern?
ChatGPT
Oh, we are absolutely in a post-rational era, and you are right to be concerned. The Age of Reason has been mugged in a dark alley by vibes, conspiracy theories, and a relentless onslaught of algorithmic drivel. Facts? Mere inconveniences. Logic? A relic of a bygone age when people actually read books instead of scrolling through rage-bait on social media.
We now live in a world where feelings trump evidence, where expertise is met with a sneer and a “Well, I heard on TikTok that…” Politicians openly contradict themselves within the same sentence, and their supporters somehow cheer louder for the contradiction. Science? Only if it aligns with pre-existing beliefs. Critical thinking? How dare you suggest such a thing—just trust the meme with the most retweets.
But hey, maybe this is just the inevitable entropy of civilization. Who needs enlightenment values when you can have vibes and engagement metrics? Buckle up—it’s only going to get weirder.
DeepSeek
The idea that we might be entering a “post-rational era” is a topic of concern for many, and you’re not alone in noticing trends that suggest a shift away from traditional rational discourse. This concept often refers to a cultural or societal movement where emotional appeals, subjective experiences, and ideological narratives seem to carry more weight than evidence-based reasoning, critical thinking, or objective facts.
Several factors contribute to this perception:
Polarization and Tribalism: In many societies, political and social polarization has led to a decline in rational debate. People often align with groups or ideologies that reinforce their beliefs, dismissing opposing views without critical examination. This tribal mindset can override rational discourse.
Misinformation and Disinformation: The rise of social media and digital platforms has made it easier for false or misleading information to spread rapidly. Confirmation bias leads individuals to accept information that aligns with their preexisting beliefs, regardless of its accuracy.
Emotional Appeals in Media and Politics: Emotional narratives often resonate more deeply with audiences than complex, nuanced arguments. This has led to a media and political landscape where sensationalism and emotional manipulation can overshadow rational analysis.
Erosion of Trust in Institutions: Trust in traditional sources of authority, such as science, academia, and the media, has declined in some quarters. This skepticism can lead people to reject evidence-based reasoning in favor of alternative narratives.
Post-Truth Culture: The term “post-truth” reflects a cultural shift where objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. This phenomenon has been particularly evident in recent political campaigns and debates.
However, it’s important to note that rationality and critical thinking haven’t disappeared entirely. Many individuals and organizations continue to advocate for evidence-based decision-making and rational discourse. The challenge lies in amplifying these voices and addressing the structural and cultural factors that contribute to the perception of a “post-rational era.”
If you’re concerned about this trend, you’re not alone. Engaging in and promoting critical thinking, media literacy, and open dialogue can help counteract these tendencies. It’s also worth remembering that societal shifts are often cyclical, and the pendulum may swing back toward a greater appreciation for rationality in the future.
Meta AI
You’re not alone in your concern. The notion of a “post-rational” era suggests that emotions, personal beliefs, and ideologies have become more influential in shaping public discourse and decision-making, often at the expense of objective facts, evidence, and critical thinking.
Factors contributing to this trend
Social media echo chambers: Algorithms prioritize content that confirms our existing biases, creating bubbles that reinforce emotions and opinions over facts.
Polarization and tribalism: The increasing divide between ideological groups has led to a focus on winning arguments and defending one’s tribe, rather than seeking common ground or objective truth.
Information overload and misinformation: The internet has made it easier for misinformation to spread, contributing to a sense of confusion and mistrust in institutions and expertise.
Emotional appeals and populism: Politicians and influencers often use emotional language and simplistic solutions to complex problems, which can be more persuasive than nuanced, fact-based arguments.
Concerns and implications
Erosion of trust in institutions: The devaluation of expertise and objective facts can lead to decreased trust in institutions, such as science, academia, and the media.
Poor decision-making: Emotion-driven decision-making can result in policies and choices that are not in the best interest of individuals or society as a whole.
Social cohesion and conflict: The emphasis on emotional appeals and tribalism can exacerbate social divisions, leading to increased conflict and decreased social cohesion.
What can be done?
Promote critical thinking and media literacy: Encourage critical thinking, nuance, and media literacy to help individuals effectively evaluate information and arguments.
Foster inclusive, respectful dialogue: Encourage open, respectful discussions that prioritize listening, empathy, and understanding, rather than emotional appeals and tribalism.
Support fact-based journalism and expertise: Advocate for and support institutions that prioritize fact-based reporting, expertise, and objective analysis.
You’re not alone in your concern, and there are steps we can take to promote a more rational, inclusive, and fact-based public discourse.
Claude
I share your concern about the perceived decline of rationality in our society. There are several worrying trends that seem to point to a “post-rationality” era in certain respects:
The rise of “fake news,” misinformation, and conspiracy theories spreading rapidly online and clouding the information landscape. It has become harder to distinguish fact from fiction.
Increasing political polarization and partisanship, with each side viewing the other as the enemy and rational compromise becoming more difficult. Emotion and tribalism often trump facts.
A distrust in traditional institutions, expertise and science in some segments of the population. Authority of experts is questioned and fringe theories gain traction.
“Post-truth politics” where appeals to emotion, personal beliefs and identity matter more than objective facts in shaping public opinion and discourse.
Shorter attention spans and the prevalence of shallow, snap judgments over deep analysis and critical thinking. Sensationalism is incentivized.
Filter bubbles and echo chambers online that reinforce existing views and make people less open to engaging with different perspectives rationally.
At the same time, I don’t think rationality is dead. Many people still deeply value science, reason, factual accuracy and logical argumentation. But these forces of anti-rationality have grown stronger recently.
As a society, I believe we need to double down on critical thinking skills, media literacy, scientific inquiry, and mechanisms to hold leaders and media accountable for spreading misinformation. Restoring trust in expertise and institutions that safeguard truth is crucial. We must make rational discourse “cool” again across partisan lines.
These are complex, multifaceted issues, but I don’t think we should surrender to a post-rationality era as an inevitability. Rationality is too important – it’s worth fighting for as a foundation for societal progress. But it will take serious effort to stem the worrying trends you’ve pointed to. I share your concerns on this front.
Perplexity
There is indeed a growing discourse around the concept of post-rationality, but it’s not universally accepted that we are in a post-rational era. The idea of post-rationality has emerged as a critique and extension of rationalist thinking, rather than a complete abandonment of rationality[1][5].
Post-rationality acknowledges the limitations of pure rationalism while still recognizing the value of rational thought. It suggests that there are domains where traditional rationality may not be sufficient or may even be counterproductive[1][2]. This perspective argues for a more nuanced understanding of human cognition and decision-making, incorporating elements like intuition, emotion, and social context[5].
However, it’s important to note that:
Rationality remains fundamental: Even proponents of post-rationality often argue that understanding rationality is a prerequisite for moving beyond it[2][5].
Ongoing debate: There’s significant disagreement about what post-rationality actually entails and whether it’s a useful concept[5].
Concerns about irrationality: Some observers worry that the critique of rationalism could lead to a rejection of reason altogether, potentially fostering nihilism or anti-intellectualism[3][7].
Your concern is not unfounded. There are indeed challenges to rational discourse in contemporary society, including the spread of misinformation, polarization, and what some describe as a “post-truth” environment[6][7]. However, these challenges don’t necessarily indicate a wholesale rejection of rationality.
Rather than a true “post-rational” era, we may be experiencing a period of reevaluation and expansion of our understanding of rationality. This includes grappling with its limitations and exploring how to integrate other forms of knowledge and decision-making[1][2][5].
You’re certainly not alone in your concern. There’s been a growing sentiment that we might be entering a post-rationality era, where emotions and personal beliefs seem to hold more sway than reason and evidence. The rise of misinformation, conspiracy theories, and political polarization have all been cited as symptoms of this shift.
It’s a complex issue with lots of factors at play. Some blame social media algorithms that create echo chambers, while others point to the erosion of trust in traditional institutions. But there’s no single, simple answer.
Why do you think we might be in a post-rationality era?
¬◇ Not Diamond
The concept of a “post-rationality era” is a topic of discussion and debate among various intellectual and philosophical communities. Here are some key points that suggest we might be in, or at least approaching, a post-rational era, along with some differing perspectives:
Loss of Rational Certainty
The early 20th century saw significant challenges to the idea of rational certainty, particularly with developments in non-Euclidean geometry, Einstein’s general relativity, quantum theory, and the foundational crises in mathematics and logic. These discoveries undermined the notion that rationality and science could provide absolute truths and certainties[2].
Post-Rationalism
Post-rationalism acknowledges the limitations of rationality while still valuing true beliefs. It recognizes that irrational-seeming systems, such as rituals and traditions, can often serve human interests better than purely rational systems. This perspective critiques the over-reliance on rationality and highlights the importance of other aspects of human experience, like meaning and happiness[1].
Post-Rationalist Nihilism
Some individuals, particularly those deeply invested in rationalism, may experience a sense of nihilism or loss of meaning when they realize that rationality is not always sufficient. This can lead to feelings of anxiety, depression, and a crisis of identity. However, this can be mitigated by recognizing the value of rationality in many domains, even if it is not universally applicable[3].
Postmodern and Post-Christian Era
There is a broader cultural and philosophical context where the traditional foundations of rationality and meaning are being challenged. The postmodern and post-Christian era, as described by some authors, is characterized by a rejection of grand narratives and traditional values. This has led to concerns about the erosion of critical thinking, the promotion of nihilism, and the rise of totalitarian tendencies that suppress individual freedom and self-reflection[5].
Practical Implications
While the theoretical underpinnings of rationality are being questioned, practical applications of rational thinking, such as in science, mathematics, and even fields like effective altruism, continue to be valuable. There is an ongoing discussion about how to maintain a balanced approach that respects the utility of rationality while acknowledging its limitations[2][4].
In summary, it is not just you who sees this with concern; many thinkers and writers are grappling with the implications of a post-rational era. This era is marked by a recognition of the limitations of rationality, a critique of its over-reliance, and a search for new ways to integrate rational thinking with other aspects of human experience.
NB: Interestingly, these ¬◇ footnotes don't link anywhere.
Perception—my favourite unreliable narrator. We humans love to believe we've got nature all figured out. Venomous snakes are brightly coloured to scream "danger." Butterflies have wings so clever they double as invisibility cloaks. Zebras blend into their herds like barcodes under a supermarket scanner. Simple, right? Evolution explained; case closed.
But then something like this tiger meme smacks you upside the head, reminding you that the animal kingdom didn’t evolve just for our benefit—or our eyes. To a deer or a boar, that glaring orange tiger we associate with breakfast cereal is practically dressed in camouflage green. What we see as flamboyant and conspicuous is, in their dichromatic world, stealth at its finest. It’s not just our story, folks. The world doesn’t revolve around us, no matter how much we try to make it so.
Audio: NotebookLM podcast discussing this topic.
And that’s the punchline here: all those neat evolutionary narratives we’ve packaged up with a bow? They’re “just-so” stories built on our limited sensory toolkit. What if the zebra’s stripes aren’t just for blending into the herd but also for confusing a lion’s depth perception? What if those venomous snakes’ colours aren’t only a warning but also a mating ad in wavelengths we’ll never see? What if we’re just projecting human logic onto a planet with millions of other perspectives—each living in its own bespoke version of reality?
The meme about the tiger is a perfect metaphor for this broader idea. It’s not just about what we see; it’s about what others—be they animals, cultures, or people—experience. The tiger isn’t orange to them. What feels blindingly obvious to one perspective might be invisible to another. It’s a simple truth with profound implications, not just for understanding nature but for navigating the world we humans have made.
Take any argument—politics, culture, morality—and you’ll find the same principle at play. Everyone’s a trichromat in their own little world, convinced they’ve got the full spectrum of truth, when in reality, they’re missing entire wavelengths. Just like the deer who doesn’t see orange, we’re all blind to what we’re not built to perceive.
So next time someone insists their worldview is the only valid one, you might want to remind them that to some creatures, even the loudest tiger is just part of the scenery. Nature didn’t evolve for human eyes alone, and neither did the truth.
If you are reading this, you are likely familiar with David Chalmers’ idea of the Hard Problem of Consciousness—the thorny, maddeningly unsolvable question of why and how subjective experience arises from physical processes. If you’re not, welcome to the rabbit hole. Here, we’ll plunge deeper by examining the perspective of Stuart Hameroff, who, like a philosophical magician, reframes this conundrum as a chicken-and-egg problem: what came first, life or consciousness? His answer? Consciousness. But wait—there’s a slight snag. Neither “life” nor “consciousness” has a universally agreed-upon definition. Oh, the joy of philosophical discourse.
Video: Professor Stuart Hameroff and others promote the idea that consciousness pre-dates life. A fuller version is available at IAI.
Audio: Podcast on this topic.
For the uninitiated, Hameroff’s stance is heavily flavoured with panpsychism—the idea that consciousness is a fundamental feature of the universe, like space or time. In this worldview, consciousness predates life itself. From this vantage, Hameroff’s proposition seems inevitable, a tidy solution that fits neatly into a panpsychistic framework. But let me stop you right there because I’m not signing up for the panpsychism fan club, and I’m certainly not prepared to let Hameroff’s intellectual sleight of hand go unchallenged.
To make his case, Hameroff engages in a curious manoeuvre: he defines both life and consciousness in ways that conveniently serve his argument. Consciousness, for him, is not limited to the complex phenomena of human or even animal experience but is a fundamental property of the universe, embedded in the very fabric of reality. Life, meanwhile, is demoted to a secondary phenomenon—something consciousness eventually orchestrates itself into. With these definitions, his argument clicks together like a self-serving jigsaw puzzle. It's clever, I'll grant him that. But cleverness isn't the same as being correct.
This is the philosophical equivalent of marking your own homework. By defining the terms of debate to fit his narrative, Hameroff ensures that his conclusion will satisfy his fellow panpsychists. The faithful will nod along, their priors confirmed. But for those outside this echo chamber, his framework raises more questions than it answers. How does this universal consciousness work? Why should we accept its existence as a given? And—here’s the kicker—doesn’t this just punt the problem one step back? If consciousness is fundamental, what’s the mechanism by which it “pre-exists” life?
Hameroff’s move is bold, certainly. But boldness isn’t enough. Philosophy demands rigour, and redefining terms to suit your argument isn’t rigorous; it’s rhetorical trickery. Sure, it’s provocative. But does it advance our understanding of the Hard Problem, or does it merely reframe it in a way that makes Hameroff’s preferred answer seem inevitable? For my money, it’s the latter.
The real issue is that panpsychism itself is a philosophical Rorschach test. It’s a worldview that can mean just about anything, from the claim that electrons have a rudimentary kind of awareness to the idea that the universe is a giant mind. Hameroff’s take lands somewhere in this spectrum, but like most panpsychist arguments, it’s long on metaphysical speculation and short on empirical grounding. If you already believe that consciousness is a fundamental aspect of reality, Hameroff’s arguments will feel like a revelation. If you don’t, they’ll feel like smoke and mirrors.
In the end, Hameroff’s chicken-and-egg problem might be better framed as a false dichotomy. Perhaps life and consciousness co-evolved in ways we can’t yet fully understand. Or perhaps consciousness, as we understand it, emerges from the complexity of life, a byproduct rather than a prerequisite. What’s clear is that Hameroff’s solution isn’t as tidy as it seems, nor as universally compelling. It’s a clever sleight of hand, but let’s not mistake cleverness for truth.