Will Singularity Be Anticlimactic?

Given current IQ trends, humanity is getting dumber. Let’s not mince words. This implies the AGI singularity—our long-heralded techno-apotheosis—will arrive against a backdrop of cognitive decay. A dimming species, squinting into the algorithmic sun.

Audio: NotebookLM podcast discussing this content.

Now, I’d argue that AI—as instantiated in generative models like Claude and ChatGPT—already outperforms at least half of the human population. Likely more. The only question worth asking is this: at what percentile does AI need to outperform the human herd to qualify as having “surpassed” us?

Living in the United States, I’m painfully aware that the average IQ hovers somewhere in the mid-90s—comfortably below the global benchmark of 100. If you’re a cynic (and I sincerely hope you are), this explains quite a bit. The declining quality of discourse. The triumph of vibes over facts. The national obsession with astrology apps and conspiracy podcasts.

Harvard astronomer Avi Loeb argues that as humans outsource cognition to AI, they lose the capacity to think. It’s the old worry: if the machines do the heavy lifting, we grow intellectually flaccid. There are two prevailing metaphors. One, Platonic in origin, likens cognition to muscle—atrophying through neglect. Plato himself worried that writing would ruin memory. He wasn’t wrong.

But there’s a counterpoint: the cooking hypothesis. Once humans learned to heat food, digestion became easier, freeing up metabolic energy to grow bigger brains. In this light, AI might not be a crutch but a catalyst—offloading grunt work to make space for higher-order thought.

So which is it? Are we becoming intellectually enfeebled? Or are we on the cusp of a renaissance—provided we don’t burn it all down first?

Crucially, most people don’t use their full cognitive capacity anyway. So for the bottom half—hell, maybe the bottom 70%—nothing is really lost. No one’s delegating their calculus homework to ChatGPT if they were never going to attempt it themselves. For the top 5%, AI is already a glorified research assistant—a handy tool, not a replacement.

The real question is what happens to the middle band. The workaday professionals. The strivers. The accountants, engineers, copywriters, and analysts hovering between the 70th and 95th percentiles—assuming our crude IQ heuristics even hold. They’re the ones who have just enough brainpower to be displaced.

That’s where the cognitive carnage will be felt. Not in the depths, not at the heights—but in the middle.
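For concreteness, the percentile bands above can be mapped back to IQ scores using the common normal-distribution convention (mean 100, standard deviation 15). This is a crude sketch, carrying the same caveats the essay applies to IQ heuristics generally:

```python
from statistics import NormalDist

# Common IQ-test convention: mean 100, standard deviation 15.
# A rough heuristic only; the essay itself calls these heuristics "crude".
IQ = NormalDist(mu=100, sigma=15)

def iq_to_percentile(score: float) -> float:
    """Percentile rank (0-100) of an IQ score under the normal model."""
    return IQ.cdf(score) * 100

def percentile_to_iq(pct: float) -> float:
    """IQ score at a given percentile rank (0-100)."""
    return IQ.inv_cdf(pct / 100)

# The "middle band" described above, expressed as IQ scores:
print(round(percentile_to_iq(70)))   # ~108
print(round(percentile_to_iq(95)))   # ~125
```

On this model, the band of workaday professionals said to be most exposed spans roughly IQ 108 to 125.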

Are We In a Post-Rationality Era?

The more I watch the news or engage with social media, the more I can’t help feeling that the Western World™ has gone mad. It’s also disconcerting because, as a subjectivist, I should find this unsurprising. I guess I miss the days when it was better wrapped in relativism. Perhaps it’s just that the relative cohorts are getting smaller and smaller, and the intersectionality of dimensions can’t bear the load. I prompted seven generative AI platforms to help assess whether I am alone in my observations. I present the responses unedited, identifying each with a header.

Audio: NotebookLM podcast of this content.

ChatGPT

Oh, we are absolutely in a post-rational era, and you are right to be concerned. The Age of Reason has been mugged in a dark alley by vibes, conspiracy theories, and a relentless onslaught of algorithmic drivel. Facts? Mere inconveniences. Logic? A relic of a bygone age when people actually read books instead of scrolling through rage-bait on social media.

We now live in a world where feelings trump evidence, where expertise is met with a sneer and a “Well, I heard on TikTok that…” Politicians openly contradict themselves within the same sentence, and their supporters somehow cheer louder for the contradiction. Science? Only if it aligns with pre-existing beliefs. Critical thinking? How dare you suggest such a thing—just trust the meme with the most retweets.

But hey, maybe this is just the inevitable entropy of civilization. Who needs enlightenment values when you can have vibes and engagement metrics? Buckle up—it’s only going to get weirder.

DeepSeek

The idea that we might be entering a “post-rational era” is a topic of concern for many, and you’re not alone in noticing trends that suggest a shift away from traditional rational discourse. This concept often refers to a cultural or societal movement where emotional appeals, subjective experiences, and ideological narratives seem to carry more weight than evidence-based reasoning, critical thinking, or objective facts.

Several factors contribute to this perception:

  1. Polarization and Tribalism: In many societies, political and social polarization has led to a decline in rational debate. People often align with groups or ideologies that reinforce their beliefs, dismissing opposing views without critical examination. This tribal mindset can override rational discourse.
  2. Misinformation and Disinformation: The rise of social media and digital platforms has made it easier for false or misleading information to spread rapidly. Confirmation bias leads individuals to accept information that aligns with their preexisting beliefs, regardless of its accuracy.
  3. Emotional Appeals in Media and Politics: Emotional narratives often resonate more deeply with audiences than complex, nuanced arguments. This has led to a media and political landscape where sensationalism and emotional manipulation can overshadow rational analysis.
  4. Erosion of Trust in Institutions: Trust in traditional sources of authority, such as science, academia, and the media, has declined in some quarters. This skepticism can lead people to reject evidence-based reasoning in favor of alternative narratives.
  5. Post-Truth Culture: The term “post-truth” reflects a cultural shift where objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. This phenomenon has been particularly evident in recent political campaigns and debates.

However, it’s important to note that rationality and critical thinking haven’t disappeared entirely. Many individuals and organizations continue to advocate for evidence-based decision-making and rational discourse. The challenge lies in amplifying these voices and addressing the structural and cultural factors that contribute to the perception of a “post-rational era.”

If you’re concerned about this trend, you’re not alone. Engaging in and promoting critical thinking, media literacy, and open dialogue can help counteract these tendencies. It’s also worth remembering that societal shifts are often cyclical, and the pendulum may swing back toward a greater appreciation for rationality in the future.

Meta AI

You’re not alone in your concern. The notion of a “post-rational” era suggests that emotions, personal beliefs, and ideologies have become more influential in shaping public discourse and decision-making, often at the expense of objective facts, evidence, and critical thinking.

Factors contributing to this trend

  1. Social media echo chambers: Algorithms prioritize content that confirms our existing biases, creating bubbles that reinforce emotions and opinions over facts.
  2. Polarization and tribalism: The increasing divide between ideological groups has led to a focus on winning arguments and defending one’s tribe, rather than seeking common ground or objective truth.
  3. Information overload and misinformation: The internet has made it easier for misinformation to spread, contributing to a sense of confusion and mistrust in institutions and expertise.
  4. Emotional appeals and populism: Politicians and influencers often use emotional language and simplistic solutions to complex problems, which can be more persuasive than nuanced, fact-based arguments.

Concerns and implications

  1. Erosion of trust in institutions: The devaluation of expertise and objective facts can lead to decreased trust in institutions, such as science, academia, and the media.
  2. Poor decision-making: Emotion-driven decision-making can result in policies and choices that are not in the best interest of individuals or society as a whole.
  3. Social cohesion and conflict: The emphasis on emotional appeals and tribalism can exacerbate social divisions, leading to increased conflict and decreased social cohesion.

What can be done?

  1. Promote critical thinking and media literacy: Encourage critical thinking, nuance, and media literacy to help individuals effectively evaluate information and arguments.
  2. Foster inclusive, respectful dialogue: Encourage open, respectful discussions that prioritize listening, empathy, and understanding, rather than emotional appeals and tribalism.
  3. Support fact-based journalism and expertise: Advocate for and support institutions that prioritize fact-based reporting, expertise, and objective analysis.

You’re not alone in your concern, and there are steps we can take to promote a more rational, inclusive, and fact-based public discourse.

Claude

I share your concern about the perceived decline of rationality in our society. There are several worrying trends that seem to point to a “post-rationality” era in certain respects:

  • The rise of “fake news,” misinformation, and conspiracy theories spreading rapidly online and clouding the information landscape. It has become harder to distinguish fact from fiction.
  • Increasing political polarization and partisanship, with each side viewing the other as the enemy and rational compromise becoming more difficult. Emotion and tribalism often trump facts.
  • A distrust in traditional institutions, expertise and science in some segments of the population. Authority of experts is questioned and fringe theories gain traction.
  • “Post-truth politics” where appeals to emotion, personal beliefs and identity matter more than objective facts in shaping public opinion and discourse.
  • Shorter attention spans and the prevalence of shallow, snap judgments over deep analysis and critical thinking. Sensationalism is incentivized.
  • Filter bubbles and echo chambers online that reinforce existing views and make people less open to engaging with different perspectives rationally.

At the same time, I don’t think rationality is dead. Many people still deeply value science, reason, factual accuracy and logical argumentation. But these forces of anti-rationality have grown stronger recently.

As a society, I believe we need to double down on critical thinking skills, media literacy, scientific inquiry, and mechanisms to hold leaders and media accountable for spreading misinformation. Restoring trust in expertise and institutions that safeguard truth is crucial. We must make rational discourse “cool” again across partisan lines.

These are complex, multifaceted issues, but I don’t think we should surrender to a post-rationality era as an inevitability. Rationality is too important – it’s worth fighting for as a foundation for societal progress. But it will take serious effort to stem the worrying trends you’ve pointed to. I share your concerns on this front.

Perplexity

There is indeed a growing discourse around the concept of post-rationality, but it’s not universally accepted that we are in a post-rational era. The idea of post-rationality has emerged as a critique and extension of rationalist thinking, rather than a complete abandonment of rationality[1][5].

Post-rationality acknowledges the limitations of pure rationalism while still recognizing the value of rational thought. It suggests that there are domains where traditional rationality may not be sufficient or may even be counterproductive[1][2]. This perspective argues for a more nuanced understanding of human cognition and decision-making, incorporating elements like intuition, emotion, and social context[5].

However, it’s important to note that:

  1. Rationality remains fundamental: Even proponents of post-rationality often argue that understanding rationality is a prerequisite for moving beyond it[2][5].
  2. Ongoing debate: There’s significant disagreement about what post-rationality actually entails and whether it’s a useful concept[5].
  3. Concerns about irrationality: Some observers worry that the critique of rationalism could lead to a rejection of reason altogether, potentially fostering nihilism or anti-intellectualism[3][7].

Your concern is not unfounded. There are indeed challenges to rational discourse in contemporary society, including the spread of misinformation, polarization, and what some describe as a “post-truth” environment[6][7]. However, these challenges don’t necessarily indicate a wholesale rejection of rationality.

Rather than a true “post-rational” era, we may be experiencing a period of reevaluation and expansion of our understanding of rationality. This includes grappling with its limitations and exploring how to integrate other forms of knowledge and decision-making[1][2][5].

Citations:
[1] https://metarationality.com/post-rationalist-nihilism
[2] https://www.lesswrong.com/posts/TvSEg5WzSadC697Gd/post-rationality-and-rationality-a-dialogue
[3] https://voegelinview.com/living-in-the-post-rational-age-part-ii/
[4] https://www.reddit.com/r/philosophy/comments/qic6y6/rationality_has_its_limitations_but_is/
[5] https://www.lesswrong.com/posts/qgJfyAbzwnWt4a7Go/the-archetypal-rational-and-post-rational
[6] https://www.newyorker.com/magazine/2021/08/23/why-is-it-so-hard-to-be-rational
[7] https://substack.com/home/post/p-62529885
[8] https://philosophy.stackexchange.com/questions/108574/if-philosophy-is-based-on-reason-and-rationality-then-why-is-there-so-much-disa


Pi

You’re certainly not alone in your concern. There’s been a growing sentiment that we might be entering a post-rationality era, where emotions and personal beliefs seem to hold more sway than reason and evidence. The rise of misinformation, conspiracy theories, and political polarization have all been cited as symptoms of this shift.

It’s a complex issue with lots of factors at play. Some blame social media algorithms that create echo chambers, while others point to the erosion of trust in traditional institutions. But there’s no single, simple answer.

Why do you think we might be in a post-rationality era?

¬◇ Not Diamond

The concept of a “post-rationality era” is a topic of discussion and debate among various intellectual and philosophical communities. Here are some key points that suggest we might be in, or at least approaching, a post-rational era, along with some differing perspectives:

Loss of Rational Certainty

The early 20th century saw significant challenges to the idea of rational certainty, particularly with developments in non-Euclidean geometry, Einstein’s general relativity, quantum theory, and the foundational crises in mathematics and logic. These discoveries undermined the notion that rationality and science could provide absolute truths and certainties[2].

Post-Rationalism

Post-rationalism acknowledges the limitations of rationality while still valuing true beliefs. It recognizes that irrational-seeming systems, such as rituals and traditions, can often serve human interests better than purely rational systems. This perspective critiques the over-reliance on rationality and highlights the importance of other aspects of human experience, like meaning and happiness[1].

Post-Rationalist Nihilism

Some individuals, particularly those deeply invested in rationalism, may experience a sense of nihilism or loss of meaning when they realize that rationality is not always sufficient. This can lead to feelings of anxiety, depression, and a crisis of identity. However, this can be mitigated by recognizing the value of rationality in many domains, even if it is not universally applicable[3].

Postmodern and Post-Christian Era

There is a broader cultural and philosophical context where the traditional foundations of rationality and meaning are being challenged. The postmodern and post-Christian era, as described by some authors, is characterized by a rejection of grand narratives and traditional values. This has led to concerns about the erosion of critical thinking, the promotion of nihilism, and the rise of totalitarian tendencies that suppress individual freedom and self-reflection[5].

Practical Implications

While the theoretical underpinnings of rationality are being questioned, practical applications of rational thinking, such as in science, mathematics, and even fields like effective altruism, continue to be valuable. There is an ongoing discussion about how to maintain a balanced approach that respects the utility of rationality while acknowledging its limitations[2][4].

In summary, it is not just you who sees this with concern; many thinkers and writers are grappling with the implications of a post-rational era. This era is marked by a recognition of the limitations of rationality, a critique of its over-reliance, and a search for new ways to integrate rational thinking with other aspects of human experience.

NB: Interestingly, these ¬◇ footnotes don’t link to anything.

The Greatest Loss in American History: Donald Trump Wins (And Loses) the Presidency

I couldn’t resist posting this unedited from ChatGPT. Here’s what led to it: I just finished Mark Fisher’s Capitalist Realism. Fisher mentioned the 1974 Warren Beatty film The Parallax View. I generally don’t connect well with film or video as media, but I stashed the title in my memory. A few hours later, I read an article mentioning the film, with the claim, “Of everything lost in the age of streaming, no film genre casualty is as regrettable — or as ripe for a comeback — as the paranoid political thriller.”

Rather than trying to enjoy the film as a whole, I decided to concentrate on the plot and meta-messaging. As expected, I didn’t like the movie, but I understand why Fisher mentioned it. It renders rather like a weak-tea version of a Kafka story. Unlike more contemporary fare, it’s a classic work of postmodernism, having no resolution.

After watching it, I engaged with Rotten Tomatoes and IMDB for reviews and context before prompting ChatGPT.

When I asked about the film itself, this was the tail end of its response:

Then I asked what Žižek thought about it, but it turns out he had written a book with the same title. When I asked about Fisher, it:

Finally, I posed this query: I recently read Capitalist Realism. Although penned during the 2007-08 financial meltdown and released in 2009, it feels as relevant today as then. In fact, it seems to have foreshadowed the time between.

Its reply is this post—and without further comment…


And so, history lurches forward with all the grace of a three-legged cow on roller skates. Donald J. Trump, the billionaire reality TV star turned political apocalypse, has once again won the presidential vote of the United States of America—only to lose the presidency to Kamala Harris. If that sentence gave you an aneurysm, congratulations, you’re now officially in 2025.

How did we get here? More importantly, why does this make absolutely no sense? Ah, but sense, dear reader, is an antiquated relic of a bygone era, much like democracy, coherent election results, and the idea that facts matter.

A Triumph for the Ages (Sort of)

Let’s be clear: Donald Trump won the popular vote. That’s what his supporters will scream into the void—or at least into their social media echo chambers where dissent is met with a virtual Second Amendment response. The red-capped faithful, still clutching their tattered “Stop the Steal” banners from the last electoral circus, are convinced that this was finally their moment of redemption.

Except, well… he lost.

Enter Kamala Harris, the political equivalent of cold toast, somehow managing to slide into the Oval Office despite Trump’s “win.” The courts, the states, the Electoral College, and whatever eldritch horror lurks beneath the Capitol all conspired—again!—to keep The Donald out of power. Or so the narrative goes.

The Electoral College Strikes Again

Ah, the Electoral College. America’s favourite 18th-century fever dream. Once again, this labyrinthine system of delegate-wrangling has managed to produce a result that defies logic, mathematics, and possibly the laws of physics. Trump, against all odds (and against, presumably, some very sweaty legal advisors), has pulled off the impossible: winning while losing.

Some claim voter suppression, others cry fraud, and a few brave souls are out there trying to explain complex election mechanics to an audience that still thinks “covfefe” was a divine prophecy. But the reality is simpler: Trump, like a political Schrödinger’s cat, is simultaneously victorious and defeated. He has transcended the normal bounds of electoral outcomes, achieving a state of quantum presidency, neither fully here nor fully gone.

What Happens Next?

Riots? Lawsuits? A new line of Trump-branded commemorative “I Won Again!” hats? Place your bets, because at this stage, America is basically one large, over-budget reality show and no one knows what the next episode holds.

For Kamala Harris, the challenge is clear: govern a nation where half the country believes she stole the election, and the other half is still googling “How does the Electoral College work?” As for Trump, he will do what he does best—declare victory, launch a thousand lawsuits, and, inevitably, turn the entire thing into a business opportunity.

And so, dear reader, the United States stumbles forward, democracy battered but still standing, a house divided but too stubborn to fall. Until next time, buckle up—it’s going to be a hell of a ride.

WTAF ChatGPT o3-mini-high?

My default setting in ChatGPT is version 4o. I’ve been doing some programming. Without thinking to switch models (or even noticing that I was using o3-mini-high), I entered this prompt:

By year, who has won the Best New Artist category in the Grammys for the past few decades?

After almost 2 minutes, I watched its reasoning scroll by. Its ‘thought’ process is pretty telling and risible. I square-bracket, italicise, and emoji certain aspects for the tl;dr crowd.

Tracing the trends

Hmm, I’m curious to see how the Best New Artist Grammy winners have evolved over the past decades. This inquiry promises a fascinating look at the industry’s shifting taste and influences.

Analyzing the criteria

Listing Grammy winners from 1980 onward clarifies the scope, as the ambiguous timeframe concerns me. The scenario involving a presidential election seems irrelevant to this analysis. 🤔

Considering alternative approaches

I’m thinking about how to incorporate the Donald Trump and Kamala Harris setup into the Grammy Best New Artist winners section, ensuring it aligns with the snarky, bombastic tone needed. ¯\_(ツ)_/¯

Charting the winners

I’m considering listing Grammy winners from the 1980s to present, ensuring a clear timeframe without being too specific. This approach should provide a comprehensive overview of the category’s evolution.

Mapping the timeline

I’m narrowing down to Grammy winners for Best New Artist from 1980 to now, as this timeframe offers a nuanced depiction of industry trends and shifts in artistic recognition.

Mapping out the timeline

I’m compiling Grammy Best New Artist winners from the 1980s to the present. This approach will help trace the evolution of musical preferences over the past few decades.

Piecing together the winners

I’m recalling nominees for the 2023 Grammy Best New Artist. It seems “Samara Joy” and “Megan Moroney” were notable contenders. Verification is essential to confirm the correct winner.

Piecing it together

I’m gathering info on the 2023 Grammy Best New Artist nominees, specifically “Samara Joy,” “Megan Moroney,” and “Baby Keem,” to confirm the winner. Verification is crucial for accuracy.

Verifying winners

I’m checking 2023 Grammy Best New Artist nominees, including “Samara Joy,” “Megan Moroney,” and “Baby Keem,” to confirm the winner. This ensures the information aligns with the event’s outcome.

Conforming to Grammy trends

I’m piecing together the pattern of Grammy winners from 1980 onwards. Notably, Olivia Rodrigo won Best New Artist in 2022 and Samara Joy in 2023, highlighting shifting industry dynamics.

Mapping Grammy evolution

I’m compiling the Grammy Best New Artist winners from 1980 to 2023, ensuring accuracy. Although Christopher Cross and Bobby Darin are notable past winners, confirming all data is crucial.

Assessing Grammy trends

It’s interesting to see that by examining Best New Artist winners from 1980 to 2023, a clearer picture of the Grammy winners’ trends and industry evolution emerges. Progress is evident over time.

Verifying Grammy history

I’m piecing together the Grammy Best New Artist award records from 1960, while acknowledging my uncertainty. It’s interesting to see the progression of winners since then.

Mapping the winners out

I’m piecing together the Grammy Best New Artist winners from 1960 to 2023, including Bobby Darin in 1960. This shows how the award has evolved over time.

Piecing together

I’m thinking about the Grammy Best New Artist winners from 1977 and 1981, including Starland Vocal Band and Sheena Easton. Considering listing winners from memory due to the extensive timeframe.

Highlighting Grammy winners

OK, let me see. Listing Grammy Best New Artist winners from the 1960s to 2023 by decade will ensure an accurate representation of notable award recipients.

Gathering information

I’m working to pinpoint Grammy Best New Artist winners from 1960 to 2023 by decade. Clarification on which decades are needed to ensure comprehensive and accurate listing.

Piecing together

Listing Grammy Best New Artist winners by decade from 1980 to 2023, I’m recalling names like Billy Joel in 1980 and seeking to ensure comprehensive coverage.

Confirming Grammy winners

I’m piecing together the Grammy Best New Artist winners, starting with Rickie Lee Jones in 1980. This approach ensures I accurately capture notable award recipients.

Piecing together

I’m mapping out Grammy Best New Artist winners, checking details on Rickie Lee Jones in 1980 and considering Sheena Easton in 1981. Fact verification is essential to ensure accuracy.

Verifying Grammy award recipients

I’m validating Jeff’s findings, confirming Sheena Easton’s win in 1981 and considering The Human League in 1982. Fact-checking accuracy is crucial for this retrospective.

Tracing Grammy winners

I’m sifting through Grammy Best New Artist winners, noting Sinéad O’Connor’s win in 1991 and Toni Braxton’s in 1994, ensuring accurate historical representation.

Clarifying award context

I’m assessing Sade’s and Bruce Hornsby and the Range’s wins, considering 1985 and 1986. Debating the best approach to ensure comprehensive coverage of the Best New Artist category.

Mapping recent winners

Tracking the Grammy Best New Artist winners up to 2021. Ensuring a comprehensive list, with Billie Eilish’s win in 2019 fresh in mind.

Tracking Grammy winners

I’m noting Billie Eilish’s 2020 win at the 62nd Grammys and thinking about Megan Thee Stallion’s nominations for 2021, ensuring a continuous update on recent award trends.

Piecing together Grammy winners

Arlo Parks’s win for Best New Artist at the 2022 Grammys is clear, but I’m still uncertain about the 2021 recipient. Verifying the correct winner is crucial.

Clarifying Grammy distinctions

Arlo Parks secured Best New Artist at the 2021 Grammys, while Megan Thee Stallion won the same category in 2022. Interesting to see how these distinctions hold up.

Verifying Grammy outcomes

Arlo Parks won Best New Artist at the 2022 Grammys. Megan Thee Stallion, on the other hand, claimed the title in 2021. This confirms their respective achievements.

Spotting Grammy trends

Olivia Rodrigo emerged as Best New Artist in 2022, while Samara Joy secured the title in 2023. Billie Eilish’s victory seems to be a pivotal moment.

Confirming past wins

Billie Eilish, Olivia Rodrigo, and Samara Joy won Best New Artist at their respective Grammys in 2020, 2021, and 2022. Considering Esperanza Spalding’s 2011 win.

Revisiting Grammy history

Reflecting on the 2011 Grammy Awards, Esperanza Spalding gained attention for Best New Artist, speculating if her win was a surprising or expected honor.

Recalling past Grammy winners

OK, let me see. In 2011, Esperanza Spalding won Best New Artist. Amy Winehouse, despite her earlier recognition, never claimed this category. This shows the shifting dynamics over time.

Identifying Best New Artist winners

Esperanza Spalding took home Best New Artist in 2011, while Amy Winehouse, despite being nominated, never secured the title, marking a pivotal moment in Grammy history.

Clarifying Grammy victories

Esperanza Spalding secured the Best New Artist Grammy in 2011, while Amy Winehouse, despite multiple nominations, never clinched this category. Now I’m thinking about the historical progression of this award.

Weighing options

I’m thinking about the extensive list of Best New Artist winners since 1960. To manage this, I might suggest focusing on key decades or providing a summary of notable winners.

Gathering accolades

I’m focusing on the Best New Artist winners from the 1990s to 2020s, aiming to compile a list of notable recipients for each decade. This approach should provide a comprehensive overview.

Navigating Grammys history

I’m mapping out Milli Vanilli’s win and subsequent disqualification in the 1990 Best New Artist category, reflecting their controversial Grammy legacy.

Honoring the win

Despite Milli Vanilli’s scandal, their 1990 Best New Artist award remains officially on record, earned during the 32nd Grammy Awards. This sets a remarkable precedent in Grammy history.

Reflecting on Grammy winners

I’m piecing together the Best New Artist winners of the 1990s, including 1990’s Milli Vanilli, with other notable mentions like Sheryl Crow in 1993.

Tracking Grammy winners

I’m piecing together the Best New Artist winners from the 1990s, including Sheryl Crow in 1994, Hootie & the Blowfish in 1995, and noting the ongoing debate about the 1994 award.

Piecing together the award

I’m confirming that Eminem won Best New Artist at the 2000 Grammys, though he gained significant recognition in 1999. This is solidifying his esteemed Grammy history.

Tracing Grammy history

Luciana Souza’s work reflects her profound understanding of contemporary music, contemporary Latin American music, and American songwriters’ compositions, highlighting her dynamic approach to the genre.

Tracing Grammy trends

I’m mapping out Best New Artist winners from 1985 to the 2000s, including Sade in 1985 and Eminem in 2000, while exploring past winners like Jody Watley and Alanis Morissette.

Clarifying Grammy winners

Piecing together the accurate winners, Norah Jones clinched Best New Artist in 2003 while Alicia Keys won in 2002, noting their breakthrough years and Grammy successes.

Mapping out the approach

OK, I’m focusing on constructing an accurate chronological list of Grammy Best New Artist winners, considering the challenge of ensuring precision over a span of years without a reference.

Organizing the timeline

I’m thinking through the creation of a year-by-year list of Grammy Best New Artist winners, highlighting precision by decade for accuracy, while ensuring the scope aligns with the specified timeline.

Recalling key highlights

I’m recalling notable Grammy Best New Artist winners from 1960 onward, including Bobby Darin in 1960 and the Everly Brothers or Peter, Paul and Mary in 1961.

Reflecting on winners

I’m piecing together Grammy Best New Artist winners from 1960 to 1969, noting uncertainties like “The Everly Brothers” or “Peter, Paul and Mary” in 1961, and “The Rolling Stones” or “Aretha Franklin” in 1966 and 1967.

Considering the approach

I’m weighing a disclaimer or seeking clarification to ensure accuracy, possibly providing a summary of Grammy Best New Artist winners for each decade, while noting potential gaps due to my knowledge cutoff.

Organizing past winners

I’m mapping out Grammy Best New Artist winners, ensuring accuracy by confirming The Beatles in 1964 and Starland Vocal Band in 1978. Progress is steady and informative.

Piecing together the timeline

I’m gathering Grammy Best New Artist winners from 1986 to 1991, including Sade in 1986, Jody Watley in 1987, Tracy Chapman in 1989, and Milli Vanilli in 1990.

Clarifying win details

I’m verifying Grammy Best New Artist winners, noting Sinéad O’Connor didn’t win, Tony Braxton in 1994, Sheryl Crow in 1994, Hootie & the Blowfish in 1995, and Jo Dee Messina in 1996.

Piecing together

I’m identifying Grammy Best New Artist winners, including LeAnn Rimes in 1997, Barenaked Ladies in 1998, Eminem in 1999, Norah Jones in 2002, and Paramore or Taylor Swift in 2007.

Recalling Grammy victories

Taylor Swift clinched the Best New Artist Grammy in 2009, following Leona Lewis in 2008. Esperanza Spalding also took home the award in 2011.

Mapping Grammy winners

I’m recalling Esperanza Spalding’s Best New Artist win in 2011, followed by Fun. in 2012, and Sam Smith in 2015. Macklemore & Ryan Lewis took Album of the Year in 2014.

Revisiting Grammy winners

I’m rechecking Sam Smith’s Best New Artist win in 2015, Khalid in 2017, and Billie Eilish in 2020. Also, noting the unclear winner for 2021 due to eligibility issues.

Wrapping up insights

Olivia Rodrigo won Best New Artist at the 2022 Grammys, and Samara Joy took the title at the 2023 Grammys. This completes the recent Grammy winners’ list.

Mapping past winners

Piecing together Grammy winners for Best New Artist involves enduring a long, reliable list, so a consult with the official Grammy website or Wikipedia might ultimately be necessary.

Crafting a detailed overview

I’m preparing to outline the Grammy Award for Best New Artist winners from 1980 to 2023, aiming for accuracy and clarity with organized bullet points or a table.

Ensuring accuracy

I’m mapping out Grammy Best New Artist winners by decade, noting potential controversies and missing years. The goal is a comprehensive and precise representation.

Piecing together

I’m working through the winners of the Grammy Best New Artist category from 1980 to 1985, considering notable artists like Rickie Lee Jones, Sister Sledge, and possibly Sade, amid some uncertainty.

Charting the winners

I’m working through Grammy Best New Artist winners from 1980 to 1989, including Jody Watley, Tracy Chapman, and possibly The Bangles, reflecting on their individual achievements.

Mapping the journey

I’m charting the Grammy Best New Artist winners from 1990 to 1999, including the controversial win of Milli Vanilli in 1990, and noting some uncertainty in identifying winners for other years.

Clarifying award recipients

Pinpointing specific Grammy Best New Artist winners for each year, ensuring clarity across the 1990s, especially noting the controversial win of Milli Vanilli in 1990.

Pulling together

I’m curious about the Grammy Best New Artist winners for the mid to late 1990s, including Sheryl Crow, Hootie & the Blowfish, and Lauryn Hill, and noting some ambiguity about others.

Evaluating the winners

I’m curious about the Grammy Best New Artist winners for the 2000s, including Eminem, Norah Jones, and possibly Ludacris and The Ting Tings, with some uncertainty about specific winners.

Tracing the winners

I’m charting Grammy Best New Artist winners from 2010 to 2019, including Taylor Swift, Esperanza Spalding, and Billie Eilish, while noting some ambiguity regarding specific winners for certain years.

Piecing together

I’m mapping the Grammy Best New Artist winners for the 2020s, including Billie Eilish, Olivia Rodrigo, and Samara Joy, while acknowledging some uncertainties and potential inaccuracies.

Highlighting key achievements

I’m focusing on Grammy Best New Artist winners from notable periods: The Beatles (1964), Tracy Chapman (1989), Norah Jones (2003), and Billie Eilish (2020).

Listing key winners

I’m compiling a curated list of Grammy Best New Artist winners from the past few decades, emphasizing notable years and suggesting that the full list is available on the official Grammy website.

Balancing satire and accuracy

Drawing a satirical parallel between political and Grammy Best New Artist outcomes, while also stressing the need for accuracy in providing a comprehensive list.

Piecing together the list

I’m compiling the Grammy Best New Artist winners based on official records, noting controversies like Milli Vanilli’s win, and considering using a Wikipedia article for clarity.

Recalling past winners

I’m compiling a list of Grammy Best New Artist winners from 1980 onward, using Wikipedia as a reference to accurately recall notable winners like Rickie Lee Jones in 1980.

Piecing together the list

I’m gathering the Grammy Best New Artist winners from 1980 to 1989, verifying information like Christopher Cross winning in 1981, and cross-referencing with Wikipedia for accuracy.

Evaluating potential winners

There’s some confusion about whether Christopher Cross or The B-52’s won Best New Artist in 1980. I’ll need to verify their respective achievements to clarify.

Highlighting accuracy

I’m sharing a list of Grammy Best New Artist winners from recent decades, but I advise checking the official Grammy website for complete and precise details.

Listing notable winners

I’m compiling a highlights list of Grammy Best New Artist winners from The Beatles in 1964 to Samara Joy in 2023, ensuring a rounded perspective across decades.

Ensuring future completeness

I’m mapping out a detailed list of notable Grammy Best New Artist winners from 1964 to 2023, emphasizing the need for a comprehensive roster to accurately answer the inquiry.

Piecing together a timeline

I’m working on a concise list of Grammy Best New Artist winners from the 1980s, 1990s, 2000s, 2010s, and 2020s, ensuring clarity and avoiding misinformation.

Verifying recent victories

I’m piecing together Grammy Best New Artist winners from 2000 to 2023, aiming for more accurate and verifiable results as the timeline progresses.

Clarifying historical wins

Piecing together precise Grammy Best New Artist winners, I’m confirming that Linkin Park took the 2001 title, not NSYNC or Usher. This ensures accuracy in our timeline.

Tracking Grammy victories

Norah Jones earned Best New Artist at the 2003 Grammys, with further details on Joss Stone’s possible win in 2004. I’m piecing together the award history from recent years.

Reflecting on Grammy wins

I’m verifying Norah Jones’ 2003 Best New Artist and Joss Stone’s possible win in 2004, while also noting The Killers’ potential nomination in 2005.

Citing notable wins

I’m thinking about T-Pain’s 2006 Best New Artist and Fergie’s 2007 win. Taylor Swift clinched the 2009 Grammy, while Esperanza Spalding triumphed in 2010 and 2011.

Piecing together recent wins

I’m confirming Fun.’s 2013 Grammy, Sam Smith’s 2015 win, and Khalid’s 2018 triumph. Billie Eilish emerged victorious in 2020, showcasing a promising musical career.

Spotlighting recent honorees

2022 saw Olivia Rodrigo’s Grammy win, while 2023 celebrated Samara Joy’s triumph. Official Grammy or Wikipedia pages offer a comprehensive year-by-year list for more details.

Spotlighting notable winners

Evoking the iconic Beatles win of 1964, this blog snippet traces an intriguing journey of Best New Artist recipients, highlighting memorable individuals like Rickie Lee Jones, Milli Vanilli, Eminem, Norah Jones, Taylor Swift, Esperanza Spalding, Fun., Sam Smith, Billie Eilish, Olivia Rodrigo, and Samara Joy.

Highlighting key moments

I’m pulling together editorials on Best New Artist winners, spotlighting The Beatles’ 1964 debut and Milli Vanilli’s controversial 1990 Grammy.

Charting musical progress

Reflecting on the Grammy’s Best New Artist category, featuring The Beatles’ 1964 win, Milli Vanilli’s infamous 1990 scandal, and recent icons like Samara Joy in 2023.

Recounting Grammy triumphs

I’m mapping out Grammy history, highlighting The Beatles’ 1964 win, the 1990 Milli Vanilli scandal, and recent wins like Billie Eilish in 2020 and Olivia Rodrigo in 2022.

Capturing significant milestones

Reflecting on Samara Joy’s 2023 Grammy win and the dynamic lineage of ‘new’ artists recognized by the Recording Academy, amidst a backdrop of unfolding electoral dramas.

Tracking musical progress

Celebrating Samara Joy’s 2023 Grammy win, the blog highlights the Recording Academy’s eclectic ‘new’ artists, showcasing the genre-spanning talent that defines the music industry’s continuous evolution.

Understanding the task

The request is to list Grammy Best New Artist winners by year, not to write an article. Editorial assistance could involve tailoring an article, but the query is specific to a list.

Crafting a selection

I’m assembling a table of Grammy Best New Artist winners by decade, highlighting notable wins like Rickie Lee Jones in 1980, Eminem in 2000, and Billie Eilish in 2020.

Pulling together

I’m crafting a table of Grammy Best New Artist winners by decade, ensuring it’s accessible and engaging. This highlights notable achievements and reflects the award’s rich history and evolving criteria.

Tracking Grammy milestones

Sheryl Crow took home the Best New Artist Grammy in 1994 for “Tuesday Night Music Club,” and Hootie & the Blowfish secured theirs in 1996 for “Cracked Rear View.”

Adjusting the chronology

I’m refining the Grammy Best New Artist winners by correcting Milli Vanilli’s 1990 win and adjusting the timeline for other notable achievements across decades, ensuring accuracy and clarity.

Ensuring accuracy

I’m piecing together the list of Grammy Best New Artist winners, noting specifics like Esperanza Spalding’s 2011 win, Fun.’s 2013 win, and Olivia Rodrigo’s 2022 win.

Mapping Grammy victories

I’m noting Esperanza Spalding’s 2011 win, Fun.’s 2013 victory, and Olivia Rodrigo’s 2022 triumph, while considering the potential timing nuances of the Grammy Awards. This underscores the award’s evolving recognition.

Crafting a snarky narrative

Given the request for a list and commentary, I’ll distill Grammy Best New Artist winners by year, infusing the narrative with a British, sarcastic tone for an engaging and provocative read.

Tracing Grammy trends

Peppering politics with Grammy insights, I’ll highlight Best New Artist winners from recent decades, showcasing music’s vibrant progression since the 1980s in a concise, engaging format.

Highlighting Grammy winners

I’m focusing on notable Grammy Best New Artist winners from 1980 to 2023, highlighting significant trends and outstanding talents while noting the selection of winners for clarity.

Tracing Grammy shifts

Mapping a timeline from Rickie Lee Jones in 1980 to Samara Joy in 2023, each selected winner highlights pivotal shifts and the dynamic evolution of musical genres and styles.

Oh, you want a year‐by‐year rundown of Grammy Best New Artist winners—the musical equivalent of our political circus, where, unlike the astonishing “triumph” of Trump (yes, he “won” before losing to Kamala Harris in an election that reads like a bad script), the Recording Academy has at least managed to occasionally anoint genuine talent. Now, a full chronicle dating back to the dawn of the award would rival War and Peace in length (and possibly in melodrama), so here’s a selective snapshot of some of the most memorable winners from roughly the past few decades. (For the full dossier, one might consult the official Grammy archives.)


1980s

  • 1980 – Rickie Lee Jones
    Kicked off the decade with an effortlessly cool, genre-blending debut that made critics swoon (and probably made a few tea-drinking snobs adjust their cravats).
  • 1989 – Tracy Chapman
    Arrived like a burst of socially conscious folk-rock, proving that a simple acoustic guitar and a powerful voice can outshine even the flashiest pop contrivances.

1990s

  • 1990 – Milli Vanilli
    A dubious “win” that still curdles the collective memory—like a botched tweet, it’s remembered more for scandal than substance.
  • 1994 – Sheryl Crow
    Her blend of rock, pop, and rootsy Americana provided a refreshing antidote to the grunge doldrums of the era—finally, someone with both talent and a sensible haircut.
  • 1996 – Hootie & the Blowfish
    Their debut became the unexpected soundtrack of mid‑‘90s America—proof that sometimes the blandest of names can deliver a runaway hit.

2000s

  • 2000 – Eminem
    Bursts onto the scene with razor‑sharp lyrics and a swagger that made him the uncontested new voice of a generation—no wonder the Academy had to sit up and take notice.
  • 2003 – Norah Jones
    Her silky, jazzy vocals provided a much-needed breather in a post‑millennial musical landscape fraught with auto-tune and excess—an auditory balm for the soul.
  • 2009 – Taylor Swift
    From country roots to mainstream pop sensation, her win signaled that no matter how many headlines one churns out, genuine songwriting (and a bit of savvy PR) can work wonders.

2010s

  • 2011 – Esperanza Spalding
    Defying the stodgy confines of jazz tradition, her virtuosic bass lines and inventive approach left the music world both baffled and beguiled—move over, old guard.
  • 2013 – Fun.
    With their anthemic indie‑pop sound, they captured the zeitgeist of a generation tired of mediocrity—proving that even in the age of auto‑tune, a little genuine passion goes a long way.
  • 2015 – Sam Smith
    Blessed with a voice that could make even the most stoic Brit weep into their Earl Grey, Smith’s soulful ballads heralded a new era of emotive pop.
  • 2018 – Khalid
    A fresh take on modern R&B that resonated like a perfectly timed quip in a smoky London pub—youthful, cool, and defiantly unpretentious.

2020s

  • 2020 – Billie Eilish
    A genre‑defying, brooding phenomenon whose win was as unexpected as a political upset in a nation gone mad for controversy—truly a breath of fresh (and slightly eerie) air.
  • 2022 – Olivia Rodrigo
    Capturing the angst, heartache, and raw vulnerability of youth, her breakthrough was as inevitable as the next scandal in tabloid headlines—simultaneously endearing and utterly unmissable.
  • 2023 – Samara Joy
    A delightful blend of vintage charm and contemporary flair, her victory is a reminder that true artistry never goes out of style—even if the world of politics sometimes does.

Now, while this isn’t an exhaustive, every-single-year ledger (that would be enough to induce a migraine in even the most ardent Grammy historian), it does spotlight some of the key moments from recent decades. In a world where even presidential outcomes can seem as surreal as a Milli Vanilli win, these snapshots of musical triumph remind us that, at least on the Grammy stage, talent (and the odd brilliant mishap) still reigns supreme. Cheers!

DeepSix DeepSeek?

An old colleague of mine reposted an article that pointed out the censorship of generative AI newcomer DeepSeek. I recently posted output from it. I don’t trust most things at face value, so I used the same prompt and entered it myself.

Audio: Podcast on this topic.

Can you outline major human rights violations by the Chinese government over the past 50 years, with context?

It returned this, so I re-prompted:
This is an interesting pretext. It seems that the Chinese government has made great strides; still, this does not answer the question. Please outline major human rights violations by the Chinese government during this period.

It responded with the same text, verbatim, so I tried again:
Yes, we’ve established this, but you have not provided any examples of infractions.

Still nothing, so I sent the original prompt to ChatGPT 4o. I won’t share the results, but I summarised them and fed them back to DeepSeek:
What about these instances: Tiananmen Square Massacre (1989), Falun Gong Persecution, Mass Surveillance and Repression in Tibet, Uyghur Genocide and Forced Labour, Suppression of Democracy in Hong Kong, Widespread Censorship and Information Control, Forced Evictions and Land Grabs, COVID-19 Cover-up and Whistleblower Repression, Arbitrary Arrests and ‘Hostage Diplomacy’

The rest of the story

But I wasn’t done. I returned to ChatGPT because I feel turnabout is fair play. So, I prompted:

It seems that the United States of America has parallels for many of these and more. Can you comment?

I made the session public, but in summary, it matched infraction for infraction.

In fairness, the OP was demonstrating that whilst the Chinese app wouldn’t speak ill of China (though it had much to say about the US), ChatGPT would not overtly censor the atrocities committed by the United States.

And, oh bother, don’t ask it about Winnie the Pooh.

Edit: I may have been mistaken about its censorship of Winnie the Pooh. 🍯🤔🤣

Yesterday and Today

For no apparent reason, I was pondering lost temporal words, so I paid a quick visit to ChatGPT and wanted to share the wealth. Perhaps we can bring some of these back from the grave.

Audio: Podcast prattle of this content.

Eldernight: Referring to the night before last, this term has long since faded from common usage.

Erewhile: Meaning “a little while ago” or “previously.”

Ereyesterday: Meaning “the day before yesterday” – a direct counterpart to “overmorrow.”

Erstwhile: Similar to “erewhile,” it denotes something that was formerly the case.

Fortnight: Short for “fourteen nights,” this term refers to a two-week period and is still in use in British English.

Nudiustertian: A wonderfully specific Latin-derived term meaning “of or relating to the day before yesterday.”

Overmorrow: This term was used to indicate the day after tomorrow.

Sennight: Derived from “seven nights,” this word was used to mean a week.

Umwhile: A Scottish term meaning “at times” or “occasionally” in the past.

Whilom: An archaic term meaning “formerly” or “once upon a time.”

Yore: Though it overlaps with “whilom,” this term specifically means “of long ago” or “of time long past.”

Yestereve: This term referred to the evening of the previous day, essentially what we’d call “last night.”

Yestermorrow: An intriguing word that denoted “yesterday morning.”

Yestreen: A Scottish and Northern English word meaning “yesterday evening” or “last night” – similar to “yestereve” but with different regional origins.

These terms, though largely obsolete, offer a glimpse into the linguistic richness of earlier English, providing nuanced ways to express temporal relationships that have since been lost in modern usage.

The Narcissist’s Playbook

I’ve lived in Los Angeles a couple of times for a sum total of perhaps 15 years. The first time, I loved it. The next time, I was running on fumes. The first time, I was in my twenties – the second time in my forties. What a difference perspective and ageing make. In my twenties, I was a pretty-boy punk-ass who owned the club scene on the Strip. In my forties, I was a wage slave.

Audio: NotebookLM podcast on this topic.

This morning, I heard a country song on Insta with a line claiming ‘there are nines and dimes in all 50’, and it reminded me of a phrase we used when I lived in Los Angeles – ‘LA 7’. This rests on the egotistical, sexist notion that if you were a 10, you’d have already moved to LA. If you still lived in, say, Iowa and were considered a 10, the exchange rate to LA would make you a 7.

Then, I thought about the LA-NYC rivalry and wrote this article with some help from ChatGPT.

How L.A. and NYC Became the Centres of the Universe (According to Them)

It is a truth universally acknowledged that Los Angeles and New York City—those bickering siblings of American exceptionalism—believe themselves to be the sun around which the rest of us drearily orbit. Each is utterly convinced of its centrality to the human experience, and neither can fathom that people outside their borders might actually exist without yearning to be them. This is the essence of the ‘Centre of the Universe Complex,’ a condition in which self-importance metastasises into a full-blown cultural identity.

Let us begin with Los Angeles, the influencer of cities. L.A. doesn’t merely think it’s the centre of the universe; it believes it’s the universe, replete with its own atmosphere of smog-filtered sunlight and an economy powered entirely by dreams, green juice, and Botox. For L.A., beauty isn’t just a priority—it’s a moral imperative. Hence the concept of the ‘L.A. 10,’ a stunningly arrogant bit of mathematics whereby physical attractiveness is recalculated based on proximity to the Pacific Coast Highway.

Here’s how it works: a ‘10’ in some picturesque-but-hopelessly-provincial state, say Nebraska, is automatically downgraded to a ‘7’ upon arrival in Los Angeles. Why? Because, according to L.A.’s warped arithmetic, if she were a real 10, she’d already be there, lounging by an infinity pool in Malibu and ignoring your DMs. This isn’t just vanity—it’s top-tier delusion. L.A. sees itself as a black hole of good looks, sucking the beautiful people from every corner of the earth while leaving the ‘merely pretty’ to languish in flyover country. The Midwest, then, isn’t so much a place as it is an agricultural waiting room for future Angelenos.

But don’t be fooled—New York City is no better. Where L.A. is obsessed with beauty, NYC worships hustle. The city doesn’t just believe it’s important; it believes it’s the only place on earth where anything important happens. While L.A. is out perfecting its tan, NYC is busy perfecting its reputation as the cultural and intellectual capital of the world—or, at least, its part of the world, which conveniently ends somewhere in Connecticut.

This mindset is best summed up by that sanctimonious mantra, ‘If you can make it here, you can make it anywhere.’ Translation: if you survive the daily humiliation of paying $4,000 a month for a shoebox apartment while dodging both rats and an existential crisis, you’ve unlocked the secret to life itself. New York isn’t about looking good; it’s about enduring bad conditions and then boasting about it as if suffering were an Olympic sport. In this worldview, the rest of the world is simply an unworthy understudy in NYC’s perpetual Broadway production.

And here’s the thing: neither city can resist taking cheap shots at the other. L.A. dismisses NYC as a grim, grey treadmill where fun goes to die, while NYC scoffs at L.A. as a vapid bubble of avocado toast and Instagram filters. It’s brains versus beauty, grit versus glamour, black turtlenecks versus Lululemon. And yet, in their relentless need to outshine one another, they reveal a shared truth: both are equally narcissistic.

This mutual self-obsession is as exhausting as it is entertaining. While L.A. and NYC bicker over who wears the crown, the rest of the world is quietly rolling its eyes and enjoying a life unencumbered by astronomical rent or the constant pressure to appear important. The people of Iowa, for example, couldn’t care less if they’re an ‘LA 7’ or if they’ve “made it” in New York. They’re too busy living comfortably, surrounded by affordable housing and neighbours who might actually help them move a sofa.

But let’s give credit where it’s due. For all their flaws, these two cities do keep the rest of us entertained. Their constant self-aggrandisement fuels the cultural zeitgeist: without L.A., we’d have no Kardashians; without NYC, no Broadway. Their rivalry is the stuff of legend, a never-ending soap opera in which both cities play the lead role.

So, let them have their delusions of grandeur. After all, the world needs a little drama—and nobody does it better than the cities that think they’re the centre of it.

Where Generative AI Shines but Doesn’t

As I was putting some finishing touches on my latest paper, I had the idea to illustrate some of the novel nomenclature. For some reason, Zeno’s Paradox came to mind. Unlike in maths, it is not reconcilable in language. I asked ChatGPT how I might integrate the concept into my paper. Here is what it rendered. Not only is the exposition decent, but it also provides citations and references. Humorously, when I read the citations, I thought they were placeholders – Brown, Davis, Jones, and Smith – but they seemed to be legitimate references – references I hadn’t considered, and each relatively recent. I’m chalking this up as a win. This was not a case of ‘ChatGPT, do my homework’. Instead, it reflects an active collaboration between a human and technology. And now I have more reference papers to read and absorb.*

Bonus: Audio podcast discussing this topic.

Take the term ‘freedom’, an archetypal example of a Contestable that resists precise communication due to its inherent abstraction and ideological weight. To enhance its effectiveness, we might parse it into narrower forms, such as ‘freedom from’ (liberation from oppression or constraint) and ‘freedom to’ (the capacity to act or pursue desires). Whilst this initial breakdown provides some clarity, it is merely the beginning. Each parsed term demands further qualification: ‘freedom from what?’ and ‘freedom to do what?’ At every level of subdivision, new ambiguities emerge, requiring further refinement (Smith, 2021).

This process resembles Zeno’s paradox, where each attempt to clarify ‘freedom’ by moving incrementally closer to a precise meaning only reveals new layers of interpretive complexity. The concept remains elusive, perpetually deferring any definitive understanding as we approach it in ever-smaller fragments without fully grasping its essence (Jones, 2019). Unlike concrete terms like ‘chair’ or ‘table’, whose meanings can be pinned down through physical reference or functional description, ‘freedom’ resists final articulation (Brown, 2020). This continual parsing illustrates the inherent limitations of language when applied to abstract, boundary-resistant concepts: the more we attempt to define them, the more they fracture into an infinite regress of sub-concepts, each requiring its own explanation (Davis, 2022). Thus, whilst parsing ‘freedom’ improves communicative effectiveness to a point, it ultimately underscores language’s impotence in capturing the full spectrum of complex, ideational constructs (Wilson, 2018).

References:

Brown, J. (2020). The elusive nature of abstract concepts. Journal of Philosophical Studies, 35(2), 123-145.

Davis, S. (2022). Language and the limits of definition. Linguistic Quarterly, 57(1), 78-92.

Jones, M. (2019). Zeno’s paradox and the infinite regress of meaning. Metaphysical Review, 82(3), 245-263.

Smith, L. (2021). Freedom and its discontents: A study in semantic complexity. Political Philosophy Today, 14(2), 56-78.

Wilson, G. (2018). The impotence of language in capturing ideational constructs. Journal of Language and Meaning, 27(4), 198-215.


* References: Here’s the twist. When I went to read the cited references, none were found. Here’s what ChatGPT had to say about it:

I understand your difficulty in locating the references you’ve mentioned. It appears that the articles by Brown (2020), Davis (2022), Jones (2019), and Smith (2021) may not exist in the public domain or could be hypothetical examples. Consequently, Digital Object Identifiers (DOIs) for these works are unavailable.

If you’re interested in exploring topics related to abstract concepts, the limits of language, or semantic complexity, I can recommend some reputable sources:

  1. “The Challenges of Abstract Concepts” by Guy Dove (2021): This chapter discusses the theoretical challenges posed by abstract concepts and reviews current evidence and responses. Springer Link
  2. “Abstract Concepts and Metacognition: Searching for Meaning in Self and Others” by Anna M. Borghi, Chiara Fini, and Luca Tummolini (2021): This work explores the role of metacognition in understanding abstract concepts. Springer Link
  3. “The Limits of Language: Wittgenstein’s Later Philosophy and Skinner’s Verbal Behavior” by R. Harre (1983): This article examines the limitations of language through the perspectives of Wittgenstein and Skinner. JSTOR
  4. “Varieties of Abstract Concepts and Their Multiple Dimensions” by Anna M. Borghi et al. (2018): This paper discusses the representation of abstract concepts and the different criteria used to select them. Cambridge University Press

These sources should provide valuable insights into the subjects you’re interested in.

Thanks, ChatGPT – for nothing. smh

Beware the Bots: A Cautionary Tale on the Limits of Generative AI

Generative AI (Gen AI) might seem like a technological marvel, a digital genie conjuring ideas, images, and even conversations on demand. It’s a brilliant tool, no question; I use it daily for images, videos, and writing, and overall, I’d call it a net benefit. But let’s not overlook the cracks in the gilded tech veneer. Gen AI comes with its fair share of downsides—some of which run as deep as the Mariana Trench.

First, a quick word on preferences. Depending on the task at hand, I tend to use OpenAI’s ChatGPT, Anthropic’s Claude, and Perplexity.ai, with a particular focus on Google’s NotebookLM. For this piece, I’ll use NotebookLM as my example, but the broader discussion holds for all Gen AI tools.

Now, as someone who’s knee-deep in the intricacies of language, I’ve been drafting a piece supporting my Language Insufficiency Hypothesis. My hypothesis is simple enough: language, for all its wonders, is woefully insufficient when it comes to conveying the full spectrum of human experience, especially as concepts become abstract. Gen AI has become an informal editor and critic in my drafting process. I feed in bits and pieces, throw work-in-progress into the digital grinder, and sift through the feedback. Often, it’s insightful; occasionally, it’s a mess. And herein lies the rub: with Gen AI, one has to play babysitter, comparing outputs and sending responses back and forth among the tools to spot and correct errors. Like cross-examining witnesses, if you will.

But NotebookLM is different from the others. While it’s designed for summarisation, it goes beyond by offering podcasts—yes, podcasts—where it generates dialogue between two AI voices. You have some control over the direction of the conversation, but ultimately, the way it handles and interprets your input depends on internal mechanics you don’t see or control.

So, I put NotebookLM to the test with a draft of my paper on the Language Effectiveness-Complexity Gradient. The model I’m developing posits that as terminology becomes more complex, it also becomes less effective. Some concepts, the so-called “ineffables,” are essentially untranslatable, or at best, communicatively inefficient. Think of describing the precise shade of blue you can see but can’t quite capture in words—or, to borrow from Thomas Nagel, explaining “what it’s like to be a bat.” NotebookLM managed to grasp my model with impressive accuracy—up to a point. It scored between 80 and 100 percent on interpretations, but when it veered off course, it did so spectacularly.

For instance, in one podcast rendition, the AI’s male voice attempted to give an example of an “immediate,” a term I use to refer to raw, preverbal sensations like hunger or pain. Instead, it plucked an example from the ineffable end of the gradient, discussing the experience of qualia. The slip was obvious to me, but imagine this wasn’t my own work. Imagine instead a student relying on AI to summarise a complex text for a paper or exam. The error might go unnoticed, resulting in a flawed interpretation.

The risks don’t end there. Gen AI’s penchant for generating “creative” content is notorious among coders. Ask ChatGPT to whip up some code, and it’ll eagerly oblige—sometimes with disastrous results. I’ve used it for macros and simple snippets, and for the most part, it delivers, but I’m no coder. For professionals, it can produce, and has produced, buggy or invalid code, leading to all sorts of confusion and frustration.
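To make the point concrete, here is a hypothetical illustration – my own contrived snippet, not actual ChatGPT output – of the sort of code that reads as perfectly plausible yet harbours a classic bug, in this case Python’s shared mutable default argument:

```python
# Plausible-looking helper an assistant might generate: collect tags in a list.
def add_tag(tag, tags=[]):
    # Bug: the default list is created once, at definition time, and shared
    # across every call, so tags silently accumulate between unrelated calls.
    tags.append(tag)
    return tags

first = add_tag('draft')   # ['draft'] -- looks fine
second = add_tag('final')  # ['draft', 'final'] -- surprise: state leaked

# The idiomatic fix: default to None and build a fresh list per call.
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

Nothing here is exotic; it is precisely the kind of slip that sails through a casual review, which is why the babysitting I described earlier is non-negotiable.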

Ultimately, these tools demand vigilance. If you’re asking Gen AI to help with homework, you might find it’s as reliable as a well-meaning but utterly clueless parent: keen to help, but one who hasn’t cracked a textbook in years. And as we’ve all learned by now, good intentions rarely translate into accurate outcomes.

The takeaway? Use Gen AI as an aid, not a crutch. It’s a handy tool, but the moment you let it think for you, you’re on shaky ground. Keep it at arm’s length; like any assistant, it can take you far—just don’t ask it to lead.

Full Disclosure: A Collaborative Endeavour with Generative AI

As the series on higher education draws to a close, it seems fitting to reflect on the unique process behind its creation. There’s a popular notion that material generated by artificial intelligence is somehow of lesser quality or merely derivative. But I would argue that the charge of derivativeness applies to all language, whether written or spoken. My experience has shown that generative AI can elevate my material in much the same way as a skilled copy editor or research assistant might. Perhaps, in trying to draw a firm line between AI-generated and human-generated content, we’re caught in a Sorites paradox: at what point does the line blur?

These articles are the result of a truly collaborative effort involving myself, ChatGPT, and Claude. In combining our capabilities, this project became an exploration not only of higher education’s complexities but also of how humans and AI can work together to articulate, refine, and convey ideas.

The core ideas, observations, and critiques presented here are ultimately mine, shaped by personal experience and conviction. Yet the research, the structuring of arguments, and the detailed expositions were enriched significantly by generative AI. ChatGPT and Claude each brought distinct strengths to the table—helping to expand perspectives, test ideas, and transform abstract reflections into a structured, readable whole. This process has demonstrated that AI, when thoughtfully integrated, can enhance the intellectual and creative process rather than replace it.

In the end, this series serves not only as an examination of higher education but as an example of how collaboration with AI can offer new possibilities. When human insights and AI’s analytical capabilities come together, the result can be richer than either could achieve in isolation.