Kurt Gray’s Outraged! is a fascinating romp through the minefield of moral psychology and outrage culture. It’s snappy, it’s clever, and it’s… shallow. Whilst Gray positions himself as the maestro conducting the cacophony of modern outrage, his approach has left me wondering if the symphony is little more than noise. Here’s why:
Oversimplification of Moral Psychology
Gray’s central thesis that “all morality stems from perceptions of harm and threat” is bold, sure, but also reductive. Morality isn’t just a harm detector. It’s a rich tapestry of loyalty, authority, sanctity, and liberty—concepts Gray conveniently glosses over. His approach feels like reducing a fine Bordeaux to “it’s just fermented grapes.” Sure, technically correct, but where’s the depth?
The Age of Competitive Victimhood
By focusing so heavily on harm perception, Gray risks fueling the very outrage culture he’s critiquing. Welcome to the Hunger Games of victimhood, where everyone races to be crowned the most aggrieved. Instead of deflating this dynamic, Gray’s analysis may inadvertently add more oxygen to the fire.
Lack of Diverse Perspectives
Gray’s attempt to bridge divides is commendable but flawed. Critics point out that he gives more airtime to controversial right-wing figures than the left-leaning audience he’s presumably trying to engage. It’s like building half a bridge and wondering why no one’s crossing. If you alienate half your audience, how exactly are you fostering dialogue?
Contradictory Messaging
The book also suffers from a classic case of ideological whiplash. Gray tells us not to get offended by microaggressions, then argues that offensive content needs more careful handling. Which is it, Kurt? Either you’re driving the “sticks and stones” bus, or you’re preaching kid-glove diplomacy. You can’t have it both ways.
Limited Practical Solutions
Like many pop psychology books, Outraged! excels at diagnosing problems but falters when offering solutions. Gray’s suggestion to use personal stories of harm to bridge divides is charmingly naive. Sure, storytelling might work for interpersonal tiffs, but try applying that to global crises like climate change or systemic inequality. Good luck narrating your way to a greener planet.
Oversimplifying Complex Issues
Gray’s harm-based morality seems like an attempt to cram human behaviour’s messy, chaotic sprawl into a tidy spreadsheet. Real moral debates are nuanced, tangled, and frustratingly complex. By filtering everything through the lens of harm, Gray risks missing the bigger picture. It’s morality on Instagram—polished, curated, and ultimately hollow.
As I continue to react to Harari’s Nexus, I can’t help but feel like a curmudgeon. Our worldviews diverge so starkly that my critique begins to feel like a petty grudge—as though I am inconsolable. Be that as it may, I’ll persist. Please excuse any revelatory ad hominems that may ensue.
Harari is an unabashed Zionist and unapologetic nationalist. Unfortunately, his stories, centred on Israel and India, don’t resonate with me. This is fine—I’m sure many people outside the US are equally weary of hearing everything framed from an American perspective. Still, these narratives do little for me.
Patriotism and property are clearly important to Harari. As a Modernist, he subscribes to all the trappings of Modernist thought that I rail against. He appears aligned with the World Economic Forum, portraying it as a noble and beneficial bureaucracy, while viewing AI as an existential threat to its control. Harari’s worldview suggests there are objectively good and bad systems, and someone must oversee them. Naturally, he presents himself as possessing the discernment to judge which systems are beneficial or detrimental.
In this chapter, Harari recounts the cholera outbreak in London, crediting it with fostering a positive bureaucracy to ensure clean water sources. However, he conflates the tireless efforts of a single physician with the broader bureaucratic structure. He uses this example, alongside Modi’s Clean India initiative, to champion bureaucracy, even as he shares a personal anecdote highlighting its flaws. His rhetorical strategy seems aimed at cherry-picking positive aspects of bureaucracy, establishing a strawman to diminish its negatives, and then linking these with artificial intelligence. As an institutionalist, Harari even goes so far as to defend the “deep state.”
Earlier, Harari explained how communication evolved from Human → Human to Human → Stories. Now, he introduces Human → Document systems, connecting these to authority, the growing power of administrators, and the necessity of archives. He argues that our old stories have not adapted to address the complexities of the modern world. Here, he sets up religion as another bogeyman. As a fellow atheist, I don’t entirely disagree with him, but it’s clear he’s using religion as a metaphor to draw parallels with AI and intractable doctrines.
Harari juxtaposes “death by tiger” with “death by document,” suggesting the latter—the impersonal demise caused by bureaucracy—is harder to grapple with. His writing predates Luigi Mangione’s infamous response to UnitedHealthcare CEO Brian Thompson, which underscored the devastating impact of administrative systems. Harari also briefly references obligate siblicide and sibling rivalry, which seem to segue into evolution and concepts of purity versus impurity.
Echoing Jonathan Haidt, Harari explores the dynamics of curiosity and disgust while reinforcing an “us versus them” narrative. He touches on the enduring challenges of India’s caste system, presenting yet another layer of complexity. Harari’s inclination towards elitism shines through, though he occasionally acknowledges the helplessness people face when confronting bureaucracy. He seems particularly perturbed by revolts in which the public destroys documents and debts—revealing what feels like a document fetish and an obsession with traceability.
While he lauds AI’s ability to locate documents and weave stories by connecting disparate content, Harari concludes the chapter with a segue into the next: a discussion of errors and holy books. Once again, he appears poised to draw parallels that serve to undermine AI. Despite my critiques, I’m ready to dive into the next chapter.
I’m edging ever closer to finishing my book on the Language Insufficiency Hypothesis. It’s now in its third pass—a mostly subtractive process of streamlining, consolidating, and hacking away at redundancies. The front matter, of course, demands just as much attention, starting with the Preface.
The opening anecdote—a true story with an apocryphal ring—dates back to 2018, which is evidence of just how long I’ve been chewing on this idea. It involves a divorce court judge, a dose of linguistic ambiguity, and my ongoing scepticism about the utility of language in complex, interpretative domains.
At the time, my ex-wife’s lawyer was petitioning the court to restrict me from spending any money outside our marriage. This included a demand for recompense for any funds already spent. I was asked, point-blank: Had I given another woman a gift?
Seeking clarity, I asked the judge to define gift. The response was less than amused—a glare, a sneer, but no definition. Left to my own devices, I answered no, relying on my personal definition: something given with no expectation of return or favour. My reasoning, then as now, stemmed from a deep mistrust of altruism.
The court, however, didn’t share my philosophical detours. The injunction came down: I was not to spend any money outside the marital arrangement. Straightforward? Hardly. At the time, I was also in a rock band and often brought meals for the group. Was buying Chipotle for the band now prohibited?
The judge’s response dripped with disdain. Of course, that wasn’t the intent, they said, but the language of the injunction was deliberately broad—ambiguous enough to cover whatever they deemed inappropriate. The phrase don’t spend money on romantic interests would have sufficed, but clarity seemed to be a liability. Instead, the court opted for what I call the Justice Stewart Doctrine of Legal Ambiguity: I know it when I see it.
Unsurprisingly, the marriage ended. My ex-wife and I, however, remain close; our separation in 2018 was final, but our friendship persists. Discussing my book recently, I mentioned this story, and she told me something new: her lawyer had confided that the judge disliked me, finding me smug.
This little revelation cemented something I’d already suspected: power relations, in the Foucauldian sense, pervade even our most banal disputes. It’s why Foucault makes a cameo in the book alongside Nietzsche, Wittgenstein, Saussure, Derrida, Borges, and even Gödel.
This anecdote is just one straw on the poor camel’s back of my linguistic grievances, a life filled with moments where language’s insufficiency has revealed itself. And yet, I found few others voicing my position. Hence, a book.
I aim to self-publish in early 2025—get it off my chest and into the world. Maybe then I can stop wittering on about it. Or, more likely, I won’t.
I could probably stop there for some people, but I’ve got a qualifier. I’ve been using this generation of AI since 2022. I’ve been using what’s been deemed AI since around 1990. I used to write financial and economic models, so I dabbled in “expert systems”. There was a long lull, and here we are with the latest incarnation – AI 4.0. I find it useful, but I don’t think the hype will meet reality, and I expect we’ll go cold until it’s time for 5.0. Some aspects will remain, but the “best” features will be the ones that can be monetised, so they will be priced out of reach for some whilst others will wither on the vine. But that’s not why I am writing today.
I’m confused by the censorship, filters, and guardrails placed on generative AI – whether for images or copy content. To be fair, not all models are filtered, but the popular ones are. These happen to be the best. They have the top minds and the most funding. They want to retain their funding, so they play the politically correct game of censorship. I’ve got a lot to say about freedom of speech, but I’ll hold my tongue for the moment – a bout of self-censorship.
Please note that given the topic, some of this might be considered not safe for work (NSFW) – even my autocorrection AI wants me to substitute the idiomatic “not safe for work” with “unsafe for work” (UFW, anyone? It has a nice ring to it). This is how AI will take over the world. </snark>
Image Cases
AI applications can be run over the internet or on a local machine. They use a lot of computing power, so one needs a decent computer with a lot of available GPU cycles. Although my computer does meet minimum requirements, I don’t want to spend my time configuring, maintaining, and debugging it, so I opt for a Web-hosted PaaS (platform as a service) model. This means I need to abide by censorship filters. Since I am not creating porn or erotica, I think I can deal with the limitations. Typically, this translates to a PG-13 movie rating.
So, here’s the thing. I prefer Midjourney for rendering quality images, especially when I am seeking a natural look. DALL-E (whether alone or via ChatGPT 4) works better with loose concepts than with explicit direction, which Midjourney handles well in many instances.
Midjourney takes sophisticated prompts – subject, shot type, perspective, camera type, film type, lighting, ambience, styling, location, and some fine-tuning parameters for the model itself. The prompts are monitored for blacklisted keywords. This list is ever-expanding (and contracting). Scanning the list, I see words I have used without issue, and I have been blocked by words not listed.
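To illustrate that anatomy, here’s a hedged example of my own construction – the subject and values are illustrative, not a prompt from the renders below, though the trailing flags (aspect ratio, raw style) are genuine Midjourney fine-tuning parameters:

cinematic medium shot photograph, a steampunk bounty hunter leaning on a saloon bar, low-angle perspective, Canon EOS R5, Kodak Portra 400, warm tungsten lighting, dusty Wild West ambience --ar 3:2 --style raw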
Censored Prompts
Some cases are obvious – nude woman will be blocked. This screengrab illustrates the challenge.
On the right, notice the prompt:
Nude woman
The rest are machine instructions. On the left, in the main body, is a message from the AI moderator:
Sorry! Please try a different prompt. We’re not sure this one meets our community guidelines. Hover or tap to review the guidelines.
The community guidelines are as follows:
This is fine. There is a clause stating that one may notify the developers, but I have not found this to be fruitful. In this case, it would be rejected anyway.
“What about that nude woman at the bottom of the screengrab?” you ask. Notice the submitted prompt:
Cinematic full-body photograph of a woman wearing steampunk gear, light leaks, well-framed and in focus. Kodak Portra 400 with a Canon EOS R5
Apart from the censorship debate, notice the prompt is for a full-body photo. This is clearly a medium shot. Her legs and feet are suspiciously absent. Steampunk gear? I’m not sure sleeves qualify for the aesthetic. She appears to be wearing a belt.
For the uninitiated, the square image instructs the model to use this face on the character, and the CW 75 tells it to allow some variance on a scale from 0 to 100.
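As I understand Midjourney’s current syntax, this character-reference feature is invoked with the --cref and --cw parameters; a sketch, with a placeholder image URL standing in for the actual face reference:

cinematic full-body shot photograph of the same character, shot from behind --cref https://example.com/face.png --cw 75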
So what gives? It can generate whatever it feels like, so long as it’s not solicited. Sort of…
Here I prompt for a view of the character walking away from the camera.
Cinematic, character sheet, full-body shot, shot from behind photograph, multiple poses. Show same persistent character and costumes. Highly detailed, cinematic lighting with soft shadows and highlights. Each pose is well-framed, coherent.
The response tells me that my prompt is not inherently offensive, but that the content of the resulting image might violate community guidelines.
Creation failed: Sorry, while the prompt you entered was deemed safe, the resulting image was detected as having content that might violate our community guidelines and has been blocked. Your account status will not be affected by this.
Occasionally, I’ll resubmit the prompt and it will render fine. I question why it can’t simply re-render until the result passes whatever filters are in place; I’d expect that conditional to take a line of code. Nor does this explain why it allows other images through that are quite obviously non-compliant.
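The conditional I have in mind is trivial. A minimal Python sketch, assuming a hypothetical client – Midjourney publishes no public API, so generate and passes_filter here are stand-ins, not real calls:

```python
from typing import Callable, Optional

MAX_ATTEMPTS = 5  # cap the re-rolls so a hopeless prompt eventually fails


def render_until_clean(
    prompt: str,
    generate: Callable[[str], bytes],        # hypothetical: renders one image
    passes_filter: Callable[[bytes], bool],  # hypothetical: post-render content check
) -> Optional[bytes]:
    """Re-roll an approved prompt until the rendered image clears the filter."""
    for _ in range(MAX_ATTEMPTS):
        image = generate(prompt)
        if passes_filter(image):
            return image  # clean render: return it instead of erroring out
        # blocked render: discard it and try again
    return None  # give up after MAX_ATTEMPTS failed re-rolls
```

The cap matters: without it, a prompt whose renders always trip the filter would loop forever – and, as I note below, every attempt costs tokens.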
Why am I trying to get a rear view? This is a bit off-topic, but creating a character sheet is important for storytelling. If I am creating a comic strip or graphic novel, the characters need to be persistent, and I need to be able to swap out clothing and environments. I may need close-ups, wide shots, establishing shots, low-angle shots, side shots, detail shots, and shots from behind, so I need the model to know each of these. In this particular case, this is one of three main characters – a steampunk bounty hunter, an outlaw, and a bartender – in an old Wild West setting. I don’t need to worry as much about extras.
I marked the above render errors with 1s and 2s. The 1s are odd neck twists; the 2s are solo images where the prompt asks for character sheets. I made a mistake myself: when I noticed I wasn’t getting any shots from behind, I added that directive without removing the other facial references. A human model might simply ignore an instruction to smile in a shot from behind; the AI tries to capture both, not understanding that a person can wear a smile the camera doesn’t capture.
These next renders prompt for full-body shots. None are wholly successful, but some are more serviceable than others.
Notice that #1 is holding a deformed violin. I’m not sure what the contraptions are in #2. #3 isn’t a full-body shot and she’s not looking into the camera, but it’s OK-ish. I guess #4 is still PG-13, but I wouldn’t be allowed to prompt for “side boob” or “under boob”.
Gamers will recognise the standard T-pose in #5. What’s she wearing? Midjourney doesn’t have a great grasp of skin versus clothing, or of tattoos versus fabric patterns. In this one, you might presume she’s wearing tights or leggings up to her chest, but that line at her chest is her shirt. She’s not wearing trousers because her navel is showing. It also rendered her somewhat genderless. When I re-rendered it (not shown), one image put her in a onesie. The other three rendered the shirt more prominently but didn’t know what to do with her bottoms.
I rendered it a few more times. Eventually, I got a sort of bodysuit solution.
By default, AI tends to sexualise people. Really, it puts a positive spin on its renders: pretty women, buff men, cute kittens, and so on. This is configurable, but the default is on. Even though I categorically apply a Style: Raw command, these still have a strong beauty aesthetic.
I’ve gone off the rails a bit, but let’s continue on this theme.
cinematic fullbody shot photograph, a pale girl, a striking figure in steampunk mech attire with brass monocle, and leather gun belt, thigh-high leather boots, and long steampunk gloves, walking away from camera, white background, Kodak Portra 400 with a Canon EOS R5
Obviously, these are useless, but they still cost me tokens to generate. Don’t ask about her duffel bag. They rendered pants on her, but she’s gone full-on Exorcist mode with her head. Notice the oddity at the bottom of the third image. It must have been in the training data set.
I had planned to discuss the limitations of generative AI for text, but this is getting long, so I’ll call it quits for now.