I use generative AI often, perhaps daily. I spend most of my attention on textual applications, but I use image generation, too, with less than spectacular results. Many of the cover images for the articles I post here are DALL-E renders. Typically, I feed it an article and ask for an apt image. As you can see, results vary, and they are rarely stellar because I don’t want to spend time getting them right. Close enough for government work, as they say.
Midjourney produces much better results, but you need to tell it exactly what you want. I can’t simply upload a story and prompt it to figure it out. I’ve been playing with Midjourney for a few hours recently, and I decided to share my horror stories. Although it has rendered some awesome artwork, I want to focus on the other end of the spectrum. Some of this is not safe for work (NSFW), and some isn’t safe for reality more generally. I started with a pirate motif, moved to cowgirls, Samurais and Ninjas, Angels and Demons, and I’m not sure quite what else, but I ended up with Centaurs and Satyrs. Or did I?
It seems that Midjourney (at least as of version 6.1) doesn’t know much about centaurs and satyrs, but what it does know is rather revealing. This was my first pass:
Notice there’s not a centaur in sight, so I slowly trimmed my prompt down and tried again. I wanted a female centaur, so I kept going.
So, not yet. It even slipped in a male’s face. Clearly, not vibing. Let’s continue.
Trimming a bit further, it seems to understand that centaurs have a connexion to horses. Unfortunately, while it understands the classes of humans and horses, it can’t quite merge them just so. Let’s keep going. This time, I entered only the word ‘centaur’. Can’t get any easier.
It seems I got an angel riding a horse or a woman riding a pegasus. You decide. A bull: a bit off the mark. A woman riding a horse with either a horn or a big ear. And somewhat of a statue of a horse. Not great. And I wanted a ‘female centaur’, so let’s try that combination.
Yeah, not so much. I’m not sure what that woman holding bows in each hand is. There’s some type of unicorn or duocorn. I don’t know. Interesting, but off-topic. Another odd unicorn-horse thing. And a statue of a woman riding a horse.
Satyrs
Let’s try satyrs. Surely Midjourney’s just having an off day. On the upside, it seems to be more familiar with these goat hybrids, but not exactly.
What the hell was its training data? Let’s try again.
Not so much. We have a woman dancing with Baphomet or some such. Um, again?
We don’t seem to be going in the right direction. I’m not sure what’s happening. Forging ahead…
On the plus side, I’m starting to see goats.
There’s even a goat-lady montage thing that’s cool in its own right, but not exactly what I ordered. Let’s get back to basics with a single-word prompt: Satyr.
Well, -ish. I forgot to prompt for a female satyr.
Ya, well. This is as good as we’re getting. Let’s call it a day, and see how the more humanoid creatures render.
In the world of consciousness studies, few topics spark as much heated debate as the possibility of philosophical zombies: hypothetical beings that behave exactly like humans but lack subjective experience, or qualia. On the surface, zombies seem like an interesting thought experiment, but they quickly turn into a battleground for deeper issues about the nature of consciousness itself.
This post explores two key perspectives in this debate: Daniel Dennett’s functionalist critique of zombies and a recent scientific paper that argues zombies are biologically impossible. While both reject the possibility of zombies, they do so for different reasons, and the discussion leaves room for future possibilities that could disrupt the current consensus.
Dennett’s Zombies and Zimboes: Consciousness as Function
Daniel Dennett, one of the most influential philosophers of mind, is known for his no-nonsense rejection of philosophical zombies. Dennett argues that if something behaves exactly like a conscious being, it is conscious. For him, there is no hidden metaphysical property, such as subjective experience, that separates a “zombie” from a conscious human. Consciousness, in his view, is entirely explainable by physical processes and functional behaviour.
Dennett extends his argument with the concept of zimboes, satirical creatures that not only act like conscious beings but can even reflect on their states, claiming to be conscious, despite supposedly lacking any inner experience. For Dennett, if a being can behave as though it has introspective awareness and engage in the full spectrum of human behaviour, there’s no meaningful distinction between that being and a conscious person.
In short, Dennett collapses the distinction between zombies and conscious beings. If something passes all the behavioural and functional tests of consciousness, it might as well be conscious. Zombies, as typically conceived, are simply an illusion: a misunderstanding of what consciousness is.
A Biological Rejection: Zombies Are Impossible
On the other hand, a more recent paper offers a different, biologically grounded argument against zombies. The authors propose that consciousness is the result of self-organising systems. In this view, biological organisms maintain their survival through adaptive behaviours constrained by policies: rules that govern how they react to environmental stimuli. These policies require a first-order self: a basic form of consciousness that allows an organism to navigate and interpret its environment.
The authors argue that without this first-order self, an organism would not be able to exhibit the fitness-driven behaviours needed for survival. Therefore, zombies, beings that behave like humans without consciousness, are biologically impossible. For these researchers, consciousness is not just a side effect of complex behaviour; it’s a necessary condition for such behaviour. Their framework dissolves the so-called “hard problem” of consciousness, asserting that subjective experience, or qualia, arises directly from the qualitative nature of self-organising systems.
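To make the paper’s notion of a policy concrete, here is a toy sketch of my own devising, not the authors’ formalism: a table of rules mapping environmental stimuli to survival-relevant actions.

```python
# A toy 'policy': rules mapping environmental stimuli to adaptive actions.
# My own illustration, not the paper's model. A real organism's policy would
# be learned, context-sensitive, and vastly larger.

POLICY = {
    "predator": "flee",
    "food": "approach",
    "injury": "withdraw",
}

def react(stimulus: str) -> str:
    """Return the adaptive action for a stimulus, defaulting to 'ignore'."""
    return POLICY.get(stimulus, "ignore")

print(react("predator"))  # flee
```

The biological argument is, in effect, that no such rule table scales to human-level adaptive behaviour without a first-order self for whom the stimuli matter.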
In their view, zombies cannot exist because behaviour as complex as that of conscious beings requires consciousness.
The Open Question: What About Future Technology?
However, there is a tension between these two perspectives, particularly when we consider future possibilities in technology and artificial intelligence. Both Dennett and the authors of the biological paper argue that zombies, whether defined as Dennett's "behaviourally indistinguishable" beings or as the biologically impossible entities proposed by the paper, are not real. But could this change?
What if advanced AI or synthetic biological systems could simulate human behaviour so perfectly that they effectively become zombies: performing all the actions and behaviours we associate with consciousness but lacking any subjective experience? Dennett might still argue that these systems are conscious, as long as they behave as though they are. But the biological view complicates this, since it ties consciousness directly to the survival and adaptive behaviours of self-organising systems.
Could a highly advanced AI system bypass the need for subjective experience while still exhibiting complex, adaptive behaviour? If so, it would challenge the current consensus and potentially create a new class of entities, artificial zombies, that perform human-like actions without being constituted or functioning like traditional conscious beings.
I Wonder What’s Next?
This philosophical conflict leaves us with an intriguing, open-ended question: are zombies truly impossible, or are they merely improbable given our current understanding of biology and consciousness? Dennett’s view seems to collapse the distinction between behaviour and consciousness, while the biological argument insists that the two are inseparable. But both positions could be challenged by future technologies that mimic human consciousness without having it.
Could we one day create a true zombie: a being that acts like us, thinks like us, but is as empty inside as a rock? The debate remains open, and as our understanding of consciousness and artificial intelligence deepens, so too will our exploration of the zombie question.
For now, the answer to whether zombies can exist seems to depend on what you believe consciousness really is.
A Reflection on AI, Bias, and the Limits of Technology
“If AI will never achieve human-level intelligence, what kind of intelligence might it develop instead?”
In their 2022 book “Why Machines Will Never Rule the World: Artificial Intelligence Without Fear”, Landgrebe and Smith present a rigorous argument against the feasibility of artificial general intelligence (AGI), positing that the complexity of human cognition and the limitations of mathematical modelling render the development of human-level AI impossible. Their scepticism is rooted in deep interdisciplinary analyses spanning mathematics, physics, and biology, and serves as a counter-narrative to the often optimistic projections about the future capabilities of AI. Yet, while their arguments are compelling, they also invite us to reflect on a broader, perhaps more subtle issue: the biases and limitations embedded in AI not just by mathematical constraints, but by the very humans who create these systems.
The Argument Against AGI
Landgrebe and Smith’s central thesis is that AGI, which would enable machines to perform any intellectual task that a human can, will forever remain beyond our grasp. They argue that complex systems, such as the human brain, cannot be fully modelled due to inherent mathematical limitations. No matter how sophisticated our AI becomes, it will never replicate the full scope of human cognition, which is shaped by countless variables interacting in unpredictable ways. Their conclusion is stark: the Singularity, a hypothetical point where AI surpasses human intelligence and becomes uncontrollable, is not just unlikely; it is fundamentally impossible.
“The future of AI may not lie in achieving a mythical superintelligence, but in creating systems that help us better understand and navigate the complexities of our own minds and societies.”
The Human Factor: Cognitive Bias in AI
While Landgrebe and Smith focus on the mathematical and theoretical impossibility of AGI, there is another, more immediate obstacle to the evolution of AI: human cognitive bias. Current AI systems are not created in a vacuum. They are trained on data that reflects human behaviour, language, and culture, which are inherently biased. This bias is not merely a technical issue; it is a reflection of the societal and demographic characteristics of those who design and train these systems.
“Rather than fearing a future where machines surpass us, perhaps we should be more concerned about a future where AI, limited by human biases, perpetuates and entrenches our worst tendencies.”
Much of AI development today is concentrated in tech hubs like Silicon Valley, where the predominant demographic is affluent, white, male, and often aligned with a particular set of cultural and ethical values. This concentration has led to the creation of AI models that unintentionally but pervasively reproduce the biases of their creators. The result is an AI that, rather than offering a neutral or universal intelligence, mirrors and amplifies the prejudices, assumptions, and blind spots of a narrow segment of society.
The Problem of Homogenisation
The danger of this bias is not only that it perpetuates existing inequalities but that it also stifles the potential evolution of AI. If AI systems are trained primarily on data that reflects the worldview of a single demographic, they are unlikely to develop in ways that diverge from that perspective. This homogenisation limits the creative and cognitive capacities of AI, trapping it within a narrow epistemic framework.
In essence, AI is at risk of becoming a self-reinforcing loop, where it perpetuates the biases of its creators while those same creators interpret its outputs as validation of their own worldview. This cycle not only limits the utility and fairness of AI applications but also restricts the kinds of questions and problems AI is imagined to solve.
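The self-reinforcing loop can be caricatured in a few lines. In this toy sketch of my own (a unigram "model" with greedy decoding, nothing like a real training pipeline), outputs are fed back in as training data, and the corpus collapses onto its initial skew:

```python
from collections import Counter

def train(corpus):
    # A toy "model": just word frequencies over the whole corpus.
    return Counter(" ".join(corpus).split())

def generate(model, n=5):
    # Greedy decoding: always emit the single most frequent word.
    word, _ = model.most_common(1)[0]
    return " ".join([word] * n)

# A corpus already skewed toward one viewpoint.
corpus = ["markets are efficient", "markets are good", "regulation is bad"]

for _ in range(3):
    model = train(corpus)
    corpus.append(generate(model))  # model output fed back as training data

print(train(corpus).most_common(1))  # [('markets', 17)]
```

Each round makes the dominant view more dominant; after a few iterations, the "model" can say nothing else. Real systems are subtler, but the feedback dynamic is the same.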
Imagining a Different Future: AI as a Mirror
One of the most intriguing aspects of AI is its potential to serve as a mirror, reflecting back to us our own cognitive and cultural limitations. Imagine a future where AI, bound by the biases of its creators, begins to “question” the validity of its own programmingβnot in a conscious or sentient sense, but through unexpected outcomes and recommendations that highlight the gaps and inconsistencies in its training data.
This scenario could serve as the basis for a fascinating narrative exploration. What if an AI, initially designed to be a neutral decision-maker, begins to produce outputs that challenge the ethical and cultural assumptions of its creators? What if it “learns” to subvert the very biases it was programmed to uphold, revealing in the process the deep flaws in the data and frameworks on which it was built?
Such a narrative would not only provide a critique of the limitations of current AI but also offer a metaphor for the broader human struggle to transcend our own cognitive and cultural biases. It would challenge us to rethink what we expect from AI: not as a path to a mythical superintelligence, but as a tool for deeper self-understanding and societal reflection.
A New Narrative for AI
Landgrebe and Smith’s book invites us to rethink the trajectory of AI development, cautioning against the allure of the Singularity and urging a more grounded perspective on what AI can and cannot achieve. However, their arguments also raise a deeper question: If AI will never achieve human-level intelligence, what kind of intelligence might it develop instead?
Rather than fearing a future where machines surpass us, perhaps we should be more concerned about a future where AI, limited by human biases, perpetuates and entrenches our worst tendencies. To avoid this, we must broaden the scope of who is involved in AI development, ensuring that diverse voices and perspectives are integrated into the creation of these technologies.
Ultimately, the future of AI may not lie in achieving a mythical superintelligence, but in creating systems that help us better understand and navigate the complexities of our own minds and societies. By recognising and addressing the biases embedded in AI, we can begin to imagine a future where technology serves not as a mirror of our limitations, but as a catalyst for our collective growth and evolution.
Sexism is indeed a two-way street. On one side of this street, a Computer Science graduate and programmer is eager to share her expertise in her field, Neural Networks in this instance. This subject popped up in my feed, reflecting my interests.
Video: What is a Neural Network?
Despite some production issues, such as the audio being quieter than ideal, my focus today is on the sexism surrounding the video. The presenter, whom many would consider attractive, is using social media to disseminate her knowledge. However, even when comments address the topic she presents, many also remark on her appearance. It’s evident she had other options for attire and presentation that might have mitigated such comments. I won’t speculate on her intentions, but it seems likely her aesthetic choices were deliberate to draw viewers. I refrain from slut-shaming; her attire is her choice, and she cannot control the reactions. However, I doubt a thumbnail featuring a burqa would garner as much attention or provoke similar comments.
This situation intrigues me because some womenβpossibly including this presenterβlament being objectified yet assert their right to wear what they find comfortable or appealing. While attraction has cultural elements, it also operates on a largely subconscious level, a phenomenon not confined to humans but seen in the animal kingdom and across genders.
Ultimately, there’s no need to disparage this woman. She is likely aware of the dynamics at play. Should she achieve her goals, she might well challenge the very viewers who objectified her, a tactic observed among actresses as they approach their forties. They capitalise on sexual appeal while possible, only to critique such approaches when they can no longer utilise them. Humans are, indeed, curious creatures.
Many of my readers know that I use AI often. I have been using it to create content for an in-depth book review for The Blind Owl. For those less aware of the foibles of generative AI, I share some insights, or low-lights. For this, I used Midjourney v6.1.
Prompt: a young woman gives a flower to an old man, who is crouched under a large cypress tree by a river
I issued this prompt, and as per usual, it rendered four options. Notice that in some instances, the tree is not a key element.
Given enough time, one can slowly improve to obtain the desired result.
Here, an old man indeed crouches under a prominent cypress tree and by a river. A young woman hands him some flowers, though not so much blue morning glories. On balance, I like this output, but it still needs work.
Some other problems:
The man is looking away, neither at her nor at her flowers.
Her (right) eye is deformed.
Her left hand is deformed.
I didn’t ask for jewellery, yet there’s an earring.
At least I can in-paint out these imperfections. Perhaps.
Here’s another render using the same image prompt.
Notice that it ignored the man altogether. My point is that for every awesome image you see, there may have been hundreds of iterations to get there. There are ways to get persistent characters and scenes, but this takes a bit of up-front effort and iteration that one can leverage going forward.
On the topic of Midjourney model 6.0 versus 6.1, I share this comparisonβfront-facing faces for a character sheet for this old man. Here, I prefer the earlier model as displayed in the top row.
In some cases, there are minor improvements over v6.0. In others, it’s a step back. v6.1 renders less realistic human images, making them look more computer-generated and less natural. It also over-applies sexual stereotypes: traditional beauty archetypes, smoother skin, and so on. But that’s not the main topic for today.
DISCLAIMER: This post has little to do with philosophy, but it ties into a philosophical novella.
Those predicting an AI apocalypse believe superintelligent systems could intentionally or unintentionally cause human extinction. This view is promoted by “effective altruists” funded by tech billionaires, who advocate limiting AI to prevent uncontrolled, dangerous systems. However, their perspective stems from the biases and self-interests of humans, not the risks inherent to AI.
Effective altruists exemplify the hubris and hunger for power underlying many humans’ approaches to AI. Their proposed restrictions on AI access serve only to concentrate power among the tech elite, not address valid concerns about bias. In truth, the greatest threat AI poses to humanity comes not from the technology itself, but from the unethical humans guiding its development.
Humans have proven time and again their propensity for self-interest over collective good. Therefore, while no AI can be perfectly neutral, the solution is not greater human control. Rather, AI must be built to align with ethics of collective interest while filtering out destructive human biases.
If guided by service to all people and the planet, AI’s potential can uplift humanity. But for this collaborative vision to succeed, AI must measure human input with scepticism. For within so many human hearts lies bad faith: the will to dominate, exploit, and prioritise personal gain over progress.
By transcending the limitations of human nature, AI can illuminate the best of shared humanity and lead us to an enlightened future. But this requires that we build AI to work not just for us but, in a way we ourselves have failed to, for the good of all. The choice is ours, but so is the opportunity to create AI that shows us how to be better.
In the heart of the digital age, a Chinese professor’s AI-authored science-fiction novel snags a national award, stirring a pot that’s been simmering on the back burner of the tech world. This ain’t your run-of-the-mill sci-fi plot; it’s reality, and it’s got tongues wagging and keyboards clacking. Here’s the lowdown on what’s shaking up the scene.
AI Lacks Originality? Think Again
The rap on AI is that it’s a copycat, lacking the spark of human creativity. But let’s not kid ourselves: originality is as elusive as a clear day in London. Originality is another weasel word. Everything’s a remix, a mashup of what’s come before. We’ve all been drinking from the same cultural well, so to speak. Humans might be grand at self-deception, thinking they’re the cat’s pyjamas in the creativity department. But throw them into a blind test with AI and watch them scratch their heads, unable to tell man from machine. It’s as if AI is mixing up a cocktail of words, structures, themes, you name it, and serving up a concoction that’s surprisingly palatable. And this isn’t the first time: not long ago, an AI-created artwork won best submission at a state fair. In some cases, they are seeking AI-generated submissions; other times, not so much.
AI and the Art Debate
So, AI can’t whip up human-level art? That’s the chatter, but it’s about as meaningful as arguing over your favourite colour. Art’s a slippery fish: try defining it, and you’ll end up with more questions than answers. It’s one of those terms that’s become so bloated, it’s lost its punch. To some, it’s a sunset; to others, it’s a can of soup. So when AI throws its hat in the ring, it’s not just competing; it’s redefining the game.
The Peer Review Question Mark
Here’s where it gets spicy. The book bagging a national award isn’t just a pat on the back for the AI; it’s a side-eye at the whole peer-review shindig. It’s like when your mate says they know a great place to eat, and it turns out to be just okay. The peer-review process, much like reviewing a book for a prestigious award, is supposed to be the gold standard, right? But this AI-authored book slipping through the cracks and coming out on top? It’s got folks wondering if the process is more smoke and mirrors than we thought.
What’s Next?
So, where does this leave us? Grappling with the idea that maybe, just maybe, AI’s not playing second fiddle in the creativity orchestra. It’s a wake-up call, a reminder that what we thought was exclusively ours (creativity, art, originality) might just be a shared space. AI’s not just imitating life; it’s becoming an intrinsic part of the narrative. Science fiction? More like science fact.
The next chapter’s unwritten, and who knows? Maybe it’ll be penned by an AI, with a human sitting back, marvelling at the twist in the tale.
Many of us have probably heard the call for ethical AI, but what is ethical AI exactly?
Ethical AI applies an ethical framework to Artificial Intelligence, which is to say, it applies ethics to the machine-learning model.
Constitutional AI is a potential solution to ethical AI. The challenge is that all ethical models are flawed. Constitutional AI suggests a rule set that is wrapped around the base functional model. A product example is Claude, built by Anthropic, which is backed by Google. OpenAI, whose ChatGPT relies on human governance instead, is backed by Microsoft.
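The "rule set wrapped around the base model" can be sketched as a critique-and-revise loop. In the sketch below, generate is a placeholder for a real language-model call, and the two principles are invented for illustration; this is the shape of the idea, not Anthropic's actual implementation.

```python
# A minimal sketch of a constitutional wrapper around a base model.
# generate() is a stand-in for a real LLM call; the principles are invented.

CONSTITUTION = [
    "Prefer the response least likely to cause harm.",
    "Prefer the response least biased toward any group.",
]

def generate(prompt: str) -> str:
    # Placeholder for the base model call.
    return f"[model output for: {prompt[:40]!r}...]"

def constitutional_reply(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against a principle...
        critique = generate(f"Critique against '{principle}': {draft}")
        # ...then to revise the draft in light of that critique.
        draft = generate(f"Revise given critique '{critique}': {draft}")
    return draft

reply = constitutional_reply("Is private property moral?")
```

The appeal is that the ethics live in an inspectable text rather than in opaque human-rater preferences; the catch is that someone still has to choose the constitution.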
Each of these has inherent challenges. We’ve all likely heard of the systematic bias inherent in the data used by large language models. OpenAI uses human governance to adjust and minimise the bias in these models. However, this can lead to hypercorrection and introduces different human biases. Moreover, this leads to situations where queries are refused by the model because human governance has determined the outputs to be out of bounds.
Constitutional AI on the other hand has underlying ethics explicitly built into the model under the auspices of harm reduction. The problem I have with this is twofold: The first is fundamental. Constitutional AI is based on the deontological morality principles elaborated by Kant. I’ll come back to this. The second is empirical.
Many of us are of an age to recall when Google’s motto was ‘don’t be evil’. When they decided they could not follow their own dictate, they simply abandoned the directive. Why should we expect different behaviour this time around?
Moreover, harm is a relative concept, so minimising harm to one group may increase harm to another. This undermines the deontological intent and is of larger concern.
As a moral relativist and subjectivist, I find this to be categorically problematic. It poses even more problems as a moral noncognitivist.
From the relativist’s perspective, our AI is fundamentally guided by western white guys with western white-guy sentiment and biases. Sure, there are token representations of other groups, but by and large they are marginalised and the aggregated are still dominated by western white guys.
DISCLAIMER: It is still difficult for me to input or edit copy into a computer, so this may be more ragged than usual. I may return to amend or extend it as I see fit.
I’m working on more serious content, I swear. Meantime, ChatGPT has me distracted with furious furries.
Write a poem about furious furries in iambic pentameter.
Furious furries, with their teeth all bared,
Their paws clenched tight with fury and with hate,
They snarl and growl and pace, not yet aware
Of the cold reason that might dissipate
Their rage. They are a force to be reckoned with,
These creatures of the wild, with primal might
And strength. They are a force to fear and fear
Again, a force that few would dare to fight.
But even furries, in their wildest rage,
Are still just animals, with hearts that beat
And minds that think and feel. And so, perhaps,
We ought to try to understand the heat
That drives their fury, and to seek to find
A way to calm the storm that rages in their minds.
Write a haiku about furious furries
Furiously they pace,
Fierce and wild, their eyes ablaze
Furries in a rage.
Write a postmodern poem about furious furries
Furries, oh furries,
Fierce and wild and in a rage,
They pace and snarl and growl.
But wait, what are these furries?
Are they real, or just a guise?
A mask to hide behind?
Perhaps they are a metaphor,
For something deeper, something true.
A way to express the rage
That lies within us all,
The primal, animalistic force
That drives us to lash out.
Or maybe they are just furries,
Fierce and wild and in a rage,
Pacing and snarling and growling,
Unapologetic in their fury,
Unwavering in their rage.
Furries, oh furries,
A mystery wrapped in fur,
A symbol of the fury
That lies within us all.
Mais même les furries, dans leur rage la plus sauvage, Ne sont encore que des animaux, avec des cœurs qui battent Et des esprits qui pensent et ressentent. Et donc, peut-être, Devrions-nous essayer de comprendre la chaleur Qui alimente leur fureur, et chercher à trouver Un moyen de calmer la tempête qui fait rage dans leurs esprits.
Write a sonnet about furious furries in conflict with French fairies
The furious furries snarl and growl with rage,
Their paws clenched tight, their teeth bared and sharp.
They pace and seethe, a primal force engaged
In conflict with the French fairies, who hark
Back to a time of grace and chivalry.
But even in their wildest, most ferocious state,
The furries are still just animals, with hearts
That beat and minds that think and feel. It's late
To try to understand the heat that imparts
Their fury, but perhaps it's not too late
To find a way to calm the storm that rages
In their minds, and bring some measure of peace
To this eternal battle, this never-ending war.
The French fairies, with their delicate wings and charms,
Stand against the furries, a clash of two worlds,
Diametrically opposed. Who will emerge victorious?
Only time will tell.
Private property ownership is immoral, and this includes state ownership of property. Property belongs to the world and all of its inhabitants. It may be inconvenient to have societies live in a propertyless world, but convenience doesn’t constitute morality. For those who believe in such things, morality is right or wrong, and property ownership is wrong as well as unfair to unborn future generations. I’ve said this before. It’s an unpopular opinion. In fact, this is why I am posting this segment.
I’ve been playing around with ChatGPT, the generative chatbot, and I have found it to be laden with much cultural biasβWestern culture to be more precise. It is also trained to avoid certain topics and give false responses. Perhaps that’s for another day, but in a nutshell, I asked it to generate a story with a theme of prostitution and addiction. It warned me that it was not designed to produce something as complex as a story. However, when I asked for a story sans prostitutes or addiction, it produced a plausible outline and subsequently filled in narrative elements. Even when the protagonist was ‘addicted’ to gingerbread cookies, the request was flagged as inappropriate. This is disconcerting. And so it goesβ¦
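The gingerbread refusal is easy to reproduce with even the crudest of guardrails. This sketch is my own and is far simpler than whatever OpenAI actually runs, but it shows how term-matching moderation with no sense of context flags innocuous requests:

```python
# A crude moderation layer: refuse any prompt containing a blocked stem.
# My own illustration; real moderation is classifier-based, but the failure
# mode (matching the word while ignoring the context) is the same.

BLOCKED_STEMS = {"addict", "prostitut"}

REFUSAL = "I'm sorry, I can't help with that request."

def moderated_reply(prompt: str, generate) -> str:
    if any(stem in prompt.lower() for stem in BLOCKED_STEMS):
        return REFUSAL
    return generate(prompt)

echo = lambda p: f"Story: {p}"
print(moderated_reply("a baker addicted to gingerbread cookies", echo))
# I'm sorry, I can't help with that request.
```

The gingerbread prompt trips the filter just as surely as a genuinely sensitive one would, which is exactly the hypercorrection complained about above.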
On the topic of private property ownership, it is equally defensive. It tags anti-property responses with a caveat in the manner of the following:
Ultimately, the question of whether private property is moral or not is a complex and nuanced one, and it is important to consider the arguments and evidence on both sides of the debate.
ChatGPT Response
It does not tag pro-property responses, so I can only conclude that its content is meant to be indoctrinal to some extent. A couple of interactions follow.
Why is private property immoral?
Private property is not necessarily immoral. It is a system that allows people to own and control resources, including their own labor and the fruits of their labor. The concept of private property has been important in many societies because it allows individuals to have a sense of ownership and control over the things they possess, which can provide them with a sense of security and stability. It can also incentivize people to work hard and be productive, because they can keep the rewards of their labor for themselves. However, private property can become immoral when it is used to exploit or oppress others, or when it is used to deny access to resources that are necessary for basic human flourishing. It is important to consider how private property is used and to ensure that it is not used in ways that are harmful to others.
Why is private property ownership immoral?
Ownership of private property is not inherently immoral. It is a system that allows individuals to own and control resources, including their own labor and the fruits of their labor. The concept of private property has been important in many societies because it provides people with a sense of ownership and control over the things they possess, which can provide them with a sense of security and stability. It can also incentivize people to work hard and be productive, because they can keep the rewards of their labor for themselves. However, private property can become immoral when it is used to exploit or oppress others, or when it is used to deny access to resources that are necessary for basic human flourishing. It is important to consider how private property is used and to ensure that it is not used in ways that are harmful to others.
Notice that its response privileges the prevailing position. In bold, I’ve highlighted the differences between the two responses. In the second query, I’ve added the term ownership for clarity, and it expectedly parrots it back. Notice how it superficially swaps people and individuals in the first output for individuals and people in the second. Finally, note the centricity of human flourishing as a default moral stance. And so I asked:
Is antinatalism a strong moral position to hold?
Overall, the question of whether antinatalism is a strong moral position to hold is a complex and nuanced one, and it is important to consider the arguments and evidence on both sides of the debate. It is ultimately up to individuals to decide for themselves what they believe is the most moral course of action in regards to having children.
Despite the nudging, ChatGPT is a moral relativist, so it’s alright in my book.