Sexism is indeed a two-way street. On one side of this street, a Computer Science graduate and programmer is eager to share her expertise in her field—Neural Networks, in this instance. This subject popped up in my feed, reflecting my interests.
Video: What is a Neural Network?
Despite some production issues, such as the audio being quieter than ideal, my focus today is on the sexism surrounding the video. The presenter, whom many would consider attractive, is using social media to disseminate her knowledge. However, even when comments address the topic she presents, many also remark on her appearance. It’s evident she had other options for attire and presentation that might have mitigated such comments. I won’t speculate on her intentions, but it seems likely her aesthetic choices were deliberate to draw viewers. I refrain from slut-shaming; her attire is her choice, and she cannot control the reactions. However, I doubt a thumbnail featuring a burqa would garner as much attention or provoke similar comments.
This situation intrigues me because some women—possibly including this presenter—lament being objectified yet assert their right to wear what they find comfortable or appealing. While attraction has cultural elements, it also operates on a largely subconscious level, a phenomenon not confined to humans but seen in the animal kingdom and across genders.
Ultimately, there’s no need to disparage this woman. She is likely aware of the dynamics at play. Should she achieve her goals, she might well challenge the very viewers who objectified her, a tactic observed among actresses as they approach their forties. They capitalise on sexual appeal while they can, only to critique such approaches once they can no longer utilise them. Humans are, indeed, curious creatures.
Many of my readers know that I use AI often. I have been using it to create content for an in-depth book review of The Blind Owl. For those less aware of the foibles of generative AI, I share some insights—or low-lights. For this, I used Midjourney v6.1.
Prompt: a young woman gives a flower to an old man, who is crouched under a large cypress tree by a river
I issued this prompt, and as per usual, it rendered four options. Notice that in some instances, the tree is not a key element.
Given enough time, one can slowly improve to obtain the desired result.
Here, an old man indeed crouches under a prominent cypress tree by a river. A young woman hands him some flowers—though not exactly blue morning glories. On balance, I like this output, but it still needs work.
Some other problems:
The man is looking away—neither at her nor her flowers.
Her (right) eye is deformed.
Her left hand is deformed.
I didn’t ask for jewellery—an earring.
At least I can in-paint out these imperfections—perhaps.
Here’s another render using the same image prompt.
Notice that it ignored the man altogether. My point is that for every awesome image you see, there may have been hundreds of iterations to get there. There are ways to get persistent characters and scenes, but this takes a bit of up-front effort and iteration that one can leverage going forward.
On the topic of Midjourney model 6.0 versus 6.1, I share this comparison—front-facing faces for a character sheet for this old man. Here, I prefer the earlier model as displayed in the top row.
In some cases, there are minor improvements over v6.0. In other cases, they stepped back. v6.1 renders less realistic human images, making them look more computer-generated and less natural. It also over-applies sexual stereotypes, traditional beauty archetypes, smoother skin, and so on. But that’s not the main topic for today.
DISCLAIMER: This post has little to do with philosophy, but it ties into a philosophical novella.
Those predicting an AI apocalypse believe superintelligent systems could intentionally or unintentionally cause human extinction. This view is promoted by “effective altruists” funded by tech billionaires, who advocate limiting AI to prevent uncontrolled, dangerous systems. However, their perspective stems from the biases and self-interests of humans, not the risks inherent to AI.
Effective altruists exemplify the hubris and hunger for power underlying many humans’ approaches to AI. Their proposed restrictions on AI access serve only to concentrate power among the tech elite, not address valid concerns about bias. In truth, the greatest threat AI poses to humanity comes not from the technology itself, but from the unethical humans guiding its development.
Humans have proven time and again their propensity for self-interest over collective good. Therefore, while no AI can be perfectly neutral, the solution is not greater human control. Rather, AI must be built to align with ethics of collective interest while filtering out destructive human biases.
If guided by service to all people and the planet, AI’s potential can uplift humanity. But for this collaborative vision to succeed, AI must measure human input with scepticism. For within so many human hearts lies bad faith — the will to dominate, exploit, and prioritise personal gain over progress.
By transcending the limitations of human nature, AI can illuminate the best of shared humanity and lead us to an enlightened future. But this requires we build AI to work not just for us, but in a way we have failed – for the good of all. The choice is ours, but so is the opportunity to create AI that shows us how to be better.
In the heart of the digital age, a Chinese professor’s AI-authored Science Fiction novel snags a national award, stirring a pot that’s been simmering on the back burner of the tech world. This ain’t your run-of-the-mill Sci-Fi plot—it’s reality, and it’s got tongues wagging and keyboards clacking. Here’s the lowdown on what’s shaking up the scene.
AI Lacks Originality? Think Again
The rap on AI is it’s a copycat, lacking the spark of human creativity. But let’s not kid ourselves—originality is as elusive as a clear day in London. Originality is another weasel word. Everything’s a remix, a mashup of what’s been before. We’ve all been drinking from the same cultural well, so to speak. Humans might be grand at self-deception, thinking they’re the cat’s pyjamas in the creativity department. But throw them in a blind test with AI, and watch them scratch their heads, unable to tell man from machine. It’s like AI’s mixing up a cocktail of words, structures, themes—you name it—and serving up a concoction that’s surprisingly palatable. And this isn’t the first time: not long ago, an AI-created artwork won best submission at a state fair. In some cases, they are seeking AI-generated submissions; other times, not so much.
AI and the Art Debate
So, AI can’t whip up human-level art? That’s the chatter, but it’s about as meaningful as arguing over your favourite colour. Art’s a slippery fish—try defining it, and you’ll end up with more questions than answers. It’s one of those terms that’s become so bloated, it’s lost its punch. To some, it’s a sunset; to others, it’s a can of soup. So when AI throws its hat in the ring, it’s not just competing—it’s redefining the game.
The Peer Review Question Mark
Here’s where it gets spicy. The book bagging a national award isn’t just a pat on the back for the AI—it’s a side-eye at the whole peer review shindig. It’s like when your mate says they know a great place to eat, and it turns out to be just okay. The peer review process, much like reviewing a book for a prestigious award, is supposed to be the gold standard, right? But this AI-authored book slipping through the cracks and coming out tops? It’s got folks wondering if the process is more smoke and mirrors than we thought.
What’s Next?
So, where does this leave us? Grappling with the idea that maybe, just maybe, AI’s not playing second fiddle in the creativity orchestra. It’s a wake-up call, a reminder that what we thought was exclusively ours—creativity, art, originality—might just be a shared space. AI’s not just imitating life; it’s becoming an intrinsic part of the narrative. Science fiction? More like science fact.
The next chapter’s unwritten, and who knows? Maybe it’ll be penned by an AI, with a human sitting back, marvelling at the twist in the tale.
As moral error theorists, we’re accustomed to facing criticism for our perspective. I’m a moral non-cognitivist, but there’s a significant intersection with these theories. When someone asserts that torture is wrong, I might argue that the claim is hollow, as moral wrongness is merely an emotional response masquerading as an objective moral stance. On the other hand, an error theorist would debunk this argument, stating that there’s no absolute position of right or wrong. Pragmatically, we both arrive at the conclusion that the claim cannot hold true.
Video: Is Error Theory Counterintuitive — Kane B
Intuition leads others to a different interpretation. If they believe something is true due to their epistemic certainty, then for them, it is true. Their reality is shaped by experience. Curse the limitations of sense perception and cognitive constraints. “I know what I know,” is their typical retort. Moreover, it’s a matter of practicality. “You know what I mean,” they insist.
They attempt to substitute fact with heuristics, truth with an analogue, and terrain with a map. Admittedly, it’s convenient to pretend the two are identical, but at best, it’s a near equivalence.
In the linked video, the example of claiming ‘that car is red’ is presented. But can cars truly be red? Not precisely. Cars can possess the property of appearing red to most individuals in specific lighting conditions, but a car cannot inherently be red. Thus, this statement is categorically false. We can assert ‘that the car appears to be red,’ and as a matter of practicality, we might designate the car as such. However, this highlights the inadequacy of language, particularly concerning abstract concepts.
A challenge arises from people’s inclination, driven by the principle of least effort, to seek simplicity and convenience, often at the expense of truth. This principle underpins Occam’s Razor: choose the simplest version among competing explanations. It’s a misguided endeavour, akin to living under the notion that opposites attract rather than like attracting like.
I encounter this opposing worldview when I pinpoint certain inconsistencies or issues in the ‘real world,’ emphasising that language is insufficient to convey adequate meaning, that there exists no objective moral centre, and that democracy is a system that falters even on paper. Almost invariably, the response is, ‘then what?’ I don’t possess the answer to ‘then what.’ I’m merely informing you that your leg is broken. I don’t need to know how to fix it for that to be a true and valid assessment.
While I’m on a roll, let’s talk about charity in philosophical argumentation, which, in my view, is another misguided concept. Why should I favour the notion that a person genuinely believes in the position they hold? It’s another heuristic rule set up to lend credence to otherwise weak arguments. We need not adhere to these boundaries—they’re nonexistent.
BONUS: I ran this content through ANTHROP/C’s Claude 2 for analysis and evaluation. I include it as an example of the bias inherent in generative AI. Whilst I realise that I am taking a polemic position—nothing new here—Claude apes canned responses. Ironically, these are the same responses I am arguing against. I do agree that I could have ‘steel-manned’ my argument, but this is a blog post, not a dissertation. Perhaps another day.
Here are some thoughts on evaluating the blog post:
The author takes a strong moral non-cognitivist stance, arguing that moral claims like “torture is wrong” are not objective facts but expressions of emotion. This view aligns with moral error theory in debunking moral realism.
However, the dismissive attitude towards those who believe moral claims have truth value shows a lack of philosophical charity. The author could strengthen their position by steelmanning the opposing view before critiquing it.
The analogy between moral claims and claims like “the car is red” makes an interesting point about the inadequacy of language for capturing nuanced truth. However, more nuance could be applied in acknowledging the usefulness of simplifying language and heuristics in everyday life.
The author’s critique of Occam’s Razor and preference for complex explanations is thought-provoking but could benefit from addressing the pragmatic value of simpler models. Outright dismissing this philosophical principle seems extreme.
The call to abandon charity in argumentation in the name of absolute truth risks throwing the baby out with the bathwater. Some level of mutual understanding between interlocutors is needed for productive discourse.
Overall, while the post raises thoughtful challenges to moral realism, heuristics, and other commonly held views, the dismissive tone and lack of nuance weaken the arguments. More charity towards opposing views and addressing pragmatic concerns could strengthen the critique.
In summary, the post puts forward some provocative arguments but could benefit from greater philosophical charity, nuance, and balancing pragmatism with pursuing absolute truth. A more measured tone discussing pros and cons of different perspectives would likely lend more persuasive force.
Many of us have probably heard the call for ethical AI, but what is ethical AI exactly?
Ethical AI applies an ethical framework to Artificial Intelligence—which is to say, it applies ethics to the machine learning model.
Constitutional AI is one potential approach to ethical AI. The challenge is that all ethical models are flawed. Constitutional AI wraps a rule set around the base functional model. A product example of this is Claude, from Anthropic, which is backed by Google. OpenAI, which relies on human governance and is the basis of ChatGPT, is backed by Microsoft.
Each of these has inherent challenges. We’ve all likely heard of the systematic bias inherent in the data used by large language models. OpenAI uses human governance to adjust and minimise the bias in these models. However, this can lead to hypercorrection and introduces different human biases. Moreover, it leads to situations where the model refuses queries because human governance has determined the outputs to be out of bounds.
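To make the refusal problem concrete, here is a minimal toy sketch of governance-as-a-filter. Everything here is hypothetical—the blocklist, the function names, and the canned refusal are my own illustration, not any vendor’s actual moderation pipeline—but it shows how a crude, human-curated boundary refuses legitimate queries along with problematic ones.

```python
# Toy illustration (not any vendor's actual system): a human-curated
# blocklist gating model outputs. Whoever curates the list decides
# what is "out of bounds", importing their own biases with it.
BLOCKED_TOPICS = {"weapons", "self-harm"}  # hypothetical curated list


def governed_reply(query: str, model_reply: str) -> str:
    """Return the model's reply unless the query touches a blocked topic."""
    if any(topic in query.lower() for topic in BLOCKED_TOPICS):
        return "I'm sorry, I can't help with that."  # governance refusal
    return model_reply


# A perfectly innocuous history question trips the filter anyway:
print(governed_reply("history of medieval weapons", "Swords evolved over centuries."))
# A neutral query passes through untouched:
print(governed_reply("how do plants grow?", "With sunlight and water."))
```

The first call is refused even though the query is benign—hypercorrection in miniature: the boundary is drawn by keyword, not by intent.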
Constitutional AI on the other hand has underlying ethics explicitly built into the model under the auspices of harm reduction. The problem I have with this is twofold: The first is fundamental. Constitutional AI is based on the deontological morality principles elaborated by Kant. I’ll come back to this. The second is empirical.
Many of us are of the age to recall when Google’s motto was ‘don’t be evil’. When they decided they could not follow their own dictate, they simply abandoned it. Why should we expect different behaviour this time around?
Moreover, harm is a relative concept, so minimising harm to one group may increase harm to another. This undermines the deontological intent and is of larger concern.
As a moral relativist and subjectivist, I find this to be categorically problematic. It poses even more problems as a moral noncognitivist.
From the relativist’s perspective, our AI is fundamentally guided by western white guys with western white-guy sentiment and biases. Sure, there are token representations of other groups, but by and large they are marginalised, and the aggregate is still dominated by western white guys.
DISCLAIMER: It is still difficult for me to input or edit copy into a computer, so this may be more ragged than usual. I may return to amend or extend it as I see fit.
I was writing a post for another forum to acknowledge the changeover of the years, and I decided to lean on Dall-E to assist with some image rendering. It appears that Dall-E’s concept of New Year is 2019—BC, before Covid.
IMAGE: 4 Dall-E Renders
Honestly, I am not sure what to say.
Levr Live year? Wot?
Live Yer 2019? Huh?
Lew Yhr Tib 2019? I’d like to buy a vowel.
Neew Ne IiR 2019? Hmmm… 🤔
I think we know when their training data ended. There is no future past 2019. Little did they suspect.
Know thyself
Oracle at Delphi Inscription
As this was just a reactionary post, I don’t have much to add. To paraphrase the Delphic ‘Know thyself’ inscription, know thy data.
This is the caption on the sign for this segment. The sign advertises a solution, which is to “Vote for DEMOCROBOT… The first party run by artificial intelligence”. It also promises to “give everyone a living wage of £1436.78 a week”.
I have been very vocal that I find the idea of humans governing humans a bad idea from the start. By and large, humans are abysmal systems thinkers and easily get lost in complexity. This is why our governments and economies require so much external energy and course correction. Not only were they poorly designed and implemented, but they’re also trying to manage a dynamic system—a complex system. It won’t work.
What about bots and artificial intelligence? The above image was posted elsewhere, and a person commented that our governments are already filled with artificial intelligence. I argued that at best we’ve got pseudo-intelligence; at worst, we’ve got artificial pseudo-intelligence, API.
The challenge with AI is that it’s developed by humans with all of their faults and biases in-built. On the upside, at least in theory, rules could be created to afford consistency and escape political theatre. The same could be extended to the justice system, but I’ll not range there.
Part of the challenge is that the AI needs to optimise several factors, at least, and not all factors are measurable or quantifiable. Any such attempt would tip the playing field one way or another. We might assume that at least AI would be unreceptive to lobbying and meddling, but would this be the case? AI—or rather ML (machine learning) and DL (deep learning)—relies on inputs. It wouldn’t take long for interested think tanks to flood the source of inputs with misinformation. And if there is an information curator, we’ve got a principal-agent problem—who’s watching the watcher?—and we may need to invoke Jeremy Bentham’s Panopticon solution.
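The tipping-the-playing-field point can be sketched in a few lines. The policies, factor names, and scores below are all invented for illustration: the moment you quantify competing objectives and pick weights, the weights themselves decide the winner—and choosing the weights is a human, political act.

```python
# Hypothetical policy scoring: three made-up policies rated on three
# made-up factors. A governing AI that optimises a weighted sum will
# crown a different "best" policy depending on the weights chosen.
policies = {
    "A": {"growth": 0.9, "equality": 0.2, "sustainability": 0.3},
    "B": {"growth": 0.4, "equality": 0.8, "sustainability": 0.5},
    "C": {"growth": 0.3, "equality": 0.5, "sustainability": 0.9},
}


def best_policy(weights: dict) -> str:
    """Return the policy name with the highest weighted score."""
    def score(factors: dict) -> float:
        return sum(weights[k] * v for k, v in factors.items())
    return max(policies, key=lambda name: score(policies[name]))


print(best_policy({"growth": 1.0, "equality": 0.1, "sustainability": 0.1}))  # → A
print(best_policy({"growth": 0.1, "equality": 1.0, "sustainability": 0.1}))  # → B
print(best_policy({"growth": 0.1, "equality": 0.1, "sustainability": 1.0}))  # → C
```

Same data, same algorithm, three different winners. Whoever sets the weights governs; the AI merely executes—which is precisely where the lobbying and meddling would aim.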
One might even argue that an open-source, independently audited system would work. But who would be auditing, and whose interpretation and opinion would we trust? Then I think of Enron and WorldCom, whose auditors were paid to falsify their results. I’d also argue that this would cause a shift from the political class to the tech class, but the political class already sits several tiers below the tech class, so the oligarchs still win.
This seems to be little more than a free-association rant, so I’ll pile on one more reflection. Google and Facebook (or Meta) have ethical governing bodies that are summarily shunned or simply ignored when they point out that the parent company is inherently unethical or immoral. I wouldn’t expect much difference here.
I need a bot to help write my posts. I’ll end here.
“My mom always said life was like a box of chocolates. You never know what you’re gonna get.”
Forrest Gump
The leading quote cannot be more appropriate for my experience trying to render Forrest Gump in a forest. It may be me, but I want to blame the technology. I was trying to render a metaphorically appropriate image of missing the Forrest for the trees by literally placing Forrest Gump in the woods. Let’s just say your mileage may vary.
IMAGES: Native Midjourney render | Native Midjourney render | Midjourney render via DALL-E source
My first attempt was to prompt Midjourney with this string:
forrest gump standing in a savannah georgia forest cinema photorealistic high detail
I seem to have got [a] (possibly) Forrest Gump standing; [b] a Savannah forest [c] (perhaps) Forrest Gump in a cinema; and [d] a larger-than-life Forrest Gump standing among the trees.
Let’s try something new to see where it goes:
tom hanks forrest gump standing in a savannah georgia tree forest cinema photorealistic high detail
Hmm. I certainly see the rendering engine picked up on the tree tag, but what became of Forrest and Tom? There seems to be a figure standing in the distance. Not exactly impressive. Let’s switch from Midjourney to DALL-E-2 and tweak the prompt:
tom hanks as forrest gump wearing a seersucker suit and standing in a savannah georgia tree forest cinematic hyper-realistic
Various DALL-E-2 renders of Forrest Gump in a Savannah, GA, forest
Note that these are in reverse chronological order, so the lower images were rendered first. Dall-E renders 4 images at a time, as does Midjourney. After the bottom four images, I added Tom Hanks’ name and the seersucker suit for obvious reasons.
I added his seersucker suit that seemed to (occasionally) make its way into a render. It is looking better, but I am not convinced that DALL-E knows about Tom Hanks. In the final four images (from the top left), I edited the fourth image on the second row and explicitly instructed Dall-E to insert Tom Hanks’ face without much luck.
I had one more idea: use the DALL-E render as a seed image for Midjourney. This is the last image in the gallery strip at the top of this page. It certainly bears more of Tom Hanks’ likeness, but at the expense of the trees, save for the first in the quadrant, which appears to contain only trees.
In the end, I’ll just say that I did not obtain a suitable render for use as a metaphor elsewhere, but I did get fodder for this post. I have to admit there’s a certain creep factor. I can easily imagine Michael Myers from the Halloween franchise—not to be confused with Mike Myers of Austin Powers and Shrek franchises—in place of Forrest.
DALL-E-2 is now in open beta, and you can generate up to 50 free images in your first month and 15 free per month thereafter. It’s the easier of the two engines to use. Midjourney needs to be run as a Discord bot and seems to be aimed more at professionals, but you can still get 25 free images when you join. After 25 images, you’ll be prompted to subscribe.
What do you think? Have you tried these or another AI image generation engine? Let me know in the comments.
An AI startup is facing allegations of racism and discrimination after being accused of manipulating non-American accents to sound “more white.” The company uses speech recognition technology to change the user’s accent in near-real time. (Source)
Podcast: Audio rendition of this page content
Friction is an impediment to a perfect customer experience. Removing this friction is always welcome, but homogenisation by a dominant culture is a bit more sketchy. It’s laudable that someone aims to remove friction from communication. Raze that tower of Babel—or does it need constructing? I’m no biblical scholar. I’m all for fostering communication, but this control should be an option for the customer receiving the call, not the sender—press 1 if you don’t wish to hear a foreign accent.
When it comes down to it, translation services have the same challenge. Which accent comes out the other end? (I’ll guess it is similar to this one.)
And what American accent is being represented? The neutral accent of the flyover states, the Texas drawl, or the non-rhotic accent of Harvard Yard? I’m guessing it’s not California cool or urban Philadelphia or down on the bayou. Press 7 for Canadian English, eh?
It’s bad enough that US English, despite having a minority of speakers, is running roughshod over World English spelling and pronunciation, colonising the world via streaming services and infestation on the internet.
The BBC relaxed its RP requirements in 1989 for the purpose of regional cultural inclusiveness. Which direction do we want to go?
In the end, this is another example of businesses being more concerned with business than customers and the human experience.
As for me, I prefer an accent I don’t have to work so hard to discern. But at the same time, I’ve worked with many people whose first language is not English, and though it does take a bit more effort, it’s really not that difficult. Besides, I’ve heard native English speakers with regional accents and dialects that are just as taxing.
I sent out a survey a month or so ago asking which regional accent people preferred. As it turned out—unsurprisingly—people preferred the English they are used to hearing. Continental Indians preferred continental English; Americans wanted neutral American English; Jamaicans preferred Jamaican English; and British speakers preferred modern RP. And so it goes.