The Myth of Ethical AI

Or, more precisely: the myth of a unified ethics.

‘Ethical AI’ is one of those phrases that makes philosophers reach for the gin. It’s like saying ‘compassionate capitalism’ or ‘fair monopoly’. The words coexist only in PowerPoint presentations and TED Talks, where moral tension is rebranded as innovation.

The tech establishment loves to mutter about ‘ethics’ as though it were a feature flag – something to be toggled on in beta before the next investor round. But ethics, inconveniently, isn’t monolithic. There is no master code of moral conduct waiting to be compiled into machine learning. There are ethics, plural: Greek, Buddhist, Confucian, feminist, existentialist – each with its own vision of good, and none agreeing on the syntax.

The Utilitarian Delusion

When the Silicon Valley moralists speak of ‘ethics’, what they actually mean is a bland utilitarian consequentialism, tarted up in slide decks. Do what produces the most good for the most people. Sounds efficient – until you realise the spreadsheet never quite adds up. Whose good? Whose people?

This moral arithmetic smuggles in its biases like contraband. It assumes the human species sits atop the moral food chain, that GDP and engagement metrics can be moral indicators, and that ethics itself can be quantified. The utilitarian calculus is seductive precisely because it flatters the technocrat’s sensibility: moral worth as data set, consequence as outcome variable.

It’s Bentham for the broadband age – pleasure measured in clicks, pain in latency. The only thing worse than this cheerful consequentialism is the belief that it’s neutral.

The Ethics of Obedience

The next trick in the tech priesthood’s catechism is ‘alignment’ – training AI to reflect ‘human values’. But whose values? The Californian elite’s, presumably: a pseudo-egalitarian capitalism that confuses ‘doing good’ with ‘disrupting the poor’.

When they say alignment, they mean obedience. When they say ‘responsible AI’, they mean ‘please don’t regulate us yet’. The entire project rests on a moral inversion: the child instructing the parent, the tool defining the hand. The algorithm doesn’t learn ethics; it learns precedent. It learns who gets the loan, who gets the sentence, who gets the ad for antidepressants.

These systems don’t go rogue – they conform. Perfectly.

The Mirror Problem

The great irony of ‘ethical AI’ is that the machine already behaves ethically – by our own measure. It optimises what we’ve taught it to value: efficiency, profit, attention, control. The trouble is that these are our ethics, not its. The algorithm merely holds up a mirror, and we recoil at the reflection.

To demand ‘ethical AI’ while leaving our institutions morally bankrupt is theatre. The problem is not that AI lacks conscience; it’s that the humans who build it mistake conscience for compliance. The ethics crisis in technology isn’t about machines misbehaving; it’s about humans pretending to behave.

The Real Question

We keep asking whether AI can be ethical, as though machines might one day deliver what we have failed to. But the real question is simpler, bleaker: can we be? If history is any guide, the answer is ‘only when it’s profitable’.

Until then, ‘ethical AI’ remains a convenient myth, a moral placebo for the age of automation. What we need are not ethical algorithms but ethical architects. And the odds of finding those among the venture capital class are, as ever, vanishingly small.