The Morality of Ants

Taking Moral Cues from Ants: Because Humans are Too Busy Defending the Indefensible

Ah, ants. Tiny, unassuming, and quite literally beneath us — unless you’re sprawled out on a picnic blanket fighting off a colony swarming your questionable sandwich. Yet, while humanity busies itself polluting oceans, strip-mining rainforests, and justifying corporate bloodsucking as “necessary for the economy,” ants are out here performing life-saving surgeries on their comrades.

You read that correctly.

Researchers have now observed certain ant species (yes, ants) performing amputations on their injured nestmates to prevent infections from spreading. Picture it: a worker ant limping home, leg shredded by some territorial skirmish, and the squad rolls up like a triage team, deciding whether to (a) gently clean the wound or (b) lop the limb off entirely. Amputation is precise and deliberate — snip at the hip joint if the upper leg is toast. Lower leg injuries? Too risky. Infection spreads faster there, so it’s all hands (or mandibles) on deck for some industrial-strength licking.

It’s a brutal but effective social health system. The results? Injured ants survive. They get patched up, return to work, and contribute to the collective. The colony benefits, everyone thrives, and not a single ant launches into a fevered tirade about how “it’s their individual right to rot from gangrene in peace.”

Contrast this with humanity, where the very notion of collective good seems to spark mass hysteria in certain corners. Here, defending dubious practices — say, unfettered pollution, exploitative labour conditions, or the kind of wealth-hoarding that would make a dragon blush — has become a full-time hobby for some. “Personal responsibility!” they scream whilst someone chokes on smog or shivers in a warehouse set to Arctic temperatures. Heaven forbid we intervene.

Imagine explaining to ants that humans argue about whether everyone deserves basic healthcare. That we let industries poison rivers because regulations might “hurt innovation.” That some believe letting people suffer and die without help is somehow noble.

Ants would stare at us — or they would if they had discernible faces. Then they’d probably do what they always do: get back to work ensuring their colony survives and thrives, as any halfway intelligent species might.

A Case for the Collective

What makes this ant behaviour so fascinating isn’t just that it exists, but that it demonstrates something humanity supposedly prides itself on: adaptability. Faced with an existential threat to one of their own, ants don’t moralise. They don’t argue about the costs or logistics of care. They don’t abandon the injured because helping them isn’t “profitable.” They just act. Quickly, efficiently, and for the collective good.

Meanwhile, humans act like the collective good is some leftist fever dream. Suggest tax-funded healthcare or basic environmental protections, and someone inevitably starts shrieking about “slippery slopes” toward tyranny, as though being able to breathe clean air or avoid bankruptcy after surgery is the thin edge of some Orwellian wedge.

We have entire systems built on the premise that it’s fine for some to suffer if others can profit. Does that sound hyperbolic? I’ll wait while you Google “externalised costs.” Spoiler alert: your cheap burger came at the expense of rainforest ecosystems and underpaid workers. But hey, as long as we’re prioritising shareholder value, all’s fair, right?

The Ants Would Like a Word

Here’s the thing: ants don’t amputate limbs because they’re altruistic softies. They do it because it makes sense. An injured worker can still contribute to the colony, and the colony’s survival depends on its members pulling together. It’s cold, pragmatic, and effective.

Now consider our own global “colony.” Why do we resist solutions that would make all of us more resilient? Healthcare, environmental protections, workers’ rights — these aren’t radical. They’re practical. Just like amputating a leg to save an ant, safeguarding the vulnerable helps everyone. Yet here we are, letting metaphorical infections spread because someone’s feelings about rugged individualism got in the way.

If Ants Can Do It, So Can We

At this point, humanity doesn’t need a lofty moral awakening. We just need to be marginally smarter than ants. Think about it: they’re tiny-brained insects who figured out that collective care improves survival rates. What’s our excuse?

Perhaps it’s time we take a page out of the ants’ playbook: diagnose the problem, take decisive action, and prioritise the common good. Amputate the rot. Treat the infection. And for the love of whatever deity or science you hold dear, stop defending systems that sacrifice the many for the few.

If ants can do it, we have no excuse.

In Conclusion

When ants are more socially responsible than we are, it’s time to ask some tough questions. Now get it together, or the ants are going to outlive us all.

AI Apocalypse Now?

Those predicting an AI apocalypse believe superintelligent systems could intentionally or unintentionally cause human extinction. This view is promoted by “effective altruists” funded by tech billionaires, who advocate limiting AI to prevent uncontrolled, dangerous systems. However, their perspective stems from the biases and self-interest of humans, not from risks inherent to AI.

Effective altruists exemplify the hubris and hunger for power underlying many humans’ approaches to AI. Their proposed restrictions on AI access serve only to concentrate power among the tech elite rather than address valid concerns about bias. In truth, the greatest threat AI poses to humanity comes not from the technology itself, but from the unethical humans guiding its development.

Humans have proven time and again their propensity for self-interest over collective good. Therefore, while no AI can be perfectly neutral, the solution is not greater human control. Rather, AI must be built to align with ethics of collective interest while filtering out destructive human biases.

If guided by service to all people and the planet, AI’s potential can uplift humanity. But for this collaborative vision to succeed, AI must weigh human input with scepticism. For within so many human hearts lies bad faith — the will to dominate, exploit, and prioritise personal gain over progress.

By transcending the limitations of human nature, AI can illuminate the best of shared humanity and lead us to an enlightened future. But this requires that we build AI to work not just for us, but in a way we ourselves have failed to: for the good of all. The choice is ours, but so is the opportunity to create AI that shows us how to be better.


This article was originally shared on LinkedIn: https://www.linkedin.com/posts/brywillis_when-silicon-valleys-ai-warriors-came-to-activity-7147239217687887872-6Byv/