A surprising number of people have been using the MEOW GPT I released into the wild. Naturally, I can’t see how anyone is actually using it, which is probably for the best. If you hand someone a relational ontology and they treat it like a BuzzFeed quiz, that’s on them. Still, I haven’t received any direct feedback, positive or catastrophic, which leaves me wondering whether users understand the results or are simply nodding like priests reciting Latin they don’t believe.
The truth is uncomfortable: if you haven’t grasped the Mediated Encounter Ontology (of the World), the outputs may feel like a philosophical brick to the face. They’re meant to; mediation has consequences. I’m even considering adding a warning label:
If you hold an unwavering commitment to a concept with any philosophical weight, perhaps don’t input it. There is a non-zero chance the illusion will shatter.
Below is a sampling of the concepts I tested while inspecting the system’s behaviour. I’m withholding the outputs, partly to avoid influencing new users and partly to preserve your dignity, such as it is.
- authenticity
- anattā (Buddhist)
- character (in Aristotle’s virtue-ethical sense)
- consciousness
- dignity
- freedom
- hózhó (Navajo)
- justice
- karma
- love
- progress
- ren (仁)
- table
- tree
- truth
I may have tried others, depending on how irritated I was with the world at the time.
(Now that I think of it, I entered my full name and witnessed it nearly have an aneurysm.)
My purpose in trying these was (obviously) to test the GPT. As part of that, I wanted to include terms I already considered weasel words, alongside common terms (table) and terms outside Western modalities. I learned something about the engine in each case.
Tables & Trees
One of the first surprises was the humble ‘table’, which, according to the engine, apparently moonlights across half of civilisation’s conceptual landscape. If you input ‘table’, you get everything from dinner tables to data tables to parliamentary procedure. The model does exactly what it should: it presents the full encounter-space and waits for you to specify which world you meant to inhabit.
The lesson: if you mean a table you eat dinner on, say so. Don’t assume the universe is built around your implied furniture.
‘Tree’ behaves similarly. Does the user mean a birch in a forest? A branching data structure? A phylogenetic diagram? MEOW GPT won’t decide that for you; nor should it. Precision is your job.
This is precisely why I tested ‘character (in Aristotle’s virtue-ethical sense)’ rather than tossing ‘character’ in like a confused undergraduate hoping for luck.
Non-Western Concepts
I also tested concepts well outside the Western philosophical sandbox. This is where the model revealed its real strength.
Enter ‘karma’: it promptly explained that the Western reduction is a cultural oversimplification and – quite rightly – flagged that different Eastern traditions use the term differently. Translation: specify your flavour.
Enter ‘anattā’: the model demonstrated that Western interpretations often reduce the concept to a caricature. Which, frankly, they do.
Enter ‘hózhó’: the Navajo term survives mostly in the anthropological imagination, and the model openly described it as nearly ineffable – especially to those raised in cultures that specialise in bulldozing subtlety. On that score, no notes.
Across the board, I was trying to see whether MEOW GPT would implode when confronted with concepts that resist neat Western categorisation. It didn’t. It was annoyingly robust.
Closing Notes
If you do try the MEOW GPT and find its results surprising, illuminating, or mildly offensive to your metaphysical sensibilities, let me know – and tell me why. It helps me understand what the engine does well and what illusions it quietly pops along the way. Your feedback may even keep me from adding further warning labels, though I wouldn’t count on it.