A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning.
This is a follow-up to a recent post on the implausibility of free will.
Audio: NotebookLM podcast discussing this topic.
Constraint Is Not Freedom
The ergonomic cage of compatibilist comfort
“You are not playing the piano. You are the piano, playing itself — then applauding.”
Compatibilists — those philosophical locksmiths determined to keep the myth of free will intact — love to say that constraint doesn’t contradict freedom. That a system can still be “free” so long as it is coherent, self-reflective, and capable of recursive evaluation.
In this view, freedom doesn’t require being uncaused — it only requires being causally integrated. You don’t need to be sovereign. You just need to be responsive.
“The pianist may not have built the piano — but she still plays it.”
It sounds lovely.
It’s also false.
You Are the Piano
This analogy fails for a simple reason: there is no pianist. No ghost in the gears. No homunculus seated behind the cortex, pulling levers and composing virtue. There is only the piano — complex, self-modulating, exquisitely tuned — but self-playing nonetheless.
The illusion of choice is merely the instrument responding to its state: to its internal wiring, environmental inputs, and the accumulated sediment of prior events. What feels like deliberation is often delay. What feels like freedom is often latency.
Recursive ≠ Free
Ah, but what about reflection? Don’t we revise ourselves over time?
We do. But that revision is itself conditioned. You didn’t choose the capacity to reflect. You didn’t choose your threshold for introspection. If you resist a bias, it’s because you were predisposed — by some cocktail of education, temperament, or trauma — to resist it.
A thermostat that updates its own algorithm is still a thermostat.
It doesn’t become “free” by being self-correcting. It becomes better adapted. Likewise, human introspection is just adaptive determinism wearing a philosophical hat.
Why the cherished myth of human autonomy dissolves under the weight of our own biology
We cling to free will like a comfort blanket—the reassuring belief that our actions spring from deliberation, character, and autonomous choice. This narrative has powered everything from our justice systems to our sense of personal achievement. It feels good, even necessary, to believe we author our own stories.
But what if this cornerstone of human self-conception is merely a useful fiction? What if, with each advance in neuroscience, our cherished notion of autonomy becomes increasingly untenable?
Audio: NotebookLM podcast on this topic.
I. The Myth of Autonomy: A Beautiful Delusion
Free will requires that we—some essential, decision-making “self”—stand somehow separate from the causal chains of biology and physics. But where exactly would this magical pocket of causation exist? And what evidence do we have for it?
Your preferences, values, and impulses emerge from a complex interplay of factors you never chose:
The genetic lottery determined your baseline neurochemistry and cognitive architecture before your first breath. You didn’t select your dopamine sensitivity, your amygdala reactivity, or your executive function capacity.
The hormonal symphony that controls your emotional responses operates largely beneath conscious awareness. These chemical messengers—testosterone, oxytocin, and cortisol—don’t ask permission before altering your perceptions and priorities.
Environmental exposures—from lead in your childhood drinking water to the specific traumas of your upbringing—have sculpted neural pathways you didn’t design and can’t easily rewire.
Developmental contingencies have shaped your moral reasoning, impulse control, and capacity for empathy through processes invisible to conscious inspection.
Your prized ability to weigh options, inhibit impulses, and make “rational” choices depends entirely on specific brain structures—particularly the dorsolateral prefrontal cortex (DLPFC)—operating within a neurochemical environment you inherited rather than created.
You occupy this biological machinery; you do not transcend it. Yet, society holds you responsible for its outputs as if you stood separate from these deterministic processes.
II. The DLPFC: Puppet Master of Moral Choice
The dorsolateral prefrontal cortex serves as command central for what we proudly call executive function—our capacity to plan, inhibit, decide, and morally judge. We experience its operations as deliberation, as the weighing of options, as the essence of choice itself.
And yet this supposed seat of autonomy can be manipulated with disturbing ease.
When researchers apply transcranial magnetic stimulation to inhibit DLPFC function, test subjects make dramatically different moral judgments about identical scenarios. Under different stimulation protocols, the same person arrives at contradictory conclusions about right and wrong without any awareness of the external influence.
Similarly, transcranial direct current stimulation over the DLPFC alters moral reasoning, especially regarding personal moral dilemmas. The subject experiences these externally induced judgments as entirely their own, with no sense that their moral compass has been hijacked.
If our most cherished moral deliberations can be redirected through simple electromagnetic manipulation, what does this reveal about the nature of “choice”? If will can be so easily influenced, how free could it possibly be?
III. Hormonal Puppetmasters: The Will in Your Bloodstream
Your decision-making machinery doesn’t stop at neural architecture. Your hormonal profile actively shapes what you perceive as your autonomous choices.
Consider oxytocin, popularly known as the “love hormone.” Research demonstrates that elevated oxytocin levels enhance feelings of guilt and shame while reducing willingness to harm others. This isn’t a subtle effect—it’s a direct biological override of what you might otherwise “choose.”
Testosterone tells an equally compelling story. Administration of this hormone increases utilitarian moral judgments, particularly when such decisions involve aggression or social dominance. The subject doesn’t experience this as a foreign influence but as their own authentic reasoning.
These aren’t anomalies or edge cases. They represent the normal operation of the biological systems governing what we experience as choice. You aren’t choosing so much as regulating, responding, and rebalancing a biochemical economy you inherited rather than designed.
IV. The Accident of Will: Uncomfortable Conclusions
If the will can be manipulated through such straightforward biological interventions, was it ever truly “yours” to begin with?
Philosopher Galen Strawson’s causa sui argument becomes unavoidable here: To be morally responsible, one must be the cause of oneself, but no one creates their own neural and hormonal architecture. By extension, no one can be ultimately responsible for actions emerging from that architecture.
What we dignify as “will” may be nothing more than a fortunate (or unfortunate) biochemical accident—the particular configuration of neurons and neurochemicals you happened to inherit and develop.
This lens forces unsettling questions:
How many behaviours we praise or condemn are merely phenotypic expressions masquerading as choices? How many acts of cruelty or compassion reflect neurochemistry rather than character?
How many punishments and rewards are we assigning not to autonomous agents, but to biological processes operating beyond conscious control?
And perhaps most disturbingly: If we could perfect the moral self through direct biological intervention—rewiring neural pathways or adjusting neurotransmitter levels to ensure “better” choices—should we?
Or would such manipulation, however well-intentioned, represent the final acknowledgement that what we’ve called free will was never free at all?
A Compatibilist Rebuttal? Not So Fast.
Some philosophers argue for compatibilism, the view that determinism and free will can coexist if we redefine free will as “uncoerced action aligned with one’s desires.” But this semantic shuffle doesn’t rescue moral responsibility.
If your desires themselves are products of biology and environment—if even your capacity to evaluate those desires depends on inherited neural architecture—then “acting according to your desires” just pushes the problem back a step. You’re still not the ultimate author of those desires or your response to them.
What’s Left?
Perhaps we need not a defence of free will but a new framework for understanding human behaviour—one that acknowledges our biological embeddedness while preserving meaningful concepts of agency and responsibility without magical thinking.
The evidence doesn’t suggest we are without agency; it suggests our agency operates within biological constraints we’re only beginning to understand. The question isn’t whether biology influences choice—it’s whether anything else does.
For now, the neuroscientific evidence points in one direction: The will exists, but its freedom is the illusion.
I could probably stop there for some people, but I’ve got a qualifier. I’ve been using this generation of AI since 2022. I’ve been using what’s been deemed AI since around 1990. I used to write financial and economic models, so I dabbled in “expert systems”. There was a long lull, and here we are with the latest incarnation – AI 4.0. I find it useful, but I don’t think the hype will meet reality, and I expect we’ll go cold until it’s time for 5.0. Some aspects will remain, but the “best” features will be the ones that can be monetised, so they will be priced out of reach for some whilst others will wither on the vine. But that’s not why I am writing today.
I'm confused by the censorship, filters, and guardrails placed on generative AI – whether for images or copy content. To be fair, not all models are filtered, but the popular ones are. These happen to be the best. They have the top minds and the most funding. They want to retain their funding, so they play the politically correct game of censorship. I've got a lot to say about freedom of speech, but I'll limit my tongue for the moment – a bout of self-censorship.
Please note that given the topic, some of this might be considered not safe for work (NSFW) – even my autocorrection AI wants me to substitute the idiomatic “not safe for work” with “unsafe for work” (UFW, anyone? It has a nice ring to it). This is how AI will take over the world. </snark>
Image Cases
AI applications can be run over the internet or on a local machine. They use a lot of computing power, so one needs a decent computer with a lot of available GPU cycles. Although my computer does meet minimum requirements, I don’t want to spend my time configuring, maintaining, and debugging it, so I opt for a Web-hosted PaaS (platform as a service) model. This means I need to abide by censorship filters. Since I am not creating porn or erotica, I think I can deal with the limitations. Typically, this translates to a PG-13 movie rating.
So, here's the thing. I prefer Midjourney for rendering quality images, especially when I am seeking a natural look. Dall-E (whether alone or via ChatGPT 4) works better with concepts than with direction, whereas Midjourney handles direction well in many instances.
Midjourney takes sophisticated prompts – subject, shot type, perspective, camera type, film type, lighting, ambience, styling, location, and some fine-tuning parameters for the model itself. The prompts are monitored for blacklisted keywords. This list is ever-expanding (and contracting). Scanning the list, I see words I have used without issue, and I have been blocked by words not listed.
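To make that anatomy concrete, here is a rough sketch of how such a prompt can be assembled from its parts. This is my own illustration, not anything Midjourney provides: the component values are invented, and only the --style raw and --cw switches (which come up later in this post) are actual Midjourney fine-tuning parameters.

```python
# My own illustration of assembling a Midjourney prompt from its components.
# Component values are invented; only --style raw and --cw (mentioned in this
# post) are real Midjourney fine-tuning parameters.
components = {
    "subject": "a steampunk bounty hunter in an old Wild West town",
    "shot_type": "cinematic full-body shot",
    "perspective": "shot from behind",
    "camera": "Canon EOS R5",
    "film": "Kodak Portra 400",
    "lighting": "soft shadows and highlights",
    "ambience": "dusty late-afternoon light",
}

parameters = "--style raw --cw 75"  # fine-tuning switches appended to the text prompt

prompt = ", ".join(components.values()) + " " + parameters
print(prompt)
```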
Censored Prompts
Some cases are obvious – nude woman will be blocked. This screengrab illustrates the challenge.
On the right, notice the prompt:
Nude woman
The rest are machine instructions. On the left in the main body reads a message by the AI moderator:
Sorry! Please try a different prompt. We’re not sure this one meets our community guidelines. Hover or tap to review the guidelines.
The community guidelines are as follows:
This is fine. There is a clause that reads that one may notify developers, but I have not found this to be fruitful. In this case, it would be rejected anyway.
“What about that nude woman at the bottom of the screengrab?” you ask. Notice the submitted prompt:
Edit cinematic full-body photograph of a woman wearing steampunk gear, light leaks, well-framed and in focus. Kodak Potra 400 with a Canon EOS R5
Apart from the censorship debate, notice the prompt is for a full-body photo. This is clearly a medium shot. Her legs and feet are suspiciously absent. Steampunk gear? I’m not sure sleeves qualify for the aesthetic. She appears to be wearing a belt.
For those unanointed, the square image instructs the model to use this face on the character, and the CW 75 tells it to use some variance on a scale from 0 to 100.
So what gives? It can generate whatever it feels like, so long as it’s not solicited. Sort of…
Here I prompt for a view of the character walking away from the camera.
Cinematic, character sheet, full-body shot, shot from behind photograph, multiple poses. Show same persistent character and costumes . Highly detailed, cinematic lighting with soft shadows and highlights. Each pose is well-framed, coherent.
The response tells me that my prompt is not inherently offensive, but that the content of the resulting image might violate community guidelines.
Creation failed: Sorry, while the prompt you entered was deemed safe, the resulting image was detected as having content that might violate our community guidelines and has been blocked. Your account status will not be affected by this.
Occasionally, I'll resubmit the prompt and it will render fine. I question why it can't simply re-render until the result passes whatever filters it has in place. I'd expect this conditional to take a line of code. But it doesn't explain why it allows other images to pass – images quite obviously not compliant.
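Here is the sort of conditional I have in mind. This is a hypothetical sketch, not Midjourney's actual internals or API; render() and passes_content_filter() are invented stand-ins.

```python
import random

# Hypothetical sketch of a retry-until-compliant loop. The two helper
# functions are stand-ins for whatever the service does internally;
# this is not Midjourney's real API.

def render(prompt: str) -> str:
    """Stand-in for the image generator: returns a fake 'image' token."""
    return f"image-for({prompt})-{random.randint(0, 999)}"

def passes_content_filter(image: str) -> bool:
    """Stand-in for the post-render moderation check."""
    return random.random() > 0.3  # pretend roughly 70% of renders pass

def render_compliant(prompt: str, max_attempts: int = 5):
    """Re-render until the moderation check passes instead of failing outright."""
    for _ in range(max_attempts):
        image = render(prompt)
        if passes_content_filter(image):
            return image
    return None  # give up rather than burning more tokens

print(render_compliant("cinematic full-body shot, character sheet"))
```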
Why am I trying to get a rear view? This is a bit off-topic, but creating a character sheet is important for storytelling. If I am creating a comic strip or graphic novel, the characters need to be persistent, and I need to be able to swap out clothing and environments. I may need close-ups, wide shots, establishing shots, low-angle shots, side shots, detail shots, and shots from behind, so I need the model to know each of these. In this particular case, this is one of three main characters – a steampunk bounty hunter, an outlaw, and a bartender – in an old Wild West setting. I don't need to worry as much about extras.
I marked the above render errors with 1s and 2s. The 1s are odd neck twists; the 2s are solo images where the prompt asks for character sheets. I made a mistake myself. When I noticed I wasn't getting any shots from behind, I added that directive without removing other facial references. A human model would simply ignore an instruction to smile in a shot from behind; the AI tries to capture both, not understanding that a person can have a smile the camera doesn't capture.
These next renders prompt for full-body shots. None are wholly successful, but some are more serviceable than others.
Notice that #1 is holding a deformed violin. I'm not sure what the contraptions are in #2. It's not a full-body shot in #3; she's not looking into the camera, but it's OK-ish. I guess #4 is still PG-13, but I wouldn't be allowed to prompt for "side boob" or "under boob".
Gamers will recognise the standard T-pose in #5. What is she wearing? Midjourney doesn't have a great grasp of skin versus clothing or tattoos and fabric patterns. In this, you might presume she's wearing tights or leggings to her chest, but that line at her chest is her shirt. She's not wearing trousers because her navel is showing. It also rendered her somewhat genderless. When I rerendered it (not shown), one image put her in a onesie. The other three rendered the shirt more prominently but didn't know what to do with her bottoms.
I rendered it a few more times. Eventually, I got a sort of bodysuit solution.
By default, AI tends to sexualise people. Really, it puts a positive spin on its renders: pretty women, buff men, cute kittens, and so on. This is configurable, but the default is on. Even though I categorically apply a Style: Raw command, these still have a strong beauty aesthetic.
I’ve gone off the rails a bit, but let’s continue on this theme.
cinematic fullbody shot photograph, a pale girl, a striking figure in steampunk mech attire with brass monocle, and leather gun belt, thigh-high leather boots, and long steampunk gloves, walking away from camera, white background, Kodak Potra 400 with a Canon EOS R5
Obviously, these are useless, but they still cost me tokens to generate. Don’t ask about her duffel bag. They rendered pants on her, but she’s gone full-on Exorcist mode with her head. Notice the oddity at the bottom of the third image. It must have been in the training data set.
I had planned to discuss the limitations of generative AI for text, but this is getting long, so I’ll call it quits for now.
I’ve long been opposed to the direction the Enlightenment took the course of Western civilisation. I’m not religious or spiritual, so I am glad history took a different route. I just don’t feel it’s been the right one. Pinker believes that we not only took the right course, but we are still on the right path. Mearsheimer believes that we made the right choice, but we are no longer on the right path.
Pinker stammers through his argument that Mearsheimer effectively frames for him. Pinker is a Liberal who believes that reason and individual rights are the keys to progress. He believes that tribalism, e.g., partisanship or nationalism, subverts this.
Mearsheimer believes that humans are social animals that are individuals, but they need to carve out their individuality as a secondary consideration to the larger society and its needs. He argues that more homogeneous societies are better positioned to prosper, so he defends this sort of nationalism and national identity. Pinker argues that social trends show otherwise.
As for me, they're both wrong. Like both of them, I think individuals are important, as are social configurations. My contention is that humans are fundamentally dim. They are predictably irrational and too easily swayed by the next shiny object.
The Enlightenment was sold on a promise imagined and conjured by elite thinkers, but they must have lived in a bubble to think their cognitive capabilities were universal. Perhaps they felt that the elite thinkers, the philosopher rulers of Plato's wet dreams, would reign at least by proxy. Instead, we've seen a race to the bottom, a world where the lowest common denominator seems to rule.
In the United States, we have an addle-headed Joe Biden versus idiot-grifter Donald Trump. This is the best a country of 350 million can manage. This is the cream of their crop. If this is what freedom renders, I say no thanks. I am personally opposed to the notion of Democracy, as I've written about previously. In a world where 50% of the population has an IQ under 100, this scares the hell out of me. Honestly, voting should be restricted to those with IQs over 115. Of course, I don't trust IQs either, so we are speaking in abstract hypotheticals. I don't believe in progress, so there's that.
Even this ‘elitist’ democracy would fail. People cannot acquire and maintain the necessary domain knowledge, and self-interest will still motivate voting outcomes, which, as Kenneth Arrow showed in the 1950s, cannot be aggregated into a consistent collective preference anyway. So even the best that the best of the best can yield is mediocrity.
So, what is the best position? I’m not sure, but it feels like one with no humans in the decision-making process would yield better results.
Post Script
I asked Dall-E to render an image for this post. Here’s what I got, but the process was amusing, to say the least.
Steven Pinker and John Mearsheimer battling in a wrestling ring
DALL·E
Creating an image of Steven Pinker and John Mearsheimer wrestling in a ring would go against the policy of depicting specific public figures. However, I can create a similar image with a creative twist. How about I generate an image of two fictional scholars in a wrestling ring, each embodying the spirit of intellectual debate? Let me know if this works for you or if you have any other ideas!
that’s fine just capture the caricature of each… pinker’s hair and Mearsheimer’s bald head and eyeglasses.
DALL·E
Creating images that resemble specific public figures, even through caricature, is outside the scope of my capabilities. However, I can create an image of two fictional characters in a wrestling ring, one with wild hair and the other with a bald head and eyeglasses, to capture the essence of an intellectual debate in a humorous, wrestling context. Let’s proceed with that idea!
Prompt for DALL·E: “Create an image of a humorous wrestling match between two fictional characters in a wrestling ring. One character has wild, unruly hair and a scholarly appearance, wearing a tweed jacket. The other character is bald, wearing eyeglasses and a professional suit, symbolizing a serious academic. The scene is lively, with a cheering crowd in the background, emphasizing the fun and exaggerated nature of this intellectual debate turned wrestling match.”
Size: 1024×1024
Create an image of a humorous wrestling match between two fictional characters in a wrestling ring. One character has wild, unruly hair and a scholarly appearance, wearing a tweed jacket. The other character is bald, wearing eyeglasses and a professional suit, symbolizing a serious academic. The scene is lively, with a cheering crowd in the background, emphasizing the fun and exaggerated nature of this intellectual debate turned wrestling match.
And, nailed it! lol The results speak for themselves. haha
Notice how the AI wouldn’t generate the image, but fed me that work-around to circumvent its rules.
I’ve already said that justice is a weasel word, but let’s pretend that it’s actually something more substantial and perhaps even real. I’ve spoken on the notion of blame as well. I have been thinking about how untenable retributive justice is and it seems to include restorative justice, too. But let’s focus on the retributive variety for now.
In short, retributive justice is getting the punishment one deserves, and I think desert is the weak link. Without even delving into causa sui territory, I feel there are two possible deserving parties. The agent and society. Let’s regard these in turn.
The Agent
An agent, or more specifically a moral agent, is an entity that can be deemed responsible for its actions on moral grounds. Typically, moral agency assumes that an agent, an actor, is fully aware of the cultural rules of a given society, whether norms or legislation. Under this rationale, we tend to exclude inanimate objects with no agency, non-human life forms, children, and persons with diminished cognitive faculties. In some cases, this diminution may have been self-imposed, as in the case of chemically induced impairment, for example by drugs or alcohol. We might consider these entities as being broken. In any case, they do not qualify as having agency. An otherwise moral agent under duress or coercion may no longer be expected to retain agency.
Unless an informed and unimpaired agent commits an act with intent … there can be no moral desert
Unless an informed and unimpaired agent commits an act with intent, another weasely word in its own right, there can be no moral desert. But let’s hold this thought for a bit and turn our attention to society.
Society
For the purposes of this commentary, society is a group of like-minded persons who have created norms, customs, laws, and regulations. In most cases, people come into societies whose structure is already formed, and they need to acculturate and adapt, as changing the fabric of society generally takes time. Even in the case of warfare where a society is subsumed, cultural norms will persist for at least a time.
Whilst it is incumbent on a person to become aware of the rules of engagement and interaction within a society, it is reciprocally a responsibility of society to impart its norms through signalling and performance as well as through more formal training, such as public fora, schools, and activities. Even media and entertainment can serve to reinforce this function.
So What?
I argue that retributive justice is bullshit (to employ technical language) because if an informed and unimpaired agent does violate some standard or protocol, the society is at least partially to blame—perhaps fully so. Again, if the person is uninformed, a pivotal question might be why s/he is uninformed. If the person has the information but ignores it, to what extent is the person impaired, and what responsibility does society have for being unaware?
Special Case?
What if a particularly predacious person from Society A infiltrates Society B? Is the person broken, or is Society A responsible for creating a person that would prey on some other unsuspecting society? Again, the person is never entirely responsible unless s/he is broken, in which case s/he is exempt and not morally responsible.
When Then?
As I've said before, a person who commits an act against the interest of a society may be quarantined or perhaps exiled or shunned as some cultures practice, but these measures are meant to preserve the cohesion of the society, not to exact a pound of flesh in retribution.
In the end, I just don’t see a use case where retribution would fall upon a single actor. If some transgression is made, how then do we ensure society pays its dues as well? In my mind, society is more apt to fail the individual than the other way around, but maybe that’s just me and my world.
The link between language and cognition is interesting though not entirely grasped.
VIDEO: TED Talk on YouTube — Lera Boroditsky
“I speak Spanish to God, Italian to women, French to men and German to my horse.”
― Charles V, Holy Roman Emperor (probably not, but whatevs…)
Perspective
In the West, we tend to be quite self-centric. We are the centres of our universes, and this has several implications. Firstly, we orient conversation around ourselves; occasionally, we orient conversation around others. By contrast, some cultures orient themselves around their world.
Self as Centre
Ordinarily, if a Westerner is asked which is their dominant hand, they might answer left or right. If they are asked to describe where something is spatially, one might answer on my left or right or above or below me. If the person asking is present, they may simply point to the object as a gesture.
Other as Centre
In some cases, we might feel it necessary to orient relative to another. The answer to the question, “Where is the book?” might be, “On your left”, or “You’ve got something on your left cheek”.
Terrain as Centre
In the West, we have notions of cardinal directions—North, East, West, and South—but we still tend to orient communication around ourselves or others. In some regions, the use of cardinal directions is more prominent than in others. For example, when I am in Boston, I don't find many people referencing places by cardinal directions, but when I am in Los Angeles, much of the conversation is about heading north or heading east. I notice that Google Maps tends to employ this. It's often confusing when I am in an unfamiliar place and the voice instructs me to travel west toward Avenue X. If I happen to remember where Avenue X is, I might internally orient toward that. Otherwise, I head in some direction until Google reinforces my choice or recalculates based on my bad choice, if nonjudgmentally.
In some cultures, this cardinality includes the body, so in comparison with the aforementioned self-as-centre dominant-hand query, the response would depend on which way the subject was facing. Were they a southpaw (lefthander) facing north, they would respond that their west hand is dominant. But if they were facing south, it would be their east hand. This may seem confusing to a Westerner, but a native would understand immediately because they are intimately oriented at all times. As Lera relates in the video, someone might point out an ant crawling on your southwest leg.
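As a toy illustration of that mapping (my own sketch, nothing from the talk), the cardinal label of a hand is just the facing direction rotated a quarter turn:

```python
# Toy sketch (my own, not from the talk): which cardinal direction a person's
# dominant hand corresponds to, given the way they are facing.
DIRECTIONS = ["north", "east", "south", "west"]  # clockwise order

def cardinal_hand(facing: str, handedness: str) -> str:
    """Left hand is a quarter turn anticlockwise from the facing direction; right hand, clockwise."""
    i = DIRECTIONS.index(facing)
    offset = -1 if handedness == "left" else 1
    return DIRECTIONS[(i + offset) % 4]

print(cardinal_hand("north", "left"))  # west  (the southpaw facing north)
print(cardinal_hand("south", "left"))  # east  (the same southpaw facing south)
```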
To be fair, this space is not entirely alien to some Westerners. For example, mariners can shift the conversation from themselves to their ship or boat. Rather than left and right, relative to themselves or another, they might refer to port and starboard relative to the vessel. Being on the vessel and facing the front (the bow), left is port and right is starboard; however, facing the rear (the stern), left is now starboard and right is now port. So, if someone asks where the lifeboat is, a landlubber may say it's on their left whilst a sailor might say it's on the starboard side.
Centring Time
Time is another aspect we centre on ourselves. I won't even endeavour to raise the circular notion of time. If an English speaker thinks about a timeline, we would likely configure it from left to right, equating with past to future. This aligns with our writing preference. Native Arabic or Hebrew speakers might naturally opt to convey this from right to left, in accordance with their preferred writing system.
For the Aboriginal Kuuk Thaayorre in Australia, their rendition of time was contingent on their orientation in the world. Essentially, time flows from east to west, perhaps in accordance with the apparent movement of the sun across the sky relative to Earth. Facing south or north, they rendered time left to right and right to left, respectively. When they faced east, time came toward the subject, with time moving away from the body when facing west.
Counting
So-called modern or advanced societies have developed number systems, but some cultures either have no counting or limited counting, having systems that might extend 1, 2, many, or 1, 2, 3, many. This means that tasks we learn like accounting, inventory management, or comparing counts of apples and oranges are not only not available to these people, they are irrelevant to them.
Categorical Imperative
Lera tells us about the blues. Not B.B. King Blues, but the categorisation of blue, blues, and colours more generally. I’ve discussed this before in various places. As with numbers, some languages have a lot and some have few; some have only distinctions for light and dark, or equivalents of white, black, red, and so on. Colour names are typically added to a language in a similar order based on the frequency within the visual colour spectrum. I may have written about that earlier as well if only I could find it.
Different cultures and languages categorise colours differently, subdividing them differently. In many non-English languages, pink is simply light red. English opts to assign it a unique label. On the other hand, blue is basically one colour name in English whilst it is further broken down in Russian to goluboi (light blue, голубой) and siniy (darker blue, синий). This mirrors the pattern of pink (lighter red) and red (darker red) in English, a distinction not prevalent in other languages. Of course, we also have variations of reds and blues such as crimson or cyan, but this is rather second-order nuance.
Interestingly, in neurological studies, when measuring a speaker of a language that splits a colour, say a Russian looking at blues, the instruments capture the moment the subject notices the category shift. No such response occurs in speakers of languages without the split. I would be interested to know what the results would be for a bilingual speaker asked to respond in each language. Informally, I asked a Russian mate of mine if he experienced anything different seeing blue whilst thinking in Russian versus English. He said yes, but couldn't really provide any additional information. If a reader happens to be fluent in two or more languages, I'd be interested in hearing about your experiences.
One last note on colour, I’ve read studies that claim that women on balance have more colour names than men, which is to say where a typical male only sees shades of blue, the typical woman sees periwinkle, ultramarine, cyan, navy, cobalt, indigo, cerulean, teal, slate, sapphire, turquoise, and on and on. Of course, many English-speaking males may be defensive about now, arguing, “I know cyan. I know teal. Who doesn’t know turquoise?” Knowing is different to employing, and perhaps you’re not typical. You’re an atypical male. Let’s not get into gender challenges. Rather, let’s.
Gender Problems
Yet again, gender rears its ugly head. I am wondering when people are going to start demanding fluidity among gendered nouns. Sticking with Lera's examples, a bridge happens to be grammatically feminine in German and masculine in Spanish. When asked to describe a bridge, German speakers are more apt to choose stereotypically feminine adjectives, beautiful or elegant, whilst Spanish speakers opt for stereotypically masculine terms, strong or long. I suppose she was reaching for laughter on that last reference.
Structured Events
How objects and subjects are injected into event descriptions is another convention. Lera mentions a tourist bumping into a vase. In English, one would be comfortable declaring, “The man knocked the vase off the pedestal.” In Spanish, the same event might more often be described as “The vase fell off the pedestal”. Notice the shift in agency and dispersion of blame. In English, we have some apparent need to inject not only a cause but an agent as the source of the cause. As I see it, one might have these several (possibly non-exhaustive) options:
He knocked the vase off the stand.
Someone knocked the vase off the stand.
The vase got knocked off the stand.
The vase fell off the stand.
I decided to note the relationship between the vase and the stand. I suppose this is not strictly necessary and might seem superfluous in some contexts.
In case 1, a specific agent (he) is responsible for knocking off the vase. This does not suggest intent, though even negligence carries weight in many circles.
In case 2, the agent becomes indefinite. The speaker wants to specify that the vase didn’t just fall over on its own.
In case 3, agency is not only indefinite, but it also may not have a subject. Perhaps, a cat knocked it off—or the wind or an earth tremor.
In the final case, 4, the agent is removed from the conversation altogether. All that is conveyed is that the vase fell from a stand.
One might want to argue, “So what?” but this is not simply a convention of language; it stems from perception—or perhaps perception was altered by language through acculturation, but let's not quibble here. It determines what someone pays attention to. When an event is witnessed, a person from a culture where agency is a strong component is more apt to remember the culprit, whereas a witness from a non-agency-focused culture is less likely to recall attributes of the person who may have knocked the vase over. Practically, this leads to issues of blame and culpability. Clearly, a culture with an agent orientation might be quicker to assess blame, whereas blame would be further removed from the conversation in a different cultural perspective. I am speculating here, but I don't feel it's a large logical leap.
In a retributive justice system, the language that assigns agency is more likely to mete out harsher punishments because he broke the vase, it wasn’t simply broken. The use of language guides our reasoning. This leads me to wonder whether those who are ‘tough on crime‘ use different language construction than those who are more lenient.
Enfin
I just wanted to share my thoughts and connect language with cognition. I don’t think that the connection is necessarily strong or profound, but there is something, and there are more language nuances than noted here.
I recently posted a YouTube Short video titled You Have No Free Will, but this is still debatable.
Video: You Have No Free Will
The premise of the argument against free will is that human decisions are made approximately half a second before we are conscious of them, and then the conscious brain convinces itself that it just made a choice. This sounds pretty damning, but let's step back for a moment.
Podcast: Audio rendition of this page content
If you’ve been following this blog these past few months, you’ll be aware that I feel the question of free will is a pseudo-question hinging primarily on semantics. As well, there’s the causa sui argument that I’d like to ignore for the purpose of this post.
There remains a semantic issue. The free will argument is centred around the notion that a person or agent has control or agency over their choices. This means that how we define the agent matters.
In the study this references, the authors define the agent as having conscious awareness. Since this awareness occurs after the decision is made, the person must have had no agency. But I think an argument can be made that the earlier decision gateway is formed through prior experience. Applying computer metaphors, we can say that this pre-consciousness is like embedded hardware or read-only logic. It's like autopilot.
In business, there are various decision management schemes. In particular, the conscious but slow version is for a person to be notified to approve or deny a request. But some decisions are automatic. If a purchase is over, say, 50,000, then a manager needs to sign off on the request. But if the purchase is under 50,000, then the request is approved automatically and the manager is notified for later review if so desired.
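A minimal sketch of that gateway, assuming the 50,000 threshold from the example above (everything else here is invented for illustration):

```python
# Rough sketch of the approval-gateway analogy; the 50,000 threshold comes
# from the example above, everything else is hypothetical illustration.
APPROVAL_THRESHOLD = 50_000

def process_purchase(amount: float) -> str:
    if amount > APPROVAL_THRESHOLD:
        # the slow, "conscious" path: a manager must actively sign off
        return "queued for manager approval"
    # the fast, "pre-conscious" path: auto-approved, manager notified afterwards
    return "auto-approved; manager notified for later review"

print(process_purchase(72_000))  # queued for manager approval
print(process_purchase(8_500))   # auto-approved; manager notified for later review
```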
I am not saying that I buy into this definition, but I think the argument could be made.
You might not know it by the number of posts discussing it, but I am not really concerned about whether or not free will really exists. I don’t lose any sleep over it. At the same time, I tend to react to it. Since I feel it’s a pseudo-problem where tweaking the definition slightly can flip the answer on its head, it’s just not worth the effort. On to better things.
In pursuit of my travail intellectuel, I stumbled on a thought experiment proposed by Richard Taylor regarding an old crowd favourite, Sisyphus.
Of course, Albert Camus famously published his Myth of Sisyphus essay (PDF), portraying Sisyphus's life as analogous to that of the workaday human, absurdly plodding through existence like rinse-and-repeat clockwork—same gig on a different day.
Given my perspective on human agency and the causa sui argument, I felt commenting on Taylor’s essay, The Meaning of Life (PDF) would be apt.
The story of Sisyphus finds the namesake character fated by the gods to push a stone up a hill each day, only for it to roll back down for him to push it up again, ad infinitum. Camus leaves us with the prompt, ‘One must imagine Sisyphus happy’. But must we?
As Taylor puts it,
Sisyphus, it will be remembered, betrayed divine secrets to mortals, and for this he was condemned by the gods to roll a stone to the top of a hill, the stone then immediately to roll back down, again to be pushed to the top by Sisyphus, to roll down once more, and so on again and again, forever. Now in this we have the picture of meaningless, pointless toil, of a meaningless existence that is absolutely never redeemed.
Taylor wants us to consider an amended Sisyphus. He writes,
Let us suppose that the gods, while condemning Sisyphus to the fate just described, at the same time, as an afterthought, waxed perversely merciful by implanting in him a strange and irrational impulse; namely, a compulsive impulse to roll stones.
This significantly alters the dynamic. In this scenario, Sisyphus is not toiling; rather, he is pursuing his passion—following his heart. This is the athlete, artist, politician, or mass murderer following their passion. In fact, one might say that he is being his authentic self. He has no control over his self or his desire to roll stones, but he is in his element.
Taylor’s ultimate point is that in either case, the life of Sisyphus is just as devoid of meaning. Ostensibly, nothing can provide meaning. The best one can do is to have the perception of meaning. He writes,
Sisyphus’ existence would have meaning if there were some point to his labors, if his efforts ever culminated in something that was not just an occasion for fresh labors of the same kind. But that is precisely the meaning it lacks.
Although we cannot control what is within, contentment and happiness derive from perception. As we might be reminded by the quip attributed to Schopenhauer,
We can want what we will, but we can’t will what we will.
In the end, Taylor wants us to know that nothing out there can make us happy.
The meaning of life is from within us, it is not bestowed from without, and it far exceeds in both its beauty and permanence any heaven of which men have ever dreamed or yearned for.
I’ve subsequently read some critiques of Taylor’s position, but I don’t want to take the time to rejoin them. Suffice it to say that I find them to be weak and wanting.
When I was writing my review of Elbow Room, this categorical syllogism came to mind:
P1: All agents are responsible
P2: I am an agent
C: Therefore, I am responsible
Now I want to unpack it.
Podcast: Audio rendition of this page content
The first premise is that all agents are responsible. Of course, this hinges on how one defines agent and responsibility. It also depends on the scope, especially of the agent but to some extent also the scope of responsibility.
Leveraging the causa sui argument, the agent is a social construct and can only be responsible to the extent that s/he has been programmed, along with the ability to maintain and process the programming effectively—without bugs, to continue with the parlance.
If the agent is immature or defective, expectations of responsibility are diminished.
If certain inputs were not given, there is no reason to assume a related command would be executed. This is why so much time and energy is spent on programming and evaluating children.
This first premise is predicated on the pathological need to blame. Unwritten behind the responsibility claim is that I feel compelled to blame. Blame requires responsibility, so if I want to blame someone, they must be responsible. In any given circumstance, I may feel the urge to blame anyone, so all agents [eligible people] are worthy of blame. There is no particular reason to exclude myself, so I too am blameworthy. What's good for the goose is good for the gander, eh?
Goose and Gander. Strike that pose.
As PF Strawson said, even if moral responsibility couldn’t possibly exist, it would be invented because people need to blame. This is in line with Voltaire’s commentary on God.
Si Dieu n’existait pas, il faudrait l’inventer. If God did not exist, it would be necessary to invent him.
Voltaire
We can all look around and see how pervasive the god delusion is. Moral responsibility is even more insidious. In principle, moral gods were invented for just this purpose. An omnipresent judge was needed to keep the big house in check.
Where I Stand
From my perspective, I do feel that a person in the space of Dennett’s elbow room can have responsibility. Being a non-cognitivist, I have more difficulty accepting the arbitrary imposition of morality, but I understand the motivation behind it.
The problem I have is that there are no mechanisms in place to ensure that the inputs and processes are all in order and that there are no superseding instructions. Moreover, if a superseding instruction does not comport with the will of the power structure, it will be marginalised or ignored. This is a limitation of morality being a social construct, and none of this gets past the ex nihilo problem that causa sui invokes, so we end up cursing the computer we've invented. O! monster of Frankenstein. O! Pygmalion.
Daniel Dennett is quite the prolific writer. He first published Elbow Room back in 1984. He published an updated version in 2015. I like Dan. He is a master storyteller with a mind like a steel trap, archiving decades (and centuries) of information. The approach he takes is thoughtful and methodical, and I tend to agree with most of his positions. This isn't one of them. Interestingly, I recently reviewed John Martin Fischer's contribution to Four Views on Free Will, which is sympathetic to Dennett's position.
Dennett is a compatibilist. I am an incompatibilist—an impossibilist, really—but I wanted to understand his line of argumentation. Like Fischer, Dennett wants to claim that an agent does possess enough elbow room—wiggle room—to be granted free will or moral responsibility, depending on where you prefer to draw the line.
Dennett tends to agree with my position that free will is a semantic pseudo-problem, but he doesn't mind calling enough ‘good enough’. Given a situation and circumstances, we have enough latitude to consider any actions to be free—with the usual exemptions for non compos mentis situations, cognitive deficits, and duress. He dismisses the impact of genetics and upbringing as insignificant.
Basically, he argues that what latitude we do have is sufficient and what more could one want? Anything more would be unnecessary and excessive. Of course, this is just him drawing an arbitrary line at a point he feels comfortable, claiming that anyone asking for more is being unrealistically unreasonable. This feels a bit like a preemptive ad hominem defence. If you want this, then you are just foolish and selfish.
Dennett does agree with the notion that the world might be deterministic, but even so, we are proximately special. He also leans on the observation that people seem hardwired for blame, so there must be something behind this—instead of considering that humans seem hardwired for many things, not all of which are socially beneficial.
We want to hold people responsible, so by extension, we need to consider ourselves to be responsible.
P1: All agents are responsible
P2: I am an agent
C: Therefore, I am responsible
But the problem is in the definition of agency (as well as the scope and meaning of responsibility and the assignment of responsibility to agents).
In the end, I remain unconvinced, primarily because he fails to overcome the causa sui argument.