Cerebral Hemisphere Differences: Pattern Definition

Continuing with a quick post based on observations in The Master and His Emissary by Iain McGilchrist, another example of hemispheric specialisation is illustrated in the image below.


A typical person will envisage this large S composed of smaller Ss (which could be replaced with any symbols, so there is nothing special about an S made of Ss), as represented by the centre image in the rendition of the bilateral interpretations.

Where there is left hemisphere damage, the right hemisphere would envisage something more like the S on the right—seeing the big picture but losing detail. Where there is right hemisphere damage, the left hemisphere would perceive something more like the S on the left: the detail of the component Ss without recognising that they compose a bigger picture. The latter is the familiar case of missing the forest for the trees, whilst the former, right-hemisphere-dominant view might not realise that the forest has trees.

But even this misses the point slightly because, if you are viewing this as a typical person, you can assemble the Ss on the left and realise that they make a larger S, whereas a person with right hemisphere damage will just see a mass of Ss and never the larger S shape. Moreover, it’s not that the right hemisphere wouldn’t ‘see’ the smaller component Ss; it just wouldn’t put any significance on them, ignoring them as background noise.

I really do want to write about the non-stereoscopic animals as well as share another instalment from The Matter with Things. At least one of these is probable for the next post.

Cerebral Hemisphere Differences: The Colour and the Shape

One key aspect of left and right hemisphere differences is the notion of identification versus naming and categorisation. I tend to view the right hemisphere as rather Zen. It just sees things as they are without particular care, judgment, or attachment.

2 Orange Circles as tentatively experienced by the left and right hemispheres of the brain.

I composited a quick illustration to convey the difference. Starting with the right hemisphere, the object is recognised in a global context. Whilst it can be distinguished from a blue thing or a square thing, this is done by holistically surveying the world. The difference is perceived, but without reflection on memory.

Generally speaking, both hemispheres ‘see’ the object, but where the right hemisphere is interested in the object as presented, the left hemisphere is interested in re-presentation. Where the right hemisphere is about being open to the experience itself, the left closes.

From an evolutionary vantage, the right hemisphere is interested in surveying the world at large and being alert to potential danger or survival cues, perhaps a food source. If the right hemisphere is triggered, the left hemisphere jumps in. This said, the left hemisphere is tightly focused, so if something does alert it—remembering that it is not switched off awaiting the right brain to activate it—it will respond more quickly than the right hemisphere, though as I’ve noted previously, accuracy is not its forte, as the right hemisphere may have to convey that the snake that startled you was, in fact, a garden hose.

The left hemisphere is where categorisation and naming take place. Moreover, it stores the object for later retrieval, creating a map. If a subsequent observation is made, it is compared and contrasted relative to the map. After enough observations are made, the left brain isn’t so interested in observing the external world. It perceives a circle-y shape or perhaps an orangy colour and is convinced (metaphorically) that its cached version is satisfactory.
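To make the ‘cached version’ idea concrete, here is a deliberately crude sketch in Python of a categoriser that stops consulting the world once a stored prototype seems close enough. It is only an illustration of the metaphor above; the class name, features, and threshold are all invented, and nothing here models actual neural processing.

```python
# Illustrative only: a toy 'categoriser' that, like the left hemisphere as
# described above, stops inspecting the world closely once a cached
# prototype is good enough. Names, features, and thresholds are invented.

def similarity(a, b):
    # Crude feature overlap between two observations (dicts of features).
    shared = sum(1 for k in a if b.get(k) == a[k])
    return shared / max(len(a), 1)

class LazyCategoriser:
    def __init__(self, match_threshold=0.5):
        self.prototypes = {}              # name -> cached representation (the 'map')
        self.match_threshold = match_threshold

    def observe(self, name, features):
        # First encounters are stored as prototypes.
        self.prototypes.setdefault(name, dict(features))

    def identify(self, features):
        # Later observations are compared against the map, not the world.
        best_name, best_score = None, 0.0
        for name, proto in self.prototypes.items():
            score = similarity(features, proto)
            if score > best_score:
                best_name, best_score = name, score
        if best_score >= self.match_threshold:
            return best_name              # circle-y, orangey... good enough
        return None                       # novel: actually look at the thing

categoriser = LazyCategoriser()
categoriser.observe("orange circle", {"shape": "circle", "colour": "orange"})
print(categoriser.identify({"shape": "circle", "colour": "orangey"}))
```

Run it and the near-match ‘orangey circle’ is waved through against the cached prototype without any fresh inspection, which is the left-hemisphere habit being described.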

There is a book named Drawing on the Right Side of the Brain. I don’t want to comment on the book in depth, save to say that the author’s premise is that the so-called left-hemisphere person will look at the face of a subject and draw a generic oval shape. The eyes will be general eye shapes, following the same pattern for the nose and mouth. In the end, they will have rendered a portrait on the level of a child’s.

Whistler’s Mother, a restoration as reimagined by Mr Bean in the movie Bean.

The artist who inhabits the right brain will instead note the contours, shadows, and colours of the face in front of them. One exercise I learned in an art class years before I read this was to draw from an inverted portrait. Because inverted faces are not so common as upright ones, the left brain has no representations modelled and so defers to the right hemisphere, which is now looking at the object—the terrain—rather than the model.

Inverted image of Igor Stravinsky, a popular subject for breaking left-brain fixedness

I find the divided hemisphere activity in animals without stereoscopic vision to be fascinating. Perhaps, I’ll comment on this next.


* I am not claiming that the right hemisphere sees the world as fuzzy or hazy. Rather, this was me taking artistic licence to not ascribe strict boundaries to the objects in the world, especially as constrained by language.

VIDEO: The Truth about Truth

I wrote about this content in 2019, but I wanted to revisit it for a video as well as create a podcast audio version.

Video: YouTube version of this page content

In today’s segment, I am going to share my perspectives on the truth about truth. To start, I’ll let the audience know that I do not believe in the notion of truth. I feel the term is ill-defined, especially in the realms of metaphysics and morality. I feel that when most people employ the word ‘truth’, what they mean to say is ‘fact’. That a fire engine is red, for example, may be a fact, if indeed the fire engine happens to be red, but it is not true. This is a misapplication of the term. If you employ truth as a direct synonym for fact, then this is not what’s being discussed here, and perhaps your time might be better spent watching some content by the Critical Drinker.

My argument is that truth is not objective. Rather, it is subjective and perspectival. I concede that there may be some objective truth out there somewhere, but it is not and will not ever be accessible to us because of limitations in our sense-perception and cognitive faculties. Per Aristotle, we only have five senses with which we can connect to the world, and these senses are limited. If there is anything out there that would require another sense receptor—a sense receptor not available to us—we would never be able to sense it, to even know of its existence. Perhaps the universe emits 100 sense signals, but we are only capable of receiving and translating five. We’d be oblivious to 95 per cent of reality.

I am not making any claims that this is the case, but human cognition is so limited, that we can’t even conceive of what another sense might be. If you can, please leave a comment.

To be clear, I am not talking about senses we know other species possess. Bats may have echolocation, and sharks may have electroreception. Some animals may have greater sensory acuity—superior vision and auditory senses, olfactory and gustatory, tactile, or whatever. Some can see into infrared or ultraviolet light spectra. Technology that includes biomimicry provides humans with microscopes for the microworld and telescopes for the macroworld. We have x-rays and sonar and radar, radios and televisions that extend our senses, but these provide no new sensory receptors.

Like the story of the blind people and the elephant, we are left grasping at parts. But even if we are able to step back to view the whole elephant, to hear the elephant, to touch and smell or even taste the elephant, if there is more to the elephant, we cannot know it. The same goes for ourselves.

I know that some people might inject gods or psychic or paranormal energy into this void, and sure, feel free, but I am looking beyond these pedestrian concepts. What else might there be?

But let’s depart this train and head in a different direction. I want us to focus on the senses we do have. For the typical human, sight is our primary arbiter of reality, at least as defined idiomatically. We tend to believe what we see, and what we see, we assume as real—even if we are later mistaken. I guess that wasn’t a unicorn or a pink elephant. I must have been hallucinating or dreaming. I could have sworn that was Auntie Em.

There are several competing theories of truth, but I’ll focus on the correspondence theory, which is, simply put, the notion that, proxying reality for truth, human perception corresponds with the real world. And a pragmatist might argue that’s close enough for government work.

Keep in mind that, historically, humans have contorted themselves in making their calculations work. Remember how long people tied themselves into knots to preserve planetary motion in a geocentric system, creating epicycles and retrograde motion to map understanding onto a perceived reality.

One might even argue that we’ve progressed. It wasn’t true or accurate then, but now it is. And perhaps it is. Let’s look at some illustrations.

NB: Due to an editorial mishap, this paragraph was dropped in the podcast, hence dropped from the video, which shared the podcast audio source. As such, this image was also not used in the video. This is unfortunate, as it was meant to introduce those with limited maths knowledge to the asymptotic curve, as described. Apologies, and I hope this serves to orient any travellers who may have lost their way at this point.

In this first illustration, we see Truth (or relative truthiness) on the Y-axis and Time on the X-axis. On the top, we see a threshold representing Reality. In the plane, I’ve rendered an asymptotic curve, where over time, we get closer and closer to the Truth. But we never quite get there. More on this later.
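For readers who want the curve in symbols, one convenient asymptotic form (my own illustrative choice of function, not anything taken from the chart’s source) is the following, where T(t) is relative truthiness at time t, R is the Reality threshold, and k > 0 sets the pace of approach:

```latex
T(t) = R\left(1 - e^{-kt}\right), \qquad \lim_{t \to \infty} T(t) = R, \qquad T(t) < R \;\text{ for all finite } t
```

The second and third expressions capture the two properties the chart relies on: the curve creeps ever closer to Reality yet never actually touches it.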

The next illustration will help to demonstrate what’s happening.

Notice there is a gap between the curve and the Reality cap. For one thing, we don’t really know where we are relative to Reality. In the case of the geocentric system, we might have been at the leftmost space. Once we determined that the system is actually heliocentric, we might have moved right on the curve to close the gap. We might be tempted to defend that we’ve finally reached the truth, but we’d have been equally willing to make the same defence from the geocentric position, so we need to be mindful of the past.

Perhaps, this last example was too obvious. We feel comfortable staking a truth claim—or at least a claim of fact. So let’s look at another example.

Let’s re-use the same axes—Truth and Time—but rather than an asymptotic curve, let’s presume something more polynomial in nature—or not particularly cyclic. Rather than retrograde motion in planets, let’s visit the supposed progress of Einsteinian over Newtonian physics.

This takes a bit more setup but bear with me.  In this case, I have taken liberties and illustrated the Einsteinian physics gap to capture an inferior vantage on reality over Newtonian physics. Granted, I need to rely on a bit of suspension of disbelief, but in the bigger picture, I am trying to convey a scenario where some new paradigm puts the prior knowledge in perspective.

In this instance, both Newtonian and Einsteinian flavours of physics are based on a materialistic, particles-based model, which is where the modern physics consensus resides. But let’s say that consensus changes in such a way that it is determined that something else underlies reality, say consciousness per the Analytic Idealism proposed by Bernardo Kastrup, the Integrated Information Theory (IIT) advanced by Giulio Tononi, or the conscious realism advanced by Donald Hoffman. As with retrograde motion, we might end up finding that we were barking up the wrong tree. This might be a bit different because the particles would be a directly perceived manifestation of the underlying consciousness, but I wanted to create a scenario where knowledge thought to have advanced had actually regressed, and this wasn’t revealed until a new perspective was available.

Yet again, an important aspect of note is that we don’t actually know the distance between our perceptions and real Reality.

This last illustration builds upon the first asymptotic chart but has an in-built error margin meant to reflect language insufficiencies. There is some concept that people feel they grasp, but the consensus is not as unified as the group thinks.

I’ll share two examples, the first being the concept of justice. To me, Justice is what I deem a weasel word. It’s a word we commonly use, but it means different things to different people. To me, it’s a euphemism for vengeance by proxy, but for others, it transcends that and mirrors some impartial dispensation of just desert—some good old-fashioned law and order.

[Justice is] a euphemism for vengeance by proxy

Without getting stuck down some rabbit hole, my point is that if we aggregate these beliefs, the asymptotic curve represents an average consensus vantage rather than something as obvious as 2 plus 2 equals 4. On this note, allow me to clear the air.

Some viewers might be clamouring to say, “but 2 plus 2 equals four is true.” But this is tautologically true, which is to say that it’s true by definition. It’s a similar tautology to saying that it’s true that snow is white, or coal is black. We’ve already defined snow, white, coal, and black, so these may be facts, but they are true by definition.

Revisiting the chart, notice that there are two curves in the space. In this case, I illustrate competing truth claims from the perspective of an omniscient narrator. The case is whether the earth is an oblate spheroid or is flat. I am going to go out on a limb and assert the earth is spherical, as represented by the top blue curve—and we have some margin of error as to what that might mean. The bottom red curve depicts the perceived truth of the flat earthers, who also have some room for semantic error.

Given that I am presuming I am in the right in adopting the majority position—please be right—the blue curve is closer to Reality than the red curve. Of course, in the event that the earth is really flat, then it proves my point that we don’t know where we are relative to truth, so we assume that the state of knowledge at any given time is what’s real.

Again, forgive my fanciful examples. Please don’t tell me that this spheroid versus planar earth example is tautological too, because you’d be correct, and I am already aware. They are just nonsensical illustrations. Nonetheless, I hope they’ve served to express a point.

I could have as well created curves that depicted two cohorts’ beliefs on the efficacy of tarot or astrology in predicting the future. I am sure that it might render somewhat like the last chart, but I’d also presume that both curves would have very low truth values as seen from an objective observer. Secretly, I hope tarot wins the truth battle.

Before I end our time together, I’d like to convey that for an Analytic Idealist, these charts might be more acceptable at face value. A Realist, Naïve or otherwise, may argue that the curve is not asymptotic and may in fact reach some tangency with Reality. I don’t happen to believe this is the case, or I wouldn’t have spent my time assembling and presenting this. Time will tell. Or will it?

VIDEO: The Problem with Postmodernism

The theme of this Institute of Art and Ideas video is ‘Should we move away from postmodernism?’


EDIT: Find my version of this content on YouTube:

Video: Postmodern Defence

At the start, I feel, as usual, that the definition of postmodernism is nebulous, and the forum agrees, methinks. Toward the end, Hilary Lawson concedes that key actors tied to the early postmodern movement denied being postmodernists, singling out Foucault and Derrida. More on this below. Keep reading.

Julian Baggini, the bloke sat on the left and whose positions I am only getting familiar with, starts off the clip. He makes some points, some of which I agree with and others not so much.

He makes a play at claiming that there is some objective truth to be attained, following on with the statement that without this notion, anything goes. I disagree with both of these assertions. Then he cites Thomas Nagel’s The View from Nowhere, wherein Nagel posits that subjectivity and objectivity are extrema on a spectrum and that experience is somewhere in between. This conforms to my beliefs, but there are two provisos. First, the extremum of objective truth is unattainable, objectively speaking. Second, as I’ve written before, we have no way of adjudicating whether a given observation is truer than another. It seems that he leaves it that we don’t need to know the absolute truth to know ‘true enough’, but I think this is both a copout and wrong—though not too wrong for pragmatism to operate.

For example—not mentioned in the clip—I can imagine that physicists feel that Einsteinian motion physics is truer than Newtonian physics, especially as we need to take measurements nearer to the speed of light. In my thinking, this might provide a better approximation of our notion of the world, but I can also conceive of an Idealist, non-materialistic perspective from which both of these are rubbish from the vantage of truth. I feel that people tend to conflate truth with utility.

Julian makes an interesting point about semantics with the claim that “some people” define certain things in such a way as to not possibly be attainable and then claim victory. But what are his three examples? Free will, the self, and objectivity. If you’ve been following me, you’ll know that I might be in his crosshairs because I tend to be in the camp that sees these concepts as sketchy. And to be fair, his claim of defining something in a manner to keep a concept out of bounds is the other side of the same coin as defining something in such a way as to get it into bounds.

The self is different to free will insomuch as it’s a construction. As with any construction, it can exist, but it’s a fiction.

I’ve spoken at length about my position on free will, but I am fairly agnostic and don’t particularly care either way. I feel that the causa sui argument as it applies to human agency is more important in the end. The self is different to free will insomuch as it’s a construction. As with any construction, it can exist, but it’s a fiction. Without interacting with Julian or reading his published works on the self, if there are any, I don’t know how he defines it. And here we are discussing objectivity.

Given Nagel’s objective-subjective polarity, it seems they want to paint postmodernism as claiming that everything is subjective and that science (and religion) hold claims to objectivity. Hilary Lawson, the geezer on the right, takes a position between extremes, but he denounces Julian’s claim about objective truth, noting that many people (especially of religious persuasions) make claims on Truth that are diametrically opposed, ostensibly labelling the same object simultaneously black and white. And the object, for all intents and purposes, is red.

I’ve gotten out of order, but Julie Bindel makes some good points on feminism and suggests that the philosophical feminists—may I call them pheminists? No? OK then—such as Judith Butler have set women’s rights back by claiming that the category of ‘woman’ is invalid. Minna Salami defended Butler by noting that she has helped constructively in some ways and, citing Simone de Beauvoir, that woman is a category established by men to create the Other sex. Still, Julie—not incorrectly—states that without a category, women (or whatever collective term one decides is representative) cannot be afforded legal protections—because law, as facile as it is, is all about categories and classes.

Hilary reenters the fray and states that it is not acceptable for one person to claim that their lived experience is all that is needed just because that is their truth. To be fair, this feels like a bit of a strawman argument. Perhaps I need to get out more, but I am not familiar with anyone credible making this claim.

I enjoyed watching this clip and processing the information. I hope you do as well. If you have any comments, I’d love to read them.

Using AI to Decode Speech from Brain Activity

Apologies in advance for sharing PR hype from Meta (formerly known as Facebook), but I want to comment on the essence of the idea, which is using AI to decode speech from brain activity. It seems to imply that one would apply supervised machine learning to train a system to map speech to brain activity, as illustrated by the image below.

To decode speech from noninvasive brain signals, we train a model with contrastive learning to align speech and its corresponding brain activity

The dataset would require captured patterns from a large enough sample; in this case, it appears to have involved some 417 volunteers.

Activations of wav2vec 2.0 (left) map onto the brain (right) in response to the same speech sounds. The representations of the first layers of this algorithm (cool colours) map onto the early auditory cortex, whereas the deepest layers map onto high-level brain regions (e.g. prefrontal and parietal cortex).
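The captions above mention contrastive learning as the training objective. I am not privy to Meta’s actual code, so the following is only a generic sketch of the idea in Python with PyTorch: embed a speech segment and the corresponding brain recording separately, project both into a shared space, and train so that matching pairs score higher than mismatched ones (a CLIP/InfoNCE-style loss). All module names, dimensions, and numbers are invented for illustration.

```python
# Illustrative sketch only: generic contrastive alignment of speech and
# brain-signal embeddings. Shapes and module names are invented and do not
# reflect the actual architecture described in the press release.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechBrainAligner(nn.Module):
    def __init__(self, speech_dim=512, brain_dim=273, shared_dim=128):
        super().__init__()
        self.speech_proj = nn.Linear(speech_dim, shared_dim)  # e.g. features from a speech model
        self.brain_proj = nn.Linear(brain_dim, shared_dim)    # e.g. features from MEG/EEG sensors

    def forward(self, speech_feats, brain_feats):
        s = F.normalize(self.speech_proj(speech_feats), dim=-1)
        b = F.normalize(self.brain_proj(brain_feats), dim=-1)
        return s @ b.T                                        # pairwise cosine similarities

def contrastive_loss(similarity, temperature=0.07):
    # Matching speech/brain pairs sit on the diagonal; everything else is a negative.
    targets = torch.arange(similarity.size(0))
    logits = similarity / temperature
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Toy batch: 8 speech segments paired with 8 simultaneous brain recordings.
model = SpeechBrainAligner()
speech = torch.randn(8, 512)
brain = torch.randn(8, 273)
loss = contrastive_loss(model(speech, brain))
loss.backward()  # gradients would pull matching pairs together in the shared space
```

The appeal of this setup is that it needs no hand-labelled categories; the pairing of each audio segment with its simultaneous brain recording supplies the supervision.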

This feels like it could have many commercial, consumer, and industrial uses including removing other human-computer interface devices, notably keyboards, but perhaps even mouses. Yes, I said mouses. Sue me.

Given hypotheses relating language and cognition, I wonder what could be gleaned by mapping speakers of multiple different native languages to their cognitive processes and then remapping those processes to speech output. If such a system could arrive at some common underlying grammar, it might render a given thought stream into any known (and mapped) language, allowing for instantaneous ‘translation’.

Of course, a longer-term goal would be to skip the external devices and interface brain to brain. This sounds like scary, rogue science fiction, as one might imagine an external device trained on a brain to read its contents. One of the last things this world needs is to have to worry about neuro-rights and being monitored for thought crimes. Come to think of it, isn’t there already a book on this? Never mind. Probably not.

Technology is generally not inherently harmful or helpful, as that is determined by use. Humans do seem to tend toward the nefarious. Where do you think this will go? Leave a comment.

Video: Blame and Causa Sui

In this segment, I ponder the interplay between blame and causa sui. I’ll discuss the implications for moral responsibility as well as legal responsibility, which are not as in sync as one might imagine.

Video: Blame & Causa Sui

For the uninitiated: Western legal systems make no pretence of being about morality or justice. Legal systems are designed to maintain power structures and the status quo. They are deontological machines, making them prime targets for automation by the machine learning associated with artificial intelligence. This would also diminish the power of rhetoric over facts to some extent. But I am no legal scholar, and all of this will have to wait for another segment.

I recently shared a video on causa sui and the basics of blame and blameworthiness, so I want to intersect those topics here.

Peter Strawson suggested that, for humans, blame is a reactive response. It’s reflexive, like your knee jerking when tapped. Essentially, his position is that if blame didn’t naturally exist, we’d have to invent it, mirroring Voltaire’s quip, ‘If God did not exist, it would be necessary to invent Him’. Of course, this is because they serve the same power-control purpose.

If God did not exist, it would be necessary to invent Him

Voltaire

To be fair, blame is closer to real than God, but the point remains. Strawson’s point is also that humans are saddled with blame and it’s not going anywhere no matter how nebulous it becomes in execution. It’s natural.

To me, this starts to sound suspiciously like a naturalistic fallacy. Humans seem to cherry-pick which so-called natural tendencies they choose to defend. One might use nature to argue that female sexual availability begins at menstruation, and yet we have decided to ignore this and defer it on the grounds of civility. It’s obvious that we could consider blame to be an animal instinct we’d want to domesticate away, but because it serves other purposes, per Strawson’s perspective, it’s a useful tool.
But what’s the causa sui challenge? Let’s quickly recapitulate.

Causa sui argues that one cannot be the cause of oneself, ex nihilo. Being full products of nature and nurture, to adopt the lay parlance, any blameworthiness lies with the sources or creators. Since we are concerned with moral responsibility, we can eliminate nature forthrightly: nature may be responsible—by many estimations, approximately 40 per cent responsible—but it possesses no moral agency. And if the individual is not responsible, then we are left with the environment and society, including the social environment. Of course, the environment gets off the hook in the same manner as the genetic and hereditary factors of nature.

Before we consider society, let’s regard the individual.

Although the brain-as-computer analogy is a bit facile, it’s still good enough for illustrative purposes. When you are born, your cognitive hardware is installed, as are your edge peripherals and update protocols. Any of these can become damaged through degenerative processes or external environmental factors, but since my interest is in optimistic rather than pessimistic scenarios, I’ll ignore these instances. Given that blameworthiness is directly related to presumed cognitive processing, factors that diminish these faculties mitigate blameworthiness, and factors that augment them amplify it.

As a—quote—‘normal’ child becomes an adolescent and then an adult, the probability that they will be held blameworthy increases with age, ceteris paribus. Cognitive deficits or conditions such as aphasia or dementia decrease the probability of blame assignment. Even temporary impairment mitigates judgement—oh, she was drunk.

So, following the brain-as-computer analogy, your brain is a CPU with a self-updating cognitive operating system and instruction set. There are also, essentially, short- and long-term memory stores.
In the case of cognitive deficits, one of these components might be effectively broken. The CPU might process too slowly; it might misinterpret what it receives; there may be issues with the sense organs or the nerves that transport signals.

I’ve got a mate who, due to medical malpractice at birth, experienced nerve damage. Although his eyes and brain are normal, his optic nerve cannot carry signals very well, effectively leaving him blind. Neither can he taste nor smell. So there’s that.

But assuming that this processing and storage hardware is intact, the causa sui constraint still applies. Even so, let’s spend some time evaluating societal interactions.

All inputs come from society—cultures and subcultures. Apart from misinterpreted-processing scenarios, if a person doesn’t receive a particular moral instruction set, that person should surely be considered exempt from moral blame. It may be difficult to assess whether an instruction has been input. This is a reason why children are categorically exempted: they may not have received all of the expected moral codes, the codes they did receive may not have been stored or effectively indexed, and their processing hardware is still in development—alpha code, if you will. Brain plasticity is another attribute I won’t spend much time on, but the current state of science says that the brain is still not fully developed even by age 30, so this is certainly a mitigating factor, even if we allow leeway for the causa sui argument.
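To see the instruction-set framing laid out as bare logic, here is a deliberately crude Python caricature of how the mitigating factors above combine. It is not a claim about how courts or psychologists actually assess anything; every input is invented, and causa sui would still loom over whatever it returns.

```python
# Caricature only: the mitigating factors discussed above as explicit logic.
# Every input here is invented for illustration.

def blameworthy(received_moral_code, hardware_intact, developed, impaired):
    """Return whether conventional blame would even get off the ground."""
    if not received_moral_code:   # instruction never delivered (or never indexed)
        return False
    if not hardware_intact:       # cognitive deficits, damaged 'peripherals'
        return False
    if not developed:             # children and adolescents: alpha-stage hardware
        return False
    if impaired:                  # temporary impairment ('oh, she was drunk')
        return False
    return True                   # ...and causa sui still hangs over this result

# A typical adult raised in the prevailing culture:
print(blameworthy(True, True, True, False))   # True, by convention at least
```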

I mention subculture explicitly because the predominant culture is not the only signal source. A child raised by, I don’t know, say pirates, would have an amended moral code. I am sure we can all think of different subcultures that might undermine or come at cross odds with the dominant culture, whether hippies, religious cultists, militia groups, racial purist groups, and so on.

So, a commonly held moral in the subdominant group may counter that of the prevailing one. An example that comes to mind is some religious organisations that do not agree with human medical intervention. There have been cases where parents have allowed a child to die from an otherwise curable condition. Although in the United States there is a claim of freedom of religion—a claim that is spotty at best—parents or guardians in situations like these have been convicted and sentenced for following their own moral codes. But as with all people, these people are as susceptible to the limitations of causa sui as the rest of us. They are not responsible for creating themselves, yet moral responsibility was asserted based on the beliefs of the prevailing culture. Even outside the legal context, persons in the larger society would likely blame the parents for their neglect—though they may be praised for being resolute in their righteousness by their in-group. This just underscores that morality is a collection of socially constructed conventions rather than something more objective.

Returning to causa sui, let’s say a person commits an act to which society would typically assign blame. Rather than exercising some act of retributive justice—a concept with no foundation in a causa sui universe—suppose the course of action is remediation. In this case, the desired moral instruction would be delivered, thereby seemingly making the moral offender blameworthy thereafter. But would they be?

Presumably (for what it’s worth), psychologists would evaluate the subject for competency in maintaining the programming. In the case of the aforementioned religious parents, they may be threatened with retribution for not abiding by the superseding rules of the prevailing power structure.

I might personally allow some leeway even with causa sui in full force and effect, but I can’t say that I have much faith in the ability of humans to make a correct assessment. My impression is that any assessment would be one of convenience rather than something sounder.

Perhaps I’ll produce a more robust segment on retributive justice, but my feeling is that retributive justice is an area that legal systems should avoid altogether. If necessary, focus on restorative justice, rehabilitation (or ‘habilitation’ as the case might be) and quarantine models to ensure any bad actors are contained away from society. Again, this puts individuals at the mercy of cultures they find themselves a part of. I am not going to delve into this any further save to remind the listener of gang initiation schemes where a person needs to kill a member of a rival gang to become a trusted member. This is their moral code—quite at odds with the mainstream.

So there you have it. Owing to causa sui constraints, a person cannot be ultimately responsible for their actions. My primary thesis is—apart from metaphorical equipment failures—that any moral responsibility falls wholly on the society or culture. Full stop. And this isn’t as foreign as one might first feel. Although blame feels natural to most people, in an individualistic society people are interested in finding the culprit; in collectivist cultures, any culprit might do. Perhaps I’ll share some stories in a future segment.
Meantime, what are your thoughts on moral responsibility? Can someone be ultimately responsible? Some have said the ‘ultimate responsibility’ is a philosophical red herring and that we can still hold someone responsible, even if not in the ultimate sense, which causa sui disallows. Are you more in this camp? Is this enough to mete out so-called retributive justice? For me, retributive justice is a euphemism for vengeance, and justice is a weasel word. But that’s just me, and perhaps a topic for another segment.

Are there any topics you’d like me to cover? Leave a comment below.

Man versus Machine

Human-designed systems seem to need a central orchestration mechanism—similar to the cognitive homunculus-observer construct that substance dualists can’t seem to escape—whereas consciousness (for want of a better name) is more likely the result of an asynchronous web, with the brain operating as a predictive difference and categorisation engine rather than the perceived cognitive coalescence we attempt to model. Until we unblock our binary fixedness, we’ll continue to fall short. Not even quantum computing will get us there if we can’t escape our own cognitive limitations in this regard. Until then, this error-correcting mechanism will be as close to an approximation of an approximation as we can hope for.

The net-input function of this machine learning algorithm operates as a heuristic for human cognition. Human-created processes can’t seem to create this decoupled, asynchronous heuristic process, instead ending up with something that looks more like a railway switching terminal.
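For anyone unfamiliar with the term, the net-input function of an artificial neuron is simply a weighted sum of its inputs plus a bias, handed to an activation function. Here is a minimal, generic sketch in Python; it is not tied to the pictured algorithm or to any particular library, and the numbers are arbitrary.

```python
# Minimal, generic net-input function for a single artificial 'neuron'.
import math

def net_input(inputs, weights, bias):
    # Weighted sum of inputs plus bias: the net input z.
    return sum(x * w for x, w in zip(inputs, weights)) + bias

def activation(z):
    # Squash the net input into (0, 1) with a logistic sigmoid.
    return 1.0 / (1.0 + math.exp(-z))

z = net_input([0.2, 0.9, -0.4], [0.5, -0.3, 0.8], bias=0.1)
print(activation(z))   # the unit's output for this toy input
```

The point of contrast stands: each unit is a tidy, centrally specified calculation, quite unlike the decoupled, asynchronous web described above.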

Cover photo: Railroad tracks stretch toward Chicago’s skyline at Metra’s A2 switching station on March 29, 2019. (Antonio Perez/Chicago Tribune); story

Unknown Dimensions

I mentioned in my last post how Artificial Intelligence discovered a new variable—or, as the claim suggests, a new physics. This was a tie-in to the possible missing dimensions of human perception models.

Without delving too deep, the idea is that we can predict activity within dynamic systems. For example, we are all likely at least familiar with Newtonian physics—postulates such as F = ma [force equals mass times acceleration] or d = vt [distance equals velocity times time] and so on. In these cases, there are three variables that appear to capture everything we need: we can predict any one of them given the other two, provided they remain constant. Of course, we’d need to employ calculus instead of algebra if they are not constant, and a dynamic system may require linear algebra instead.
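To make the algebra-versus-calculus point concrete, here are the constant-case formulas alongside their non-constant counterparts (standard physics notation, nothing exotic):

```latex
% Constant case (algebra):
F = ma, \qquad d = vt
% Non-constant case (calculus):
F = m\,\frac{dv}{dt}, \qquad d = \int_{0}^{t} v(\tau)\, d\tau
```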

When scientists represent the world, they tend to use maths. As such, they need to associate variables as proxies for physical properties and interactions in the world. The prominent statistician George Box reminds us that all models are wrong, but some are useful. He repeated this sentiment many times, instructing us to ‘remember that models are wrong: the practical question is how wrong do they have to be to not be useful’. But no matter how hard we try, a model will never be the real thing. The map cannot become the terrain, no matter how much we might expect it to. By definition, a model is always an approximation.

All models are wrong but some are useful

George Box

In the Material Idealism post, the embedded video featuring Bernardo Kastrup equated human perception to the instrumentation panel of an aeroplane. Like the purported observer in a brain, the pilot can view the instruments and perform all manner of actions to manipulate the plane, including taking off, navigating through the environment, avoiding obstacles, and then landing. But this instrumentation provides only a representation of what’s ‘really’ outside.

Like mechanisms in the body, instrumentation can be ‘wired’ to trigger all sorts of warnings and alerts, whether breached thresholds or predictions. The brain serves the function of a predictive difference engine. It’s a veritable Bayesian inference calculator. Anil Seth provides an accessible summary in Being You. It relies on the senses to deliver input. Without these sense organs, the brain would be otherwise unaware and blind to external goings-on.
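To give a flavour of what ‘Bayesian inference calculator’ means, here is a toy update in Python using the snake-versus-garden-hose moment from the hemisphere post. Every probability is invented; the only point is that a prior belief gets revised by evidence to yield a posterior.

```python
# Toy Bayesian update: was that a snake or a garden hose?
# All probabilities are invented for illustration.

prior_snake = 0.05                 # snakes are rare in this garden
prior_hose = 1 - prior_snake

# How likely is a glimpse of a 'long, coiled shape' under each hypothesis?
likelihood_given_snake = 0.9
likelihood_given_hose = 0.3

evidence = (likelihood_given_snake * prior_snake
            + likelihood_given_hose * prior_hose)

posterior_snake = likelihood_given_snake * prior_snake / evidence
print(round(posterior_snake, 3))   # ~0.136: probably still the hose, but worth a startle
```

On these made-up numbers, the glimpse nudges the snake hypothesis from 5 per cent up to roughly 14 per cent: enough to startle, not enough to settle the matter.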

The brain cannot see or hear. It interprets inputs from eyes and ears to do so. Eyes capture light-oriented events, which are transmitted to the brain via optic nerves, and brain functions interpret this information into colour and shape, polarisation and hue, depth and distance, and so on. It also differentiates these data into friend or foe signals, relative beauty, approximate texture, and such. Ears provide a similar function within their scope of perception.

As mentioned, some animals have different sense perception capabilities and limitations, but none of these captures data not also accessible to humans via external mechanisms.

Some humans experience synesthesia, where they interpret certain stimuli differently, perhaps hearing colours or smelling music. We tend to presume that they are the odd ones out, but this assumption does not make it so. Perhaps these people are actually ahead of the rest of us on an evolutionary scale. I suppose time might sort that one out.

But here’s the point. Like the pilot, we can only experience what we are instrumented to experience, as limited to our sense perception and cognition faculties. If there are events not instrumented, it will be as if they don’t exist to the pilot. Can the pilot hear what’s happening outside?

This is the point of the AI experiment referenced above. Humans modelled some dynamic process that was presumed to be ‘good enough’, with the difference written off as an error factor. Artificial Intelligence, not limited to human cognitive biases, found another variable to significantly reduce the error factor.
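This is not the referenced experiment’s method, which I only know through the coverage; the following toy Python illustration simply shows the shape of the claim: fit the same data with and without a candidate extra variable and watch the unexplained error shrink.

```python
# Toy illustration only: adding a genuinely relevant variable shrinks the
# unexplained 'error factor'. Data and variables are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                      # the candidate 'hidden' variable
y = 2.0 * x1 + 1.5 * x2 + rng.normal(scale=0.1, size=n)

def rms_residual(features, target):
    # Ordinary least-squares fit; return the root-mean-square residual.
    coef, *_ = np.linalg.lstsq(features, target, rcond=None)
    return float(np.sqrt(np.mean((target - features @ coef) ** 2)))

one_variable = rms_residual(np.column_stack([x1, np.ones(n)]), y)
two_variables = rms_residual(np.column_stack([x1, x2, np.ones(n)]), y)
print(one_variable, two_variables)   # the second model leaves far less unexplained error
```

In the toy run, the single-variable fit leaves a residual roughly the size of the hidden variable’s influence, while the two-variable fit leaves only the small noise term.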

According to the theory of evolution, humans are fitness machines. Adapt or perish. This is over-indexed on hereditary transmission and reproduction, but we are more vigilant for things that may make us thrive or perish versus aspects irrelevant to survival. Of course, some of these may be benign and ignored now but become maleficent in future. Others may not yet exist in our realm.

In either case, we can’t experience what we can’t perceive. And as Kastrup notes, some things not only evade perception but cannot even be conceived of.

I am no more privileged than the next person as to what these missing factors are, nor their ramifications, but I tend to agree that there may be unknown unknowns that are forever unknowable. I just can’t conceive what or where.


I can’t wait to get back to my Agency focus.

Motility, Automotion, and Agency

I just wrapped up chapter eleven of The Righteous Mind by Jonathan Haidt. I’ve got only 35 pages to go to get through chapter twelve. I’ve been tempted to stop reading. Chapter eleven—and I am tempted to inject a bankruptcy pun here—has been more frustrating than the rest thus far. And yet I am glad to have persisted.

My intellectual focus these past months has been on agency. Et voilà, paydirt. Chapter eleven’s title reveals the context: Religion is a Team Sport. Let’s walk through this garden together.

A goal of Haidt’s is to educate the reader on his third principle of moral psychology: morality binds and blinds. He establishes parallels between sports and religion. And here’s the thing—I don’t disagree. But here’s the other thing—I feel that they are equally vapid, with no apologies to sports fans or the religious. Let’s keep moving.

“A college football game is a superb analogy for religion.”

Jonathan Haidt, The Righteous Mind, Chapter 11: Religion Is a Team Sport

He talks about the organising and unifying functions of both. But here’s the thing. It unifies the like-minded. Haidt claims to be irreligious and not be into sports, and yet he cites these as somehow desirable. I find him to be an apologist for religion.

I am not a psychologist, but if I were, I’d be tempted to claim that Haidt’s conclusions follow from his personal beliefs. He believes in morals, society, order, intuition, and institutions. He is a textbook Modern and an extrovert to boot. I think he also falls into teleological fallacy traps. Was that a play on words?

His goal is to fuse the positions of Darwin and Durkheim. Along the way, he reminds us of the New Atheists, their publications, and their positions: Sam Harris’s The End of Faith: Religion, Terror, and the Future of Reason; Richard Dawkins’s The God Delusion; Daniel Dennett’s Breaking the Spell: Religion as a Natural Phenomenon; and Christopher Hitchens’s God Is Not Great: How Religion Poisons Everything.

Although he views religion through rose-coloured glasses, he comes to the conclusion that religions have done a great deal of harm over the millennia but that the good outweighs the bad, especially if you consider it through a social-moral lens. And if religion creates in-groups versus out-groups, which it does, and religious in-groups outlive even non-religious in-groups, then this is a winning option. But what if you don’t like that option?

Personally, I am a collectivist, but not willy-nilly for just any collective.

Haidt contrasts the New Atheist vantage that religious belief is an evolutionary byproduct with a position that what started as a byproduct evolved into group selection and then, perhaps, an epigenetic phenomenon.

Here’s my contention:

Borrowing from New Atheism, Haidt adopts the notion of a “hypersensitive agency detection device [that] is finely tuned to maximize survival, not accuracy”.

The first step in the New Atheist story—one that I won’t challenge—is the hypersensitive agency detection device. The idea makes a lot of sense: we see faces in the clouds, but never clouds in faces, because we have special cognitive modules for face detection. The face detector is on a hair trigger, and it makes almost all of its mistakes in one direction—false positives (seeing a face when no real face is present), rather than false negatives (failing to see a face that is really present). Similarly, most animals confront the challenge of distinguishing events that are caused by the presence of another animal (an agent that can move under its own power) from those that are caused by the wind, or a pinecone falling, or anything else that lacks agency.

The solution to this challenge is an agency detection module, and like the face detector, it’s on a hair trigger. It makes almost all of its mistakes in one direction—false positives (detecting an agent when none is present), rather than false negatives (failing to detect the presence of a real agent). If you want to see the hypersensitive agency detector in action, just slide your fist around under a blanket, within sight of a puppy or a kitten. If you want to know why it’s on a hair trigger, just think about which kind of error would be more costly the next time you are walking alone at night in the deep forest or a dark alley. The hypersensitive agency detection device is finely tuned to maximize survival, not accuracy.

Op Cit, p. 292

I fully agree with the assertion that the brain values fitness over truth, and I’ve commented in several posts that pareidolia and apophenia create false-positive interpretations of reality.

But now suppose that early humans, equipped with a hypersensitive agency detector, a new ability to engage in shared intentionality, and a love of stories, begin to talk about their many misperceptions. Suppose they begin attributing agency to the weather. (Thunder and lightning sure make it seem as though somebody up in the sky is angry at us.) Suppose a group of humans begins jointly creating a pantheon of invisible agents who cause the weather, and other assorted cases of good or bad fortune. Voilà—the birth of supernatural agents, not as an adaptation for anything but as a by-product of a cognitive module that is otherwise highly adaptive.

Op Cit, p. 293

For me, this supports my contention that agency is a wholly constructed fiction. The same agency we ascribe to unknown natural events, we ascribe to ourselves. And perhaps this ability served an egoistic function, which was then generalised to the larger world we inhabit.

I have an issue with his teleological bias. He feels that because we have evolved a certain way to date, this will serve as a platform for the next level, as it were. I’ll counter with a statement I often repeat: it is possible to have adapted in a way that leads to an evolutionary dead end. Historically, it’s been said that 99 per cent of species that ever occupied this earth are no longer extant. That’s a lot of evolutionary dead ends. I am aware that few species could have survived an asteroid strike or extended ice ages, but these large-scale extinction events are not the only terminal points for no-longer-extant species.

So finally, Haidt essentially says that it doesn’t matter that these religious and cultural narratives are wholly fictitious: if they promote group survival, we should adopt them. This seems to elevate the society over the individual, which is fine, but perhaps the larger world would be better off without the cancer? Just because it can survive—like some virulent strain—doesn’t mean we should keep it.

Finally, given these fictions, what’s a logical reasonable person to do? I don’t buy into ‘this country is superior to that country’ or ‘this religion is better than that religion’ or even ‘this sports team is better than that’ or ‘this company is better than that’.

Haidt does idolise Jeremy Bentham, but this is more Pollyannaism. It sounds good on paper, but as an economist, I’ll reveal that it doesn’t work in the real world. No one can effectively dimensionalise and define ‘good’, and it’s a moving target at that.

No thank you, Jonathan. I don’t want to buy what you are selling.

News Flash: From the time I started this content, I’ve since read the final chapter. Whereas I categorically reject a lot of what Haidt proposes in chapter eleven, I tend to find chapter twelve to fit more amicably with my worldview. Perhaps I’ll share my thoughts on that next.

If you’ve read this far, apologies for the disjointed presentation. I completed this over the course of a day through workaday interruptions and distractions. I wish I had an editor who could impose some continuity, but I am on to the next thing, so…

Bonus: I happened upon this journal article, and it somehow ended up here. I haven’t even read it yet, so I’ve got no commentary. Perhaps someday.

Rai, T. S., and A. P. Fiske. 2011. ‘Moral Psychology Is Relationship Regulation: Moral Motives for Unity, Hierarchy, Equality, and Proportionality.’ Psychological Review 118: 57–75.
