Fact & Fiction

“Facts are the strangest sort of fiction.”

— Bry Willis

There, I said it. Perhaps I need to help find this quote a home in a story.

Some people love to hide behind facts, convinced that they are somehow inviolable truths. There are facts, but then there are interpretations of them. Therein lies the rub.

As my dad taught me, there are three sides to every story: one side, the other side, and the truth. This truth is analogous to the fact of the matter, but it is never directly accessible, and so any stated fact is necessarily a constructed fiction, no matter how committed we are to it. This includes tautological facts, which are simply whims of nomenclature.

Retributive Injustice

I’ve already said that justice is a weasel word, but let’s pretend that it’s actually something more substantial and perhaps even real. I’ve spoken on the notion of blame as well. I have been thinking about how untenable retributive justice is, and the argument seems to extend to restorative justice, too. But let’s focus on the retributive variety for now.

In short, retributive justice is getting the punishment one deserves, and I think desert is the weak link. Without even delving into causa sui territory, I feel there are two possible deserving parties. The agent and society. Let’s regard these in turn.

The Agent

An agent, or more specifically a moral agent, is an entity that can be deemed responsible for its actions on moral grounds. Typically, moral agency assumes that an agent, an actor, is fully aware of the cultural rules of a given society, whether normative or legislated. Under this rationale, we tend to exclude inanimate objects with no agency, non-human life forms, children, and persons with diminished cognitive faculties. In some cases, this diminution may have been self-imposed, as in the case of chemically induced impairment, for example by drugs or alcohol. We might consider these entities as being broken. In any case, they do not qualify as having agency. An otherwise moral agent under duress or coercion may no longer be expected to retain agency.

Unless an informed and unimpaired agent commits an act with intent … there can be no moral desert

Unless an informed and unimpaired agent commits an act with intent, another weasely word in its own right, there can be no moral desert. But let’s hold this thought for a bit and turn our attention to society.

Society

For the purposes of this commentary, society is a group of like-minded persons who have created norms, customs, laws, and regulations. In most cases, people come into societies whose structure is already formed, and they need to acculturate and adapt, as changing the fabric of society generally takes time. Even in the case of warfare where a society is subsumed, cultural norms will persist for at least a time.

Whilst it is incumbent on a person to become aware of the rules of engagement and interaction with a society, it is reciprocally a responsibility of society to impart its norms through signalling and performance as well as through more formal training, such as public fora, schools, and activities. Even media and entertainment can serve to reinforce this function.

So What?

I argue that retributive justice is bullshit (to employ technical language) because if an informed and unimpaired agent does violate some standard or protocol, the society is at least partially to blame—perhaps fully so. Again, if the person is uninformed, a pivotal question might be why s/he is uninformed. If the person has the information but ignores it, to what extent is the person impaired, and what responsibility does society have for being unaware?

Special Case?

What if a particularly predacious person from Society A infiltrates Society B? Is the person broken, or is Society A responsible for creating a person that would prey on some other unsuspecting society? Again, the person is never entirely responsible unless s/he is broken, in which case s/he is exempt and not morally responsible.

When Then?

As I’ve said before, a person who commits an act against the interest of a society may be quarantined or perhaps exiled or shunned, as some cultures practise, but these measures are meant to preserve the cohesion of the society, not to exact a pound of flesh in retribution.

In the end, I just don’t see a use case where retribution would fall upon a single actor. If some transgression is made, how then do we ensure society pays its dues as well? In my mind, society is more apt to fail the individual than the other way around, but maybe that’s just me and my world.

What am I missing here?

The Matter with Things: Chapter Twenty Summary: The coincidentia oppositorum

I have a confession to make. I finished reading the first volume of The Matter with Things about a month ago, and I took a break from reading more of it. I finally got around to continuing, and I read chapter twenty. When I got to the end and turned to the next chapter—chapter twenty-one—it dawned on me that volume I ended at chapter nine. I had inadvertently skipped volume II and begun volume III. Oopsie. I’m lucky it wasn’t a novel, having skipped ten chapters.

Since I’ve read it, I might as well summarise it. Spoiler alert: there are no spoilers to alert. As this chapter is more about exposition and colour, this summary will be much shorter than the summaries of the first volume. I don’t know if this will be a continuing trend. We’ll find out together.

This chapter is labelled the coincidentia oppositorum, the coincidence of opposites. Effectively, the chapter wants to impart three main points.

Symmetry

Firstly, asymmetry is the norm. Symmetry is the exception. We perceive things in opposites. This brings attention to bear. Like straight lines, symmetry does not exist in nature. It is something the left-hemisphere perspective approximates. No face is symmetrical; planets are not symmetrical. In fact, if one manipulates an image of a face, mirroring one side so that the two halves form the whole, it becomes obvious that something is amiss.

Excess

The Ancient Greeks had a penchant for moderation. Buddhists have the Middle Path. Everything is poisonous in large enough quantities. Even poisons can be therapeutic at low doses. The point is to retain this perspective.

To be or not to be…or both

This is not about Schrödinger’s cat. We need to break ourselves of the habit of thinking in opposites. Not everything is a dichotomy—black and white. Some things are black and white—and not just a draughts board. McGilchrist opens the chapter with a nice Iroquois story about two brothers who were seeming opposites but were nonetheless both necessary. In a manner, this is the good-versus-evil story. Opposition strengthens us. Trees raised in a windless environment don’t have the strength of naturally grown trees.

This story is encapsulated in a story told by Rabbi Jonathan Sacks.

A faithful man finds in the scriptures that Rabbi X said that a certain thing was true. Later he finds that Rabbi Y said that the very same thing was false. He prays for guidance: ‘Who is right?’ God answers: ‘Both of them are right.’ Perplexed, the man replies: ‘But what do you mean? Surely they can’t both be right?’ To which God replies: ‘All three of you are right.’

In the chapter summary, McGilchrist ends with this:

Just as there is an asymmetry in the relationship of the hemispheres, there is an asymmetry in the coincidentia oppositorum. We need not just difference and union but the union of the two; we need, as I have urged, not just non-duality, but the non-duality of duality with non-duality; and we need not just asymmetry alone, or symmetry alone, but the asymmetry that is symmetry-and-asymmetry taken together.

Summary

As I mentioned at the start, this is a short summary. I really enjoyed this chapter and its lessons. It’s nice to be reminded of such things. This extends to the asymmetry of the hemispheres of the brain. As much as I don’t appreciate the imbalance of the left hemisphere in Modernity, I need to be reminded that we just need to tweak the dial a tad to the right. We don’t need the right hemisphere operating at eleven, to share a reference to Spinal Tap.

Language Perception

The link between language and cognition is interesting though not entirely grasped.

VIDEO: TED Talk on YouTube — Lera Boroditsky

“I speak Spanish to God, Italian to women, French to men and German to my horse.”

― Charles V, Holy Roman Emperor (probably not, but whatevs…)

Perspective

In the West, we tend to be quite self-centric. We are the centres of our universes, and this has several implications. Firstly, we orient conversation around ourselves; occasionally, we orient conversation around others. Instead, some cultures orient themselves around their world.

Self as Centre

Ordinarily, if a Westerner is asked which is their dominant hand, they might answer left or right. If they are asked to describe where something is spatially, one might answer on my left or right or above or below me. If the person asking is present, they may simply point to the object as a gesture.

Other as Centre

In some cases, we might feel it necessary to orient relative to another. The answer to the question, “Where is the book?” might be, “On your left”, or “You’ve got something on your left cheek”.

Terrain as Centre

In the West, we have notions of cardinal directions—North, East, South, and West—but we still tend to orient communication around ourselves or others. In some regions, the use of cardinal directions is more prominent than in others. For example, when I was in Boston, I didn’t find many people referencing places by cardinal directions, but when I am in Los Angeles, much conversation is relative to heading north or heading east. I notice that Google Maps tends to employ this. It’s often confusing when I am in an unfamiliar place and the voice instructs me to travel west toward Avenue X. If I happen to have remembered where Avenue X is, I might internally orient toward that. Otherwise, I head in some direction until Google reinforces my choice or, rather, recalculates based on my bad choice, if nonjudgmentally.

In some cultures, this cardinality includes the body, so in comparison with the aforementioned self-as-centre dominant-hand query, the response would depend on which way the subject was facing. Were they a southpaw (left-hander) facing north, they would respond that their west hand is dominant. But if they were facing south, it would be their east hand. This may seem confusing to a Westerner, but a native would understand implicitly because they would be intimately oriented. As Lera relates in the video, someone might point out an ant crawling on your southwest leg.
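For the programmatically inclined, this absolute frame reduces to a small lookup. The following is a playful sketch of my own devising—the function name, the compass encoding, and the convention that the left hand points 90° anticlockwise of the facing direction are all my illustration, not anything from the talk:

```python
# Map a facing direction and a body side to the cardinal name of that side.
# Illustrative convention (my assumption): compass bearings in degrees,
# north = 0, east = 90; the left hand points 90 degrees anticlockwise of facing.

BEARINGS = {"north": 0, "east": 90, "south": 180, "west": 270}
NAMES = {0: "north", 90: "east", 180: "south", 270: "west"}

def cardinal_hand(facing: str, side: str) -> str:
    """Return the cardinal direction of the given hand ('left' or 'right')."""
    offset = -90 if side == "left" else 90
    return NAMES[(BEARINGS[facing] + offset) % 360]

# A southpaw facing north has a dominant west hand...
assert cardinal_hand("north", "left") == "west"
# ...but facing south, that same hand becomes the east hand.
assert cardinal_hand("south", "left") == "east"
```

The point the sketch makes is that the answer is a function of the world, not of the body: the hand never changes, yet its name does.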

To be fair, this space is not entirely alien to some Westerners. For example, mariners can shift the conversation from themselves to their ship or boat. Rather than left and right, relative to themselves or another, they might refer to port and starboard relative to the vessel. Being on the vessel and facing the front (the bow), left is port and right is starboard; however, facing the rear (the stern), left is now starboard and right is now port. So, if someone asks where the lifeboat is, a landlubber may say it’s on their left whilst a sailor might say it’s on the starboard side.

Centring Time

Time is another aspect we centre on ourselves. I won’t even endeavour to raise the circular notion of time. If an English speaker thinks about a timeline, they would likely configure it from left to right, equating with past to future. This aligns with our writing preference. Native Arabic or Hebrew speakers might naturally opt to convey this from right to left in accordance with their preferred writing system.

For the Aboriginal Kuuk Thaayorre in Australia, their rendition of time was contingent on their orientation in the world. Essentially, time flows from east to west, perhaps in accordance with the apparent movement of the sun across the sky relative to Earth. Facing south or north, they rendered time left to right and right to left, respectively. When they faced east, time came toward the subject, with time moving away from the body when facing west.
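The same absolute frame can be sketched in code. Again, the encoding below is my own playful illustration (not from the research): a mapping from the subject’s facing direction to how the east-to-west flow of time reads from their viewpoint.

```python
# Time flows east -> west in the world frame; express that flow relative
# to the direction a subject faces (illustrative encoding, my own).

def time_flow(facing: str) -> str:
    """Return how an east-to-west flow reads from the subject's viewpoint."""
    return {
        "south": "left to right",        # east is on the subject's left
        "north": "right to left",        # east is on the subject's right
        "east": "toward the body",
        "west": "away from the body",
    }[facing]

assert time_flow("south") == "left to right"
assert time_flow("east") == "toward the body"
```

As with the dominant-hand example, the timeline stays fixed in the world; only its description relative to the body changes.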

Counting

So-called modern or advanced societies have developed number systems, but some cultures have either no counting or limited counting, with systems that might extend 1, 2, many, or 1, 2, 3, many. This means that tasks we learn, like accounting, inventory management, or comparing counts of apples and oranges, are not only unavailable to these people, they are irrelevant to them.

Categorical Imperative

Lera tells us about the blues. Not B.B. King Blues, but the categorisation of blue, blues, and colours more generally. I’ve discussed this before in various places. As with numbers, some languages have a lot and some have few; some have only distinctions for light and dark, or equivalents of white, black, red, and so on. Colour names are typically added to a language in a similar order based on the frequency within the visual colour spectrum. I may have written about that earlier as well if only I could find it.

Different cultures and languages categorise colours differently, subdividing them differently. In many non-English languages, pink is simply light red. English opts to assign it a unique label. On the other hand, blue is basically one colour name in English whilst it is further broken down in Russian to goluboi (light blue, голубой) and siniy (darker blue, синий). This mirrors the pattern of pink (lighter red) and red (darker red) in English, a distinction not prevalent in other languages. Of course, we also have variations of reds and blues such as crimson or cyan, but this is rather second-order nuance.

Interestingly, in neurological studies, when measuring a speaker of a language that splits a colour, say a Russian looking at blues, the instruments capture the moment the subject notices the category shift. No such shift occurs in speakers of languages without such a split. I would be interested to know what the results would be for a bilingual speaker asked to respond in each language. Informally, I asked a Russian mate of mine if he experienced anything differently seeing blue whilst thinking in Russian versus English. He said yes, but couldn’t really provide any additional information. If a reader happens to be fluent in two or more languages, I’d be interested in hearing about your experiences.

One last note on colour: I’ve read studies that claim that women on balance have more colour names than men, which is to say where a typical male only sees shades of blue, the typical woman sees periwinkle, ultramarine, cyan, navy, cobalt, indigo, cerulean, teal, slate, sapphire, turquoise, and on and on. Of course, many English-speaking males may be defensive about now, arguing, “I know cyan. I know teal. Who doesn’t know turquoise?” Knowing is different to employing, and perhaps you’re not typical. You’re an atypical male. Let’s not get into gender challenges. Rather, let’s.

Gender Problems

Yet again, gender rears its ugly head. I am wondering when people are going to start demanding fluidity among gendered nouns. Sticking with Lera’s examples, a bridge happens to be grammatically feminine in German and masculine in Spanish. When asked to describe a bridge, German speakers are more apt to choose stereotypically feminine adjectives, beautiful or elegant, whilst Spanish speakers opted for stereotypically masculine terms, strong or long. I suppose she was reaching for laughter on that last reference.

Structured Events

Agency, and how it is injected into descriptions of events, is another possible convention. Lera mentions a tourist bumping into a vase. In English, one would be comfortable declaring, “The man knocked the vase off the pedestal.” In Spanish, the same event might more often be described as “The vase fell off the pedestal”. Notice the shift in agency and dispersion of blame. In English, we have some apparent need to inject not only a cause but an agent as the source of the cause. As I see it, one might have these several (possibly inexhaustive) options:

  1. He knocked the vase off the stand.
  2. Someone knocked the vase off the stand.
  3. The vase got knocked off the stand.
  4. The vase fell off the stand.

I decided to note the relationship between the vase and the stand. I suppose this is not strictly necessary and might seem superfluous in some contexts.

In case 1, a specific agent (he) is responsible for knocking off the vase. This does not suggest intent, though even negligence carries weight in many circles.

In case 2, the agent becomes indefinite. The speaker wants to specify that the vase didn’t just fall over on its own.

In case 3, agency is not only indefinite, but it also may not have a subject. Perhaps, a cat knocked it off—or the wind or an earth tremor.

In the final case, 4, the agent is removed from the conversation altogether. All that is conveyed is that the vase fell from the stand.

One might want to argue, “So what?” but this is not simply a convention of language; it stems from perception—or perhaps perception was altered by language through acculturation, but let’s not quibble here. It determines what someone pays attention to. When an event is witnessed, a person from a culture where agency is a strong component is more apt to remember the culprit, whereas a witness from a non-agency-focused culture would be less likely to recall attributes of the person who may have knocked it over. Practically, this leads to issues of blame and culpability. Clearly, a culture with an agent orientation might be quicker to assess blame, whereas blame would be further removed from the conversation in a different cultural perspective. I am speculating here, but I don’t feel it’s a large logical leap.

In a retributive justice system, the language that assigns agency is more likely to mete out harsher punishments because he broke the vase, it wasn’t simply broken. The use of language guides our reasoning. This leads me to wonder whether those who are ‘tough on crime‘ use different language construction than those who are more lenient.

Enfin

I just wanted to share my thoughts and connect language with cognition. I don’t think that the connection is necessarily strong or profound, but there is something, and there are more language nuances than noted here.

What is real? What is true?

An online colleague published an essay on another essay (en français). The gist was to say that their ideas were the same save for whether a core foundation was reality or truth. I am going to stylise these and derivatives in capital initials, e.g., Real and Truth. I am not sure I see the connexion, and perhaps Lance will chime in here directly to correct any misunderstandings and fill in any holes.

Podcast: Audio rendition of this page content

At least in English vernacular, True and Real are close synonyms. I don’t feel they are as close as we may assume at first glance. I think each of these terms carries with it its own ambiguity and connotation, so a meaningful discussion may prove to be difficult.

I’m not sure if it’s a fair characterisation, but I feel that most people consider Real as what they can sense or experience. Some may not even allow for the experiential component. In my mind, metaphorically thinking, of course, a book might be real; an idea might be real; even the idea of a unicorn might be real, but unicorns are not real. If we want to claim unicorns as part of Reality or include it in the set of Reality, then it would be a second-order sort. Substituting Harry Potter for unicorns, the idea of Harry Potter is real, but Harry Potter is a figment. Of course, Harry Potter may be the name of a human or your pet otter, but this is not the manifest Harry Potter of the idea. And Harry Potter is not a unicorn.

Harry Potter is not a unicorn

I mention Harry Potter and, indeed, unicorns, because I have had people argue that these things are real. For me, they are off the table, whether real or imagined. I feel that some people may also reduce Real to material, so a Realist would be the same as a Materialist. That’s fine except we end up with obvious non-material stuff on the cutting room floor. What do we do with emotions and so-called qualia? Sure, some might equate emotions with biochemical reactions and some synaptic exchange in some parts of the brain, further articulated through facial and bodily expressions and gestures. For the Materialist, we may not yet know the mechanism, but it’s only a matter of time—in the same manner as atoms became protons and electrons, which became quarks with spins and colour, and this morphed into fields.

Being sympathetic to Analytic Idealism, I might argue that none of this is real because all we can experience is what we can sense, but what we sense is a second order of Reality. We can’t even experience the first-order variety. The usual analogy is to look at computer bits or the funky Matrix code, which doesn’t reveal what we see or experience through the interface. In the case of the Matrix, the interface is their perceived reality. But perception isn’t Reality. At least Descartes suggests as much. If first-order Reality is unattainable, we can consider this sensed and experienced world second order. This leaves our unicorns and Harry Potter to be third order. In this case, we might idiomatically consider the first order to be understood to exist, but our use of Reality extends only to the second-order variety.

In any case, I don’t expect to resolve the mystery of Reality here and now, but it is a dialogue where accord is necessary to be on the same proverbial page.

But then what is True? What is Truth? I’ve written about this previously. Here, we are explicitly invoking the capital-T version of Truth, not the minuscule-t version where it’s synonymous with pedestrian ‘facts’ and tautologies. By True, are we asking what is objectively real—unadulterated by subjective experience, some universal and invariable condition? And is this Truth what is Real? Are there Truths that are not Real?

To sum it up, it is quite standard—although not universal by a long shot—to consider Real what we can experience whilst True is something that requires proof. A physical table might be real. Like unicorns, mathematical concepts may be true—I’d argue that this is tautological whilst others might defend some Platonic ideal—but they are not real. They are an abstraction. I suppose my point is to not take these words for granted and presume they can be directly interchanged. I suppose in the adjective form, they are more apt to coincide—Is that a true Picasso? Is that a real Picasso? Clearly, when we are asking if it is real, we are asking if it is truly genuine rather than questioning its materiality.

It may be true that I am wittering away online in some masturbatory pseudo-intellectual frenzy, and the results may be virtually real, but I needed to let my mind wander for a bit. If you’ve gotten this far, bless your heart, and leave a comment.

And so it goes.

GOD BE IN MY HEAD

God be in my head,
And in my understanding;
God be in mine eyes,
And in my looking;
God be in my mouth,
And in my speaking;
God be in my heart,
And in my thinking;
God be at mine end,
And at my departing.

Podcast: Audio rendition of this page content

Sir Henry Walford Davies put this traditional prayer to music as a hymn. Iain McGilchrist recited it as a poem after a brief setup in an interview.

I am an atheist, and the closest I get to gods is through metaphor, allegory, or allusion. And I don’t engage in it, but I understand when others invoke it. And to be completely honest, I was multitasking when Iain was reciting, and I misheard it, and this miss was more profound for me.

God be in my head,
And in my understanding;
Don’t be in mine eyes,
And in my looking;

That’s what prompted me to seek it out and pen a post. In the original form, it’s more of an invocation. In my misinterpretation, I felt he was saying to keep God in your head as a metaphorical reference—as an archetype—, but God is not for the eyes and the looking. God is a matter of faith.

As for the rest, it flows the same. Speak as you understand it. Feel God in your heart, if you should so choose. Think about him if you wish. And carry this thought with you until the end if it brings you comfort.

Myself, I get no comfort from the notion. I don’t feel I need it, but it is a cultural phenomenon, so to be aware is a part of cultural and emotional intelligence.

I feel that I’ve always intuitively understood metaphor. I remember listening to Joseph Campbell in the 1980s as he was describing how one of his biggest challenges was to get people to understand the embodiment of metaphor and not just the vapidity of simple simile.

And there you have it.

Whence Morality?

Where does morality come from? I believe that there exist three possible vectors for morality in one of two categories—objective and subjective. Absolute objective morality derives from some single source outside of subjective experience. Monotheistic religions have the propensity to adopt this ontology. Subjective morality is a human social construct and may be subdivided into logical and emotional subcategories. As a non-cognitivist, I feel that I am biased toward the emotional vector.

Podcast: Audio rendition of this page content

In my view, emotion always precedes logic. I’ve been told for as long as I can remember that I am hyper-logical and can be as dispassionate as Mr Spock or the Data character from the Star Trek franchise. As an economist, I was trained to stand back and objectify problems. However, the impetus for attention in the first place is always emotional. Or at least I can claim it to be alogical or prelogical. Even so, there would be a chain of events that moved from prelogical to emotional to logical. One may claim that applying logic to 2 + 3 requires no emotional content, but this has been habituated. There is neither emotion nor logic; it’s a simple rote recitation.

I am going to take literary licence and dismiss objective morality out of hand as excessively unlikely. I think it’s fair to categorise the logical view as Kantian. In this view, humans employed reason and, I suppose, a consequentialist framework to arrive at the notion that it just made sense to construct moral underpinnings. Of course, by the time of Kant, the Enlightenment was firmly afoot, so we could just borrow and advance the same moral notions. I feel he’d be OK accepting the claim that some classes, say the religious, realised, if we follow the money and power trail, that they could exert control and manipulate the playing field if they were the arbiters of morality. I am neither a deeply read Kant scholar nor an anthropologist, but this is how I see it.

I feel that the emotional impetus for morality might best be characterised by David Hume. In his view, morals would have been founded on sentiment and empathy. They were then interpreted and amended by different cultures and societies. I feel this adjustment is actually the logical element in play.

Fundamentally, animals want a sense of fairness. This is well-documented even in monkeys, so morals are an attempt to codify fairness and fair outcomes. Of course, fairness means different things to different people, so that makes for an unstable foundation. I think Nietzsche takes a more instrumental stance but would side more with Kant with the addition of the power plays that caught Foucault’s attention in the last century.

I’ve shared my perspective here several times. As a non-cognitivist—in the manner of Ayer, Stevenson, and Hare—I hold that morals are entirely emotive responses that then become prescriptive as a template for a civil society. However, as Nietzsche points out in the Genealogy of Morals, this template is on the one hand not neutral and, on the other hand, applied differently to different cohorts.

This is not an attempt to provide a deep discourse on morality. Rather, it is just documenting my current perspective on a yet unresolved topic. I’m not sure that the Kantian or Humean perspective will be the definitive answer. Evolutionary biologists have been tossing their proposals into the hat, but I don’t think we’ll ever get beyond speculation and opinion. This reflects mine.

Systematic Violence

As humans, we often leverage systems. They seem to make life easier. Whether a routine or a step-by-step instruction through an unknown process, a system can guide us. Systems are also connected, interactive entities, but that’s not for this segment. I am more interested in the loss of humanity that systematic processes and bureaucracy bring, so I am interested in imposed systems rather than systems we invent to find our keys and wallets.

Podcast: Audio rendition of this page content
Image: Spectrum of System versus Human

If we consider systematisation and humanity on a scale, we can see that any move toward systematisation comes at the expense of humanity. It might make logical sense to make this trade-off to some degree or another. The biggest hit to humanity is the one-size-fits-all approach to a problem. It removes autonomy or human agency from the equation. If a system can be that mechanised, then automate it. Don’t assign a human to do it. This is an act of violence.

As I’ve been reading and writing a lot about Iain McGilchrist’s work lately, I feel one can easily map this to left versus right cerebral hemisphere dominance. System-building is inherently human, but it’s in the domain of the left hemisphere. But my imposition of a system on another is violence—one might even argue that it’s immoral.

As with bureaucracy, these imposed systems are Procrustean beds. Everyone will fit, no matter what. And when human beings need to interact with systems, we can not only feel the lack of humanity, but our own humanity suffers at the same time.

A close friend of mine recently checked herself into a mental health facility. After a few days, she called and asked if I could bring her a change of clothes and some toiletries—deodorant, soap, and shampoo. She had some in her house, but the packaging needed to be unopened and factory sealed. I stopped at a shop to buy these items and I brought them to the facility.

At the reception area, I needed to be cross-referenced as an authorised visitor, so I was asked to show proof of my identity, as if it mattered who was delivering clothing that was going to be checked anyway. No big deal; they recorded my licence number on a form and asked me to fill it out—name, phone number, and what I was delivering.

The form stated that any open consumable items would not be allowed. I signed the form. An attendant took the bag and told me that I needed to remove the ‘chemicals’, that they would not be delivered. I pointed to the lines on the form that read that this restriction was for open items and reinforced that I had just purchased these and showed her the sales receipt. She told me that the patient would need to obtain a doctor’s permission, and she assured me that the patients all had soap.

I’m sure she thought she was being compassionate and assertive. I experienced it as patronising. Me being me, I chided her lack of compassion and humanity, not a great match for a mental health attendant. In fact, it reminded me of a recent post I wrote on Warmth. In it, I suggested that service staff should at least fake conviviality. I take that back. Faux congeniality is patronising. She mimicked me. “Yes, systems are so inhumane, but here we follow a system.” My first thought was of Adolf Eichmann, who kept the trains on schedule without a care for the cargo. This is the violence inherent in systems.

Systems are not illogical. In fact, they are hyper-logical. And that’s the problem: empathy is traded away in favour of logic. And one might have a strong argument for some accounting or financial system process, but I’ll retort that this should be automated. A human should not have to endure such pettiness.

I can tell that this will devolve quickly into a rant and so I’ll take my leave and not foist this violence upon you.

Path to the Fall

By fall, I don’t mean autumn except perhaps metaphorically speaking. The accompanying image illustrates a progression from the pre-Enlightenment reformation and the factors leading to the Modern Condition and increases in schizophrenia in people, societies, and enterprises.

This image is essentially composited from a later chapter in Iain McGilchrist’s The Master and His Emissary. In it, he outlines a path that commences at the Reformation that led to Lutheranism and Protestantism and further to Calvinism (not separately depicted). Max Weber argued that Capitalism is inextricably linked to Calvinism and the workmanship ideal tradition.

McGilchrist’s argument is founded on the notion that Catholicism is a communally oriented belief system whilst Protestantism is focused on the individual and salvation through personal work. The essence of capitalism is the same.

Of course, history isn’t strictly linear. In fact, there are more elements than one could realistically account for, so we rely on a reduction. In concert with the Reformation but on a slight delay is the so-called Age of Enlightenment, the Age of Reason, which led not only to faith in science but then to the pathology of Scientism.

This Protestant–Scientismic nexus brought us to Capitalism and into the Industrial Revolution, where humans were devivified or devitalised, trading their souls to be pawns earning a few shekels to survive. Capitalism and the Industrial Revolution led to Marxism through Marx’s critique of Capitalism, but Marxism shares Capitalism’s fatal flaw inasmuch as it doesn’t view people as humans. It does afford them a slightly higher function as workers, but this still leaves humanity as a second-tier concern, and even historicity is elevated above it as a sort of meta-trend or undercurrent.

From there, we transition to Modernity, which yields the modern condition and a rise in schizophrenia in one fell swoop. This is no coincidence.

Although I end this journey at Modernism, McGilchrist is also leery of the effects of post-modernism as well as philosophy itself as overly reductionist in its attempts to categorise and systematise, valuing signs and symbols over lived experience. His main complaint with postmodernism is that it moves from the objective perspective of Modernity to the subjective perspective, and so there remains no base foundation, which is the shared experience. I’m not sure I agree with his critique, but I’m not going to contemplate it here and now.

In the end, this journey and illustration are gross simplifications, but I still feel they provide valuable perspective. The challenge is that one can’t readily put the genie back into the bottle, and the question is where we go from here, if not Modernism or Postmodernism. I shouldn’t even mention Metamodernism because that seems like an unlikely synthesis, as well-intentioned as it might be. McGilchrist gives examples of reversals in the trend toward left-hemisphere bias, notably the Romantic period, but that too was reversed, recommencing the current trajectory. My feeling is that if we continue down this dark path, we’ll reach a point of no return.

It seems to me that it’s growing at an increasing rate, like a snowball careening down a slope. It not only drives the left-dominant types further left, since an analytical person will reinforce the belief that if only s/he and the world were more analytical, things would be so much better, even in a world where net happiness is trending downward. It also forces this worldview on other cultures, effectively destroying them and assimilating them into the dark side, if I can borrow a Star Wars reference.

Epilogue

I wasn’t planning to share this story—at least not now. In another forum, I responded to a statement, and I was admonished by Professor Stephen Hicks, author of the book of dubious scholarship, Explaining Postmodernism.

I responded to this query:

If you’re a single mother and have a son I’d suggest putting him in a sport or martial arts to add some masculine energy to his life. It’s not a replacement for the actual father but it can help instil structure and discipline into the core of his being.

— Julian Arsenio

“Perhaps this world needs less discipline and structure, not more,” was my response, to which Hicks replied.

The quotation is not about “the world.” It is about boys without fathers. Evaluate the quotation in its context.

— Stephen Hicks

“Disciplined boys create a disciplined world. Not a world I’d prefer to create or live in. We need more right-hemisphere people. Instead, we are being overwhelmed by left hemisphere types, leading to Capitalism and the denouement of humanity as it encroaches like cancer, devouring or corrupting all it touches.

“In the end, it is about the world, which from a left hemisphere perspective is a sum of its parts. Right-hemisphere thinkers know otherwise,” was my reply. He responded,

You seem to have difficulty focusing. From a quotation about fatherless boys you free associate to [sic] weird psychology and global apocalptic [sic] pessimism. Pointless.

— Stephen Hicks

“I’ll suggest that the opposite is true, and perhaps you need to focus less and appreciate the Gestalt. This was not free association. Rather, it is a logical connexion between the disposition of the people in the world and lived reality.

“Clearly, you are a left-hemisphere structured thinker. The world is literally littered with this cohort.

“I suggest broadening your worldview so as not to lose the woods for the trees. I recommend Dr Iain McGilchrist as an apt guide. Perhaps reading The Master and His Emissary and/or The Matter with Things would give you another perspective. #JustSaying”

His final repartee was:

And still, rather than addressing the issue of fatherless boys, you go off on tangents, this time psychologizing about people you’ve zero first-hand knowledge of.

— Stephen Hicks

Feel free to interpret this as you will. For me, his attempt to limit discussion to some notion he had in his head and his failure to see the woods for the trees, as I write, suggests that he is a left-brain thinker. Having watched some of his videos, whether lectures or interviews, this was already evident to me. This exchange is just another proof point.

I considered offering the perspective of Bruno Bettelheim on the importance of unstructured play, but as is evidenced above, he is not open to dialogue. His preference appears to be a monologue. This is the left hemisphere in action. This is an example of how insidious this convergent thinking is, and it makes me worry about what’s ahead in a world of people demanding more structure and discipline. Foucault’s Discipline and Punish comes to the forefront.

Humans Ruin the Economy

Humans are ruining the economy.

This is the caption on the sign for this segment. The sign advertises a solution, which is to “Vote for DEMOCROBOT… The first party run by artificial intelligence”. It also promises to “give everyone a living wage of £1436.78 a week”.

I have been very vocal that I find the idea of humans governing humans a bad idea from the start. By and large, humans are abysmal systems thinkers and easily get lost in complexity. This is why our governments and economies require so much external energy and course correction. Not only were they poorly designed and implemented, but they’re also trying to manage a dynamic system—a complex system. It won’t work.

What about bots and artificial intelligence? The above image was posted elsewhere, and a person commented that our governments are already filled with artificial intelligence. I argued that at best we’ve got pseudo-intelligence; at worst, we’ve got artificial pseudo-intelligence, API.

The challenge with AI is that it’s developed by humans with all of their faults and biases in-built. On the upside, at least in theory, rules could be created to afford consistency and escape political theatre. The same could be extended to the justice system, but I’ll not range there.

Part of the challenge is that the AI needs to optimise several factors at least, and not all factors are measurable or quantifiable. Any such attempt would tip the playing field one way or another. We might assume that AI would at least be unreceptive to lobbying and meddling, but would this be the case? AI—or rather ML (Machine Learning) or DL (Deep Learning)—relies on input. It wouldn’t take long for interested think tanks to flood the source of inputs with misinformation. And if there is an information curator, we’ve got a principal-agent problem—who’s watching the watcher?—and we may need to invoke Jeremy Bentham’s Panopticon solution.

One might even argue that an open-source, independently audited system would work. But who would be auditing, and whose interpretation and opinion would we trust? Then I think of Enron and WorldCom: auditors paid to falsify their audit results. I’d also argue that this would cause a shift of power from the political class to the tech class, but the political class is already several tiers below the tech class, so the oligarchs still win.

This seems to be little more than a free-association rant, so I’ll pile on one more reflection. Google and Facebook (or Meta) have ethical governing bodies that are summarily shunned or simply ignored when they point out that the parent company is inherently unethical or immoral. I wouldn’t expect much difference here.

I need a bot to help write my posts. I’ll end here.