Refining Transductive Subjectivity

I risk sharing this prematurely. Pushing the Transductive Subjectivity model toward more precision may lose some readers, but the original version still works as an introductory conversation.

Please note: There will be no NotebookLM summary of this page. I don’t even want to test how it might look out the other end.

Apologies in advance for donning my statistician cap, but for those familiar with the notation, I feel it will clarify the exposition. For the others, the simple model is good enough. It’s good to remember the words of George Box: all models are wrong, but some are useful.

The Simple Model

I’ve been thinking that my initial explanatory model works well enough for conversation. It lets people grasp the idea that a ‘self’ isn’t an enduring nugget but a finite sequence of indexed states:

S₀ → S₁ → S₂ → … → Sₙ

The transitions are driven by relative forces, R, which act as catalysts nudging the system from one episode to the next.

The Markov Model

That basic picture is serviceable, but it’s already very close to a dynamical system. More accurate, yes, though a bit more forbidding to the casual reader, and not everybody loves Markov chains:

S_{i+1} = F(S_i, R_i)

Here:

  • S_i is the episodic self at index i
  • R_i is the configuration of relevant forces acting at that moment
  • F is the update rule: given this self under these pressures, what comes next?
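A minimal sketch of the loop, with toy numeric stand-ins for S, R, and F (every name, value, and dynamic here is my illustration, not part of the model):

# Toy iteration of S_{i+1} = F(S_i, R_i)
def F(s, r):
    # Illustrative update rule: the self drifts toward the pressures acting on it.
    return s + 0.5 * (r - s)

states = [0.0]                     # S_0
pressures = [1.0, 1.0, -2.0, 0.5]  # R_0 .. R_3, an arbitrary sequence of forces

for r in pressures:
    states.append(F(states[-1], r))

print(states)  # S_0 .. S_4, each episode produced from the last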

The equation already helps. It recognises that the self changes because of pressure from language, institutions, physiology, social context, and so on. But as I noted when chatting with Jason, something important is still missing:

S_i isn’t the only thing in motion, and R_i isn’t the same thing at every step.

And crucially, the update rule F isn’t fixed either.

A person who has lived through trauma, education, and a cultural shift doesn’t just become a different state; they become different in how they update their states. Their very ‘logic of change’ evolves.

To capture that, I need one more refinement.

The Transductive Operator Model

This addresses the fact that S_i isn’t the only aspect in motion and that there are several flavours of R over time, hence R_i. We need to introduce the transductive operator, \mathcal{T}:

(S_{i+1}, F_{i+1}) = \mathcal{T}(S_i, F_i, R_i)

Now the model matches the reality:

  • S evolves
  • the pressures R evolve
  • and the update rule F evolves

R_i can be further decomposed as R_i = (R^{phys}, R^{social}, R^{symbolic}, …), but I’ll save that for the formal essay.
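To make the difference concrete, here is a toy sketch in the same spirit, in which the operator returns the next state and the next update rule together; the particular dynamics (a ‘gain’ that re-tunes after strong pressure) are my illustrative assumption, not a claim of the model:

# Toy transductive operator: (S_{i+1}, F_{i+1}) = T(S_i, F_i, R_i)
def T(s, f, r):
    new_s = f(s, r)
    # The 'logic of change' itself changes: strong pressure re-tunes the rule.
    gain = 0.8 if abs(r) > 1.0 else 0.3
    new_f = lambda s_, r_: s_ + gain * (r_ - s_)
    return new_s, new_f

s, f = 0.0, lambda s_, r_: s_ + 0.5 * (r_ - s_)  # S_0 and F_0
for r in [1.0, 1.0, -2.0, 0.5]:                  # R_0 .. R_3
    s, f = T(s, f, r)
    print(round(s, 3))

Unlike the Markov sketch above, the rule that produces a later state need not be the rule that produced an earlier one.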

That is why this is transductive rather than inductive or deductive:
structure at one moment propagates new structure at the next.

What Transductive Subjectivity Isn’t

What TS rejects is the notion that the self is a summation of the Sᵢ and other factors; this summation, with all of its trappings, is a heuristic that works as a narrative, but it is decidedly incorrect.

Self ≠ Σ(Sᵢ, …)

Effectively,

Self ≠ \int_{0}^{t} Experience \, dt

In ordinary life, we talk as if there were a single, stable self that sums all these episodes. Transductive Subjectivity treats that as a convenient narrative, not an underlying fact. For example, someone raised in a rigid environment may initially update by avoiding conflict; after therapy and a cultural shift, they may update by seeking it out when something matters. This fiction is where we project agency and desert, and where we justify retribution.

Two Four Two Three


This meme is not what I mean by language insufficiency, but it does capture the complications of language.

Image: Two Four Two Three

I found this image accompanying an article critical of AI – Claude.ai in particular. But this isn’t a Claude problem. It’s a language problem. I might argue that this could have been conveyed verbally, and one could resolve this easily by spelling out the preferred interpretation.

  • A: Two thousand, twenty-three
  • B: Four thousand, four hundred, thirty-three
  • C: Two thousand, four hundred, thirty-three
  • D: Four thousand, four hundred, twenty-three

So, this is not insoluble, but it is a reminder that sometimes, in matters like this, additional information can lead to clearer communication.

I’d also imagine that certain cultures would favour one option over another as it is presented above. As for me, my first guess would have been A, interpreting each number as a place position. I’d have expected the double number to also have a plural syntax – two threes or two fours – but that may just be me.

The Church of Pareto: How Economics Learned to Love Collapse

—or—How the Invisible Hand Became a Throttling Grip on the Throat of the Biosphere

As many frequent visitors know, I am a recovering economist. I tend to view economics through a philosophical lens. Here, I consider the daft nonsense of Pareto optimality.

Audio: NotebookLM podcast of this content.

There is a priesthood in modern economics—pious in its equations, devout in its dispassion—that gathers daily to prostrate itself before the altar of Pareto. Here, in this sanctum of spreadsheet mysticism, it is dogma that an outcome is “optimal” so long as no one can be made better off without someone else being made worse off. Never mind if half the world begins in a ditch and the other half in a penthouse jacuzzi. So long as no one’s jacuzzi is repossessed, the system is just. Hallelujah.

This cult of cleanliness, cloaked in the language of “efficiency,” performs a marvellous sleight of hand: it transforms systemic injustice into mathematical neutrality. The child working in the lithium mines of the Congo is not “harmed”—she simply doesn’t exist in the model. Her labour is an externality. Her future, an asterisk. Her biosphere, a rounding error in the grand pursuit of equilibrium.

Let us be clear: this is not science. This is not even ideology. It is theology—an abstract faith-based system garlanded with numbers. And like all good religions, it guards its axioms with fire and brimstone. Question the model? Heretic. Suggest the biosphere might matter? Luddite. Propose redistribution? Marxist. There is no room in this holy order for nuance. Only graphs and gospel.

The rot runs deep. William Stanley Jevons—yes, that Jevons, patron saint of unintended consequences—warned us as early as 1865 that improvements in efficiency could increase, not reduce, resource consumption. But his paradox, like Cassandra’s prophecy, was fated to be ignored. Instead, we built a civilisation on the back of the very logic he warned would destroy it.

Then came Simon Kuznets, who—bless his empirically addled soul—crafted a curve that seemed to promise that inequality would fix itself if we just waited politely. We called it the Kuznets Curve and waved it about like a talisman against the ravages of industrial capitalism, ignoring the empirical wreckage that piled up beneath it like bones in a trench.

Meanwhile, Pareto himself, that nobleman of social Darwinism, famously calculated that 80% of Italy’s land was owned by 20% of its people—and rather than challenge this grotesque asymmetry, he chose to marvel at its elegance. Economics took this insight and said: “Yes, more of this, please.”

And so the model persisted—narrow, bloodless, and exquisitely ill-suited to the world it presumed to explain. The economy, it turns out, is not a closed system of rational actors optimising utility. It is a planetary-scale thermodynamic engine fuelled by fossil sunlight, pumping entropy into the biosphere faster than the biosphere can absorb it. But don’t expect to find that on the syllabus.

Mainstream economics has become a tragic farce, mouthing the language of optimisation while presiding over cascading system failure. Climate change? Not in the model. Biodiversity collapse? A regrettable externality. Intergenerational theft? Discounted at 3% annually.

We are witnessing a slow-motion suicide cloaked in the rhetoric of balance sheets. The Earth is on fire, and the economists are debating interest rates.

What we need is not reform, but exorcism. Burn the models. Salt the axioms. Replace this ossified pseudoscience with something fit for a living world—ecological economics, systems theory, post-growth thinking, anything with the courage to name what this discipline has long ignored: that there are limits, and we are smashing into them at speed.

History will not be kind to this priesthood of polite annihilation. Nor should it be.

What’s Probability?

The contestation over the definition of probability is alive and well—like a philosophical zombie that refuses to lie down and accept the tranquilliser of consensus. Despite over three centuries of intense mathematical, philosophical, and even theological wrangling, no single, universally accepted definition reigns supreme. Instead, we have a constellation of rival interpretations, each staking its claim on the epistemological turf, each clutching its own metaphysical baggage.

Audio: NotebookLM podcast on this topic.

Let us survey the battlefield:

1. Classical Probability (Laplacean Determinism in a Tuxedo)

This old warhorse defines probability as the ratio of favourable outcomes to possible outcomes, assuming all outcomes are equally likely. The problem? That assumption is doing all the heavy lifting, like a butler carrying a grand piano up five flights of stairs. It’s circular: we define probability using equiprobability, which itself presumes a notion of probability. Charming, but logically suspect.
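The recipe is trivially mechanisable once equiprobability is simply assumed; a quick sketch (two fair dice, my arbitrary example):

from fractions import Fraction
from itertools import product

# Classical probability: favourable / possible, *assuming* all 36
# two-dice outcomes are equally likely. The assumption does the work.
outcomes = list(product(range(1, 7), repeat=2))
favourable = [o for o in outcomes if sum(o) == 7]
print(Fraction(len(favourable), len(outcomes)))  # 1/6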

2. Frequentist Probability (The Empiricist’s Fantasy)

Here, probability is the limit of relative frequencies as the number of trials tends to infinity. This gives us the illusion of objectivity—but only in a Platonic realm where we can conduct infinite coin tosses without the coin disintegrating or the heat death of the universe intervening. Also, it tells us nothing about singular cases. What’s the probability this specific bridge will collapse? Undefined, says the frequentist, helpfully.
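A quick simulation makes the fantasy plain: the relative frequency wanders toward 0.5 but never arrives, and no finite run certifies the limit (seed and trial counts are arbitrary):

import random

random.seed(1)
for n in (10, 1_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)  # drifts toward 0.5, yet every run is finite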

3. Bayesian Probability (Subjectivity Dressed as Rigour)

Bayesians treat probability as a degree of belief—quantified plausibility updated with evidence. This is useful, flexible, and epistemically honest, but also deeply subjective. Two Bayesians can start with wildly different priors and, unless carefully constrained, remain in separate probabilistic realities. It’s like epistemology for solipsists with calculators.
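A beta-binomial sketch of the prior problem: two agents see the same 70 heads in 100 flips and still disagree, because they started in different places (the priors are my arbitrary illustrations):

# Posterior mean of a Beta(a, b) prior after observing h heads in n flips
def posterior_mean(a, b, h, n):
    return (a + h) / (a + b + n)

print(posterior_mean(1, 1, 70, 100))    # flat prior:     ~0.696
print(posterior_mean(50, 50, 70, 100))  # stubborn prior:  0.600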

4. Propensity Interpretation (The Ontology of Maybes)

Karl Popper and his ilk proposed that probability is a tendency or disposition of a physical system to produce certain outcomes. Sounds scientific, but try locating a “propensity” in a particle collider—it’s a metaphysical ghost, not a measurable entity. Worse, it struggles with repeatability and relevance outside of controlled environments.

5. Logical Probability (A Sober Attempt at Rationality)

Think of this as probability based on logical relations between propositions—à la Keynes or Carnap. It aims to be objective without being empirical. The problem? Assigning these logical relations is no easier than choosing priors in Bayesianism, and just as subjective when it comes to anything meaty.

6. Quantum Probability (Schrödinger’s Definition)

In quantum mechanics, probability emerges from the squared modulus of a wave function—so this is where physics says, “Shut up and calculate.” But this doesn’t solve the philosophical issue—it just kicks the can into Hilbert space. Interpretations of quantum theory (Copenhagen? Many Worlds?) embed different philosophies of probability, so the contestation merely changes battlegrounds.
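The ‘shut up and calculate’ part, at least, is mechanical. Under the Born rule, probabilities are the squared moduli of normalised amplitudes (the amplitudes below are arbitrary):

# Born rule: p_k = |a_k|^2 / sum_j |a_j|^2
amplitudes = [1 + 1j, 1 - 1j, 0.5j]
norm = sum(abs(a) ** 2 for a in amplitudes)
probs = [abs(a) ** 2 / norm for a in amplitudes]
print(probs, sum(probs))  # the probabilities sum to 1; the philosophy does not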

Current Status: War of Attrition

There is no universal agreement, and likely never will be. Probability is used successfully across the sciences, economics, AI, and everyday reasoning—but the fact that these wildly different interpretations all “work” suggests that the concept is operationally robust yet philosophically slippery. Like money, love, or art, we use it constantly but define it poorly.

In short: the contestation endures because probability is not one thing—it is a shape-shifting chimera that serves multiple masters. Each interpretation captures part of the truth, but none hold it entire. Philosophers continue to argue, mathematicians continue to formalise, and practitioners continue to deploy it as if there were no disagreement at all.

And so the probability of this contest being resolved any time soon?
About zero.
Or one.
Depending on your interpretation.

Determinism and the Three-Body Problem

The debate over free will often distils down to a question of determinism—indeterminism, hard or soft determinism, or something else. Poincaré’s approach to the three-body problem is an apt metaphor to strengthen the deterministic side of the argument.

Quantum theory introduces aspects of indeterminism, but that doesn’t support the free will argument. Moreover, between quantum events, the universe is again deterministic. It’s simply been reset with the last exogenous quantum event.

Prima facie, Determinism and Chaos might seem strange bedfellows. And therein lies the rub. Chaos theory essentially tells us that even in a chaotic scenario, the system’s evolution can still be computed; it just must be computed stepwise via numerical integration. Even this leaves us with estimations: owing to Heisenberg’s Uncertainty Principle and the impossibility of specifying initial conditions to infinite precision, we can’t actually calculate the precise answer, although one exists.
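To make ‘stepwise via numerical integration’ concrete, here is a bare-bones Euler integrator for three bodies under Newtonian gravity; the masses, initial conditions, units, and step size are arbitrary illustrations, and Euler is the crudest workable scheme:

# Three bodies in 2D under Newtonian gravity, advanced step by step.
G = 1.0
masses = [1.0, 1.0, 1.0]
pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vel = [[0.0, 0.1], [0.0, -0.1], [0.1, 0.0]]
dt = 0.001

def accelerations(pos):
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i != j:
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                acc[i][0] += G * masses[j] * dx / r3
                acc[i][1] += G * masses[j] * dy / r3
    return acc

# Deterministic throughout, yet knowable only step by step: no closed form awaits.
for _ in range(10_000):
    acc = accelerations(pos)
    for i in range(3):
        vel[i][0] += acc[i][0] * dt
        vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt

print(pos)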

My point is that not knowing what is being determined doesn’t invalidate the deterministic nature or process.

Value of Life

Captain Bonespurs now has a flesh wound. Former president Donald J Trump was the target of a not-so-sharpshooter yesterday. Immediately resorting to Godwin’s Law, I wondered if this was like the philosophical hypothetical asking, ‘Would you kill baby Hitler to prevent the eventualities that unfolded?’ Was Hitler the symptom or the disease? What about Donald J? Whatever the cause or motivation, not unlike the fire at the Reichstag, this event has galvanised his supporters. Let’s hope that the outcome doesn’t follow the same path. There is a fear that he’ll take a path similar to Hitler or Caesar before him in a quest for power.

What is a life worth? The average US-American life is valued at around $7 million, give or take a few million. The number ranges between $1 MM and $10 MM depending on which agency you ask. That they equate lives to dollars is curious enough, but that they can’t agree on a single figure is priceless.

For background, this value is used to determine intervention. For FEMA (the Federal Emergency Management Agency), a human life is worth about $7.5 MM. For the EPA (the Environmental Protection Agency), it’s slightly more than $10 MM. Are these cats playing Monopoly? Nah.

The human life calculus considers factors like lifetime earnings potential, discounted to present value. In action, assume there is a disaster. Let’s not use COVID-19. Instead, suppose there is an island with 1,000 inhabitants. Using the $10 MM per person figure to simplify the maths, we would be justified in spending up to $10,000,000,000 to intervene in some potential disaster – $10 MMM or $1e10.
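A toy discounting sketch of the earnings component alone (the salary, horizon, and rate are my illustrative assumptions; the agencies’ actual methodologies lean on more than earnings and arrive at far larger figures):

# Present value of a constant annual earnings stream at a 3% discount rate
def present_value(annual, years, rate=0.03):
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1))

print(round(present_value(60_000, 45)))  # ~1,471,000: earnings alone fall well short of $7.5 MM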

Human lifetime value is an average. Mr Trump has already shown himself to be worth more than $10 MM. I suppose this means that not all humans are created equal. No matter. Another logical question might be what is the cost of a person’s detriment to society. This is a question for a Modernist or someone who feels that a given configuration of society is preferred to all others – or at least some others. How much damage might one human do?

Trump enriched himself and his family and entourage in his first term. In Ukraine, Zelenskyy and his lot bilked the country out of billions. It’s nothing new, but do we subtract the costs from the benefits or is this a gross calculation?

Irrespective of the costs, the next four years are expected to be tumultuous no matter which corporate-sponsored party prevails. Heads, they win; tails, the country – if not the world – loses.

Neglect and a Quarter Dozen

I am going to neglect this blog for at least another day. I’ve got too many irons in the fire.

Why do we routinely say ‘half dozen’ but don’t tend toward ‘quarter dozen’ or ‘third of a dozen’? I know many people have maths deficits, but still. I don’t think even I will attempt to start this trend, but the thought crossed my mind today.

I hope to return to re-engage with this blog, but until then, adieu.

Physics of Free Will

Physicist Sean Carroll gives Robert Lawrence Kuhn his take on free will. I was notified about this when it was posted, and given the topical subject matter, I took the 8-odd minutes to listen to it straight away.

I wish I had been there to pose a follow-up question because, although he provided a nice answer, I feel there was more meat on the table.

Like me, Sean is a Determinist who feels that the question of determinism versus indeterminism is beside the point, so we’ve got that in common. Where I feel we may diverge is that I am an incompatibilist and Sean is a compatibilist. I could be interpreting his position wrong, which is what the follow-up question would be.

I say that Sean is a compatibilist because he puts forth the standard emergence argument, but that’s where my confusion starts. Just to set up my position for those who prefer not to watch the short clip: as a physicist, Sean believes that the laws of physics, Schrödinger’s equation in particular, fully account for what happens next.

We have an absolutely good equation that tells us what’s going to happen. There’s no room for anything that is changing the predictions of Schrödinger’s equation.

— Sean Carroll
Schrödinger’s Equation
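For reference, the time-dependent form:

i\hbar \frac{\partial}{\partial t} \Psi = \hat{H} \Psi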

This equation articulates everything that will occur in the future and fully accounts for quantum theory. Some have argued that quantum theory tosses a spanner into the works of Determinism and leaves us in an Indeterministic universe, but Sean explains that this is not the case. Any so-called probability or indeterminacy is captured by this equation. There is no explanatory power of anything outside of this equation—no souls, no spirits, and no hocus pocus. So far, so good.

But Sean doesn’t stop talking. He then sets up an analogy in the domain of thermodynamics and statistical mechanics and the ‘fundamental theory of atoms and molecules bumping into each other and [the] emergent theory of temperature and pressure and viscosity‘. I’ve explained emergence in terms of adding two hydrogen atoms and one oxygen atom to create water, an emergent molecule with emergent properties such as wetness.

My position is that one can view the atomic collection of matter at a moment as an emergent property and give it a name to facilitate conversation. In this case, the label we are applying is free will. But there is a difference between labelling this collection “free will” and its having a function analogous to what we mean by free will. That’s a logical leap I am not ready to take. Others have equated this same emergence to producing consciousness, which is, of course, a precursor to free will in any case.

Perhaps the argument would be that since one now has emergent consciousness—I am not saying that I accept this argument—that one can now accept free will, agency, and responsibility. I don’t believe that there is anything more than rhetoric to prove or disprove this point. As Sean says, this is not an illusion, per se, but it is a construction. I just think that Sean gives it more weight than I am willing to.

Houston, we have a problem

EDIT: Since I first posted this, I’ve discovered that computer algorithms and maths are not playing well together in the sandbox. Those naughty computer geeks are running rogue from the maths geeks.

In grade school, we typically learn a form of PEMDAS as a mnemonic heuristic for mathematical order of operations. It’s a stand-in for Parentheses, Exponents, Multiplication, Division, Addition, and Subtraction. This may be interpreted in different ways, but I’ve got bigger fish to fry. It turns out that many (if not most) programming languages don’t present a PEMDAS schema. Instead, they opt for BODMAS, where the B and O represent Brackets and Orders, analogous to Parentheses and Exponents. Note the inversion of MD to DM. Strictly, both conventions give multiplication and division equal precedence, applied left to right; the discrepancies arise when the mnemonic is read as a strict sequence.

And it doesn’t end here. HP calculators introduce a new factor, multiplication by juxtaposition, which mathematician and YouTuber Jenni Gorham notates as J, resulting in PEJMDAS. This juxtaposition represents implied multiplication, as exemplified by another challenge:

1 ÷ 2√3 =

In this instance, multiplication by juxtaposition instructs us to resolve 2√3 before performing the division. Absent the J, the calculation results in (1/2)√3 rather than the intended 1/(2√3). As with the next example, simply adding parentheses fixes the problem. Video: Jenni Gorham explains PEJMDAS.
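In code, the two readings are a parenthesis apart (Python, which has no juxtaposition operator, is shown only for illustration):

from math import sqrt

print(1 / 2 * sqrt(3))    # 0.866..., the strict left-to-right reading
print(1 / (2 * sqrt(3)))  # 0.288..., the PEJMDAS juxtaposition reading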

And now we return to our originally scheduled programming…

Simplifying concepts has its place. The question is where and when. This social media war brings this back to my attention.

As depicted in the meme, there is a difference of opinion as to what the answer is to this maths problem.

6 ÷ 2 ( 1 + 2 ) =

In grade school, children are taught some variation of PEMDAS, BOMDAS, BEDMAS, BIDMAS, or whatever. What they are not taught is that this is a regimented shortcut that doesn’t necessarily apply to real-world usage. The ones defending PEMDAS are those who have not taken maths beyond primary school and don’t use maths beyond some basic addition and subtraction. Luckily, the engineers and physicists who need to understand the difference generally do.

Mathematicians, scientists, and engineers have learned to transform the expression into fraction form, 6/(2(1+2)), yielding an answer of 1. If your answer is 9, you’ve been left behind.
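The same split shows up for the meme’s expression: a typical programming language takes the left-to-right reading, and only explicit parentheses recover the juxtaposition reading.

print(6 / 2 * (1 + 2))    # 9.0: ÷ and × share precedence, applied left to right
print(6 / (2 * (1 + 2)))  # 1.0: the implied-multiplication reading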

Why is this such a big deal?

When I taught undergraduate economics, I, too, had to present simplifications of models. In practice, the approach was to tell the students that the simplification was like that in physics: at first, you assume factors like gravity and friction don’t exist, so there are fewer variables and fewer complexities. The problem, as I discovered in my advanced studies, is that in economics you can’t actually relax the assumptions. When you do, the models fail to function. So they only work under assumptions that cannot exist in the real world, things like infinite suppliers and demanders. Even moving from infinite to merely ‘a lot’ breaks the model. Economists know this, and yet they teach it anyway.

When I transitioned from undergrad to grad school, I was taken aback by the number of stated assumptions that were flat out wrong. Not only were the simplifications wrong, but they also led to the wrong conclusions: the conclusions that aligned with the prevailing narratives.

This led me to wonder about a few things:

Firstly, if I had graduated with an English degree and then became a PhD candidate in English, would I have also learnt it had mostly been a lie for the purpose of indoctrination?

Secondly, what other disciplines would have taught so much disinformation?

Thirdly, how many executives with degrees in finance and management only got the fake version?

Fourthly, how many executives hadn’t even gotten that? Perhaps they’d have taken a class or two each in finance and economics and nothing more. How many finance and economics courses does one need to take to get an MBA? This worries me greatly.

To be honest, I wonder how many other disciplines have this challenge. I’d almost expect it from so-called soft sciences, but from maths? Get outta here.

Half-life of knowledge

This also reminds me of the notion of the half-life of knowledge. What you knew as true may eventually no longer be. In this case, you were just taught a lie because it was easier to digest than the truth. In other cases, an Einstein comes along to change Newtonian physics into Oldtonian physics, or some wisenheimer like Copernicus determines that the cosmic model is heliocentric and not geocentric.

If you’ve been keeping up with my latest endeavour, you may be surprised that free will, human agency, identity, and the self are all human social constructs in need of remediation. Get ready to get out of your comfort zone or to entrench yourself in a fortress of escalating commitment.

Not Just a Number

That perception and memory work hand in hand is mostly taken for granted, but this case reminds us that the coupling sometimes breaks down. This is not a case of the neurotypical limitations of fallible sense organs and standard cognitive boundaries and biases. This subject can’t discern the Arabic numerals 2 through 9.

To recap the study, the man can perceive 0 and 1 as per usual, but the numerals 2 through 9 are not recognisable. Not even in combination: neither A4 nor 442 is discernible.

In a neurotypical model, a person sees an object, a 3 or a tree, and perhaps learns its common symbolic identifier: ‘3’, ‘three’, or ‘tree’. The next time this person encounters the object, or in this case the symbol, say, 3, it will be recognised as such, and the person may recite the name-label of the identifier: three.

It might look like this, focusing on the numerals:

Encounter 1: 3 = X₀ (initial)
Encounter 2: 3 = X₁ ≡ X₀ (remembered)
Encounter 3: 3 = X₂ ≡ X₀ (remembered)

In the anomalous case, the subject sees something more like this:

Encounter 1: 3 = X₀ (initial)
Encounter 2: 3 = Y₀ = { } (no recollection)
Encounter 3: 3 = Z₀ = { } (no recollection)

For each observation, the impression of 3 is different.

Phenomenologically, this is different to the question of whether two subjects share the same perception of, say, the colour red. Even if what you perceive as red differs from what another perceives, as long as this relative reference persists for each subject, you can still communicate within this space. When you see a red apple, you can remark that the apple is red (the name marker), and the same holds for the other, who can also communicate to you that the apple is indeed red, because the word ‘red’ becomes a common index marker.

But in the anomalous case, the name marker would have little utility because ‘red’ would be generated by some conceivably unbounded stochastic function:

Colourₓ = ƒ(x), where x is some random value at each observation

It would be impossible to communicate given this constraint.
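A toy sketch of the two couplings (the encodings are arbitrary; the point is only stability versus instability):

import random

codes = {}
def stable_percept(symbol):
    # Stable coupling: the same stimulus always maps to the same internal code.
    return codes.setdefault(symbol, len(codes))

def anomalous_percept(symbol):
    # Anomalous coupling: each encounter yields a fresh, unrelated code.
    return random.random()

for _ in range(3):
    print(stable_percept('red'), anomalous_percept('red'))
# The left column repeats; the right never does, so no shared marker can form.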

This, as I’ve referenced, is anomalous, so most of us have a stronger coupling between perception and memory recall. Interesting to me in this instance is not how memory can be (and quite often is) corrupted, but that fundamental perception itself can be corrupted as well—and not simply through hallucination or optical illusion.