If you find value in #ComplexityThoughts, consider helping it grow by subscribing and sharing it with friends, colleagues or on social media. Your support makes a real difference.
→ Don’t miss the podcast version of this post: click on “Spotify/Apple Podcast” above!
In Part I, we walked through some of the most puzzling paradoxes of human reasoning — from the conjunction fallacy to the disjunction effect and the Ellsberg paradox — all showing how classical probability (CP) fails to capture the context-dependent, often counterintuitive patterns of human thought. These were not isolated glitches, but systematic and reproducible features of behavior. The natural next step is to ask: if CP breaks down, what comes next?
In this second part, we introduce the framework of quantum probability (QP) — or, more generally, non-classical probability (nCP) — as a candidate for a richer mathematical language of cognition. As I have noted before, the word “quantum” in this context might evoke controversial theories about the quantum nature of brain and mind, which are not at all the subject of these posts.
Quantum probability without quantum brains
First things first: quantum probability (QP) is not about brains being quantum machines. It is a modeling framework, not a biological hypothesis. The mathematics of quantum theory — Hilbert spaces, superposition, interference — can be applied independently of physics to describe context-dependent phenomena subject to ambiguity, such as decision-making under uncertainty.
This is distinct from “quantum mind” theories like Penrose-Hameroff’s Orch OR, which posit that neurons perform quantum computations. Those remain, to the best of my knowledge, rather speculative and controversial (→ see here). In what follows, we deal only with what Part I called non-classical behaviors without quantum brains.
The “quantum toolkit” for the math of cognition
Hilbert spaces and state vectors
In CP, uncertainty is modeled in a sample space Ω, with probabilities assigned to subsets. In quantum probability, beliefs are represented in a Hilbert space H.
The idea is to represent a belief state — in some literature also referred to as a mental state, regardless of the underlying biological mechanism — as a vector |Ψ⟩ in the Hilbert space. Here, let’s keep the terminology less fancy and generally applicable to a variety of situations, not necessarily cognitive ones. Unlike a distribution over definite states, the vector |Ψ⟩ can be a linear combination (superposition) of basis states:

\(|\Psi\rangle = \alpha|A\rangle + \beta|B\rangle\)

Here, |A⟩ and |B⟩ might represent mutually exclusive judgments (e.g., “yes” vs. “no”), while the coefficients α and β encode the potentiality of both (note that, in general, these are complex — not real — numbers). The dual of our belief state is the vector

\(\langle\Psi| = \alpha^{*}\langle A| + \beta^{*}\langle B|\)

so that

\(\langle\Psi|\Psi\rangle = |\alpha|^2 + |\beta|^2 = 1\)

since ⟨A|B⟩ = ⟨B|A⟩ = 0 (because |A⟩ and |B⟩ were supposed to be mutually exclusive) and ⟨A|A⟩ = ⟨B|B⟩ = 1, i.e., the basis is orthonormal in this language.
So far we have only ensured normalization, i.e. that the total “weight” of all possibilities adds up to one. But once the state is normalized, the coefficients acquire a probabilistic interpretation: if we “measure” with respect to |A⟩, the probability of obtaining outcome A is obtained as
\(p(A) = |\langle A|\Psi\rangle|^2 = |\alpha|^2\)
Similarly, the probability of obtaining outcome B is \(p(B) = |\langle B|\Psi\rangle|^2 = |\beta|^2\). We can further generalize this!
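To make the Born rule concrete, here is a minimal numeric sketch (assuming Python with NumPy; the amplitudes are hypothetical values chosen only for illustration):

```python
import numpy as np

# Belief state |Psi> = alpha|A> + beta|B> in a 2-dimensional Hilbert space.
# The amplitudes are hypothetical; note beta is complex, as allowed in general.
alpha, beta = 0.6, 0.8j            # |0.6|^2 + |0.8|^2 = 1: already normalized
psi = np.array([alpha, beta])

A = np.array([1.0, 0.0])           # basis state |A>
B = np.array([0.0, 1.0])           # basis state |B>

# Born rule: p(outcome) = |<outcome|Psi>|^2
p_A = abs(np.vdot(A, psi)) ** 2    # np.vdot conjugates its first argument
p_B = abs(np.vdot(B, psi)) ** 2

print(p_A, p_B, p_A + p_B)         # p_A ≈ 0.36, p_B ≈ 0.64, and they sum to 1
```

The same two lines work unchanged for any unit vector in place of |A⟩ or |B⟩, which is exactly the generalization to an arbitrary question state |Q⟩ discussed next.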
More generally, if we introduce a new state |Q⟩ (for a different question or context), then
\(p(Q) = |\langle Q|\Psi\rangle|^2\)
which corresponds to the squared overlap between the belief state |Ψ⟩ and the direction defined by |Q⟩. It can be useful to see this step as the application of a dedicated operator (e.g., imagine a matrix) to the belief state.
We are now ready to make the interference effect appear — that is, where CP breaks down. Suppose |Q⟩ is not aligned with either |A⟩ or |B⟩, but is instead a linear combination:

\(|Q\rangle = \cos\theta\,|A\rangle + \sin\theta\,|B\rangle\)

where θ is the angle between the “question vector” and the |A⟩ axis in Hilbert space. Then, projecting |Ψ⟩ onto |Q⟩ literally means calculating the following scalar product:

\(p(Q) = |\langle Q|\Psi\rangle|^2 = |\alpha|^2\cos^2\theta + |\beta|^2\sin^2\theta + 2\cos\theta\sin\theta\,\mathrm{Re}(\alpha\beta^{*})\)
The first two terms look exactly like a weighted average of classical probabilities, while the last term — the interference term — has no analogue in classical probability. It can be positive (constructive interference) or negative (destructive interference), and it is precisely what allows non-classical (e.g., quantum) probability to break the classical law of total probability and model behavioral paradoxes like the disjunction effect or the conjunction fallacy.
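This decomposition is easy to verify numerically; here is a small sketch (Python with NumPy), with hypothetical values of α, β and θ chosen only for illustration:

```python
import numpy as np

theta = np.pi / 5                                         # hypothetical angle of |Q>
alpha = np.sqrt(0.7)                                      # hypothetical amplitudes;
beta = np.sqrt(0.3) * np.exp(1j * np.pi / 3)              # beta carries a phase

psi = np.array([alpha, beta])                             # |Psi> = alpha|A> + beta|B>
Q = np.array([np.cos(theta), np.sin(theta)])              # |Q> = cos(t)|A> + sin(t)|B>

p_quantum = abs(np.vdot(Q, psi)) ** 2                     # Born rule, directly

# Classical mixture: weighted average of the two "paths", with no cross-term
p_classical = abs(alpha) ** 2 * np.cos(theta) ** 2 + abs(beta) ** 2 * np.sin(theta) ** 2

# The interference term that classical probability misses
interference = 2 * np.cos(theta) * np.sin(theta) * (alpha * beta.conjugate()).real

print(p_quantum, p_classical + interference)              # the two agree exactly
```

With these values the interference term is nonzero, so the quantum probability visibly deviates from the classical weighted average.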
Note. If the idea of representing thoughts as vectors still feels odd, consider how we already do something very similar in machine learning. Large Language Models, for instance, use word embeddings: each word or concept is represented as a vector in a high-dimensional space.
Trained Word2Vec Vectors with Semantic and Syntactic relationship. Credits: Nikhil Birajdar
The meaning of a word is captured not in isolation, but through its position relative to others. By adding or combining these vectors, we can generate new meanings — like how queen is often represented as woman + royalty.
This is very close to what happens in quantum probability, but with one crucial difference. In embeddings, the vector operations are purely geometric: dot products or cosine similarities tell us how close two concepts are, but there is no notion of probability attached to the squared length of a vector. By contrast, in nCP the squared amplitude has a direct probabilistic meaning, given by Born’s rule. And while embeddings combine linearly, quantum states can interfere with each other, producing cross-terms that increase or suppress the final probability of an outcome.
Finally, embeddings behave classically: the order in which you combine vectors does not matter. Quantum models break this symmetry: the order of operations is fundamental, because some questions are incompatible. This is fundamentally why nCP can account for paradoxes like the conjunction fallacy or order effects in surveys, while embeddings cannot.
Of course, this is just an analogy: LLM embeddings are entirely classical. While they help us build intuition, they do not exhibit the interference or non-commutativity that make nCP uniquely suited to capture cognitive paradoxes, and any attempt to connect them to cognitive features remains, to date, largely debated (→ see this paper).
Note to the note. If the previous note triggered the idea that one could capitalize on nCP rules to build a quantum version of LLMs: I think that’s crazy enough, yet someone actually did it very recently (→ see the pre-print).
Since this is the core of the whole proposal for the application of nCP to cognition, let’s recap:
Measurement as projection. When a person is asked a question, this can be modeled as applying a projection operator P_A to the state vector |Ψ⟩ and, according to Born’s rule, the probability of obtaining answer A is

\(p(A) = \| P_A |\Psi\rangle \|^2 = \langle\Psi| P_A |\Psi\rangle\)

formalizing the idea that answering is constructive: the act of measurement collapses potentiality into a definite outcome.
Incompatibility and non-commutativity. Two questions, represented by projectors P_A and P_B, are called compatible if
\(P_A P_B = P_B P_A \)
If they do not commute¹, the questions are incompatible. In that case, asking A first changes the state in a way that alters the outcome of B, and vice versa: this nicely explains why the order of survey questions can bias human judgments (→ Wang et al, 2014).
Interference. If the same outcome Q can be reached via two incompatible “paths” (e.g., through states |A⟩ and |B⟩), the amplitude is a superposition:

\(p(Q) = \left|\langle Q|A\rangle\langle A|\Psi\rangle + \langle Q|B\rangle\langle B|\Psi\rangle\right|^2 = p_A + p_B + 2\sqrt{p_A p_B}\cos\theta\)
where θ is the relative phase between the two contributions. Here, the last term is, again, the interference: as we have mentioned above, it can be constructive (cosθ > 0) or destructive (cosθ < 0), and it has no analogue in classical probability. This is what allows non-classical probability to explain violations of the law of total probability observed in behavioral data (→ Pothos, 2009).
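The non-commutativity in the recap above can be checked numerically. Here is a minimal sketch (Python with NumPy) with two hypothetical question axes at 45 degrees; all angles are chosen only for illustration:

```python
import numpy as np

def projector(v):
    """Rank-1 projector |v><v| onto the direction of vector v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# Two hypothetical "question" axes that are not aligned
P_A = projector(np.array([1.0, 0.0]))
P_B = projector(np.array([1.0, 1.0]))

# The projectors do not commute: P_A P_B != P_B P_A
print(np.allclose(P_A @ P_B, P_B @ P_A))       # False

psi = np.array([np.cos(0.3), np.sin(0.3)])     # a hypothetical belief state

# Sequential "measurements": asking A first vs. B first yields different numbers
p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2
p_B_then_A = np.linalg.norm(P_A @ P_B @ psi) ** 2
print(p_A_then_B, p_B_then_A)                  # the order of questions matters
```

Had the two axes been aligned (or orthogonal), the projectors would commute and the two sequential probabilities would coincide: incompatibility is a geometric property of the question directions.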
Explaining the puzzles in this framework
Let us consider the conjunction fallacy in Linda’s problem. In the classic experiment, participants respond that “Linda is a bank teller (B) and is active in the feminist movement (F)” is more likely than “Linda is a bank teller (B)”, even though classical probability requires
\(p(B \land F) \leq p(B) \)
In the nCP framework, the two judgments — “feminist” (F) and “bank teller” (B) — are treated as incompatible questions. That means their projectors P_F and P_B do not commute. As we have learned above, the direct probability of B is given by projecting the belief state onto |B⟩:
\(p(B) = |\langle B|\Psi\rangle|^2 \)
The conjunction is instead modeled sequentially: first project onto the F subspace, then onto the B subspace:
\(p(F \text{ then } B) = | P_B P_F |\Psi\rangle |^2 \)
Because the first projection changes the state, it can increase the overlap with |B⟩: in other words, once Linda is mentally framed as a feminist, the likelihood of also judging her a bank teller shifts upward. Accordingly, it becomes possible to have
\(p(F \text{ then } B) > p(B)\)
a result impossible in CP but natural in nCP. What appears as a logical “fallacy” is here reinterpreted as a lawful consequence of sequential, context-dependent projections.
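A toy numerical version of this argument, with hypothetical angles for the F and B axes and for the belief state (the specific values are my own illustration, not fitted to any data):

```python
import numpy as np

def unit(angle):
    """Unit vector at a given angle in a real 2D Hilbert space."""
    return np.array([np.cos(angle), np.sin(angle)])

B = unit(0.0)                       # "bank teller" axis
F = unit(np.deg2rad(45))            # "feminist" axis (hypothetical angle)
psi = unit(np.deg2rad(80))          # belief state: far from |B>, close to |F>

P_B = np.outer(B, B)                # projectors onto the two judgments
P_F = np.outer(F, F)

p_B = np.linalg.norm(P_B @ psi) ** 2                 # direct judgment, ~0.03
p_F_then_B = np.linalg.norm(P_B @ P_F @ psi) ** 2    # sequential judgment, ~0.34

print(p_F_then_B > p_B)             # True: "F then B" exceeds B alone
```

The first projection rotates the state toward |F⟩, from which |B⟩ is much easier to reach: that is the geometric content of the "fallacy".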
In a similar way we can work out Tversky–Shafir’s gamble experiment. Recall that in a two-stage gambling task, participants could either win $200 or lose $100 on the first gamble, and then were asked whether they would play again. Results show that:
If they knew they had won, about 69% chose to play again
If they knew they had lost, about 59% chose to play again
If the outcome was unknown, only 36% chose to play again
Classical probability predicts that the “unknown” case should fall between the two known cases (around 64%). Instead, it is far lower: a clear violation of the Sure-Thing Principle.
In the nCP framework, the “unknown” condition corresponds to a superposition of the “win” and “loss” states, so that the law of total probability acquires a cross-term:

\(p(\text{play}) = \tfrac{1}{2}\,p(\text{play}\mid \text{win}) + \tfrac{1}{2}\,p(\text{play}\mid \text{loss}) + \text{interference}\)

Here the cross-term is negative (destructive interference), which suppresses the overall probability and explains why the willingness to play again drops so sharply in the uncertain case. Once again, what looks like a paradox in classical terms emerges as a natural consequence of interference in non-classical probability.
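As a sketch, we can reproduce the numbers above by fitting the relative phase of the cross-term; note that the phase is a free parameter of this illustration, not something predicted in advance:

```python
import numpy as np

p_win, p_lose = 0.69, 0.59        # observed rates when the outcome is known
p_unknown = 0.36                  # observed rate when the outcome is unknown

# Classical law of total probability for an equal-odds first gamble:
p_classical = 0.5 * p_win + 0.5 * p_lose
print(p_classical)                # ≈ 0.64, far above the observed 0.36

# Quantum-like model: add an interference cross-term with relative phase phi.
# The phase is fitted to the data here (an assumption of this sketch).
cross = np.sqrt(0.5 * p_win) * np.sqrt(0.5 * p_lose)    # product of amplitudes
phi = np.arccos((p_unknown - p_classical) / (2 * cross))
p_quantum = p_classical + 2 * cross * np.cos(phi)
print(round(p_quantum, 2))        # 0.36: destructive interference recovers it
```

The point is not the fit itself but that a single cross-term of the allowed magnitude is enough to bridge the 0.64 vs. 0.36 gap that classical probability cannot close.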
Not enough? One of the strongest empirical successes of nCP comes from survey research (→ Wang et al, 2014). When respondents were asked “Is Bill Clinton honest?” followed by “Is Al Gore honest?”, Clinton’s ratings came out lower and Gore’s higher. But when the order was reversed, Clinton’s ratings improved and Gore’s dropped.
In classical probability, this is puzzling because the order of events should not matter. nCP resolves this with the concept of incompatible questions: the belief state is disturbed by the first question, which changes the context for the second. Even more striking is that nCP makes a parameter-free, quantitative prediction called the Quantum Question (QQ) equality (honestly, I am not very happy about the terminology there, but I am following the authors’). For two yes/no questions, such as the Feminist (F) and Bank Teller (B) seen before, the equality states:

\(p(F_y B_y) + p(F_n B_n) = p(B_y F_y) + p(B_n F_n)\)

The left-hand side sums the “diagonal” probabilities (yes–yes and no–no) when F is asked first; the right-hand side sums the corresponding “diagonals” when B is asked first.
Classically, there is no reason why these sums should balance: but in the non-classical (quantum) framework, the structure of Hilbert space and the properties of projection operators guarantee this equality. Most importantly, this prediction has been empirically confirmed: across 70 national surveys and multiple laboratory studies, the QQ equality holds with remarkable accuracy, without requiring free parameters or fitting.
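That the equality follows from the structure of projectors alone, with no free parameters, can be checked numerically for arbitrary (randomly generated, hypothetical) projectors and states; a minimal sketch in Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_projector(dim, rank):
    """Random rank-r projector, built from an orthonormalized random complex matrix."""
    M = rng.normal(size=(dim, rank)) + 1j * rng.normal(size=(dim, rank))
    Q, _ = np.linalg.qr(M)
    return Q @ Q.conj().T

def seq(P1, P2, psi):
    """p(yes to question 1, then yes to question 2) for a pure state psi."""
    return np.linalg.norm(P2 @ P1 @ psi) ** 2

dim = 4
P_A = rand_projector(dim, 1)
P_B = rand_projector(dim, 2)
I = np.eye(dim)

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# Diagonal terms (yes-yes plus no-no) in both question orders
lhs = seq(P_A, P_B, psi) + seq(I - P_A, I - P_B, psi)
rhs = seq(P_B, P_A, psi) + seq(I - P_B, I - P_A, psi)
print(np.isclose(lhs, rhs))       # True, for any projectors and any state
```

Expanding the projector products shows why: both sides reduce to the expectation of the same operator, I − P_A − P_B + P_A P_B + P_B P_A, which is symmetric under swapping A and B.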
Supplement. OK, this is not straightforward, so let’s try to spend a few more words. Imagine I ask you two everyday questions about someone you don’t know well:
Do they like pizza?
Do they like broccoli?
If I ask about pizza first, you might picture someone easygoing, and then broccoli feels less likely. If I ask about broccoli first, you might picture someone health-conscious, and then pizza feels more likely. So the order of questions changes your answers.
Here’s the surprising part: when researchers tested this across many surveys, they found a balance rule. The number of “yes–no” answers in one order always matches the number of “yes–no” answers in the other order. That balance is the Quantum Question (QQ) equality, showing that order effects are not random noise but follow a hidden structure.
It is important to remark that this striking experimental confirmation comes primarily from political and attitude judgment surveys, where order effects follow the balance rule predicted by the QQ equality. However, extending this result to classic judgment tasks such as the conjunction fallacy (e.g., Linda’s case) is more problematic. In these models, incompatibility between Feminist and Bank Teller theoretically requires the QQ equality to hold. Yet, dedicated empirical tests on conjunction fallacy scenarios have shown that this equality can be statistically violated. This suggests that while quantum-like models elegantly capture order effects in many domains, they may not fully account for the general phenomenon of the conjunction fallacy.
Take home message(s)
Besides the achievements, widely discussed in our two essays, there are also several challenges.
First, flexibility. Some argue nCP (or QP, in their terminology) can fit “too much,” risking post-hoc modeling. Second, empirical refutations: Boyer-Kassem et al. (2016) showed violations of reciprocity and QQ in some conjunction fallacy tasks. Third, and likely the most important, the mechanistic gap: QP remains agnostic about neural processes. Bridging abstract models with neurophysiology is an open frontier.
There are also philosophical implications: if cognition follows non-classical rules, then what CP labels as errors might be rational under a richer, context-sensitive logic. This shifts the question from “are humans rational?” to “what standard of rationality applies?”.
It is worth stressing once again that the unreasonable effectiveness of quantum mathematics in cognitive science does not prove a quantum brain. It does suggest that the rigid framework of classical probability is insufficient for modeling thought under uncertainty.
The promise of nCP is to transform paradoxes into predictable patterns. The risk is overreach, confusing an elegant formalism with a theory of consciousness. The challenge ahead is to map the boundary conditions: where classical models suffice, where quantum-like models are indispensable, and how to keep the two worlds properly separated.
In the meantime, I will continue to poetically tag this as the unreasonable math of decision-making.
Quantum games: when rational choice meets entanglement
Please, allow me this short detour.
Game theory is often introduced as the mathematical study of rational decision-making, where agents are assumed to maximize payoffs under clear rules. The Prisoner’s Dilemma epitomizes this logic: defection is the unique Nash equilibrium, even though mutual cooperation would yield a better outcome for both players.
Historically, Eisert, Wilkens, and Lewenstein were the first to propose a quantum formulation of the Prisoner’s Dilemma, where strategies are represented in a Hilbert space and can become entangled. Their work was conceived as a theoretical extension of game theory into the quantum domain, not as a claim about irrational human agents. Only later did researchers — including those in quantum cognition and decision theory discussed above — begin to explore whether the same formalism could illuminate the paradoxes of human behavior, as discussed throughout our two essays. That’s why I think that a dedicated section to this foundational work is due.
Remarkably, they have shown that within a certain range of entanglement, a new equilibrium emerges: cooperation becomes both the rational and stable outcome. The switch from defection to cooperation resembles a phase transition, with cooperation possible only beyond a critical entanglement threshold.
This raised an immediate question: does quantum mechanics fundamentally alter the logic of game theory? Benjamin and Hayden challenged the result, arguing that the cooperative equilibrium was an artifact of restricting the players’ strategy space, although Eisert and colleagues defended their choice, maintaining that the restriction ensured physical consistency and still revealed a genuine quantum effect.
The broader lesson here is that once context, incompatibility and entanglement are allowed into the formalism, the predictions of classical rational choice theory can shift dramatically. What was once an inescapable dilemma (defection) may turn into a cooperative equilibrium. Very recent work using agent-based models has reinforced this insight, showing how cooperation can spread in large populations of quantum players, again with transitions akin to those in statistical physics (→ see the paper).
In this sense, quantum game theory acts as a stress test for classical rationality: it illustrates how the rigid predictions of standard game theory may break down when decision-making is modeled with a richer, context-sensitive probability framework — the very same motivation driving the field of quantum cognition.
Surely quantum game theory deserves a dedicated post in the future; here I leave just three interesting papers:
1. Since we are working with operators, i.e., the projectors P are not scalar numbers, the commutative property is far from guaranteed (another distinctive feature of quantum mechanics).
Recommended from a reader:
https://royalsocietypublishing.org/doi/10.1098/rsta.2015.0099
within a whole special issue on this matter: https://royalsocietypublishing.org/toc/rsta/2016/374/2058