Competing with pseudoscience in algorithmic ecosystems
The complex trade-off between the abundance of information and the scarcity of attention
If you find value in #ComplexityThoughts, consider helping it grow by subscribing and sharing it with friends, colleagues or on social media. Your support makes a real difference.
→ Don’t miss the podcast version of this post: click on “Spotify/Apple Podcast” above!
I recently posted on LinkedIn to ask: how can scientists compete with pseudoscience in algorithmic ecosystems?
In the digital attention economy, scientists aren’t just explaining their work: they’re competing with mystics, hype-merchants and self-anointed “experts” selling simplified truths for clicks. Unfortunately — and sadly — the algorithms don’t care who’s right; they care who’s profitable.
Structurally, most — if not all — major platforms are designed to reward monetization. Functionally, this creates feedback loops: content that drives subscriptions or ad revenue gets amplified, while equally rigorous, open-access science often languishes unseen. This is not even a problem of content quality: there are excellent science communicators out there whose posts are beyond reproach. Rather, in this environment, truth competes on an uneven playing field against narratives engineered for virality. In a nutshell:
engineered narratives engage users the most → engaged users keep using the platform and their time can be monetized
and, as Tim Wu once observed, human attention has become the defining industry of our time.
A toy computational model
To show non-expert readers the power of networks, I wrote a simple simulator. It models how attention to online content spreads through a social network, where some users produce posts and others consume and occasionally reshare them.
Two types of content — pseudoscience and science — compete for visibility, with differences in production rate, perceived attractiveness and how far posts travel through the network. The model uses a clustered network structure and simulates reading decisions step by step, showing how patterns of exposure and resharing shape the reach of each content type. Animated “waves” of colored nodes reveal which type dominates attention at different points in time. Here, blue encodes science and red encodes pseudoscience.
I distributed roles using the 80-20 rule: in a small universe of 1,000 users, 20% are producers and 80% are consumers.
CASE A. When science producers are the large majority (here, 20% post about pseudoscience and 80% about science), science gains a structural advantage in seeding the network, allowing it to dominate attention flows even though pseudoscience is simulated as more prolific and attractive per producer:
CASE B. With producers evenly split between pseudoscience and science (here, 50%-50%), but pseudoscience slightly more prolific and attractive, attention tends to drift toward pseudoscience clusters, leading to its broader reach despite equal starting representation.
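The original simulator is not included in the post; the sketch below reproduces the same idea in miniature. All numbers here (posting rates, reshare probabilities, the ring-plus-shortcuts network wiring) are illustrative assumptions of mine, not the parameters of the actual model. With `SCIENCE_SHARE = 0.8` it corresponds to Case A; setting it to `0.5` gives the even split of Case B.

```python
import random

random.seed(42)

N = 1000                     # users
N_PRODUCERS = int(0.2 * N)   # 80-20 rule: 20% produce, 80% consume
SCIENCE_SHARE = 0.8          # Case A: 80% of producers post science

# Pseudoscience is assumed more prolific and more attractive per producer
POST_RATE = {"science": 0.3, "pseudo": 0.5}    # posting probability per step
ATTRACT   = {"science": 0.15, "pseudo": 0.25}  # reshare probability per reader

# Clustered network: ring lattice (local clustering) plus random shortcuts
neighbors = {u: set() for u in range(N)}
for u in range(N):
    for k in range(1, 4):                # link to 3 nearest ring neighbors
        v = (u + k) % N
        neighbors[u].add(v); neighbors[v].add(u)
    if random.random() < 0.1:            # occasional long-range shortcut
        v = random.randrange(N)
        if v != u:
            neighbors[u].add(v); neighbors[v].add(u)

producers = random.sample(range(N), N_PRODUCERS)
kind = {u: ("science" if i < SCIENCE_SHARE * N_PRODUCERS else "pseudo")
        for i, u in enumerate(producers)}

reach = {"science": 0, "pseudo": 0}      # cumulative exposures per content type
for step in range(50):
    # producers seed posts to their direct neighbors
    wave = []                            # (reader, content_type) exposures
    for u in producers:
        if random.random() < POST_RATE[kind[u]]:
            wave.extend((v, kind[u]) for v in neighbors[u])
    # exposed users read, and occasionally reshare one hop further
    for v, c in wave:
        reach[c] += 1
        if random.random() < ATTRACT[c]:
            reach[c] += len(neighbors[v])

total = sum(reach.values())
for c in ("science", "pseudo"):
    print(f"{c}: {reach[c] / total:.0%} of attention")
```

Even with pseudoscience posting more often and getting reshared more readily, the 80% seeding advantage lets science dominate cumulative attention in this run, consistent with Case A.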
The conclusion is easy to draw: according to this “simple” model (which, of course, does not capture the full complexity of reality), being evenly split is not enough. Science communication must be more frequent and more attractive, and ideally we need more science communicators than pseudoscience ones.
One might think that I am advocating for specific alternative approaches; in reality, there is no single clear solution yet. Still, I am perfectly aware of the problems of spreading (mis|dis)information in online social networks:
In the past, I have also written about the evidence supporting over-optimistic or catastrophic impacts of social media on our democracy:
To test this in practice, I recently altered the subscription structure of #ComplexityThoughts. Not to lock knowledge behind a paywall, nor to “monetize” science, but to test a hypothesis: will even a minimal paid tier trigger algorithmic signals that increase visibility for evidence-based content? All new posts remain open for ten weeks — longer than any monetization playbook would recommend — yet the platform now registers the existence of a revenue stream.
I am not alone in worrying about Substack specifically, and about the fight for attention between credible authors and crackpots. And the issue certainly exists beyond science and beyond Substack.
Cascading Implications
We should always keep in mind the following important points when assessing the attention economy in online platforms:
Algorithmic bias toward monetization: platforms are not neutral infrastructures; they optimize for their own revenue. If credible content does not fit that model, it risks invisibility. Platforms thus reward behaviors aligned with revenue proxies (e.g., reshares and engagement-maximizing ranking and feeds).
Asymmetric competition: pseudoscience thrives here because its producers can optimize purely for attention, unconstrained by peer review or factual accuracy. This is a decisive advantage over science communicators who play by the rules — evidence, accuracy, transparency — that slow production but build trust.
Inherent attention advantage: sensational falsehoods outcompete careful science communication, and attention-first strategies favor provocateurs over evidence-first communicators.
Erosion of public trust: as misleading content propagates, it not only misinforms but also polarizes, making it harder for accurate knowledge to coordinate collective action. Misinformation, from health to science, erodes trust and degrades social coordination.
Need for resilient science ecosystems: without strategies to adapt to platform incentives, credible knowledge risks becoming a niche, while misinformation remains mainstream.
Pathways toward a solution
Platforms, financial interests, human behavior at individual and collective levels, interconnectedness: the perfect mix of ingredients for a complex system that requires systemic solutions well beyond linear thinking.
I have no clear solutions to this problem, but we need more research on this matter.
We need to develop reputation-weighted ranking signals that favor accuracy over virality, and to implement integrity audits for algorithmic curation. Surely we cannot rely on “AI” alone: we need humans.
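As a toy illustration of what a reputation-weighted signal could look like (the function, the weights, and the example posts below are hypothetical, not any platform's actual algorithm), one can blend a source-reputation score with normalized engagement, so that accuracy can offset raw virality:

```python
def ranked_feed(posts, alpha=0.6):
    """Rank posts by alpha * reputation + (1 - alpha) * normalized engagement.

    alpha sets the trade-off: 1.0 ranks purely on reputation,
    0.0 reproduces a pure engagement (virality) ranking.
    """
    max_eng = max(p["engagement"] for p in posts) or 1
    def score(p):
        return alpha * p["reputation"] + (1 - alpha) * p["engagement"] / max_eng
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "viral-myth",    "engagement": 900, "reputation": 0.2},
    {"id": "careful-study", "engagement": 300, "reputation": 0.9},
]

for p in ranked_feed(posts):
    print(p["id"])  # careful-study ranks first despite 3x lower engagement
```

The point of the sketch is the design choice, not the numbers: once reputation enters the score with enough weight, an evidence-based post no longer needs to win the virality contest outright to be visible. Auditing how such weights are set is exactly where human oversight, rather than “AI”, comes in.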
In terms of policies, I think that policy makers should push for, or at least encourage, transparency requirements for ranking algorithms, including disclosure of monetization bias. If we all know the rules of the game, we can play accordingly.
Finally, what about building consortia of credible science communicators to share and cross-amplify content, countering the scale advantage of misinformation networks?
I am perfectly aware of the potential risks of gatekeeping and the emergence of rich clubs, but we can mathematically design mechanisms to deal with these issues in a transparent way (see this example, which I worked out with some colleagues to propose an alternative to the current scientific publishing system).
Take-home message
The question is not whether we should adapt to algorithmic ecosystems: it is whether we can do so without surrendering the openness and integrity of science. We need to design better probes into a larger problem: how to make credible, evidence-based content algorithmically competitive. If we don’t find systemic solutions, we risk leaving truth in the shadows while the loudest myths take center stage.
→ Remember: if you find value in #ComplexityThoughts, consider helping it grow by subscribing, or by sharing it with friends, colleagues or on social media. See also this post to learn more about this space.