If you find Complexity Thoughts interesting, follow us! Click on the Like button, leave a comment, repost on Substack or share this post. It is the only feedback I get for this free service. The frequency and quality of this newsletter rely on social interactions. Thank you!
Dall-e 3 representation of this issue’s content
Foundations of network science and complex systems
The Ontology of Complex Systems: Levels of Organization, Perspectives, and Causal Thickets
This chapter (click here for a direct link to the pdf) is not easy to read if you are looking for formal definitions and plots, but it builds some interesting arguments about complexity and proposes a bold attempt to define a “phylogenetic ontology of our world as we see it”.
Ecosystems
A tiny fraction of all species forms most of nature: Rarity as a sticky state
Using data from a wide range of natural communities including the human microbiome, plants, fish, mushrooms, rodents, beetles, and trees, we show that universally just a few percent of the species account for most of the biomass. This is in line with the classical observation that the vast bulk of biodiversity is very rare. Attempts to find traits allowing the tiny fraction of abundant species to escape rarity have remained unsuccessful. Here, we argue that this might be explained by the fact that hyper-dominance can emerge through stochastic processes. We demonstrate that in neutrally competing groups of species, rarity tends to become a trap if environmental fluctuations result in gains and losses proportional to abundances. This counter-intuitive phenomenon arises because absolute change tends to zero for very small abundances, causing rarity to become a “sticky state”, a pseudoattractor that can be revealed numerically in classical ball-in-cup landscapes. As a result, the vast majority of species spend most of their time in rarity leaving space for just a few others to dominate the neutral community. However, fates remain stochastic. Provided that there is some response diversity, roles occasionally shift as stochastic events or natural enemies bring an abundant species down allowing a rare species to rise to dominance. Microbial time series spanning thousands of generations support this prediction. Our results suggest that near-neutrality within niches may allow numerous rare species to persist in the wings of the dominant ones. Stand-ins may serve as insurance when former key species collapse.
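The “sticky rarity” mechanism is easy to reproduce in silico. Here is a minimal toy sketch of my own (not the authors’ model): a neutral community in which each species’ gains and losses are proportional to its abundance (multiplicative noise), normalized each step to mimic zero-sum competition for a fixed total biomass.

```python
import numpy as np

rng = np.random.default_rng(0)
n_species, n_steps = 50, 20_000

# Start from equal relative abundances.
x = np.full(n_species, 1.0 / n_species)

for _ in range(n_steps):
    # Environmental fluctuations: gains and losses proportional to
    # abundance, i.e. multiplicative noise on each species.
    x *= np.exp(rng.normal(0.0, 0.1, n_species))
    x /= x.sum()  # zero-sum (neutral) competition for fixed total biomass

# A few species end up holding almost all of the biomass.
x_sorted = np.sort(x)[::-1]
top_5 = x_sorted[:5].sum()
print(f"top 5 of {n_species} species hold {top_5:.0%} of total abundance")
```

Because the absolute change tends to zero as abundance shrinks, species that drift into rarity tend to stay there, leaving a handful of hyper-dominant ones — exactly the pseudoattractor the abstract describes.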
Rate-induced tipping in complex high-dimensional ecological networks
If you don’t know what R-tipping points are (as opposed to B- and N-tipping points), you might want to read this paper first.
to avoid climate-change-induced species extinction, it would be necessary to ensure that no parameters change with time, and this may pose an extremely significant challenge in our efforts to protect and preserve the natural environment.
Human activities are having increasingly negative impacts on natural systems, and it is of interest to understand how the “pace” of parameter change may lead to catastrophic consequences. This work studies the phenomenon of rate-induced tipping (R-tipping) in high-dimensional ecological networks, where the rate of parameter change can cause the system to undergo a tipping point from healthy survival to extinction. A quantitative scaling law between the probability of R-tipping and the rate was uncovered, with a striking and devastating consequence: In order to reduce the probability, parameter changes must be slowed down to such an extent that the rate practically reaches zero. This may pose an extremely significant challenge in our efforts to protect and preserve the natural environment.
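For intuition about what “rate-induced” means, here is the standard one-dimensional prototype of R-tipping (a saddle-node normal form with a ramped parameter — my illustrative sketch, not the high-dimensional networks studied in the paper). The total parameter shift is identical in both runs; only its rate differs.

```python
import numpy as np

def tips(r, dt=1e-3, t_max=40.0, lam_max=20.0):
    """Euler-integrate dx/dt = (x + lam)^2 - 1 with lam ramping at rate r.
    Returns True if x escapes (rate-induced tipping), False if it tracks
    the moving stable state x = -lam - 1."""
    x, lam = -1.0, 0.0            # start on the stable equilibrium
    for _ in range(int(t_max / dt)):
        lam = min(lam + r * dt, lam_max)
        x += ((x + lam) ** 2 - 1.0) * dt
        if x > 10.0:              # escaped past the unstable branch
            return True
    return False

# For this prototype the critical rate is r = 1: "how far" the parameter
# moves is the same in both runs; only "how fast" differs.
print(tips(0.5))   # slow ramp: the system tracks, no tipping -> False
print(tips(1.5))   # fast ramp: rate-induced tipping -> True
```

The bifurcation diagram is unchanged by the ramp, which is why R-tipping is invisible to the usual quasi-static analysis — and why the paper’s conclusion that only near-zero rates are safe is so unsettling.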
Transient phenomena in ecology
Ecological systems can switch between alternative dynamic states. For example, the species composition of the community can change or nutrient dynamics can shift, even if there is little or no change in underlying environmental conditions. Such switches can be abrupt or more gradual, and a growing number of studies examine the transient dynamics between one state and another—particularly in the context of anthropogenic global change. Hastings et al. review current knowledge of transient dynamics, showing that hitherto idiosyncratic and individual patterns can be classified into a coherent framework, with important general lessons and directions for future study.
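A textbook example of a long transient (my sketch, not taken from the review) is the “ghost” left behind just past a saddle-node bifurcation: the equilibrium is gone, yet trajectories still linger for a long time where it used to be.

```python
import numpy as np

def transit_time(mu, x0=-2.0, x_end=2.0, dt=1e-3):
    """Time for dx/dt = mu + x^2 to cross from x0 to x_end.
    Just past a saddle-node bifurcation (small mu > 0) the flow
    lingers near the 'ghost' of the vanished equilibrium at x = 0."""
    x, t = x0, 0.0
    while x < x_end:
        x += (mu + x * x) * dt
        t += dt
    return t

t1, t2 = transit_time(0.01), transit_time(0.0001)
# The bottleneck passage time scales as pi / sqrt(mu), so shrinking
# mu by a factor of 100 lengthens the transient roughly 10x.
print(f"mu=1e-2: {t1:.1f}, mu=1e-4: {t2:.1f}, ratio {t2 / t1:.1f}")
```

This is one concrete way a system can sit in an apparently stable state for a long time, and then switch with little or no change in the underlying conditions.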
Neuroscience
Inferring neural activity before plasticity as a foundation for learning beyond backpropagation
The plasticity-stability dilemma is about how the brain manages to stay flexible enough to learn and adapt, while also maintaining enough stability to preserve identity and long-term function.
Neural plasticity refers to the brain's ability to change and adapt in response to stimuli, and is achieved by strengthening/weakening synapses or forming new synaptic connections. In practice, plasticity is essential for learning, adapting to new environments, and for the recovery of the brain after injury.
Neural stability relates to the brain's need to maintain a certain level of constancy and reliability in its functioning. Stability ensures that once neural circuits are established, they remain effective and do not change too easily. This is important for maintaining long-term memories, established skills, and consistent behavior.
The dilemma: too much plasticity can lead to instability, since neural circuits are constantly changing and can’t retain information or maintain consistent behavior. Too little plasticity results in a brain that is too “rigid”, impeding learning and adaptation to new situations. But (most of the time) our brain works like a charm…
Our theory addresses a long-standing question of how the brain solves the plasticity-stability dilemma, for example, how it is possible that, despite adjustment of representation in the primary visual cortex during learning, we can still understand the meaning of visual stimuli we learned over our lifetime
For both humans and machines, the essence of learning is to pinpoint which components in its information processing pipeline are responsible for an error in its output, a challenge that is known as ‘credit assignment’. It has long been assumed that credit assignment is best solved by backpropagation, which is also the foundation of modern machine learning. Here, we set out a fundamentally different principle on credit assignment called ‘prospective configuration’. In prospective configuration, the network first infers the pattern of neural activity that should result from learning, and then the synaptic weights are modified to consolidate the change in neural activity. We demonstrate that this distinct mechanism, in contrast to backpropagation, (1) underlies learning in a well-established family of models of cortical circuits, (2) enables learning that is more efficient and effective in many contexts faced by biological organisms and (3) reproduces surprising patterns of neural activity and behavior observed in diverse human and rat learning experiments.
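The “infer first, then change weights” idea can be sketched with a tiny linear predictive-coding network. This is my own minimal caricature, not the paper’s models: clamp the output to the target, relax the hidden activity toward its prospective configuration by descending a prediction-error energy, and only then consolidate that activity into the weights.

```python
import numpy as np

rng = np.random.default_rng(1)
# Tiny linear predictive-coding network: input -> hidden -> output.
W1 = rng.normal(0, 0.5, (3, 2))
W2 = rng.normal(0, 0.5, (1, 3))

def train_step(x_in, target, lr_x=0.05, lr_w=0.2, n_relax=100):
    global W1, W2
    x1 = W1 @ x_in               # initialize hidden at the forward prediction
    # 1) Inference first: with the output clamped to the target, relax the
    #    hidden activity to reduce the prediction-error energy
    #    E = ||x1 - W1 x_in||^2 / 2 + ||target - W2 x1||^2 / 2.
    for _ in range(n_relax):
        e1 = x1 - W1 @ x_in
        e2 = target - W2 @ x1
        x1 -= lr_x * (e1 - W2.T @ e2)
    # 2) Only then modify the weights, consolidating the inferred activity.
    e1 = x1 - W1 @ x_in
    e2 = target - W2 @ x1
    W1 += lr_w * np.outer(e1, x_in)
    W2 += lr_w * np.outer(e2, x1)
    return float(e2 @ e2)

x_in = np.array([1.0, -1.0])
target = np.array([0.5])
errors = [train_step(x_in, target) for _ in range(50)]
print(f"output error: {errors[0]:.3f} -> {errors[-1]:.5f}")
```

Note the contrast with backpropagation, which computes weight gradients directly from the forward pass: here the network first settles on the activity pattern that *should* result from learning, and the weight change merely locks it in.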
Information decomposition and the informational architecture of the brain
Information is not a monolithic entity, but can be decomposed into synergistic, unique, and redundant components. Relative predominance of synergy and redundancy in the human brain follows a unimodal–transmodal organisation and reflects underlying structure, neurobiology, and dynamics. Brain regions navigate trade-offs between these components to combine the flexibility of synergy for higher cognition and the robustness of redundancy for key sensory and motor functions. Redundancy appears stable across primate evolution, whereas synergy is selectively increased in humans and especially in human-accelerated regions. Computational studies offer new insights into the causal relationship between synergy, redundancy, and cognitive capabilities.
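The synergy/redundancy distinction is concrete even in toy examples. The sketch below uses plain mutual information only (a full partial information decomposition requires a dedicated redundancy measure, which I skip here) to show the two extreme cases: XOR, where information about the target is purely synergistic, and duplicated sources, where it is purely redundant.

```python
import numpy as np
from collections import Counter

def mutual_info(pairs):
    """I(X;Y) in bits from a list of (x, y) samples (empirical, uniform)."""
    pairs = list(pairs)
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

bits = [(a, b) for a in (0, 1) for b in (0, 1)]

# Synergy: Y = XOR(X1, X2). Neither source alone tells you anything
# about Y, but together they determine it completely.
xor = [((a, b), a ^ b) for a, b in bits]
print(mutual_info([(a, y) for (a, b), y in xor]))   # I(X1;Y) = 0 bits
print(mutual_info(xor))                             # I(X1,X2;Y) = 1 bit

# Redundancy: X1 = X2 = Y. Each source alone already carries the full
# bit, and the pair adds nothing beyond it.
red = [((a, a), a) for a in (0, 1)]
print(mutual_info([(a, y) for (a, _), y in red]))   # I(X1;Y) = 1 bit
print(mutual_info(red))                             # I(X1,X2;Y) = 1 bit
```

Synergy buys flexibility (the whole carries information no part has), redundancy buys robustness (any part can stand in for the others) — the trade-off the paper maps onto cortical organisation.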
Neural heterogeneity controls computations in spiking neural networks
Homogeneous excitatory networks can be leveraged for reliable information encoding via localized states of persistent activity, whereas heterogeneous networks provide robust and flexible information processing capacities in regimes of oscillatory dynamics
Neurons are the basic information-encoding units in the brain. In contrast to information-encoding units in a computer, neurons are heterogeneous, i.e., they differ substantially in their electrophysiological properties. How does the brain make use of this heterogeneous substrate to carry out its function of processing information and generating adaptive behavior? We analyze a mathematical model of networks of heterogeneous spiking neurons and show that neural heterogeneity provides a previously unconsidered means of controlling computational properties of neural circuits. We furthermore uncover different capacities of inhibitory vs. excitatory heterogeneity to regulate the gating of signals vs. the encoding and decoding of information, respectively. Our results reveal how a mostly overlooked property of the brain—neural heterogeneity—allows for the emergence of computationally specialized networks.
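One textbook intuition for why heterogeneity helps (a deliberately crude caricature of mine — the paper analyzes full spiking network models) is that diverse thresholds turn an all-or-none population into a graded encoder of stimulus intensity.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
theta_homog = np.full(n, 1.0)            # identical firing thresholds
theta_heter = rng.normal(1.0, 0.4, n)    # heterogeneous thresholds

def population_rate(stimulus, thresholds):
    """Fraction of simple threshold units active at a given input level."""
    return (stimulus > thresholds).mean()

for s in (0.6, 0.9, 1.1, 1.4):
    print(s,
          population_rate(s, theta_homog),          # step: all-or-none
          round(population_rate(s, theta_heter), 2))  # graded response
```

The homogeneous population can only signal “above or below threshold”, whereas the heterogeneous one produces a smooth population response — a hint at how heterogeneity reshapes what a circuit can compute.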
Causation in neuroscience: keeping mechanism meaningful
How should causation be understood in neuroscience?
A fundamental goal of research in neuroscience is to uncover the causal structure of the brain. This focus on causation makes sense, because causal information can provide explanations of brain function and identify reliable targets with which to understand cognitive function and prevent or change neurological conditions and psychiatric disorders. In this research, one of the most frequently used causal concepts is ‘mechanism’ — this is seen in the literature and language of the field, in grant and funding inquiries that specify what research is supported, and in journal guidelines on which contributions are considered for publication. In these contexts, mechanisms are commonly tied to expressions of the main aims of the field and cited as the ‘fundamental’, ‘foundational’ and/or ‘basic’ unit for understanding the brain. Despite its common usage and perceived importance, mechanism is used in different ways that are rarely distinguished. Given that this concept is defined in different ways throughout the field — and that there is often no clarification of which definition is intended — there remains a marked ambiguity about the fundamental goals, orientation and principles of the field. Here we provide an overview of causation and mechanism from the perspectives of neuroscience and philosophy of science, in order to address these challenges.