Complexity Thoughts: Issue #18
Unraveling complexity: building knowledge, one paper at a time
If you enjoy Complexity Thoughts, click on the Like button, leave a comment, repost on Substack or share this post. It is the only feedback I get for this free service.
The frequency and quality of this newsletter rely on social interactions. Thank you!
In a nutshell
In this issue we have papers ranging from estimating the state of physical systems without full knowledge of their mathematical makeup to the dynamics (stigmergy) of human cooperation in digital spaces. In one study, the authors suggest there might be an undiscovered guiding principle behind the increasing complexity of some systems. Neuroscience is having a great time, with a coordinated release of 21 studies aimed at understanding the human brain at the cellular level, and a review about annotated connectomes. Another interesting work harnesses the power of past data, combined with cutting-edge techniques, to provide a framework for forecasting viral evolution that could be applied in the next pandemic.
Thanks for reading Complexity Thoughts! Subscribe for free to receive new posts and support my work.
Complex systems and network science foundations
This is a very nice (and technical!) paper, and it is worth reading. It correctly highlights the model dependence of the framework, while the accompanying News & Views is a bit disappointing, with a catchy title that poorly represents the message, IMHO: there is always a model behind.
State estimation is concerned with reconciling noisy observations of a physical system with the mathematical model believed to predict its behaviour for the purpose of inferring unmeasurable states and denoising measurable ones [1,2]. Traditional state-estimation techniques rely on strong assumptions about the form of uncertainty in mathematical models, typically that it manifests as an additive stochastic perturbation or is parametric in nature [3]. Here we present a reparametrization trick for stochastic variational inference with Markov Gaussian processes that enables an approximate Bayesian approach for state estimation in which the equations governing how the system evolves over time are partially or completely unknown. In contrast to classical state-estimation techniques, our method learns the missing terms in the mathematical model and a state estimate simultaneously from an approximate Bayesian perspective. This development enables the application of state-estimation methods to problems that have so far proved to be beyond reach. Finally, although we focus on state estimation, the advancements to stochastic variational inference made here are applicable to a broader class of problems in machine learning.
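To make the classical setting concrete, here is a minimal one-dimensional Kalman filter in plain NumPy (all parameter values are illustrative, chosen by me). It embodies exactly the assumption the paper relaxes: the transition model is fully known, and uncertainty enters only as additive Gaussian noise.

```python
import numpy as np

def kalman_filter_1d(observations, a=0.9, q=0.1, r=0.5):
    """Estimate a scalar latent state x_t = a*x_{t-1} + process noise,
    observed as y_t = x_t + measurement noise (variances q and r)."""
    x_est, p = 0.0, 1.0                    # initial state mean and variance
    estimates = []
    for y in observations:
        # Predict: propagate mean and variance through the known model.
        x_pred = a * x_est
        p_pred = a * a * p + q
        # Update: blend the prediction with the noisy observation.
        k = p_pred / (p_pred + r)          # Kalman gain
        x_est = x_pred + k * (y - x_pred)
        p = (1.0 - k) * p_pred
        estimates.append(x_est)
    return np.array(estimates)

# Simulate the (here, perfectly known) system and filter its noisy outputs.
rng = np.random.default_rng(0)
true_x = np.zeros(200)
for t in range(1, 200):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0, np.sqrt(0.1))
ys = true_x + rng.normal(0, np.sqrt(0.5), size=200)
est = kalman_filter_1d(ys)

print("observation MSE:", np.mean((ys - true_x) ** 2))
print("filtered MSE:   ", np.mean((est - true_x) ** 2))
```

When the model terms themselves are unknown, this machinery has nothing to propagate through, which is the gap the paper's variational approach fills.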
Social insect societies can be understood as distributed systems whose organization allows them to accomplish complex tasks that exceed what single individuals can achieve in isolation. In some systems, this collective behavior is achieved by means of indirect communication through (physical, chemical, etc.) modifications of the environment, as in the case of ant colonies, via a mechanism known as stigmergy (see also this paper for a more recent, yet two-decades-old, overview), which we have recently used as a perfect case for our potential-driven random walk modeling.
Can a similar phenomenon be observed when digital traces serve as the modifications of the environment and the system’s units are humans? It seems that the answer is: yes!
Stigmergy is a generic coordination mechanism widely used by animal societies, in which traces left by individuals in a medium guide and stimulate their subsequent actions. In humans, new forms of stigmergic processes have emerged through the development of online services that extensively use the digital traces left by their users. Here, we combine interactive experiments with faithful data-based modeling to investigate how groups of individuals exploit a simple rating system and the resulting traces in an information search task in competitive or noncompetitive conditions. We find that stigmergic interactions can help groups to collectively find the cells with the highest values in a table of hidden numbers. We show that individuals can be classified into three behavioral profiles that differ in their degree of cooperation. Moreover, the competitive situation prompts individuals to give deceptive ratings and reinforces the weight of private information versus social information in their decisions.
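The stigmergic feedback loop is simple enough to sketch. The toy model below is my own construction, not the authors' data-based one: agents pick cells in proportion to the rating traces left by earlier agents and deposit a rating proportional to the hidden value they find, so traces accumulate on the best cells.

```python
import random

def stigmergy_search(values, n_agents=30, n_rounds=50, weight=1.0, seed=42):
    """Toy stigmergy: pick cells with probability proportional to
    1 + weight * trace, then deposit a rating equal to the cell's value."""
    rng = random.Random(seed)
    trace = [0.0] * len(values)
    for _ in range(n_rounds):
        for _ in range(n_agents):
            probs = [1.0 + weight * t for t in trace]
            cell = rng.choices(range(len(values)), weights=probs)[0]
            trace[cell] += values[cell]    # the trace guides later agents
    return trace

hidden = [0.1, 0.1, 1.0, 0.1, 0.1]         # cell 2 holds the highest value
trace = stigmergy_search(hidden)
print(trace.index(max(trace)))             # the strongest trace marks the best cell
```

The positive feedback (more trace attracts more visits, which deposit more trace) is what lets the group "find" the best cell collectively, with no agent seeing more than its own picks.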
Systems of many interacting agents display an increase in diversity, distribution, and/or patterned behavior when numerous configurations of the system are subject to selective pressure.
There are some important claims in this paper that require more detailed comments, as with the recent paper about Assembly Theory (see my short essay about it).
The universe is replete with complex evolving systems, but the existing macroscopic physical laws do not seem to adequately describe these systems. Recognizing that the identification of conceptual equivalencies among disparate phenomena were foundational to developing previous laws of nature, we approach a potential “missing law” by looking for equivalencies among evolving systems. We suggest that all evolving systems—including but not limited to life—are composed of diverse components that can combine into configurational states that are then selected for or against based on function. We then identify the fundamental sources of selection—static persistence, dynamic persistence, and novelty generation—and propose a time-asymmetric law that states that the functional information of a system will increase over time when subjected to selection for function(s).
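The paper's central quantity is concrete enough to simulate. Below is a toy illustration (my own construction, not the authors' code) using the Hazen–Szostak definition of functional information: achieving function level E carries -log2 of the fraction of all configurations that reach at least E. Under selection for function, the achieved level rises, that fraction shrinks, and functional information increases.

```python
import math
import random

def functional_information(level, pool):
    """-log2 of the fraction of reference configurations whose function
    meets or exceeds `level` (floored to avoid log of zero)."""
    frac = sum(f >= level for f in pool) / len(pool)
    frac = max(frac, 1 / len(pool))
    return -math.log2(frac)

rng = random.Random(1)
L = 20
def function_of(s):
    return sum(s)                     # toy "function": number of ones

# Reference pool: function values of many random configurations.
pool = [function_of([rng.randint(0, 1) for _ in range(L)]) for _ in range(10_000)]

# Evolve a population under selection for higher function.
pop = [[rng.randint(0, 1) for _ in range(L)] for _ in range(50)]
fi_trajectory = []
for _ in range(30):
    pop.sort(key=function_of, reverse=True)
    pop = pop[:25]                    # select the fitter half
    for parent in list(pop):          # refill with mutated copies
        child = parent[:]
        child[rng.randrange(L)] ^= 1
        pop.append(child)
    best = max(function_of(s) for s in pop)
    fi_trajectory.append(functional_information(best, pool))

print(fi_trajectory[0], "->", fi_trajectory[-1])
```

The trajectory rises over generations, a toy instance of the time-asymmetric claim: selection for function drives functional information upward.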
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
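Dynamical system reconstruction does not require an RNN to be appreciated. The sketch below uses ordinary least squares instead of a trained network (so it is the simplest instance of the idea, not the Perspective's method): fit a surrogate map to a measured trajectory, then simulate the surrogate as a stand-in for the probed system. Here the "unknown" system is a logistic map, which the regression recovers exactly.

```python
import numpy as np

# Measured trajectory from an unknown 1-D system (secretly the logistic map).
x = np.empty(500)
x[0] = 0.2
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

# Reconstruct the dynamics: regress x_{t+1} on polynomial features of x_t.
X = np.column_stack([np.ones(499), x[:-1], x[:-1] ** 2])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)

def surrogate(x0, n):
    """Simulate the learned surrogate system, as one would a trained RNN."""
    traj = [x0]
    for _ in range(n - 1):
        xt = traj[-1]
        traj.append(coef[0] + coef[1] * xt + coef[2] * xt ** 2)
    return np.array(traj)

print(coef)                 # recovers [0, 3.9, -3.9], the true map
print(surrogate(0.2, 5))    # the surrogate can now be analysed and perturbed
```

RNN-based reconstruction generalizes this to high-dimensional, noisy neural recordings where no small feature basis is known in advance.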
Artificial neural networks (ANNs) have gained considerable momentum in the past decade. Although at first the main task of the ANN paradigm was to tune the connection weights in fixed-architecture networks, there has recently been growing interest in evolving network architectures toward the goal of creating artificial general intelligence. Lagging behind this trend, current ANN hardware struggles for a balance between flexibility and efficiency but cannot achieve both. Here, we report on a novel approach for the on-demand generation of complex networks within a single memristor where multiple virtual nodes are created by time multiplexing and the non-trivial topological features, such as small-worldness, are generated by exploiting device dynamics with intrinsic cycle-to-cycle variability. When used for reservoir computing, memristive complex networks can achieve a noticeable increase in memory capacity and a respectable performance boost compared to conventional reservoirs trivially implemented as fully connected networks. This work expands the functionality of memristors for ANN computing.
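Reservoir computing itself is easy to sketch in software: a fixed random recurrent network (which the memristor provides physically) plus a trained linear readout. The toy echo-state network below, with sizes and scalings of my own choosing, is the "trivially implemented" fully connected baseline the abstract refers to; its readout learns to recall the input from a few steps earlier, a simple probe of memory capacity.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, delay = 100, 1200, 3

# Fixed random reservoir: only the readout will ever be trained.
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1
w_in = rng.normal(0, 0.5, N)

u = rng.uniform(-1, 1, T)                          # random input signal
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])               # reservoir dynamics
    states[t] = x

# Train only the linear readout to recall the input `delay` steps ago.
S = states[200:]                                   # discard the transient
y = u[200 - delay:T - delay]
w_out, *_ = np.linalg.lstsq(S, y, rcond=None)
pred = S @ w_out
print(np.corrcoef(pred, y)[0, 1])                  # near 1 if memory suffices
```

The paper's point is that a memristive network with non-trivial topology can outperform exactly this kind of plain random reservoir.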
There is BIG news in the field. The National Institutes of Health's BRAIN Initiative - Cell Census Network (BICCN) released a collection of 21 papers at once in AAAS journals, including Science, Science Advances, and Science Translational Medicine. The collection is devoted to large-scale multi-omics analysis of the human brain at the cell-type level, revealing unique human attributes when compared against nonhuman primate and rodent brains. Honestly, I haven’t read all the papers yet, but this is definitely on my reading list and it was worth sharing.
Annotated connectomes? Yes, this is the way!
The brain is a network of interleaved neural circuits. In modern connectomics, brain connectivity is typically encoded as a network of nodes and edges, abstracting away the rich biological detail of local neuronal populations. Yet biological annotations for network nodes — such as gene expression, cytoarchitecture, neurotransmitter receptors or intrinsic dynamics — can be readily measured and overlaid on network models. Here we review how connectomes can be represented and analysed as annotated networks. Annotated connectomes allow us to reconceptualize architectural features of networks and to relate the connection patterns of brain regions to their underlying biology. Emerging work demonstrates that annotated connectomes help to make more veridical models of brain network formation, neural dynamics and disease propagation. Finally, annotations can be used to infer entirely new inter-regional relationships and to construct new types of network that complement existing connectome representations. In summary, biologically annotated connectomes offer a compelling way to study neural wiring in concert with local biological features.
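A small sketch of what annotation buys you (toy numbers, my own construction, not from the review): attach one biological measurement per node and ask whether connectivity aligns with it, for instance whether connected regions are more similar in their annotation than unconnected ones.

```python
import numpy as np

# Toy "connectome": adjacency matrix plus one biological annotation per node
# (e.g., a receptor-density value); all numbers are illustrative.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]])
annotation = np.array([0.9, 0.8, 0.85, 0.3, 0.2])

# Annotated-network question: are connected regions more alike biologically?
diff = np.abs(annotation[:, None] - annotation[None, :])
iu = np.triu_indices(5, k=1)            # each node pair once
connected = A[iu] == 1
print("mean difference, connected pairs:  ", diff[iu][connected].mean())
print("mean difference, unconnected pairs:", diff[iu][~connected].mean())
```

Questions like this, relating connection patterns to local biology, are exactly what a bare node-and-edge abstraction cannot ask.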
This is a term that covers a large body of literature, from ecology to epidemiology, and especially their overlap.
This paper combines evolutionary genomics, machine learning, and biophysics to achieve an ambitious goal. Hopefully, it will help tackle the next pandemic from an emerging pathogen (see also the accompanying News & Views):
Effective pandemic preparedness relies on anticipating viral mutations that are able to evade host immune responses to facilitate vaccine and therapeutic design. However, current strategies for viral evolution prediction are not available early in a pandemic—experimental approaches require host polyclonal antibodies to test against [1-16], and existing computational methods draw heavily from current strain prevalence to make reliable predictions of variants of concern [17-19]. To address this, we developed EVEscape, a generalizable modular framework that combines fitness predictions from a deep learning model of historical sequences with biophysical and structural information. EVEscape quantifies the viral escape potential of mutations at scale and has the advantage of being applicable before surveillance sequencing, experimental scans or three-dimensional structures of antibody complexes are available. We demonstrate that EVEscape, trained on sequences available before 2020, is as accurate as high-throughput experimental scans at anticipating pandemic variation for SARS-CoV-2 and is generalizable to other viruses including influenza, HIV and understudied viruses with pandemic potential such as Lassa and Nipah. We provide continually revised escape scores for all current strains of SARS-CoV-2 and predict probable further mutations to forecast emerging strains as a tool for continuing vaccine development (evescape.org).
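The combination step is easy to caricature. The sketch below is a toy in the spirit of the framework, not EVEscape's actual scoring function, and the mutation names and component scores are invented: treat fitness, antibody accessibility and chemical dissimilarity as independent pieces of evidence and combine them in log space to rank candidate escape mutations.

```python
import math

# Invented per-mutation component scores, each in (0, 1]:
#             fitness  accessibility  dissimilarity
mutations = {
    "A10T":  (0.8, 0.9, 0.4),
    "K55E":  (0.6, 0.7, 0.9),
    "G201D": (0.2, 0.95, 0.8),
}

def escape_score(fitness, access, dissim):
    """Combine components as independent evidence, summed in log space,
    so a mutation must score reasonably on all three to rank highly."""
    return sum(math.log(p) for p in (fitness, access, dissim))

ranked = sorted(mutations, key=lambda m: escape_score(*mutations[m]), reverse=True)
print(ranked)
```

The multiplicative (log-sum) combination penalizes mutations that fail on any single axis, which is why a moderately fit but highly dissimilar mutation can outrank a fitter one that antibodies barely see differently.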