Higher-order links and broken standards
The balance in the Force has been restored
If you find value in #ComplexityThoughts, consider helping it grow by subscribing and sharing it with friends, colleagues or on social media. Your support makes a real difference.
→ Don’t miss the podcast version of this post: click on “Spotify/Apple Podcast” above!
In recent years we have seen hundreds, if not thousands, of papers reframing many typical network-science problems in terms of hypergraphs or, more generally, higher-order networks to model complex systems (see here for a recent review on the topic).
The vast majority of those studies frame the choice as a clear dichotomy: graphs encode pairwise interactions, while hypergraphs encode genuine group (many-body) interactions.
This might be a useful intuition in some contexts, but it can also blur two logically distinct (and important) ingredients:
the domain of interaction (→ who can affect whom),
the interaction rule (→ the function that maps neighbors’ states into a node’s response).
A standard graph fixes the first point via neighborhoods: let’s indicate it as

$$\mathcal{N}_i = \{\, j : (i,j) \in E \,\}$$

for a generic node i. Then, a very general dynamical model is simply given by

$$\dot{x}_i = f_i\big(x_i, \{x_j\}_{j \in \mathcal{N}_i}\big).$$
Note that nothing here is “pairwise” unless we impose a decomposition such as

$$f_i\big(x_i, \{x_j\}_{j \in \mathcal{N}_i}\big) = \sum_{j \in \mathcal{N}_i} g\big(x_i, x_j\big),$$

which is nothing but a modeling choice (often a linearization), not a consequence of using graphs. In fact, many classic mechanisms (threshold rules, bootstrap contagion, gene-regulation logic) are multivariate on ordinary graphs: the output depends on several neighbors jointly and nonlinearly.
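To make this concrete, here is a minimal sketch (a toy Boolean network of my own construction, not from the text) of gene-regulation-style logic running on an ordinary graph: each update is a joint, nonlinear function of several neighbors at once, with nothing but pairwise edges in sight.

```python
# A toy Boolean "gene regulation" network on an ordinary graph.
# Each node's update is a joint, nonlinear function of ALL its inputs:
# nothing pairwise about it, and no hyperedges in sight.
neighbors = {"A": [], "B": [], "C": ["A", "B"], "D": ["B", "C"]}
rules = {
    "C": lambda s: int(s["A"] and not s["B"]),  # C = A AND (NOT B)
    "D": lambda s: int(s["B"] or s["C"]),       # D = B OR C
}

def step(state):
    """Synchronously update every regulated node from its neighbors."""
    new = dict(state)
    for node, rule in rules.items():
        new[node] = rule(state)  # depends on several neighbors jointly
    return new

state = {"A": 1, "B": 0, "C": 0, "D": 0}
state = step(state)
print(state)  # → {'A': 1, 'B': 0, 'C': 1, 'D': 0}
```

No pairwise decomposition of the rule for C exists: its output hinges on the joint configuration of A and B.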
So what? Well, the right question to ask, then, is: what does a hypergraph add?
A hypergraph can be read as prescribing a particular grouping of neighbors into subsets (hyperedges) and often encourages “group terms” that appear coherently across all nodes in that group. Let’s think about this for a few seconds, please.
That request is a specific constraint on how one chooses to structure the function f_i, not an expansion of what functions are possible.
In fact, graph-based formulations already allow arbitrary multivariate dependence on neighborhoods: hypergraph structure selects a subclass where those dependences respect a shared group membership pattern.
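One way to make the selected subclass explicit (the notation $g_e$ for the shared hyperedge term is mine, not from the text): a hypergraph parametrization restricts $f_i$ to the form

$$f_i\big(x_i, \{x_j\}_{j \in \mathcal{N}_i}\big) = \sum_{e \ni i} g_e\big(\{x_j\}_{j \in e}\big),$$

where the same function $g_e$ enters the equation of every node belonging to hyperedge e. A generic graph-based $f_i$ is free to combine neighbors in patterns that admit no such grouping.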
This sounds like an additional constraint, not a generalization!
OK about structure, but what about dynamics?
Very often, dynamics is written as a sum of contributions by interaction order. This is the common order-decomposability assumption:

$$\dot{x} = \sum_{d=1}^{D} H^{(d)}(x),$$

where D is the maximum order used to describe your higher-order phenomenon and each $H^{(d)}$ is some function of the node states.
Under this assumption, there is a simple but important representational non-uniqueness: the same node-level dynamics can be realized as infinitely many multilayer (multiplex) dynamics! To see this, let us introduce D layer states

$$X_1, X_2, \dots, X_D,$$

where each element denotes the layer-α copy of the full node-state vector in the lifted D-layer (multiplex) representation. Concretely, if

$$x = (x_1, \dots, x_N) \in \mathbb{R}^{Nm}$$

stacks all node states (with m variables per node), then for each α=1,…,D we have

$$X_\alpha \in \mathbb{R}^{Nm}.$$

In the simplest case, we can take $X_\alpha = x$, but in general any choice satisfying

$$\frac{1}{D} \sum_{\alpha=1}^{D} X_\alpha = x$$

will be fine. Now let’s project back by averaging:

$$x = \frac{1}{D} \sum_{\alpha=1}^{D} X_\alpha.$$
Let’s define the lifted dynamics as

$$\dot{X}_\alpha = D\, H^{(\alpha)}\!\left(\frac{1}{D}\sum_{\beta=1}^{D} X_\beta\right) + K_\alpha(X_1, \dots, X_D), \qquad \alpha = 1, \dots, D,$$

with the only (simple-yet-powerful) constraint that interlayer exchange (governed by K) has zero mean:

$$\sum_{\alpha=1}^{D} K_\alpha(X_1, \dots, X_D) = 0.$$

Projecting gives

$$\dot{x} = \frac{1}{D} \sum_{\alpha=1}^{D} \dot{X}_\alpha = \sum_{\alpha=1}^{D} H^{(\alpha)}(x),$$

exactly! This result is obtained regardless of the specific choice of K, which acts like a gauge field. Since there are infinitely many admissible K, there are infinitely many multilayer realizations of the same “higher-order” node-level equation.
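A quick numerical sanity check of this gauge freedom, under toy assumptions of my own (a single node with scalar state, D = 2 layers carrying H^(1)(x) = -x and H^(2)(x) = -x^3, and the illustrative choice K = k(X2 - X1), which has zero mean across layers):

```python
# Toy check of the multilayer "gauge" freedom: one node, scalar state,
# D = 2 layers carrying H1(x) = -x and H2(x) = -x**3, interlayer
# exchange K = k*(X2 - X1) (zero mean: +K on layer 1, -K on layer 2).
def project_lifted(k, x0=1.0, steps=2000, dt=0.01):
    X1 = X2 = x0                 # lifted layer states; their mean is x0
    traj = []
    for _ in range(steps):
        xbar = 0.5 * (X1 + X2)   # projection back to node space
        K = k * (X2 - X1)        # interlayer exchange term
        X1 += dt * (2 * (-xbar) + K)       # layer 1 carries H1
        X2 += dt * (2 * (-xbar ** 3) - K)  # layer 2 carries H2
        traj.append(0.5 * (X1 + X2))
    return traj

a = project_lifted(k=0.0)   # no interlayer coupling
b = project_lifted(k=5.0)   # strong interlayer coupling
print(max(abs(u - v) for u, v in zip(a, b)))  # ~0: projection ignores K
```

The individual layer trajectories differ wildly between the two runs, yet the projected node-level trajectory is the same up to floating-point noise: exactly the non-identifiability described above.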
Of course, this does not mean that every such multilayer construction corresponds to a realistic microscopic mechanism, nor does it imply that hypergraphs are useless: it only shows the non-identifiability of microscopic “layered vs. higher-order” interpretations given only the projected node-level dynamics under that decomposition.
TL;DR without math: if our observations and equations live in node space, we cannot uniquely infer the hidden multilayer/higher-order microstructure without extra assumptions or data. The multilayer gauge construction is a clean way to formalize that non-uniqueness!
Take-home message
The takeaway is not “never use hypergraphs” but: separate structure from mechanism. One should decide what one empirically knows (neighborhoods? groups? layers?), then choose interaction functions accordingly and compare alternatives when possible. In three points:
FIRST. A graph tells you who can influence whom, not how they influence each other.
In a graph, an edge only defines a neighborhood (the set of adjacent nodes). The actual interaction is a function that can depend on all neighbors at once, in any nonlinear way. So “graphs = pairwise interactions” is a category error: pairwise is a restriction on the function, not on the graph. This is crucial!
SECOND. Hypergraphs don’t enlarge what you can model: they add extra constraints, so they are a special case of graph-based models. Not vice versa!
A hyperedge groups nodes and implicitly demands a kind of shared, symmetric “group term” across the members of that group (i.e., consistent group membership structure). But any hypergraph model can be rewritten as a graph model by working with the union of the involved neighbors (or via standard factor-graph/bipartite constructions), while graph-based formulations also allow many interaction patterns that cannot be represented as hyperedges because they don’t satisfy those mutual-membership constraints. So, in terms of expressive power: general graph-based interactions ⊇ hypergraph interactions.
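The bipartite (factor-graph) rewriting mentioned above can be sketched in a few lines; the hyperedge list and helper names here are illustrative:

```python
# Rewriting a hypergraph as a graph via the standard bipartite
# (factor-graph) construction: one auxiliary "factor" node per hyperedge.
# The hyperedge list below is illustrative.
hyperedges = [{0, 1, 2}, {2, 3}]

bipartite_edges = set()
for k, e in enumerate(hyperedges):
    factor = f"f{k}"             # auxiliary node standing for hyperedge k
    for i in e:
        bipartite_edges.add((i, factor))

def graph_neighbors(i):
    """Graph-side neighborhood: union of i's co-members across hyperedges."""
    return set().union(*(e - {i} for e in hyperedges if i in e))

print(sorted(bipartite_edges, key=str))
print(graph_neighbors(2))  # → {0, 1, 3}
```

Both views (the bipartite graph and the union-of-co-members neighborhood) carry the full hypergraph information using ordinary edges alone.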
THIRD. Key “hypergraph-only” phenomena (like abrupt transitions) usually come from multivariate nonlinear rules, not from hyperedges.
Behaviors often advertised as uniquely “higher-order” (e.g., explosive synchronization, discontinuous epidemic transitions) can be reproduced exactly on ordinary graphs, even graphs that are locally tree-like (no cliques). Mean-field analyses commonly used in the hypergraph literature typically erase the structural distinctions anyway, so what drives the phenomenology is the cooperative functional form (needing multiple neighbors simultaneously), not the hypergraph structure.
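As a small illustration (a toy example of mine, not from the text), here is a cooperative rule, where a node activates only when at least two neighbors are already active, running on a path graph, i.e. a tree with no triangles or cliques at all; the cooperativity lives entirely in the functional form:

```python
# A cooperative rule on a path graph (a tree: no triangles, no cliques):
# a node activates only when at least TWO neighbors are already active.
# The cooperativity sits in the update function, not in any hyperedge.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
active = {0, 2}  # seed nodes

def sweep(active):
    new = set(active)
    for i, nbrs in neighbors.items():
        if sum(j in active for j in nbrs) >= 2:  # needs 2 neighbors at once
            new.add(i)
    return new

active = sweep(active)
print(sorted(active))  # → [0, 1, 2]: node 1 had both neighbors active
```

Node 3, with a single active neighbor, never activates: the "needs multiple neighbors simultaneously" behavior appears with no group structure whatsoever.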
If you want to know more technical details, you might want to read this paper, where we discuss practical “gold standards” for modeling and inference: how to separate the interaction domain (who can affect whom) from the interaction functions (how they do so), how to make assumptions explicit (e.g., order-decomposability), and how to compare graph-, multilayer- and hypergraph-based parametrizations in a principled, evidence-driven way. It is another great collaboration with Tiago Peixoto, Leto Peel and Thilo Gross, which also builds on the spirit of a 2022 paper about how to reconnect data and theory via statistical inference.
→ A reminder: if you find value in #ComplexityThoughts, consider helping it grow by subscribing, or by sharing it with friends, colleagues or on social media. See also this post to learn more about this space.