How Your Brain Runs Its Own Belief Network

Imagine you are at university, sitting in the library, when three things happen almost simultaneously. A friend messages you, “Huge storm coming, buses might stop.” At the same time, you see a dark cloud through the window, and then you read a post online saying, “Public transport strike today!” In a few seconds, you decide whether to pack up and leave or keep studying. You do not write down equations, but you quickly combine these bits of information, ignoring some while trusting others more, and end up with a single decision. This everyday moment is precisely the kind of situation that Pearl describes when he talks about “belief networks” and how we fuse and propagate information in our minds.

Pearl describes a belief network as a web of small questions about the world, each one represented as a node, with arrows indicating which ideas directly influence which. A node might be “there is a storm,” another “the bus is late,” another “I see dark clouds,” and so on. Instead of trying to track every possible combination of all these ideas, the network only stores simple, local relationships: how strongly one thing affects another. Pearl explains this using examples like suspects, fingerprints, and lab reports, where each piece of evidence is linked to a possible cause. The key insight is that our mind does not handle one giant, impossible table of chances; it uses many small links between related ideas, which is much closer to how we actually think when we ask, “If this is true, how likely is that?”
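
To see how small local links replace one giant table, here is a minimal sketch in Python, with invented numbers: a prior for “storm,” one small conditional table per link, and Bayes’ rule applied across a single link.

```python
# A minimal sketch of Pearl's "local links" idea, with invented numbers.
# Instead of one joint table over (storm, clouds, bus_late), we keep only
# small conditional tables: P(storm), P(clouds | storm), P(bus_late | storm).

p_storm = 0.2                                   # prior belief in a storm
p_clouds_given = {True: 0.9, False: 0.3}        # P(dark clouds | storm?)
p_late_given = {True: 0.7, False: 0.1}          # P(bus late | storm?)

# Chain the local links: P(clouds) = sum over storm of P(clouds|storm)P(storm)
p_clouds = (p_clouds_given[True] * p_storm +
            p_clouds_given[False] * (1 - p_storm))

# Bayes' rule on one link: seeing dark clouds updates belief in the storm.
p_storm_given_clouds = p_clouds_given[True] * p_storm / p_clouds
print(f"P(storm) = {p_storm:.2f}, P(storm | dark clouds) = {p_storm_given_clouds:.2f}")
```

The point is the bookkeeping: a few numbers per link are enough, no matter how many other nodes the network grows.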

Once the network is in place, new information has to move through it, and this is where things become very practical. Pearl shows that each link can carry two kinds of support: one coming from “causes” (what usually leads to this) and one from “effects” (what we have seen that points back to it). When something changes—say you get a new lab report, or in your life, a new message, a news alert, or a friend’s opinion—that update first affects the nearby node and then spreads step by step through the network. Importantly, each node only communicates with its neighbors, so the process is local and easy to manage, yet the final picture remains globally consistent. Pearl even warns that we must avoid counting the same clue twice, like when a rumor appears on several accounts that all secretly copy each other. His method keeps “upward” and “downward” flows of belief apart so they do not get stuck in loops of self-reinforcement.
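
The fusion step itself is tiny. In the spirit of Pearl’s scheme, where π carries causal support from parents and λ carries diagnostic support from children, here is a sketch with invented message values for one binary node:

```python
import numpy as np

# Belief over one binary node, e.g. storm in {True, False}.
# pi: causal support from parents (e.g. the weather forecast).
# Each lambda: diagnostic support from one child (an observed effect).
pi = np.array([0.3, 0.7])                 # invented: forecast says storm is possible
lam_clouds = np.array([0.9, 0.3])         # P(saw dark clouds | storm?)
lam_message = np.array([0.8, 0.2])        # P(friend's warning | storm?)

# Fusion: BEL(x) is proportional to lambda(x) * pi(x). Each source enters
# exactly once, which is how the scheme avoids counting a clue twice.
bel = pi * lam_clouds * lam_message
bel /= bel.sum()                          # normalize to a probability
print(f"Belief in storm after both clues: {bel[0]:.2f}")
```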

Another idea from Pearl that fits daily life is the concept of multiple explanations competing. In one story, an alarm can be triggered by either a burglary or an earthquake. Hearing that the alarm went off increases your belief in both causes. But once you also hear a reliable earthquake report, the “earthquake” explanation makes the “burglary” explanation less likely, because one clear cause can “explain away” the same event. The same pattern appears when you feel tired before an exam: you might blame stress, lack of sleep, or getting sick. A positive COVID test, for instance, suddenly shifts most of your belief toward one cause and away from the others. Pearl also discusses “hidden causes,” extra nodes that we do not directly see but that help explain why several things tend to happen together, such as a shared background reason for your friends’ moods or repeated delays on your train line. Thinking in terms of these networks can help young people make better choices: check where your information really comes from, notice when two pieces of “news” are actually the same source, and remember that one good explanation can reduce the need to invent many others. In short, your mind is already running a belief network; learning to see it that way can make your everyday reasoning clearer, calmer, and more honest.
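
The alarm story can be checked with a few lines of arithmetic. This sketch uses invented priors and a simple noisy-OR alarm, and shows how an earthquake report pulls belief away from burglary:

```python
# Explaining away with invented numbers: the alarm is set off by burglary or
# earthquake (a simple noisy-OR), and the two causes are independent a priori.
p_b, p_e = 0.01, 0.02                      # priors for burglary, earthquake

def p_alarm(b: bool, e: bool) -> float:
    """Each active cause independently fails to trigger the alarm 5% of the time;
    with neither cause, the alarm stays silent."""
    miss = (0.05 if b else 1.0) * (0.05 if e else 1.0)
    return 1.0 - miss

def posterior_burglary(evidence_e=None) -> float:
    """P(burglary | alarm), optionally also conditioning on the earthquake report."""
    num = den = 0.0
    for b in (True, False):
        for e in (True, False):
            if evidence_e is not None and e != evidence_e:
                continue
            joint = (p_b if b else 1 - p_b) * (p_e if e else 1 - p_e) * p_alarm(b, e)
            den += joint
            if b:
                num += joint
    return num / den

print(f"P(burglary | alarm)             = {posterior_burglary():.2f}")
print(f"P(burglary | alarm, earthquake) = {posterior_burglary(True):.2f}")
```

With these numbers, belief in a burglary falls from about a third to about one percent once the earthquake is confirmed, even though the alarm evidence itself never changed.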

Reference:
Pearl, J. (1986). Fusion, propagation, and structuring in belief networks. Artificial Intelligence, 29(3), 241–288. https://doi.org/10.1016/0004-3702(86)90072-X

When Small Things Trigger Big Avalanches in Everyday Life

Imagine you are slowly piling up notifications on your phone. A friend texts you, a teacher posts a new assignment, a group chat explodes with memes, your bank app pings you about a payment. For a while, you handle everything with a few quick taps. Then one more message lands at precisely the wrong moment, and suddenly you miss a deadline, forget a meeting, and end up in a mini-crisis. From the outside, a slight extra nudge has caused a surprisingly big mess. This feeling that “nothing much changed, but suddenly everything tipped over” is at the heart of what Bak and colleagues call self-organized criticality.

According to Bak et al., many large systems in nature and society slowly move toward a special state where they are just barely stable. To explain it, they use a simple picture: a pile of sand. Grain by grain, the pile grows steeper. Most grains settle where they land and do almost nothing. But sometimes a single grain makes a small slide, and sometimes it sets off an enormous avalanche that runs all the way down the side. The rules that describe this sandpile are straightforward, yet the result is remarkable: the pile naturally settles into a state where avalanches of all sizes occur. There is no single “typical” size or time. The same idea can be applied to many systems that change incrementally, such as the flow of rivers, the light from distant quasars, the Sun’s activity, and even the movement of prices on a stock market.
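
The sandpile rules fit in a few lines of code. Here is a minimal sketch of the Bak–Tang–Wiesenfeld model (the grid size and grain count are arbitrary choices): a site topples when it holds four grains, giving one to each neighbor, and an avalanche is the chain of topplings one grain triggers.

```python
import random
from collections import Counter

N = 20                                        # grid size (small, for speed)
grid = [[0] * N for _ in range(N)]
sizes = Counter()

def drop_grain() -> int:
    """Add one grain at a random site; topple until stable; return avalanche size."""
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    size = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4                       # topple: shed one grain per neighbor
        size += 1
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < N and 0 <= ny < N:   # grains at the edge fall off
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return size

for _ in range(50_000):
    sizes[drop_grain()] += 1

# Once the pile has self-organized, avalanches of many sizes keep appearing.
for s in (0, 1, 10, 100):
    print(f"avalanches of size {s}: {sizes[s]}")
```

Tiny avalanches vastly outnumber big ones, yet huge ones never stop occurring; that heavy tail, with no “typical” size, is the signature of the critical state.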

Bak and colleagues demonstrate that in this special state, small causes can have effects on every scale. This is why they discuss “1/f noise,” also known as flicker noise. Instead of random, short blips, the system displays slow, long-lasting fluctuations alongside quick ones. If you think of your life, you can picture days when nothing much happens and then a period where many things change at once: a new job, a new city, and new people. In their models, this occurs because the system is constantly balancing on the edge between calm and collapse. Energy, pressure, or “slope” builds up everywhere, and then it is released in bursts that can be tiny or huge. The pattern in space also looks special: instead of neat, regular shapes, you get messy, repeating patterns that look similar at different scales, like mountain ranges or coastlines.

The most striking message of Bak et al. for everyday life is that constant small changes can quietly push systems toward a critical point. A friendship, an online community, or even your own schedule can become a “sand pile” where tension slowly builds up. One more careless comment, one more late night, or one more task added to your to-do list may then trigger an “avalanche” of reactions. This does not mean that everything is always on the verge of falling apart. It means that in many real situations, there is no single obvious warning sign or simple knob you can turn to avoid all problems. Instead, it helps to notice how often you are adding “grains of sand” to your life without giving the system time to relax. Taking breaks, solving minor conflicts early, and not letting every part of your day reach its limit are like gently smoothing the sand pile before it gets too steep. Understanding self-organized criticality is a reminder that significant changes often emerge from many small steps, and that paying attention to these steps is one of the most practical skills you can develop.

Reference:
Bak, P., Tang, C., & Wiesenfeld, K. (1987). Self-organized criticality: An explanation of the 1/f noise. Physical Review Letters, 59(4), 381–384. https://doi.org/10.1103/PhysRevLett.59.381

Climbing the Peaks of Progress: How Small Steps (and Occasional Jumps) Help Us Improve

Picture a group project that starts messy. Everyone tries tiny tweaks—renaming a file, switching slides, or fixing one chart. Each fix helps a bit, but soon you’re “stuck” on a version that’s decent, not amazing. To get something truly great, you might need a bigger change—like rebuilding the deck around a sharper story. That’s the basic idea behind how improvement often works in life: we take small steps up a hill, but sometimes we need a jump to reach a higher one. Kauffman and Levin use this simple picture to explain “adaptive walks,” their way of describing how things—organisms, ideas, even study plans—get better over time on a rugged landscape full of peaks and valleys. 

According to Kauffman and Levin, imagine every possible version of something as a point in a huge space, and give each point a “fitness” score—how well it works. Moving to a version that’s one small change away is like taking a step to a neighbor on this map. Many maps are bumpy, with lots of little peaks where nearby options are worse, so small-step climbing can end early at a “local optimum.” On totally random (uncorrelated) maps, there are tons of these local peaks, so most improvement paths are short—think a handful of steps before you top out. A greedy strategy that always picks the best neighboring step makes the walk even shorter. That’s why quick tweaks often help fast, then stall.
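
Here is a sketch of such a walk on an uncorrelated landscape, with invented sizes: every version of a 16-bit “plan” gets an independent random fitness, and the walker keeps stepping uphill until no one-change neighbor is better.

```python
import random

BITS = 16                                   # size of each "version" (an assumption)
fitness_cache = {}

def fitness(genotype: tuple) -> float:
    """Uncorrelated landscape: every genotype gets an independent random score."""
    if genotype not in fitness_cache:
        fitness_cache[genotype] = random.random()
    return fitness_cache[genotype]

def neighbors(genotype: tuple):
    """All versions one small change (one bit flip) away."""
    for i in range(BITS):
        yield genotype[:i] + (1 - genotype[i],) + genotype[i + 1:]

def adaptive_walk(start: tuple) -> int:
    """Move to a random fitter neighbor; stop at a local optimum. Return #steps."""
    current, steps = start, 0
    while True:
        better = [n for n in neighbors(current) if fitness(n) > fitness(current)]
        if not better:
            return steps                    # stuck on a local peak
        current = random.choice(better)
        steps += 1

walks = [adaptive_walk(tuple(random.randint(0, 1) for _ in range(BITS)))
         for _ in range(200)]
print(f"average walk length before getting stuck: {sum(walks) / len(walks):.1f}")
```

Run it and the average walk really is just a handful of steps; swapping in a greedy choice of the single best neighbor shortens it further.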

Here’s the twist that matters for everyday life: progress slows as you get better because you have fewer “better neighbors” left. Early on, lots of options beat what you have, so branching out is easy; later, choices shrink and things feel stuck—Kauffman and Levin call this pattern “radiation then stasis.” One fix is to allow bigger leaps—try several changes at once. When the jump is big enough, you can escape the local peak and discover fresh ground. In that “long-jump” mode, they find a simple rule of thumb: the number of improvements grows roughly with the log of the time you keep searching. Translation: early wins come quickly, then the gaps between wins get longer, which feels slow but is normal. 
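
That log-of-time rule is easy to reproduce, because long jumps on an uncorrelated landscape behave like fresh random draws, and record-breaking draws become rare in a precise way. A sketch (the sample counts are arbitrary):

```python
import math
import random

def improvements_after(tries: int) -> int:
    """Count record-breaking fitness values in a sequence of random long jumps."""
    best, records = -1.0, 0
    for _ in range(tries):
        candidate = random.random()      # a fresh, uncorrelated point on the map
        if candidate > best:
            best, records = candidate, records + 1
    return records

for t in (10, 100, 1_000, 10_000):
    avg = sum(improvements_after(t) for _ in range(200)) / 200
    print(f"after {t:>6} jumps: ~{avg:.1f} improvements (ln t = {math.log(t):.1f})")
```

The averages track ln t plus a small constant: early records come fast, and each later one takes exponentially longer to find.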

What can you do with this? When your workout, study routine, or side hustle stops improving with tiny tweaks, treat that as a signal. First, accept that short bursts of progress are followed by plateaus; that rhythm is built into the ruggedness of the map. Next, plan occasional, contained “jumps”: test a new schedule for a week, learn a different tool, or reorganize roles on your team. Expect that the better you already are, the rarer the easy wins—and that’s okay. Take small steps for steady gains, and reserve bolder experiments for breaking out of ruts. That mix of patient climbing and well-timed jumps is how you keep finding higher peaks in school, work, and everything in between.

Reference:
Kauffman, S., & Levin, S. (1987). Towards a general theory of adaptive walks on rugged landscapes. Journal of Theoretical Biology, 128(1), 11–45. https://doi.org/10.1016/S0022-5193(87)80029-2

Why Crowds Move Like Magic (and How You Can Use It)

Picture this: you and your friends are leaving a concert, and thousands of people flood the streets. Nobody is shouting directions, yet the crowd curves around barriers, speeds up in open space, and slows near the subway stairs. It feels almost choreographed, but each person is just watching what’s right around them and adjusting a little at a time. That everyday scene is the same idea Craig Reynolds describes for birds in the sky and fish in the sea: complex group motion can emerge from simple rules followed locally, not from a central authority.

According to Reynolds, flocking occurs when individuals balance three urges: avoiding collisions with neighbors, matching their speed and direction, and staying close to the group. In practice, that means you step sideways to avoid bumping someone (separation), you fall into the same walking pace (alignment), and you drift back toward your friends if you’re sliding to the edge (cohesion). Each member only “pays attention” to nearby neighbors and what’s ahead—no one needs a full map of the whole crowd. Add gentle steering around obstacles and a shared pull toward a goal—such as an exit sign or your meeting spot—and the whole group flows smoothly.
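
Reynolds implements these urges as steering adjustments. Here is a heavily simplified 2D sketch of one update rule (the weights, radius, and agent count are invented, and real boids also limit speed and see mainly forward):

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(30, 2))        # 30 agents in a 10x10 area
vel = rng.uniform(-1, 1, size=(30, 2))
RADIUS, W_SEP, W_ALI, W_COH = 2.0, 0.05, 0.05, 0.01   # invented tuning weights

def step(pos, vel):
    """One flocking update: separation, alignment, cohesion from nearby neighbors."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist < RADIUS) & (dist > 0)   # each agent sees only its neighborhood
        if not near.any():
            continue
        separation = -offsets[near].sum(axis=0)          # steer away from neighbors
        alignment = vel[near].mean(axis=0) - vel[i]      # match neighbors' velocity
        cohesion = pos[near].mean(axis=0) - pos[i]       # drift toward local center
        new_vel[i] += W_SEP * separation + W_ALI * alignment + W_COH * cohesion
    return pos + new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
print("velocity spread after 100 steps:", np.round(vel.std(axis=0), 3))
```

No agent knows the whole crowd, yet the spread of velocities shrinks step by step as the group falls into a shared heading.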

Why does this matter to you? Because the same rules help in daily life. When biking through traffic or skateboarding in a busy park, think like a flock: maintain a safe distance, synchronize your speed with the lane, and aim for clear space rather than forcing your way through. In team games, you can move as a unit by mirroring a teammate’s direction and spacing instead of waiting for shouted commands. Even your study group works better with these habits: avoid “collisions” by not talking over others, align by matching the group’s focus, and keep close enough that no one drifts out of the conversation. Small, quick adjustments beat big, late corrections.

There’s also a creative twist. Animators and game designers use these simple rules to create realistic crowds, schools of fish, or traffic without scripting every path. You can apply the same mindset to planning events or routes: assume people follow what they see around them, not a master plan. Place obstacles so they gently nudge the flow instead of blocking it. Start a “turn” from one side and let it ripple like a chorus line. And remember the secret of smooth motion: prioritize. In a tight spot, safety comes first (avoid collisions), then match pace, and then regroup. When you practice these three habits, you’ll find that everyday movement—leaving a concert, changing buses, or walking to class—starts to feel less like chaos and more like quiet, shared choreography.

Reference:
Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. ACM SIGGRAPH Computer Graphics, 21(4), 25–34. https://doi.org/10.1145/37402.37406

When Typos Evolve: What “Quasi-Species” Teach Us About Change

Imagine copying a friend’s workout plan by hand and sharing it in your group chat. A few people fix a line, others tweak an exercise, and someone adds a rest day. Soon, there isn’t one “original” plan. There’s a cloud of similar versions, with one popular template in the center and numerous nearby variations. That living swarm of copies is how Manfred Eigen, John McCaskill, and Peter Schuster describe molecules that copy themselves with small errors. They call it a quasi-species: not a single perfect sequence, but a whole population clustered around a “master” one. Selection acts on the cluster, not just the one pristine original. 

According to Eigen and colleagues, you can picture all possible sequences laid out in a vast “sequence space,” where distance means how many positions differ (the Hamming distance). Close neighbors are only one typo away; far ones need many changes. In high dimensions, there are countless short routes between any two points, so there are many ways to wander from one to another through small mistakes. That matters because copying is never perfect. A master sequence tends to be the best copier, but its mutants also copy, and together they form a stable crowd that wins because, as a group, they reproduce more effectively than rivals. Think of clans rather than isolated individuals—the group that forms the strongest “mode” of the population dominates.
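
A toy version of this population makes the “clan” idea visible. In this sketch the fitness values, error rate, and sequence length are all invented; sequences are bit strings, copies earn offspring in proportion to fitness, and every copy may pick up typos:

```python
import random
from collections import Counter

LENGTH, POP, ERR = 12, 300, 0.01          # sequence length, population, per-site error
MASTER = tuple([1] * LENGTH)

def hamming(a, b) -> int:
    """Distance in sequence space: number of positions that differ."""
    return sum(x != y for x, y in zip(a, b))

def fitness(seq) -> float:
    """The master copies best; everything else copies equally well (an assumption)."""
    return 10.0 if seq == MASTER else 1.0

def replicate(seq):
    """Copy with typos: each position flips independently with probability ERR."""
    return tuple(1 - s if random.random() < ERR else s for s in seq)

population = [MASTER] * POP
for _ in range(200):                      # selection + mutation, generation by generation
    weights = [fitness(s) for s in population]
    parents = random.choices(population, weights=weights, k=POP)
    population = [replicate(p) for p in parents]

dist = Counter(hamming(s, MASTER) for s in population)
print("cloud around the master (distance: count):", dict(sorted(dist.items())))
```

The printout shows the quasi-species: the master plus a halo of one- and two-typo mutants, stable as a group even though individual copies keep mutating.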

There’s a catch called the error threshold. If copies are too sloppy, the crowd can’t stay clustered, and the “message” dissolves. In plain terms: the longer the sequence, the higher the bar for accuracy; if fidelity drops, the system can’t hold specific information. The authors demonstrate that selection can be viewed as hill-climbing on a landscape, where the population tends to converge toward better replicators, provided that errors are low enough to maintain group cohesion. Even cooler, evolution isn’t purely a random stumbling process. Because more successful variants become more common, new mutants tend to appear at the populated edges along “ridges” that connect good regions—so change is biased toward success. It’s like a playlist algorithm nudging you from one favorite song to another via similar tracks.
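
The threshold can be stated compactly: if the master copies each position correctly with probability q and enjoys a selective advantage σ, it survives only while σ·q^L > 1, which for small error rates means the length L must stay below roughly ln σ / (1 − q). A quick sketch of what that limit implies (σ = 10 is an invented advantage):

```python
import math

def max_length(sigma: float, per_site_error: float) -> float:
    """Error threshold: longest sequence a master with advantage sigma can maintain.
    The condition sigma * q**L > 1 with q = 1 - per_site_error gives, for small
    error rates, approximately L < ln(sigma) / per_site_error."""
    return math.log(sigma) / per_site_error

for err in (0.05, 0.01, 0.001):
    print(f"error rate {err}: information holds up to ~{max_length(10, err):.0f} positions")
```

Halving the error rate roughly doubles the amount of information the system can hold onto, which is why copying fidelity sets a hard ceiling on how long a “message” can get.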

Why should you care? Because this is how ideas, habits, and even internet memes behave. Your study method, your budget, your fitness routine—each lives as a quasi-species of versions you try, keep, and share. Keep errors small in the parts that carry meaning (your essential steps), or the routine turns to noise. Allow some variation to occur at the edges so you can discover better peaks without losing the overall plot. And when you want real progress, look for ridges: minor, low-risk tweaks that connect what already works to something better. You don’t need a perfect plan; you need a stable core with room to explore. That is the everyday power of quasi-species thinking that Eigen, McCaskill, and Schuster brought to light.

Reference:
Eigen, M., McCaskill, J., & Schuster, P. (1988). Molecular quasi-species. The Journal of Physical Chemistry, 92(24), 6881–6891. https://doi.org/10.1021/j100335a010