Reality in Bits: Why Your Questions Matter (Wheeler’s Big Idea)

You check your phone and see a notification. Tap or ignore. Yes or no. That tiny choice decides what you see next, which ad appears, and which song autoplays. John Archibald Wheeler, a physicist with a flair for bold ideas, argued that the universe itself works a bit like that. He claimed every “it” in the world—particles, fields, even space and time—gets its meaning from “bits,” the simple yes-no answers our measurements pull from nature. He called it “it from bit,” and he thought observer participation is not a footnote, but the starting point. 

According to Wheeler, an experiment is like asking nature a clear question and writing down a clean answer. No question, no answer. When a detector clicks, we often say “a photon did it,” but what we truly have is a recorded yes-no event, a single bit that makes the story real for us. In another example, turning on a hidden magnetic field shifts an interference pattern (the Aharonov–Bohm effect); the shift is again read as counts—yes–no answers that reveal the field. Even black holes, the ultimate cosmic mystery, carry “entropy” that can be read as the number of hidden bits about how they were formed. Everyday version? Think of scanning a ticket at a concert: the gate doesn’t “know” you until your QR code returns a yes. The event becomes real for the system at the moment of that verified click.
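
Curious how many hidden bits a black hole holds? Here is a tiny back-of-the-envelope calculation in Python. It uses the standard Bekenstein-Hawking result Wheeler leans on (entropy proportional to horizon area, counted in Planck-sized patches); the constants, the function name, and the rounding are ours, so read it as a rough sketch, not something from his paper.

import math

# Physical constants in SI units
G = 6.674e-11      # gravitational constant
c = 2.998e8        # speed of light
hbar = 1.055e-34   # reduced Planck constant
M_SUN = 1.989e30   # mass of the Sun, kg

planck_area = hbar * G / c**3   # squared Planck length, about 2.6e-70 m^2

def horizon_bits(mass_kg):
    """Rough bit count for a black hole: horizon area / (4 * ln 2 * Planck areas)."""
    r_s = 2 * G * mass_kg / c**2    # Schwarzschild radius in meters
    area = 4 * math.pi * r_s**2     # horizon area in square meters
    return area / (4 * planck_area * math.log(2))

print(f"{horizon_bits(M_SUN):.1e} bits")  # about 1.5e77 bits for one solar mass

That is an absurdly large pile of yes-no answers bound up in a single object, which is exactly Wheeler’s point.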

Wheeler also lays down four shake-ups: no infinite “turtles all the way down,” no eternal prewritten laws, no perfect continuum, and not even space and time as basic givens. He urges a loop: physics gives rise to observer-participancy, which gives rise to information, which then gives rise to physics. Meaning isn’t private; it’s built through communication—evidence that can be checked and shared. That’s why the past, in this view, is what’s recorded now; in Wheeler’s famous “delayed-choice” experiment, the apparatus we set up today decides which path that ancient photon “took” when we finally measure it. In daily life, that’s how group chats settle plans: until a poll closes, there is no fixed “Friday plan.” Once the votes (bits) are in, the plan (the “it”) exists for everyone.

So what’s useful here? First, ask better questions. The choice of question shapes what you have the right to say about the world. Second, respect the click—the simple, reliable bit—because significant patterns grow from countless small answers; “more is different” when many bits combine. Third, remember that meaning needs community. A claim doesn’t count until others can check the evidence. In short, your everyday yes-no choices—what you measure, share, and record—are not trivial. They’re how reality, in Wheeler’s sense, gets built, from the lab to your life.

Reference:
Wheeler, J. A. (1990). Information, Physics, Quantum: The Search for Links. In A. J. G. Hey (Ed.), Feynman and Computation (pp. 309–336). CRC Press. https://doi.org/10.1201/9780429500459-19

Privacy Notice & Disclaimer:
This blog provides simplified educational science content, created with the assistance of both humans and AI. It may omit technical details, is provided “as is,” and does not collect personal data beyond basic anonymous analytics. For full details, please see our Privacy Notice and Disclaimer. For more about this project, see About This Blog & Attribution Note for AI-Generated Content.

How Flies Read the World—And What That Teaches Us About Signals

Imagine biking downhill with the wind in your face. Everything is moving fast, yet you still dodge potholes and react in a blink. Your brain is turning bursts of electrical “pings” from your eyes into smooth, useful information about motion. That everyday magic—making sense from quick spikes—is exactly what Bialek and colleagues set out to understand. They flipped the usual lab view. Instead of asking how a known picture makes a neuron fire on average, they asked how a living creature could decode a short, one-off burst of spikes to figure out an unknown, changing scene in real time. They showed it’s possible to “read” a neural code directly, not just describe it in averages. 

According to Bialek and colleagues, the classic “firing rate” concept is an average over many repetitions or across many cells. Real life rarely gives you that luxury. You usually get one noisy shot. So they focused on decoding from a single spike train, as an organism must do on the fly—literally. In the blowfly’s visual system, a motion-sensitive neuron called H1 feeds fast flight control. With only a handful of neurons in that circuit, the animal can’t compute neat averages; decisions rely on just a few spikes. The team’s key move was to replace rate summaries with a real-time reconstruction of the actual motion signal from those spikes. 

Here’s how they put it to the test. The fly viewed a random moving pattern whose steps changed every 500 microseconds, while the researchers recorded H1’s spike times. Then they built a decoding filter to turn spikes back into the motion waveform. To make it realistic, they required the filter to be causal and studied the tradeoff between speed and accuracy: waiting a bit longer improves the estimate, but you can’t wait forever if you need to act. Performance rose with delay and then leveled off around 30–40 milliseconds—right around the fly’s behavioral reaction time. The reconstructions were strong across a useful bandwidth, with errors that looked roughly Gaussian rather than systematic. Best of all, the neuron achieved “hyperacuity”: with one second of viewing, the motion could be judged to about 0.01°, far finer than the spacing of photoreceptors and close to theoretical limits set by the input itself. 
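
To get a feel for the approach, here is a minimal Python sketch in the same spirit. Everything in it is a stand-in we made up: a smooth toy “motion” signal, a crude spike generator, and a causal linear filter fit by regularized least squares rather than the authors’ actual procedure. What it shares with the paper is the logic: reconstruct the signal from spikes alone, and watch accuracy rise and then level off as you allow a longer decoding delay.

import numpy as np

rng = np.random.default_rng(0)

# Toy stimulus: a smoothed random "motion" signal, one sample per millisecond
T = 20000
stim = np.convolve(rng.normal(size=T), np.ones(20) / 20, mode="same")

# Toy encoder (not the fly's!): spike probability rises with the current stimulus
rate = np.clip(0.05 * (1 + 4 * stim), 0, 1)
spikes = (rng.random(T) < rate).astype(float)

# Causal 40 ms window of recent spikes: X[t, k] = spikes[t - k]
lag = 40
X = np.stack([np.roll(spikes, k) for k in range(lag)], axis=1)
X[:lag] = 0.0

# Decode the stimulus `delay` ms in the past from spikes up to now,
# fitting the filter by ridge-regularized least squares
for delay in (0, 5, 10, 20):
    target = np.roll(stim, delay)
    target[:lag] = 0.0
    w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(lag), X.T @ target)
    estimate = X @ w
    r = np.corrcoef(target[lag:], estimate[lag:])[0, 1]
    print(f"delay {delay:2d} ms: correlation {r:.2f}")

Waiting a little buys accuracy, then the curve flattens: the same speed-accuracy tradeoff the fly faces.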

Why does this matter for your daily life? First, simple tools can decode rich signals: a straightforward linear filter turned spikes into motion with surprising fidelity. Second, quick decisions don’t require tons of data; a brief ~40 ms window and a few spikes can convey what matters, which is why “firing rate over time” isn’t always the right mental model. Third, robust systems tolerate minor timing errors; the code still works even when spike times are nudged by a few milliseconds. In short, smart decoding beats brute averaging, waiting just long enough maximizes usefulness, and good designs are fault-tolerant. That’s a handy recipe for studying, sports, or any fast choice you make under uncertainty. And yes—this work demonstrates that we can literally read a neural code in real time.

Reference:
Bialek, W., Rieke, F., de Ruyter van Steveninck, R. R., & Warland, D. (1991). Reading a Neural Code. Science, 252(5014), 1854–1857. https://doi.org/10.1126/science.2063199

Leveling Up Your Choices: How Simple Rules Help You Learn Faster

You’re juggling classes, a part-time job, and maybe a side hustle. Every week brings a new app to try, a study trick to test, and a different way your friends are making money online. It can feel like the world keeps changing just when you think you’ve figured it out. Holland and Miller describe this kind of world as one that’s always throwing something new at us, where smart choices come from adapting step by step rather than guessing the perfect plan in advance. In their view, useful patterns can be tested inside “artificial worlds,” like simulations, so we can watch how strategies improve over time and see which ones actually work. 

According to Holland and Miller, many real-life settings look like “complex adaptive systems.” Think of a campus or a marketplace: lots of people trying things, learning from feedback, and adjusting based on what everyone else does. There isn’t one final, perfect strategy; instead, you find local “niches” where certain habits pay off—like a study routine that works for your schedule or a pricing trick that fits your small shop. New moves by others create new niches, so improvement never really stops—more like constantly upgrading your loadout in a game than finishing a final boss. 

So how do you get better inside a shifting system? One idea is to learn the way a basic “genetic algorithm” learns: keep a bunch of simple strategies, reward the ones that perform well, and mix their best parts to create new ones. In plain terms, if Pomodoro sprints help you focus and walking meetings spark ideas, combine them into “walk, then sprint” and test again. This mixing step—called “crossover”—is powerful because it builds on what already works, instead of starting from scratch each time. Over many rounds, you bias your search toward better “building blocks,” and your average results rise without you needing perfect information or heavy math. 
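
Here is that loop in miniature. This is a deliberately tiny genetic algorithm in Python; the 16 yes-no “habits” and the toy fitness function are ours, standing in for “did this strategy pay off this week,” so treat it as a sketch of the mechanism Holland and Miller describe, not their model.

import random

random.seed(1)

GENES = 16  # a strategy is a string of 16 yes/no habits (a toy stand-in)

def fitness(strategy):
    # Toy payoff: count how many habits match a hidden "good routine" (all 1s)
    return sum(strategy)

def crossover(a, b):
    # Mix the best parts: splice the front of one parent onto the back of another
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(strategy, rate=0.02):
    # Occasionally flip a habit, so the search never gets fully stuck
    return [g ^ 1 if random.random() < rate else g for g in strategy]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(30)]
for generation in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # reward the strategies that performed well
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

print("best score:", fitness(max(pop, key=fitness)), "out of", GENES)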

Another tool they discuss is a “classifier system,” which you can picture as a bunch of if-then rules competing to guide your next move: if the library is crowded, then study in a quiet café; if a post flops, then try a shorter caption. Each rule earns “strength” when it helps you get a payoff and loses strength when it doesn’t—like a built-in scoreboard for your habits. Over time, helpful rules link up, forming smarter routines that still stay flexible, because every rule is provisional and can be replaced when the world changes. This way of learning—small rules, constant feedback, and recombining what works—makes progress feel doable even when wins are rare or delayed. It’s a reminder that you don’t need to be all-knowing to act more intelligently tomorrow than you did today.
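
And here is the if-then scoreboard in miniature. Real classifier systems use bids and a “bucket brigade” to pass credit along chains of rules; this Python sketch keeps only the core loop (match the situation, let strength decide who acts, update the winner’s strength from the payoff), and the situations and payoffs are invented for the example.

import random

random.seed(2)

# Each rule is an if-then habit with a strength score
rules = [
    {"if": "library_crowded", "then": "study_in_cafe",    "strength": 1.0},
    {"if": "library_crowded", "then": "wait_for_seat",    "strength": 1.0},
    {"if": "library_quiet",   "then": "study_in_library", "strength": 1.0},
    {"if": "library_quiet",   "then": "study_in_cafe",    "strength": 1.0},
]

# Toy environment: payoff of each (situation, action) pair, unknown to the learner
PAYOFF = {
    ("library_crowded", "study_in_cafe"):    1.0,
    ("library_crowded", "wait_for_seat"):    0.2,
    ("library_quiet",   "study_in_library"): 1.0,
    ("library_quiet",   "study_in_cafe"):    0.5,
}

for step in range(200):
    situation = random.choice(["library_crowded", "library_quiet"])
    matching = [r for r in rules if r["if"] == situation]
    # Stronger rules are more likely to win the right to act
    winner = random.choices(matching, weights=[r["strength"] for r in matching])[0]
    payoff = PAYOFF[(situation, winner["then"])]
    # The scoreboard: nudge the winner's strength toward the payoff it earned
    winner["strength"] += 0.1 * (payoff - winner["strength"])

for r in sorted(rules, key=lambda r: -r["strength"]):
    print(f'{r["if"]:16} -> {r["then"]:16} strength {r["strength"]:.2f}')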

Reference:
Holland, J. H., & Miller, J. H. (1991). Artificial Adaptive Agents in Economic Theory. The American Economic Review, Papers and Proceedings of the Hundred and Third Annual Meeting of the American Economic Association, 81(2), 365–370. https://www.jstor.org/stable/2006886

How Tiny Mistakes Can Grow Cooperation

You and a friend decide to study together every week. Most days, you both show up and share notes. Now and then, someone is late, or a message gets lost, and the plan derails. Do you quit, forgive, or try a new routine? Kristian Lindgren built a simple computer world to study choices like these. In his world, many “students” play a repeated cooperation game, sometimes make mistakes, and learn over generations which habits survive. The surprise is that small errors don’t just cause chaos; they can push communities toward smarter, fairer ways of cooperating.

Here’s the setup in everyday terms. Each player follows a short rulebook, like “if they helped me last time, I help now.” These rulebooks are like tiny genomes with memory, and they can change through “mutations” such as flips, copies, or trims—simple edits that create new habits to test. Everyone plays everyone, good habits earn more “offspring,” and the game keeps going. Even a classic friendly rule like “Tit for Tat” struggles when messages glitch: its average payoff falls below the fully cooperative score, because a single slip can lock partners into rounds of pointless payback. Lindgren shows how this happens and how new, stricter rules emerge when mistakes are common.
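
You can watch that Tit-for-Tat fragility in a few lines of Python. This toy match is our own construction, not Lindgren’s simulation: two Tit-for-Tat players face each other, noise occasionally flips a move, and the average payoff sinks well below the all-cooperate score of 3 as retaliation cascades set in.

import random

random.seed(3)

# Prisoner's dilemma payoffs for (my move, their move), from my point of view
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(their_last):
    return "C" if their_last == "C" else "D"

def average_payoff(noise, rounds=20000):
    last_a, last_b = "C", "C"
    total = 0
    for _ in range(rounds):
        a, b = tit_for_tat(last_b), tit_for_tat(last_a)
        # With probability `noise`, a move comes out wrong (the lost message)
        if random.random() < noise:
            a = "D" if a == "C" else "C"
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        total += PAYOFF[(a, b)]
        last_a, last_b = a, b
    return total / rounds

for noise in (0.0, 0.01, 0.05):
    print(f"noise {noise:.2f}: average payoff {average_payoff(noise):.2f}")

Even a 1 percent slip rate drags two well-meaning players far from full cooperation, which is exactly the pressure that lets sturdier rules evolve.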

What grows out of this messy mix is very relatable. Populations sit in long, calm periods, then flip fast into something new—like a group project that works for weeks and suddenly collapses when one shortcut spreads. Sometimes two different “okay-but-flawed” rules prop each other up: when they meet, they sync and recover cooperation after a slip, even though each one alone would spiral into conflict. Later, sturdier rules emerge that recall a bit more history and respond to one defection with two firm responses before returning to peace. That move blocks freeloaders and keeps the average payoff high even with noise, much like setting clear boundaries after someone flakes. 

So what can you use today? First, expect errors and design for recovery. If a friend misses once, don’t nuke the friendship; try a brief, clear consequence and then reset. Second, remember that patience plus memory beats snap reactions. Keeping track of the last couple of interactions helps you respond fairly, not just emotionally. Third, watch for sneaky patterns that benefit in the short term but ultimately erode trust; they can cause “extinctions” where good vibes vanish for everyone. Lindgren’s message is simple: cooperation is not naïve. With the right habits, it’s robust, even when life is noisy.

Reference:
Lindgren, K. (1991). Evolutionary Phenomena in Simple Dynamics. In C. G. Langton, C. Taylor, J. D. Farmer, & S. Rasmussen (Eds.), Artificial Life II (SFI Studies in the Sciences of Complexity). Addison-Wesley. https://www.researchgate.net/publication/258883366_Evolutionary_Phenomena_in_Simple_Dynamics

Why So Much of Life Runs Through Organizations (and How That Helps You)

Picture your day. Classes, a part-time job, a club meeting, maybe a shift at the cafe. Notice a pattern? Almost everything happens inside a group with rules, roles, and someone setting direction. Herbert A. Simon suggests that if a visitor from Mars looked at Earth, they’d see big “green” zones of organizations connected by thin “red” market lines—and they’d probably call this an “organizational economy,” not just a market one. The label matters because it changes what we pay attention to in real life: most people are employees, not owners, and the big question becomes how groups actually get people to work toward shared goals. 

Simon argues that classic theories love markets and contracts, but the real action is inside firms—schools, startups, nonprofits, public agencies—where people coordinate every hour. One reason firms exist is the employment deal: you agree to take direction now for tasks that can’t be fully predicted or negotiated in advance. That’s an “incomplete” contract, and it’s efficient when the future is messy. Day to day, you’re not micromanaged; you work within a “zone of acceptance” where lots of choices are fine to you but important to your boss—like which customer email to answer first or which drink to prep next—so orders can focus on results, principles, or constraints instead of step-by-step instructions. That’s why initiative matters: good work isn’t just “follow every rule,” it’s spotting decisions and moving things forward. 

So why do people try hard if a contract can’t spell everything out or pay for every extra effort? Money and promotions help, but they’re not enough on their own. Simon points to identification—the feeling of “we”—as a powerful everyday engine. When we’re taught and encouraged to care about the team, we take real pride in its wins and act for the group, not just ourselves. He links this to a broader human trait he calls “docility,” meaning teachability and responsiveness to social norms, which makes loyalty and cooperation common—even when they’re not instantly “selfish.” For you, that’s practical: choose teams where the “we” is clear, learn the local goals fast, and use simple scoreboards (quality, safety, service) to guide choices when no one is watching. That mix—some rewards, strong identity, and clear cues—explains why many organizations work surprisingly well. 

There’s one more everyday superpower of organizations: coordination. Think of “rules of the road,” or the registrar that turns campus chaos into a class schedule—standards that let everyone predict each other and get on with it. Beyond rules, groups also balance things by quantities, not just prices: low bin of cups? The system reorders; suppliers schedule production; the whole chain adjusts. Put together—authority used to set clear goals, a shared “we” that motivates effort, and simple coordination tools—organizations can specialize deeply and still run smoothly. That’s why Simon says modern economies are best seen as organizational economies, and why learning to navigate teams is a life skill as useful as any class.
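
To make “adjustment by quantities” concrete, here is the cups example as a toy Python rule. The numbers, names, and lead time are invented for illustration; nothing here comes from Simon’s paper.

import random

random.seed(4)

REORDER_POINT = 25   # when the bin runs this low...
ORDER_SIZE = 50      # ...a fixed batch is ordered; no price signal involved
LEAD_TIME = 3        # supplier needs 3 days to deliver

stock, on_order = 40, 0
for day in range(1, 15):
    stock -= random.randint(2, 8)     # cups used today
    if on_order:
        on_order -= 1
        if on_order == 0:
            stock += ORDER_SIZE       # delivery arrives
    if stock <= REORDER_POINT and not on_order:
        on_order = LEAD_TIME
        print(f"day {day:2d}: stock={stock}, reorder placed")
print("final stock:", stock)

Notice that nothing in the loop ever looks at a price: the quantity on hand is the whole signal, which is Simon’s point about how much coordination runs on quantities.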

Reference:
Simon, H. A. (1991). Organizations and Markets. Journal of Economic Perspectives, 5(2), 25–44. https://doi.org/10.1257/jep.5.2.25
