
Imagine your friend group chat. Some people talk a lot, others only chime in when tagged, and the vibe shifts as new friends join or old ones mute the thread. That’s a simple way to picture how many “smart” systems work: they’re networks whose connections matter and can change. Farmer calls these “connectionist” models and defines them by two things: only certain parts talk to each other at a time, and those lines of talk can strengthen, weaken, or rewire as the system runs. He also argues this idea isn’t just for neural networks—it also fits rule-based learners, immune systems, and even chemical reaction webs.
Under the hood, the common picture is a graph: dots (nodes) connected by links (who can influence whom). You can describe that picture with an adjacency matrix (a full grid of connection strengths) or a compact list of each node's neighbors, and whether the web is dense or sparse determines which form is practical. What makes these systems feel "alive" is that there are often three tempos at once: fast changes to the node states (what's happening right now), slower tweaks to parameters like weights (learning), and the slowest shifts to the wiring itself (who talks to whom). That last part, rewiring, can also be a form of learning.
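To make that concrete, here's a minimal Python sketch of a four-node network stored both ways, with the three tempos hinted at in the last few lines. The node count, weight values, and tanh update rule are illustrative assumptions, not anything taken from the paper.

```python
import numpy as np

# Dense storage: a full adjacency matrix. Entry [i, j] is the strength of the link i -> j.
weights = np.array([
    [0.0, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.3],
    [0.2, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

# Sparse storage: only list the links that actually exist (handy when most entries are zero).
adjacency = {
    0: [(1, 0.8)],
    1: [(2, 0.5), (3, 0.3)],
    2: [(0, 0.2)],
    3: [],
}

# Fast tempo: update node states from their weighted inputs.
states = np.array([1.0, 0.0, 0.0, 0.0])
states = np.tanh(weights.T @ states)

# Slower tempo: nudge an existing weight (learning).
weights[0, 1] += 0.05

# Slowest tempo: rewire, i.e. create a connection that wasn't there before.
weights[0, 2] = 0.1
adjacency[0].append((2, 0.1))
```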
Take the neural networks you've seen in AI headlines. In a feed-forward net, signals move layer by layer from input to output; in a recurrent net, outputs can circle back, which adds memory but also makes it less obvious when the computation is finished. Learning can be as simple as "cells that fire together wire together" (the Hebbian principle, which strengthens connections between units that are active at the same time) or as guided as backpropagation, which adjusts connections to reduce error on known examples. Classifier systems look different on the surface, just lots of if-this-then-that rules that post messages, but they're still networks. Messages act like node activations, rules carry strengths, a "bucket brigade" passes credit backward along the chain of rules that led to a payoff, and genetic tweaks (mutations and crossovers) keep improving the rule set. Even practical details like thresholds and "support" change how many messages get through at each step.
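The Hebbian rule is simple enough to write out. Here's a minimal sketch, assuming a small network whose states and weights live in NumPy arrays and using a made-up learning rate; it's the "fire together, wire together" idea in code, not Farmer's exact formulation.

```python
import numpy as np

def hebbian_step(weights, states, eta=0.01):
    """Strengthen each link in proportion to how active both of its endpoints are."""
    # np.outer gives delta[i, j] = eta * states[i] * states[j]
    return weights + eta * np.outer(states, states)

states = np.array([1.0, 0.9, 0.0])       # nodes 0 and 1 fire together; node 2 stays silent
weights = np.zeros((3, 3))
weights = hebbian_step(weights, states)  # the 0-1 link grows; links touching node 2 don't
```

The bucket brigade is harder to picture, so here's a toy version under loose assumptions: each rule holds a strength, and when a rule fires it pays a fraction of its strength back to the rule whose message triggered it, so payoff earned at the end of a chain gradually seeps backward to the "setup" rules. The rule names, bid fraction, and reward value are invented for illustration.

```python
bid_fraction = 0.1
strengths = {"rule_A": 10.0, "rule_B": 10.0, "rule_C": 10.0}
chain = ["rule_A", "rule_B", "rule_C"]   # A's message triggers B, whose message triggers C

def run_chain(strengths, chain, reward):
    for earlier, later in zip(chain, chain[1:]):
        bid = bid_fraction * strengths[later]
        strengths[later] -= bid      # the later rule pays to fire...
        strengths[earlier] += bid    # ...and the rule that set it up collects the payment
    strengths[chain[-1]] += reward   # the last rule receives the external payoff

run_chain(strengths, chain, reward=5.0)
# Over many runs, rules that reliably set up rewarded chains accumulate strength,
# even though only the final rule ever touches the reward directly.
```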
Now zoom way inside your body. Farmer and colleagues show how the immune system can also be read as a learning network. It must tell "self" from "not-self," and that skill is learned rather than hard-wired. Beyond single cells reacting to invaders, the antibody types can also stimulate and suppress one another, forming a regulating web. To model this, the authors build an "artificial chemistry" in which antibody and antigen types are encoded as strings that match one another more or less firmly. The system then learns through clonal selection (successful matchers get copied) and "gene shuffling" that explores new types. The point isn't fancy math; it's the practical lesson that functional systems learn by adjusting both how strongly parts talk and which parts talk at all. Think of your own routines like that chat: prune noisy threads, boost the ones that move you forward, and don't be afraid to rewire who (and what) you let influence you.
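If you're curious how "strings that match more or less firmly" could look in code, here's a rough sketch in the spirit of that artificial chemistry: binary strings where opposite bits "stick," so affinity counts complementary positions above a threshold. The specific strings, threshold, and scoring rule are illustrative assumptions, not the paper's exact equations.

```python
def affinity(antibody: str, antigen: str, threshold: int = 6) -> int:
    """Count complementary bit positions; matches below the threshold don't stick at all."""
    complementary = sum(1 for a, b in zip(antibody, antigen) if a != b)
    return complementary if complementary >= threshold else 0

antibody = "1010110010"
antigen  = "0101001100"              # almost a perfect complement of the antibody
print(affinity(antibody, antigen))   # strong match, so this antibody type would get cloned

# "Gene shuffling" then explores new types, e.g. by recombining two successful antibodies:
def crossover(parent1: str, parent2: str, cut: int = 5) -> str:
    return parent1[:cut] + parent2[cut:]
```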
Reference:
Farmer, J. D. (1990). A Rosetta stone for connectionism. Physica D: Nonlinear Phenomena, 42(1–3), 153–187. https://doi.org/10.1016/0167-2789(90)90072-W
Privacy Notice & Disclaimer:
This blog provides simplified educational science content, created with the assistance of both humans and AI. It may omit technical details, is provided “as is,” and does not collect personal data beyond basic anonymous analytics. For full details, please see our Privacy Notice and Disclaimer. Read About This Blog & Attribution Note for AI-Generated Content to know more about this blog project.