How Tech Can Read the Room—and Make Your Experience Better

Ever felt like a game, app, or museum exhibit “just gets you” and reacts at the right moment? Rosales et al. explain a simple idea behind that feeling: measure how you’re interacting, then use that to adapt what you see. They lean on a classic set of eight clues about your behavior—presence, interactivity, control, feedback, creativity, productivity, communication, and adaptation—to describe your “level of interaction.” Think of them as vibes the system watches for: Are you engaged? Are you trying new things? Are you getting responses when you press buttons? These signals help the system learn what to show next, so you don’t get bored or lost.
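To make the eight clues concrete, here's a minimal sketch of how a system might hold them: one record per visitor, each signal scored between 0 and 1. The class name and example numbers are illustrative, not from the paper.

```python
from dataclasses import dataclass, astuple

@dataclass
class InteractionSignals:
    """The eight behavioral clues, each scored from 0.0 (absent) to 1.0 (strong)."""
    presence: float
    interactivity: float
    control: float
    feedback: float
    creativity: float
    productivity: float
    communication: float
    adaptation: float

    def all_scores(self) -> list[float]:
        """Return the eight scores in a fixed order, ready for a scoring step."""
        return list(astuple(self))

# A hypothetical visitor: engaged and in control, but not very communicative.
visitor = InteractionSignals(0.9, 0.8, 0.7, 0.8, 0.5, 0.6, 0.2, 0.4)
print(visitor.all_scores())
```

Keeping the signals in one typed record makes the later scoring step a pure function of these eight numbers.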

To test this in real life, the team visited an interactive science museum in Tijuana, where people—especially children and teenagers—play to learn. They tracked everyday details, such as how long someone stayed, where they moved, whether they read information labels, and if they returned to the same spot. That may sound small, but together, those bits tell a story about attention and curiosity, helping designers make labels clearer, stations easier to use, and activities more enjoyable. Imagine a driving or flight station that notices you’re stuck and gives a quick tip, or speeds things up when you’re clearly nailing it—that’s the goal.

Under the hood, Rosales et al. use a fuzzy logic system—don't worry, it's just math that handles "in-between" values instead of only yes/no. Each of the eight clues gets a score between 0 and 1, and the system groups those scores into linguistic levels from "very bad" up to "excellent." Then it determines your overall interaction level on a scale from 0 to 5, much like a skill tier in a game. If your level is near the next tier, it nudges you upward and updates its knowledge of you for the next step. In plain terms, the exhibit watches what you do, estimates how engaged you are right now, and adapts so you can keep learning without zoning out.
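The paragraph above can be sketched in a few lines. This is a toy fuzzy inference step, not the paper's actual rule base: triangular membership functions map each 0–1 score onto the five linguistic levels, and a centroid defuzzification turns the averaged memberships into a 0–5 level. The centers, widths, and level names here are illustrative assumptions.

```python
def membership(score: float, center: float, width: float = 0.25) -> float:
    """Triangular membership: 1.0 at the center, falling to 0 at center +/- width."""
    return max(0.0, 1.0 - abs(score - center) / width)

# Five linguistic levels, evenly spaced across the 0-1 score range (assumed).
LEVELS = {"very bad": 0.0, "bad": 0.25, "regular": 0.5, "good": 0.75, "excellent": 1.0}

def fuzzify(score: float) -> dict[str, float]:
    """Degree to which a single 0-1 score belongs to each linguistic level."""
    return {name: membership(score, center) for name, center in LEVELS.items()}

def interaction_level(scores: list[float]) -> float:
    """Collapse the eight 0-1 scores into one overall level on a 0-5 scale,
    using the centroid of the averaged memberships (a common defuzzification)."""
    avg = {name: sum(fuzzify(s)[name] for s in scores) / len(scores)
           for name in LEVELS}
    centroid = sum(LEVELS[n] * m for n, m in avg.items()) / sum(avg.values())
    return round(centroid * 5, 2)

# The same hypothetical visitor: strong presence and feedback, weak communication.
print(interaction_level([0.9, 0.8, 0.7, 0.8, 0.5, 0.6, 0.2, 0.4]))
```

A visitor scoring 1.0 on every clue lands at level 5.0, all zeros at 0.0, and mixed signals somewhere in between—which is exactly the "in-between" behavior the fuzzy approach buys you over a yes/no rule.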

Does it work? They tried it with data from 500 visitors. The team split the group in half—one half to set up the tool and the other half to test it—and compared the system's calls with human judgments. The system agreed with the human judges about 76% of the time, a decent showing for a first pass. For everyday life, that means smarter exhibits, apps, and games that can sense when to give you hints, when to challenge you, and when to switch things up. It's the same idea you can use yourself: notice your own signals—am I engaged, getting feedback, learning something new?—and tweak your setup, whether that's changing a study app's difficulty, turning on captions, or picking a different mode in a game. Small cues add up to a better experience.
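The evaluation recipe is simple enough to sketch: shuffle the visitor records, split them 50/50, and count how often the system's level matches the human rating on the held-out half. The toy labels below are made up for illustration; only the 500-visitor split and the agreement metric come from the paper.

```python
import random

def holdout_split(visitors, seed: int = 42):
    """Shuffle and split visitor records 50/50: one half to set up
    (calibrate) the system, the other half to test it."""
    rng = random.Random(seed)          # fixed seed so the split is repeatable
    shuffled = list(visitors)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def agreement(system_levels, human_levels) -> float:
    """Fraction of visitors where the system's level matches the human rating."""
    matches = sum(s == h for s, h in zip(system_levels, human_levels))
    return matches / len(human_levels)

setup_half, test_half = holdout_split(range(500))
# Hypothetical 0-5 levels for eight test visitors (not real study data):
system = [3, 2, 4, 1, 3, 3, 5, 2]
human  = [3, 2, 4, 2, 3, 1, 5, 2]
print(len(setup_half), len(test_half), agreement(system, human))  # 250 250 0.75
```

Exact-match agreement is the strictest way to score this; a real evaluation might also count "off by one level" as partial credit.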

Inspired by Gayesky and Williams' idea of interaction levels and brought to life by Rosales et al., this approach is about meeting you where you are and moving with you. The more systems pay attention to those eight everyday clues—and the more they adjust in the moment—the more tech feels like a helpful guide, rather than a hurdle. Next time a tool feels smooth and responsive, there's a good chance it's quietly reading the room and adapting to keep you in the zone.

Reference:
Rosales, R., Ramírez-Ramírez, M., Osuna-Millán, N., Castañón-Puga, M., Flores-Parra, J. M., & Quezada, M. (2019). A fuzzy inference system as a tool to measure levels of interaction. In Advances in Intelligent Systems and Computing (Vol. 931). https://doi.org/10.1007/978-3-030-16184-2_52