# The Ideas That Rhyme

## Why the search engine you actually want would also ruin your life.

I was comparing two Obsidian plugins when I accidentally described a search engine that doesn't exist yet — and the reason it can't exist without solving the hardest problem in cognitive architecture.

The plugins both promised to surface "connections" in my notes. One counted backlinks. The other used AI embeddings to find similar content. I uninstalled both.

Neither was doing what I actually wanted, which was finding notes that *rhyme* with each other. Not notes about the same topic. Not notes with overlapping keywords. Notes that share a structural skeleton across completely different domains.

I use "rhymes with" constantly in conversation. "This paper on swarm intelligence rhymes with that Seurat painting." "The way Netflix degrades rhymes with how ADHD hyperfocus spirals." Everyone I say this to knows exactly what I mean. Nobody can explain why.

---

## Rhyming Is Not Analogy

There's a critical distinction here that the cognitive science literature has mostly collapsed.

When Hofstadter says "atoms are like solar systems," he's making an **analogy**. There's a direction to it — the familiar (solar system) explains the unfamiliar (atom). Reverse it — "solar systems are like atoms" — and it sounds reductive, almost dismissive. Analogy has a source and a target. It's a reasoning tool.

When I say "atoms rhyme with solar systems," something different is happening. Neither one is explaining the other. I'm recognizing that they share a relational shape — central mass, orbiting bodies, force-distance relationships — and the recognition goes both ways equally. "Solar systems rhyme with atoms" is just as true. There's no pedagogical direction. It's pure pattern detection.
| | Analogy | Rhyming |
|---|---|---|
| Direction | A → B (source explains target) | A ↔ B (mutual recognition) |
| Purpose | Understanding, reasoning | Detection, discovery |
| Feels like | "Let me explain X using Y" | "Huh — X and Y have the same shape" |
| Reversal | Often breaks ("solar systems are like atoms" — awkward) | Always holds ("solar systems rhyme with atoms" — fine) |

This isn't just a terminological preference. It's a different cognitive operation. Analogy is something you do deliberately, with direction, for explanation. Rhyming is something that happens *to you* — you detect the structural match before you can articulate what the shared structure is.

That distinction has engineering consequences.

---

## What Rhyming Actually Is

For two ideas to "rhyme," two conditions must hold simultaneously:

**1. Cross-domain distance.** The ideas must come from different enough domains that surface features don't do the matching for you. "Cats are like dogs" isn't a rhyme — it's categorization. The shared features (four legs, fur, pets) are surface-level attributes, not structural invariants.

**2. Bridgeable structural invariants.** Despite the domain distance, the ideas must share a relational skeleton — the same tensions, the same patterns, the same topology — that creates a genuine bridge.

Without distance, you get surface similarity. Without the structural bridge, you get noise. Cat and hat rhyme phonetically — they share the sound pattern `-at` — but the *concepts* of a cat and a hat share nothing structural. The semantic distance is too great and there's no relational bridge to cross it.

Conceptual rhyming lives in the sweet spot: far enough apart that the connection is surprising, close enough in structure that the connection is real.

---

## Rhyme Chains

Here's where it gets interesting. Conceptual rhymes are **transitive**. They chain.
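The two conditions are concrete enough to sketch before we chain them. Purely as a toy, assume we somehow had two embeddings per idea — one for surface topic, one for relational structure (the structure embedding is exactly the part nobody knows how to compute at scale) — then a rhyme score would multiply domain distance by structural match:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rhyme_score(a, b, topic_vec, structure_vec):
    """Toy rhyme score. topic_vec and structure_vec are hypothetical
    functions mapping an idea to a surface-topic embedding and a
    relational-skeleton embedding, respectively."""
    # Condition 1: cross-domain distance (surface features must NOT match).
    domain_distance = 1.0 - cosine(topic_vec(a), topic_vec(b))
    # Condition 2: bridgeable structural invariants (skeletons MUST match).
    structural_match = max(0.0, cosine(structure_vec(a), structure_vec(b)))
    # Both conditions must hold at once, so multiply: near-domain pairs
    # ("cats are like dogs") and structure-free pairs ("cat"/"hat") both
    # score near zero, and only the sweet spot scores high.
    return domain_distance * structural_match
```

The chain below is just this score applied repeatedly: each hop keeps the structural match high while the accumulated domain distance grows.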
> **Murmurations of starlings** → **schools of herring** → **syncopated music** → **pointillism**

The structural invariant threading through this chain: *emergent collective pattern from independent local agents following local rules, with no central coordinator.*

- **Starlings → herring** is a short hop. Close domains, obvious match. Both are animal swarms. Surface similarity almost gets you there.
- **Herring → syncopation** is a real bridge. Biology → music. Nothing on the surface connects fish to drum patterns. The structural invariant does all the work: independent elements following local timing rules, producing emergent groove.
- **Syncopation → pointillism** bridges again. Temporal → spatial. Auditory → visual. But the shape holds: many individual elements, each meaningless in isolation, producing coherent pattern at a higher level of organization.

A keyword search can't make this chain. An embedding search might connect starlings to herring — similar domain — but would never reach pointillism. Each hop increases the domain distance while the structural invariant holds steady.

And crucially, analogies don't chain well. "Atoms are like solar systems" doesn't naturally extend to "solar systems are like... something else that atoms are also like?" The directionality collapses. But "starlings rhyme with herring rhyme with syncopation rhyme with pointillism" works — each link carries the structural invariant forward while the domains diverge.

That transitivity is what makes conceptual rhyming a **graph traversal problem** rather than a pairwise comparison problem. And that's what would make it a search engine.

---

## The Search Engine That Doesn't Exist

Nobody is building this. Current search engines match on keywords (Google), embeddings (Semantic Scholar, Elicit), or collaborative filtering (Netflix, Spotify — "users who liked X also liked Y").

All three converge. Keywords find more of the same words. Embeddings find more of the same meaning. Collaborative filtering finds more of the same taste. They optimize the **preference function**: here's what's similar to what you've already consumed.

What I want is a **curiosity function**: here's what shares deep structural properties with what matters to you, in a domain you haven't explored yet.

The closest anyone has come is [Kang and Hope's analogical search engine](https://dl.acm.org/doi/full/10.1145/3530013) at CMU and AI2. They decomposed scientific papers into purpose (what problem does it solve?) and mechanism (how does it solve it?), then matched papers with similar purpose but different mechanism. Their finding: ideation success peaked at an **intermediate** level of structural matching. Too close and it's just surface similarity. Too far and the bridge collapses.

That intermediate sweet spot is conceptual rhyming. They found it empirically. But their purpose/mechanism decomposition is a simplified proxy — two attributes standing in for the full relational skeleton. The real thing would need to represent and match on structural invariants directly.

Nobody knows how to do that at scale. Hofstadter's [Fluid Analogies Research Group](https://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_Analogies) built computational models (Copycat, Tabletop) that detect structural parallels, but they operate on micro-domains, not web-scale corpora. Gentner's [Structure-Mapping Engine](https://groups.psych.northwestern.edu/gentner/papers/GentnerMarkman97.pdf) formalizes the math but requires hand-coded relational representations. The [2025 CAR study](https://www.sciencedirect.com/science/article/abs/pii/S1871187125000574) identified neural connectivity patterns for cross-domain structural matching, confirming the biological mechanism but not how to engineer it.

The pieces exist. The integration doesn't.

---

## The Reason It Can't Exist (Without Solving Something Harder)

Here's the twist.
A rhyme-detection search engine would be both the best tool I've ever used and the worst thing that ever happened to my productivity.

I wrote [[Why Netflix Cant Recommend What You Actually Want|previously]] about how Netflix traps you in shallow loops — more jazz, more jazz, more jazz. The preference function converges to tedium. Eventually you get bored and disengage. The algorithm feeds you diminishing returns until you find your own off-ramp.

A rhyme engine traps you in **deep spirals**. Starlings → herring → syncopation → pointillism → cellular automata → Conway's Game of Life → urban planning → ant colonies → and it's 3 AM.

The critical difference: **the shallow loop gets boring. The deep spiral never does.**

Each hop in a rhyme chain is novel. Each connection is genuine. The structural invariant holds while the domain keeps shifting, so you're perpetually in the discovery sweet spot. There is no natural off-ramp. The dopamine is earned at every step.

This is the hardest regulatory problem in cognitive architecture: **interrupting a process that's working correctly.**

The interrupt signal can't be "this isn't working" — it *is* working. Every link is real. Every connection is rewarding. The signal has to be "this is working *on something other than what you sat down to do.*" That requires maintaining a representation of the original intent across an arbitrarily long and genuinely rewarding chain of diversions.

In [[Stopper Protocol|my research on executive function scaffolding]], I call this the STOPPER problem: the regulatory mechanism that interrupts not failure, but flow that has outlived its purpose. The moment when "this is fascinating" has disconnected from "this is serving my goal."

A structural resonance search engine without executive function scaffolding is just a more sophisticated attention trap. It's not Netflix serving you diminishing returns — it's Netflix serving you *perpetually increasing returns* on a question you stopped asking an hour ago.
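As a minimal sketch, the interrupt condition looks something like this — where `relevance(intent, idea)` is a hypothetical scorer, and scoring "does this still serve the original intent?" is itself the unsolved part:

```python
# Toy sketch of the STOPPER check. Nothing here measures whether the
# chain is *working* -- every hop is assumed novel and rewarding.
# The only thing tested is drift from the intent the session started with.
# `relevance(intent, idea)` is a hypothetical scorer returning 0.0-1.0;
# the threshold is an equally hypothetical placeholder.

def stopper_check(original_intent, chain, relevance, threshold=0.3):
    """Return the index of the first hop that stopped serving the
    original intent, or None if the whole chain is still on task."""
    for i, idea in enumerate(chain):
        if relevance(original_intent, idea) < threshold:
            return i  # interrupt here: flow has outlived its purpose
    return None  # still on task; let the spiral continue
```

The design point is what's absent: no quality check, no engagement check — only intent, carried across every hop of the chain.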
The STOPPER isn't a nice-to-have feature for the rhyme engine. It's what distinguishes a tool from a vice. You can't build one without the other.

---

## Now Imagine It's Your Social Feed

A search engine is opt-in. You type a query, you get results, you close the tab. The damage is contained.

A social feed is a *push* system. It comes to you. And a rhyme-powered social feed would be the most addictive product ever built — not through junk, but through genuine insight.

Twitter's algorithm already stumbles into this by accident. It's why your timeline occasionally serves you a post from a domain you've never engaged with and you *can't stop reading the thread*. But it does it through engagement proxy signals — likes, retweets, time-on-post. Crude instruments.

Imagine it done on purpose. A feed optimized for conceptual rhyming wouldn't show you more politics because you engaged with politics. It would show you a ceramics video because the way that potter centers clay on the wheel *rhymes with* the argument about centralized vs. distributed governance you read twenty minutes ago. And you'd feel the resonance. And you'd click. And you'd be *right* to click, because the connection is real.

That's not a feed. That's an IV drip of structural insight with no off-valve.

The engagement metrics would be extraordinary — not because you're doom-scrolling through rage bait, but because every piece of content genuinely rewards you. Time on platform goes up. Satisfaction goes up.
Content quality is high. "Did you accomplish what you sat down to do?" goes to zero.

And here's the regulatory nightmare: **the platform can't tell the difference.** From the outside, healthy exploration and addictive spiral look identical. Both produce high engagement, high satisfaction, high-quality content consumption. The only signal that distinguishes them — "is this still serving the user's original intent?" — is invisible to the platform, because it lives inside the user's head.

Current feed addiction is a **content quality** problem. The content is junk; the hooks are behavioral. You can solve it by improving content, throttling rage bait, adding friction. We know how to do this. We mostly choose not to.

Rhyme feed addiction would be a **regulatory** problem. The content is *good*. Every connection is real. You can't solve it by improving content quality — the quality is already what makes it dangerous. Quality and danger scale together. They're the same curve.

The only solution is executive function scaffolding that tracks original intent across an arbitrarily long chain of genuinely rewarding diversions. Not "is this content good?" (it is). Not "is the user engaged?" (they are). But "is this engagement still connected to what the user sat down to do?"

That's a STOPPER problem. And it suggests something uncomfortable: the better we get at building recommendation systems that surface real structural resonance, the more urgently we need the regulatory architecture to keep them from consuming us.

The search engine and the scaffold aren't separate projects. They're the same project. Build one without the other and you've built a very sophisticated trap.

---

## The Punchline

The conversation that led to this essay started with me asking whether I needed two Obsidian plugins. It passed through Netflix recommendations, topological homology, Hofstadter, Gentner, murmuration→pointillism, and executive function scaffolding. Every hop was structural resonance.
Every link was genuine. It took an hour. It could have gone for days.

That's the product demo for the search engine that doesn't exist yet. And the proof that it needs a STOPPER built in.

---

*Scot Campbell is an independent AI researcher focused on cognitive architecture and executive function scaffolding. He is the creator of the [[Stopper Protocol]]. He writes at [Simpleminded Robot](https://simpleminded.bot).*