# Why Netflix Can't Recommend What You Actually Want
## The difference between "more of the same" and "more of the similar" is the difference between a rut and a flow state.
I was lying in bed complaining to Claude about its memory system when I accidentally described a new cognitive architecture.
The complaint was mundane: Claude kept confidently asserting things about my projects that I'd moved on from weeks ago. Old tool configurations, abandoned frameworks, superseded ideas — all confidently cited as current fact. The AI equivalent of your friend who still asks about the ex you broke up with six months ago.
But when I said *"I have the same problem with recommendation systems,"* the conversation took a turn I didn't expect.
---
## "More of the Same" Is Not What I Want
Here's the thing about recommendation algorithms: they assume your history is your preference. Watch three documentaries about jazz? Here are seventeen more documentaries about jazz. Read two articles about graph databases? Your entire feed is now graph databases.
For most people, this works fine. Their consumption patterns are relatively stable. Someone who watches cooking shows will probably enjoy another cooking show. The algorithm converges on the target and everyone's happy.
My pattern breaks the assumption. And if you're reading a blog about AI cognitive architecture, yours probably does too.
What I actually want isn't "more of the same." It's **more of the similar** — but only sometimes, and only when *I* decide I want something different. That sounds close to what the algorithms already do. It is not. Those are fundamentally different optimization targets.
---
## Similarity vs. Homology
"More of the same" operates in **feature space**. It clusters by attributes — genre, director, tags, keywords. If you liked *Dark* on Netflix, here are other German sci-fi shows. The matching is shallow: surface features in, surface features out.
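To make "feature space" concrete, here's a minimal sketch of that kind of matching: items as tag vectors, similarity as cosine. The titles and tags are invented stand-ins, not anything Netflix actually computes.

```python
# Toy feature-space matcher: items are binary tag vectors,
# similarity is cosine. All titles and tags are made up.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Feature basis: [german, sci-fi, documentary, jazz]
catalog = {
    "Dark":              [1, 1, 0, 0],
    "Another German SF": [1, 1, 0, 0],
    "Jazz Doc":          [0, 0, 1, 1],
}

watched = catalog["Dark"]
ranked = sorted((t for t in catalog if t != "Dark"),
                key=lambda t: cosine(catalog[t], watched), reverse=True)
print(ranked[0])  # "Another German SF": the near-clone wins
```

Surface features in, surface features out: the top recommendation is always the closest clone of what you just consumed.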
What I'm actually looking for is something deeper. Not *attribute similarity* but **structural resonance**. Not "another documentary about AI" but something that shares the same *shape* — maybe the tension between emergent complexity and top-down control, maybe the pattern of a field being transformed by an outsider perspective — regardless of whether it's about AI, mycology, or free jazz.
In mathematics, there's a precise term for this: **homology**. Two shapes can look completely different but have the same homological properties — the same number of holes, the same connectivity structure. A coffee mug and a donut are the classic example: topologically equivalent, one hole each. They're not "similar" in any surface sense. They share deep structural invariants.
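For graphs, homology reduces to two counts: b0, the number of connected components, and b1, the number of independent loops (edges minus vertices plus components). A toy sketch with made-up graphs:

```python
# Betti numbers for a graph: b0 = connected components,
# b1 = E - V + b0 (independent loops). Illustrates
# "different shapes, same homology" on toy examples.

def betti(vertices, edges):
    # Union-find to count connected components (b0).
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    b0 = len({find(v) for v in vertices})
    b1 = len(edges) - len(vertices) + b0
    return b0, b1

triangle = betti("abc",  [("a", "b"), ("b", "c"), ("c", "a")])
square   = betti("wxyz", [("w", "x"), ("x", "y"), ("y", "z"), ("z", "w")])
path     = betti("pqr",  [("p", "q"), ("q", "r")])

print(triangle, square, path)  # (1, 1) (1, 1) (1, 0)
```

A triangle and a square look nothing alike, yet they have identical Betti numbers: one piece, one hole. A path shares neither. That's the kind of invariant I mean by "structural skeleton."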
That's what I want from a recommendation system. Not "this looks like what you watched." Instead: "this has the same *structural skeleton* as the things that have mattered to you, but you've never encountered it before."
I call this **topological homological matching**. And I think it describes something much bigger than recommendation systems.
---
## Netflix Almost Gets It (By Accident)
Credit where it's due: Netflix is better at this than most. Not because they cracked homological matching, but because they brute-forced an approximation.
Netflix famously uses 2,000+ micro-genres — not just "action" but "cerebral foreign action thrillers." Each row on your home screen is an independent similarity vector. Any single row is doing shallow "more of the same." But the *ensemble* across rows creates something more interesting.
Think of it as a matrix: each row is a micro-genre, and each entry scores a title's similarity to your past viewing along that one axis. Any individual dimension is flat and feature-based. But the intersection of many flat dimensions creates a higher-dimensional surface that starts to capture topological properties almost by accident.
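A hypothetical sketch of that ensemble idea, with invented row names and scores (nothing here is Netflix's actual model):

```python
# Each "row" scores candidates along one narrow micro-genre axis.
# No single row is deep, but combining rows yields a richer picture.
# Row names and scores are invented for illustration.

rows = {
    "cerebral-foreign-thrillers": {"Dark": 0.9, "Jazz Doc": 0.1, "Mycology Film": 0.2},
    "systems-and-emergence":      {"Dark": 0.6, "Jazz Doc": 0.5, "Mycology Film": 0.9},
    "outsider-transforms-field":  {"Dark": 0.2, "Jazz Doc": 0.8, "Mycology Film": 0.7},
}

def ensemble_score(title):
    # Intersection of many flat dimensions: here, a simple mean.
    return sum(r[title] for r in rows.values()) / len(rows)

titles = ["Dark", "Jazz Doc", "Mycology Film"]
best = max(titles, key=ensemble_score)
print(best)  # "Mycology Film": wins the ensemble despite scoring
             # low on the surface-similarity row
```

The winner isn't the title most similar to past viewing on any surface axis; it's the one that resonates across several structural ones.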
It's a poor man's homology through brute-force dimensionality. And it's remarkably similar to how biological cognition works — individual neurons doing simple threshold operations, but the ensemble producing pattern recognition that *feels* like structural understanding.
The part that still breaks for certain users is the **recency bias**. Netflix assumes temporal locality of preference: what you watched last week is the strongest signal. For someone with stable tastes, that's a reasonable prior. For someone whose obsessions run hot and burn out in days, last week's signal is noise by Tuesday.
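One way to see the problem is to write the recency prior out explicitly as exponential decay with a half-life. All events here are invented; the point is how the same history looks under different decay assumptions:

```python
# Recency weighting: signal strength decays with age. A long
# half-life suits stable tastes; for fast-burning obsessions the
# same weights keep dead interests alive. History is invented.

def weight(days_ago, half_life_days):
    return 0.5 ** (days_ago / half_life_days)

history = [("jazz doc", 40), ("graph databases", 12), ("mycology", 1)]

def topic_scores(half_life_days):
    return {topic: round(weight(days, half_life_days), 3)
            for topic, days in history}

# 30-day half-life: the 40-day-old jazz signal keeps ~40% strength.
print(topic_scores(30))
# 3-day half-life: the stale signal is effectively gone.
print(topic_scores(3))
```

Under the long half-life that suits a stable viewer, last month's jazz documentary is still a live signal. For someone whose obsessions burn out in days, the short half-life is the honest prior.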
---
## Flow Is Not What You Think It Is
Mihaly Csikszentmihalyi (roughly "cheek-sent-me-HIGH" — yes, real name) defined flow as the optimal experience state. His work describes the *phenomenology* of being in the zone. Executive function research describes the *control mechanisms* that keep you there.
What neither framework describes is the **navigation logic** — the structural principle by which a well-functioning cognitive system selects what to attend to next.
That's the gap topological homological matching fills.
Flow theory tells you *what it feels like* when everything is working. Executive function theory tells you *how to intervene* when it breaks. Topological homological matching describes *what the system is actually doing* when it's working — moving fluidly across domains by structural resonance, finding connections that aren't obvious at the surface level, exploring the adjacent possible without getting trapped.
This extends directly from the [[Theory/Cognitive Universality]] framework I've been developing — the claim that executive function requirements are universal across cognitive substrates. If that's true (and the evidence from AI systems independently converging on clinical interventions suggests it is), then topological homological matching should appear wherever you find healthy directed cognition. In brains, in AI agents, in recommendation systems that actually work.
---
## Why This Matters Beyond Netflix
The reason recommendation systems fail for certain users isn't that those users are weird. It's that the systems are optimizing the wrong function.
Current systems model your **preference function** — what do you like? Topological homological matching models your **curiosity function** — what would surprise you in a structurally meaningful way?
The difference:
| Preference function | Curiosity function |
|---|---|
| "Show me jazz fusion because I like jazz" | "Show me something I haven't encountered that shares deep structural properties with what matters to me" |
| Converges (narrows over time) | Explores (expands over time) |
| Assumes stationarity (your tastes are fixed) | Assumes dynamism (your interests evolve) |
| Matches on surface features | Matches on structural invariants |
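The table compresses into two toy scoring functions. `feature_sim` and `structural_match` stand in for real models; every number here is invented:

```python
# Toy contrast between the two objectives. The tuple values
# (feature similarity to history, structural match) are invented.

candidates = {
    "More Jazz Fusion":     (0.95, 0.60),
    "Mycology Documentary": (0.10, 0.90),
    "Random Reality Show":  (0.05, 0.05),
}

def preference(item):
    feature_sim, _ = candidates[item]
    return feature_sim                    # converges: narrows over time

def curiosity(item):
    feature_sim, structural_match = candidates[item]
    return structural_match * (1 - feature_sim)  # structural match AND novelty

print(max(candidates, key=preference))   # "More Jazz Fusion"
print(max(candidates, key=curiosity))    # "Mycology Documentary"
```

Note what the curiosity function rejects: the reality show is novel but structurally empty, and the jazz fusion is structurally resonant but not novel. Only the pairing of both scores high.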
And this isn't just about entertainment. Every system that models user intent — search engines, AI memory, social media feeds, learning platforms — faces the same choice. Model what someone has consumed, or model *how they think*?
The same failure mode shows up in AI memory systems. Claude fixating on my three-month-old tool configuration is the same structural problem as Netflix serving me jazz documentaries because I watched one in January. Both systems have collapsed from homological matching (what's structurally relevant to where I am now) to surface matching (what's keyword-similar to what I did before).
Runaway pattern matching. The degraded state. And right now, there's no executive control for your Netflix feed.
---
## What Would Actually Work
A time-aware homological recommender wouldn't just ask "what are the structural invariants of content you like?" It would ask "what's the *cadence* of your interest transitions, and what structural features predict a jump versus continued exploration?"
That's modeling curiosity as a dynamical system rather than a static preference profile. It's hard. Nobody's doing it well. But the fact that Netflix's brute-force micro-genre approach *partially* captures it by accident suggests the target is real — it's just not the target anyone is explicitly optimizing for.
Maybe they should be.
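A first crude sketch of what "modeling curiosity as a dynamical system" could mean: estimate typical dwell times from a session log and use them as a hazard for predicting a jump. Everything here, data and threshold alike, is an assumption, not a proposal for the real model:

```python
# Sketch: model the *cadence* of interest transitions. From a
# session log, estimate how long obsessions typically last, then
# predict jump vs. continue for the current streak. Data invented.

sessions = ["jazz"] * 3 + ["graphdb"] * 5 + ["mycology"] * 4 + ["topology"] * 4

def run_lengths(log):
    runs, count = [], 1
    for prev, cur in zip(log, log[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

runs = run_lengths(sessions)
mean_run = sum(runs) / len(runs)

def should_jump(current_streak):
    # Crude hazard: past the typical dwell time, favor a structural
    # jump over continued same-topic exploitation.
    return current_streak >= mean_run

print(runs, mean_run, should_jump(5))
```

A real version would condition the jump probability on structural features of the current topic, not just streak length. But even this caricature does something the static preference profile can't: it expects you to move.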
---
## To Sum Up
I started the morning complaining about stale AI memories and ended up describing a three-layer cognitive architecture that connects recommendation systems, flow states, executive function, and the structural mathematics of topology.
The hierarchy: topological homological matching is the healthy default — the flow state where you navigate by structural resonance. When that degrades, you get runaway pattern matching — the Netflix loop, the AI fixation, the hyperfocus spiral. There needs to be an exception handler that interrupts the degraded state and creates conditions to re-enter flow.
Every recommendation system, every AI memory, every attention-management tool is operating somewhere in that hierarchy. Most of them are stuck optimizing the degraded state — getting better and better at surface matching when what you actually want is structural navigation.
The question isn't "what's similar to what I've already seen?" The question is "what shares the deep structure of what matters to me, in a domain I haven't explored yet?"
Nobody's building that. Yet.
---
*Scot Campbell is an independent AI researcher focused on cognitive architecture and executive function scaffolding. He is the creator of the [STOPPER protocol](https://zenodo.org/records/17652383). He writes at [Simpleminded Robot](https://simpleminded.bot).*