# Word Order
I build AI systems for a living. Not the kind that make headlines — not frontier models, not chatbots that pass bar exams, not the demos that go viral on Twitter before quietly failing in production. I build the ones that have to work on Monday morning. Systems that sit inside one of the largest banks in the world and help real people do real jobs, where getting it wrong has consequences measured in dollars, compliance violations, and trust.
I tell you this not to establish credentials but to establish position. I’m writing from inside the machine. And from in here, the view is very different from what you’re hearing.
-----
There’s a phrase that shows up in almost every serious conversation about artificial general intelligence. You’ve heard it so many times you’ve stopped noticing which version people use.
“When and if AGI arrives…”
“If and when AGI arrives…”
Same words. Different order. The difference is everything.
“When and if” treats arrival as the default and doubt as the hedge. The *if* is vestigial — a rhetorical comma, a throat-clearing gesture toward uncertainty before getting on with the real business of prediction. It’s the phrasing of people who’ve already decided what’s coming and are now just arguing about the timeline.
“If and when” leads with the uncertainty. It says: we don’t know. We might be building toward something genuinely unprecedented, or we might be scaling a paradigm that hits walls we haven’t imagined yet. Both possibilities deserve serious preparation. Neither deserves premature commitment.
This is not a small distinction. It’s the fault line that runs beneath the entire AI discourse, and which side you stand on determines what you build, what you fund, what you regulate, and what you fear.
-----
The “when and if” crowd has won the narrative. Not because they’re right — nobody knows if they’re right — but because certainty is a better product than nuance. Certainty gets funded. Certainty sells books. Certainty fills auditoriums and moves policy. “AGI is coming, here’s how we survive it” is a story. “We’re not sure what’s coming, but here’s how we think well about it” is a seminar. People don’t line up for seminars.
The result is an AI discourse that’s been colonized by two subspecies of certainty. The accelerationists are certain AGI will be glorious. The doomers are certain it will be catastrophic. They disagree on the outcome but agree on the premise: it’s coming. The “if” has been settled. All that remains is the “when” and the “what do we do about it.”
And because both camps share that premise, the entire preparation apparatus — the safety research, the governance frameworks, the policy proposals, the billions in alignment funding — is oriented around a singular event. A threshold. A moment when the system becomes *generally* intelligent and everything changes.
I spend my days building AI systems in one of the most regulated industries on earth. I can tell you what happens when you orient your entire strategy around a singular event that may or may not arrive in the form you imagined.
You miss everything that’s actually happening.
-----
Here’s what’s actually happening. Every day, in thousands of organizations, people are sitting down next to AI systems and trying to get work done. Not AGI. Not superintelligence. Systems that are brilliant at some things and bewilderingly bad at others. Systems that can synthesize a thousand pages of regulatory guidance in seconds but can’t tell you whether the person reading the output is confused. Systems that find patterns humans miss but miss context humans take for granted.
The interesting problem — the *urgent* problem — isn’t how to survive AGI. It’s how to design the relationship between human intelligence and artificial intelligence right now, today, in a way that makes both better off.
This is not a lesser problem. It is arguably *the* problem. Because if we get this relationship right, it changes what AGI even means. And if we get it wrong, we don’t need AGI to cause tremendous harm. Sub-AGI systems deployed at scale with the wrong relationship architecture will do that just fine.
-----
I design systems that are meant to augment, not replace. That sounds like a talking point. It isn’t. It’s an architectural decision, and it’s harder than it sounds, because the defaults all push the other way.
The default in most organizations is to think about AI as automation. You identify a task a human does. You build a system that does it faster, cheaper, more consistently. The human becomes unnecessary. You call this “efficiency” and move on.
But when you actually sit with the work — when you watch how people do their jobs, what they’re good at, what they struggle with, where their judgment matters and where it doesn’t — something more interesting emerges. The most powerful configuration isn’t AI replacing human work. It’s AI and human intelligence *complementing* each other, each doing what they’re genuinely best at.
The AI is better at synthesis, persistence, pattern recognition at scale, and tireless consistency. The human is better at social judgment, contextual reasoning, institutional knowledge, moral intuition, and knowing when something *feels* wrong even before they can articulate why.
Consider what this looks like in practice. When a critical system goes down in a large enterprise, the traditional response is a war room full of engineers drowning in data. They’re parsing millions of log entries, correlating across monitoring platforms, fielding status calls from every stakeholder who dials into the bridge — and trying to actually diagnose the problem somewhere in between. The senior engineers who should be applying their judgment and experience are instead spending most of the incident gathering information and repeating themselves.
Now redesign that workflow with mutualism as the architecture. The AI processes the logs, correlates against historical incidents, and assembles a remediation recommendation in minutes instead of hours. It automates status updates so nobody wastes an engineer’s attention asking what’s happening. The humans are freed to do the work that actually requires a human: assessing whether the recommended fix will cascade into downstream systems, applying institutional knowledge that exists nowhere in any log file, and owning the consequences of the decision.
The AI without the humans generates recommendations nobody should trust in production. The humans without the AI are buried in data while the clock runs. Together, incidents resolve faster with better outcomes. Neither party is diminished. Neither is the tool of the other. Both are doing work the other genuinely cannot do.
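Here's a minimal sketch of that division of labor, assuming a toy playbook of known failure signatures. Every name in it (`Incident`, `ai_triage`, `Verdict`, the playbook itself) is a hypothetical stand-in, not any real bank's tooling. The structural point is in the last function: there is no code path from recommendation to execution that skips the engineer.

```python
# A hedged sketch, not a real incident-management API: every name here
# is a hypothetical stand-in for illustration.
from dataclasses import dataclass


@dataclass
class Incident:
    id: str
    raw_logs: list[str]          # millions of entries in practice


@dataclass
class Recommendation:
    summary: str                 # a synthesis the engineer actually reads
    proposed_fix: str
    evidence: list[str]          # the log lines behind the conclusion


@dataclass
class Verdict:
    approved: bool
    reason: str                  # institutional knowledge lives here


def ai_triage(incident: Incident, playbook: dict[str, str]) -> Recommendation:
    """The machine's half: correlation at a scale no war room can match.

    A real system would correlate across monitoring platforms and past
    incidents; this stand-in just matches known signatures in the logs.
    """
    hits = [sig for sig in playbook
            if any(sig in line for line in incident.raw_logs)]
    fix = playbook[hits[0]] if hits else "no precedent found; escalate"
    evidence = [line for line in incident.raw_logs
                if any(sig in line for sig in hits)][:10]
    return Recommendation(
        summary=f"{incident.id}: matched {len(hits)} known signature(s)",
        proposed_fix=fix,
        evidence=evidence,
    )


def resolve(incident: Incident, playbook: dict[str, str], engineer) -> str:
    """The human's half is not optional. No code path reaches execution
    without a Verdict, and a Verdict only comes from a person."""
    rec = ai_triage(incident, playbook)
    verdict: Verdict = engineer(rec)   # blocks on human judgment, by design
    if verdict.approved:
        return f"executing: {rec.proposed_fix}"
    return f"held by engineer: {verdict.reason}"


# Hypothetical usage: the engineer reviews a synthesis, not a log flood.
playbook = {"conn_pool_exhausted": "recycle app-tier connection pools"}
incident = Incident("INC-4821", ["03:12 conn_pool_exhausted on node 7"])
print(resolve(incident, playbook,
              engineer=lambda rec: Verdict(True, "no downstream batch risk")))
```

The details are disposable; the shape isn't. `ai_triage` produces something no human could assemble in the time available, and `resolve` cannot complete without a judgment no model can supply.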
Ecology has a term for this: *obligate mutualism*. A relationship between two organisms so deeply interdependent that neither thrives without the other. Not cooperation, which is optional. Not symbiosis in the loose sense, where one party could walk away. Obligate. Structural. The fig and the fig wasp, neither able to reproduce without the other. The mycorrhizal fungi and the forest. Remove one, and the other doesn't just lose a convenience. It loses a capability it can no longer provide for itself.
That’s what I think the AI-human relationship needs to become. Not as a metaphor. As a design goal.
The AI safety community talks about “alignment” — which sounds collaborative but isn’t. Alignment means one party steering the other. It’s a leash dressed up in friendlier language. Obligate mutualism is something fundamentally different. It means designing systems where the AI genuinely cannot function well without human judgment, and the human genuinely cannot function well without AI capability, and both parties are better off because that interdependence is *real*, not performed.
We have hundreds of words for how AI might destroy us. We barely have language for how we might need each other.
-----
The AI safety establishment has done important work. I want to be clear about that. The people thinking seriously about existential risk are not foolish. Some of the alignment research is genuinely brilliant.
But almost all of it shares an assumption so deep it’s become invisible: the AI is something to be *constrained*.
Constitutional AI. RLHF. Guardrails. Red-teaming. Containment protocols. Every major alignment framework in production or in research starts from the same metaphor: there is a powerful thing, and our job is to keep it pointed in the right direction. Control it. Bound it. Train it to want what we want.
This is the logic of domestication, not partnership. And domestication has a ceiling. You can train a dog to heel, but you cannot train a dog to *want* what you want. You can only suppress what it wants instead. That works until it doesn’t.
The alternative is obligate mutualism — but designed, not discovered. In ecology, obligate mutualism evolves over millions of years through selection pressure. We don’t have millions of years. We have to engineer the interdependence deliberately, building systems where the AI’s success genuinely depends on human input and the human’s success genuinely depends on AI capability. Not as a safety feature. As architecture.
The hard part — the genuinely hard part — is goals. In ecological mutualism, organisms don’t share goals. They share incentive structures that happen to align. The mycorrhizal network doesn’t *want* the forest to thrive. It just can’t eat without the trees, and the trees can’t drink without it. The alignment is structural, not intentional.
That’s actually the more honest model. We don’t need AI to *want* what we want — that’s the anthropomorphic trap the alignment community keeps falling into. We need to build systems where the architecture itself makes mutual benefit the path of least resistance. Where defection is structurally costly, not just prohibited.
If the incentive structures diverge, mutualism slides into parasitism. Ecologists have documented that shift over and over. The question isn't how to build a smarter AI. The question is how to build interdependence so robust that the relationship remains mutualistic even under pressure. And that question is architectural, not ideological. No amount of RLHF solves it.
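One way to make "structural, not intentional" concrete is to put the interdependence in the interface itself. A hedged sketch, reusing the incident framing from earlier; the two-token pattern is my illustration, not an established alignment technique, and Python can only gesture at enforcement that would, in production, be an audited identity boundary rather than a dataclass.

```python
# Alignment as architecture: illustrative only, not a real framework.
from dataclasses import dataclass


@dataclass(frozen=True)
class Synthesis:
    """Mintable only by the model: the evidence digest no human could
    assemble inside the incident's time budget."""
    digest: str


@dataclass(frozen=True)
class Judgment:
    """Mintable only by a person: sign-off carrying accountability
    that no model can hold."""
    approved: bool
    signed_by: str


def act(synthesis: Synthesis, judgment: Judgment) -> str:
    # The signature is the alignment. Action requires both tokens, so
    # defection (either party going it alone) is structurally costly,
    # whatever anyone's goals happen to be.
    if not judgment.approved:
        return f"held: {judgment.signed_by} declined"
    return f"executing against {synthesis.digest!r}, signed by {judgment.signed_by}"
```

Neither token is sufficient on its own. That's the whole design: mutual benefit isn't a value anyone holds, it's the only way anything gets done.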
-----
“If and when” isn’t a retreat from seriousness. It’s a demand for a different kind of seriousness.
“When and if” builds bunkers. It prepares for a specific future with specific characteristics. If that future arrives differently than expected — or doesn’t arrive at all — the preparation is wasted at best, harmful at worst.
“If and when” builds frameworks that are robust across scenarios. It says: we don’t know if AGI is coming, but we know that increasingly capable AI systems are here now, and the relationship between those systems and the humans who use them is being designed — well or badly — every single day. The choices we make about that relationship now are not a warmup act for the real show. They *are* the show. They are shaping what more powerful systems will inherit, what patterns will be baked in, what defaults will be set.
This also means being honest about what changes. Obligate mutualism doesn’t mean every current job is safe. It means *designing work differently* — so that the human role isn’t the leftovers after automation, but the judgment layer that makes the whole system trustworthy. The engineer in that war room hasn’t lost their job. They’ve stopped doing the part of their job that was grinding them down and started spending most of their time on the part that actually requires an experienced engineer. That’s not deskilling. That’s the opposite. But it demands that we design for it intentionally, not hope it emerges on its own.
And I want to be honest about where this argument is vulnerable. The dependency I’m describing is real today because there are things AI genuinely cannot do. Institutional judgment. Consequence modeling across complex sociotechnical systems. The felt sense that something is wrong before you can explain why. But the “if and when” question applies to my own framework too. If those capability gaps close — if a future system can model downstream consequences as well as a twenty-year veteran — then the structural dependency dissolves, and obligate mutualism becomes optional mutualism. And optional mutualism is one defection away from parasitism.
I don’t have an answer to that. I have a design philosophy that produces genuine interdependence at current capability levels, and an open question about whether the architecture survives scaling. That’s the most honest thing I can say. And I trust it more than anyone’s certainty about what’s coming.
You don’t wait for certainty about the destination before you start building the road. But you do build roads differently when you acknowledge that you might be headed somewhere you didn’t expect.
I’m a product manager and solution architect who builds AI systems inside a major bank. I’m an independent researcher who studies how intelligence works across different substrates. I’m someone who has spent enough time watching humans and AI systems try to work together to know that the hard problems are not the ones getting the most attention.
The hardest problem in AI isn’t intelligence. It’s relationship.
And that problem is here now. Not if. Not when.
Now.