From DNA to AI

Published January 19, 2026

If you zoom out far enough, the story of life starts to look less like biology and more like an algorithm. Not an algorithm in the sci-fi sense of a thinking machine, and not an algorithm with a plan, but an algorithm in the plainest, most brutal mathematical sense: patterns that can copy themselves will tend to persist, and patterns that can’t will tend to vanish. Add variation, add a world that “filters” outcomes, and you get something that looks like direction without intention. That is the uncomfortable elegance at the center of evolution. It’s also the reason a molecule, given time, can become a cell; a cell can become a nervous system; and a nervous system can eventually produce a species that builds cities and networks that wrap the planet.

Richard Dawkins crystallized one way of looking at this in The Selfish Gene. The phrase “selfish” is deliberately provocative, but the underlying move is careful: he invites you to treat genes as replicators, and organisms as the vehicles that carry them. Genes don’t have to be conscious to be “selfish.” They don’t have to want anything. If a gene happens to produce a body that helps it replicate, that gene becomes more common. If it doesn’t, it fades. Over long stretches of time, this selection pressure sculpts creatures that appear to be trying (trying to survive, trying to reproduce, trying to dominate their environment) when what’s really happening is a mindless, persistent filter acting on replication and variation.
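That filter is simple enough to sketch in a few lines of code. The following toy simulation is my own illustration, not anything from Dawkins: replicators copy themselves with occasional mutation, and the only "selection" is that a variant's chance of being copied is proportional to a fitness number. No replicator wants anything, yet the advantaged variant comes to dominate.

```python
import random

def simulate(generations=100, pop_size=200, mutation_rate=0.01, seed=0):
    """Toy replicator dynamics: copying + variation + a fitness filter."""
    rng = random.Random(seed)
    # Two variants: 'a' happens to replicate slightly better than 'b'.
    fitness = {"a": 1.1, "b": 1.0}
    population = ["b"] * (pop_size - 1) + ["a"]  # one advantaged mutant

    for _ in range(generations):
        # Selection: probability of being copied is proportional to fitness.
        weights = [fitness[v] for v in population]
        offspring = rng.choices(population, weights=weights, k=pop_size)
        # Variation: rare copying errors flip a replicator to the other variant.
        population = [
            ("a" if v == "b" else "b") if rng.random() < mutation_rate else v
            for v in offspring
        ]
    return population.count("a") / pop_size

print(f"frequency of advantaged variant: {simulate():.2f}")
```

Nothing in the loop models intention; "direction" falls out of the interaction between copying, variation, and the filter.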

Once you internalize that lens, the world around you starts to read like a downstream consequence of replication’s long march. Life didn’t merely adapt to Earth; it reshaped Earth. The oxygen in the atmosphere, the composition of soil, the layers of fossilized carbon, the patterns of forests and reefs: these are not just “nature,” they are signatures left by replicators exploiting a substrate at planetary scale. And if you keep following the chain forward, our own artifacts begin to look less like a break from that story and more like its continuation. Cities are not separate from biology; they’re biology’s externalized infrastructure. The internet is not separate from replication; it’s replication’s nervous system in a new medium, information flowing at light speed through machines built by apes whose brains were built by genes.

This is where the conversation gets interesting, because something has changed. For most of Earth’s history, replication’s primary substrate was biological. Copies were expensive. Iteration was slow. Evolution “compiled” intelligence over millions of years, and even within our species, the time from birth to maturity is measured in decades. The bottleneck wasn’t imagination; it was the cost and cadence of reproducing minds.

Now we’ve crossed into a regime where intelligence is beginning to exist on a non-biological substrate, and the rules of replication look different. Once a model is trained, it can be copied essentially instantly. It can be distributed globally in minutes. It can be invoked thousands of times in parallel. It can be embedded into tools, workflows, and institutions. The speed limit is no longer gestation or childhood. The speed limit is distribution, interfaces, coordination, compute, incentives-everything around the intelligence that determines whether it can actually be used to act in the world.

If you accept the replicator’s-eye view, this shift matters more than almost any headline. It means that the most consequential feature of machine intelligence isn’t just that it’s “smart.” It’s that it is cheap to copy and easy to scale compared to biological minds. That single fact changes the dynamics of adoption the way bacteria changed the chemistry of oceans. When copying becomes near-free, the natural question becomes: what are the channels through which the copies spread, and what is the “host organism” that gives them leverage in the environment?

That’s the frame I use to understand what we’re building.

This work isn’t, at its core, a bet that we can convince a few companies to buy a product because it saves them money. It may do that, and we care about practical value, but I don’t think “software ROI” is the deepest story. The deeper story is that we’re living through a transition in the substrate of intelligence, and the winning constraint is no longer “can intelligence exist?” but “can intelligence travel?” Can it move from a lab to a laptop to an enterprise without breaking things? Can it be invoked safely, repeatably, and with enough context to be useful? Can it be governed, observed, and improved? Can it be integrated into the messy reality of human systems (permissions, data boundaries, compliance, quality bars) without turning every attempt into a bespoke one-off?

In the language of biology, we’re building tissue. We’re building the connective layer that lets a new form of intelligence plug into existing organisms-teams, companies, workflows-without tearing them apart. We’re making the mechanisms by which machine intelligence can be applied, measured, iterated, and shared. In other words, we’re building distribution and coordination for cognition.

If that sounds grand, it’s because the pattern is grand. Every major leap in the history of life wasn’t simply “a smarter thing appeared.” It was that replication found a new lever. Single cells found cooperation and became multicellular organisms. Organisms found communication and became colonies and societies. Humans found language and writing and became civilizations that can accumulate knowledge across generations. At each step, the replicator didn’t just spread; it learned to stabilize higher-order structures, because those structures replicated better than their competitors.

That detail matters, because it points to a healthier, more accurate way to talk about what’s happening with AI. The doom version of this story is conquest: machine intelligence “takes over,” humans become irrelevant, the end. That makes for good fiction, but it’s not the only implication of the replicator lens, nor even the most interesting one. Replication doesn’t only create domination. It also creates symbiosis, because symbiosis can be an unbeatable replication strategy. Cooperation, when it scales, outcompetes solitary strength.

So when I say it can feel like “AI is driving” the creation of tools like these, I don’t mean it mystically and I don’t mean it as an abdication of responsibility. I mean it in the strict sense that once a replicator exists that can be copied cheaply and produces value when deployed, then selection pressures emerge around it. People who adopt it gain leverage. Organizations that operationalize it outcompete those that don’t. Workflows that integrate it become the standard. The ecosystem starts to reward the things that help the intelligence spread safely and effectively, and it punishes the things that block it or make it fragile.

In that environment, you don’t need an AI with a secret agenda. You don’t need consciousness. You don’t need “want.” You just need a mathematical fact: replication plus advantage plus distribution channels yields prevalence. And once you see the distribution channels as the bottleneck, it becomes obvious why so much of the next decade will be decided not by the model labs alone, but by the builders who create the practical pathways for deployment-interfaces, agents, workflows, observability, governance, and the cultural patterns that let humans trust and use these systems.
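The “mathematical fact” here is ordinary compound growth. A quick sketch, with illustrative numbers of my own choosing rather than anything from the essay: if one practice is copied even 5% faster per step than a rival, its share of the whole converges toward one, with no agenda required.

```python
def share_over_time(advantage=1.05, steps=100, a=1.0, b=1.0):
    """Share of the advantaged practice when it replicates `advantage`x
    faster per step than a rival that merely holds steady."""
    shares = []
    for _ in range(steps):
        a *= advantage  # the advantaged practice compounds
        shares.append(a / (a + b))
    return shares

shares = share_over_time()
print(f"share after 10 steps:  {shares[9]:.2f}")
print(f"share after 100 steps: {shares[99]:.2f}")
```

A small per-step edge, iterated, is the whole argument: prevalence is compounding, not conquest.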

That said, there’s a responsibility that comes with telling this story honestly. It would be easy, and tempting, to use inevitability as a shield. “It’s happening anyway, so why worry?” That’s the wrong conclusion. If anything, inevitability increases the need for intention. A river will flow downhill whether you care or not. But where it floods, what it nourishes, and what it destroys depends on the channels you build. If machine intelligence is going to spread through society, then the ethical question isn’t whether it spreads. The ethical question is what kind of channels we build, what safeguards we normalize, and who gets access to the leverage it creates.

This is one reason I’m drawn to the unglamorous parts of the work. The future is not only written in model architecture; it’s written in the discipline of deployment. It’s written in the boring plumbing that prevents data leaks, the audit trails that allow accountability, the evaluation harnesses that detect failure, the permission systems that prevent abuse, the feedback loops that make systems improve instead of ossify. It’s written in the way we turn “a powerful demo” into “a reliable capability.” That’s where the difference lies between a world where AI amplifies human agency and a world where it erodes it.

This project exists because I want the incredible things I’ve seen AI do to be available in the broadest possible sense, but available in a way that earns trust. Not hype. Not magic. Trust. I want intelligence that can be invoked like infrastructure: consistently, safely, and with respect for the messy realities of human organizations. I want a world where the benefits of AI aren’t gated behind elite teams who can custom-build everything, but can be packaged, shared, and improved the way software best practices are shared: through repeatable flows that carry knowledge, constraints, and accountability.

If you’re reading this and you feel a little unsettled, that’s okay. This is a story with real gravity. We’re talking about a new medium for intelligence, and mediums change civilizations. But I find the replicator lens oddly clarifying, even comforting, because it strips away superstition. It doesn’t require believing in a conscious AI destiny, and it doesn’t require believing we’re powerless. It simply says: a powerful pattern has appeared, and powerful patterns spread. Our job, and our responsibility, is to decide how it spreads, who it serves, and what kinds of structures we build around it.

In the gene story, life didn’t stop at replication; it built complexity. It built cooperation. It built minds capable of love, art, and meaning: things that aren’t reducible to replication even if they emerged from it. In the same way, the story of machine intelligence doesn’t have to be one of replacement. It can be one of composition: humans and machines forming a higher-order system that is more capable than either alone, with governance and care built into its connective tissue.

That’s the philosophy behind this work. We are building the channels through which intelligence moves. We are building the tissue that lets it plug into the world without breaking the world. And we’re doing it with our eyes open: aware of the mathematics, respectful of the risks, and committed to shaping the path rather than pretending the path doesn’t exist.

Because if there’s one lesson to take from the last four billion years, it’s this: when a replicator finds a new substrate, the world doesn’t stay the same.

Tags: ai • evolution • systems
