A new job title is circulating across the industry: AI Engineer.
It sounds exciting, and it feels like standing at the edge of something new. But if you look closely at what most “AI engineers” are actually building, you’ll find that the reality is far less glamorous: a pile of duct tape holding together a language model they don’t own.
Instead of designing cohesive systems, they are wiring up filesystem-backed memory, routing bash commands through sandboxes, and stacking abstraction on top of abstraction until the original foundation collapses. When that happens, they build yet another layer on top of the rubble.
That is not engineering. It’s harnessing. And the distinction matters.
The harness is not the system
A harness engineer optimizes the tool. A systems engineer optimizes for the outcome.
Right now, the agentic software ecosystem is dominated by harness engineering. If you pick up a popular agent framework and follow its getting-started guide, you will see the pattern immediately. You are told to use the filesystem for storage. You are encouraged to give your agent broad bash access and trust it to behave. You are handed hundreds or thousands of lines of framework-specific boilerplate that quietly lock in architectural decisions you never consciously made.
These frameworks are not malicious. They are trying to make it easier to get started. But getting something running is not the same as having software that works—and the gap between the two is where production systems tend to fail.
When you build with a harness-first mindset, you start hitting walls almost immediately. Filesystems do not handle concurrent users well, so you end up layering a database abstraction on top of them. Bash access introduces security risks, so you add per-request sandboxes. Each limitation of the tool becomes another patch you have to ship. The harness grows more complex, while the underlying system remains fundamentally flawed.
This is not just a framework problem. It is a thinking problem.
Software engineering is systems engineering
Software engineering has always been systems engineering. In the 1940s, Bell Labs faced the challenge of building a national telephone network. Millions of components—relays, cables, switches, and operators—had to operate together reliably at scale. The engineers discovered something that now seems obvious: optimizing individual parts does not produce an optimized system. Call routing, reliability, and capacity emerged from how the components interacted, not from the components themselves.
So they invented a discipline focused on exactly that—on optimizing the whole system rather than individual parts—and they called it systems engineering.
Eighty years later, we’re making the same mistake again. We’re optimizing the model, the prompt, or the tool, while neglecting the system as a whole.
Agentic software is not a fundamentally new category. It’s regular software with agents handling portions of the business logic. That means it still depends on the same essential layers every production system has always required: agent logic, data, security, interfaces, and infrastructure. None of these is optional. None of them goes away because you’re using a language model.
When you design each layer in isolation, you introduce constraints that ripple across the entire system. When you design with the full system in mind, each layer reinforces the others.
The choices seem obvious once you zoom out
Systems thinking isn’t abstract. It has concrete, practical implications for the choices you make every day.
Take storage. Filesystem-backed memory is quick to set up, but it cannot safely isolate users. When one user’s context leaks into another’s, it is not just a bug; it is a data breach. A proper database provides isolation, structured queries, and performance that has been refined over decades. The patterns already exist. You should use them.
Now consider security. Read-only access is not something you enforce through a prompt. It is a tool configuration. It is a PostgreSQL connection parameter. The database should reject the write regardless of what the model generates. If your security depends on the model behaving correctly, then you do not have real security.
Now think about interfaces. In the old world, you had one API and one client. Today, an agent might be accessed through a REST endpoint, a Slack bot, a web interface, and an MCP server all at once. Each of these surfaces introduces different identities and contexts. A Slack user ID is not the same as your product’s user ID. An MCP client acting as another agent is not a human. Your authentication and authorization systems must remain consistent across all of them, because the agent itself has no awareness of the origin of a request.
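One way to keep authorization consistent is to normalize every surface’s identity into a single internal principal before any request reaches the agent. The sketch below is a toy, assuming hypothetical lookup tables standing in for your real identity store; every name in it is illustrative.

```python
# Sketch: map surface-specific identities onto one canonical principal.
# All IDs, tables, and surfaces here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    user_id: str  # your product's canonical user ID
    kind: str     # "human" or "agent"

# Stand-ins for a real identity store.
SLACK_TO_USER = {"U02ABCD": "user-17"}
API_KEY_TO_USER = {"key-prod-1": "user-17"}
MCP_CLIENT_IDS = {"planner-agent"}

def resolve(surface: str, credential: str) -> Principal:
    """Resolve a surface-specific credential to the canonical principal."""
    if surface == "slack":
        return Principal(SLACK_TO_USER[credential], "human")
    if surface == "rest":
        return Principal(API_KEY_TO_USER[credential], "human")
    if surface == "mcp" and credential in MCP_CLIENT_IDS:
        # An MCP client acting as another agent is not a human user.
        return Principal(credential, "agent")
    raise PermissionError(f"unknown identity on surface {surface!r}")
```

The same person arriving via Slack or via an API key resolves to the same `user_id`, while an agent arriving over MCP is marked as non-human, so downstream authorization can treat it differently.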
Stop debating the wrong things
The harness engineering mindset produces a very particular kind of argument. Debates like MCP vs. CLI or REST vs. gRPC feel technical, but they’re actually theological: arguments about which harness is holier, not about what the system needs.
When you adopt a systems perspective, those debates lose their weight. You stop asking which tool is superior in the abstract and start asking what your system actually needs. The right tool choices follow from that.
The agentic ecosystem has given individual engineers something genuinely powerful: the ability to ship systems that would have taken teams. But that power only compounds if you know what you’re building.
You are not building a harness. You are building a system.
Act like it.