Every few decades the binding constraint in the economy moves. Land, then capital, then information, then attention. The next move is already underway and it is simpler than it looks. The binding constraint on how an organization behaves has always been human cognition — how many people can hold the state of the business in their heads, how many decisions they can make in a day, how much drift the whole thing can take before the org chart becomes the product. Most companies are not widgets or services. They are cognition stretched thin across too many people.
That constraint is being lifted. Not rhetorically, not in the next decade. Now. The economic implication is not that existing companies get a little faster. It is that a new category of company becomes possible at all.
What AI-only actually means
An AI-only company is one that is structurally unable to function without AI. If you removed the models, the agents, the orchestration, the robots in the field, the thing collapses on contact. You cannot do that test on most of what gets pitched as AI today. Remove the AI from an AI-first SaaS product and you still have a CRM. Remove it from an incumbent with an AI pilot and you still have the incumbent. The AI-only test is stricter. It asks whether the company exists on its own terms, or whether the AI is decoration on something built for people.
The distinction matters for the same reason EV-only mattered. A petrol car with a battery bolted on is still a petrol car with compromised packaging and a weight problem. A car designed from the skateboard up as electric is a different animal: different drivetrain, different software stack, different manufacturing footprint, different unit economics. The equivalent is now true in corporate form. A business designed from the start to run on AI has a different cost curve, a different decision cadence, and a different relationship with scale than any business retrofitting toward the same destination.
Why now, and why this is not a pitch
The question I get from friends who do this for a living is fair: why not wait? Model costs are still dropping. Tooling is still immature. The surface area of what an agent can do reliably is still narrow enough that every serious deployment is a custom job.
The reason not to wait is that the companies that define the next cycle will be the ones whose operating model was native to this environment when it was still hard. Waiting is for people investing in the technology. We are not investing in the technology. We are building operating companies that use it. That job does not get easier when the tools are finished. It gets harder, because by then the org chart, the culture, and the cost structure of the legacy approach will have calcified around every high-value niche, and the AI-only entrants will be the second cohort of incumbents instead of the first.
The argument I have already been making
I have been writing about this in public since May 2024. The pieces fit together.
Arm Up Against AI Job Destruction made the first practical case: the $200K associate, the radiology resident, the field technician — none of them are safe, and the labor displacement is not a ten-year story, it is a three-year story. Scarcity Is Dead extended it: when the marginal cost of cognition approaches zero, the classical scarcity frame that organizes economics, policy, and education stops tracking the thing it was invented to describe. Bandwidth Economics gave it a mechanism: institutions behave the way they do because they are lossy compressions running against a fixed cognitive budget. Raise the budget and the compression changes, which means the institutions change.
If any of that is right — and the operational evidence in the field now says it is — then a company whose architecture assumes the old cognitive budget is carrying around a liability. The point of an AI-only company is to not carry that liability.
The dignity question
There is a version of this thesis that is only about cost, and that version is boring. The Dignity Paradox sits across from the cost argument and refuses to let it off the hook. What do people do in and around a company where most of the work was designed to be done by machines? What claim on dignity do they keep when the claim through labor has been hollowed out?
My honest answer is that this is the most interesting question in the thesis and the one I am least finished thinking about. The draft answer, for now, is that an AI-only company does not mean a company without humans. It means a company where humans are present because they choose to be — as founders, operators, judges, customers, owners — not because the business would fail without their input. The shift is from labor to agency, and the test of a company built well is whether the humans around it still have that agency.
Funder and operator, both
Unsal Partners is not a fund in the conventional sense and not a holding company either. We capitalize AI-only companies we can also help run. The operator half is as important as the funder half. A company that is not actively operated by people who believe the thesis becomes an AI-first company within two quarters — the gravitational pull of human labor as the default unit of work is that strong.
The first live example is a global energy services group operating across fifty-plus countries, on the path to AI-only. Field robotics is the closest layer to deployment; the AI operating brain follows. Heavy industry is a harder test than SaaS, which is part of the point. If the operating model works here, it works in the easier cases by construction.
A small set of additional AI-only opportunities is in formation alongside it.
The short version of this page is three sentences.
The next category of great company is the one whose org chart starts with machines. We are building those. Come back in a few years and we will have a longer portfolio page.