time is the moat in software. and it’s compressing.

This is mostly raw thoughts, captured for my own benefit to read and reflect on later. Possibly much later.

moats

Moats are the things that keep you King-of-that-land, for whatever land it is you want to be king of. It’s the critical natural resources only present in your location. It’s the dataset nobody else has. It’s the massive amount of capital required to play the game at all. But all of these moats can essentially be reduced to time. Given enough time, you can learn how to increase your search sophistication and find scarce natural resources. You can synthesize the data. You can generate capital to break the barrier to entry.

Is this universally true? No. But it’s definitely true (and always has been) for certain software engineering contexts. If you can convert the moat to a function of time, and the current king is largely a fixed target, you can drain the moat. And if you can compress time, the moat drains faster.

time compression (software)

AI compresses time. But so do compilers. And libraries. And IDEs. And intellisense. But those don’t compress time like AI. Intellisense can’t get me from zero-to-one on using Ansible to deploy to my infra if I don’t know Ansible. I can learn Ansible via pure search and reading documentation, but at a 0.1x rate compared to learning and iterating with a good LLM. Learning without LLMs now incurs a huge time penalty - you could have been doing something more important by now. Using an LLM to learn and convert your deployments to Ansible is a compounding effect; you’re buying even more time in the future by not dealing with the infra deployment bugs and weirdness you would have hit otherwise. There are many other things LLMs can teach you about and write code for that save future time. It adds up. This is especially true (in my experience) for things with shitty documentation and/or software projects that changed significantly over time. LLMs help us fix the barrier-to-entry problem. Hopefully barrier to entry isn’t part of your moat.

That’s just the personal learning part. I’m watching AI climb engineering walls in minutes when it would have taken me a day (or maybe I would have just quit). I’ve let Cursor bang away at problems in agent mode like “get these three large libraries and the program to compile with a resulting ELF < 1M in size. You’re allowed to make minimal code changes but can’t sacrifice the following functionality”. I’m not entirely sure I would have even succeeded. I’m also not sure I would have ever risked the precious time to try at all. With half-decent LLM agents, not only do I not have to worry about that anymore, I don’t even have to sit at the keyboard while they try.

Do agents and LLMs always succeed? Of course not. But I no longer have to decide between wasting a day on a high-risk idea or grinding away at known-good pathways. I can explore multiple high-risk ideas, observe, iterate, and grind at 2-10x speed on known-good pathways, all in parallel. I’m taking risks and exploring ideas I’d never have explored before. Not only has time compressed from weeks to hours (best-case) or days (average-case), I’m spending time in higher-quality ways.

context engineering

This stuff doesn’t just automatically work. LLMs and agents are no more magic than airplanes and sailboats are. Airplanes and sailboats can be unwieldy. They drift and require intervention to stay on course. So do LLMs with the context of the problem. Context engineering is a thing - knowing the right way to frame the problem to give the LLM the best chance of success, providing the best tools, and knowing when to start over from a previous context (or doing multi-agent shenanigans in a useful way, like giving Mr. Meeseeks a Mr. Meeseeks button) is essential. This is kind of a new moat. It won’t last forever.
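
The “knowing when to start over” tactic can be sketched in a few lines. This is a toy illustration, not any real agent framework: `call_model`, `distill`, and `solve` are hypothetical stand-ins, and the stubbed model just simulates a run that only succeeds after a context reset.

```python
def call_model(context: list[str]) -> str:
    """Stub for an LLM call. A real implementation would send `context`
    to a model; here we pretend it only succeeds once the context has
    been restarted with a distilled summary instead of a failed transcript."""
    return "success" if any("summary" in m for m in context) else "failure"

def distill(context: list[str]) -> str:
    """Compress a failed transcript into one short lesson for the next run."""
    return f"summary of {len(context)} prior messages: avoid what failed"

def solve(task: str, max_attempts: int = 5, stall_limit: int = 2):
    """Try a task; after `stall_limit` consecutive failures, throw away the
    polluted context and restart fresh, seeded only with a distilled lesson."""
    context = [task]
    stalls = 0
    for attempt in range(1, max_attempts + 1):
        if call_model(context) == "success":
            return attempt, context
        context.append(f"attempt {attempt} failed")
        stalls += 1
        if stalls >= stall_limit:
            # Start over: fresh context plus a summary, rather than
            # dragging the whole failed transcript forward.
            context = [task, distill(context)]
            stalls = 0
    return None, context
```

The point of the sketch is the reset branch: a context full of dead ends often hurts more than it helps, and restarting with a compressed lesson is frequently cheaper than grinding on.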

what now?

I can’t wait to see what happens over the next few years. Certain moats could dry up overnight - we probably can’t even predict which ones. Certain companies/products will be at massive risk and will have to choose between making a better or cheaper product and perishing. Software companies that refuse or fail to augment their software engineering with AI will have to rely on the ultimate moats (full vendor lock-in, esoteric government certifications, raw politics/systemic corruption) to stay alive. And those won’t last.