For years, our way of taking over legacy systems was simple: start small.
A client would come to us with an existing application and a list of things they needed done. Sometimes that list was mostly bugs. Sometimes it was features. Sometimes it was a mix of vague product wishes and urgent production issues.
So we would begin with the safest tickets.
Fix a small bug. Change a minor behavior. Add a low-risk feature. Read the code around it. Ask questions. Ship. Repeat.
Starting with low-risk work let us deliver value to the client while we built confidence and learned the codebase.
As the weeks went by, we would move toward more complex parts of the system. By then, we had seen enough of the data model, the workflows, the weird edge cases, and the business rules to make better calls. After one or two months, we usually had enough confidence to work almost anywhere in the application.
That approach still works. But it has a limitation: it assumes the application can be explored gradually.
Sometimes it can’t.
The old approach depended on the app working
A big part of our traditional onboarding was using the application itself.
We would log in, click around, follow user journeys, compare what the client said with what the product actually did, and then connect those screens back to the code. That gave us a practical map of the system.
But recently we had a case where that was impossible.
A few months ago, Joseph came to us with an application he had spent roughly eight months building. The product featured AI agents and workflows, had a meaningful business idea behind it, and already had a lot of code written. The problem was that it barely worked.
Login was broken or unreliable. Core flows failed. The parts we needed to inspect were hidden behind errors. The usual “let’s use the app and learn from it” path was blocked almost immediately.
Fixing it meant investing time into a codebase that might later be discarded. Rebuilding it meant accepting the cost of starting over. Both options could be right. Both could be wasteful.
Our new AI-assisted onboarding process
This is where our onboarding process changed. Instead of relying only on tickets and manual exploration, we started using AI-assisted analysis to generate a first layer of documentation from the codebase.
We used AI to help us build structured documents around questions like:
- What are the main user journeys?
- Which roles exist in the platform?
- What are the core entities?
- Which parts of the application are connected to each other?
- Where does the business logic actually live?
- Which flows appear incomplete, duplicated, or fragile?
- What external services does the system depend on?
- Which architectural choices are going to make future work expensive?
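As an illustration only, here is a minimal sketch of how a question-driven analysis prompt could be assembled from a repository before handing it to a model. The `build_onboarding_prompt` helper, the question subset, and the file-tree-only context are hypothetical simplifications, not our actual tooling; a real pass would also attach selected source files within the model's context budget.

```python
from pathlib import Path

# A subset of the onboarding questions above (illustrative, not exhaustive).
ONBOARDING_QUESTIONS = [
    "What are the main user journeys?",
    "Which roles exist in the platform?",
    "What are the core entities?",
    "Where does the business logic actually live?",
    "What external services does the system depend on?",
]

def build_onboarding_prompt(repo_root: str, max_files: int = 50) -> str:
    """Combine a repository file tree with onboarding questions into one prompt.

    Only the file tree is included here; selecting and inlining the most
    relevant source files is left out to keep the sketch short.
    """
    root = Path(repo_root)
    files = sorted(
        p.relative_to(root).as_posix()
        for p in root.rglob("*")
        if p.is_file() and ".git" not in p.parts
    )[:max_files]
    tree = "\n".join(files)
    questions = "\n".join(f"- {q}" for q in ONBOARDING_QUESTIONS)
    return (
        "You are analyzing a legacy codebase for onboarding.\n\n"
        f"File tree:\n{tree}\n\n"
        "Answer the following questions as a structured document:\n"
        f"{questions}\n"
    )
```

The resulting prompt can be sent to whichever model you use, once per question area, and the answers collected into the structured documents described above.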
With the old methodology, we reduced risk by starting with small tasks and slowly building confidence. With the new methodology, we can build a much better map before we touch production behavior, and we can get there in much less time.
What we learned from Joseph’s app
In Joseph’s case, the AI-assisted documents gave us enough context to evaluate the product without first making the app usable.
If we had started by fixing bugs one by one, we would have spent paid hours inside a codebase that might not survive the assessment. Worse, every fix would have created momentum toward keeping it. Once you invest enough time patching a system, rewriting it starts to feel like admitting waste, even when rewriting is the better option.
The architecture had enough deep problems that fixing the existing codebase would likely take about as long as rebuilding the first version properly. Our estimate was that a rebuild would take somewhere between two weeks and a month. Trying to stabilize the existing application could land in the same range, with more uncertainty and less confidence at the end.
So our recommendation was to rebuild it.
The point is reaching confidence faster
The value we provide during onboarding has not changed: we want to deliver useful work early, while we build enough confidence to take care of the system responsibly.
What changed is how quickly we can get there.
Legacy systems are full of unknowns. Some parts are safe to touch. Some are risky. Some look harmless until you understand what depends on them. The sooner we can tell the difference, the sooner we can choose the right first tickets and start producing visible results.
AI gives us an initial map before the traditional onboarding loop begins. It helps us see which areas matter, which changes are likely to be low-risk, and which parts of the system deserve more care before anyone touches them.
The confidence we used to build in one or two months can now start to appear in one or two weeks.
That is the real change. We still like earning confidence through shipped work. But now we can reach that confidence much earlier, before we decide where the first tickets should be.