7 Counterintuitive Insights: How to Vibe-Code Full Stack AI Products
Lessons related to Product Thinking, Execution, and Career Growth
The 3rd cohort of the Advanced Tech & AI Program starts on 20th September. We're building production-ready AI products in 8 weeks, along with learning advanced theoretical AI topics. You can learn more and apply to the program here
You can also check what PMs like about the program here
So I recently conducted the second episode of the "How to X with AI" series. The series is focused on teaching PMs how to accomplish product tasks and outcomes with AI. The first episode was about vibe-coding basic apps and is available here.
What We Covered in The Session
The session was focused on vibe-coding full-stack and AI apps. We covered the frameworks and principles for building these apps, with a focus on interview settings. The example we picked to demonstrate was building a full-stack Q&A app and adding AI features to it.
While sharing the recording of the session with subscribers, I usually provide a summary. But summaries are boring; counterintuitive insights aren’t.
So I wrote the insights out to share them with you. Here we go:
Product Thinking
1. Writing The v1 of PRDs Manually
In my previous stints, I have always asked PMs to remember the key metrics and numbers for the products they manage, rather than relying on dashboards. From my experience, PMs who remember key metrics without relying on dashboards think more critically and connect data points faster. Apply the same principle to PRDs: draft the first basic version manually to exercise and show product intuition, then refine with AI.
2. AI Apps Need to Manage Stakes/Risks
In high-stakes domains like education or healthcare, unchecked AI output can mislead users. A safer approach is to route it through experts first; for example, in our EdTech Q&A app, instructors review AI answers before students see them.
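As a minimal sketch of what such a review gate can look like (the class names, fields, and status values here are illustrative assumptions, not our actual app):

```python
from dataclasses import dataclass

@dataclass
class Answer:
    question_id: int
    text: str
    status: str = "pending_review"  # AI drafts start hidden from students

def visible_to_students(answers):
    """Students only ever see instructor-approved answers."""
    return [a for a in answers if a.status == "approved"]

def instructor_review(answer, approve, edited_text=None):
    """Instructors can edit the AI draft before approving, or reject it."""
    if edited_text is not None:
        answer.text = edited_text
    answer.status = "approved" if approve else "rejected"
```

The key design choice is the default status: an AI answer is invisible until an expert explicitly promotes it, so a missed review fails safe rather than exposing an unchecked answer.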
Product Execution
3. Bug-Fixing Skills and PMs
Prototypes break when you are building them during interviews, and your response matters more than the resolution. Here is the recommended approach:
Articulate the troubleshooting framework: "This issue has appeared before. The approach would be to systematically test hypotheses, starting with the most probable causes. Then we will do …"
When it comes to PMs, interviewers evaluate problem-solving skills and understanding of the tech and the issue, not debugging ability.
4. Functional Prototypes Don't Mean Production-Ready Code
An audience question towards the end of the session led to this: ‘How much can we scale AI prototypes?’
When it comes to AI prototypes, scaling is the easy bit. The core problems are around tech debt, security, compliance, and so on. AI-generated code requires a full audit and security review. Even expert-written code undergoes security audits at large companies, through bug bounty programs.
5. More Patience with LLMs Prevents Failures
Another audience question we discussed around this: ‘How do I make sure that the tool is modifying only what I told it to and nothing more?’
The answer is patience. When modifying requirements, we need to follow a two-step process:
Step 1: Ask it to describe the changes it will implement. No coding in this step, just chat.
Step 2: Review the plan carefully, and only then authorise the changes.
This prevents unintended feature breakage during modifications. If you go for speed by doing it all in a single step, it often results in more time spent fixing bugs.
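The two steps above can be sketched as a small wrapper around whatever AI tool you use. Here `ask_llm` and `approve` are hypothetical stand-ins for the tool's chat call and the human review; this is a sketch of the workflow, not any specific tool's API:

```python
def two_step_change(ask_llm, approve, requirement):
    """Plan first, code later.

    ask_llm: stand-in for your AI tool's chat call (assumption).
    approve: stand-in for the human review of the plan (assumption).
    """
    # Step 1: ask for a description of the changes only -- no code yet.
    plan = ask_llm(
        f"Describe the changes you would make for: {requirement}. "
        "Do not write or modify any code in this step."
    )

    # Step 2: a human reviews the plan before anything is touched.
    if not approve(plan):
        return None  # rejected -- nothing was modified

    # Only now authorise the implementation, scoped to the approved plan.
    return ask_llm(f"Implement exactly this approved plan, nothing more:\n{plan}")
```

The point of the structure is that no code-changing prompt is ever sent until the plan has been explicitly approved.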
6. Sequential Approach > One-Shot
It is tempting to put the entire MVP requirements into these tools and ask them to code everything. There are two problems with that:
Attempting to build the whole thing at once increases the probability of bugs and failures.
When a failure happens, it becomes hard to figure out the cause.
I recommend a sequential approach:
Start with the core functionality.
Add the AI features.
Build the adjacent layers next, such as the authentication layer.
Testing after each layer reduces bugs while minimising overall development time.
Career Growth
7. Have an Annual Learning Investment
“How many tools do we pay for to learn? It’s quite expensive.”
Set aside a $300-$500 annual budget for the next two years. Here is why:
Tool subscriptions average $20/month. You can always cancel subscriptions you don’t like.
The flux in job roles and responsibilities is really high right now. Consider this a professional-development investment. Market leaders who navigate technological shifts capture disproportionate rewards.
After all, the best investment you can make today is in your skills, body, and mind!
That’s all for now. You can watch the complete session here
Thanks,
Deepak