What an MVP actually is
An MVP is deployed software — real infrastructure, real users, real data — with the minimum feature set that tests the core hypothesis. It is not:
- A prototype (not deployed, not real data)
- A demo (optimised to impress investors, not to test user behaviour)
- A beta (often just a v1 with a disclaimer attached)
- A "quick build" (the MVP should be built with the same quality standard as the v1 — just with a smaller scope)
The distinction matters because the point of an MVP is to generate real learning from real user behaviour. A prototype generates feedback on the concept. An MVP generates data on whether people will pay for it, return to it, and tell others about it.
Define the hypothesis first
Every MVP should start with a written hypothesis in this form: "We believe that [user type] will [behaviour] because [reason]. We will know this is true when [measurable outcome]."
Example: "We believe that restaurant owners will pay $99/month for a mobile ordering app because delivery platform fees are eating their margin. We will know this is true when ten restaurants complete a full month of orders through the app with no churn."
The hypothesis determines the scope. Every feature either contributes to testing the hypothesis or it doesn't. If it doesn't, it's out of the MVP.
How to scope the MVP
Start with the complete feature list you want. Then apply three filters:
- Required to test the hypothesis? If not, cut it.
- Required to deploy? Auth, payment if monetised, basic error handling. Include it.
- Required for legal/compliance? HIPAA, GDPR, PCI — include what's required, exclude the gold-plating.
Common cuts: admin dashboard (use manual processes at MVP scale), growth mechanics (referral, email sequences — you need users before you can grow them), advanced reporting (use analytics tools instead), secondary flows (edge cases, bulk actions, power user features).
Build vs no-code: which is right?
No-code tools (Bubble, Webflow, Softr, Glide) can produce MVP-quality software for some product types at lower cost and on a faster timeline. Choose no-code when:
- Your product is primarily a workflow or form-based tool
- Your competitive advantage is the business model, not the software
- You need to validate demand before committing to a technical build
Choose a technical build when:
- Your product logic is novel and wouldn't be replicable in a no-code tool
- Your product handles sensitive data (healthcare, fintech, legal)
- Your v1 will require performance or scalability that no-code can't deliver
- The no-code tool's limitations will require a full rebuild at v1 stage
The hidden cost of no-code: a no-code MVP that proves the thesis often has to be rebuilt from scratch for v1. A custom MVP, properly scoped, is the foundation of the v1.
Nine-step MVP build process
- Write the hypothesis in the form above. Share it with everyone involved.
- Define done — write acceptance criteria for the MVP before scoping a single feature. "Users can complete the core flow from signup to first action without help" is a criterion. "The app looks good" is not.
- Cut the scope — apply the three filters above. Be ruthless. You can add features after you have users; you can't recover time spent building features no one used.
- Choose the stack — pick the technology that gets you to production fastest, with the best-maintained ecosystem, and the widest pool of engineers who can maintain it after you. Not the most interesting option.
- Run discovery — map the data model, define every integration, and agree on the fixed scope and acceptance criteria before build begins. Discovery is not optional; it's the cost of eliminating mid-build scope changes.
- Design before coding — high-fidelity screens approved before the first line of production code. This catches UX problems before they're structural problems in the codebase.
- Build the core loop first — build the core user flow end-to-end (sign up, core action, result) before building any secondary feature. The core loop is the only thing that has to work at launch.
- Deploy to real users — not a waiting list, not a closed beta with your friends. Real users who found the product through a channel that will still exist at v1.
- Measure and decide — does user behaviour validate the hypothesis? Persevere (build the v1), pivot (change the hypothesis and rebuild the MVP), or kill (the hypothesis was wrong).
How to validate an MVP
Validation is not a survey. It's behaviour. Metrics that validate a hypothesis:
- Retention: Do users come back? Day 1, Day 7, Day 30 retention curves. Nothing says "this is valuable" like a user returning without a push notification.
- Completion: What percentage of users complete the core flow? If 80% drop before the action that creates value, the UX or the proposition has a problem.
- Payment: Will they pay? Payment is the strongest validation signal. Everything before payment is intent, not commitment.
- Referral: Do they tell others? Not a formal referral programme — organic sharing. If users are not telling others without being asked, the product doesn't have product-market fit yet.
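The retention and completion metrics above are simple to compute from an event log. Here is a minimal sketch in Python — the event names, log shape, and the "active on exactly day N" definition of retention are illustrative assumptions, not a prescribed schema (some teams use "active on or after day N" instead):

```python
from datetime import date

# Hypothetical event log: (user_id, event_name, date).
# The names and shape are assumptions for illustration, not a real schema.
events = [
    ("u1", "signup", date(2024, 1, 1)),
    ("u1", "core_action", date(2024, 1, 1)),
    ("u1", "core_action", date(2024, 1, 8)),
    ("u2", "signup", date(2024, 1, 1)),
    ("u2", "core_action", date(2024, 1, 2)),
    ("u3", "signup", date(2024, 1, 1)),
]

# Map each user to their signup date and to the set of days
# (relative to signup) on which they performed the core action.
signup_date = {u: d for u, e, d in events if e == "signup"}
active_days = {}
for u, e, d in events:
    if e != "signup":
        active_days.setdefault(u, set()).add((d - signup_date[u]).days)

def day_n_retention(n):
    """Share of the signup cohort active exactly n days after signup."""
    returned = sum(1 for days in active_days.values() if n in days)
    return returned / len(signup_date)

def completion_rate():
    """Share of the signup cohort that ever completed the core action."""
    completed = sum(1 for u in signup_date if u in active_days)
    return completed / len(signup_date)

print(day_n_retention(7))  # u1 returned on day 7 -> 1/3
print(completion_rate())   # u1 and u2 completed  -> 2/3
```

At MVP scale this can run over a CSV export from whatever analytics tool you already use; a data warehouse query is the same logic in SQL.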
Give the MVP at least four to six weeks with real users before drawing conclusions. One week of data from ten users is not statistically meaningful. Four weeks of data from 50 users starts to be.
