Development guide · 14 min read

How to build an MVP in 2026

An MVP is not a prototype and not a demo. It's deployed software with the core loop intact that lets you test a specific hypothesis with real users. Here's how to scope, build, and validate one.

What an MVP actually is

An MVP is deployed software — real infrastructure, real users, real data — with the minimum feature set that tests the core hypothesis. It is not:

  • A prototype (not deployed, not real data)
  • A demo (optimised to impress investors, not to test user behaviour)
  • A beta (often just a v1 with a disclaimer attached)
  • A "quick build" (the MVP should be built with the same quality standard as the v1 — just with a smaller scope)

The distinction matters because the point of an MVP is to generate real learning from real user behaviour. A prototype generates feedback on the concept. An MVP generates data on whether people will pay for it, return to it, and tell others about it.

Define the hypothesis first

Every MVP should start with a written hypothesis in this form: "We believe that [user type] will [behaviour] because [reason]. We will know this is true when [measurable outcome]."

Example: "We believe that restaurant owners will pay $99/month for a mobile ordering app because delivery platform fees are eating their margin. We will know this is true when ten restaurants complete a full month of orders through the app with no churn."

The hypothesis determines the scope. Every feature either contributes to testing the hypothesis or it doesn't. If it doesn't, it's out of the MVP.
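One way to keep the hypothesis in front of everyone is to store it as a structured record rather than a sentence in a doc. A minimal sketch in Python — the field names are illustrative, not prescribed by any tool:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hypothesis:
    """An MVP hypothesis broken into its four parts."""
    user_type: str           # who
    behaviour: str           # what they will do
    reason: str              # why we believe it
    measurable_outcome: str  # how we will know it's true

    def statement(self) -> str:
        return (f"We believe that {self.user_type} will {self.behaviour} "
                f"because {self.reason}. We will know this is true when "
                f"{self.measurable_outcome}.")

h = Hypothesis(
    user_type="restaurant owners",
    behaviour="pay $99/month for a mobile ordering app",
    reason="delivery platform fees are eating their margin",
    measurable_outcome="ten restaurants complete a full month of orders "
                       "through the app with no churn",
)
print(h.statement())
```

Forcing the four parts into separate fields makes it obvious when one is missing — a hypothesis with an empty `measurable_outcome` is not testable.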

How to scope the MVP

Start with the complete feature list you want. Then apply three filters:

  1. Required to test the hypothesis? If not, cut it.
  2. Required to deploy? Auth, payment if monetised, basic error handling. Include it.
  3. Required for legal/compliance? HIPAA, GDPR, PCI — include what's required, exclude the gold-plating.

Common cuts: admin dashboard (use manual processes at MVP scale), growth mechanics (referral, email sequences — you need users before you can grow them), advanced reporting (use analytics tools instead), secondary flows (edge cases, bulk actions, power user features).
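The three filters amount to a simple predicate over the candidate feature list: a feature stays only if at least one filter keeps it. A sketch, with the flags on each feature chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    tests_hypothesis: bool = False    # filter 1: required to test the hypothesis
    required_to_deploy: bool = False  # filter 2: auth, payments, error handling
    required_by_law: bool = False     # filter 3: HIPAA, GDPR, PCI

def in_mvp(f: Feature) -> bool:
    # A feature survives scoping if any of the three filters keeps it.
    return f.tests_hypothesis or f.required_to_deploy or f.required_by_law

backlog = [
    Feature("mobile ordering flow", tests_hypothesis=True),
    Feature("signup and auth", required_to_deploy=True),
    Feature("admin dashboard"),     # cut: manual processes work at MVP scale
    Feature("referral programme"),  # cut: you need users before you can grow them
]
mvp = [f.name for f in backlog if in_mvp(f)]
print(mvp)  # the admin dashboard and referral programme are cut
```

The default of `False` on every flag is deliberate: a feature has to argue its way *into* the MVP, not out of it.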

Build vs no-code: which is right?

No-code tools (Bubble, Webflow, Softr, Glide) can produce MVP-quality software for certain kinds of products, at lower cost and on a faster timeline. Choose no-code when:

  • Your product is primarily a workflow or form-based tool
  • Your competitive advantage is the business model, not the software
  • You need to validate demand before committing to a technical build

Choose a technical build when:

  • Your product logic is novel and wouldn't be replicable in a no-code tool
  • Your product handles sensitive data (healthcare, fintech, legal)
  • Your v1 will require performance or scalability that no-code can't deliver
  • The no-code tool's limitations will require a full rebuild at v1 stage

The hidden cost of no-code: a no-code MVP that proves the hypothesis often has to be rebuilt from scratch for v1. A custom MVP, properly scoped, is the foundation of the v1.

Nine-step MVP build process

  1. Write the hypothesis in the form above. Share it with everyone involved.
  2. Define done — write acceptance criteria for the MVP before scoping a single feature. "Users can complete the core flow from signup to first action without help" is a criterion. "The app looks good" is not.
  3. Cut the scope — apply the three filters above. Be ruthless. You can add features after you have users; you can't recover time spent building features no one used.
  4. Choose the stack — pick the technology that gets you to production fastest, with the best-maintained ecosystem, and the widest pool of engineers who can maintain it after you. Not the most interesting option.
  5. Run discovery — map the data model, define every integration, and agree on the fixed scope and acceptance criteria before build begins. Discovery is not optional; it's the cost of eliminating mid-build scope changes.
  6. Design before coding — high-fidelity screens approved before the first line of production code. This catches UX problems before they're structural problems in the codebase.
  7. Build the core loop first — build the core user flow end-to-end (sign up, core action, result) before building any secondary feature. The core loop is the only thing that has to work at launch.
  8. Deploy to real users — not a waiting list, not a closed beta with your friends. Real users who found the product through a channel that will still exist at v1.
  9. Measure and decide — does user behaviour validate the hypothesis? Persevere (build the v1), pivot (change the hypothesis and rebuild the MVP), or kill (the hypothesis was wrong).

How to validate an MVP

Validation is not a survey. It's behaviour. Metrics that validate a hypothesis:

  • Retention: Do users come back? Day 1, Day 7, Day 30 retention curves. Nothing says "this is valuable" like a user returning without a push notification.
  • Completion: What percentage of users complete the core flow? If 80% drop before the action that creates value, the UX or the proposition has a problem.
  • Payment: Will they pay? Payment is the strongest validation signal. Everything before payment is intent, not commitment.
  • Referral: Do they tell others? Not a formal referral programme — organic sharing. If users are not telling others without being asked, the product doesn't have product-market fit yet.
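The first two metrics fall straight out of an event log of (user, days since signup, event) rows. A minimal in-memory sketch — the event names and log shape are assumptions for illustration:

```python
# Illustrative event log: (user_id, days_since_signup, event)
events = [
    ("u1", 0, "signup"), ("u1", 0, "core_action"), ("u1", 1, "core_action"),
    ("u2", 0, "signup"), ("u2", 0, "core_action"),
    ("u3", 0, "signup"),  # signed up but never completed the core flow
]

def retention(events, day):
    """Share of signed-up users active again on the given day."""
    signed_up = {u for u, _, e in events if e == "signup"}
    active = {u for u, d, e in events if d == day and e != "signup"}
    return len(signed_up & active) / len(signed_up)

def completion(events):
    """Share of signed-up users who ever completed the core action."""
    signed_up = {u for u, _, e in events if e == "signup"}
    completed = {u for u, _, e in events if e == "core_action"}
    return len(signed_up & completed) / len(signed_up)

print(f"Day 1 retention: {retention(events, 1):.0%}")    # 33%
print(f"Core flow completion: {completion(events):.0%}")  # 67%
```

In practice an analytics tool gives you these curves for free, but the definitions are worth pinning down explicitly so the team agrees on what "active" and "completed" mean before the data starts arriving.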

Give the MVP at least four to six weeks with real users before drawing conclusions. One week of data from ten users is not statistically meaningful. Four weeks of data from 50 users starts to be.

Questions

The answers we give most often.

What does MVP actually stand for?
Minimum Viable Product. The minimum surface area of working software that lets you test the core hypothesis with real users. Not a prototype, not a proof-of-concept — a deployed product people can actually use.
What should I cut from my MVP?
Everything that isn't required to test the core hypothesis. Admin tools (manual processes work at MVP scale), growth mechanics (you don't have users yet), secondary features (they exist to retain users you haven't acquired), edge case handling (you'll learn the edge cases from real users).
How do I know when my MVP is done?
When a real user can complete the core flow from start to finish without help from you. Not when every feature is built — when the core loop works.
Can I build an MVP myself?
If you have the technical skills, yes. No-code tools (Bubble, Webflow + Memberstack) can produce MVP-quality products for the right type of product. For anything with complex logic, real-time features, or regulated data, a technical build is usually faster in the long run.
How long should an MVP take?
Eight to twelve weeks from a senior team. Less if scope is very tight. More if integrations are complex. The danger of rushing an MVP is building technical debt that makes the v1 cost twice what it should.