Why MMU is an MIT open-source CLI, not a paid SaaS. Details the solo builder workflow fit, the 3-tier 'What-How-Auto' revenue model, and explicit validation metrics.
The First Idea Was a SaaS Web Dashboard
Right after organizing the 534 items, the first thing that came to mind was a web dashboard. It was the natural call. There was structured content, users would need per-project progress tracking, team collaboration would be nice, and subscription pricing could be attached. The textbook starting point for a B2B SaaS.
The Initial SaaS Vision
graph TB
subgraph SaaS["Web Dashboard SaaS (Initial Plan)"]
A["User sign-up/login"] --> B["Load checklist"]
B --> C["Check items + progress visualization"]
C --> D["Team sharing + collaboration"]
D --> E["Export reports"]
end
subgraph PRICING["Pricing Plan"]
P1["Free: 3 categories"]
P2["Pro: All categories, $9/mo"]
P3["Team: Collaboration + reports, $19/mo"]
end
SaaS --> PRICING
The feature matrix we sketched out:
| Feature | Free | Pro ($9/mo) | Team ($19/mo) |
|---|---|---|---|
| Checklist access | 3 categories | All 15 | All 15 |
| Progress dashboard | Basic bar chart | Category heatmap | Category heatmap |
| Stack-based filtering | No | Yes | Yes |
| Team sharing | No | No | Up to 5 members |
| Report export | No | PDF + CSV | PDF + CSV |
| Priority recommendations | No | Basic | AI-powered |
| Custom item addition | No | 10 items | Unlimited |
Three advantages to this model:
- Recurring revenue (MRR): Monthly subscriptions mean predictable income. 100 Pro users = $900/mo, 20 Team accounts = $380/mo. Combined: $1,280/mo.
- Data accumulation: Analyzing users’ check patterns could generate insights like “Top 10 most-missed items” or “Risk patterns by stack.” These become content marketing material and fuel for premium features.
- Competitive moat: Keeping the checklist private means the content itself is the competitive advantage. You have to sign up to see all 534 items.
Clean business model. We even built MRR scenario spreadsheets.
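The spreadsheet scenario above reduces to a few lines. A minimal sketch (tier names and subscriber counts are the illustrative figures from the pricing plan, not real data):

```python
# MRR scenario math for the tiered pricing plan above.
def mrr(subscribers: dict[str, int], prices: dict[str, int]) -> int:
    """Monthly recurring revenue summed across tiers."""
    return sum(count * prices[tier] for tier, count in subscribers.items())

prices = {"pro": 9, "team": 19}
scenario = {"pro": 100, "team": 20}

print(mrr(scenario, prices))  # 100*9 + 20*19 = 1280
```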
Implementation Cost Estimate
Building a SaaS requires infrastructure. Minimum stack estimate for a solo operator:
| Component | Option | Monthly Cost |
|---|---|---|
| Frontend | Next.js + Vercel | $0 (Hobby)–$20 (Pro) |
| Backend API | Railway / Fly.io | $5-15 |
| DB | Supabase / PlanetScale | $0-25 |
| Auth | Clerk / NextAuth | $0-25 |
| Payments | Stripe | 2.9% + $0.30 per transaction |
| Email | Resend | $0-20 |
| Monitoring | Sentry | $0-29 |
| Total (operating cost) | | $5-134/mo |
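The operating-cost range can be sanity-checked with a quick sum (Stripe is excluded because its cost is per-transaction, not a fixed monthly fee):

```python
# Monthly fixed-cost range from the infrastructure table above.
costs = {
    "frontend":   (0, 20),   # Next.js + Vercel
    "backend":    (5, 15),   # Railway / Fly.io
    "db":         (0, 25),   # Supabase / PlanetScale
    "auth":       (0, 25),   # Clerk / NextAuth
    "email":      (0, 20),   # Resend
    "monitoring": (0, 29),   # Sentry
}
low = sum(lo for lo, _ in costs.values())
high = sum(hi for _, hi in costs.values())
print(f"${low}-{high}/mo")  # $5-134/mo
```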
Server costs exist even at 0 subscribers. You need 100 subscribers to break even. Until then, it’s a net loss.
Then there’s development time. Auth, payments, dashboard UI, team collaboration, report export — built by one person, that’s at minimum 4-6 weeks. Time that could be spent refining the checklist content.
The problem was the user perspective.
We Found the Contradiction
The Target User’s Reality
The people who need this checklist are solo SaaS builders. Indie hackers, solo founders, side-project creators. Here’s what their monthly expenses already look like:
| Item | Monthly Cost Already Being Paid |
|---|---|
| Hosting (Vercel, Railway, Fly.io, etc.) | $0-25 |
| DB (Supabase, PlanetScale, Neon, etc.) | $0-25 |
| Domain | $1-2 |
| Payment platform fees | 2.9-5% of revenue |
| Monitoring (Sentry, Betterstack, etc.) | $0-29 |
| Email (Resend, Postmark, etc.) | $0-20 |
| Analytics (PostHog, Mixpanel, etc.) | $0 (free tier) |
| AI API (OpenAI, Anthropic, etc.) | $10-50+ |
| Total | $10-150+ |
Telling this person “to check what you missed before launch, subscribe to yet another SaaS at $9/month” was a contradiction.
Let’s be specific. The checklist’s reason for existence is reducing unnecessary wasted effort for builders. It catches missed items early so you don’t spend time patching them after launch. But to access those savings, you pay $9 every month.
Among the 534 items are things like:
- “Review and cut unnecessary SaaS subscriptions” (Category: Operations)
- “Check for free-tier alternatives that can replace paid tools” (Category: Costs)
- “Launch with minimum tooling. Only add what you need after launch” (Category: Process)
There’s a real chance the checklist tool itself would qualify as an “unnecessary subscription.” For a project with no revenue yet, is $9/month for a pre-launch verification tool rational? Most solo builders would say no.
A tool that tells users to “cut unnecessary subscriptions” becoming an unnecessary subscription itself is a structural contradiction.
The Usage Frequency Problem
SaaS subscriptions are justified for tools used daily or weekly. Slack, Notion, Figma are daily drivers. $10-20/month is fine.
A launch checklist is different. The usage pattern looks like this:
| Timeframe | Usage Frequency | Time Per Use |
|---|---|---|
| 2-4 weeks before launch | 1-2x/day | 10-30 min |
| Launch week | 3-5x/day | 1-2 hours |
| First week after launch | 1x/day | 10 min |
| 1 month post-launch onward | Almost never | 0 |
Peak usage lasts 4-6 weeks at most. After that, the subscription gets cancelled. In a monthly subscription model, this creates extremely high churn. Per-user LTV (Lifetime Value) of $18-54. Hard to justify CAC (Customer Acquisition Cost).
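The LTV figure follows directly from the usage pattern: a subscriber plausibly pays for two to six months before churning. A quick check of that arithmetic:

```python
# LTV at $9/mo under a 2-6 month subscriber lifetime,
# matching the usage-frequency table above.
MONTHLY_PRICE = 9

def ltv(lifetime_months: int) -> int:
    """Lifetime value = monthly price * months before churn."""
    return MONTHLY_PRICE * lifetime_months

print(ltv(2), ltv(6))  # 18 54
```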
We considered an annual subscription at $49, but even fewer people would prepay $49 for a tool they’d use once or twice a year.
The Weakness of the Content Moat
We assumed keeping the checklist private would make the content itself a competitive advantage. But honest analysis showed otherwise:
| Factor | Reality |
|---|---|
| Source of items | 70%+ of the 534 items were collected from public sources (YC, Indie Hackers, OWASP, Web.dev) |
| Replication difficulty | A competitor could compile similar sources into a comparable checklist in 2-3 weeks |
| Actual competitive moat | Not the content itself, but “the quality of the experience using the checklist” |
| Open-source risk | Forking and reselling is possible, but since the content is already based on public sources, additional protection is marginal |
The conclusion was clear. The benefits of closing the content (subscription revenue) were smaller than the benefits of opening it (adoption speed, community contributions, trust, organic discovery).
Comparing Three Alternatives
After ruling out the SaaS dashboard, we evaluated three alternatives:
| Criterion | Web Dashboard (SaaS) | Static Site (Docs) | CLI (Open Source) |
|---|---|---|---|
| Entry barrier | Sign-up + payment | None | pip install |
| Offline use | No | Yes (download) | Yes |
| Automation integration | Requires API (extra development) | No | CI/CD native |
| Data ownership | Stored on server | User’s local machine | .mmu/ local |
| Context switching | Browser tab switch | Browser tab switch | Runs in terminal |
| Community contribution | Difficult | PR possible | PR possible |
| Monetization | Direct (subscriptions) | Difficult | Indirect (paid content) |
| Server maintenance | Required (cost incurred) | Not required | Not required |
| Build time | 4-6 weeks | 1-2 weeks | 2-3 weeks |
| Maintenance burden | High (infra + CS) | Low | Medium (Issues) |
Decision Flow
graph TB
subgraph DECISION["Decision Flow"]
Q1{"Target user's<br/>primary environment?"}
Q1 -->|"Terminal"| A1["CLI direction"]
Q1 -->|"Browser"| A2["Web-based direction"]
A2 --> Q2{"Can you sustain<br/>server costs?"}
Q2 -->|"Yes"| A3["Web dashboard"]
Q2 -->|"No"| A4["Static site"]
A1 --> Q3{"Is CI/CD automation<br/>needed?"}
Q3 -->|"Yes"| Q4{"Release as<br/>open source?"}
Q3 -->|"No"| A4
Q4 -->|"Yes"| FINAL["CLI + MIT Open Source"]
Q4 -->|"No"| A5["CLI + Proprietary"]
A3 -.->|"Contradiction: forcing subscriptions"| X["Rejected"]
A4 -.->|"Limitation: no automation"| X2["Keep as secondary channel"]
A5 -.->|"Limitation: slow adoption"| X3["Rejected"]
end
A static site (documentation) was also a strong contender. Shortest build time, lowest maintenance burden. We even considered a GitHub README.md format like an awesome-checklist repo.
But static docs had two critical limitations:
- No automation: Documentation is for reading, not executing. Only a CLI can look at package.json, determine it’s a Node.js project, and filter items accordingly.
- No state tracking: Tracking “how many of 534 items have I completed?” requires users to create separate documents. A CLI auto-saves to the .mmu/ directory.
Conclusion: CLI + open source fit most naturally into the workflow of the target user — solo builders who code in the terminal.
Three Reasons We Chose CLI
1. Builders Work in the Terminal
Having someone who writes code open a browser tab to check a checklist introduces context switching. Code -> browser -> checklist dashboard -> check item -> back to code. If this round-trip happens 10+ times a day, focus breaks.
Running mmu status in the terminal means zero context switching.
# Check directly in terminal
$ mmu status
MAKE ME UNICORN - STATUS DASHBOARD
──────────────────────────────────
Evolution: [████████░░] Hatchling → Pegasus
Launch Gates
────────────
M0 Skeleton [██████████] 100% PASS
M1 Core [████████░░] 83%
M2 Hardening [██████░░░░] 61%
M3 Pre-launch [████░░░░░░] 44%
M4 Launch Day [██░░░░░░░░] 22%
M5 Post-launch [░░░░░░░░░░] 0%
mmu next tells you the highest-priority item based on current progress. No browser needed.
2. Zero External Dependencies
A web service dies when the server goes down. When a SaaS company shuts down, the data goes with it. In 2024-2025 alone, dozens of small SaaS products shut down — some without even giving users time to export their data.
A CLI runs locally. Checklist data is stored locally in .mmu/. Once the package is installed from PyPI, it works without an internet connection.
| Scenario | Web Dashboard | CLI |
|---|---|---|
| Internet down | Unusable | Works normally |
| Service shutdown | Data loss risk | Local data preserved permanently |
| Server outage | Unusable | Works normally |
| Privacy | Data sent to server | Local only, no external transmission |
| Airplane/subway | Unusable | Works normally |
| Security audit | Must report external service dependency | Not applicable |
This isn’t just a technical advantage — it’s a matter of user data sovereignty. A solo builder’s checklist progress, however trivial it might seem, reveals the maturity and weaknesses of their project. There’s no reason to upload this data to a third-party server.
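Local-first state is simple to implement, which is part of the appeal. A minimal sketch of progress persisted under .mmu/ (the file name and schema here are illustrative, not MMU's actual format):

```python
# Sketch: checklist progress lives in a JSON file under .mmu/,
# so it works offline and survives any third-party shutdown.
import json
from pathlib import Path

STATE = Path(".mmu") / "progress.json"

def save(progress: dict) -> None:
    """Write progress to the local .mmu/ directory."""
    STATE.parent.mkdir(exist_ok=True)
    STATE.write_text(json.dumps(progress, indent=2))

def load() -> dict:
    """Read progress back; empty dict if nothing saved yet."""
    return json.loads(STATE.read_text()) if STATE.exists() else {}

save({"billing-webhook-verify": True})
print(load())  # {'billing-webhook-verify': True}
```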
3. Automation Is Possible
The real value of a CLI is CI/CD integration. Having the system automatically verify on every code push is more reliable than relying on human memory to check a checklist.
# .github/workflows/mmu-guardrails.yml
name: MMU Launch Gate Check
on: [push]
jobs:
mmu-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: pip install make-me-unicorn
- run: mmu scan # Auto-detect project stack
- run: mmu doctor # Auto-verify launch gates
- run: mmu gate --stage M2 # Check M2 pass status
mmu scan auto-detects the project’s tech stack. If package.json exists, it’s recognized as a Node.js project; if requirements.txt exists, Python. It auto-pre-checks already-satisfied items like the presence of robots.txt or LICENSE files.
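Marker-file detection of this kind is straightforward. A sketch of the principle (the real heuristics in mmu scan are presumably richer; the marker-to-stack mapping here is illustrative):

```python
# Sketch: detect a project's stack from well-known marker files.
from pathlib import Path

MARKERS = {
    "package.json":     "node",
    "requirements.txt": "python",
    "pyproject.toml":   "python",
    "go.mod":           "go",
}

def detect_stack(root: str = ".") -> set[str]:
    """Return the set of stacks whose marker files exist under root."""
    return {stack for marker, stack in MARKERS.items()
            if (Path(root) / marker).exists()}
```

Running detect_stack(".") in a directory containing only package.json would return {"node"}, which is then enough to filter the checklist to Node.js-relevant items.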
mmu doctor diagnoses the current project state against all 534 items and outputs missing items as warnings. mmu gate --stage M2 returns the pass/fail status for a specific milestone as an exit code. CI/CD can use this exit code to block deployments or trigger alerts.
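The exit-code contract is what makes the CI integration work: 0 means the stage passes, non-zero blocks the pipeline. A minimal sketch of that contract (the threshold and figures are illustrative):

```python
# Sketch: a gate check that CI can consume as an exit code.
def gate(completed: int, total: int, threshold: float = 1.0) -> int:
    """Return a process exit code: 0 = stage passes, 1 = stage fails."""
    return 0 if total and completed / total >= threshold else 1

# M2 at 61% against a 100% threshold fails the gate;
# a real CLI would end with sys.exit(gate(...)).
print(gate(completed=61, total=100))  # 1
print(gate(completed=100, total=100))  # 0
```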
Achieving this kind of automation with a web dashboard would require building a separate API, managing auth tokens, and introducing network dependencies. Unnecessary complexity.
License Decision: Why MIT
After deciding on CLI, the next decision was the license. Whether to release as open source, and if so, which license.
License Options Reviewed
graph LR
subgraph OPEN["Fully Open"]
MIT["MIT"]
APACHE["Apache 2.0"]
end
subgraph RESTRICTED["Conditionally Open"]
FAIR["Fair-code<br/>(n8n model)"]
BSL["BSL<br/>(HashiCorp model)"]
CUSTOM["Additional conditions<br/>(Dify, SSPL)"]
end
subgraph CLOSED["Closed"]
PROP["Proprietary"]
end
MIT -->|"Max spread"| ADOPTION["Adoption Speed"]
FAIR -->|"Resale defense"| PROTECTION["Value Protection"]
PROP -->|"Full control"| CONTROL["Control"]
ADOPTION -->|"Current scale < 100"| PRIORITY["Current Priority"]
PROTECTION -->|"Scale > 10,000"| FUTURE["Future Consideration"]
We compared each license across 7 criteria:
| Criterion | MIT | Apache 2.0 | Fair-code (n8n) | BSL (HashiCorp) | SSPL (MongoDB) | Proprietary |
|---|---|---|---|---|---|---|
| Adoption ease | Highest | High | Medium | Low | Low | Lowest |
| Resale defense | None | None | Yes (commercial resale limited) | Strong (competing service restricted) | Very strong | Complete |
| Enterprise legal clearance | Instant | Instant | Review required | Review mandatory | Often avoided | N/A |
| Community contribution willingness | High | High | Medium | Low | Low | None |
| Fork risk | Yes | Yes | Low | Very low | Very low | None |
| License change flexibility | Easy (permissive to restrictive) | Easy | Difficult | Difficult | Difficult | N/A |
| Notable projects | LangChain, CrewAI | HuggingFace, Qdrant | n8n | Terraform, Vault | MongoDB | - |
License Change Case Studies
Open-source license changes have been frequent in recent years. Analyzing these patterns informed our decision:
| Project | Before | After | Scale at Change | Reason |
|---|---|---|---|---|
| Elastic | Apache 2.0 | SSPL | Tens of thousands of users, AWS conflict | AWS launched competing service (OpenSearch) |
| MongoDB | AGPL | SSPL | Massive commercial usage | Cloud providers offering MongoDB as a service |
| HashiCorp | MPL 2.0 | BSL 1.1 | Terraform with tens of thousands of users | Competing services free-riding |
| Redis | BSD-3 | SSPL | Massive commercial usage | Cloud provider conflict |
Common thread: All changed licenses only after massive adoption. Choosing a defensive license when you have fewer than 100 users is optimizing in the wrong order.
Our Rationale
We did wrestle with it. The scenario of someone taking all 534 items, building a web dashboard, and selling it as a paid service. MIT can’t prevent that.
But at this stage, adoption mattered more than defense.
- A 534-item checklist nobody knows about has zero value. MIT minimizes friction for adoption. No “am I allowed to use this?” hesitation — just pip install.
- No enterprise legal review friction. Startups or large companies, MIT passes instantly. Fair-code or BSL triggers “does this condition apply to our use case?” reviews. That delays or kills adoption.
- Higher community contribution motivation. No concern that “my contributed code might get locked behind a license change later,” making it easier to submit PRs.
- With fewer than 100 users, debating license defense strategy is premature. As the table above shows, license changes happen at the 10,000+ user scale.
Scale may justify changing the license later. But at under 100 users, spending time on license defense strategy is the wrong priority. Adoption first, defense second.
Why We Didn’t Reconsider Freemium SaaS
Even after deciding on CLI + open source, a thought briefly surfaced: “What if we keep the CLI free but build a separate premium web dashboard?” Like GitLab or Sentry — open-source CLI/SDK with a Freemium SaaS web dashboard.
After review, it didn’t fit at this stage:
| Factor | Reality |
|---|---|
| Market size | The TAM for “SaaS launch checklist dashboard” is narrow. Among global indie hackers, very few would pay for a checklist tool |
| Willingness to pay | A checklist is closer to “reference material.” A monthly subscription on reference material quickly costs more than a one-time book purchase |
| Support burden | SaaS means support requests scale with users. Solo operation has hard limits on manageable CS volume |
| Development resources | Maintaining both a web dashboard and CLI simultaneously means one gets neglected. One person managing both leads to lower quality on both |
| Time to first revenue | Dashboard development 4-6 weeks + user acquisition 3-6 months. First revenue at minimum 4-8 months out |
Building the CLI well, securing adoption first, and generating revenue from the How layer (Playbooks) is more realistic for a solo builder.
Where the Money Comes From — The What / How / Auto Model
What we released as open source is “what you need to check” (What). Revenue is planned from “how to actually do it” (How) and “do it automatically” (Auto).
3-Tier Revenue Model
graph TB
subgraph WHAT["What (Open Source, Free)"]
W1["534-item checklist"]
W2["CLI tool (mmu)"]
W3["Scoring + visualization"]
W4["Stack auto-detection (scan)"]
W5["Priority recommendations (next)"]
end
subgraph HOW["How (Playbook Pack, Paid)"]
H1["Provider-specific step-by-step guides"]
H2["Code snippets + screenshots"]
H3["Stripe/LemonSqueezy payment setup"]
H4["NextAuth/Clerk auth implementation"]
H5["SEO completion checklist"]
end
subgraph AUTO["Auto (AI Coach, Future)"]
A1["mmu doctor --deep"]
A2["Automated code analysis"]
A3["Fix suggestion generation"]
A4["Auto PR creation"]
end
WHAT -->|"I know what to check,<br/>but how do I do it?"| HOW
HOW -->|"Doing it manually<br/>every time is tedious"| AUTO
WHAT -.->|"Free"| USERS["User Acquisition"]
HOW -.->|"$29-49 one-shot"| REVENUE1["Revenue 1"]
AUTO -.->|"$9-19/mo"| REVENUE2["Revenue 2"]
The What Layer — Open Source (Current)
The checklist items themselves, the CLI tool, progress visualization, scoring, and stack auto-detection. All free.
The checklist has an item: “Add webhook signature verification to payment processing.” Knowing this item exists is provided for free.
The role of this layer is not revenue generation but user acquisition and trust building. The quality of the free checklist must be high enough to create motivation for purchasing paid Playbooks.
The How Layer — Playbook Pack (Planned)
You now know the item “Add webhook signature verification to payment processing.” But how to actually implement it in Stripe, how Lemon Squeezy differs, how to test it — that requires a separate guide.
That guide is the Playbook Pack. Provider-specific, step-by-step implementation guides. $29-49 one-shot pricing.
The concrete difference between What and How:
| What (Free) | How (Paid Playbook) |
|---|---|
| “Add webhook signature verification to payments” | Stripe webhook signature verification code (Node.js/Python), testing methods, Stripe CLI local test setup, failure retry configuration, monitoring dashboard setup |
| “Apply RBAC to authentication” | NextAuth + Clerk RBAC implementations respectively, middleware config, role-based access control testing, admin page protection patterns |
| “Measure Core Web Vitals” | Lighthouse CI setup, optimization guides per framework (Next.js/Astro/Nuxt), image optimization, font loading, bundle analysis |
First 3 Playbooks planned:
| Playbook | Target Checklist | Expected Price | Expected Length |
|---|---|---|---|
| Billing Setup (Stripe + LemonSqueezy) | 04-billing, 36 items | $29 | 80-100 pages |
| Auth Complete (NextAuth + Clerk) | 03-auth, 42 items | $29 | 60-80 pages |
| SEO Launch Kit | 08-seo-marketing, 58 items | $49 | 100-120 pages |
The Auto Layer — AI Coach (Future)
Even reading a Playbook and implementing it yourself takes time. mmu doctor --deep would analyze code, detect missing items, and auto-generate fix suggestions or PRs.
Technically, this is feasible today. LLMs reading code and cross-referencing checklist items is well within 2026 capabilities.
But demand hasn’t been validated, so we’re not building it. The investment sequence:
- Issues/Discord accumulate enough “how do I do this?” questions
- Build and sell Playbooks
- Playbook buyers provide “wish this was automatic” feedback
- Then build Auto
No demand means we don’t build. “Technically feasible” and “market wants it” are different questions.
Open Source as a Distribution Channel
Open source is not a license — it’s a distribution strategy. The act of releasing under MIT is itself marketing.
Open Source Adoption Funnel
graph TB
subgraph FUNNEL["Open Source Adoption Funnel"]
D1["Discovery<br/>(GitHub Trending, blog, HN)"]
D1 --> D2["Install<br/>(pip install make-me-unicorn)"]
D2 --> D3["Use<br/>(mmu scan, mmu status)"]
D3 --> D4["Share<br/>(mmu share → score sharing)"]
D4 --> D5["Contribute<br/>(Issues, PRs, item additions)"]
D5 --> D6["Convert<br/>(Playbook purchase)"]
end
D1 -.->|"SEO + community"| ORGANIC["Organic traffic"]
D4 -.->|"Viral"| D1
D5 -.->|"Quality improvement"| D3
Benefits of open source at each stage:
| Stage | Closed SaaS | Open-Source CLI |
|---|---|---|
| Discovery | Paid ads, SEO competition | GitHub Trending, community shares, “Show HN,” blog citations |
| Install | Sign-up -> email verify -> login | pip install make-me-unicorn (30 seconds) |
| Use | Requires onboarding flow design | Start with a single mmu scan command |
| Share | “This SaaS is good” (low credibility) | “My project scored 78” + screenshot (concrete) |
| Contribute | Feedback form | Direct item additions/edits via GitHub PR |
| Convert | Free->Paid conversion typically 2-5% | CLI user -> Playbook purchase (conversion rate unverified) |
The key: discovery cost (CAC) is extremely low. SaaS requires investment in Google Ads or content marketing to be found. For open source, GitHub itself is the distribution platform. The README is the landing page, Stars are social proof, and Issues are the user feedback channel.
The mmu share command is an intentionally designed viral element. It generates an image of the project score for sharing on social media. The desire to share “my project’s launch readiness score” is natural. That sharing feeds back into discovery.
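The kind of summary a share feature produces can be sketched simply. This is an illustrative text version (the real mmu share renders an image; the format and function name here are assumptions):

```python
# Hypothetical sketch: render a shareable launch-readiness score line.
def share_text(project: str, score: int, total: int = 100) -> str:
    """Build a progress-bar summary suitable for pasting into a post."""
    filled = round(score / total * 10)
    bar = "█" * filled + "░" * (10 - filled)
    return f"{project} launch readiness: [{bar}] {score}/{total}"

print(share_text("my-saas", 78))  # my-saas launch readiness: [████████░░] 78/100
```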
Core Principles of the Open-Core Model
MMU’s open-core structure in one sentence:
The list of what to check (What) is free, the implementation guide (How) is paid, automatic execution (Auto) is SaaS.
For this structure to work, three conditions must be met:
| Condition | Description | Current Status |
|---|---|---|
| What quality must be sufficient | The free checklist alone must be valuable enough for adoption | 534 items, 15 categories |
| What->How conversion must feel natural | “I see the items but don’t know how to implement them” must arise organically | Monitoring Issues |
| How quality must be clearly above What | If not differentiated from free content, no purchase motivation | Unvalidated |
The third condition is the hardest. What (checklist items) is essentially organized information already available on the internet. How (Playbooks) must also differentiate clearly from tutorials findable online.
The differentiator is context. Individual tutorials cover one topic. Playbooks are 1:1 linked to checklist items, so “why this needs to be done (checklist) -> how to do it (Playbook) -> auto-verify completion (CLI)” flows as a single connected experience. This connectivity is hard to replicate by searching for individual tutorials.
Risk Analysis
Open-source release is not risk-free. Here are the anticipated risks and mitigation plans:
| Risk | Probability | Impact | Mitigation |
|---|---|---|---|
| Fork and resell as paid service | Medium | Low | 70% of content is from public sources, so protective effect is marginal. May actually increase awareness of the original project |
| Large company launches similar tool | Low | High | Market is too small for large-company entry incentives. Even if they enter, community-based differentiation |
| Adoption stagnation (Stars < 100) | Medium | High | At 3-month mark, if adoption metrics miss targets, re-examine distribution channels. Content itself is preserved |
| No What->How conversion | Medium | High | If no “how-to” demand in Issues/Discord, pause Playbook production. Consider pivoting to consulting/workshops |
| No contributors | High | Medium | Maintain scope sustainable for solo operation. Community contributions are a bonus, not a requirement |
| Backlash if license change needed | Low | Medium | Not applicable at current scale. Future changes would preserve MIT license on existing versions |
The most critical risk is adoption stagnation combined with no What->How conversion. In that scenario, we’ve released everything for free with zero revenue. Mitigation is addressed in the validation criteria below.
Similar Patterns That Have Worked
We analyzed cases where giving away the What for free and monetizing How/Auto has actually worked:
| Project | What (Free) | How/Auto (Paid) | Scale | Solo Operation? |
|---|---|---|---|---|
| LangChain | Framework (MIT) | LangSmith (observability + debugging SaaS) | 80K+ Stars | No (VC-backed) |
| Hugging Face | Transformers (Apache 2.0) | Hub Pro, Inference Endpoints | 130K+ Stars | No (VC-backed) |
| n8n | Workflow core (Fair-code) | Cloud, Enterprise license | 50K+ Stars | No (VC-backed) |
| Excalidraw | Whiteboard (MIT) | Excalidraw+ (collaboration, storage) | 80K+ Stars | Small team |
| Cal.com | Scheduling core (AGPL) | Enterprise, managed service | 30K+ Stars | No (VC-backed) |
| Plausible | Analytics core (AGPL) | Managed cloud | 20K+ Stars | Small team (bootstrapped) |
Honestly, most of these had VC funding and teams of dozens to hundreds. Cases where a solo builder generated revenue from a purely OSS-first model are rare.
But there are applicable patterns:
- Plausible: Grew through bootstrapping without VC funding. A two-track structure of free self-host + paid cloud. The conversion trigger: “If you don’t want to manage it yourself, we’ll handle it.”
- Excalidraw: Gathered users through MIT open source, then monetized collaboration features and storage. Core features free, convenience features paid.
Applied to MMU: Checklist (core) is free, implementation guides (convenience) are paid. Similar to Plausible’s “self-host vs managed” pattern, we create conversion motivation through the convenience gap between “find and implement it yourself vs follow the Playbook.”
Validation Criteria and Pivot Conditions
We don’t yet know if this decision is right. So we pre-defined criteria for judging “is this working.”
3-Month Checkpoint
| Metric | Target | Action if Missed |
|---|---|---|
| GitHub Stars | 200+ | Re-examine distribution channels. Improve README/landing, increase community posting |
| mmu share weekly executions | 50+ | Viral feature ineffective. Improve sharing UX or explore alternative growth channels |
| Issues/Discussions “how-to” tag | 5+/week | What->How conversion demand insufficient. Pause Playbook production |
| PyPI weekly downloads | 100+ | Insufficient actual users. Analyze discovery paths |
6-Month Checkpoint
| Metric | Target | Action if Missed |
|---|---|---|
| GitHub Stars | 500+ | Analyze growth rate. If stagnating, reconsider project direction |
| Playbook waitlist | 100+ | Under 100: consider switching from Playbooks to consulting/workshop model |
| Community contribution PRs | 10+ cumulative | If no contributions, scope down to solo-maintainable range |
| mmu doctor weekly executions | 30+ | CI/CD integration demand weak. Re-validate CLI’s core value proposition |
Pivot Scenarios
Pre-defined alternatives if the above criteria are not met:
| Situation | Alternative |
|---|---|
| Adopted but no conversion | Change revenue model from Playbooks to 1:1 coaching/consulting |
| Adoption itself stalls | Repackage as VS Code extension, GitHub Action, or other interfaces |
| Competitor launches better tool | If differentiation is impossible, contribute checklist content to them and sunset the tool |
Principles Learned from This Process
Looking back at this decision process, there are principles worth referencing for any solo builder designing a business model for an open-source project:
- With fewer than 100 users, spread beats defense. License defense, competitive moats, IP protection — worry about these after you have users.
- Separate “technically feasible” from “market wants it.” AI Coach (Auto tier) is technically feasible, but demand is unvalidated, so we don’t build it.
- A tool’s purpose and business model must not contradict each other. A tool that tells users to reduce costs cannot add costs.
- Set validation criteria before executing. Not “hope this works” but “200 Stars at 3 months, 100 waitlist at 6 months” — numbers let you avoid missing the pivot window.
- Pre-defining pivot scenarios reduces emotional attachment. When “if this doesn’t work, we do that” is already decided, failure doesn’t pull you into emotional decision-making.
We’ll continue documenting this process. The next post goes deeper into real-world monetization cases of AI/LLM open-source projects that use the same open-core structure.
Related Posts

Where the 534 Items Came From
Documenting how 534 SaaS launch checklist items were derived from 80 service analyses, 12 guidelines, and 5 real-world failures, including the P0-P3 priority logic.

The Code Was Done, But Everything Else Wasn't
Analyzing the difference between 'code complete' and 'product complete' through the 3-week post-feature work (payment stability, security, legal docs, SEO, etc.) which accounted for 88% of the effort.

Multi-Engine Architecture — Parallel Collection from 3 AI Search Engines
Analysis of multi-engine architecture design principles that leverage response variance as signals, featuring parallel collection structures and scalability via the adapter pattern.