Documenting the psychological mechanisms and viral loop strategies behind three sharing elements designed for MMU's PLG growth: score cards, badges, and custom checklist links.
The First Question of Product-Led Growth
PLG (Product-Led Growth) is the approach where “the product itself is the marketing channel.” No sales team, no ads — the experience of using the product is what brings in new users.
For PLG to work in a CLI tool like MMU, users need to share their results. If someone runs the checklist, looks at the results, thinks “that’s nice,” and stops there, growth is zero.
```mermaid
graph LR
A["User runs\nMMU"] --> B["Checks\nchecklist results"]
B --> C{"Motivated\nto share?"}
C -->|"YES"| D["Shares\nresult card"]
C -->|"NO"| E["Ends here\n(no growth)"]
D --> F["New user\ndiscovers MMU"]
F --> A
style E fill:#ffebee,stroke:#f44336
style D fill:#e8f5e9,stroke:#4caf50
```
Why the Share Feature Is “First”
Here’s why mmu share was the first feature built on MMU’s roadmap:
Priority Comparison
| Feature | User Value | Growth Contribution | Implementation Difficulty | Priority |
|---|---|---|---|---|
| share (result sharing) | Medium | High | Low | 1st |
| compare (previous result comparison) | High | None | Medium | 2nd |
| export (PDF report) | High | Low | Medium | 3rd |
| dashboard (web dashboard) | High | Medium | High | 4th |
Build the feature with the highest growth contribution first. No matter how high the user value, it’s meaningless if the user count is zero. The feature that increases user count comes first.
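The ranking logic behind the table can be sketched as a weighted score in which growth contribution dominates and implementation difficulty counts against a feature. The weights below are illustrative assumptions, not MMU's actual formula — they reproduce share as the clear first pick, not necessarily the full ordering:

```typescript
// Illustrative priority sketch: growth contribution weighted highest,
// difficulty subtracted. Levels mirror the table above.
type Level = "none" | "low" | "medium" | "high";

const levelValue: Record<Level, number> = { none: 0, low: 1, medium: 2, high: 3 };

interface Feature {
  name: string;
  userValue: Level;
  growthContribution: Level;
  difficulty: Level;
}

// Hypothetical weights: growth 3x, user value 1x, difficulty -1x.
function priorityScore(f: Feature): number {
  return (
    3 * levelValue[f.growthContribution] +
    levelValue[f.userValue] -
    levelValue[f.difficulty]
  );
}

const roadmap: Feature[] = [
  { name: "share", userValue: "medium", growthContribution: "high", difficulty: "low" },
  { name: "compare", userValue: "high", growthContribution: "none", difficulty: "medium" },
  { name: "export", userValue: "high", growthContribution: "low", difficulty: "medium" },
  { name: "dashboard", userValue: "high", growthContribution: "medium", difficulty: "high" },
];

const ranked = [...roadmap].sort((a, b) => priorityScore(b) - priorityScore(a));
console.log(ranked.map((f) => f.name).join(" > "));
```

With these weights, share scores far above everything else — which is the point: a growth-weighted formula picks the growth feature first even when its user value is only medium.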
The Share, Discover, Use Loop
```mermaid
graph TD
subgraph LOOP["PLG Growth Loop"]
U["Use"] --> S["Share"]
S --> D["Discover"]
D --> U
end
subgraph METRICS["Metrics at Each Stage"]
M1["Weekly run count"]
M2["Share cards generated"]
M3["Share-to-install conversion"]
end
U --> M1
S --> M2
D --> M3
```
In this loop, without sharing there is no discovery, and without discovery there is no growth. That is why share comes first.
Why We Chose Plain Text Cards
Options Compared
| Format | Pros | Cons |
|---|---|---|
| Image card (PNG) | Visually appealing | Requires server (image generation), awkward in CLI context |
| Web link (URL) | Rich information | Requires server, hosting costs |
| Plain text | No server needed, paste anywhere | Visually simple |
| JSON export | Data-rich | Hard for non-technical people to read |
Why plain text won:
- No server required: MMU is a CLI tool. It doesn’t run a server. Image generation or web hosting would introduce infrastructure costs and maintenance
- Universal compatibility: Text can be pasted into Twitter, Slack, Discord, GitHub, blogs, email — anywhere
- Natural in CLI context: CLI users are comfortable with text environments. Text feels more “CLI-native” than images
- Instant copy: mmu share copies the card to the clipboard automatically — paste it immediately
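The zero-dependency way to do clipboard copy from a Node CLI is to pipe the text into the platform's clipboard utility. A sketch, assuming pbcopy on macOS, clip on Windows, and xclip on Linux — this is not necessarily how mmu share is implemented:

```typescript
import { spawn } from "node:child_process";

// Pick the platform clipboard utility (a sketch, not MMU's actual code).
function clipboardCommand(platform: string): { cmd: string; args: string[] } {
  if (platform === "darwin") return { cmd: "pbcopy", args: [] }; // macOS
  if (platform === "win32") return { cmd: "clip", args: [] };    // Windows
  return { cmd: "xclip", args: ["-selection", "clipboard"] };    // Linux, assumes xclip is installed
}

function copyToClipboard(text: string): Promise<void> {
  const { cmd, args } = clipboardCommand(process.platform);
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args, { stdio: ["pipe", "ignore", "ignore"] });
    child.on("error", reject); // e.g. the utility is not installed
    child.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`${cmd} exited with code ${code}`))
    );
    child.stdin.end(text); // write the card, close stdin to flush
  });
}
```

Because the card is plain text, this is the entire "share infrastructure" — no image renderer, no upload endpoint.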
Card Format
```text
+-------------------------------------+
| Make Me Unicorn                     |
| SaaS Launch Readiness Report        |
|                                     |
| Score: 412/445 (93%)                |
| 93%                                 |
|                                     |
| Security    42/45                   |
| Legal       28/30                   |
| Billing     44/48                   |
| SEO         18/22                   |
| Monitoring  35/38                   |
| ... (15 categories)                 |
|                                     |
| Badges: Ship-Ready, Secure,         |
| Compliant                           |
|                                     |
| Try it: npx make-me-unicorn         |
+-------------------------------------+
```
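A fixed-width card like this can be produced with a small renderer. The sketch below is illustrative — the field names, WIDTH constant, and bar glyphs are my assumptions, not MMU's source:

```typescript
// Sketch of rendering a fixed-width plain-text card from score data.
const WIDTH = 37; // inner width between the + corners

function row(text: string): string {
  return `| ${text.padEnd(WIDTH - 2)} |`; // pad so every line is the same width
}

function bar(percent: number, slots = 20): string {
  const filled = Math.round((percent / 100) * slots);
  return "#".repeat(filled) + "-".repeat(slots - filled) + ` ${percent}%`;
}

interface Category { name: string; score: number; max: number; }

function renderCard(categories: Category[], badges: string[]): string {
  const score = categories.reduce((s, c) => s + c.score, 0);
  const max = categories.reduce((s, c) => s + c.max, 0);
  const pct = Math.round((score / max) * 100);
  const border = "+" + "-".repeat(WIDTH) + "+";
  return [
    border,
    row("Make Me Unicorn"),
    row("SaaS Launch Readiness Report"),
    row(""),
    row(`Score: ${score}/${max} (${pct}%)`),
    row(bar(pct)),
    row(""),
    ...categories.map((c) => row(`${c.name.padEnd(12)} ${c.score}/${c.max}`)),
    row(""),
    row(`Badges: ${badges.join(", ")}`),
    row(""),
    row("Try it: npx make-me-unicorn"),
    border,
  ].join("\n");
}
```

Everything is string padding and joining — no rendering dependency, which is exactly what the "no server, no infrastructure" constraint buys.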
What the Card Includes and Why
| Element | Reason for Inclusion |
|---|---|
| Overall score + progress bar | Instant status comprehension |
| Per-category scores | Visualize strengths and weaknesses |
| Badges | Sense of achievement, sharing motivation |
| npx make-me-unicorn | CTA — viewers can run it immediately |
The CTA (Call to Action) must be inside the card. A viewer who wonders "What is this?" sees npx make-me-unicorn at the bottom and can run it immediately. No searching, no installation — a single npx command is all it takes.
Gated Progression: If the Numbers Don’t Work, Don’t Move Forward
What Is a Gate?
A gate is a checkpoint: this number must be achieved before proceeding to the next feature or investment. It prevents wishful thinking (“if we build it, they will come”).
```mermaid
graph LR
G1["Gate 1\n50 weekly runs"] -->|"Pass"| G2["Gate 2\n10 share cards/week"]
G2 -->|"Pass"| G3["Gate 3\n3 share-to-install/week"]
G3 -->|"Pass"| G4["Playbook Pack\nLaunch"]
G1 -->|"Miss"| R1["Improve CLI\nor pivot"]
G2 -->|"Miss"| R2["Strengthen share\nmotivation or change format"]
G3 -->|"Miss"| R3["Improve CTA\nor change channels"]
style G4 fill:#e8f5e9,stroke:#4caf50
style R1 fill:#fff3e0,stroke:#ff9800
style R2 fill:#fff3e0,stroke:#ff9800
style R3 fill:#fff3e0,stroke:#ff9800
```
MMU’s Gates
| Gate | Metric | Threshold | If Passed | If Missed |
|---|---|---|---|---|
| Gate 1 | Weekly CLI runs | 50/week | Proceed to Gate 2 | Improve CLI UX |
| Gate 2 | Weekly share card generation | 10/week | Proceed to Gate 3 | Improve sharing motivation/format |
| Gate 3 | Share-to-install conversion | 3/week | Launch Playbook Pack | Improve CTA/channels |
| Gate 4 | Playbook Pack revenue | $500/month | Develop AI Coach | Improve pack content/pricing |
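A gate table like this is naturally data plus one check, rather than branching code. A sketch using the thresholds from the table (the metric key names are my assumptions):

```typescript
// The gate table as data: first unmet threshold decides the next action.
interface Gate {
  name: string;
  metric: string;     // hypothetical metric key
  threshold: number;
  ifPassed: string;
  ifMissed: string;
}

const gates: Gate[] = [
  { name: "Gate 1", metric: "weeklyRuns", threshold: 50, ifPassed: "Proceed to Gate 2", ifMissed: "Improve CLI UX" },
  { name: "Gate 2", metric: "weeklyShares", threshold: 10, ifPassed: "Proceed to Gate 3", ifMissed: "Improve sharing motivation/format" },
  { name: "Gate 3", metric: "weeklyInstallsFromShares", threshold: 3, ifPassed: "Launch Playbook Pack", ifMissed: "Improve CTA/channels" },
  { name: "Gate 4", metric: "monthlyRevenue", threshold: 500, ifPassed: "Develop AI Coach", ifMissed: "Improve pack content/pricing" },
];

// Walk the gates in order; stop at the first miss. A missing metric counts as 0.
function nextAction(metrics: Record<string, number>): string {
  for (const g of gates) {
    if ((metrics[g.metric] ?? 0) < g.threshold) return g.ifMissed;
  }
  return gates[gates.length - 1].ifPassed;
}
```

Encoding the gates as data keeps the "don't move forward" rule honest: there is exactly one place where a threshold can be lowered, and it's visible in a diff.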
Why These Specific Numbers
- 50/week: “50 people using it at least once a week” or “one person using it repeatedly across multiple projects.” Either way, it confirms a minimum usage base
- 10/week: 20% of runs result in a share. If this ratio isn’t met, sharing motivation is insufficient
- 3/week: 30% of shares convert to new installs. This represents the K-factor (viral coefficient)
- $500/month: One-tenth of the minimum threshold for a solo builder to go full-time. Confirms the “10x to $5K/month” possibility
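Gate 3's ratio is where the standard viral coefficient shows up. A worked sketch using the gate thresholds (the formula is the textbook K-factor definition, not anything MMU-specific):

```typescript
// Standard viral coefficient: K = invites per user * conversion per invite.
// Here "invites" are share cards and conversions are installs from shares.
function kFactor(sharesPerWeek: number, activeUsers: number, installConversion: number): number {
  const sharesPerUser = sharesPerWeek / activeUsers;
  return sharesPerUser * installConversion;
}

// At the gate thresholds: 50 weekly users, 10 shares, 3 installs (30% conversion).
const k = kFactor(10, 50, 3 / 10);
console.log(k);
```

At these thresholds K is about 0.06, well below the self-sustaining K = 1 — so the loop amplifies other acquisition channels rather than replacing them, which is consistent with treating the gates as minimum-viability checks rather than exponential-growth targets.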
More important than the specific numbers is the “data-driven decision-making” principle. Not “if we build it they’ll use it,” but “confirm they’re using it, then build the next thing.”
Badge System: Designing Sharing Motivation
Why Badges Are Needed
Checklist scores alone provide weak sharing motivation. A number like “82%” is meaningless without context. Badges visualize achievement and create a reason to share.
Badge Criteria
| Badge | Condition | Meaning |
|---|---|---|
| Secure | Security category 90%+ | Security verification complete |
| Compliant | Legal category 90%+ | Legal compliance verified |
| Ship-Ready | Overall 85%+ | Launch-ready |
| Perfect | Overall 100% | Flawless |
| CI-Ready | CI/CD category 90%+ | Automation pipeline complete |
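The criteria table maps directly onto a rule list plus one evaluation pass. A sketch, assuming scores are tracked as percentages and the category keys match the table names:

```typescript
// Badge criteria as data: each badge is a name plus a predicate over scores.
interface Scores {
  overall: number;                      // percent
  categories: Record<string, number>;   // percent per category (assumed keys)
}

const badgeRules: { badge: string; earned: (s: Scores) => boolean }[] = [
  { badge: "Secure", earned: (s) => (s.categories["Security"] ?? 0) >= 90 },
  { badge: "Compliant", earned: (s) => (s.categories["Legal"] ?? 0) >= 90 },
  { badge: "Ship-Ready", earned: (s) => s.overall >= 85 },
  { badge: "Perfect", earned: (s) => s.overall === 100 },
  { badge: "CI-Ready", earned: (s) => (s.categories["CI/CD"] ?? 0) >= 90 },
];

function awardBadges(s: Scores): string[] {
  return badgeRules.filter((r) => r.earned(s)).map((r) => r.badge);
}
```

Keeping the rules in one list also makes it cheap to add a badge later without touching the scoring or the card renderer.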
Badge names are states, not actions. Not “completed security verification” but “Secure” (a state of being). State descriptions feel more natural when shared. “I’m Ship-Ready” vs “I completed security verification” — the former is the one you actually want to share.
Measurement Approach
How do you measure usage for a CLI tool? There’s no server, so traditional analytics tools (GA4, Mixpanel) aren’t an option.
Options Compared
| Method | Privacy | Accuracy | Implementation |
|---|---|---|---|
| npm download count | High | Low (install does not equal run) | Zero |
| Anonymous telemetry (opt-in) | Medium | High | Medium |
| GitHub Issues/Stars | High | Low (engagement does not equal usage) | Zero |
| mmu share generation count | High | Medium | Low |
Current approach: npm downloads + GitHub activity + share generation count
Only methods that don't collect personal data are used. If telemetry is added later, it will be opt-in as a matter of principle, but it isn't a priority at this early stage.
Summary
| Key Point | Details |
|---|---|
| Why share comes first | The feature that grows user count must come first |
| Plain text cards | No server needed, universal, natural in CLI context |
| Gate thresholds | 50 weekly runs, 10 shares, 3 installs, $500 revenue |
| Badges | Achievement visualization drives sharing motivation |
| Core principle | If the numbers don’t work, don’t move to the next stage |