A guide to zero-cost minimum monitoring using Sentry and Betterstack, showing how an 80-minute pre-launch investment reduced error response time from 13 hours to 1 hour.
Learning About Errors From Users
WICHI launched, and for the first week there was no error monitoring. Features seemed to work and local tests passed, so the only way we learned about problems was user feedback.
| # | User Feedback | Actual Cause | Time to Discovery |
|---|---|---|---|
| 1 | “The results page won’t load” | API timeout on a specific query | ~18 hours |
| 2 | “Analysis stops midway” | LLM response parsing failure | ~12 hours |
| 3 | “I paid but credits weren’t added” | Exception during webhook processing | ~6 hours |
| 4 | “It worked yesterday, broken today” | Dependency API change | ~24 hours |
```mermaid
graph TD
A["Error occurs"] --> B{"Monitoring\nin place?"}
B -->|"Yes"| C["Immediate alert\n(within minutes)"]
B -->|"No"| D["User discovers it"]
D --> E["User contacts you"]
E --> F["Developer checks"]
F --> G["Root cause analysis"]
C --> G
H["Without monitoring:\naverage 15 hours"] --> I["With monitoring:\naverage 5 minutes"]
style D fill:#ffebee,stroke:#f44336
style C fill:#e8f5e9,stroke:#4caf50
style H fill:#ffebee,stroke:#f44336
style I fill:#e8f5e9,stroke:#4caf50
```
By the time a user reports an error, three things have already gone wrong: (1) the error occurred, (2) monitoring didn’t catch it, and (3) the user experience suffered.
The Three Layers of Monitoring
Monitoring isn’t one thing. You need at least three layers.
```mermaid
graph TD
subgraph L1["Layer 1: Uptime Monitoring"]
U1["Is the service alive?"]
U2["Is response time normal?"]
U3["Is the SSL certificate valid?"]
end
subgraph L2["Layer 2: Error Tracking"]
E1["Runtime error capture"]
E2["Error frequency and impact scope"]
E3["Stack traces"]
end
subgraph L3["Layer 3: Log Management"]
L3a["Structured logs"]
L3b["Log search/filter"]
L3c["Log retention period"]
end
L1 --> L2 --> L3
style L1 fill:#e8f5e9,stroke:#4caf50
style L2 fill:#e3f2fd,stroke:#2196f3
style L3 fill:#fff3e0,stroke:#ff9800
```
| Layer | Question It Answers | Example Tools | Priority |
|---|---|---|---|
| Uptime | “Is the service down?” | Betterstack, UptimeRobot | 1st |
| Error Tracking | “What error occurred?” | Sentry, Bugsnag | 1st |
| Log Management | “Why did it occur?” | Betterstack Logs, Datadog | 2nd |
| APM | “What’s slow?” | Sentry Performance, New Relic | 3rd |
| User Analytics | “What are they doing?” | PostHog, Mixpanel | 3rd |
For solo SaaS, the first priority is Uptime + Error Tracking. If you can answer “Is the service down?” and “Did an error occur?”, you’ve covered 80%.
Sentry: Error Tracking
Why Sentry
| Tool | Free Tier | Error Tracking | Source Maps | Performance | Alerts |
|---|---|---|---|---|---|
| Sentry | 5K events/mo | Yes | Yes | Yes | Yes |
| Bugsnag | 7.5K events/mo | Yes | Yes | No | Yes |
| Rollbar | 5K events/mo | Yes | Yes | No | Yes |
| LogRocket | 1K sessions/mo | Yes | Yes | Yes | Yes |
| Self-built | Free | Manual | Manual | Manual | Manual |
Why Sentry won:
- Sufficient free tier: 5,000 events/month — plenty for early-stage SaaS
- SDK for every framework: Next.js, Express, Python, React, and more
- Automatic source map upload: See original locations even in minified code
- Release tracking: Identify which deployment introduced an error
Minimum Setup
```javascript
// sentry.config.js (Next.js example)
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  // Only active in production
  enabled: process.env.NODE_ENV === 'production',
  // 100% error sampling (capture every error early on)
  sampleRate: 1.0,
  // 10% performance sampling (conserve free tier)
  tracesSampleRate: 0.1,
});
```
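Release tracking, mentioned above, needs one more init option. A sketch, assuming the release id is exposed to the app as an env var (`SENTRY_RELEASE` here is an assumed name; set it in your build pipeline):

```javascript
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  // Tie every event to a deploy; set SENTRY_RELEASE to the git
  // commit SHA (or a version tag) at build time.
  release: process.env.SENTRY_RELEASE,
  // ...plus the options from the minimum setup above
});
```

With a release attached, Sentry can mark an error as "first seen in release X", which is usually enough to point at the deploy that introduced it.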
Adding Context to Errors
```javascript
// User context
Sentry.setUser({
  id: user.id,
  email: user.email, // Caution: consider GDPR
});

// Additional tags
Sentry.setTag('plan', user.plan); // free, pro, enterprise
Sentry.setTag('feature', 'analysis'); // feature categorization

// Manual error capture
try {
  await processAnalysis(query);
} catch (error) {
  Sentry.captureException(error, {
    extra: {
      queryId: query.id,
      engineCount: query.engines.length,
    },
  });
  throw error;
}
```
Optimizing the Sentry Free Tier
To use 5,000 events/month efficiently:
| Strategy | Method | Effect |
|---|---|---|
| Deduplication | Sentry auto-groups identical errors | Saves event count |
| Error filtering | Ignore 404s, bot traffic | Removes noise |
| Sampling | tracesSampleRate: 0.1 | Saves 90% of performance events |
| Environment separation | Send production only | Excludes dev events |
```javascript
// Filtering out unnecessary errors
Sentry.init({
  beforeSend(event) {
    // Ignore 404s
    if (event.exception?.values?.[0]?.type === 'NotFoundError') {
      return null;
    }
    // Ignore bot traffic
    if (event.request?.headers?.['user-agent']?.includes('bot')) {
      return null;
    }
    return event;
  },
});
```
Betterstack: Uptime + Logs
Why Betterstack
| Tool | Free Tier | Uptime | Logs | Status Page | Alert Channels |
|---|---|---|---|---|---|
| Betterstack | 5 monitors, 1GB logs | Yes | Yes | Yes | Slack, Email, SMS |
| UptimeRobot | 50 monitors | Yes | No | Yes | |
| Pingdom | Paid | Yes | No | Yes | Various |
| Datadog | Paid | Yes | Yes | No | Various |
Why Betterstack won:
- Uptime + logs in one place: One tool instead of two
- Free status page: Show users the service status
- Clean UI: Fast comprehension for solo developers
- Heartbeat monitors: Watch cron jobs and background tasks
Uptime Monitor Configuration
Monitors to set up in Betterstack:
| # | Target | Check Method | Interval | Alert |
|---|---|---|---|---|
| 1 | Main site | HTTP 200 check | 3 min | Slack + Email |
| 2 | API health check | /api/health endpoint | 1 min | Slack + Email |
| 3 | Payment webhook | Heartbeat (periodic ping) | 5 min | Email + SMS |
| 4 | SSL certificate | Expiry date check | 1 day | Email (30 days before) |
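Monitor 3 inverts the usual direction: instead of Betterstack probing your endpoint, your job pings Betterstack, and a missed ping raises the alert. A minimal sketch of wiring that into a cron job; `HEARTBEAT_URL` is a placeholder for the URL Betterstack issues when you create the heartbeat monitor:

```javascript
// Sketch: report cron-job success to a heartbeat monitor.
// HEARTBEAT_URL is a placeholder; copy the real URL from the
// Betterstack dashboard when you create the heartbeat.
const HEARTBEAT_URL = process.env.HEARTBEAT_URL;

async function runNightlyJob(job) {
  try {
    await job();
    // Ping only on success; the missed ping is what triggers the alert.
    await fetch(HEARTBEAT_URL);
  } catch (error) {
    // No ping on failure: log locally and let the missed
    // heartbeat raise the alarm.
    console.error('nightly job failed:', error.message);
    throw error;
  }
}
```

The design choice is deliberate: pinging only on success means a crashed job, a stuck job, and a dead scheduler all surface the same way, as a missing heartbeat.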
Health Check Endpoint
```javascript
// /api/health -- service status verification
app.get('/api/health', async (req, res) => {
  const checks = {
    server: 'ok',
    database: 'unknown',
    cache: 'unknown',
  };

  try {
    await db.query('SELECT 1');
    checks.database = 'ok';
  } catch {
    checks.database = 'error';
  }

  try {
    await redis.ping();
    checks.cache = 'ok';
  } catch {
    checks.cache = 'error';
  }

  const healthy = Object.values(checks).every(v => v === 'ok');
  res.status(healthy ? 200 : 503).json({
    status: healthy ? 'healthy' : 'degraded',
    checks,
    timestamp: new Date().toISOString(),
  });
});
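One caveat: if a dependency hangs rather than fails, `await db.query(...)` can stall the whole endpoint, and the uptime monitor sees a timeout instead of a clean 503. A minimal sketch of racing each probe against a timeout; `withTimeout` is a hypothetical helper, not part of any library:

```javascript
// Sketch: race a dependency probe against a timeout so a hung
// connection cannot stall the whole health check.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('health check timed out')), ms);
  });
  // Clear the timer either way so nothing lingers after the race.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage inside the endpoint, same shape as the try/catch above:
//   await withTimeout(db.query('SELECT 1'), 2000);
```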
A health check endpoint should not simply return 200. It should also verify database and key dependency status. Without this distinction, you’ll miss the case where the server is alive but the database connection is severed.
Status Page
Betterstack’s free tier includes a status page.
```
status.myapp.com
|-- API         Operational
|-- Website     Operational
|-- Database    Operational
|-- Payments    Operational

Uptime: 99.95% (last 90 days)
```
Why a status page matters:
| Without Status Page | With Status Page |
|---|---|
| User: “Why isn’t it working?” sends emails/DMs | User: checks status page, “ah, maintenance” |
| Developer: time spent answering tickets | Developer: focused on fixing |
| Trust drops: “This service seems unreliable” | Trust maintained: “They operate transparently” |
Sentry + Betterstack Combined: WICHI Case Study
Monitoring Architecture
```mermaid
graph TD
subgraph APP["WICHI Application"]
FE["Frontend\n(Next.js)"]
BE["Backend\n(Express)"]
CRON["Cron Jobs"]
end
subgraph SENTRY["Sentry (Error Tracking)"]
SE["Error capture"]
SP["Performance monitoring"]
SR["Release tracking"]
end
subgraph BETTER["Betterstack (Uptime + Logs)"]
BU["Uptime monitor"]
BL["Log collection"]
BS["Status page"]
end
subgraph ALERT["Alerts"]
SLACK["Slack"]
EMAIL["Email"]
end
FE --> SE
BE --> SE
BE --> BL
CRON --> BL
BU -->|"Checks every 3 min"| BE
SE --> SLACK
BU --> SLACK
BU --> EMAIL
style SENTRY fill:#e3f2fd,stroke:#2196f3
style BETTER fill:#e8f5e9,stroke:#4caf50
```
Role Division
| Situation | Detection | Alert Content |
|---|---|---|
| Runtime error (500) | Sentry | Error message + stack trace + user context |
| Service down | Betterstack Uptime | “API not responding” + downtime log |
| Background job failure | Betterstack Heartbeat | “Last heartbeat was 5 minutes ago” |
| Response time degradation | Sentry Performance | “p95 response time exceeds 3 seconds” |
| SSL expiring soon | Betterstack | “SSL certificate expires in 30 days” |
Real Incident Response Comparison
After implementing monitoring, the same type of error occurred. Here’s the difference:
```mermaid
graph LR
subgraph BEFORE["Before: No Monitoring"]
B1["Error occurs\n(2:00 AM)"] --> B2["User discovers\n(10:00 AM)"]
B2 --> B3["Feedback sent\n(11:00 AM)"]
B3 --> B4["Root cause found\n(1:00 PM)"]
B4 --> B5["Fix deployed\n(3:00 PM)"]
end
subgraph AFTER["After: Sentry + Betterstack"]
A1["Error occurs\n(2:00 AM)"] --> A2["Slack alert\n(2:01 AM)"]
A2 --> A3["Checked in morning\n(9:00 AM)"]
A3 --> A4["Root cause found\n(9:15 AM)\nStack trace available"]
A4 --> A5["Fix deployed\n(10:00 AM)"]
end
style BEFORE fill:#ffebee,stroke:#f44336
style AFTER fill:#e8f5e9,stroke:#4caf50
```
| Metric | Before | After |
|---|---|---|
| Error awareness time | ~8 hours | ~1 minute (alert) |
| Root cause analysis | ~2 hours | ~15 minutes (stack trace) |
| Total response time | ~13 hours | ~1 hour |
| User impact duration | ~13 hours | ~8 hours (overnight occurrence) |
Free Tier Cost Breakdown
In the early stages of a solo SaaS, monitoring doesn’t have to cost anything.
| Tool | Free Tier | Cost If Exceeded | Enough for Early SaaS? |
|---|---|---|---|
| Sentry | 5,000 events/mo | $26/mo (50K) | Yes |
| Betterstack Uptime | 5 monitors | $24/mo (20) | Yes |
| Betterstack Logs | 1GB/mo | $24/mo (5GB) | Yes |
| Total | $0/mo | — | — |
Tips for Extending Free Tier Life
| Strategy | Sentry | Betterstack |
|---|---|---|
| Environment separation | Production only | Production endpoints only |
| Filtering | Exclude 404s, bot errors | Exclude unnecessary log levels |
| Sampling | 10% for performance events | — |
| Retention | Default 90 days | Default 3 days (free); keep essential logs only |
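The "exclude unnecessary log levels" tip can be enforced in the application itself: log everything locally, but only ship warn and above. A minimal sketch, where `ship` is a hypothetical stand-in for the actual Betterstack log transport:

```javascript
// Sketch: ship only logs at or above a minimum level, to stretch
// the 1GB/month free tier. `ship` is a placeholder for the real
// Betterstack transport.
const LEVELS = { debug: 0, info: 1, warn: 2, error: 3 };
const MIN_SHIP_LEVEL = 'warn';

function makeLogger(ship) {
  return function log(level, message, meta = {}) {
    const entry = { level, message, ...meta, ts: new Date().toISOString() };
    console.log(JSON.stringify(entry)); // always keep a local copy
    if (LEVELS[level] >= LEVELS[MIN_SHIP_LEVEL]) {
      ship(entry); // only warn/error leave the box
    }
  };
}
```

Debug and info stay in local output where they cost nothing; the shipped stream stays small and searchable.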
Pre-Launch Monitoring Checklist
| # | Item | Tool | Time Required |
|---|---|---|---|
| 1 | Create Sentry project + install SDK | Sentry | 15 min |
| 2 | Error alerts to Slack | Sentry | 5 min |
| 3 | Error filtering setup (exclude 404s, bots) | Sentry | 10 min |
| 4 | Uptime monitors (site + API) | Betterstack | 10 min |
| 5 | Health check endpoint implementation | Custom | 20 min |
| 6 | Status page setup | Betterstack | 10 min |
| 7 | Heartbeat monitor (for cron jobs) | Betterstack | 10 min |
| — | Total | — | ~80 min |
```mermaid
graph LR
A["80-minute investment"] --> B["Error awareness\n8 hours -> 1 minute"]
A --> C["Root cause analysis\n2 hours -> 15 minutes"]
A --> D["User complaints\nreduced"]
A --> E["Cost\n$0/month"]
style A fill:#e3f2fd,stroke:#2196f3
style E fill:#e8f5e9,stroke:#4caf50
```
80 minutes gets you set up. Launch without monitoring, and the first error will cost you 100 times those 80 minutes.
Monitoring as Checklist Items
Of MMU’s 534 checklist items, 38 are monitoring-related. The 10 essential pre-launch items:
| # | Item | MMU Category |
|---|---|---|
| 1 | Is an error tracking tool installed in production? | Monitoring |
| 2 | Is an uptime monitor configured? | Monitoring |
| 3 | Are error alerts being sent to Slack/Email? | Monitoring |
| 4 | Is there a health check endpoint? | Monitoring |
| 5 | Is there a status page? | Monitoring |
| 6 | Are source maps uploaded to the error tracking tool? | Monitoring |
| 7 | Do errors include user context? | Monitoring |
| 8 | Do background jobs have heartbeat monitors? | Monitoring |
| 9 | Is there an SSL certificate expiry alert? | Security |
| 10 | Do logs exclude sensitive information (passwords, tokens)? | Security |
All 10 can be set up with free tools, in under 80 minutes total. “I’ll do monitoring after launch” is the same as “I’ll investigate errors after users complain.”
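Item 10 can be enforced mechanically rather than by code review. A minimal sketch; the key list is illustrative, so extend it for your own payloads:

```javascript
// Sketch: strip sensitive fields before anything reaches the logs.
// The key list is illustrative; extend it for your own payloads.
const SENSITIVE_KEYS = ['password', 'token', 'secret', 'authorization', 'apikey'];

function redact(value) {
  if (Array.isArray(value)) return value.map(redact);
  if (value === null || typeof value !== 'object') return value;
  const out = {};
  for (const [key, v] of Object.entries(value)) {
    out[key] = SENSITIVE_KEYS.includes(key.toLowerCase())
      ? '[REDACTED]'
      : redact(v); // recurse into nested objects and arrays
  }
  return out;
}
```

Run every object through `redact` at the logging boundary and sensitive values never leave the process, regardless of which call site logged them.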
Summary
| Key Point | Details |
|---|---|
| Launching without monitoring | Errors discovered via user reports (average 15 hours later) |
| Minimum stack | Sentry (error tracking) + Betterstack (uptime + logs) |
| Cost | $0/month (free tiers are sufficient) |
| Setup time | ~80 minutes |
| Impact | Error awareness 8 hours to 1 minute; root cause analysis 2 hours to 15 minutes |
| Principle | Monitoring comes before launch, even before features |
Related Posts

Solo SaaS Security — The Minimum You Must Do
Essential security checklist for solo SaaS builders. Defense against five critical OWASP Top 10 vulnerabilities and real-world security configuration cases improved for WICHI.

Build, Document, Share
Personal execution notes from a non-tech builder who started with AI FOMO and is now navigating the messy reality of production beyond the initial 'one-click' hype.

Multi-Engine Architecture — Parallel Collection from 3 AI Search Engines
Analysis of multi-engine architecture design principles that leverage response variance as signals, featuring parallel collection structures and scalability via the adapter pattern.