What Happens When You Launch Without Monitoring

MJ · 4 min read

A guide to zero-cost minimum monitoring using Sentry and Betterstack, showing how an 80-minute pre-launch investment reduced error response time from 13 hours to 1 hour.

Learning About Errors From Users

WICHI launched, and for the first week, there was no error monitoring. Features seemed to work. Local tests passed.

The way we learned about problems was user feedback.

| # | User Feedback | Actual Cause | Time to Discovery |
|---|---|---|---|
| 1 | “The results page won’t load” | API timeout on a specific query | ~18 hours |
| 2 | “Analysis stops midway” | LLM response parsing failure | ~12 hours |
| 3 | “I paid but credits weren’t added” | Exception during webhook processing | ~6 hours |
| 4 | “It worked yesterday, broken today” | Dependency API change | ~24 hours |

graph TD
    A["Error occurs"] --> B{"Monitoring\nin place?"}
    B -->|"Yes"| C["Immediate alert\n(within minutes)"]
    B -->|"No"| D["User discovers it"]
    D --> E["User contacts you"]
    E --> F["Developer checks"]
    F --> G["Root cause analysis"]

    C --> G

    H["Without monitoring:\naverage 15 hours"] --> I["With monitoring:\naverage 5 minutes"]

    style D fill:#ffebee,stroke:#f44336
    style C fill:#e8f5e9,stroke:#4caf50
    style H fill:#ffebee,stroke:#f44336
    style I fill:#e8f5e9,stroke:#4caf50

By the time a user reports an error, three things have already gone wrong: (1) the error occurred, (2) monitoring didn’t catch it, and (3) the user experience suffered.


The Three Layers of Monitoring

Monitoring isn’t one thing. You need at least three layers.

graph TD
    subgraph L1["Layer 1: Uptime Monitoring"]
        U1["Is the service alive?"]
        U2["Is response time normal?"]
        U3["Is the SSL certificate valid?"]
    end

    subgraph L2["Layer 2: Error Tracking"]
        E1["Runtime error capture"]
        E2["Error frequency and impact scope"]
        E3["Stack traces"]
    end

    subgraph L3["Layer 3: Log Management"]
        L3a["Structured logs"]
        L3b["Log search/filter"]
        L3c["Log retention period"]
    end

    L1 --> L2 --> L3

    style L1 fill:#e8f5e9,stroke:#4caf50
    style L2 fill:#e3f2fd,stroke:#2196f3
    style L3 fill:#fff3e0,stroke:#ff9800

| Layer | Question It Answers | Example Tools | Priority |
|---|---|---|---|
| Uptime | “Is the service down?” | Betterstack, UptimeRobot | 1st |
| Error Tracking | “What error occurred?” | Sentry, Bugsnag | 1st |
| Log Management | “Why did it occur?” | Betterstack Logs, Datadog | 2nd |
| APM | “What’s slow?” | Sentry Performance, New Relic | 3rd |
| User Analytics | “What are they doing?” | PostHog, Mixpanel | 3rd |

For solo SaaS, the first priority is Uptime + Error Tracking. If you can answer “Is the service down?” and “Did an error occur?”, you’ve covered 80%.


Sentry: Error Tracking

Why Sentry

| Tool | Free Tier | Error Tracking | Source Maps | Performance | Alerts |
|---|---|---|---|---|---|
| Sentry | 5K events/mo | Yes | Yes | Yes | Yes |
| Bugsnag | 7.5K events/mo | Yes | Yes | No | Yes |
| Rollbar | 5K events/mo | Yes | Yes | No | Yes |
| LogRocket | 1K sessions/mo | Yes | Yes | Yes | Yes |
| Self-built | Free | Manual | Manual | Manual | Manual |

Why Sentry won:

  1. Sufficient free tier: 5,000 events/month — plenty for early-stage SaaS
  2. SDK for every framework: Next.js, Express, Python, React, and more
  3. Automatic source map upload: See original locations even in minified code
  4. Release tracking: Identify which deployment introduced an error

Minimum Setup

// sentry.config.js (Next.js example)
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,

  // Only active in production
  enabled: process.env.NODE_ENV === 'production',

  // 100% error sampling (capture every error early on)
  sampleRate: 1.0,

  // 10% performance sampling (conserve free tier)
  tracesSampleRate: 0.1,
});
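
Release tracking (reason 4 above) takes one more field in the init. A hedged sketch; the environment variable name here is hypothetical, and in practice it would be whatever commit identifier your CI or host injects at build time:

```javascript
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  // Hypothetical variable: set it from your CI's commit SHA at build time.
  // Sentry then tags every event with the deploy that shipped it, so an
  // issue page can say "first seen in release abc123".
  release: process.env.GIT_COMMIT_SHA,
  // ...other options as in the minimum setup above
});
```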

Adding Context to Errors

// User context
Sentry.setUser({
  id: user.id,
  email: user.email,  // Caution: consider GDPR
});

// Additional tags
Sentry.setTag('plan', user.plan);     // free, pro, enterprise
Sentry.setTag('feature', 'analysis'); // feature categorization

// Manual error capture
try {
  await processAnalysis(query);
} catch (error) {
  Sentry.captureException(error, {
    extra: {
      queryId: query.id,
      engineCount: query.engines.length,
    },
  });
  throw error;
}

Optimizing the Sentry Free Tier

To use 5,000 events/month efficiently:

| Strategy | Method | Effect |
|---|---|---|
| Deduplication | Sentry auto-groups identical errors | Saves event count |
| Error filtering | Ignore 404s, bot traffic | Removes noise |
| Sampling | tracesSampleRate: 0.1 | Saves 90% of performance events |
| Environment separation | Send production only | Excludes dev events |

// Filtering out unnecessary errors
Sentry.init({
  beforeSend(event) {
    // Ignore 404s
    if (event.exception?.values?.[0]?.type === 'NotFoundError') {
      return null;
    }
    // Ignore bot traffic
    if (event.request?.headers?.['user-agent']?.includes('bot')) {
      return null;
    }
    return event;
  },
});
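
The deduplication row relies on Sentry's automatic grouping, but you can also collapse noisy variants of the same failure yourself by setting the event's fingerprint inside beforeSend. A minimal sketch; the ETIMEDOUT match is illustrative, not a rule from this project:

```javascript
// Sketch: force every upstream timeout into one Sentry issue, so retries
// with slightly different messages don't each burn an event group.
function fingerprintTimeouts(event) {
  const message = event.exception?.values?.[0]?.value ?? '';
  if (message.includes('ETIMEDOUT')) {
    event.fingerprint = ['upstream-timeout']; // one group for all timeouts
  }
  return event;
}
```

Call it from beforeSend alongside the filters above, before returning the event.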

Betterstack: Uptime + Logs

Why Betterstack

| Tool | Free Tier | Uptime | Logs | Status Page | Alert Channels |
|---|---|---|---|---|---|
| Betterstack | 5 monitors, 1GB logs | Yes | Yes | Yes | Slack, Email, SMS |
| UptimeRobot | 50 monitors | Yes | No | Yes | Email |
| Pingdom | Paid | Yes | No | Yes | Various |
| Datadog | Paid | Yes | Yes | No | Various |

Why Betterstack won:

  1. Uptime + logs in one place: One tool instead of two
  2. Free status page: Show users the service status
  3. Clean UI: Fast comprehension for solo developers
  4. Heartbeat monitors: Watch cron jobs and background tasks
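
A heartbeat monitor (point 4) only alerts when the ping stops arriving, so the job must ping only on success. A minimal sketch; the heartbeat URL is the per-monitor URL Betterstack generates, and the wrapped job is hypothetical:

```javascript
// Sketch: ping the heartbeat only after the job finishes successfully.
// If the job throws, no ping is sent, and Betterstack alerts once the
// configured grace period passes without a ping.
async function withHeartbeat(job, heartbeatUrl, ping = fetch) {
  await job();              // any throw here skips the ping below
  await ping(heartbeatUrl); // success resets the heartbeat timer
}

// Example wiring at the end of a nightly cron entry point:
// await withHeartbeat(cleanupExpiredSessions, process.env.HEARTBEAT_URL);
```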

Uptime Monitor Configuration

Monitors to set up in Betterstack:

| # | Target | Check Method | Interval | Alert |
|---|---|---|---|---|
| 1 | Main site | HTTP 200 check | 3 min | Slack + Email |
| 2 | API health check | /api/health endpoint | 1 min | Slack + Email |
| 3 | Payment webhook | Heartbeat (periodic ping) | 5 min | Email + SMS |
| 4 | SSL certificate | Expiry date check | 1 day | Email (30 days before) |

Health Check Endpoint

// /api/health -- service status verification
app.get('/api/health', async (req, res) => {
  const checks = {
    server: 'ok',
    database: 'unknown',
    cache: 'unknown',
  };

  try {
    await db.query('SELECT 1');
    checks.database = 'ok';
  } catch {
    checks.database = 'error';
  }

  try {
    await redis.ping();
    checks.cache = 'ok';
  } catch {
    checks.cache = 'error';
  }

  const healthy = Object.values(checks).every(v => v === 'ok');
  res.status(healthy ? 200 : 503).json({
    status: healthy ? 'healthy' : 'degraded',
    checks,
    timestamp: new Date().toISOString(),
  });
});

A health check endpoint should do more than return 200: it should also verify the database and other key dependencies. Without that distinction, you will miss the case where the server is alive but its database connection has dropped.

Status Page

Betterstack’s free tier includes a status page.

status.myapp.com
|-- API          ● Operational
|-- Website      ● Operational
|-- Database     ● Operational
|-- Payments     ● Operational

Uptime: 99.95% (last 90 days)

Why a status page matters:

| Without Status Page | With Status Page |
|---|---|
| User: “Why isn’t it working?”, sends emails/DMs | User: checks status page, “ah, maintenance” |
| Developer: time spent answering tickets | Developer: focused on fixing |
| Trust drops: “This service seems unreliable” | Trust maintained: “They operate transparently” |

Sentry + Betterstack Combined: WICHI Case Study

Monitoring Architecture

graph TD
    subgraph APP["WICHI Application"]
        FE["Frontend\n(Next.js)"]
        BE["Backend\n(Express)"]
        CRON["Cron Jobs"]
    end

    subgraph SENTRY["Sentry (Error Tracking)"]
        SE["Error capture"]
        SP["Performance monitoring"]
        SR["Release tracking"]
    end

    subgraph BETTER["Betterstack (Uptime + Logs)"]
        BU["Uptime monitor"]
        BL["Log collection"]
        BS["Status page"]
    end

    subgraph ALERT["Alerts"]
        SLACK["Slack"]
        EMAIL["Email"]
    end

    FE --> SE
    BE --> SE
    BE --> BL
    CRON --> BL

    BU -->|"Checks every 3 min"| BE
    SE --> SLACK
    BU --> SLACK
    BU --> EMAIL

    style SENTRY fill:#e3f2fd,stroke:#2196f3
    style BETTER fill:#e8f5e9,stroke:#4caf50

Role Division

| Situation | Detection | Alert Content |
|---|---|---|
| Runtime error (500) | Sentry | Error message + stack trace + user context |
| Service down | Betterstack Uptime | “API not responding” + downtime log |
| Background job failure | Betterstack Heartbeat | “Last heartbeat was 5 minutes ago” |
| Response time degradation | Sentry Performance | “p95 response time exceeds 3 seconds” |
| SSL expiring soon | Betterstack | “SSL certificate expires in 30 days” |

Real Incident Response Comparison

After implementing monitoring, the same type of error occurred. Here’s the difference:

graph LR
    subgraph BEFORE["Before: No Monitoring"]
        B1["Error occurs\n(2:00 AM)"] --> B2["User discovers\n(10:00 AM)"]
        B2 --> B3["Feedback sent\n(11:00 AM)"]
        B3 --> B4["Root cause found\n(1:00 PM)"]
        B4 --> B5["Fix deployed\n(3:00 PM)"]
    end

    subgraph AFTER["After: Sentry + Betterstack"]
        A1["Error occurs\n(2:00 AM)"] --> A2["Slack alert\n(2:01 AM)"]
        A2 --> A3["Checked in morning\n(9:00 AM)"]
        A3 --> A4["Root cause found\n(9:15 AM)\nStack trace available"]
        A4 --> A5["Fix deployed\n(10:00 AM)"]
    end

    style BEFORE fill:#ffebee,stroke:#f44336
    style AFTER fill:#e8f5e9,stroke:#4caf50

| Metric | Before | After |
|---|---|---|
| Error awareness time | ~8 hours | ~1 minute (alert) |
| Root cause analysis | ~2 hours | ~15 minutes (stack trace) |
| Total response time | ~13 hours | ~1 hour |
| User impact duration | ~13 hours | ~8 hours (overnight occurrence) |

Free Tier Cost Breakdown

In the early stages of a solo SaaS, monitoring doesn’t have to cost anything.

| Tool | Free Tier | Cost If Exceeded | Enough for Early SaaS? |
|---|---|---|---|
| Sentry | 5,000 events/mo | $26/mo (50K) | Yes |
| Betterstack Uptime | 5 monitors | $24/mo (20) | Yes |
| Betterstack Logs | 1GB/mo | $24/mo (5GB) | Yes |
| Total | $0/mo | | |

Tips for Extending Free Tier Life

| Strategy | Sentry | Betterstack |
|---|---|---|
| Environment separation | Production only | Production endpoints only |
| Filtering | Exclude 404s, bot errors | Exclude unnecessary log levels |
| Sampling | 10% for performance events | |
| Retention | Default 90 days | Default 3 days (free); keep essential logs only |
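
Level filtering can also be enforced at the shipper rather than in Betterstack's UI, so dropped lines never count against the quota. A minimal sketch, assuming a `ship` function (hypothetical) that forwards one line to your log drain:

```javascript
// Sketch: only ship logs at or above a minimum level, so debug noise
// never reaches the 1GB/month free tier. Level numbers are conventional.
const LEVELS = { debug: 10, info: 20, warn: 30, error: 40 };

function makeLogger(minLevel, ship) {
  return (level, message, fields = {}) => {
    if (LEVELS[level] < LEVELS[minLevel]) return; // dropped locally, costs nothing
    ship(JSON.stringify({ level, message, ...fields, ts: new Date().toISOString() }));
  };
}

// Example: debug locally, warn-and-above in production.
// const log = makeLogger(process.env.NODE_ENV === 'production' ? 'warn' : 'debug', shipToBetterstack);
```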

Pre-Launch Monitoring Checklist

| # | Item | Tool | Time Required |
|---|---|---|---|
| 1 | Create Sentry project + install SDK | Sentry | 15 min |
| 2 | Error alerts to Slack | Sentry | 5 min |
| 3 | Error filtering setup (exclude 404s, bots) | Sentry | 10 min |
| 4 | Uptime monitors (site + API) | Betterstack | 10 min |
| 5 | Health check endpoint implementation | Custom | 20 min |
| 6 | Status page setup | Betterstack | 10 min |
| 7 | Heartbeat monitor (for cron jobs) | Betterstack | 10 min |
| | Total | | ~80 min |

graph LR
    A["80-minute investment"] --> B["Error awareness\n8 hours -> 1 minute"]
    A --> C["Root cause analysis\n2 hours -> 15 minutes"]
    A --> D["User complaints\nreduced"]
    A --> E["Cost\n$0/month"]

    style A fill:#e3f2fd,stroke:#2196f3
    style E fill:#e8f5e9,stroke:#4caf50

80 minutes gets you set up. Launch without monitoring, and the first error will cost you 100 times those 80 minutes.


Monitoring as Checklist Items

Of MMU’s 534 checklist items, 38 are monitoring-related. The 10 essential pre-launch items:

| # | Item | MMU Category |
|---|---|---|
| 1 | Is an error tracking tool installed in production? | Monitoring |
| 2 | Is an uptime monitor configured? | Monitoring |
| 3 | Are error alerts being sent to Slack/Email? | Monitoring |
| 4 | Is there a health check endpoint? | Monitoring |
| 5 | Is there a status page? | Monitoring |
| 6 | Are source maps uploaded to the error tracking tool? | Monitoring |
| 7 | Do errors include user context? | Monitoring |
| 8 | Do background jobs have heartbeat monitors? | Monitoring |
| 9 | Is there an SSL certificate expiry alert? | Security |
| 10 | Do logs exclude sensitive information (passwords, tokens)? | Security |
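
Item 10 can be enforced in code rather than left as a convention. A minimal sketch; the field list is illustrative and should be extended for your own payloads:

```javascript
// Sketch: recursively replace sensitive fields before a payload is logged.
// The set of field names below is an assumption, not an exhaustive list.
const SENSITIVE = new Set(['password', 'token', 'secret', 'authorization', 'apiKey']);

function redact(value) {
  if (Array.isArray(value)) return value.map(redact);
  if (value === null || typeof value !== 'object') return value;
  return Object.fromEntries(
    Object.entries(value).map(([key, v]) =>
      SENSITIVE.has(key) ? [key, '[REDACTED]'] : [key, redact(v)]
    )
  );
}
```

Run every structured log payload through redact() at the logging boundary, so a forgotten field can't leak into Betterstack.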

All 10 can be set up with free tools, in under 80 minutes total. “I’ll do monitoring after launch” is the same as “I’ll investigate errors after users complain.”


Summary

| Key Point | Details |
|---|---|
| Launching without monitoring | Errors discovered via user reports (average 15 hours later) |
| Minimum stack | Sentry (error tracking) + Betterstack (uptime + logs) |
| Cost | $0/month (free tiers are sufficient) |
| Setup time | ~80 minutes |
| Impact | Error awareness 8 hours to 1 minute; root cause analysis 2 hours to 15 minutes |
| Principle | Monitoring comes before launch, even before features |