Jocoding Hackathon Build Log — Building a GEO SaaS in 3 Days

MJ · 11 min read

Recording the 3-day MVP build for WICHI during the Jocoding Hackathon, covering tech stack choices (FastAPI, React, Supabase) and priorities for creating a 'working product' under tight deadlines.

Background

GEO as a Field

GEO (Generative Engine Optimization) is the discipline of measuring and optimizing how well a brand appears in AI search engine responses. As AI search engines like ChatGPT, Gemini, and Perplexity begin replacing traditional search, a new set of visibility metrics distinct from SEO has become necessary.

Traditional SEO deals with link rankings on search engine results pages (SERPs). Traffic is generated when users click on links. AI search has a different structure entirely. Users ask a question, the AI generates a response, and within that response, a brand or product is either mentioned or not. Visibility is the mention itself, not a click. This difference is what establishes GEO as a separate discipline.

From late 2025, the keyword “AI search optimization” started appearing frequently in the marketing industry, and by early 2026, several agencies had already added GEO consulting to their service lineups. But most of it was manual work — querying AI engines directly, then reading the responses by eye. SaaS tools automating this process were still scarce.

Why a Hackathon

WICHI is the project that turns GEO into a SaaS. It sends queries to multiple AI engines, analyzes the frequency and context of brand mentions in responses, and calculates a GEO Score. The project was already underway before the hackathon.

The problem was velocity. The loop of “let me polish this a bit more,” “let me add this feature first” kept repeating, preventing convergence into an actually usable form. It was the classic side-project trap: endlessly expanding the feature design without ever producing a complete flow.

The hackathon was a deadline device. When there’s a submission deadline, “should we add this?” turns into “can we finish in 3 days?”

The Jocoding Hackathon was an opportunity to impose an artificial deadline on this project. More than prizes or awards, the goal was to produce a working MVP within a 3-day window. After the hackathon, a working product would remain, and commercialization could proceed from there. Regardless of the result, there was nothing to lose.

Starting State

Here is what existed and what did not at the time the hackathon began:

| Item | Status | Notes |
| --- | --- | --- |
| GEO concept definition | Complete | Brand visibility measurement framework designed |
| Query generation logic | Draft | Idea-level only, not coded |
| AI engine integration | Not started | Only API docs reviewed |
| Scoring logic | Not started | Only metric definitions existed |
| Frontend | Not started | No design mockups |
| Auth / Payments | Not started | Only platform candidates researched |
| Deployment | Not started | Local development only |

The core idea and market research existed, but virtually no code — that was the starting state.

3-Day Timeline

gantt
    title WICHI MVP — 3-Day Hackathon Timeline
    dateFormat  YYYY-MM-DD
    axisFormat  %m/%d

    section Day 1 — Pipeline
    Query generation module            :d1a, 2026-02-27, 1d
    Multi-engine response collection   :d1b, 2026-02-27, 1d
    Brand mention analysis logic       :d1c, 2026-02-27, 1d
    FastAPI server + Supabase DB       :d1d, 2026-02-27, 1d

    section Day 2 — Frontend
    React + Vite setup                 :d2a, 2026-02-28, 1d
    Report visualization view          :d2b, 2026-02-28, 1d
    Supabase Auth integration          :d2c, 2026-02-28, 1d
    Dashboard UI                       :d2d, 2026-02-28, 1d

    section Day 3 — Payment/Submit
    Lemon Squeezy integration          :d3a, 2026-03-01, 1d
    Sample data setup                  :d3b, 2026-03-01, 1d
    Full-flow QA                       :d3c, 2026-03-01, 1d
    Submission prep and submit         :d3d, 2026-03-01, 1d

Day 1 — Core Pipeline

Day one was all-in on the backend pipeline. Progress was verified in the terminal — no frontend. Even if you can’t show it to users, the core logic has to run before anything else is meaningful.

Query Generation Module

The starting point of GEO analysis is queries. When a user enters a brand name and industry, the system automatically generates questions that consumers would realistically ask AI search engines in that industry. Patterns like “What’s the best ~?” or “Recommend a ~” are adapted per industry.

The quality of this module determines the quality of the entire analysis. If the queries are unrealistic, the responses are meaningless and the scores are untrustworthy. A significant portion of Day 1’s working hours went here.
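As an illustration, a template-based generator might look like the sketch below. The templates and function names are hypothetical; WICHI's actual templates are not public.

```python
# Hypothetical per-industry query templates; WICHI's real templates are not public.
QUERY_TEMPLATES = [
    "What is the best {industry} service?",
    "Recommend a good {industry} tool.",
    "Which {industry} brands are the most trustworthy?",
]

def generate_queries(industry: str, templates: list[str] = QUERY_TEMPLATES) -> list[str]:
    """Expand templates into concrete, consumer-style search queries."""
    return [t.format(industry=industry) for t in templates]
```

A real implementation would vary the wording per industry rather than doing straight substitution, since query realism is what the analysis quality hangs on.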

Multi-Engine Response Collection

A module was built to send generated queries to multiple AI search engines and collect their responses. Since each engine has a different API format and response structure, per-engine adapters were needed to normalize responses into a unified format.
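One way to sketch that adapter layer (type names and fields here are illustrative, not WICHI's actual code):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class EngineResponse:
    """Normalized shape every engine's answer is converted into."""
    engine: str
    query: str
    text: str

class EngineAdapter(Protocol):
    """Each engine gets one adapter that hides its API format."""
    name: str
    def fetch(self, query: str) -> EngineResponse: ...

class FakeEngineAdapter:
    """Stand-in adapter; a real one would wrap the vendor's API client."""
    def __init__(self, name: str, canned_text: str):
        self.name = name
        self._canned_text = canned_text

    def fetch(self, query: str) -> EngineResponse:
        return EngineResponse(engine=self.name, query=query, text=self._canned_text)

def collect_responses(adapters: list[EngineAdapter], query: str) -> list[EngineResponse]:
    """Send one query to every engine and gather normalized responses."""
    return [adapter.fetch(query) for adapter in adapters]
```

The point of the `Protocol` is that downstream analysis code only ever sees `EngineResponse`, so adding an engine means writing one adapter and nothing else.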

Brand Mention Analysis

Logic was written to analyze the frequency, context, and position of the target brand within collected response texts. The key was not simply counting how many times the brand name appeared, but distinguishing the context of each mention — recommendation, comparison, negative mention, and so on.
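A minimal sketch of context-aware mention counting, assuming hypothetical cue words; a real system would likely classify context with an LLM rather than keyword windows:

```python
import re

# Hypothetical cue words per mention context (assumption, not WICHI's logic).
CONTEXT_CUES = {
    "recommendation": ["recommend", "best", "top pick"],
    "comparison": ["compared to", "versus", " vs "],
    "negative": ["avoid", "worse", "downside"],
}

def classify_mentions(text: str, brand: str) -> list[dict]:
    """Find each brand mention and guess its context from nearby words."""
    mentions = []
    for match in re.finditer(re.escape(brand), text, re.IGNORECASE):
        # Inspect a small window around the mention for context cues.
        window = text[max(0, match.start() - 60): match.end() + 60].lower()
        context = "neutral"
        for label, cues in CONTEXT_CUES.items():
            if any(cue in window for cue in cues):
                context = label
                break
        mentions.append({"position": match.start(), "context": context})
    return mentions
```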

API Server and Database

The API server was set up with FastAPI, and analysis results were stored in Supabase PostgreSQL. Table design was kept minimal: three tables for users, analysis requests, and analysis results.
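A minimal schema along those lines might look like the following DDL. With Supabase, the users table is the built-in `auth.users`; every table and column name here is an assumption, not WICHI's actual design.

```sql
-- Illustrative minimal schema; names are assumptions, not WICHI's actual design.
create table analysis_requests (
  id         uuid primary key default gen_random_uuid(),
  user_id    uuid not null references auth.users (id),
  brand      text not null,
  industry   text not null,
  created_at timestamptz not null default now()
);

create table analysis_results (
  id         uuid primary key default gen_random_uuid(),
  request_id uuid not null references analysis_requests (id),
  -- user_id duplicated here so row-level security can filter directly.
  user_id    uuid not null references auth.users (id),
  geo_score  numeric,
  raw        jsonb,
  created_at timestamptz not null default now()
);
```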

End of Day 1: Entering a brand name in the terminal runs query generation → multi-engine collection → mention analysis → DB storage automatically. User interface: 0%.

Day 2 — Frontend and Auth

Day two’s goal was “making it usable by humans.”

Frontend Setup

The frontend was set up with React + Vite. Vite was chosen over CRA (Create React App) for build speed. In a hackathon, slow HMR (Hot Module Replacement) directly slows iteration velocity. Vite’s fast start and refresh speed suited a 3-day sprint.

No UI framework or component library was introduced. Just Tailwind CSS for minimal layout. The goal was “something that works,” not “something that looks good.”

Report Visualization

The report view was implemented to deliver analysis results to users. Since this screen is the core value delivery point of WICHI, it received the most time on Day 2.

Elements included in the report:

  • GEO Score: aggregate score (0-100)
  • Per-engine comparison chart: brand visibility comparison across each AI engine
  • Per-query detail: each engine’s response and brand mention status for individual queries
  • Competitive positioning (when competitors were entered)
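How the per-engine results roll up into the aggregate score can be sketched as below. The plain average is an assumption for illustration; WICHI's actual weighting between engines is not public.

```python
def geo_score(per_engine_rates: dict[str, float]) -> float:
    """Aggregate per-engine mention rates (0.0-1.0) into a 0-100 score.

    A plain average is an assumption; WICHI's real weighting is not public.
    """
    if not per_engine_rates:
        return 0.0
    mean_rate = sum(per_engine_rates.values()) / len(per_engine_rates)
    return round(100 * mean_rate, 1)
```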

Auth Integration

Signup and login were connected via Supabase Auth. Email + password-based auth was the default, with OAuth (Google) deferred due to time constraints. Row Level Security (RLS) was configured so each user could only see their own analysis results.

Dashboard Base UI

A dashboard was built for viewing analysis history and starting new analyses: a simple list of previous analysis results, with each item linking through to its report view.

End of Day 2: The flow from signup → login → analysis request → report viewing works. Payment and sample data not yet implemented.

Day 3 — Payment, Demo Data, Submission

The final day had the most to do: payment integration, demo data setup, full-flow QA, and submission materials.

Payment Integration

Lemon Squeezy was integrated as the payment platform. Product creation, checkout page connection, and webhook reception had to be completed within a single day. The reason for choosing Lemon Squeezy is covered separately, but the core advantage is that it acts as a MoR (Merchant of Record) handling VAT and regulatory compliance for Korean sellers doing global sales.

Payment integration at the Day 3 stage was minimal. Credits were granted manually after payment completion; webhook automation was deferred to post-hackathon. The bar was “payment works at all.”
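For context, webhook handling on the Lemon Squeezy side centers on verifying the signature header, an HMAC-SHA256 of the raw request body. A minimal verification helper might look like this; treat the header name (`X-Signature`) and details as assumptions to check against the current Lemon Squeezy docs.

```python
import hashlib
import hmac

def verify_webhook_signature(payload: bytes, signature: str, secret: str) -> bool:
    """Check a webhook payload against its HMAC-SHA256 hex signature.

    Lemon Squeezy sends the signature in an X-Signature header; verify it
    against the raw request body before trusting the event.
    """
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature)
```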

Sample Data Setup

Users needed to see what the service does immediately after signup, even without spending analysis credits. The approach was to provide pre-run analysis results as sample reports. Actual brands were analyzed in advance and their results exposed as samples.

Full-Flow QA

Signup → login → view sample → payment → credit top-up → analysis request → report viewing. This entire flow was run from start to finish repeatedly: fix the broken parts, then run it again from the top. Most of Day 3’s afternoon went to this.

Submission

Submission materials included the service URL, a brief description, and a demo video. No separate presentation deck was created. The working service itself was deemed the best demo.

Daily Progress Summary

| Day | Major Work | Completion Criteria | Status |
| --- | --- | --- | --- |
| Day 1 | Query generation, multi-engine collection, mention analysis, API + DB | Full pipeline running in terminal | Achieved |
| Day 2 | React setup, report view, auth, dashboard | Signup through report viewing possible in browser | Achieved |
| Day 3 | Payment, sample data, QA, submission | Full paid flow working + submission complete | Achieved |

Tech Stack Selection Rationale

The only criterion for tech choices in a 3-day hackathon was: already used it before. There was no time to experiment with new technology. Only tools with near-zero learning curve were selected.

graph TD
    subgraph Frontend
        A[React + Vite] --> B[Tailwind CSS]
        A --> C[Supabase Auth SDK]
    end

    subgraph Backend
        D[FastAPI] --> E[AI Engine Adapters]
        D --> F[Query Generator]
        D --> G[Brand Analysis]
    end

    subgraph Infrastructure
        H[Supabase PostgreSQL]
        I[Supabase Auth + RLS]
        J[Railway - BE Hosting]
        K[Vercel - FE Hosting]
    end

    subgraph Payment
        L[Lemon Squeezy]
    end

    A -->|API calls| D
    D -->|Read/Write| H
    A -->|Auth| I
    L -->|Webhook| D

Selection Details

| Area | Choice | Alternatives Considered | Selection Rationale | Kept Post-Hackathon |
| --- | --- | --- | --- | --- |
| Backend | FastAPI | Django, Express | Python-based, unifying language with AI pipeline. Native async support. Concise route definitions enable rapid API development | Yes |
| Frontend | React + Vite | Next.js, Svelte | Vite’s fast HMR suited to hackathon speed. React was the most familiar framework | Yes (structure changed later) |
| DB | Supabase PostgreSQL | PlanetScale, Neon | Auth, DB, and RLS handled in one platform. Query directly via client library without a separate ORM | Yes |
| Auth | Supabase Auth | Auth0, Clerk | Same platform as DB means zero integration cost. Natural RLS integration | Yes |
| BE Hosting | Railway | Fly.io, Render | Prior experience from previous projects. Deploy via git push. Free tier available | Yes |
| FE Hosting | Vercel | Netlify, Cloudflare Pages | De facto standard for React project deployment. Minimal build configuration needed | Yes (reused after hosting migration) |
| Payment | Lemon Squeezy | Stripe, Paddle | MoR handling for Korean sellers in global sales. Tax calculation, invoicing, refunds handled by platform | Yes |

Why FastAPI + Python

The core of a GEO SaaS is AI engine API calls and text analysis. Python’s ecosystem is overwhelmingly strong in both. LLM API clients, text processing libraries, and data analysis tools are all Python-first. Unifying the backend in Python allows the AI pipeline code to run in the same process without splitting it into a separate service.

FastAPI’s async support was also important. Sending queries to multiple engines simultaneously and waiting for responses is I/O-bound work, and async/await parallel processing significantly reduces analysis time.
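That fan-out pattern can be sketched with `asyncio.gather`; the engine call is simulated here, since the real adapters are not public:

```python
import asyncio

async def query_engine(engine: str, query: str) -> str:
    """Placeholder for one engine's API call; sleeps to simulate I/O."""
    await asyncio.sleep(0.01)
    return f"[{engine}] response to: {query}"

async def query_all_engines(engines: list[str], query: str) -> dict[str, str]:
    """Dispatch the same query to every engine concurrently.

    With N engines the wall-clock time is roughly one request's latency,
    not N requests', because the awaits overlap.
    """
    responses = await asyncio.gather(*(query_engine(e, query) for e in engines))
    return dict(zip(engines, responses))
```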

Why All-In on Supabase

There was no time in 3 days to set up auth, database, and security policies each on different services. Supabase provides PostgreSQL + Auth + Row Level Security in one project. Since JWTs issued by Auth automatically connect to DB RLS policies, implementing “logged-in users can only see their own data” required no additional code — just SQL policies.
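A policy of that kind might look like the following SQL; the table and column names are illustrative, and `auth.uid()` is Supabase's helper for the current JWT's user ID.

```sql
-- Hypothetical example: users can read only their own analysis results.
alter table analysis_results enable row level security;

create policy "read own results"
  on analysis_results
  for select
  using (auth.uid() = user_id);
```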

Why Lemon Squeezy

The biggest hurdle for Korean sellers doing global SaaS sales is tax and regulatory compliance. Calculating each country’s VAT, issuing invoices, and handling refunds is a project unto itself. Lemon Squeezy acts as a MoR (Merchant of Record), handling all of this on behalf of the seller. The seller just sets a product price; the platform handles taxes and compliance. The commission is higher than Stripe, but there was no time in a 3-day hackathon to write tax processing code.

MVP Feature Scope

graph LR
    subgraph "Included in Hackathon MVP"
        A[Automatic query generation]
        B[Multi-engine response collection]
        C[Brand mention analysis]
        D[GEO Score calculation]
        E[Report view]
        F[Email auth]
        G[Basic payment]
        H[Sample reports]
    end

    subgraph "Excluded from Hackathon"
        I[OAuth social login]
        J[i18n multilingual]
        K[SEO optimization]
        L[GA4 monitoring]
        M[Credit automation]
        N[Security hardening]
        O[Blog/content]
        P[Deep competitor comparison]
    end

What Was Included

The criterion for what absolutely had to be in the MVP was simple: “Does the service fail to exist without this?”

| Feature | Inclusion Rationale |
| --- | --- |
| Automatic query generation | Requiring users to manually enter queries creates too high an entry barrier |
| Multi-engine response collection | Supporting only a single engine eliminates the “multi-engine comparison” value of GEO |
| Brand mention analysis | Core feature. Without this, there is no service |
| GEO Score | Analysis results need to be summarized in a single number for intuitive communication |
| Report view | Users need to be able to see the output |
| Email auth | Per-user data isolation is essential |
| Payment | As a paid SaaS, the payment flow’s existence must be demonstrated |
| Sample reports | Users must see what the service delivers before paying for conversion to happen |

What Was Excluded

Items that could have been done in 3 days but were intentionally deferred, or cut due to time constraints:

| Feature | Exclusion Rationale | Priority |
| --- | --- | --- |
| OAuth (Google login) | Email auth alone meets minimum functionality. OAuth setup takes time | Immediately post-hackathon |
| i18n (multilingual) | Starting with English-only does not hinder MVP validation | Commercialization phase |
| SEO (sitemap, meta) | Hackathon judging involves judges accessing the service directly; search exposure is unnecessary | Commercialization phase |
| GA4 monitoring | Traffic analysis is meaningless during hackathon period | Commercialization phase |
| Webhook automation | Manual credit assignment feasible at hackathon-level transaction volume | Commercialization phase |
| Triple input validation | Probability of malicious users in a hackathon environment is near zero | Commercialization phase |
| Blog/content marketing | Unnecessary for hackathon judging | Commercialization phase |
| Deep competitor comparison | Basic comparison was available, but detailed analysis was time-constrained | Post-commercialization |

Most excluded items fell under “unnecessary for MVP validation” or “irrelevant in the hackathon judging context.” When transitioning to commercialization, this list became the roadmap.

Submission and Demo

Submission Contents

A working MVP was submitted — an actual service, not a demo, with the following complete flow operational:

  1. Signup (email verification)
  2. Login
  3. View sample report
  4. Payment (Lemon Squeezy checkout)
  5. Analysis request (enter brand name + industry)
  6. Analysis execution (multi-engine query dispatch + response collection + analysis)
  7. GEO Score report viewing

flowchart LR
    A[Signup] --> B[Login]
    B --> C[View sample report]
    C --> D[Payment]
    D --> E[Analysis request]
    E --> F[Multi-engine collection]
    F --> G[Brand analysis]
    G --> H[GEO Score report]

The goal was a state where judges could sign up and actually use the service. Not a service that exists only on slides, but one you can use right now.

Demo Video

A short video recording of the full flow was submitted alongside. The video showed the actual screens from signup to report viewing, with no narration or slides — just screen recording.

The Role of AI Coding Tools

AI coding tools were a major contributor to building an MVP at this level in 3 days. Handling backend, frontend, payment, auth, and deployment solo in 3 days would have been unrealistic without them.

Frontend: Lovable

Initial frontend structure was rapidly set up using Lovable. Lovable is a no-code/low-code tool that generates React components from prompts. It was useful for quickly producing “decent-looking UI” without design sensibility.

The limitations were equally clear. Generated code structures were hard to reuse, and customization ultimately required manual editing. “It runs, so it’s fine” worked for the hackathon, but during commercialization, the frontend was fully migrated away from Lovable to a code-based approach.

Backend and General: Claude

Claude was primarily used for backend code, API design, DB schema, and debugging. Tasks where it was particularly useful:

  • FastAPI route boilerplate generation
  • Supabase RLS policy SQL writing
  • Per-engine response parsing logic
  • Error handling patterns
  • Lemon Squeezy webhook integration code

AI coding tools were not “writing code instead of you” but “accelerating repetitive implementation.” Architecture decisions, feature scope judgments, and priority setting were human responsibilities, and the tool’s role was to speed up the coding of decided items.

Contribution Areas by Tool

| Area | Human Role | AI Tool Role |
| --- | --- | --- |
| Architecture design | Overall structure, module separation decisions | Feedback on design, alternative suggestions |
| API design | Endpoint definition, interface design | Boilerplate code generation |
| Analysis logic | Metric definition, analysis framework design | Parsing/processing code implementation |
| Frontend | Screen composition, user flow decisions | Component generation, styling |
| Payment integration | Platform selection, product structure decisions | Webhook handler code |
| Debugging | Problem definition, reproduction condition identification | Root cause analysis, fix suggestions |

Retrospective

What Time Constraints Force

Three days turns every decision into a binary. Not “is this better or that better?” but “can this be done in 3 days or not?” This frame shift had a significant impact on productivity.

Normally, tech choices would have taken days of deliberation. “Next.js or Vite?” “Supabase or PlanetScale?” In a hackathon, these deliberations vanish. “The fastest thing I’ve already used” is the answer.

The value of a hackathon is not the award — it’s the deadline pressure that puts a completed, working product in your hands.

The Value of Imperfect Completion

WICHI at the Day 3 submission was imperfect. Security was minimal, multilingual support was absent, and payment automation was incomplete. But “a working imperfect product” is overwhelmingly more useful than “a perfect unfinished design.”

With a working product, the next steps are clear: fix “the worst part right now.” With only a design, the next steps are ambiguous: “this needs to be done and that needs to be done” expands infinitely.

What Hackathons Cannot Teach

There are things hackathons do not reveal:

  • Operations: Building a service in 3 days and operating it for 3 months are completely different undertakings
  • Repeat users: Hackathon judges are one-time users. Getting the same user to return weekly cannot be validated in a hackathon
  • Scale issues: Code that works with 5 users may not work with 500
  • Revenue structure: Having a payment flow and actually making money are different things

This list became the set of challenges to solve during the post-hackathon commercialization process.

What I Would Change If Doing It Again

| Item | What Was Done in Hackathon | What I Would Do Differently |
| --- | --- | --- |
| Frontend tool | Lovable (no-code) | Code-based from the start. Going with Lovable incurred full rewrite costs at commercialization |
| Payment integration timing | Day 3 (last day) | Day 2 evening. Payment takes more time than expected |
| Test data | Scrambled together right before submission | Accumulate real analysis results from Day 1. Real data strengthens demo persuasiveness |
| Submission materials | Screen recording only | Add ~30 seconds of structured explanation. Let judges grasp context immediately |

What Happened After

After submission, WICHI was eliminated in the preliminary round. But the hackathon served its purpose as a deadline device. Based on the MVP built in 3 days, the project pivoted to an independent commercialization track, and releasing the constraints imposed for the hackathon (branch freeze, feature scope limits, etc.) actually accelerated development.

The items that changed when transitioning from hackathon MVP to commercial service — security, i18n, payment automation, monitoring, and more — are covered in a separate post.
