Recording the 3-day MVP build for WICHI during the Jocoding Hackathon, covering tech stack choices (FastAPI, React, Supabase) and priorities for creating a 'working product' under tight deadlines.
Background
GEO as a Field
GEO (Generative Engine Optimization) is the discipline of measuring and optimizing how well a brand appears in AI search engine responses. As AI search engines like ChatGPT, Gemini, and Perplexity begin replacing traditional search, a new set of visibility metrics, distinct from SEO, has become necessary.
Traditional SEO deals with link rankings on search engine results pages (SERPs). Traffic is generated when users click on links. AI search has a different structure entirely. Users ask a question, the AI generates a response, and within that response, a brand or product is either mentioned or not. Visibility is the mention itself, not a click. This difference is what establishes GEO as a separate discipline.
From late 2025, the keyword “AI search optimization” started appearing frequently in the marketing industry, and by early 2026, several agencies had already added GEO consulting to their service lineups. But most of it was manual work — querying AI engines directly, then reading the responses by eye. SaaS tools automating this process were still scarce.
Why a Hackathon
WICHI is the project that turns GEO into a SaaS. It sends queries to multiple AI engines, analyzes the frequency and context of brand mentions in responses, and calculates a GEO Score. The project was already underway before the hackathon.
The problem was velocity. The loop of “let me polish this a bit more,” “let me add this feature first” kept repeating, preventing convergence into an actually usable form. It was the classic side-project trap: endlessly expanding the feature design without ever producing a complete flow.
The hackathon was a deadline device. When there’s a submission deadline, “should we add this?” turns into “can we finish in 3 days?”
The Jocoding Hackathon was an opportunity to impose an artificial deadline on this project. More than prizes or awards, the goal was to produce a working MVP within a 3-day window. After the hackathon, a working product would remain, and commercialization could proceed from there. Regardless of the result, there was nothing to lose.
Starting State
Here is what existed and what did not at the time the hackathon began:
| Item | Status | Notes |
|---|---|---|
| GEO concept definition | Complete | Brand visibility measurement framework designed |
| Query generation logic | Draft | Idea-level only, not coded |
| AI engine integration | Not started | Only API docs reviewed |
| Scoring logic | Not started | Only metric definitions existed |
| Frontend | Not started | No design mockups |
| Auth / Payments | Not started | Only platform candidates researched |
| Deployment | Not started | Local development only |
The core idea and market research existed, but virtually no code — that was the starting state.
3-Day Timeline
```mermaid
gantt
    title WICHI MVP — 3-Day Hackathon Timeline
    dateFormat YYYY-MM-DD
    axisFormat %m/%d

    section Day 1 — Pipeline
    Query generation module          :d1a, 2026-02-27, 1d
    Multi-engine response collection :d1b, 2026-02-27, 1d
    Brand mention analysis logic     :d1c, 2026-02-27, 1d
    FastAPI server + Supabase DB     :d1d, 2026-02-27, 1d

    section Day 2 — Frontend
    React + Vite setup               :d2a, 2026-02-28, 1d
    Report visualization view        :d2b, 2026-02-28, 1d
    Supabase Auth integration        :d2c, 2026-02-28, 1d
    Dashboard UI                     :d2d, 2026-02-28, 1d

    section Day 3 — Payment/Submit
    Lemon Squeezy integration        :d3a, 2026-03-01, 1d
    Sample data setup                :d3b, 2026-03-01, 1d
    Full-flow QA                     :d3c, 2026-03-01, 1d
    Submission prep and submit       :d3d, 2026-03-01, 1d
```
Day 1 — Core Pipeline
Day one was all-in on the backend pipeline. Progress was verified in the terminal — no frontend. Even if you can’t show it to users, the core logic has to run before anything else is meaningful.
Query Generation Module
The starting point of GEO analysis is queries. When a user enters a brand name and industry, the system automatically generates questions that consumers would realistically ask AI search engines in that industry. Patterns like “What’s the best ~?” or “Recommend a ~” are adapted per industry.
The quality of this module determines the quality of the entire analysis. If the queries are unrealistic, the responses are meaningless and the scores are untrustworthy. A significant portion of Day 1’s working hours went here.
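The post describes the approach only at the level of "patterns adapted per industry," so the following is a minimal sketch of what template-based query generation could look like. The template strings and the brand/industry inputs are hypothetical examples, not WICHI's actual templates.

```python
# Illustrative sketch of template-based query generation. The templates and
# inputs are hypothetical; the real module adapts patterns per industry.
QUERY_TEMPLATES = [
    "What is the best {industry} service?",
    "Recommend a {industry} tool for small businesses",
    "How does {brand} compare to other {industry} options?",
    "Which {industry} products do experts recommend?",
]

def generate_queries(brand: str, industry: str) -> list[str]:
    """Expand consumer-style question templates for one brand/industry pair."""
    return [t.format(brand=brand, industry=industry) for t in QUERY_TEMPLATES]

queries = generate_queries("WICHI", "GEO analytics")
```

The design point is that users supply only a brand and an industry; the system supplies the realistic consumer phrasing, which is exactly where the quality of the whole analysis is decided.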
Multi-Engine Response Collection
A module was built to send generated queries to multiple AI search engines and collect their responses. Since each engine has a different API format and response structure, per-engine adapters were needed to normalize responses into a unified format.
Brand Mention Analysis
Logic was written to analyze the frequency, context, and position of the target brand within collected response texts. The key was not simply counting how many times the brand name appeared, but distinguishing the context of each mention — recommendation, comparison, negative mention, and so on.
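As a rough illustration of "count plus context," here is a keyword-based sketch. The cue lists and labels are assumptions for demonstration; a production classifier would be far more robust (for example, LLM-based), since keyword matching misses paraphrase and negation.

```python
# Minimal sketch of mention counting with crude context tagging. Cue words and
# labels are illustrative assumptions, not WICHI's actual classifier.
import re

POSITIVE_CUES = ("recommend", "best", "top choice")
NEGATIVE_CUES = ("avoid", "worse", "not recommended")

def analyze_mentions(text: str, brand: str) -> dict:
    """Find sentences mentioning the brand and tag each with a rough context label."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    mentions = []
    for s in sentences:
        if brand.lower() not in s.lower():
            continue
        low = s.lower()
        if any(cue in low for cue in NEGATIVE_CUES):
            label = "negative"
        elif any(cue in low for cue in POSITIVE_CUES):
            label = "recommendation"
        else:
            label = "neutral"
        mentions.append({"sentence": s, "context": label})
    return {"count": len(mentions), "mentions": mentions}
```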
API Server and Database
The API server was set up with FastAPI, and analysis results were stored in Supabase PostgreSQL. Table design was kept minimal: three tables for users, analysis requests, and analysis results.
End of Day 1: Entering a brand name in the terminal runs query generation → multi-engine collection → mention analysis → DB storage automatically. User interface: 0%.
Day 2 — Frontend and Auth
Day two’s goal was “making it usable by humans.”
Frontend Setup
The frontend was set up with React + Vite. Vite was chosen over CRA (Create React App) for build speed. In a hackathon, slow HMR (Hot Module Replacement) directly slows iteration velocity. Vite’s fast start and refresh speed suited a 3-day sprint.
No UI framework or component library was introduced. Just Tailwind CSS for minimal layout. The goal was “something that works,” not “something that looks good.”
Report Visualization
The report view was implemented to deliver analysis results to users. Since this screen is the core value delivery point of WICHI, it received the most time on Day 2.
Elements included in the report:
- GEO Score: aggregate score (0-100)
- Per-engine comparison chart: brand visibility comparison across each AI engine
- Per-query detail: each engine’s response and brand mention status for individual queries
- Competitive positioning (when competitors were entered)
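The post does not disclose the actual GEO Score formula, so the following is purely an illustrative aggregation: weight each mention by its context label, compute a per-engine rate, average across engines, and scale to 0-100. Every weight and name here is a hypothetical.

```python
# Illustrative GEO Score aggregation; the real formula is not published.
# Weights, engine names, and the averaging scheme are assumptions.
CONTEXT_WEIGHT = {"recommendation": 1.0, "neutral": 0.5, "negative": 0.0}

def geo_score(per_engine_mentions: dict[str, list[str]], queries_per_engine: int) -> float:
    """Aggregate context-weighted mention rates across engines into a 0-100 score."""
    if not per_engine_mentions or queries_per_engine == 0:
        return 0.0
    engine_scores = []
    for contexts in per_engine_mentions.values():
        weighted = sum(CONTEXT_WEIGHT.get(c, 0.0) for c in contexts)
        engine_scores.append(min(weighted / queries_per_engine, 1.0))
    return round(100 * sum(engine_scores) / len(engine_scores), 1)
```

Whatever the real weighting, the design constraint from the report is the same: many per-query, per-engine observations must collapse into one number a user can track over time.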
Auth Integration
Signup and login were connected via Supabase Auth. Email + password-based auth was the default, with OAuth (Google) deferred due to time constraints. Row Level Security (RLS) was configured so each user could only see their own analysis results.
Dashboard Base UI
A dashboard was built for viewing analysis history and starting new analyses: a simple list of previous analysis results, with each item linking through to its report view.
End of Day 2: The flow from signup → login → analysis request → report viewing works. Payment and sample data not yet implemented.
Day 3 — Payment, Demo Data, Submission
The final day had the most to do: payment integration, demo data setup, full-flow QA, and submission materials.
Payment Integration
Lemon Squeezy was integrated as the payment platform. Product creation, checkout page connection, and webhook reception had to be completed within a single day. The reason for choosing Lemon Squeezy is covered separately, but the core advantage is that it acts as a MoR (Merchant of Record) handling VAT and regulatory compliance for Korean sellers doing global sales.
Payment integration at the Day 3 stage was minimal. Credits were granted manually after payment completion; webhook automation was deferred to post-hackathon. The bar was “payment works at all.”
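For context on the deferred webhook automation: Lemon Squeezy signs each webhook's raw body with HMAC-SHA256 using the signing secret and sends the hex digest in an `X-Signature` header, so the verification that was skipped on Day 3 boils down to a few lines. The secret and body below are made-up test values.

```python
# Sketch of the Lemon Squeezy webhook verification deferred to post-hackathon.
# Lemon Squeezy sends an HMAC-SHA256 hex digest of the raw body in X-Signature;
# the secret and payload here are illustrative.
import hashlib
import hmac

def verify_signature(raw_body: bytes, signature: str, secret: str) -> bool:
    """Constant-time comparison of the expected and received webhook signatures."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

In the eventual automated flow, a handler that passes this check would parse the event (e.g. an order-created event) and credit the user's account, replacing the manual top-up used during the hackathon.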
Sample Data Setup
Users needed to see what the service does immediately after signup, even without spending analysis credits. The approach was to provide pre-run analysis results as sample reports. Actual brands were analyzed in advance and their results exposed as samples.
Full-Flow QA
Signup → login → view sample → payment → credit top-up → analysis request → report viewing. This entire flow was run from start to finish repeatedly. Fix broken parts, start over from the beginning again. Most of Day 3’s afternoon went to this.
Submission
Submission materials included the service URL, a brief description, and a demo video. No separate presentation deck was created. The working service itself was deemed the best demo.
Daily Progress Summary
| Day | Major Work | Completion Criteria | Status |
|---|---|---|---|
| Day 1 | Query generation, multi-engine collection, mention analysis, API + DB | Full pipeline running in terminal | Achieved |
| Day 2 | React setup, report view, auth, dashboard | Signup through report viewing possible in browser | Achieved |
| Day 3 | Payment, sample data, QA, submission | Full paid flow working + submission complete | Achieved |
Tech Stack Selection Rationale
The only criterion for tech choices in a 3-day hackathon was: already used it before. There was no time to experiment with new technology. Only tools with near-zero learning curve were selected.
```mermaid
graph TD
    subgraph Frontend
        A[React + Vite] --> B[Tailwind CSS]
        A --> C[Supabase Auth SDK]
    end
    subgraph Backend
        D[FastAPI] --> E[AI Engine Adapters]
        D --> F[Query Generator]
        D --> G[Brand Analysis]
    end
    subgraph Infrastructure
        H[Supabase PostgreSQL]
        I[Supabase Auth + RLS]
        J[Railway - BE Hosting]
        K[Vercel - FE Hosting]
    end
    subgraph Payment
        L[Lemon Squeezy]
    end
    A -->|API calls| D
    D -->|Read/Write| H
    A -->|Auth| I
    L -->|Webhook| D
```
Selection Details
| Area | Choice | Alternatives Considered | Selection Rationale | Kept Post-Hackathon |
|---|---|---|---|---|
| Backend | FastAPI | Django, Express | Python-based, unifying language with AI pipeline. Native async support. Concise route definitions enable rapid API development | Yes |
| Frontend | React + Vite | Next.js, Svelte | Vite’s fast HMR suited to hackathon speed. React was the most familiar framework | Yes (structure changed later) |
| DB | Supabase PostgreSQL | PlanetScale, Neon | Auth, DB, and RLS handled in one platform. Query directly via client library without a separate ORM | Yes |
| Auth | Supabase Auth | Auth0, Clerk | Same platform as DB means zero integration cost. Natural RLS integration | Yes |
| BE Hosting | Railway | Fly.io, Render | Prior experience from previous projects. Deploy via git push. Free tier available | Yes |
| FE Hosting | Vercel | Netlify, Cloudflare Pages | De facto standard for React project deployment. Minimal build configuration needed | Yes (reused after hosting migration) |
| Payment | Lemon Squeezy | Stripe, Paddle | MoR handling for Korean sellers in global sales. Tax calculation, invoicing, refunds handled by platform | Yes |
Why FastAPI + Python
The core of a GEO SaaS is AI engine API calls and text analysis. Python’s ecosystem is overwhelmingly strong in both. LLM API clients, text processing libraries, and data analysis tools are all Python-first. Unifying the backend in Python allows the AI pipeline code to run in the same process without splitting it into a separate service.
FastAPI’s async support was also important. Sending queries to multiple engines simultaneously and waiting for responses is I/O-bound work, and async/await parallel processing significantly reduces analysis time.
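The effect of that parallelism can be shown in a few lines: fanning one query out to N engines concurrently means total wall time approaches the slowest single call rather than the sum of all calls. The sleeps below stand in for real engine API latency.

```python
# Demonstrates why async fan-out matters for multi-engine collection:
# three concurrent "calls" take ~0.2s total, not ~0.6s. Sleeps are
# stand-ins for real HTTP calls to the engines.
import asyncio
import time

async def call_engine(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for an I/O-bound API request
    return f"{name}: done"

async def collect_all() -> list[str]:
    return await asyncio.gather(
        call_engine("chatgpt", 0.2),
        call_engine("gemini", 0.2),
        call_engine("perplexity", 0.2),
    )

start = time.perf_counter()
results = asyncio.run(collect_all())
elapsed = time.perf_counter() - start  # roughly the slowest call, ~0.2s
```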
Why All-In on Supabase
There was no time in 3 days to set up auth, database, and security policies each on different services. Supabase provides PostgreSQL + Auth + Row Level Security in one project. Since JWTs issued by Auth automatically connect to DB RLS policies, implementing “logged-in users can only see their own data” required no additional code — just SQL policies.
Why Lemon Squeezy
The biggest hurdle for Korean sellers doing global SaaS sales is tax and regulatory compliance. Calculating each country’s VAT, issuing invoices, and handling refunds is a project unto itself. Lemon Squeezy acts as a MoR (Merchant of Record), handling all of this on behalf of the seller. The seller just sets a product price; the platform handles taxes and compliance. The commission is higher than Stripe, but there was no time in a 3-day hackathon to write tax processing code.
MVP Feature Scope
```mermaid
graph LR
    subgraph "Included in Hackathon MVP"
        A[Automatic query generation]
        B[Multi-engine response collection]
        C[Brand mention analysis]
        D[GEO Score calculation]
        E[Report view]
        F[Email auth]
        G[Basic payment]
        H[Sample reports]
    end
    subgraph "Excluded from Hackathon"
        I[OAuth social login]
        J[i18n multilingual]
        K[SEO optimization]
        L[GA4 monitoring]
        M[Credit automation]
        N[Security hardening]
        O[Blog/content]
        P[Deep competitor comparison]
    end
```
What Was Included
The criterion for what absolutely had to be in the MVP was simple: “Does the service fail to exist without this?”
| Feature | Inclusion Rationale |
|---|---|
| Automatic query generation | Requiring users to manually enter queries creates too high an entry barrier |
| Multi-engine response collection | Supporting only a single engine eliminates the “multi-engine comparison” value of GEO |
| Brand mention analysis | Core feature. Without this, there is no service |
| GEO Score | Analysis results need to be summarized in a single number for intuitive communication |
| Report view | Users need to be able to see the output |
| Email auth | Per-user data isolation is essential |
| Payment | As a paid SaaS, the payment flow’s existence must be demonstrated |
| Sample reports | Users must see what the service delivers before paying for conversion to happen |
What Was Excluded
Items that could have been done in 3 days but were intentionally deferred, or cut due to time constraints:
| Feature | Exclusion Rationale | Priority |
|---|---|---|
| OAuth (Google login) | Email auth alone meets minimum functionality. OAuth setup takes time | Immediately post-hackathon |
| i18n (multilingual) | Starting with English-only does not hinder MVP validation | Commercialization phase |
| SEO (sitemap, meta) | Hackathon judging involves judges accessing the service directly; search exposure is unnecessary | Commercialization phase |
| GA4 monitoring | Traffic analysis is meaningless during hackathon period | Commercialization phase |
| Webhook automation | Manual credit assignment feasible at the volume of hackathon-level transactions | Commercialization phase |
| Triple input validation | Probability of malicious users in a hackathon environment is near zero | Commercialization phase |
| Blog/content marketing | Unnecessary for hackathon judging | Commercialization phase |
| Deep competitor comparison | Basic comparison was available, but detailed analysis was time-constrained | Post-commercialization |
Most excluded items fell under “unnecessary for MVP validation” or “irrelevant in the hackathon judging context.” When transitioning to commercialization, this list became the roadmap.
Submission and Demo
Submission Contents
A working MVP was submitted — an actual service, not a demo, with the following complete flow operational:
- Signup (email verification)
- Login
- View sample report
- Payment (Lemon Squeezy checkout)
- Analysis request (enter brand name + industry)
- Analysis execution (multi-engine query dispatch + response collection + analysis)
- GEO Score report viewing
```mermaid
flowchart LR
    A[Signup] --> B[Login]
    B --> C[View sample report]
    C --> D[Payment]
    D --> E[Analysis request]
    E --> F[Multi-engine collection]
    F --> G[Brand analysis]
    G --> H[GEO Score report]
```
The goal was a state where judges could sign up and actually use the service. Not a service that exists only on slides, but one you can use right now.
Demo Video
A short video recording of the full flow was submitted alongside. The video showed the actual screens from signup to report viewing, with no narration or slides — just screen recording.
The Role of AI Coding Tools
AI coding tools were a major contributor to building an MVP at this level in 3 days. Handling backend, frontend, payment, auth, and deployment solo in 3 days would have been unrealistic without them.
Frontend: Lovable
Initial frontend structure was rapidly set up using Lovable. Lovable is a no-code/low-code tool that generates React components from prompts. It was useful for quickly producing “decent-looking UI” without design sensibility.
The limitations were equally clear. Generated code structures were hard to reuse, and customization ultimately required manual editing. “It runs, so it’s fine” worked for the hackathon, but during commercialization, the frontend was fully migrated away from Lovable to a code-based approach.
Backend and General: Claude
Claude was primarily used for backend code, API design, DB schema, and debugging. Tasks where it was particularly useful:
- FastAPI route boilerplate generation
- Supabase RLS policy SQL writing
- Per-engine response parsing logic
- Error handling patterns
- Lemon Squeezy webhook integration code
AI coding tools were not “writing code instead of you” but “accelerating repetitive implementation.” Architecture decisions, feature scope judgments, and priority setting were human responsibilities, and the tool’s role was to speed up the coding of decided items.
Contribution Areas by Tool
| Area | Human Role | AI Tool Role |
|---|---|---|
| Architecture design | Overall structure, module separation decisions | Feedback on design, alternative suggestions |
| API design | Endpoint definition, interface design | Boilerplate code generation |
| Analysis logic | Metric definition, analysis framework design | Parsing/processing code implementation |
| Frontend | Screen composition, user flow decisions | Component generation, styling |
| Payment integration | Platform selection, product structure decisions | Webhook handler code |
| Debugging | Problem definition, reproduction condition identification | Root cause analysis, fix suggestions |
Retrospective
What Time Constraints Force
Three days turns every decision into a binary. Not “is this better or that better?” but “can this be done in 3 days or not?” This frame shift had a significant impact on productivity.
Normally, tech choices would have taken days of deliberation. “Next.js or Vite?” “Supabase or PlanetScale?” In a hackathon, these deliberations vanish. “The fastest thing I’ve already used” is the answer.
The value of a hackathon is not the award — it’s the deadline pressure that puts a completed, working product in your hands.
The Value of Imperfect Completion
WICHI at the Day 3 submission was imperfect. Security was minimal, multilingual support was absent, and payment automation was incomplete. But “a working imperfect product” is overwhelmingly more useful than “a perfect unfinished design.”
With a working product, the next steps are clear: fix “the worst part right now.” With only a design, the next steps are ambiguous: “this needs to be done and that needs to be done” expands infinitely.
What Hackathons Cannot Teach
There are things hackathons do not reveal:
- Operations: Building a service in 3 days and operating it for 3 months are completely different undertakings
- Repeat users: Hackathon judges are one-time users. Getting the same user to return weekly cannot be validated in a hackathon
- Scale issues: Code that works with 5 users may not work with 500
- Revenue structure: Having a payment flow and actually making money are different things
This list became the set of challenges to solve during the post-hackathon commercialization process.
What I Would Change If Doing It Again
| Item | What Was Done in Hackathon | What I Would Do Differently |
|---|---|---|
| Frontend tool | Lovable (no-code) | Code-based from the start. Going with Lovable incurred full rewrite costs at commercialization |
| Payment integration timing | Day 3 (last day) | Day 2 evening. Payment takes more time than expected |
| Test data | Scrambled together right before submission | Accumulate real analysis results from Day 1. Real data strengthens demo persuasiveness |
| Submission materials | Screen recording only | Add ~30 seconds of structured explanation. Let judges grasp context immediately |
What Happened After
After submission, WICHI was eliminated in the preliminary round. But the hackathon served its purpose as a deadline device. Based on the MVP built in 3 days, the project pivoted to an independent commercialization track, and releasing the constraints imposed for the hackathon (branch freeze, feature scope limits, etc.) actually accelerated development.
The items that changed when transitioning from hackathon MVP to commercial service — security, i18n, payment automation, monitoring, and more — are covered in a separate post.
Related Posts

Six GEO Business Opportunities and WICHI's Choice
Strategic analysis of three opportunity factors in the AI search (GEO) market and why WICHI chose 'SaaS-based monitoring' over advertising or agency models.

Prototype to Production — The Complete Change List
10 key changes for transitioning an MVP to commercial SaaS: security hardening, JWT auth, KO/EN i18n, and payment automation to build a 'payment-ready' service.

After Hackathon Rejection — Pivoting to Independent SaaS
Recording the 24-hour pivot of WICHI to an independent SaaS after a hackathon rejection, covering i18n implementation, SEO setup, and monetization roadmap restructuring.