
how we score SaaS moats.

rubric v13 · last updated 2026-04-30

Tier (SOFT / CONTESTED / FORTRESS) tells you how attackable the incumbent is. SOFT means the engineering bar is low and the wedge is in distribution or niche. CONTESTED means the head-on clone is real work but doable. FORTRESS means the walls are thick — you can't bulldoze them, so you find a crack.

The score under each tier is a weighted aggregate of seven moat axes, each 0–10. Higher = thicker walls. Lower = wedgeable. The math is deterministic — derived from the per-report normalization projection (canonical components, capabilities, attributes) plus a single Serper SERP call for distribution signals. No LLM is in the scoring loop, on purpose.
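A minimal sketch of the shape the scorer produces, assuming one nullable 0–10 slot per axis. The names here are illustrative, not the actual types in lib/normalization/moat.ts:

```ts
// Illustrative only: one 0–10 slot per axis, nullable where a
// signal source can fail (currently just distribution).
type AxisScore = number | null;

interface MoatAxes {
  capital: AxisScore;
  technical: AxisScore;
  network: AxisScore;
  switching: AxisScore;
  data: AxisScore;
  regulatory: AxisScore;
  distribution: AxisScore; // null when the Serper SERP call fails
}
```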

the seven axes.

Capital

0–10

What the incumbent had to invest to build the thing. Audits, licensing, banking relationships, training infrastructure — capex you can't shortcut around.

how it's scored: Derived from capex-flagged cost lines, the report's take prose, the wedge thesis, challenge notes, descriptive est_total strings, and the magnitude of the numeric monthly cost. SOC 2 is excluded because it is table stakes, not a real moat.
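A hedged sketch of that derivation. The keyword list, point values, and magnitude threshold are assumptions for illustration, not the heuristics actually shipped in lib/normalization/moat.ts:

```ts
// Assumed capex signals; SOC 2 is deliberately absent (table stakes, not moat).
const CAPEX_SIGNALS = [/audit/i, /licens/i, /banking/i, /training/i];

function capitalScore(report: {
  capexCostLines: string[]; // cost lines flagged as capex
  take: string;             // the report's take prose
  wedgeThesis: string;
  challengeNotes: string[];
  estTotal: string;         // descriptive est_total string
  monthlyCostUsd: number;   // numeric monthly cost magnitude
}): number {
  const text = [
    ...report.capexCostLines,
    report.take,
    report.wedgeThesis,
    ...report.challengeNotes,
    report.estTotal,
  ].join(" ");
  let score = CAPEX_SIGNALS.filter((re) => re.test(text)).length * 2;
  if (report.monthlyCostUsd > 10_000) score += 2; // assumed magnitude bump
  return Math.min(score, 10);
}
```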

Technical

0–10

Depth of the incumbent's underlying engineering. The R&D you can't recreate by gluing OSS libraries together.

how it's scored: Derived from the difficulty distribution of the report's challenges. Nightmare, hard, and medium challenges add weighted points; easy challenges do not. The LLM does not emit a technical score.
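A minimal sketch, assuming per-tier point values; the actual weights in lib/normalization/moat.ts may differ:

```ts
// Assumed per-tier weights; easy challenges contribute nothing.
const DIFFICULTY_POINTS = { nightmare: 4, hard: 2, medium: 1, easy: 0 } as const;

type Difficulty = keyof typeof DIFFICULTY_POINTS;

function technicalScore(challenges: Difficulty[]): number {
  const raw = challenges.reduce((sum, d) => sum + DIFFICULTY_POINTS[d], 0);
  return Math.min(raw, 10); // clamp to the 0–10 axis range
}
```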

Note: Capital and Technical deliberately ignore the projection's cost / component data, because that data describes the indie hacker's clone stack, not the incumbent's actual investment. The incumbent's moat is precisely what you can't buy off the shelf.

Network

0–10

Users compound users — the product gets more valuable as more people use it.

how it's scored: Counts capabilities tagged for network effects (multi_sided, ugc, marketplace, viral_loop). 1 capability → 4; 2 → 8; 3+ → 10.
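That step curve is shared by the Switching, Data, and Regulatory axes below. A minimal sketch, with hypothetical helper names:

```ts
// 0 tagged capabilities → 0, 1 → 4, 2 → 8, 3 or more → capped at 10.
function capabilityCurve(taggedCount: number): number {
  return Math.min(taggedCount * 4, 10);
}

const NETWORK_TAGS = ["multi_sided", "ugc", "marketplace", "viral_loop"];

function networkScore(capabilities: { tags: string[] }[]): number {
  const tagged = capabilities.filter((c) =>
    c.tags.some((t) => NETWORK_TAGS.includes(t))
  ).length;
  return capabilityCurve(tagged);
}
```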

Switching

0–10

How sticky customer data and workflow state are once a customer is in.

how it's scored: Counts capabilities tagged for switching cost (data_storage, workflow_lock_in, integration_hub). Same 4-per-capability curve.

Data

0–10

Proprietary data that accumulates with use, and would be expensive or impossible for a wedge entrant to recreate.

how it's scored: Counts capabilities tagged for data moats (proprietary_dataset, training_data, behavioral). Same 4-per-capability curve.

Regulatory

0–10

Real licenses, audits, and regulatory exposure that legally bar indie hackers from operating.

how it's scored: Counts capabilities tagged for regulatory moats (hipaa, finra, gdpr_critical, licensed). Same 4-per-capability curve. SOC 2 deliberately does NOT count — every B2B SaaS would otherwise score artificially high.

Distribution

0–10

How firmly the incumbent owns the SERP and the brand-recognition surface for its own name. The hardest moat for a wedge entrant to chip away at.

how it's scored: Weighted aggregate of six sub-signals from a single Serper SERP call: sitelinks under the top organic result (weight 4); compressed organic, meaning Google returns fewer than 10 results, a sign of entity confidence (weight 3); authoritative third-party domains in the top 10, such as Wikipedia, LinkedIn, Crunchbase, TechCrunch, Bloomberg, or G2, counted only when the incumbent owns the top organic result (weight 3); Knowledge Graph presence (weight 2); top organic result owned (weight 2); own-domain count in the top 10 (weight 1). Returns null when the SERP call fails entirely.
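A minimal sketch of that weighting. The Serper payload field names and the normalize-by-weight-sum step are assumptions; only the relative weights come from the prose above, and the real code lives in lib/scanner/distribution.ts:

```ts
// Each sub-signal is scored 0–1, weighted, then normalized back to 0–10.
function distributionScore(serp: {
  ok: boolean;                             // SERP call succeeded
  hasSitelinks: boolean;                   // sitelinks under top organic result
  organicCount: number;                    // < 10 means "compressed organic"
  authoritativeThirdPartyInTop10: boolean;
  knowledgeGraph: boolean;
  topOrganicOwned: boolean;
  ownDomainsInTop10: number;
}): number | null {
  if (!serp.ok) return null; // SERP call failed entirely

  const signals: Array<[value: number, weight: number]> = [
    [serp.hasSitelinks ? 1 : 0, 4],
    [serp.organicCount < 10 ? 1 : 0, 3],
    // Third-party authority is gated on owning the top organic result.
    [serp.topOrganicOwned && serp.authoritativeThirdPartyInTop10 ? 1 : 0, 3],
    [serp.knowledgeGraph ? 1 : 0, 2],
    [serp.topOrganicOwned ? 1 : 0, 2],
    [Math.min(serp.ownDomainsInTop10 / 10, 1), 1],
  ];
  const weightSum = signals.reduce((acc, [, w]) => acc + w, 0);
  const weighted = signals.reduce((acc, [v, w]) => acc + v * w, 0);
  return (weighted / weightSum) * 10;
}
```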

the aggregate.

Weighted root-mean-square across the seven axes. Equal weights — each axis is roughly as important as the others when you're deciding whether a small team can compete. RMS rather than arithmetic mean because real moats are often specialist: Stripe's defensibility lives in capital + technical + regulatory + distribution, with the other three axes legitimately near zero. Averaging that to 5/10 misrepresents how hard it actually is to displace. RMS rewards concentration without changing anything for products whose strength is spread evenly.

The distribution axis can return null when the SERP call fails; in that case the aggregate drops it from both numerator and denominator and computes honestly over the six axes we could score. We'll re-tune the cross-axis weights themselves once we have enough scored reports to see real clustering.
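A minimal sketch of the aggregate with that null-skip behavior (illustrative, not the exact code in lib/normalization/moat.ts):

```ts
// Equal-weight root-mean-square over the axes that scored; null axes
// drop out of both numerator and denominator.
function aggregateScore(axes: Array<number | null>): number {
  const scored = axes.filter((a): a is number => a !== null);
  if (scored.length === 0) return 0; // degenerate: nothing scored
  const meanSquare = scored.reduce((sum, a) => sum + a * a, 0) / scored.length;
  return Math.sqrt(meanSquare);
}

// Illustrative numbers only (not real Stripe scores): a specialist profile
// aggregates to ~6.85 under RMS where the arithmetic mean would say ~5.4.
aggregateScore([9, 9, 0, 2, 0, 9, 9]); // ≈ 6.85
```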

what we don't score.

Brand is not directly modeled. The closest proxy is in the distribution axis (Knowledge Graph + authoritative third-party coverage), which captures the slice of brand strength that's legible to Google's index. The rest — emotional resonance, founder following, design taste — is harder to put a number on without paid data, so we don't pretend.

Capability of the team behind the product is not modeled. A genuinely brilliant engineering org can hold a moat the structural axes don't see.

why deterministic.

The point of the score is to be defensible. Every number on a report card can be traced back to a specific component, a specific capability tag, a specific cost line, or a specific Serper-payload field. If the score is wrong, the input is wrong, and the input is fixable. An LLM-authored score has none of those properties.

The taxonomy itself — which capabilities exist, which moat tags they carry, what each component's commoditization level is — is hand-curated and reviewed in code. The admin tools at /admin/unknowns and the score workbench use Claude to suggest additions, but a human decides.

Source-of-truth: lib/normalization/moat.ts · lib/scanner/distribution.ts · lib/normalization/taxonomy/
