SAASPOCALYPSE verdict #CLAUDE-1FB3
scanned 2026.05.01 · 18:16
subject of investigation
claude.ai
▸ AI assistant / LLM chat interface
verdict FORTRESS
wedge score
25
/100
tier · fortress
wedge thesis
there is no door — the moat is a proprietary frontier model trained on billions of dollars of compute, defended by a safety research org with $7B+ in funding.
thick walls — wedge plays only · ship in ∞ · run for $7,500,000 + usage
wedge map
methodology → where the walls are.
the door
no network effect to overcome — users don't compound users.
watch out
their capital wall is real — ongoing capex puts a floor under any clone.
capital
10.0/10
investment the incumbent had to make
technical
10.0/10
depth of the underlying engineering
network · door
0.0/10
users compound users
switching
0.0/10
stickiness of customer data + workflow
data
10.0/10
proprietary data accumulates over time
regulatory
0.0/10
real licenses, not SOC 2 theater
distribution
9.5/10
brand SERP grip, knowledge graph, news flow
take
the blunt take.
color around the thesis
“You are not training a frontier LLM this weekend. You are not hiring 500 ML researchers. You are going to pay Anthropic's API rate and you are going to like it.”
The chat interface itself is a weekend build — but the interface is not the product. The product is Claude 3.x, which is the result of years of RLHF, constitutional AI research, and compute spend that dwarfs most Series C rounds. Wrapping the API is not competing with it.
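"Wrapping the API" in practice: a minimal sketch of the single request a wrapper assembles for Anthropic's public Messages API. The endpoint and header names are the documented ones; the model string is a placeholder alias, and the key is elided — this is the whole "product" a weekend wrapper contributes.

```python
import json

API_URL = "https://api.anthropic.com/v1/messages"  # public Messages API endpoint

def build_chat_request(api_key: str, user_text: str,
                       model: str = "claude-3-5-sonnet-latest") -> dict:
    """Assemble headers + JSON body for one chat turn.

    Sending it (urllib, requests, or the official SDK) is the easy part;
    the model on the other end is the part you can't rebuild in a weekend.
    """
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": user_text}],
    }
    return {"url": API_URL, "headers": headers, "body": body}

req = build_chat_request("sk-ant-...", "Explain moats in one sentence.")
print(json.dumps(req["body"], indent=2))
```

Every dollar of value in the response comes from the far side of that URL, which is the point.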
cost
cost of competing.
their price ←→ your run-rate
what they charge ●
Claude Pro
$20
/ user/mo
※ Free tier exists; Pro at $20/mo; API usage billed separately per token
annual: $240
what running yours costs ✦
01 · Pre-training compute (H100 cluster) · $5,000,000
02 · RLHF + Constitutional AI research team · $2,000,000
03 · Inference infra at scale · $500,000
04 · Safety & alignment research · priceless
05 · Your remaining sanity · priceless
TOTAL / mo · $7,500,000 + usage
▸ break-even: approximately never — the model is the moat, not the chat UI
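Taking the card's own numbers at face value, the arithmetic behind "approximately never": at Claude Pro's $20/user/mo price point, the estimated $7.5M/mo run-rate alone (usage billing excluded) demands 375,000 paying subscribers before the first dollar of margin.

```python
import math

RUN_RATE_PER_MONTH = 7_500_000  # the card's estimated monthly run-rate, excl. usage
PRICE_PER_USER = 20             # Claude Pro, $/user/mo

def subscribers_to_break_even(run_rate: float, price: float) -> int:
    """Paying users needed just to cover the fixed run-rate."""
    return math.ceil(run_rate / price)

print(subscribers_to_break_even(RUN_RATE_PER_MONTH, PRICE_PER_USER))  # → 375000
```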
build
what you're up against.
est. total: ∞
Dario started in 2021. Still training.
easy
medium
hard
nightmare
01
easy
Chat UI wrapper
Next.js + Vercel AI SDK. Ship in an afternoon. This is not the hard part.
02
medium
Streaming responses + tool use
Vercel AI SDK handles most of it. Edge cases in multi-turn context management will bite you.
03
hard
Fine-tuning on domain-specific data
Possible with open weights, but eval pipelines, data cleaning, and VRAM requirements are a real project.
04
nightmare
Pre-training a competitive base model
Tens of thousands of H100-hours. Requires a team of ML researchers. Not a solo project. Not a funded startup project.
05
nightmare
Constitutional AI / alignment research
Anthropic's core IP. Years of published and unpublished research. Not replicable from a blog post.
06
nightmare
Inference infra at Claude.ai scale
Custom serving, KV-cache optimization, speculative decoding. This is a dedicated infra org, not a side project.
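One of the edge cases from step 02, sketched: multi-turn history grows until it overflows the model's context window, so any wrapper has to trim old turns. A naive token-budget trimmer — the 4-characters-per-token heuristic is an assumption for illustration, not any provider's tokenizer:

```python
def trim_history(messages: list[dict], budget_tokens: int) -> list[dict]:
    """Keep the most recent turns that fit the budget, dropping oldest first.

    Rough heuristic: ~4 characters per token. A real wrapper should use the
    provider's tokenizer and pin the system prompt separately — and this is
    exactly the kind of edge case that bites: trimming can strand the history
    mid-exchange, starting on an assistant turn.
    """
    kept: list[dict] = []
    used = 0
    for msg in reversed(messages):  # walk newest → oldest
        cost = max(1, len(msg["content"]) // 4)
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))     # restore chronological order

history = [
    {"role": "user", "content": "a" * 400},       # ~100 tokens, oldest
    {"role": "assistant", "content": "b" * 400},  # ~100 tokens
    {"role": "user", "content": "c" * 40},        # ~10 tokens, newest
]
print(len(trim_history(history, 120)))  # → 2: the oldest turn is dropped
```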
stack
their position.
inferred + measured stack
detected signals · measured
cdn · Cloudflare
recommended stack · inferred
regulatory attorneys ($800/hr) · 10,000× H100s (several, actually) · Constitutional AI research pipeline · your remaining tears
rivals
who else has tried this.
indies + alternatives
option A
OpenAI ChatGPT (free tier)
GPT-4o free tier. Already exists. Already better-known. Skip the build.
option B
Ollama (self-host)
Run Llama 3, Mistral, or Gemma locally. Free. No API costs. Not frontier-grade, but yours.
option C
LM Studio + open weights
Desktop app, local inference, zero cloud spend. Good enough for most personal use cases.
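Option B in practice: Ollama exposes a local HTTP API (by default on port 11434), so the same wrapper pattern from the cloud version just points at localhost instead of a metered endpoint. A sketch of the request shape, assuming a pulled `llama3` model; actually sending it requires a running Ollama daemon.

```python
def build_local_chat_request(user_text: str, model: str = "llama3") -> dict:
    """Request shape for Ollama's local /api/chat endpoint.

    Zero cloud spend; quality capped by whatever open weights fit
    your hardware. Same shape as the hosted call — different moat.
    """
    return {
        "url": "http://localhost:11434/api/chat",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": user_text}],
            "stream": False,
        },
    }

req = build_local_chat_request("Summarize this repo.")
print(req["url"])
```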
compare
similar scans.
same shape · different moat
ready to wedge in?
Get the wedge plan. You're not climbing the wall — you're finding the door.
▸ generated with love, by a heartless robot · verdict v2.5 · saaspocalypse.dev