Zero to 70% AI Visibility in 6 Days. The Architecture That Did It.
Only someone who has already tried to rank in AI answers understands why 6 days is not a headline. It is a measurable data point about what schema, structure, and E-E-A-T signals actually do.
// Verified Outcome
Days to Launch: 6
Visibility Score: 70%
Mention Rate: 80%
Schema Types: 9
What This Confirms
Schema, FAQ-first architecture, and sub-second edge deployment produced citations across all four major AI platforms within 6 days of domain registration. This is not a marketing claim; the timeline is documented below.
> PRE-IMPLEMENTATION_BASELINE
This section documents what existed before implementation began. Most case studies omit this. Without a baseline, a before/after comparison is a marketing claim, not a measurement. Here is the actual starting state.
// BEFORE — Day 0 State
Domain registered: December 13, 2025
Zero indexed pages
Zero backlinks
Zero schema markup
Zero AI engine citations
No Google Search Console history
No bot crawl data
AI Visibility Score: 0/100
// AFTER — Day 6 Verification
4 AI engines tested, all returning citations
80% Mention Rate across tested queries
70/100 AI Visibility Score
9 schema types validated, zero errors
GPTBot, ClaudeBot, PerplexityBot all crawling
Core Web Vitals: LCP <1.2s, CLS 0, FID 0ms
Sitemap indexed in Google Search Console
Zero paid traffic. Zero link building.
Why the baseline matters: The starting point was zero. There was no domain authority, no existing content, no indexed pages, and no citation history. The 70% visibility result cannot be attributed to pre-existing SEO equity. It is attributable entirely to the implementation choices made in days 1–4. That is the variable this case study documents.
> THE_EXACT_SCHEMA_STACK
These are the 9 schema types in the launch stack and the extraction target each one serves. The implementation order was deliberate: Organization and FAQPage came first, because those are the highest-impact types for AI citation rate; HowTo and Article were added last, on day 3, when the full stack was validated.
| Schema Type | Pages Deployed | Extraction Target | Citation Impact |
| --- | --- | --- | --- |
| Organization | All pages (global) | Brand entity recognition | High — baseline trust |
| FAQPage | 8 content pages | Direct Q&A extraction | Very high — AI extracts FAQ natively |
| Person | Author bio, bylines | E-E-A-T / author verification | Medium — required for expert queries |
| Service | services.html | Service classification | High — "best AEO service" queries |
| Article | All content pages | Content type + author signal | High — enables author attribution |
| HowTo | how-to-get-cited, methodology | Procedural query extraction | High — "how to" query type |
| BreadcrumbList | All pages | Site structure signal | Low direct — structural support |
| WebSite | Homepage | Domain-level identity | Low direct — entity confirmation |
| LocalBusiness | contact.html, about.html | Local entity + NAP signal | Medium — local query coverage |
Notice what is not in this stack: ProductReview, Event, VideoObject. These schema types are SEO signals, not AEO signals. Including them would not have improved citation rates for the query types AEOfix targets. Schema selection is as important as schema implementation — deploying the wrong types adds complexity without improving outcomes.
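To make the two highest-impact types concrete, here is a minimal sketch in Python of how server-rendered Organization and FAQPage JSON-LD can be assembled. The helper names and field values are illustrative assumptions, not AEOfix's actual markup:

```python
import json

def organization_schema(name, url, same_as):
    """Build a minimal Organization node for brand entity recognition."""
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,  # external profiles strengthen the entity signal
    }

def faq_schema(qa_pairs):
    """Build a FAQPage node; each answer is the page's extractable unit."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }

def as_script_tag(node):
    """Serialize a node into the <script> tag rendered server-side into HTML."""
    return ('<script type="application/ld+json">'
            + json.dumps(node) + "</script>")
```

The key design point carried over from the case study: the tag is emitted into the HTML at render time, so the structured data is present in the raw payload a crawler receives.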
> THE_DOCUMENTED_TIMELINE
Day 1: Foundation (Dec 13)
Schema Architecture — Designed Before a Single Line of Content Was Written
Registered domain. Mapped 9 schema types to extraction targets. Configured Vercel edge deployment for sub-second response times.
Day 2: Content (Dec 14)
Content Built Around Direct Answers — Not Keywords
Wrote "What is AEO?" and core service pages using direct-answer format. Every page opens with a 40–60 word extractable answer.
Day 3: Tech Impl (Dec 15)
9 Schema Types Deployed — Organization, Service, FAQPage, Article, Person, and 4 More
Full schema stack implemented and validated. Core Web Vitals confirmed at threshold. Zero structured data errors in Google Rich Results Test.
Day 4: Launch (Dec 16)
Production Deployment — Sitemaps Submitted, Crawler Access Confirmed
Deployed to Vercel edge. Sitemaps submitted to Google and Bing. OAI-SearchBot, PerplexityBot, and ClaudeBot access verified via robots.txt.
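The crawler-access check described above can be automated against the deployed robots.txt. A minimal sketch using Python's standard-library parser — the robots.txt content here is illustrative, not the live AEOfix file:

```python
from urllib.robotparser import RobotFileParser

# Illustrative policy of the kind described above: AI crawlers explicitly allowed.
ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /
"""

def crawler_access(robots_txt, bots, url="https://example.com/"):
    """Return {bot: allowed?} by parsing robots.txt the way crawlers do."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, url) for bot in bots}
```

Running this against each named bot before submitting sitemaps confirms none of the AI crawlers are accidentally blocked.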
Day 6: Verification (Dec 18)
Final Verification: 80% Mention Rate and 70/100 Visibility Score
80% Mention Rate confirmed across tested queries. 70/100 Visibility Score across four platforms. Results logged and reproducible.
// Per-Engine Verification
| AI Engine | Score | Notes |
| --- | --- | --- |
| Perplexity | 70/100 | Best performer, proper citations. |
| Gemini | 72/100 | Fastest to index. |
| ChatGPT | 70/100 | Accurate but sometimes generic. |
| Claude | 66/100 | More cautious, asks for context. |
> WHAT_DROVE_THE_RESULT
Schema Architecture: 9 Types, Deployed at Launch
9 schema types created multiple extraction entry points. Each type targets a different AI parsing layer — entity recognition, Q&A extraction, and service classification. Single-type implementations produce single-layer results. The deployment sequence was deliberate: Organization and FAQPage went live first because those two types drive the highest citation frequency across every AI engine we tested. The remaining 7 types reinforced the entity signal and covered procedural and expert-query patterns.
FAQ-First Content: Every Page Built to Answer a Specific Question
Traditional SEO pages optimize for keywords. Every page on AEOfix was built around a question an AI user would ask. RAG retrieval selects content that directly answers the query — FAQ structure is that format, natively. Each page opens with a 40–60 word direct answer before expanding into supporting detail. This is the opposite of how most service pages are written, where the answer comes last after several paragraphs of context. AI engines do not wait for page 2. The direct answer must appear in the first extractable unit of content.
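The 40–60 word rule above is mechanical enough to lint for during content review. A minimal sketch, assuming the "first extractable unit" is the opening paragraph (the helper name and bounds handling are made up for illustration):

```python
def first_answer_ok(page_text, lo=40, hi=60):
    """Check that a page opens with a direct answer of lo-hi words.

    The 'first extractable unit' is taken to be the text before the
    first blank line, i.e. the opening paragraph.
    """
    first_paragraph = page_text.strip().split("\n\n")[0]
    n_words = len(first_paragraph.split())
    return lo <= n_words <= hi, n_words
```

A check like this can run in CI on every content page, flagging drafts that bury the answer below several paragraphs of context.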
Edge Deployment: Sub-Second Response at First Crawl
Vercel edge deployment gave every AI crawler a clean, fast-loading response from the first hit. LCP under 1.2 seconds, CLS of zero, and zero render-blocking scripts meant bots received the complete HTML payload with all structured data inline. Server-rendered schema — not JavaScript-injected — is a prerequisite for reliable AI extraction. Bots do not execute JavaScript on first pass. Any schema injected via JS is invisible on the initial crawl.
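Whether schema survives a no-JavaScript first pass can be verified directly: fetch the raw HTML and look for JSON-LD blocks without executing any scripts. A sketch of that check (regex-based extraction is an assumption here; a full HTML parser would be more robust):

```python
import json
import re

JSONLD_RE = re.compile(
    r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def schema_in_raw_html(html):
    """Return the @type of every JSON-LD node present in the raw HTML
    payload -- i.e. what a non-JS-executing bot sees on first crawl."""
    types = []
    for match in JSONLD_RE.finditer(html):
        try:
            node = json.loads(match.group(1))
        except ValueError:
            continue  # malformed blocks are invisible to extraction anyway
        nodes = node if isinstance(node, list) else [node]
        types.extend(n.get("@type", "?") for n in nodes)
    return types
```

A page whose schema is injected client-side returns an empty list from this check, even though the same schema appears in browser devtools — which is exactly the failure mode the paragraph above warns about.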
> DAY-BY-DAY_CITATION_OBSERVATIONS
The 6-day window was not uniform. Each AI engine indexed and cited at a different rate. Here is what was observed at each checkpoint — tested manually by querying each engine with brand and category queries.
Day 1–2
Zero Citations — Crawl Phase
No citations yet. GPTBot and PerplexityBot showed up in access logs within 36 hours of sitemap submission. ClaudeBot first hit on Day 2. Gemini (Google-Extended) arrived on Day 1. Crawling ≠ citing — the indexation lag begins here.
Day 3–4
First Perplexity Citation
Perplexity returned AEOfix for "What is AEO?" on Day 3. Score: 40/100 — partial, unnamed. By Day 4 it cited the site by name in two separate query formulations. Gemini followed on Day 4 with a direct brand mention for "AEO services."
Day 5
ChatGPT Acknowledges the Domain
ChatGPT began returning AEOfix as an example provider for "answer engine optimization services." Score: 55/100. Responses were accurate but generic — citing the domain without deep content extraction from FAQ schema.
Day 6
All Four Engines Citing
Final verification: Perplexity 70/100, Gemini 72/100, ChatGPT 70/100, Claude 66/100. Claude cited most conservatively — brand mentions in context rather than direct citations. All FAQ schema questions were extractable by at least two engines.
> ENGINE-SPECIFIC_BEHAVIOR
Each AI engine processes structured data differently. Understanding these differences changes which optimizations to prioritize. Here is what we observed during the 6-day window and in the months since.
Perplexity
70/100 — Day 3 first cite
Perplexity is the most schema-responsive engine we tested. FAQPage items appeared verbatim in cited answers within 72 hours of crawl. It cites sources explicitly with URLs, making it the most measurable engine for AEO. If a site has valid FAQPage schema and sub-second load times, Perplexity will find it fast.
Gemini
72/100 — Day 4 brand mention
Gemini indexed fastest — Google's crawler (Google-Extended) arrived on Day 1. The brand name appeared in Gemini responses by Day 4. Organization schema with a matching Google Business Profile accelerates Gemini citation. Of the four engines, Gemini showed the strongest correlation between Organization schema completeness and citation frequency.
ChatGPT
70/100 — Day 5 first mention
ChatGPT (with Browse enabled / GPT-4o) cited AEOfix by name on Day 5. Without Browse, the base model will not cite a site launched in December 2025 — the training cutoff is a hard constraint. AEO for ChatGPT base model is a longer-horizon play. For real-time ChatGPT visibility, Bing indexation and GPTBot crawl access are the two leverage points.
Claude
66/100 — Most conservative
Claude cited AEOfix with more caution than the other three engines. Responses mentioned the domain as an example rather than extracting specific claims from content. This is consistent with Anthropic's approach to citation — Claude tends to hedge on recently launched domains. Person schema and verified external references (LinkedIn, industry directories) are the most effective signals for improving Claude citation confidence over time.
> WHAT_DIDN'T_WORK
A case study that only documents what worked is a sales brochure. These are the specific things that failed, underperformed, or required iteration. Understanding failures is more instructive than replicating successes.
Authority and Leadership Queries — Week 1 Score: 0%
Queries like "best AEO consultant," "top answer engine optimization agency," and "leading AEO expert" returned zero AEOfix citations in the first week. Schema establishes technical identity fast. Authority positioning — the kind that makes AI engines recommend a brand over competitors — requires external signal accumulation: backlinks, industry mentions, citations in other AI-cited sources. There is no schema shortcut for this. It took 6–8 weeks before leadership queries began returning consistent citations.
Product/Pricing Schema — Zero Citation Lift
An early test added Product schema to the services page in an attempt to capture commercial queries. It added zero measurable citation improvement and was removed on Day 7. Service-based businesses are not products. Mismatched schema type creates noise in the entity graph without improving extraction accuracy. Schema selection must match the actual entity type — not the hoped-for query type.
Generic Service Descriptions — Low Extraction Rate
The first version of the services page used marketing-style copy ("comprehensive AEO solutions for growing brands"). AI engines extracted almost nothing from it. The page was rewritten on Day 4 with specific, technical descriptions of each deliverable. Extraction rate from the services page improved significantly after the rewrite. Vague language is invisible to AI. Specificity — exact schema types deployed, measurable outcomes, named tools — is what gets extracted and cited.
Social Proof Without Verification — Ignored
Early drafts included testimonial-style claims. AI engines that could not verify claims via a second source ignored them entirely. Social proof that AI engines can cross-reference (case study data, linked research, verifiable metrics) improves citation rate. Unverifiable claims — no matter how prominent on the page — have zero impact on AEO score.
AEO Architect & Founder of AEOfix. Former construction worker turned full-stack developer. Engineering-driven AI visibility optimization.
The Architecture Is Documented. The Question Is Whether Your Site Has It.
The same schema strategy, content architecture, and E-E-A-T signal framework that produced these results is available as a managed implementation. Review the services page to see the exact scope — then get a Source Map to see where you currently stand.