Based on 6 audits across 3 US cities and 2 industries. Five of the six audited service businesses (83%) received zero AI citations — including a business ranked #1 in Google Maps.
Preliminary findings from the first 6 audits in the VisiGap 2025 study. Full dataset targeting 33 businesses across 11 cities will be published when complete.
The highest-reviewed Personal Injury firm in our Atlanta audit — Montlick Injury Attorneys — held the #1 position in Google Maps for "best personal injury lawyers in Atlanta" at the time of audit. The firm had 1,597 Google reviews, a 4.9-star rating, 40+ years in operation, LegalService schema markup, and 4 Super Lawyers-listed attorneys. It received zero AI citations across all 15 query-engine combinations tested (Perplexity, ChatGPT, Google AI Overviews). Local pack rank and AI citation are separate signals requiring separate optimization strategies.
Each business is scored across six dimensions: Website Diagnostics, Citation Sources, NAP Consistency, AI Query Performance, Entity Recognition, and Competitor Gap Analysis. Scale: 0–100.
Tier thresholds: Bottom 0–34 · Mid 35–57 · Top 58–100. All six audited businesses fall in the Mid tier.
6 audits complete. 27 additional audits planned across 8 additional cities and up to 9 industries.
| Industry | City | Business Audited | AI Score | AI Citations | Tier | Top Gap |
|---|---|---|---|---|---|---|
| HVAC | Chicago, IL | Guardian Heating & Cooling | 42 | 0/15 | Mid | 5 phone variants · Generic schema |
| HVAC | Houston, TX | Ace Comfort Air Conditioning | 48 | 0/15 | Mid | 193 reviews vs ~2,386 threshold |
| HVAC | Atlanta, GA | PV Heating, Cooling & Plumbing | 55 | 3/15 | Mid | Zero schema markup; 3 phone variants |
| PI Law | Chicago, IL | Attorneys of Chicago | 43 | 0/15 | Mid | 5 phones; BBB wrong address (Mokena, IL) |
| PI Law | Houston, TX | Simmons and Fletcher, P.C. | 47 | 0/15 | Mid | No LegalService schema; GBP phone mismatch |
| PI Law | Atlanta, GA | Montlick Injury Attorneys | 48 | 0/15 | Mid | #1 local pack · 0 AI citations — decoupled |
| HVAC | Dallas, TX | TBD | — | — | — | — |
| HVAC | Phoenix, AZ | TBD | — | — | — | — |
| HVAC | Philadelphia, PA | TBD | — | — | — | — |
| PI Law | Dallas, TX | TBD | — | — | — | — |
| PI Law | Phoenix, AZ | TBD | — | — | — | — |
| + 22 additional audits planned across additional cities & industries — results will populate here | | | | | | |
Every business in our study has at least three of these five issues. These are not edge cases — they are the norm for US service businesses in AI-competitive markets.
All six audited businesses have 2–5 different phone numbers in circulation across their website, Google Business Profile, Yelp, BBB, and other directories. AI engines like Perplexity resolve business identity by aggregating directory data into a single entity. When that data conflicts — a different phone on Yelp vs GBP vs the website — the engine loses confidence in the entity and reduces citation frequency. The worst case: Attorneys of Chicago has 5 distinct phone numbers, and its BBB listing shows an address 30 miles from the firm's actual location.
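One way to surface this kind of fragmentation is to normalize every listed number to a single canonical form before comparing. A minimal sketch, assuming US 10-digit numbers; the `listings` dict and its values are hypothetical, not audit data:

```python
import re

def normalize_phone(raw: str, default_country: str = "1") -> str:
    """Reduce a US phone string to digits-only E.164-style form so that
    formatting variants ('(312) 555-0142' vs '312.555.0142') compare equal."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 10:                  # bare 10-digit US number
        digits = default_country + digits
    return "+" + digits

# Hypothetical listings for one business across directories
listings = {
    "website": "(312) 555-0142",
    "gbp":     "312-555-0142",
    "yelp":    "+1 312 555 0142",
    "bbb":     "312.555.0199",   # a genuinely different number
}

canonical = {src: normalize_phone(p) for src, p in listings.items()}
variants = set(canonical.values())
print(variants)  # two distinct numbers survive normalization
```

Three of the four listings collapse to one canonical number; the BBB entry stays distinct, flagging it for manual review.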
Perplexity explicitly stated in one audit response that the average top-cited Houston HVAC contractor has 2,386 reviews. The audited Houston HVAC business had 193 — a 12x gap. Our data suggests the AI citation threshold in competitive service markets is approximately 2,000+ Google reviews. Four of five zero-citation businesses in our study fall significantly below this threshold. Note: high review volume alone is insufficient — Montlick (1,597 reviews, #1 local pack) received zero citations — but below-threshold volume is a hard blocker.
AI engines classify businesses into categories during entity resolution. Using a generic schema type — LocalBusiness instead of HVACBusiness, or Organization instead of LegalService — signals lower specificity. The top AI-cited HVAC competitor in Chicago (Deljo Heating) uses HVACBusiness schema. Guardian Heating uses LocalBusiness and is absent from all 15 AI responses. One HVAC business in our study (PV Atlanta) has no schema markup at all yet earned 3 citations on content depth alone — evidence that schema matters, but is not the only factor.
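As a sketch of the category-specific markup described above, the JSON-LD below uses the schema.org HVACBusiness type rather than generic LocalBusiness. The business name, phone, and address are placeholders, not data from the audits:

```python
import json

# All values below are hypothetical; the point is the specific @type.
schema = {
    "@context": "https://schema.org",
    "@type": "HVACBusiness",          # category-specific, not generic LocalBusiness
    "name": "Example Heating & Cooling",
    "telephone": "+13125550142",      # the single canonical number
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example Ave",
        "addressLocality": "Chicago",
        "addressRegion": "IL",
        "postalCode": "60601",
    },
    "areaServed": "Chicago, IL",
}

# Embed on the page inside <script type="application/ld+json"> ... </script>
print(json.dumps(schema, indent=2))
```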
Two businesses in our study have robots.txt rules that prevent AI engines from indexing high-value content. Montlick Injury Attorneys blocks /case_result/ — the pages that document settlement amounts and verdict data, which is precisely the content that drives PI law AI citation. Simmons and Fletcher blocks /llms.txt and directory pages. When AI engines cannot crawl content that would establish authority, that content cannot contribute to citation decisions regardless of how good it is.
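The effect of such a rule can be checked with Python's standard-library robots.txt parser. The robots.txt content and URLs below are hypothetical, modeled on the /case_result/ pattern described above:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt mirroring the pattern found in the audits:
# high-value case-result pages are blocked for all crawlers.
robots_txt = """\
User-agent: *
Disallow: /case_result/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A crawler like GPTBot falls under the wildcard rule here, so the
# settlement pages can never feed a citation decision.
print(rp.can_fetch("GPTBot", "https://example-firm.com/case_result/2m-settlement"))  # False
print(rp.can_fetch("GPTBot", "https://example-firm.com/practice-areas/"))            # True
```

Running the same check against each AI engine's published user-agent string shows exactly which content is invisible to which engine.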
Industry-specific directories — ACCA and Expertise.com for HVAC; Martindale-Hubbell, FindLaw, Justia, and Avvo for PI law — are frequently cited directly in AI responses. Perplexity's Q5 review-aggregation responses pull directly from these platforms, not from individual business websites. Five of six audited businesses have incomplete Tier 3 directory presence. In Chicago, only the HVAC business with the most complete directory footprint appears on Expertise.com's "Best HVAC" list — and that list appears in AI responses.
AI search engines like Perplexity, ChatGPT, and Google AI Overviews do not use the same signals as traditional search. Understanding the difference is the first step to optimizing for both.
Before any citation decision, AI engines must resolve whether a business is a distinct, trustworthy entity. They do this by aggregating data from GBP, Yelp, BBB, and directories and checking consistency. A business with 5 phone numbers fails entity resolution — the engine cannot confidently assert which record is canonical. This is why NAP fragmentation is the #1 universal gap: it affects every downstream citation decision.
Schema markup tells AI crawlers what type of entity a business is and what it does. HVACBusiness schema signals a specific, verifiable service type. LegalService + Attorney schema creates a structured entity graph for law firms. FAQPage schema creates directly citable content — AI engines frequently extract FAQ content verbatim. Businesses without category-specific schema are less likely to be retrieved for category-specific queries.
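The FAQPage pattern mentioned above can likewise be sketched as JSON-LD. A minimal sketch; the question, answer text, and dollar figures are illustrative, not audit data:

```python
import json

# Hypothetical Q&A pair; FAQPage is the schema.org type the text describes.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How much does AC repair cost in Chicago?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Most common repairs run a few hundred dollars, "
                        "depending on the failed part and labor.",
            },
        },
    ],
}
print(json.dumps(faq, indent=2))
```

Each Question/Answer pair is a self-contained unit an engine can lift verbatim, which is what makes this type directly citable.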
Review volume is a proxy for authority and market presence. AI engines — Perplexity in particular — use review count as a quality filter when constructing recommendation lists. Our data suggests a ~2,000 review threshold for competitive citation in HVAC and PI law markets. Recency matters too: businesses with reviews from the past 30 days signal active operations. High volume with stale reviews is weaker than high volume with recent reviews.
AI engines extract content from the web to answer questions, not just to list businesses. A business that publishes detailed content on "HVAC repair vs replacement costs in Chicago" or "how to choose a personal injury attorney in Houston" becomes a citable source for those queries — even without being explicitly recommended. PV Heating (Atlanta) achieved 3 AI citations largely through 466 indexed pages of content, not through technical optimization. Content depth creates citation surface area.
Perplexity's "best [service] in [city]" responses frequently synthesize results from Angi, Expertise.com, Yelp, and industry-specific directories — not from crawling individual business websites. Being present, complete, and consistently reviewed across these platforms is a prerequisite for appearing in aggregated AI recommendations. A business absent from Angi and Expertise.com has no surface area in the majority of Perplexity location-service queries.
AI engines must bind a business entity to a specific geographic location to include it in location-specific responses. This binding comes from consistent city data in website title, H1, meta description, GBP, and directory listings. Montlick Injury Attorneys uses "National Personal Injury Lawyers" in its title, with no mention of Atlanta in its H1 or meta description — deliberately, for national reach — but this dilutes Atlanta entity signals. Geographic specificity and national branding are in direct tension for AI citation purposes.
Business selection: For each city-industry combination, we searched "[industry] [city]" on Google Maps and selected the business at organic position 6 or 7 (excluding sponsored positions and national chains). This targets businesses that are visible enough to be credible but not yet AI-cited — the most actionable segment for this research.
AI query testing: Each business was tested across 5 standardized query intents (direct service, cost/pricing, comparison, how-to, and local reviews) run across 3 AI engines (Perplexity, ChatGPT, Google AI Overviews), for a total of 15 query-engine combinations per business. Queries were run in the same session within a 48-hour window to control for temporal variation.
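The 5-intent by 3-engine design can be sketched as a simple test matrix. The intent and engine labels come from the methodology above; the citation tally below is a placeholder, not real results:

```python
from itertools import product

# The 5 query intents and 3 engines from the methodology.
intents = ["direct service", "cost/pricing", "comparison", "how-to", "local reviews"]
engines = ["Perplexity", "ChatGPT", "Google AI Overviews"]

matrix = list(product(engines, intents))
print(len(matrix))  # 15 query-engine combinations per business

# Placeholder tally: mark each combination True when the business is cited.
results = {combo: False for combo in matrix}
cited = sum(results.values())
print(f"{cited}/{len(matrix)}")  # the 0/15 notation used in the results table
```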
Scoring: Each business receives an AI Visibility Score (0–100) across six dimensions: Website Diagnostics (15pts), Citation Sources (25pts), NAP Consistency (15pts), AI Query Performance (20pts), Entity Recognition (15pts), and Competitor Gap Analysis (10pts). Tier labels: Bottom 0–34, Mid 35–57, Top 58–100.
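Under the stated weights and tier cutoffs, the scoring can be sketched as follows. The weights and tier boundaries come from the methodology; the per-dimension earned points are hypothetical:

```python
# Dimension weights and tier cutoffs as stated in the methodology.
WEIGHTS = {
    "website_diagnostics": 15,
    "citation_sources": 25,
    "nap_consistency": 15,
    "ai_query_performance": 20,
    "entity_recognition": 15,
    "competitor_gap": 10,
}

def tier(score: int) -> str:
    """Map a 0-100 AI Visibility Score to its tier label."""
    if score <= 34:
        return "Bottom"
    if score <= 57:
        return "Mid"
    return "Top"

# Hypothetical per-dimension earned points for one audited business.
earned = {
    "website_diagnostics": 8,
    "citation_sources": 10,
    "nap_consistency": 6,
    "ai_query_performance": 5,
    "entity_recognition": 9,
    "competitor_gap": 4,
}
score = sum(earned.values())
print(score, tier(score))  # 42 Mid
```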
Data limitations: Perplexity rate-limiting affected 3 queries in the Chicago PI audit and all 5 queries in the Houston and Atlanta PI audits. These are noted as data gaps in individual audit records. Score calculations account for missing data where possible but these limitations should be considered when interpreting PI law findings.
Get a complete AI Visibility Audit — 6-section, 100-point report covering every gap documented in this study.
Get Your AI Visibility Audit — $499