Review Markup — does your site expose ratings as data agents can cite?
Reviews are the trust signal AI agents weight most heavily when comparing options. The signal only counts when it ships as structured data — AggregateRating + Review markup in JSON-LD — not as star glyphs in a JS-rendered widget. Sites that publish structured reviews get cited as the reputation source; sites that show stars only to humans get treated as if they had no reviews at all. The work is one schema block on the product page and a few fields per review.
By Chris Mühlnickel · 2026-05-16
What is Review Markup?
Review Markup is whether your site emits valid Schema.org AggregateRating and Review structured data — the canonical machine-readable surface for ratings, review counts, individual review text, and review provenance (author, date, verified-purchase flag).
By the numbers
- 93% — of consumers say online reviews affect their buying decisions for local businesses. (BrightLocal — Local Consumer Review Survey 2025)
- 270% — higher purchase probability for products with at least five reviews vs. zero reviews. (Spiegel Research Center (via Fera))
- 15-35% — CTR uplift on Google SERP listings that earn star ratings via AggregateRating schema markup. (Schema.dev — How to show star ratings on Google)
Why it matters
Reviews are the highest-bandwidth trust signal agents have access to. When ChatGPT, Claude, Perplexity, or AI Overviews recommend a product, the comparison logic leans heavily on the aggregate rating, the review count, and the qualitative content of individual reviews. The 93% of consumers BrightLocal reports as review-influenced is the demand-side anchor; the supply-side reality is that agents now consume that influence on the user's behalf, and they consume it through AggregateRating and Review markup. A product page with five real reviews surfaced in structured data outranks a competing page with fifty unstructured reviews in the citation game agents play.
Reviews-as-text is invisible to agents. The most common failure mode in calibration is the site that has hundreds of reviews on the product page — rendered visually, complete with star glyphs — but emits zero structured-data markup describing them. To an agent fetching the page, that site has no reviews. The HTML contains review prose, but parsing review prose from a generic <div> requires natural-language extraction that the agent will either skip or get wrong. The structured competitor wins by default. The fix is one JSON-LD block per product page; the cost of the current state is being treated as a no-reviews vendor by every agent that touches the page.
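The gap is easy to verify mechanically. A minimal sketch of that check (the function name and regex-based extraction are illustrative, not a production parser) that answers whether a page's initial HTML carries any AggregateRating markup at all:

```python
import json
import re

def has_review_markup(html: str) -> bool:
    """Return True if the raw HTML contains JSON-LD with an AggregateRating.

    Checks only the server-rendered HTML, which is what many agents see --
    schema injected later by a client-side widget will not be found here.
    """
    pattern = re.compile(
        r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        re.DOTALL | re.IGNORECASE,
    )
    for block in pattern.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed JSON-LD is as invisible as no JSON-LD
        # Walk nested dicts/lists looking for an AggregateRating node.
        stack = [data]
        while stack:
            node = stack.pop()
            if isinstance(node, dict):
                if node.get("@type") == "AggregateRating":
                    return True
                stack.extend(node.values())
            elif isinstance(node, list):
                stack.extend(node)
    return False
```

Run it against the HTML an agent actually receives (a plain fetch, not a headless browser): a page whose stars come from a client-injected widget returns False here even though humans see ratings.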
AggregateRating is the Power slice of Schema Coverage on commerce pages. Schema Coverage is the Clarity Power parameter overall, but on transactional product pages specifically, the AggregateRating block is the single field that flips the page from "summarizable" to "citable as a recommendation." The 15-35% CTR uplift Schema.dev measures on star-rated SERP listings tracks the same signal — humans click stars, agents cite ratings. The work is the same JSON-LD investment for both audiences.
Verified-purchase + provenance becomes the citation gate. Agents trained after 2024 increasingly distinguish reviews with provenance metadata (author identity, date, verified-purchase flag) from anonymous five-star blocks. The 270% purchase-probability lift Spiegel measures for products with five-plus reviews still holds, but the qualitative gap between provenanced and unprovenanced reviews widens every model generation. Sites that publish review provenance structurally are positioned for the next phase; sites that ship anonymous aggregates are exposed to the same kind of trust-collapse that wiped out fake-review-heavy listings on Amazon in 2023-2024.
Where it's heading
Review provenance becomes a binary citation gate. Within 12-24 months expect AI surfaces to treat reviews without verified-purchase or structured author metadata as non-citation material — the same way Google's Spam Brain treats keyword-stuffed pages today. The training data for the next agent generation includes review-fraud signals at scale; the inference-time gating will catch up. Sites that ship provenance now earn the citation premium; sites that don't get filtered.
AI-generated review detection moves to the platform layer. Trustpilot, Google, and major commerce platforms already deploy fake-review detection; the next iteration extends to AI-generated review text identified via watermarking or model-fingerprint analysis. Sites that source reviews from real verified customers — and publish that provenance in Review schema — pass through the new filters unchanged. Sites that pad their counts get downranked across surfaces simultaneously.
Aggregate-rating extensions for agent-specific dimensions land in Schema.org. Schema.org community working groups are discussing extensions for agent-relevant review dimensions — fulfillment quality, return experience, post-purchase support — that map cleanly to the comparisons agents are already running. Sites with strong baseline Review Markup adopt the extensions cheaply; sites still hand-rolling their first AggregateRating block in 2027 are two generations behind.
Common mistakes
- Reviews rendered as text only, no schema. Hundreds of visible reviews on the product page, zero structured markup. To an agent, the page has no reviews.
- AggregateRating without nested Review entries. The summary score earns the star rich result but gives agents nothing to quote. Both layers are needed for citation-grade coverage.
- JS-injected review widgets that never render server-side. Yotpo, Stamped, and similar widgets inject schema on the client. Most agents fetch initial HTML and see nothing.
- Self-serving reviews on Organization or LocalBusiness types. Google explicitly disallowed these in 2019. Move the reviews to a third-party aggregator and link it via sameAs.
- Missing review provenance fields. author, datePublished, and verified-purchase flags. Their absence is increasingly read as a fake-review signal by agents and platforms.
Frequently asked
What's the minimum Review markup my product pages need?
An AggregateRating block nested inside the Product JSON-LD — with ratingValue, reviewCount, and bestRating — plus at least one nested Review entry with author, datePublished, and reviewBody. The product schema is the host; AggregateRating is the summary; Review is the granular evidence agents cite. A Spekto audit reports the per-page gaps.
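A minimal sketch of that block, nested in the Product JSON-LD (product name, rating values, and reviewer details are all illustrative):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Trail Shoe",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "bestRating": "5",
    "reviewCount": "128"
  },
  "review": [
    {
      "@type": "Review",
      "author": { "@type": "Person", "name": "Jane D." },
      "datePublished": "2026-03-02",
      "reviewRating": { "@type": "Rating", "ratingValue": "5" },
      "reviewBody": "Held up through a wet 30 km race. Sizing runs half a size small."
    }
  ]
}
```

Serve it in a script tag of type application/ld+json in the initial HTML response, not injected client-side.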
Do I need both AggregateRating and individual Review entries?
Both, ideally. AggregateRating alone earns the star-rating rich result on Google SERPs and gives agents the headline number. Individual Review entries let agents quote a specific reviewer's claim — increasingly the citation pattern in AI Overview and ChatGPT answers. Sites that ship only the aggregate score get summarized; sites that ship both get directly quoted, which is the stronger citation surface.
Does Google still allow self-hosted Review markup, or do I need a third-party widget?
Self-hosted Review markup is allowed and supported for Product, Recipe, Book, LocalBusiness, and a handful of other types. Self-serving reviews on Organization or LocalBusiness were restricted in 2019 — those reviews need a third-party aggregator like Trustpilot, Trustedshops, or BBB. For product reviews, your own page is the canonical home and structured markup is expected there.
How do I signal review provenance — verified purchase, real customer?
Two paths. Schema.org's Review type supports an author with structured identity, a datePublished, and properties like reviewAspect and publisher to indicate the verification context. Some platforms also emit additionalProperty blocks flagging verified-purchase status. Sites that publish review provenance consistently earn the citation premium agents reserve for trustworthy review sources; sites that don't get treated as unverified.
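One way to express that provenance structurally (the additionalProperty flag for verified purchase is a platform convention, not a standardized Schema.org review field; names and values here are illustrative):

```json
{
  "@type": "Review",
  "author": { "@type": "Person", "name": "Sam K." },
  "datePublished": "2026-04-18",
  "publisher": { "@type": "Organization", "name": "Example Store" },
  "reviewBody": "Return process took two days, refund arrived in full.",
  "additionalProperty": {
    "@type": "PropertyValue",
    "name": "verifiedPurchase",
    "value": "true"
  }
}
```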
What about reviews on a third-party platform — do those count?
They count, but as a separate signal — vendor reputation, not on-page Review Markup. The two are complementary: your own site emits AggregateRating + Review schema for products; aggregators like Trustpilot or BBB host reputation reviews about the business. Both should be linked via sameAs so agents can stitch the picture together.
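The linking itself is a sameAs array on your Organization node (both profile URLs below are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Store",
  "url": "https://www.example.com",
  "sameAs": [
    "https://www.trustpilot.com/review/example.com",
    "https://www.bbb.org/us/example-store"
  ]
}
```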
Are AI-generated or paid fake reviews a real risk for citation rankings?
Yes, and the platforms are responding. Google's 2024 site reputation abuse policy targets review manipulation; the FTC's 2024 rule prohibits fake reviews and AI-generated testimonials. Agents weigh provenance heavily — a review with datePublished, a real author, and a verified-purchase flag carries more weight than an unattributed five-star block. Sites that ship structured provenance are positioned for the next round of platform tightening; sites that don't are exposed.
How do I know my Review markup is valid?
Run the page through Google's Rich Results Test — it validates the AggregateRating + Review schema and reports any field-level errors. Validation must happen on every deploy, ideally in CI, because schema breakage is the failure mode that produces silent ranking drops. A Spekto audit catches the cases where the markup is technically valid but missing the fields agents actually weigh.
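A minimal CI-style sketch of that field-level check (the required-field lists mirror the minimum described above; the function name is illustrative, and this complements rather than replaces the Rich Results Test):

```python
import json

REQUIRED_AGG = ("ratingValue", "reviewCount", "bestRating")
REQUIRED_REVIEW = ("author", "datePublished", "reviewBody")

def review_markup_gaps(product_jsonld: str) -> list[str]:
    """Return missing-field messages for a Product JSON-LD string."""
    data = json.loads(product_jsonld)
    gaps = []
    agg = data.get("aggregateRating") or {}
    for field in REQUIRED_AGG:
        if field not in agg:
            gaps.append(f"aggregateRating missing {field}")
    reviews = data.get("review") or []
    if isinstance(reviews, dict):
        reviews = [reviews]  # a single Review may be emitted unwrapped
    if not reviews:
        gaps.append("no nested Review entries")
    for i, review in enumerate(reviews):
        for field in REQUIRED_REVIEW:
            if field not in review:
                gaps.append(f"review[{i}] missing {field}")
    return gaps  # empty list means the minimum fields are present
```

Wire it into the deploy pipeline so a nonempty return fails the build; silent schema breakage then surfaces as a red CI run instead of a ranking drop weeks later.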