Writing FAQ content for Perplexity and Google AI Mode is a smaller and more disciplined job than most SEO guidance implies. The FAQs that get cited share a short list of qualities: the questions come from real shopper input rather than keyword research, each answer stands alone as a complete response, the visible text and the FAQPage schema match exactly, and the total number of questions stays small enough that none of them is padding. The FAQs that do not get cited usually fail on the same list: invented questions, answers that assume prior context, schema that diverges from the visible page, and volume that dilutes quality. This article is about how to build FAQ sections that end up in the answers, not in the footnotes of unused rich results.
Short answer
Pull FAQ questions from customer service logs, live chat, and review text, not from keyword tools. Write each answer as two to four sentences that stand alone without the product page context. Keep the visible FAQ and the FAQPage schema in exact parity. Limit each page to three to six entries. Render the schema server-side. Test citation by running a monthly prompt set against Perplexity and Google AI Mode and logging which queries surface your pages.
What you need to know
- Questions come from real shopper data. Customer service tickets, chat logs, reviews, and pre-purchase email. Not keyword tools.
- Answers must stand alone. An extracted answer will often appear without the question context, so each answer should restate enough to make sense on its own.
- Parity between the visible FAQ and the schema is mandatory. Divergence lowers citation confidence and is a Google policy risk.
- Three to six entries per page. Above that, the FAQ starts to repeat itself or pull in intent the page should not carry.
- Perplexity and Google AI Mode have different patterns. Perplexity cites source pages heavily and rewards dense answer pages. Google AI Mode leans on authoritative, well-structured sites and schema parity.
- Measurement is manual for now. Referrer data from AI engines is partial; prompt-set testing is the reliable check.
Where do the questions come from?
The single most consequential decision in FAQ writing is where the questions come from. FAQs built from keyword research read generic and rarely extract. FAQs built from what shoppers actually ask extract routinely.
The source discipline:
Customer service tickets and chat logs. These are the richest source. Pull the last three to six months of tickets for a product or category. Cluster by topic. The clusters that contain three or more independent inquiries become candidate FAQs.
Reviews. Review text often contains implicit questions ("I wish I had known that it only fits stroller models from 2020 or later"). Each implicit question is a candidate FAQ. Reviews also provide the language shoppers actually use, which is often different from the phrasing used in marketing copy.
Pre-purchase email and contact form inquiries. The questions that come in before purchase are the objections blocking conversion. They belong on the product page or collection page where the purchase decision is being made.
Returns reasons. When the returns data reveals a pattern ("fit smaller than expected", "was not compatible with model X"), the FAQ should address the underlying uncertainty before the purchase, not after.
Search data inside the store. Site search logs reveal the questions shoppers ask when they cannot find an answer on the page they are on. A long tail of searches for "material composition" or "washing instructions" is an FAQ opportunity.
Keyword tools and People Also Ask are a supplementary input at best. They surface what people ask generally, not what your specific shoppers ask about your specific products. The overlap is real, but the priority belongs with first-party data.
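The clustering pass over tickets and chat logs can start as a very small script. A minimal sketch in Python, assuming each ticket has already been tagged with a topic label in a prior manual or automated pass; the topic labels and customer IDs below are illustrative, not from any real dataset:

```python
def candidate_faqs(tickets, min_inquiries=3):
    """Group support tickets by topic label and return the topics with
    enough independent inquiries to justify an FAQ entry.

    `tickets` is a list of (customer_id, topic) pairs."""
    # Count distinct customers per topic so repeat messages from one
    # shopper do not inflate a cluster.
    customers_per_topic = {}
    for customer_id, topic in tickets:
        customers_per_topic.setdefault(topic, set()).add(customer_id)
    return sorted(
        topic for topic, customers in customers_per_topic.items()
        if len(customers) >= min_inquiries
    )

tickets = [
    ("c1", "washing"), ("c2", "washing"), ("c3", "washing"),
    ("c4", "sizing"), ("c4", "sizing"),  # same customer twice
    ("c5", "shipping"),
]
print(candidate_faqs(tickets))  # ['washing']
```

The distinct-customer count is the important design choice: three messages from one frustrated shopper is a support problem, not an FAQ signal.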
How should each answer be written?
The answer is where most FAQ content fails. The pattern that works is short, self-contained, specific. The pattern that fails is short, vague, and dependent on the question for context.
The structural rules:
Two to four sentences, with a complete opening. The first sentence should stand alone as a valid quote. Avoid opening with "Yes" or "No" without a restatement; "Yes, the vest is machine washable at 30 degrees." reads better extracted than "Yes." does.
Concrete nouns and numbers. "The weighted vest ships within two business days from our Lisbon warehouse." is extractable. "We ship quickly from our warehouse." is not.
Restate the subject. An extracted answer often appears without its question. "It weighs 20 kg and adjusts between 80 and 140 cm" is useless without the subject. "The Meridian vest weighs 20 kg and adjusts between 80 and 140 cm" is complete.
Name constraints and exceptions. If the answer has a caveat, include it in the same paragraph. Constraints make the answer more trustworthy, not less. "Free returns are available within 30 days from delivery, provided the product is unused and in its original packaging." is stronger than the version without the conditions.
Avoid answers that only restate the question. "Is the product vegan? Yes, the product is vegan." adds nothing. Rewrite to include what makes it vegan, what certification if any it carries, and what ingredients the shopper might worry about.
Keep the voice consistent with the rest of the page. A formal FAQ on a casual brand page reads as unrelated content bolted on. Match the register.
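The structural rules above are checkable, which makes them a good candidate for a lint pass before publishing. A rough sketch in Python; the heuristics are simplifications of the rules, not a substitute for editing, and the example product name is illustrative:

```python
import re

def lint_answer(answer, subject):
    """Flag FAQ answers that break the structural rules: two to four
    sentences, no bare yes/no opening, subject restated.

    `subject` is the product name the answer should restate."""
    findings = []
    sentences = [s for s in re.split(r'(?<=[.!?])\s+', answer.strip()) if s]
    if not 2 <= len(sentences) <= 4:
        findings.append("answer should be two to four sentences")
    if re.match(r'^(Yes|No)[.!]', answer.strip()):
        findings.append("bare yes/no opening; restate the subject")
    if subject.lower() not in answer.lower():
        findings.append("answer does not restate the subject")
    return findings

print(lint_answer("Yes. It is washable.", "Meridian vest"))
# ['bare yes/no opening; restate the subject',
#  'answer does not restate the subject']
```

An answer like "Yes, the Meridian vest is machine washable at 30 degrees. Remove the weight inserts first." passes all three checks.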
How should FAQPage schema be implemented?
FAQPage is one of the structured data types most actively policed by Google, which makes implementation discipline more important than coverage. The implementation that holds up:
Match visible content exactly. Google's FAQPage structured data guidance requires that the JSON-LD content correspond to the FAQ visible on the page. Truncated answers in the schema, extra entries not shown on the page, and different phrasing all break the parity requirement.
Render server-side. On Shopify, the FAQPage JSON-LD should emit from the theme Liquid at initial render, not from a client-side script added by an app. Server rendering keeps the schema reachable by crawlers that do not execute scripts.
Use only where a genuine FAQ section exists. Applying FAQPage schema to pages where the "FAQ" is two bullet points inside a benefit section is a misuse. A genuine FAQ has a heading, three or more question and answer pairs, and addresses actual shopper questions.
Do not duplicate across pages. The same shipping and returns FAQ emitted as FAQPage schema on every product page looks like boilerplate padding. Consolidate shared FAQs on a dedicated page and keep product pages for product-specific questions.
Keep one FAQPage block per page. Multiple FAQPage JSON-LD blocks on the same page cause parsing ambiguity. Consolidate to one.
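Put together, the rules imply a single JSON-LD block whose question and answer text match the visible FAQ word for word. A minimal sketch, reusing the illustrative product and answers from earlier in this article:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Is the Meridian vest machine washable?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes, the Meridian vest is machine washable at 30 degrees. Remove the weight inserts first and air dry."
      }
    },
    {
      "@type": "Question",
      "name": "How long does delivery take?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "The Meridian vest ships within two business days from our Lisbon warehouse."
      }
    }
  ]
}
```

Every `text` value here should be a character-for-character copy of the paragraph the shopper sees on the page.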
Stores on Online Store 2.0 themes often get the cleanest result by building a reusable FAQ section that renders the visible FAQ and emits the JSON-LD from the same data source, so the two cannot drift.
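One way that single-source pattern can look in a theme section: the loop below renders the visible FAQ and emits the JSON-LD from the same block settings. The section, block, and setting names (`faq_item`, `question`, `answer`) are illustrative, not a Shopify standard; the `json`, `escape`, and `strip_html` filters are standard Shopify Liquid.

```liquid
{% comment %}
  sections/faq.liquid (illustrative) — visible FAQ and FAQPage
  JSON-LD rendered from the same block data, so the two cannot drift.
{% endcomment %}
<h2>Frequently asked questions</h2>
{% for block in section.blocks %}
  <h3>{{ block.settings.question | escape }}</h3>
  <div>{{ block.settings.answer }}</div>
{% endfor %}

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {% for block in section.blocks %}
    {
      "@type": "Question",
      "name": {{ block.settings.question | json }},
      "acceptedAnswer": {
        "@type": "Answer",
        "text": {{ block.settings.answer | strip_html | json }}
      }
    }{% unless forloop.last %},{% endunless %}
    {% endfor %}
  ]
}
</script>
```

Because Liquid renders on the server, the JSON-LD is present in the initial HTML response, which satisfies the server-side rendering rule above.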
What do Perplexity and Google AI Mode reward differently?
The two engines both cite FAQ content regularly, but the patterns of what gets cited differ.
Perplexity leans on dense answer pages. Perplexity's public material describes its behaviour as real-time retrieval that grounds answers in cited sources, as documented in Perplexity's business guidelines. Pages that give a clear, specific answer in the first paragraph of the relevant section tend to earn the citation slot; pages that hedge or require context earn fewer.
Google AI Mode leans on authoritative, well-structured content. According to Google's AI Mode announcement, the feature leans on Google's existing ranking signals and generative layer. In practice, FAQ content that performs well in classic Search also tends to perform in AI Mode, with schema parity and answer completeness as reinforcing signals.
Freshness matters more for Perplexity. Perplexity fetches live and prefers pages whose last modified date is recent. A page updated this quarter has a practical advantage over one updated two years ago for the same query.
Attribution is tighter on Perplexity. Perplexity lists sources visibly in its answers, which means the citation is directly observable. Google AI Mode's citations vary in visibility; some answers carry visible source links, others present the generative summary with the sources accessible through a secondary click.
The implication for writing is that the same FAQ structure works for both engines, but the supporting infrastructure differs. Perplexity rewards live accessibility and freshness; Google AI Mode rewards authority signals accumulated in Search.
What mistakes should I cut from the FAQ section?
The patterns that consistently hurt citation outcomes:
Invented questions. "Why do you love our products?" is not a question a shopper asks. It reads as marketing and does not extract.
Answers that pitch. "Because of our unmatched quality, our sustainable materials, and our dedication to craftsmanship." does not answer. It pitches. Replace with concrete facts.
Overlapping questions. "Is the product durable?" and "How long will the product last?" are the same question. Collapse to one entry.
Schema drift. Visible FAQ updated six months ago, schema still reflects the older version. Run a quarterly audit.
FAQ volume as an SEO tactic. Fifteen FAQs on a product page dilute the section. The engine cannot tell which answer matters, and the whole block loses authority. Fewer, sharper questions outperform longer, padded sets.
Hiding FAQs behind closed accordions. When the FAQ section is collapsed by default, some crawlers still read the content but others extract a smaller subset. Leaving the FAQ open by default, at least on mobile, avoids the edge cases.
Cross-linking FAQs as a coverage play. Building an FAQ as a series of internal links to product pages and blog posts turns the FAQ into a navigation menu, which extracts poorly. The FAQ should answer in place; links belong inside the answers, not as the answers.
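The schema-drift audit can be a small script rather than a manual check. A rough sketch in Python using only the standard library, assuming the page HTML has already been fetched and the visible FAQ pairs already extracted; the regex assumes the exact `type="application/ld+json"` attribute form, which is a simplification:

```python
import json
import re

def schema_faq_pairs(html):
    """Extract (question, answer) pairs from any FAQPage JSON-LD
    blocks embedded in the page HTML."""
    pairs = []
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    for raw in re.findall(pattern, html, re.DOTALL):
        data = json.loads(raw)
        if data.get("@type") != "FAQPage":
            continue
        for entity in data.get("mainEntity", []):
            pairs.append((entity["name"], entity["acceptedAnswer"]["text"]))
    return pairs

def parity_report(visible_pairs, html):
    """Compare the visible FAQ against the schema; any difference
    in either direction is a drift finding."""
    schema_pairs = schema_faq_pairs(html)
    return {
        "missing_from_schema": sorted(set(visible_pairs) - set(schema_pairs)),
        "extra_in_schema": sorted(set(schema_pairs) - set(visible_pairs)),
    }

html = '''<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "FAQPage", "mainEntity": [
  {"@type": "Question", "name": "Is the vest machine washable?",
   "acceptedAnswer": {"@type": "Answer",
   "text": "Yes, the vest is machine washable at 30 degrees."}}]}
</script>'''
visible = [("Is the vest machine washable?",
            "Yes, the vest is machine washable at 30 degrees.")]
print(parity_report(visible, html))  # no drift: both lists empty
```

Running this quarterly over the pages that carry FAQPage schema turns the parity requirement into a repeatable check instead of a memory exercise.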
Frequently asked questions
How many FAQ entries should a page carry before the quality starts to drop?
Three to six entries per page is where most operational FAQs land. Below three, the page rarely covers the core objections shoppers bring. Above six, the questions start to repeat each other or address intent the page should not be carrying. The ceiling is lower than most SEO guidance suggests because AI engines reward concentrated, specific answers and discount padded coverage.
Can I reuse the same FAQ across multiple product pages?
The shipping, returns, and warranty questions that apply to the whole catalogue can be reused. Product-specific questions should not be reused, because an answer that is identical across ten products reads as generic to the engine and loses extraction priority. If a question applies to a category, keep it on the collection page instead of duplicating it on every product.
Do AI engines treat a visible FAQ and an FAQPage schema block as redundant?
They treat them as complementary and cross-check them. The visible FAQ is what gets extracted and cited in the answer. The schema is the signal that tells the engine the section is structured FAQ content rather than unrelated prose. If the schema and the visible text diverge, the engine usually trusts the visible text for extraction and lowers confidence in the page overall.
Is it still safe to use FAQPage schema given Google's tightened rich result rules?
Yes, when used on pages where a visible FAQ section exists and the questions reflect real shopper intent. The tightening is directed at manipulative applications: FAQPage schema on pages without visible FAQs, sites applying it to every page as a coverage tactic, and schema that does not match the visible content. Applied with discipline, FAQPage remains supported and is frequently cited by AI engines.
How do I know whether my FAQ is actually being cited, as opposed to just indexed?
Run a small prompt set against the AI engines you care about, using the questions your FAQ answers in natural shopper phrasing. Record which queries return your site as a source and which return competitors. Repeat monthly. Referrer data from Perplexity and Google AI Mode is partial and delayed, so the prompt-set method is the most reliable way to confirm extraction in the short term.
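The prompt-set log does not need tooling beyond a spreadsheet, but the summary step is easy to script. A minimal sketch in Python, assuming the log rows are filled in by hand after running each prompt (the prompts, domains, and dates below are illustrative, and no AI-engine API is involved):

```python
from datetime import date

def citation_rate(log, our_domain):
    """Share of prompt runs whose cited sources include our domain.

    `log` rows: (run_date, prompt, engine, cited_domains)."""
    if not log:
        return 0.0
    hits = sum(1 for _, _, _, domains in log if our_domain in domains)
    return hits / len(log)

log = [
    (date(2024, 5, 1), "is the meridian vest machine washable",
     "perplexity", ["example-store.com", "reddit.com"]),
    (date(2024, 5, 1), "meridian vest delivery time eu",
     "google-ai-mode", ["competitor.com"]),
]
print(citation_rate(log, "example-store.com"))  # 0.5
```

Tracking the same number per engine and per month is what turns a one-off spot check into a trend you can act on.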
Key takeaways
- Source questions from customer service logs, reviews, and pre-purchase data. Keyword tools are a supplement, not the source.
- Write each answer as two to four sentences that stand alone. The first sentence should be a complete quote without the question context.
- Keep three to six questions per page. Volume dilutes; concentration compounds.
- Maintain exact parity between the visible FAQ and the FAQPage schema. Render server-side.
- Validate with a monthly prompt set against Perplexity and Google AI Mode. The referrer data alone is not enough to confirm whether the FAQ is being cited.
This article is intended for informational purposes. Search engine guidance, AI provider behaviour, and structured data policy can change over time. Verify current details with Google Search Central, each AI provider's published guidance, and a direct conversation with nivk.com before making a strategic or technical decision.



