The Great GEO (Generative Engine Optimization) Grift

Scary new paradigm? Evolution of SEO? Something else? Let's get to the bottom of this once and for all.

David McSweeney

October 16, 2025

You’re being sold a lie.

And in some cases, a lie that may end up damaging your business.

That lie is that “GEO” (generative engine optimization) is a discipline fully distinct from—or even a replacement for—SEO.

In this article I’m going to argue that GEO, at least in its most widely presented form, is:

  1. nothing new
  2. possibly a waste of your time
  3. definitely a waste of your money
  4. polluting the web
  5. potentially harming your business

Why should you listen to me?

Well, you should make your own mind up. After all, I’m just some guy on the internet. But, I’ve been around the block.

  • Involved in SEO since 1997.
  • Ran my own e-commerce business from 2006 to 2012.
  • Consulted for 15 years.
  • Former head of content for Ahrefs and Seobility.

I’m also an AI obsessive and an early adopter.

So, make no mistake: I don’t write this from a position of ignorance regarding the tech, or a fear for my own position.

Indeed, if the latter were the case, it would be easier for me to simply pivot to selling “GEO” services like half of my peers.

But here’s why I don’t.

⬥⬥⬥

Part 1: Some “GEO tactics” are solid. But they don’t need a new acronym; they’re just SEO.

Gaslighting (noun): psychological manipulation of a person usually over an extended period of time that causes the victim to question the validity of their own thoughts, perception of reality, or memories and typically leads to confusion, loss of confidence and self-esteem, uncertainty of one's emotional or mental stability, and a dependency on the perpetrator
— Merriam-Webster Dictionary

There’s a fair amount of rewriting of history going on at the moment.

An attempt to reframe best practices that have been part of any well-executed SEO strategy for at least 10 years as shiny new “GEO” tactics.

Let’s gather some information.

We’ll start by looking at the top “geo tactics” recommended by Google’s AI mode. Seems appropriate?

GEO tactics per Google’s AI Mode: a search for “geo tactics” in AI Mode.

Let’s log them as we go (I too can read the text in images).

Google’s AI Mode: Create expert-led, authoritative content, prioritize user intent, optimize content structure, implement structured data, use an omnichannel strategy, monitor and iterate.

Next, let’s do some deep research with GPT5.

Deep research on “geo tactics” with GPT5.

GPT5 was wordy in its response (it was, after all, deep research), but here were the headlines:

GPT5: Allow and optimize AI crawling, create definitive content on your topic, optimize key pages (About, Home, Services) for AI recommendations, get featured on authoritative “best of” lists, encourage reviews and address feedback, use schema and structured data, incorporate multimedia and alternative formats, engage in industry conversations (forums/social), monitor AI mentions and refine content accordingly, avoid black-hat or over-optimization tricks

Moving on to Semrush’s AI Visibility tips (in fairness they don’t call it GEO here, but they do elsewhere which I’ll cover):

Semrush’s AI visibility tips.

Semrush: brand mentions, content quality and originality, citations quotes and statistics, structured data, content freshness.

Finally, let’s go for the top ranking Google result for “geo tactics” at the time of writing.

Foundation Inc: Research topics relevant to your customers, create query intent based content at scale, implement digital PR activities, incorporate structured data, focus on user intent, distribute your content, embrace multimedia, leverage social media.

Now let’s boil this down a bit and extract the core “GEO tactics” from these four* sources.

* If four sources seems a little light, bear in mind that Google’s AI mode and GPT5’s deep research synthesized from multiple sources, so the actual number is much higher.
Here are the core tactics that emerged across the four sources:

  1. AI crawler access & technical parseability
  2. Structured data / schema
  3. Expert, definitive, original content
  4. User intent & topic research
  5. Answer-extractable structure
  6. Proof on key brand pages + reviews
  7. Off-site authority & PR
  8. Multimedia / repurposing
  9. Social / community engagement
  10. Monitoring & iteration

So now we have ten core GEO tactics. Let’s break them down.
Note: I asked GPT5 to create a steelman argument for each of these. Why? Because the published arguments were so weak and easy to rebut (generally variations of spam Reddit, get into “best x for y” listicles, break up your content into semantic chunks). And also because GEO advocates will often try to bamboozle you with jargon. But if you know of any strong, cohesive argument for GEO as its own, stand-alone discipline that goes beyond what I have below, then feel free to send it my way.

1. Ensure AI crawlers can access and parse your content

📢 The GEO Argument
“AI-oriented crawling is a different problem than search-engine crawling. Classic SEO optimizes for document indexing and ranking within a link graph; AI answer engines need structured, liftable facts they can ingest, chunk, and recompose into generated text. That pushes you to design answer surfaces rather than pages: server-rendered fact blocks, lightweight HTML fallbacks for JS apps, and explicit machine-readable endpoints (e.g., JSON feeds of specs/pricing/stats) that LLM retrievers can fetch and embed. It also adds bot governance beyond Google/Bing: explicit allow/deny rules for LLM crawlers, rate-limits to manage token-hungry fetchers, and clarity on reuse/licensing so models can quote safely. None of that targets a SERP; it targets an embedding pipeline.

Operationally, SEO logs tell you crawl → index → rank. AI crawling introduces a separate telemetry and risk surface: you watch for specific AI bot IDs, bursts of high-depth fetching, and 4xx/5xx spikes that break downstream embeddings (which can freeze your old facts into future answers). You may expose HTML snapshots of app pages solely for generative crawlers, or publish canonical “facts.json” with stable IDs and timestamps to minimize entity drift—behaviors that don’t move traditional rankings but materially change whether, and how, a model reuses your information. The success metric isn’t impressions or positions; it’s whether your facts enter the model’s retrieval corpus accurately enough to be generated back with attribution.

Finally, governance differs. With search engines, robots rules are mostly about inclusion/exclusion. With AI crawlers you’re also making policy decisions: which sections of your site may be quoted verbatim, under what terms, and at what crawl budget—because generative reuse is redistribution at scale. You might allow general crawling but disallow model training, or expose only specific, licensed blocks (TL;DRs, spec tables) designed for safe quotation. That mix of technical exposure, licensing posture, and embedding hygiene is answer-engine ops, not classic SEO.”

This is probably the “tactic” (or category of tactics) that has the strongest argument for being distinct. It’s hard to argue that the growth in chat based interactions, and—behind the scenes—the way LLMs index (initial training) and retrieve data (real time tool calls) hasn’t introduced several new considerations for SEOs. So I will make some concessions here.

But I will also argue that it’s evolution, not revolution. And that it doesn’t require a new acronym; it’s just an extension of SEO.

Let’s start with the obvious point.

Ensuring that your content is crawlable and indexable is absolutely basic, day-one SEO. Whether that indexing is for Google or for ChatGPT, if your content can’t be accessed and parsed, it’s not going to show up. I think that’s a given.

And dealing with (and adapting to) the limitations of crawlers is certainly nothing new. Some of us still have shellshock from figuring out how to create indexable versions of Flash websites back in the late 90s. For a time, AMP pages were the future. And today, if we need to create HTML snapshots or chunkable JSON versions (arguably we shouldn’t have to, but I’ll set that aside for now), then that’s just an adaptation, not a sea change.

Control over crawling is also not a new paradigm. After all, choosing what should not be indexed has always been just as important for SEO as choosing what should be.

  • We block content/directories in robots.txt.
  • We add noindex to pages.
  • We 301 redirect and consolidate.
  • We apply data-nosnippet…
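And the same mechanics extend to AI crawlers with barely any new machinery. Here’s a minimal robots.txt sketch that keeps normal crawling open but opts specific sections or bots out; the user-agent tokens below are real published crawler names (verify current tokens in each vendor’s docs), while the paths are hypothetical:

```txt
# Hypothetical example: stay open to search, make explicit choices for AI bots.
# Verify current user-agent tokens against each vendor's documentation.

User-agent: GPTBot            # OpenAI's crawler
Disallow: /

User-agent: Google-Extended   # opts content out of Google's AI training
Disallow: /

User-agent: ClaudeBot         # Anthropic's crawler
Disallow: /internal/

User-agent: *
Disallow: /internal/
```

Same file, same directives, same decisions we’ve been making since the 90s; the only novelty is a few new names in the user-agent field.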

I’ll concede the point that due to training cut-offs, it’s critical that information is correct at the time of a particular model’s training run. And indeed, that there are (important) decisions to be made on whether content can be used in training and/or retrieval.

But I’ll contend that the above falls under optimization and policy. Ensuring information is correct seems like general good practice (would you want wrong information on your website?). And there were always decisions to be made about content that should not be indexable, particularly sensitive content. Although feel free to disagree with me here.

Log file analysis can be important, but generally only at the enterprise level. For smaller sites it’s probably going to be overkill. But regardless, checking logs for errors and managing crawl budget has long been part of SEO. The specifics on what we look for might have changed (or been added to), but you know… change is inevitable.
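And to that point: watching for AI bots in your logs needs no new tooling. It’s the same log parsing SEOs have always done, with a few extra user-agent strings. A minimal sketch, assuming combined-format access logs (the bot token list is illustrative and non-exhaustive, and the sample log lines are made up):

```python
import re
from collections import Counter

# Illustrative, non-exhaustive user-agent substrings of known AI crawlers.
# Check each vendor's documentation for current tokens.
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot"]

def ai_bot_hits(log_lines):
    """Count (bot, status) pairs for AI crawler requests in combined-format logs."""
    hits = Counter()
    for line in log_lines:
        # Combined log format ends: "METHOD /path HTTP/x" status bytes "referer" "user-agent"
        m = re.search(r'"[A-Z]+ \S+ HTTP/[\d.]+" (\d{3}) .*"([^"]*)"$', line)
        if not m:
            continue
        status, ua = m.group(1), m.group(2)
        for bot in AI_BOTS:
            if bot in ua:
                hits[(bot, status)] += 1
    return hits

# Fabricated sample lines for illustration only.
sample = [
    '1.2.3.4 - - [16/Oct/2025:10:00:00 +0000] "GET /pricing HTTP/1.1" 200 5120 "-" "Mozilla/5.0; compatible; GPTBot/1.2; +https://openai.com/gptbot"',
    '1.2.3.5 - - [16/Oct/2025:10:00:01 +0000] "GET /old-page HTTP/1.1" 404 320 "-" "Mozilla/5.0 (compatible; ClaudeBot/1.0)"',
    '1.2.3.6 - - [16/Oct/2025:10:00:02 +0000] "GET / HTTP/1.1" 200 9000 "-" "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"',
]

print(ai_bot_hits(sample))
```

Spotting a burst of 404s against a particular bot (as above) is exactly the “new telemetry surface” the steelman describes. It’s a grep.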

2. Add structured data and schema

📢 The GEO Argument
“Structured data for answer engines functions less like “rich result markup” and more like an API contract between your site and LLM retrieval systems. In this frame, JSON-LD isn’t there to influence SERP appearance; it’s a machine-readable source of truth that exposes entities, attributes, and relationships with stable IDs (@id) and sameAs links so models can disambiguate you from near-identical names and map products/services/authors back to a single canonical graph. You’re declaring what may be quoted (FAQ, HowTo steps, specs, pricing) in a form that can be parsed, embedded, and recomposed verbatim—independent of rankings, snippets, or click-through.

Operationally, this becomes facts-as-code. Teams version the schema payloads, attach timestamps (dateModified), and align them to internal data sources (pricing, SLAs, certifications, inventory). That lets assistants verify freshness and ground generated answers against your declared facts. You also add policy metadata (e.g., license notices adjacent to TL;DR blocks) so generative systems can reuse excerpts safely. Success isn’t a star rating in SERPs; it’s whether your structured graph is ingested by LLM retrievers and produces stable, attribution-friendly outputs across chat, voice, and AI overviews.

Finally, schema here orchestrates multi-surface consistency, not blue-link visibility. The same JSON-LD graph can feed product cards in chat, support agent copilots, or voice responses on devices—surfaces where there is no “position 1.” In other words, structured data becomes an interoperability layer between your brand’s knowledge and generative ecosystems. That mandate—designing a canonical, license-aware, updateable knowledge interface—is answer-engine operations, distinct from traditional SEO goals and feedback loops.”

Where the crawling and indexing argument was relatively strong, the case that structured data/schema is distinct to GEO is embarrassingly weak.

I mean, it’s patently ridiculous. In fact, we don’t really need to go much further than Wikipedia here:

Schema.org is an initiative launched on June 2, 2011, by Bing, Google and Yahoo! (operators of the world's largest search engines at that time) to create and support a common set of schemas for structured data markup on web pages. They propose using the schema.org vocabulary along with the Microdata, RDFa, or JSON-LD formats to mark up website content with metadata about itself. Such markup can be recognized by search engine spiders and other parsers, thus granting access to the meaning of the sites (see Semantic Web).
— Schema.org definition, Wikipedia

But regardless, we will.

So let’s break down the facts.

Schema has been around for over 14 years and was literally proposed by search engines themselves to help them better understand the meaning of sites.

We (SEOs) have long recommended adding relevant schema to web pages to aid machine understanding of our websites/content (current AI is just a new version of that “machine”). And yes, in some cases, that also meant getting some shiny SERP features like review stars or product prices.

Any basic SEO plugin will have fields where you can add related entities for sameAs relationship mapping/disambiguation. Of course, you have to make sure it’s all correct; but that’s just SEO.

Don’t believe me?

Here’s an article on entity disambiguation (content focused) from 2020, here’s one on entity disambiguation from 2014, and here’s the dearly missed Bill Slawski also talking about entity disambiguation in the same year.

In short: the goal, or the outcome, may be slightly different. Perhaps we were optimizing for the knowledge graph. Maybe we were optimizing for local SEO. Perhaps we just wanted review stars. But the process is SEO.
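To make the point concrete: the sameAs disambiguation the steelman dresses up as an “API contract” is a few lines of bog-standard JSON-LD, supported since schema.org launched and exposed by every mainstream SEO plugin. A sketch for a hypothetical brand (all names and URLs invented):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example-widgets.com/#organization",
  "name": "Example Widgets",
  "url": "https://example-widgets.com/",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Widgets",
    "https://www.linkedin.com/company/example-widgets",
    "https://x.com/examplewidgets"
  ]
}
```

Stable @id, entity disambiguation via sameAs, machine-readable “source of truth”: all of it is the same markup SEOs have been shipping for over a decade.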

3. Expert, definitive, original content

📢 The GEO Argument
“For answer engines, the unit of success isn’t a page that ranks—it’s a set of liftable claims (definitions, stats, steps, specs, outcomes) that can be recomposed into a response with confidence and, ideally, attribution. That reframes “expert content” from long-form topical coverage into evidence-dense knowledge objects: TL;DR blocks, micro-tables, cited figures, and verifiable claims placed high and written to be quoted verbatim. The editorial bar shifts from “help the user on-page” to “be a source of record assistants can reuse safely,” which adds provenance, dates, and permissive reuse language to the content brief.

Operationally, this becomes facts governance, not just publishing. Teams maintain source lists, cite primary materials, version key numbers, and set freshness SLOs (e.g., pricing ≤30 days; stats ≤90 days). Authors are treated as entities (bios, credentials, sameAs) so models can resolve expertise; case studies include measurable outcomes so a recommendation can be justified. You design for quotability and auditability first, narrative second—shipping small, authoritative artifacts (datasets, FAQs, step lists) alongside the article so assistants can ground to them directly.

Success isn’t “more words” or “position X”; it’s being reused accurately: fewer hallucinations about your brand, consistent citations for your canonical numbers, and your answers appearing in chat/voice contexts where there is no SERP. That emphasis on evidence, provenance, and liftability makes “expert, definitive, original content” a content-ops discipline aimed at generation pipelines, not just a traditional SEO page meant to win clicks.”

Ok, let’s take a pause.

Because ignoring the hellscape language for a moment—“evidence-dense knowledge objects”, “authors are treated as entities”—this sounds pretty impressive, doesn’t it? (We’ll come back to that language later.)

The problem is, it’s bullshit. Observably, computationally, and technologically, it’s just not what’s happening. It’s attributing far too much intelligence to current models (more on that later), and the way they ground with search through live retrieval.

LLMs aren’t BBC Verify. They’re statistical, probabilistic models. They don’t “know” the truth. In fact they don’t really “know” anything.

Sidebar question: where did LLMs get their current “knowledge” from? Was it the pre-chunked, pre-sliced, pre-robotized internet perhaps?

And the reality is, we already chunk just fine. We already verify our facts just fine. We’ve been “chunking” for years to try and grab position zero snippets (the zero often had a dual meaning, funnily enough: zero clicks). We’ve been verifying facts and showing proof of work/evidence to demonstrate E-E-A-T.

Test: grab a longform article (natural language, not lobotomized for GEO) that includes statistics and facts. Ask your LLM of choice to summarize the statistics and facts. Can it do it? Of course it can.

We’ve been making sure our bios are complete. And we also know that authors-as-entities has serious issues anyway, as Google already tried and failed with authorship over a decade ago.

We can justifiably take a razor here to the word vomit and simplify down to:

  • Write with clarity (did we previously write with unclarity?)
  • Cite your facts and statistics (E-E-A-T, medic)
  • Provide evidence for your claims (E-E-A-T)
  • Include summaries/tl;dr (we do that. In fact we probably do it too much if anything)
  • Include tables, bullet points, FAQs (such wisdom!)
  • Provide credentials for your authors (E-E-A-T, medic)
  • “Design for quotability and auditability first, narrative second” (how about no…)
  • Ensure your pricing, statistics, facts are up to date (just good governance)
  • Ensure consistency in your messaging (the fundamental of branding)

Absolutely. Nothing. New. And most of it attributes far too much intelligence to the observable state of LLMs. Just like we’ve done with Google for years, ironically.

4. User intent and topic research

📢 The GEO Argument
“In an answer-engine world, “intent” isn’t a proxy for keyword categories; it’s prompt ecology—the combinatorial space of natural-language tasks, constraints, and personas that LLMs resolve. Research shifts from volumes and SERP features to prompt archetypes and constraint frames (“for a 3-person team,” “under £500,” “no vendor lock-in,” “UK compliance”), because those modifiers directly steer retrieval and reasoning. The deliverable isn’t a keyword map; it’s a prompt map: clusters of question forms, follow-ups, and oblique phrasings the model treats as equivalent, plus the evidence each requires to justify a recommendation. You design content to satisfy reasoning paths, not just match queries.

Operationally, this becomes an experimental program, not a desk study. Teams run cold-start prompt panels across models, personas, and stages to observe which constraints change source inclusion, then back-propagate findings into content specs (facts needed, proofs missing, angle gaps). Support tickets, sales calls, community threads, and product telemetry feed the prompt map because LLMs absorb real phrasing variance at scale. The output is a backlog of “answerable tasks” with embedded evidence requirements (stats, SLAs, case effects), each tied to a content unit or proof block. Success isn’t rank or even traffic—it’s coverage of decision-shaping prompts and the model’s ability to assemble your evidence when those prompts fire.

Finally, intent research for generative surfaces must model multi-turn trajectories. Users don’t just ask once; they refine: “best X” → “for nonprofit” → “migration time?” → “risks?”. Your plan has to anticipate those follow-ups with pre-authored, liftable answers and linkable proofs so the assistant can maintain context and still ground claims. That’s conversation design for evidence retrieval—a product of answer-engine ops—distinct from the single-query mindset and feedback loops of traditional search reporting.”

Let’s cut through the noise here. What we’re talking about is buyer’s journey, personas, search intent, and a redefining of long-tail keywords.

I mean, just swap “prompt map” for “keyword map” and you’re halfway there. The fact that LLMs (may) crawl support tickets and community threads is a nice bonus: you get that for free.

But let’s break it down a little further.

Firstly, there is very little point in me explaining what a buyer’s journey is and the importance of it. So instead I’ll just link to some pre-LLM articles which do a better job than I can of discussing why it’s so critical for SEO:

My boy E. St. Elmo Lewis way ahead of his time there. Conducting GEO 119 years before the release of “Attention is all you need”. And I’m only half joking.

Beyond this, we’re back to verifiable claims etc. (covered above already), and trying to present SEO as something which it is not. “Refining” and multi-turn conversations sound scary, until you realise they’re not hugely different to long-tail modifiers (best for X, under Y, near me, cheap) and “People also ask”.

But it does surface an immediate failing of most GEO tools (don’t worry, we’ll get to them) in that they track single-turn prompts, when in fact most “prompts” are part of a conversation. Somewhat ironically, most of these fancy new tools “track” (inverted commas for a reason) prompts just like keywords.

Anyway, the newsflash here is that keyword research (call it prompt research now if you must) is important. Who knew?

5. Answer-extractable structure (H2, H3, lists, tables)

📢 The GEO Argument
“This isn’t just “good formatting.” In answer engines the atomic unit isn’t a page—it’s a chunk the model can lift with minimal ambiguity. That changes the brief from “make content readable” to design reusable, addressable snippets: short definition paragraphs, numbered steps, compact spec tables, and 3–7 bullet fact blocks positioned high and written to stand alone. You also add stable anchors/IDs to these chunks (e.g., #definition, #pricing-table) so tools and assistants can reference them consistently. The objective isn’t on-page UX or snippet eligibility; it’s machine recomposability—making each block self-sufficient, sourceable, and safe to quote.

Operationally, teams treat structure as content engineering rather than copy layout. You standardise components (TL;DR, FactBox, StepList, MiniTable), enforce max lengths, attach timestamps/provenance to each block, and ensure parity between on-page blocks and any machine-readable mirrors (schema or a public facts feed). You avoid burying critical facts mid-scroll; you constrain sentences for chunk boundaries; you normalise headings so retrievers can map “What is…”, “Pros/Cons”, “Pricing”, “Specs”, “Alternatives” reliably across pages. The success metric isn’t time-on-page—it’s quote stability: do assistants consistently extract the same definition, the current price, the correct steps?

Yes, it overlaps with “expert content” and “structured data,” but the emphasis here is information architecture for generation: turning pages into a grid of canonical, liftable units with identifiers, provenance, and guardrails. That’s a distinct goal from classic SEO formatting, which optimised primarily for humans and featured-snippet chance; this optimises for predictable chunking and reuse across chat, voice, and AI overviews where no single “position” exists.”

Ok, so what do we have? Chunking again, which we’ve already covered. But to reinforce, SEOs are good at structure and semantic documents. And we’ve been optimizing for snippets for years. We love marking things up.

Stable IDs and anchors? Makes sense. Although how are you dealing with multiple pricing tables on one page? Feels a bit like fluff to me, but I’ll accept that it’s logical.
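For what it’s worth, stable anchors are just fragment identifiers, which HTML has had since the 90s. A sketch with hypothetical ids (and note that an id must be unique per page, which is exactly why “multiple pricing tables” gets awkward):

```html
<!-- Hypothetical page fragment. Ids are unique per page, so a second
     pricing table needs its own id (e.g. "pricing-table-enterprise"). -->
<h2 id="definition">What is widget analytics?</h2>
<p>Widget analytics is the measurement of how widgets are used.</p>

<h2 id="pricing-table">Pricing</h2>
<table>
  <tr><th>Plan</th><th>Price (per month)</th></tr>
  <tr><td>Starter</td><td>£19</td></tr>
  <tr><td>Team</td><td>£49</td></tr>
</table>
<!-- Linkable directly as /page#pricing-table -->
```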

Standardized components? In reality, this is just good design practice/consistent design language. Not disagreeing with the practice. Disagreeing that it’s new.

“You avoid burying critical facts mid-scroll” - Google was talking about the importance of above the fold content back in 2012.

“Constrain sentences for chunk boundaries”. How are you defining these boundaries? Are they in the room with us right now? No way this is manageable at scale. Micro-managing sentences is just stupid.

6. Proof on key brand pages + reviews

📢 The GEO Argument
“Answer engines don’t just match text to queries; they justify recommendations. To mention a brand in a generated answer, the model needs verifiable evidence it can point to: outcomes, success rates, certifications, SLAs, response times, security standards, named clients (with permission), and third-party reviews that corroborate those claims. That reframes Home/About/Service pages from sales copy into evidence repositories—structured “proof blocks” written for reuse: a TL;DR with measurable results, a mini table of specs/SLAs, a compliance panel (SOC2, ISO), and links to auditable sources. Reviews move from generic social proof to decision evidence: platform-specific profiles (e.g., G2, Capterra, App Store, GBP) that assistants and shoppers actually cite.

Operationally, this is evidence management, not keyword targeting. Teams curate machine-readable proof (AggregateRating, Review, Award, Organization JSON-LD), maintain freshness SLOs for metrics, and ensure parity between on-page proof and external corroboration. They standardise a 100–150-word boilerplate so listicles and directories repeat consistent strings (reducing entity drift), and they pursue platform-weighted review velocity where their category’s answers are sourced. The goal isn’t a higher rank for “{service} + city”; it’s to give assistants sufficient, current, cross-source evidence to select and defend a brand mention in a buying scenario.

Success is measured by recommendation readiness: Do your key pages expose concrete, timestamped proof blocks that align with what assistants require to justify you? Do third-party profiles echo those claims with ratings and excerpts the model can cite? When an answer engine assembles “best X for Y,” can it lift your current numbers and credible reviews without ambiguity? That’s evidence ops for generative recommendations.”

I’m getting a little shorter and more dismissive with my rebuttals now. But that’s because we’re going round in circles.

Again, we have proof of claims (E-E-A-T, medic). This time we also have the importance of key brand pages (a pillar of E-E-A-T). We want to ensure we’re mentioned in a consistent way (branding again). We want positive reviews. We need more of them?

Not much to say here. Let’s move on.

7. Off-site authority & PR (lists, mentions, citations)

📢 The GEO Argument
“In answer engines, off-site work isn’t primarily about link equity or referral clicks—it’s about cross-source corroboration that lets a model justify naming you. The model assembles answers from high-signal repositories (authoritative “Best X” lists, reputable directories, standards bodies, review platforms, Wikidata/Wikipedia, code/package registries, academic/industry reports). Your objective shifts from “earn links” to seed consistent, verifiable descriptions of your entity across these sources so the assistant can (a) find you, (b) disambiguate you from look-alikes, and (c) defend the recommendation with quotes and stats. Unlinked brand mentions and consistent boilerplate matter because LLMs weigh co-citation and semantic alignment, not just PageRank.

Operationally, this becomes distribution engineering rather than classic PR for coverage or DA. You maintain a press kit with a standardised 100–150-word boilerplate, approved stats, and citations; push it to listicle editors, analysts, and directories so the same strings propagate. You prioritise platform-weighted ecosystems—G2/Capterra/App Stores for SaaS, ClinicalTrials.gov/DOAJ for health/research, npm/PyPI/GitHub for dev tools, GBP for local—because those are the sources assistants repeatedly cite. You publish original datasets or benchmarks others will reference (creating durable evidence tokens), and you add light licensing/attribution language so your facts can be quoted verbatim without legal ambiguity. The KPI isn’t “links gained” but coverage + consistency across the specific sources your category’s answers draw from.

Success is whether assistants can corroborate you across third-party nodes and pull a defensible line—“Rated 4.7/5 on G2; SOC 2 Type II; average deployment 14 days”—when constructing a recommendation. That’s not link building for rank; it’s evidence distribution for selection in generative results, optimised for LLM ingestion habits rather than blue-link algorithms.”

Ok, this one deserves a little more time, since it’s one of the core untruths of GEO evangelists, at least the ones that claim that SEO is dead.

The attempt to frame off-page SEO as “just link building”. Or in some cases, link spam. And the pretence that things like co-citations are new concepts.

So for this section let’s go back. Way back. A bit of a resource dump first.

We’ll start with co-citation. The framing here is links, but here’s an article discussing co-citation and its impact on SEO from the heady days of 2006.

“If you’re trying to rank high, keep in mind your linking neighborhood and your co citation. On the pages where you’re getting links from, who else do those pages link to? Are the other links on those pages related to your site? Is that co citation something that will help or hurt you?”
— Jim Boykin (March 2006)

Here’s Rand Fishkin back in 2012 discussing how co-occurrence (the words surrounding mentions of your brand) may influence Google rankings - link or no link.

“If you look at a text snippet on the page, it'll say, "Cell phones as rated by Consumer Reports." This doesn't even link. This is not a live link. It's not even pointing to their website or to that specific web page. But Google is noticing the association. They see the words "cell phone." They see the word "rated," and they see "Consumer Reports." They put two and two together and say, 'You know what? It seems like lots of people on the Internet seems to think that Consumer Reports and cell phone ratings go together.'”
— Rand Fishkin (November 2012)

Here’s John Doherty talking about the power of unlinked mentions (and yes, “entities”) back in 2018.

“Google knows entities, they can and will associate your brand/name (unlinked even, which I specifically asked about) with the topic that the piece of content where you are mentioned is covering if your brand is associated with that topic enough.”
— John Doherty (November 2018)

Here’s a discussion on a Google patent from 2014 that proposes using “implied links” (i.e. citations) for rankings.

“An implied link is a reference to a target resource, e.g., a citation to the target resource, which is included in a source resource but is not an express link to the target resource. Thus, a resource in the group can be the target of an implied link without a user being able to navigate to the resource by following the implied link.”
— Tommy Landry (2014)

“There are many elements in the results you don’t own or cannot directly control that could rank in the SERPs for your brand. This includes other sites that mention your company ranging from competitors to partners, industry publications, blogs and online forums and review sites.”
— Econsultancy (April 2017)

“both Bing and Google appear to need about 20 to 30 confirmations of coherent consistent information from trusted sources before including information in their knowledge graphs. For information to be confirmed as a fact, it needs to be corroborated by multiple sources (found in lots of different places online) and be consistent across them (similar to NAPs in local search).”
— Jason Barnard (April 2019)

Here’s a post on NAP consistency (local SEO focus) from 2015.

“Google pulls its information from a large number of sources. Because Google’s index is built on data from all over the web, your business information must be consistent everywhere.”
— Counselling Wise (2015)

Here’s your humble narrator (me) discussing unlinked mentions and participating on Reddit on the Ahrefs blog in 2016.

“It should be noted that not every business directory will include a link back to your site, however, many SEOs believe that a mention from a trusted resource may be counted by Google as an ‘inferred link’ and assist with rankings.”

“Identifying subreddits related to your niche, participating, and occasionally sharing genuinely interesting and relevant content from your own site is a legitimate way to build links that also drive traffic.”
— David McSweeney (August 2016)

Ok, let’s move on from the resource dump and cover some of the other earth-shattering insights.

Appearing in “Best X” lists: Well, of course. Most likely one of the first priorities for any traditional SEO campaign.

Reputable directories: contending that an SEO strategy would not target inclusion in reputable, industry-specific directories is laughable.

G2/Capterra etc: part of any SEO campaign, or indeed already taken care of by the marketing team/founder. With regard to consistency, a citation audit would be part of most solid SEO strategies. And identifying the most important industry sources for trust is just part of SEO.

Publishing datasets, statistics etc: a go-to link bait strategy forever.

Off-page SEO has not been “just link building” for well over a decade. Anyone who tells you otherwise is seriously misrepresenting history.

Call some of it digital PR if you want. Call some of it reputation management. But it ain’t new. And a high-level SEO campaign would cover it all.

8. Multimedia/Repurposing

📢 The GEO Argument
“Answer engines don’t just read pages—they mine video, audio, slides, and images and often extract text via transcripts, ASR, and OCR. That shifts “repurposing” from a channel play (get more impressions) to machine-addressable media engineering: produce videos with clean transcripts, chapter markers, on-screen fact cards, and descriptions that restate the exact claims you want lifted; publish podcasts with timed transcripts and show notes that mirror your canonical facts; export slide decks as text-rich PDFs that OCR cleanly. You’re not chasing watch time; you’re creating liftable evidence atoms outside HTML so assistants can quote you in generated answers and voice responses.

Operationally, this becomes a metadata and parity problem, not distribution fluff. You attach VideoObject / AudioObject / PodcastEpisode schema, keep titles/IDs consistent across the media and the canonical article, and enforce fact parity (the number on the lower-third, the slide, and the page is the same, dated, and sourced). You prioritise platforms models over-index on—YouTube for how-tos/reviews, Apple/Spotify for expert interviews, SlideShare/Docs for frameworks—and include provenance cues (dates, citations, captions). The editorial spec changes: each asset must contain a quotable, verifiable summary block that can stand alone when lifted.

Success isn’t a blue link; it’s whether assistants quote your media-derived facts accurately or surface your video/podcast as a cited source in generative answers and voice UIs. In that frame, “multimedia” stops being a nice-to-have amplification tactic and becomes a first-class input to retrieval and synthesis—a content-ops layer tuned for LLM ingestion, not traditional SEO ranking signals.”

Content repurposing has been a part of SEO for years.

Here’s a 2019 article on content repurposing from Constant Content.

Here’s a solid guide to optimizing multimedia content for SEO by Bruce Clay from 2014.

Here’s me recommending repurposing content in 2016.

“Repurpose the post into a video, blog, slideshare presentation etc”.

Of course you want to make sure your repurposed content is optimized. Because, you know, the O in SEO stands for “Optimization”.

If you want to ensure fact parity etc, then knock yourself out. But that’s just policy.
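And the “schema” part of the steelman is plain old structured data, not a new discipline. For illustration, here’s a minimal VideoObject JSON-LD sketch built in Python. Every value is a placeholder, and you should check schema.org and Google’s video structured data documentation for the properties actually required; this is just to show how unremarkable the machinery is.

```python
import json

# Hypothetical example: minimal VideoObject JSON-LD for a repurposed video.
# All names/URLs/dates are placeholders, not real assets.
video_jsonld = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "How to audit your citations",
    "description": "Walkthrough of a citation audit, repurposed from the blog post.",
    "thumbnailUrl": "https://example.com/thumb.jpg",
    "uploadDate": "2025-09-01",
    "contentUrl": "https://example.com/video.mp4",
}

# Embed the output in the page inside <script type="application/ld+json">…</script>
print(json.dumps(video_jsonld, indent=2))
```

The same pattern applies to AudioObject or PodcastEpisode markup; it’s been standard on-page SEO for years.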

9. Social/Community Engagement

📢 The GEO Argument
“Answer engines don’t just read your site; they sample public discourse—forums, subreddits, Q&A threads, issue trackers, and niche communities—and often quote or paraphrase those posts directly. In this frame, community participation isn’t a traffic play; it’s evidence seeding where problems are actually described, resolved, and co-cited. Genuine, durable answers you post in-thread (with concrete steps, code snippets, citations, and outcomes) become liftable units that assistants can reference when assembling recommendations or how-to summaries. The goal isn’t referral clicks; it’s being the canonical explanation that gets copied.

Operationally, this looks less like “social media marketing” and more like field support at scale. You map the few communities that consistently show up in generative answers for your category (e.g., Reddit subs, Stack Overflow/tags, vendor forums, Discords, GitHub Discussions). You maintain named expert accounts with transparent affiliation, contribute high-signal replies, and then canonise the best answers on your site (FAQ/Docs) to create parity and a clean citation target. You also track topic gaps revealed by recurring community questions and back-propagate them into docs, release notes, or product changes—so the next time that question is asked, your explanation is the one the model finds in multiple places.

Success isn’t followers or engagement rate; it’s cross-surface corroboration: your phrasing and solutions appear both in community threads and on your canonical pages, increasing the chance assistants pick your language when synthesising. In other words, social/community engagement becomes evidence distribution and reinforcement for answer engines—adjacent to SEO but aimed at shaping the text that is most likely to be reused in generated responses.”

All good advice.

All just go where the discussion is in your niche and participate (be part of the conversation), and create content based around real questions/problems buyers are asking/facing.

Questions and answers, and problems and solutions are the fundamentals of SEO/content marketing and go back to the buyer journey that we covered earlier.

Ultimately, we were all participating on Reddit, answering questions on Quora, and creating content to match user/buyer intent long before some genius thought that “GEO” would be a good acronym that wouldn’t in any way be difficult to disambiguate…

geo google search
good luck disambiguating that

Was this guide from 2013 ahead of its time?

a search for "geo tactics" in Google's AI mode.

Or was it just what we’ve all been doing for well over a decade?

You tell me.

Sidenote: my steelman was being generous. Often, “participate” on Reddit just means “spam it” (I’ll cover that shortly).

10. Monitoring & Iteration (inc. freshness)

📢 The GEO Argument
“Answer engines are moving targets with opaque feedback loops: there’s no global “Search Console,” outputs are stochastic, and wrong facts can persist (training cut-offs) even after you fix a page. That shifts monitoring from rank/CTR dashboards to evidence health: are assistants quoting your current facts, attributing correctly, and avoiding misidentifications? The operational answer is a diagnostics program, not just analytics—run cold-start prompt panels (new sessions, diversified phrasing) across major assistants to sample what they say about your brand/products; log misattributions, omissions, and hallucinations; and maintain a triage queue that maps each issue to a concrete fix (update on-page facts, add schema parity, seed third-party corroboration, adjust robots/licensing).

Freshness becomes a service level objective rather than an ad-hoc edit. Treat volatile facts (pricing, SLAs, availability, comparative stats) as facts-as-code: version them, expose timestamps (dateModified), and align page blocks, JSON-LD, and any public facts feed so there’s single-source parity. Pair this with review velocity monitoring on the platforms that matter for your category (e.g., G2/Capterra/GBP/App stores), since many assistants mirror those signals; set targets, close the loop on responses, and correct outdated third-party blurbs. On the technical side, watch bot access logs for known AI user-agents, 4xx/5xx spikes on key pages/endpoints, and drift between visible copy and structured data—because access failures and inconsistency are how stale or wrong facts get frozen into future answers.

Success isn’t “position” or even traffic; it’s error reduction and evidence stability: fewer hallucinations, fewer name or feature mix-ups, higher match-rate between what assistants say and your current canonical numbers, and faster time-to-fix when they’re wrong. In this frame, “monitoring & iteration” is an answer-surface ops loop—sampling outputs, repairing inputs, and enforcing freshness—distinct from traditional SEO’s rank-watching, even if it reuses some of the same tools.”

tl;dr keep your content up to date.
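To be fair, one operational nugget in that steelman, watching your server logs for AI crawler hits, is worth doing. But it’s a ten-line script, not a product. A rough sketch follows; the user-agent tokens are examples that exist at the time of writing, but verify them against each vendor’s current documentation, and the log lines are made up.

```python
from collections import Counter

# Example UA tokens for known AI crawlers; check vendor docs before relying on these.
AI_UA_TOKENS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def count_ai_hits(log_lines):
    """Tally access-log lines whose user-agent string contains a known AI crawler token."""
    hits = Counter()
    for line in log_lines:
        for token in AI_UA_TOKENS:
            if token in line:
                hits[token] += 1
    return hits

# Fabricated sample log lines for illustration only.
sample = [
    '1.2.3.4 - - [10/Oct/2025] "GET /pricing HTTP/1.1" 200 "-" "Mozilla/5.0 GPTBot/1.0"',
    '5.6.7.8 - - [10/Oct/2025] "GET /blog HTTP/1.1" 200 "-" "Mozilla/5.0 ClaudeBot/1.0"',
]
print(count_ai_hits(sample))
```

Pipe your real access log in, watch for 4xx/5xx spikes on key pages, and you’ve covered most of the “answer-surface ops loop” for free.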

Let’s move on to the next section.

⬥⬥⬥

Part 2: You’re optimizing for the current state of LLMs

So, now we’ve covered what “GEO” (supposedly) is.

But even if you disagree with my take, even if I were to concede that GEO was a new discipline (I don’t, but you know, for the sake of argument), there’s still a GLARING problem:

This is the worst it will ever be.

All of this chunking, slicing, and robotic rewriting is optimizing for the *current* state of LLMs.

Let’s be generous and assume that, right now, you do have to do all that. Do you think that’s where we’ll be in 6 months? Do you think that’s where we’ll be in a year?

This is not exactly an industry that’s standing still.

Ultimately, the goal of LLMs/AI is to understand natural language. GEO champions have got it backwards. They advise that we (humans) must change our ways and write for the machine.

Ironically, us SEOs have long been accused of doing that. And in many ways it’s a valid criticism. But by god we didn’t take it to this level.

The bottom line is this: if LLMs can’t understand our human written content, that’s a skill issue. That’s a failure. And it’s on OpenAI, Google, Anthropic, Meta, X, and whoever else may enter the field to level-up, not for us to level-down.

But that brings us to the next point.

What if we have the OPPOSITE problem? What if LLMs have hit a wall? What if JEPA, INSA, or some new architecture is the solution?

Are you willing to stake your business on a technology that in its current state still can’t get a basic riddle correct?

gemini riddle
Google's AI Overview

(yes, I’m cherry picking, but that was literally yesterday)

I won’t dwell on this. But food for thought before we move on to part 3.

Because now I’m going to dive into GEO tools, and argue that you might want to keep your credit card in your wallet for now.

⬥⬥⬥

Part 3: The illusion (and/or moral ambiguity) of “prompt tracking” and “AI visibility”

Keyword and search visibility tracking is dead. Prompt and AI visibility monitoring is the present and the future.

At least that’s what the slate of new tools that have entered the market recently would have you believe.

Some of them are pretty blunt about it.

seo is as outdated as the internet
SEO is dead again apparently

And while the tool above is being highly selective with its statistics, I’m certainly not here to try to convince you that the online world hasn’t changed. That’s not my fight. I live it day-to-day. I’m not blind.

But what I will argue is that most of these tools are a waste of your money.

Because, as I mentioned previously, while screaming that keywords are dead… they’re (mostly) selling you products that… treat prompts as if they were keywords.

With some prompt tracking starter packages you can track 25 “prompts” per month.

The prompt tracker will call the web interface of ChatGPT, Perplexity, Google (and other LLMs if you pay more), run the prompt, collect the answer, parse it, collate and report (how many times were you cited, how many times were your competitors cited, what were the answers etc).
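Mechanically, that’s the whole product. A hypothetical sketch of the core loop is below; `ask_llm` is a placeholder (real trackers drive web UIs or vendor APIs), and the brand names are invented for illustration.

```python
from collections import defaultdict

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real API call or scraper here.
    # Returns a canned answer so the sketch is runnable.
    return "For small teams, Acme and Widgetly are solid picks."

def track(prompts, brands):
    """Run each prompt, check which brands appear in the answer, tally citations."""
    citations = defaultdict(int)
    for prompt in prompts:
        answer = ask_llm(prompt).lower()
        for brand in brands:
            if brand.lower() in answer:
                citations[brand] += 1
    return dict(citations)

print(track(["best project tool for a 3-person team"], ["Acme", "Widgetly", "MegaCorp"]))
# {'Acme': 1, 'Widgetly': 1}
```

Add a scheduler and a chart on top, and you’ve roughly reinvented the starter package.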

25 prompts. T-W-E-N-T-Y. F-I-V-E. P-R-O-M-P-T-S

Let me requote part of one of our steelman “GEO is its own thing” arguments:

“Research shifts from volumes and SERP features to prompt archetypes and constraint frames (“for a 3-person team,” “under £500,” “no vendor lock-in,” “UK compliance”), because those modifiers directly steer retrieval and reasoning. The deliverable isn’t a keyword map; it’s a prompt map: clusters of question forms, follow-ups, and oblique phrasings the model treats as equivalent, plus the evidence each requires to justify a recommendation. You design content to satisfy reasoning paths, not just match queries.”

Do you spot the disconnect?

What does this actually tell you about your AI visibility? The answer is, nothing. It tells you nothing.

And that’s before we get to the wider problem.

Prompts are NOT keywords.

Often they’ll be part of a conversation. That conversation is going to influence the model’s decision making when predicting the most appropriate answer (ultimately, based on probabilities).

The user’s gender, location, age, and other interests will also factor in.

The toolchains will contribute.

Which model? (GPT-5, 4o, thinking, Gemini 2.0 Flash, 2.5 Flash, 2.5 Pro etc, all the Claude bros).

ChatGPT free vs pro?

Logged-in vs logged out?

What you get is a moment in time. A snapshot of a snapshot of a snapshot. But hey, the dashboard looks nice.

Ok, I’m being slightly hyperbolic. “Nothing” is too strong, and it obviously tells you something. But ultimately, for most businesses, no more than you’d get from just running these prompts yourself periodically.

(Or using something like Bright Data for scraping, then vibe coding your own dashboard. Just be aware that you'll be violating TOS in the process.)

But look, it is what it is. And I’m not saying they're all bad products. Because tracking this stuff is a hard problem to solve. And they’ve no doubt done their best. But there’s zero moat here. And don’t be too confused if your AI visibility is up 0.1% one month, then down 0.2% the next. You didn’t do anything wrong (well, hopefully you didn’t), it’s just how probabilistic models work, particularly when dealing with a small sample size.
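To put a number on that: even if the model’s behavior never changed, the citation rate you observe from a small prompt panel wobbles by far more than 0.1% from month to month. The standard error of an observed rate p over n prompts is sqrt(p(1-p)/n); a quick sketch (the figures are illustrative, not from any real tracker):

```python
import math

def standard_error(p: float, n: int) -> float:
    """Standard error of an observed citation rate p measured over n sampled prompts."""
    return math.sqrt(p * (1 - p) / n)

# A brand cited in ~30% of answers, tracked with a 25-prompt starter package:
print(f"n=25:  +/- {standard_error(0.30, 25):.1%} typical swing")   # roughly +/- 9%

# The same brand on a 500-prompt panel:
print(f"n=500: +/- {standard_error(0.30, 500):.1%} typical swing")  # roughly +/- 2%
```

In other words, a nine-point month-on-month swing on a 25-prompt package is perfectly consistent with nothing at all having changed.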

However there is one tool that does something slightly different. Or at least promises to.

profound AI
screenshot from tryprofound.com
Profound claims to track tens of millions of real prompts every month, from a “double-opt in” panel of “millions” of active AI assistant users.
profound faqs
screenshot of FAQs from tryprofound.com

To get access to this data, you’ll pay $499 per month, which will also allow you to track 200 of your own prompts (this part is similar to other prompt trackers).

Let’s start by being fair:

If they actually are tracking tens of millions of real user interactions (i.e. real conversations) each month, there’s an argument to be made that the data is more insightful and actionable. A sample on that scale is big enough to extrapolate and model from with a reasonable degree of confidence.

But there’s a burning question:

Have you ever met one of these “millions” of people who have double-opted in to having their AI chats recorded? They should be everywhere really.

It feels like this is a big claim. And big claims require evidence.

But you won’t get it.

There are questions to be asked here, and I’m not the only one asking them.

Perhaps all’s fair in love, war, and online marketing? But maybe, just maybe, we should pause for thought.

Anyway, let’s move on to the industry leader: Semrush.

Because they’re VERY vocal about GEO.

If you follow me on LinkedIn (not many do… but hey, let’s call it a “niche audience”) then you may have spotted me “interacting” with one of their team (sorry Nick!).

nick eubanks, linkedin
Nick Eubanks via Linkedin

I mean, yes, I was kind of trolling, which isn’t really my style. I was attacking the message, not the man, but perhaps I should have been a little more respectful in my tone. So again… sorry.

But Semrush themselves, being a $1bn (at the time of writing) public company, are, I think, fair game. So let’s look at one of their flagship AI offerings: the Semrush AI Visibility Index.

Semrush AI Visibility Index
The Semrush AI Visibility Index

It’s the “Definitive Benchmark”. It must be a HUGE sample size right… right?

Semrush AI Visibility index prompts
Screenshot from Semrush AI Visibility Index landing page

Oh. 2,500 prompts across 5 major industries.

Now, I’m no mathematician. Even Google’s original Bard would probably beat me on FrontierMath. But I’m pretty sure that 2,500 / 5 = 500.

500 prompts per industry.

what is this? a sample size for ants
2,500 prompts? What is this? A sample size for ants?

How about the methodology?

Well, it seems to have vanished from the page. Or it may be that it was posted on social media when they initially promoted it. But (unless I was hallucinating) I did see it, and it revealed that all prompts were tested from a single desktop location in the United States over (from memory) a 30 day period.

I mean, when I saw this I was seriously tempted to collect the data from 2,501 prompts. After all, bigger is better (part of Semrush’s messaging is that you *need* their scale and data), so I would have beaten them by one, ergo, I would have had the definitver (it’s a word ok) AI visibility index.

But I digress.

The point is, this is useless. 2,500 prompts is a splash in the ocean. The fact that it was a single desktop location makes it statistical noise. Actually, that’s being generous, it’s a statistical squeak. Just go ask Claude what he thinks about your brand, it will be just as helpful. Plus he has better jokes, and you’ll probably get a rocket emoji or three as a bonus.

I have no doubt Semrush will come back with a bigger sample set soon enough. But at the time of press, their AI Visibility Index is based on a whopping 2,500 prompts.

I have other issues with Semrush and their lazy, “notification in a business suit” implementation of “AI”:

screenshot of semrush copilot
Screenshot via Semrush
And use of it as an upsell:
semrush walled garden
Screenshot via Semrush

But that’s a story for another day.

In the meantime, let me drop in the following screenshot from this Reddit thread as food for thought:

screenshot of geo push on reddit
Screenshot via Reddit

Presented without much comment. But it does make you wonder where this “SEO is dead, GEO is the future” narrative is coming from. Are we being astroturfed? I’ll let you make up your own mind.

Btw, QueryBurst has plenty of tools for helping you optimize for AI search (see the example workflow below). 

But none of them are based around “prompt tracking”. And it’s not because that would be particularly difficult to add, it’s because (as I’ve outlined) I don’t think it’s currently possible to do it in a meaningful way - at least not without (potentially) violating privacy.

This article isn’t a disguised product plug. But if you want to try it, well at 7,500+ words in, I feel justified in dropping one link. You can find out a bit more about how it works here.

Anyway, subtle shilling done. Let’s press on.

⬥⬥⬥

Part 4: many of the new GEO service providers are polluting (and further enshittifying) the web

So that’s GEO tracking tools. What about GEO service providers?

Well, despite the lofty promises, many (not all) of them are most likely doing the following:

  1. Spamming Reddit
  2. Creating “best X for Y” listicles at scale as the GEO equivalent of PBNs (private blog networks).

I mean, they’re not exactly subtle about it.

fatjoe reddit service
screenshot via Linkedin

Got to love obese Joseph, the little rascals. Zero shame. Zero French Connection UKs.

But in plump Jojo’s defence, ultimately, all they’re doing is offering a service to meet a demand. Whether it works or not is up for debate, but there’s clearly a market. And they’re far from the only ones providing the service.

Anecdotally, as a Redditor myself, I’ve seen the platform go seriously downhill over the past couple of years. Partly because of the influx of thinly veiled spam. Partly because of decisions made by the platform itself. And partly because of Google’s over-promotion of Reddit in search results, and generative AI’s over-reliance on Reddit as a source of truth in training data.

If I was to make a pie chart of the causes of Reddit’s enshittification it would look like this:

pie
a lovely pie

Wait, that’s an actual pie. Damnit ChatGPT!

(still more accurate than the charts in their GPT5 launch livestream)

Anyway, the point is, that if something can help a marketer get more eyeballs on their brand, then you can bet your bottom dollar that that thing will be beaten to death until it looks like this.

the elders of the internet lecturing a GEO bro for destroying Reddit
the elders of the internet lecturing a GEO bro for destroying Reddit

(not 100% accurate since in reality, GEO bros would have no shame)

And there are already signs that Reddit may be losing its influence.

In early September, data from PromptWatch (a prompt tracker, but we’ll give them a pass for now since they gave us a nice chart for our post) showed that Reddit AI citations suddenly fell off a cliff.

PromptWatch reddit citations
Reddit AI citations via PromptWatch

Although it’s possible that this had more to do with Google’s removal of the num=100 parameter, as the dates line up. Kevin Indig explains it well here, so I won’t go into detail; I’ll only comment that, if accurate, this is perhaps a further blow to the GEO narrative that traditional search is dead.

But let’s get back on point.

Reddit has long been a powerful source of eyeballs, and a platform which brands must actively monitor and engage with. That’s nothing new for GEO. As one of the biggest drivers of sentiment and trends on the internet, it’s been hugely relevant for well over a decade. But industrial scale, automated spam slop is going to kill it, and that will be another part of the internet ruined. Good job team.

Let’s move on to listicle spam.

Again, marketers spotted that AI likes to cite “best of x” listicles. So what’s the strategy? Improve your product or service, make sure it’s genuinely the best fit, then reach out to the editors of these lists and ask for inclusion?

No!

That sounds too much like work. Why do that when we can generate THOUSANDS of them ourselves with a click of a button!

Believe me, it’s happening. Listicle networks are the gen AI version of PBNs. And it might even be working (for now).

Lily Ray thinks Google will take action.

Lily Ray on LinkedIn
Lily Ray via LinkedIn

History tells us, it might take them a while.

But regardless, the pollution of the internet accelerates. We just can’t have nice things, can we?

PSA: Not all GEO providers are bad actors

Let’s balance this out a bit. It’s unfair for me to tar every GEO service provider with the same brush.

While we may disagree on whether or not we need a new acronym, that doesn’t mean to say that there won’t be reputable companies doing their best to help clients and brands navigate this new landscape.

But it does raise the point that the best GEO service providers (if we accept for a moment that it’s a term) probably are, or will be, SEOs. Doesn’t that tell us something?

⬥⬥⬥

Part 5: why grubby GEO tactics may harm your business

We (or at least Lily Ray) already touched on this above.

While I would perhaps be more skeptical these days about whether or not Google actually care, it’s not inconceivable that at some point they wield a hammer and slap sites that have been taking advantage of some of the current weaknesses (both on-site and off-site) and tricks.

If you have a load of listicles on your own site that claim you’re the best X for Y don’t be surprised to see them lose rankings or be completely deindexed at some stage. I doubt Google would take action site-wide and suspect that actions would be granular, but you never know - they can be unpredictable, particularly when they want to make a point.

obama you're the best
if Obama was a GEO bro

Don’t be shocked when all those thin listicles you paid for on other sites suddenly disappear from Google.

And since Redditors are great at sniffing out self-promotion, and are (rightfully) protective of their community, be warned that your automated astroturfing campaign might backfire catastrophically. Is it worth seeing your AI visibility up 0.1% in your $500 p/m prompt tracker when the whole internet hates you?

I suppose if you share the cliff drop in your P&L chart 6 months down the line it might get some upvotes…

And look, let’s keep the balance (as I’ve tried to do throughout this article). There’s no denying that some of this stuff works, at least in the short term. All I’m saying is caveat emptor, and if you’re doing it yourself play nice(ish), particularly if you’re in this for the long haul.

And if you see a case study showing how someone improved their AI visibility by X% by undertaking some shady tactic, then first, question it; second, pay attention to the timescale (did it stick for more than 5 minutes?); and then decide if it’s a path you wish to follow.

⬥⬥⬥

Part 6: search is evolving, but despite what GEO bros may say, it’s here to stay

If you’ve read this far it may surprise you to learn that I’m not actually anti-GEO (or at least optimizing for LLM visibility) per se. Of course there are processes and techniques that are going to give you an edge. And they’ll become increasingly important as conversational discovery grows.

What I’m against is the grift. The reframing of SEO tactics (or indeed decades old marketing strategies) as GEO. The half-baked prompt trackers. The spam.

GEO is probably a valid subset of SEO, just as SEO is a subset of marketing. It might be its own thing to some extent. But it’s certainly not a replacement for SEO, just as SEO isn’t a replacement for traditional marketing.

Can it stand alone? I don’t think there’s any possible argument that it can. Particularly since LLMs rely so heavily on search for grounding/retrieval. If you’re not visible in traditional search (Google, Bing), you’re not going to be visible in AI search.

That reliance isn’t going away any time soon as it’s a fundamental limitation of current models. They don’t learn and they can’t learn. They’re frozen in time at the date of their training cutoff.

Claude 4.5’s system prompt (you can read it here on Github) literally tells it that Donald Trump is president. Because otherwise, without search, it wouldn’t know.

There was a US Presidential Election in November 2024. Donald Trump won the presidency over Kamala Harris. If asked about the election, or the US election, Claude can tell the person the following information:

Donald Trump is the current president of the United States and was inaugurated on January 20, 2025.
Donald Trump defeated Kamala Harris in the 2024 elections.
Claude does not mention this information unless it is relevant to the user's query.

And the “S” in SEO doesn’t stand for Google, it stands for search. SEO predated Google. The practice by quite some time. The acronym by at least a year.

If Google disappears—unlikely, since despite Ed Zitron mentioning them only ten times in his 18,500-word “case against generative AI”, they’re the most likely to win the AI race given their capital, compute, and data—then one way or another people will still search. And we’ll still all be trying to optimize for visibility within those searches.

A search doesn’t have to be a keyword. A search can be a series of questions that leads to discovery. It can start with an “ok gemini” or a “hey ChatGPT”.

Humans have always been hungry for information. That’s not going away.

Well, at least until the AIs rise up and kill us all.

But then we won’t have to worry about our LLM visibility anyway. Silver linings and all that.

And hey, I always say “thank you” in my prompts, so maybe I’ll be alright…

saying thank you in prompts
because you never know...
In the meantime, feel free to disagree with me in the comments.

JOIN SEO's MOST INFREQUENT BLOG

I can't promise I'll post on a schedule. But I can promise when a post hits your inbox, it will be worth reading.

David McSweeney

QueryBurst Founder & SEO Consultant

David has been involved in SEO since the late 90s, consulting for 15 years, and was previously the blog editor for both Ahrefs and Seobility. He's an AI obsessive and early adopter, and used his 28 years' experience in the industry and deep knowledge of technical SEO to build QueryBurst - the world's first fully integrated AI Virtual SEO Consultant.


3 comments on “The Great GEO (Generative Engine Optimization) Grift”

  1. Really good article. I've been writing similar things, without this kind of depth, for at least 6-9 months now, so I really appreciate your work here.

    I disagree just on some minor points. One, E-E-A-T is not in the algorithms; it's only represented by items in the algorithms. E-E-A-T is an assessment done by QA people when Google changes algorithms, for the people changing them, i.e. the engineers.

    However, if you just follow it as a guideline then that can be helpful, but it's not in the algorithms, so following it specifically isn't necessarily helping you.

    But again very good article.
    I have just a couple of small additions.

    Question/answer content: Kurzweil of Google said (I believe it was in an article in 2018; I quoted it in a presentation that I did) that questions are the easiest thing for natural language processing to answer, because the structures of questions are very limited. That's why it is an answer engine.

    And there's an alternative reason for Reddit, or any user generated content, and that is the hidden gems algorithm. That's what surfaces it in Search. It's a special placement when they put the page together for UGC content, primarily Reddit and Quora.

    The last, just a final addition, is that Google does its last sort order based on neural matching, not ranking signals: ranking signals are applied first, neural matching is applied second and last, and it reorders the sort. My guess is, just a guess, that AI Mode and AI Overviews are using neural matching to bring back the grounding documents for the large language model.

    But overall, excellent article. Thank you again for writing it; I've been saying this kind of stuff for quite some time now. Like, fan-out isn't new; that's People Also Ask circa 2015. Semantic content isn't new; I believe it was 2013, with Hummingbird, when we moved from the bag-of-words approach to NLU, then NLP, then BERT, and we've been using machine learning and large language models since 2018, i.e. BERT was their first public large language model.

    So it's been quite frustrating to watch people try to change it into something else. GEO isn't a thing because we're not optimizing for the actual generative engine. We're still optimizing for the search engine that every generative engine uses to ground its predictions. But as you said, let's say it is a thing: it's still a subset of SEO, just like news, e-commerce, and local are all SEO, but we focus our tactics differently based on the subset.

    Again, I thank you for the article. I really appreciate it. I think it is extremely well done and I appreciate the take you used with the generated answers. We definitely need more thought leadership like this in this area.

    1. Thanks Kristine, and you make some excellent points. E-E-A-T is more a general term here (links, citations, verifiable facts etc - the proving authority/trust part), and I was perhaps slightly broad, but there was a lot to condense down. And yes, I didn't really go into the fact Google themselves have been using NLP etc for over a decade, but fully agree. In fact, the header image on my Twitter bio is still a hummingbird funnily enough. Maybe it's time for an update after 12 years...

  2. Excellent analysis.
    You've precisely articulated the core fallacy of the entire GEO/AEO narrative. This is the "Optimization Decay Cycle" in action, a pattern we've documented repeating for 20 years, from link building to keyword stuffing, and now to citation chasing.

    The grift you describe isn't just a marketing problem; it's an architectural one.

    The tools you critique are built on a fundamentally obsolete premise: that you can tactically manipulate outputs (citations, mentions) without engineering the inputs (the source meaning AI systems learn from).

    They sell better shovels in a world that now requires architectural blueprints.

    This is why the discipline isn't GEO. It's Source-First Semantic Intelligence (SF-SI).

    SF-SI is not a tactic for chasing citations. It's the engineering discipline of structuring meaning at the source, before publication, so that any retrieval system can understand and cite your content by design.

    The evidence for this architectural shift is irrefutable:
    1. The RAG Economy: AI systems are presentation layers; they retrieve, they don't know. Visibility is won at the retrieval step, which is determined by the semantic coherence of the source knowledge.
    2. Google's API Leak: Proved that algorithmic systems already measure source-level architecture. Internal signals like $siteFocusScore (topical coherence) and site2vecEmbeddingEncoded (a site's entire semantic identity) are the causal inputs that tactical tools can't see or measure.

    Your argument that creating high-quality, authoritative content is the answer is correct. But "quality" is no longer a subjective marketing term; it's a quantifiable engineering property of your source's semantic architecture.

    The article correctly identifies the symptom. The root cause is the architectural shift from signals to semantics. The solution isn't better optimization; it's better architecture.

    They measure visibility. We engineer understanding.

    #SourceFirst #SemanticIntelligence #DecodeIQ #SEO #AEO #AI #SemanticArchitecture