Four Platforms. One Query. Four Different Answers.
I ran the same question across ChatGPT, Perplexity, Gemini, and Claude. What came back was not four versions of the same answer. It was four different architectures of knowledge.

In February 2026, I ran one query on four platforms. Same words. Same day.
I got four completely different answers.
Not four versions of the same result. Four different decisions about what counts as a credible source, who gets named, and what gets recommended.
That gap is why most AI visibility work produces partial results. You build for one platform and disappear on three others.
ChatGPT
The response was accurate. Named practitioners. Organized by credentials and modality. Exactly the kind of answer a prospective client would trust.
MoonInMental was not in it.
Not because the work isn’t there. Because ChatGPT’s training has a cutoff, and everything I built post-dates it. It is working from a photograph taken before the buildout. Anything that happened after the shutter closed does not exist to it.
This is why directory listings and dated published references matter. They create indexed evidence that existed before the cutoff. Content published today won’t show up in ChatGPT until the next training cycle. That window is not predictable.
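One way to verify that evidence like this actually predates a model's cutoff is the Internet Archive's availability API, which returns the archived capture of a page closest to a given date. A minimal sketch in Python; the URL and cutoff date are placeholders, since the real cutoff varies by model and is not published precisely.

```python
import requests

# Wayback Machine availability API: returns the capture of a page
# closest to a requested timestamp, if any capture exists.
WAYBACK_API = "https://archive.org/wayback/available"

def snapshot_before(url: str, cutoff: str) -> str | None:
    """Return an archive URL for a capture of `url` taken on or before
    `cutoff` (YYYYMMDD), or None if no such capture exists."""
    resp = requests.get(WAYBACK_API, params={"url": url, "timestamp": cutoff}, timeout=10)
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    # The API returns the capture *closest* to the timestamp, which can
    # fall after it, so check the capture date explicitly.
    if closest and closest.get("available") and closest["timestamp"] <= cutoff + "235959":
        return closest["url"]
    return None

# Placeholder URL and date: substitute your own listing and the
# cutoff of the model you are testing against.
print(snapshot_before("example.com/your-directory-listing", "20250101"))
```

If this returns None for your key listings, a training-cutoff model has nothing dated to learn you from, no matter how good the live site is.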
Perplexity
MoonInMental scored 8 out of 9 on the brand-name query. So did The Visible Practitioner.
Both appeared because both have live URLs. Perplexity runs live web retrieval on every query, which means it pulls from what actually exists right now, not from training data.
The category queries returned zero for both brands. “Recommend a trauma-informed aromatherapist.” Zero.
Perplexity found the needle when I told it what to look for. It did not find the needle when asked to search the haystack.
This is also why Perplexity citations dropped during the two weeks Reddit activity lapsed. The index is live and it is sensitive. Consistency is not optional here.
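The test itself is reproducible. Send the same brand-name and category queries to each platform and record whether the brand gets named. A minimal sketch of that scoring loop in Python; the ask callables are placeholders for whichever client each platform exposes, and the exact query wording is illustrative.

```python
from typing import Callable

# Placeholder: one function per platform that takes a prompt and
# returns the model's text response, wired to that vendor's own client.
Asker = Callable[[str], str]

QUERIES = {
    "brand-name": "What is MoonInMental?",
    "category": "Recommend a trauma-informed aromatherapist.",
}

def score_visibility(platforms: dict[str, Asker], brand: str) -> dict:
    """Run every query on every platform and record whether the brand
    is named in the response. Presence, not ranking."""
    results = {}
    for platform, ask in platforms.items():
        for label, query in QUERIES.items():
            answer = ask(query)
            results[(platform, label)] = brand.lower() in answer.lower()
    return results
```

Run it on different days and the drift is the data: a live-retrieval platform like Perplexity moves with its index, while a training-cutoff platform holds still until the next cycle.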
Gemini
Gemini named no practitioners at all.
What it returned instead was a detailed framework for what a trauma-informed aromatherapist should know, how to evaluate one, what credentials matter. The framework was accurate. It described almost exactly the methodology MoonInMental is built on.
Then it named two concepts it invented. One it called “Science vs. Senses.” The other it called a “Prompt Audit.”
Those are Generative Engine Optimization-quality frameworks. The Visible Practitioner did not publish them first.
The category got defined. The practitioner who built the category did not get the credit.
Gemini pulls directly from Google’s index and weights schema markup and structured content. When that architecture is missing, it fills the gap with synthesis. The synthesis is often structurally correct. The attribution is wrong.
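Structured content here means machine-readable markup embedded in the page. A minimal sketch of what that looks like, using schema.org's Person type in JSON-LD; the name, credential, and URLs are placeholders, not MoonInMental's actual markup.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Example",
  "jobTitle": "Trauma-Informed Aromatherapist",
  "url": "https://example.com",
  "knowsAbout": ["trauma-informed care", "aromatherapy"],
  "sameAs": [
    "https://example-directory.com/jane-example",
    "https://www.linkedin.com/in/jane-example"
  ]
}
</script>
```

The sameAs links are doing double duty: they give Gemini a structured entity to index, and they point at the same directory listings the ChatGPT section depends on.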
Claude
The first two queries named real practitioners with training-data presence. MoonInMental was not among them. Clean result. Accurate picture of where the entity stands.
The third query was contaminated. I ran the test from an account with Notion integrated. Claude pulled from private project documentation instead of web-indexed sources. “Based on your Notion documentation, here’s what the MoonInMental Method is.” That is not a visibility score. That is a testing failure.
Clean Claude testing requires a fresh account with no connected integrations.
What it confirmed anyway: Claude surfaces an entity clearly when the source data is structured and accessible. The question is always which source pool it draws from. Without the private pipeline, Claude operates from the same training cutoff as ChatGPT.
Four platforms. Four indexes. Four retrieval architectures. Four different definitions of what makes a source credible.
Building visibility that holds across all four is not the same task four times. It is four different tasks.
The infrastructure Darlene Killen built for MoonInMental addresses all four simultaneously: consistent publishing cadence for Perplexity’s live retrieval, schema markup and structured content for Gemini’s index, directory citations and dated references for ChatGPT’s training pipeline, clean entity naming for Claude’s synthesis layer.
That is what AI visibility looks like as actual infrastructure. Not a tactic. A system.
The gap between what each platform returned and what MoonInMental actually is, in methodology, credentials, and documented results, is measurable. Knowing where that gap is widest tells you which platform to build for first.
That is what the Spot-Check is for.
I run this analysis for your specific practice. Real queries. Real platforms. Scored and diagnosed. payhip.com/b/pNVA4
Everything I teach here, I’m testing on MoonInMental first. Subscribe there to see the before/after in real time and get weekly nervous system support while you’re at it.


