Why We Built This

Most AI visibility tools are selling you data that means nothing.
Here's why, and what we do differently.

Let us be direct with you.

Most AI visibility tools are built on a lie. Not a small one. A fundamental, structural lie about how AI actually works.

AI Changed Everything. The SEO Industry Panicked.

ChatGPT launched. Then Claude. Then Gemini. Then Perplexity.

Suddenly, millions of people stopped Googling and started asking AI for answers.

"What's the best CRM for small businesses?"

"Which SEO tool should I use?"

"Best running shoes under $150?"

And SEOs realized something terrifying: if an AI model doesn't mention your brand in its response, you don't exist. No page two. No "below the fold." Just... invisible.

That's a real problem. And it created a gold rush.

A Wave of "AI Visibility" Tools Appeared Overnight

They promised to track your "AI rank." Show your "position" in ChatGPT. Optimize you for LLMs.

The dashboards looked great. The charts went up and down. It felt like rank tracking for a new era.

There's just one problem.

The Data Is Meaningless

Here's how most AI visibility tools actually work:

  1. They write a canned prompt like "What's the best project management tool?"
  2. They send it to ChatGPT once
  3. They scan the response for your brand name
  4. They tell you "You're Position 3!"

That's it. One synthetic prompt. One response. One "rank."

Looks like data. Feels like a rank tracker. But it's theater.
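To make the emptiness concrete, here is the entire pipeline sketched in a few lines of Python. Every name here (`ask_llm`, the frozen response) is a stand-in invented for illustration, not any vendor's real API:

```python
import re

def ask_llm(prompt):
    # Stub standing in for a single chat-model call. A real response
    # would differ from run to run; this one is frozen for illustration.
    return "1. Asana\n2. Monday.com\n3. YourBrand\n4. Trello"

def naive_rank(brand, prompt):
    """One synthetic prompt, one response, one 'rank'."""
    response = ask_llm(prompt)
    for match in re.finditer(r"(\d+)\.\s+(.+)", response):
        if brand.lower() in match.group(2).lower():
            return int(match.group(1))
    return None  # brand not mentioned at all

print(naive_rank("YourBrand", "What's the best project management tool?"))
# With the frozen response above, this prints 3 -- "You're Position 3!"
```

Point the same loop at a live model instead of the stub and two consecutive runs will routinely disagree, which is exactly the problem.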

That Prompt Isn't What Your Customers Ask

It's what the tool vendor imagined your customers ask.

Your real customers ask things like:

  • "My team is remote and we need something that integrates with Slack, what do you recommend?"
  • "We're switching from Basecamp, what's similar but cheaper?"
  • "Best tool for managing a 50-person engineering team?"

Those are wildly different prompts. And they produce wildly different responses.

A canned prompt tells you nothing about what happens when real people ask real questions.

But There's an Even Bigger Problem

Try this yourself. Open ChatGPT. Type "What's the best CRM?" and hit enter.

Now open a new chat. Type the exact same thing.

You'll get a different answer. Different brands. Different order. Different reasoning.

This isn't a bug. This is how LLMs work.

LLMs Don't Have Rankings

Every time an LLM generates a response, it's choosing the next word based on probabilities. Not looking up answers in a database. Not following a ranking algorithm.

There is no "Position 1." There is no stable list. The concept doesn't exist inside these models.
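A toy illustration of that mechanism (the brands and probabilities below are entirely made up):

```python
import random

# Toy model of one generation step: the "answer" is sampled from a
# probability distribution, not looked up in a ranked list.
next_brand_probs = {"Salesforce": 0.35, "HubSpot": 0.30,
                    "Pipedrive": 0.20, "Zoho": 0.15}

def sample_brand(probs):
    names = list(probs)
    weights = list(probs.values())
    return random.choices(names, weights=weights, k=1)[0]

# Ask the "same question" five times: the answer can differ every time.
answers = [sample_brand(next_brand_probs) for _ in range(5)]
print(answers)
```

No run of this is wrong and no run is *the* answer. That is what a single snapshot is trying, and failing, to capture.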

So when a tool tells you "You rank #3 in ChatGPT," here's what actually happened: they asked once, got one response, and called it your rank.

Ask again in five minutes and the number changes.

It's like checking the temperature once and calling it the climate.

And people are making real strategic decisions based on this.

So What Should Measurement Actually Look Like?

If LLMs give different answers every time, you can't measure them with snapshots.

You need to measure patterns.

Not "did my brand appear in this one response?" but "how often does my brand appear across many responses, and how prominently?"

Not "what position am I?" but "does this model even know my brand exists without searching the web?"

Not "did I get mentioned?" but "when the model has 10 sources to choose from, does it pick mine?"
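As a sketch of what pattern measurement means in practice (where `ask_llm` is again an imaginary stand-in simulating a non-deterministic model): sample many responses and summarize the distribution, rather than trusting any single one.

```python
import random
import statistics

def ask_llm(prompt):
    # Stub simulating a non-deterministic model: each call returns a
    # different ordered brand list, the way real LLM responses do.
    brands = ["Asana", "Monday.com", "YourBrand", "Trello", "ClickUp"]
    return random.sample(brands, k=random.randint(3, 5))

def visibility_profile(brand, prompt, runs=200):
    positions = []
    for _ in range(runs):
        ranking = ask_llm(prompt)
        if brand in ranking:
            positions.append(ranking.index(brand) + 1)
    rate = len(positions) / runs                               # how often
    mean_pos = statistics.mean(positions) if positions else None  # how prominently
    return {"appearance_rate": rate, "mean_position": mean_pos}

print(visibility_profile("YourBrand", "Best project management tool?"))
```

The numbers that come out of a loop like this are stable properties of the distribution, even though every individual response is not.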

That's what we built.

Five Metrics. Each One Measures Something Different.

Latent Brand Association measures what the model believes about you.

Not what it said once. What it has absorbed into its neural weights across billions of training documents.

For example: a model might associate "Nike" with "running" and "innovation" because those connections appear thousands of times in its training data. What associations does it have with your brand? Positive ones? Outdated ones? None at all?

LLM Authority Score measures how often you show up and how high up you appear.

A brand that appears in 8 out of 10 responses but always last is very different from one that appears in 6 out of 10 but always first. This metric captures both.
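One hypothetical way to fold those two signals into a single number (illustrative only, not the product's actual formula):

```python
def authority_score(appearance_rate, mean_position, list_len=10):
    # Hypothetical composite: how often the brand appears, weighted by
    # how close to the top it lands when it does. Not the real formula.
    if mean_position is None:
        return 0.0
    prominence = (list_len - mean_position + 1) / list_len  # 1.0 = always first
    return appearance_rate * prominence

brand_a = authority_score(0.8, 10)  # in 8/10 responses, always last
brand_b = authority_score(0.6, 1)   # in 6/10 responses, always first
print(brand_a, brand_b)  # brand_b scores far higher despite appearing less often
```

Frequency alone would crown brand A; weighting by prominence tells the opposite, and more useful, story.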

Top of Mind measures whether the model recalls you from memory, without help from web search.

This is the distinction most people miss. Modern AI uses two paths: what the model already knows (recall) and what it fetches from the web (retrieval). If your visibility depends entirely on retrieval, you're in a fragile position. The moment the retrieval system changes, you vanish.

Semantic Self-Sufficiency measures whether your content survives being pulled apart.

AI models don't read your pages top to bottom. They break your content into chunks and evaluate each piece on its own.

If a paragraph on your site says "Our premium plan includes all of the above features," that's useless to an AI that can't see what's "above." This is the one metric you fully control. You can fix it today by changing how you write.
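A crude heuristic version of that check (the phrase list, the `Acme` brand, and the example sentences are all invented for illustration; real scoring would be far more sophisticated):

```python
# Invented heuristic: flag chunks that depend on context a retrieval
# system can no longer see once the page has been split into pieces.
CONTEXT_DEPENDENT_PHRASES = (
    "the above", "as mentioned", "see below", "this feature", "these plans",
)

def is_self_sufficient(chunk):
    lowered = chunk.lower()
    return not any(p in lowered for p in CONTEXT_DEPENDENT_PHRASES)

bad = "Our premium plan includes all of the above features."
good = "The Acme premium plan includes SSO, audit logs, and priority support."
print(is_self_sufficient(bad), is_self_sufficient(good))  # False True
```

The rewrite in `good` carries its own context: brand name, plan name, and the concrete features, so the chunk still makes sense in isolation.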

Citation Capture Rate measures whether your content actually gets picked.

When an AI generates an answer, it pulls in multiple candidate sources and chooses the best ones. Most content gets retrieved and then thrown away. This metric measures whether yours makes the cut.

Read the full explanation of each metric here.

We Measure Across Four LLMs. Not Just One.

ChatGPT, Claude, Gemini, and Perplexity each have different training data. Different cutoff dates. Different retrieval systems.

Your brand might score well on ChatGPT but be completely absent from Gemini.

Measuring one model and assuming the others match is just another version of the snapshot problem. We give you per-model breakdowns so you know exactly where you stand with each one.

The Bottom Line

AI visibility is a real problem that deserves real measurement.

Not synthetic prompts pretending to be your customers. Not single snapshots pretending to be stable rankings. Not one model pretending to be the whole AI landscape.

Five metrics designed for probabilistic systems. Four models for the complete picture.

Honest measurement of where your brand actually stands inside the AI models your customers are already using.