AI Visibility
AI Visibility enables you to monitor how your brand and competitors appear in AI-generated responses from providers like ChatGPT, Perplexity, Gemini, Claude, Google AI Mode, and Google AI Overviews.
The system sends prompts to AI providers on a configurable schedule, extracts citations and brand mentions from responses, and calculates visibility metrics including position, sentiment, and quality scores.
Core concepts
AI Visibility is organized around a hierarchy of entities:
```
Account
└── Project (brand monitoring workspace)
    ├── Primary Brand (your brand, with associated domains)
    ├── AI Providers (ChatGPT, Perplexity, Gemini, etc.)
    ├── Topics (categories of prompts, e.g. "Technical SEO")
    │   └── Prompts (questions sent to AI providers)
    │       └── Prompt Runs (individual executions per provider)
    │           └── Answers
    │               ├── Citations (URLs referenced)
    │               └── Mentions (brand name appearances)
    ├── Brands (discovered from AI responses)
    └── Page Runs (content evaluation crawls)
```
Projects
A project is the top-level container for brand monitoring. Each project has a primary brand (your brand) and is configured with a schedule cadence that controls how often prompts are automatically sent to AI providers.
Topics and prompts
Topics group related prompts together. For example, you might have a "Technical SEO" topic containing prompts like "What are the best tools for site auditing?" and "How do I fix crawl budget issues?".
When a prompt is created, it automatically runs against all configured AI providers. Prompts also run automatically based on the project's schedule cadence.
Brands
Brands are discovered automatically from AI provider responses. Each brand found in citations or mentions is tracked. You can classify brands as own, competitor, or other, and merge duplicate brand names that refer to the same entity.
Visibility score
The visibility score is a composite metric (0-100) that measures how visible a brand is in AI responses. It combines two components:
- Citation quality score (25% weight) -- how well the brand is cited in responses
- Brand mention quality score (75% weight) -- how prominently the brand is mentioned
Each component is scaled by an appearance rate -- the ratio of prompt runs where the brand appeared to total runs. This ensures brands that appear consistently rank higher than those with rare but high-quality appearances.
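The weighting described above can be sketched in Python. The function name and the assumption that each component carries its own appearance rate are illustrative; the 25/75 weights and the appearance-rate scaling come directly from the description above.

```python
def visibility_score(
    citation_quality: float,    # 0-100 citation quality score
    mention_quality: float,     # 0-100 brand mention quality score
    citation_appearances: int,  # prompt runs where the brand was cited
    mention_appearances: int,   # prompt runs where the brand was mentioned
    total_runs: int,            # total prompt runs in the period
) -> float:
    """Illustrative composite score per the weighting described above."""
    if total_runs == 0:
        return 0.0
    # Scale each quality component by its appearance rate, then combine.
    citation_component = citation_quality * (citation_appearances / total_runs)
    mention_component = mention_quality * (mention_appearances / total_runs)
    return 0.25 * citation_component + 0.75 * mention_component
```

For example, a brand cited with quality 80 in half of 100 runs and mentioned with quality 90 in all of them would score 0.25 × 40 + 0.75 × 90 = 77.5 under this sketch.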
Page runs
Page runs are content evaluation crawls that analyze cited URLs. They can be triggered automatically when AI providers cite your brand's pages, or manually for any URL. Page runs produce detailed metrics about content quality, precision, recall, and brand positioning.
Prerequisites
- An active Lumar account with AI Visibility enabled (`Account.aiVisibilityAvailable` must be `true`)
- A valid API session token (see Authentication)
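A minimal sketch of checking the `aiVisibilityAvailable` flag over HTTP. The endpoint URL, query shape, and auth header name are assumptions for illustration; only the `Account.aiVisibilityAvailable` field comes from the documentation above, so verify the real schema and endpoint in the API reference.

```python
import json
from urllib.request import Request, urlopen

# Assumption: endpoint URL for illustration only.
GRAPHQL_ENDPOINT = "https://api.lumar.io/graphql"

def build_availability_check(account_id: str) -> dict:
    """Build a GraphQL payload that reads Account.aiVisibilityAvailable.

    The query/operation shape is a sketch; check the real schema.
    """
    query = """
    query CheckAiVisibility($accountId: ObjectID!) {
      account(id: $accountId) {
        aiVisibilityAvailable
      }
    }
    """
    return {"query": query, "variables": {"accountId": account_id}}

def post_graphql(payload: dict, session_token: str) -> dict:
    """POST a GraphQL payload with a session token (header name assumed)."""
    req = Request(
        GRAPHQL_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "X-Auth-Token": session_token,  # assumption: verify header name
        },
    )
    with urlopen(req) as resp:
        return json.load(resp)
```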
Getting started
A typical workflow for setting up AI Visibility:
1. Create a project with your primary brand name and domain
2. Create topics to organize your monitoring (or use AI-suggested topics)
3. Create prompts under each topic (or use AI-suggested prompts)
4. Wait for prompt runs to complete (they execute automatically)
5. Query analytics to see visibility scores, citations, and mentions
Each of these steps is covered in detail in the following guides.
Common query parameters
Most AI Visibility queries share these parameters:
| Parameter | Type | Description |
|---|---|---|
| `accountId` | `ObjectID!` | Your Lumar account ID |
| `aiVisibilityProjectId` | `ObjectID!` | The AI Visibility project to query (some queries use this) |
| `aiVisibilityBrandId` | `ObjectID!` | The brand to calculate metrics for (usually your primary brand) |
| `dateRange` | `AiVisibilityDateRangeInput` | Optional date range filter (`start` and `end` as ISO dates) |
| `aiProviderTypes` | `[AiVisibilityAiProviderType]` | Optional filter to specific AI platforms |
| `country` | `String` | Optional ISO 3166-1 alpha-2 country code filter |
Date range behavior
When `dateRange` is omitted, most queries default to the last 30 days. When provided, metrics are calculated only for prompt runs within that range.
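The default window can be reproduced explicitly when you want deterministic queries. A small sketch, assuming the documented `start`/`end` ISO-date fields of `AiVisibilityDateRangeInput`:

```python
from datetime import date, timedelta

def default_date_range(today=None):
    """Build an explicit date range matching the documented default:
    the last 30 days, with start/end as ISO date strings."""
    today = today or date.today()
    return {
        "start": (today - timedelta(days=30)).isoformat(),
        "end": today.isoformat(),
    }
```

Passing this explicitly keeps results stable if the server-side default ever changes.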
Country filter
The country filter has three-state semantics:
- Omitted -- includes all regions
- Explicit `null` -- worldwide only (prompts with no country set)
- String value (e.g. `"US"`) -- specific country only
Pagination
All list queries use cursor-based pagination with `first` and `after` parameters. See Pagination for details.
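A typical client loops over pages until the cursor is exhausted. This sketch assumes the common Relay-style connection shape (`edges`/`node`, `pageInfo.hasNextPage`, `pageInfo.endCursor`); only the `first` and `after` parameters come from the documentation above, so confirm the response shape against the schema.

```python
def paginate(run_query, variables, page_size=100):
    """Yield every node from a cursor-paginated list query.

    `run_query(variables) -> dict` executes the query and returns the
    connection object; the edges/pageInfo shape is assumed Relay-style.
    """
    cursor = None
    while True:
        page = run_query({**variables, "first": page_size, "after": cursor})
        for edge in page["edges"]:
            yield edge["node"]
        if not page["pageInfo"]["hasNextPage"]:
            break
        cursor = page["pageInfo"]["endCursor"]
```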
Authorization
AI Visibility queries require Viewer role access. Mutations require Editor role access. Sharelink access is also supported for queries.