# Page Runs (Content Evaluation)

https://api-docs.lumar.io/docs/ai-visibility/ai-visibility-page-runs

Page runs are content evaluation crawls that analyze URLs cited by AI providers. They measure content quality, relevance, brand positioning, and other metrics to help you understand why certain pages are (or aren't) being cited.

## How page runs work

Page runs can be triggered in two ways:

1. **Automatically** -- When a prompt run finishes and the project has `autoCrawlEnabled = true`, citation URLs matching your brand's domains are automatically crawled. URLs are skipped if they were recently crawled within the `freshnessThresholdDays` window.
2. **Manually** -- Trigger a content evaluation for any URL via the API.

Each page run uses the Single Page Requester (SPR) to crawl the URL and run content evaluation metrics.

## Status lifecycle

| Status      | Description                        |
| ----------- | ---------------------------------- |
| `pending`   | Created and waiting to be crawled  |
| `crawling`  | Currently being crawled by SPR     |
| `completed` | Successfully crawled with metrics  |
| `failed`    | Crawl failed (see `failureReason`) |

## List page runs

Retrieve page runs for a project. Optionally filter by URL:

```graphql
query GetAiVisibilityPageRuns(
  $accountId: ObjectID!
  $aiVisibilityProjectId: ObjectID!
) {
  getAiVisibilityPageRuns(
    accountId: $accountId
    aiVisibilityProjectId: $aiVisibilityProjectId
    first: 20
  ) {
    nodes {
      rawId
      url
      status
      precisionScore
      recallScore
      qualityScore
      trustScore
      brandMentionScore
      brandSentimentScore
      brandPositionScore
      failureReason
      createdAt
      updatedAt
    }
    pageInfo {
      hasNextPage
      endCursor
    }
    totalCount
  }
}
```

**Variables:**

```json
{
  "accountId": "TjAwN0FjY291bnQxMjM0NQ",
  "aiVisibilityProjectId": "QWlWaXNpYmlsaXR5UHJvamVjdDE"
}
```

### Score fields

Completed page runs include these numeric scores (0-100 scale):

| Score                     | Description                                           |
| ------------------------- | ----------------------------------------------------- |
| `precisionScore`          | How precisely the content matches the intended topic  |
| `recallScore`             | How comprehensively the content covers the topic      |
| `uniquenessScore`         | How unique the content is compared to other sources   |
| `qualityScore`            | Overall content quality                               |
| `trustScore`              | Trustworthiness of the content                        |
| `brandMentionScore`       | How well the brand is mentioned in the content        |
| `brandSentimentScore`     | Brand sentiment within the content                    |
| `brandPositionScore`      | How prominently the brand is positioned               |
| `topicalOpportunityScore` | Opportunity to improve topical coverage               |
| `evergreenHealthScore`    | How well the content maintains relevance over time    |
| `qdfScore`                | Query Deserves Freshness score                        |
| `gscQueryScore`           | Google Search Console query relevance                 |

Additional fields include boolean metrics (e.g. `hasPageChanged`, `domainInResults`), text analysis fields (e.g. `primaryPageIntent`, `topicalOpportunityReasoning`), and detailed JSONB evaluation arrays.

## Trigger a page run

Manually trigger a content evaluation for a URL:

```graphql
mutation TriggerAiVisibilityPageRun(
  $accountId: ObjectID!
  $aiVisibilityProjectId: ObjectID!
  $url: String!
) {
  triggerAiVisibilityPageRun(
    input: {
      accountId: $accountId
      aiVisibilityProjectId: $aiVisibilityProjectId
      url: $url
    }
  ) {
    aiVisibilityPageRun {
      rawId
      url
      status
      sprRequestId
      createdAt
    }
  }
}
```

**Variables:**

```json
{
  "accountId": "TjAwN0FjY291bnQxMjM0NQ",
  "aiVisibilityProjectId": "QWlWaXNpYmlsaXR5UHJvamVjdDE",
  "url": "https://example.com/my-page"
}
```

:::note
Triggering a page run consumes ContentEvals credits from your account. The mutation validates credit availability before creating the run.
:::

## Page scores (aggregated)

Get aggregated scores across multiple page runs, grouped by URL. Useful for understanding average content quality over time:

```graphql
query GetAiVisibilityPageScores(
  $accountId: ObjectID!
  $aiVisibilityProjectId: ObjectID!
  $dateRange: AiVisibilityDateRangeInput
) {
  getAiVisibilityPageScores(
    accountId: $accountId
    aiVisibilityProjectId: $aiVisibilityProjectId
    dateRange: $dateRange
  ) {
    url
    avgPrecisionScore
    avgRecallScore
    avgQualityScore
    avgTrustScore
    avgBrandMentionScore
    avgBrandSentimentScore
    avgBrandPositionScore
    totalRuns
    latestRunAt
  }
}
```

**Variables:**

```json
{
  "accountId": "TjAwN0FjY291bnQxMjM0NQ",
  "aiVisibilityProjectId": "QWlWaXNpYmlsaXR5UHJvamVjdDE",
  "dateRange": {
    "start": "2025-01-01",
    "end": "2025-01-31"
  }
}
```

Page scores return average values for each score field, along with `totalRuns` and `latestRunAt`. Results are limited to 10,000 URLs.

## Automatic crawling setup

To enable automatic page run creation when AI providers cite your brand's pages:

1. Set `autoCrawlEnabled: true` on your project (see [Projects](/docs/ai-visibility/ai-visibility-projects.md#content-evaluation-settings))
2. Optionally adjust `freshnessThresholdDays` (default: 7 days)

The system will:

- Filter citation URLs to match your primary brand's domains (including subdomains)
- Skip URLs that have a recent page run within the freshness threshold
- Skip URLs that already have an in-flight page run (pending or crawling)

## Schema reference

- [`AiVisibilityPageRun`](/docs/schema/objects/ai-visibility-page-run.md) -- Page run type with all metric fields
- [`AiVisibilityPageScore`](/docs/schema/objects/ai-visibility-page-score.md) -- Aggregated page score type
- [`AiVisibilityPageRunStatus`](/docs/schema/enums/ai-visibility-page-run-status.md) -- Status enum
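The queries and mutations on this page are standard GraphQL operations, so they can be sent as an ordinary GraphQL-over-HTTP POST. A minimal Python sketch of building such a request -- the endpoint URL and bearer-token auth scheme are assumptions, not taken from this page; check your account's API settings for the real values:

```python
import json

# Assumed endpoint and auth scheme -- substitute your account's real values.
GRAPHQL_ENDPOINT = "https://api.lumar.io/graphql"

LIST_PAGE_RUNS_QUERY = """
query GetAiVisibilityPageRuns(
  $accountId: ObjectID!
  $aiVisibilityProjectId: ObjectID!
) {
  getAiVisibilityPageRuns(
    accountId: $accountId
    aiVisibilityProjectId: $aiVisibilityProjectId
    first: 20
  ) {
    nodes { rawId url status }
    pageInfo { hasNextPage endCursor }
    totalCount
  }
}
"""

def build_graphql_request(query: str, variables: dict) -> bytes:
    """Serialize the standard GraphQL-over-HTTP POST body: {query, variables}."""
    return json.dumps({"query": query, "variables": variables}).encode("utf-8")

body = build_graphql_request(
    LIST_PAGE_RUNS_QUERY,
    {
        "accountId": "TjAwN0FjY291bnQxMjM0NQ",
        "aiVisibilityProjectId": "QWlWaXNpYmlsaXR5UHJvamVjdDE",
    },
)
# To send: POST `body` to GRAPHQL_ENDPOINT, e.g. via
# urllib.request.Request(GRAPHQL_ENDPOINT, data=body,
#                        headers={"Content-Type": "application/json",
#                                 "Authorization": "Bearer <token>"})
```

The same builder works for `triggerAiVisibilityPageRun` and `getAiVisibilityPageScores`; only the query text and variables change.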
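The automatic-crawl skip rules (brand-domain match including subdomains, freshness window, in-flight runs) can be sketched as pure filtering logic. This is an illustrative reimplementation, not Lumar's code, and the `existing_runs` record shape is an assumption:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse

def matches_brand_domain(url: str, brand_domains: list[str]) -> bool:
    """True if the URL's host is a brand domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in brand_domains)

def should_auto_crawl(url, brand_domains, existing_runs,
                      freshness_threshold_days=7, now=None):
    """Apply the documented skip rules to one citation URL.

    `existing_runs` is assumed to be a list of dicts with "url", "status",
    and "createdAt" (datetime) -- an illustrative shape, not the API's.
    """
    now = now or datetime.now(timezone.utc)
    if not matches_brand_domain(url, brand_domains):
        return False
    for run in existing_runs:
        if run["url"] != url:
            continue
        # Skip URLs that already have an in-flight page run.
        if run["status"] in ("pending", "crawling"):
            return False
        # Skip URLs with a recent run inside the freshness window.
        if (run["status"] == "completed"
                and now - run["createdAt"] < timedelta(days=freshness_threshold_days)):
            return False
    return True

# Example: a completed run 2 days ago suppresses a re-crawl (default window: 7 days).
now = datetime(2025, 1, 15, tzinfo=timezone.utc)
runs = [{"url": "https://blog.example.com/post", "status": "completed",
         "createdAt": now - timedelta(days=2)}]
print(should_auto_crawl("https://blog.example.com/post", ["example.com"], runs, now=now))  # False
print(should_auto_crawl("https://other.com/page", ["example.com"], runs, now=now))         # False: not a brand domain
```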
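If you fetch raw page runs rather than calling `getAiVisibilityPageScores`, the same per-URL aggregation can be reproduced client-side: group completed runs by URL, average each score field, and track `totalRuns` and `latestRunAt`. A sketch, assuming runs arrive as plain dicts with the fields shown earlier:

```python
from collections import defaultdict

def aggregate_page_scores(page_runs, score_fields=("precisionScore", "qualityScore")):
    """Group completed runs by URL and average each score field,
    mirroring the shape returned by getAiVisibilityPageScores."""
    by_url = defaultdict(list)
    for run in page_runs:
        if run["status"] == "completed":  # failed/pending runs carry no scores
            by_url[run["url"]].append(run)
    results = []
    for url, runs in by_url.items():
        row = {"url": url, "totalRuns": len(runs),
               "latestRunAt": max(r["createdAt"] for r in runs)}
        for field in score_fields:
            values = [r[field] for r in runs if r.get(field) is not None]
            row["avg" + field[0].upper() + field[1:]] = (
                sum(values) / len(values) if values else None)
        results.append(row)
    return results

runs = [
    {"url": "https://example.com/a", "status": "completed", "createdAt": "2025-01-10",
     "precisionScore": 80, "qualityScore": 70},
    {"url": "https://example.com/a", "status": "completed", "createdAt": "2025-01-20",
     "precisionScore": 90, "qualityScore": 60},
    {"url": "https://example.com/a", "status": "failed", "createdAt": "2025-01-21",
     "precisionScore": None, "qualityScore": None},
]
print(aggregate_page_scores(runs))
# -> [{'url': 'https://example.com/a', 'totalRuns': 2, 'latestRunAt': '2025-01-20',
#      'avgPrecisionScore': 85.0, 'avgQualityScore': 65.0}]
```

Note the failed run is excluded from both the averages and `totalRuns`; whether the server counts failed runs the same way is an assumption worth verifying against the schema.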