# Tutorial: Run a Crawl and Get Results

https://api-docs.lumar.io/docs/graphql/tutorials/run-crawl-and-get-results

This tutorial walks through the complete workflow: authenticating, creating a project, running a crawl, waiting for it to finish, and fetching the results.

## Step 1: Verify authentication

Before doing anything else, confirm your token is valid by querying your user details and available accounts.

```graphql
query VerifyAuth {
  me {
    id
    username
    accounts(first: 1) {
      nodes {
        id
        name
      }
    }
  }
}
```

**Response:**

```json
{
  "data": {
    "me": {
      "id": "TjAwNFVzZXI4NjE",
      "username": "your.username@example.com",
      "accounts": {
        "nodes": [
          {
            "id": "TjAwN0FjY291bnQ3MTU",
            "name": "Your Account Name"
          }
        ]
      }
    }
  }
}
```

If this returns an error, see [Authentication](../authentication.md) to obtain a valid token.

## Step 2: Create a project (if needed)

If you do not already have a project, create one. See [How to Create a Project](../create-project.md) for the full guide. Here is a quick example:

```graphql
mutation CreateSEOProject($input: CreateProjectInput!) {
  createSEOProject(input: $input) {
    project {
      ...ProjectDetails
    }
  }
}

fragment ProjectDetails on Project {
  id
  name
  primaryDomain
  # ...other fields you want to retrieve
}
```

**Variables:**

```json
{
  "input": {
    "accountId": "TjAwN0FjY291bnQ3MTU",
    "name": "www.lumar.io SEO Project",
    "primaryDomain": "https://www.lumar.io/"
  }
}
```

**Response:**

```json
{
  "data": {
    "createSEOProject": {
      "project": {
        "id": "TjAwN1Byb2plY3Q2MTM0",
        "name": "www.lumar.io SEO Project",
        "primaryDomain": "https://www.lumar.io/"
      }
    }
  }
}
```

## Step 3: Run a crawl

Trigger a crawl for your project with the `runCrawlForProject` mutation:

```graphql
mutation RunCrawl($input: RunCrawlForProjectInput!)
{
  runCrawlForProject(input: $input) {
    crawl {
      id
      statusEnum
      createdAt
    }
  }
}
```

**Variables:**

```json
{
  "input": {
    "projectId": "TjAwN1Byb2plY3Q2MTM0"
  }
}
```

**Response:**

```json
{
  "data": {
    "runCrawlForProject": {
      "crawl": {
        "id": "TjAwNUNyYXdsMTc2NjI0MQ",
        "statusEnum": "Queued",
        "createdAt": "2025-01-15T10:00:00.000Z"
      }
    }
  }
}
```

The crawl starts in `Queued` status and progresses through `Crawling`, `Finalizing`, and finally `Finished`.

## Step 4: Poll for crawl completion

Query the crawl status periodically until it reaches `Finished`. A polling interval of 30–60 seconds is reasonable for most crawls.

```graphql
query PollCrawlStatus($crawlId: ObjectID!) {
  getCrawl(id: $crawlId) {
    id
    status
    createdAt
    finishedAt
    crawlUrlsTotal
  }
}
```

**Variables:**

```json
{
  "crawlId": "TjAwNUNyYXdsMTc2NjI0MQ"
}
```

**Response:**

```json
{
  "data": {
    "getCrawl": {
      "id": "TjAwNUNyYXdsMTc2NjI0MQ",
      "status": "Finished",
      "createdAt": "2025-01-15T10:00:00.000Z",
      "finishedAt": "2025-01-15T10:30:00.000Z",
      "crawlUrlsTotal": 2186
    }
  }
}
```

A simple polling loop looks like this:

```typescript
async function waitForCrawl(crawlId: string): Promise<void> {
  while (true) {
    const result = await executeQuery(POLL_QUERY, { crawlId });
    const status = result.data.getCrawl.status;
    if (status === "Finished") {
      console.log("Crawl finished!");
      return;
    }
    if (status === "Archived" || status === "Paused") {
      throw new Error(`Crawl ended with status: ${status}`);
    }
    console.log(`Crawl status: ${status}. Checking again in 30s...`);
    await new Promise(resolve => setTimeout(resolve, 30000));
  }
}
```

## Step 5: Fetch the results

Once the crawl is finished, query the report data:

```graphql
query GetCrawlResults($crawlId: ObjectID!)
{
  getReportStat(
    input: { crawlId: $crawlId, reportTemplateCode: "all_pages" }
  ) {
    basic
    crawlUrls(first: 5) {
      nodes {
        url
        httpStatusCode
        pageTitle
      }
      totalCount
    }
  }
}
```

**Variables:**

```json
{
  "crawlId": "TjAwNUNyYXdsMTc2NjI0MQ"
}
```

**Response:**

```json
{
  "data": {
    "getReportStat": {
      "basic": 2186,
      "crawlUrls": {
        "nodes": [
          {
            "url": "https://www.example.com/",
            "httpStatusCode": 200,
            "pageTitle": "Example - Home"
          },
          {
            "url": "https://www.example.com/about",
            "httpStatusCode": 200,
            "pageTitle": "About Us"
          }
        ],
        "totalCount": 2186
      }
    }
  }
}
```

You can request different report templates by changing the `reportTemplateCode` parameter. See [Report Templates Overview](../report-templates-overview.md) for how to discover the available templates.

## Next steps

- [Export Crawl Data](export-crawl-data) – learn how to bulk-export results.
- [Track SEO Health](track-seo-health) – monitor health scores over time.
- [Setup Automated Monitoring](setup-automated-monitoring) – automate crawls with schedules and alerts.
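The steps above can be wired together into one script. The sketch below defines the `executeQuery` helper that the Step 4 polling loop assumes. Note the assumptions: the endpoint URL, the `Bearer`-token header shape, and the `LUMAR_TOKEN` environment variable are illustrative placeholders (check the Authentication guide for the real values), and `runCrawlAndGetResults` is a hypothetical name for the end-to-end flow.

```typescript
// Build the JSON body for a GraphQL-over-HTTP POST request.
function graphqlBody(query: string, variables: Record<string, unknown>): string {
  return JSON.stringify({ query, variables });
}

// Hypothetical endpoint; confirm the real URL in the Lumar API docs.
const ENDPOINT = "https://api.lumar.io/graphql";

// Minimal helper matching the executeQuery signature used in Step 4.
async function executeQuery(
  query: string,
  variables: Record<string, unknown>,
): Promise<any> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Auth header shape is an assumption; see the Authentication guide.
      "Authorization": `Bearer ${process.env.LUMAR_TOKEN}`,
    },
    body: graphqlBody(query, variables),
  });
  const payload = await res.json();
  if (payload.errors) {
    throw new Error(`GraphQL errors: ${JSON.stringify(payload.errors)}`);
  }
  return payload;
}

// The operations from Steps 3-5, inlined as strings.
const RUN_CRAWL = `mutation RunCrawl($input: RunCrawlForProjectInput!) {
  runCrawlForProject(input: $input) { crawl { id statusEnum } }
}`;
const POLL_QUERY = `query PollCrawlStatus($crawlId: ObjectID!) {
  getCrawl(id: $crawlId) { id status crawlUrlsTotal }
}`;
const GET_RESULTS = `query GetCrawlResults($crawlId: ObjectID!) {
  getReportStat(input: { crawlId: $crawlId, reportTemplateCode: "all_pages" }) {
    basic
    crawlUrls(first: 5) { nodes { url httpStatusCode pageTitle } totalCount }
  }
}`;

// Compact version of Step 4's polling loop.
async function waitUntilFinished(crawlId: string): Promise<void> {
  for (;;) {
    const r = await executeQuery(POLL_QUERY, { crawlId });
    const status = r.data.getCrawl.status;
    if (status === "Finished") return;
    if (status === "Archived" || status === "Paused") {
      throw new Error(`Crawl ended with status: ${status}`);
    }
    await new Promise((resolve) => setTimeout(resolve, 30_000));
  }
}

// End-to-end: run a crawl, wait for it, fetch the all_pages report.
async function runCrawlAndGetResults(projectId: string): Promise<any> {
  const run = await executeQuery(RUN_CRAWL, { input: { projectId } });
  const crawlId = run.data.runCrawlForProject.crawl.id;
  await waitUntilFinished(crawlId);
  return executeQuery(GET_RESULTS, { crawlId });
}
```

For example, `runCrawlAndGetResults("TjAwN1Byb2plY3Q2MTM0")` performs Steps 3 through 5 in sequence and resolves with the report payload shown above.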