Tutorial: Build an SEO Dashboard
This tutorial shows how to fetch the data needed to build a project-level SEO dashboard using the GraphQL API. You will query the full list of projects in an account with their latest crawl status, crawled URL counts, and health scores — everything required to render a live overview for any account.
What you will build: a script that outputs a JSON summary containing:
- All projects in an account with their latest crawl status and crawled URL count
- Health scores from the latest crawl across key report categories (e.g. SEO, Accessibility)
- A ready-to-extend structure for charting trends over time
Prerequisites: a service account API key and your account ID. See Authentication and Service Accounts if you do not have these yet.
Step 1: Authenticate
All requests to the Lumar GraphQL API require an x-api-key header containing your service account API key. Create a GraphQLClient instance once and reuse it throughout your script.
import { GraphQLClient, gql } from "graphql-request";

const client = new GraphQLClient("https://api.lumar.io/graphql", {
  headers: {
    "x-api-key": process.env.LUMAR_API_KEY,
  },
});
Store credentials in environment variables, never hard-coded. See Authentication for full details on obtaining and rotating API keys.
Step 2: Fetch all projects with latest crawl status
The getAccount query exposes a paginated projects connection. Request the first 50 projects ordered by creation date, including each project's most recent crawl via a nested crawls(first: 1) sub-connection.
query GetDashboardProjects($accountId: ObjectID!, $after: String) {
  getAccount(id: $accountId) {
    id
    name
    projects(first: 50, after: $after, orderBy: [{ field: createdAt, direction: DESC }]) {
      nodes {
        id
        name
        primaryDomain
        lastCrawlStatus
        crawls(first: 1, orderBy: [{ field: finishedAt, direction: DESC }]) {
          nodes {
            id
            finishedAt
            statusEnum
            crawlUrlsTotal
          }
        }
      }
      pageInfo {
        hasNextPage
        endCursor
      }
      totalCount
    }
  }
}
The response includes pageInfo.hasNextPage and pageInfo.endCursor. If hasNextPage is true, pass endCursor as the $after variable in your next request to fetch the following page. Repeat until hasNextPage is false. This cursor-based approach is covered in detail in the Pagination guide.
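The cursor loop described above can be factored into a small generic helper. This is a sketch, not part of the API: `fetchPage` is a hypothetical callback standing in for any function that takes a cursor and returns one page's `nodes` and `pageInfo`.

```typescript
// Shape of one page of any cursor-paginated connection.
type Page<T> = {
  nodes: T[];
  pageInfo: { hasNextPage: boolean; endCursor: string | null };
};

// Repeatedly call fetchPage with the previous endCursor until
// hasNextPage is false, collecting every node along the way.
async function paginate<T>(
  fetchPage: (after: string | null) => Promise<Page<T>>,
): Promise<T[]> {
  const all: T[] = [];
  let after: string | null = null;
  do {
    const page = await fetchPage(after);
    all.push(...page.nodes);
    after = page.pageInfo.hasNextPage ? page.pageInfo.endCursor : null;
  } while (after);
  return all;
}
```

The same helper works for any connection in the schema that exposes `pageInfo`, so you can reuse it for crawls or reports as well as projects.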
Step 3: Fetch health scores for a crawl
Once you have a crawl ID from Step 2, query getCrawl to retrieve health scores using the healthScores field. Pass a list of report category codes (e.g. "seo", "accessibility") to get the health score for each category — a quick indicator of site health.
query GetCrawlHealthScores($crawlId: ObjectID!, $reportCategoryCodes: [String!]!) {
  getCrawl(id: $crawlId) {
    id
    finishedAt
    healthScores(reportCategoryCodes: $reportCategoryCodes) {
      healthScore
      reportCategoryCode
      createdAt
    }
  }
}
Each CrawlHealthScoreItem contains a healthScore (a float from 0 to 100), the reportCategoryCode it belongs to, and a createdAt timestamp. See Health Scores for how these scores are calculated.
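For rendering, it can be convenient to flatten the returned list into a category-to-score map. A minimal sketch, assuming the item shape described above (`toScoreMap` is a hypothetical helper, not part of the API):

```typescript
// Shape of one item returned by the healthScores field.
type HealthScoreItem = {
  healthScore: number;
  reportCategoryCode: string;
  createdAt: string;
};

// Flatten a list of health score items into a { category: score }
// record for quick lookup when rendering dashboard tiles.
function toScoreMap(items: HealthScoreItem[]): Record<string, number> {
  const map: Record<string, number> = {};
  for (const item of items) {
    map[item.reportCategoryCode] = item.healthScore;
  }
  return map;
}
```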
Step 4: Put it all together
The script below combines the two queries into a complete dashboard builder:
- Paginates through all projects in the account.
- For each project that has a finished crawl, fetches health scores.
- Outputs a structured JSON summary to stdout.
import { GraphQLClient, gql } from "graphql-request";

const client = new GraphQLClient("https://api.lumar.io/graphql", {
  headers: { "x-api-key": process.env.LUMAR_API_KEY },
});

const GET_PROJECTS = gql`
  query GetDashboardProjects($accountId: ObjectID!, $after: String) {
    getAccount(id: $accountId) {
      id
      name
      projects(first: 50, after: $after, orderBy: [{ field: createdAt, direction: DESC }]) {
        nodes {
          id
          name
          primaryDomain
          lastCrawlStatus
          crawls(first: 1, orderBy: [{ field: finishedAt, direction: DESC }]) {
            nodes {
              id
              finishedAt
              statusEnum
              crawlUrlsTotal
            }
          }
        }
        pageInfo {
          hasNextPage
          endCursor
        }
        totalCount
      }
    }
  }
`;

const GET_CRAWL_HEALTH_SCORES = gql`
  query GetCrawlHealthScores($crawlId: ObjectID!, $reportCategoryCodes: [String!]!) {
    getCrawl(id: $crawlId) {
      id
      finishedAt
      healthScores(reportCategoryCodes: $reportCategoryCodes) {
        healthScore
        reportCategoryCode
        createdAt
      }
    }
  }
`;
async function fetchAllProjects(accountId: string) {
  const projects: any[] = [];
  let after: string | null = null;
  do {
    const data: any = await client.request(GET_PROJECTS, { accountId, after });
    const { nodes, pageInfo } = data.getAccount.projects;
    projects.push(...nodes);
    after = pageInfo.hasNextPage ? pageInfo.endCursor : null;
  } while (after);
  return projects;
}
async function buildDashboard(accountId: string) {
  const projects = await fetchAllProjects(accountId);
  const dashboard = await Promise.all(
    projects.map(async (project) => {
      const latestCrawl = project.crawls.nodes[0] ?? null;
      let healthScores = null;
      if (latestCrawl && latestCrawl.statusEnum === "Finished") {
        const crawlData = await client.request(GET_CRAWL_HEALTH_SCORES, {
          crawlId: latestCrawl.id,
          reportCategoryCodes: ["seo", "accessibility"],
        });
        healthScores = crawlData.getCrawl.healthScores ?? null;
      }
      return {
        id: project.id,
        name: project.name,
        domain: project.primaryDomain,
        lastCrawlStatus: project.lastCrawlStatus,
        lastCrawlDate: latestCrawl?.finishedAt ?? null,
        crawledUrls: latestCrawl?.crawlUrlsTotal ?? 0,
        healthScores,
      };
    }),
  );
  console.log(JSON.stringify(dashboard, null, 2));
}
const accountId = process.env.ACCOUNT_ID;
if (!accountId) throw new Error("ACCOUNT_ID environment variable is required");
buildDashboard(accountId).catch((err) => {
  console.error(err);
  process.exit(1);
});
Note that Promise.all fires all GET_CRAWL_HEALTH_SCORES requests in parallel. If you have a large number of projects, batch the requests in groups of 10-20 to stay within the API rate limits. See Rate Limits for guidance.
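One way to batch is to split the project list into fixed-size chunks and await each chunk before starting the next. A sketch under that assumption — `chunk` and `inBatches` are hypothetical helpers, and `worker` stands in for the per-project health score request:

```typescript
// Split an array into consecutive chunks of at most the given size.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Run the worker over all items, processing chunks sequentially;
// requests within a chunk still run in parallel via Promise.all.
async function inBatches<T, R>(
  items: T[],
  size: number,
  worker: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = [];
  for (const group of chunk(items, size)) {
    results.push(...(await Promise.all(group.map(worker))));
  }
  return results;
}
```

In buildDashboard you could then replace the single Promise.all over all projects with `await inBatches(projects, 10, async (project) => { ... })`, keeping at most 10 requests in flight at once.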
Step 5: Sample output
Running the script against a real account produces a JSON array like this:
[
  {
    "id": "TjAwNlByb2plY3QxMjM0",
    "name": "Main Website",
    "domain": "https://www.example.com",
    "lastCrawlStatus": "Finished",
    "lastCrawlDate": "2025-01-15T10:30:00.000Z",
    "crawledUrls": 2186,
    "healthScores": [
      {
        "healthScore": 85,
        "reportCategoryCode": "seo",
        "createdAt": "2025-01-15T10:30:00.000Z"
      },
      {
        "healthScore": 62,
        "reportCategoryCode": "accessibility",
        "createdAt": "2025-01-15T10:30:00.000Z"
      }
    ]
  },
  {
    "id": "TjAwNlByb2plY3Q1Njc4",
    "name": "Blog",
    "domain": "https://blog.example.com",
    "lastCrawlStatus": "Finished",
    "lastCrawlDate": "2025-01-14T09:00:00.000Z",
    "crawledUrls": 534,
    "healthScores": [
      {
        "healthScore": 91,
        "reportCategoryCode": "seo",
        "createdAt": "2025-01-14T09:00:00.000Z"
      },
      {
        "healthScore": 78,
        "reportCategoryCode": "accessibility",
        "createdAt": "2025-01-14T09:00:00.000Z"
      }
    ]
  }
]
Each entry in the array represents one project. Feed this into your dashboard renderer, store it in a database for historical tracking, or pipe it into another tool — the structure is intentionally flat and easy to work with.
Next steps
- Export Crawl Data -- bulk-export raw URL-level data from any crawl as a CSV file.
- Webhooks -- receive real-time notifications when crawls finish so your dashboard refreshes automatically.
- Service Accounts -- manage the API keys used to authenticate your dashboard script.