# Best Practices

This guide covers recommended patterns for working efficiently with the Lumar GraphQL API.

## Request only the fields you need

GraphQL lets you select exactly the fields you require. Avoid requesting deeply nested relationships or large connection fields unless you need them. This reduces response size and improves performance.

```graphql
query EfficientProjectQuery($accountId: ObjectID!) {
  getAccount(id: $accountId) {
    projects(first: 10) {
      nodes {
        id
        name
        primaryDomain
      }
    }
  }
}
```

**Variables:**

```json
{
  "accountId": "TjAwN0FjY291bnQ3MTU"
}
```

**Response:**

```json
{
  "data": {
    "getAccount": {
      "projects": {
        "nodes": [
          {
            "id": "TjAwN1Byb2plY3Q2MTMy",
            "name": "www.example.com Project",
            "primaryDomain": "https://www.example.com/"
          }
        ]
      }
    }
  }
}
```

## Use pagination for large result sets

Always paginate through connections rather than requesting all records at once. Use `first` / `after` for forward pagination and `last` / `before` for backward pagination. Include `totalCount` in your first request so you know how many pages to expect.

```graphql
query PaginatedCrawlUrls($crawlId: ObjectID!, $cursor: String) {
  getReportStat(input: { crawlId: $crawlId, reportTemplateCode: "all_pages" }) {
    crawlUrls(first: 100, after: $cursor) {
      pageInfo {
        hasNextPage
        endCursor
      }
      totalCount
      nodes {
        url
        httpStatusCode
      }
    }
  }
}
```

**Variables:**

```json
{
  "crawlId": "TjAwNUNyYXdsMTU4MzI0NQ",
  "cursor": null
}
```

**Response:**

```json
{
  "data": {
    "getReportStat": {
      "crawlUrls": {
        "pageInfo": {
          "hasNextPage": true,
          "endCursor": "MTAw"
        },
        "totalCount": 2186,
        "nodes": [
          {
            "url": "https://www.example.com/",
            "httpStatusCode": 200
          }
        ]
      }
    }
  }
}
```

See the [Pagination guide](pagination) for full details on cursor-based pagination.
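The loop pattern in the next section calls an `executeQuery` helper, which is not part of the API itself. A minimal sketch of such a helper using `fetch` is shown below; the endpoint URL, token constant, and `Authorization` header scheme are all placeholders you would replace with the values for your account.

```typescript
// Placeholders -- substitute your real endpoint and credentials.
const API_URL = "https://example.com/graphql";
const API_TOKEN = "YOUR_API_TOKEN";

interface GraphQLResponse<T> {
  data?: T;
  errors?: { message: string }[];
}

// Build the POST body for a GraphQL request. Kept as a separate pure
// function so it can be tested without a network call.
function buildRequestBody(
  query: string,
  variables: Record<string, unknown>
): string {
  return JSON.stringify({ query, variables });
}

// Send a query with variables and parse the JSON response.
async function executeQuery<T>(
  query: string,
  variables: Record<string, unknown>
): Promise<GraphQLResponse<T>> {
  const response = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The header scheme is an assumption -- use your token scheme here.
      Authorization: `Bearer ${API_TOKEN}`,
    },
    body: buildRequestBody(query, variables),
  });
  return (await response.json()) as GraphQLResponse<T>;
}
```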
### Pagination loop pattern

When you need to fetch all pages programmatically:

```typescript
async function fetchAllPages(crawlId: string): Promise<any[]> {
  const allNodes: any[] = [];
  let cursor: string | null = null;
  let hasNextPage = true;

  while (hasNextPage) {
    const result = await executeQuery(PAGINATED_QUERY, {
      crawlId,
      cursor,
    });

    const connection = result.data.getReportStat.crawlUrls;
    allNodes.push(...connection.nodes);

    hasNextPage = connection.pageInfo.hasNextPage;
    cursor = connection.pageInfo.endCursor;
  }

  return allNodes;
}
```

## Respect rate limits

The API allows 6000 requests per 5-minute window (approximately 20 requests per second). Build in rate-limit handling:

- Check for HTTP 429 responses.
- Use exponential backoff when retrying.
- Batch related data into fewer, larger queries instead of many small ones.

See [Rate Limits](rate-limits) for details.

## Authenticate with service account keys

For automated workflows and CI/CD pipelines, use [service account keys](/docs/graphql/service-accounts.md) rather than user credentials. Service account keys:

- Do not expire when a user's password changes.
- Can be scoped and revoked independently.
- Are designed for machine-to-machine communication.

## Cache stable data

Some data changes infrequently and can be safely cached on your side:

- **Report templates** -- the list of available report templates rarely changes. Cache the result of `getReportTemplates` for hours or even days.
- **Account and project metadata** -- names, IDs, and configuration change only when explicitly updated.
- **Finished crawl data** -- once a crawl's status is `Finished`, its data is immutable.

## Use Global Node IDs

Every object in the API has a globally unique `id` field (a base64-encoded string). You can use the `node(id:)` query to fetch any object by its ID without knowing its type in advance. See [Global Node IDs](global-node-ids) for details.

## Handle errors gracefully

Always check for the `errors` array in responses.
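For example, a small response-inspection helper can surface errors without discarding partial data. The response shapes here mirror the examples earlier in this guide; `unwrap` is a hypothetical local helper, not an API export.

```typescript
interface GraphQLError {
  message: string;
  path?: (string | number)[];
}

interface GraphQLResult<T> {
  data?: T;
  errors?: GraphQLError[];
}

// Collect error messages as warnings, but still return whatever
// partial data the server sent alongside them.
function unwrap<T>(result: GraphQLResult<T>): { data?: T; warnings: string[] } {
  const warnings = (result.errors ?? []).map(
    (e) => `${e.message}${e.path ? ` at ${e.path.join(".")}` : ""}`
  );
  return { data: result.data, warnings };
}
```

Calling code can then log the `warnings` and decide whether the partial `data` is still usable for its purpose.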
GraphQL can return partial data alongside errors, so your code should handle both. See [Error Handling](error-handling) for patterns.

## Security considerations

- Never embed API tokens in client-side code or public repositories.
- Rotate service account keys periodically.
- Use the minimum set of permissions required for your integration.
- Validate and sanitise any user input before including it in filter or mutation variables.
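To illustrate the last point: never splice untrusted input directly into a query string, because a crafted value can inject arbitrary GraphQL. Keep the query static and pass input through variables instead. The sketch below reuses the `getAccount` query from this guide with an illustrative field selection.

```typescript
// UNSAFE: interpolating untrusted input into the query string lets a
// caller break out of the string literal and inject extra selections.
function unsafeQuery(accountId: string): string {
  return `query { getAccount(id: "${accountId}") { id } }`;
}

// SAFE: the query text never changes; untrusted input travels only
// in the variables object, where it cannot alter query structure.
const SAFE_QUERY = `
  query GetAccount($accountId: ObjectID!) {
    getAccount(id: $accountId) {
      id
    }
  }
`;

function safeRequestBody(accountId: string): string {
  return JSON.stringify({ query: SAFE_QUERY, variables: { accountId } });
}
```

With the safe version, a malicious value such as `abc") { id } #` arrives at the server as an ordinary string argument and simply fails ID lookup, rather than rewriting the query.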