# Custom metrics

https://api-docs.lumar.io/docs/custom-metrics

# Getting started with custom metrics

Extend Lumar with your own custom metrics and unlock the full power of extracting data from the web. Using TypeScript/JavaScript you can create custom metrics that extract data from your pages, either through the provided DOM API or by driving a Puppeteer browser session.

Custom metrics are grouped into custom metric containers; each container can hold multiple custom metrics.

## Create a new project

Generate a fresh TypeScript custom metric container project and register it with the API. When bootstrapping a new container you will be asked to provide a name. This name identifies the container in the API and must be globally unique.

You will also choose between a DOM or a Puppeteer container. Using Puppeteer requires the Lumar project to run with JS rendering enabled. If you want to extract metrics from images or style sheets, you can enable that as well.

You can also provide all of these options via the CLI without being prompted. Refer to our [CLI docs](/docs/cli.md#oreo-metric-bootstrap-path) for all the available arguments.

After bootstrapping your container project, install dependencies with your package manager (npm, yarn, pnpm, or other). Change to the directory where you bootstrapped the project and run:

```shell
npm install
```

## Writing metrics extraction code

Open the container project in your favorite editor. You will find a `src/index.ts` file with a sample metrics extraction script:

```typescript
export interface IMetrics extends MetricScriptBasicOutput {
  url: string;
}

export const handler: MetricScriptHandler<IMetrics> = input => {
  // Return one value per metric declared on the output interface
  return { url: input.url };
};
```

Your container can export one or more metrics. Each metric needs to have a unique name and a type.
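The extraction logic itself is ordinary TypeScript. As a sketch, a standalone helper like the one below (hypothetical, not part of the Lumar API) could derive a `pageTitle`-style metric from a raw HTML string; inside a DOM container you would normally read the parsed document from the handler input instead:

```typescript
// Hypothetical helper: pull the <title> text out of a raw HTML string.
// Illustration only — real DOM containers get a parsed Document on the input.
function extractTitle(html: string): string | null {
  const match = /<title[^>]*>([\s\S]*?)<\/title>/i.exec(html);
  return match ? match[1].trim() : null;
}
```

Keeping extraction logic in small pure functions like this makes it easy to unit-test outside the crawler.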
### Handler phases

Containers can define multiple handler phases inside the `handlers` section of `.oreorc.json` / `.oreorc.ts`. The `request` handler is required for URL-level metrics, while `preCrawl` and `postCrawl` handlers run exactly once per crawl—ideal for seeding data before pages are processed or for pushing aggregated results after the crawl ends. Each phase can specify its own `handler`, `entrypoint`, timeout, and (for request handlers) `metricsTypeName`.

```json
{
  "id": "XXX",
  "handlers": {
    "request": {
      "handler": "handler",
      "entrypoint": "src/index.ts",
      "metricsTypeName": "IMetrics"
    },
    "preCrawl": {
      "handler": "preCrawlHandler",
      "entrypoint": "src/index.ts"
    },
    "postCrawl": {
      "handler": "postCrawlHandler",
      "entrypoint": "src/post-crawl.ts"
    }
  }
}
```

Use the specific container input types to author these lifecycle handlers:

```typescript
import type {
  IPreCrawlContainerInput,
  IPostCrawlContainerInput,
  MetricScriptHandler,
} from "@deepcrawl/custom-metric-types";

export const preCrawlHandler: MetricScriptHandler<{}, IPreCrawlContainerInput> = async input => {
  // Warm up cache, fetch secrets, or emit crawl-level metadata
  return {};
};

export const postCrawlHandler: MetricScriptHandler<{}, IPostCrawlContainerInput> = async input => {
  // Aggregate crawl results or emit final metrics
  return {};
};
```

`preCrawl` receives the crawl definition before any URLs are processed, while `postCrawl` receives the final crawl context (including accumulated stats and failures). Both handlers can share the same entrypoint file or live in separate modules to keep concerns isolated.

### `.oreorc` configuration reference

Your `.oreorc.ts` (or legacy `.json`) file is validated against the `ContainerConfigData` schema used by the CLI and API. Every field below is optional unless stated otherwise, so you can adopt only the parts you need.

#### Top-level fields

- `id` — The CustomMetricContainer ID.
  Required when you already have a container in Lumar and want the CLI to publish new versions against it.
- `handlers` — Object that defines per-phase handlers. `request` is required; `preCrawl` and `postCrawl` are optional (see the next section for the available handler options).
- `secretsTypeName` / `secretsTypePath` — Point to a TypeScript interface that documents the environment variables (secrets) your container expects. The CLI uses this to type-check `process.env` access and to generate schema hints.
- `paramsTypeName` / `paramsTypePath` — Similar to secrets, but for structured `params` you pass at runtime. Defining these keeps `context.params` strongly typed.
- `allowedRenderingResources` — Restrict which rendering resource types (values from [`CustomMetricContainerRenderingResource`](/docs/schema/enums/custom-metric-container-rendering-resource.md)) Puppeteer may request, e.g. `["Image", "Font"]` for leaner crawls.
- `navigationTimeoutMs` — Overrides the default page-level navigation timeout for request handlers. Helpful when you expect very slow pages or when you want to fail quickly.
- `reportTemplates` — Array of predefined report templates. See [Report templates](#report-templates) for the structure and usage.
- `targets` — Optional record of named deployment targets for multi-environment publishing. Each target specifies `id`, `profile`, and optionally `apiUrl`. See [Multi-target deployment](#multi-target-deployment).
- `entrypoint`, `handler`, `metricsTypeName`, `metricsMetadata`, `metricsTypeNames`, `metricsTypePath`, `externalPackages`, `metricsSchema` — Legacy single-handler shortcuts. They mirror the per-handler options below and are still read for backwards compatibility, but we recommend moving to the `handlers` block so you can mix request/preCrawl/postCrawl logic in one file.

#### Container properties

These fields define the container's identity and behavior in the API.
When present in `.oreorc`, they are automatically synced to the API on every publish and can be managed via the `metric update` and `metric pull` commands.

- `name` — Unique container name (alphanumeric, hyphens, underscores; 3-100 chars). **Immutable after creation.** When set in `.oreorc`, it acts as a safety guard — publishing will fail if this name doesn't match the container ID, preventing accidental publishes to the wrong container.
- `displayName` — Human-readable display name for the container (3-100 chars).
- `description` — Description of what the container does.
- `inputType` — Which input type the container uses: `"DOM"` or `"Puppeteer"`.
- `resourceTypes` — Array of resource types the container extracts metrics from: `"Document"`, `"Image"`, `"Script"`, `"Stylesheet"`.
- `containerParams` — Default container parameters (JSON object) passed at crawl time via `context.params`.
- `runFirst` — If `true`, this container runs before other containers in the execution order.
- `requiresAiFeatures` — If `true`, this container requires AI features to be enabled on the project.

These fields are also used by `metric create` and `metric bootstrap` as defaults — when present in `.oreorc`, the CLI skips the interactive prompt for that field.

##### Admin-only container properties

The following fields require admin API access. They are accepted in `.oreorc` and synced automatically when the user has admin privileges. For non-admin users, these fields are silently ignored during sync.

- `scope` — Container scope: `"Container"` (default) or `"System"`.
- `executable` — Whether the container can be executed.
- `isolate` — Whether to isolate container execution.
- `isEssential` — Whether this is an essential metric.
- `isGlobal` — Whether this container is available to all projects.
- `obeyCSP` — Whether to respect Content Security Policy headers.
- `requiresResponseBuffer` — Whether the container requires the response buffer.
- `minRequiredCpu` — Minimum required CPU: `1`, `2`, or `4`.
- `includedDatasourceCodes` — Array of supported datasource codes.
- `linkedExternalSources` — Array of external source names linked to this container.
- `supportedFeatures` — Array of supported feature flag names.
- `relatedContainerNames` — Array of related container names.
- `requiredAddons` — Array of required addon names.
- `requiredFeatureFlags` — Array of required feature flags.
- `supportedCrawlTypes` — Array of supported crawl type codes.
- `supportedUploadTypes` — Array of supported upload types.
- `costs` — Array of cost entries: `[{ cost: number, moduleCode: string }]`.
- `creditAllocationTypeOverride` — Credit allocation type override.

```typescript
const config: IContainerConfigData = {
  id: "ccc_123",
  name: "MyMetricContainer",
  displayName: "My Metric Container",
  description: "Extracts custom SEO metrics from crawled pages.",
  inputType: "DOM",
  resourceTypes: ["Document"],
  secretsTypeName: "ContainerSecrets",
  secretsTypePath: "src/secrets.ts",
  paramsTypeName: "RunParams",
  paramsTypePath: "src/params.ts",
  allowedRenderingResources: ["Image"],
  navigationTimeoutMs: 120000,
  handlers: {
    request: {
      entrypoint: "src/index.ts",
      handler: "handler",
      metricsTypeName: "IMetrics",
    },
    postCrawl: {
      entrypoint: "src/post-crawl.ts",
      handler: "postCrawlHandler",
      skipForSpr: true,
    },
  },
};

export default config;
```

#### Typing secrets and params

Pairing `secretsTypeName` / `secretsTypePath` with `paramsTypeName` / `paramsTypePath` lets the CLI point to the exact TypeScript interfaces that describe the secrets and runtime parameters your handlers expect. Secrets defined this way become available at runtime via `process.env`, while params are exposed on every handler invocation through `context.params`. By wiring those interfaces into `MetricScriptHandler`'s generics you get full IntelliSense for both the container input (`IRequestContainerInput`) and the params structure.
```typescript
import type {
  IRequestContainerInput,
  MetricScriptBasicOutput,
  MetricScriptHandler,
  MetricScriptParamsType,
  MetricScriptSecretsType,
} from "@deepcrawl/custom-metric-types";

export interface MySecrets extends MetricScriptSecretsType {
  OPENAI_API_KEY: string | null | undefined;
}

export interface MyParams extends MetricScriptParamsType {
  extractionRegex: string | null | undefined;
}

export interface MyMetrics extends MetricScriptBasicOutput {
  /**
   * @title Page Title
   * @description The title of the page.
   */
  pageTitle: string;
}

export const myHandler: MetricScriptHandler<
  MyMetrics,
  IRequestContainerInput,
  MySecrets,
  MyParams
> = async (input, context) => {
  // Secrets arrive via the environment; params are typed on the context
  const apiKey = process.env["OPENAI_API_KEY"];
  const regex = context.params.extractionRegex;
  return { pageTitle: "" };
};
```

#### You can also use JSDoc to provide metadata for your metrics

```typescript
export interface MyMetrics extends MetricScriptBasicOutput {
  /**
   * @title Page Title
   * @description The title of the page.
   */
  pageTitle: string;

  /**
   * Order of properties will be kept in the UI.
   *
   * @title My Object With Specific Order
   * @description My Object With Specific Order
   */
  myObjects: Array<{
    /**
     * @title String Metric
     */
    aString: string;

    /**
     * @title Boolean Metric
     * @description You can also provide description for specific properties in object arrays.
     */
    cBoolean?: boolean;

    /**
     * @title Number Metric
     */
    bNumber: number;

    /**
     * @title My Date Field
     * @format date-time
     */
    dateString: string;
  }>;

  url: string;
  extract: string[];

  /**
   * @title Page Size
   * @description Page size in bytes.
   * @format bytes
   */
  pageSize: number;

  myFloat: number;

  /**
   * @format integer
   */
  myInt: number;
}
```

### Report templates

You can define report templates directly in your `.oreorc.json` or `.oreorc.ts` file. Report templates allow you to create predefined filters and views for your custom metrics, making it easier to analyze specific subsets of your data. Set `reportTemplates` to an array of template definitions.
Each entry consists of:

- `code`: unique identifier using lowercase letters, numbers, or underscores (no spaces); must be unique across templates
- `filter`: filter criteria based on your custom metrics
- `baseReportTemplateCode`: the template code of the base report template (typically `"all_pages"`)
- `name` (optional): descriptive name for the template
- `description` (optional): detailed description of what the template shows
- `orderBy` (optional): array of sorting rules for the resulting report, each with a `field` and `direction` (`"ASC"` or `"DESC"`)
- `metricsGroupings` (optional): array of arrays that control how metrics are grouped and ordered in the UI
- `reportCategories` (optional): array of category definitions used to organise the template in the UI

`orderBy` entries are applied in sequence, allowing you to define primary, secondary, and further sort keys. Use the column identifiers exposed by the base template or your custom metric paths, such as `"customMetrics.randomNumber"` or `"url"`.

`metricsGroupings` define the column arrangement the UI should use when rendering the report. Each inner array represents a group of metrics shown together, in the order provided. Groups are rendered from top to bottom; the first group becomes the default set of columns visible to users.

When you supply multiple categories, list them starting with the deepest category. The first entry is used to build breadcrumbs, and each category can reference its parent via `parentCode`.
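Putting these fields together, a template entry might look like the following sketch. The template code, the metric path `customMetrics.wordCount`, and the exact filter shape are invented for illustration:

```typescript
// Hypothetical `reportTemplates` entry — all codes and metric paths
// below are invented for illustration only.
const thinContentTemplate = {
  code: "thin_content_pages",
  name: "Thin Content Pages",
  description: "Pages whose extracted word count is below 200.",
  baseReportTemplateCode: "all_pages",
  filter: {
    "customMetrics.wordCount": { lt: 200 },
  },
  orderBy: [
    { field: "customMetrics.wordCount", direction: "ASC" },
    { field: "url", direction: "ASC" },
  ],
  metricsGroupings: [["url", "customMetrics.wordCount"]],
};
```

Here the first `orderBy` entry is the primary sort key and `url` breaks ties, while the single `metricsGroupings` group becomes the default visible column set.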
#### Available filter predicates

The following filter predicates are available based on the metric type:

**String predicates:**

- `eq` - equals
- `ne` - not equals
- `contains` - contains substring
- `notContains` - does not contain substring
- `beginsWith` - starts with
- `endsWith` - ends with
- `matchesRegex` - matches regular expression
- `notMatchesRegex` - does not match regular expression
- `in` - value is in array
- `notIn` - value is not in array
- `isEmpty` - is empty string
- `isNull` - is null

**Number predicates:**

- `eq` - equals
- `ne` - not equals
- `gt` - greater than
- `ge` - greater than or equal
- `lt` - less than
- `le` - less than or equal
- `in` - value is in array
- `notIn` - value is not in array
- `isEmpty` - is empty
- `isNull` - is null

**Array predicates:**

- `arrayContains` - array contains exact value
- `arrayContainsLike` - array contains value (case-insensitive)
- `arrayNotContains` - array does not contain exact value
- `arrayNotContainsLike` - array does not contain value (case-insensitive)
- `isEmpty` - array is empty
- `isNull` - array is null

**Boolean predicates:**

- `eq` - equals
- `ne` - not equals
- `isNull` - is null

**Logical predicates:**

- `_and` - logical AND (matches all filters in the array)
- `_or` - logical OR (matches any filter in the array)

### Crawl-level metrics

By default, custom metrics are stored at the URL level, meaning each URL gets its own set of metrics. However, you can configure your container to store metrics at the crawl level instead, which allows you to aggregate data across multiple URLs or store crawl-wide statistics.

To enable crawl-level metrics, you need to specify the `tableType` in your container configuration and return special metadata fields in your metrics. When using crawl-level metrics, your handler must return an array of objects instead of a single object.
Each object in the array represents a separate metric record and must include special metadata fields:

- `@stepId`: The crawl step ID (available as `input.id`)
- `@itemType`: A string identifier for the type of metric being stored
- `@itemKey`: A unique key for this specific metric record

```typescript
export interface MyMetrics extends MetricScriptBasicOutput {
  randomNumber: number;
  ["@stepId"]: string;
  ["@itemType"]: string;
  ["@itemKey"]: string;
}

export const myHandler: MetricScriptHandler<MyMetrics[]> = input => {
  return [
    {
      randomNumber: Math.random(),
      ["@stepId"]: input.id,
      ["@itemType"]: "randomNumber",
      ["@itemKey"]: input.url,
    },
  ];
};
```

### Secrets in custom metric containers

Sometimes you need to pass a secret, or another variable that is unique to a project, into your container (for example `OPENAI_APIKEY`). You can set secrets for your CustomMetricContainer which are then accessible via environment variables.

```typescript
const openaiApiKey = process.env["OPENAI_APIKEY"];
```

There are two scopes you can work with:

- **Container-level secrets**: configured once on the CustomMetricContainer. Every project linked to the container inherits the value by default.
- **Project-level secrets**: defined for a specific project. They override any container-level secret with the same name.

Use GraphQL to set container-level values with the `setCustomMetricContainerSecret` mutation.

```graphql
mutation setCustomMetricContainerSecret($input: SetCustomMetricContainerSecretInput!) {
  setCustomMetricContainerSecret(input: $input) {
    customMetricContainerSecret {
      name
    }
  }
}
```

**Variables:**

```json
{
  "input": {
    "customMetricContainerId": 1,
    "name": "OPENAI_APIKEY",
    "value": "MY API SECRET KEY"
  }
}
```

Set a project-level secret from the CLI when you need to override the shared value.

```shell
npm run oreo metric secret set -- --name OPENAI_APIKEY --projectId 123456 --value "mySecretKey"
```

You can also call the GraphQL API directly for project-level overrides.

```graphql
mutation setCustomMetricContainerProjectSecret($input: SetCustomMetricContainerProjectSecretInput!) {
  setCustomMetricContainerProjectSecret(input: $input) {
    customMetricContainerProjectSecret {
      name
    }
  }
}
```

**Variables:**

```json
{
  "input": {
    "projectId": 1,
    "customMetricContainerId": 1,
    "name": "OPENAI_APIKEY",
    "value": "MY API SECRET KEY"
  }
}
```

:::info
Container-level secrets provide the default value for every linked project. Define a project-level secret only when you need a project-specific override.
:::

### CI/CD integration

You can integrate your custom metric container with your CI/CD pipeline. For example, you can use GitHub Actions to build and upload your container. To do this you need to log in to the CLI programmatically, without user interaction, using a Lumar `ACCOUNT_ID`, `API_KEY_ID`, and `API_KEY_SECRET`. To create an `API_KEY_ID` and `API_KEY_SECRET` you can use the CLI command locally or do so via the [Lumar Accounts app](https://accounts.lumar.io/api-access), where you can also find your `ACCOUNT_ID`.

```shell
npm run oreo user-key create
```

Once you have all the secrets you can use them in your CI/CD workflow file.

```shell
npm run oreo login -- --id ${{ secrets.API_KEY_ID }} --secret ${{ secrets.API_KEY_SECRET }} --accountId ${{ secrets.ACCOUNT_ID }}
npm run build
npm run upload
```

### Programmatic access

If you would like to run your custom metric container programmatically, you can do so using the `@deepcrawl/oreo-api-sdk` package. For more information, see [Single Page Requester](/docs/single-page-requester.md#custom-metrics).

### Container failures

If your container fails to extract metrics and returns an error, the information is stored as a separate metric, `containerExecutionFailures`. Failing containers will not stop the crawl.

### Supported types for filtering

Even though custom metric containers can extract and store almost any data type, not all types are queryable via the API or filterable in the UI.
Supported filterable types are:

- `boolean`
- `number`
- `number[]`
- `string`
- `string[]`

### Automatic \_\_count metrics for arrays

If your metric returns an array, we automatically generate a metric that counts the number of elements in the array. This metric has the same name as the original metric with a `__count` suffix.

### Universal container

A project created by the `bootstrap` command has a specific input type (DOM or Puppeteer), but you can create a universal container that handles both, using `input.inputType` and `input.resourceType` to narrow the type during extraction.

```typescript
export interface IMetrics extends MetricScriptBasicOutput {
  isImage: boolean;
  wasJsRenderingEnabled: boolean;
}

export const myHandler: MetricScriptHandler<IMetrics> = input => {
  if (input.resourceType === "document") {
    if (input.inputType === "dom") {
      // do extractions without puppeteer
      return {
        isImage: false,
        wasJsRenderingEnabled: false,
      };
    } else if (input.inputType === "puppeteer") {
      // do extractions with puppeteer
      return {
        isImage: false,
        wasJsRenderingEnabled: true,
      };
    }
  } else if (input.resourceType === "image") {
    // do extractions for images
    return {
      isImage: true,
      wasJsRenderingEnabled: input.inputType === "puppeteer",
    };
  }
};
```

### Handler input reference

Every handler receives an `input` object as its first argument. The shape of this object depends on the handler phase and (for request handlers) the container's input type and resource type.
#### Common fields (all phases)

All handler inputs share these base fields:

```typescript
interface ICommonContainerInput {
  id: string; // Step ID — unique identifier for this execution
  projectId: number; // Lumar project ID
  crawlId: number; // Current crawl ID
  phase: "request" | "preCrawl" | "postCrawl";
}
```

#### Request handler input

Request handlers receive additional fields depending on the resource type and input type:

```typescript
interface ICommonRequestContainerInput extends ICommonContainerInput {
  resourceType: "document" | "image" | "script" | "stylesheet";
  inputType: "dom" | "puppeteer";
  url: string;
  response?: {
    statusCode: number;
    headers: Record<string, string>;
    requestDuration: number;
    transferSize?: number;
  };
  parentUrl?: string;
  crawlLevel?: number;
  disallowed?: boolean;
  error?: { errorMessage: string; errorCode: string };
  consoleMessages?: IConsoleMessage[];
  pageErrors?: Error[];
  responses?: IHttpResponse[]; // All HTTP responses captured during page load
}
```

**Puppeteer inputs** (`IPuppeteerRequestContainerInput`) additionally include:

- `input.page` — A live Puppeteer [`Page`](https://pptr.dev/api/puppeteer.page) object for browser interaction

**Document inputs** additionally include:

```typescript
interface IDocumentInputContent {
  staticHtml: { text: string; document: Document }; // Pre-render HTML
  renderedHtml: { text: string; document: Document }; // Post-render HTML
  windowExtractions: Record<string, unknown>; // Data from window object
  performance?: {
    navigationTiming?: {
      requestStart?: number;
      responseStart?: number;
      domContentLoadedEventEnd?: number;
      domInteractive?: number;
    };
    paintTiming?: { startTime?: number };
    webVitals?: { lcp?: number; cls?: number };
  };
  renderingTimedOut?: boolean;
}
```

**Image inputs** include `content?: { body: Buffer }`.

**Script and StyleSheet inputs** include `content?: { body: string }`.
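Because script and stylesheet bodies are plain strings, ordinary string processing applies when handling those resource types. As an illustration, a hypothetical helper (not part of the SDK) that collects the absolute URLs mentioned in a script body:

```typescript
// Hypothetical helper for a script-resource handler: collect the unique
// absolute URLs referenced in a script body (input.content.body).
function findUrlsInScript(body: string): string[] {
  const matches = body.match(/https?:\/\/[^\s"'`)]+/g) ?? [];
  return [...new Set(matches)]; // deduplicate while preserving order
}
```

A request handler for script inputs could feed `input.content.body` into a helper like this and return the result as a `string[]` metric.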
Document inputs may also include redirect details:

```typescript
interface IResolvedRedirectDetails {
  resolvedTarget?:
    | { url: string; statusCode: number } // Successful redirect
    | { url: string; errorMessage: string; errorCode: string } // Failed redirect
    | { url: string; targetExclusionReason: string }; // Broken (excluded) redirect
  redirectChain: Array<{
    url: string;
    statusCode: number;
    redirectType: "location" | "refresh" | "meta" | "js";
    redirectsTo: string;
    exclusionReason?: string;
    metaRefreshDuration?: number;
  }>;
}
```

#### Pre-crawl and post-crawl handler input

Both `IPreCrawlContainerInput` and `IPostCrawlContainerInput` extend the common fields and additionally include:

- `input.page` — A Puppeteer [`Page`](https://pptr.dev/api/puppeteer.page) object for browser interaction (e.g. fetching external data, calling APIs)

### Handler context reference

Every handler receives a `context` object as its second argument. This is the same across all handler phases.

```typescript
interface IMetricScriptContext<TParams = Record<string, unknown>> {
  params: Partial<TParams>;
  settings: {
    userAgentToken: string;
    isJsEnabled: boolean;
    domain: {
      primaryDomain: string;
      secondaryDomains: string[];
      startUrlsDomains: string[];
      mobileDomain?: string;
      includeSubdomains: boolean;
      ignoreProtocol: boolean;
      domainAlias?: string;
    };
    duplicatePrecisionIndices: number[];
    aiFeaturesEnabled: boolean;
    ignoreXRobots?: boolean;
  };
  externalSources?: {
    googleSearchConsole?: Array<{ siteUrl: string; refreshToken: string /* ... */ }>;
  };
  keyValueStore: IContainerKeyValueStore;
  graphStore: IContainerGraphStore;
  storeAttachment: (attachment: { name: string; content: Buffer; contentType: string }) => Promise<void>;
  costReporter: { report: (label: string, value: number) => Promise<void> };
  next: (token: { value: string; delaySeconds?: number }) => Promise<void>;
  nextToken?: string;
  crawlStartedAt?: string; // ISO 8601 (e.g. '2024-11-05T10:41:33.077Z')
  launchBrowser: (options?: { args?: string[] }) => Promise<Browser>;
  isInternalUrl: (url: URL | string) => boolean;
  logger?: { debug: Function; info: Function; warn: Function; error: Function };
}
```

#### Key context properties

- **`context.params`** — Runtime parameters passed to the container. Strongly typed when you define `paramsTypeName` / `paramsTypePath` in your `.oreorc` config.
- **`context.settings`** — Crawl project settings including domain configuration and rendering options.
- **`context.crawlStartedAt`** — ISO 8601 timestamp of when the crawl started.
- **`context.isInternalUrl(url)`** — Returns `true` if the given URL belongs to the crawled domain (respects subdomain and protocol settings).
- **`context.logger`** — Structured logger with `debug`, `info`, `warn`, and `error` methods.

#### Storing attachments

Use `context.storeAttachment()` to save binary files (screenshots, PDFs, exports) alongside crawl results:

```typescript
await context.storeAttachment({
  name: "screenshot.png",
  content: screenshotBuffer,
  contentType: "image/png",
});
```

#### Launching additional browsers

Use `context.launchBrowser()` to spin up a separate browser instance when your handler needs to navigate to external pages or run parallel browser work:

```typescript
const browser = await context.launchBrowser({ args: ["--no-sandbox"] });
const page = await browser.newPage();
await page.goto("https://external-api.example.com");
// ... extract data ...
await browser.close();
```

#### Reporting costs

Use `context.costReporter.report()` to track resource consumption (e.g. API calls to external services):

```typescript
await context.costReporter.report("openai-tokens", 1500);
```

### Key-value store

The key-value store lets handlers persist and share data across handler invocations. It has two scopes:

- **`crawl`** — Data scoped to the current crawl. Automatically cleaned up when the crawl ends.
  Use this for sharing state between request handlers processing different URLs in the same crawl.
- **`project`** — Data scoped to the project, persisted across crawls. Requires an explicit TTL (max 90 days). Use this for caching expensive external data that doesn't change between crawls.

#### Basic operations

Both `set()` and `get()` return an `IKeyValueObject`:

```typescript
interface IKeyValueObject {
  readonly key: string;
  readonly value: string;
  readonly ttl: number;
}
```

```typescript
// Crawl-scoped storage
await context.keyValueStore.crawl.set("seen-urls", JSON.stringify(urls));
const result = await context.keyValueStore.crawl.get("seen-urls");
if (result) {
  const urls = JSON.parse(result.value);
  // result.key and result.ttl are also available
}
await context.keyValueStore.crawl.remove("seen-urls");

// Project-scoped storage (TTL in seconds, max 90 days)
const ttl = 30 * 24 * 60 * 60; // 30 days
await context.keyValueStore.project.set("api-cache", JSON.stringify(data), ttl);
const cached = await context.keyValueStore.project.get("api-cache");
await context.keyValueStore.project.remove("api-cache");
```

#### Collections

Collections are sets of unique string values. Useful for tracking membership (e.g. "have I seen this URL before?").

```typescript
// Crawl-scoped collection (no TTL needed)
const crawlCollection = await context.keyValueStore.crawl.collections().set("processed-urls");
await crawlCollection.add("https://example.com/page-1");
const exists = await crawlCollection.has("https://example.com/page-1"); // true
await crawlCollection.remove("https://example.com/page-1");

// Iterate over all members
for await (const member of crawlCollection.streamMembers()) {
  console.log(member);
}

// Project-scoped collection (TTL in seconds required)
const projectCollection = await context.keyValueStore.project.collections().set("known-sitemaps", 2592000);
```

#### Maps

Maps are key-value dictionaries. Useful for building lookup tables.
```typescript
// Crawl-scoped map
const urlMap = await context.keyValueStore.crawl.collections().map("url-to-category");
await urlMap.set("/page-1", "blog");
const category = await urlMap.get("/page-1"); // "blog"
await urlMap.has("/page-1"); // true

// Iterate over all entries
for await (const [key, value] of urlMap.streamMembers()) {
  console.log(`${key} => ${value}`);
}

// Project-scoped map (TTL in seconds required)
const cacheMap = await context.keyValueStore.project.collections().map("external-data", 2592000);
```

### Graph store

The graph store lets you persist and query graph-structured data scoped to the project. It uses an openCypher-compatible API and is useful for modelling relationships between entities (e.g. internal link graphs, content hierarchies).

`upsertNode` takes a `label`, a `match` object (identity properties used to find or create the node), and an optional `set` object (mutable properties updated on each upsert):

```typescript
// Upsert a single node — { url } is the identity key, { title } is updated on each upsert
await context.graphStore.project.upsertNode("Page", { url: input.url }, { title: "My Page" });

// Upsert relationships
await context.graphStore.project.upsertRelationship(
  { label: "Page", match: { url: input.url } },
  "LINKS_TO",
  { label: "Page", match: { url: targetUrl } },
);

// Query with openCypher
const result = await context.graphStore.project.query(
  "MATCH (p:Page) WHERE p.url = $url RETURN p.title",
  { url: input.url },
);

// Delete nodes and relationships
await context.graphStore.project.deleteNode("Page", { url: input.url });
await context.graphStore.project.deleteRelationships(
  { label: "Page", match: { url: input.url } },
  "LINKS_TO",
);
```

All graph store writes accept an optional `ttl` (in seconds, max 90 days, defaults to 60 days):

```typescript
await context.graphStore.project.upsertNode(
  "Page",
  { url: input.url },
  { title: "My Page" },
  { ttl: 2592000 },
);
```

#### Batch operations

For better performance when writing many
nodes or relationships, use the batch variants:

```typescript
// Upsert multiple nodes at once
await context.graphStore.project.upsertNodes("Page", [
  { match: { url: "/page-1" }, set: { title: "Page 1" } },
  { match: { url: "/page-2" }, set: { title: "Page 2" } },
]);

// Upsert multiple relationships at once
await context.graphStore.project.upsertRelationships(
  { label: "Page", match: { url: input.url } },
  "LINKS_TO",
  [
    { label: "Page", match: { url: "/target-1" } },
    { label: "Page", match: { url: "/target-2" } },
  ],
);
```

### Batch processing and pagination

For `preCrawl` and `postCrawl` handlers that need to process large datasets iteratively (e.g. fetching all pages from a sitemap index, paginating through an external API), use `context.next()` and `context.nextToken`.

Calling `context.next()` signals that the handler should be re-invoked with the provided token. On the next invocation, the token is available via `context.nextToken`. When `context.next()` is not called, the handler completes.

```typescript
export const preCrawlHandler: MetricScriptHandler<{}, IPreCrawlContainerInput> = async (input, context) => {
  const BATCH_SIZE = 100;
  const currentOffset = Number(context.nextToken ?? 0);

  const items = await fetchExternalData(currentOffset, BATCH_SIZE);
  for (const item of items) {
    await context.keyValueStore.crawl.set(`item:${item.id}`, JSON.stringify(item));
  }

  // If there are more items, schedule the next batch
  if (items.length === BATCH_SIZE) {
    await context.next({ value: String(currentOffset + BATCH_SIZE) });
  }

  return {};
};
```

You can also add a delay between invocations to avoid rate-limiting external APIs:

```typescript
await context.next({ value: String(nextOffset), delaySeconds: 5 });
```

### Link-producing containers

Custom metric containers can discover new URLs during a crawl and feed them back into the crawler's link pipeline. This is useful when you need to extract links from non-HTML resources (e.g.
`.txt` or `.md` files) or from content that the crawler's built-in parser does not handle.

To enable this, set `linksProducer: true` on the handler in your `.oreorc` config:

```json
{
  "id": "1070",
  "handlers": {
    "request": {
      "entrypoint": "src/handler.ts",
      "handler": "handler",
      "metricsTypeName": "ITextLinkOutput",
      "metricsTypePath": "src/output.ts",
      "linksProducer": true
    }
  }
}
```

The output interface describes the link fields the crawler pipeline expects:

```typescript
export interface ITextLinkOutput extends MetricScriptBasicOutput {
  source: string;
  type: string;
  parentUrl: string;
  isParentNofollow: boolean;
  attributes: Record<string, string>;
}
```

The handler extracts links from the resource body and returns them. The crawler automatically filters, deduplicates, and enqueues the discovered URLs:

```typescript
export const handler: MetricScriptHandler = (input, context) => {
  if (input.phase !== "request") return undefined;
  if (input.resourceType !== "script") return undefined;

  const content = "content" in input ? input.content : undefined;
  const body = content && "body" in content ? content.body : undefined;
  if (!body || typeof body !== "string") return undefined;

  const links = extractLinks(body, {
    parentUrl: "url" in input ? input.url : "",
    isInternalUrl: context.isInternalUrl,
  });

  return links as unknown as ITextLinkOutput[];
};
```

> **Note:** For link-producing containers targeting non-HTML resources (e.g. `.txt` files), the crawl project must have **Crawl non-HTML URLs** enabled so that discovered text URLs are followed.

### Google Search Console

If your container needs to connect to Google Search Console, enable it via `linkedExternalSources` when creating or updating the custom metric container by including `googleSearchConsole` in the array. Once enabled, publish a new version of the container code. At runtime, `context.externalSources` will include the `googleSearchConsole` configuration.
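Since `context.externalSources.googleSearchConsole` is an array of site configurations, a handler typically needs to pick the entry for the crawled domain. A hedged sketch — the `GscSource` shape mirrors the context reference above, but the matching rule here is an assumption for illustration:

```typescript
// Hypothetical helper: pick the GSC site entry whose siteUrl mentions the
// crawled primary domain. The substring match is illustrative only.
interface GscSource {
  siteUrl: string;
  refreshToken: string;
}

function pickGscSite(sources: GscSource[], primaryDomain: string): GscSource | undefined {
  return sources.find(source => source.siteUrl.includes(primaryDomain));
}
```

In a handler you might call this as `pickGscSite(context.externalSources?.googleSearchConsole ?? [], context.settings.domain.primaryDomain)` before using the refresh token.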
### Changing API URL

```shell
npx @deepcrawl/oreo@latest config set --name=apiUrl --value=https://api.staging.lumar.io/graphql
```