
Disallowed Pages with Bot Hits (Uncrawled)

Pages that are disallowed, but which received hits from search engine crawlers in the log data

Priority: Critical

Impact: Negative

What issues it may cause

These pages may have been disallowed within the timeframe of the log data (in which case you should ensure that they are intentionally disallowed), or the server logs may be recording a different URL from the one that was requested.

How do you fix it

If the pages were disallowed during the timeframe of the log data, they will disappear from this report when the timeframe of the log data is changed.

If the pages were not disallowed during the timeframe of the log data, the server logs should be checked to ensure they record the exact URLs that were requested.
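As a starting point for that check, the sketch below (Python, purely illustrative) extracts the requested method and path from an access log line so it can be compared with the URL recorded in the log data. It assumes the common Apache/Nginx combined log format; the example log line is made up.

# A minimal sketch, assuming a combined-format access log, of extracting the
# requested URL from a log line so it can be compared with the expected URL.
import re

# Example combined-format log line (illustrative only).
LOG_LINE = (
    '66.249.66.1 - - [10/Oct/2024:13:55:36 +0000] '
    '"GET /blocked-section/page?ref=1 HTTP/1.1" 200 512 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"'
)

# Capture method, path, and protocol from the quoted request field.
REQUEST_RE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) (?P<protocol>[^"]+)"')

match = REQUEST_RE.search(LOG_LINE)
if match:
    # The path printed here should match the URL in the log data exactly,
    # including query strings, trailing slashes, and case.
    print(match.group("method"), match.group("path"))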

What is the positive impact

The log files will provide a more accurate understanding of crawl budget usage.

How to fetch the data for this report template

You will need to run a crawl for the report template to generate the report. Once the report has been generated and you have the crawl ID, you can fetch the data for the report using the following query:

query GetReportForCrawl($crawlId: ObjectID!, $reportTemplateCode: String!) {
  getCrawl(id: $crawlId) {
    reportsByCode(
      input: {
        reportTypeCodes: Basic
        reportTemplateCodes: [$reportTemplateCode]
      }
    ) {
      rows {
        nodes {
          ... on CrawlUncrawledUrls {
            url
            foundAtUrl
            foundAtSitemap
            level
            restrictedReason
            robotsTxtRuleMatch
            foundInWebCrawl
            foundInGoogleAnalytics
            foundInGoogleSearchConsole
            foundInBacklinks
            foundInList
            foundInLogSummary
            foundInSitemap
          }
        }
      }
    }
  }
}
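For reference, below is a minimal sketch of running this query over HTTP with Python's requests library. The endpoint URL, API token, crawl ID, and report template code are placeholders rather than confirmed values; substitute the ones for your own account and authentication setup.

# A minimal sketch of executing the query above with Python's requests library.
# Endpoint, token, crawl ID, and template code are assumed placeholders.
import requests

API_ENDPOINT = "https://api.example.com/graphql"  # placeholder; use your API endpoint
API_TOKEN = "YOUR_API_TOKEN"  # placeholder authentication token

QUERY = """
query GetReportForCrawl($crawlId: ObjectID!, $reportTemplateCode: String!) {
  getCrawl(id: $crawlId) {
    reportsByCode(
      input: {
        reportTypeCodes: Basic
        reportTemplateCodes: [$reportTemplateCode]
      }
    ) {
      rows {
        nodes {
          ... on CrawlUncrawledUrls {
            url
            restrictedReason
            robotsTxtRuleMatch
            foundInLogSummary
          }
        }
      }
    }
  }
}
"""

variables = {
    "crawlId": "YOUR_CRAWL_ID",  # placeholder crawl ID
    "reportTemplateCode": "YOUR_REPORT_TEMPLATE_CODE",  # placeholder template code
}

response = requests.post(
    API_ENDPOINT,
    json={"query": QUERY, "variables": variables},
    headers={"Authorization": f"Bearer {API_TOKEN}"},  # assumed auth scheme
    timeout=30,
)
response.raise_for_status()
print(response.json())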
