Disallowed Pages
All URLs included in the crawl that were disallowed by the robots.txt file on the live site, or by a custom robots.txt file configured in Advanced Settings.
These URLs were crawled by Lumar because the 'Check disallowed links' setting in Advanced Settings > Scope > Link Validation was enabled. If the setting is disabled, these URLs are not crawled and appear in the Disallowed URLs report instead.
Priority: None
Impact: Neutral
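For reference, a URL lands in this report when a Disallow rule in robots.txt matches it for the crawler's user agent. The sketch below, which uses Python's standard urllib.robotparser against a hypothetical site, illustrates the rule evaluation this check corresponds to; it is not Lumar's implementation.

# Evaluate a robots.txt Disallow rule the way a crawler would.
from urllib import robotparser

parser = robotparser.RobotFileParser()
# Hypothetical site; substitute any live robots.txt URL.
parser.set_url("https://example.com/robots.txt")
parser.read()

# can_fetch() returns False for URLs matched by a Disallow rule,
# i.e. the kind of URL this report surfaces.
print(parser.can_fetch("*", "https://example.com/private/page"))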
How to fetch the data for this report template
You will need to run a crawl for this report template to generate the report. Once the report has been generated and you have the crawl ID, you can fetch the data for the report using the following query:
query GetReportStatForCrawl(
  $crawlId: ObjectID!
  $reportTemplateCode: String!
  $after: String
) {
  getReportStat(
    input: {crawlId: $crawlId, reportTemplateCode: $reportTemplateCode}
  ) {
    crawlUrls(after: $after, reportType: Basic) {
      nodes {
        pageTitle
        url
        description
        foundAtUrl
        deeprank
        level
        disallowedPage
        robotsTxtRuleMatch
        duplicatePage
        noindex
        nofollowedPage
        foundInGoogleAnalytics
        foundInGoogleSearchConsole
        foundInBacklinks
        foundInList
        foundInLogSummary
        foundInWebCrawl
        foundInSitemap
      }
      totalCount
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
}
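The query can be executed with any GraphQL client. Below is a minimal Python sketch that POSTs the query and pages through results with pageInfo; the endpoint URL, auth header name, and report template code are assumptions to replace with the values from your own account and the API documentation.

import requests

API_URL = "https://api.lumar.io/graphql"  # assumed endpoint

# Trimmed variant of the query above; request whichever node fields you need.
QUERY = """
query GetReportStatForCrawl($crawlId: ObjectID!, $reportTemplateCode: String!, $after: String) {
  getReportStat(input: {crawlId: $crawlId, reportTemplateCode: $reportTemplateCode}) {
    crawlUrls(after: $after, reportType: Basic) {
      nodes { url pageTitle disallowedPage robotsTxtRuleMatch }
      totalCount
      pageInfo { endCursor hasNextPage }
    }
  }
}
"""

def fetch_all(crawl_id: str, token: str):
    headers = {"X-Auth-Token": token}  # assumed auth header
    after = None
    while True:
        variables = {
            "crawlId": crawl_id,
            "reportTemplateCode": "disallowed_pages",  # assumed code for this report
            "after": after,
        }
        resp = requests.post(
            API_URL, json={"query": QUERY, "variables": variables}, headers=headers
        )
        resp.raise_for_status()
        crawl_urls = resp.json()["data"]["getReportStat"]["crawlUrls"]
        yield from crawl_urls["nodes"]
        page = crawl_urls["pageInfo"]
        if not page["hasNextPage"]:
            break
        after = page["endCursor"]

Each call returns one page of nodes; the loop follows endCursor until hasNextPage is false, so iterating over fetch_all() yields every URL in the report.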