Disallowed JS

JavaScript files which are disallowed in robots.txt

Priority: Critical

Impact: Neutral

What issues it may cause

If JavaScript files are disallowed in robots.txt, search engines may be unable to render the pages correctly.

How do you fix it

The resources should be reviewed using tools such as the Search Console Live URL Test or the Mobile-Friendly Test to determine whether they are required for rendering. If so, robots.txt should be updated to allow the URLs to be fetched.
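As a sketch of what the fix might look like, the robots.txt fragment below allows crawlers to fetch JavaScript files while keeping another directory disallowed. The paths shown are illustrative assumptions, not values from this report; your rules should match your own site's structure.

```text
# Hypothetical example: permit JS files needed for rendering
User-agent: *
Allow: /*.js$
Disallow: /private/
```

A more specific `Allow` rule generally takes precedence over a broader `Disallow`, so rendering-critical scripts can be opened up without exposing the rest of a blocked directory.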

What is the positive impact

Search engines will be able to render the pages correctly and process all the content and metadata, so the pages can be indexed as expected.

How to fetch the data for this report template

You will need to run a crawl for the report template to generate the report. Once the report has been generated and you have the crawl ID, you can fetch the data for the report using the following query:

query GetReportStatForCrawl(
  $crawlId: ObjectID!
  $reportTemplateCode: String!
  $after: String
) {
  getReportStat(
    input: { crawlId: $crawlId, reportTemplateCode: $reportTemplateCode }
  ) {
    crawlUrls(after: $after, reportType: Basic) {
      nodes {
        url
        foundAtUrl
        level
        httpStatusCode
        disallowedPage
        failedReason
        js
        robotsTxtRuleMatch
        foundInGoogleAnalytics
        foundInGoogleSearchConsole
        foundInBacklinks
        foundInList
        foundInLogSummary
        foundInWebCrawl
        foundInSitemap
      }
      totalCount
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
}
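The query returns results in pages, so a client needs to follow `pageInfo.endCursor` and `pageInfo.hasNextPage` to collect every row. The sketch below shows that pagination loop in Python. It takes an `execute(query, variables)` callable so it stays independent of any particular HTTP client; the endpoint, authentication, and the `reportTemplateCode` value are assumptions, not documented values from this page.

```python
# Sketch of cursor pagination over the getReportStat query above.
# `execute(query, variables)` is assumed to POST to the GraphQL API
# and return the parsed JSON response body as a dict.

QUERY = """
query GetReportStatForCrawl(
  $crawlId: ObjectID!
  $reportTemplateCode: String!
  $after: String
) {
  getReportStat(
    input: { crawlId: $crawlId, reportTemplateCode: $reportTemplateCode }
  ) {
    crawlUrls(after: $after, reportType: Basic) {
      nodes { url }
      totalCount
      pageInfo { endCursor hasNextPage }
    }
  }
}
"""

def fetch_all_urls(execute, crawl_id, report_template_code="disallowed_js"):
    """Yield every crawlUrls node, following cursor pagination.

    The default report template code here is a placeholder assumption;
    use the code for the report you actually generated.
    """
    after = None  # first page: no cursor
    while True:
        data = execute(QUERY, {
            "crawlId": crawl_id,
            "reportTemplateCode": report_template_code,
            "after": after,
        })
        page = data["data"]["getReportStat"]["crawlUrls"]
        yield from page["nodes"]
        if not page["pageInfo"]["hasNextPage"]:
            break  # no more pages
        after = page["pageInfo"]["endCursor"]  # resume from the last cursor
```

Injecting `execute` keeps the pagination logic testable without network access; in practice it would wrap a POST of `{"query": ..., "variables": ...}` to the API endpoint with your credentials.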
