Duplicate Pages in SERPs

Duplicate pages that had impressions in Google organic SERPs.

Priority: High

Impact: Negative

What issues it may cause

Although search engines will attempt to automatically identify duplicate pages and roll them together, this may not be completely effective on very large websites or those with a high churn of URLs.

Duplicate pages can dilute authority signals, which can harm ranking performance, and they reduce the site's crawl efficiency by wasting crawl budget.

How do you fix it

It's likely that the duplicates shown are not considered the primary version, but this should be verified; a verification sketch follows the list below. The duplicates can be eliminated in one of the following ways:

  • removing internal links to the duplicate URLs
  • redirecting all duplicate URLs to the primary URL in each set
  • adding canonical tags that point to the primary URL
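
Before choosing between these options, it helps to confirm how each duplicate currently resolves. The sketch below is one way to do that in Python, assuming the duplicate sets have been exported from this report; the example URLs and the naive canonical check are illustrative placeholders, not part of the report output.

import requests

# Hypothetical duplicate sets exported from the report: a primary URL mapped
# to its duplicates. Replace with your own data.
duplicate_sets = {
    "https://example.com/widgets": [
        "https://example.com/widgets?ref=nav",
        "https://example.com/Widgets",
    ],
}

for primary, duplicates in duplicate_sets.items():
    for url in duplicates:
        # Do not follow redirects so the original status code stays visible.
        response = requests.get(url, allow_redirects=False, timeout=10)
        redirect_target = response.headers.get("Location")
        # Naive canonical check on 200 responses; an HTML parser would be more robust.
        has_canonical_to_primary = (
            response.status_code == 200
            and "canonical" in response.text.lower()
            and primary.lower() in response.text.lower()
        )
        print(url, response.status_code, redirect_target, has_canonical_to_primary)

A duplicate that already returns a 301 to the primary URL, or that already carries a canonical reference to it, is being consolidated and needs no further action.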

What is the positive impact

  1. Reducing the number of duplicate pages can avoid diluting PageRank, helping the remaining pages rank better and resulting in more traffic and conversions.

  2. Canonicalised or redirected pages will be crawled less often, improving crawl efficiency and saving on server costs.

How to fetch the data for this report template

You will need to run a crawl with this report template to generate the report. Once the report has been generated and you have the crawl ID, you can fetch the data for the report using the following query:

query GetReportForCrawl($crawlId: ObjectID!, $reportTemplateCode: String!) {
  getCrawl(id: $crawlId) {
    reportsByCode(
      input: {
        reportTypeCodes: Basic
        reportTemplateCodes: [$reportTemplateCode]
      }
    ) {
      rows {
        nodes {
          ... on CrawlUrls {
            pageTitle
            url
            foundAtUrl
            deeprank
            level
            searchConsoleTotalClicks
            searchConsoleTotalImpressions
            httpStatusCode
            indexable
            duplicatePage
            foundInGoogleAnalytics
            foundInGoogleSearchConsole
            foundInBacklinks
            foundInList
            foundInLogSummary
            foundInWebCrawl
            foundInSitemap
          }
        }
      }
    }
  }
}
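
Like any GraphQL operation, the query can be sent as an HTTP POST. The sketch below shows this in Python with an abridged field list; the endpoint URL, authentication header, and report template code are assumptions used for illustration, so take the real values from the API documentation and your account.

import requests

API_ENDPOINT = "https://api.lumar.io/graphql"  # assumed endpoint; check the API docs
API_TOKEN = "YOUR_API_TOKEN"                   # assumed bearer-token auth; check the API docs

# Abridged version of the query above; request whichever fields you need.
QUERY = """
query GetReportForCrawl($crawlId: ObjectID!, $reportTemplateCode: String!) {
  getCrawl(id: $crawlId) {
    reportsByCode(
      input: {
        reportTypeCodes: Basic
        reportTemplateCodes: [$reportTemplateCode]
      }
    ) {
      rows {
        nodes {
          ... on CrawlUrls {
            url
            duplicatePage
            searchConsoleTotalImpressions
          }
        }
      }
    }
  }
}
"""

response = requests.post(
    API_ENDPOINT,
    json={
        "query": QUERY,
        "variables": {
            "crawlId": "YOUR_CRAWL_ID",
            # Hypothetical value; use the code for this report template.
            "reportTemplateCode": "duplicate_pages_in_serps",
        },
    },
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
payload = response.json()
# The response shape mirrors the query; inspect payload["data"] for the report rows.
print(payload)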