# Duplicate Pages in SERPs

Duplicate pages which had impressions in Google Organic SERPs.

**Priority**: High

**Impact**: Negative

## What issues it may cause

[Although search engines will attempt to automatically identify duplicate pages and roll them together](https://www.youtube.com/watch?v=oCNi7dTircw#t=44m11s), this may not be completely effective on very large websites or on those with a high churn of URLs. Duplicate pages can dilute authority signals, which can affect ranking performance, and they reduce the crawl efficiency of the site, wasting crawl budget.

## How do you fix it

It is likely that the duplicates shown are not considered the primary version, but this should be verified. The duplicates can then be eliminated by either canonicalising them to the primary version or redirecting them to it.

## What is the positive impact
  1. Reducing the number of duplicate pages can avoid the dilution of PageRank, helping the remaining pages to rank better and resulting in more traffic and conversions.
  2. Canonicalised or redirected pages will be crawled less often, improving crawl efficiency and saving on server costs.
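Before canonicalising or redirecting, it helps to confirm which duplicates already declare a canonical URL pointing at another page. A minimal sketch of extracting the `rel="canonical"` link from a page's HTML, using only the Python standard library (the `example.com` URLs are placeholders, not part of this report):

```python
from html.parser import HTMLParser


class CanonicalFinder(HTMLParser):
    """Capture the href of the first <link rel="canonical"> element."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            attr_map = dict(attrs)
            if attr_map.get("rel", "").lower() == "canonical":
                self.canonical = attr_map.get("href")


def canonical_url(html):
    """Return the declared canonical URL of a page, or None if absent."""
    parser = CanonicalFinder()
    parser.feed(html)
    return parser.canonical
```

A duplicate whose `canonical_url` already resolves to the primary version needs no further action; one with no canonical (or a self-referencing one) is a candidate for canonicalisation or a redirect.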
## How to fetch the data for this report template

You will need to run a crawl for this report template in order to generate the report. Once the report has been generated and you have the crawl ID, you can fetch the data for the report using the following query:

```graphql
query GetReportStatForCrawl(
  $crawlId: ObjectID!
  $reportTemplateCode: String!
  $after: String
) {
  getReportStat(
    input: { crawlId: $crawlId, reportTemplateCode: $reportTemplateCode }
  ) {
    crawlUrls(after: $after, reportType: Basic) {
      nodes {
        pageTitle
        url
        foundAtUrl
        deeprank
        level
        searchConsoleTotalClicks
        searchConsoleTotalImpressions
        httpStatusCode
        indexable
        duplicatePage
        foundInGoogleAnalytics
        foundInGoogleSearchConsole
        foundInBacklinks
        foundInList
        foundInLogSummary
        foundInWebCrawl
        foundInSitemap
      }
      totalCount
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
}
```

**Variables:**

```json
{
  "crawlId": "TjAwNUNyYXdsNDAwMA",
  "reportTemplateCode": "duplicate_pages_in_serp"
}
```
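The query returns `crawlUrls` one page at a time, so a client should follow `pageInfo.endCursor` until `hasNextPage` is false. A minimal pagination sketch in Python, assuming a caller-supplied `post` callable that sends the payload to your GraphQL endpoint and returns the decoded JSON response (the transport and endpoint URL are not specified by this document, so they are left to the caller):

```python
# The GraphQL document from the section above would normally go here;
# it is elided to keep the sketch short.
QUERY = "query GetReportStatForCrawl(...) { ... }"


def build_payload(crawl_id, report_template_code, after=None):
    """Build the GraphQL request body for one page of results."""
    return {
        "query": QUERY,
        "variables": {
            "crawlId": crawl_id,
            "reportTemplateCode": report_template_code,
            "after": after,
        },
    }


def fetch_all_urls(post, crawl_id, report_template_code):
    """Collect every crawlUrls node by following the endCursor.

    `post` is any callable that takes a payload dict and returns the
    decoded JSON response, so the pagination logic is testable offline.
    """
    nodes, after = [], None
    while True:
        data = post(build_payload(crawl_id, report_template_code, after))
        page = data["data"]["getReportStat"]["crawlUrls"]
        nodes.extend(page["nodes"])
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            return nodes
        after = info["endCursor"]
```

Injecting `post` keeps the loop independent of any particular HTTP library; in practice it would wrap a POST of the payload to the GraphQL endpoint with your authentication headers.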