Google Search Console Coverage Report: The ultimate guide

Jonathan Pellerin | Published on March 5, 2024
The coverage report in Google Search Console is a vitally important, though at times confusing, tool for any marketer looking to optimize a website. This article will clear up the confusion and get you back on track.

One of Google Search Console's most important features is the index coverage report, which shows how Google is crawling and indexing your web pages. In this article we'll cover what the coverage report is, why it matters, and how to diagnose the errors it surfaces. We'll also walk through how to fix indexing and crawling errors to improve your SEO performance.

What is the Coverage Report in Google Search Console?

The coverage report in Google Search Console tells you which web pages Google has successfully crawled and indexed, and can therefore show in search results. It also tells you which pages won't appear in search results, and why crawling or indexing failed for them.

So, what do crawling and indexing actually mean?

When you publish your website, Google's bots (crawlers) discover your web pages, usually by following internal links or reading your sitemap. Crawling matters for rankings because it's how Google gathers the signals it uses to rank your pages in the SERP: how well each page is optimized, how it's structured, and how relevant it is. Indexing takes place after crawling; it's when Google adds your web pages to its index. Your website can't rank in Google until its pages have been successfully crawled and indexed.
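The discovery step described above can be sketched in a few lines of code. This is a toy illustration, not how Googlebot actually works: the link graph, page paths, and `crawl` function are all made up for the example, and a real crawler would fetch pages over HTTP rather than read a dict.

```python
from collections import deque

# A toy link graph standing in for a website: each page lists the
# internal links found on it. All paths here are hypothetical.
SITE = {
    "/": ["/blog", "/about"],
    "/blog": ["/blog/post-1", "/"],
    "/blog/post-1": ["/blog"],
    "/about": [],
    "/orphan": [],  # no page links here, so a crawler never finds it
}

def crawl(start: str) -> list[str]:
    """Discover pages breadth-first by following internal links."""
    seen, queue, order = {start}, deque([start]), []
    while queue:
        page = queue.popleft()
        order.append(page)  # a real crawler would fetch and index here
        for link in SITE.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

index = crawl("/")
print(index)               # the discovered pages, in crawl order
print("/orphan" in index)  # False: unlinked pages are never crawled
```

This is also why orphan pages (pages with no internal links pointing at them) tend to go unindexed unless they appear in your sitemap.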

How to Diagnose Index/Crawl Errors

Now that we know what the coverage report is and why it's important, let's dive into how to identify errors in the coverage report. 

Google Search Console provides a convenient user interface that includes a graph where you can select between the different URL statuses. 

There are four different URL statuses:

  • Error
  • Valid with warning
  • Valid
  • Excluded

The two to look out for are 'Error' and 'Valid with warning'. Checking these is the first step to identifying, and eventually resolving, index and crawl errors. Google Search Console also has a URL Inspection tool that lets you check the index status of an individual URL and see any issues or errors.
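If you'd rather check index status programmatically, the Search Console API exposes the URL Inspection tool as an endpoint. A real call needs OAuth credentials for a verified property, so the sketch below only builds the request body; the endpoint and field names reflect Google's API documentation, but verify them against the current reference before relying on them.

```python
import json

# The URL Inspection API accepts a POST to
# https://searchconsole.googleapis.com/v1/urlInspection/index:inspect
# (authenticated via OAuth). Here we only construct the JSON body.

def build_inspection_request(page_url: str, property_url: str) -> str:
    """Build the JSON body for a URL Inspection API call."""
    return json.dumps({
        "inspectionUrl": page_url,  # the page whose index status you want
        "siteUrl": property_url,    # the Search Console property it belongs to
    })

body = build_inspection_request(
    "https://www.example.com/blog/post-1",
    "https://www.example.com/",
)
print(body)
```

The response's `indexStatusResult` includes fields such as the verdict and coverage state, which correspond to the statuses shown in the coverage report UI.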

Google Search Console Index Errors and Meanings

There are many errors you could come across in the coverage report when reviewing your Google index status. Below is a comprehensive list of the potential index errors and what they mean.

  • Server error (5xx): This is a 500-level error, which means the server failed to fulfill a valid request.
    • How To Fix: Identify the server error type and reach out to the server administrator to have it resolved.
  • Redirect error: This occurs when Google encounters one of the following redirect problems:
    • A redirect chain that was too long
    • A redirect loop
    • A redirect URL that exceeded the maximum URL length
    • A faulty or empty URL in the redirect chain
    • How To Fix: Determine the correct destination page and update the redirect so it points straight to that URL.
  • URL blocked by robots.txt: This error message means that the website is telling crawlers not to access or crawl the URL.
    • How To Fix: If you want the URL to be indexed, update the robots.txt file to allow crawling. If the URL should not be indexed, keep in mind that a 'noindex' directive only works if Google can crawl the page and see it, so either leave the robots.txt block in place or allow crawling and add a 'noindex' directive.
  • URL marked 'noindex': This message means that Google tried to index the page but was stopped by a 'noindex' directive. This is usually intentional, since website owners add 'noindex' directives to pages they don't want indexed.
    • How To Fix: If you want this URL to be indexed, remove all 'noindex' directives and include the URL in your sitemap.
  • URL seems to be a Soft 404: A soft 404 is when a page returns a success response but looks like an error page to Google, for example because its content is thin, empty, or reads like a "not found" message. It's called a soft 404 because it's not a genuine 404, but it should be treated as if it were one.
    • How To Fix: If the URL should be a 404, reconfigure your server so it returns a 404 or 410 status code. If the URL should not be a 404, improve the content on the page so it no longer looks like an error page.
  • URL returns unauthorized request (401): The page was blocked to Googlebot by an authorization request, so the page wasn't indexed.
    • How To Fix: If the URL should be public, reconfigure the page so that authorization is not required to access it.
  • URL not found (404): Google tried accessing a page but couldn't find it, resulting in a 404 error. This could mean the page existed before but was deleted while Google kept crawling it, or the link to it could be broken, or the server may need reconfiguration, among other possibilities.
    • How To Fix: If the URL is meant to be a 404, remove it from the sitemap so Google stops trying to index it. If the page isn't supposed to be a 404, restore the content or use a 301 redirect to send visitors to a more relevant page.
  • Blocked by page removal tool: This error means that someone who manages the website used the Removals tool to take the URL out of the index. Removals made this way are temporary.
    • How To Fix: If you want the URL to stay out of the index permanently, add a 'noindex' directive to block the page from being re-added.
  • Blocked due to access forbidden (403): This error means the server refused access to the page. Googlebot never provides credentials, so any page that requires them will be blocked this way, and the page won't be indexed.
    • How To Fix: If the URL should be public, remove whatever is causing the 403 HTTP status code. Alternatively, if the URL shouldn't be public, add a 'noindex' to block it.
  • Crawl Anomaly: A crawl anomaly is when crawlers encounter an unexpected error of some kind while crawling a web page. It could be either a 4xx- or 5xx-level error depending on the problem.
    • How To Fix: Go through the listed URLs to confirm they really aren't working, and if there are only a few, check them with Google's URL Inspection tool.
  • Crawled - currently not indexed: One of the most common messages in the coverage report, this means the page has been crawled but not added to the index. There can be a few reasons: Google may consider the page peripheral to your main content, or the page may simply lack content.
    • How To Fix: If you only recently published the page, allow more time; Google will most likely index it soon. If it's been a while and the page still isn't indexed, improve its content and add more internal links pointing to it.
  • Discovered - currently not indexed: This message means the URL is known to Google but hasn't been visited or crawled yet. A busy server can also cause this message, in which case Google will crawl the page later when the server isn't so overloaded.
    • How To Fix: If you notice this issue on a growing number of URLs, you might have a crawl budget problem. One fix is to disallow some non-canonical URLs in the robots.txt file to stop Google from crawling them.
  • Duplicate without user-selected canonical: This message appears when Google detects multiple duplicate pages, or pages with duplicate content, where none has been canonicalized. Unless the webmaster chooses a canonical, Google will usually select its own canonical version to add to the index.
    • How To Fix: Review the duplicate pages and decide which one you want as the canonical version of each set. If a URL shouldn't be indexed at all, add a 'noindex' to block it.
  • Duplicate, Google chose different canonical than user: This is when Google selects and indexes a canonical that differs from the user-selected canonical. Webmasters can either accept Google's choice or push back on it.
    • How To Fix: If you think Google's choice was correct, update your canonical tags to match it. If you think Google got it wrong, look into why Google preferred the other page and strengthen the signals pointing to your preferred canonical, such as consistent internal links, sitemap entries, and canonical tags.
  • Page removed because of legal complaint: This message appears after a third party files a grievance with Google, which then removes the content in response to the legal complaint. These complaints can involve copyright issues, stolen content, violence, explicit content, and so on.
  • Page with redirect: This message means that the URL in the coverage report has a redirect on it and so cannot be added to the index.
    • How To Fix: Specify the correct destination page and make sure it's the final hop, which resolves redirect loops. To fix redirect chains, redirect the initial URL directly to the final destination URL.
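Several of the fixes above come down to two small pieces of configuration: a robots.txt file that controls what gets crawled, and a 'noindex' directive that controls what gets indexed. The snippets below are illustrative only; the domain and paths are made up, and remember that a 'noindex' directive is only effective if Google is allowed to crawl the page and see it.

```text
# robots.txt at the site root (hypothetical paths)
User-agent: *
Disallow: /cart/        # keep crawlers away from low-value, non-canonical URLs
Allow: /blog/

Sitemap: https://www.example.com/sitemap.xml
```

```html
<!-- In the <head> of a page you want crawled but kept out of the index -->
<meta name="robots" content="noindex">

<!-- In the <head> of a duplicate page, pointing at your chosen canonical -->
<link rel="canonical" href="https://www.example.com/blog/post-1">
```

The same pair also covers the duplicate-content statuses: declare a canonical for pages that should rank, and 'noindex' the ones that shouldn't appear in search results at all.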
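The redirect problems listed above (loops and overly long chains) are easy to reason about with a small sketch. Instead of making live HTTP requests, the example below uses a made-up dict mapping each URL to its redirect target; Google documents that Googlebot gives up after a limited number of redirect hops, so the cap here is illustrative.

```python
# Minimal sketch of diagnosing redirect errors. None means the URL
# serves content directly; all URLs here are hypothetical.

MAX_HOPS = 5  # illustrative cap; Googlebot's real limit is around 10 hops

def resolve(url, redirects, max_hops=MAX_HOPS):
    """Follow redirects to the final URL, flagging loops and long chains."""
    seen = [url]
    while redirects.get(url) is not None:
        url = redirects[url]
        if url in seen:
            return None, "redirect loop"
        seen.append(url)
        if len(seen) > max_hops:
            return None, "redirect chain too long"
    return url, "ok"

redirects = {
    "/old": "/older",
    "/older": "/new",
    "/new": None,           # final destination, serves a 200
    "/a": "/b", "/b": "/a", # a loop: /a -> /b -> /a
}

print(resolve("/old", redirects))  # ('/new', 'ok')
print(resolve("/a", redirects))    # (None, 'redirect loop')
```

The fix mirrors the code: once you know the final destination (`/new` here), point the first URL straight at it so the chain collapses to a single hop.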

Download the Google Search Console Error Handbook

With more than a dozen reasons Google might not index a page, I don't blame you for wondering how you'll remember them all.

You could print out flashcards. Maybe take a pop quiz.

I have a better idea: download our free Google Search Console Beginner's Guide. It's a handy reference complete with examples and guidance for every Google Search Console error.

This post ranks highly on Google. Want to see how we did it?

Despite competing with thousands of sites that know a heck of a lot about SEO, our site shows up over 1,000,000 times each month in Google Search results.

How'd we manage that?

We built a strategy designed to outsmart our competition and win. It’s the same approach we’ve taken for our clients, and you can download our free eBook detailing it below.

10x your traffic with our proven SEO strategy framework

Get the same strategy framework we teach every single client. Follow these 4 steps to outsmart your competitors on Google and rank your website higher than ever.