Google Search Console’s new crawl stats report is thoroughly explained by Search Advocate Daniel Waisberg in a new training video.
The crawl stats report in Search Console received a major update a few months ago. If you haven’t had a chance to look at the new report, now is a good time to get familiar with all the insights that have been added.
Google’s new video breaks down every section of the crawl stats report and explains how the data can be used to determine how well Googlebot is able to crawl a particular site.
When Googlebot can crawl a site efficiently, new content gets indexed in search results quickly and Google discovers changes made to existing content sooner.
Here’s a recap of the video starting with the absolute basics: what is crawling?
What is Web Crawling?
The crawling process begins with a list of URLs from previous crawls and sitemaps provided by site owners.
Google uses web crawlers to visit URLs, read the information in them, and follow links on those pages.
The crawlers revisit pages already in the list to check whether they have changed, and also crawl new pages they discover.
During this process the crawlers have to make important decisions, such as prioritizing when and what to crawl, while making sure the website can handle the server requests made by Google.
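The process described above — start from a list of known URLs, visit each page, queue the links found on it, and skip pages already seen — is essentially a breadth-first traversal. The sketch below illustrates that idea with a made-up, in-memory link graph standing in for the web; it is a simplification for illustration, not how Googlebot actually works:

```python
from collections import deque

# Hypothetical link graph: page URL -> links found on that page.
# A real crawler would fetch and parse live pages instead.
PAGES = {
    "https://example.com/": ["https://example.com/a", "https://example.com/b"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": ["https://example.com/c"],
    "https://example.com/c": [],
}

def crawl(seed_urls):
    """Breadth-first crawl: visit known URLs, follow links, skip repeats."""
    queue = deque(seed_urls)  # URLs scheduled for crawling
    crawled = set()           # URLs already visited
    while queue:
        url = queue.popleft()
        if url in crawled:
            # Already crawled; a real crawler would only revisit
            # later to check whether the page has changed.
            continue
        crawled.add(url)
        # "Read the page" and queue the links it contains.
        for link in PAGES.get(url, []):
            if link not in crawled:
                queue.append(link)
    return crawled

print(sorted(crawl(["https://example.com/"])))
```

A production crawler layers scheduling on top of this loop: prioritizing which URLs to fetch and when, and rate-limiting requests so the target server isn't overloaded.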
Successfully crawled pages are…