How to use Fetch as Bingbot

Fetch as Bingbot is a very useful tool for troubleshooting. You will find it in the Diagnostics & Tools section of the navigation menu. Fetch as Bingbot allows you to see exactly what Bingbot sees when downloading a page from your site, making it a great tool for determining whether a page on your site can be crawled.

To use it, simply enter the URL from your site and click Fetch. The tool sends the request to Bingbot for processing, so it is normal for the request to take a few seconds. Just below the URL entry form, the URL you requested (along with any previous requests) will be shown. When Bingbot has finished with your request, the status for the URL changes to Completed. To see what Bingbot found at your URL, simply click the Completed link and scroll down the page to view the page source that Bingbot retrieved.
See what Bingbot sees
Fetch as Bingbot shows you exactly what your HTTP headers and page source look like to Bingbot. Viewing your page's source in this manner ensures you know exactly what Bingbot sees when it crawls the URL. This is especially useful when you want to make sure content can be indexed, or in cases where your site may have been compromised and is, for example, sending different HTML to Bingbot than to users. By scanning through the page source, you can spot things like links injected into blog posts or any other additive items you did not place on your website.
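You can approximate this kind of check yourself by requesting a page with Bingbot's published user-agent string and inspecting the raw headers and source you get back. The sketch below is only an illustration, not the tool itself: the local test server stands in for your site, and its page content is invented for the example.

```python
# Fetch a page with Bingbot's user-agent and show the HTTP headers and
# page source, roughly as Fetch as Bingbot reports them. The local server
# below is a stand-in for a real site so the example is self-contained.
import http.client
import http.server
import threading

PAGE = b"<html><body><p>Hello from the test site</p></body></html>"

class SiteHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)
    def log_message(self, *args):  # keep the example's output clean
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), SiteHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# Bingbot's published desktop user-agent string
UA = "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
conn = http.client.HTTPConnection(host, port)
conn.request("GET", "/", headers={"User-Agent": UA})
resp = conn.getresponse()

print("HTTP status:", resp.status)
for name, value in resp.getheaders():
    print(f"{name}: {value}")
body = resp.read().decode()
print(body)  # the page source exactly as the client received it
server.shutdown()
```

Comparing output like this against what your browser receives is one way to notice cloaking or injected content.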
Fetch as Bingbot is also a reliable way to test if a URL is being blocked by your robots.txt file. It’s polite, so sending it to fetch a URL blocked via robots.txt will get you a notice explaining that this barrier is in place.
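The same kind of robots.txt check can be reproduced with Python's standard-library robotparser. The rules below are an illustrative example, not taken from any real site; the sketch simply shows how a Disallow rule for the bingbot user agent blocks a URL.

```python
# A sketch of the robots.txt check: parse a set of rules and ask whether
# the "bingbot" user agent may fetch a given URL. Rules are invented for
# the example.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: bingbot",
    "Disallow: /private/",
    "",
    "User-agent: *",
    "Disallow:",
]

rp = RobotFileParser()
rp.parse(rules)

# Blocked by the Disallow rule for bingbot
print(rp.can_fetch("bingbot", "http://example.com/private/page.html"))
# Not blocked
print(rp.can_fetch("bingbot", "http://example.com/public/page.html"))
```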
Likewise, if Fetch as Bingbot is not able to crawl a page because of other blocks or politeness constraints (for example, the maximum number of fetches we are allowed to make in a given time window), the tool will tell you that as well.
Note: to see the crawled content of a page that uses the crawlable AJAX protocol (#! notation in the URL), you need to input the static URL instead. The static URL is the URL that contains the ?_escaped_fragment_= portion. Although Bing supports this protocol, we do not recommend it for frequent use.
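Under that AJAX crawling scheme, the static URL is derived from the pretty #! URL by moving the fragment into a ?_escaped_fragment_= query parameter, with the fragment value percent-encoded. A small sketch of that mapping (the helper name is ours, not part of any API):

```python
# Map a #! "pretty" URL to its _escaped_fragment_ static equivalent,
# per the crawlable AJAX scheme. to_static_url is a hypothetical helper.
from urllib.parse import quote

def to_static_url(url: str) -> str:
    """Turn http://site/page#!key=value into its static equivalent."""
    if "#!" not in url:
        return url  # no AJAX fragment, nothing to rewrite
    base, fragment = url.split("#!", 1)
    # Append to an existing query string if the base URL already has one
    sep = "&" if "?" in base else "?"
    # The fragment value is percent-encoded in the static URL
    return f"{base}{sep}_escaped_fragment_={quote(fragment, safe='')}"

print(to_static_url("http://example.com/page#!profile=42"))
# http://example.com/page?_escaped_fragment_=profile%3D42
```

You would paste the resulting static URL, not the #! form, into the Fetch as Bingbot input box.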
What does "Redirection Limit Reached" mean?
Unlike the SEO Analyzer tool, Fetch as Bingbot does not follow redirects. Instead it lets you know that the page resulted in a redirect and shows you the HTTP headers it received from your server. The status line (most commonly starting with HTTP/1.1) shows the status code (301, 302, or in some cases 303 or 307), and the Location: header shows where the server tells Bingbot (and other clients) the redirect should go. To fetch the page at that location you will need to perform a new fetch using that URL. This is in fact how the Bingbot crawler works: every redirect requires a brand-new fetch, and this new fetch does not necessarily happen immediately after encountering the redirect.
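The behaviour described above can be sketched with a client that, like the tool, does not follow redirects. Python's http.client leaves redirects unfollowed by default, so it reports the 3xx status and Location header as-is. The local server below stands in for your site; the target URL is invented for the example.

```python
# Show what a non-following client sees for a redirect: the 3xx status
# and the Location header, which a crawler would need a new fetch to follow.
import http.client
import http.server
import threading

class RedirectingHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)  # permanent redirect
        self.send_header("Location", "http://example.com/new-page")
        self.end_headers()
    def log_message(self, *args):  # keep the example's output clean
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

# http.client does not follow redirects, mirroring the tool's behaviour
conn = http.client.HTTPConnection(host, port)
conn.request("GET", "/old-page")
resp = conn.getresponse()
status = resp.status
location = resp.getheader("Location")
print("Status:", status)            # the 3xx code from the status line
print("Location:", location)        # where a new fetch would have to go
server.shutdown()
```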
What does "Download of Robots.txt Failed" mean?
When the result of a fetch reads “Download of Robots.txt Failed”, Bingbot was not able to get a proper server response when trying to access your site’s robots.txt. This usually means that your site’s server is configured to refuse a proper connection to Bingbot or to the IP addresses from which Bingbot operates, or that it simply denies access to this URL. This is a problem, since Bingbot now cannot tell whether or not a robots.txt file actually exists, nor read any directives in the robots.txt file should it exist. You should investigate (or ask your ISP to investigate) whether such a configuration is in place.
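As a rough illustration of this failure mode, the sketch below sets up a local stand-in for a misconfigured site whose server denies access to /robots.txt. A crawler making the same request gets an error status instead of the file, so it cannot read any directives.

```python
# Simulate a server that denies access to /robots.txt, the situation
# behind "Download of Robots.txt Failed". The server and its 403 response
# are invented for the example.
import http.client
import http.server
import threading

class DenyRobotsHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/robots.txt":
            self.send_response(403)  # refuses the crawler a proper answer
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>page</body></html>")
    def log_message(self, *args):  # keep the example's output clean
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), DenyRobotsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

conn = http.client.HTTPConnection(host, port)
conn.request("GET", "/robots.txt")
resp = conn.getresponse()
print("robots.txt fetch status:", resp.status)  # an error, not the file
server.shutdown()
```

Fetching your own /robots.txt with a plain HTTP client (and, if possible, with Bingbot's user-agent string) is a quick way to check whether such a block is in place.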
What does "Represent download not attempted due to politeness issue" mean?
When the result of a fetch reads “Represent download not attempted due to politeness issue”, Bingbot did not try to fetch the page due to restrictions on how much we can crawl the site. This usually means that Bingbot is backing off at this point in time and cannot fetch the page "politely", that is, while honoring crawl delays or heuristically determined maximum fetches per second for your site. If this issue persists, it may point to insufficient bandwidth being available for Bingbot to crawl your site efficiently. First, check your site's robots.txt for any crawl-delay: directives; removing them or lowering their values can help with this issue. If no crawl-delay directives are present, check your Crawl Control settings to see whether allowing higher crawl rates helps (especially during times when you expect fewer visitors).
What does "Unexpected Error" mean?
If you are seeing frequent "Unexpected Error" results in Fetch as Bingbot, this may indicate that our crawler is unable to connect to your server and is not getting a server response at all. Your server may be unintentionally blocking Bingbot based on its user agent or IP range. To remedy this, look for server configurations or modules that may be dropping connections for Bingbot, or work with your ISP to identify whether the issue is on their side.