Troubleshooting Google's Inconsistent Page Crawling: Seeking Insights and Solutions?

I’ve been facing an issue with one of my pages that hasn’t been crawled by Google for the past three months. I’m reaching out to seek insights and assistance in resolving this matter.

Initially, Google crawled the page on October 13, 2023, using the Smartphone bot, and at that time it returned a “Not found” (404) status. The page is now returning a 200 OK response, but Google still hasn’t crawled it again.

When I checked the URL - https://pro.experience.com/reviews/company/fusco-orsini-insurance-services - in the URL inspection tool, I noticed that the last crawl date shown is October 6, 2023, by the Desktop bot. The initial 404 error was attributed to a server stability issue on our website, but after reviewing the server log files, I can no longer detect any issues.
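For reference, this is roughly the kind of quick check behind my statement that the page now returns 200 OK to crawler-like requests (a minimal Python sketch; the Googlebot-style user-agent string is only an approximation):

```python
import urllib.request

URL = "https://pro.experience.com/reviews/company/fusco-orsini-insurance-services"

# Approximation of Googlebot Smartphone's user-agent string; the exact value
# doesn't matter here, it just makes the request look crawler-like.
HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
        "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Mobile Safari/537.36 "
        "(compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    )
}

request = urllib.request.Request(URL, headers=HEADERS)
with urllib.request.urlopen(request) as response:
    # Expecting "200 OK" here; a 4xx/5xx would raise urllib.error.HTTPError instead.
    print(response.status, response.reason)
```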

My Questions:
1. Inconsistency in Last Crawl Date: Why has the last crawl date changed from October 13 to October 6 in the URL inspection tool? Please refer to the attached screenshot for clarity.
2. Reasons for No Recent Crawls: What could be the underlying reasons for Google not initiating a recent crawl, especially when the page is returning a 200 OK response?
3. Page Discovery: Google seems to be discovering the page, as indicated by changes in the “Last discovery” date in the URL inspection tool. How does this impact the crawling process?
4. Suggestions Needed: Are there any specific suggestions, both from an SEO and a developer standpoint, to ensure this page gets crawled successfully?


I genuinely appreciate your time and expertise in helping me resolve this issue. Thank you in advance for any insights or guidance you can provide.

I’m far, far away from knowing the exact reasons for this behaviour.

I’m assuming that things like robots.txt are correctly configured, or that you don’t have one at all.
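If you want to double-check that quickly, here is a minimal sketch using Python’s standard-library robots.txt parser (the URLs are the ones from your post; both the Desktop and Smartphone crawlers honour the “Googlebot” token):

```python
from urllib.robotparser import RobotFileParser

# robots.txt is assumed to live at the site root.
robots = RobotFileParser("https://pro.experience.com/robots.txt")
robots.read()

page = "https://pro.experience.com/reviews/company/fusco-orsini-insurance-services"

# Check the Googlebot token and the wildcard rule.
for agent in ("Googlebot", "*"):
    allowed = robots.can_fetch(agent, page)
    print(f"{agent}: {'allowed' if allowed else 'BLOCKED'}")
```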

The change in the date doesn’t make much sense to me, but I assume something went wrong with the Smartphone bot’s crawl and that crawl was discarded, and therefore didn’t count.

Regarding the frequency between crawls, there are various factors at play: bandwidth, the consistency of the servers and their response times, how often the site is updated, and so on.
The frequency also adjusts automatically depending on the HTTP responses the bot receives. Responses like 500, 503 or 429 instead of content will delay the next crawl.
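If you want to rule that out on your side, here is a rough sketch that tallies the status codes served to Googlebot from a combined-format access log (the log path and format are assumptions; adjust them to your setup):

```python
import re
from collections import Counter

# Hypothetical access-log path; adjust for your server setup.
LOG_PATH = "/var/log/nginx/access.log"

# Loose pattern for a combined-format log line:
#   ... "GET /path HTTP/1.1" 200 1234 "referer" "user-agent"
LINE_RE = re.compile(r'"[A-Z]+ \S+ HTTP/[\d.]+" (?P<status>\d{3}) .*"(?P<ua>[^"]*)"\s*$')

status_counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        if match and "Googlebot" in match.group("ua"):
            status_counts[match.group("status")] += 1

# 5xx and 429 responses are the ones that tend to slow the next crawl down.
for status, count in sorted(status_counts.items()):
    print(f"{status}: {count}")
```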

I know that you can change the crawl frequency using Search Console, so I would check that just in case.

Regarding Page Discovery… I am not sure if you’re talking about Google Discover.
If that’s the case, Google Discover doesn’t work from a particular search term the way a normal search would.
What it does is show what Google thinks is interesting to that particular person, which includes both new and old content. It is more personalized and tailored to the user, so to say.

Thank you for your response, Molona. Your insights are valuable.

We have confirmed that there are no issues with our robots.txt file. Additionally, Google has recently deprecated the custom crawl-rate setting.

When we deploy updates on our website’s production server, we aim to minimize disruptions.

Regarding Page Discovery: Yes, I am specifically referring to Google Discover. Do you have any specific advice for my situation?

So the advice would be… be the user you want to be discovered by.

I.e., your content would have to match that individual user’s interests (as guessed by Google).
