July 09, 2023  SEONews

The story of blocking 2 high-ranking pages with robots.txt


I blocked two of our ranking pages using robots.txt. We lost a position here or there and all the featured snippets for the pages. I expected a lot more impact, but the world didn’t end.

Warning

I don’t recommend doing this, and it’s entirely possible that your results may differ from ours.

I wanted to see what impact removing content would have on rankings and traffic. My theory was that if we blocked the pages from being crawled, Google would have to rely on link signals alone to rank the content.
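For context, a crawl block like this is just a couple of Disallow rules in robots.txt. The article doesn't disclose the actual URLs, so the paths below are hypothetical stand-ins:

```
User-agent: *
Disallow: /high-ranking-page-one/
Disallow: /high-ranking-page-two/
```

Note that Disallow only stops crawling; it doesn't remove a URL from Google's index, which is part of why the pages could keep ranking on previously gathered signals.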

However, I don't think what I saw was actually the impact of removing the content. Maybe it was, but I can't say with 100% certainty, because the impact feels too small. I will run another test to confirm this: my new plan is to remove the content from the page itself and see what happens.

My working theory is that Google may still be using the content it saw earlier on the page to rank it. Google Search Advocate John Mueller confirmed this behavior in the past.

So far, the test has lasted almost five months. At this point, it doesn't look like Google will stop ranking the pages. I suspect Google will eventually stop trusting that the content that was on the page is still there, but I haven't seen evidence of that happening yet.

Keep reading to see the test setup and impact. The main takeaway is that accidentally blocking pages (that Google already ranks) from being crawled by robots.txt probably won’t have much of an impact on your rankings, and they’ll probably...
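If you're worried you may have accidentally blocked a ranking page, you can sanity-check your rules before (or after) deploying them. Here's a minimal sketch using Python's standard-library robots.txt parser; the rules and URLs are hypothetical placeholders, not the ones from this test:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules mirroring the kind of crawl block described above.
robots_txt = """\
User-agent: *
Disallow: /high-ranking-page-one/
Disallow: /high-ranking-page-two/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# can_fetch() checks a URL's path against the rules for a given user agent.
for url in (
    "https://example.com/high-ranking-page-one/",  # should be blocked
    "https://example.com/some-other-page/",        # should be allowed
):
    verdict = "allowed" if parser.can_fetch("Googlebot", url) else "blocked"
    print(url, "->", verdict)
```

This only tells you what the rules say, not how Google will respond; as the test above suggests, a blocked page can keep ranking on old signals for months.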



source: https://news.oneseocompany.com/2023/07/09/the-story-of-blocking-2-high-ranking-pages-with-robots-txt_2023070947412.html
