Crawl directives archives

Recent Crawl directives articles

WordPress robots.txt: Best-practice example for SEO

7 November 2019 Jono Alderson

Your robots.txt file is a powerful tool when working on a website’s SEO – but you should handle it with care. It allows you to deny search engines access to different files and folders, but often that’s not the best way to optimize your site. Here, we’ll explain how we think site owners should use their …
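
As a rough sketch of the idea (the path and domain below are placeholders, not the file the article recommends), a minimal robots.txt that keeps crawlers out of one folder while leaving the rest of a WordPress site crawlable could look like this:

    User-agent: *
    Disallow: /private-archive/

    Sitemap: https://www.example.com/sitemap_index.xml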

Read: "WordPress robots.txt: Best-practice example for SEO"

Pagination & SEO: best practices

Paginated archives have long been a topic of discussion in the SEO community. Over time, best practices for optimization have evolved, and we now have pretty clear definitions. This post explains what these best practices are. It’s good to know that Yoast SEO applies all these rules to every archive with pagination. Indicate that an …
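
As a hedged illustration (the URLs are placeholders, and this may not match the article's current advice), one long-standing way to indicate that an archive page belongs to a paginated series is a pair of rel link tags in the page's head:

    <link rel="prev" href="https://www.example.com/blog/page/2/">
    <link rel="next" href="https://www.example.com/blog/page/4/">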

Read: "Pagination & SEO: best practices"

Closing a spider trap: fix crawl inefficiencies

Quite some time ago, we made a few changes to how yoast.com is run as a shop and how it’s hosted. In that process, we accidentally removed our robots.txt file and caused a so-called spider trap to open. In this post, I’ll show you what a spider trap is, why it’s problematic and how you …
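
To give a feel for the kind of fix involved (the URL pattern is hypothetical, not the one from yoast.com), a spider trap created by endlessly combinable filter URLs can usually be closed with a wildcard Disallow rule:

    User-agent: *
    # Block the infinite space of filtered URLs, e.g. /shop/?filter=a&filter=b
    Disallow: /*?filter=
    Disallow: /*&filter=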

Read: "Closing a spider trap: fix crawl inefficiencies"

Google Panda 4, and blocking your CSS & JS

A month ago, Google introduced its Panda 4.0 update. Over the last few weeks, we’ve been able to “fix” a couple of sites that got hit by it. Both sites lost more than 50% of their search traffic in that update. When they returned, their previous positions in the search results came back. Sounds too good to be …
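
For context, the rules that typically cause this kind of problem on WordPress sites look roughly like the lines below; they stop Googlebot from fetching theme and plugin assets, so removing them (assuming that was the issue) lets Google render the pages properly again:

    User-agent: *
    # Blocking these folders also blocks the CSS and JavaScript they contain
    Disallow: /wp-content/
    Disallow: /wp-includes/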

Read: "Google Panda 4, and blocking your CSS & JS"