WordPress 5.3 will change how it blocks search engines from indexing content
WordPress announced an important change to how it will block search engines from indexing websites.
This change abandons the traditional Robots.txt solution in favor of the Robots Meta Tag approach. The change brings WordPress in line with the real goal of blocking Google: keeping the blocked pages out of Google’s search results.
This is the Robots Meta Tag that WordPress will use:
<meta name='robots' content='noindex,nofollow' />
The noindex value tells search engines not to show the page in their search results, and nofollow tells them not to follow the links on the page.
Blocking Google From Indexing
It has long been a standard practice to use Robots.txt to block the “indexing” of a website.
In practice, “indexing” was used loosely to mean crawling of the site by Googlebot. By using the Robots.txt blocking feature, you could stop Google from downloading the specified web page, and it was assumed that Google would then be unable to show the page in the search results.
But that robots.txt directive only stopped Google from crawling the page. Google was still free to add the URL to its index if it discovered it elsewhere, for example through a link from another site.
So to keep a site out of the index, a publisher would block Google from “indexing” the pages with Robots.txt, a method that was never consistently effective.
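For example, a Robots.txt rule along these lines (the /private-page/ path is purely illustrative) tells compliant crawlers not to fetch that page, but it does not stop Google from listing the URL if Google discovers it through a link somewhere else:
User-agent: *
Disallow: /private-page/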
WordPress 5.3 Will Truly Prevent Indexing
WordPress adopted the Robots.txt approach. But that’s changing in version 5.3.
When a publisher currently selects “Discourage search engines from indexing this site,” WordPress adds an entry to the site’s robots.txt that prohibits Google from crawling the site.
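That entry is essentially a blanket Disallow rule, roughly like this:
User-agent: *
Disallow: /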
Starting with version 5.3, WordPress will adopt the more reliable Robots Meta Tag approach to preventing the indexing of a website.
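In practice, this means the tag is output in the head section of every page the site serves, roughly like this (a simplified sketch rather than WordPress’s exact markup):
<head>
<meta name='robots' content='noindex,nofollow' />
<title>Example Page</title>
</head>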