SEO, in its most fundamental sense, relies on one thing above all others: search engine spiders crawling and indexing your site.
But almost every site has pages you won't want included in this crawling and indexing.
In a best-case scenario, these pages do nothing to actively drive traffic to your site; in a worst-case, they could be diverting traffic away from more important pages.
Fortunately, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should definitely read.
But in high-level terms, it's a plain text file that lives in your site's root directory and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags carry instructions for specific pages.
Some meta robots tags you might employ include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs search engines to follow the links on a page; nofollow, which tells them not to follow links; and a whole host of others.
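For example, a page that should stay out of search results and whose links should not be followed could carry a tag like this in its <head> (a minimal illustration):

<meta name="robots" content="noindex, nofollow" />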
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
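For example, an HTTP response that tells search engines not to index the page it belongs to might look like this (trimmed to just the relevant headers):

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
X-Robots-Tag: noindex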
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."
While you can apply the same directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide instead of on a page level.
For example, if you want to block a particular image or video from being crawled, the HTTP response approach makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or to specify a comma-separated list of directives.
Perhaps you don't want a certain page to be cached and also want it to be unavailable after a certain date. You can use a combination of the "noarchive" and "unavailable_after" directives to instruct search engine bots to follow these instructions.
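As a sketch, with a hypothetical cutoff date, the response headers could look like this:

HTTP/1.1 200 OK
X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 25 Jun 2023 15:00:00 PST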
Essentially, the power of the X-Robots-Tag is that it is far more flexible than the meta robots tag.
The advantage of using the X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as to apply directives on a larger, global level.
To help you understand the difference between these directives, it's helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here's a handy cheat sheet:
| Crawler Directives | Indexer Directives |
| --- | --- |
| Robots.txt: uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed and not allowed to crawl. | Meta robots tag: allows you to specify and prevent search engines from showing particular pages of a site in search results. |
| | Nofollow: allows you to specify links that should not pass on authority or PageRank. |
| | X-Robots-Tag: allows you to control how specified file types are indexed. |
Where Do You Put The X-Robots-Tag?
Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or an .htaccess file.
The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.
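As a minimal sketch, a single line in .htaccess attaches the directive to every response the server sends from that directory down, which is how you'd serve a directive site-wide (Apache's mod_headers module must be enabled for this to work):

# .htaccess: applies to every response served from this directory down
Header set X-Robots-Tag "noindex, nofollow"

In practice, you'd almost always scope this to specific files or paths, as in the examples below.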
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let's take a look.
Let's say we want search engines not to index .pdf file types. This setup on Apache servers would look something like the below:
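# Apache configuration or .htaccess
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>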
In Nginx, it would look like the below:
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
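# Apache configuration or .htaccess
<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>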
Please keep in mind that understanding how these directives work, and the impact they have on one another, is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked in robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.
Crawlers have to be able to fetch a URL before they can see its response headers or meta tags, so if directives are to be followed, the URLs containing them cannot be disallowed from crawling.
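For example, a robots.txt rule like the following (the path is hypothetical) would prevent bots from ever fetching the files in that directory, so any X-Robots-Tag: noindex header attached to them would never be seen:

# robots.txt: crawling of /downloads/ is blocked,
# so response headers on those URLs are never retrieved
User-agent: *
Disallow: /downloads/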
How To Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about a URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
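You can also inspect a URL's response headers from the command line, for instance with curl (the URL here is a stand-in for your own):

# Fetch only the response headers and look for X-Robots-Tag
curl -I https://example.com/document.pdf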
Another method, one that scales to identifying issues on websites with millions of pages, is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report, X-Robots-Tag, December 2022
Using X-Robots-Tags On Your Site
Understanding and controlling how search engines interact with your site is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do exactly that.
Just be aware: it's not without its risks. It is very easy to make a mistake and deindex your entire site.
That said, if you're reading this piece, you're probably not an SEO newbie.
So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.