X-Robots-Tag

The X-Robots-Tag HTTP response header is an unofficial header that provides indexing and serving directives to web crawlers. It is the HTTP equivalent of the HTML robots meta tag.

Usage

The X-Robots-Tag header controls how search engines index and display URLs in search results. Unlike the HTML <meta name="robots"> tag, the HTTP header applies to any resource type: HTML pages, PDFs, images, videos, and other non-HTML files where a meta tag is not an option.

The header accepts one or more directives as a comma-separated list. When multiple directives conflict, search engines apply the most restrictive directive. For example, combining noindex with max-snippet:-1 results in the page being excluded from search results entirely.
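The "most restrictive wins" rule can be sketched as a small parser. This is an illustrative model, not any search engine's actual logic; the function names are hypothetical.

```python
# Sketch: parsing a comma-separated X-Robots-Tag value and applying the
# "most restrictive directive wins" rule. Names are illustrative.

def effective_directives(header_value: str) -> set[str]:
    """Split a comma-separated X-Robots-Tag value into a directive set."""
    return {d.strip().lower() for d in header_value.split(",") if d.strip()}

def is_indexable(directives: set[str]) -> bool:
    """noindex (or none) excludes the page regardless of other directives."""
    return not ({"noindex", "none"} & directives)

print(is_indexable(effective_directives("noindex, max-snippet:-1")))  # False
print(is_indexable(effective_directives("max-snippet:100")))          # True
```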

Directives apply to all crawlers by default. To target a specific crawler, prefix the directives with the crawler name followed by a colon. Multiple X-Robots-Tag headers with different crawler targets are allowed in a single response.

The header is supported by major search engines including Google and Bing. Yandex supports a subset of directives.

Bingbot directive support

Bingbot supports max-snippet, max-image-preview, and max-video-preview directives. The nocache directive is a Bing-specific synonym for noarchive. To prevent Bing image indexing, apply noindex on the image URL itself.

Note

The "X-" prefix for HTTP headers, originally marking a header as experimental, was deprecated in 2012 by RFC 6648; new headers should be given unprefixed names. The X-Robots-Tag name predates that deprecation and remains in widespread use.

Directives

noindex

The noindex directive prevents the URL from appearing in search results. Without this directive, the default behavior is index, meaning the URL is eligible for indexing.

nofollow

The nofollow directive instructs crawlers not to follow links on the page. The default behavior is follow.

Note

If a page serves noindex over a prolonged period, search engines eventually stop following its links as well, even if follow was never explicitly removed: once a page is consistently excluded from the index, its follow signal becomes unreliable.

noarchive

The noarchive directive prevents search engines from showing a cached copy of the page in search results. Bing also recognizes the equivalent nocache directive.

nosnippet

The nosnippet directive prevents search engines from displaying a text snippet or video preview for the page in search results.

noimageindex

The noimageindex directive prevents images on the page from being indexed by image search.

none

The none directive is equivalent to specifying both noindex and nofollow, blocking the URL from search results and preventing link crawling.

all

The all directive is equivalent to specifying both index and follow. This is the default behavior when no X-Robots-Tag header is present.

max-snippet

The max-snippet directive sets the maximum character length for text snippets in search results. A value of 0 is equivalent to nosnippet. A value of -1 removes length restrictions and lets the search engine determine the snippet length. When absent, search engines determine the snippet length at their discretion.
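For example, a header capping snippets at 50 characters (a value chosen purely for illustration):

X-Robots-Tag: max-snippet:50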

max-image-preview

The max-image-preview directive controls the maximum size of image previews in search results. Accepted values: none (no image preview), standard (default-sized preview), and large (preview up to viewport width).

max-video-preview

The max-video-preview directive limits video preview duration in seconds. A value of 0 allows only a static image. A value of -1 removes duration restrictions.

notranslate

The notranslate directive prevents search engines from offering a translation of the page in search results.

indexifembedded

The indexifembedded directive allows content to be indexed when embedded via an iframe, even when noindex is also set. This applies only to the embedded URL. The embedding page is unaffected.
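For example, to keep a resource out of search results when visited directly while still allowing its content to be indexed when embedded in another page:

X-Robots-Tag: noindex, indexifembedded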

unavailable_after

The unavailable_after directive specifies a date and time after which the URL is removed from search results. Google's documentation uses the older RFC 850 HTTP date format for this directive, though other widely adopted date formats are also accepted.

X-Robots-Tag: unavailable_after: 31 Dec 2026 23:59:59 GMT
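A value matching the example above can be generated from a datetime. The strftime pattern below mirrors the "31 Dec 2026 23:59:59 GMT" layout (and assumes an English locale for month abbreviations); the helper name is illustrative.

```python
# Sketch: formatting an unavailable_after value like the example above.
# The pattern mirrors "31 Dec 2026 23:59:59 GMT"; search engines also
# accept other common date formats.

from datetime import datetime, timezone

def unavailable_after_header(dt: datetime) -> str:
    return "unavailable_after: " + dt.astimezone(timezone.utc).strftime(
        "%d %b %Y %H:%M:%S GMT"
    )

print(unavailable_after_header(
    datetime(2026, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
))
```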

Example

A server preventing a PDF from being indexed. The noindex directive works at the HTTP level since a PDF cannot carry an HTML meta tag.

X-Robots-Tag: noindex
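The same rule can be sketched in application code; the helper below decides which headers to attach based on the request path. The function name and extension check are illustrative, and in practice this is usually done in the web server configuration instead.

```python
# Sketch: attaching X-Robots-Tag: noindex to responses for PDF paths.
# Names are illustrative; real deployments typically configure this in
# the web server rather than application code.

def robots_headers(path: str) -> list[tuple[str, str]]:
    if path.lower().endswith(".pdf"):
        return [("X-Robots-Tag", "noindex")]
    return []

print(robots_headers("/reports/q3.pdf"))  # [('X-Robots-Tag', 'noindex')]
print(robots_headers("/index.html"))      # []
```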

Targeting a specific crawler while leaving the default behavior for others. The first header restricts Googlebot, while the second header applies to all other crawlers.

X-Robots-Tag: googlebot: nosnippet, notranslate
X-Robots-Tag: index, follow
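The pair of headers above can also be emitted from application code. A minimal WSGI sketch (standard library only; the app name is illustrative), relying on the fact that WSGI allows repeated header names:

```python
# Minimal WSGI sketch sending two X-Robots-Tag headers: the first
# targets googlebot, the second applies to all other crawlers.

def app(environ, start_response):
    headers = [
        ("Content-Type", "text/html"),
        ("X-Robots-Tag", "googlebot: nosnippet, notranslate"),
        ("X-Robots-Tag", "index, follow"),
    ]
    start_response("200 OK", headers)
    return [b"<html><body>Hello</body></html>"]
```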

A server applying multiple restrictions with a snippet length limit. The noarchive directive prevents cached copies and the max-snippet directive limits snippet length to 100 characters.

X-Robots-Tag: noarchive, max-snippet:100

Takeaway

The X-Robots-Tag response header delivers search engine indexing directives through HTTP headers, extending robots control to non-HTML resources and server-level configuration.

Last updated: March 11, 2026