999 Request Denied

The HTTP 999 Request Denied status code is an unofficial status code used by LinkedIn to block requests the platform identifies as automated, scraping, or otherwise unwelcome. The code falls outside the valid 100–599 range and is not registered with IANA. The choice of 999 is sometimes attributed to the German word nein (no), making the code a phonetic nein-nein-nein (no-no-no). The number is also the highest possible three-digit integer.

Usage

LinkedIn returns 999 Request Denied when its security infrastructure determines a request is not from a legitimate browser session. The evaluation runs at the platform edge before the request reaches the application layer.

Triggers include:

  • User agent filtering. Requests with non-browser user agents (curl, python-requests, wget, custom scrapers) are blocked. The same URL accessed through a standard web browser returns 200 OK.
  • Rate limiting. An abnormally high number of requests from the same IP address or network within a short window triggers the block.
  • IP reputation. Requests from IP addresses associated with prior abuse, data center ranges, or cloud hosting providers are more likely to receive a 999.
  • robots.txt violations. Crawlers ignoring LinkedIn's robots.txt restrictions encounter 999 responses.
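LinkedIn's actual edge rules are not public, but the user-agent trigger above can be illustrated with a toy sketch. The blocklist and function below are hypothetical, purely for demonstrating the pattern:

```python
# Toy sketch of edge-level user-agent filtering (illustrative only;
# LinkedIn's real rules are not public). Non-browser clients get a 999.

# Hypothetical blocklist of common non-browser user-agent prefixes.
BLOCKED_UA_PREFIXES = ("curl/", "python-requests/", "wget/", "scrapy/")

def edge_status(user_agent: str) -> int:
    """Return the status code a hypothetical edge filter might send."""
    ua = user_agent.lower()
    if any(ua.startswith(prefix) for prefix in BLOCKED_UA_PREFIXES):
        return 999  # request denied before reaching the application layer
    return 200

print(edge_status("curl/8.5.0"))       # 999
print(edge_status("Mozilla/5.0 ..."))  # 200
```

The same URL yields opposite results depending only on the User-Agent header, which matches the behavior described in the first bullet.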

The block is automatic and temporary. LinkedIn states access is restored once network activity returns to normal. Connecting through a different network during the block period bypasses the restriction.

Note

The HTTP specification (RFC 9110) defines valid status codes as three-digit integers between 100 and 599. Status code 999 falls outside this range. A client receiving an unrecognized status code "MUST treat it as being equivalent to the x00 status code of that class." Since 999 falls outside the defined 1xx–5xx range, no class applies and client behavior is undefined. Most clients are likely to treat the response as a 500 Internal Server Error.

SEO impact

Search engines treat 999 responses as server errors. Persistent errors cause crawlers to reduce crawl frequency and eventually drop affected URLs from the index.

Example

A command-line tool sends a GET request to a LinkedIn profile page. Visited in a web browser, the URL returns the profile of Fili (the creator of this website). Using curl, LinkedIn returns 999.

curl Command

curl -I https://www.linkedin.com/in/filiwiese

Request

GET /in/filiwiese HTTP/1.1
Host: www.linkedin.com
User-Agent: curl/8.5.0
Accept: */*

Response

HTTP/2 999
content-type: text/html
content-length: 1530
pragma: no-cache
cache-control: no-cache, no-store, no-transform
strict-transport-security: max-age=31536000
x-content-type-options: nosniff

The same URL accessed through a standard web browser with a full browser user agent string returns 200 OK.

How to fix

The 999 response targets automated access patterns. Strategies depend on whether the access is legitimate.

For monitoring tools and uptime checkers: Configure the tool to use a realistic browser User-Agent string. Reduce check frequency to avoid triggering rate limits. Use LinkedIn's official API endpoints with proper authentication instead of scraping public pages.
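Setting a browser User-Agent can be sketched with the Python standard library. The User-Agent string below is an example value, and no request is actually sent until `urlopen()` is called:

```python
import urllib.request

# Sketch: build a request carrying a realistic browser User-Agent so a
# monitoring tool is less likely to trip user-agent filtering. The UA
# string is an example; constructing the Request performs no network I/O.
BROWSER_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36"
)

req = urllib.request.Request(
    "https://www.linkedin.com/in/filiwiese",
    headers={"User-Agent": BROWSER_UA, "Accept": "text/html"},
)

# urllib stores header names in capitalized form ("User-agent").
print(req.get_header("User-agent"))
```

Pair this with a generous interval between checks; a realistic header alone does not help if the request rate itself triggers the block.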

For link checkers and validators: Exclude LinkedIn URLs from automated validation. The 999 response does not indicate a broken link; the page remains accessible through a browser. Tools like html-proofer and site validators commonly encounter this false positive.
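An exclusion list is straightforward to implement. This is a minimal sketch of the idea, assuming a hand-maintained set of domains known to block automated checks:

```python
from urllib.parse import urlparse

# Sketch of a link-checker exclusion list: skip domains known to return
# 999 (or similar bot blocks) so they are not reported as broken links.
SKIP_DOMAINS = {"linkedin.com", "www.linkedin.com"}

def should_check(url: str) -> bool:
    """Return False for URLs that automated validation should skip."""
    return urlparse(url).hostname not in SKIP_DOMAINS

print(should_check("https://www.linkedin.com/in/filiwiese"))  # False
print(should_check("https://example.com/about"))              # True
```

Most link checkers support the equivalent natively, e.g. html-proofer's ignore options, so a custom filter is only needed for homegrown tooling.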

For legitimate API access: Use the LinkedIn API with OAuth 2.0 authentication. Authenticated API requests return standard HTTP status codes and are not subject to the 999 block.
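An authenticated API call carries an OAuth 2.0 bearer token in the Authorization header. The sketch below builds such a request without sending it; the token is a placeholder, and obtaining a real one requires completing LinkedIn's OAuth authorization flow:

```python
import urllib.request

# Sketch: an authenticated LinkedIn API request uses an OAuth 2.0 bearer
# token. The token below is a placeholder; the endpoint is an example.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder, obtained via OAuth flow

req = urllib.request.Request(
    "https://api.linkedin.com/v2/userinfo",  # example API endpoint
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)

print(req.get_header("Authorization"))  # Bearer YOUR_ACCESS_TOKEN
```

Because these requests are authenticated and rate-limited through the API's own quota system, failures surface as standard codes such as 401 or 429 rather than 999.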

If blocked from browser access: LinkedIn states the block lifts automatically once network activity returns to normal. Switching to a different network connection bypasses the block immediately. If the issue persists, contact LinkedIn support. The block cannot be lifted manually, but support investigates the triggering activity.

Takeaway

The 999 Request Denied status code is an unofficial, non-standard response used by LinkedIn to block automated and bot traffic. The code falls outside the valid HTTP status code range and serves as a scraping deterrent rather than a standard error signal.

Last updated: March 11, 2026