How to use the PROXY protocol in nginx?

Implementing the PROXY protocol in nginx is a specific technical procedure for preserving original client connection information, such as the source IP address and port, when nginx sits behind a proxy or load balancer that terminates TCP connections. This is essential in architectures where an intermediary like HAProxy, an AWS Network Load Balancer, or certain cloud proxies opens a fresh TCP connection to nginx, so nginx would otherwise only see the proxy's IP address as the peer. The protocol works by having the intermediary prepend a small header containing the original connection details to the TCP stream before forwarding it to nginx. To use it, you must configure both the upstream proxy and your nginx instance correctly; nginx cannot unilaterally enable this if the upstream device is not sending the PROXY protocol header.
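To make the mechanism concrete, here is what a version 1 (human-readable) header looks like on the wire for an HTTP request; the addresses and ports are illustrative. It is a single text line terminated by `\r\n`, sent before any application data:

```
PROXY TCP4 203.0.113.7 192.0.2.10 54321 443\r\n
GET / HTTP/1.1\r\n
Host: example.com\r\n
```

The fields are: protocol/family (`TCP4` or `TCP6`), original client IP, destination IP, client port, and destination port. Version 2 carries the same information in a binary encoding.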

Configuration on the nginx side is performed within the `listen` directive of a `server` block, either in the `stream { }` context (for TCP load balancing, where `listen` belongs to `ngx_stream_core_module`) or in an `http` `server` block for HTTP traffic. For a TCP stream load balancer, you would add the `proxy_protocol` parameter to the `listen` directive in the `stream { }` context, for example: `listen 1234 proxy_protocol;`. This instructs nginx to parse the incoming header. In the `http` context, you must additionally use the `set_real_ip_from` and `real_ip_header` directives (from `ngx_http_realip_module`) to instruct nginx to replace the direct connection's address with the one provided in the PROXY header: `set_real_ip_from <proxy-ip>;` and `real_ip_header proxy_protocol;`. In the `stream` context, `set_real_ip_from` (from `ngx_stream_realip_module`) serves the same purpose. It is critical that `set_real_ip_from` specifies only the trusted address of your proxy, to prevent IP spoofing.
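The directives above can be sketched in context as follows; the port numbers, the backend address, and the `10.0.0.0/24` proxy subnet are assumptions you would replace with your own values:

```nginx
# TCP example: accept the PROXY protocol header on incoming connections
stream {
    server {
        listen 1234 proxy_protocol;
        # Trust the header only when the connection comes from the proxy
        set_real_ip_from 10.0.0.0/24;          # assumed proxy subnet
        proxy_pass backend.example.com:5432;   # assumed backend
    }
}

# HTTP example: same idea, plus real_ip_header to rewrite $remote_addr
http {
    server {
        listen 80 proxy_protocol;
        set_real_ip_from 10.0.0.0/24;          # assumed proxy subnet
        real_ip_header proxy_protocol;
        root /usr/share/nginx/html;
    }
}
```

With this in place, `$remote_addr` (and therefore access logs, `allow`/`deny` rules, and rate limiting) reflects the original client rather than the proxy.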

For the configuration to function, the upstream proxy must be explicitly configured to send PROXY protocol version 1 or 2. Version 1 is human-readable text, while version 2 is binary; nginx supports both. A common point of failure is a mismatch: if the proxy sends the header but nginx is not listening for it, nginx treats the preamble as malformed application data (for HTTP, typically a 400 Bad Request); if nginx expects the header but the client connects directly without one, the connection is rejected or misparsed. Furthermore, enabling the PROXY protocol modifies the raw TCP stream, so any health checks or modules that read data directly from the socket must be PROXY protocol-aware. In practical deployment, this often requires adjusting health check mechanisms on both the proxy and nginx, as a standard HTTP health check will fail when nginx expects the PROXY protocol preamble.
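On the proxy side, the mechanics depend on the product. As one illustration, HAProxy enables the header per backend server with the `send-proxy` (v1) or `send-proxy-v2` (v2) keyword; the names and addresses below are assumptions:

```haproxy
# haproxy.cfg sketch: forward TCP traffic to nginx with a PROXY header
backend nginx_servers
    mode tcp
    server web1 10.0.0.5:1234 send-proxy
    # or, for the binary version 2 header:
    # server web1 10.0.0.5:1234 send-proxy-v2
```

On AWS, the equivalent is enabling the "proxy protocol v2" attribute on the Network Load Balancer's target group.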

The primary implication of using the PROXY protocol is the reliable passing of client IP addresses for logging, geolocation, rate-limiting, and access control within nginx, which is indispensable in multi-tiered cloud environments. However, it introduces a layer of coupling between your proxy and web server; they must agree on the protocol version and the network path must be secured, as the header is not cryptographically signed. Therefore, its use should be restricted to connections from trusted proxies within a private network or VPC, with appropriate firewall rules. The mechanism is robust but requires a consistent configuration across your infrastructure stack to avoid silent failures in client IP logging.
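As a final practical note, nginx also exposes the parsed header directly through the `$proxy_protocol_addr` and `$proxy_protocol_port` variables, which is useful for verifying the setup in logs before relying on the realip rewrite. A minimal sketch, assuming the log path shown:

```nginx
http {
    # Log both the PROXY-header address and the raw peer address;
    # if the setup is correct they identify client and proxy respectively.
    log_format proxied '$proxy_protocol_addr ($remote_addr) [$time_local] "$request"';
    access_log /var/log/nginx/access.log proxied;
}
```

If `$proxy_protocol_addr` comes through empty, the upstream is not sending the header; if both variables show the proxy's IP, the realip directives are missing or the proxy is outside `set_real_ip_from`.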