
I have backend web servers that receive requests by way of haproxy -> nginx -> FastCGI. The web app used to see multiple IPs in the X-Forwarded-For header, chained together with commas (the original client IP on the left).

At some point in the recent past (I just noticed, so I'm not sure what caused it) something changed, and now only a single IP is being passed in the header to my web application.

I've tried haproxy 1.4.21 and 1.4.22 (a recent upgrade) with the same behavior. HAProxy has the forwardfor option set:

option forwardfor

Nginx's fastcgi_params config passes this header on to the app:

fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;

Anyone have any ideas on what might be going wrong here?

EDIT: I just started logging the $http_x_forwarded_for variable in the nginx logs, and nginx is only ever seeing a single IP. That shouldn't ever be the case, since we should always see our haproxy IP added in there, right? So the issue must be either in nginx's handling of the incoming variable or in haproxy not building it properly. I'll keep digging...
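For reference, the logging I added is along these lines (a minimal sketch; the format name and log path are placeholders, and log_format/access_log go in the http context):

    # define a custom format that records the raw X-Forwarded-For value
    log_format xff '$remote_addr - [$time_local] "$request" '
                   'xff="$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log xff;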

EDIT #2: I enabled request and response header logging in HAProxy, and it is not spitting anything out for X-Forwarded-For, which seems very odd:

Oct 10 10:49:01 newark-lb1 haproxy[19989]: 66.87.95.74:47497 [10/Oct/2012:10:49:01.467] http service/newark2 0/0/0/16/40 301 574 - - ---- 4/4/3/0/0 0/0 {} {} "GET /2zi HTTP/1.1" O

Here are the options I set for this in my frontend:

mode http
option httplog
capture request header X-Forwarded-For len 25
capture response header X-Forwarded-For len 25
option httpclose
option forwardfor

EDIT #3: It really seems like haproxy is munging the header and only passing a single value on to the backend. This has a significant impact on our production service, so if anyone has any ideas, it would be greatly appreciated. I'm stumped... :(

JesseP

2 Answers


To answer the last question in your comment: it is normal to have more than one IP address in XFF. The header is a list of values, and each proxy appends the address of its own client. Since everyone in the chain appends to it, your server must read the values in reverse order. For instance, the last value will be the one added by the haproxy instance in front of the server, the previous value will be the one added by the reverse cache in front of haproxy, and so on.
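To make the reverse-order reading concrete, here is a small sketch of how an application might pick the real client out of XFF. The function name and addresses are illustrative; the key idea is to walk right to left and stop at the first address that is not one of your own proxies:

```python
def client_ip_from_xff(xff_header, trusted_proxies):
    """Walk the X-Forwarded-For list right to left, skipping the
    addresses of proxies we control; the first untrusted address
    is the real client."""
    hops = [h.strip() for h in xff_header.split(",") if h.strip()]
    for ip in reversed(hops):
        if ip not in trusted_proxies:
            return ip
    # Every hop was one of ours; fall back to the leftmost value.
    return hops[0] if hops else None

# Example chain: client -> user's outgoing proxy -> haproxy
print(client_ip_from_xff("203.0.113.7, 198.51.100.2", {"198.51.100.2"}))
# -> 203.0.113.7
```

Only the rightmost values, added by proxies you operate, are trustworthy; anything further left was supplied by the client and can be forged.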

If you'd prefer not to adapt the application to parse the header correctly, you can also ask haproxy to remove it before adding its own XFF header:

reqidel ^X-Forwarded-For:

That way the server will only get the value added by haproxy, which will be haproxy's client.
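A frontend sketch combining this with your existing options (haproxy 1.4 syntax; the frontend name is illustrative):

    frontend http
        mode http
        # drop any X-Forwarded-For supplied by the client or an upstream hop
        reqidel ^X-Forwarded-For:
        # then append the real source address as a fresh header
        option forwardfor

Note that reqidel runs before option forwardfor adds its value, so the backend sees exactly one address.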


I think there is some confusion in the way you attempt to use the X-Forwarded-For header.

First, the fact that nginx sees one IP address means that haproxy is correctly adding it. The header contains only the source address haproxy received the connection from, so it is normal that you don't see haproxy's own IP address in the nginx logs.

Second, it is also expected that you don't observe X-Forwarded-For in incoming requests: only some outgoing proxies add the header, and in general it's recommended not to do so when going out to the internet. If a user does send you a request with such a header, you'll see it in haproxy's capture, and nginx will log both that value and the client IP added by haproxy.
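Concretely, with illustrative addresses, the header grows one hop at a time:

    client 203.0.113.7           (sends no XFF header)
      -> user's outgoing proxy:  X-Forwarded-For: 203.0.113.7
      -> haproxy appends:        X-Forwarded-For: 203.0.113.7, 198.51.100.2

A client that connects directly, with no outgoing proxy, yields a single-address header, which is exactly what you are seeing.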

What I don't understand is your EDIT #3, because you seem to assume that the header is necessarily present in incoming requests, which is obviously not the case, judging by both haproxy's captures and the nginx logs. I have just sent you a request with "X-Forwarded-For: Hi,Jesse,this is Willy" that you should see in both the haproxy and nginx logs, if that helps you troubleshoot.

What is possible is that you previously saw multiple addresses there because either one of your main visitors was using an outgoing proxy that added the XFF header, or you had another reverse proxy in front of haproxy (e.g. apache, stunnel, ...).

BTW, you should replace "option httpclose" with "option http-server-close"; it enables keep-alive with clients and reduces page load time for those experiencing high latency.
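Applied to the frontend options you posted, that change would look something like this (a sketch; the capture lines are kept from your original config):

    mode http
    option httplog
    capture request header X-Forwarded-For len 25
    capture response header X-Forwarded-For len 25
    # keep-alive with clients, close the server-side connection per request
    option http-server-close
    option forwardfor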