#devops #http #kubernetes

If you followed my previous article to install Nginx Ingress using Helm, you might have noticed that you didn't get the client's actual external IP address in the HTTP headers.

If you check the HTTP headers X-Forwarded-For or X-Real-IP, you will only see internal IP addresses from within your cluster.
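
To see this for yourself, you can tail the access log of the Nginx Ingress controller; with the default setup, the client address in the log lines will be a cluster-internal IP (e.g. something in the 10.x.x.x range). The label selector below is an assumption based on the defaults of the stable/nginx-ingress chart, so adjust it if your labels differ.

kubectl logs -l app=nginx-ingress,component=controller --tail=20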

In many cases, you actually want the real IP address of the client (for e.g. geocoding, logging, access control, …).

It takes a couple of steps, as the load balancer is the component that needs to be configured in a slightly different way. The instructions here are based on a DigitalOcean Kubernetes cluster.

If you already had the Nginx Ingress installed via Helm, you can do the following:

helm upgrade --set controller.publishService.enabled=true \
    --set controller.service.externalTrafficPolicy=Local \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-enable-proxy-protocol=true" \
    --set controller.replicaCount=2 \
    --set-string controller.config.use-proxy-protocol=true,controller.config.use-forwarded-headers=true,controller.config.compute-full-forwarded-for=true \
    nginx-ingress stable/nginx-ingress
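
After the upgrade it's worth double-checking that the overrides actually landed. A quick way to do that (assuming the release name nginx-ingress from the command above, and the Service name the chart generates for it):

helm get values nginx-ingress
kubectl get svc nginx-ingress-controller -o yaml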

If you didn't have it installed yet, you can run the following command:

helm install stable/nginx-ingress --name nginx-ingress \
    --set controller.publishService.enabled=true \
    --set controller.service.externalTrafficPolicy=Local \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-enable-proxy-protocol=true" \
    --set controller.replicaCount=2 \
    --set-string controller.config.use-proxy-protocol=true,controller.config.use-forwarded-headers=true,controller.config.compute-full-forwarded-for=true
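
Once the chart is installed, DigitalOcean creates the load balancer asynchronously, so it can take a minute before the Service gets its external IP. You can watch it appear like this (the Service name is an assumption based on the release name above):

kubectl get svc nginx-ingress-controller -w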

The first thing to configure correctly is externalTrafficPolicy, which needs to be set to Local, as explained in the Kubernetes documentation:

service.spec.externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading.
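
If you ever need to change this setting outside of Helm, it can also be patched directly onto the Service. A minimal sketch, assuming the Service name generated by the chart:

kubectl patch svc nginx-ingress-controller \
    -p '{"spec":{"externalTrafficPolicy":"Local"}}'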

The other important thing is that the load balancer must have the PROXY protocol enabled. This is done by adding a DigitalOcean-specific annotation when creating the load balancer.
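
For reference, this is the annotation the Helm flag above sets on the Service; you could also apply it by hand (again assuming the generated Service name):

kubectl annotate svc nginx-ingress-controller \
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol=true --overwrite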

We also configure the ConfigMap for the Nginx Ingress controller. The use-forwarded-headers option passes the proper HTTP headers through to the upstream:

If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.

If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets.
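
The Helm value controller.config.use-forwarded-headers=true ends up as a key in the controller's ConfigMap. If you want to try it without going through Helm, you can patch the ConfigMap directly (the ConfigMap name is an assumption based on the release name, and note that a later helm upgrade will overwrite manual edits):

kubectl patch configmap nginx-ingress-controller \
    -p '{"data":{"use-forwarded-headers":"true"}}'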

The option compute-full-forwarded-for appends the remote address to the X-Forwarded-For header instead of replacing it:

Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.

The option use-proxy-protocol is a more generic way of enabling the PROXY protocol:

Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).
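
To confirm that all three options made it into the controller's configuration, inspect the ConfigMap (same assumed name as before); you should see use-proxy-protocol, use-forwarded-headers and compute-full-forwarded-for under data, all set to "true":

kubectl get configmap nginx-ingress-controller -o yaml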

After doing this, you should get the actual client IP address in the X-Forwarded-For and X-Real-IP headers. If you host your cluster with a provider other than DigitalOcean, you'll need to check their documentation on how to enable the proxy protocol for your load balancer.

Don't forget to set the replica count correctly. It should match the total number of nodes in your cluster (2 in this example).

controller.replicaCount=2

If you don't, the load balancer will only mark the nodes that run an Nginx Ingress controller pod as healthy: with externalTrafficPolicy set to Local, nodes without a local controller pod fail the load balancer's health check and won't receive traffic.
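
You can check whether every node runs a controller pod by comparing the pod spread against your node list (the label selector again assumes the chart defaults):

kubectl get pods -l app=nginx-ingress,component=controller -o wide
kubectl get nodes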